\begin{document} \title{A universal, memory-assisted entropic uncertainty relation} \author{Z.-H. Ma,$^{1,2,+}$ C.-M. Yao,$^{3,4,+}$ Z.-H. Chen,$^{5}$ S. Severini$^{2}$, A. Serafini$^{4}$} \address{$^1$Department of Mathematics, Shanghai Jiao-Tong University, Shanghai, 200240, P. R. China} \address{$^2$Department of Computer Science, University College London, Gower St., WC1E 6BT London, United Kingdom} \address{$^3$Department of Physics and Electronic Science, Hunan University of Arts and Science, Changde, 415000, China} \address{$^4$Department of Physics and Astronomy, University College London, Gower St., WC1E 6BT London, United Kingdom} \address{$^5$Department of Science, Zhijiang College, Zhejiang University of Technology, Hangzhou, 310024, China} \address{$^{+}$ These two authors contributed equally to this work.} \date{\today} \begin{abstract} We derive a new memory-assisted entropic uncertainty relation for non-degenerate Hermitian observables where both quantum correlations, in the form of conditional von Neumann entropy, and quantum discord between system and memory play an explicit role. Our relation is `universal', in the sense that it does not depend on the specific observable, but only on properties of the quantum state. We contrast such an uncertainty relation with previously known memory-assisted relations based on entanglement and correlations. Further, we present a detailed comparative study of entanglement- and discord-assisted entropic uncertainty relations for systems of two qubits -- one of which plays the role of the memory -- subject to several forms of independent quantum noise, in both Markovian and non-Markovian regimes. We thus show explicitly that, partly due to the ubiquity and inherent resilience of quantum discord, discord-tightened entropic uncertainty relations often offer a better estimate of the uncertainties in play. \end{abstract} \pacs{03.67.-a,03.67.Mn,03.65.Yz,03.65.Ud} \maketitle \section{Entropic uncertainty relations} One of the key aspects of quantum theory is that certain pairs of physical properties, such as a particle's position and momentum, cannot be known simultaneously with arbitrary precision: quantum mechanical uncertainty principles assert fundamental limits on the precision with which such pairs may be simultaneously determined. Originally observed by Heisenberg \cite{Heisenberg}, the uncertainty principle is best known in the Robertson-Schr\"odinger form \cite{Robertson} $$\Delta X\Delta Y\geq \frac{1}{2}|\langle[X,Y]\rangle|,$$ where $\Delta X$ ($\Delta Y$) represents the standard deviation of the corresponding observable $X$ ($Y$), formally represented by a Hermitian operator. The entropic uncertainty relation for any two general observables was first given by Deutsch in terms of an information-theoretic model \cite{Deutsch}. Afterwards, an improved version was conjectured by Kraus and then proved by Maassen and Uffink \cite{Kraus}, which strengthens and generalizes Heisenberg's uncertainty relation, and can be written as follows: $$H(X)+H(Y)\geq \log_2\left(\frac{1}{c}\right),$$ where $H$ is the Shannon entropy of the measured observable and $c$ quantifies the `complementarity' between the observables: $c=\max_{(i,j)} |\langle x_i | y_j\rangle|^2$ if $X$ and $Y$ are non-degenerate observables ($|x_i\rangle , |y_j\rangle$ being the eigenvectors of $X$ and $Y$, respectively). 
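As a simple illustration (a numerical sketch added here, assuming only NumPy; it is not part of the original derivation), one can verify the Maassen-Uffink bound for a single qubit with $X=\sigma_x$ and $Y=\sigma_z$, for which $c=1/2$ and the right-hand side equals one bit:
\begin{verbatim}
# Check H(X) + H(Y) >= log2(1/c) for a qubit, with X = sigma_x, Y = sigma_z (c = 1/2).
import numpy as np

def shannon(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

x_basis = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]  # sigma_x eigenvectors
y_basis = [np.array([1, 0]), np.array([0, 1])]                             # sigma_z eigenvectors

psi = np.array([np.cos(0.3), np.exp(0.7j) * np.sin(0.3)])   # an arbitrary pure qubit state

H_X = shannon(np.array([abs(np.vdot(v, psi)) ** 2 for v in x_basis]))
H_Y = shannon(np.array([abs(np.vdot(v, psi)) ** 2 for v in y_basis]))
c = max(abs(np.vdot(u, v)) ** 2 for u in x_basis for v in y_basis)         # = 1/2

print(H_X + H_Y, ">=", np.log2(1 / c))   # the sum never drops below 1 bit
\end{verbatim}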
Recently, the uncertainty principle in terms of entropy has been extended to the case involving entanglement with a quantum memory: it was proven by Berta et al.\ \cite{Berta} that \begin{equation} S(X|B)+S(Y|B)\geq \log_2\left(\frac{1}{c}\right)+S(A|B), \label{berta} \end{equation} where $S(X|B)=S\left[\sum_{j}\left(|x_j\rangle\langle x_j|\otimes\mathbbm{1}\right)\varrho_{AB}\left(|x_j\rangle\langle x_j|\otimes\mathbbm{1}\right)\right]$ and $S(Y|B)=S\left[\sum_{j}\left(|y_j\rangle\langle y_j|\otimes\mathbbm{1}\right)\varrho_{AB}\left(|y_j\rangle\langle y_j|\otimes\mathbbm{1}\right)\right]$ are the average conditional von Neumann entropies representing the uncertainty of the measurement outcomes of $X$ and $Y$ obtained using the information stored in system $B$, given that the initial system plus memory state was $\varrho_{AB}$. The quantity $S(A|B) = S(\varrho_{AB})-S({\rm Tr}_A(\varrho_{AB}))$ represents instead the conditional von Neumann entropy between systems $A$ and $B$, defined in analogy with the classical definition of conditional entropy. As indicated by Eq.~(\ref{berta}), the measurement outcomes of two incompatible observables on a particle can be predicted precisely when the particle is maximally entangled with a quantum memory. This entanglement-assisted entropic uncertainty relation was promptly tested experimentally [6]. Note that the quantum conditional entropy $S(A|B)$, appearing in the lower bound above, is a quantifier of quantum correlations that can be negative for entangled states, and thus may tighten the uncertainty relation without memory. Quantum correlations may also be assessed by quantum discord, which we define in the following. The total correlations between two quantum systems $A$ and $B$ are quantified by the quantum mutual information \begin{equation}\label{Total} \mathcal{I}(\varrho_{AB})=S(\varrho_{A})+S(\varrho_{B})-S(\varrho_{AB}) \; , \end{equation} where $\varrho_{A(B)}= \mathrm{Tr}_{B(A)}(\varrho_{AB})$. On the other hand, the classical part of the correlations is defined as the maximum information that can be obtained by performing a local measurement, starting from the quantity $\mathcal{I}(\varrho_{AB}|\{\hat{\Pi}_{A}^{j}\})=S(\varrho_{B})-\sum_j p_j S(\varrho_{B|j})$, where $\varrho_{B|j}=\tr_{A}[(\hat{\Pi}_{A}^{j}\otimes \mathbbm{1})\varrho_{AB}(\hat{\Pi}_{A}^{j}\otimes \mathbbm{1})]/\tr_{AB}[(\hat{\Pi}_{A}^{j}\otimes \mathbbm{1})\varrho_{AB}(\hat{\Pi}_{A}^{j}\otimes \mathbbm{1})]$, $\{\hat{\Pi}_{A}^{j}\}$ is a POVM performed locally on subsystem $A$, and $p_j$ is the probability of the measurement outcome $j$. 
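To make the role of the memory term concrete, the following minimal numerical sketch (not part of the original text; it only assumes plain NumPy) evaluates the conditional von Neumann entropy $S(A|B)=S(\varrho_{AB})-S(\varrho_B)$ and shows that it equals $-1$ for a maximally entangled two-qubit state, which is what allows the right-hand side of Eq.~(\ref{berta}) to drop below $\log_2(1/c)$.
\begin{verbatim}
# Illustrative sketch: S(A|B) = S(rho_AB) - S(rho_B) is negative for entangled states.
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def reduce_to_B(rho_ab, dA=2, dB=2):
    """Partial trace over subsystem A."""
    return np.trace(rho_ab.reshape(dA, dB, dA, dB), axis1=0, axis2=2)

# maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4); phi[[0, 3]] = 1 / np.sqrt(2)
rho_ab = np.outer(phi, phi)

print(vn_entropy(rho_ab) - vn_entropy(reduce_to_B(rho_ab)))  # -> -1.0
\end{verbatim}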
Classical correlations are thus quantified by \cite{ved}: \begin{equation}\label{classical}J_{A}(\varrho_{AB})=\mathrm{sup}_{\{\hat{\Pi}_{A}^{j}\}}\mathcal{I}(\varrho_{AB}|\{\hat{\Pi}_{A}^{j}\}) \; . \end{equation} The quantum $A$-side discord (quantum correlation) is then defined as the difference between the total correlations and the $A$-side classical correlations \cite{Ollivier}: \begin{equation} D_{A}(\varrho_{AB})=\mathcal{I}(\varrho_{AB})- J_{A}(\varrho_{AB}) \; .\label{Bdiscord} \end{equation} Recently, by considering the role of quantum discord and classical correlations of the joint system-memory state, Pati et al.\ \cite{Pati} obtained a modified entropic uncertainty relation that tightens the lower bound (\ref{berta}) of Berta et al.: \begin{equation} S(X|B)+S(Y|B)\geq \log_{2}\frac{1}{c}+S(A|B) +\max\{0, D_{A}(\varrho_{AB})- J_{A}(\varrho_{AB})\} \; .\label{pati} \end{equation} Entropic uncertainty relations have been shown to have fundamental consequences for the security of cryptographic protocols \cite{tomamichel,ng,moloktov}, the foundations of thermodynamics \cite{hanggi} and entanglement theory \cite{guhne,niekamp}. See also \cite{wehner} for a recent review. The main finding of the present work is the derivation of a new discord-assisted uncertainty relation, which we shall then contrast with the already known ones (Section 2). We will then move on to consider the behaviour of the different uncertainty relations under decoherence (Section 3), both to shed further light on our newly derived universal entropic uncertainty relation, and to provide the reader with a detailed comparative analysis of the different memory-assisted entropic uncertainty relations in realistic, noisy situations. We shall draw conclusions in Section 4. \section{An observable-independent entropic uncertainty relation} In the following, we use $X$ and $Z$ to denote two {\em non-degenerate} Hermitian observables described by the POVMs $X=\{X_{j}\}$ and $Z=\{Z_{k}\}$, where $X_j$ and $Z_{k}$ are orthogonal projectors. Otherwise, we shall apply all the notation introduced in the previous section. Please also bear in mind that, in this paper, we always use $S(\varrho_{AB})$ ($S(X|B)$) to denote the von Neumann entropy of the quantum state $\varrho_{AB}$ (the conditional von Neumann entropy of the post-measurement state $\sum_j (X_j\otimes\mathbbm{1})\varrho_{AB}(X_j\otimes\mathbbm{1})$), while we use $H(X)$ to denote the Shannon entropy of the discrete probability distribution $P$ of measurement outcomes for $X$. Further, we will make use of the following lemma: \noindent {\bf Lemma 1.} Let $X:=\{X_{j}\}$ and $Z:=\{Z_{k}\}$ be arbitrary POVMs on $A$; then, for any single-party state $\varrho_{A}$, \begin{equation} H(X)+ H(Z)\geq \log\frac{1}{c(X)}+\log\frac{1}{c(Z)} +2S(A),\label{Cond2} \end{equation} where $c(X):=\max_{i} \tr(X_{i})$ and $c(Z):=\max_{i} \tr(Z_{i})$, $H(X)$ is the Shannon entropy of the probability distribution $p_i :={\rm Tr}[X_i \varrho_{A}]$, and $H(Z)$ is the Shannon entropy of the probability distribution $q_i :={\rm Tr}[Z_i \varrho_{A}]$. \noindent {\bf Proof.} From Corollary 7 of Ref.\ \cite{Coles}, we know that \begin{equation} H(X)\geq \log\frac{1}{c(X)}+S(A) \; .\label{Cond2a} \end{equation} By using the above relation twice, we get \begin{equation} H(X)+ H(Z)\geq \log\frac{1}{c(X)}+\log\frac{1}{c(Z)} +2S(A) \; , \end{equation} which proves our lemma. 
\qed Clearly, if $X$ and $Z$ are non-degenerate Hermitian observables, such that their POVM elements are all one-dimensional projectors, the inequality of Lemma 1 becomes: \begin{equation} H(X)+ H(Z)\geq 2S(A) \; . \label{Cond3} \end{equation} We can now move on to our main result: \noindent {\bf Theorem 1.} Let $X:=\{X_{j}\}$ and $Z:=\{Z_{k}\}$ be non-degenerate Hermitian observables on subsystem $A$, and let $\varrho_{AB}$ be any bipartite state of systems $A$ and $B$. One has \begin{align} S(X|B)+ S(Z|B)\geq 2S(A|B)+ 2D_{A}(\varrho_{AB})\label{bound} \; , \end{align} where $D_{A}(\varrho_{AB})$ is the quantum discord, $S(X|B)$ and $S(Z|B)$ are the conditional entropies after measurements on $A$, and $S(A|B)$ is the von Neumann conditional entropy of the state $\varrho_{AB}$, all defined above. \noindent {\bf Proof.} Consider a bipartite density operator $\varrho_{AB}$. If Alice performs a measurement of an observable $X$ on subsystem $A$, then the post-measurement state is $\varrho_{AB}^X = \sum_i(X_{i} \otimes {\mathbbm 1}) \varrho_{AB} (X_i \otimes {\mathbbm 1}) = \sum_i p_i X_{i} \otimes \varrho_{B|i}$, where $p_i = {\rm Tr}[(X_{i} \otimes {\mathbbm 1}) \varrho_{AB}]$ is the probability of obtaining the $i^{\rm th}$ outcome and $\varrho_{B|i}={\rm Tr}_{A}[(X_{i} \otimes {\mathbbm 1}) \varrho_{AB} (X_{i} \otimes {\mathbbm 1})]/p_i$ is the conditional state of the memory $B$ corresponding to this outcome. The conditional von Neumann entropy $S(X|B)$ quantifies the ignorance about the measurement outcome of $X$ given the information stored in the quantum memory held by an observer $B$: $S(X|B)=S(\varrho_{AB}^X)-S(\varrho_B)$ is the conditional entropy of the state $\varrho_{AB}^X$, and is given by \begin{equation} S(X|B) = \sum_i p_i S(\varrho_{B|i}) + H(P) -S(\varrho_B) \; , \label{Cond} \end{equation} where $P:=(p_i)$ is the probability distribution of the outcomes. It is worth noticing that, if $B$ is a trivial (one-dimensional) system then, of the three terms in (\ref{Cond}), only the term $H(P)$ survives, {\em i.e.}\ $S(X|B) = H(P)$: the conditional von Neumann entropy of the quantum state after the measurement is the same as the Shannon entropy of the discrete probability distribution of measurement outcomes. Note also that $p_i={\rm Tr}[(X_i \otimes {\mathbbm 1}) \varrho_{AB}]={\rm Tr}[X_i \varrho_{A}]$, so the probability distribution $P$, and hence its Shannon entropy, only depends on the reduced density matrix $\varrho_{A}$. For the bipartite state $\varrho_{AB}$, with Hermitian measurements $X:=\{X_{j}\}$ and $Z:=\{Z_{k}\}$ performed on subsystem $A$, the following holds: \begin{align} & S\left(X|B\right) +S\left(Z|B\right) \nonumber\\ & =H\left(X\right) -\mathcal{I}(\varrho_{AB}|\{X_{j}\}) +H\left(Z\right) -\mathcal{I}(\varrho_{AB}|\{Z_{k}\}) \nonumber\\ & \geq H\left(X\right) +H\left(Z\right) -2J_{A}\left(\varrho_{AB}\right)\nonumber\\ & \geq2S(\varrho_{A})-2J_{A}(\varrho_{AB}) \nonumber\\ & =2S(A|B)+ 2D_{A}(\varrho_{AB})\label{proof-of-new-ineq} \, . \end{align} The first identity is a consequence of Eq.~(\ref{Cond}). The first inequality follows from the definition of the classical correlation $J_{A}\left( \varrho_{AB}\right)$. 
Since $J_{A}\left( \varrho_{AB}\right)$ is defined as the maximum of $\mathcal{I}(\varrho_{AB}|\{\hat{\Pi}_{A}^{j}\})$ over all POVMs, in general $\mathcal{I}(\varrho_{AB}|\{X_{j}\})\leq J_{A}\left( \varrho_{AB}\right)$; similarly, $\mathcal{I}(\varrho_{AB}|\{Z_{k}\})\leq J_{A}\left( \varrho_{AB}\right)$. The second inequality comes from Eq.~(\ref{Cond3}), a consequence of the Lemma reported above. Finally, the last equality follows from the definition of quantum discord. \qed We now compare our bound with the inequality (\ref{pati}), which is also related to quantum discord. The term $\frac{1}{c}$ in (\ref{pati}) quantifies the complementarity of the two observables, and thus accounts for specific information concerning the measurements carried out. Thus, one should expect such a bound to be typically tighter than the relation (\ref{bound}). This is in fact often the case. However, aside from the intrinsic value of a universal, observable-independent relation, we find that, in several significant cases which we shall cover in the next section, our bound is tighter than Inequality (\ref{pati}), and is almost the same as the actual value of the uncertainty. In a sense, in such cases, quantum correlations between the two subsystems make up for the absence of a measurement-specific term like $\frac{1}{c}$ of Ineq.~(\ref{pati}). In particular, as shown in what follows, our bound turns out to improve quite often on Berta et al.'s lower bound of Inequality (\ref{berta}), based on quantum correlations. \subsection{Examples} Let us first illustrate the relevance of our relation by considering some ad-hoc instances. \subsubsection{Two-qubit Werner states.} Consider the two-qubit Werner state $\varrho_{AB}=\frac{1-f}{4}I_{A}\otimes I_{B}+f|\Psi^{-}\rangle\langle\Psi^{-}|$, where $|\Psi^{-}\rangle=(|01\rangle-|10\rangle)/\sqrt{2}$ is the anti-symmetric Bell state, and $0\leq f\leq 1$. We choose the observables $X$ and $Z$ as the two spin observables $\sigma_{x}$ and $\sigma_{z}$. Then the uncertainty can be determined as: \begin{align*} S(X|B)+ S(Z|B)=2-(1-f)\log_2(1-f)-(1+f)\log_2(1+f) \; . \end{align*} The lower bound in (\ref{bound}) reads: \begin{align*} 2S(A|B)+2D_{A}(\varrho_{AB}) =2-(1-f)\log_2(1-f)-(1+f)\log_2(1+f) \; , \end{align*} which is exactly equal to the uncertainty $S(X|B)+ S(Z|B)$. This also coincides with the lower bound (\ref{pati}) of Ref.~\cite{Pati}: \begin{align*}2-(1-f)\log_2(1-f)-(1+f)\log_2(1+f) \; . \end{align*} Instead, the lower bound (\ref{berta}) of Ref.~\cite{Berta} is \begin{align*}2-\frac{1+3f}{4}\log_2(1+3f)-\frac{3(1-f)}{4}\log_2(1-f)\; ,\end{align*} which is smaller, and thus less informative, than the other two. \subsubsection{Two-qutrit Werner states.} For two qutrits, a Werner state can be written as $\varrho_{AB}=\frac{1-f}{6}\Pi^{+}+\frac{f}{3}\Pi^{-}$, where $\Pi^{+}$ is the projector onto the symmetric subspace and $\Pi^{-}$ is the projector onto the antisymmetric subspace. We choose the observables $X$ and $Z$ as two generators of $SU(3)$ and define: \begin{align*} |0\rangle=(1,0,0)^{T}, |1\rangle=(0,1,0)^{T},|2\rangle=(0,0,1)^{T} \, , \end{align*} \begin{align*} X=|0\rangle\langle 1|+|1\rangle\langle 0|, \quad Z=|0\rangle\langle 0|-|1\rangle\langle 1| \, , \end{align*} such that \begin{align*} S(X|B)+ S(Z|B) =f+3-(1-f)\log_2(1-f)-(1+f)\log_2(1+f) \; , \end{align*} \begin{align*} D_{A}(\varrho_{AB})&=2+f\log_2(\frac{f}{2})+(1-f)\log_2(\frac{1-f}{4})\\& -\frac{1-f}{2}\log_2(1-f)-\frac{1+f}{2}\log_2(\frac{1+f}{2}). 
\end{align*} Our bound (\ref{bound}) then reads: \begin{align*} 2S(A|B)+ 2D_{A}(\varrho_{AB})\\=f+3-(1-f)\log_2(1-f)-(1+f)\log_2(1+f)\; , \end{align*} which is equal to the uncertainty $S(X|B)+ S(Z|B)$. The lower bound (\ref{pati}) of Ref.~\cite{Pati} is instead \begin{align*} -f+1-(1-f)\log_2(1-f)-f\log_2f\\+ \max\{0,2f+f\log_2f-\log_2\frac{3(1+f)}{4}\\-f\log_2(1+f)\}\\ =f+3-\log_23-(1-f)\log_2(1-f)-(1+f)\log_2(1+f)\; , \end{align*} which is smaller than what we obtained. For the lower bound (\ref{berta}) of Ref.~\cite{Berta} one has \begin{align*} -f+1-(1-f)\log_2(1-f)-f\log_2f \; , \end{align*} which is smaller than our lower bound. We have thus identified a situation, with two-qutrit Werner states, {\em where our bound performs better than the previously known ones}. \subsubsection{Isotropic states.} Consider a bipartite isotropic state with local Hilbert space dimension $d$, $\varrho=f \phi_{d}+\frac{1-f}{d^2-1}(I-\phi_{d})$, with $\phi_d$ the projector onto the maximally entangled state. For $d=2$, consider the following observables: \begin{align*} |0\rangle=(1,0)^{T},|1\rangle=(0,1)^{T}, \end{align*} \begin{align*} X=|0\rangle\langle 1|+|1\rangle\langle 0|,\quad Z=|0\rangle\langle 0|-|1\rangle\langle 1|. \end{align*} Then \begin{align*} S(X|B)+ S(Z|B)=-\frac{2}{3}(-2f+2(1-f)\log_2(1-f)+\log_2\frac{4(1+2f)}{27}+2f\log_2(1+2f)) \, , \end{align*} \begin{align*} D_{A}(\varrho_{AB})=\frac{1-f}{3}\log_2(\frac{1-f}{3})+f\log_2f-\frac{1+2f}{3}\log_2\frac{1+2f}{6}. \end{align*} The universal lower bound (\ref{bound}) reads: \begin{align*} -\frac{2}{3}(-2f+2(1-f)\log_2(1-f)+\log_2\frac{4(1+2f)}{27}+2f\log_2(1+2f)) \, , \end{align*} which is exactly equal to the uncertainty $S(X|B)+ S(Z|B)$. The lower bound (\ref{pati}) of Ref.~\cite{Pati} is also equal to the uncertainty $S(X|B)+ S(Z|B)$ in this case. However, the lower bound (\ref{berta}) of Ref.~\cite{Berta} equals \begin{align*} -(1-f)\log_2\frac{1-f}{3}-f\log_2 f \; , \end{align*} which is always smaller than what is obtained with the discord-assisted bounds. \begin{figure} \caption{Plot of the different lower bounds for isotropic states with dimension $d=2$ and observables given by the matrices in Eqs.~(\ref{X1}) and (\ref{Z1}).} \label{bound1} \end{figure} When $d=3$, instead, by choosing the observables \begin{align*} |0\rangle=(1,0,0)^{T},|1\rangle=(0,1,0)^{T},|2\rangle=(0,0,1)^{T}, \end{align*} \begin{align*} X=|0\rangle\langle 1|+|1\rangle\langle 0|,\quad Z=|0\rangle\langle 0|-|1\rangle\langle 1| , \end{align*} one gets \begin{align*} S(X|B)+ S(Z|B)=-\frac{1}{2}(3(1-f)\log_2\frac{1-f}{8}+\log_2\frac{27(1+3f)}{4}+3f\log_2\frac{1+3f}{12}) , \end{align*} \begin{align*} D_{A}(\varrho_{AB})=\frac{1-f}{4}\log_2(\frac{1-f}{8})+f\log_2f-\frac{1+3f}{4}\log_2\frac{1+3f}{12}. \end{align*} The universal lower bound (\ref{bound}) then reads: \begin{align*} -\frac{1}{2}(3(1-f)\log_2\frac{1-f}{8}+\log_2\frac{27(1+3f)}{4}+3f\log_2\frac{1+3f}{12}), \end{align*} which is equal to the uncertainty $S(X|B)+ S(Z|B)$. The lower bound (\ref{pati}) of Ref.~\cite{Pati} is instead \begin{align*} -\frac{1}{2}(3(1-f)\log_2\frac{1-f}{8}+\log_2\frac{243(1+3f)}{4}+3f\log_2\frac{1+3f}{12}), \end{align*} which is smaller than ours. The lower bound (\ref{berta}) of Ref.~\cite{Berta} is equal to \begin{align*} -(1-f)\log_2(1-f)-f\log_2 f-(3f-3)-\log_23 \; , \end{align*} which is also smaller than our bound. 
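These analytic expressions can be cross-checked numerically. The following sketch (an illustration added here, assuming only NumPy; it is not code from the original work) evaluates, for the $d=2$ isotropic state, the uncertainty $S(X|B)+S(Z|B)$ with $X=\sigma_x$, $Z=\sigma_z$ and the universal bound $2S(A|B)+2D_A(\varrho_{AB})$ of Eq.~(\ref{bound}), estimating the discord by a brute-force grid search over projective measurements on $A$ (adequate here, since the grid comfortably contains the optimal projective measurement for such a highly symmetric state).
\begin{verbatim}
# Illustrative numerical check (plain NumPy; not code from the paper).
import numpy as np

def vn(rho):
    ev = np.linalg.eigvalsh(rho); ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, keep):          # 2x2 subsystems; keep=0 -> rho_A, keep=1 -> rho_B
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

I2 = np.eye(2)

def meas_cond_entropy(rho, basis):
    """S(X|B) for the projective measurement on A defined by an orthonormal basis."""
    post = sum(np.kron(np.outer(v, v.conj()), I2) @ rho @ np.kron(np.outer(v, v.conj()), I2)
               for v in basis)
    return vn(post) - vn(ptrace(rho, 1))

def bloch_basis(theta, phi):
    v0 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    return [v0, np.array([-np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)])]

def discord_A(rho, steps=40):
    SA, SB, SAB = vn(ptrace(rho, 0)), vn(ptrace(rho, 1)), vn(rho)
    best_J = -np.inf
    for theta in np.linspace(0, np.pi, steps):
        for phi in np.linspace(0, 2 * np.pi, steps):
            J = SB
            for v in bloch_basis(theta, phi):
                P = np.kron(np.outer(v, v.conj()), I2)
                p = np.real(np.trace(P @ rho))
                if p > 1e-12:
                    J -= p * vn(ptrace(P @ rho @ P, 1) / p)
            best_J = max(best_J, J)
    return (SA + SB - SAB) - best_J            # D_A = I - J_A

f = 0.8
phi_plus = np.zeros(4); phi_plus[[0, 3]] = 1 / np.sqrt(2)
proj = np.outer(phi_plus, phi_plus)
rho = f * proj + (1 - f) / 3 * (np.eye(4) - proj)        # d = 2 isotropic state

x_basis = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]
z_basis = [np.array([1, 0]), np.array([0, 1])]
uncertainty = meas_cond_entropy(rho, x_basis) + meas_cond_entropy(rho, z_basis)
bound = 2 * (vn(rho) - vn(ptrace(rho, 1))) + 2 * discord_A(rho)
print(uncertainty, bound)        # the two values agree (up to the grid resolution)
\end{verbatim}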
\begin{figure} \caption{Plot of the different lower bounds for isotropic states with dimension $d=3$ and observables given by the matrices in Eqs.~(\ref{X2}) and (\ref{Z2}).} \label{bound2} \end{figure} For an isotropic state with $d=2$, other cases can be constructed where our lower bound provides one with a substantial advantage, such as \begin{equation} X=\begin{pmatrix} 0.272007 & 0.0483473+0.584816 i \\ 0.0483473-0.584816 i & 0.246297 \end{pmatrix},\label{X1} \end{equation} \begin{equation} Z=\begin{pmatrix} 0.43916 & 0.857154+0.976248 i \\ 0.857154-0.976248 i & 0.515329 \end{pmatrix}. \label{Z1} \end{equation} The different bounds for the observables above are displayed in Fig.~\ref{bound1}. Likewise, cases where our lower bound is tighter can be found for $d=3$, such as \begin{equation} X=\begin{pmatrix} 0.246301 & 0.267394 + 0.627628 i & 0.155311 + 0.270053 i \\ 0.267394 - 0.627628 i & 0.752065 & 0.231887 + 0.500147 i\\ 0.155311 - 0.270053 i & 0.231887 - 0.500147 i & 0.94377 \end{pmatrix}, \label{X2} \end{equation} \begin{equation} Z=\begin{pmatrix} 0.586665 & 0.146795 + 0.957852 i & 0.687252 + 0.677623 i \\ 0.146795 - 0.957852 i & 0.709581 & 0.405322 + 0.525615 i\\ 0.687252 - 0.677623 i & 0.405322 - 0.525615 i & 0.901804 \end{pmatrix}, \label{Z2} \end{equation} whose uncertainties and different bounds are depicted in Fig.~\ref{bound2}. \subsubsection{Qubit-qudit states.} Consider the qubit-qutrit state $\varrho=\alpha (|02\rangle\langle 02|+|12\rangle\langle 12|)+\beta (|\phi^{+}\rangle\langle\phi^{+}|+|\phi^{-}\rangle\langle\phi^{-}|+ |\psi^{+}\rangle\langle\psi^{+}|)+\gamma|\psi^{-}\rangle\langle\psi^{-}|$, where $|\phi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|00\rangle\pm|11\rangle)$ and $|\psi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|01\rangle\pm|10\rangle)$, as well as the following observables: \begin{align*} |0\rangle=(1,0)^{T},|1\rangle=(0,1)^{T}, \end{align*} \begin{align*} X=|0\rangle\langle 1|+|1\rangle\langle 0|,\quad Z=|0\rangle\langle 0|-|1\rangle\langle 1|. \end{align*} Then \begin{align*} S(X|B)+ S(Z|B)=4\alpha-4\beta-4\beta \log_2(\beta)-2(\beta+\gamma)\log_2(\beta+\gamma)+2(3\beta+\gamma)\log_2(3\beta+\gamma) \, . \end{align*} Our bound (\ref{bound}) and the lower bound (\ref{pati}) of Ref.~\cite{Pati} are both equal to the uncertainty $S(X|B)+ S(Z|B)$ in this instance. The bound (\ref{berta}) of Ref.~\cite{Berta} is instead \begin{align*} 4\alpha-3\beta \log_2(\beta)-\gamma \log_2(\gamma)+(3\beta+\gamma)\log_2(3\beta+\gamma) \; , \end{align*} which is always smaller than the previous bounds. Under the following choice of observables: \begin{equation} X=\begin{pmatrix} 0.826411 & 0.443371+0.745704 i \\ 0.443371-0.745704 i & 0.459166 \end{pmatrix}, \label{X3} \end{equation} \begin{equation} Z=\begin{pmatrix} 0.832848 & 0.191194+0.608568 i \\ 0.191194-0.608568 i & 0.509301 \end{pmatrix}, \label{Z3} \end{equation} and with parameters $\alpha=0.25$ and $0\le\gamma\le0.5$, the universal bound we derived is tighter than any of the previously known ones, as depicted in Fig.~\ref{bound3}. 
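For arbitrary non-degenerate Hermitian observables such as those in Eqs.~(\ref{X1})--(\ref{Z3}), the complementarity $c=\max_{i,j}|\langle x_i|z_j\rangle|^2$ entering the bounds (\ref{berta}) and (\ref{pati}) is easily evaluated numerically from the eigenbases; a minimal sketch (added for illustration, assuming NumPy) follows.
\begin{verbatim}
# Illustrative sketch: complementarity c = max_{i,j} |<x_i|z_j>|^2 of two
# Hermitian observables, here the matrices of Eqs. (X1) and (Z1).
import numpy as np

def complementarity(Xop, Zop):
    _, VX = np.linalg.eigh(Xop)          # columns of VX are the eigenvectors |x_i>
    _, VZ = np.linalg.eigh(Zop)
    return float(np.max(np.abs(VX.conj().T @ VZ) ** 2))

Xop = np.array([[0.272007, 0.0483473 + 0.584816j],
                [0.0483473 - 0.584816j, 0.246297]])
Zop = np.array([[0.43916, 0.857154 + 0.976248j],
                [0.857154 - 0.976248j, 0.515329]])
c = complementarity(Xop, Zop)
print(c, np.log2(1 / c))                 # c and the term log2(1/c) of Eqs. (berta), (pati)
\end{verbatim}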
\begin{figure} \caption{Plot of the different lower bounds for the qubit-qutrit state defined in the text and observables given by the matrices in Eqs.~(\ref{X3}) and (\ref{Z3}).} \label{bound3} \end{figure} For the following state of one qubit times a four-dimensional quantum system, $\varrho=\alpha (|02\rangle\langle 02|+|03\rangle\langle 03|+|12\rangle\langle 12|+|13\rangle\langle 13|)+\beta (|\phi^{+}\rangle\langle\phi^{+}|+|\phi^{-}\rangle\langle\phi^{-}|+ |\psi^{+}\rangle\langle\psi^{+}|)+\gamma|\psi^{-}\rangle\langle\psi^{-}|$, with observables \begin{align*} |0\rangle=(1,0)^{T},|1\rangle=(0,1)^{T}, \end{align*} \begin{align*} X=|0\rangle\langle 1|+|1\rangle\langle 0|, \quad Z=|0\rangle\langle 0|-|1\rangle\langle 1|, \end{align*} one has \begin{align*} S(X|B)+ S(Z|B)=8\alpha-4\beta-4\beta \log_2(\beta)-2(\beta+\gamma)\log_2(\beta+\gamma)+2(3\beta+\gamma)\log_2(3\beta+\gamma). \end{align*} The universal bound (\ref{bound}) and the lower bound (\ref{pati}) of Ref.~\cite{Pati} are both equal to the uncertainty $S(X|B)+ S(Z|B)$, while the bound (\ref{berta}) of Ref.~\cite{Berta} is always smaller and reads \begin{align*} 8\alpha-3\beta \log_2(\beta)-\gamma \log_2(\gamma)+(3\beta+\gamma)\log_2(3\beta+\gamma). \end{align*} Under the following choice of observables: \begin{equation} X=\begin{pmatrix} 0.370786 & 0.344509+0.694499 i \\ 0.344509-0.694499 i & 0.60978 \end{pmatrix},\label{X4} \end{equation} \begin{equation} Z=\begin{pmatrix} 0.303997 & 0.332044+0.448198 i \\ 0.332044-0.448198 i & 0.342387 \end{pmatrix}, \label{Z4} \end{equation} and with parameters $\alpha=0.1$ and $0\le\gamma\le0.6$, the universal bound we derived is tighter than any of the previously known ones, as shown in Fig.~\ref{bound4}. \begin{figure} \caption{Plot of the different lower bounds for the qubit times four-dimensional system state defined in the main text and observables given by the matrices in Eqs.~(\ref{X4}) and (\ref{Z4}).} \label{bound4} \end{figure} \section{Uncertainty relations under decoherence} In the real world, as already remarked, quantum states are unavoidably disturbed by the decoherence induced by the environment. The extent to which the environment affects quantum entanglement, as well as quantum and classical correlations beyond entanglement, is a central problem, and much work has been done along these lines \cite{T. Yu,maziero}, in both Markovian \cite{Francesco} and non-Markovian \cite{Wang} regimes. It is thus natural to ask what impact such environmental decoherence has on the quantities entering entropic uncertainty relations in the presence of a quantum memory. Recently, Z. Y. Xu et al.\ \cite{Z. Y.} considered the behaviour of the uncertainty relation under the action of local unital and non-unital noisy channels. They found that, while unital noise increases the amount of uncertainty, non-unital amplitude-damping noise may reduce it and bring it closer to its lower bound in the long-time limit. These results shed light on the different competitive mechanisms governing quantum correlations on the one hand and the minimal missing information after local measurements on the other. In this section, we focus on two quantum bits, with one of the two qubits acting as a memory, and examine the behaviour of the different entropic uncertainty relations -- including the newly derived one of Theorem 1 -- with assisting quantum and classical correlations when the two qubits interact with independent environments, in both Markovian and non-Markovian regimes. 
The most common noise channels (amplitude and phase damping) are analysed. We shall focus on three scenarios in succession: first, we discuss the influence of the system-reservoir dynamics of quantum and classical correlations on the entropic uncertainty relations, and compare the three relations of Eqs.~(\ref{berta}), (\ref{pati}) and (\ref{bound}); second, we explore the influence of non-Markovian dynamics on the entropic uncertainty relations; and third, we discuss two special examples. \subsection{Uncertainty relations under unital and non-unital local noisy channels} In order to investigate the behaviour of the uncertainty relation under the influence of independent local noisy channels, in what follows we will consider a system $S$ comprised of qubits $A$ and $B$, each of them interacting independently with its own environment, $E_A$ and $E_B$ respectively. The dynamics of two qubits interacting independently with individual environments is described by the solutions of the appropriate Born-Markov Lindblad equations \cite{H. P.}, which can be expressed conveniently in the Kraus operator formalism \cite{M. A. Nielsen}. Given an initial state of the two qubits, its evolution can be written compactly as \begin{equation} \varrho_{AB}(t)=\sum\limits_{u,v}M_{uv}\varrho_{AB}(0)M_{uv}^{\dag} , \end{equation} where the Kraus operators $M_{uv}=M_u\otimes M_v$ \cite{M. A. Nielsen} satisfy the completeness relation $\sum\limits_{u,v}M_{uv}^{\dag} M_{uv}=\mathbbm{1}$ at all times, and the operators $M_u$ and $M_v$ describe the single-qubit quantum channels. In the following we shall restrict to two-qubit states with maximally mixed local states, which can be written in the form \begin{equation} \varrho_{AB} = \frac14\left(\mathbbm{1}+\sum\limits_{i=1}^3 c_{i}\,\sigma_i^A\otimes\sigma_i^{B} \right) , \end{equation} where $\sigma_i^R$ is the standard Pauli matrix in direction $i$ acting on the space of subsystem $R$, for $R=A,B$, and the $c_{i}$ are real coefficients satisfying $|c_i| \le 1$. Including the environments, the whole initial state will be taken as $\varrho_{AB}\otimes|00\rangle_{E_A E_B}$, where $|00\rangle_{E_A E_B}$ is the nominal vacuum state of the environments $E_A$ and $E_B$ in which the qubits $A$ and $B$, respectively, are immersed. We present below what happens to the entropic uncertainty relations for some qubit channels of broad interest ({\em i.e.}, amplitude damping and phase damping). \subsubsection{Amplitude damping channel} The amplitude-damping channel, which represents the dissipative interaction between the system $S$ and the environment $E$, can be modelled by treating $E$ as a large collection of independent harmonic oscillators interacting weakly with $S$ \cite{H. P.}. The effect of this dissipative channel on one qubit is described by the following map \begin{equation} \begin{split} \vert 0 \rangle_{S}\vert 0 \rangle_{E}\mapsto \vert 0 \rangle_{S}\vert 0 \rangle_{E},\\ \vert 1 \rangle_{S}\vert 0 \rangle_{E}\mapsto \sqrt{1-p}\vert 1 \rangle_{S}\vert 0 \rangle_{E}+\sqrt{p}\vert 0 \rangle_{S}\vert 1 \rangle_{E}, \end{split} \end{equation} where $|0\rangle_S$ is the ground and $|1\rangle_S$ the excited state of the qubit. The states $|0\rangle_E$ and $|1\rangle_E$ describe the states of the environment with no excitation and with one excitation distributed over all its modes ({\em i.e.}, in the normal mode coupled to the qubit). 
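As an illustration of this evolution (a sketch added here, using the standard single-qubit amplitude-damping Kraus operators written out below and all four products $M_u\otimes M_v$ required by the completeness relation above; it is not code accompanying the paper), one can propagate the Bell-diagonal state numerically:
\begin{verbatim}
# Sketch: independent local amplitude-damping channels acting on the
# Bell-diagonal state of Eq. (5), via rho(t) = sum_{u,v} (M_u x M_v) rho (M_u x M_v)^dag.
import numpy as np

def bell_diagonal(c1, c2, c3):
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return 0.25 * (np.eye(4) + c1 * np.kron(sx, sx) + c2 * np.kron(sy, sy)
                   + c3 * np.kron(sz, sz))

def amplitude_damping_kraus(p):
    M0 = np.array([[1, 0], [0, np.sqrt(1 - p)]])
    M1 = np.array([[0, np.sqrt(p)], [0, 0]])
    return [M0, M1]

def local_channel(rho, kraus):
    out = np.zeros_like(rho)
    for Mu in kraus:
        for Mv in kraus:
            K = np.kron(Mu, Mv)
            out += K @ rho @ K.conj().T
    return out

rho0 = bell_diagonal(-0.8, -0.8, -0.8)       # the initial state used in Fig. 1
rho_p = local_channel(rho0, amplitude_damping_kraus(p=0.3))
print(np.trace(rho_p).real)                  # 1.0: the map is trace preserving
print(np.round(rho_p, 3))
\end{verbatim}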
The quantity $p\in[0,1]$ represents a decay probability; under the Markov approximation, $1-p$ is a decreasing exponential function of time. The corresponding single-qubit Kraus operators describing the amplitude-damping channel are given by \cite{M. A. Nielsen} \begin{equation} M_{0}=\left( \begin{array}{cc} 1 & 0 \\ 0 & \sqrt{1-p} \\% \end{array} \right), \qquad M_{1}=\left( \begin{array}{cc} 0 & \sqrt{p} \\ 0 & 0 \\% \end{array} \right) . \end{equation} The two-qubit state evolves under the action of the operators in Eq.~(7); the density operator after the action of the channel, obtained by tracing out the degrees of freedom of the reservoirs and written in the computational basis $\{|00\rangle_{AB} , |01\rangle_{AB} , |10\rangle_{AB} , |11\rangle_{AB}\}$ for qubits $A$ and $B$, is given by \cite{J. Maziero} \begin{equation} \hspace*{-2cm} \varrho_{AB}=\frac{1}{4}\left( \begin{array}{cccc} (1+p)^2+(1-p)^2c_3 & 0&0&(1-p)(c_1-c_2) \\ 0 &((1-c_3)+(1+c_3)p)(1-p)&(1-p)(c_1+c_2)&0 \\ 0 &(1-p)(c_1+c_2)&((1-c_3)+(1+c_3)p)(1-p)&0 \\ (1-p)(c_1-c_2) &0 &0&(1-p)^2(1+c_3) \\% \end{array} \right) . \end{equation} Due to the X structure of the density matrix in Eq.~(8), there is a simple closed expression for the concurrence $Con$ between the two qubits \cite{S. Luo} \begin{equation} Con(p)=2\max\{0,\lambda_1(p),\lambda_2(p)\}, \end{equation} with $\lambda_1 (p)=|\varrho_{14} |-\sqrt{\varrho_{22} \varrho_{33} }$ and $\lambda_2 (p)=|\varrho_{23} |-\sqrt{\varrho_{11} \varrho_{44} }$. We can also derive analytical expressions for the mutual information and the classical correlations: \begin{eqnarray} \fl I[\varrho_{AB}(p)]=-(1-p)\log_2(1-p)-(1+p)\log_2(1+p)\nonumber\\ +\frac{1}{4}(1-p)(1+c_1+c_2-c_3+p+c_3p) \log_2[(1-p)(1+c_1+c_2-c_3+p+c_3p)]\nonumber\\ +\frac{1}{4}(1-p)(1-c_1-c_2-c_3+p+c_3p) \log_2[(1-p)(1-c_1-c_2-c_3+p+c_3p)]\nonumber\\ +\frac{1}{4}(1+p^2+c_3(1-p)^2-\sqrt{(c_1-c_2)^2(1-p)^2+4p^2})\nonumber\\ \log_2[1+p^2+c_3(1-p)^2-\sqrt{(c_1-c_2)^2(1-p)^2+4p^2}]\nonumber\\ +\frac{1}{4}(1+p^2+c_3(1-p)^2+\sqrt{(c_1-c_2)^2(1-p)^2+4p^2})\nonumber\\ \log_2[1+p^2+c_3(1-p)^2+\sqrt{(c_1-c_2)^2(1-p)^2+4p^2}],\nonumber \end{eqnarray} \begin{eqnarray} \fl C[\varrho_{AB}(p)]=\frac{1}{4}(1+p^2+c_3(1-p)^2-\sqrt{(c_1-c_2)^2(1-p)^2+4p^2}) \log_2[1+p^2+c_3(1-p)^2 \nonumber\\ -\sqrt{(c_1-c_2)^2(1-p)^2+4p^2}]+\frac{1}{4}(1+p^2+c_3(1-p)^2\nonumber\\ +\sqrt{(c_1-c_2)^2(1-p)^2+4p^2}) \log_2[1+p^2+c_3(1-p)^2 +\sqrt{(c_1-c_2)^2(1-p)^2+4p^2}] \nonumber\\- \frac{1}{4}(1+p^2+c_3(1-p)^2+(c_1-c_2)(1-p)) \log_2[1+p^2+c_3(1-p)^2+(c_1-c_2)(1-p)]\nonumber\\- \frac{1}{4}(1+p^2+c_3(1-p)^2-(c_1-c_2)(1-p)) \log_2[1+p^2+c_3(1-p)^2-(c_1-c_2)(1-p)] \nonumber\\ +\frac{1+c}{2}\log_2(1+c)+\frac{1-c}{2}\log_2(1-c),\nonumber \end{eqnarray} where $c=\max\{|c_1(1-p)|,|c_2(1-p)|,c_3(1-p)^2+p^2\}$. The quantum discord is then given by [7] \begin{equation} D[\varrho_{AB}(p)]=I[\varrho_{AB}(p)]-C[\varrho_{AB}(p)] . 
\end{equation} If one chooses two Pauli observables $X=\sigma_i$ and $Z=\sigma_j$ ($i \neq j$; $i, j=1, 2, 3$) as measurements, the left-hand side of Eq.~(\ref{berta}) can be written as \begin{eqnarray} \fl U=2+(1-p)\log_2(1-p)+(1+p)\log_2(1+p) \nonumber\\ -\frac{1}{2}[(1-\sqrt{c_1^2(1-p)^2+p^2})\log_2(1-\sqrt{c_1^2(1-p)^2+p^2})\nonumber\\+ (1+\sqrt{c_1^2(1-p)^2+p^2}) \log_2(1+\sqrt{c_1^2(1-p)^2+p^2})\nonumber\\+ (1-\sqrt{c_2^2(1-p)^2+p^2}) \log_2(1-\sqrt{c_2^2(1-p)^2+p^2})\nonumber\\+ (1+\sqrt{c_2^2(1-p)^2+p^2}) \log_2(1+\sqrt{c_2^2(1-p)^2+p^2})] . \nonumber \end{eqnarray} On the other hand, the complementarity $c$ of the observables $\sigma_i$ and $\sigma_j$ is always equal to $1/2$, so that the right-hand sides of Eqs.~(\ref{berta}), (\ref{pati}) and (\ref{bound}), which we shall denote by $U_{b1}$, $U_{b2}$ and $U_{b3}$, take the form, respectively, \begin{equation} U_{b1}=1+S(\varrho_{AB})-S(\varrho_{B}) , \end{equation} where $S(\varrho_{AB})=-\frac{1}{4}(1-p)(1+c_1+c_2-c_3+p+c_3p)\log_2[(1-p)(1+c_1+c_2-c_3+p+c_3p)] -\frac{1}{4}(1-p)(1-c_1-c_2-c_3+p+c_3p)\log_2[(1-p)(1-c_1-c_2-c_3+p+c_3p)] -\frac{1}{4}(1+p^2+c_3(1-p)^2-\sqrt{(c_1-c_2)^2(1-p)^2+4p^2})\log_2[(1+p^2+c_3(1-p)^2-\sqrt{(c_1-c_2)^2(1-p)^2+4p^2})] -\frac{1}{4}(1+p^2+c_3(1-p)^2+\sqrt{(c_1-c_2)^2(1-p)^2+4p^2})\log_2[(1+p^2+c_3(1-p)^2+\sqrt{(c_1-c_2)^2(1-p)^2+4p^2})]$ and $S(\varrho_{B})=1-\frac{1-p}{2}\log_2(1-p)-\frac{1+p}{2}\log_2(1+p)$, \begin{equation} U_{b2}=1+S(\varrho_{AB})-S(\varrho_{B})+\max\{0,D[\varrho_{AB}(p)]-C[\varrho_{AB}(p)]\} , \end{equation} \begin{equation} U_{b3}=2S(\varrho_{AB})-2S(\varrho_{B})+2 D[\varrho_{AB}(p)] . \end{equation} \begin{figure} \caption{(a) $U$ and $U_{bi}$.} \label{fig:Fig1} \end{figure} Let us now choose the initial state with $c_1=c_2=c_3=-0.8$. The specifics of the dynamics of the uncertainties clearly depend on the initial state, but these values represent a typical situation. Also, let us denote by $U$ the left-hand side of Eqs.~(\ref{berta}), (\ref{pati}) and (\ref{bound}) when the two Pauli observables $\sigma_1$ and $\sigma_2$ are chosen. As shown in Fig.~1, the quantity $U$ increases over time due to the gradual decay of the correlations between the system qubit $A$ and the memory qubit $B$, while $U_{b1}$, $U_{b2}$ and $U_{b3}$, corresponding to the lower bounds of the uncertainty in Eqs.~(\ref{berta}), (\ref{pati}) and (\ref{bound}) respectively, increase at first, and then decrease to asymptotic values. The dynamics of the discord, entanglement and classical correlations present in qubits $A$ and $B$ are shown in the inset of Fig.~1. In this case, the non-unital channel induces disentanglement asymptotically in time; the quantum discord first decreases, then increases for a short interval, and finally decreases to disappear asymptotically (recalling that $1-p$ falls exponentially in time). This dynamics induces substantial differences among the behaviours of $U_{b1}$, $U_{b2}$ and $U_{b3}$: at very short times, $U_{b3}$ better approximates the evolution of the Shannon entropies, thanks to the permanence of classical correlations at such times, while $U_{b1}$ and $U_{b2}$ become tighter at intermediate times. \subsubsection{Phase damping channel} The phase-damping channel is a unital channel ({\em i.e.}, it leaves a maximally mixed input state unchanged) that leads to a loss of quantum coherence without loss of energy. 
The map of this channel on a one-qubit system is given by \begin{equation} \begin{split} \vert 0 \rangle_{S}\vert 0 \rangle_{E}\mapsto \vert 0 \rangle_{S}\vert 0 \rangle_{E},\\ \vert 1 \rangle_{S}\vert 0 \rangle_{E}\mapsto \sqrt{1-p}\vert 1 \rangle_{S}\vert 0 \rangle_{E}+\sqrt{p}\vert 1 \rangle_{S}\vert 1 \rangle_{E}. \end{split} \end{equation} The corresponding single-qubit Kraus operators for the phase-damping channel (acting on each of the qubits $A$ and $B$) can be written as \begin{equation} M_{0}=\left( \begin{array}{cc} 1 & 0 \\ 0 & \sqrt{1-p} \\% \end{array} \right), \qquad M_{1}=\left( \begin{array}{cc} 0 & 0 \\ 0 & \sqrt{p} \\% \end{array} \right) . \end{equation} For the initial state (5), the evolved density operator of the system $AB$, obtained by tracing out the degrees of freedom of the reservoirs, is given by \begin{equation} \varrho_{AB}=\left( \begin{array}{cccc} \frac{1+c_3}{4} & 0&0&\frac{(1-p)c^{-}}{4} \\ 0 &\frac{1-c_3}{4}&\frac{(1-p)c^{+}}{4}&0 \\ 0 &\frac{(1-p)c^{+}}{4}&\frac{1-c_3}{4}&0 \\ \frac{(1-p)c^{-}}{4} &0 &0&\frac{1+c_3}{4} \\% \end{array} \right) , \end{equation} where $c^{\pm}=c_1\pm c_2$. The mutual information and the classical correlations present in qubits $A$ and $B$ can be computed analytically and are given by \begin{equation} \begin{split} I[\varrho_{AB}(p)]=\frac{1}{4}(1+c_1+c_2-c_3-c_1p-c_2p) \log_2(1+c_1+c_2-c_3-c_1p-c_2p)\\ +\frac{1}{4}(1-c_1+c_2+c_3+c_1p-c_2p) \log_2(1-c_1+c_2+c_3+c_1p-c_2p)\\ +\frac{1}{4}(1+c_1-c_2+c_3-c_1p+c_2p) \log_2(1+c_1-c_2+c_3-c_1p+c_2p)\\ +\frac{1}{4}(1-c_1-c_2-c_3+c_1p+c_2p) \log_2(1-c_1-c_2-c_3+c_1p+c_2p),\\ \end{split} \end{equation} \begin{equation} C[\varrho_{AB}(p)]=\frac{1-c}{2}\log_2(1-c)+\frac{1+c}{2}\log_2(1+c) , \end{equation} where $c=\max\{|c_1(1-p)|,|c_2(1-p)|,|c_3|\}$. The concurrence of qubits $A$ and $B$ is instead given by \begin{equation} \begin{split} Con(p)=\frac{1}{2} \max\{1+c_3-(c_1-c_2)(1-p),1+c_3\\+(c_1-c_2)(1-p), 1-c_3-(c_1+c_2)(1-p),\\1-c_3+(c_1+c_2)(1-p)\}-1 . \end{split} \end{equation} Thus we get the following: \begin{eqnarray} U&=&2-\frac{1}{2}(1+c_1(1-p))\log_2(1+c_1(1-p))\nonumber\\&&-\frac{1}{2}(1-c_1(1-p))\log_2(1-c_1(1-p)) \nonumber \\ &&-\frac{1}{2}(1+c_2(1-p))\log_2(1+c_2(1-p))\nonumber\\&&-\frac{1}{2}(1-c_2(1-p))\log_2(1-c_2(1-p)), \\ U_{b1}&=&S(\varrho_{AB}), \\ U_{b2}&=&S(\varrho_{AB})+\max\{0,DC\},\\ U_{b3}&=&2S(\varrho_{AB})-2+2(I[\varrho_{AB}(p)]-C[\varrho_{AB}(p)]), \end{eqnarray} where $DC=\frac{1}{4}(1+c_1+c_2-c_3-c_1p-c_2p)\log_2(1+c_1+c_2-c_3-c_1p-c_2p) +\frac{1}{4}(1-c_1+c_2+c_3+c_1p-c_2p)\log_2(1-c_1+c_2+c_3+c_1p-c_2p) +\frac{1}{4}(1+c_1-c_2+c_3-c_1p+c_2p)\log_2(1+c_1-c_2+c_3-c_1p+c_2p) +\frac{1}{4}(1-c_1-c_2-c_3+c_1p+c_2p)\log_2(1-c_1-c_2-c_3+c_1p+c_2p)-(1-c)\log_2(1-c)-(1+c)\log_2(1+c)$. \begin{figure} \caption{(a) $U$ and $U_{bi}$.} \label{fig:Fig2} \end{figure} As depicted in Fig.~2, choosing $c_1=c_2=c_3=-0.8$, $U$, $U_{b1}$ and $U_{b2}$ increase at all times, while $U_{b3}$ stays constant, as this unital channel induces a gradual loss of entanglement and discord while leaving the classical correlations unchanged. $U_{b1}$ and $U_{b2}$ follow almost the same curve during this process, except for a short initial time interval in which $U_{b2}$ coincides with $U_{b3}$. This situation should be contrasted with the analysis carried out in Ref. 
[11], where the behaviour of the memory-assisted entropic uncertainty relations under noise acting on the system qubit alone is analysed. The obvious differences in the behaviour of the uncertainties are due to the different dynamics of the quantum and classical correlations when both qubits, rather than a single one, undergo decoherence. Furthermore, here we pay more attention to the action of discord-assisted memories, while Ref. [11] mainly focuses on the effect of entanglement assistance. \subsection{Uncertainty relations in non-Markovian environments} In this section we study the effect of dissipation on the uncertainty relations by exactly solving a model consisting of two independent qubits subject to two zero-temperature non-Markovian reservoirs. We shall see how the behaviour of the uncertainty relations due to the correlation dynamics is affected by the environment being, respectively, `quantum' or `classical', {\em i.e.}\ with or without back-action on the system. \subsubsection{Reservoirs with system-environment back-action} The non-Markovian effects on the dynamics of entanglement and discord in a two-qubit system have been studied recently [19,20]. Consider a two-qubit system $A$ and $B$ whose dynamics is described by the damped Jaynes-Cummings model: each qubit is coupled to a cavity mode, which is in turn coupled to a non-Markovian environment. The environments are described by baths of harmonic oscillators, with spectral density \cite{H. P.} \begin{equation} J(\omega)=\frac{1}{2\pi}\frac{\gamma_0\tau^2}{(\omega_0-\omega)^2+\tau^2} , \end{equation} where $\tau$ is associated with the reservoir correlation time $t_B$ by the relation $t_B\approx\frac{1}{\tau}$, and $\gamma_0$ is connected to the time scale $t_{R}$ over which the two-qubit system changes, $t_R\approx\frac{1}{\gamma_0}$; the strong coupling condition $t_R<2t_B$ is assumed. The two-qubit Hamiltonian under independent amplitude-damping channels can be written as \cite{F. F. Fanchini} \begin{equation} H=\omega_{0}^{(j)}\sigma_{+}^{(j)}\sigma_{-}^{(j)}+\sum\limits_{k}\omega_{k}^{(j)}a_{k}^{(j)\dag}a_{k}^{(j)}+\left(\sigma_{+}^{(j)}B^{(j)}+\sigma_{-}^{(j)}B^{(j)\dag}\right), \end{equation} where $B^{(j)}=\sum\limits_k g_k^{(j)}a_k^{(j)}$, with $g_k^{(j)}$ the coupling constants, $\omega_0^{(j)}$ is the transition frequency of the $j^{\rm th}$ qubit, and $\sigma_{\pm}^{(j)}$ are the raising and lowering operators of the $j^{\rm th}$ qubit. The index $k$ labels the reservoir field modes with frequencies $\omega_k^{(j)}$, and $a_k^{(j)\dag}$ ($a_k^{(j)}$) is their creation (annihilation) operator. Here, and in the following, summation over the repeated index $j$ is understood (Einstein convention). The initial state of the two qubits is the Bell-like state \begin{equation} |\psi\rangle=\alpha|00\rangle+\sqrt{1-\alpha^2}|11\rangle . \end{equation} According to the dynamics of the initial state's density matrix elements given in Ref. \cite{B. 
Bellomo}, the mutual information, quantum discord and concurrence present in qubits $A$ and $B$ are given by \begin{eqnarray} \fl I[\varrho_{AB}(t)] = -2a^2p_t\log_2(a^2p_t)-2(1-a^2p_t)\log_2(1-a^2p_t)+2a^2p_t(1-p_t)\log_2(a^2p_t(1-p_t)) \nonumber\\ +[-a^2p_t(1-p_t)\nonumber\\ +\frac{1}{2}(1-\sqrt{1-4a^2(1-p_t)p_t})] \log_2(-a^2p_t(1-p_t)+ \frac{1}{2}(1-\sqrt{1-4a^2(1-p_t)p_t})) \nonumber\\ +[-a^2p_t(1-p_t)\nonumber\\ +\frac{1}{2}(1+\sqrt{1-4a^2(1-p_t)p_t})] \log_2(-a^2p_t(1-p_t)+\frac{1}{2}(1+\sqrt{1-4a^2(1-p_t)p_t})),\nonumber\\ \fl D[\varrho_{AB}(t)] = \min\{D_1,D_2\},\nonumber\\ \fl Con(p)= 2\max\{a^2(1-p_t)p_t,\sqrt{2a^2p_t^2+a^4p_t^2(p_t^2-2p_t-1)-2\sqrt{a^4(1-a^2)p_t^4(1-p_ta^2(2-p_t))}},\nonumber\\ \sqrt{2a^2p_t^2+a^4p_t^2(p_t^2-2p_t-1)+2\sqrt{a^4(1-a^2)p_t^4(1-p_ta^2(2-p_t))}}\}\nonumber\\ -2a^2(1-p_t)p_t-\sqrt{2a^2p_t^2+a^4p_t^2(p_t^2-2p_t-1)-2\sqrt{a^4(1-a^2)p_t^4(1-p_ta^2(2-p_t))}}\nonumber\\ -\sqrt{2a^2p_t^2+a^4p_t^2(p_t^2-2p_t-1)+2\sqrt{a^4(1-a^2)p_t^4(1-p_ta^2(2-p_t))}},\nonumber \end{eqnarray} where \begin{eqnarray} \fl D_1 = a^2p_t(1-p_t)\log_2(a^2p_t(1-p_t)) \nonumber\\ +[-a^2p_t(1-p_t)+\frac{1}{2}(1-\sqrt{1-4a^2(1-p_t)p_t})]\log_2(-a^2p_t(1-p_t) \nonumber\\ +\frac{1}{2}(1-\sqrt{1-4a^2(1-p_t)p_t}))\nonumber\\ +[-a^2p_t(1-p_t)+\frac{1}{2}(1+\sqrt{1-4a^2(1-p_t)p_t})]\log_2(-a^2p_t(1-p_t)\nonumber\\ +\frac{1}{2}(1+\sqrt{1-4a^2(1-p_t)p_t})) -a^2p_t\log_2(a^2p_t)-(1-a^2p_t)\log_2(1-a^2p_t)\nonumber\\ -a^2p_t\log_2p_t-a^2p_t(1-p_t)\log_2(1-p_t)- a^2(1-p_t)p_t\log_2\frac{a^2p_t(1-p_t)}{1-a^2p_t}-\nonumber\\ (1-2a^2p_t+a^2p_t^2)\log_2\frac{1-2a^2p_t+a^2p_t^2}{1-a^2p_t} , \nonumber \end{eqnarray} \begin{eqnarray} \fl D_2=a^2p_t(1-p_t)\log_2(a^2p_t(1-p_t))\nonumber\\ + [-a^2p_t(1-p_t)+\frac{1}{2}(1-\sqrt{1-4a^2(1-p_t)p_t})]\log_2(-a^2p_t(1-p_t)\nonumber\\ +\frac{1}{2}(1-\sqrt{1-4a^2(1-p_t)p_t}))\nonumber\\ +[-a^2p_t(1-p_t)+\frac{1}{2}(1+\sqrt{1-4a^2(1-p_t)p_t})]\log_2(-a^2p_t(1-p_t)\nonumber\\ +\frac{1}{2}(1+\sqrt{1-4a^2(1-p_t)p_t}))\nonumber\\ -\frac{1+\sqrt{1-4a^2p_t(1-p_t)}}{2}\log_2\frac{1+\sqrt{1-4a^2p_t(1-p_t)}}{2}\nonumber\\ -\frac{1-\sqrt{1-4a^2p_t(1-p_t)}}{2}\log_2\frac{1-\sqrt{1-4a^2p_t(1-p_t)}}{2}. \nonumber \end{eqnarray} \begin{figure} \caption{(a) $U$ and $U_{bi}$.} \label{fig:Fig3} \end{figure} In Fig.~3 we plot the uncertainty $U$, as well as the lower bounds $U_{b1}$, $U_{b2}$ and $U_{b3}$, as functions of the rescaled time $\gamma_0 t$ in the strong coupling regime, with $\tau=0.01\gamma_0$ and $\alpha=\frac{1}{\sqrt{10}}$. $U$, $U_{b1}$, $U_{b2}$ and $U_{b3}$ all oscillate in the long-time limit, due to the entanglement and discord between qubits $A$ and $B$ periodically vanishing and reviving. It is apparent that the behaviours of $U_{b1}$ and $U_{b2}$ are the same at short times, and that they tighten the lower bound on the uncertainty with respect to $U_{b3}$. The amplitudes of the oscillations of $U_{b1}$ (or $U_{b2}$) and $U_{b3}$ reduce slowly, as the peaks of entanglement and discord dwindle after each revival. \subsubsection{Reservoirs without system-environment back-action} We now want to explore how the entropic uncertainty relations are affected by revivals of correlations, including quantum discord and entanglement, occurring in `classical' non-Markovian environments with no back-action. 
Suppose the pair of non-interacting qubits is in a generic initial Bell-diagonal state: \begin{equation} \varrho(0)=\sum\limits_{k,n}c_{k}^{n}(0)|k^n\rangle\langle k^n| \quad (k=1,2;\ n=\pm) , \end{equation} where $|1^{\pm}\rangle=\frac{|01\rangle\pm|10\rangle}{\sqrt{2}}$ and $|2^{\pm}\rangle=\frac{|00\rangle\pm|11\rangle}{\sqrt{2}}$. Each of the two qubits is coupled to a random external field acting as a local environment, so that the global dynamical map $\Omega$ applied to the initial state $\varrho(0)$ is of the random external field type \cite{R. Alicki} and yields the state \begin{equation} \varrho(t)=\frac{1}{4}\sum\limits_{j,k=1}^2U_{j}^{A}(t)U_{k}^{B}(t)\varrho(0)U_{j}^{A\dag}(t)U_{k}^{B\dag}(t) , \end{equation} where $U_j^R (t)={\rm e}^{-iH_j t/\hbar}$ ($R=A,B$; $j=1,2$) is the time evolution operator with $H_j=i\hbar g(\sigma_+ e^{-i\phi_j }-\sigma_- e^{i\phi_j })$, and $H_j$ is expressed in the rotating frame at the qubit-field resonant frequency $\omega$. In the basis $\{|1\rangle, |0\rangle\}$, the time evolution operators $U_j^R (t)$ have the following matrix form \cite{R. Lo Franco} \begin{equation} U_{j}^{R}(t)=\left( \begin{array}{cc} \cos(gt) & e^{-i\phi_j}\sin(gt) \\ -e^{i\phi_j}\sin(gt) &\cos(gt) \\% \end{array} \right), \end{equation} where $j=1, 2$ and $\phi_j$ is the phase of the field at the location of each qubit, equal to either $0$ or $\pi$ with probability $p = \frac{1}{2}$. The interaction between each qubit and its local field mode is assumed to be strong enough that, for sufficiently long times, the dissipative effects of the vacuum radiation modes on the qubit dynamics can be neglected. From the matrix of Eq.\ (30) we know that the dynamics is cyclic, and that the global map $\Omega$ acts within the class of Bell-diagonal states (or states with maximally mixed marginals \cite{S. Luo}). These properties allow us to calculate analytically the correlation quantifiers under the map of Eq.\ (29) for different initial states $\varrho(0)$. As shown in Fig.\ 4a (right hand side), for an initial Bell-diagonal state with $c_1^+ (0)=0.9$, $c_1^- (0)=0.1$, and $c_2^+ (0)=c_2^- (0)=0$, choosing $p_1=1-p_2=0.025$, under the map of Eq.~(29) both entanglement and classical correlations collapse and revive during the dynamics, while the discord remains approximately constant. In Fig.\ 4a (left hand side) we observe that the lower bounds $U_{b1}$ and $U_{b2}$ coincide at all times, and that $U$, $U_{b1}$, $U_{b2}$ and $U_{b3}$ present periodic oscillations: interestingly, $U$ oscillates out of phase with the bounds, periodically saturating the entanglement-assisted uncertainty relation. The amplitudes of the oscillations do not decay. Also, $U_{b1}$ (or the equivalent $U_{b2}$) provides one with a tighter lower bound for the uncertainty than $U_{b3}$ at all times. \begin{figure} \caption{(a) The upper-left panel: $U$ and $U_{bi}$.} \label{fig:Fig4a} \end{figure} The choice $p_1=1-p_2=0.08$ for the same initial state, plotted in Fig.~4b, shows a similar behaviour, with the exceptions that the oscillations of $U$ are now in phase with those of the lower bounds (but the periodic saturation of the bounds still occurs), and that $U_{b3}$ provides, at certain points in time, a bound as tight as the other two assisted uncertainty relations. When entanglement and discord reach their maximum, $U$, $U_{b1}$ (or $U_{b2}$) and $U_{b3}$ reach their minimum, and vice versa. 
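A compact numerical sketch of this dynamics (an illustration assuming NumPy and the equal-weight map written above; it is not code from the original work) reproduces the collapses and revivals of entanglement; the concurrence is evaluated with Wootters' formula.
\begin{verbatim}
# Sketch: random-external-field map applied to a Bell-diagonal state,
# showing collapse and revival of the concurrence as a function of gt.
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])

def concurrence(rho):
    """Wootters' concurrence for a two-qubit density matrix."""
    rho_tilde = np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def U_field(gt, phase):
    return np.array([[np.cos(gt), np.exp(-1j * phase) * np.sin(gt)],
                     [-np.exp(1j * phase) * np.sin(gt), np.cos(gt)]])

# initial Bell-diagonal state with c_1^+(0) = 0.9 and c_1^-(0) = 0.1
psi_1p = np.array([0, 1, 1, 0]) / np.sqrt(2)
psi_1m = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho0 = 0.9 * np.outer(psi_1p, psi_1p) + 0.1 * np.outer(psi_1m, psi_1m)

for gt in np.linspace(0, np.pi, 7):
    rho_t = 0.25 * sum(np.kron(U_field(gt, pa), U_field(gt, pb)) @ rho0
                       @ np.kron(U_field(gt, pa), U_field(gt, pb)).conj().T
                       for pa in (0.0, np.pi) for pb in (0.0, np.pi))
    print(f"gt = {gt:.2f}   concurrence = {concurrence(rho_t):.3f}")
\end{verbatim}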
It is worth mentioning that the periodic oscillation of the uncertainties and lower bounds is only a consequence of the non-Markovian character of the independent qubit-reservoir dynamics, whether back-action on the system is present or not. This fact might hence be used to quantify the non-Markovianity of single-qubit dynamics. \subsection{Special cases} Let us now discuss two special examples of open system dynamics characterised, respectively, by a particular interplay between quantum discord and classical correlations, and by the presence of discord without entanglement. \subsubsection{Sudden transition between classical and quantum decoherence} A sharp transition between `classical' and `quantum' loss of correlations in a composite system characterises certain open quantum systems, when properly parametrised. This kind of behaviour was first noticed in the case of two qubits locally subject to non-dissipative channels \cite{L. Mazzola}, and then observed in an all-optical experimental setup \cite{Jin-Shi}. An environment-induced sudden change has also been observed in a room temperature nuclear magnetic resonance setup \cite{R. Auccaise}. Moreover, sudden changes and immunity against some sources of noise were also found when the environment is modelled as classical instead of quantum \cite{R. Lo Franco}, indicating that such a peculiar behaviour is in fact quite general. Here, we adopt the model of Ref.~\cite{L. Mazzola}, supposing that the initial state is in the class of states of Eq.~(5), and consider two independent phase-damping channels, so that the time evolution of the whole system is given by \cite{maziero} \begin{equation} \varrho_{AB}(t)=\sum\limits_{k,n}\lambda_{k}^{n}(t)|k^n\rangle\langle k^n| \quad (k=1,2;\ n=\pm) , \end{equation} where $\lambda_1^{\pm}(t)=\frac{1}{4}(1\pm c_1(t)\mp c_2(t)+c_3(t))$, $\lambda_2^{\pm}(t)=\frac{1}{4}(1\pm c_1(t)\pm c_2(t)-c_3(t))$, $c_1(t)=c_1 e^{-2\gamma t}$, $c_2(t)=c_2 e^{-2\gamma t}$ and $c_3(t)=c_3$, for a damping rate $\gamma$. The parameters chosen for the initial state are $c_1=1$, $c_3=-c_2=0.6$. \begin{figure} \caption{(a) $U$ and $U_{bi}$.} \label{fig:Fig5} \end{figure} As shown in Fig.~5, the uncertainty increases in the long-time limit due to the gradual loss of quantum correlations. The dynamics of the correlations is shown in the right plot of Fig.~5. In this case, while mutual information and entanglement decrease gradually, classical correlations and discord display two mutually exclusive plateaux (when discord remains constant, the classical correlations are decreasing, and vice versa). We can observe that this same phenomenon of sudden transition between discord and classical correlation decoherence occurs in the inset of Fig.~4b. Clearly, this sudden transition influences the behaviour of the corresponding entropic uncertainty relations: as shown in Fig.~5, $U_{b1}$ and $U_{b2}$ here coincide at all times (discord does not tighten the entanglement-assisted uncertainty relation), and $U_{b1}$ (or $U_{b2}$) is always a tighter lower bound for the uncertainty than $U_{b3}$. In particular, $U_{b3}$ increases at first, and then undergoes a sudden change as its maximum value coincides with $U_{b1}$ (and $U_{b2}$); that is, $U_{b3}$ stays constant while discord decays in time. \subsubsection{Quantum correlations without entanglement} It is well known that many operations in quantum information processing depend largely on quantum correlations, as represented by quantum entanglement. 
However, there are indications that some protocols might display a quantum advantage without the presence of entanglement \cite{animesh}. Besides, correlations quantified by quantum discord can always be ``activated'', even when no quantum entanglement is initially present \cite{marco}. We consider here our two-qubit system under a one-sided phase-damping channel \cite{Jin-Shi}, in the following initial state \begin{equation} \varrho(0)=dR\, |2^{+}\rangle\langle 2^{+}|+b(1-R)\, |2^{-}\rangle\langle 2^{-}|+bR\, |1^{+}\rangle\langle 1^{+}|+d(1-R)\, |1^{-}\rangle\langle 1^{-}| , \end{equation} \begin{figure} \caption{(a) $U$ and $U_{bi}$.} \label{fig:Fig6} \end{figure} where $b=0.7$, $R=0.7$ and $d=0.3$. As shown in the right plot of Fig.~6, the entanglement is almost zero during this process, while quantum discord is larger than the classical correlations for a certain short time interval. We observe the behaviour of the uncertainty relations from Fig.~6: at first, $U$, $U_{b1}$, $U_{b2}$ and $U_{b3}$ coincide and quickly increase; then, while $U$, $U_{b2}$ and $U_{b3}$ coincide (as discord is larger than the classical correlations), $U_{b1}$ is lower than $U$ (or $U_{b2}$). Furthermore, $U_{b1}$ and $U_{b2}$ provide a tighter lower bound than $U_{b3}$, and coincide with each other, once discord becomes smaller than the classical correlations. \section{Conclusions} We have introduced a new memory-assisted, observable-independent entropic uncertainty relation where quantum discord between system and memory plays an explicit role. We have shown that this uncertainty relation can be tighter than the ones obtained previously by Berta {\em et al.}\ and Pati {\em et al.} Moreover, we have explored the behaviour of these three entropic uncertainty relations with assisting quantum correlations for a two-qubit composite system interacting with two independent environments, in both Markovian and non-Markovian regimes. The most common noise channels (amplitude damping, phase damping) were discussed. The entropic uncertainties (and their lower bounds) increase under independent local unital Markovian noisy channels, while they may be reduced under non-unital noise channels. The entropic uncertainties (and their lower bounds) exhibit periodic oscillations, driven by the correlation dynamics, under independent non-Markovian reservoirs, whether the environment is modelled as quantum or classical. In addition, we have compared the three entropic uncertainty relations of Eqs.~(\ref{berta}), (\ref{pati}) and (\ref{bound}). The lower bounds $U_{b2}$ and $U_{b3}$ tighten the bound on the uncertainty when discord is larger than the classical correlations, which is often the case in practice. The relation between quantum correlations and the uncertainties is subtle, since a certain reduction in the uncertainty may also occur in the presence of small quantum correlations without entanglement. We have also shown that, in essence due to the greater resilience of the nearly ubiquitous quantum discord \cite{ferraro}, situations arise where uncertainty relations tightened by quantum discord offer a better estimate of the actual uncertainties in play. However, the advantage offered in this sense by the quantity $U_{b3}$ of Eq.\ (\ref{bound}) seems to be limited to rather specific circumstances. 
\ack This work was supported by the National Natural Science Foundation of China under Grant 61144006, by the Foundation of China Scholarship Council, by the Project Fund of Hunan Provincial Science and Technology Department under Grant 2010FJ3147, and by the Educational Committee of the Hunan Province of China through the Overseas Famous Teachers Programme. \end{document}
\begin{document} \title{Analysis and synthesis of feature map for kernel-based quantum classifier \thanks{Y. Suzuki, H. Yano: Equally contributing authors.} } \author{Yudai Suzuki \and Hiroshi Yano \and Qi Gao \and Shumpei Uno \and Tomoki Tanaka \and Manato Akiyama \and Naoki Yamamoto } \institute{ Yudai Suzuki \at Department of Mechanical Engineering, Keio University, Hiyoshi 3-14-1, Kohoku, Yokohama 223-8522, Japan \and Hiroshi Yano \at Department of Information and Computer Science, Keio University, Hiyoshi 3-14-1, Kohoku, Yokohama 223-8522, Japan \and Qi Gao \and Shumpei Uno \and Tomoki Tanaka \and Naoki Yamamoto \at Quantum Computing Center, Keio University, Hiyoshi 3-14-1, Kohoku, Yokohama 223-8522, Japan \and Manato Akiyama \at Department of Biosciences and Informatics, Keio University, Hiyoshi 3-14-1, Kohoku, Yokohama 223-8522, Japan \and Qi Gao \at Mitsubishi Chemical Corporation Science \& Innovation Center, 1000, Kamoshida-cho, Aoba-ku, Yokohama 227-8502, Japan \and Shumpei Uno \at Mizuho Information \& Research Institute, Inc., 2-3 Kanda-Nishikicho, Chiyoda-ku, Tokyo 101-8443, Japan \and Tomoki Tanaka \at Mitsubishi UFJ Financial Group, Inc. and MUFG Bank, Ltd., 2-7-1 Marunouchi, Chiyoda-ku, Tokyo 100-8388, Japan \and Naoki Yamamoto \at Department of Applied Physics and Physico-Informatics, Keio University, Hiyoshi 3-14-1, Kohoku, Yokohama 223-8522, Japan, \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} A method for analyzing the feature map for the kernel-based quantum classifier is developed; that is, we give a general formula for computing a lower bound of the exact training accuracy, which helps us to see whether the selected feature map is suitable for linearly separating the dataset. We show a proof-of-concept demonstration of this method for a class of 2-qubit classifiers, with several 2-dimensional datasets. Also, a synthesis method that combines different kernels to construct a better-performing feature map in a larger feature space is presented. \end{abstract} \keywords{Quantum computing \and Support vector machine \and Kernel method \and Feature map} \section{Introduction\label{intro}} Over the last 20 years, the unprecedented improvements in the cost-effectiveness ratio of computers, together with improved computational techniques, have made machine learning widely applicable in every aspect of our lives, such as education, healthcare, games, finance, transportation, energy, business, science and engineering~\cite{HastieTF09,Alpaydin16}. Among the numerous machine learning methods developed, the Support Vector Machine (SVM) is a well-established one that has become an overwhelmingly popular choice for data analysis~\cite{Boser1992}. In the SVM method, a nonlinear dataset is transformed via a feature map into another dataset that can be separated by a hyperplane in the feature space; this can be performed effectively using the kernel trick. In particular, the Gaussian kernel is often used. Quantum computing is expected to speed up machine learning by exploiting quantum mechanical properties such as superposition, interference, and entanglement. As for quantum classifiers, a renaissance began in the past few years, with the quantum SVM method \cite{Rebentrost2014}, the quantum circuit learning method \cite{Mitarai2018}, and some parallel developments by other research groups \cite{Neven 2018,Zhuang Zhang 2019,Wilson2018}. 
In particular, the kernel method can be exploited as a powerful means that can be introduced into the quantum SVM, as in the classical case \cite{Rebentrost2014,Chatterjee 2017,Bishwas2018,LiTongyang2019,havlivcek2019,Schuld2019,Nori 2019,Park Rhee 2019,Negoro 2019,Lloyd2020,LaRose2020}; importantly, the concept of the kernel-based quantum classifier has been experimentally demonstrated on real devices~\cite{havlivcek2019,Park Rhee 2019,Negoro 2019}. These studies show the possibility that machine learning will get a further boost from quantum computers in the near future.
In all the quantum classification methods developed so far, the feature map plays the role of encoding the dataset, taken from its original low-dimensional real space, into a high-dimensional quantum state space (i.e., the Hilbert space). A possible advantage of the quantum classifier lies in the fact that this high-dimensional feature space is realized on a physical quantum computer even with a modest number of qubits, as well as the fact that the kernel could be computed faster than in the classical case. However, in the framework of gate-based quantum computers, a suitable feature map has to be explicitly specified, which is of course less trivial than specifying a suitable kernel. (Note that a kernel induces a feature map through the reproducing kernel Hilbert space.) Actually, to choose a suitable feature map, one could prepare many candidate maps and try to find the best one by comparing the training accuracies attained with all those maps, but this clearly requires numerous rounds of classification or regression analysis. Hence it is desirable to have a method that easily gives a rough estimate of the training accuracy of every feature map candidate.
This paper gives one such method, based on the {\it minimum accuracy}, a lower bound of the exact training accuracy attained by any optimized classifier. The minimum accuracy is determined only from a chosen feature map and the input classical dataset, hence it can be used to screen a library of feature maps for suitable ones. A critical drawback of this method is that it requires a number of calculations of the order of the dimension of the feature space, which is exponential in the number of qubits. Hence, as a proof of concept, in this paper we study the case where the quantum classifier is composed of only two qubits (see Section \ref{sec:conclusion} for a possible extension of the method). This simple setup gives us an explicit form of the feature map and, further, a visualization of the encoded input data distribution in the feature space; this eventually enables us to calculate the minimum accuracy easily. Moreover, the visualized feature map candidates might be combined to construct a better-performing feature map and accordingly a better kernel. Although the concept of these synthesis tools is device-independent, in this paper we demonstrate the idea in the framework of \cite{havlivcek2019}, with five specific encoding functions and four 2-dimensional nonlinear datasets.
\section{Methods} \subsection{Real-vector representation of the feature map via Pauli decomposition} In the SVM method with a quantum kernel estimator, the feature map transforms an input dataset into a set of multi-qubit states, which form the feature space (i.e., the Hilbert space); the kernel matrix is then constructed by calculating all the inner products of the quantum states, and it is finally used in the (standard) SVM for classifying the dataset.
Clearly, different feature maps lead to different kernels and accordingly influence the classification accuracy, meaning that a careful analysis of the feature space is necessary. However, due to the complicated structure of the feature space, such an analysis is in general not straightforward. Here, we propose the Pauli-decomposition method for visualizing the feature space, which might be used as a guide to select a suitable feature map.
We begin with a brief summary of the kernel-based quantum SVM method proposed in \cite{havlivcek2019}. First, an $\tilde{n}$-dimensional classical data point $\bm{x}\in\mathbb{R}^{\tilde{n}}$ is encoded into the unitary operator $\mathcal{U}_{\Phi(\bm{x})}$ through an {\it encoding function} ${\Phi(\bm{x})}$, and this operator is applied to the initial state $\ket{0}^{\otimes n}$, with $\ket{0}$ the qubit ground state. Thus, the feature map is a transformation from the classical data $\bm{x}$ to the quantum state $\ket{\Phi(\bm{x})}=\mathcal{U}_{\Phi(\bm{x})}\ket{0}^{\otimes n}$, and the feature space is $(\mathbb{C}^2)^{\otimes n}=\mathbb{C}^{2^n}$. The kernel is then naturally defined as $K(\bm{x},\bm{z})=|\braket{\Phi(\bm{x})|\Phi(\bm{z})}|^2$; this quantity can be practically estimated as the fraction of all-zero strings $0^n$ in the $Z$-basis measurement results for the state $\mathcal{U}^\dagger_{\Phi(\bm{x})} \mathcal{U}_{\Phi(\bm{z})} \ket{0}^{\otimes n}$. Finally, the constructed kernel is used in the standard manner in the SVM; that is, a test data point $\bm{x}$ is classified into one of two categories depending on the sign of \begin{equation} \sum_{i=1}^N \alpha_i y_i K(\bm{x}_i,\bm{x}) + b, \end{equation} where $(\bm{x}_i, y_i)~(i=1,\ldots, N)$ are the pairs of training data and $(\alpha_i, b)$ are the optimized parameters.
Now we introduce the real-vector representation of the feature map. The key idea is simply to use the fact that the kernel can be expressed in terms of the density operator $\rho(\bm{x}) = \ket{\Phi(\bm{x})}\bra{\Phi(\bm{x})}$ as \begin{equation} K(\bm{x},\bm{z}) =|\braket{\Phi(\bm{x})|\Phi(\bm{z})}|^2 = \mathrm{tr}\left[ \rho(\bm{x}) \rho(\bm{z}) \right], \label{eq:trace_rho} \end{equation} and that the density operator can always be expanded in the set of Pauli operators as \begin{equation} \rho(\bm{x}) = \sum_{i=1}^{4^n} a_i(\bm{x}) \sigma_i \label{eq:paulidec} \end{equation} with $a_i(\bm{x})\in \mathbb{R}$ and $\sigma_i \in P_n=\{I, X, Y, Z\}^{\otimes n}$ the multi-qubit Pauli operators. The following are examples of elements of $P_2$: \begin{equation*} \begin{split} XI &= \left[\begin{array}{rr} 0 & 1 \\ 1 & 0 \\ \end{array}\right] \otimes \left[\begin{array}{rr} 1 & 0 \\ 0 & 1 \\ \end{array}\right], ~~ ZY = \left[\begin{array}{rr} 1 & 0 \\ 0 & -1 \\ \end{array}\right] \otimes \left[\begin{array}{rr} 0 & -i \\ i & 0 \\ \end{array}\right], ~~ \\ YZ &= \left[\begin{array}{rr} 0 & -i \\ i & 0 \\ \end{array}\right] \otimes \left[\begin{array}{rr} 1 & 0 \\ 0 & -1 \\ \end{array}\right]. \end{split} \end{equation*} Then, by substituting Eq.~\eqref{eq:paulidec} into Eq.~\eqref{eq:trace_rho} and using the trace relation $\mathrm{tr}\left( \sigma_i \sigma_j \right)= 2^n \delta_{i,j}$, the kernel can be written as \begin{equation} K(\bm{x}, \bm{z}) = 2^n \sum_{i = 1}^{4^n} a_{i}(\bm{x})a_{i}(\bm{z}), \label{eq:p_coef_prod} \end{equation} meaning that the vector $\bm{a}(\bm{x})=[a_1(\bm{x}), \ldots, a_{4^n}(\bm{x})]^\top$ serves as the feature map corresponding to the kernel $K(\bm{x},\bm{z})$.
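As a minimal illustration of this representation, the following NumPy sketch computes the Pauli coefficients $a_i(\bm{x})=\mathrm{tr}[\rho(\bm{x})\sigma_i]/2^n$ of a 2-qubit state and checks numerically that the two kernel expressions \eqref{eq:trace_rho} and \eqref{eq:p_coef_prod} coincide; random states stand in for $\ket{\Phi(\bm{x})}$ and $\ket{\Phi(\bm{z})}$, and the function names are chosen only for illustration.
\begin{verbatim}
import itertools
import numpy as np

# Single-qubit Pauli matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_basis(n):
    """All 4^n multi-qubit Pauli operators sigma_i (tensor products of I, X, Y, Z)."""
    ops = []
    for labels in itertools.product("IXYZ", repeat=n):
        op = np.array([[1.0 + 0j]])
        for l in labels:
            op = np.kron(op, PAULIS[l])
        ops.append(op)
    return ops

def pauli_coefficients(rho, n):
    """Real vector a with rho = sum_i a_i sigma_i, using tr(sigma_i sigma_j) = 2^n delta_ij."""
    return np.array([np.trace(rho @ s).real / 2**n for s in pauli_basis(n)])

def random_state(n, rng):
    """Random pure n-qubit state standing in for |Phi(x)>."""
    psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    return psi / np.linalg.norm(psi)

n = 2
rng = np.random.default_rng(0)
phi_x, phi_z = random_state(n, rng), random_state(n, rng)
rho_x = np.outer(phi_x, phi_x.conj())
rho_z = np.outer(phi_z, phi_z.conj())

k_trace = np.trace(rho_x @ rho_z).real      # K(x,z) = tr[rho(x) rho(z)]
a_x = pauli_coefficients(rho_x, n)
a_z = pauli_coefficients(rho_z, n)
k_pauli = 2**n * np.dot(a_x, a_z)           # K(x,z) = 2^n sum_i a_i(x) a_i(z)
print(k_trace, k_pauli)                     # the two values agree up to rounding
\end{verbatim}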
In other words, the input dataset $\{\bm{x}_i\}$ is encoded into the set of vectors $\{ \bm{a}(\bm{x}_i)\}$ in a (bigger) real feature space $\mathbb{R}^{4^n}$ and will be classified by the SVM with the kernel \eqref{eq:p_coef_prod}. Note that $\bm{a}(\bm{x})$ is a generalization of the Bloch vector, and thus the corresponding feature space is interpreted as a generalized Bloch sphere.
\subsection{Feature map for the 2-qubit classifier} \label{sec:formulation} \begin{figure} \caption{Quantum circuit of $U_{\Phi(\bm{x})}$.} \label{fig:circ} \end{figure} In this paper, we study the 2-qubit classifier proposed in \cite{havlivcek2019}; an input data point $\bm{x}\in\mathbb{R}^{\tilde{n}}$ is mapped to the unitary operator $\mathcal{U}_{\Phi(\bm{x})}$, which is composed of two layers of the Hadamard gate $H^{\otimes 2}$ and the unitary gate $U_{\Phi(\bm{x})}$ as follows: \begin{equation} \mathcal{U}_{\Phi(\bm{x})}=U_{\Phi(\bm{x})}H^{\otimes 2}U_{\Phi(\bm{x})}H^{\otimes 2}, \label{eq:unitary} \end{equation} where \begin{equation} U_{\Phi(\bm{x})} =\exp\Big( i\phi_1(\bm{x})ZI + i\phi_2(\bm{x})IZ + i\phi_{1,2}(\bm{x})ZZ \Big), \label{eq:unitarycomponent} \end{equation} and $\Phi(\bm{x})=\{ \phi_1(\bm{x}), \phi_2(\bm{x}), \phi_{1,2}(\bm{x})\}$ is the set of encoding functions. The quantum circuit realizing this unitary gate is shown in Fig.~\ref{fig:circ}. The three user-defined encoding functions $\phi_1(\bm{x}), \phi_2(\bm{x})$, and $\phi_{1,2}(\bm{x})$ nonlinearly transform the input data $\bm{x}$ into the 2-qubit state $\ket{\Phi(\bm{x})}=\mathcal{U}_{\Phi(\bm{x})} \ket{0}^{\otimes 2}$. A lengthy calculation then gives the explicit Pauli-decomposed form \eqref{eq:paulidec} of the density operator $\rho(\bm{x})=\ket{\Phi(\bm{x})}\bra{\Phi(\bm{x})}$; the coefficients $\{ a_i(\bm{x})\}$ with $i=II, XI, YI, \ldots, ZZ$ are listed in Table~\ref{tb:pdc}. The coefficients are composed of combinations of trigonometric functions, which make the kernel complicated enough to transform the input data in a highly nonlinear way. \begin{table*}[hbtp] \begin{center} \caption{Coefficients of the density operator \eqref{eq:paulidec} in the setup shown in Section \ref{sec:formulation}; that is, $\{ a_i(\bm{x})\}$ with $i=II, XI, YI, \ldots, ZZ$.
} \label{tb:pdc} \begin{tabular}{c|c} \hline Index $i$ & Pauli decomposition coefficients $a_i$ \\ \cline{1-2} II & $1/4$ \\ XI & $\{\sin{\phi_{1}}(\sin{\phi_{2}}\sin{\phi_{1,2}}^{2}+\sin{\phi_{1}}\cos{\phi_{1,2}}^{2}+\cos{\phi_{2}}\cos{\phi_{1}}\sin{\phi_{1,2}})\}/4$ \\ YI & $\{-\sin{\phi_{2}}\cos{\phi_{1}}\sin{\phi_{1,2}}^{2}-\sin{\phi_{1}}\cos{\phi_{1}}\cos{\phi_{1,2}}^{2}+\cos{\phi_{2}}\sin{\phi_{1}}^{2}\sin{\phi_{1,2}}\}/4$ \\ ZI & $\cos{\phi_{1}}\cos{\phi_{1,2}}/4$ \\ IX & $\{\sin{\phi_{2}}(\sin{\phi_{1}}\sin{\phi_{1,2}}^{2}+\sin{\phi_{2}}\cos{\phi_{1,2}}^{2}+\cos{\phi_{1}}\cos{\phi_{2}}\sin{\phi_{1,2}})\}/4$ \\ XX & $\{\sin{\phi_{1}}^{2}\sin{\phi_{2}}^{2}+\sin{\phi_{1,2}}\cos{\phi_{1}}\cos{\phi_{2}}(\sin{\phi_{1}}+\sin{\phi_{2}})\}/4$ \\ YX & $\{-\sin{\phi_{2}}^{2}\sin{\phi_{1}}\cos{\phi_{1}}+\sin{\phi_{1,2}}\cos{\phi_{2}}(\sin{\phi_{1}}\sin{\phi_{2}}-\cos{\phi_{1}}^{2})\}/4$ \\ ZX & $\{\cos{\phi_{1,2}}(-\sin{\phi_{1}}\cos{\phi_{2}}\sin{\phi_{1,2}}+\cos{\phi_{1}}\sin{\phi_{2}}^{2}+\sin{\phi_{2}}\cos{\phi_{2}}\sin{\phi_{1,2}})\}/4$ \\ IY & $\{-\sin{\phi_{1}}\cos{\phi_{2}}\sin{\phi_{1,2}}^{2}-\sin{\phi_{2}}\cos{\phi_{2}}\cos{\phi_{1,2}}^{2}+\cos{\phi_{1}}\sin{\phi_{2}}^{2}\sin{\phi_{1,2}}\}/4$ \\ XY & $\{-\sin{\phi_{1}}^{2}\sin{\phi_{2}}\cos{\phi_{2}}+\sin{\phi_{1,2}}\cos{\phi_{1}}(\sin{\phi_{1}}\sin{\phi_{2}}-\cos{\phi_{2}}^{2})\}/4$ \\ YY & $\{\sin{\phi_{1}}\cos{\phi_{1}}\sin{\phi_{2}}\cos{\phi_{2}}-\sin{\phi_{1,2}}(\cos{\phi_{2}}^{2}\sin{\phi_{1}}+\sin{\phi_{2}}\cos{\phi_{1}}^{2})\}/4$ \\ ZY & $\{\sin{\phi_{2}}(-\sin{\phi_{1}}\sin{\phi_{1,2}}\cos{\phi_{1,2}}-\cos{\phi_{2}}\cos{\phi_{1}}\cos{\phi_{1,2}}+\sin{\phi_{2}}\cos{\phi_{1,2}}\sin{\phi_{1,2}})\}/4$ \\ IZ & $\cos{\phi_{2}}\cos{\phi_{1,2}}/4$ \\ XZ & $\{\cos{\phi_{1,2}}(-\sin{\phi_{2}}\cos{\phi_{1}}\sin{\phi_{1,2}}+\cos{\phi_{2}}\sin{\phi_{1}}^{2}+\sin{\phi_{1}}\cos{\phi_{1}}\sin{\phi_{1,2}})\}/4$ \\ YZ & $\{\sin{\phi_{1}}(-\sin{\phi_{2}}\sin{\phi_{1,2}}\cos{\phi_{1,2}}-\cos{\phi_{1}}\cos{\phi_{2}}\cos{\phi_{1,2}}+\sin{\phi_{1}}\cos{\phi_{1,2}}\sin{\phi_{1,2}})\}/4$ \\ ZZ & $\cos{\phi_{1}}\cos{\phi_{2}}/4$ \\ \hline \end{tabular} \end{center} \end{table*}
\subsection{Minimum accuracy} \label{sec:min accuracy} \begin{figure} \caption{An example of calculating the minimum accuracy in the case of $N=10$ data points $\{\bm{a}(\bm{x}_k)\}$.} \label{fig:class rate} \end{figure} The minimum accuracy is defined as the maximum classification accuracy achievable when the hyperplane used for classifying the training dataset is restricted to be orthogonal to one of the basis axes of the feature space. The main points of this definition are as follows: due to this restriction, the minimum accuracy can be calculated without reference to any actual classifier; moreover, it gives a lower bound on the training accuracy achieved by any optimized classifier, because the optimized hyperplane is not necessarily orthogonal to a basis axis. This means that the minimum accuracy can be used to evaluate the chosen feature map, and accordingly the kernel, without designing an actual classifier; in particular, if the minimum accuracy takes a relatively large value, any optimized classifier is guaranteed to achieve an equal or higher accuracy in that feature space. Note that a similar concept is found in Ref.~\cite{aronoff 1985}, although it is defined in a different way.
In this study, the feature map is given by the vector $\bm{a}(\bm{x})=[a_1(\bm{x}), \ldots, a_{4^n}(\bm{x})]^\top$, and the minimum accuracy is calculated as follows for the training dataset $\{\bm x_{k}, y_{k}\}_{k=1,\ldots, N}$, in the case where $N$ is an even number (if $N$ is odd, generate one more training data point). In particular, we assume that the output $y_k=+1$ has been assigned to $N/2$ data points and $y_k=-1$ to the remaining $N/2$ data points. \begin{description} \item[(i)] For a fixed index $i\in\{1,\cdots,4^n\}$, consider the dataset $\{ a_i(\bm{x}_k) \}_{k=1,\ldots, N}$, i.e., the projection of all the transformed data onto the $i$-th axis of the feature space, as shown in Fig.~\ref{fig:class rate}(a). \item[(ii)] Choose a hyperplane orthogonal to this $i$-th axis; it intersects the axis at a threshold between a pair of neighboring projected data points, as indicated by the thick arrow in Figs.~\ref{fig:class rate}(a) and (b). \item[(iii)] Calculate the accuracy at the $j$-th threshold as follows. Let $N_+$ and $N_{-}$ be the number of data points with output $y_k=+1$ and $y_k=-1$ on the left side of the threshold, respectively. If $N_+ > N_-$, the desirable classification pattern is such that the points with $y_k=+1$ are on the left and the points with $y_k=-1$ are on the right; the number of points with $y_k=-1$ on the right is then $N/2 - N_-$, meaning that the classification accuracy is $(N_+ + N/2 - N_-)/N$. (The perfect case is $N_+=N/2$ and $N_-=0$, so that the accuracy is $1$.) Combining this with the case $N_+ < N_-$, the accuracy is therefore defined by \begin{equation} \label{accuracy at the threshold} R^{j}_{i} = \frac{1}{N} \Big( {\rm max}\{N_+, N_{-} \} + \frac{N}{2} - {\rm min}\{N_+, N_{-}\} \Big). \end{equation} Recall that the quantity \eqref{accuracy at the threshold} is the accuracy of classifying the dataset by the hyperplane orthogonal to the $i$-th axis at the $j$-th threshold. \item[(iv)] Calculate the accuracy for all the thresholds with indices $j=1,\ldots, N+1$, and then take the maximum: $R_i = \max_j R^{j}_{i}$. \item[(v)] The minimum accuracy is defined as $R = \max_{i} R_i$, where the index runs from $i=1$ to $i=4^n$. \end{description} A simple example demonstrating the calculation of the minimum accuracy is given in Fig.~\ref{fig:class rate}. Note that the above procedure can be readily carried out for the 2-qubit case, using the explicit form of $\{a_{i}(\bm{x})\}$ listed in Table~\ref{tb:pdc}.
\section{Results and Discussions} \subsection{Classification accuracy with different encoding functions} \begin{figure} \caption{Datasets called (a) Circle, (b) Exp, (c) Moon, and (d) Xor.} \label{fig:dataset} \end{figure} Here we apply the quantum SVM method, with several encoding functions, to some benchmark classification problems. We consider the nonlinear 2-dimensional datasets named Circle, Exp, Moon, and Xor, shown in Fig.~\ref{fig:dataset}; in each case, $N=100$ data points $(\bm{x}_k, y_k)~(k=1,\ldots, 100)$ are generated and categorized into two groups depending on whether $y_k=+1$ or $y_k=-1$ (orange or blue points in the figure).
Each dataset is encoded into a 2-qubit quantum state with the following five encoding functions: \begin{align} \phi_{\, 1}(\bm{x}) & = x_1, ~ \phi_{\, 2}(\bm{x}) = x_2, ~ \phi_{\, 1,2}(\bm{x}) = \pi x_1 x_2, \label{eq:ef1} \\ \phi_{\, 1}(\bm{x}) & = x_1, ~ \phi_{\, 2}(\bm{x}) = x_2, ~ \phi_{\, 1,2}(\bm{x}) = \frac{\pi}{2}(1 - x_1) (1 - x_2), \label{eq:ef2} \\ \phi_{\, 1}(\bm{x}) & = x_1, ~ \phi_{\, 2}(\bm{x}) = x_2, ~ \phi_{\, 1,2}(\bm{x}) = \exp\left( \frac{|x_1 - x_2|^2}{ 8/\ln(\pi) } \right), \label{eq:ef3} \\ \phi_{\, 1}(\bm{x}) & = x_1, ~ \phi_{\, 2}(\bm{x}) = x_2, ~ \phi_{\, 1,2}(\bm{x}) = \frac{\pi}{3 \cos(x_1) \cos(x_2)}, \label{eq:ef4} \\ \phi_{\, 1}(\bm{x}) & = x_1, ~ \phi_{\, 2}(\bm{x}) = x_2, ~ \phi_{\, 1,2}(\bm{x}) = \pi \cos(x_1) \cos(x_2). \label{eq:ef5} \end{align} The functions $\phi_{\, 1,2}(\bm{x})$ are chosen from a set of various nonlinear functions whose range is at most $2\pi$, i.e., ${\rm max}(\phi) - {\rm min}(\phi) \le 2\pi$ for $x_1, x_2 \in [-1,1]$. In particular, the coefficients of \eqref{eq:ef3} and \eqref{eq:ef4} are determined empirically so that the resulting classifier achieves a high accuracy on the prepared datasets. Also, the reason for fixing $\phi_{\, 1}(\bm{x})=x_1$ and $\phi_{\, 2}(\bm{x})=x_2$ in all the encoding functions is that we aim here to investigate the dependence of the classification accuracy on $\phi_{\, 1,2}(\bm{x})$.
In this work, the classification accuracy is evaluated as the average accuracy of 5-fold cross-validation, in which each dataset is divided into 5 groups with an equal number of data points (i.e., 20 data points each) that are used to form the training and test datasets. All the calculations are carried out using the QASM simulator included in the Qiskit package \cite{Qiskit}; to construct each element of the kernel, 10,000 shots (measurements) are performed. Also, to perform the optimization procedure of the SVM, scikit-learn, a popular machine learning library for Python, was employed; in particular, the hyperparameter $C$ is set to $10^{10}$ to realize the hard-margin SVM, which is the scenario where the notion of minimum accuracy is valid. The classification accuracies for the four datasets, achieved by the above five encoding functions, are shown in Table~\ref{tb:train} for the training case and in Table~\ref{tb:test} for the test case. Overall, the function \eqref{eq:ef4} achieves good accuracy, at least 0.95 for the training sets and at least 0.88 for the test sets. On the other hand, the function \eqref{eq:ef1} does not always work well for classification; it achieves an accuracy of 1.00 for the Circle and Xor training datasets, whereas the accuracy for the Moon training dataset drops to 0.85. Hence the different encoding functions, which lead to different feature maps and kernels, may strongly influence the resulting classification accuracy.
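Since the gates in Eqs.~\eqref{eq:unitary} and \eqref{eq:unitarycomponent} consist of Hadamards and diagonal ($Z$-type) phase operators, the exact 2-qubit feature state and kernel can also be evaluated by a short statevector calculation; the following NumPy sketch uses the encoding function \eqref{eq:ef1} and is meant only as an illustration of the construction (the numerical results reported below are instead obtained from the sampled QASM estimate with 10,000 shots).
\begin{verbatim}
import numpy as np

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H1, H1)                  # Hadamard on both qubits
zI = np.array([1, 1, -1, -1])         # diagonal of Z tensor I
Iz = np.array([1, -1, 1, -1])         # diagonal of I tensor Z
zz = zI * Iz                          # diagonal of Z tensor Z

def phi_ef1(x):
    """Encoding function eq:ef1: phi_1 = x1, phi_2 = x2, phi_12 = pi*x1*x2."""
    return x[0], x[1], np.pi * x[0] * x[1]

def feature_state(x, phi=phi_ef1):
    """|Phi(x)> = U_Phi (H tensor H) U_Phi (H tensor H) |00>, with U_Phi diagonal."""
    p1, p2, p12 = phi(x)
    U = np.diag(np.exp(1j * (p1 * zI + p2 * Iz + p12 * zz)))
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0
    return U @ (H2 @ (U @ (H2 @ psi)))

def kernel(x, z, phi=phi_ef1):
    """K(x,z) = |<Phi(x)|Phi(z)>|^2."""
    return abs(np.vdot(feature_state(x, phi), feature_state(z, phi))) ** 2

x, z = np.array([0.3, -0.7]), np.array([-0.2, 0.5])
print(kernel(x, z), kernel(x, x))     # kernel(x, x) = 1 for normalized states
\end{verbatim}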
\begin{table} \begin{center} \caption{Classification accuracy achieved by the quantum SVM method with the five different encoding functions.} \subfigure[Training accuracy]{ \begin{tabular}{ccccc} encoding function & Circle & Exp & Moon & Xor \\ \hline (\ref{eq:ef1}) & 1.00 & 0.91 & 0.85 & 1.00 \\ (\ref{eq:ef2}) & 1.00 & 0.93 & 0.96 & 0.97 \\ (\ref{eq:ef3}) & 1.00 & 0.97 & 0.91 & 0.93 \\ (\ref{eq:ef4}) & 1.00 & 0.98 & 1.00 & 0.95 \\ (\ref{eq:ef5}) & 1.00 & 0.94 & 0.98 & 0.93 \\ \end{tabular} \label{tb:train} } \subfigure[Test accuracy]{ \begin{tabular}{ccccc} encoding function & Circle & Exp & Moon & Xor \\ \hline (\ref{eq:ef1}) & 0.97 & 0.83 & 0.85 & 0.99 \\ (\ref{eq:ef2}) & 0.96 & 0.89 & 0.87 & 0.96 \\ (\ref{eq:ef3}) & 1.00 & 0.92 & 0.86 & 0.91 \\ (\ref{eq:ef4}) & 1.00 & 0.88 & 0.92 & 0.89 \\ (\ref{eq:ef5}) & 1.00 & 0.92 & 0.87 & 0.88 \\ \end{tabular} \label{tb:test} } \end{center} \end{table}
\subsection{Analysis of the feature map} \begin{table} \begin{center} \caption{The minimum accuracy calculated only from the feature maps and the datasets.} \begin{tabular}{ccccc} encoding function & Circle & Exp & Moon & Xor \\ \hline (\ref{eq:ef1}) & 0.99 & 0.77 & 0.83 & 0.99 \\ (\ref{eq:ef2}) & 0.99 & 0.76 & 0.80 & 0.91 \\ (\ref{eq:ef3}) & 0.99 & 0.86 & 0.89 & 0.85 \\ (\ref{eq:ef4}) & 0.99 & 0.88 & 0.89 & 0.84 \\ (\ref{eq:ef5}) & 0.99 & 0.81 & 0.85 & 0.78 \\ \end{tabular} \label{tb:MinimumAccuracy} \end{center} \end{table} \begin{figure} \caption{Comparison between the exact training accuracy in Table~\ref{tb:train} and the minimum accuracy in Table~\ref{tb:MinimumAccuracy}.} \label{fig:compareTables} \end{figure}
Here we examine, for the classification problems described above, how the feature map $\bm{a}(\bm{x})$ helps us figure out the distribution of the dataset in the feature space and whether the minimum accuracy actually predicts a suitable encoding function. First, Figs.~\ref{fig:ef1}-\ref{fig:ef5} show the color maps of the coefficients $a_i(\bm{x})$ listed in Table~\ref{tb:pdc}, as functions of $\bm{x}\in\mathbb{R}^2$ with $i=II, XI, YI, \ldots, ZZ$, for the encoding functions \eqref{eq:ef1}-\eqref{eq:ef5}, respectively. Note that each $a_i(\bm{x})$ does not depend on a specific input dataset. Nevertheless, very interestingly, some of those 2-dimensional maps intrinsically possess the shape of the distribution of the incoming input dataset, which will thus affect the resulting classification accuracy. For example, in all cases of Figs.~\ref{fig:ef1}-\ref{fig:ef5} the ZZ element $a_{ZZ}(\bm{x})$ has a circular shape, meaning that the Circle dataset can be classified by $a_{ZZ}(\bm{x})$ alone; this observation is consistent with the fact that the Circle dataset is indeed classified with high training/test accuracies, as shown in Tables \ref{tb:train} and \ref{tb:test}. Similar results can also be clearly found in Fig.~\ref{fig:ef1} (the case of encoding function \eqref{eq:ef1}) and in Fig.~\ref{fig:ef4} (the case of \eqref{eq:ef4}); the shape of $a_{YX}(\bm{x})$ in Fig.~\ref{fig:ef1} has a distribution similar to the Xor dataset, and indeed \eqref{eq:ef1} achieves the best training accuracy, 1.00, for the Xor dataset; the shape of $a_{YI}(\bm{x})$ in Fig.~\ref{fig:ef4} has a distribution similar to the Exp dataset, and indeed \eqref{eq:ef4} achieves the high training accuracy 0.98 for the Exp dataset. Next, Table~\ref{tb:MinimumAccuracy} gives the minimum accuracy for the encoding functions \eqref{eq:ef1}-\eqref{eq:ef5}, calculated according to the procedure given in Section~\ref{sec:min accuracy}.
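The values in Table~\ref{tb:MinimumAccuracy} follow steps (i)--(v) of Section~\ref{sec:min accuracy}; as a minimal sketch of that procedure, the following NumPy function takes a matrix whose rows are the feature vectors $\bm{a}(\bm{x}_k)$ together with balanced labels $y_k\in\{\pm 1\}$ and returns $R=\max_i R_i$ (random numbers stand in for the feature vectors, and the function name is ours).
\begin{verbatim}
import numpy as np

def minimum_accuracy(A, y):
    """Minimum accuracy R = max_i max_j R_i^j.

    A: (N, D) array, row k is the feature vector a(x_k); y: labels in {+1, -1},
    assumed balanced (N/2 of each class), as in the procedure of Section 2.3.
    """
    N, D = A.shape
    best = 0.0
    for i in range(D):                      # step (i): project onto the i-th axis
        labels = y[np.argsort(A[:, i])]
        for j in range(N + 1):              # steps (ii)-(iii): try every threshold
            n_plus = np.count_nonzero(labels[:j] == +1)
            n_minus = j - n_plus
            acc = (max(n_plus, n_minus) + N / 2 - min(n_plus, n_minus)) / N
            best = max(best, acc)           # steps (iv)-(v): keep the maximum
    return best

# Toy usage: N = 10 stand-in feature vectors of dimension 4^2 = 16, balanced labels.
rng = np.random.default_rng(1)
A = rng.uniform(-0.25, 0.25, size=(10, 16))
y = np.array([+1] * 5 + [-1] * 5)
print(minimum_accuracy(A, y))
\end{verbatim}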
Recall that the minimum accuracy gives a lower bound on the exact training accuracy achieved by any optimized classifier, or in other words, it guarantees a worst-case accuracy. Hence, the minimum accuracy may be used as a guide for determining the feature map; that is, the encoding function with the largest minimum accuracy is recommended. Then, for the Moon dataset, Table~\ref{tb:MinimumAccuracy} suggests the encoding functions \eqref{eq:ef3} or \eqref{eq:ef4}; similarly, for the Exp and Xor datasets, \eqref{eq:ef4} and \eqref{eq:ef1} are recommended, respectively. Also, for the Circle dataset, any encoding function can be used. Now let us compare the minimum accuracy with the exact training accuracies given in Table~\ref{tb:train}, to see whether the above suggestions are consistent with the actual classification performance achieved by the quantum SVM. Figure~\ref{fig:compareTables} gives the summary, where the minimum and exact accuracies are indicated by the orange and blue bars, respectively. Importantly, the encoding functions selected according to the aforementioned guide based on the minimum accuracy produce the best training accuracies; hence, as expected, the minimum accuracy may be used as a convenient measure for determining a suitable encoding function and accordingly a good feature map. Also, in many cases we find a positive correlation between the minimum accuracy and the exact accuracy. In particular, we consider here the following simple definition: for each of the datasets (b), (c), and (d), a pair of functions is positively correlated if the order of their minimum accuracies is the same as that of their exact accuracies. For instance, for all the cases (b), (c), and (d), the function \eqref{eq:ef4} has a higher minimum accuracy than \eqref{eq:ef5}, and this order also holds for the exact accuracy; hence they are positively correlated. In fact, excluding the pair \eqref{eq:ef3} and \eqref{eq:ef4} in (c), the ratio of positively correlated pairs is $23/29\approx 79\%$. This fact also supports the validity of using the minimum accuracy as a reasonable guide for choosing the encoding function.
\subsection{Synthesis of the feature map via the combined kernel method: Toward ensemble learning} \begin{figure} \caption{ Scheme of the kernel synthesizer composed of different (weak) quantum computers. } \label{fig:CombKernels} \end{figure} \begin{table} \begin{center} \caption{Training and test accuracies of the classifier with combined kernels.} \subfigure[Classification accuracies for the Moon dataset]{ \begin{tabular}{ccccc} Encoding function & (\ref{eq:ef1}) + (\ref{eq:ef2}) & (\ref{eq:ef1}) + (\ref{eq:ef3}) & (\ref{eq:ef1}) + (\ref{eq:ef4}) & (\ref{eq:ef1}) + (\ref{eq:ef5}) \\ \hline Training & 1.00 & 0.94 & 1.00 & 1.00 \\ Test & 0.95 & 0.90 & 0.98 & 0.96 \\ \end{tabular} \label{tb:Kernel sum Moon} } \subfigure[Classification accuracies for the Exp dataset]{ \begin{tabular}{ccccc} Encoding function & (\ref{eq:ef1}) + (\ref{eq:ef2}) & (\ref{eq:ef1}) + (\ref{eq:ef3}) & (\ref{eq:ef1}) + (\ref{eq:ef4}) & (\ref{eq:ef1}) + (\ref{eq:ef5}) \\ \hline Training & 0.96 & 0.93 & 0.95 & 0.95 \\ Test & 0.92 & 0.90 & 0.88 & 0.92 \\ \end{tabular} \label{tb:Kernel sum Exp} } \end{center} \end{table}
In the classical regime there have been a number of works on designing efficient kernels; a simple strategy is to combine several different kernels into a single kernel, so that the constructed kernel may have desired characteristics by compensating for the weaknesses of each individual kernel \cite{Bishop2006}.
Here we demonstrate that this idea works in the quantum regime as well; indeed, in the above sections we introduced several types of encoding functions, which lead to different kernel functions and accordingly to different classification performances. Note that the idea of combined kernels for quantum classifiers was briefly addressed in \cite{Chatterjee 2017}, yet without a concrete demonstration. A typical way of combining kernels is to take a summation of them, as illustrated in Fig.~\ref{fig:CombKernels}: \begin{equation*} K(\bm{x}, \bm{z}) = \sum_{i = 1}^{m} \lambda_{i} K_{\Phi_i}(\bm{x}, \bm{z}), \end{equation*} where $\lambda_i$ are the weighting parameters satisfying $\sum_{i = 1}^{m} \lambda_{i} = m$ with $0\leq \lambda_i \leq m$ (normalization of $\{\lambda_i\}$ does not lead to any essential difference). Here, to demonstrate this idea, we consider the combination of two equally-weighted kernels, i.e., the case $m=2$ and $\lambda_1=\lambda_2=1$; see \cite{Lanckriet2004,Dios2007} for the validity of this choice in the classical case. Even in this simple case a possible advantage may be readily seen; that is, a sum of kernels theoretically results in a higher-dimensional feature space than those of the original ones, as follows: \begin{align*} K_{\Phi_1}(\bm{x}, \bm{z}) + K_{\Phi_2}(\bm{x}, \bm{z}) & = |\braket{\Phi_1(\bm{x})|\Phi_1(\bm{z})}|^2 + |\braket{\Phi_2(\bm{x})|\Phi_2(\bm{z})}|^2 \\ & = |\braket{\Phi_1\oplus\Phi_2(\bm{x})|\Phi_1\oplus\Phi_2(\bm{z})}|^2, \end{align*} where $\ket{\Phi_1\oplus\Phi_2(\bm{x})} = \ket{\Phi_1(\bm{x})}\oplus\ket{\Phi_2(\bm{x})}$, meaning that the classical data $\bm{x}\in\mathbb{R}^{\tilde{n}}$ is encoded into the direct sum of two Hilbert spaces and hence the dimension of the feature space is doubled.
Here we consider the same benchmark classification problems as above, applying the 2-qubit classifier with the kernels constructed from the encoding function \eqref{eq:ef1} combined with each of the other four. In this case $\bm{a}(\bm{x})$ is a 32-dimensional real vector, with two redundant elements $a_{II}$ and $a_{ZZ}$. We first see how much the combined kernel may improve the classification accuracy for the Moon dataset, for which the classifier using the single encoding function \eqref{eq:ef1} showed the worst training accuracy, 0.85. The resulting classification accuracies obtained by applying the combined kernels are shown in Table~\ref{tb:Kernel sum Moon}; every combined kernel improves the classification accuracy. In particular, when combining the weak classifiers with the kernels \eqref{eq:ef1} and \eqref{eq:ef3}, whose individual training accuracies are 0.85 and 0.91 respectively, the classifier with the newly constructed kernel achieves an accuracy of 0.94. This makes sense, because the feature spaces visualized by $a_i(\bm{x})$ for the encoding functions \eqref{eq:ef1} and \eqref{eq:ef3} look very different, indicating that the advantages of each classifier might be well synthesized to achieve better classification. Similarly, we test four combined kernels composed of the encoding function \eqref{eq:ef1} and the others, to classify the Exp dataset, for which the single encoding function \eqref{eq:ef1} led to the worst training accuracy, 0.91. The resulting classification accuracies are shown in Table~\ref{tb:Kernel sum Exp}. In this case, however, some of the classification performances were not much improved compared to the results using the original encoding functions.
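As a minimal sketch of the combined-kernel classifier, the following code builds two Gram matrices of the form $K(\bm{x},\bm{z})=|\braket{\phi(\bm{x})|\phi(\bm{z})}|^2$ from two stand-in feature maps, sums them with equal weights ($m=2$, $\lambda_1=\lambda_2=1$), and trains a hard-margin SVM on the precomputed kernel with scikit-learn ($C=10^{10}$, as above); the feature maps and the Xor-like toy labels are illustrative placeholders for the quantum kernels $K_{\Phi_1}$ and $K_{\Phi_2}$ estimated on a device.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Toy training set: 100 points in [-1,1]^2 with Xor-like labels.
X = rng.uniform(-1, 1, size=(100, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)

def gram(feature_map, A, B):
    """Kernel matrix K[i, j] = |<phi(a_i)|phi(b_j)>|^2 for a given feature map."""
    FA = np.array([feature_map(a) for a in A])
    FB = np.array([feature_map(b) for b in B])
    return np.abs(FA.conj() @ FB.T) ** 2

# Two stand-in feature maps; in the text these would be the 2-qubit states
# |Phi_1(x)> and |Phi_2(x)> obtained from two different encoding functions.
def phi_a(x):
    v = np.exp(1j * np.pi * np.array([x[0], x[1], x[0] * x[1], 1.0]))
    return v / np.linalg.norm(v)

def phi_b(x):
    v = np.exp(1j * np.pi * np.array([np.cos(x[0]), np.cos(x[1]), x[0] + x[1], 1.0]))
    return v / np.linalg.norm(v)

# Equally weighted sum of the two kernels (m = 2, lambda_1 = lambda_2 = 1).
K = gram(phi_a, X, X) + gram(phi_b, X, X)

# Hard-margin SVM on the precomputed, combined kernel.
clf = SVC(kernel="precomputed", C=1e10).fit(K, y)
print("training accuracy:", clf.score(K, y))
\end{verbatim}
Combining at the level of Gram matrices means the two kernels can come from different circuits, or even from different devices, as in the scheme of Fig.~\ref{fig:CombKernels}.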
Such cases might be improved by carefully choosing the weighting parameters $\{\lambda_i\}$ when synthesizing the kernels. A broad concept behind what we have demonstrated here is the so-called {\it ensemble learning} \cite{Dietterich2000}, which is a general and effective strategy for combining several weak classifiers to generate a single stronger classifier. Actually, some quantum extensions of this method have been investigated in depth in \cite{Schuld2018,Ximing2019}. In our work, each classifier is weak in the sense that its circuit depth and number of qubits are severely limited; also, the weak classifiers differ only in their encoding functions, and the single stronger classifier is constructed merely by taking the summation of the corresponding kernels. A systematic strategy for synthesizing weak classifiers to produce a single stronger one is particularly important in the current situation, where only noisy intermediate-scale quantum devices are available.
\section{Conclusion} \label{sec:conclusion} In this paper, we proposed a method that helps us analyze and synthesize the feature map for the 2-qubit kernel-based quantum classifier, based on its real-vector representation; the minimum accuracy, which serves as a lower bound of the exact accuracy achieved by any optimized classifier, was introduced as a tool to effectively screen a library of feature maps suitable for classification; also, the method of combining (weak) feature maps to produce a better-performing map was demonstrated on some benchmark classification problems. It is important to extend the presented method, beyond a demonstration, to a general and systematic one for constructing a quantum classifier that fully makes use of its intrinsic power. We finally remark that, although calculating the minimum accuracy $R = \max_{i} R_i$ $(i=1,\ldots,4^n)$ is intractable when $n\gg 1$, there might be ways to circumvent this. For instance, we may take $\underline{R} = \max_{i\in{\cal I}} R_i$, where the elements of ${\cal I}$ are randomly chosen from $\{1,\ldots,4^n\}$; because $\underline{R}$ also serves as a measure of the worst-case accuracy for any optimized classifier, it is interesting to investigate how to construct ${\cal I}$ so as to obtain a good measure while keeping the size of ${\cal I}$ tractable. Another interesting direction is to study the connection to quantum random access coding \cite{Ambainis,Iwama}, which discusses methods for encoding a large number of classical bits into a small number of qubits; hence it is expected that even a relatively small quantum classifier, for which $R$ can be calculated in a reasonable time, might have some quantum advantage. We will work out these problems in the future.
\begin{figure} \caption{Color maps of all the 2-dimensional spaces $a_i(\bm{x})$ for the encoding function \eqref{eq:ef1}.} \label{fig:ef1} \end{figure} \begin{figure} \caption{Color maps of all the 2-dimensional spaces $a_i(\bm{x})$ for the encoding function \eqref{eq:ef2}.} \label{fig:ef2} \end{figure} \begin{figure} \caption{Color maps of all the 2-dimensional spaces $a_i(\bm{x})$ for the encoding function \eqref{eq:ef3}.} \label{fig:ef3} \end{figure} \begin{figure} \caption{Color maps of all the 2-dimensional spaces $a_i(\bm{x})$ for the encoding function \eqref{eq:ef4}.} \label{fig:ef4} \end{figure} \begin{figure} \caption{Color maps of all the 2-dimensional spaces $a_i(\bm{x})$ for the encoding function \eqref{eq:ef5}.} \label{fig:ef5} \end{figure} \end{document}
\begin{document} \title[]{On the local well-posedness of the nonlinear heat equation associated to the fractional Hermite operator in modulation spaces} \author{Elena Cordero} \address{Department of Mathematics, University of Torino, Via Carlo Alberto 10, 10123 Torino, Italy} \email{[email protected]} \keywords{Modulation spaces, Gabor matrix, fractional Hermite operators, Shubin and H\"{o}rmander classes} \subjclass[2010]{42B35,35K05,35K55,35A01} \date{}
\begin{abstract} In this note we consider the nonlinear heat equation associated to the fractional Hermite operator $H^\beta=(-\Delta+|x|^2)^\beta$, $0<\beta\leq 1$. We show the local solvability of the related Cauchy problem in the framework of modulation spaces. The result is obtained by combining tools from microlocal and time-frequency analysis. As a byproduct, we compute the Gabor matrix of pseudodifferential operators with symbols in the H\"{o}rmander class $S^m_{0,0}$, $m\in\mathbb{R}$. \end{abstract} \maketitle
\section{Introduction and results} In this note we study the Cauchy problem for the nonlinear heat equation associated to the fractional Hermite operator \begin{equation}\label{cpw} \begin{cases} \partial_t u+H^\beta u=F(u)\\ u(0,x)=u_0(x) \end{cases} \end{equation} with $t\in [0,T]$, $T>0$, $x\in\mathbb{R}^d$, $H^\beta=(-\Delta+|x|^2)^\beta$, $0<\beta\leq 1$, $\Delta=\partial^2_{x_1}+\dots+\partial^2_{x_d}$, $d\geq1$. Here $F$ is a scalar function on $\mathbb{C}$, with $F(0)=0$. The solution $u(t,x)$ is a complex-valued function of $(t,x)\in\mathbb{R}\times\mathbb{R}^d$. We will consider the case in which $F$ is a real analytic function with an entire extension. \par Well-posedness of the heat equation has been studied by many authors, see e.g. \cite{nicola,fio2} and the many contributions by Wong, for instance \cite{Wong1,Wong2}, see also \cite{catana}. In particular, heat equations associated to fractional Hermite operators were recently studied in \cite{bhimani1}; for related results see also \cite{bhimNTI}. Hermite multipliers are considered in \cite{bhimani2}, see also the textbook \cite{thangavelu}. Recently the study of Cauchy problems in modulation spaces has been pursued by many authors, see the pioneering works \cite{benyi,benyi3}. Many deep results in this framework for nonlinear evolution equations have been obtained by B. Wang et al. in \cite{baoxiang3,baoxiang2} and are also available in the textbook \cite{Wangbook2011}. Following the spirit of \cite{CNwave,CZ}, we shall prove the local existence and uniqueness of the solutions in modulation spaces to the Cauchy problem \eqref{cpw}. The key arguments come from both microlocal and time-frequency analysis. In fact, we shall rely on the results related to the spectral theory of globally elliptic operators developed by Helffer \cite{helffer} to understand the properties of the fractional Hermite operators $H^\beta$. Namely, they are pseudodifferential operators with Weyl symbols $a_\beta$ positive globally elliptic and in the Shubin classes $\Gamma^{2\beta}_1$ (see Definition \ref{defshubin} and the estimate \eqref{pge} below).
The spectral decomposition of the Hermite operator $H=-\Delta+|x|^2$ is given by $H=\sum_{k=0}^\infty (2k+d) P_k$, where $P_k$ is the orthogonal projection of $L^2(\mathbb{R}^d)$ onto the eigenspace corresponding to the eigenvalue $(2k+d)$. Namely, the range of the operator $P_k$ is the space spanned by the Hermite functions $\Phi_\alpha$ in $\mathbb{R}^d$, with $\alpha$ a multi-index in $\mathbb{N}^d$ such that $|\alpha|=k$. The solution to the homogeneous Cauchy problem \eqref{cpw} (i.e., $F=0$) can be formally written in terms of the heat semigroup related to $H^\beta$, \[ e^{-tH^\beta}=\sum_{k=0}^{+\infty} e^{-t(2k+d)^\beta} P_k, \] as $u(t,x)=K_\beta(t)u_0=e^{-tH^\beta} u_0(x)$, $t\geq 0$, $x\in\mathbb{R}^d$. We shall prove that the propagator $K_\beta(t)=e^{-t H^\beta}$ can be represented as a pseudodifferential operator with Weyl symbol in the Shubin class $\Gamma^0_1$, with related semi-norms uniformly bounded with respect to the time variable $t\in [0,T]$, for any fixed $T>0$. After that we shall leave the microlocal techniques and come to time-frequency analysis. We perform a general study concerning the boundedness of Shubin $\tau$-pseudodifferential operators with symbols in the H\"{o}rmander classes $S^m_{0,0}$ (for $\tau=1/2$ we recapture the Weyl case). The outcomes are contained in Theorem \ref{GBsm} below (see also the subsequent corollary and remark). The main tool here is to study the decay of their related Gabor matrix representations, which we shall also control by the semi-norms in $S^m_{0,0}$. We think that such a result is valuable in and of itself. We then use the special case of Weyl operators to study \eqref{cpw}. The integral version of the problem \eqref{cpw} has the form \begin{equation}\label{solop} u(t,\cdot)=K_\beta(t)u_0+\mathcal{B}F(u), \end{equation} where \begin{equation}\label{op2} \mathcal{B}=\int_0^t K_\beta(t-\tau)\,\cdot\, d\tau. \end{equation} To show that the Cauchy problem \eqref{cpw} has a unique solution, we use a variant of the contraction mapping theorem (see Proposition \ref{AIA} below). \par As already mentioned, the function spaces used for our results are weighted modulation spaces $M_w^{p,q}$, $1\leq p,q\leq\infty$, introduced by H. Feichtinger in 1983 \cite{F1} (then extended to $0<p,q\leq\infty$ in \cite{Galperin2004}). We refer the reader to Section 2 for their definitions and main properties. The local well-posedness results for modulation spaces read as follows: \begin{theorem}\label{T1} Assume $s\geq 0$, $1\leq p<\infty$, $u_0\in M^{p,1}_s(\mathbb{R}^d)$, and let $$F(z)=\sum_{j,k=0}^\infty c_{j,k} z^j \bar{z}^k$$ be an entire real-analytic function on $\mathbb{C}$ with $F(0)=0$. For every $R>0$, there exists $T>0$ such that for every $u_0$ in the ball $B_R$ of center $0$ and radius $R$ in $M^{p,1}_s(\mathbb{R}^d)$ there exists a unique solution $u\in \mathcal{C}^0([0,T];M^{p,1}_s(\mathbb{R}^d))$ to \eqref{cpw}. Furthermore, the map $u_0\mapsto u$ from $B_R$ to $\mathcal{C}^0([0,T];M^{p,1}_s(\mathbb{R}^d))$ is Lipschitz continuous. For $p=\infty$ the result still holds if we replace $M^{\infty,1}_s(\mathbb{R}^d)$ with the space $\mathcal{M}^{\infty,1}_s(\mathbb{R}^d)$, the closure of the Schwartz class in the $M^{\infty,1}_s$-norm.
\end{theorem} We actually do not know whether it is possible to obtain better results concerning the nonlinearity $F(u)=\lambda|u|^{2k} u$, $k\in\mathbb{N}$; we refer to the work \cite{bhimNTI} for a discussion on the topic. The tools employed follow the pattern of similar Cauchy problems studied for other equations such as the Schr\"odinger, wave and Klein-Gordon equations \cite{benyi,benyi3,CNwave,CZ}. To compare with other results in the literature, we observe that also in \cite{Wong1,Wong2} and in \cite{catana} the authors use Wigner distributions and pseudodifferential operators as tools for their main results. In the latter paper the author gives a formula for the one-parameter strongly continuous semigroup $e^{-t H^\beta}$ in terms of the Weyl transforms of an $L^2$-orthonormal basis made of generalized Hermite eigenfunctions. This is then used to obtain $L^2$-estimates for the solution of the related initial value problem with data in $L^p$ spaces, $1\le p\leq\infty$. Here the approach uses similar ideas, joining microlocal and time-frequency analysis tools, but the spaces employed are different: we use modulation spaces, which are the most common spaces in time-frequency analysis.
\section{Function spaces and preliminaries} We denote by $v$ a continuous, positive, submultiplicative weight function on $\mathbb{R}^d$, i.e., $v(z_1+z_2)\leq v(z_1)v(z_2)$, for all $z_1,z_2\in\mathbb{R}^d$. We say that $w\in\mathcal{M}_v(\mathbb{R}^d)$ if $w$ is a positive, continuous weight function on $\mathbb{R}^d$ which is {\it $v$-moderate}: $w(z_1+z_2)\leq Cv(z_1)w(z_2)$ for all $z_1,z_2\in\mathbb{R}^d$ (or for all $z_1,z_2\in\mathbb{Z}^d$). We will mainly work with polynomial weights of the type \begin{equation}\label{vs} v_s(z)=\langle z\rangle^s =(1+|z|^2)^{s/2},\quad s\in\mathbb{R},\quad z\in\mathbb{R}^d\,\, (\mbox{or}\,\,\mathbb{Z}^d). \end{equation} Observe that, for $s<0$, $v_s$ is $v_{|s|}$-moderate. Moreover, we limit ourselves to weights $w$ with at most polynomial growth, that is, there exist $C>0$, $s>0$ such that \begin{equation}\label{growth} w(z)\leq C\langle z\rangle^s,\quad z\in\mathbb{R}^{2d}. \end{equation}\par We define $(w_1\otimes w_2)(x,\xi):=w_1(x)w_2(\xi)$, for $w_1,w_2$ weights on $\mathbb{R}^d$. The main features of time-frequency analysis are $T_x$ and $M_\xi$, the so-called translation and modulation operators, defined by $T_x g(y)=g(y-x)$ and $M_\xi g(y)=e^{2\pi i\xi y}g(y)$. Let $g\in\mathcal{S}(\mathbb{R}^d)$ be a non-zero window function in the Schwartz class and consider the short-time Fourier transform (STFT) $V_gf$ of a function/tempered distribution $f$ in $\mathcal{S}'(\mathbb{R}^d)$ with respect to the window $g$: \[ V_g f(x,\xi)=\langle f, M_{\xi}T_xg\rangle =\int e^{-2\pi i \xi y}f(y)\overline{g(y-x)}\,dy, \] i.e., the Fourier transform $\mathcal{F}$ applied to $f\overline{T_xg}$. \par For $z=(z_1,z_2)\in\mathbb{R}^{2d}$, we call \emph{time-frequency shifts} the composition $$\pi(z)=M_{z_2}T_{z_1}.$$ {\bf Modulation Spaces.} For $1\leq p,q\leq\infty$ such spaces were introduced by H. Feichtinger in \cite{F1} (see also their characterization in \cite{Birkbis}), then extended to $0<p,q\leq\infty$ by Y.V. Galperin and S. Samarah in \cite{Galperin2004}.
\begin{definition}\label{def2.4} Fix a non-zero window $g\in\mathcal{S}(\mathbb{R}^d)$, a weight $w\in\mathcal{M}_v$ and $0<p,q\leq\infty$. The modulation space $M^{p,q}_w(\mathbb{R}^d)$ consists of all tempered distributions $f\in\mathcal{S}'(\mathbb{R}^d)$ such that the (quasi-)norm \begin{equation}\label{norm-mod} \|f\|_{M^{p,q}_w}=\|V_gf\|_{L^{p,q}_w}=\left(\int_{\mathbb{R}^d}\left(\int_{\mathbb{R}^d} |V_g f (x,\xi )|^p w(x,\xi )^p\, dx \right)^{\frac qp}d\xi\right)^{\frac1q} \end{equation} (with obvious changes if $p=\infty$ or $q=\infty$) is finite. \end{definition} For $1\leq p,q\leq\infty$ these are Banach spaces, whose norm does not depend on the window $g$, in the sense that different window functions in $\mathcal{S}(\mathbb{R}^d)$ yield equivalent norms. Moreover, the window class $\mathcal{S}(\mathbb{R}^d)$ can be extended to the modulation space $M^{1,1}_v(\mathbb{R}^d)$ (the so-called Feichtinger algebra). For shortness, we write $M^p_w(\mathbb{R}^d)$ in place of $M^{p,p}_w(\mathbb{R}^d)$, and $M^{p,q}(\mathbb{R}^d)$ if $w\equiv 1$. Moreover, for $w(x,\xi)=(1\otimes v_s)(x,\xi)$, we shall simply write, using the standard notation \cite{F1}, $$M^{p,q}_{1\otimes v_s}(\mathbb{R}^d)=M^{p,q}_s(\mathbb{R}^d).$$ In our study, we will apply Minkowski's integral inequality to study the operator $\mathcal{B}$ in \eqref{op2}. Such inequalities do not hold whenever the indices $p<1$ or $q<1$; hence we shall limit ourselves to the cases $1\leq p,q\leq\infty$. We do not know whether the local well-posedness is still valid in the quasi-Banach setting. Recall that for $1\leq p,q\leq\infty$, $w\in\mathcal{M}_v$ and $g\in M^1_v(\mathbb{R}^d)$, the norm $\|V_gf\|_{L^{p,q}_w}$ is an equivalent norm for $M^{p,q}_w(\mathbb{R}^d)$ (\cite[Thm.~11.3.7]{grochenig}). In other words, given any $g\in M^1_v(\mathbb{R}^d)$ and $f\in M_w^{p,q}$ we have the norm equivalence \begin{equation}\label{normwind} \|f\|_{M_w^{p,q}}\asymp\|V_{g}f\|_{L^{p,q}_w}. \end{equation} For this work we will use the inversion formula for the STFT (see \cite[Proposition 11.3.2]{grochenig}): assume $g\in M^{1}_v(\mathbb{R}^d)\setminus\{0\}$ and $f\in M^{p,q}_w(\mathbb{R}^d)$, with $w\in\mathcal{M}_v$; then \begin{equation}\label{invformula} f=\frac1{\|g\|_2^2}\int_{\mathbb{R}^{2d}} V_g f(z)\,\pi(z)g\,dz\,, \end{equation} and the equality holds in $M^{p,q}_w(\mathbb{R}^d)$.\par We also recall the inclusion relations: \begin{equation}\label{inclmod} M^{p_1,q_1}_w\hookrightarrow M^{p_2,q_2}_w, \quad \mbox{if}\,\, p_1\leq p_2,\,\, q_1\leq q_2. \end{equation} Other properties and more general definitions of modulation spaces can be found in the textbooks \cite{CR,grochenig}.
\subsection{Shubin classes and symbols of the operators $H^\beta$ and $e^{-t H^\beta}$} Let us first recall the definition of the Shubin classes (Shubin \cite[Definition 23.1]{shubin}): \begin{definition}\label{defshubin} Let $m\in\mathbb{R}$. The symbol class $\Gamma_1^{m}(\mathbb{R}^{2d})$ consists of all complex functions $a\in C^{\infty}(\mathbb{R}^{2d})$ such that for every $\alpha\in\mathbb{N}^{2d}$ there exists a constant $C_{\alpha}\geq0$ with \begin{equation} |\partial_{z}^{\alpha}a(z)|\leq C_{\alpha}\left\langle z\right\rangle^{m-|\alpha|},\quad z\in\mathbb{R}^{2d}. \label{est1} \end{equation} \end{definition} It immediately follows from this definition that if $a\in\Gamma_{1}^{m}(\mathbb{R}^{2d})$ and $\alpha\in\mathbb{N}^{2d}$ then $\partial_{z}^{\alpha}a\in\Gamma_{1}^{m-|\alpha|}(\mathbb{R}^{2d})$. Obviously $\Gamma_{1}^{m}(\mathbb{R}^{2d})$ is a complex vector space for the usual operations of addition and multiplication by complex numbers, and we have \begin{equation} \Gamma_{1}^{-\infty}(\mathbb{R}^{2d})=\bigcap\nolimits_{m\in\mathbb{R}}\Gamma_{1}^{m}(\mathbb{R}^{2d})=\mathcal{S}(\mathbb{R}^{2d}). \label{gammaminf} \end{equation} The notion of asymptotic expansion of a symbol $a\in\Gamma_{1}^{m}(\mathbb{R}^{2d})$ (cf. \cite{shubin}, Definition 23.2) reads as follows. \begin{definition}\label{23.2} Let $(a_{j})_{j}$ be a sequence of symbols $a_{j}\in\Gamma_{1}^{m_{j}}(\mathbb{R}^{2d})$ such that $\lim_{j\rightarrow+\infty} m_{j}=-\infty$. Let $a\in C^{\infty}(\mathbb{R}^{2d})$. If for every integer $r\geq2$ we have \begin{equation} a-\sum_{j=0}^{r-1}a_{j}\in\Gamma_{1}^{\overline{m}_{r}}(\mathbb{R}^{2d}), \label{23.4} \end{equation} where $\overline{m}_{r}=\max_{j\geq r}m_{j}$, we will write $a\thicksim\sum_{j=0}^{\infty}a_{j}$ and call this relation an asymptotic expansion of the symbol $a$. \end{definition} The interest of the asymptotic expansion comes from the fact that every sequence of symbols $(a_{j})_{j}$ with $a_{j}\in\Gamma_{1}^{m_{j}}(\mathbb{R}^{2d})$, the degrees $m_{j}$ being strictly decreasing and such that $m_{j}\rightarrow-\infty$, determines a symbol in some $\Gamma_{1}^{m}(\mathbb{R}^{2d})$, that symbol being unique up to an element of $\mathcal{S}(\mathbb{R}^{2d})$. The symbol of the Hermite operator (or harmonic oscillator) $H(z)=|x|^{2}+4\pi^2|\xi|^{2}$ obviously belongs to $\Gamma_{1}^{2}(\mathbb{R}^{2d})$. From \cite[Theorem 1.11.1]{helffer} we infer that the fractional power $H^\beta$, $0<\beta<1$, can be written as a Weyl pseudodifferential operator having real symbol $a_\beta$ in the Shubin class $\Gamma^{2\beta}_1$ and positive globally elliptic. Recall that a symbol $a_\beta$ is positive globally elliptic if there exist $C>0$ and $R>0$ such that \begin{equation}\label{pge} a_\beta(z)\geq C\langle z\rangle,\quad |z|\geq R. \end{equation} Thanks to the properties of $H^\beta$ above, we can exploit a result by Nicola and Rodino \cite[Theorem 4.5.1]{NR} to prove that the operator $e^{-tH^\beta}$ is a pseudodifferential operator with Weyl symbol in the Shubin class $\Gamma_1^0$, with uniform estimates with respect to $t\in[0,T]$, for any fixed $T>0$.
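Incidentally, the membership $H\in\Gamma_{1}^{2}(\mathbb{R}^{2d})$ claimed above can be checked directly against the estimate \eqref{est1}: writing $z=(x,\xi)$,
\[
|H(z)|\leq 4\pi^{2}\langle z\rangle^{2},\qquad |\partial_{x_j}H(z)|=2|x_j|\leq 2\langle z\rangle,\qquad |\partial_{\xi_j}H(z)|=8\pi^{2}|\xi_j|\leq 8\pi^{2}\langle z\rangle,
\]
all second-order derivatives are bounded constants, and all derivatives of order greater than two vanish, so that $|\partial_{z}^{\alpha}H(z)|\leq C_{\alpha}\langle z\rangle^{2-|\alpha|}$ for every $\alpha\in\mathbb{N}^{2d}$.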
For this purpose, we use the above theorem in the following setting: $$\Phi(z)=\Psi(z)=\langleangle z\ranglenglea,\quad h(z)=\Phi(z)^{-1}\Psi(z)^{-1}=\langleangle z\ranglenglea ^{-2};$$ moreover we choose the parameters $l=N=0$ and $J=1$. If we consider the asymptotic expansion $a_\betaetata\sim \sum_{0}^\inftynfty a_{\betaetata,j}$ as in Definition \ranglengleef{23.2}, together with the ellipticity condition $a_{\betaetata,0}(z) \gtrsim \langleangle z\ranglenglea ^{2\betaetata}$, $|z|\geq R$, then Theorem $4.5.1.$ guarantees that the operator $e^{-tH^\betaetata}$ is a pseudodifferential operator with Weyl symbol $b(t,z)$ satisfying, for every $k\inftyn\varphiield{N}$, $T>0$, the estimate \betaetagin{equation}\langleanglebel{stima} | b(t,\betaC^dot)-b_0(t,\betaC^dot)|_{k}\langleesssim \langleangle z\ranglenglea^{-2},\quad t\inftyn[0,T], \end{equation} where $b_0(t,z)=e^{-t a_{\betaetata,0}(z)}$, and the semi-norms $|\betaC^dot |_k$, $k\inftyn \varphiield{N}$, are defined by \betaetagin{equation}\langleanglebel{semingamm} |a|_k:=\sup_{|\taufrac{\alpha}{p}\,halphapha|+|\betaetata|\langleeq k}|\partial^\taufrac{\alpha}{p}\,halphapha_\xi \partial^\betaetata_x a(x,\xi)|\langleangle (x,\xi)\ranglenglea^{|\taufrac{\alpha}{p}\,halphapha|+|\betaetata|}. \end{equation} Defining the remainder $R_1(t,z):= b(t,z)-b_0(t,z)$, we infer from \eqref{stima} that $R_1(t,\betaC^dot)\inftyn \Gammamma_1^{-2}(\betaR^dd)\subset \Gammamma_1^{0}(\betaR^dd) $ uniformly w.r.t. $t$ on $[0,T]$. Moreover, using the ellipticity condition $a_{\betaetata,0}(z)\gtrsim\langleangle z \ranglenglea,$ for $ |z|\geq R$, and the property $a_{\betaetata,0}\inftyn \Gammamma_1^{2\betaetata}(\betaR^dd)$, one easily shows by induction that there exists a constant $C>0:$ $$|\partial^\taufrac{\alpha}{p}\,halphapha_z b_0(t,z)|\langleeq C \langleangle z\ranglenglea^{-|\taufrac{\alpha}{p}\,halphapha|},\quad \varphiorall t\inftyn[0,T]$$ that is to say, the symbol $b_0(t,z)$ is in the Shubin class $\Gammamma^0_1(\betaR^dd)$ with uniform estimates w.r.t. the time variable $t\inftyn [0,T]$. Hence, $b(t,\betaC^dot)=b_0(t,\betaC^dot)+R_1(t,\betaC^dot)\inftyn \Gammamma^0_1(\betaR^dd)$, with uniform estimate w.r.t. $t\inftyn [0,T]$.\par For applications of Shubin classes in the framework of Born-Jordan quantization we refer to the work \betareakite{cgnp}. \subsection{Gabor analysis of $\tauau$-pseudodifferential operators} For $\tauau \inftyn [0,1]$, $f,g\inftynL^2(\betaR)d$, the (cross-)$\tauau $-Wigner distribution is defined by \betaetagin{equation} W_{\tauau }(f,g)(x,\ximegaega )=\inftynt_{\mathbb{R}^{d}}e^{-2\pi iy\ximegaega }f(x+\tauau y) \xiverline{g(x-(1-\tauau )y)}\,dy,\quad x,\ximegaega\inftyn\betaR^d. \langleanglebel{tauwig} \end{equation} It can be used to define the $\tauau$-pseudo\-differential operator\ with symbol $\sigma$ via the formula \betaetagin{equation} \langleanglengle operatort(\sigma)f,g\ranglengleangle = \langleanglengle \sigma,W_{\tauau }(g,f)\ranglengleangle, \quad f,g\inftyn \mathcal{S}(\mathbb{R}^{d}). \langleanglebel{tauweak} \end{equation} For $\tauau=1/2$ we recapture the Weyl operator. 
We want to consider $\tauau$-pseudo\-differential operator s with symbols $\sigma$ in the H\"{o}rmander class $S^m_{0,0}$, $m\inftyn\varphiield{R}$, consisting of functions $\sigma\inftyn \mathcal{C}^\inftynfty(\betaR^dd)$ such that, for every $\taufrac{\alpha}{p}\,ha\inftyn \varphiield{N}^{2d}$, \betaetagin{equation}\langleanglebel{semi-normSm} |\partial^\taufrac{\alpha}{p}\,halphapha \sigma(z)|\langleeq C_\taufrac{\alpha}{p}\,halphapha \langleangle z\ranglenglea^m, \quad z\inftyn \betaR^dd. \end{equation} The related semi-norms are denote by \betaetagin{equation}\langleanglebel{semihormander} |\sigma|_{N,m}:=\sup_{|\taufrac{\alpha}{p}\,ha|\langleeq N} |\partial^\taufrac{\alpha}{p}\,halphapha \sigma(z)|\langleangle z\ranglenglea^{-m}. \end{equation} Fix $g\inftyn \mathcal{S}(\betaR^d)\setminus \{0\}$. We define the \emph{Gabor matrix} of a linear continuous operator $T$ from $\mathcal{S}(\betaR^d)$ to $\mathcal{S}'(\betaR^d)$, the mapping from $\betaR^dd\tauimes\betaR^dd$ into $\varphiield{C}$, \betaetagin{equation}\langleanglebel{unobis2s} (z,y)\mapsto \langleanglengle T \pi(z) g,\pi(y)g\ranglengleangle,\quad z,y\inftyn \betaR^dd. \end{equation} This is a slightly abuse of notation, since originally Gabor matrices we defined for time-frequencys s $\pi(\langleanglembda)$, with $\langleanglembda$ varying in a lattice $\langleeft(ambdambda\subset \betaR^dd$. We observe that the almost diagonalization of Gabor matrices of pseudodifferential operators with symbols in the modulation space $M^{\inftynfty,1}(\betaR^dd)$ treated in \betareakite{grochenig2} (and in many subsequent papers on the topic) are valid in both the continuous and discrete case. So we adopt this terminology in the continuous framework. For $m=0$ we are reduced to the H\"{o}rmander class $S^0_{0,0}$, whose Gabor matrix characterization for Weyl operators was shown in \betareakite[Theorem 6.1]{GR}, see also \betareakite{rochberg}. Even though $m=0$ is our case of interest, for our goal we need to control such matrix by the semi-norms of $S^0_{0,0}$. Moreover, for further references, we shall formulate our result in the case of $\tauau$-pseudo\-differential operator s having symbols in the more general class $S^m_{0,0}$, $m\inftyn\varphiield{R}$. We are going to use the following result for $\tauau$-pseudodifferential operators \betareakite[Lemma 4.1]{CNT}. \betaetagin{lemma} \langleanglebel{lem:STFT-gaborm} Fix a non-zero window $g\inftyn \mathcal{S}(\betaR^d)$ and set $\Phi_{\tauau}=W_{\tauau}(g,g)$ for $\tauau\inftyn\langleeft[0,1\ranglengleight]$. Then, for $\sigma\inftyn \mathcal{S}'\langleeft(\mathbb{R}^{2d}\ranglengleight)$, \betaetagin{equation} \langleeft|\langleeft\langleanglengle operatort \langleeft(\sigma\ranglengleight)\pi\langleeft(z\ranglengleight)g,\pi\langleeft(y\ranglengleight)g\ranglengleight\ranglengleangle \ranglengleight|=\langleeft|{V}_{\Phi_{\tauau}}\sigma\langleeft(\mathcal{T}_{\tauau}\langleeft(z,y\ranglengleight),J\langleeft(y-z\ranglengleight)\ranglengleight)\ranglengleight|\langleanglebel{eq:gaborm as STFT}. \end{equation} where $z=(z_1,z_2)$, $y=(y_1,y_2)$ and $\mathcal{T}_{\tauau}$ and $J$ are defined as follows: \betaetagin{equation*} \mathcal{T}_{\tauau}(z,y)=((1-\tauau)z_1+\tauau y_1,\tauau z_2+(1-\tauau)y_2),\quad J(z)=(z_2,-z_1). \end{equation*} \end{lemma} The Gabor matrix for a $\tauau$-pseudo\-differential operator\, $operatort(\sigma)$ with symbol $\sigma\inftyn S^m_{0,0}$ enjoys the following decay. 
\betaetagin{theorem}\langleanglebel{GBsm} Fix $g\inftyn\mathcal{S}(\betaR^d)\setminus\{0\}$, $m\inftyn\varphiield{R}$, $\tauau\inftyn [0,1]$. Consider a $\tauau$-pseudo\-differential operator\, $operatort(\sigma)$ with symbol $\sigma\inftyn S^m_{0,0}$. Then for every $N\inftyn\varphiield{N}$ there exists $C=C(N)>0$ such that \betaetagin{equation}\langleanglebel{ltau} \langleeft|\langleeft\langleanglengle operatort \langleeft(\sigma\ranglengleight)\pi\langleeft(z\ranglengleight)g,\pi\langleeft(y\ranglengleight)g\ranglengleight\ranglengleangle \ranglengleight|\langleeq C |\sigma|_{2N,m} \varphirac{\langleangle \mathcal{T}_{\tauau}(z,y)\ranglenglea ^m }{\langleangle y-z\ranglenglea^{2N}},\quad z,y\inftyn\betaR^dd, \end{equation} where the semi-norms $|\betaC^dot|_{N,m}$ are defined in \eqref{semihormander}. \end{theorem} \betaetagin{proof} Using the representation in \eqref{eq:gaborm as STFT} and $$ (1-\Deltalta_\langleanglembda)^N e^{-2\pi i \langleanglembda J(y-z)} =\langleangle 2\pi(y-z)\ranglenglea ^{2N} e^{-2\pi i \langleanglembda J(y-z)}$$ we can write \betaetagin{align*} \langleeft|\langleeft\langleanglengle operatort \langleeft(\sigma\ranglengleight)\pi\langleeft(z\ranglengleight)g,\pi\langleeft(y\ranglengleight)g\ranglengleight\ranglengleangle \ranglengleight|&\langleeq C \varphirac{1}{\langleangle 2\pi(y-z)\ranglenglea ^{2N}}\\ &\quad \tauimes\quad \langleeft|\inftynt_{\ranglengled}d e^{-2\pi i \langleanglembda J(y-z)} (1-\Deltalta_\langleanglembda)^N \langleeft[\betaar{\sigma}(\langleanglembda)T_{ \mathcal{T}_{\tauau}(z,y)}\Phi_\tauau\ranglengleight] d\langleanglembda\ranglengleight| \end{align*} (observe that the above integral is absolutely convergent since $\Phi_\tauau\inftyn\mathcal{S}(\betaR^dd)$). Now we estimate \betaetagin{align*}(1-\Deltalta_\langleanglembda)^N \langleeft[\betaar{\sigma}(\langleanglembda)T_{ \mathcal{T}_{\tauau}(z,y)}\Phi_\tauau\ranglengleight]&\langleeq \sum_{|\taufrac{\alpha}{p}\,ha|+|\betaetata|\langleeq 2N}|C_{\taufrac{\alpha}{p}\,ha,\betaetata}| |\partial^\taufrac{\alpha}{p}\,halphapha \sigma(\langleanglembda)| \,|\partial^\betaetata \Phi_\tauau(\langleanglembda-\mathcal{T}_{\tauau}(z,y)))|\\ &\langleeq C_N |\sigma|_{2N,m} \langleangle \langleanglembda\ranglenglea ^m \langleangle \langleanglembda-\mathcal{T}_{\tauau}(z,y)\ranglenglea^{-s} \end{align*} for every $s\geq 0$ since $\Phi_\tauau\inftyn\mathcal{S}(\betaR^dd)$. Choose $s=|m|+2d+1$. Then the submultiplicativity of $\langleangle \betaC^dot\ranglenglea ^{|m|}$ allows us to control from above the right-hand side of the last inequality by $$ C_N |\sigma|_{2N,m} \langleangle \mathcal{T}_{\tauau}(z,y)\ranglenglea ^m \langleangle \langleanglembda -\mathcal{T}_{\tauau}(z,y)\ranglenglea^{-(2d+1)}.$$ Hence, for every $N\inftyn\varphiield{N}$ we can find $C(N)>0$ such that \eqref{ltau} is satisfied. This concludes the proof. \end{proof} For fixed $\tauau\inftyn(0,1)$ we observe that $\langleangle \mathcal{T}_{\tauau}(z,y)\ranglenglea\taufrac{\alpha}{p}\,hasymp \langleangle z+y\ranglenglea$, hence the matrix decay can be controlled by a function which does not depend on the $\tauau$-quantization. Namely, \betaetagin{corollary} Fix $g\inftyn\mathcal{S}(\betaR^d)\setminus\{0\}$, $m\inftyn\varphiield{R}$, $\tauau\inftyn (0,1)$. Consider a $\tauau$-pseudo\-differential operator\, $operatort(\sigma)$ with symbol $\sigma\inftyn S^m_{0,0}$. 
Then, for every $N\in\mathbb{N}$, there exists $C=C(\tau,N)>0$ such that
\begin{equation}\label{ltau2}
\left|\left\langle \mathrm{Op}_\tau(\sigma)\pi(z)g,\pi(y)g\right\rangle\right|\leq C\,|\sigma|_{2N,m}\,\frac{\langle z+y\rangle^m}{\langle y-z\rangle^{2N}},\quad z,y\in\mathbb{R}^{2d}.
\end{equation}
\end{corollary}
\begin{Remark}
We conjecture that pseudodifferential operators in the H\"{o}rmander class $S^m_{0,0}$, $m\in\mathbb{R}$, can be characterized via the Gabor matrix decay in \eqref{ltau}, extending the case $m=0$ already shown in \cite{GR}. To be precise, we expect that one can write
$$S^m_{0,0}=\bigcap_{s\geq0}M^\infty_{\langle\cdot\rangle^m\otimes\langle\cdot\rangle^s}=\bigcap_{s\geq0}M^{\infty,1}_{\langle\cdot\rangle^m\otimes\langle\cdot\rangle^s}.$$
Studying the Gabor matrix decay for $M^\infty_{\langle\cdot\rangle^m\otimes\langle\cdot\rangle^s}$ and following the pattern of the proofs in \cite{GR}, one should obtain the result easily. Since this subject is outside the scope of the paper, we will provide the details in a separate work.
\end{Remark}
\section{Local Well-posedness in modulation spaces}
For $m=0$ the semi-norms on $\Gamma^0_1$ are exactly the ones in \eqref{semingamm}. Observe that
\begin{equation}\label{inclsem}
\Gamma^m_1\hookrightarrow S^m_{0,0}
\end{equation}
(the inclusion is continuous). The results of the previous section yield the boundedness of the Weyl operator $e^{-tH^\beta}$ on modulation spaces.
\begin{theorem}\label{T3.1}
Consider $1\leq p,q\leq\infty$, $0<\beta\leq1$, $w\in\mathcal{M}_v$. Then for every $T>0$ there exists $C=C(T)>0$ such that
\begin{equation}\label{20}
\|e^{-tH^\beta}u_0\|_{M^{p,q}_w}\leq C\|u_0\|_{M^{p,q}_w},\quad\forall t\in[0,T],\quad u_0\in M^{p,q}_w(\mathbb{R}^d).
\end{equation}
\end{theorem}
\begin{proof}
Consider $u_0\in M^{p,q}_w(\mathbb{R}^d)$ and fix $g\in\mathcal{S}(\mathbb{R}^d)\setminus\{0\}$ with $\|g\|_2=1$. Using the inversion formula for $u_0$ in \eqref{invformula}, we can write
\begin{align*}
|V_g(e^{-tH^\beta}u_0)(y)|\,w(y)&=\left|\int_{\mathbb{R}^{2d}}\langle e^{-tH^\beta}\pi(z)g,\pi(y)g\rangle\,V_gu_0(z)\,dz\right|\,w(y)\\
&\leq\int_{\mathbb{R}^{2d}}w(y)\,|\langle e^{-tH^\beta}\pi(z)g,\pi(y)g\rangle|\,|V_gu_0(z)|\,dz.
\end{align*}
In the previous section we showed that $e^{-tH^\beta}$ is a Weyl operator with symbol $b(t,\cdot)$ in $\Gamma^0_1$, with semi-norms uniformly bounded with respect to $t\in[0,T]$.
The continuous embedding in \eqref{inclsem} and Theorem \ref{GBsm} then give
$$|\langle e^{-tH^\beta}\pi(z)g,\pi(y)g\rangle|\leq C\frac{1}{\langle y-z\rangle^{2N}}.$$
Since $w(y)\lesssim v(y-z)\,w(z)$ and $v(z)\lesssim\langle z\rangle^s$ for some $s>0$, we can write
\begin{align*}
|V_g(e^{-tH^\beta}u_0)(y)|\,w(y)&\leq C\int_{\mathbb{R}^{2d}}\langle y-z\rangle^s\,w(z)\,|V_gu_0(z)|\,\frac{1}{\langle y-z\rangle^{2N}}\,dz\\
&\leq C\left[\frac{1}{\langle\cdot\rangle^{2N-s}}\ast\big(|V_gu_0|\,w\big)\right](y).
\end{align*}
Choosing $N$ such that $2N-s>2d+1$ and using the convolution relations $L^1\ast L^{p,q}\hookrightarrow L^{p,q}$, we obtain the claim.
\end{proof}
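Spelling out the last step of the proof (a routine application of Young's inequality for mixed-norm spaces, included only for the reader's convenience): since $\langle\cdot\rangle^{-(2N-s)}\in L^1(\mathbb{R}^{2d})$ when $2N-s>2d$,
$$\big\||V_g(e^{-tH^\beta}u_0)|\,w\big\|_{L^{p,q}}\lesssim\big\|\langle\cdot\rangle^{-(2N-s)}\big\|_{L^1(\mathbb{R}^{2d})}\,\big\||V_gu_0|\,w\big\|_{L^{p,q}}\lesssim\|u_0\|_{M^{p,q}_w},$$
uniformly for $t\in[0,T]$, which is \eqref{20}.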
Choosing $p=q=2$ and recalling that, for $w(x,\xi)=\langle\xi\rangle^s$, $M^2_w(\mathbb{R}^d)=H^s(\mathbb{R}^d)$ (Sobolev spaces), whereas for $w(z)=\langle z\rangle^s$, $z\in\mathbb{R}^{2d}$, $M^2_w(\mathbb{R}^d)=\mathcal{Q}_s(\mathbb{R}^d)$ (Shubin--Sobolev spaces), cf., e.g., \cite[Chapter 2]{CR}, we obtain boundedness results for these classical spaces as well.
\begin{corollary}
Consider $0<\beta\leq1$, $s\in\mathbb{R}$. For any fixed $T>0$ there exists $C=C(T)>0$ such that
\begin{equation}\label{201}
\|e^{-tH^\beta}u_0\|_{H^s}\leq C\|u_0\|_{H^s},\quad\forall t\in[0,T],\quad u_0\in H^s(\mathbb{R}^d).
\end{equation}
The same result holds with the Sobolev space $H^s$ replaced by the Shubin--Sobolev space $\mathcal{Q}_s$.
\end{corollary}
As already done in \cite{CZ}, in order to show the local existence of the solution we will make use of the following variant of the contraction mapping theorem (cf., e.g., \cite[Proposition 1.38]{tao}).
\begin{proposition}\label{AIA}
Let $\mathcal{N}$ and $\mathcal{T}$ be two Banach spaces. Consider a linear operator $\mathcal{B}\colon\mathcal{N}\to\mathcal{T}$ such that
\begin{equation}\label{aia1}
\|\mathcal{B}f\|_{\mathcal{T}}\leq C_0\|f\|_{\mathcal{N}},\quad\forall f\in\mathcal{N},
\end{equation}
for some $C_0>0$, and suppose we have a nonlinear operator $F\colon\mathcal{T}\to\mathcal{N}$ with $F(0)=0$ and Lipschitz bounds
\begin{equation}\label{aia2}
\|F(u)-F(v)\|_{\mathcal{N}}\leq\frac{1}{2C_0}\|u-v\|_{\mathcal{T}}
\end{equation}
for all $u,v$ in the ball $B_\mu:=\{u\in\mathcal{T}:\|u\|_{\mathcal{T}}\leq\mu\}$, for some $\mu>0$. Then, for all $u_{\rm lin}\in B_{\mu/2}$ there exists a unique solution $u\in B_\mu$ to the equation $u=u_{\rm lin}+\mathcal{B}F(u)$, and the map $u_{\rm lin}\mapsto u$ is Lipschitz continuous with constant at most $2$.
\end{proposition}
\begin{proof}[Proof of Theorem \ref{T1}]
We apply Theorem \ref{T3.1} with $T=1$, $q=1$ and $w(x,\xi)=\langle\xi\rangle^s$. For every $1\leq p<\infty$, the operator $K_\beta(t)$ in \eqref{op2} is a bounded operator on $M^{p,1}_s(\mathbb{R}^d)$, and there exists $C>0$ such that
\begin{equation}\label{G1}
\|K_\beta(t)u_0\|_{M^{p,1}_s}\leq C\|u_0\|_{M^{p,1}_s},\quad t\in[0,1].
\end{equation}
Notice that this provides the uniformity of the constant $C$ as $t$ varies in $[0,1]$. Now the result follows from Proposition \ref{AIA}, with $\mathcal{T}=\mathcal{N}=C^0([0,T];M^{p,1}_s)$ and the linear operator $\mathcal{B}$ in \eqref{op2}, where $0<T\leq1$ will be chosen later on. Here $u_{\rm lin}:=K_\beta(t)u_0$ lies in the ball $B_{\mu/2}\subset\mathcal{T}$ by \eqref{G1}, provided $\mu$ is sufficiently large, depending on the radius $R$ in $M^{p,1}_s(\mathbb{R}^d)$ in the assumptions. Using Minkowski's integral inequality and \eqref{G1}, we obtain \eqref{aia1}; namely,
$$\|\mathcal{B}u\|_{M^{p,1}_s}\leq TC\|u\|_{M^{p,1}_s}.$$
\par
The proof of condition \eqref{aia2} can be found in \cite[proof of Theorem 4.1]{CNwave}. Hence, by choosing $T$ small enough we obtain existence, together with uniqueness among solutions in $\mathcal{T}$ with norm $O(R)$ (with $R$ being the radius of the ball $B_R$, centred at $0$, in $M^{p,1}_s(\mathbb{R}^d)$). Standard continuity arguments allow us to remove the last constraint (see, e.g., \cite[Proposition 3.8]{tao}). For $p=\infty$, by repeating the argument above, one obtains well-posedness when the initial datum belongs to
$$\mathcal{M}^{\infty,1}_s(\mathbb{R}^d):=\overline{\mathcal{S}}^{\,M^{\infty,1}_s}(\mathbb{R}^d).$$
\end{proof}
Observe that similar results were obtained in \cite[Theorem 1.1]{nicola}. We conclude this note by pointing the reader to some open problems in this field. First, it is still not clear whether better results can be obtained when considering the nonlinearity
\begin{equation}\label{PW}
F(u)=F_k(u)=\lambda|u|^{2k}u=\lambda u^{k+1}\bar{u}^k,\quad\lambda\in\mathbb{C},\ k\in\mathbb{N}.
\end{equation}
In fact, this was the case for the wave and vibrating plate equations, cf.\ \cite{CNwave,CZ}, where more general modulation spaces were considered. Moreover, another open question is the well-posedness of the Cauchy problem \eqref{cpw} with initial datum $u_0\in M^{p,q}_s(\mathbb{R}^d)$, $0<p\leq\infty$, $0<q\leq1$. We conjecture that the result holds true in this case as well, but the techniques employed so far do not apply.
\end{document}
\begin{document}
\begin{abstract}
We are concerned with the global existence of entropy solutions of the compressible Euler equations describing gas flow in a nozzle with general cross-sectional area, for both isentropic and isothermal fluids. New viscosities are delicately designed to obtain uniform bounds on the approximate solutions. The vanishing viscosity method and the compensated compactness framework are used to prove the convergence of the approximate solutions. Moreover, the entropy solutions in both cases are uniformly bounded independently of time. No smallness condition is assumed on the initial data. The techniques developed here can be applied to compressible Euler equations with general source terms.
\end{abstract}
\maketitle
\medskip
\noindent {\bf 2010 AMS Classification}: 35L45, 35L60, 35Q35.
\medskip
\noindent {\bf Key words}: isentropic gas flow, isothermal gas flow, compensated compactness, uniform estimate, independent of time.
\section{Introduction}
We consider one-dimensional gas flow in a general nozzle, for isentropic and isothermal flows separately. Nozzles are widely used in some types of steam turbines, rocket engines, and supersonic jet engines, and arise in jet streams in astrophysics. The motion of the nozzle flow is governed by the following system of compressible Euler equations:
\begin{eqnarray}\label{iso1}
\left\{
\begin{array}{ll}
\displaystyle \rho_t+m_x=a(x)m,\quad x\in\mathbb{R},\ t>0,\\
\displaystyle m_t+\left(\frac{m^2}{\rho}+p(\rho)\right)_x=a(x)\frac{m^2}{\rho},\quad x\in\mathbb{R},\ t>0,
\end{array}
\right.
\end{eqnarray}
where $\rho$ is the density, $m=\rho u$ is the momentum with $u$ being the velocity, and $p(\rho)$ is the pressure of the gas. Here the given function $a(x)$ is represented by $a(x)=-\frac{A'(x)}{A(x)}$, with $A(x)\in C^2(\mathbb{R})$ being a slowly varying cross-sectional area at $x$ in the nozzle. For a $\gamma$-law gas, $p(\rho)=p_0\rho^\gamma$, with $\gamma$ denoting the adiabatic exponent and $p_0=\frac{\theta^2}{\gamma}$, $\theta=\frac{\gamma-1}{2}$. When $\gamma>1$, \eqref{iso1} describes the isentropic gas flow; when $\gamma=1$ (in which case we take the normalization $p(\rho)=\rho$), it describes the isothermal one. We consider the Cauchy problem for \eqref{iso1} with large initial data
\begin{equation}\label{ini1}
(\rho,m)|_{t=0}=(\rho_0(x),m_0(x))\in L^\infty.
\end{equation}
The Cauchy problem \eqref{iso1}--\eqref{ini1} can be written in the following compact form:
\begin{eqnarray}\label{iso3}
\left\{
\begin{array}{ll}
\displaystyle U_t+f(U)_x=g(x,U),\\
\displaystyle U|_{t=0}=U_0(x),\quad x\in\mathbb{R},
\end{array}
\right.
\end{eqnarray}
where $U=(\rho,m)^\top$, $f(U)=\left(m,\frac{m^2}{\rho}+p(\rho)\right)^\top$, and $g(x,U)=\left(-\frac{A'(x)}{A(x)}m,-\frac{A'(x)}{A(x)}\frac{m^2}{\rho}\right)^\top$.
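For orientation, we recall how \eqref{iso1} arises (a standard formal computation for quasi-one-dimensional duct flow, cf.\ \cite{Courant,Whitham}): the balance laws in a duct of cross-section $A(x)$ read
$$(A\rho)_t+(A\rho u)_x=0,\qquad (A\rho u)_t+\big(A(\rho u^2+p(\rho))\big)_x=A'(x)\,p(\rho).$$
Dividing by $A$ and writing $m=\rho u$, $a(x)=-\frac{A'(x)}{A(x)}$, the pressure contributions $\pm\frac{A'}{A}p(\rho)$ cancel in the second equation and one obtains exactly \eqref{iso1}.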
There have been extensive studies and applications of the homogeneous $\gamma$-law gas, i.e., the case $g(x,U)=0$. DiPerna \cite{Diperna} proved the global existence of entropy solutions with large initial data by the theory of compensated compactness and the vanishing viscosity method for $\gamma=1+\frac{2}{2n+1}$, where $n$ is a positive integer. Subsequently, Ding, Chen, and Luo \cite{Ding,DingDing} and Chen \cite{chen} successfully extended the result to $\gamma\in(1,\frac{5}{3}]$ by using a Lax--Friedrichs scheme. Lions, Perthame, and Tadmor \cite{Lions1} and Lions, Perthame, and Souganidis \cite{Lions2} treated the case $\gamma>\frac{5}{3}$. The existence of entropy solutions for the isothermal gas, i.e., $\gamma=1$, was proved by Huang and Wang \cite{HuangWang} by introducing complex entropies and utilizing the analytic extension method. For the isentropic Euler equations with a source term, Ding, Chen, and Luo \cite{Ding1} established a general framework to investigate the global existence of entropy solutions through the fractional-step Lax--Friedrichs scheme and the compensated compactness method. Later on, there have been extensive studies of the inhomogeneous case (see \cite{Chen2,Chen3,Lu1997,Marcati,Marcati2,Wangzejun,Wangzejun2}). For the nozzle flow problem, see \cite{Courant,Embid,Glaz,Glimm1984,Liu1979,Liu1982,Liu1987,Whitham}. For converging--diverging de Laval nozzles, as the flow accelerates from the subsonic to the supersonic regime, the physical properties of nozzle and diffuser flows are altered. This kind of nozzle is specifically designed to converge to a minimum cross-sectional area and then expand. Liu \cite{Liu1979} first proved the existence of a global solution with initial data of small total variation and away from the sonic state by a Glimm scheme. Tsuge \cite{Tsuge3,Tsuge4,Tsuge5} was the first to study the global existence of solutions for Laval nozzle flow and transonic flow with large initial data, by introducing a modified Godunov scheme. Recently, Chen and Schrecker \cite{Chen2018} proved the existence of globally defined entropy solutions in transonic nozzles in an $L^p$ compactness framework, where the uniform bound of the approximate solutions may depend on the time $t$. In the present paper, we focus on the $L^\infty$ compactness framework. Moreover, general cross-sectional areas of nozzles are considered, which include several important physical models, such as de Laval nozzles with closed ends, that is, cross-sectional areas tending to zero as $x\rightarrow\infty$. In this paper, we assume that the cross-sectional area function $A(x)>0$ is such that there exists a $C^{1,1}$ function $a_0(x)\in L^1(\mathbb{R})$ with
\begin{equation}\label{acondition}
\left|\frac{A'(x)}{A(x)}\right|=|a(x)|\leq a_0(x).
\end{equation}
Here, $A(x)>0$ is a natural assumption. The smallest cross-section of the nozzle is its throat. We allow a general varying cross-sectional area, and no assumption is made on the sign of $a(x)$. The main purpose of the present paper is to prove the existence of a global entropy solution with a uniform bound independent of time for large initial data, in both the isentropic case $1<\gamma<3$ and the isothermal case $\gamma=1$. We are interested in solutions that can reach the vacuum $\rho=0$. Near the vacuum, the system \eqref{iso1}--\eqref{ini1} is degenerate and the velocity $u$ cannot be defined uniquely. We define the weak entropy solution as follows.
\begin{definition}
A measurable function $U(x,t)$ is called a global weak solution of the Cauchy problem \eqref{iso3} if
\begin{equation*}
\int_{t>0}\int_{\mathbb{R}}U\varphi_t+f(U)\varphi_x+g(x,U)\varphi\,dxdt+\int_{\mathbb{R}}U_0(x)\varphi(x,0)\,dx=0
\end{equation*}
holds for any test function $\varphi\in C^1_0(\mathbb{R}\times\mathbb{R}^+)$. In addition, for the isentropic flow, if $U$ also satisfies, for any weak entropy pair $(\eta,q)$ (see Section 2), the inequality
\begin{equation}\label{entropyinequal}
\eta(U)_t+q(U)_x-\nabla\eta(U)\cdot g(x,U)\leq0
\end{equation}
in the sense of distributions, then $U$ is called a weak entropy solution of \eqref{iso3}. For the isothermal flow, $U$ is called a weak entropy solution if it additionally satisfies \eqref{entropyinequal} for the mechanical entropy pair
$$\eta_*=\frac{m^2}{2\rho}+\rho\ln\rho,\qquad q_*=\frac{m^3}{2\rho^2}+m\ln\rho.$$
\end{definition}
The two main results of the present paper are as follows.
\begin{theorem}\label{mainisen}{\rm(isentropic case)}
Let $1<\gamma<3$. Assume that there is a positive constant $M$ such that the initial data satisfy
\begin{equation*}
0\leq\rho_0(x)\leq M,\quad|m_0(x)|\leq M\rho_0(x),\quad\text{a.e. }x\in\mathbb{R},
\end{equation*}
and that $a(x)$ satisfies \eqref{acondition} with
\begin{equation}\label{a01}
\|a_0(x)\|_{L^1(\mathbb{R})}\leq\frac{1-\theta}{1+\theta}.
\end{equation}
Then there exists a global entropy solution of \eqref{iso1}--\eqref{ini1} satisfying
\begin{equation*}
0\leq\rho(x,t)\leq C,\quad|m(x,t)|\leq C\rho(x,t),\quad\text{a.e. }(x,t)\in\mathbb{R}\times\mathbb{R}^+,
\end{equation*}
where $C$ depends only on the initial data and is independent of the time $t$.
\end{theorem}
\begin{theorem}\label{main}{\rm(isothermal case)}
Let $\gamma=1$. Assume that there is a positive constant $M$ such that the initial data satisfy
\begin{equation*}
0\leq\rho_0(x)\leq M,\quad|m_0(x)|\leq\rho_0(x)(M+|\ln\rho_0(x)|),\quad\text{a.e. }x\in\mathbb{R},
\end{equation*}
and that $a(x)$ satisfies \eqref{acondition} with
\begin{equation}\label{a02}
\|a_0(x)\|_{L^1(\mathbb{R})}\leq\tfrac{1}{2}.
\end{equation}
Then there exists a global entropy solution of \eqref{iso1}--\eqref{ini1} satisfying
\begin{equation*}
0\leq\rho(x,t)\leq C,\quad|m(x,t)|\leq\rho(x,t)(C+|\ln\rho(x,t)|),\quad\text{a.e. }(x,t)\in\mathbb{R}\times\mathbb{R}^+,
\end{equation*}
where $C$ depends only on the initial data and is independent of the time $t$.
\end{theorem}
\begin{remark}
The conditions \eqref{a01} (Theorem \ref{mainisen}) and \eqref{a02} (Theorem \ref{main}) are assumed in order to guarantee a uniform bound of $(\rho,m)$ independent of time. This condition illustrates a physical phenomenon that is important in engineering. For example, if we consider an isothermal nozzle with a monotone cross-sectional area, $a_0(x)=\frac{A'(x)}{A(x)}\geq0$, and denote by $A_+$ and $A_-$ the far-field values of the cross-sectional area at $+\infty$ and $-\infty$, respectively, then the ratio of the outlet and inlet cross-sectional areas is controlled, i.e., $\frac{A_+}{A_-}\leq e^{\frac12}$.
\end{remark}
\begin{remark}
The condition \eqref{a01} in Theorem \ref{mainisen} is different from that in Tsuge \cite{Tsuge5}. Moreover, in our paper we allow $1<\gamma<3$.
\end{remark}
The main difficulty we encounter is how to construct approximate solutions with a uniform bound independent of time. Another difficulty is the nonlinear resonance between the characteristic modes and the geometric source terms. Our strategy is to apply the maximum principle (Lemma 3.1) introduced in \cite{Huang20171,Huang20172}, which is similar to the invariant region theory \cite{Smoller}, to a viscous system with novel viscosities. To be more specific, for the isentropic case we add $-2\varepsilon b(x)\rho_x$ to the momentum equation (cf.\ \eqref{isen-vis}); for the isothermal case we raise $n:=A(x)\rho$ by $\delta$ and also add $-4\varepsilon b(x)n_x$ to the momentum equation (cf.\ \eqref{iso-vis2}). Two modified Riemann invariants are introduced and a system of decoupled parabolic equations along the characteristics is derived. Owing to the hyperbolic structure of \eqref{iso1}, we can transform the integral of the source terms along the characteristics with respect to time $t$ into an integral with respect to space $x$. Finally, after establishing the $H^{-1}_{loc}$ compactness estimate, we apply the compensated compactness framework of \cite{Ding,DingDing,Lions2,HuangWang} to show the convergence of the approximate solutions. To the best of our knowledge, for the isothermal flow the uniform bound on the approximate solutions depends on the time $t$ in all previous results. We remark that the method of this paper can be applied to obtain the existence of weak solutions of related gas dynamics models, such as the Euler--Poisson system for a semiconductor model \cite{Huang20172} or Euler equations with geometric source terms \cite{Huang20171}, and may also shed light on the large-time behavior of entropy solutions. Besides, we avoid laborious numerical schemes in the construction of approximate solutions. The present paper is organized as follows. In Section \ref{formula}, we introduce some basic notions and formulas for the isentropic Euler system. In Section \ref{isentheorem}, we prove Theorem \ref{mainisen} on the global existence for isentropic gas flow in a general nozzle. Subsequently, in Section \ref{formula2}, we formulate several preliminaries and formulas for the isothermal Euler system. The proof of Theorem \ref{main} on the global existence for isothermal gas flow in a general nozzle is presented in Section \ref{isotheorem}. In the appendix, we provide the proof of a variant of the invariant region theory for completeness.
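As an illustration of assumption \eqref{acondition} and of the smallness conditions \eqref{a01}--\eqref{a02}, consider the (hypothetical, chosen here only for illustration) cross-section
$$A(x)=\exp(c\arctan x),\qquad c>0,$$
for which $a(x)=-\frac{A'(x)}{A(x)}=-\frac{c}{1+x^2}$, so that one may take $a_0(x)=\frac{c}{1+x^2}$ and $\|a_0\|_{L^1(\mathbb{R})}=c\pi$. Condition \eqref{a01} then holds provided $c\leq\frac{1-\theta}{\pi(1+\theta)}$, and condition \eqref{a02} holds provided $c\leq\frac{1}{2\pi}$; in the latter case the ratio of the far-field cross-sections is $A_+/A_-=e^{c\pi}\leq e^{1/2}$, consistently with the first remark above.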
\section{Preliminaries and Formulation for the Isentropic Flow}\label{formula}
First we list some basic notation for the isentropic system \eqref{iso1}. The eigenvalues are
\begin{equation*}
\lambda_1=\frac{m}{\rho}-\theta\rho^\theta,\qquad\lambda_2=\frac{m}{\rho}+\theta\rho^\theta,
\end{equation*}
and the corresponding right eigenvectors are
\begin{equation*}
r_1=\left[\begin{array}{c}1\\\lambda_1\end{array}\right],\qquad r_2=\left[\begin{array}{c}1\\\lambda_2\end{array}\right].
\end{equation*}
The Riemann invariants $w,z$ are given by
\begin{equation}\label{2.3}
w=\frac{m}{\rho}+\rho^\theta,\qquad z=\frac{m}{\rho}-\rho^\theta,
\end{equation}
satisfying $\nabla w\cdot r_1=0$ and $\nabla z\cdot r_2=0$. A pair of functions $(\eta,q)\colon\mathbb{R}^+\times\mathbb{R}\mapsto\mathbb{R}^2$ is called an entropy--entropy flux pair if it satisfies
\begin{equation*}
\nabla q(U)=\nabla\eta(U)\,\nabla\!\left[\begin{array}{c}m\\\frac{m^2}{\rho}+p(\rho)\end{array}\right].
\end{equation*}
If, moreover,
$$\eta\Big|_{\frac{m}{\rho}\ \text{fixed}}\rightarrow0\quad\text{as }\rho\rightarrow0,$$
then $\eta(\rho,m)$ is called a weak entropy. In particular, the mechanical entropy pair
\begin{equation*}
\eta^*(\rho,m)=\frac{m^2}{2\rho}+\frac{p_0\rho^\gamma}{\gamma-1},\qquad q^*(\rho,m)=\frac{m^3}{2\rho^2}+\frac{\gamma p_0\rho^{\gamma-1}m}{\gamma-1}
\end{equation*}
is a strictly convex entropy pair. As shown in \cite{Lions1} and \cite{Lions2}, any weak entropy for the system \eqref{iso1} is given by
\begin{equation}\label{2.6}
\begin{split}
\eta=\rho\int_{-1}^1\chi\Big(\frac{m}{\rho}+\rho^\theta s\Big)(1-s^2)^\lambda\,ds,\qquad
q=\rho\int_{-1}^1\Big(\frac{m}{\rho}+\theta\rho^\theta s\Big)\chi\Big(\frac{m}{\rho}+\rho^\theta s\Big)(1-s^2)^\lambda\,ds,
\end{split}
\end{equation}
with $\lambda=\frac{3-\gamma}{2(\gamma-1)}$, for any function $\chi(\cdot)\in C^2(\mathbb{R})$.
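For instance (a routine check, included only for orientation), the choices $\chi(\xi)=1$ and $\chi(\xi)=\xi$ in \eqref{2.6} recover, up to the constant $c_\lambda=\int_{-1}^1(1-s^2)^\lambda\,ds$, the trivial weak entropy pairs
$$(\eta,q)=c_\lambda\,(\rho,\,m)\qquad\text{and}\qquad(\eta,q)=c_\lambda\Big(m,\,\frac{m^2}{\rho}+p(\rho)\Big),$$
respectively; for the second one uses the elementary identity $\int_{-1}^1s^2(1-s^2)^\lambda\,ds=\frac{\theta}{\gamma}\,c_\lambda$ together with $2\theta+1=\gamma$ and $p_0=\frac{\theta^2}{\gamma}$. Thus the family \eqref{2.6} contains, in particular, the conserved quantities themselves.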
\section{Proof of Theorem \ref{mainisen}}\label{isentheorem}
\subsection{Construction of approximate solutions}
We first construct approximate solutions to \eqref{iso1} within the framework of \cite{Ding,DingDing,Lions2}. Indeed, for any $\varepsilon\in(0,1)$ we construct approximate solutions by adding suitable artificial viscosity as follows:
\begin{eqnarray}\label{isen-vis}
\left\{
\begin{array}{ll}
\displaystyle \rho_t+m_x=a(x)m+\varepsilon\rho_{xx},\\
\displaystyle m_t+\left(\frac{m^2}{\rho}+p(\rho)\right)_x=a(x)\frac{m^2}{\rho}+\varepsilon m_{xx}-2\varepsilon b(x)\rho_x,
\end{array}
\right.
\end{eqnarray}
with initial data
\begin{equation}\label{isenini-vis}
(\rho,m)|_{t=0}=(\rho_0^\varepsilon(x),m_0^\varepsilon(x))=(\rho_0(x)+\varepsilon,m_0(x))\ast j^\varepsilon,
\end{equation}
where $b(x)$ is a function to be specified later and $j^\varepsilon$ is the standard mollifier.
\subsection{Global existence of approximate solutions}\label{approximate}
For the global existence for the Cauchy problem \eqref{isen-vis}--\eqref{isenini-vis}, we have the following.
\begin{theorem}\label{thm-isenthvis}
For any time $T>0$, there exists a unique global classical bounded solution to the Cauchy problem \eqref{isen-vis}--\eqref{isenini-vis} satisfying the $L^\infty$ estimates
\begin{equation}\label{bound}
e^{-C(\varepsilon,T)}\leq\rho^\varepsilon(x,t)\leq C,\qquad|m^\varepsilon(x,t)|\leq C\rho^\varepsilon(x,t).
\end{equation}
\end{theorem}
We shall prove Theorem \ref{thm-isenthvis} in two steps. In this section, we omit the superscript $\varepsilon$ for simplicity.
\textbf{Step 1. Uniform upper bound.} First, we rewrite the first equation of \eqref{isen-vis} as
\begin{equation*}
\rho_t+u\rho_x=\varepsilon\rho_{xx}+\rho(a(x)u-u_x),
\end{equation*}
and then the maximum principle for parabolic equations yields
\begin{align*}
\rho\geq\min\rho_0(x)\,e^{-\int_0^t\|a(x)u-u_x\|_{L^\infty}ds}>0,
\end{align*}
which implies $w\geq z$. Second, we recall a revised version of the invariant region theory \cite{Smoller} introduced in \cite{Huang20171,Huang20172}.
\begin{lemma}[Maximum principle]\label{modified maximum}
Let $p(x,t),q(x,t)$, $(x,t)\in\mathbb{R}\times[0,T]$, be bounded classical solutions of the quasilinear parabolic system
\begin{eqnarray}\label{pq}
\left\{
\begin{aligned}
\displaystyle &p_t+\mu_1p_x=\varepsilon p_{xx}+a_{11}p+a_{12}q+R_1,\\
\displaystyle &q_t+\mu_2q_x=\varepsilon q_{xx}+a_{21}p+a_{22}q+R_2,
\end{aligned}
\right.
\end{eqnarray}
with initial data $p(x,0)\leq0$, $q(x,0)\geq0$, where
$$\mu_i=\mu_i(x,t,p(x,t),q(x,t)),\qquad a_{ij}=a_{ij}(x,t,p(x,t),q(x,t)),$$
and the source terms are
$$R_i=R_i(x,t,p(x,t),q(x,t),p_x(x,t),q_x(x,t)),\qquad i,j=1,2,\quad\forall(x,t)\in\mathbb{R}\times[0,T].$$
Assume that $\mu_i,a_{ij}$ are bounded with respect to $(x,t,p,q)\in\mathbb{R}\times[0,T]\times K$, where $K$ is an arbitrary compact subset of $\mathbb{R}^2$, and that $a_{12},a_{21},R_1,R_2$ are continuously differentiable with respect to $p,q$. Assume moreover that the following conditions hold:
\begin{description}
\item[(C1)] When $p=0$ and $q\geq0$, one has $a_{12}\leq0$; when $q=0$ and $p\leq0$, one has $a_{21}\leq0$.
\item[(C2)] When $p=0$ and $q\geq0$, one has $R_1=R_1(x,t,0,q,\zeta,\eta)\leq0$; when $q=0$ and $p\leq0$, one has $R_2=R_2(x,t,p,0,\zeta,\eta)\geq0$.
\end{description}
Then $p(x,t)\leq0$ and $q(x,t)\geq0$ for all $(x,t)\in\mathbb{R}\times[0,T]$.
\end{lemma}
\begin{remark}
The modified version of the invariant region theory (Lemma \ref{modified maximum}) is valid not only for the Cauchy problem with source terms, but also for initial-boundary value problems with Dirichlet or Neumann boundary conditions.
\end{remark}
We shall apply the maximum principle, Lemma \ref{modified maximum}, to obtain the uniform bound of $\rho,m$. By the formulas \eqref{2.3} for the Riemann invariants, the viscous perturbation system \eqref{isen-vis} can be transformed into
\begin{eqnarray}\label{wzisen}
\left\{
\begin{aligned}
\displaystyle &w_t+\lambda_2w_x=\varepsilon w_{xx}+2\varepsilon(w_x-b)\frac{\rho_x}{\rho}-\varepsilon\theta(\theta+1)\rho^{\theta-2}\rho_x^2+\theta\frac{w^2-z^2}{4}a(x),\\
\displaystyle &z_t+\lambda_1z_x=\varepsilon z_{xx}+2\varepsilon(z_x-b)\frac{\rho_x}{\rho}+\varepsilon\theta(\theta+1)\rho^{\theta-2}\rho_x^2-\theta\frac{w^2-z^2}{4}a(x).
\end{aligned}
\right.
\end{eqnarray}
Set the control functions $(\phi,\psi)$ as
\begin{equation*}
\begin{split}
&\phi=C_0+\varepsilon\|b'(x)\|_{L^\infty}t+\int_{-\infty}^xb(y)\,dy,\\
&\psi=C_0+\varepsilon\|b'(x)\|_{L^\infty}t+\int_x^{\infty}b(y)\,dy.
\end{split}
\end{equation*}
Then a simple calculation shows that
\begin{equation*}
\begin{split}
&\phi_t=\varepsilon\|b'(x)\|_{L^\infty},\quad\phi_x=b(x),\quad\phi_{xx}=b'(x);\\
&\psi_t=\varepsilon\|b'(x)\|_{L^\infty},\quad\psi_x=-b(x),\quad\psi_{xx}=-b'(x).
\end{split}
\end{equation*}
Define the modified Riemann invariants $(\bar{w},\bar{z})$ by
\begin{equation}\label{r}
\bar{w}=w-\phi,\qquad\bar{z}=z+\psi.
\end{equation}
Inserting \eqref{r} into \eqref{wzisen} yields the decoupled equations for $\bar{w}$ and $\bar{z}$:
\begin{eqnarray}\label{phipsi11}
\left\{
\begin{aligned}
\bar{w}_t+\lambda_2\bar{w}_x=&\ \varepsilon\bar{w}_{xx}+\varepsilon\phi_{xx}-\phi_t-\lambda_2\phi_x+2\varepsilon\frac{\rho_x}{\rho}\bar{w}_x-\varepsilon\theta(\theta+1)\rho^{\theta-2}\rho_x^2\\
&+\theta\frac{(\bar{w}+\phi)^2-(\bar{z}-\psi)^2}{4}a(x),\\
\bar{z}_t+\lambda_1\bar{z}_x=&\ \varepsilon\bar{z}_{xx}-\varepsilon\psi_{xx}+\psi_t+\lambda_1\psi_x+2\varepsilon\frac{\rho_x}{\rho}\bar{z}_x+\varepsilon\theta(\theta+1)\rho^{\theta-2}\rho_x^2\\
&-\theta\frac{(\bar{w}+\phi)^2-(\bar{z}-\psi)^2}{4}a(x).
\end{aligned}
\right.
\end{eqnarray}
Noting that
\begin{equation*}
\begin{split}
&\lambda_1=\frac{w+z}{2}-\theta\frac{w-z}{2},\\
&\lambda_2=\frac{w+z}{2}+\theta\frac{w-z}{2},
\end{split}
\end{equation*}
the system \eqref{phipsi11} becomes
\begin{eqnarray}\label{rst}
\displaystyle\left\{
\begin{aligned}
&\bar{w}_t+\Big(\lambda_2-2\varepsilon\frac{\rho_x}{\rho}\Big)\bar{w}_x=\varepsilon\bar{w}_{xx}+a_{11}\bar{w}+a_{12}\bar{z}+R_1,\\
&\bar{z}_t+\Big(\lambda_1-2\varepsilon\frac{\rho_x}{\rho}\Big)\bar{z}_x=\varepsilon\bar{z}_{xx}+a_{21}\bar{w}+a_{22}\bar{z}+R_2,
\end{aligned}
\right.
\end{eqnarray}
where
\begin{equation*}
\begin{split}
&a_{11}=-\left(\frac{1+\theta}{2}\phi_x-\theta\frac{\bar{w}+2\phi}{4}a(x)\right),\qquad
a_{12}=-\left(\frac{1-\theta}{2}\phi_x+\theta\frac{\bar{z}-2\psi}{4}a(x)\right),\\
&a_{21}=\left(\frac{1-\theta}{2}\psi_x-\theta\frac{\bar{w}+2\phi}{4}a(x)\right),\qquad
a_{22}=\left(\frac{1+\theta}{2}\psi_x+\theta\frac{\bar{z}-2\psi}{4}a(x)\right),
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
&R_1=\varepsilon\phi_{xx}-\phi_t-\frac{1+\theta}{2}\phi\phi_x+\frac{1-\theta}{2}\psi\phi_x-\varepsilon\theta(\theta+1)\rho^{\theta-2}\rho_x^2+\theta\frac{\phi^2-\psi^2}{4}a(x),\\
&R_2=-\varepsilon\psi_{xx}+\psi_t+\frac{1-\theta}{2}\phi\psi_x-\frac{1+\theta}{2}\psi\psi_x+\varepsilon\theta(\theta+1)\rho^{\theta-2}\rho_x^2-\theta\frac{\phi^2-\psi^2}{4}a(x).
\end{split}
\end{equation*}
To apply Lemma \ref{modified maximum}, we need to verify \textbf{(C1)} and \textbf{(C2)}. For \textbf{(C1)}, when $\bar{w}=0$, $\bar{z}\geq0$, we have
\begin{align*}
0\leq\bar{z}=z+\psi\leq w+\psi=\phi+\psi,
\end{align*}
and then
\[
\begin{aligned}
a_{12}=&-\frac{1-\theta}{2}\left(b(x)+\frac{\theta}{2(1-\theta)}(\bar{z}-2\psi)a(x)\right)\\
\leq&\ \left\{
\begin{aligned}
&-\frac{1-\theta}{2}\left(b(x)-\frac{\theta}{2(1-\theta)}(\phi-\psi)|a(x)|\right)\quad&&\text{if }a(x)<0,\\
&-\frac{1-\theta}{2}\left(b(x)-\frac{\theta}{2(1-\theta)}2\psi|a(x)|\right)\quad&&\text{if }a(x)\geq0.
\end{aligned}
\right.
\end{aligned}
\]
Hence, taking $b(x)=M_0a_0(x)$ with
$$M_0\geq\frac{\theta}{2(1-\theta)}\max(\phi-\psi,2\psi)$$
and using \eqref{acondition}, one obtains $a_{12}\leq0$. Moreover, when $\bar{w}\leq0$, $\bar{z}=0$, we have
\begin{align*}
0\geq\bar{w}=w-\phi\geq z-\phi=-\psi-\phi,
\end{align*}
and then
\[
\begin{aligned}
a_{21}=&-\frac{1-\theta}{2}\left(b(x)+\frac{\theta}{2(1-\theta)}(\bar{w}+2\phi)a(x)\right)\\
\leq&\ \left\{
\begin{aligned}
&-\frac{1-\theta}{2}\left(b(x)-\frac{\theta}{2(1-\theta)}2\phi|a(x)|\right)\quad&&\text{if }a(x)<0,\\
&-\frac{1-\theta}{2}\left(b(x)-\frac{\theta}{2(1-\theta)}(\psi-\phi)|a(x)|\right)\quad&&\text{if }a(x)\geq0,
\end{aligned}
\right.\\
\leq&\ 0,
\end{aligned}
\]
provided
$$M_0\geq\frac{\theta}{2(1-\theta)}\max(\phi-\psi,2\phi).$$
Thus we require
$$M_0\geq\frac{\theta}{2(1-\theta)}\max\left(\int_{-\infty}^xb(y)\,dy-\int_x^{\infty}b(y)\,dy,\ 2C_0+2\varepsilon\|b'(x)\|_{L^\infty}t+2\int_{-\infty}^{\infty}b(y)\,dy\right).$$
Taking $\varepsilon$ sufficiently small so that $\varepsilon\|b'(x)\|_{L^\infty}T\leq1$, it suffices to require
$$M_0\geq\frac{\theta}{1-\theta}\left(C_0+1+M_0\|a_0(x)\|_{L^1}\right),$$
that is,
\begin{equation}\label{2}
\|a_0(x)\|_{L^1}\leq\frac{1-\theta}{\theta}-\frac{C_0+1}{M_0}.
\end{equation}
Hence \textbf{(C1)} is satisfied by $(\bar{w},\bar{z})$. As for \textbf{(C2)}, one can derive
\begin{equation*}
\begin{split}
R_1&\leq\varepsilon b'(x)-\varepsilon\|b'(x)\|_{L^\infty}\\
&\qquad+b(x)\left(-\frac{1+\theta}{2}\phi+\frac{1-\theta}{2}\psi\right)+\theta\frac{(\phi+\psi)(\phi-\psi)}{4}a(x)\\
&\leq b(x)\left(-\theta C_0-\varepsilon\theta\|b'(x)\|_{L^\infty}t-\frac{1+\theta}{2}\int_{-\infty}^xb(y)\,dy+\frac{1-\theta}{2}\int_x^{\infty}b(y)\,dy\right)\\
&\qquad+\frac{\theta}{4}\big(2C_0+2\varepsilon\|b'(x)\|_{L^\infty}t+\|b\|_{L^1}\big)\left(\int_{-\infty}^xb(y)\,dy-\int_x^{\infty}b(y)\,dy\right)a(x)\\
&\leq-\bigg[M_0\Big(\theta C_0+\theta\varepsilon\|b'(x)\|_{L^\infty}t-\frac{1-\theta}{2}\|b\|_{L^1}\Big)\\
&\qquad-\frac{\theta}{4}\big(2C_0+2\varepsilon\|b'(x)\|_{L^\infty}t+\|b\|_{L^1}\big)\|b\|_{L^1}\bigg]a_0(x)\\
&\leq-M_0a_0(x)\bigg[\theta C_0-\frac{1}{2}\Big(\theta C_0+(1-\theta)M_0+\frac{\theta}{2}M_0\|a_0\|_{L^1}\Big)\|a_0\|_{L^1}\bigg]\\
&\leq0.
\end{split}
\end{equation*}
The last inequality holds under the conditions
\begin{equation}\label{1}
\|a_0(x)\|_{L^1}\leq\frac{2\theta C_0}{\theta C_0+M_0}\quad\text{and}\quad\|a_0(x)\|_{L^1}\leq1.
\end{equation}
Then we also have
\begin{equation*}
\begin{split}
R_2&\geq\varepsilon b'(x)+\varepsilon\|b'(x)\|_{L^\infty}\\
&\qquad+b(x)\left(-\frac{1-\theta}{2}\phi+\frac{1+\theta}{2}\psi\right)-\theta\frac{(\phi+\psi)(\phi-\psi)}{4}a(x)\\
&\geq b(x)\left(\theta C_0+\varepsilon\theta\|b'(x)\|_{L^\infty}t-\frac{1-\theta}{2}\int_{-\infty}^xb(y)\,dy+\frac{1+\theta}{2}\int_x^{\infty}b(y)\,dy\right)\\
&\qquad-\frac{\theta}{4}\big(2C_0+2\varepsilon\|b'(x)\|_{L^\infty}t+\|b\|_{L^1}\big)\left(\int_{-\infty}^xb(y)\,dy-\int_x^{\infty}b(y)\,dy\right)a(x)\\
&\geq\bigg[M_0\Big(\theta C_0+\theta\varepsilon\|b'(x)\|_{L^\infty}t-\frac{1-\theta}{2}\|b\|_{L^1}\Big)\\
&\qquad-\frac{\theta}{4}\big(2C_0+2\varepsilon\|b'(x)\|_{L^\infty}t+\|b\|_{L^1}\big)\|b\|_{L^1}\bigg]a_0(x)\\
&\geq M_0a_0(x)\bigg[\theta C_0-\frac{1}{2}\Big(\theta C_0+(1-\theta)M_0+\frac{\theta}{2}M_0\|a_0\|_{L^1}\Big)\|a_0\|_{L^1}\bigg]\\
&\geq0.
\end{split}
\end{equation*}
Hence \textbf{(C2)} is verified for $(\bar{w},\bar{z})$.
From \eqref{2} and \eqref{1}, $a_0$ must satisfy
\begin{equation}\label{e:a0}
\|a_0\|_{L^1}\leq\min\left\{1,\ \frac{2\theta C_0}{\theta C_0+M_0},\ \frac{1-\theta}{\theta}-\frac{C_0+1}{M_0}\right\}.
\end{equation}
Now we turn to the choice of $M_0$ and $C_0$. In view of the initial values of the approximate solutions, we first choose $C_0$ large enough that
\begin{equation*}
C_0\geq\max\{\sup w(x,0),-\inf z(x,0)\},
\end{equation*}
so that $w(x,0)\leq\phi(x,0)$ and $z(x,0)\geq-\psi(x,0)$. One choice of $M_0$ is
$$M_0=C_0\frac{3\theta^2+\theta}{1-\theta},$$
for which
$$\frac{2\theta C_0}{\theta C_0+M_0}=\frac{1-\theta}{1+\theta}\quad\text{and}\quad\frac{1-\theta}{\theta}-\frac{C_0}{M_0}-\frac{1}{M_0}\geq\frac{1-\theta}{1+\theta}$$
provided $M_0$ is large enough. Thus our condition \eqref{a01} on $a_0$ implies \eqref{e:a0}, which is \textit{the key reason for assuming \eqref{a01}}. Therefore, an application of Lemma \ref{modified maximum} yields
$$\bar{w}(x,t)\le0,\qquad\bar{z}(x,t)\ge0,$$
which implies
\begin{equation*}
\begin{split}
&w(x,t)\leq\phi(x,t)\leq C_0+\|b\|_{L^1}+1=C,\\
&z(x,t)\geq-\psi(x,t)\geq-C_0-\|b\|_{L^1}-1=-C,
\end{split}
\end{equation*}
where $C$ is independent of time. Hence we obtain
\begin{equation}\label{uniform}
0\le\rho(x,t)\leq C,\qquad|m(x,t)|\leq C\rho(x,t).
\end{equation}
\textbf{Step 2. Lower bound of the density.} By \eqref{uniform}, the velocity $u=\frac{m}{\rho}$ is uniformly bounded, i.e., $|u|\le C$. The lower bound of the density can then be derived by the method of \cite{Huang20171}. Set $v=\ln\rho$; then $v$ satisfies the scalar equation
\begin{equation}\label{v}
v_t+v_xu+u_x=\varepsilon v_{xx}+\varepsilon v_x^2+a(x)u,
\end{equation}
from which we have
\begin{equation*}
v=\int_{\mathbb{R}}G(x-y,t)v_0(y)\,dy+\int_0^t\int_{\mathbb{R}}(\varepsilon v_y^2-v_yu-u_y+a(y)u)\,G(x-y,t-s)\,dyds,
\end{equation*}
where $G$ is the heat kernel, satisfying
\[
\int_{\mathbb{R}}G(x-y,t)\,dy=1,\qquad\int_{\mathbb{R}}|G_y(x-y,t)|\,dy\leq\frac{C}{\sqrt{\varepsilon t}}.
\]
It then follows that
\begin{equation*}
\begin{split}
v&=\int_{\mathbb{R}}G(x-y,t)v_0(y)\,dy+\int_0^t\int_{\mathbb{R}}(\varepsilon v_y^2-v_yu-u_y+au)\,G(x-y,t-s)\,dyds\\
&\geq\int_{\mathbb{R}}G(x-y,t)v_0(y)\,dy+\int_0^t\int_{\mathbb{R}}uG_y(x-y,t-s)+\Big(au-\tfrac{u^2}{4\varepsilon}\Big)G(x-y,t-s)\,dyds\\
&\geq\ln\varepsilon-\frac{Ct}{\varepsilon}-\frac{C\sqrt{t}}{\sqrt{\varepsilon}}:=-C(\varepsilon,t).
\end{split}
\end{equation*}
Thus
\begin{equation}\label{rholow}
\rho\geq e^{-C(\varepsilon,t)}.
\end{equation}
From \eqref{uniform} and \eqref{rholow}, we get \eqref{bound}. The lower bound of the density guarantees that there is no singularity in \eqref{isen-vis}. We can then apply the classical theory of quasilinear parabolic systems to complete the proof of Theorem \ref{thm-isenthvis}.
\subsection{Convergence of approximate solutions}
In this subsection, we provide the proof of Theorem \ref{mainisen}. Since we are focusing on the uniform bound of $\rho$ and $m$, we assume $1<\gamma\leq2$ for simplicity; for the case $2<\gamma\leq3$, one can follow a similar argument as in \cite{Lions2} or \cite{Wang} to obtain the same conclusions. Denote $\Pi_T=\mathbb{R}\times[0,T]$ for any $T\in(0,\infty)$.
\textbf{Step 1. $H^{-1}_{loc}$ compactness of the entropy pair.} We consider
\begin{equation*}
\eta(\rho^\varepsilon,m^\varepsilon)_t+q(\rho^\varepsilon,m^\varepsilon)_x,
\end{equation*}
where $(\eta,q)$ is any weak entropy--entropy flux pair given in \eqref{2.6}. We will apply the Murat lemma to achieve this goal.
\begin{lemma}[Murat \cite{Murat}]\label{murat}
Let $\Omega\subset\mathbb{R}^n$ be an open set. Then
\begin{equation*}
(\text{compact set of }W^{-1,q}_{loc}(\Omega))\cap(\text{bounded set of }W^{-1,r}_{loc}(\Omega))\subset(\text{compact set of }H^{-1}_{loc}(\Omega)),
\end{equation*}
where $1<q\leq2<r$.
\end{lemma}
Let $K\subset\Pi_T$ be any compact set, and choose $\varphi\in C_c^\infty(\Pi_T)$ such that $\varphi|_K=1$ and $0\leq\varphi\leq1$. Multiplying \eqref{isen-vis} by $\varphi\nabla\eta^*$, with $\eta^*$ the mechanical entropy, we obtain
\begin{equation}\label{4.1}
\begin{split}
&\varepsilon\int\int_{\Pi_T}\varphi(\rho_x,m_x)\nabla^2\eta^*(\rho_x,m_x)^\top\,dxdt\\
=&\int\int_{\Pi_T}\Big(a(x)\frac{m^2}{\rho}-2\varepsilon b(x)\rho_x\Big)\eta^*_m\varphi+a(x)m\,\eta^*_\rho\varphi+\eta^*\varphi_t+q^*\varphi_x+\varepsilon\eta^*\varphi_{xx}\,dxdt.
\end{split}
\end{equation}
A direct calculation shows that
\[
(\rho_x,m_x)\nabla^2\eta^*(\rho_x,m_x)^\top=p_0\gamma\rho^{\gamma-2}\rho_x^2+\rho u_x^2.
\]
Noting that
\begin{equation*}
\Big|\Big(a(x)\frac{m^2}{\rho}-2\varepsilon b(x)\rho_x\Big)\eta^*_m\Big|\leq\frac{\varepsilon p_0\gamma}{2}\rho^{\gamma-2}\rho_x^2+\varepsilon Cb^2m^2\rho^{-\gamma}+a_0\frac{m^3}{\rho^2},
\end{equation*}
we get
\begin{equation*}
\begin{split}
&\frac{\varepsilon}{2}\int\int_{\Pi_T}\varphi(\rho_x,m_x)\nabla^2\eta^*(\rho_x,m_x)^\top\,dxdt\\
&\leq\int\int_{\Pi_T}\big(C\varepsilon b^2m^2\rho^{-\gamma}+a_0\tfrac{m^3}{\rho^2}\big)\varphi+\eta^*\varphi_t+q^*\varphi_x+\varepsilon\eta^*\varphi_{xx}\\
&\qquad+\Big(\frac{m^3}{2\rho^2}+\frac{\gamma}{\gamma-1}m\rho^{\gamma-1}p_0\Big)a_0\varphi\,dxdt\\
&\leq C(\varphi).
\end{split}
\end{equation*}
Hence
\begin{equation}\label{en}
\varepsilon(\rho_x,m_x)\nabla^2\eta^*(\rho_x,m_x)^\top\in L^1_{loc}(\Pi_T),
\end{equation}
i.e.,
\begin{equation}\label{isenlocestimate}
\varepsilon\rho^{\gamma-2}\rho_x^2+\varepsilon\rho u_x^2\in L^1_{loc}(\Pi_T).
\end{equation}
For any weak entropy--entropy flux pair given in \eqref{2.6}, as in \eqref{4.1} we have
\begin{equation}\label{4.3}
\begin{split}
\eta_t+q_x&=\varepsilon\eta_{xx}-\varepsilon(\rho_x,m_x)\nabla^2\eta(\rho_x,m_x)^\top+\Big(\eta_\rho a(x)m+\eta_ma(x)\frac{m^2}{\rho}\Big)-2\varepsilon\eta_m\rho_xb(x)\\
&=:\sum_{i=1}^4I_i.
\end{split}
\end{equation}
Using \eqref{isenlocestimate}, it is straightforward to check that $I_1$ is compact in $H^{-1}_{loc}(\Pi_T)$. Note that for any weak entropy, the Hessian matrix $\nabla^2\eta$ is controlled by $\nabla^2\eta^*$ (\cite{Lions2}), that is,
\begin{equation}\label{4.4}
(\rho_x,m_x)\nabla^2\eta(\rho_x,m_x)^\top\leq(\rho_x,m_x)\nabla^2\eta^*(\rho_x,m_x)^\top,
\end{equation}
and thus $I_2$ is bounded in $L^1_{loc}(\Pi_T)$, hence compact in $W^{-1,\alpha}_{loc}(\Pi_T)$ for some $1<\alpha<2$ by the Sobolev embedding theorem. For $I_3$, we have
$$|I_3|=\Big|\eta_\rho a(x)m+\eta_ma(x)\frac{m^2}{\rho}\Big|\leq Ca_0,$$
which implies that $I_3$ is bounded in $L^1_{loc}(\Pi_T)$. For the last term $I_4$, we get
$$|I_4|\leq C\varepsilon\rho^{\gamma/2-1}|\rho_x|.$$
It follows from \eqref{en} that $I_4$ is compact in $H^{-1}_{loc}(\Pi_T)$. Therefore,
$$\eta_t+q_x\ \text{ is compact in }\ W^{-1,\alpha}_{loc}(\Pi_T)\ \text{ for some }1<\alpha<2.$$
On the other hand, since $\rho$ and $m$ are uniformly bounded,
$$\eta_t+q_x\ \text{ is bounded in }\ W^{-1,\infty}_{loc}(\Pi_T).$$
We conclude, with the help of the Murat lemma (Lemma \ref{murat}), that
\begin{equation}\label{4.5}
\eta_t+q_x\ \text{ is compact in }\ H^{-1}_{loc}(\Pi_T)
\end{equation}
for all weak entropy--entropy flux pairs.
\textbf{Step 2. Strong convergence and consistency.} By \eqref{4.5} and the compactness framework established in \cite{Ding,DingDing,Diperna,Lions2}, there exists a subsequence of $(\rho^\varepsilon,m^\varepsilon)$ (still denoted by $(\rho^\varepsilon,m^\varepsilon)$) such that
\begin{equation}\label{4.6}
(\rho^\varepsilon,m^\varepsilon)\to(\rho,m)\quad\text{in }L^p_{loc}(\Pi_T),\ p\geq1,
\end{equation}
from which it is easy to show that $(\rho,m)$ is a weak solution to the Cauchy problem \eqref{iso1}--\eqref{ini1}. We omit the proof for brevity.
\textbf{Step 3. Entropy inequality.} We shall also prove that $(\rho,m)$ satisfies the entropy inequality in the sense of distributions for all weak convex entropies. Let $(\eta,q)$ be any entropy--entropy flux pair with $\eta$ convex. Multiplying \eqref{isen-vis} by $\varphi\nabla\eta$, with $0\le\varphi\in C_c^\infty(\Pi_T)$, we get
\begin{equation*}
\begin{split}
&\int\int_{\Pi_T}\eta_t\varphi+q_x\varphi\,dxdt\\
=&\int\int_{\Pi_T}\varepsilon\eta_{xx}\varphi-\varepsilon\varphi(\rho_x,m_x)\nabla^2\eta(\rho_x,m_x)^\top+\eta_\rho a(x)m\varphi+\eta_m\Big(a(x)\frac{m^2}{\rho}-2\varepsilon b(x)\rho_x\Big)\varphi\,dxdt.
\end{split}
\end{equation*}
As in Step 1, we have
\begin{equation*}
\left|\int\int_{\Pi_T}\varepsilon\eta_{xx}\varphi\,dxdt\right|\to0\quad\text{as }\varepsilon\to0.
\end{equation*}
Moreover,
\begin{equation*}
\begin{split}
&\left|\int\int_{\Pi_T}2\varepsilon\rho_xb(x)\eta_m\varphi\,dxdt\right|\\
\leq&\left[\int\int_{\Pi_T}C\varepsilon\rho^{2-\gamma}b^2\varphi\,dxdt\right]^{\frac12}\left[\int\int_{\Pi_T}\varphi\,\varepsilon\rho_x^2\rho^{\gamma-2}\,dxdt\right]^{\frac12}\\
\leq&\ C\varepsilon^{\frac12}\to0\quad\text{as }\varepsilon\to0.
\end{split}
\end{equation*}
Noting that
$$\varepsilon\varphi(\rho_x,m_x)\nabla^2\eta(\rho_x,m_x)^\top\geq0,$$
we conclude, letting $\varepsilon\to0$, that
\begin{equation*}
\int\int_{\Pi_T}\eta\varphi_t+q\varphi_x+\Big(\eta_m\frac{m^2}{\rho}+m\eta_\rho\Big)a(x)\varphi\,dxdt\geq0,
\end{equation*}
that is, $(\rho,m)$ is indeed an entropy solution to the Cauchy problem \eqref{iso1}--\eqref{ini1}. Therefore, the proof of Theorem \ref{mainisen} is completed.
\section{Preliminaries and Formulation for the Isothermal Flow}\label{formula2}
In this section, we provide some preliminaries and formulas for the isothermal case. We adopt notation similar to that of Section \ref{formula}; no confusion should arise. Setting
\begin{equation*}
n=A(x)\rho,\qquad J=A(x)m,
\end{equation*}
and using $\gamma=1$, we can rewrite \eqref{iso1} as
\begin{eqnarray}\label{iso2}
\left\{
\begin{array}{ll}
\displaystyle n_t+J_x=0,\\
\displaystyle J_t+\left(\frac{J^2}{n}+n\right)_x=-a(x)n,\quad x\in\mathbb{R},
\end{array}
\right.
\end{eqnarray}
with $a(x)=-\frac{A'(x)}{A(x)}$ and $J=nu$. Then seeking weak entropy solutions of \eqref{iso1}--\eqref{ini1} is equivalent to solving \eqref{iso2} with the initial data
\begin{equation}\label{ini2}
(n,J)|_{t=0}=(n_0(x),J_0(x))=(A(x)\rho_0(x),A(x)m_0(x))\in L^\infty(\mathbb{R}).
\end{equation}
The eigenvalues of \eqref{iso2} are
\begin{equation*}
\lambda_1=\frac{J}{n}-1,\qquad\lambda_2=\frac{J}{n}+1,
\end{equation*}
and the corresponding right eigenvectors are
\begin{equation*}
r_1=\left[\begin{array}{c}1\\\lambda_1\end{array}\right],\qquad r_2=\left[\begin{array}{c}1\\\lambda_2\end{array}\right].
\end{equation*}
The Riemann invariants $(w,z)$ are given by
\begin{equation*}
w=\frac{J}{n}+\ln n,\qquad z=\frac{J}{n}-\ln n.
\end{equation*}
The mechanical energy $\eta^*(n,J)$ and the mechanical energy flux $q^*(n,J)$ are given by
\begin{equation*}
\eta^*(n,J)=\frac{J^2}{2n}+n\ln n,\qquad q^*(n,J)=\frac{J^3}{2n^2}+J\ln n.
\end{equation*}
\section{Proof of Theorem \ref{main}}\label{isotheorem}
We first recall the compactness framework of Huang and Wang \cite{HuangWang}.
\begin{theorem}\label{framework}
Let $(n^\varepsilon,J^\varepsilon)$ be a sequence of bounded approximate solutions of \eqref{iso2}--\eqref{ini2} satisfying
\begin{equation*}
0<\delta\leq n^\varepsilon\leq C,\qquad|J^\varepsilon|\leq n^\varepsilon(C+|\ln n^\varepsilon|),
\end{equation*}
with $C$ independent of $\varepsilon$ and $T$, and with $\delta=o(\varepsilon)$. Assume that
\begin{equation*}
\partial_t\eta(n^\varepsilon,J^\varepsilon)+\partial_xq(n^\varepsilon,J^\varepsilon)\ \text{ is compact in }\ H^{-1}_{loc}(\Pi_T),
\end{equation*}
where $(\eta,q)$ is defined by
\begin{equation*}
\eta=n^{\frac{1}{1-\xi^2}}e^{\frac{\xi}{1-\xi^2}\frac{J}{n}},\qquad q=\left(\frac{J}{n}+\xi\right)\eta,
\end{equation*}
for any fixed $\xi\in(-1,1)$.
Then there exists a subsequence of $(n^\varepsilon, J^\varepsilon)$, still denoted by $(n^\varepsilon, J^\varepsilon),$ such that
\begin{equation*}
(n^\varepsilon(x, t), J^\varepsilon(x, t))\rightarrow(n(x, t), J(x, t)) \quad\text{ in } L^p_{loc}({\mathbb R}\times{\mathbb R}^+),~ p\geq1,
\end{equation*}
for some function $(n(x, t), J(x, t))$ satisfying
\begin{equation*}
0\leq n\leq C, ~~|J|\leq n(C+|\ln n|),
\end{equation*}
where $C$ is a positive constant independent of $T.$
\end{theorem}

\subsection{Construction of approximate solutions}
Next we construct approximate solutions satisfying the conditions in Theorem \ref{framework}. Motivated by \cite{lu1}, we raise the density and add artificial viscosity as follows:
\begin{eqnarray}\label{iso-vis2}
\left\{
\begin{aligned}
&n_t+\Big(J-\delta\frac{J}{n}\Big)_x=\varepsilon n_{xx},\\
&J_t+\left(\frac{J^2}{n}-\frac{\delta}{2}\frac{J^2}{n^2}+\int^{n}_{\delta}\frac{s-\delta}{s}\,ds\right)_x=\varepsilon J_{xx}-a(x)(n-\delta)+2b(x)\delta\frac{J}{n}-4\varepsilon b(x)n_x,
\end{aligned}
\right.
\end{eqnarray}
with initial data
\begin{equation}\label{ini-vis2}
(n, J)|_{t=0}=(n_0^\varepsilon(x), J_0^\varepsilon(x))=(n_0(x)+\delta, J_0(x))\ast j^\varepsilon,
\end{equation}
where $b$ is a function to be determined later, $\delta=o(\varepsilon)$, $j^\varepsilon$ is the standard mollifier, and $0<\varepsilon<1.$ By a direct computation, the eigenvalues are
\begin{equation}
\lambda^{\delta}_1=\frac{J}{n}-\frac{n-\delta}{n},\quad \lambda^{\delta}_2=\frac{J}{n}+\frac{n-\delta}{n},
\end{equation}
and the Riemann invariants are
\begin{equation*}
w=\frac{J}{n}+\ln n,\quad z=\frac{J}{n}-\ln n.
\end{equation*}

\subsection{Global existence of approximate solutions}\label{approximate-1}
In this section, we show the global existence of classical solutions to the Cauchy problem for the quasilinear parabolic system \eqref{iso-vis2}-\eqref{ini-vis2} and obtain the following theorem.
\begin{theorem}\label{thm-isothvis}
There exists a unique global classical bounded solution $(n^\varepsilon, J^\varepsilon)$ to the Cauchy problem \eqref{iso-vis2}-\eqref{ini-vis2} satisfying
\begin{equation}\label{isoth-bound}
\delta\leq n^\varepsilon\leq C, ~~|J^\varepsilon|\leq n^\varepsilon(C+|\ln n^\varepsilon|).
\end{equation}
\end{theorem}
We divide the proof of Theorem \ref{thm-isothvis} into three steps.
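As a quick sanity check of the formulas above, one can verify directly that $w$ and $z$ are Riemann invariants of the inviscid system \eqref{iso2}: writing $\nabla=(\partial_n,\partial_J)$ and using $r_1=(1,\lambda_1)^\top$, $r_2=(1,\lambda_2)^\top$,
\begin{equation*}
\nabla w\cdot r_1=\Big(\frac{1}{n}-\frac{J}{n^2}\Big)+\frac{1}{n}\Big(\frac{J}{n}-1\Big)=0,
\qquad
\nabla z\cdot r_2=\Big(-\frac{1}{n}-\frac{J}{n^2}\Big)+\frac{1}{n}\Big(\frac{J}{n}+1\Big)=0,
\end{equation*}
so $w$ is constant along the first characteristic field and $z$ along the second.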
In this section, we omit the superscript $\varepsilon.$

\textbf{Step 1. Local existence and lower bound of the density.} The local existence of solutions to \eqref{iso-vis2}-\eqref{ini-vis2} can be proved by using the heat kernel, in the same way as in \cite{Diperna}. For the lower bound of the density, we denote
$$v=n-\delta,$$
and then $v$ satisfies
\begin{equation}\label{v-1}
v_t+(uv)_x=\varepsilon v_{xx},~~~~~~~~v|_{t=0}=v_0(x),
\end{equation}
with $u=\frac{J}{n}.$ From the definition of $n_0$, we have $v_0\geq0$. Rewriting \eqref{v-1} as
\begin{align*}
v_t+uv_x=\varepsilon v_{xx}-u_x v,
\end{align*}
it is easy to obtain from the maximum principle for parabolic equations that
\begin{align*}
v(x, t)\geq\min v_0(x)\,e^{-\|u_x\|_{L^\infty}t}\geq0,
\end{align*}
and hence $n\geq\delta.$

\textbf{Step 2. Uniform upper bound.} We apply Lemma \ref{modified maximum} to obtain the uniform $L^\infty$ estimates. As before, to estimate the uniform bound of the approximate solutions, we investigate a parabolic system derived from the Riemann invariants. We transform \eqref{iso-vis2} into the following form:
\begin{eqnarray*}
\left\{
\begin{aligned}
&w_t+\lambda^{\delta}_2 w_x=\varepsilon w_{xx}+2\varepsilon(w_x-2b(x))\frac{n_x}{n}-\varepsilon\frac{n_x^2}{n^2}-a(x)\frac{n-\delta}{n}+2b(x)\delta\frac{J}{n^2},\\
&z_t+\lambda^{\delta}_1 z_x=\varepsilon z_{xx}+2\varepsilon(z_x-2b(x))\frac{n_x}{n}+\varepsilon\frac{n_x^2}{n^2}-a(x)\frac{n-\delta}{n}+2b(x)\delta\frac{J}{n^2}.
\end{aligned}
\right.
\end{eqnarray*}
Set the control functions $(\phi,\psi)$ as follows:
\begin{equation*}
\begin{split}
&\phi=M+2\int_{-\infty}^x b(y)dy+2\varepsilon\|b'(x)\|_{L^\infty}t,\\
&\psi=M+2\int^{\infty}_x b(y)dy+2\varepsilon\|b'(x)\|_{L^\infty}t.
\end{split}
\end{equation*}
We remark that, for simplicity, we reuse the symbols $\phi,\psi$ here, although they are different from those in Section \ref{isentheorem}. Then we obtain
\begin{equation*}
\begin{split}
&\phi_t=2\varepsilon\|b'(x)\|_{L^\infty},~\phi_x=2b(x), ~\phi_{xx}=2b'(x);\\
&\psi_t=2\varepsilon\|b'(x)\|_{L^\infty},~\psi_x=-2b(x), ~\psi_{xx}=-2b'(x).
\end{split}
\end{equation*}
Let
$$\bar{w}=w-\phi,\quad\bar{z}=z+\psi.$$
A simple calculation yields
\begin{eqnarray}\label{wz}
\left\{
\begin{aligned}
\bar{w}_t+\left(\lambda^{\delta}_2-2\varepsilon\frac{n_x}{n}\right)\bar{w}_x=&\varepsilon\bar{w}_{xx}+2\varepsilon b'(x)-2\varepsilon \|b'(x)\|_{L^\infty}-\varepsilon\frac{n^2_x}{n^2}\\
&-2\left(\frac{J}{n}+\frac{n-\delta}{n}\right)b(x)-\frac{n-\delta}{n}a(x)+2b(x)\delta\frac{J}{n^2},\\
\bar{z}_t+\left(\lambda^{\delta}_{1}-2\varepsilon\frac{n_x}{n}\right)\bar{z}_x=&\varepsilon\bar{z}_{xx}+2\varepsilon b'(x)+2\varepsilon \|b'(x)\|_{L^\infty}+\varepsilon\frac{n^2_x}{n^2}\\
&-2\left(\frac{J}{n}-\frac{n-\delta}{n}\right)b(x)-\frac{n-\delta}{n}a(x)+2b(x)\delta\frac{J}{n^2}.
\end{aligned}
\right.
\end{eqnarray}
Note that
$$\frac{J}{n}=\frac{w+z}{2}=\frac{\bar{w}+\phi+\bar{z}-\psi}{2},$$
and then the system \eqref{wz} becomes
\begin{equation*}
\left\{
\begin{aligned}
&\bar{w}_t+\left(\lambda^{\delta}_2-2\varepsilon\frac{n_x}{n}\right)\bar{w}_x =\varepsilon\bar{w}_{xx}+a_{11}\bar{w} +a_{12}\bar{z}+R_1,\\
&\bar{z}_t+\left(\lambda^{\delta}_{1}-2\varepsilon\frac{n_x}{n}\right)\bar{z}_x =\varepsilon\bar{z}_{xx}+a_{21}\bar{w} +a_{22}\bar{z}+R_2
\end{aligned}
\right.
\end{equation*}
with
\begin{equation*}
\begin{split}
&a_{11}=-b(x)\frac{n-\delta}{n},\quad a_{12}=-b(x)\frac{n-\delta}{n}\leq0,\\
&a_{21}=-b(x)\frac{n-\delta}{n}\leq0,\quad a_{22}=-b(x)\frac{n-\delta}{n},
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
R_1=&2\varepsilon b'(x)-2\varepsilon \|b'(x)\|_{L^\infty}-\varepsilon\frac{n^2_x}{n^2}+(-a-b)\frac{n-\delta}{n}\\
&+b(x)\frac{n-\delta}{n}\left(2\int^{\infty}_x b(y)dy-2\int^x_{-\infty}b(y)dy-1\right),\\
R_2=&2\varepsilon b'(x)+2\varepsilon \|b'(x)\|_{L^\infty}+\varepsilon\frac{n^2_x}{n^2}+(-a+b)\frac{n-\delta}{n}\\
&+b(x)\frac{n-\delta}{n}\left(2\int^{\infty}_x b(y)dy-2\int^x_{-\infty}b(y)dy+1\right),
\end{split}
\end{equation*}
where we have used $n\ge \delta$. Since
$$\sup_{x\in{\mathbb R}}\left\{2\int^{\infty}_x b(y)dy-2\int^x_{-\infty}b(y)dy\right\}=2\|b\|_{L^1},$$
we can take $b(x)\in C^{1}({\mathbb R})$ such that
$$\|b(x)\|_{L^{1}}\leq\frac{1}{2}, \quad |a(x)|\leq b(x),$$
and then we have $R_1\leq0$ and $R_2\geq0.$ In fact, from our assumption on $a(x)$, we take $b(x)=a_0(x)$, which is \textit{the key reason for the condition \eqref{a02}}. By our conditions on the initial data, we can take $M$ large enough such that
$$\bar{w}(x, 0)\le0,~~ \bar{z}(x,0)\ge0.$$
Then Lemma \ref{modified maximum} yields
$$\bar{w}(x, t)\le0,~~ \bar{z}(x, t)\ge0,$$
which implies that
\begin{equation*}
\begin{split}
&w(x, t)\leq\phi(x, t)\leq M+2\|b\|_{L^1}+2\varepsilon \|b'\|_{L^{\infty}}t\leq C,\\
&z(x,t)\geq-\psi(x,t)\geq-M-2\|b\|_{L^1}-2\varepsilon \|b'\|_{L^{\infty}}t\geq- C,
\end{split}
\end{equation*}
where, for any fixed time $T,$ we choose $\varepsilon$ small such that
$$\varepsilon \|b'\|_{L^{\infty}}t\leq\varepsilon \|b'\|_{L^{\infty}}T\leq1.$$
Hence we obtain \eqref{isoth-bound}. From Steps 1 and 2, using the classical theory of quasilinear parabolic systems, we can complete the proof of Theorem \ref{thm-isothvis}.

\subsection{Convergence of approximate solutions}\label{convergence}
As stated in Section \ref{formula}, \eqref{iso1}-\eqref{ini1} is equivalent to \eqref{iso2}-\eqref{ini2}. Thus we only need to show that a subsequence of $(n^\varepsilon, J^\varepsilon)$ constructed in Section \ref{approximate-1} converges to a solution of \eqref{iso2}-\eqref{ini2} by verifying the conditions in Theorem \ref{framework}. We also divide the proof into three steps.

\textbf{Step 1.
$H^{-1}_{loc}$ compactness of the entropy pair.} We will verify the $H^{-1}_{loc}$ compactness of the entropy pair
\begin{equation*}
\eta(n^\varepsilon, J^\varepsilon)_t+q(n^\varepsilon, J^\varepsilon)_x
\end{equation*}
for some weak entropy $(\eta, q)$ of \eqref{iso2} with
\begin{equation*}
\eta=n^{\frac{1}{1-\xi^2}}e^{\frac{\xi}{1-\xi^2}\frac{J}{n}},~~ q=\left(\frac{J}{n}+\xi\right)\eta
\end{equation*}
for any fixed $\xi\in(-1, 1).$ It is easy to calculate that
\begin{equation*}
\begin{split}
&\eta_n=\frac{1}{1-\xi^2}\left(1-\xi\frac{J}{n}\right)\frac{\eta}{n},~~\eta_J=\frac{\xi}{1-\xi^2}\frac{\eta}{n},\\
&\eta_{nn}=\frac{\xi^2}{(1-\xi^2)^2}\left(1-2\xi\frac{J}{n}+\frac{J^2}{n^2}\right) n^{\frac{\xi^2}{1-\xi^2}-1}e^{\frac{\xi}{1-\xi^2}\frac{J}{n}},\\
&\eta_{nJ}=\frac{\xi^2}{(1-\xi^2)^2}\left(\xi-\frac{J}{n}\right)n^{\frac{\xi^2}{1-\xi^2}-1}e^{\frac{\xi}{1-\xi^2}\frac{J}{n}},\\
&\eta_{JJ}=\frac{\xi^2}{(1-\xi^2)^2}n^{\frac{\xi^2}{1-\xi^2}-1}e^{\frac{\xi}{1-\xi^2}\frac{J}{n}}.
\end{split}
\end{equation*}
Hence
$$\eta_{nn}\eta_{JJ}-\eta_{nJ}^2=\frac{\xi^4}{(1-\xi^2)^3}n^{\frac{2\xi^2}{1-\xi^2}-2}e^{\frac{2\xi}{1-\xi^2}\frac{J}{n}}>0.$$
This indicates that $\eta$ is strictly convex for any fixed $\xi\in(-1,1)$ with $\xi\neq0$. Then
\begin{equation*}
\begin{split}
&(n_x, J_x)\nabla^2\eta(n_x, J_x)^\top\\
=&\frac{\xi^2}{(1-\xi^2)^2}n^{\frac{1}{1-\xi^2}-2}e^{\frac{\xi}{1-\xi^2}\frac{J}{n}} \left[n_x^2+\left(\frac{J}{n}n_x-J_x\right)^2-2\xi n_x\left(\frac{J}{n}n_x-J_x\right)\right]\\
\geq&\frac{\xi^2}{(1-\xi^2)^2}\frac{\eta}{n^2} \left[(1-|\xi|)n_x^2+(1-|\xi|)\left(\frac{J}{n}n_x-J_x\right)^2\right].
\end{split}
\end{equation*}
Let $K\subset\Pi_T$ be any compact set, and choose $\varphi\in C_c^{\infty}(\Pi_T)$ such that $\varphi|_{K}=1$ and $0\leq\varphi\leq1.$ After multiplying \eqref{iso-vis2} by $\varphi\nabla\eta$ and integrating over $\Pi_T$, we obtain
\begin{equation*}
\begin{split}
&\varepsilon\int\int_{\Pi_T}\varphi(n_x, J_x)\nabla^2\eta(n_x, J_x)^\top dxdt\\
=&\int\int_{\Pi_T}\Big[-4\varepsilon n_xb-a(n-\delta)+2b\delta\frac{J}{n}+\delta\frac{n_x}{n}+\frac{\delta}{2}\Big(\frac{J^2}{n^2}\Big)_x\Big] \eta_J\varphi+\eta\varphi_t+\varepsilon\eta\varphi_{xx}\,dxdt.
\end{split}
\end{equation*}
Since
$$\left|\frac{J}{n}\right|\leq C+|\ln n| \quad\text{ and }\quad \eta\leq n^{\frac{1}{1-\xi^2}}e^{\frac{|\xi|}{1-\xi^2}(C-\ln n)}\leq Cn^{\frac{1-|\xi|}{1-\xi^2}},$$
it is easy to get
\begin{equation}\label{i3}
\Big|\big[-a(n-\delta)+2b\delta\frac{J}{n}\big]\eta_J\Big|\leq Cb.
\end{equation}
Besides,
\begin{equation}\label{i4}
|4\varepsilon n_xb\,\eta_J|\leq\varepsilon b|n_x|\frac{4|\xi|}{1-\xi^2}\frac{\eta}{n}\leq\frac{\varepsilon\xi^2(1-|\xi|)}{4(1-\xi^2)^2}\frac{\eta}{n}\frac{n_x^2}{n}+C\varepsilon\eta b^2.
\end{equation}
Moreover, we have
\begin{equation}
\begin{split}\label{i5}
&\left|\delta\frac{n_x}{n}\eta_J\right|\leq\delta\frac{|n_x|}{n}\frac{|\xi|}{1-\xi^2}\frac{\eta}{n}\leq\frac{\varepsilon\xi^2(1-|\xi|)}{4(1-\xi^2)^2}\frac{\eta}{n}\frac{n_x^2}{n}+C\frac{\delta^2}{\varepsilon}\frac{\eta}{n^2},\\
&\left|\frac{\delta}{2}\left(\frac{J^2}{n^2}\right)_x\eta_J\right|\leq\left|\delta\frac{J}{n}\frac{|\xi|}{1-\xi^2}\frac{\eta}{n}\left(\frac{J}{n}\right)_x\right|\leq\frac{\varepsilon\xi^2(1-|\xi|)}{4(1-\xi^2)^2}\eta\left|\left(\frac{J}{n}\right)_x\right|^2 +C\frac{\delta^2}{\varepsilon}\frac{\eta}{n^2}\frac{J^2}{n^2}.
\end{split}
\end{equation}
Taking $\delta=\varepsilon^3$ such that $\delta^2/\varepsilon\leq\delta^{5/3}\leq n^{5/3},$ and choosing small $|\xi|\neq0$, from the two facts
\begin{equation*}
\begin{split}
\frac{\eta}{n^2}&=n^{\frac{1}{1-\xi^2}-2}e^{\frac{\xi}{1-\xi^2}\frac{J}{n}}\leq Cn^{\frac{1}{1-\xi^2}-2-\frac{|\xi|}{1-\xi^2}}=Cn^{-\frac{2|\xi|+1}{1+|\xi|}},\\
\frac{\eta}{n^2}\frac{J^2}{n^2}&=n^{\frac{1}{1-\xi^2}-2}e^{\frac{\xi}{1-\xi^2}\frac{J}{n}}\frac{J^2}{n^2}\leq Cn^{\frac{1}{1-\xi^2}-2-\frac{|\xi|}{1-\xi^2}}(1+|\ln n|^2)\leq Cn^{-\frac{4|\xi|+1}{1+|\xi|}},
\end{split}
\end{equation*}
we get
\begin{equation}\label{i2}
\frac{\varepsilon}{4}\int\int_{\Pi_T}\varphi(n_x, J_x)\nabla^2\eta(n_x, J_x)^\top dxdt\leq C(\varphi)
\end{equation}
with constant $C(\varphi)$ depending on the $H^2(\Pi_T)$ norm of $\varphi$. Hence for small $|\xi|\neq0,$
\begin{equation}\label{L1estimate}
\varepsilon\frac{\eta}{n^2}n_x^2+\varepsilon\frac{\eta}{n^2}\left(\frac{J}{n}n_x-J_x\right)^2 =\varepsilon\frac{\eta}{n^2}n_x^2+\varepsilon\eta\left|\left(\frac{J}{n}\right)_x\right|^2 \in L^1_{loc}(\Pi_T).
\end{equation}
Now we investigate the dissipation of the entropy as follows:
\begin{equation*}
\begin{split}
\eta_t+q_x=&\varepsilon\eta_{xx}-\varepsilon(n_x, J_x)\nabla^2\eta(n_x, J_x)^\top+\Big[-a(n-\delta)+2b\delta\frac{J}{n}\Big]\eta_J\\
&-4\varepsilon n_xb\,\eta_J+\left(\delta\Big(\frac{J}{n}\Big)_x\eta_n+\Big[\delta\frac{n_x}{n}+\frac{\delta}{2}\Big(\frac{J^2}{n^2}\Big)_x\Big]\eta_J\right)\\
:=&\sum_{k=1}^5 I_k.
\end{split}
\end{equation*}
Combining \eqref{i3}, \eqref{i4}, \eqref{i5}, \eqref{i2}, we obtain that $I_2+I_3+I_4+I_5$ is bounded in $L^1_{loc}(\Pi_T),$ and then compact in $W_{loc}^{-1, \alpha}(\Pi_T)$ with some $1<\alpha<2$ by the Sobolev embedding theorem.
For $I_1,$ from \eqref{L1estimate}, for any $\varphi\in H^1_0(\Pi_T),$
\begin{equation*}
\begin{split}
&\left|\int\int_{\Pi_T}\varepsilon\eta_{xx}\varphi\, dxdt\right| =\left|\int\int_{\Pi_T}\varepsilon(\eta_n n_x+\eta_J J_x)\varphi_x\, dxdt\right|\\
\leq&\int\int_{\Pi_T}\frac{\varepsilon\eta|\varphi_x|}{1-\xi^2} \left|\frac{n_x}{n}-\frac{\xi}{n}\left(\frac{J}{n}n_x-J_x\right)\right|dxdt\\
\leq&\sqrt{\varepsilon}\left(\int\int_{\Pi_T}\frac{\eta\varphi_x^2}{n(1-\xi^2)}dxdt\right)^{\frac{1}{2}} \bigg[\left(\int\int_{\Pi_T}\frac{\varepsilon \eta n_x^2}{n^2}dxdt\right)^{\frac{1}{2}}\\
&~~~~~~+\left(\int\int_{\Pi_T}\frac{\varepsilon\eta}{n^2}\left(\frac{J}{n}n_x-J_x\right)^2dxdt\right)^{\frac{1}{2}} \bigg],
\end{split}
\end{equation*}
and thus $I_1$ is compact in $H^{-1}_{loc}(\Pi_T).$ Finally, we get
$$\eta_t+q_x \text{ is compact in } W^{-1, \alpha}_{loc}(\Pi_T) \text{ with } 1<\alpha<2.$$
Moreover,
$$q=\left(\frac{J}{n}+\xi\right)\eta\leq (C-\ln n+|\xi|)\eta\leq C+|\ln n|\,n^{\frac{1}{1-\xi^2}}e^{\frac{\xi}{1-\xi^2}\frac{J}{n}}\leq C,$$
and then
$$\eta_t+q_x \text{ is bounded in } W^{-1, \infty}_{loc}(\Pi_T).$$
Therefore, taking $|\xi|$ small, we conclude from Lemma \ref{murat} that
$$\eta_t+q_x \text{ is compact in } H^{-1}_{loc}(\Pi_T).$$

\textbf{Step 2. Convergence and consistency.} Since our approximate solutions satisfy all the conditions in Theorem \ref{framework}, applying Theorem \ref{framework} yields
$$(n^\varepsilon, J^\varepsilon)\to(n, J) \quad \text{ in } L^p_{loc}(\Pi_T), ~~p\geq1.$$
This implies that $(n, J)$ is a weak solution to the Cauchy problem \eqref{iso2}-\eqref{ini2}. Similarly to the previous argument, we can show that $(n, J)$ satisfies the energy inequality. Thus $(n, J)$ is an entropy solution. The proof of Theorem \ref{main} is completed.

\section{Appendix}
Here we provide the proof of Lemma \ref{modified maximum} for completeness.
\begin{proof}
Let
$$\bar{M}_0=\|p\|_{L^{\infty}({\mathbb R}\times[0,T])}+\|q\|_{L^{\infty}({\mathbb R}\times[0,T])}.$$
We define two new variables
$$\bar{p}=p-\xi,~~ \bar{q}=q+\xi,$$
where
$$\xi=\xi(x,t)=2\bar{M}_0\frac{\cosh x}{\cosh N}e^{\Lambda t},\quad N>0,$$
and $\Lambda>0$ will be determined later.
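For later use we also record the elementary identities
\begin{equation*}
\xi_t=\Lambda\,\xi,\qquad \xi_x=2\bar{M}_0\frac{\sinh x}{\cosh N}e^{\Lambda t},\qquad \xi_{xx}=\xi;
\end{equation*}
these are the identities behind the terms $\xi(-\Lambda+\varepsilon)$, $\xi(\Lambda-\varepsilon)$ and $\mp2\mu_i\bar{M}_0\frac{\sinh x}{\cosh N}e^{\Lambda t}$ appearing in the system for $(\bar{p},\bar{q})$ below.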
For $(i,j)=(1,2)$ or $(2,1),$ we write
\begin{equation*}
\begin{split}
a_{ij}(x,t,p,q)=&\,a_{ij}(x,t,\bar{p},\bar{q})\\
+&\left(\int_{0}^1 \frac{\partial a_{ij}}{\partial p}(x,t,\bar{p}+\tau\xi,\bar{q}-\tau\xi)d\tau-\int_{0}^1 \frac{\partial a_{ij}}{\partial q}(x,t,\bar{p}+\tau\xi,\bar{q}-\tau\xi)d\tau\right)\xi
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
&R_{i}(x,t,p,q,\zeta,\eta)=R_{i}(x,t,\bar{p},\bar{q},\zeta,\eta)\\
&+\left(\int_{0}^1 \frac{\partial R_{i}}{\partial p}(x,t,\bar{p}+\tau\xi,\bar{q}-\tau\xi,\zeta,\eta)d\tau-\int_{0}^1 \frac{\partial R_{i}}{\partial q}(x,t,\bar{p}+\tau\xi,\bar{q}-\tau\xi,\zeta,\eta)d\tau\right)\xi.
\end{split}
\end{equation*}
Denote
\begin{align*}
&\overline{a}_{ij}=a_{ij}(x,t,\bar{p},\bar{q}),\quad \overline{R}_{i}=R_{i}(x,t,\bar{p},\bar{q},\zeta,\eta),\\
&\overline{b}_{ij}=\int_{0}^1 \frac{\partial a_{ij}}{\partial p}(x,t,\bar{p}+\tau\xi,\bar{q}-\tau\xi)d\tau-\int_{0}^1 \frac{\partial a_{ij}}{\partial q}(x,t,\bar{p}+\tau\xi,\bar{q}-\tau\xi)d\tau,\\
&\overline{c}_{i}=\int_{0}^1 \frac{\partial R_{i}}{\partial p}(x,t,\bar{p}+\tau\xi,\bar{q}-\tau\xi,\zeta,\eta)d\tau-\int_{0}^1 \frac{\partial R_{i}}{\partial q}(x,t,\bar{p}+\tau\xi,\bar{q}-\tau\xi,\zeta,\eta)d\tau.
\end{align*}
Then we get a system for $(\bar{p}, \bar{q}),$
\begin{eqnarray*}
\left\{
\begin{aligned}
\bar{p}_t+\mu_1 \bar{p}_x=&\, \varepsilon \bar{p}_{xx}+a_{11}\bar{p}+\overline{a}_{12}\bar{q}+\overline{b}_{12}\bar{q}\xi+\bar{R}_1+\bar{c}_1\xi-2\mu_1\bar{M}_0\frac{\sinh x}{\cosh N}e^{\Lambda t}\\
&+\xi(-\Lambda+\varepsilon+a_{11}-a_{12}),\\
\bar{q}_t+\mu_2 \bar{q}_x=&\, \varepsilon \bar{q}_{xx}+\overline{a}_{21}\bar{p}+a_{22}\bar{q}+\overline{b}_{21}\bar{p}\xi+\bar{R}_2+\bar{c}_2\xi+2\mu_2\bar{M}_0\frac{\sinh x}{\cosh N}e^{\Lambda t}\\
&+\xi(\Lambda-\varepsilon+a_{21}-a_{22}).
\end{aligned}
\right.
\end{eqnarray*}
Note that for any $\Lambda>0$, $\bar{p}(x, t)< 0$ and $\bar{q}(x, t)> 0$ hold for any $|x|\ge N$. Next we show the following.
\begin{align*}
\textbf{Claim}: &\text{ There exists } \Lambda=\Lambda(\bar{M}_0) \text{ such that } \\
&\bar{p}(x, t)\le0 \text{ and } \bar{q}(x, t)\ge0 \text{ for } x\in(-N, N),\ 0\leq t\leq s_{\ast}=\frac{1}{\Lambda}.
\end{align*}
To this end, let
$$A=\{t\in[0,s_{\ast}]\,|\,\text{ there exists } x\in [-N,N] \text{ such that }\bar{p}(x, t)>0\text{ or } \bar{q}(x, t)<0\}.$$
We shall prove that the set $A$ is empty by contradiction. In fact, if $A$ is not empty, let $t_*=\inf A>0$; then there exists $|x_*|\le N$ such that $\bar{p}(x_*, t_*)=0$ or $\bar{q}(x_*, t_*)=0.$ Without loss of generality, we assume $\bar{p}(x_*, t_*)=0.$ Then $\bar{p}(x, 0)\leq0,~\bar{q}(x, 0)\geq0$ for $|x|\leq N$, and for $0\leq t<t_*$,
$$\bar{p}(\pm N, t)<0,~\bar{q}(\pm N, t)>0,$$
and thus $\bar{p}(x, t)$ attains its maximum over $[-N, N]\times[0, t_*]$ at the point $(x_*, t_*)$.
We have
$$\bar{p}_x(x_*, t_*)=0,~\bar{p}_{xx}(x_*, t_*)\leq0,~\bar{p}_t(x_*, t_*)\geq0, ~\bar{q}(x_*, t_*)\geq 0.$$
Note that at the point $(x_*, t_*)$ we have $\overline{a}_{12}\leq0$, $\bar{R}_1\leq0$, and $\Lambda t_{\ast}\leq \Lambda s_{\ast}=1.$ Moreover, for any $\tau\in[0,1],$
$$|\bar{p}+\tau\xi|\leq|p|+2\xi\leq\bar{M}_0+4\bar{M}_0e\leq C_1(\bar{M}_0),$$
$$|\bar{q}-\tau\xi|\leq|q|+2\xi\leq\bar{M}_0+4\bar{M}_0e\leq C_1(\bar{M}_0).$$
Therefore, $|\overline{b}_{12}|\leq C_2(\bar{M}_0)$, $|\overline{b}_{21}|\leq C_2(\bar{M}_0)$, $|\overline{c}_{1}|\leq C_3(\bar{M}_0)$, and $|\overline{c}_{2}|\leq C_3(\bar{M}_0).$ A direct computation yields that at the point $(x_*, t_*)$,
\[
\begin{aligned}
\bar{p}_t+\mu_1\bar{p}_x \leq&\,\overline{a}_{12}\bar{q}+\overline{b}_{12}\bar{q}\xi+\bar{R}_1+\bar{c}_1\xi+\xi(-\Lambda+\varepsilon+a_{11}-a_{12}+|\mu_1|)\\
\leq &\,\xi\left(-\Lambda+\varepsilon+a_{11}-a_{12}+|\mu_1|+C_2(\bar{M}_0)\bar{M}_0+2C_2(\bar{M}_0)\bar{M}_0e+2C_3(\bar{M}_0)\bar{M}_0e\right).
\end{aligned}
\]
Then, choosing
$$\Lambda:=2\varepsilon+\sum_{i=1}^{2}\|\mu_i\|_{L^{\infty}}+\sum_{i,j=1}^2 \|a_{ij}\|_{L^{\infty}}+C_2(\bar{M}_0)\bar{M}_0(2e+1)+2C_3(\bar{M}_0)\bar{M}_0e,$$
we get $\bar{p}_t+\mu_1\bar{p}_x<0.$ This contradicts
$$\bar{p}_t+\mu_1\bar{p}_x\geq 0 \text{ at }(x_{\ast},t_{\ast}).$$
Hence $A$ is empty and the \textbf{Claim} holds. Letting $N$ tend to infinity, we obtain that $p(x,t)\leq 0$ and $q(x,t)\geq0$ for all $x\in{\mathbb R}$, $0\leq t\leq s_{\ast}.$ From the above analysis, we have proved that the set
$$\Omega=\{t\in[0,T]\,|\,p(x,s)\leq0,\,\,q(x,s)\geq0 \ \forall x\in{\mathbb R},\, 0\leq s\leq t\}$$
is an open set. It is obvious that $\Omega$ is a closed subset of $[0,T].$ Therefore, $\Omega=[0,T]$. This completes the proof of Lemma \ref{modified maximum}.
\end{proof}
\smallskip

\section*{Acknowledgments}
Wentao Cao's research is supported by ERC Grant Agreement No.~724298. Feimin Huang is partially supported by the National Center for Mathematics and Interdisciplinary Sciences, AMSS, CAS, and NSFC Grant Nos.~11371349 and 11688101. Difan Yuan is supported by the China Scholarship Council, No.~201704910503. The authors would like to thank Professor Naoki Tsuge for valuable comments and suggestions.
\bigskip
\begin{thebibliography}{99}
\bibitem{chen} G. Q. Chen, {Convergence of the Lax-Friedrichs scheme for isentropic gas dynamics (III)}. Acta Math. Sci. {\bf 6} (1986), 75-120.
\bibitem{Chen2} G. Q. Chen, J. Glimm, {Global solutions to the compressible Euler equations with geometrical structure}. Comm. Math. Phys. {\bf 180} (1996), 153-193.
\bibitem{Chen3} G. Q. Chen, {Remarks on DiPerna's paper ``Convergence of the viscosity method for isentropic gas dynamics''}. Proc. Amer. Math. Soc. {\bf 125} (1997), 2981-2986.
\bibitem{Chen2018} G. Q. Chen, M. R. I. Schrecker, {Vanishing viscosity approach to the compressible Euler equations for transonic nozzle and spherically symmetric flows}. Arch. Rational Mech. Anal. {\bf 19} (2018), 591-626.
\bibitem{Courant} R. Courant, K. O. Friedrichs, {Supersonic Flow and Shock Waves}. Springer, New York, (1962).
\bibitem{Ding} X. X. Ding, G. Q. Chen and P. Z. Luo, {Convergence of the Lax-Friedrichs scheme for isentropic gas dynamics (I)}. Acta Math. Sci. {\bf 5} (1985), 415-432.
\bibitem{DingDing} X. X. Ding, G. Q. Chen and P. Z. Luo, {Convergence of the Lax-Friedrichs scheme for isentropic gas dynamics (II)}. Acta Math. Sci. {\bf 5} (1985), 433-472.
\bibitem{Ding1} X. X. Ding, G. Q. Chen and P. Z. Luo, {Convergence of the fractional step Lax-Friedrichs scheme and Godunov scheme for isentropic system of gas dynamics}. Comm. Math. Phys. {\bf 121} (1989), 63-84.
\bibitem{Diperna} R. J. DiPerna, {Convergence of the viscosity method for isentropic gas dynamics}. Comm. Math. Phys. {\bf 91} (1983), 1-30.
\bibitem{Embid} P. Embid, J. Goodman and A. Majda, {Multiple steady states for 1-D transonic flow}. SIAM J. Sci. Stat. Comput. {\bf 5} (1984), 21-41.
\bibitem{Glaz} H. Glaz, T. Liu, {The asymptotic analysis of wave interactions and numerical calculations of transonic nozzle flow}. Adv. Appl. Math. {\bf 5} (1984), 111-146.
\bibitem{Glimm1984} J. Glimm, G. Marshall and B. Plohr, {A generalized Riemann problem for quasi-one-dimensional gas flow}. Adv. Appl. Math. {\bf 5} (1984), 1-30.
\bibitem{Huang20171} F. Huang, T. Li and D. Yuan, {Global solutions to isentropic compressible Euler equations with spherical symmetry}. arXiv:1711.04430, 2017.
\bibitem{HuangWang} F. Huang, Z. Wang, {Convergence of viscosity solutions for isentropic gas dynamics}. SIAM J. Math. Anal. {\bf 34} (2003), 595-610.
\bibitem{Huang20172} F. Huang, H. Yu and D. Yuan, {Weak entropy solutions to one-dimensional unipolar hydrodynamic model for semiconductor devices}. Z. Angew. Math. Phys. {\bf 69} (2018).
\bibitem{Lu1997} C. Klingenberg and Y. Lu, {Existence of solutions to hyperbolic conservation laws with a source}. Comm. Math. Phys. {\bf 187} (1997), 327-340.
\bibitem{Lions1} P. L. Lions, B. Perthame and E. Tadmor, {Kinetic formulation of the isentropic gas dynamics and p-systems}. Comm. Math. Phys. {\bf 163} (1994), 415-431.
\bibitem{Lions2} P. L. Lions, B. Perthame and P.
Souganidis, {Existence and stability of entropy solutions for the hyperbolic systems of isentropic gas dynamics in Eulerian and Lagrangian coordinates}. Comm. Pure Appl. Math. {\bf 49} (1996), 599-638.
\bibitem{Liu1979} T. Liu, {Quasilinear hyperbolic systems}. Comm. Math. Phys. {\bf 68} (1979), 141-172.
\bibitem{Liu1982} T. Liu, {Nonlinear stability and instability of transonic flows through a nozzle}. Comm. Math. Phys. {\bf 83} (1982), 243-260.
\bibitem{Liu1987} T. Liu, {Nonlinear resonance for quasilinear hyperbolic equation}. J. Math. Phys. {\bf 28} (1987), 2593-2602.
\bibitem{lu1} Y. Lu, {Some results for the general system of the isentropic gas dynamics}. Diff. Eqs. {\bf 43} (2007), 130-138.
\bibitem{Marcati} P. Marcati, R. Natalini, {Weak solutions to a hydrodynamic model for semiconductors: the Cauchy problem}. Proc. R. Soc. Edinb. {\bf 125A} (1995), 115-131.
\bibitem{Marcati2} P. Marcati, R. Natalini, {Weak solutions to a hydrodynamic model for semiconductors and relaxation to the drift-diffusion equation}. Arch. Rational Mech. Anal. {\bf 129} (1995), 129-145.
\bibitem{Murat} F. Murat, {Compacit\'{e} par compensation}. Ann. Scuola Norm. Sup. Pisa Sci. Fis. Mat. {\bf 5} (1978), 489-507.
\bibitem{Smoller} J. Smoller, {Shock Waves and Reaction-Diffusion Equations}. Springer, New York, (1983).
\bibitem{Tsuge3} N. Tsuge, {Existence of global solutions for unsteady flow in a Laval nozzle}. Arch. Ration. Mech. Anal. {\bf 205} (2012), 151-193.
\bibitem{Tsuge4} N. Tsuge, {Isentropic gas flow for the compressible Euler equation in a nozzle}. Arch. Ration. Mech. Anal. {\bf 209} (2013), 365-400.
\bibitem{Tsuge5} N. Tsuge, {Global entropy solutions to the compressible Euler equations in the isentropic nozzle flow for large data: Application of the generalized invariant regions and the modified Godunov scheme}. Nonlinear Anal. Real World Appl. {\bf 37} (2017), 217-238.
\bibitem{Wangzejun} D. Wang, Z. Wang, {Large BV solutions to the compressible isothermal Euler-Poisson equations with spherical symmetry}. Nonlinearity {\bf 19} (2006), 1985-2004.
\bibitem{Wangzejun2} D. Wang, Z. Wang, {Global entropy solution for the system of isothermal self-gravitating isentropic gases}. Proc. Roy. Soc. Edinburgh Sect. A {\bf 138} (2008), 407-426.
\bibitem{Wang} J. Wang, X. Li and J. Huang, {Lax-Friedrichs difference approximations to isentropic equations of gas dynamics}. Syst. Sci. Math. Sci. {\bf 1} (1988), no. 2, 109-118.
\bibitem{Whitham} G. B. Whitham, {Linear and Nonlinear Waves}. Wiley, New York, (1974).
\end{thebibliography}
\end{document}
\begin{document} \title{Non-parametric Bayesian modeling of complex networks} \author{Mikkel N. Schmidt and Morten M{\o}rup} \date{Section for Cognitive Systems, DTU Informatics, Technical University of Denmark} \maketitle \begin{abstract} Modeling structure in complex networks using Bayesian non-parametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This paper provides a gentle introduction to non-parametric Bayesian modeling of complex networks: Using an infinite mixture model as running example we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced non-parametric models for complex networks can be derived and point out relevant literature. \end{abstract} \section{Introduction} We are surrounded by complex networks. From the networks of cell interaction in our immune system to the complex network of neurons communicating in our brain, our cells signal to each other to coordinate the functions of our body. We live in cities with complex power and water systems and these cities are linked by advanced transportation systems. We interact within social circles and our computers are connected through the Internet forming the World Wide Web. To understand the structure of these large systems of biological, physical, social, and virtual networks, there is a great need to be able to model them mathematically~\cite{borner2007network}. Complex networks are studied in several different fields from computer science and engineering to physics, biology, sociology, and psychology. ``Network science is an emerging, highly interdisciplinary research area that aims to develop theoretical and practical approaches and techniques to increase our understanding of natural and manmade networks''~\cite{borner2007network}. Network science can be considered ``the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena''~\cite{national2005Network}. To understand the many large-scale complex networks we sample and store today, there is a growing demand for advanced mathematical and statistical models that can account for the structure in these systems. The modeling aims are twofold; to provide a comprehensible description (i.e., descriptive modeling) and to infer unobserved properties (i.e., predictive modeling). In particular, a statistical analysis is useful when the focus lies beyond single node properties and local interactions but on the characteristics and behaviors of the entire system~\cite{borner2007network,leskovec2008dynamics,sporns2010networks}. A complex network can be represented as a graph $G(V,E)$ with vertices (nodes) $V$ and edges (links) $E$ where an edge defines a connection between two of the vertices. In the following we denote the number of nodes in the graph by $N$ and the number of links by $L$. Graphs are often represented in terms of their corresponding adjacency matrix $X$ defined such that $x_{i,j}=1$ if there exists a link between node $i$ and $j$ and $x_{i,j}=0$ otherwise. Common types of graphs include undirected, directed, and bipartite graphs, and these can in turn be weighted such that each link has an associated strength (see Figure~\ref{fig:NetworkTypes}). 
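To make the adjacency-matrix representation concrete, a minimal Python/NumPy sketch is given below; the graph itself is an arbitrary toy example chosen only for illustration.
\begin{verbatim}
import numpy as np

# An arbitrary small undirected graph on N = 4 nodes, given as a list of edges.
N = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Adjacency matrix X: x_{i,j} = 1 if there is a link between nodes i and j.
X = np.zeros((N, N), dtype=int)
for i, j in edges:
    X[i, j] = 1
    X[j, i] = 1          # undirected graph: X is symmetric

L = len(edges)           # number of links
degrees = X.sum(axis=1)  # number of edges attached to each node
\end{verbatim}
For a directed graph one would simply drop the symmetrisation, and for a weighted graph one would store the weight $w$ instead of $1$.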
Complex networks are commonly stored in a sparse representation as an ``edge list''; a set of $L$ 3-tuples $(i,j,w)$ where $w$ is the weight of the link from node $i$ to node $j$. Using this representation, the storage requirement for a network grows linearly in the number of edges of the graph. \begin{figure*} \caption{Illustration of undirected, directed, weighted, and bipartite graphs. An undirected graph consists of a set of nodes and a set of edges. In directed graphs, edges point from one node to another. Edges in a weighted graph have an associated value, e.g. representing the strength of the relation. A bipartite graph represents a set of relations between two disjoint sets of nodes. Non-parametric Bayesian models can be formulated for all of these types of network structures.} \label{fig:NetworkTypes} \end{figure*} \subsection{Network characteristics} An important practice in network science is to examine different characteristics or metrics computed from an observed network. The characteristics that have been examined include the distribution of the number of edges for each vertex (the degree distribution), the tendency of vertices to cluster together in tightly knit groups (the clustering coefficient), the average number of links required to move from one vertex to another (the characteristic path length), and many more (see Figure~\ref{fig:NetworkCharacteristics}, and for a detailed list of studied network characteristics see \cite{Rubinov2010}). To assess the importance of these characteristics, they can be contrasted with the properties of some class of random graphs, in order to discover significant properties which cannot be explained by pure chance. The simplest class of random graphs used for comparison is the so-called Erd\H{o}s-R\'{e}nyi graphs, in which pairs of nodes connect independently at random with a given connection probability $\phi$, \begin{equation} x_{i,j}\sim \mathrm{Bernoulli}(\phi), \quad \phi\in[0,1]. \end{equation} Amongst the findings is that many real networks exhibit ``scale free'' and ``small-world'' properties. A network is said to be scale free if its degree distribution follows a power law~\cite{Barabasi1999}, in contrast to Erd\H{o}s-R\'{e}nyi random graphs which have a binomial degree distribution. The power law degree distribution indicates that many nodes have very few links whereas a few nodes (hubs) have a large number of links. A network is said to be small-world if it has local connectivity and global reach such that any node can be reached from any other node in a small number of steps along the edges. This is associated with a large clustering coefficient and a small characteristic path length \cite{Watts1998}, and suggests that generic organizing principles and growth mechanisms may give rise to the structure of many existing networks \cite{Watts1998,Barabasi1999,sporns2010networks,Eguiluz2005,borner2007network,leskovec2008dynamics}. Using analytic tools from network science, studies have demonstrated that many complex networks are far from random \cite{Watts1998,Barabasi1999,sporns2010networks,Eguiluz2005}. \begin{figure} \caption{Illustration of three important network characteristics: the degree distribution, clustering coefficient, and characteristic path length.} \label{fig:NetworkCharacteristics} \end{figure} \subsection{Exponential random graphs} To understand the processes that govern the formation of links in complex networks, statistical models consider some class of probability distributions over networks.
A prominent and very rich, general class of models for networks is the exponential random graph family \cite{frank1986markov,robins2007recent,wasserman2005introduction}, also known as the $p^*$ model. In the exponential random graph model the probability of an observed network takes the form of an exponential family distribution, \begin{equation} p(X|\theta) = \frac{1}{\kappa(\theta)}\exp\left\{\theta^\top s(X)\right\}, \end{equation} where $\theta$ is a vector of parameters, $s(X)$ is a vector of sufficient statistics, and $\kappa(\theta)$ is the normalizing constant that ensures that the distribution sums to unity. In general, the sufficient statistic can depend on three different types of quantities: \begin{description} \item[Exogenous predictors:] In addition to the network, side information is often available which can aid in modeling the network structure. Including such observed covariates on the node or dyad level allows the analysis of networks and side information in a single model. \item[Network statistics:] Statistics computed on the network itself, such as counts of different network motifs, can be included. This could be quantities such as the number of edges, triangles, two-stars, etc. Since these terms depend on the graph, they introduce a self-dependency in the model, significantly complicating the inference procedure. There is virtually no limit to which terms could potentially be included, and how to choose a suitable set of terms for a specific network domain is an open problem. \item[Latent variables:] The network can be endowed with a latent structure that characterizes the network generating process. The latent variables could for example be continuous or categorical variables on the node level or a latent hierarchical structure. The latent variables are most often jointly inferred with the model parameters. One reason for including latent variables is to aid in the understanding of the model: For example, if each network node is given a categorical latent variable, this corresponds to a clustering of the network nodes. \end{description} The parameters in exponential random graphs are usually estimated using maximum likelihood, which can be non-trivial since the normalizing constant usually cannot be explicitly evaluated. While exponential random graph models are very flexible and work well for predicting links, they have the following important shortcomings: \begin{description} \item[Model complexity:] It can be difficult to determine the suitable model complexity: Which network statistics to include, how many latent dimensions or categories to include, etc. To address this issue, different approaches have been taken, including imposing sparsity on the parameters and using model order selection tools such as BIC and AIC. \item[Computational complexity:] In general, the computational complexity of inference in exponential random graph models grows with the size of the network, $\mathcal{O}(N^2)$, rather than with the number of edges, $\mathcal{O}(L)$, making exact large-scale analysis infeasible. There are, however, certain special cases for which the complexity of inference scales linearly in the number of edges, which we will discuss further in the sequel.
\item[Inferential complexity:] When only exogenous predictors and latent variables are included in the model, inference is fairly straightforward; however, when \emph{network statistics} are included, inference can be challenging, involving either heuristics such as pseudo-likelihood estimation or complicated Markov chain Monte Carlo methods~\cite{robins2007recent,robins2007introduction}. \end{description} \subsection{Non-parametric Bayesian network models} In the following we present a number of recent network modeling approaches based on Bayesian non-parametrics, which can all be seen as extensions or special cases of the exponential random graph model. In non-parametric modeling, the structure of the model is not fixed, and thus the model complexity can adapt as needed according to the complexity of the data. This forms a principled framework for addressing the first issue (model complexity) mentioned above. With respect to the second issue (computational complexity), it turns out that many of these non-parametric Bayesian models can be constructed such that their computational complexity is linear in the number of links, allowing these methods to scale to large networks. While it certainly is possible to include network statistics in non-parametric Bayesian network models, Bayesian non-parametrics does not address the third issue (inferential complexity), which remains an open area of research. The focus of the remainder of this paper is twofold:~i)~To provide a comprehensible tutorial on the simplest non-parametric Bayesian network model: the infinite relational model~\cite{kemp2006learning,xu2006learning}. ~ii)~To give a brief overview of current advances in non-parametric Bayesian network models. \begin{figure*} \caption{A brief introduction to Bayesian modeling introducing the concepts needed in this paper.} \label{sec:Bayes} \label{eq:likelihood} \label{eq:prior} \label{eq:posterior} \label{fig:Bayes} \end{figure*} \section{Tutorial on the Infinite relational model} In the following we give a tutorial introduction to the infinite relational model~\cite{kemp2006learning,xu2006learning}, which is perhaps the simplest non-parametric Bayesian network model. We will derive the necessary Bayesian non-parametric machinery from first principles by taking limits of a parametric Bayesian model. Understanding the details involved in deriving this simple model serves as a foundation for understanding other, more complicated non-parametric constructions. Further, we go through the details involved in inference by Markov chain Monte Carlo, and show how a Gibbs sampler can be implemented in a few lines of computer code. Finally, we demonstrate the model on three network datasets and compare with other models from the exponential random graph model family. \subsection{The infinite relational model} The infinite relational model is a latent variable model where each node is assigned to a category, corresponding to a clustering of the network nodes. The number of clusters is learned from data as part of the statistical inference. As a starting point, we introduce a Bayesian parametric version of the model, which we later extend to the non-parametric setting. For readers unaccustomed to Bayesian modeling, we provide a short introduction, see Figure~\ref{fig:Bayes}.
\subsubsection{A parametric Bayesian stochastic blockmodel} \label{sec:BayesStochasticBM} A simple and very powerful approach to modeling structure in a complex network is to use a \emph{mixture model}, leading to a Bayesian version of the so-called \emph{stochastic blockmodel} \cite{Nowicki2001}. In a mixture model, the observations are assumed to be distributed according to a mixture of $K$ components belonging to some parametric family. Conditioned on knowing which mixture components generated each datum, the observations are assumed independent. In a mixture model for network data, each node belongs to a single mixture component, and since each edge is associated with two nodes, its likelihood will depend on two components. Thus, the likelihood of the network will take the following form, \begin{equation} \label{eq:mixture} p(X|\theta) = \prod_{(i,j)} p(x_{i,j}|z_i,z_j,\phi) \end{equation} where the product ranges over all node pairs, and the parameters are given by $\theta=\big\{\{z_i\}_{i=1}^{N},\phi\big\}$ where $z_i$ indicates which mixture component the $i$th node belongs to and $\phi$ denotes any further parameters. In the simplest setting, each term in the likelihood could be a Bernoulli distribution (a biased coin flip), \begin{align} p(x_{i,j}|z_i,z_j,\phi) &= \mathrm{Bernoulli}(\phi_{z_i,z_j})\\ &= (\phi_{z_i,z_j})^{x_{i,j}}(1-\phi_{z_i,z_j})^{1-x_{i,j}}, \end{align} such that $\phi_{k,\ell}$ denotes the probability of an edge between two nodes in groups $k$ and $\ell$. To finish the specification of the model, we must define prior distributions for the mixture component indicators $z$ as well as the link probabilities $\phi$. Starting with $\phi$, a natural choice would be independent $\mathrm{Beta}$ distributions for each pair of components, \begin{align} p(\phi_{k,\ell}) &= \mathrm{Beta}(a,b) \\ &= \frac{1}{\mathrm{B}(a,b)}(\phi_{k,\ell})^{a-1}(1-\phi_{k,\ell})^{b-1}, \end{align} where the parameters can, for example, be set to $a=b=1$ to yield a uniform distribution. A natural choice for $z$ would be a $K$-dimensional categorical distribution, \begin{equation} \label{eq:zprior} p(z_i=k|\pi) = \pi_k \end{equation} parameterized by $\pi=\{\pi_k\}_{k=1}^{K}$ where $\sum_{k=1}^{K}\pi_k=1$. How, then, should $\pi$ be chosen? We could for example set each of these parameters to a fixed value, e.g. $\pi_k=\tfrac{1}{K}$, but this would be a strong prior assumption specifying that the mixture components have the same number of members on average. A more flexible option would be to define a hierarchical prior, where $\pi$ is generated from a Dirichlet distribution, \begin{align} p(\pi) &= \mathrm{Dirichlet}(\alpha)\\ &= \frac{1}{\mathrm{B}(\alpha)}\prod_{k=1}^{K}\pi_k^{\alpha_k-1}, \end{align} where $\mathrm{B}(\alpha)$ is the multinomial beta function, which can be expressed using the gamma function, \begin{equation} \mathrm{B}(\alpha)=\frac{\prod_{k=1}^{K}\Gamma(\alpha_k)}{\Gamma(\sum_{k=1}^{K}\alpha_k)}. \end{equation} Since each component is a priori equally likely, we select the concentration parameters to be equal to each other, $\alpha_1=\dots=\alpha_K=\tfrac{A}{K}$, such that the scale of the distribution is $\sum_{k=1}^{K}\alpha_k=A$. This results in a joint prior over $z$ and $\pi$ given by \begin{align} p(z,\pi) &= \left[\prod_{i=1}^{N}p(z_i|\pi)\right]\times p(\pi|\alpha)\\ &= \frac{1}{\mathrm{B}(\alpha)}\prod_{k=1}^{K}\pi_k^{n_k+\alpha_k-1}, \end{align} where $n_k$ denotes the number of $z_i$'s with the value $k$.
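To make the generative process concrete, the following Python/NumPy sketch simulates a small network from this parametric model; the graph size, number of components, and hyperparameter values are arbitrary choices for illustration, and the graph is taken to be undirected without self-links.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K = 30, 3              # nodes and mixture components (arbitrary)
A, a, b = 1.0, 1.0, 1.0   # Dirichlet scale and Beta parameters

pi = rng.dirichlet(np.full(K, A / K))   # pi ~ Dirichlet(A/K, ..., A/K)
z = rng.choice(K, size=N, p=pi)         # z_i ~ Categorical(pi)

phi = rng.beta(a, b, size=(K, K))       # phi_{k,l} ~ Beta(a, b)
phi = np.triu(phi) + np.triu(phi, 1).T  # symmetric for an undirected graph

X = np.zeros((N, N), dtype=int)         # x_{i,j} ~ Bernoulli(phi_{z_i, z_j})
for i in range(N):
    for j in range(i + 1, N):
        X[i, j] = X[j, i] = rng.binomial(1, phi[z[i], z[j]])
\end{verbatim}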
\subsubsection{Nuisance parameters} As we are not particularly interested in the mixture component probabilities $\pi$ (they can be considered nuisance parameters) we can compute the effective prior over $z$ by marginalizing over $\pi$, which has a closed form expression due to the conjugacy between the Dirichlet and categorical distributions (i.e., the posterior distribution of $\pi$ has the same functional form as the prior), \begin{align} p(z) &= \int p(z,\pi) \mathrm{d}\pi = \frac{\mathrm{B}(\alpha+n)}{\mathrm{B}(\alpha)} \\ &= \frac{\Gamma(A)}{\Gamma(A+N)} \prod_{k=1}^{K} \frac{\Gamma(\alpha_k+n_k)}{\Gamma(\alpha_k)}. \label{eq:p(z)} \end{align} The resulting effective prior distribution is known as a multivariate P\'{o}lya distribution. Furthermore, the link probabilities $\phi$ can also be considered nuisance parameters, and can also be marginalized analytically due to the conjugacy between the Beta and Bernoulli distributions, \begin{align} p(X|z) &= \int p(X|z,\phi)p(\phi)\mathrm{d} \phi\\ &= \prod_{(k,\ell)}\frac{\mathrm{B}(m_{k,\ell}+a,\bar m_{k,\ell}+b)}{\mathrm{B}(a,b)}, \label{eq:p(x|z)} \end{align} where the product ranges over all pairs of components and $m_{k,\ell}$ and $\bar m_{k,\ell}$ denote the number of links and non-links between nodes in component $k$ and $\ell$ respectively. \subsubsection{An infinite number of components} \label{sec:NPBayes} In the previous section we specified a \emph{parametric} Bayesian mixture model for complex networks. In the following we move to the non-parametric setting in which the number of mixture components is allowed to be countably infinite. First, consider what happens when the number of components is much larger than the number of nodes in the graph. In that situation, many of the components will not have any nodes assigned to them; in fact, no more than $N$ components can be non-empty, corresponding to the extreme case where each node has a component of its own. To handle the situation with an infinite number of components, we cannot represent all components explicitly; but, as we show in the following, we only need an explicit representation of the finite number of non-empty components. As defined so far, the model has $K$ \emph{labelled} mixture components. This means that if we for example have $N=5$ nodes and $K=4$ components, we assign a separate probability to, say, the configurations $\{1,2,1,4,2\}$ and $\{3,4,3,2,4\}$ even though they correspond to the same clustering of the network nodes. A better choice is to specify the probability distribution directly over the \emph{equivalence class} of partitions of the network nodes. Since we have $K$ labels in total to choose from, there are $K$ possible labellings for the first component, $K-1$ for the second, etc., resulting in a total of \begin{equation} \frac{K!}{(K-\bar K)!} \end{equation} labellings corresponding to the same partitioning, where $\bar K$ is the number of non-empty components. Thus, defining a parameter $\bar z$ that holds the partitioning of the network nodes, we have \begin{align} p(\bar z) = \frac{K!}{(K-\bar K)!} \frac{\Gamma(A)}{\Gamma(A+N)} \prod_{k=1}^{K} \frac{\Gamma(\alpha_k+n_k)}{\Gamma(\alpha_k)}. \label{eq:CRPPrior} \end{align} Since $\bar z$ represents partitions rather than labels it can be finitely represented.
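Before taking the limit, it is worth noting that the two collapsed marginals in Eqs.~(\ref{eq:p(z)}) and~(\ref{eq:p(x|z)}) are straightforward to evaluate numerically using log-gamma and log-beta functions. A minimal MATLAB sketch (our notation; it assumes an assignment vector \texttt{z} with labels $1,\dots,K$, a symmetric 0/1 adjacency matrix \texttt{X} with zero diagonal, and hyperparameters \texttt{A}, \texttt{a}, \texttt{b}):
\begin{verbatim}
% Minimal sketch: evaluate log p(z) (the Polya prior) and log p(X|z)
% (the collapsed likelihood) for a given assignment vector z.
K = max(z);  N = length(z);
Z = full(sparse((1:N)', z(:), 1, N, K));    % N x K indicator matrix
n = sum(Z,1)';                              % component sizes n_k
alpha = A/K;
logPz = gammaln(A) - gammaln(A+N) ...
      + sum(gammaln(alpha+n) - gammaln(alpha));

M = Z'*X*Z;  M(1:K+1:end) = diag(M)/2;      % links between/within components
Pairs = n*n';  Pairs(1:K+1:end) = n.*(n-1)/2;
Mbar = Pairs - M;                           % non-links between/within components
U = triu(true(K));                          % each unordered pair (k,l) once
logPXz = sum(betaln(M(U)+a, Mbar(U)+b) - betaln(a,b));
\end{verbatim}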
We can now simply let the number of components go to infinity by computing the limit of the prior distribution for $\bar z$, \begin{align} \lim_{K\rightarrow \infty} p(\bar z) = \frac{\Gamma(A)A^{\bar K}}{\Gamma(A+N)} \prod_{k=1}^{\bar K} \Gamma(n_k). \end{align} The details involved in computing this limit can be found in \cite{Green2001} and \cite{Neal1992}. The limiting stochastic process is known as a \emph{Chinese restaurant process} (CRP) \cite{aldous1985} (for an introduction to the CRP, see \cite{Gershman2011}). Compactly, we may write \begin{align} \bar z & \sim \mathrm{CRP}(A). \end{align} \begin{figure} \caption{Example of graphs generated according to the \emph{infinite relational model}.} \label{fig:fig4} \end{figure} \subsubsection{Summary of the generative model} In summary, the generative process for the infinite relational model can be expressed as follows: \begin{align} \bar z & \sim \mathrm{CRP}(A),\\ \phi_{k,\ell} & \sim \mathrm{Beta}(a,b),\\ x_{i,j} & \sim \mathrm{Bernoulli}(\phi_{z_i,z_j}).\label{eq:irm_likelihhod_final} \end{align} The network nodes are partitioned according to a Chinese restaurant process; a probability of linking between each pair of node clusters is simulated from a Beta distribution; and each link in the network is generated according to a Bernoulli distribution depending on which clusters the pair of nodes belongs to. Equivalently, in the notation of exponential random graph models, the likelihood in Eq.~(\ref{eq:irm_likelihhod_final}) can be expressed as \begin{equation} \label{eq:IRM_as_ERGM} p(X|z,\phi) = \frac{1}{\kappa(z,\phi)}\exp\left[\theta(\phi)^\top s(X,z)\right] \end{equation} where the sufficient statistics are the counts of links between each pair of clusters, $s(X,z) = \{m_{k,\ell}\}$, and the natural parameter is the log odds of links between each pair of clusters, $\theta(\phi)=\left\{\log\tfrac{\phi_{k,\ell}}{1-\phi_{k,\ell}}\right\}$. \subsection{Inference} Having specified the model in terms of the joint distribution, the next step is to examine the posterior distribution which is given as \begin{align} p(\bar z|X) = \frac{p(X|\bar z)p(\bar z)}{\displaystyle\sum_{\bar z} p(X|\bar z)p(\bar z)}. \end{align} Here, the numerator is easy to compute as the product of Eq.~(\ref{eq:p(x|z)}) and (\ref{eq:CRPPrior}); however, the denominator is difficult to handle as it involves an elaborate summation over all possible node partitionings. Consequently, some form of approximate inference is needed. There are two major paradigms in approximate inference: Variational and Monte Carlo inference. The idea in variational inference is to approximate the posterior distribution with a simple, tractable distribution which is fitted to the posterior by minimizing some criterion such as the information divergence~\cite{Blei2006}. In Monte Carlo approximation the idea is to generate a number of random samples from the posterior distribution and approximate intractable integrals and summations by empirical averages based on the samples. In the following, we focus on Monte Carlo inference. In particular, we review the Gibbs sampler for the infinite relational model.
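Before turning to the details of the sampler, note that the generative model summarized above is straightforward to simulate from, which is useful both for building intuition and for the posterior predictive checks discussed later. A minimal MATLAB sketch (ours; hyperparameter values are arbitrary, and the helper \texttt{crp} implements the sequential seating scheme of the Chinese restaurant process):
\begin{verbatim}
% Minimal sketch: forward-simulate an undirected network from the IRM.
N = 50;  A = 1;  a = 1;  b = 1;
z   = crp(N, A);                        % partition of the nodes
K   = max(z);
phi = betaincinv(rand(K), a, b);        % Beta(a,b) draws via the inverse CDF
phi = triu(phi) + triu(phi,1)';         % symmetric link probabilities
X   = rand(N) < phi(z,z);               % Bernoulli links
X   = triu(X,1);  X = X + X';           % undirected, no self-loops

function z = crp(N, A)
% Sequential seating: join an existing cluster w.p. proportional to its
% size, or open a new cluster w.p. proportional to A.
z = zeros(N,1);  z(1) = 1;
for i = 2:N
    n = accumarray(z(1:i-1), 1);
    p = [n; A];  p = p/sum(p);
    z(i) = find(rand < cumsum(p), 1);
end
end
\end{verbatim}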
\subsubsection{Gibbs sampling} In Gibbs sampling, each variable is iteratively sampled from its conditional distribution given all other variables; repeating this process, the samples will eventually approximate the posterior distribution. We iteratively sample the partition assignments, $\bar z_n$, from their conditional distribution, \begin{align} p(\bar z_n = k|\bar z^{\setminus n},X), \end{align} where $\bar z^{\setminus n}$ denotes all partition assignments except $\bar z_n$. An expression for this conditional distribution can be found by considering which terms in the likelihood and prior will change when node $n$ is assigned to a different partition. For the prior in Eq.~(\ref{eq:CRPPrior}) we have \begin{align} p(\bar z_n = k|\bar z^{\setminus n}) \propto \left\{\begin{array}{ll} n^-_k & k \text{ is an existing partition,} \\ A & k \text{ is a new partition,} \end{array} \right. \label{eq:GibbsPrior} \end{align} where $n^-_k$ is the number of nodes associated with component $k$ not counting node $n$. Adding node $n$ to an existing component increases the argument of the corresponding Gamma function by one, effectively multiplying the prior by $n^-_k$, whereas adding the node to a new cluster increases $\bar K$ by one, effectively multiplying the prior by $A$. For the likelihood in Eq.~(\ref{eq:p(x|z)}), adding node $n$ to partition $k$ effectively multiplies the likelihood by \begin{align} \prod_\ell \frac{\mathrm{B}\left(m^{\setminus n}_{k,\ell}\!+\!r_{n,\ell}\!+\!a,\ \bar m^{\setminus n}_{k,\ell}\!+\!n_\ell\!-\!r_{n,\ell}\!+\!b\right)} {\mathrm{B}\left(m^{\setminus n}_{k,\ell}\!+\!a,\ \bar m^{\setminus n}_{k,\ell}\!+\!b\right)}, \label{eq:GibbsLikelihood} \end{align} where $m^{\setminus n}_{k,\ell}$ and $\bar m^{\setminus n}_{k,\ell}$ denote the number of links and non-links between nodes in component $k$ and $\ell$, not counting any links from node $n$, and $r_{n,\ell}$ is the number of links from node $n$ to any nodes in component $\ell$. In order to perform Gibbs sampling we can now simply consider each node in turn; for each partition (including a new, empty partition) compute the product of Eq.~(\ref{eq:GibbsPrior}) and (\ref{eq:GibbsLikelihood}); normalize to yield a categorical distribution over partitions; and sample a new $\bar z_n$ according to this distribution. The final algorithm is summarized in Fig.~\ref{fig:GibbsAlgorithm}. The result after running the Gibbs sampler for $2T$ iterations is a set of samples of $\bar z$, where usually the first half is discarded as burn-in. This yields a final ensemble $\{\bar z^{(t)}: t\in 1,\dots,T\}$ approximately sampled from the posterior. \begin{figure*} \caption{MATLAB\ code implementing the infinite relational model. $X$ is the symmetric adjacency matrix, $T$ is the number of Gibbs sweeps, and $a$, $b$, and $A$ are the hyperparameters. The code illustrates the computations involved in the Gibbs sampler, but is not efficient since it recomputes all the needed link counts in each iteration.} \label{fig:GibbsAlgorithm} \end{figure*}
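To complement the figure, the following is a compact and deliberately unoptimized MATLAB sketch of a single Gibbs sweep (it is ours, not the code of Fig.~\ref{fig:GibbsAlgorithm}, and like that code it recomputes all counts from scratch; the helper \texttt{logjoint} evaluates the logarithm of the product of the limiting CRP prior and the collapsed likelihood of Eq.~(\ref{eq:p(x|z)}), which is proportional to the conditional given by Eqs.~(\ref{eq:GibbsPrior}) and (\ref{eq:GibbsLikelihood})):
\begin{verbatim}
% Minimal sketch: one Gibbs sweep over the cluster assignments z
% (column vector), given the symmetric 0/1 adjacency matrix X and
% hyperparameters A, a, b.
N = length(z);
for i = 1:N
    zrest = z;  zrest(i) = [];
    [~,~,zrest] = unique(zrest);            % relabel remaining clusters 1..Kbar
    Kbar = max(zrest);
    logp = zeros(Kbar+1,1);
    for k = 1:Kbar+1                        % existing clusters plus one new cluster
        zprop   = [zrest(1:i-1); k; zrest(i:end)];
        logp(k) = logjoint(X, zprop, A, a, b);
    end
    p = exp(logp - max(logp));  p = p/sum(p);
    z = [zrest(1:i-1); find(rand < cumsum(p), 1); zrest(i:end)];
end

function lp = logjoint(X, z, A, a, b)
% log of the CRP prior (the K -> infinity limit above) plus the
% collapsed Beta-Bernoulli likelihood.
N = length(z);  K = max(z);
Z = full(sparse((1:N)', z(:), 1, N, K));  n = sum(Z,1)';
lp = gammaln(A) + K*log(A) - gammaln(A+N) + sum(gammaln(n));
M = Z'*X*Z;  M(1:K+1:end) = diag(M)/2;
Pairs = n*n';  Pairs(1:K+1:end) = n.*(n-1)/2;
U = triu(true(K));
lp = lp + sum(betaln(M(U)+a, Pairs(U)-M(U)+b) - betaln(a,b));
end
\end{verbatim}
Because the full collapsed joint is re-evaluated for every candidate assignment, this sketch trades efficiency for readability; an efficient implementation would instead update the counts $m_{k,\ell}$, $\bar m_{k,\ell}$, and $n_k$ incrementally, as described next.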
\subsubsection{Computational complexity} In the algorithm outlined in Figure~\ref{fig:GibbsAlgorithm} it can be observed that there are two loops: One over the $T$ simulated samples and one over the $N$ nodes in the network. In each run of the inner loop, a node is assigned to a cluster by the Gibbs sampler. In the following we treat the number of clusters $K$ as a constant (although of course it will vary depending on the network data), and examine how the computational complexity of the algorithm depends on the number of nodes and edges in the network. In the code in Figure~\ref{fig:GibbsAlgorithm} the variables \texttt{M0}, \texttt{M1} and \texttt{m}, which hold the counts of non-links, links, and nodes, are re-computed in each iteration. In a more sensible implementation, these quantities would be precomputed and efficiently updated during the Gibbs sampling. Evaluating the probability of assigning a node to each cluster then requires the computation of the vector \texttt{r} which holds the count of links from node $n$ to each of the clusters. The time complexity of this computation is on the order of the node degree. Looping over the nodes gives a total time complexity of $\mathrm{O}(L)$ where $L$ is the number of edges in the graph. Calculating the probabilities of assigning each of the $N$ nodes to the clusters requires $2K^2N$ evaluations of the (logarithm of the) Beta function, so the time complexity of this computation is $\mathrm{O}(N)$. As a result, since in general $L>N$, the total computational complexity of one sweep of the Gibbs sampler for the IRM model is $\mathrm{O}(L)$. Figure~\ref{fig:fig6} demonstrates that this linear scaling is observed in practice when analyzing networks of varying numbers of nodes and edges. For comparison, Monte Carlo maximum likelihood inference in exponential random graph models based on endogenous network statistics requires the simulation of random networks from the ERGM distribution, which is inherently an $\mathrm{O}(N^2)$ operation. We should note, though, that in practice we would not expect the total running time of the IRM to scale linearly in the number of edges, since the number of clusters will most likely increase with the size of the network and since the number of required Gibbs sweeps might also go up. \begin{figure} \caption{Experiment demonstrating that the computational complexity grows linearly in both the number of nodes $N$ and edges $L$ for the IRM model. The graphs used in the experiments are generated with $K=5$ communities of equal size and $\phi=\phi_c/N$ where $\phi_c$ is kept constant in the experiments ensuring that the number of edges $L$ grows linearly with the number of nodes $N$ in the generated networks. The Gibbs sampler used in the experiment was implemented to pre-compute \texttt{M0} \label{fig:fig6} \end{figure} \subsection{Checking model fit} Once an approximation of the posterior distribution has been obtained, we wish to check the implications of the model. This can include computing the posterior distribution of important quantities of interest, evaluating how well the model fits the data, and making predictions about unobserved data. \subsubsection{Computing posterior quantities} Say we are interested in some function $f(\bar z)$ that depends on the model. We can now compute the posterior distribution of this quantity, \begin{align} p\left(f(\bar z)\right) &= \sum_{\bar z^\prime} \delta(f(\bar z)=f(\bar z^\prime)) p(\bar z^\prime|X)\\ &\approx \frac{1}{T}\sum_{t=1}^T \delta(f(\bar z)=f(\bar z^{(t)})), \end{align} approximated by an empirical average over the posterior samples. For example, the approach can be used to compute the posterior distribution over the number of components in the mixture model or other quantities of interest.
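As an example of such a posterior quantity, the distribution over the number of non-empty components can be estimated directly from the stored samples. A minimal MATLAB sketch (assuming the samples are kept in a cell array \texttt{zs}, which is our own notation):
\begin{verbatim}
% Minimal sketch: empirical posterior over the number of components,
% given stored Gibbs samples zs{1},...,zs{T} of the assignment vector.
Ks = cellfun(@(zt) numel(unique(zt)), zs);   % number of clusters per sample
edges = 0.5:(max(Ks)+0.5);
pK = histcounts(Ks, edges) / numel(Ks);      % empirical p(K | X)
bar(1:max(Ks), pK);
xlabel('number of components'); ylabel('posterior probability');
\end{verbatim}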
\subsubsection{Link prediction} Missing data is easily handled in the Bayesian framework, simply by leaving out the terms in the likelihood corresponding to unobserved links. If we observe only a part of the network and are interested in predicting the presence or absence of an unobserved link between two nodes, we can simply compute the posterior predictive distribution of the missing link, \begin{align} p(x_{i,j}|X) &= \sum_{\bar z} p(x_{i,j}|\bar z,X) p(\bar z|X) \\ & \approx \frac{1}{T}\sum_{t=1}^T p(x_{i,j}|\bar z^{(t)},X). \end{align} Here $X$ denotes the observed part of the network, and $\bar z^{(t)}$ is simulated from the posterior distribution in which only the observed part of the network is conditioned on. Inserting $p(x_{i,j}|\bar z,X) = \int p(x_{i,j}|\theta,\bar z)p(\theta|\bar z,X)d\theta$ yields \begin{equation} p(x_{ij}|X)\approx \mathrm{Bernoulli}(\rho_{i,j}), \label{eq:LinkPredictionBernoulli} \end{equation} where \begin{equation} \rho_{i,j} = \frac{1}{T}\sum_t\frac{m_{z^{(t)}_i,z^{(t)}_j}+a} {m_{z^{(t)}_i,z^{(t)}_j}+\bar{m}_{z^{(t)}_i,z^{(t)}_j}+a+b}. \end{equation} Predicting missing links can be used to compare different models: A number of links can be excluded when fitting the models, which can then be compared by assessing their ability to predict the held-out links. Since the links in a network are highly correlated and because many networks exhibit a highly imbalanced distribution of links and non-links, care must be taken in choosing a hold-out test set in an appropriate way. If the test set is chosen to balance the number of links and non-links, its distribution will not correspond to that of the full network, which makes the absolute link prediction results difficult to interpret. Thus, although indicative of a model's predictive performance, this approach is perhaps best suited for the relative comparison of different models. If, on the other hand, several examples of full networks are available, a whole network can be used as test data, making the absolute link prediction results directly interpretable. \subsubsection{Posterior predictive checking} Finally, we might be interested in examining how well our model describes the data to assess if the model is appropriate for the data at hand or if a more suitable model should be constructed. A principled approach to achieving this is posterior predictive checking. First, an ensemble of replicated networks is generated from the posterior predictive distribution, \begin{align} p(X^{\mathrm{rep}}|X) = \sum_{\bar z}p(X^{\mathrm{rep}}|\bar z,X) p(\bar z|X), \end{align} which as before can be approximated using samples of $\bar z$ simulated from the posterior together with Eq.~(\ref{eq:LinkPredictionBernoulli}). Now, the idea is to compare characteristics of the observed network, such as the degree distribution, clustering coefficient, and characteristic path length, with the posterior predictive distribution of these properties, approximated by the empirical distribution over the ensemble of replicated networks. If the model fits well, the observed characteristic of the network should be quite likely under the posterior predictive distribution, whereas a large discrepancy indicates model mismatch. Posterior predictive checking is useful for model \emph{criticism}, i.e., for exploring lack of fit as opposed to testing whether the model is correct. Discovering network characteristics for which the model does not fit the data well can inspire the development of more sophisticated models; however, even a simple model which does not fit the data in all respects can be useful.
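For the IRM, the replicated networks are easy to generate: for each stored sample $\bar z^{(t)}$, link probabilities are drawn from their Beta posterior and a network is simulated from Eq.~(\ref{eq:irm_likelihhod_final}). A minimal MATLAB sketch of a check of one characteristic, the standard deviation of the node degree (our notation; \texttt{zs} is a cell array of posterior samples, \texttt{X} the observed symmetric adjacency matrix with zero diagonal, and \texttt{a}, \texttt{b} the Beta hyperparameters):
\begin{verbatim}
% Minimal sketch: posterior predictive check of the degree standard deviation.
stat = @(G) std(sum(G,2));                   % characteristic to be checked
obs  = stat(X);                              % value for the observed network
rep  = zeros(numel(zs),1);
for t = 1:numel(zs)
    z = zs{t};  K = max(z);  N = length(z);
    Z = full(sparse((1:N)', z(:), 1, N, K));
    M    = Z'*X*Z;                 M(1:K+1:end)    = diag(M)/2;
    Pair = Z'*(ones(N)-eye(N))*Z;  Pair(1:K+1:end) = diag(Pair)/2;
    Phi  = betaincinv(rand(K), M+a, Pair-M+b);   % draw phi from its Beta posterior
    Phi  = triu(Phi) + triu(Phi,1)';
    Xrep = rand(N) < Phi(z,z);                   % replicated network
    Xrep = triu(Xrep,1);  Xrep = Xrep + Xrep';
    rep(t) = stat(Xrep);
end
ppp = mean(rep >= obs);          % one summary: a posterior predictive p-value
\end{verbatim}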
\subsection{Directed, weighted, bipartite, and multiple networks} The infinite relational model readily extends to other types of graphs including directed, weighted and bipartite networks as well as multiple networks on the same set of nodes. These extensions can be arrived at by modifying the model parametrization and the observational model (the likelihood function) as well as making appropriate changes to the priors. The process of formulating the joint distribution and deriving a Markov chain Monte Carlo procedure for inference closely follows the steps we have taken for the basic infinite relational model described in the previous sections. The extensions described below can also be combined, for example to model a set of directed, bipartite networks with edge weights. \subsubsection{Directed networks} In a directed network, the links have an associated direction, so that they point from one node to another. A directed network can be represented by an asymmetric adjacency matrix, and the directionality of links between groups can be modelled through the parameter $\phi$ by allowing asymmetric interactions between the groups, such that $\phi_{k,\ell}\neq \phi_{\ell,k}$. This doubles the number of link probability parameters $\phi$. The rest of the model is unaffected, except for the likelihood which must now be evaluated not for each \emph{pair} of nodes but for each \emph{ordered pair} of nodes. This extension of the infinite relational model assigns different probabilities to links in each direction between each pair of clusters, but has only a single parameter for the link probability within each cluster---thus, directionality is not modelled within clusters. \subsubsection{Bipartite networks} A bipartite network is defined as a set of links between two disjoint sets of nodes, possibly of different cardinality. The adjacency matrix for a bipartite network can thus be non-square. We can then use two independent Chinese restaurant processes to model the clustering of the two sets of nodes, \begin{equation} \bar z \sim \mathrm{CRP}(A_z),\quad \bar w \sim \mathrm{CRP}(A_w) \end{equation} and change the likelihood to (cf.\ Eq.~(\ref{eq:irm_likelihhod_final})) \begin{align} x_{i,j} \sim \mathrm{Bernoulli}(\phi_{z_i,w_j}). \end{align} This latter parameterization is also useful for the modeling of directed networks when the groupings of the nodes may be different for the rows and columns of the adjacency matrix. \subsubsection{Weighted networks} In a weighted network, each edge has a (scalar) weight associated with it. Depending on the type of weights, the Bernoulli likelihood can be changed to some other suitable distribution: For example, if the weights are positive integers~\cite{MMMNS_NECO2012}, a Poisson distribution could be employed, \begin{align} x_{i,j} \sim \mathrm{Poisson}(\lambda_{z_i,z_j}), \end{align} where $\lambda$ is the rate parameter for the edge weights, playing the role of $\phi$ in the Bernoulli model, cf.\ Eq.~(\ref{eq:irm_likelihhod_final}). As a prior over $\lambda$, the typical choice is a Gamma distribution, replacing the Beta priors for $\phi$. If the weights are real numbers~\cite{Tue_MLSP2012} an observational model based on a Normal distribution might be appropriate, \begin{align} x_{i,j}\sim \mathrm{Normal}(\mu_{z_i,z_j},\sigma_{z_i,z_j}^2). \end{align} Here, we have two sets of parameters, $\mu$ and $\sigma^2$, denoting the means and variances of the edge weights between each pair of groups. Again, appropriate priors for $\mu$ and $\sigma^2$ should be selected.
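As an illustration of the bipartite variant described above, the following MATLAB sketch forward-simulates a non-square adjacency matrix (our notation; \texttt{crp} is the same sequential-seating helper used in the earlier simulation sketch, and the link probabilities are drawn from a uniform $\mathrm{Beta}(1,1)$ for simplicity):
\begin{verbatim}
% Minimal sketch: simulate a bipartite network with separate CRP
% partitions for the two node sets (rows and columns).
Nr = 30;  Nc = 40;  Az = 1;  Aw = 1;
zr = crp(Nr, Az);  zc = crp(Nc, Aw);   % partitions of the two node sets
phi = rand(max(zr), max(zc));          % Beta(1,1) link probabilities
X = rand(Nr, Nc) < phi(zr, zc);        % Bernoulli links, non-square adjacency

function z = crp(N, A)
z = zeros(N,1);  z(1) = 1;
for i = 2:N
    n = accumarray(z(1:i-1), 1);
    p = [n; A];  p = p/sum(p);
    z(i) = find(rand < cumsum(p), 1);
end
end
\end{verbatim}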
\subsubsection{Multiple networks} Sometimes the data consists of multiple observations of networks on the same set of nodes (see \cite{kemp2006learning,miller2009nonparametric,MMMNS_NECO2012,Andersen2012}). The only required change to the model is that the likelihood should be evaluated as the product of the likelihoods for each observed network. It can then either be assumed that the clustering structure as well as the link probabilities are equal across the multiple networks, that the clustering structure is shared but the link probabilities are only shared according to an additional clustering of the multiple graphs, or that the clustering is shared but each network has an individual set of link probabilities $\phi$. When the link probabilities are analytically marginalized this leads to three different expressions for the marginal likelihood. \subsection{Experimental evaluation} In the following we conduct a series of experimental evaluations with the infinite relational model, highlighting some of its properties and comparing it with other models. \subsubsection{Analysis of three example networks} To demonstrate the non-parametric Bayesian modeling framework in practice, we analyzed three real networks: \begin{LaTeXdescription} \item[Zachary's Karate Club:] Zachary's Karate club is an undirected unweighted network of friendships between 34 members of a karate club at a US university in the 1970s \cite{zachary1977information}. A total of 74 undirected links between the members of the Karate club are observed. In the analysis the standard IRM model was used. \item[Connectome of Caenorhabditis elegans:] The only complete connectome of an organism recorded to date is the directed, integer-weighted network of the 8,799 connections between the 302 neurons of Caenorhabditis elegans. The network has been compiled in \cite{Watts1998}. In the analysis the weighted IRM model with a Poisson likelihood and Gamma priors was used. \item[Drugs and side effects:] The drugs and side effects network is a bipartite network on marketed medicines and their recorded adverse drug reactions extracted from public documents and package inserts. The network currently consists of 996 drugs and 4,199 side effects with 100,049 unweighted links between drugs and side effects \cite{kuhn2010side}. In the analysis the bipartite IRM model was used. \end{LaTeXdescription} These three networks in turn represent three important complex network application domains within social science, neuroscience and bioinformatics. The parameters of the models were inferred by Markov chain Monte Carlo sampling such that 250 iterations were used as burn-in for the sampler and 250 iterations for drawing samples from the posterior. To improve mixing, the data was analyzed using 5 randomly initialized runs. In addition to a Gibbs sampler (as described in Fig.~\ref{fig:GibbsAlgorithm}) the so-called split-merge sampler described in~\cite{kemp2006learning,Jain2004} was also employed. The hyper-parameters for the Beta distribution were set to $a=b=1$. The posterior distribution of the number of components was computed. For assessing model fit by prediction of missing links, we excluded 10\% of links and an equivalent number of non-links in the analysis of the Zachary's Karate Club data, and 5\% of links and an equivalent number of non-links in the two larger Connectome and Drugs and side effects networks.
For posterior predictive checking, we stored every 25th posterior sample and generated 20 replicated networks for each sample for each of the five random initializations. From the ensemble of these networks the distributions of the network characteristics \emph{degree mean, degree standard deviation, characteristic path length and clustering coefficient} were calculated and compared to the values of these quantities for the actual network. The results of the modeling are given in Fig.~\ref{fig:ZacharyCelegansDrugs}. The figure illustrates the network as well as a permutation of the network's adjacency matrix. The nodes are color-coded according to the partition given by the sample with highest posterior likelihood across the five random initializations. From the permuted adjacency matrices it can be seen that the nodes of the networks have been grouped into clusters that share similar patterns of interactions, defining regions of network homogeneity. These blocks are color-coded according to the expected value of the corresponding group interactions using a logarithmic gray scale. On the right, the posterior distribution of the number of components is shown as well as the model's performance in predicting held-out links. The link prediction performance is quantified by the area under curve (AUC) of the receiver operator characteristic (ROC)~\cite{miller2009nonparametric}. In addition, the results of the posterior predictive checking of the model's ability to account for the mean and standard deviation of the degree distribution, characteristic path length and clustering coefficient are given. Since the average degree (i.e., the overall link density) is explicitly modelled by the IRM, this part of the posterior predictive check serves only as a sanity check: the IRM should by definition get it right except for a small bias due to the prior. From these results it can be seen that the IRM model accounts well for all the considered characteristics in the Zachary Karate Club network but that it poorly accounts for the degree standard deviation, the clustering coefficient, and the characteristic path length of the connectome of C. elegans. As expected, the average degree falls within the lower tails of the simulated distributions for all the estimated models, but they underestimate the standard deviation of the node degree of both the connectome and the drugs and side effects networks. This highlights a deficiency of the IRM model, namely that it does not explicitly model the degree distribution. While the IRM is adept at identifying blocks of homogeneous network regions, with the $\phi$ (nuisance) parameter specifying the density of each of these blocks, it does not explicitly account for microscale properties such as triangles and node degree. Hence, the clustering coefficient, characteristic path length and the standard deviation of the degree distribution are not well accounted for by the model, as is evident in the posterior predictive checks. Despite these limitations, the infinite relational model does account well for mesoscale structure in the networks, as quantified by its ability to predict links: for all three networks the infinite relational model is able to predict links significantly better than random guessing. Apart from being able to predict links, the IRM model makes the structure of the networks substantially more comprehensible by reducing the complex network of pairwise interactions to a much smaller number of groups (defined by $\bar z$) and their interactions (defined by $\phi$).
The IRM can therefore be considered an efficient framework for compressing a large complex network into a smaller network of consistent interaction patterns between groups of nodes, which can substantially facilitate the understanding of mesoscale network patterns. For example, in the analysis of Zachary's Karate Club network, the infinite relational model reveals six groups of club members, including two large groups and two singletons (the posterior has support for five to eight groups, so these other configurations should also be considered in the interpretation of the results). It is known from the literature~\cite{zachary1977information} that the karate club later split into two factions, corresponding to the two large groups, led by the president and the instructor, who are the two singletons. \begin{figure*} \caption{Infinite relational model analysis of three networks: Social relations in Zachary's karate club, neural network of Caenorhabditis elegans, and relations between drugs and side effects. Networks are shown as graphs (20 pct. of links shown for C. elegans and 10 pct. shown for Drugs and side effects) as well as adjacency matrices. The posterior distribution of the number of components as well as the ROC curve indicating performance on predicting missing links are shown (shaded regions indicate twice the standard deviation of the mean across the separate runs). The posterior predictive distributions of node degree (mean and standard deviation), clustering coefficient, and characteristic path length are shown with vertical lines indicating the values for the observed networks.} \label{fig:ZacharyCelegansDrugs} \end{figure*} \subsubsection{Comparison with other models} Next, we compare the IRM model to several other methods on a set of social networks derived from a study of intra-organizational relations: \begin{LaTeXdescription} \item[Intra-organizational relations:] This set of undirected networks \cite{Cross_Parker_2004} consists of two types of relations defined on the same set of nodes corresponding to employees in a consultancy company. Links in the first network signify employees who interact, whereas links in the second network signify that either of the employees thinks that the other has expertise in an area important to her. The networks were generated by thresholding and symmetrizing the original directed weighted networks~\cite{Cross_Parker_2004}. The two networks are highly correlated since employees would be expected to interact frequently with colleagues with important expertise. \end{LaTeXdescription} We used the first of the two networks for training and examined the model fit by assessing the posterior predictive distribution of the node degree distribution. We fit an IRM model as well as two other non-parametric Bayesian network models, the infinite multiple membership relational model (IMRM) and the Bayesian community detection model (BCD), which are discussed further in the sequel. These models were fit using MCMC with $10,000$ rounds of Gibbs sampling where the first half of the samples were discarded for burn-in. Furthermore, we fit an exponential random graph model (ERGM) using the network statistics \emph{sociality} and \emph{gwdegree}~\cite{Morris:Handcock:Hunter:2007:JSSOBK:v24i04} as well as a latent position and cluster model (ERGMM)~\cite{Krivitsky:Handcock:2007:JSSOBK:v24i05} using a latent space of dimension four and six latent clusters (varying these parameters gave similar results).
To compare how well the models fit the data we plotted the posterior predictive distribution of the degree distribution (see Figure~\ref{fig:ComparisonWithERGM}). The results show that the two most flexible models, the ERGM and the IMRM, fit the data very well in terms of reproducing the degree distribution. The fit of the IRM and BCD models, which are both simple latent cluster models, is less good: both models appear to underestimate the number of nodes with a high degree, i.e., employees interacting with more than 15 colleagues. The ERGMM model on the other hand appears to overestimate the number of nodes with degrees around 15--20. Next, we compared the models' predictive performance by evaluating their ability to predict links in the second network (see Figure~\ref{fig:ComparisonWithERGM}). Here, all models except the ERGM performed on par, suggesting that the inclusion of latent variables in the model is beneficial for this task. \begin{figure} \caption{Comparison of five network models: The plots show the network's observed degree distribution as well as the posterior predictive 95\% and 50\% intervals (shaded areas) for each of the models. The plot on the lower right shows the fraction of correctly predicted links/non-links when the models are trained on one network and used to predict links in another related network. } \label{fig:ComparisonWithERGM} \end{figure} \section{Review of non-parametric Bayesian network models} In the previous section we discussed the infinite relational model, which is the simplest example of a non-parametric Bayesian latent variable model for complex networks. In that model the latent variable is categorical, introducing a clustering of the network nodes; however, many other types of non-parametric Bayesian network models have been proposed in which the latent variables take other forms. Most of these can be classified as latent class, latent feature, or latent hierarchy models. In the following, we review a number of recent non-parametric Bayesian network models: We present their generative model and discuss the underlying modeling assumptions, but omit the specific details involved in inference and model checking. \subsection{Latent class models} In latent class models each node is assumed to belong to one class, and the CRP is used as a non-parametric distribution over these latent classes. The infinite relational model is the most prominent example of non-parametric latent class models for complex networks. This can be attributed to the fact that the model can capture multiple types of network structures. Contrary to other network modeling approaches such as spectral clustering~\cite{Luxburg2007} and modularity~\cite{Newman2006}, groups are defined not only in terms of their internal properties but in particular by how they interact with the remaining parts of the network. A group may therefore even be defined as having no links between the nodes within the group, as illustrated by the fourth (light blue) group of the Zachary Karate Club network in Fig.~\ref{fig:ZacharyCelegansDrugs}. Communities in the IRM model can in turn be defined as clusters with high within-cluster density relative to their between-cluster density; interactions between groups are accounted for by the off-diagonal elements of the $\phi$ matrix, while hierarchical structures form a structured system of interactions between the elements of the $\phi$ matrix, see also Fig.~\ref{fig:OtherModels}.
The IRM model can be considered a compression of a complex network into a smaller graph, defined by $\phi$, that accounts for the connectivity between the components. If the number of components equals the number of vertices of the graph, the model recovers the actual graph (disregarding the influence of the priors) and nothing is learned about the structure of the network. As such, the IRM model can adjust its complexity, interpolating between the full graph and the Erd\H{o}s-R\'{e}nyi graph that corresponds to an IRM model with only one component. Bayesian non-parametrics, i.e.\ the Chinese restaurant process, here admits inference over the hypothesis space encompassing all models between these two extremes in order to find plausible accounts of block structure in networks. \subsubsection{Restrictions on cluster interactions} Although the IRM model is very flexible in terms of the structure it is able to account for, specialized non-parametric latent class methods have been proposed that aim at extracting particular types of network structure. These models can be characterized by the restrictions which they impose on the between-class interactions $\phi$. In \cite{hofman2008} the $\phi$ matrix is constrained to only include two parameters, a within-group link probability $\rho_w$ and a between-group link probability $\rho_b$ such that \begin{equation} \phi_{k,\ell}=\left\{\begin{array}{ll} \rho_w & \text{if}\ k=\ell, \\ \rho_b & \text{otherwise}. \end{array}\right. \end{equation} In \cite{MMMNS_NECO2012} the within-group link probabilities are individual for each group but between-group probabilities are shared for all combinations of groups, \begin{equation} \phi_{k,\ell}=\left\{\begin{array}{ll} \rho_\ell & \text{if}\ k=\ell, \\ \rho_b & \text{otherwise}. \end{array}\right. \end{equation} \subsubsection{Bayesian community detection} Both of the models mentioned above are inspired by the notion of communities defined as \begin{quotation} ``the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters.''~\cite{fortunato2010community} \end{quotation} This definition is used explicitly in \cite{MMMNS_NECO2012}, forming the Bayesian community detection (BCD) method. BCD is based on the following non-parametric generative model that strictly enforces community structure by constraining the off-diagonal elements of the $\phi$ matrix to be smaller than (a scaled version of) the diagonal elements. The generative model for BCD is given by \begin{align} \bar{z}&\sim \mathrm{CRP}(A),\\ \gamma_k& \sim \mathrm{Beta}(\varphi,\varphi),\\ \phi_{k,\ell}& \sim \left\{\begin{array}{ll} \mathrm{Beta}(a,b) & \text{if}\ k=\ell, \\ \mathrm{BetaInc}(a,b,w_{k,\ell}) & \text{otherwise}, \end{array}\right.\\ &\phantom{\sim}\mathrm{where}\ w_{k,\ell}=\min[\gamma_k\phi_{kk},\gamma_\ell\phi_{\ell\ell}],\\ x_{ij}&\sim \mathrm{Bernoulli}(\phi_{z_iz_j}). \end{align} According to the model the probability of a link between communities $k$ and $\ell$ is strictly smaller than $w_{k,\ell}$, defined as the minimum over the two communities of the gap parameter $\gamma_k$ times the within-community probability $\phi_{kk}$. This is enforced by generating the between-group probabilities according to an incomplete Beta distribution (BetaInc), i.e., a Beta distribution truncated to the interval $[0,w_{k,\ell}]$. The parameters $\gamma_k$ define a relative gap between link probabilities within and between communities: $\gamma_k=1$ says that there should (on average) be fewer links between than within communities, and $\gamma_k=0$ says that no links can be generated from nodes in community $k$ to other communities. The gap parameter $\gamma_{k}$ can in turn be learned from data and used to quantify the extent to which networks are community structured.
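A small MATLAB sketch of how the link probabilities can be drawn under this model, using inverse-CDF sampling so that only base MATLAB functions (\texttt{betainc}, \texttt{betaincinv}) are needed; the reading of BetaInc as a truncated Beta and the parameter values are our own assumptions:
\begin{verbatim}
% Minimal sketch: draw BCD link probabilities with the between-community
% probabilities truncated at w_{k,l} = min(gamma_k*phi_kk, gamma_l*phi_ll).
K = 4;  a = 2;  b = 5;  varphi = 1;
gam = betaincinv(rand(K,1), varphi, varphi);  % gap parameters gamma_k
phi = zeros(K,K);
phi(1:K+1:end) = betaincinv(rand(K,1), a, b); % within-community probabilities
for k = 1:K
    for l = k+1:K
        w = min(gam(k)*phi(k,k), gam(l)*phi(l,l));
        u = rand * betainc(w, a, b);          % uniform on [0, F(w)]
        phi(k,l) = betaincinv(u, a, b);       % Beta(a,b) truncated to [0,w]
        phi(l,k) = phi(k,l);
    end
end
\end{verbatim}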
\subsubsection{Subset infinite relational model} In \cite{ishiguro2012} the IRM model was extended to handle irrelevant data entries by letting these entries constitute a separate noise cluster, forming the subset infinite relational model (SIRM). The generative model for SIRM can be written as \begin{align} r_i&\sim\mathrm{Bernoulli}(\lambda),\\ \phi_{k,\ell}&\sim \mathrm{Beta}(a,b),\\ \rho&\sim \mathrm{Beta}(c,d),\\ \bar{z}&\sim \mathrm{CRP}(A),\\ x_{ij}&\sim \mathrm{Bernoulli}(\phi_{z_iz_j}^{r_ir_j}\rho^{1-r_ir_j}). \end{align} For each node, the binary variable $r_i=0$ indicates that the node belongs to the noise cluster. For all pairs of nodes $(i,j)$ not in the noise cluster the model is identical to the IRM model; however, links between pairs of nodes of which at least one is in the noise cluster are generated with a shared probability $\rho$. All the above extensions can potentially improve the identification of structure in complex networks by substantially reducing the parameter space of the within- and between-group interaction matrix $\phi$ compared to the IRM model. The above extensions are illustrated in Figure~\ref{fig:LatentClassModels}. \begin{figure} \caption{Examples of existing latent class models. {\sffamily a)} \label{fig:LatentClassModels} \end{figure} \subsection{Latent feature models} While latent class models restrict each node to belong to one and only one class, latent feature models endow each node with a vector of latent feature values. Exponential random graph models that embed each node in a latent feature space of fixed dimension belong to the class of latent feature models. In contrast, in non-parametric Bayesian latent feature models, the dimensionality of the latent space is learned from data to best fit the observed network. Existing non-parametric latent feature models for networks are based on the Indian Buffet Process (IBP) \cite{griffiths2006,griffiths2011}. Similarly to the CRP, the IBP can be derived by starting with a finite model and considering the limit as the number of features goes to infinity. A finite set of $K$ binary features $z_{i,k}$, $k=1,\dots,K$, with entry $1$ if node $i$ possesses feature $k$ and zero otherwise, can be generated according to \begin{align} \pi_k&\sim \mathrm{Beta}(\alpha_k,1),\\ z_{i,k}&\sim \mathrm{Bernoulli}(\pi_k). \end{align} Each $z_{i,k}$ is independent of all other assignments conditioned on $\pi_k$, while the $\pi_k$ are generated independently~\cite{griffiths2011}. As in the derivation of the CRP, we define $\alpha_1=\ldots=\alpha_K=A/K$ and marginalize over the nuisance parameter $\pi_k$, yielding the expression~\cite{griffiths2011} \begin{align} p(Z)&=\prod_{k=1}^K\int \Big(\prod_{i=1}^N p(z_{i,k}\mid\pi_k)\Big)p(\pi_k)\,d\pi_k\\ &=\prod_{k=1}^K \frac{\alpha_k\Gamma(n_k+\alpha_k)\Gamma(N-n_k+1)}{\Gamma(N+1+\alpha_k)}. \end{align} Since again, as in the CRP, the labels of the features are arbitrary, we define an appropriate equivalence class for the binary matrix $Z$ by ordering the columns of the matrix from left to right according to their ``history'' $h$ in decreasing order.
A history $h$ denotes one of the $2^{N}$ possible combinations of nodes that can possess a feature, enumerated according to the order of the nodes such that a feature possessed by the $n$th node contributes a term $2^{N-n}$ to its history. For example, in a network with 3 nodes, if only nodes 1 and 3 possess feature $q$, the feature has the history $h=2^{3-1}+2^{3-3}=5$, which is greater than the history $h'=2^{3-2}+2^{3-3}=3$ of a feature $q^\prime$ possessed only by nodes 2 and 3. As a result, feature $q$ will be to the left of feature $q^{\prime}$. Features which are not possessed by any nodes have $h=0$ and are ordered last. Since a permutation of the ordering of the features in $Z$ is inconsequential, we consider the equivalence class of features ordered by their history. The number of equivalent feature matrices can be computed as \begin{align} \frac{K!}{\prod_{h=0}^{2^N-1}K_h!}, \end{align} where $K_h$ is the number of features with history $h$ and $K_0$ denotes the number of features that are empty. This equivalence class is used in a similar way as the distribution over partitions in the derivation of the CRP. Taking the limit yields \begin{multline} \lim_{K\rightarrow \infty}p(\bar{Z})=\frac{A^{\bar K}\exp(-AH_N)}{\prod_{h=1}^{2^N-1}K_h!}\\\times \prod_{k=1}^{\bar K}\frac{\Gamma(N-n_k+1)\Gamma(n_k)}{\Gamma(N+1)}, \end{multline} where $\bar{Z}$ denotes the left-ordered equivalence class, $\bar K$ the number of non-empty features and $H_N$ denotes the $N$th harmonic number~\cite{griffiths2011}. Since this defines a distribution over an infinite-size feature matrix of which only a finite number of features are non-empty, the construction makes it possible to infer the number of features best suited to model the data. Compactly, we write $\bar Z \sim \mathrm{IBP}(A)$. \subsubsection{Latent feature relational model} In \cite{miller2009nonparametric} the binary matrix factorization model \cite{Meeds2007} based on an IBP is considered for network data. The following generative model embodies the latent feature relational model (LFRM) \begin{align} \bar{Z}&\sim \mathrm{IBP}(A),\\ \phi_{k,\ell}&\sim \mathrm{Normal}(0,\sigma_w^2),\\ x_{i,j}&\sim \mathrm{Bernoulli}\Bigg(\sigma\bigg[\sum_{k,\ell}z_{i,k}z_{j,\ell}\phi_{k,\ell}\bigg]\Bigg), \end{align} where $\sigma[x]$ is a sigmoid function such as the logistic or probit function. This model is inspired by the IRM in its parameterization, but it allows nodes to belong to multiple groups, i.e., each node may possess multiple features. \subsubsection{Infinite latent attribute model} In \cite{Palla2012} the infinite latent attribute model (ILAM) is proposed, in which each node has a number of associated binary features, and within each feature the nodes belong to an individual sub-cluster. The model can be summarized by the generative process, \begin{align} \bar{Z}&\sim \mathrm{IBP}(A),\\ c^{(m)} &\sim \mathrm{CRP}(\gamma),\\ \phi_{k,\ell}^{(m)}&\sim \mathrm{Normal}(0,\sigma_w^2),\\ x_{i,j}&\sim \mathrm{Bernoulli}\Bigg(\sigma\bigg[s+\sum_m z_{i,m}z_{j,m}\phi^{(m)}_{c_i^{(m)} c_j^{(m)}}\bigg]\Bigg). \end{align} For each feature $m$, the nodes that possess that feature are clustered according to a CRP. Here, $s$ is a bias term and $c^{(m)}_i$ is the cluster assignment of the $i$th node in the $m$th latent feature. Both the LFRM and ILAM have been demonstrated to perform better than the IRM on a variety of link-prediction tasks \cite{miller2009nonparametric,Palla2012}. An important property of these models is that they allow the membership of nodes in one group to inhibit the probability of linking to nodes in other groups, as $\phi$ may include negative (i.e.\ antagonistic) elements. This property may indeed be an important reason for the models' superior link prediction performance over the IRM.
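As with the CRP, the IBP admits a simple sequential sampling scheme that only ever represents the non-empty features: the first node draws $\mathrm{Poisson}(A)$ features, and node $i$ takes each existing feature $k$ with probability $n_k/i$ and then draws $\mathrm{Poisson}(A/i)$ new features~\cite{griffiths2011}. A minimal MATLAB sketch (ours; \texttt{poiss} is a small Poisson sampler based on Knuth's method, adequate for the small rates used here):
\begin{verbatim}
% Minimal sketch: sample a binary feature matrix Z from IBP(A).
N = 10;  A = 2;
Z = zeros(N, 0);
for i = 1:N
    n = sum(Z(1:i-1,:), 1);              % earlier nodes possessing each feature
    Z(i,:) = rand(1, size(Z,2)) < n/i;   % take existing features w.p. n_k/i
    knew = poiss(A/i);                   % number of brand-new features
    Z(i, end+1:end+knew) = 1;            % append the new features
end

function k = poiss(lambda)
L = exp(-lambda);  k = 0;  p = 1;
while p > L
    k = k + 1;  p = p*rand;
end
k = k - 1;
end
\end{verbatim}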
\subsubsection{Infinite multiple-membership relational model} In \cite{mmmns_imrm2010,MMMNS_IMRM2011} the infinite multiple-membership relational model (IMRM) was proposed. Here each link between vertices $i$ and $j$ is generated independently given the (multiple) groups that the two vertices belong to and their interactions $\phi$. The generative model for the IMRM is given by \begin{align} \bar{Z}&\sim \mathrm{IBP}(A),\\ \phi_{k,\ell}&\sim \mathrm{Beta}(a,b),\\ x_{i,j}&\sim\mathrm{Bernoulli}\Bigg(1-\prod_{k,\ell}(1-\phi_{k,\ell})^{z_{i,k}z_{j,\ell}}\Bigg). \end{align} If, for example, node $i$ possesses feature $k$ and node $j$ possesses feature $\ell$, the quantity $\phi_{k,\ell}$ denotes the probability of a link being generated between nodes $i$ and $j$ on account of that pair of features. The expression $1-\prod_{k,\ell}(1-\phi_{k,\ell})^{z_{i,k}z_{j,\ell}}$ defines the probability of observing a link between vertices $i$ and $j$ as the probability that at least one of the pairs of features possessed by the two nodes independently generates the link. This construction is referred to as a ``noisy-or'' process. Notably, the IRM model is recovered when nodes belong to one and only one group. Contrary to the LFRM and ILAM, the IMRM scales computationally in the number of observed links in the network rather than the number of potential links, which admits large-scale analysis (see \cite{MMMNS_IMRM2011} for details). However, scalability comes at the price of not being able to model antagonistic interactions between groups, as the LFRM and ILAM can. The LFRM and IMRM are illustrated in Figure~\ref{fig:LatentFeatureModels}. \begin{figure} \caption{Illustration of the LFRM and IMRM models. {\sffamily a)} \label{fig:LatentFeatureModels} \end{figure} \subsubsection{Latent factor models} The IBP is useful for defining non-parametric representations of binary latent variable models, and both the LFRM and ILAM can be considered non-parametric latent variable models within the exponential random graph formulation. One approach for model order selection within the framework of exponential random graph models is to impose sparse priors. The IBP can here be considered a non-parametric sparse prior for latent variable modeling in general, as also proposed for factor analysis in \cite{knowles2007infinite}. As such, the IBP works in a similar manner as a spike-and-slab type prior, where a feature is either present or not according to the IBP, while its contribution, if present, is drawn separately. This can be used to extend existing sparse latent variable models within the exponential random graph model framework to form non-parametric models. For instance, a non-parametric version of a latent factor model~\cite{hoff_2009_cmot} can be defined by the following generative process using the IBP as a non-parametric sparsity-promoting prior. \begin{align} \bar{Z}&\sim \mathrm{IBP}(A),\\ u_{i,k}&\sim \mathrm{Normal}(0,\sigma_u^2),\\ x_{i,j}&\sim\mathrm{Bernoulli}\Bigg(\sigma\bigg[ \sum_k(z_{i,k}u_{i,k})(z_{j,k}u_{j,k})\bigg]\Bigg). \end{align}
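As a concrete note on these constructions, the pairwise link probabilities of both the latent factor model and the noisy-or IMRM above can be computed for all node pairs with a few matrix operations. A minimal MATLAB sketch (ours), assuming a binary feature matrix \texttt{Z} and a factor matrix \texttt{U}, both of size $N\times K$, and using the logistic function for $\sigma$:
\begin{verbatim}
% Minimal sketch: all pairwise link probabilities of the latent factor model.
V = Z .* U;                     % zero out factors for features a node lacks
P = 1 ./ (1 + exp(-(V*V')));    % logistic link probabilities for all pairs
X = rand(size(P)) < P;          % simulated links (diagonal entries to be ignored)
\end{verbatim}
An analogous one-liner, $P = 1 - \exp\!\big(Z\,\log(1-\Phi)\,Z^\top\big)$ with elementwise exponential and logarithm, gives the noisy-or link probabilities of the IMRM.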
\subsection{Latent hierarchical models} Many complex networks are believed to be hierarchically organized such that a latent hierarchy plays an important role in accounting for the structure of the network connectivity \cite{simon1962,Ravasz2002,roy2007learning,sales2007,clauset2008hierarchical,RoyTeh2009a,Meunier2010,HerlauEtAlCIP2012}. Bayesian non-parametrics can be used to define flexible priors over all conceivable hierarchical structures and to infer from data the particular hierarchical structure that is supported, in a similar manner as the CRP and IBP are used to infer latent clusters and features, respectively. \subsubsection{Hierarchical random graphs} In \cite{clauset2008hierarchical} perhaps the simplest non-parametric model for hierarchical organization is proposed. This model imposes a uniform prior over all binary trees, which in the following we refer to as $\mathrm{UBT}$. The probability of generating a link between two nodes is defined by a parameter located at the level of their nearest common ancestor in the binary tree. A model for a network with $N$ nodes thus has $N-1$ such parameters, one associated with each internal node of the tree. The generative model for the hierarchical random graph is given by \begin{align} T&\sim \mathrm{UBT}(N),\\ \phi_n&\sim \mathrm{Beta}(a,b),\\ x_{i,j}&\sim \mathrm{Bernoulli}(\phi_{t_{i,j}}), \end{align} where $t_{i,j}$ denotes the index of the nearest common ancestor of vertices $i$ and $j$. In \cite{roy2007learning} a related generative model for binary hierarchies is proposed where each edge in the tree has an associated weight that defines the propensity with which the network complies with the given split. \subsubsection{The Mondrian process} One way to view the hierarchical random graph models is by first considering the top level of the hierarchy. Here the set of nodes is split into two partitions, and a single parameter is assigned to model the probability of observing a link between nodes in the two partitions. Next, the process continues recursively on the two partitions until each node is in a partition of its own. This framework was generalized and extended to the Mondrian process~\cite{RoyTeh2009a}, which can be seen as a distribution over $k$-dimensional trees. Used as a prior in a non-parametric Bayesian model of a bipartite network, at the top level the Mondrian process splits either of the two sets of nodes (chosen at random) into two partitions and continues this random bisectioning of the nodes until a stopping criterion is met. Parameters are then assigned to model the probability of links between each of the resulting pairs of node partitions. \subsubsection{Infinite tree-structured model} In \cite{HerlauEtAlCIP2012} the uniform prior over binary trees of \cite{clauset2008hierarchical} was replaced by a uniform prior over multifurcating trees, and the leaves of the tree terminate at clusters generated from a CRP rather than at the individual vertices of the graph, based on the following generative model \begin{align} \bar{z}&\sim \mathrm{CRP}(A),\\ T&\sim \mathrm{UT}(K_{\bar{z}}),\\ \phi_n&\sim \mathrm{Beta}(a,b),\\ x_{i,j}&\sim \mathrm{Bernoulli}(\phi_{t_{z_i,z_j}}). \end{align} Here $K_{\bar{z}}$ denotes the number of clusters in $\bar{z}$ and $\mathrm{UT}$ defines a uniform prior over multifurcating trees.
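Common to these tree-based models is the lookup $t_{i,j}$ of the internal node at which the link probability of a pair is defined (the nearest common ancestor of the two leaves, or of their clusters in the multifurcating variant). A minimal MATLAB sketch of this lookup, assuming the tree is stored as a parent vector with leaves $1,\dots,N$, internal nodes $N+1,\dots,2N-1$ for a binary tree, and \texttt{parent(root)}$\,=0$ (a representation of our own choosing):
\begin{verbatim}
% Minimal sketch: nearest common ancestor of two distinct leaves i and j.
% A link can then be drawn as  rand < phi(nca(parent,i,j) - N),
% with phi holding one probability per internal node.
function t = nca(parent, i, j)
anc = false(1, numel(parent));
while i > 0                   % mark leaf i and all of its ancestors
    anc(i) = true;  i = parent(i);
end
while ~anc(j)                 % climb from leaf j until a marked node is reached
    j = parent(j);
end
t = j;
end
\end{verbatim}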
A benefit of this model is that it can be used to detect the presence of hierarchical structure, as it includes the IRM model in its hypothesis space, defined by a split at the root of the tree directly into all $K$ clusters (i.e.\ a flat hierarchy). The model of \cite{clauset2008hierarchical} can on the other hand be considered the special case where the CRP only generates singleton clusters while the tree structure is strictly binary. As the leaves terminate in clusters rather than singletons, the complexity of the model is in general substantially reduced compared to the models of \cite{clauset2008hierarchical,roy2007learning,RoyTeh2009a}, while the CRP defines the level at which to terminate the tree. \subsubsection{Gibbs fragmentation trees} In \cite{Schmidt2012} the Gibbs fragmentation tree was used as a prior over multifurcating trees terminating at the vertex level of the network, according to the following generative model \begin{align} T&\sim \mathrm{GFT}(\alpha,\beta),\\ \phi_n&\sim \mathrm{Beta}(a,b),\\ x_{i,j}&\sim \mathrm{Bernoulli}(\phi_{t_{z_i,z_j}}). \end{align} The Gibbs fragmentation tree is closely related to the two-parameter nested Chinese restaurant process \cite{aldous1985}, differing in that it explicitly accounts for the trivial non-splits that can occur in the nested CRP. The Gibbs fragmentation tree has several attractive properties. It is i) \emph{exchangeable} in that the distribution does not depend on the labelling of the leaf nodes, ii) \emph{Markovian} in that a subtree of the full tree is in turn a Gibbs fragmentation tree, and iii) \emph{consistent} in that marginalizing over all leaves not contained in a subtree yields the same distribution as considering the Gibbs fragmentation tree of the subtree directly, see also \cite{mccullagh2008,Schmidt2012}. Apart from these attractive properties, the Gibbs fragmentation tree gives explicit control of the prior over multifurcating trees through its two parameters $\alpha$ and $\beta$, which make it possible to bias the model toward deep vs.\ flat hierarchies. The probability of a given Gibbs fragmentation tree can be calculated using a simple recursive formula, see also~\cite{mccullagh2008,Schmidt2012}. \begin{figure} \caption{Example of networks with hierarchical structure. {\sffamily a)} \label{fig:OtherModels} \end{figure} \subsection{Modeling side-information} The Bayesian modeling framework readily extends to the modeling of side-information, i.e., exogenous predictors. The side-information can be used either for providing further data in support of the latent structure or directly for modeling the network links. \paragraph{Information about latent structure} In \cite{kemp2006learning,xu2006learning} multiple data sources were used in the IRM model to jointly model dyadic relationships and side information, such that the partitioning of the nodes was shared between the graph and the available side-information. \paragraph{Information about the network} Instead of having the side information inform the latent variables, it can be used to model the links directly.
This approach was used in the LFRM~\cite{miller2009nonparametric}, modifying its Bernoulli likelihood according to \begin{multline} x_{i,j} \sim \mathrm{Bernoulli}\Bigg(\sigma\bigg[\sum_{k,\ell}z_{i,k}z_{j,\ell}\phi_{k,\ell} +\boldsymbol{w}^\top \boldsymbol{r}_{ij}\\ +(\boldsymbol{\gamma}^\top \boldsymbol{s}_i+a_i ) +(\boldsymbol{\upsilon}^\top \boldsymbol{t}_j+b_j )+c \bigg]\Bigg), \end{multline} where $\boldsymbol{r}_{ij}$ denotes a vector of between-node similarities, and $\boldsymbol{s}_i$ and $\boldsymbol{t}_j$ denote vectors of features (i.e.\ side-information) for nodes $i$ and $j$, respectively. $\boldsymbol{w},\ \boldsymbol{\gamma},\ \boldsymbol{\upsilon}$ are parameters specifying the effect of the side-information in predicting links, $\boldsymbol{a}$ and $\boldsymbol{b}$ specify node-specific biases, and $c$ is a global offset that can be used to define the overall link density. This formulation is closely related to the way in which exogenous predictors are included in the exponential random graph model. This framework readily generalizes to the non-parametric latent class, feature, and hierarchical models described above and makes it possible to include all the available information when modeling complex networks. In particular, including side information may improve the identification of latent structure \cite{kemp2006learning,xu2006learning} as well as the prediction of links \cite{miller2009nonparametric}. \section{Outlook} The non-parametric models for complex networks use latent variables to represent structure in networks. As such, they can be considered extensions of the traditional exponential random graph models. The non-parametric models provide a principled framework for inferring the number of latent classes, features, or levels of hierarchy using non-parametric distributions such as the Chinese restaurant process (CRP), Indian buffet process (IBP) and Gibbs fragmentation trees (GFT). A benefit of these non-parametric models over traditional parametric models of networks is that their parametrization adapts to the data, accounting for the needed level of model complexity. In addition, the Bayesian modeling approach admits a principled framework for the statistical modeling of networks and enables parameter uncertainty to be taken into account. In particular, the Bayesian modeling approach defines a generative process for networks, which in turn can be used to simulate graphs, validate a model's ability to account for network structure, and predict links \cite{miller2009nonparametric,MMMNS_IMRM2011,Palla2012}, while Bayesian non-parametrics brings an efficient framework for the inevitable issue of model order selection. The non-parametric Bayesian modeling of complex networks still faces many important challenges. Below we outline some of these major challenges to point out avenues for future research. \subsection{Scalability} Many networks are very large, and efficient inference in systems of millions to billions of nodes and billions to trillions of links poses an important challenge for estimating the parameters of the models. Here it is our firm belief that it will be very important to focus on models whose complexity grows with the number of links rather than with the size of the networks, as well as on inference procedures that can exploit distributed computing.
As such, models will have to be carefully designed in order to be scalable and parallelizable. While the latent class models described all scale with the number of links, the LFRM and ILAM models explicitly have to account for both links and non-links, which makes them scale poorly compared to the more restricted IMRM model. Thus, flexibility here comes at the price of scalability. In particular, existing models that are scalable do not include side-information in the direct modeling of links. Future work should therefore focus on building flexible, scalable models for networks. \subsection{Structure emerging at multiple levels} Network structure is widely believed to emerge at multiple scales~\cite{simon1962,Ravasz2002,roy2007learning,sales2007,clauset2008hierarchical,RoyTeh2009a,Meunier2010,HerlauEtAlCIP2012}. A limitation of latent class models is that they define a single level of resolution at which structure is inferred. Whereas latent feature models can generate features defining clusters at multiple scales~\cite{Palla2012}, this property is explicitly taken into account by the latent hierarchical models. An important future challenge will be to define models that can operate at multiple scales while efficiently accounting for prominent network structure, by combining ideas from the latent hierarchical models with existing latent class and feature models. This includes hierarchical models that explicitly account for community structure and models that allow nodes to be part of multiple groups at multiple hierarchical levels. \subsection{Temporal evolution} Many networks are not static but evolve over time~\cite{mucha2010,ishiguro2010,sarkar2012}. Rather than modeling snapshots of graphs as independent, taking into account when links are generated, when nodes emerge and vanish, etc., potentially brings important information about the structure of these systems. Formulating non-parametric Bayesian models that can capture networks exhibiting time-varying complexity, such as clusters that emerge and disappear and hierarchies that expand and contract, poses an important future challenge for the modeling of these time-evolving networks. \subsection{Generic modeling tools} As of today, non-parametric Bayesian models for complex networks often have to be implemented more or less from scratch in order to accommodate the specific structure of the networks at hand. It will be very useful in the future to develop generic modeling tools in which general non-parametric Bayesian models can be specified, including how parameters are tied, which distributions are invoked, and how side-information is incorporated. Publicly available non-parametric Bayesian software tools that accommodate the needs of researchers modeling complex networks will be essential for these models to fully meet their potential and be adopted by the many different research communities that today use the modeling and analysis of complex networks as an indispensable tool. \subsection{Testing efficiently multiple hypotheses} Despite the very different origins of complex networks, it is widely believed that generic properties exist across the domains of these systems. What the generic properties of networks are, and how they can best be modelled, is an important open problem that needs to be addressed. Non-parametric Bayesian modeling forms a framework for inferring structure across multiple hypotheses.
For example, the IRM model itself encompasses the hypotheses of the Erd\H{o}s-R\'{e}nyi random graph (an IRM with a single cluster) as well as the limit of the network itself (an IRM with a cluster for each node). Bayesian non-parametrics can here in general be used to infer structure across multiple hypotheses, including model order as in the latent class models, feature representation as in the latent feature models, and types of hierarchies as in the latent hierarchical models. Non-parametric Bayesian modeling of complex networks is emerging as a prominent tool that provides a principled framework for both model order selection and model validation. As the non-parametric Bayesian models can also give an interpretable account of otherwise complex systems, it is our firm belief that these models will become essential in deepening our understanding of the structure and function of the many networks that surround us. There is no doubt that the future will bring many new non-parametric Bayesian models for complex networks and that these models will find important new application domains. We hope this paper will help researchers tap into the power of Bayesian non-parametric modeling of complex networks to address the major challenges we face in our effort to understand and predict the behavior of the many complex systems we are an integral part of. \end{document}
\begin{document} \title{On the maximum diameter of path-pairable graphs} \singlespace \begin{abstract} \setlength{\parindent}{0pt} \noindent A graph is \emph{path-pairable} if for any pairing of its vertices there exist edge disjoint paths joining the vertices in each pair. We obtain sharp bounds on the maximum possible diameter of path-pairable graphs which either have a given number of edges, or are $c$-degenerate. Along the way we show that a large family of graphs obtained by blowing up a path is path-pairable, which may be of independent interest. \end{abstract} \section{Introduction} \emph{Path-pairability} is a graph theoretical notion that emerged from a practical networking problem introduced by Csaba, Faudree, Gy\'arf\'as, Lehel, and Schelp \cite{CS}, and further studied by Faudree, Gy\'arf\'as, and Lehel \cite{mpp,F,pp} and by Kubicka, Kubicki and Lehel \cite{grid}. Given a fixed integer $k$ and a simple undirected graph $G$ on at least $2k$ vertices, we say that $G$ is {\it $k$-path-pairable} if, for any pair of disjoint sets of distinct vertices $\{x_1,\dots,x_k\}$ and $\{y_1,\dots,y_k\}$ of $G$, there exist $k$ edge-disjoint paths $P_1,P_2,\dots,P_k$, such that $P_i$ is a path from $x_i$ to $y_i$, $1\leq i\leq k$. The path-pairability number of a graph $G$ is the largest positive integer $k$ for which $G$ is $k$-path-pairable, and it is denoted by $\pp(G)$. A $k$-path-pairable graph on $2k$ or $2k+1$ vertices is simply said to be {\it path-pairable}. Path-pairability is related to the notion of \textit{linkedness}. A graph is $k$-\emph{linked} if for any choice of $2k$ vertices $\{s_1, \ldots , s_k, t_1, \ldots , t_k\}$ (not necessarily distinct), there are internally vertex disjoint paths $P_1, \ldots , P_k$ with $P_i$ joining $s_i$ to $t_i$ for $1 \le i \le k$. Bollob{\'a}s and Thomason~\cite{BollobasThomason} showed that any $2k$-connected graph with sufficiently large edge density is $k$-linked. On the other hand, a graph being path-pairable imposes no constraint on the connectivity or edge-connectivity of the graph. The most illustrative examples of this phenomenon are the stars $K_{1, n-1}$. Indeed, it is easy to see that stars are path-pairable, while they are neither $2$-connected nor $2$-edge-connected. Note that, for any pairing of the vertices of $K_{1, n-1}$, joining two vertices in a pair is straightforward due to the presence of a vertex of high degree, and the fact that the diameter is small. This example motivates the study of two natural questions about path-pairable graphs: given a path-pairable graph $G$ on $n$ vertices, how small can its maximum degree $\Delta(G)$ be, and how large can its diameter $d(G)$ be? This note addresses some aspects of the second question. To be precise, for a family of graphs $\mathcal{G}$ let us define $d(n, \mathcal{G})$ as follows: \[ d(n, \mathcal{G}) = \max\{d(G): G \in \mathcal{G} \text{ and } G \text{ is path-pairable on } n \text{ vertices}\}. \] When $\mathcal{G}$ is the family of path-pairable graphs, we shall simply write $d(n)$ instead of $d(n, \mathcal{G})$. \begin{comment} It was proved by Faudree et al. that $\mathcal Delta_\text{min}(n)$ has to grow with the size of the graph; in particular, if $G$ is a path-pairable graph on $n$ vertices with maximum degree $\mathcal Delta$, then $n\leq 2\mathcal Delta^\mathcal Delta$ holds.
The result places sublogarithmic lower bound on $\mathcal Delta_\text{min}(n)$, that is, $\mathcal Delta_\text{min}(n) = \Omega\left(\frac{\log n}{\log\log n}\right)$. To date the best known asymptotic upper bound is $\mathcal Delta_\text{min}(n) = O(\log n)$ due to Gy\H ori et al \cite{ntp}. \end{comment} The maximum diameter of arbitrary path-pairable graphs was investigated by M\'esz\'aros \cite{me_diam} who proved that $d(n) \le 6 \sqrt{2} \sqrt{n}$. Our aim in this note is to investigate the maximum diameter of path-pairable graphs when we impose restrictions on the number of edges and on how the edges are distributed. To state our results, let us denote by $\mathcal{G}_m$ the family of graphs with at most $m$ edges. The following result determines $d(n, \mathcal{G}_m)$ for a certain range of $m$. \begin{thm}\label{diam_m} If $2n \le m \le \frac{1}{4}n^{3/2}$ then \[ \sqrt[3]{\frac{1}{2}m-n} \le d(n, \mathcal{G}_m) \le 16 \sqrt[3]{m}. \] \end{thm} We remark that the upper bound in the Theorem~\ref{diam_m} holds for $m$ in any range, but when $m \ge \frac{1}{4}n^{3/2}$ the bound obtained by M\'esz\'aros \cite{me_diam} is sharper. Determining the behaviour of the maximum diameter among path-pairable graphs on $n$ vertices with fewer than $2n$ edges remains an open problem. In particular, we do not know if the maximum diameter must be bounded (see Section \ref{sec:final}). Following this line of research, it is very natural to consider the problem of determining the maximum attainable diameter for other classes of graphs. For example, what is the behaviour of the maximum diameter of path-pairable \emph{planar} graphs? Although we could not give a satisfactory answer to this particular question, we were able to do so for graphs which are $c$-\emph{degenerate}. As usual, we say that an $n$-vertex graph $G$ is $c$-\emph{degenerate} if there exists an ordering $v_1,\ldots,v_n$ of its vertices such that $|\{v_j: j > i, v_iv_j\in E(G) \}|\leq c$ holds for all $i=1,2,\ldots,n$. We let $\mathcal{G}_{c\text{-deg}}$ denote the family of $c$-degenerate graphs. Clearly all $c$-degenerate graphs have a linear number of edges, so Theorem~\ref{diam_m} implies that $d(n, \mathcal{G}_{c\text{-deg}}) = O(\sqrt[3]{n})$. However, as the next result shows, this bound is far from the truth. \begin{thm}\label{diam_cdeg} Let $c \ge 5$ be an integer. Then \[ (2+o(1)) \frac{\log(n)}{\log({\frac{c}{c-2}})} \leq d(n, \mathcal{G}_{c\text{-deg}}) \leq (12+o(1)) \frac{\log(n)}{\log(\frac{c}{c-2})} \] as $n \rightarrow \infty$. \end{thm} We remark that we have not made an effort to optimize the constants appearing in the upper and lower bounds of Theorems~\ref{diam_m} and~\ref{diam_cdeg}. \subsection{The Cut-Condition} While path-pairable graphs need not be highly connected or edge-connected, they must satisfy certain `connectivity-like' conditions that we shall need in the remainder of the paper. We say a graph $G$ on $n$ vertices satisfies the \emph{cut-condition} if for every $X \subset V(G)$, $|X| \le n/2$, there are at least $|X|$ edges between $X$ and $V(G)\setminus X$. Clearly, a path-pairable graph has to satisfy the cut-condition. On the other hand, satisfying the cut-condition is not sufficient to guarantee path-pairability in a graph; see \cite{me_pp} for additional details. \subsection{Organization and Notation} The proofs of the lower bounds in Theorems~\ref{diam_m} and~\ref{diam_cdeg} require constructions of path-pairable graphs with large diameter. 
In Section~\ref{sec:blowup}, we show how to obtain such graphs by proving that a more general class of graphs is path-pairable. In Sections~\ref{sec:proofdiam_m} and~\ref{sec:proofdiam_cdeg} we shall complete the proofs of Theorems~\ref{diam_m} and~\ref{diam_cdeg}, respectively. Finally, we mention some open problems in Section~\ref{sec:final}. Our notation is standard. Thus, for a (simple, undirected) graph $G$ we shall denote by $V(G)$ and $E(G)$ the vertex set and edge set of $G$, respectively. We also let $|G|$ and $d(G)$ denote the number of vertices and diameter of $G$, respectively. For a vertex $x \in V(G)$ we let $N_{G}(x)$ denote the neighbourhood of $x$ in $G$, and we shall omit the subscript `$G$' when no ambiguity arises. \section{Path-pairable graphs from blowing up paths}\label{sec:blowup} In this section, we will show how to construct a quite general class of graphs which have high diameter and are path-pairable. Let $G$ be a graph with vertex set $V(G)=\{v_1,\ldots,v_k\}$, and let $G_1,\ldots, G_k$ be graphs. We define the \textit{blown-up graph} $G(G_1,\ldots, G_k)$ as follows: replace every vertex $v_i$ in $G$ by the corresponding graph $G_i$, and for every edge $v_iv_j \in E(G)$ insert a complete bipartite graph between the vertex sets of $G_i$ and $G_j$. Let $P_k$ denote the path on $k$ vertices. The following lemma asserts that if we blow-up a path with graphs $G_1, \ldots , G_k$, such that $G_{i}$ is path-pairable for $i \le k-1$, and certain properties inherited from the cut-condition hold, then the resulting blow-up is path-pairable. \begin{lemma}\label{blown-up} Suppose that $G_1, \ldots ,G_k$ are graphs on $n_1, \ldots , n_k$ vertices, respectively, where $G_{i}$ is path-pairable for $i \le k-1$. Let $n = \sum_{i =1}^k n_i$ and let $u_i = \sum_{j=1}^i n_j$ for $i=1, \dots , k-1$. Then $P_k(G_1,\ldots, G_k)$ is path-pairable if and only if \begin{equation}\label{eq_1} n_i\cdot n_{i+1}\geq \min(u_i,n-u_i) \end{equation} holds for $i=1,\ldots,k-1$. \end{lemma} \begin{proof} For each $i = 1, \ldots , k$, let $U_i = \bigcup_{j=1}^i V(G_j)$ so that $u_i = |U_i|$. Now, if $P_k(G_1, \ldots , G_k)$ is path-pairable, then we may apply the cut-condition to the cut $\{U_i, V(G)\setminus U_i\}$. This implies $n_i\cdot n_{i+1}\geq \min(u_i,n-u_i)$ must hold for $i=1,\ldots , k-1$. In the remainder, we show that this simple condition is enough to yield the path-pairability of $G := P_k(G_1, \ldots , G_k)$. Assume that a pairing $\mathcal{P}$ of the vertices of $G$ is given. If $\{u, v\} \in \mathcal{P}$ we shall say that $u$ is a \emph{sibling} of $v$ (and vice-versa). We shall define an algorithm that sweeps through the classes $G_1,G_2,\ldots, G_k$ and joins each pair of siblings via edge-disjoint paths. First we give an overview of the algorithm. We proceed by first joining pairs $\{u, v\} \in \mathcal{P}$ via edge-disjoint paths such that $u$ and $v$ belong to different $G_i$'s, and then afterwards joining pairs that remain inside some $G_j$ (using the path-pairability of $G_j$). Before round $1$ we use the path-pairability property of $G_{1}$ to join those siblings which belong to $G_{1}$. In round $1$ we assign to every vertex $u$ of $G_1$ a vertex $v$ of $G_2$. If $\{u, v\} \in \mathcal{P}$ are siblings, then we simply choose the edge $uv$. Then we join the siblings which are in $G_{2}$ again using the path-pairability property of $G_{2}$. 
For those paths $uv$ that have not ended (because $\{u, v\} \notin \mathcal{P}$) we shall continue by choosing a new vertex $w$ in $G_3$ and continue the path with edge $vw$, and so on. Paths which have not finished joining a pair of siblings we shall call \emph{unfinished}; otherwise, we say the path is \emph{finished}. The last edge which completes a finished path we shall call a \emph{path-ending edge}. During round $i$ we shall first choose those vertices in $G_{i+1}$ which, together with some vertex of $G_i$, form path-ending edges. At the end of round $i$, in $G_{i+1}$ we will have endpoints of unfinished paths and perhaps also some endpoints of finished paths. Note that the vertices of $G_{i+1}$ might be endpoints of several unfinished paths. For $x \in G_{i+1}$ let $w(x)$ denote the number of unfinished paths $P\cup \{x\}$ with $P \subset U_i$ at the end of round $i$ which are to be extended by a vertex of $G_{i+2}$ (including the single-vertex path $x$ in the case when $x$ was not joined to its sibling in the latest round). Note that every such path corresponds to a yet not joined vertex in $U_{i+1}$ as well as to another vertex yet to be joined lying in $V(G)\setminus U_{i+1}$. It follows that \begin{equation}\label{eq:weights} \sum_{x \in G_{i+1}}w(x) \le \min(u_{i+1}, n-u_{i+1}). \end{equation} Let us now be more explicit in how we make choices in each round. We shall maintain the following two simple conditions throughout our procedure (the first of which has been mentioned above): \begin{itemize} \item[(a)] During round $i$ ($1\le i \le k-1$), if $w \in G_i$ is the current endpoint of the path which began at some vertex $u\in U_i$ (possibly $u=w$), and $\{u, v\} \in \mathcal{P}$ for $v \in G_{i+1}$, then we join $w$ to $v$. Informally, we choose path-ending edges when we can. \item[(b)] $w(x) \leq n_{i+1}$ for all $x \in G_i$, for $i =1, \ldots , k-1$. \end{itemize} The second condition above is clearly necessary in order to proceed during round $i$, as ${|N(x)\cap G_{i+1}| = n_{i+1}}$ for every $x \in G_i$, and hence we cannot continue more than $n_{i+1}$ unfinished paths through $x$. We claim that as long as both of the above conditions are maintained, the proposed algorithm finds a collection of edge-disjoint paths joining every pair in $\mathcal{P}$. Both conditions are clearly satisfied for $i=1$ as $w(x) \le 1\leq n_2$ for all $x\in G_1$. Let $i \ge 2$ and suppose both conditions hold for rounds $1, \ldots , i-1$. Our aim is show that an appropriate selection of edges between $G_i$ and $G_{i+1}$ exists in round $i$ to maintain the conditions. We start round $i$ by choosing all path-ending edges with endpoints in $G_i$ and $G_{i+1}$; this can be done since, by induction, $w(x) \le n_{i+1}$ for every $x \in G_i$. Observe that if $i = k-1$ then the only remaining siblings are in $G_{k}$. Then for every $\left\{ u, v \right\} \in \mathcal{P}$ such that $u,v \in G_{k}$ we can find a vertex $w$ in $G_{k-1}$ and join $u, v$ with the path $uwv$. When $i < k-1$ then the remaining paths can be continued by assigning arbitrary vertices from $G_{i+1}$ (without using any edge multiple times). We choose an assignment that balances the `weights' in $G_{i+1}$. More precisely, let us choose an assignment of the vertices that minimizes \[ \sum\limits_{a\in G_{i+1}}w(a)^2. \] If for every $x \in G_{i+1}$ we have that $w(x) \le n_{i+2}$ we are basically done. 
It remains to find edge-disjoint paths inside $G_{i+1}$ for those pairs $\{x, y\} \in \mathcal{P}$ whose vertices belong to $G_{i+1}$. But this is possible because of the assumption that $G_{i+1}$ is path-pairable. Suppose then that in the above assignment there exists $x \in G_{i+1}$ with $w(x) \geq n_{i+2}+1$. We first claim that, under this assignment, no other vertex of $G_{i+1}$ has small weight. \begin{claim}\label{claim:y-big} Every vertex $y\in G_{i+1}$ satisfies $w(y)\geq n_{i+2}-1$. \end{claim} \begin{proof} Suppose there is $y \in G_{i+1}$ such that $w(y) \le n_{i+2}-2$. Then, as $w(x)>w(y)+2$, there exist vertices $v_1,v_2\in G_i$ such that certain paths ending at $v_1$ and $v_2$ were joined in round $i$ to $x$ ($x$ was assigned as the next vertex of these paths) but no paths at $v_1$ or $v_2$ were assigned $y$ as their next vertex. Observe that at least one of the edges $v_1x$ and $v_2x$ is not a path ending edge which could have been replaced by the appropriate $v_1y$ or $v_2y$ edge, respectively. That operation would result in a new assignment with a smaller square sum $\sum_{a\in G_{i+1}}w(a)^2$, which is a contradiction. \end{proof} Therefore, we may assume $w(y)\geq n_{i+2} - 1$ for all $y\in G_{i+1}$. In this case, partition the vertices of $G_{i+1}$ into three classes: \begin{align*} X &= \{v\in G_{i+1}: w(v) \geq n_{i+2} +1\}\\ Y &= \{v\in G_{i+1}: w(v) = n_{i+2} - 1\} \\ Z &= \{v\in G_{i+1}: w(v) = n_{i+2}\}. \end{align*} Observe first that $1 \le |X| \leq |Y|$, since otherwise using~(\ref{eq:weights}) we have \[n_{i+1}n_{i+2}+1\leq \sum\limits_{s\in G_{i+1}}w(s)\leq \min(u_{i+1},n-u_{i+1}),\] contradicting condition~(\ref{eq_1}). Notice also that the same argument as in Claim~\ref{claim:y-big} shows that $w(v) \le n_{i+2}+1$ for every $v \in G_{i+1}$, hence we can actually write \[ X = \left\{ v \in G_{i+1}: w(v) = n_{i+2}+1 \right\}. \] We will need the following claim which asserts that if there are siblings in $G_{i+1}$ then they must belong to $Z$. \begin{claim}\label{claim:Z_pairs} If $\{u,v\} \in \mathcal{P}$ and $u, v \in G_{i+1}$, then $u, v \in Z$. \end{claim} \begin{proof} We first show that every $y \in Y$ is incident to a path-ending edge. Suppose, to the contrary, that there is $y \in Y$ such that there is no path-ending edge which ends at $y$. It follows that there are at most $w(y)$ vertices in $G_{i}$ which had been joined to $y$. Hence we can take any $x \in X$ and find $z \in G_{i}$ which was not joined to $y$, and such that $xz$ is not a path-ending edge. Replacing $zx$ by $zy$ would result in a smaller square sum $\sum_{a \in G_{i+1}}w(a)^{2}$, which gives a contradiction. Now, let $\{u, v\} \in \mathcal{P}$ such that $u, v \in G_{i+1}$. Since every $y \in Y$ is incident to a path-ending edge, we have that $u, v \not\in Y$. Suppose, for contradiction, that $u \in X$. Then $u$ was joined to $w(u) = n_{i+2}+1$ vertices in $G_{i}$, and hence for every $y \in Y$, there is $z \in G_{i}$ which was joined to $u$ but not $y$. Replacing $zu$ by $zy$ would result in a smaller square sum $\sum_{a \in G_{i+1}}w(a)^{2}$, which again gives a contradiction. \end{proof} Finally, we shall show that we can reduce the weights of the vertices in $X$ (and pair the siblings inside $G_{i+1}$) using the path-pairable property of $G_{i+1}$. For every $x \in X$ pick a different vertex $y_{x} \in Y$ (which we can do, since $|Y| \ge |X|$) and let $\mathcal{P'} = \left\{ \{u, v\} \in \mathcal{P} : u, v \in G_{i+1} \right\} \cup \left\{ \{x, y_{x}\} : x \in X \right\}$. 
Since $G_{i+1}$ is path-pairable, we can find edge-disjoint paths joining the siblings in $\mathcal{P'}$ (note that by Claim~\ref{claim:Z_pairs} none of the pairs $\{x, y_x\}$ interfere with any siblings $\{u, v\} \in \mathcal{P}$ with $u, v \in G_{i+1}$). Observe now that for every $x \in X$ one path has been channeled to a vertex $y\in Y$, thus the number of unfinished path endpoints at $x$ has dropped to $n_{i+2}$ and so the condition is maintained. \end{proof} We close the section by pointing out that the condition that the graphs $G_{i}$ are path-pairable is necessary. We do this by giving an example of a blown-up path $P_k(G_{n_1},\ldots,G_{n_k})$ that satisfies the cut-conditions of Lemma~\ref{blown-up} yet it is not path-pairable unless some of $G_i$'s are path-pairable as well. For the sake of simplicity we set $k = 5$ and prove that $G_3$ has to be path-pairable. Let $n = 2t^2 + t$ for some even $t \in \mathbb{N}$ and let $n_1 = n_5 = t^2-t$, $n_2=n_3 = n_4 = t$. Clearly $P_5(G_{n_1},\ldots,G_{n_5})$ satisfies the Condition~\ref{eq_1} of Lemma~\ref{blown-up}. Observe, that any pairing of the vertices in $G_{1} \cup G_{2}$ with the vertices in $G_{4} \cup G_{5}$ has to use all the edges between $G_{3}$ and $G_{2} \cup G_{4}$. Therefore if we additionally pair the vertices inside $G_{3}$, then the paths joining those vertices can only use the edges in $G_{3}$, therefore $G_{3}$ has to be path-pairable. \section{Proof of Theorem \ref{diam_m}}\label{sec:proofdiam_m} Take $x,y\in V(G)$ such that $d(x,y) = d(G)$ and let $V_{i}$ be the set of vertices at distance exactly $i$ from $x$, for every $i$. Observe that $V_0 = \{x\}$ and $y\in V_{d(G)}$. For $i \in \left\{ 1, \dots, d(G) \right\}$ define $n_{i}$ to be the size of $V_{i}$ and let $u_{i} = \sum_{j=0}^{i} n_{j}$. We need the following claim. \begin{claim} $u_{2k+1} \geq \binom{k+2}{2}$ as long as $u_{2k+1}\leq\frac{n}{2}$. \end{claim} \begin{proof} We shall use induction on $k$. For $k = 0$ it is clear. Assume that $u_{2k-1} \ge \binom{k+1}{2}$. By the cut-condition we have that the number of edges between $V_{2k}$ and $V_{2k+1}$ is at least $u_{2k-1}$, hence $n_{2k}\cdot n_{2k+1} \ge u_{2k-1} \ge \binom{k+1}{2}$. By the arithmetic-geometric mean inequality, $n_{2k} + n_{2k+1} \ge 2\sqrt{\binom{k+1}{2}} \ge k+1$. As $u_{2k+1} = u_{2k-1} + n_{2k} + n_{2k+1}$, we have $u_{2k+1} \ge \binom{k+2}{2}$. \end{proof} Now, let $A = \bigcup_{i=0}^{\lfloor d / 3 \rfloor}V_{i}$, $B = \bigcup_{i = \lfloor d / 3 \rfloor +1}^{2d/3} V_{i}$, $C = \bigcup_{i = \lfloor 2d/3 \rfloor + 1}^{d} V_{i}$. Observe, that $|A|, |C| \ge \min\left\{\frac{n}{2}, \frac{d^{2}}{100}\right\}$, so joining vertices in $A$ with vertices in $C$ requires at least $ \min\left\{\frac{n}{2},\frac{d^2}{100}\right\}\cdot\frac{d}{3}$ edges. Hence, \[ \min\left\{\frac{n}{2},\frac{d^2}{100}\right\}\cdot\frac{d}{3}\leq m,\] which implies \[d\leq \max\left\{\frac{6m}{n},16\sqrt[3]{m}\right\}.\] Notice that whenever $m \le 4n^{3/2}$ we have $d \le 16\sqrt[3]{m}$. Let us remark that if $m \ge \frac{1}{4}n^{3/2}$ then the upper bound is trivially satisfied by the general upper bound obtained in \cite{me_diam}. For the lower bound, let $n$ and $2n \le m \le \frac{1}{4}n^{3/2}$ be given. For any natural number $\ell$ we shall denote by $S_\ell$ the star $K_{1, \ell-1}$ on $\ell$ vertices. 
Consider the graph $G = P_{k}(G_{1},\dots,G_{k})$ on $n$ vertices, where $k = \left\lfloor \sqrt[3]{\frac{m}{2}-n} \right\rfloor$ and $G_{1} = G_{2} = \dots = G_{k} = S_{k}$, $G_{k+1} = S_{k^{2}}$, $G_{k+2} = S_{2}$, and $G_{k+3}$ is an empty graph on $n-2k^{2}-2$ vertices. Straightforward calculation shows that $u_i = i\cdot k$, for $i\leq k, u_{k+1}= 2k^2$, and $u_{k+2} = 2k^2 + 2$. Also $n_1n_2=n_2n_3=\ldots=n_{k-1}n_{k}=k^2$, $n_{k}n_{k+1}=k^3$, $n_{k+1}n_{k+2}=2k^2$, and $n_{k+2}n_{k+3}=2n-4k^2-4$. Therefore, for $i \in \left\{ 1, \dots, k+1 \right\}$ we have $n_{i} \cdot n_{i+1} \ge u_{i} \ge \min(u_{i}, n-u_{i})$ and $n_{k+2} \cdot n_{k+3} \ge n_{k+3} \ge \min(u_{k+2}, n-u_{k+2})$. Hence it follows from Lemma~\ref{blown-up} that $G$ is path-pairable. It is easy to check that the number of edges in $G$ is at most $2n + 2k^{3} \le m$. On the other hand, the diameter of $G$ is $k+2 \ge \sqrt[3]{\frac{m}{2} - n}$. \section{Proof of Theorem \ref{diam_cdeg}}\label{sec:proofdiam_cdeg} In this section, we investigate the maximum diameter a path-pairable $c$-degenerate graph on $n$ vertices can have. We shall assume that $c$ is an integer and $c\geq 5$. Let $G$ be a $c$-degenerate graph on $n$ vertices with diameter $d$. We shall show first that $d \le 4\log_{\frac{c+1}{c}}(n)+3$. Let $x \in G$ be such that there is $y \in G$ with $d(x,y) = d$. For $i \in \left\{ 0, \dots, d \right\}$, write $V_{i}$ for the set of vertices at distance $i$ from $x$. Let $n_{i} = |V_{i}|$ and $u_{i} = \sum_{j = 0}^{i}n_{j}$. Observe that $|V_{i}| \ge 1$ for every $i \in \left\{ 0, \dots, d \right\}$. We can assume that $u_{\lfloor \frac{d}{2} \rfloor} \le \frac{n}{2}$ (otherwise we repeat the argument below with $V'_{i} = V_{d-i})$. The result will easily follow from the following claim. \begin{claim} $u_{2k+1} \ge \left( \frac{c+1}{c} \right)^{k}$ as long as $u_{2k+1} \le \frac{n}{2}$. \end{claim} Let us assume the claim and prove the result. Letting $k = \frac{\lfloor\frac{d}{2}\rfloor -1}{2}$, we have that $n/2 \ge u_{2k+1} \ge \left( \frac{c+1}{c} \right)^{\frac{\lfloor \frac{d}{2} \rfloor-1}{2}}$. Hence $d \leq 4\log_{\frac{c+1}{c}}(n)+3 = 4 \frac{\log(n)}{\log(\frac{c+1}{c})} + 3 \le 4 \frac{\log(n)}{\log(\frac{c}{c-2})}\frac{\log(\frac{c}{c-2})}{\log(\frac{c+1}{c})} + 3\le 12 \frac{\log(n)}{\log(\frac{c}{c-2})}+3$, where the last inequality follows from the easy to check fact that $\frac{\log(\frac{c}{c-2})}{\log(\frac{c+1}{c})} \le 3$, for all $c \ge 5$. \begin{proof}[Proof of the Claim] We shall prove the claim by induction on $k$. The base case when $k=0$ is trivial as $u_{1} \ge 2$. Suppose the claim holds for every $l \le k-1$. Since $G$ is $c$-degenerate we have that $e(V_{2k}, V_{2k+1}) \le c\left( n_{2k} + n_{2k+1} \right)$. On the other hand, it follows from the cut-condition that $e(V_{2k}, V_{2k+1}) \ge u_{2k} = u_{2k-1}+ n_{2k}$. Therefore, by the induction hypothesis, we have $n_{2k} + n_{2k+1} \ge \frac{1}{c}\left( u_{2k-1}+n_{2k} \right) \ge \frac{1}{c} \left( \left( {\frac{c+1}{c}}\right)^{k-1} +n_{2k} \right) \ge \frac{1}{c}\left( \frac{c+1}{c} \right)^{k-1}$. Hence, $u_{2k+1}=u_{2k-1}+n_{2k}+n_{2k+1}\geq \left( \frac{c+1}{c} \right)^{k-1}+ \frac{1}{c}\left( \frac{c+1}{c} \right)^{k-1}= \left( 1+\frac{1}{c} \right) \left( \frac{c+1}{c} \right)^{k-1} \ge \left( \frac{c+1}{c} \right)^{k}$, which proves the claim. \end{proof} We shall prove the lower bound assuming $c$ is an odd integer; when $c$ is even we apply the same argument for $c-1$. 
To do so, consider the graph $G = P_{2m^{\prime}-1}(G_{1},\ldots, G_{2m^{\prime}-1})$ for some $m^{\prime}\in \mathbb{N}$, which we specify later. Firstly, we shall define the sizes of $G_i$ for $i \in \{1,\ldots, 2m'-1\}$. To do so, let us define a sequence $\{n_i\}_{i\in\mathbb{N}}$ where $n_{2i} =\frac{c-1}{2} $ and $n_{2i+1}$ is defined recursively in the following way: \begin{equation} n_{2i+1}= \left \lceil \frac{2}{c-1} \cdot \sum_{j=1}^{2i} n_j \right \rceil \leq \frac{2}{c-1}\sum_{j=1}^{2i} n_j +1 \end{equation} Let $m$ be the largest integer such that $\sum_{j=1}^{m} n_j \leq n/2$. We let $m'=m$ when $m$ is odd and $m'=m-1$ when $m$ is even. Moreover, let $|G_{m'}|=n-2\sum_{j=1}^{m'-1} n_j$ and let $|G_i|=n_i$ for $1\leq i < m'$ and $|G_{m'+j}|= |G_{m'-j}|$ for $ j \in \{1,\ldots, m'-1\}$. For all $i\in \{1,\ldots, 2m'-1\}$ let $G_i = S_{n_i}$ be a star on $n_i$ vertices. It is easy to check that the graph $P_{2m'-1}(G_1,\ldots,G_{2m'-1}) $ is path-pairable by Lemma \ref{blown-up}. It has diameter at least $2m-4$ and $m \geq \log_{\frac{c+1}{c-1}}(n)(1+o(1))$. Again, an easy verification shows that the graph $G$ is $c$-degenerate. \section{Final remarks and open problems}\label{sec:final} We obtained tight bounds on the parameter $d(n, \mathcal{G}_{m})$ when $(2+\varepsilon)n \leq m \leq \frac{1}{4} n^{3/2}$, for any fixed $\varepsilon >0$. It is an interesting open problem to investigate what happens when the number of edges in a path-pairable graph on $n$ vertices is around $2n$. We ask the following: \begin{question} Is there a function $f$ such that for every $\varepsilon>0$ and for every path-pairable graph $G$ on $n$ vertices with at most $(2-\varepsilon)n$ edges, the diameter of $G$ is bounded by $f(\varepsilon)$? \end{question} Another line of research concerns determining the behaviour of $d(n, \mathcal{P})$, where $\mathcal{P}$ is the family of planar graphs. Since planar graphs are $5$-degenerate, it follows from Theorem~\ref{diam_cdeg} that the diameter of a path-pairable planar graph on $n$ vertices cannot be larger than $c \log{n}$. This fact makes us wonder whether there are path-pairable planar graphs with unbounded diameter. \begin{question} Is there a family of path-pairable planar graphs with arbitrarily large diameter? \end{question} The graph constructed in the proof of the lower bound in Theorem~\ref{diam_cdeg} when $c=5$ is not planar since it contains a copy of $K_{3,3}$. Therefore, it cannot be used to show that the diameter of a path-pairable planar graph can be arbitrarily large (note, however, that this graph contains neither a $K_7$-minor nor a $K_{6,6}$-minor). We end by remarking that we were able to construct an infinite family of path-pairable planar graphs with diameter $6$, but not larger. \begin{comment} \begin{thm} \label{diameter_thm} For every even $n\geq 6$ there is a path-pairable planar graph on $n$ vertices and with diameter equal to $6$. \end{thm} \begin{proof} \textbf{There are a few errors here. First, the diameter is $5$. When $k_{1} = 1$ the graph seems not to be path-pairable. Also, when applying the induction we have to make sure that $A_{3}$ doesn't have too few vertices\dots} Consider the following planar graph on $n=k_1+k_2+6$ vertices, where $k_1,k_2\geq 1$ and $k_1+k_2$ is even. Partition the vertex set of $G$ into $7$ non-empty subsets $A_1, A_2,A_3,A_4,A_5,A_6$ and $A_7$, namely $A_1=\{a_1\}$,$A_2=\{a_2\}$, $A_3=\{v_1,v_2,...,v_{k_1}\}$, $A_4=\{a_3,a_4\}$ and $A_5=\{w_1,w_2,...,w_{k_2}\}$, $A_6=\{a_5\}$ and $A_7=\{a_6\}$.
Connect $a_1$ to $a_2$ and $a_2$ to every vertex in $A_3$. Let $a_3$ be joined to every vertex in $A_3\cup A_5$ and $a_4$ to the set $\{v_1,v_k,a_3,w_1,w_{k_2}\}$. Symmetrically, let $a_5$ be connected to every vertex in $A_4$ and finally let $a_6$ be connected to $a_5$. We also add the edges of respective paths inside $A_3, A_4,A_5$, that is, we join consecutive vertices of the sequences $(v_1, v_2,...,v_{k_1})$, $(a_3,a_4)$, and $(w_1,w_2,...,w_{k_2})$. Clearly $d(a_1,a_6)=6$, so the diameter of $G$ is $6$. It is also easy to see that the graph is planar. We need to show it is path-pairable. The statement is fairly obvious for small values of $n$; we leave the verification of these cases to the reader. On the other hand, if $k_1+k_2>6$ (i.e. $n>12$), then every pairing $\mathcal{P}$ of the vertices contains at least one pair of terminals $(u,v)$ such that their corresponding vertices lie in $A_3\cup A_5$. We can join this pair by a path of length 2 through the vertex $a_3$ and complete the pairing process via induction that completes our proof. \end{proof} \end{comment} \begin{comment} -------------------------------------Need to re-write this proof------------------\\ For every pair between vertices in $\{a_3\}\cup A_2\cup A_3$ use the vertex $a_2$ to join them. If $(a_3,a_2) \in \mathcal{P}$ then just join them via the edge between them. Also if whenever $a_1$ is paired with some vertex $v_t$ in $A_2$, use the edge $(a_1,v_t)$ and similarly if $a_4$ is paired with a vertex in $A_4$. We split our analysis into three cases, regarding how $\mathcal{P}$ pairs the vertices in ${a_1,a_2,a_3,a_4}$. \begin{itemize} \item[i)]$(a_1,a_4)\in \mathcal{P}$; \\ To join $(a_1,a_4)$ use the path $(a_1,v_1,a_3,w_1,a_4)$. Then either $a_3$ is paired with $a_2$ (which we can solve) or $a_3$ is paired with something in $A_2\cup A_3$, say $v_i$, then use the path $a_3,v_k,v_{k-1},...,v_i$. \item[ii)] $(a_1,a_2) \in \mathcal{P}$; (when $(a_4,a_2) \in \mathcal{P}$, it is symmetric)\\ Firstly assume $a_3$ and $a_4$ are paired with vertices $v_{j_1},v_{j_2} \in A_2$ with $j_1\leq j_2$ (the other case is symmetric), respectively. Then use the path $(a_1,v_j,a_2)$ to join $a_1$ to $a_2$ and the path $(a_3,v_1,v_2,...,v_{j_1})$ to join $a_3$ and the path $(a_4,w_1,a_3,v_k,v_{k-1},...,v_{j_2})$. The case when $a_3$ is paired with a vertex $w_j \in A_4$ the argument works. If $a_3$ is paired with $a_4$ then use the path $(a_1,v_1,a_3,a_2)$ to join $a_1$ to $a_2$ and the path $(a_3,w_1,a_4)$. When $a_3$ is paired with some vertex $w_l \in A_4$ and $a_4$ is paired with some vertex $v_j \in A_2$ then, as before, use the path $(a_1,v_j,a_2)$ to join $a_1$ to $a_2$, use the path $(a_3,w_1,w_2,...,w_l)$ to join $a_3$ to $w_l$ and the path $(a_4,w_k,a_3,v_1,v_2,...,v_j)$. \item[iii)] $a_1$ and $a_4$ are paired with some vertex $w_j \in A_4$ and $ v_j \in A_2$, respectively. Then use the paths $(a_1,v_1,a_3,w_1,a_2,w_j)$ and $(a_2,w_k,a_3,v_k,a_1,w_j)$ to join $a_1$ to $w_j$ and $a_4$ to $v_j$, respectively. Now if $a_3$ is paired $w_l$ (or $v_l$) then join them via $(a_3,a_2,w_l)$ (or $(a_3,a_2,v_l)$). \end{itemize} Our case analysis exhausted all possibilities, so we proved our graph is path-pairable. \end{comment} \end{document}
\begin{document} \title{Real interpolation and transposition of certain function spaces} \author{by\\ Gilles Pisier\\ Texas A\&M University\\ College Station, TX 77843, U. S. A.\\ and\\ Universit\'e Paris VI\\ Equipe d'Analyse, Case 186, 75252\\ Paris Cedex 05, France} \date{} \maketitle \begin{abstract} Our starting point is a lemma due to Varopoulos. We give a different proof of a generalized form of this lemma, which yields an equivalent description of the $K$-functional for the interpolation couple $(X_0,X_1)$, where $X_0=L_{p_0,\infty}(\mu_1; L_q(\mu_2))$ and $X_1=L_{p_1,\infty}(\mu_2; L_q(\mu_1))$ with $0<q<p_0,p_1\le \infty$ and $(\Omega_1,\mu_1), (\Omega_2,\mu_2)$ arbitrary measure spaces. When $q=1$, this implies that the space $(X_0,X_1)_{\theta,\infty}$ ($0<\theta<1$) can be identified with a certain space of operators. We also give an extension of the Varopoulos Lemma to pairs (or finite families) of conditional expectations that seems of independent interest. The present paper is motivated by non-commutative applications that we choose to publish separately. \end{abstract} Motivated by certain non-commutative analogues of a Lemma due to Varopoulos \cite{Va}, we noticed an extension of his lemma that was overlooked in \cite{HP}. The main result is a dual characterization of the functions in the space \begin{equation}\label{real-eq1} L_{p_1,\infty} (\mu_1; L_q(\mu_2)) + L_{p_2,\infty}(\mu_2;L_q(\mu_1)) \end{equation} when $0<q<p_1,p_2\le \infty$ and $(\Omega_1,\mu_1), (\Omega_2,\mu_2)$ are arbitrary measure spaces. The equivalent condition for a (measurable) function $f\colon \ \Omega_1\times\Omega_2\to {\bb R}$ to belong to this space is \begin{equation}\label{real-eq2} \sup \int_{E_1\times E_2} |f| \ d\mu_1 d\mu_2 \left(\mu_1(E_1)^{\frac1q -\frac1{p_1}} + \mu_2(E_2)^{\frac1q-\frac1{p_2}}\right)^{-1} < \infty \end{equation} where the sup runs over all possible (measurable) subsets $E_j\subset\Omega_j$ $(j=1,2)$.\\ In \cite{HP} only the case $p_1=p_2$ is considered and the proof does not seem to extend further. This result extends to functions $f(\omega_1,\omega_2,\ldots,\omega_n)$ of any number of variables. The relevant space is then \[ \sum\nolimits^n_{j=1} L_{p_j,\infty}(\mu_j; L_q(\widehat\mu_j)) \] where $\widehat\mu_j = \mu_1 \times\cdots\times \mu_{j-1} \times \mu_{j+1} \times\cdots\times \mu_n$. Our main result can of course be formulated as a two-sided inequality expressing the equivalence of the norms appearing in \eqref{real-eq1} and \eqref{real-eq2}. When we write this inequality for the pair $(\mu_1,t\mu_2)$ with $t>0$, we obtain an equivalent form of the $K$-functional for the pair of spaces composing the sum in \eqref{real-eq1}. Using this expression for the $K$-functional, one derives a description for the real interpolation space \[ (X_1,X_2)_{\theta,\infty} \] when $X_j = L_{p_j,\infty}(\mu_j; L_q(\widehat\mu_j))$, $j=1,2$. The condition we find for this space is particularly striking: it is simply the finiteness of \[ \sup_{E_j\subset\Omega_j} \left(\int_{E_1\times E_2} |f|^q \ d\mu_1 d\mu_2 \right)^{1/q}(\mu_1(E_1)^{\alpha_1(1-\theta)} \mu_2(E_2)^{\alpha_2\theta})^{-1}, \] where $\alpha_j = \frac1q - \frac1{p_j}$. Let $(\Omega,\mu)$ be a measure space. Recall that the ``weak $L_p$'' space $L_{p,\infty}(\mu)$ is formed of all measurable functions $f\colon \ \Omega\to {\bb R}$ such that \[ \|f\|_{p,\infty} = \sup_{c>0} (c^p\mu\{|f|>c\})^{1/p} < \infty.
\] When $p>1$, the quasi-norm $\|\cdot\|_{p,\infty}$ is equivalent to the following norm \begin{equation}\label{99} \|f\|_{[p,\infty]} = \sup\left\{\int_E |f| \frac{d\mu}{\mu(E)^{1/p'}}\ \Big|\ E\subset \Omega\right\}. \end{equation} When $p>1$, $L_{p,\infty}(\mu)$ is the dual of the ``Lorentz space'' $L_{p',1}(\mu)$ that can be defined (see e.g. \cite{BL}) as formed of those $f$ such that \begin{equation}\label{100} [f]_{p,1} = \int^\infty_0 \mu\{|f|>c\}^{1/p} \ dc < \infty. \end{equation} For a Banach space valued function $f\colon \ \Omega\to B$, we set (for $p>1$) \begin{equation}\label{101} \|f\|_{L_{p,\infty}(\mu ; B)} = \sup_{E\subset\Omega} \int_E\|f\| d\mu \ \mu(E)^{-1/p}, \end{equation} and we denote by ${L_{p,\infty}(\mu ; B)}$ the space of functions in $L_1(\mu ; B)$ (in Bochner's sense) for which the latter norm is finite. \begin{rem}\label{rk-lo} Let $(\Omega,\mu)$ be any measure space. Let $f\colon \ \Omega\to {\bb R}$ be any measurable function. Note that \begin{equation}\label{re-eq11} |f| = \int^\infty_0 1_{\{|f|>c\}} dc = \int^\infty_0 \mu\{|f| >c\}^{1/p} \varphi_c \ dc \end{equation} where $\varphi_c = \mu\{|f|>c\}^{-1/p} 1_{\{|f|>c\}}$. Let ${\cl S}$ be the space of integrable step functions and let $T\colon \ {\cl S}\to B$ be a linear map into a Banach space. Then $T$ extends boundedly to $L_{p,1}(\mu)$ iff there is a constant $c$ such that for any measurable subset $E\subset\Omega$ and any $g$ in $L_\infty(\mu)$ we have \[ \|T(g1_E)\|_B \le c\mu(E)^{1/p}. \] This well known fact can be derived easily from \eqref{100}. Indeed, if $g=f|f|^{-1}$, we deduce from \eqref{re-eq11} that if $\|f\|_{p,1}\le 1$ then $f$ lies in the closed convex hull of the set $\{g\varphi_c\mid c>0\}$. \end{rem} We start by stating the Varopoulos lemma. In \cite{Va} he proved it for $\Omega_1=\Omega_2 = [1,\ldots, n]$ equipped with counting measure. Later on, some generalizations were given in \cite{HP}. \begin{lem}\label{varo} Consider $f\colon \ \Omega_1\times\Omega_2 \to {\bb R}$ measurable. Let $X_1=L_\infty(\mu_1; L_1(\mu_2))$ and $X_2= L_\infty(\mu_2;L_1(\mu_1))$. Consider the following two properties: \begin{itemize} \item[\rm (i)] There are $f_1\in X_1, f_2\in X_2$ such that \[ f = f_1+f_2 \quad \text{and}\quad \|f_1\|_{X_1} + \|f_2\|_{X_2}\le 1. \] \item[\rm (ii)] For any pair of measurable subsets $E_1\subset \Omega_1, E_2\subset\Omega_2$, we have \[ \int_{E_1\times E_2} |f| \ d\mu_1 d\mu_2 \le \mu_1(E_1) + \mu_2(E_2). \] Then (i) $\Rightarrow$ (ii) and (ii) implies (i) for the function $f/2$. \end{itemize} \end{lem} The Varopoulos proof, and that of \cite{HP} that mimics it, focus on the case when $f$ is an $n\times n$ matrix and use induction on $n$. We found a completely different very direct (but dual) proof as follows: \begin{proof} That (i) $\Rightarrow$ (ii) is obvious. Conversely, assume (ii). To show that $f/2$ satisfies (i), it suffices by duality to show that \begin{equation}\label{real-eq3} \left|\int fg\right| \le 2 \end{equation} for any $g\colon \ \Omega_1\times\Omega_2$ such that $\max\{\|g\|_{X^*_1}, \|g\|_{X^*_2}\}\le 1$, equivalently we may restrict to $g$ such that \[ \max\left\{\int \|g\|_{L_\infty(\mu_2)} d\mu_1, \int \|g\|_{L_\infty(\mu_1)} d\mu_2\right\} \le 1. \] Let \[ \alpha(\omega_1) = \|g(\omega_1,\omega_2)\|_{L_\infty(d\mu(\omega_2))} \quad \text{and}\quad \beta(\omega_2) = \|g(\omega_1,\omega_2)\|_{L_\infty(d\mu(\omega_1))}. \] We set $\varphi = \alpha\wedge\beta$ (where $(\alpha\wedge\beta)(t) \overset{\text{def}}{=} \min(\alpha(t), \beta(t))$). 
We have \[ g = \varphi\cdot \widetilde g \] where $\|\widetilde g\|_{L_\infty(\mu_1\times \mu_2)} \le 1$. We now use $\alpha\wedge \beta = \int^\infty_0 1_{\{\alpha\wedge \beta>c\}} dc$; this gives us \[ \alpha\wedge\beta = \int^\infty_0 1_{E_c\times F_c} \ dc \] where $E_c = \{\alpha>c\}$ and $F_c = \{\beta>c\}$. We can rewrite this as \[ \alpha\wedge\beta = \int^\infty_0 (\mu_1(E_c) + \mu_2(F_c) )\varphi_c\ dc \] where $\varphi_c = (\mu_1(E_c) + \mu_2(F_c))^{-1} 1_{E_c\times F_c}$. Then, observing that $\int^\infty_0 (\mu_1(E_c) + \mu_2(F_c))\, dc= \int \alpha\, d\mu_1 + \int \beta \ d\mu_2 \le 2$, we find \[ \int |fg| \ d\mu_1 d\mu_2 = \int |f|(\alpha\wedge\beta)|\widetilde g| \ d\mu_1 d\mu_2 \le 2 \sup_c \int|f|\varphi_c \ d\mu_1 d\mu_2 \] but (ii) implies $\int |f|\varphi_c\le 1$, so we obtain \eqref{real-eq3}. \end{proof} \begin{rk} We could not find a modified formulation avoiding the presence of some extra factor (here equal to 2) spoiling the equivalence in Lemma \ref{varo}, but this might exist. \end{rk} Our extension of the Varopoulos result is based on the following \begin{lem}\label{lem100} Let $1<p_1,p_2<\infty$. Let $f\colon \ \Omega_1\times\Omega_2\to {\bb R}$ be measurable such that \[ \sup_{E_j\subset \Omega_j}\left\{\int_{E_1\times E_2} |f| \ d\mu_1 d\mu_2 (\mu_1(E_1)^{1/p'_1} + \mu_2(E_2)^{1/p'_2})^{-1}\right\} \le 1. \] Then there is a decomposition \[ f=f_1+f_2 \] with $\|f_1\|_{L_{p_1,\infty}(\mu_1;L_1(\mu_2))} + \|f_2\|_{L_{p_2,\infty}(\mu_2; L_1(\mu_1))}\le 2$. \end{lem} \begin{proof} We proceed exactly as in the above proof. By duality it suffices to show \[ \int |fg|\le 2 \] for any $g\colon \ \Omega_1\times\Omega_2$ such that \[ \max\{\|g\|_{L_{p'_1,1}(\mu_1,L_1(\mu_2))}, \|g\|_{L_{p'_2,1}(\mu_2,L_1(\mu_1))}\} \le 1. \] Let $\alpha$ and $\beta$ be as before. We now have \[ \max\left\{\int^\infty_0 \mu_1\{\alpha>c\}^{1/p'_1} dc, \int^\infty_0 \mu_2\{\beta>c\}^{1/p'_2} dc\right\}\le 1. \] We can thus write \[ \alpha\wedge\beta = \int^\infty_0 (\mu_1(E_c)^{1/p'_1} + \mu_2(F_c)^{1/p'_2})\psi_c \ dc \] where $\psi_c = (\mu_1(E_c)^{1/p'_1} + \mu_2(F_c)^{1/p'_2})^{-1} 1_{E_c\times F_c}$. Since our assumption implies \[ \int |f|\psi_c \ d\mu_1 d\mu_2 \le 1 \] we conclude as before that \[ \int |fg| \, d\mu_1 d\mu_2 \le 2 \sup_c \int |f|\psi_c \le 2. \] \end{proof} \begin{rem}\label{rem-q} Let $0<q<\infty$. The preceding Lemma applied to $|f|^{q}$ gives us a sufficient condition for $f$ to decompose as a sum $f=f_1+f_2$ with \[ f_1\in L_{qp_1,\infty} (\mu_1; L_q(\mu_2)), \quad f_2\in L_{qp_2,\infty}(\mu_2;L_q(\mu_1)). \] \end{rem} We now give applications to the real interpolation method. We refer to \cite{BL} for all undefined terms. We just recall that if $(A_0,A_1)$ is a compatible couple of Banach spaces, then for any $x\in A_0+A_1$ the $K$-functional is defined by $$\forall t>0\qquad K_t(x;A_0,A_1)= \inf \big\{\|x_0\|_{A_0}+t\|x_1\|_{A_1}\ \big| \ x=x_0+x_1,\ x_0\in A_0,\ x_1\in A_1\big\}. $$ Recall that the (``real'' or ``Lions-Peetre'' interpolation) space $(A_0,A_1)_{\theta,q}$ is defined, for $0<\theta<1$ and $1\le q\le \infty$, as the space of all $x$ in $A_0+A_1$ such that $\|x\|_{\theta,q} <\infty$ where $$\|x\|_{\theta,q} =\left(\int_0^\infty (t^{-\theta}K_t(x;A_0,A_1))^{q} \, \frac{dt}{t}\right)^{1/q},$$ with the usual convention when $q=\infty$. \begin{thm}\label{thm8} Let $f\colon \ \Omega_1\times\Omega_2\to {\bb R}$ be measurable. Let $0<q<p_1,p_2\le \infty$.
For simplicity of notation, let \[ K_t(f) =K_t(f; L_{p_1,\infty}(\mu_1; L_q(\mu_2)), L_{p_2,\infty}(\mu_2; L_q(\mu_1)). \] Then there are positive constants $c,C$ depending only on $q,p_1,p_2$ such that \begin{equation} ck_t(f) \le K_t(f) \le Ck_t(f)\tag*{$\forall t>0$} \end{equation} where \begin{equation}\label{re-eq10} k_t(f) = \sup\left\{\left(\int_{E_1\times E_2} |f|^q\ d\mu_1 d\mu_2\right)^{1/q} (\mu _1(E_1)^{\frac1q-\frac1{p_1}} + t^{-1}\mu(E_2)^{\frac1q-\frac1{p_2}})^{-1}\right\}. \end{equation} \end{thm} \begin{proof} Assume first that $q=1$. Let $X_1=L_{p_1,\infty}(\mu_1; L_1(\mu_2))$, $X_2 = L_{p_2,\infty}(\mu_2; L_1(\mu_1))$. The Lemma applied to the pair $(\mu_1,t\mu_2)$ in place of the pair $(\mu_1,\mu_2)$ and then dividing the result by $t$ shows that $t^{-1} \inf\limits_{f=f_1+f_2}\{t\|f_1\|_{X_1} + t^{1/p_2}\|f_2\|_{X_2}\}$ is equivalent to \[ \sup_{E_1,E_2} \left\{\int_{E_1\times E_2} |f|\ d\mu_1 d\mu_2(\mu_1(E_1)^{1/p'_1} + t^{1/p'_2}\mu(E_2)^{1/p'_2})^{-1}\right\}. \] Thus we find that this last expression is equivalent to $K_s(f)$ for $s=t^{\frac1{p_2}-1}=t^{-\frac1{p'_2}}$. If we now change variable from $t$ to $s$ we find $K_s(f)$ equivalent to \[ \sup_{E_1,E_2} \left\{\iint_{E_1\times E_2} |f|\ d\mu_1 d\mu_2 (\mu_1(E_1)^{1/p'_1} + s^{-1}\mu_2(E_2)^{1/p'_2})\right\}. \] If we now apply the case $q=1$ that we just verified with $|f|^q$ in place of $|f|$, and use Remark \ref{rem-q}, we find an equivalent for $K_s(|f|^q; X_1,X_2)$. The result then follows from elementary calculations: since \[ (K_s(|f|^q; X_1,X_2))^{1/q} \simeq K_{s^{1/q}}(f; L_{qp_1,\infty}(\mu_1; L_q(\mu_2)), L_{qp_2,\infty}(\mu_2; L_q(\mu_1))), \] we find that $K_{s^{1/q}}(f; L_{qp_1,\infty}(\mu_1; L_q(\mu_2)), L_{qp_2,\infty}(\mu_2; L_q(\mu_1)))$ is equivalent to \[ \sup_{E_1,E_2} \left\{ \left(\int_{E_1\times E_2} |f|^q\ d\mu_1 d\mu_2\right)^{1/q}(\mu_1(E_1)^{1/qp'_1} + s^{-1/q}\mu(E_2)^{1/qp'_2})^{-1}\right\}. \] The final adjustment consists in replacing $(qp_1,qp_2)$ by $(p_1,p_2)$ and ${s^{1/q}}$ by $t$, we then find \eqref{re-eq10}. \end{proof} \begin{cor}\label{cor9} Let $f$ be as in Theorem~\ref{thm8}. Then $f\in (L_{p_1,\infty}(\mu_1; L_q(\mu_2)), L_{p_2,\infty}(\mu_2; L_q(\mu_1)))_{\theta,\infty}$ iff \[ \sup_{E_1,E_2\subset\Omega} \left(\int_{E_1\times E_2} |f|^q\ d\mu_1 d\mu_2\right)^{\frac1q} \cdot (\mu_1(E_1)^{(1-\theta)\alpha_1} \mu(E_2)^{\theta\alpha_2})^{-1} < \infty \] where $\alpha_j = \frac1q-\frac1{p_j}$ $(j=1,2)$. The corresponding norms are equivalent. When $q=1$, this condition means that the operator admitting $|f|$ as its kernel, namely the operator \[ T_{|f|}\colon \ g\to \int|f(x,y)| g(y) d\mu_2(y) \] is bounded from $L_{r,1}(\mu_2)$ to $L_{s,\infty}(\mu_1)$ where $\frac1r = \theta\alpha_2 = \frac\theta{p'_2}$ and $\frac1{s'} = 1 - \frac1s = \frac{(1-\theta)}{p'_1}$. \end{cor} \begin{proof} The first part is clear using the definition of the norm in $(~~,~~)_{\theta,\infty}$ and the identity: \begin{equation}\label{re-eq10a} \forall a_0,a_1>0\qquad\qquad a^{1-\theta}_0a^\theta_1 = \inf_{t>0} \{(1-\theta) a_0t^\theta + \theta a_1t^{\theta-1}\}. \end{equation} The second part follows from Remark \ref{rk-lo} \end{proof} \begin{rem}\label{rem19} Let us say that an operator $T$ from a Lorentz space $X$ to another one $Y$ is regular if it is a linear combination of positive (i.e. preserving positivity) bounded operators from $X$ to $Y$. In the complex case this means that $T$ can be decomposed as $T=T_1-T_2 +i(T_3-T_4 ) $ with all $T_j$'s positive and bounded from $X$ to $Y$. 
(In the real case the imaginary part can be ignored). We will say that $T$ is a kernel operator if it is defined using a scalar kernel $f$ that is measurable on $\Omega_1\times \Omega_2$. It is easy to check that a kernel operator $T$ is regular from $L_{r,1}(\mu_2)$ to $L_{s,\infty}(\mu_1)$ iff its kernel $f$ is such that \[ T_{|f|}\colon \ g\to \int|f(x,y)| g(y) d\mu_2(y) \] is bounded from $L_{r,1}(\mu_2)$ to $L_{s,\infty}(\mu_1)$. In the real case, this means equivalently that there is a positive operator $S$ such that $\pm T\le S$ that is bounded from $L_{r,1}(\mu_2)$ to $L_{s,\infty}(\mu_1)$. It is worthwhile to observe that $$\|T_{|f|}\colon\ L_\infty(\mu_2)\to L_{p_1,\infty}(\mu_1)\|=\|f\|_{L_{p_1,\infty}(\mu_1; L_1(\mu_2))}$$ and $$\|T_{|f|}\colon\ L_{p'_1,1}(\mu_2) \to L_1(\mu_1) \|=\|f\|_{L_{p_2,\infty}(\mu_2; L_1(\mu_1))}.$$ Let $B_0$ (resp. $B_1$) be the space of regular kernel operators $T \colon\ L_\infty(\mu_2)\to L_{p_1,\infty}(\mu_1)$ (resp. $T \colon\ L_{p'_1,1}(\mu_2) \to L_1(\mu_1) $). Then the preceding Corollary can be interpreted, when $q=1$, as the identification of $(B_0,B_1)_{\theta,\infty}$ with the space of regular kernel operators $T \colon\ L_{r,1}(\mu_2) \to L_{s,\infty}(\mu_1) $, with $\frac 1r=\frac {1-\theta}{\infty}+\frac{\theta}{p_1'}$ and $ \frac 1s=\frac {1-\theta}{p_1}+\frac{\theta}{1}$. \end{rem} The extension of these results to functions $f$ of $n$ variables is immediate (note that the constant $2$ becomes $n$). We merely state the main point. Consider measure spaces $(\Omega_j;\mu_j)$ $1\le j\le n$ and a measurable function $f\colon \ \Omega_1\times\cdots\times \Omega_n \to {\bb R}$. Let $0 < q < p_1,\ldots, p_n\le \infty$. Let $Y_j = L_{p_j,\infty}(\mu_j, L_q(\widehat\mu_j))$. Then $f\in Y_1 +\cdots+ Y_n$ iff \[ \sup_{E_j\subset\Omega_j} \left(\int_{E_1\times\cdots\times E_n} |f|^q d\mu_1\ldots d\mu_n\right)^{1/q} (\mu_1(E_1)^{\alpha_1} +\cdots+ \mu_n(E_n)^{\alpha_n})^{-1} < \infty \] where $\alpha_j = \frac1q-\frac1{p_j}$. Replacing each $\mu_j$ by $ s_j \mu_j$ ($s_j>0$) and then setting $t_j=s_j^{-\alpha_j}$ we find that there are constants $c,C>0$ such that the generalized $K$-functional $$K(t_1,\cdots,t_n)=\inf\{ t_1\|x_1\|_{Y_1}+\cdots +t_n \|x_n\|_{Y_n}\mid x=x_1+\cdots+x_n\}$$ is equivalent (with constants independent of $t=(t_j)$) to $$ \sup_{E_j\subset\Omega_j} \left(\int_{E_1\times\cdots\times E_n} |f|^q d\mu_1\ldots d\mu_n\right)^{1/q} (t_1^{-1}\mu_1(E_1)^{\alpha_1} +\cdots+ t_n^{-1}\mu_n(E_n)^{\alpha_n})^{-1} .$$ This gives a new example where the $K$-functionals considered in \cite{S} (see also \cite{CP}) can be computed, at least up to equivalence. \begin{rem}\label{rem20} The preceding Lemma \ref{lem100} can be reformulated as showing the following implication:\ If \begin{equation} \int_{E_1\times E_2} |f| \le \mu_1(E_1)^{1/p'_1} + \mu_2(E_2)^{1/p'_2}, \tag*{$\forall E_j\subset \Omega_j$} \end{equation} then there is a decomposition $f=f_1+f_2$ with $f_1,f_2$ such that \begin{equation} \int_{E_1\times E_2} |f| \le 2\mu_1(E_1)^{1/p'_1} \quad \text{and}\quad \int_{E_1\times E_2} |f_2| \le 2\mu_2(E_2)^{1/p'_2}. \tag*{$\forall E_j\subset \Omega_j$} \end{equation} Except for the factor 2, this resembles very much the kind of statements that are usually proved by the Hahn--Banach theorem, but we do not see how to prove it in this way. More generally, the above simple minded argument extends to the spaces originally introduced by G.G. Lorentz who denoted them by $\Lambda(\varphi)$ in \cite{Lo}. 
Here $\varphi$ is a non-negative decreasing (meaning non-increasing) function on an interval of the real line equipped with Lebesgue measure. We denote by $\Phi$ the primitive of $\varphi$ that vanishes at the origin (so $\Phi$ is concave and increasing). One can obtain a generalization of the preceding decomposition to the case when $x\mapsto x^{1/p_1'}$ and $x\mapsto x^{1/p_2'}$ are replaced by two such functions $x\mapsto \Phi_1(x)$ and $x\mapsto \Phi_2(x)$. \end{rem} There is a generalization of Lemma \ref{varo} to functions of not necessarily independent variables that seems to be of independent interest, as follows. Consider two conditional expectation operators ${\bb E}_j\colon \ L_1(\mu)\to L_1(\mu)$ $(j=1,2)$ on a measure space $(\Omega,{\cl A},\mu)$. For simplicity, we assume that $\mu(\Omega)=1$ but this is not really essential. Then let ${\cl B}_j\subset {\cl A}$ be the $\sigma$-subalgebra that is fixed by ${\bb E}_j$. Let $C_j$ be the space of measurable scalar valued functions $x$ on $(\Omega,\mu)$ such that \begin{equation}\label{eq200} \|x\|_{C_j} \overset{\text{def}}{=} \|{\bb E}_j(|x|)\|_\infty <\infty. \end{equation} We view $(C_1,C_2)$ as a compatible pair of Banach spaces. Consider $x\in C_1+C_2$ with $\|x\|_{C_1+C_2} \le 1$. Then a simple verification shows that for any pair of subsets $E_j\subset\Omega$ $(j=1,2)$, with $E_j$ assumed ${\cl B}_j$-measurable, we have \begin{equation}\label{eq100} \int_{E_1\cap E_2} |x| d\mu \le \mu(E_1) + \mu(E_2). \end{equation} Conversely, the above proof of Lemma \ref{varo} shows that \eqref{eq100} implies that \[ \|x\|_{C_1+C_2}\le 2. \] Indeed, we may run the same duality argument:\ consider $y$ such that $|y|\le f_j$ $(j=1,2)$ for some ${\cl B}_j$-measurable $f_j$ with $\|f_j\|_1\le 1$. Then we have $y=(f_1\wedge f_2)\widehat y$ with $\|\widehat y\|_\infty \le 1$ and \[ f_1\wedge f_2 = \int^\infty_0 1_{E^c_1\cap E^c_2} \ dc = \int^\infty_0 (\mu(E^c_1)+ \mu(E^c_2))\varphi_c \ dc \] with $E^c_j = \{f_j>c\}$ and $$\varphi_c = 1_{E^c_1\cap E^c_2}(\mu(E^c_1) +\mu(E^c_2))^{-1}.$$ Thus we find that \eqref{eq100} implies $\int|xy| d\mu\le 2$, and we conclude by duality. The same argument works for any number of conditional expectations. In terms of operators and kernels, the norm \eqref{eq200} can be described as follows: we associate to $x\in C_j$ the operator $M_x$ of multiplication by $x$ on $L_1({\cl A},\mu)$. We then have \[ \|x\|_{C_j} = \|M_x\colon \ L_1({\cl B}_j,\mu)\to L_1({\cl A},\mu)\|. \] We leave the extension of Lemma~\ref{lem100} to the reader. In a separate paper \cite{P5}, we give non-commutative generalizations of the preceding results. \end{document}
\begin{document} \maketitle \begin{abstract} We show that there is no iterated identity satisfied by all finite groups. For a non-trivial word $w$ of length $l$, we show that there exists a finite group $G$ of cardinality at most $\exp(l^C)$ which does not satisfy the iterated identity $w$. The proof uses the approach of Borisov and Sapir, who used dynamics of polynomial mappings for the proof of non-residual finiteness of some groups. \end{abstract} \section{Introduction} It is well-known and not difficult to see that there is no non-trivial group identity which is satisfied by all finite groups. We strengthen this fact by showing that there is no {\it iterated} group identity which is satisfied by all finite groups, and we construct a group violating a given iterated identity, providing an upper bound for the cardinality of this group. We recall the definition of iterated identity from \cite{erschleriterated}. We say that a group $G$ satisfies {\it an Engel type iterated identity} $w$ if for any $x_1, \dots, x_m \in G$ there exists $n$ such that \begin{equation} \label{eq:iterated} w_{\circ n}(x_1, x_2, \dots, x_m) =w(w(\dots(w(x_1, x_2, \dots, x_m), x_2, \dots, x_m), x_2, \dots, x_m))=e. \end{equation} In the sequel, we refer to Engel type iterated identities simply as {\it iterated identities}. For definitions other than that of Engel type see \cite{erschleriterated}. Whether a group satisfies an iterated identity depends only on the element of the free group represented by the word; in other words, the property of satisfying an iterated identity does not change if we replace a word by a freely equivalent one. In particular, any group satisfies $w$ if $w$ is freely equivalent to the empty word. The definition of iterated identities is close to the notion of ``correct sequences'', studied by Plotkin, Bandman, Greuel, Grunewald, Kuniavskii, Pfister, Guralnick and Shalev in \cite{plotkin,bandmanetal,guralnickplotkinshalev}. Examples of such sequences, without this terminology, were previously constructed by Brandl and Wilson \cite{brandlwilson}, Bray and Wilson \cite{braywilson} and Ribnere to characterize finite solvable groups. See also \cite{grunewaldkuplotkin}. For some groups and some classes of groups the a priori unbounded number of iterations in the definition of an iterated identity is essential. This is for example the case for the first Grigorchuk group, which is a $2$-torsion group \cite{grigorchuk}, that is, it satisfies the iterated identity $w(x_1)=x_1^2$, yet this group does not satisfy any identity by a result of Abert \cite{abert}. For some other groups the number of such iterations is bounded for all $x_1, \dots, x_m$; a strong version of this phenomenon is when such a bound does not depend on the iterated identity $w$, as is for example the case for any finitely generated metabelian group \cite{erschleriterated}. \begin{thm} \label{thm:noid} Let $w(x_1, \dots, x_m)$ be a word (on $m$ letters, $m\ge1$) which is not freely equivalent to the empty word. Then there exists a finite group $G$ such that $G$ does not satisfy the iterated identity $w$. Moreover, there exists $C>0$ such that for any $m\ge 1$ and any word $w$ on $m$ letters the group $G$ can be chosen to have at most $ \exp(l^{C})$ elements, where $l=l(w)$ is the length of the word $w$. \end{thm} The upper bound on the cardinality of the finite group in the second part of the theorem might not be optimal. One can ask whether one can replace $\exp(l^{C})$ by $l^C$.
For related questions see also Section \ref{se:openquestions}. A standard argument to show that there is no identity for all finite groups is to observe that free non-Abelian groups are residually finite, and to conclude that if $w$ is an identity satisfied by all finite groups, then the free group $F_2$ also satisfies $w$. Observe that this argument does not work for iterated identities. Indeed, free groups are residually nilpotent, however every nilpotent group satisfies the iterated identity $w(x_1, x_2)= [x_1, x_2]$, while a free group does not satisfy any non-trivial iterated identity. To prove the theorem, we show that for any word $w$ on $x_1$, $x_2$ representing an element in the commutator subgroup of $F_2$ there exists $n \ge 1$ such that $w_{\circ n}(x_1, x_2) =x_1$ admits a solution with $x_1\ne 1$ in some finite group. Here $w_{\circ n}$ is as defined in the equation $1$. Much progress has been achieved in recent years in understanding the image of the verbal mapping from $G^n \to G$ $(x_1, \dots, x_n) \to w(x_1, \dots, x_n)$. Larsen, Shalev and Tiep prove in \cite{larsenshalevtiep} that for any word $w$ and for any sufficiently large finite simple non-Abelian group $w(G^n)w(G^n)= G$, that is, for any $g\in G$ there exists $x_1, \dots, x_n \in G$ and $x'_1, \dots, x'_n \in G$ such that $w(x_1, \dots, x_n) w(x'_1, \dots, x'_n) =g$. Moreover, for some words $w$ such verbal mapping turn out to be surjective. Libeck, O'Brien , Shalev and Tiep \cite{liebecketal}, proving the Ore conjecture, show that this is the case for $w(x_1,x_2)=[x_1,x_2 ]$ and any finite simple non-Abelian group. Observe however that the image of the mappings from $G^n \to G^n$ which sends $(x_1, \dots, x_n)$ to $\left( w(x_1, \dots, x_n), x_2, x_3, \dots, x_n \right)$ is far from being surjective, and the structure of periodic points for such mappings seems to be less understood. To solve the equation $w_{\circ m}(x_1, x_2) =x_1$, we use the idea and the result of Borisov and Sapir from \cite{borisovsapir}, who use {\it quasi-periodic} points of polynomial mappings to prove non-residual finiteness of some one relator groups, namely of what is called {\it mapping tori} (also called ascending $HNN$ extensions) of injective group endomorphisms: those are groups of the form $(x_1, x_2, \dots , x_k, t | R, tx_it^{-1}=w_i, i \le i \le k)$, where $x_i \to w_i$ is an injective endomorphism of the group $(x_1, x_2, \dots, x_i | R)$. In contrast with mapping tori of groups endomorphisms, general one relator groups are not necessary residually finite, and it is a long standing problem to characterize residually finite one relator groups. A conjecture of Baumslag, proven by Wise in \cite{wise} states that one relator group containing a non-trivial torsion element is residually finite. The situation for groups without torsion elements is less understood. Consider a sequence of the one relator group $G_m=[x_1,x_2: w_{\circ m}(x_1, x_2) =x_1]$. If a finite quotient of a group $G_m$ is such that the image of $x_1$ is not equal to one in this finite quotient, then in this finite quotient the image of $x_1$ is a non-fixed periodic point for the verbal map $x \to w(x,y)$, for a fixed $y$. We will construct finite quotients of groups $G_m$ as subgroups of $SL(2,\mathcal{K})$, for an appropriately chosen finite field $\mathcal{K}$. In Section \ref{se:moduloBS} we outline the proof of the theorem and prove its first claim. 
To do this, we choose a two-times-two integer valued matrix $y_0$ which can be one of the free generators of a free non-Abelian subgroup in $SL(2, \mathbb{C})$, regard $w(x,y_0)$ as a function of $x$, observe that the entries of $w(x,y_0)$ are rational functions $R_{i_1, i_2}(x_{1,1}, x_{1,2},x_{2,1},x_{2,2})$ in $x_{1,1}$, $x_{1,2}$, $x_{2,1}$, $x_{2,2}$. Multiplying by a power of the determinant of the matrix for $x$, we get polynomials $H_{i_1, i_2}(x_{1,1}, x_{1,2},x_{2,1},x_{2,2})$. We will need to check that the system of equations $H_{i_1, i_2}(x_{1,1},x_{1,2}, x_{2,1}, x_{2,2})= x_{i_1,i_2}^Q$ satisfies the assumption of Theorem $3.2$ of \cite{borisovsapir} and we aplly this theorem to solve this system of equations, assuming that $q$ is a large enough prime and $Q$ is a large power of $q$. We check that the image on the $4$-th iteration of the polynomial mapping in question contains at least one point with non-zero determinant, and such that the matrix is not a diagonal matrix. Hence we obtain at least one non-trivial solution of the system of the equations for $H_{i_1,i_2}$, which is non-diagonal matrix with non-zero determinant. Normalizing, if necessary, this solution by the square of the determinant of the corresponding matrix, we will obtain a non-identity solution for the system of the equations $R_{i_1, i_2}(x_{1,1},x_{1,2}, x_{2,1}, x_{2,2})= x_{i_1,i_2}^Q$. This solution belongs to a finite extension of $F_q$. Such solution provides a non-identity periodic point $x$ for iteration of $w$, for some $m\ge 1$, and this implies in particular that $w$ is not an iterated identity in the subgroup generated by $x$ and $y_0$ in $SL(2,k)$. In Section \ref{se:secondpart} we obtain a bound for the cardinality of a subgroup $SL(2, \mathcal{K})$. To do this, we need to control the cardinality of the finite field $\mathcal{K}$. For this purpose, instead of using Theorem $3.2$ of of \cite{borisovsapir}, we prove and use a version of that theorem, see Theorem \ref{thm:borisovsapir}. Given $n$ polynomials $f_i$ on $n$ variables over a finite field, and a polynomial $D_0$, this theorem provides a lower bound for $Q$ in terms of degree of these polynomials with the following property. If $D_0$ is equal to zero on any solution over algebraic closure of $F_q$ of the system of equations $$ f_i(x_1, \dots, x_n)= x_i^Q, $$ then $D$ is equal to zero on any point of the $n$-th iteration of the polynomial mapping $f=(f_1, \dots, f_n)$. To prove this theorem we follow the strategy of the proof of Borisov Sapir, the main ingredient of the proof is Lemma 3.4, which provides an explicit estimate for Lemma 3:5 in [11]. Given a solution $a_1, \dots, a_n$ of the system of the equations, this lemma provides an estimate for $k$ such that $(f_i^{(n)}-{\rm Const})^k$ belongs to the localisation at $(a_1, a_2, \dots, a_n)$ of the ideal generated by $f_i(x_1, \dots, x_n)- x_i^Q$, for each $i$. Here $f_1^{(n)}, \dots, f^{(n)}_n$ is the $n$-th iteration of the polynomial mapping $f$. As the last step of the proof, rather than using one of two possible arguments used in \cite{borisovsapir}, we make use of the fact that the polynomials $H^{(4)}_i- x_i^Q$ form a {\it Gr{\"o}bner basis} with respect to Graded Lex order. In section \ref{se:generaldynamics} we give a more general version of Theorem \ref{thm:noid}, where instead of iterations on one variable we consider iterations of verbal mappings on several variables. 
Given an endomorphism $\phi$ of a free group $F_n$, $\phi_{\circ m}$ denotes the $m$-th iteration of $\phi$ and $H_n$ denotes the kernel of $\phi_{\circ n}$. It is clear that $H_n$ is a normal subgroup, and $F_n/H_n$ is isomorphic to the image of $\phi_{\circ n}$, this image is isomorphic to a subgroup of $F_n$, and thus is is finitetly generated free group, $F_n/H_n$. \begin{thm} \label{thm:borsapmol} Let $\phi$ be an endomorphism of a free group. For any $g \in F_n$, $g\notin H_n$, there exist a finite quotient group $G$ of $F_n$ such that $\phi(g) \ne e$, where $\phi$ is the projection map from $F_n$ to $G$ and such that $\phi$ induces an automorphsim of $G$. Moreover, we can choose $G$ as above of cardinality at most $\exp(L^{C_n})$, wrere $L= \sum_{i=1}^n \phi(x_i)$, $x_i$ is a free generating set of $F_n$, and $C_n$ is a positive constant depending on $n$. \end{thm} \subsection*{Aknowledgements.} The authors are grateful to Boris Kunyavsky, Evgeny Plotkin and Mark Sapir for comments on the preliminary version of the paper. We would like to thank Andreas Thom for informing us about an impoved estimate for the result of \cite{bradfordthom}, Alexander Borisov for drawing our attention to \cite{varshavsky}. \section{Idea of the proof of Theorem \ref{thm:noid} and the proof of its first claim. } \label{se:moduloBS} We start with a not difficult lemma that shows that it is enough to consider only iterated identities on two letters. \begin{lem} Suppose that a class of groups does not satisfy any non-trivial iterated on two letters. Then is no iterated identity satisfied by this class of groups. \end{lem} {\bf Proof.} Suppose that $\bar{w}(x_1, x_2, \dots , x_m)$ is an iterated identity and $w$ is not freely equivalent to an empty word. Choose $u_2 (x,y)$, \dots, $u_m(x,y)$ and put $$ w(x_1, x_2) = \bar{w}(x_1, u_2(x_1, x_2),u_m(x_1, x_2)). $$ It is clear that $w$ is an iterated identity. Now suppose that $u_2$, \dots , $u_n$ are such that $x_1, u_2 (x_1, x_2)$, ..., $u_m(x_1, x_2)$ generate a free group of rank $m$ in the free group generated by $x_1$ and $x_2$. Observe that in this case $w$ is not freely equivalent an empty word, and thus $w$ is a non-trivial iterated identity on two letters. Take a non-trivial word $w(x,y)$ on two letters. The following obvious remark shows that for the proof of the theorem it is enough to consider words representing elements in the commutator subgroup of $F_2$. \begin{rem} \label{obviousremark} If $w$ is a word on $x,y$ which depends on $y$ only, then $w$ is freely equivalent to $y^m$, $m \ne 1$. If $m\ne 1$ and $M>m$ is relatively prime with $m$, then $w$ is not iterated identity in a finite cyclic group of $M$ elements. More generally, if $w$ is a word on $x,y$ which does not belong to the commutator group $[F_2,F_2]$, then $w(x,y)=x^my^k\bar{w}$, where at least one of $k$ and $m$ is not equal to $0$, and $\bar{w}$ is a word representing an element of the commutator group. Considering the iterated values of $w(x,y)$ in $0,y$ and $0,x$ we conclude that $w$ is not iterated identity in a finite cyclic group of $M$ elements, for any $M$ which is relatively prime with $m$. \end{rem} In the sequel we assume that $w$ is a word on two letters representing an element of the commutator subgroup of $F_2$. 
Now consider two times two matrices $x$ and $y$ \[ x= \left( \begin{array}{ccc} x_{1,1} & x_{1,2} \\ x_{2,1} & x_{2,2} \end{array} \right) \mbox{ and } y= \left( \begin{array}{ccc} y_{1,1} & y_{1,2} \\ y_{2,1} & y_{2,2} \end{array} \right)\] \begin{convention}\label{conv:freesubgroup} We assume that $y_{i_1,i_2} \subset \mathbb{Z}$, $i_1, i_2=1,2$, are such that for some choice of $x_i$ in $\mathbb{C}$, the group generated by the matrices $x$ and $y$ is free, $y$ is in $SL(2, \mathbb{R})$. For example, one can take \[ y= \left( \begin{array}{ccc} 1 & 0 \\ 1 & 1 \end{array} \right)\] \end{convention} \begin{convention}\label{strongerconv:freesubgroup} We asume that $y_i \subset \mathbb{Z}$, $1 \le i \le 4$, are such that for some choice of $x_i$ in $\mathbb{Z}$, the group generated by the matrices $x$ and $y$ is free and $x,y$ are in $SL(2,\mathbb{Z})$. For example, one can take \[ y= \left( \begin{array}{ccc} 1 & 0 \\ 2 & 1 \end{array} \right)\] \end{convention} First observe that we can chose $y_i$ as in the convention \ref{strongerconv:freesubgroup}, since $SL(2,\mathbb{Z})$ is virtually free, and in particular this group contains free subgroups. For example take any integer $m\ge 2$ (e.g. $m=2$), put $\alpha=\beta=m$ and consider \[ x= \left( \begin{array}{ccc} 1 & \alpha \\ 0 & 1 \end{array} \right), \mbox{ and } y= \left( \begin{array}{ccc} 1 & 0 \\ \beta & 1 \end{array} \right).\] The subgroup generated by such $x$ and $y$ is free whenever $\alpha=\beta\ge 2$ are positive integers, see e.g. Theorem $14.2.1$ in \cite{kargapolovmerzljakov}. (For $\alpha= \beta=2$ this subgroup is called Sanov subgroup). Moreover, it is easy to see that the subgroup generated by $x$ and $y$ depends up to an isomorphism only on the product $\alpha \beta$ \cite{changetal}; this group is also free for any $\alpha$ and $\beta$ such that $\alpha \beta$ is transcendental \cite{fuksrabinovich,changetal} (the group is known to be free for example for any complex $\alpha$, $\beta$ such that $|\alpha \beta |, |\alpha \beta -2| >2, |\alpha \beta +2|$ \cite{changetal}, but apparently it is not known in general when it is free). In particular, $y$ for $\beta=1$ satisfies the assumption of the Convention \ref{conv:freesubgroup}, since it is sufficient to consider $x$ as above with $\alpha$ which is transcendental. While for the proof of Theorem 1 any matrix $y$ as in solution $2$ will suffice, in case we want to get more information about finite groups we construct, it might be interesting to start with various choices of $y$ (for some open questions see Section 5). Now we fix integers $y_{i_1, i_2}$, $i_1, i_2=1,2$ as in Convention \ref{conv:freesubgroup}. Note that \[ x^{-1}= \frac{1}{x_{1,1}x_{2,2}-x_{1,2}x_{2,1}}\left( \begin{array}{ccc} x_{2,2} & -x_{1,2} \\ -x_{2,1} & x_{1,1} \end{array} \right) \] Observe that \[ w(x,y)= \left( \begin{array}{ccc} R_{1,1}(x_{i_1, i_2},y_{i_1,i_2}) & R_{1,2}(x_{i_1,i_2},y_{i_1,i_2}) \\ R_{2,1} (x_{i_1,i_2},y_{i_1,i_2})& R_{2,2}(x_{i_1, i_2},y_{i_1, i_2}), \end{array} \right)\] where $R_{j_1,j_2}$, $j_1, j_2 =1,2$ are rational functions in $x_{i_1,i_2},y_{i_1, i_2}$, $i_1, i_2 =1,2$ with integer coefficients. We consider fixed integers $y_{i_1, i_2}$, with $y\in SL(2, \mathbb{Z})$ (e.g. $y_{1,1}=1$, $y_{1,2}=0$, $y_{2,1}=2$, $y_{2,2}=1$) as above, and then $R_{j_1,j_2}(x_{i_1,i_2}) = R_{j_1,j_2}(x_{i_1, i_2}, y_{i_1, i_2})$ are rational functions in $x_{1,1}$, $x_{1,2}$, $x_{2,1}$ and $x_{2,2}$ with integer coefficients. 
For each $j_1, j_2$ it holds $$ R_{j_1, j_2}(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}) = H_{j_1, j_2}(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}) /(x_{1,1}x_{2,2}-x_{1,2}x_{2,1})^s, $$ where $H_{j_1, j_2}$ are polynomials with integer coefficients in $x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}$ and $s$ is the number of occurrences of $x^{-1}$ in $w$. Observe that $$ H= \left( \begin{array}{ccc} H_{1,1}(x_{1,1},x_{1,2},x_{2,1}, x_{2,2}) & H_{1,2}(x_{1,1},x_{1,2},x_{2,1}, x_{2,2}) \\ H_{2,1}( x_{1,1},x_{1,2},x_{2,1}, x_{2,2} ) & H_{2,2}( x_{1,1},x_{1,2},x_{2,1}, x_{2,2} ) \end{array} \right) $$ is not an identity matrix. Now observe that if $y$ satisfies the assumption of Convention \ref{strongerconv:freesubgroup}, then we know moreover that for some values of $x_i \in \mathbb{Z}$ the corresponding matrices $w(x,y)$ and $w(x',y)$ do not commute. Indeed, observe that if $w(x,y)$ is a freely reduced word on two letters that has at least one entry of $x$ or $x^{-1}$, then $w(x,y)$ and $w(x^k,y)$ do not commute in the free group generated by $x$ and $y$; this implies in particular that at least one of the rational functions $R_{1,2}$ and $R_{2,1}$ is not zero, and therefore that at least one of polynomials $H_{1,2}$ and $H_{2,1}$ is not zero. We consider $Q$ to be a power of $q$ and we want to solve over field $\mathcal{K}$ of characteristic $q$ the system of four equations: \begin{equation} \label{RQ} R_{j_1,j_2}(x_{i_1,i_2}, y_{i_1,i_2}) = x_{j_1,j_2}^Q, \end{equation} for $j_1,j_2=1,2$. To do this, we start by solving the system of polynomials equations: \begin{equation} \label{HQ} H_{j_1,j_2}(x_{i_1, i_2}, y_{i_1, i_2}) = x_{j_1, j_2}^Q, \end{equation} $j_1, j_2=1,2$. It is easier to work with the system of the equations (\ref{HQ}) rather then (\ref{RQ}) is that polynomials $H_{j_1,j_2}(x_{i_1, i_2}, y_{i_1, i_2}) - x_{j_1, j_2}^Q$ form a Groebner basis (in the next section we recall a definition and basic properties of Groebner bases), while polynomials obtained from rational functions $(R_{j_1, j_2}(x_{i_1, i_2}, y_{i_1, i_2}) = x_{j_1, j_2}^Q)$, after multiplication on the denominator, do not in general form such basis. The solutions of the system of polynomial equations are Zariski dense in the image the fourth iteration of the polynomial mappings from $\bar{F_q}$ to $\bar{F_q}^4$, where $\bar{F_q }$ denotes the algebraic closure of $F_q$ by Theorem $3.2$ in Borisov Sapir \cite{borisovsapir}; for a more general statement see Corollary $1.2$ on page $5$ of the preprint of Hrushovski \cite{hrushovski}, see also \cite{vashavsky}. Indeed, observe we know that the dimension of the fourth iteration of $H$ is not zero, since the image contains at least two points over the field on $q$ elements, for any sufficiently large $q$. (And there exists a variety of dimension greater than $0$ , such that the iteration of the polynomial mapping corresponding to $H$, restricted to this variety, is dominant). Moreover, observe that for sufficiently large $q$ there is at least one point $v_{1,1}, v_{1,2}, v_{2,1}, v_{2,2}$ in the image of $f$ such that $v_{1,1}v_{2,2}-v_{1,2}v_{2,1} \ne e$ and either $v_{2,1}$ or $v_{1,2}$ is not equal to $0$. Indeed, suppose that $w$ is reduced word containing at least one entry of $x$ or $x^{-1}$. Take any $x$, $y$ as in Convention \ref{strongerconv:freesubgroup}, that is $x$ and $y$ are in $SL(2, \mathbb{Z})$ such that $x$ and $y$ generate a free subgroup. Observe that $w(x^m,y)$ belongs to $SL(2,\mathbb{Z})$ for all $m$, in particular, determinant of this matrix is $1$. 
Observe that $w(x,y)$ and $w(x^2,y)$ to not commute in the free group, and hence they do not commute in $SL(2,{\mathbb Z})$. If $q$ is large enough, their images under the quotient map do not commute in $SL(2,F_q)$. Therefore, either $v_{2,1}$ or $v_{1,2}$ for one of these two matrices is not equal to $0$, and we know $v_{1,1}v_{2,2}-v_{1,2} v_{2,1}=1$ in $F_q$. We conclude, that for some point in the image of $f$ over $F_q$ either $v_{2,1} (v_{1,1}v_{2,2}-v_{1,2} v_{2,1}) \ne 0$ or $v_{1,2} (v_{1,1}v_{2,2}-v_{1,2} v_{2,1})\ne 0$. Without loss of generality we can suppose that there exists a point in the image of $f$ such that $v_{2,1} (v_{1,1}v_{2,2}-v_{1,2} v_{2,1})$. In this case, we know that there exist at least one solution of the system of the polynomial equations in $\bar{F_q}$, such that $x_{2,1} (x_{1,1}x_{2,2}-x_{1,2} x_{2,1}) \ne 0$. Consider a field generated by elements of this solution $x_{1,1}$, $x_{1,2}$, $x_{2,1}$, $x_{2,2}$. This field is clearly a finite extension of $K$, which we denote by $\mathcal{K}$. Observe that if $x_{i_1,i_2}$, $i_1, i_2=1,2$ is the solution of the system of the equations above over $\mathcal{K}$, then there exist $m$ such that $x_i$ is $m$ periodic point in the group of two times two invertible matrices over $\mathcal{K}$, for polynomial mapping corresponding to $H$. Indeed observe that $$ H_{j_1, j_2}^{(4)^{(2)}}(x_{i_1, i_2},y_{i_1, i_2})=H_{j_1, j_2}^{(4)}(H_{j_1, j_2}^{(4)}(x_{i_1, i_2},y_{i_1, i_2}), y_i) = H_{j_1, j_2}^{(4)}(x_{i_1, i_2}^Q, y_{i_1, i_2}) = (x_{j_1, j_2}^{Q^2}) $$ and, arguing by induction, we obtain that $$ H_{j_1, j_2}^{(4^l)}(x_{i_1, i_2},y_{i_1, i_2})=x_{j_1, j_2}^{(q^l)}. $$ Observe that there exist $l$ such that $x_{i_1, i_2}^{(q^l)}=x_i$ (for $i_1, i_2 = 1, 2$). Now consider $$ \mathcal{K'}= \mathcal{K} [\sqrt{\det x}] = \mathcal{K} [\sqrt{ (x_{1,1} x_{2,2} - x_{1,2} x_{2,1}} )]. $$ Recall that we know that $(x_{1,1} x_{2,2} - x_{1,2} x_{2,1}) \ne 0$. Put $x'=x/ \sqrt{ (x_{1,1} x_{2,2} - x_{1,2} x_{2,1})}$, $x' \in SL(2, \mathcal{K}')$. Note that $x',y$ is $4^l$ periodic in $SL(2, \mathcal{K'})$: $x' \ne e$ in $SL(2, \mathcal{K'})$ and the $n$-th iteration $w_{\circ n}$ of $w$ satisfies $$ w_{\circ n}(x',y)=w(w(\dots w(x',y), y , \dots, y) =x'. $$ From Remark \ref{obviousremark} we know that it is sufficient to consider words $w$ such that the total number of $x$ in $w$ is equal to zero (otherwise, $w$ is not an iterated identity in some finite cyclic group). So we assume that $w(x,y)$ is such that the total number of $x$ is equal to zero. Then $w(e,z) =e$ for all $z$. Therefore, for any periodic point $x'\ne e$ it holds $$ w_{\circ n}(x',y) \ne e $$ for some positive integer $n$, and hence $w$ is not an iterated identity for $SL(2, \mathcal{K'})$. \begin{rem} Alternatively, the first claim of the theorem can be proved by combining the result of Borisov and Sapir about residual finiteness of the mapping tori (Theorem $1.2$ in \cite{borisovsapir}, rather than its proof , as explained above) with characterization of residual finiteness of HNN extension in terms of {\it "compatible"} subgroups (Theorem $1$ in \cite{moldavanskii}), in case when the corresponding endomorphism is injective, and then reduce the general case in our theorem (when the endomorphism is not necessary injective) to this one. 
\end{rem} \section{Explicit estimates for Theorem $3.2$ of Borisov and Sapir in \cite{borisovsapir} and the proof of the second part of the Theorem.}\label{se:secondpart} Theorem \ref{thm:borisovsapir} below is a version of Theorem $3.2$ in \cite{borisovsapir}, which, given an upper bound on the degree of polynomial $D$, provides an explicit estimate $Q$ in such a way that if $D$ is equal to $0$ on all solutions of the system of polynomial equations, than $D$ is equal to zero on the image of the polynomial mapping. For a prime $q$, $F_q$ denotes the filed on $q$ elements. \begin{thm}\label{thm:borisovsapir} Let $q$ be a prime number, $d,n\ge 1$. Let $f=f_1, \dots, f_n$ be polynomials on $n$ variables of degree $\le d$, with coefficients in $F_q$, such that $f_i(0, \dots, 0)=0$ for all $i: 1\le i \le n$. Assume that $Q$ is a power of $q$ and $D_0\ge 1$ satisfy $$ Q/D_0>n(n+1)d^{n^2+1}. $$ Consider a polynomial $D$ of degree at most $D_0$ over $F_q$ on $n$ variables, such that $$ D(x_1, \dots, x_n)=0 $$ for all $x_i \in \bar{F_q}$ that are solution of the system of the equations \begin{equation} \label{fiQ} f_i(x_1, \dots, x_n) = x_i^Q, \end{equation} for all $i: 1 \le i \le n$. Then $D$ is equal to zero on all points in the image of $\bar{F}_q^n$ under $f^{(n)}=\left(f_1^{(n)}, \dots, f_n^{(n)} \right)$. \end{thm} We need this theorem in a particular case when there is no non-zero solution for the system of the equations. In this case it is sufficient to consider $D$ of degree $1$, $D(x_1, \dots, x_n)=x_i$ for some $i$, and we get \begin{corlem} Let $q$ be a prime number, $d,n \ge 1$. Let $f=f_1, \dots, f_n$ be polynomials on $n$ variables of degree $\le D_f$, with coefficients in $F_q$, such that $f_i(0, \dots, 0)=0$ for all $i: 1\le i \le n$. Suppose that there exists $v_1, \dots, v_n \in F_q$ and $i: 1 \le i \le n$ such that $f_i^{(n)}(v_1, \dots, v_n) \ne 0$. Suppose that $Q$ is a power of $q$ such that $$ Q>n(n+1)d^{n^2+1}. $$ Then the system of equations $$ f_i(x_1, \dots, x_n) = x_i^Q, $$ has at least one non-zero solution in $\bar{F_q}$. \end{corlem} More precisely, for the proof of Theorem \ref{thm:noid} we need the to find a non-zero solution of the system of $n$ equations, $n=4$, satisfying additionally an inequality $x_1x_4-x_2x_3 \ne 0$, and to obtain such solution we apply Theorem \ref{thm:borisovsapir} for the polynomials $D(x_1, x_2, x_3, x_4)$ of degree $3$ the form $$ D(x_1, x_2, x_3, x_4)=(x_2 x_1x_4-x_2x_3)x_2 $$ and $$ D(x_1, x_2, x_3, x_4)=(x_2 x_1x_4-x_2x_3)x_3. $$ (In our matrix notation of the previous section these $x_i$ correspond to $x_1=x_{1,1}$, $x_2=x_{1,2}$, $x_3=x_{2,1}$, $x_4=x_2,2$). For a more general version in Theorem \ref{thm:borsapmol}, we will need to find a system of $4s$ equations, each solution in not proportional to an identity matrix, and the determinants of the corresponding matrices are not equal to zero. To to this, we will apply theorem \ref{thm:borisovsapir} to the polynomial of degree $3s$, which as a product of the polynomials as above. Given $(\alpha_1, \dots, \alpha_n)$, $(\beta_1, \dots, \beta_n) \in {\mathbb Z}^n$ we say that $(\alpha_1, \dots, \alpha_n)$ is {\it greater} than $(\beta_1, \dots, \beta_n)$ in {\it the lexicographic order} if for the minimal $i$ such that $\alpha_i-\beta_i \ne 0)$ it holds $\alpha_i>\beta_i$. Now we recall the definition of the {\it graded lexicographic order} (or for short {\it graded lex order}). 
Given two monomials $x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ and $x_1^{\beta_1} \cdots x_n^{\beta_n}$, we say that $x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ is greater than $x_1^{\beta_1} \cdots x_n^{\beta_n}$ in the graded lex order, if either the degree of the first monomial is greater, that is, $\alpha_1 + \cdots + \alpha_n > \beta_1+\cdots + \beta_n$, or if the degrees are equal ( $\alpha_1 + \cdots + \alpha_n > \beta_1+\cdots + \beta_n$ ) and $\alpha_1,\dots, \alpha_n$ is greater than $\beta_1, \dots, \beta_n$ in the lexicographic order. Lexicographic order if a particular case of {\it monomial order}, that is, it is a total ordering of $\mathbb{Z}^n$, satisfying $\alpha+\gamma$ is greater then $\beta+\gamma$ whenever $\alpha$ is greater then $\beta$ and it is a {well-ordering}, meaning that any non-empty subset of $\mathbb{Z}^d$ has a minimal element with respect to this order (see Section 2, Chapter 2 in \cite{coxlittleoshea}). Fixing a monomial order and given a polynomial $\phi= \sum_i a_{\alpha_{1,i}, \dots, \alpha_{j_i}} x_1^{\alpha_{1,i}} x_2^{\alpha_{2,i}} \cdots x_{n}^{\alpha_{n,i}}$, one can speak about its leading monomial $x_1^{\alpha_{1,i}} x_2^{\alpha_{2,i}} \cdots x_{n}^{\alpha_{n,i}}$ denoted by $LM(\phi)$, and its {\it leading term} $a_{\alpha_{1,i}, \dots, \alpha_{j_i}} x_1^{\alpha_{1,i}} x_2^{\alpha_{2,i}} \cdots x_{n}^{\alpha_{n,i}}$, denoted by $LT(\phi)$. This allows to use {\it division algorithm} in $\mathcal{K}[x_1, \dots, x_n]$, $\mathcal{K}$ is some field (see Theorem 3 and its proof, Section 3, Chapter 2 in \cite{coxlittleoshea}). Given an ordered tuple $f_1$, \dots, $f_s$ with respect to a fixed monomial order, and given a polynomial $f \in \mathcal{K}[x_1, \dots, x_n]$, the division algorithm procedes as follows. Given $f$, it looks for a minimal $i$ such that the leading term of $f$ is divided by the leading term of $f_i$, and replaces $f$ by $f- f_i g$, where $g$ is the monomial such that $LT(f) = LT(f_i)g$. At the end we obtain $$ f=a_1 f_1 + \cdots + a_s f_s +r, $$ where $r$ is such that no term in $r$ is divisible by a leading term of some $f_i$. In general, given some tuple $f_s$, this remaining term $r$ is not defined uniquely by the decomposition above, it is difficult therefore to work with the division algorithms. This problem no longer occurs if we assume that $f_i$ form a {\it a Gr{\"o}bner basis} (also called {\it a standard basis}). There are several equivalent ways to define a Gr{\"o}bner basis, one of such ways is as follows. Elements of $\phi_1$, $f_2$, \dots, $\phi_m \in \mathcal{K}[x_1, \dots, x_n]$ are said to be a Gr{\"o}bner basis, if the ideal of $\mathcal{K}[x_1, \dots, x_n]$ generated by leading terms of $\phi_j$ is equal to the ideal, generated by the leading term of the ideal $I$ generated by $\phi_j$ (Definition 5 in \cite{coxlittleoshea}). In general, the ideal of leading terms of $I$ can be larger than that generated by leading terms of $\phi_j$. For a Gr{\"o}bner basis this can not happen, and this provides an effective way to determine whether a polynomial $F$ belongs to the ideal $I$ generated by $f_j$. For example, given polynomials $f_i$ ($i: 1 \le i \le n$) on $x_1$, $x_2$, \dots, $x_n$, consider polynomials $\phi_i=f_i -x_i^Q$. Suppose that that the degrees of $f_i$ are smaller than $Q$, for all $i: 1 \le i \le n$. Then the leading terms of $f_i$ with respect to graded lex order is $x_i^Q$. 
The Least Commond Multiple of the Leading Monomials of $\phi_i$ and $\phi_j$ is equal to their product , $x_i^Qx_j^Q$, and hence by Diamond Lemma $\phi_i$ is a Gr{\"o}bner basis (see e.g. Theorem 3 and Proposition 4 in Ch.2, Sect.9 of \cite{coxlittleoshea}). A straightforward observation in Lemma \ref{le: lemma3.3} below is Lemma $3.3$ from \cite{borisovsapir}. While the formulation of that lemma in \cite{borisovsapir} states "for $Q$ large enough", the proof shows that it is sufficient to take $Q$ which greater than the maximum of the degrees of $f_i$, as stated below, and it is not difficult to estimate this codimension. \begin{lem} \label{le: lemma3.3} [a version of Lemma 3.3 in \cite{borisovsapir}] \label{le:codimension} Let $f_1$, \dots, $f_n$ be polynomials on $x_1$, \dots, $x_n$ over $F_q$, $q$ is a prime number. Take $Q$ such that $Q$ is greater than the maximum of the degrees of $f_i$, $i: 1 \le i \le n$. Let $I_Q$ be the ideal in $\bar{F}_q[x_1, \dots, x_n]$ generated by polynomials $f_i(x_1, \dots, x_n)- x_i^Q$, $i: 1 \le i \le n$. Then $I_Q$ has finite codimension in $\bar{F}_q[x_1, \dots, x_n]$, this codimension is at most $Q^n$. \end{lem} This shows in particular, that any solution (in $\bar{F}_q$ the system of equations $f_i(x_1, \dots, x_n) = x_i^Q$ belongs to a finite extension of $F_q$, of degree at most $Q^n$. Observe that a cardinality of such finite extension is at most $q^{Q^n}$. {\bf Proof.} Observe that if a monomial on $x_1$, $x_2$, \dots , $x_n$ is divisible by $x_i^Q$, then this monomial is equivalent $\pmod {I_Q}$ to a sum of monomials of lesser degree. Indeed, $$ x_i^Q \prod x_i^{\alpha_i} \equiv f_i(x_1, \dots, x_n ) \prod x_i^{\alpha_i}. $$ All monomials of $f_i$ have degree strictly lesser than $Q$, and the degree of monomials on the right hand side is therefore lesser than $Q + \sum_i \alpha_i$. Therefore, any polynomial on $x_1$, $x_2$, \dots , $x_n$ is equivalent $\pmod {I_Q}$ to a linear combinations of monomials of the form $\prod x_i^{\alpha_i}$, such that $\alpha_i < Q$ for all $i$. Observe that the number of such monomials is $Q^n$ \begin{rem} We will use only the (trivial) upper bound for the codimension, but it is not difficult to see that the codimension is in fact equal to $Q^n$. Indeed, as we have already mentioned the $f_i-x_i^Q$ is a Gr{\"o}bner basis, and by a "Diamond lemma" we know that any linear combination of $\prod x_i^{\alpha_i}$, with at least one non-zero coefficient , such that $\alpha_i < Q$ does not belong to $I_Q$. \end{rem} Given polynomial $f_i$, $1 \le i \le n$ over some field $\mathcal{K}$, we can consider a mapping $f = (f_1, f_2, \dots, f_n)$ from $\mathcal{K}^n$ to $\mathcal{K}^n$. For $j\ge 1$ we denote by $f^{(j)}= (f_1^{(j)}, \dots, f_n^{(j)})$ its $j$-th iteration. We recall in Lemma \ref{le:lem3.4} below Lemma 3.4 in \cite{borisovsapir}), for the convenience of the reader we recall its proof. \begin{lem} \label{le:lem3.4} [Lemma $3.4$ in \cite{borisovsapir}] Let $f_i$ be polynomials on $x_1$, \dots, $x_n$ with coefficients in $F_q$, $q$ is a prime number, and $Q$ be a power of $q$. For each $j\ge 1$ the $j$-th iteration of the polynomial mapping $f=\left( f_1, \dots, f_n\right)$ satisfies for all $i: 1 \le i \le n$ $$ f_i^{(j)} - x_i^{Q^j} \in I_Q, $$ where $I_Q$ is the ideal generated by $f_i-X_i^Q$, $i: 1 \le i \le n$. \end{lem} {\bf Proof.} The proof is by induction on $j$. Suppose that the statement is true for all $j\le m$. 
Observe that $$ f_i^{(m+1)}(x_1, x_2, \dots, x_n) = f_i \left(f_1^{(m)}, f_2^{(m)}, \dots, f_n^{(m)} \right) \equiv f_i \left(x_1^{Q^m}, \dots, x_n^{Q^m} \right) $$ The last congurence above follow from the induction hypothesis for $j=m$. Observe also that since $Q$ is a power of $q$, over any field of characteristic $q$ it holds $$ f_i \left(x_1^{Q^j}, \dots, x_n^{Q_j} \right) = f_i(x_1, \dots, x_n)^{Q^j} \equiv x_i^{Q^{j+1}} \pmod I_Q, $$ the last congruence is a consequence of the induction hypothesis for $j=1$. \begin{lem} \label{le:algebraicdependance} Let $F_1$, $F_2$, $F_{n+1}$ are polynomials on $x_1$, $x_2$, \dots, $x_{n}$ over some field $\mathcal{K}$. Suppose that the degrees of $F_i$ are $\le d$. If $s>0$ is such that the binomial coefficients satisfy $$ C_{s+n+1}^{n+1} \ge C_{sd+n}^n, $$ then there exists a non-zero polynomial $\Psi$ over $\mathcal{K}$ on $n+1$ variables of degree at most $s$ such that $$ \Psi(F_1, \dots, F_{n+1})=0. $$ The assumption on $s$ is in particular satisfied if $$ s \ge (n+1)d^n $$ \end{lem} In the lemma above, it is essential that $s \ge {\rm Const} \cdot d^n$. {\bf Proof.} It is clear that it is sufficient to consider the case when at least one of $f_i$ has at least one non-zero coefficient. Take some integer $s$ and a polynomial $\Psi$ of degree $s$. Let us compute the number of possible monomials on $n$ variables $x_1$, \dots, $x_n$ in $\Psi(F_1, \dots, F_{n+1})$. All monomials are of degree at most $s d$, that is , of the form $X_1^{\beta_1} X_2^{\beta_2} \cdots X_{n}^{\beta_{n}}$, $\beta_i \ge 0$, $\sum_j\beta_j \le sd$. This is the number to write $sd$ as the sum of $n+1$ non-negative summands, which is equal to $C_{sd+n}^n$. Consider possible monomials of degree $\le s$ on $n+1$ variables, they are of the form $y_1^{\alpha_1} y_2^{\alpha_2} \cdots y_{n+1}^{\alpha_{n+1}}$, $\alpha_i \ge 0$, $\sum_j\alpha_j \le n$ and hence their number is equal to $C_{s+n+1}^{n+1}$. Take $s$ such that $$ C_{s+n+1}^{n+1} \ge C_{sd+n}^n. $$ Observe that there exists a non-zero polynomial $\Psi$ of degree at most $s$ such that $$ \Psi(F_1, \dots, F_{n+1})=0. $$ Indeed, if we consider the coefficients of the polynomial $\Psi$ (taking value in the field $\mathcal{K}$) as variables, we get $C_{sd}^n$ linear equations on at least $C_{s+n+1}^{n+1}$ variables. Since the number of variables greater or equal to the number of linear equation, this system has at least one non-zero solution over $k$. Finally, observe that the assumption on $s$ in the formulation of the Lemma is satisfied if $$ (s+1) (s+2) \cdots (s+n+1) \ge (n+1) (sd+1) \cdots (sd+n), $$ and the latter is satisfies whenever $s+n+1 \ge s \ge (n+1)d^n$. Lemma \ref{le:algebraicdependance} allows us to obtain explicit estimates for Lemma $3.5$ in \cite{borisovsapir}: \begin{lem} \label{le:l3.5} Given $d$, take an integer $Q$ which is a power of a prime $q$ such that $Q > (n+1) d^{n^2}$ and $k=(n+1) d^{n^2} $. Consider polynomials $f_1$, $f_2$, \dots, $f_n$ over $F_q$ on $n$ variables. Suppose that the degrees of $f_i$ are $\le d$. Let $a_1$, \dots, $a_n$ in the algebraic closure $\bar{F}_q$ of $F_q$ are the solution of the system of equations $$ f_i(a_1, a_2, \dots, a_n) = a_i^Q. $$ Then for all $i: 1 \le i \le n$ the polynomial $$ (f_i^{(n)}(x_1, \dots, x_n) -f_i^{(n)}(a_1, \dots, a_n))^k $$ is contained in the localization of $I_Q$ at $a_1, \dots, a_n$. As before, $I_Q$ denotes the ideal in $\bar{F}_q[x_1, \dots, x_n]$ generated by polynomials $f_i(x_1, \dots, x_n)- x_i^Q$, $i: 1 \le i \le n$. 
\end{lem} {\bf Proof.} For each $i: 1 \le i \le n$ consider the $i$-th coordinate of the iterations of $f$: $F_{1,i} =x_i$, $F_{2,i}(x_1, \dots, x_n)=f_i(x_1, \dots, x_n)$, $F_{3,i}(x_1, \dots, x_n) = f_i^{(2)(x_1, \dots, x_n)}$, \dots, $F_{n+1,i}(x_1, \dots, x_n)=f_i^{(n)(x_1, \dots, x_n)}$. Observe that for all $j$ the degree of $F_{j,i}$ is at most $d^n$. Apply lemma \ref{le:algebraicdependance} to $F_{1,i}$, $F_{2,i}$, \dots, $F_{n+1,i}$. We conclude that for each $i: 1 \le i \le n$ there exists a non-zero polynomial $\Psi_i$ over $F_q$ on $n+1$ variables of degree at most $(n+1)(d^n)^n$ such that $$ \Psi_i(x_i, f_i(x_1, \dots, x_n), f_i(x_1, \dots, x_n), \dots, f_i(x_1, \dots, x_n))=0 $$ The rest of the proof follows the argument from \cite{borisovsapir}: using the fact that $f_i^{(j)} - x_i^{Q^j} \in I_Q$ (see Lemma \ref{le:lem3.4}), we can rewrite $$ \Psi_i(x_i, f_i, f_i^{(2)}, \dots, f_i^{(n)}) $$ as a polynomial $P_{Q,i}$ in one variable $x_i$ modulo $I_Q$. Since $\Psi_i(x_i, f_i, f_i^{(2)}, \dots, f_i^{(n)})=0$, we have $P_{Q,i}(x_i) \in I_Q$ By the assumption of the lemma, $Q> (n+1)d^{n^2}$, and hence $Q$ is larger than the degree of $\Psi_i$. Observe that in this case the polynomial in $x_i$ we get is not zero. (Indeed, take maximal $j$ such that $y_j$ is present at least in one monomial of $\Psi$; among monomials of $\Psi$ consider those where the degree of $y_j$ is maximal. Among such monomials, if there several like this, take maximal $j'$ such that $y_{j'}$ is present, take a monomial where its degree is maximal, etc. In this way we obtain some monomial in $\Psi$ which will give maximal degree of $x_i$ for $P_{Q,i}$). Note that the degree of $P_{Q,i}$ is at most $Q^n \deg \Psi \le Q^n (n+1) d^{n^2}$. Write $$ P_{Q,i}(x_i) = \sum_{m=1}^M b_m (x_i-a_i)^m, $$ here $b_m \in \bar{F}_q$, $1 \le m \le M$ are such that $b_M \ne 0$. It is clear that $M \le \deg P_{Q,i} \le Q^n (n+1) d^{n^2}$, and in particular $$ P_{Q,i}(x) = (x-a_i)^L u(x), $$ where $L \le M \le Q^n (n+1) d^{n^2}$ and the polynomial $u(x)$ is such that $u(a_i) \ne 0$. Recall that by the assumption of the lemma $k = (n+1) d^{n^2}$. It is essential for the proof that $k$ does not depend on $Q$. Since $P_{Q,i}(x_i) \in I_Q$, we conclude that $(x_i-a_i)^L \in I_Q^{a_1, a_2, \dots, a_n}$. We have $Q^nk \ge L$, and therefore $(x_i-a_i)^{Q^n k} \in I_Q^{a_1, a_2, \dots, a_n}$. By the assumption of the Lemma, $f_i(a_1, \dots, a_n)=a_i^Q$. Since the characteristic of the field is $p$, this implies that $f_i^{(m)}(a_1, \dots, a_n) = a_i^{Q^m}$ for all $m\ge 1$ . Hence by Lemma \ref{le:lem3.4} we obtain $$ f_i^{(n)}(x_1, \dots, x_n) - f_i^{(n)}(a_1, \dots, a_n)= f_i^{(n)}(x_1, \dots, x_n) - a_i^{Q^n} \equiv x_i^{Q^n} - a_i^{Q^n} \pmod {I_Q^{(a_1, \dots, a_n}} $$ Since the characteristic of the field is $p$ and $Q$ is a power of $p$, we know that $x_i^{Q^n} - a_i^{Q^n}=(x_i-a_i)^{Q^n}$. Therefore we can conclude that $$ \left( f_i^{(n)}(x_1, \dots, x_n) - f_i^{(n)}(a_1, \dots, a_n) \right)^k = (x_i-a_i)^{kQ^n} \equiv 0 \pmod {I_Q^{a_1, \dots, a_n}} $$ As a corollary, we obtain a version of Lemma $3.6$ in \cite{borisovsapir}. \begin{corlem} \label{le:3.6} Let $q$ be a prime number, $d,n\ge 1$. Consider $n$ polynomials $f_i$, $1 \le i\le n$ on $n$ variables, with coefficients in $F_q$, of degree at most $d$. 
Take a polynomial $D$ with coefficients in $F_q$, which vanishes on all solutions in $\bar{F_q}$ of the system of the equations $$ f_i(a_1, a_2, \dots, a_n) = a_i^Q $$ Assume, as in Lemma \ref{le:l3.5}, that $Q > (n+1) d^{n^2}$ and $k= (n+1) d^{n^2} $. Put $K= (k-1)n+1$. Then for any $a_i$, $1\le i \le n$ which the solution of the above mentioned system of polynomial equations $$ \left(D(f_1^{(n)}(x_1, x_2, \dots, x_n),..., f_n^{(n)}(x_1, x_2, \dots, x_n) ) \right)^K = 0 \pmod {I_Q^{(a_1, \dots, a_n)}}. $$ We recall that $I_Q$ denotes the ideal in $\bar{F}_q[x_1, \dots, x_n]$ generated by polynomials $f_i(x_1, \dots, x_n)- x_i^Q$, $i: 1 \le i \le n$. \end{corlem} {Proof of Corollary \ref{le:3.6}.} Take $a_i \in \bar{F}_q$ such that $f_i(a_1, a_2, \dots, a_n) = a_i^Q$. We have $f_i^{(j)}(a_1,\dots, a_n) = a_i^{Q^j}$, for all $j\ge 1$. Rewrite $D(x_1, \dots, x_n)$ as a polynomial in $x_i-a_i^{Q^n}$, ($1 \le i \le n$), that is, $$ D(x_1, x_2, \dots, x_n) =E(x_1 -a_i^{Q^n}, \dots, x_n- a_n^{Q^n}), $$ where $E$ is a polynomial (depending on $a_1$, \dots, $a_n$) with coefficients in $\bar{F_q}$. Since $$ f_i(a_1, a_2, \dots, a_n) = a_i^Q $$ for all $i$, we know by the assumption of the Corollary that $D(a_1, \dots, a_n)=0$. Hence $$ E(0,0,\dots, 0) =D(a_1^{Q^n}, \dots, a_n^{Q^n})= D(a_1, \dots, a_n)^{Q^n} =0, $$ and therefore the polynomial $E$ does not have a free term. Observe that $D^K$ can be therefore written as sum of monomials in $x_i-a_i^{Q^n}$. Since $K\ge (k-1)n+1$, for each of these monomials there exists $i$, $1\le i \le n$ such that this monomial is divisible by at $(x_i-a_i^{Q^n})^k$. Therefore, $\left(D(f_1^{(n)}(x_1, x_2, \dots, x_n),..., f_n^{(n)}(x_1, x_2, \dots, x_n)) \right)^K$ is congruent $\pmod {I_Q}$ to a sum of polynomials, for each of these polynomial there exists $i: 1\le i \le n$ such that the polynomial is divisible by $(f_i^{(n)}- a_i^{Q^n})^k$. In other words, each of the above mentioned polynomials is divisible by $$ (f_i^{(n)}(x_1, \dots, x_n)- f_i^{(n)}(a_1, \dots, a_n))^k. $$ Applying Lemma \ref{le:l3.5} we conclude that each of these polynomials belong to ${I_Q^{(a_1, \dots, a_n)}}$, and hence their sum belongs to ${I_Q^{(a_1, \dots, a_n)}}$ {\bf Proof of Theoreom \ref{thm:borisovsapir}.} By the assumption of the theorem, $$ Q/D_0>n(n+1)d^{n^2+1}, $$ and hence $$ Q/D_0 > dn((n+1)d^{n_2}-1)+1. $$ We will prove the theorem under the assumption above. Observe that $Q >D_0 dn((n+1)d^{n_2}-1)+1 \ge d+1 >d$. This shows that $Q$ is $Q$ is greater than the degrees of $f_i$. We have already mentioned that in this case we know that $f_i-x_i^Q$ form a Gr{\"o}bner basis with respect to Graded Lex order. Recall that in this situation no non-zero polynomial of degree strictly smaller than $Q$ belongs to the ideal generated by $f_i-x_i^Q$, $1\le i \le n$. In particular, if we assume that the degree of the polynomial $$ P=\left(D(f_1^{(n)}(x_1, x_2, \dots, x_n),..., f_n^{(n)}(x_1, x_2, \dots, x_n) )\right)^K $$ (where $D$ and $K$ are as in Lemma \ref{le:3.6}), is strictly less then $Q$, we conclude that $P$ does not belong to the ideal $I_Q$ generated by $f_i-x_i^Q$. Take a polynomial $D$ satisfying the assumption of Theorem \ref{thm:borisovsapir} which is zero on all the solutions of the system of equations, and non-zero at at least one point of the image of $f$. We want to obtain a contradiction. Observe that since $$ Q/D_0 > dn((n+1)d^{n_2}-1)+1. $$ we know that $Q/D_0> d ((k-1)n)+1$ for $k= (n+1) d^{n^2} $, and that $Q > (n+1) d^{n^2}$. 
Put $K= (k-1)n+1 = ((n+1) d^{n^2} -1) n+1$. Observe that $K$ and $k$ satisfy the assumption of the corollary \ref{le:3.6}. It is essential for our argument that $K$ does not depend on $Q$. Observe that the polynomial $D(f_1, \dots, f_n)$ has at least one non-zero coefficient, since from the assumption of the theorem we know that this polynomial takes at least one non-zero value. This implies that the polynomial $$ \left(D(f_1^{(n)}(x_1, x_2, \dots, x_n),..., f_n^{(n)}(x_1, x_2, \dots, x_n)) \right)^K. $$ has at least one non-zero coefficient. This polynomial above belongs to $I_Q^{(a_1, \dots, a_n)}$ for any solution of the system of equations $a_1, \dots, a_n$, with $Q$ satisfying the assumption of the corollary. Now, like in the second version of the proof of \cite{borisovsapir}, observe that if $a_1, \dots, a_n$ is not a solution of the system of the equations \ref{fiQ}, then the localisation of $I_Q$ at $a_1, \dots, a_n$ is the whole ring of polynomials $\bar{F_q}[x_1, \dots, x_n]$. Indeed, we know in this case that there exists $i$, $1 \le i \le n$ such that $$ (f_i-x_i^Q)(a_1, \dots, a_n) \ne 0. $$ Then $1/(f_i-x_i^Q)$ belongs to the localisation, and since $f_i-x_i^Q$ belongs to $I_Q$, we conclude that $1 \in I_Q^{a_1, \dots, a_n}$. We know therefore that for any $a_1, ..., a_n \in \bar{F_q}^n$ (whether it is a solution of the system of equations of whether it is not) the polynomial $D(f_1, \dots, f_k)^K$ belongs to the localisation of $I_Q$ at $a_1, \dots, a_n$, and hence $D(f_1, \dots, f_k)^K$ belongs to $I_Q$. But in this case the degree of $D(f_1, \dots, f_k)^K$ is greater of equal to $Q$. This completes the proof of Theorem \ref{thm:borisovsapir}. {\bf Proof of Theorem \ref{thm:noid}.} First we observe again that it is sufficient to consider words on two letters: \begin{rem} Let $m\ge 2$. For each $m$ fix $u_2(x,y)$, \dots, $u_m(x,y)$ in the free group generated by $x$ and $y$ such $x_1$, $u_2(x,y)$, \dots, $u_m(x,y)$ freely generate a free subgroup on $m$ generators. For any word $\bar{w}(x_1, \dots, x_m)$, not freely equivalent to an empty word, consider the following word on two letters: $$ w(x_1, x_2) = \bar{w}(x_1, u_2(x_1, x_2),u_m(x_1, x_2)). $$ As we have mentioned already, this word is not freely equivalent to an empty word, and $\bar{w}$ is an iterated identity in some group whenever this is the case for $w$. Now assume in addition that for each $j: 2 \le j \le m$ there is a single occurance of $x$ or $x^{-1}$ in the word $u_j(x,y)$. (For example, one can take $u_j(x,y)= y^j x y^{-j}$). Then the total number of $x_1$ and $x_1^{-1}$ in $w$ and the length of $\bar{w}$ satisfy $l_{x_1}(w) \le l(\bar{w})$ \end{rem} \begin{rem} Take a word $w(x,y)$ of length $l$. Then $w_{\circ 4}$ has length at most $l^4$. The four polynomials in $x_{1,1}$, $x_{1,2}$, $x_{2,1}$, $x_{2,2}$ for the entries of $w(x,y)$ have degree at most $l$ The polynomials for for the entries of $w_{\circ 4}(x,y)$ have degree st most $l^4$. \end{rem} \begin{rem} \label{rem:twomatrices} Take \[ \bar{x}= \left( \begin{array}{ccc} 1 & 2 \\ 0 & 1 \end{array} \right) \mbox{ and } y= \left( \begin{array}{ccc} 1 & 0 \\ 2 & 1 \end{array} \right)\] Take a product of $L$ terms of the form $\bar{x}$, $\bar{x}^{-1}$, $y$ or $y^{-1}$. Then the entries of the resulting matrix are at most $3^L=\exp(\ln3 L)$. For a product of $L$ terms of the form $\bar{x}^2$, $\bar{x}^{-2}$, $y$ or $y^{-1}$ the entries of the resulting matrix are at most $6^L=\exp(2\ln 3 L)$. 
\end{rem} Let $w$ be a word on $x_1$ and $x_2$ of length at most $l$. Take $y$ as in Remark \ref{rem:twomatrices} (this $y$ satisfies Convention \ref{strongerconv:freesubgroup}), consider rationals functions $R_i$ in $x_1$, \dots, $x_4$ which are entries for $w(x,y)$, and the corresponding polynomials $H_i$. For each $j$ it holds $$ R_{j_1,j_2}(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}) = H_{j}(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}) /(x_{1,1}x_{2,2}-x_{1,2}x_{2,1})^s, $$ where $s$ is the number of occurrences of $x^{-1}$ in $w$. Observe that the coefficients of these polynomials $H_{i,j}$ satisfy the assumption of Remark \ref{rem:twomatrices}, and hence their coefficients are at most $4^l$. Take $x$ as in Remark \ref{rem:twomatrices}. Since $\bar{x}$ and $y$ generate a free group on two generators and since $w(x,y)$ is not freely equivalent to an empty word, we know that $w(\bar{x},y)$ is not an identity matrix. Moreover, we know that $w(x,y)$ does not commute with $w(x^2,y)$ in the free group generated by $x$ and $y$, and hence the matricies $w(\bar{x},y)$ $w(\bar{x}^2,y)$ do not commute, that is their commutator $[w(\bar{x},y),w(\bar{x}^2,y)]$ is not an identity matrix. The coefficients of $w_{\circ 4}(\bar{x},y)$ are at most $\exp(Cl^{4})$ and the coefficients of $w_{\circ 4}(\bar{x}^2,y)$ are at most $\exp(2Cl^4)$, for $C=\ln 3$. Since these matrices do not commute, for at least one of these matrices either $x_2 \ne e$ or $x_3 \ne e$. We conclude that there exist intergers $x_i \in \mathbb {Z}$, in the image of $w^{(4)}$, such that the matrix $x$ they form is in $SL(2,{\mathbb Z})$, such that their coefficients are at most $\exp(2Cl^4)$, for $C=ln 3$ and such that the matrix $x$ is not a diagonal matrix (that is, either $x_2 \ne e$ or $x_3 \ne e$). We want to find a prime number $q$ , such that the image of the matrix $x$ over quotient map to $F_q$ is not a diagonal matrix. That is, we want that the coefficients of the above mentioned matrix $x$ modulo $q$ satisfy $x_2 \ne e$ or $x_3 \ne e$ in $F_q$. Recall that Prime Number theorem says that the number $\phi(x)$ of prime numbers smaller than $x$ satisfies $\phi(x) /(x/\ln(x)) \to 1$ as $x \to \infty$. In particular, we know that the number of primes between $x/2$ and $x$ is $(1\pm \epsilon)x/(2 \ln x)$, for any $\epsilon>0$ and all sufficiently large $x$. This implies that for any positive integer $M$, there is a prime $q$, $q \le C' \ln{M}$, such that $M$ is not divided by $q$. If $M$ is sufficiently large, we can take $C'$ to be close to $1$. We can therefore choose a prime $q \le C' Cl^4$, with $C=2\ln(3)$ and $C'$ close to one if $l$ is large enough, such there is a non-diagonal matrix in $SL(2, F_q)$ in the image of the polynomial mapping corresponding to $w^{(4)}$. In this case, $f_i$ and $v$, considered over $F_q$ satisfy the assumption on the second part of Theorem \ref{thm:borisovsapir}. Consider the polynomial $D(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}) = x_{1,2}(x_{1,1}x_{2,2}-x_{1,2}x_{2,1})$, and the polynomial $ D_2(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}) = x_{2,1}(x_{1,1}x_{2,2}-x_{1,2}x_{2,1})$. The degree of these two polynomials is equal to $3$. Observe that there exists a point $v_{1,1}, v_{1,2}, v_{2,1}, v_{2,2}$ in the image of $F_q^4$ of the polynomial mappings corresponding to $H^{(4)}$, such that either $D(v_{1,1}, v_{1,2}, v_{2,1}, v_{2,2})\ne 0$ or $D_2(v_{1,1}, v_{1,2}, v_{2,1}, v_{2,2})\ne 0$. 
Without loss of the generality we can assume that $D(v_{1,1}, v_{1,2}, v_{2,1}, v_{2,2})\ne 0$ Taking $Q$ satisfying the assumption of Theorem \ref{thm:borisovsapir} for $D_0=3$, $d=l$, $n=4$, that is $$ Q> 3 \times 20 \times l^{17}. $$ we conclude that the system of equations $H_{i_1, i_2}(x_{1,1}, x_{1,2}, x_{2,1} x_{2,2}) = x_{i_1, i_2}^Q$ has a solution over the algebraic closure of $F_q$, such that $D(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2})\ne 0$. If this is the case, by Lemma $3.1$ we know that the solution belongs to a finite extension $\mathcal{K}$ of $F_q$, of degree at most $Q^4$. The number of elements in this field is $q^{Q^4}$. Consider $\mathcal{K'}= K(\sqrt{x_{1,1}x_{2,2}-x_{1,2}x_{2,1}})$. The cardinality of $\mathcal{K'}$ is at most $q^{2Q^4}$. Taking in mind that we can chose $q \le C' Cl^4$, with $C=2\ln(3)$ and $C'$ close to one if $l$ is large enough and $Q = 61 l^{17}$, we see that we can chose the field $K'$ as above of cardinality at most $(2l^{4})^{61^4 l^{17 \times 4}} \le \exp(l^{68+\epsilon})$ for all $l\ge L$. Since $D(x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2})\ne 0$, we know that $x_{1,1}x_{2,2} - x_{1,2} x_{2,1} \ne 0$, and $x_{1,2}\ne 0$. Dividing $x$ by $\sqrt{x_{1,1}x_{2,2} - x_{1,2} x_{2,1}}$ we obtain a non-identity solution in $SL(2, \mathcal{K'})$ of the system of the equations $$ R_{i_1, i_2}= x_{i_1, i_2}^Q. $$ Observe that the cardinality of $SL(2)$ over a finite field of cardinality $N$ is $N^3-N \le N^3$. Therefore, the number of elements of $SL(2, \mathcal{K'})$ is at most $\exp(l^{68+\epsilon})$, for all $l> L$. As in the previous section, we observe that any non-identity solution $x$ in $SL(2, \mathbb{Z})$ of the above mentioned system of equations provides a periodic point : $w_{\circ m}(x)=x$ for some $m\ge 1$. And we can remark again that if $w$ represents an element of a commutator group in the free group, then $x\ne e$, $w_{\circ m(x)}=x$ implies that $w_{\circ m'}(x) \ne e$ for all $m' \ge 1$. \section{General dynamics $a_1, \dots, a_k \to w_1(a_1, \dots, a_k), \dots, w_k(a_1, \dots, a_k)$. Proof of Theorem \label{thm:borsapmol}} \label{se:generaldynamics} In the definition of iterated identities we consider the iteration on the first letter. Now more generally consider $s\ge 1$ and words $w_1(a_1, \dots, a_s)$, \dots, $w_k(a_1, \dots, a_s)$, the mapping $w: a_1, \dots, a_s \to w_1(a_1, \dots, a_s), \dots, w_k(a_1, \dots, a_s)$ and its iterations: $$ w_{\circ n}(a_1, \dots, a_n) =w_1 (w_{1, \circ n-1}(a_1, \dots, a_s), \dots , w_{k, \circ n-1}(a_1, \dots, a_s)), \dots, $$ $$ \dots w_{s, \circ n-1}(a_1, \dots, a_s)). $$ For some tuples of words, in contrast when the iteration only on the first letter is allowed, it may happen that some iteration of $w$ is freely equivalent to the identity, in this case its image is trivial in the free group, and hence in any other group. For example, if $k=2$ and $w_1(a_1,a_2) = w_2(a_1, a_2) = [a_1, a_2]$, then it is clear that $w_{1,\circ 2}(a_1, a_2) = w_{2, \circ 2}(a_1, a_2) = [[a_1,a_2], [a_1,a_2]] \equiv e$ in the free group generated by $a_1$ and $a_2$. In fact, it is possible that all the coordinates of the first $n-1$ iterations are not equal to one in the free group, and all coordinates on the $n$-th iteration is equal to one: \begin{exa} Consider words $w_i$ on $x_1, \dots, x_n$, $n\ge 2$: $w_1= [x_1, x_n]$, $w_2 = [x_1,x_n]$, $w_3=[x_2,x_n]$, $w_i= [x_{i-1}, x_n]$ for $i \ge 3$. Then for all $m\ge n-1$ and all $i$ ($1 \le i \le n$) the word $w_{m,i}$ is not freely equivalent to an empty word. 
For all $m \ge n$ and all $i$ ($1 \le i \le n$) the word $w_{m,i}$ is not freely equivalent to an empty word. \end{exa} {\bf Proof.} Note that $$ x_1 \to [x_1, x_n] \to [ [x_1, x_n], [x_{n-1}, x_n]] \to ... $$ $$ x_2 \to [x_1, x_n] \to [ [x_1, x_n], [x_{n-1}, x_n]] \to ... $$ $$ x_3 \to [x_2, x_n] \to [ [x_1, x_n], [x_{n-1}, x_n]] \to ... $$ $$ x_4 \to [x_3, x_n] \to [ [x_2, x_n], [x_{n-1}, x_n]] \to ... $$ We see that for all $m\ge 1$ the images of $m$-th iteration evaluated at $x_1$, $x_2$, \dots, $x_{m+1}$ are equal. In particular, for $m=n-1$ the image of $n-1$-th iteration takes the same value at all $x_i$, and hence $w_{n,i}$ is freely equivalent to an empty word for all $i$ ($1 \le i \le n$). Observe, that if for some $k$ the elements $y_1$, $y_2$, \dots, $y_k$ freely generate a free group on $k$ generators, then $[y_1,y_k]$, \dots, $[y_{k-1}, y_k]$ freely generate a group on $k-1$ generators. Using this fact and arguing by induction on $j$ we observe for all $j<n$ the elements $w_{j,i}$, $i : n-j +1 \le i \le n$ freely generate a group on $n-j$ generators. This implies in particular that for $j<n$ all coordinates of the $j$-th iteration are non-trivial. A generalization of the first part of Theorem \ref{thm:noid} says that if the all components of the iteration map are not trivial in a free group, then there is a finite group where where all components of the iterations remain non-trivial. \begin{rem} Suppose that the words $w_1(x_1, \dots, x_n)$, \dots, $w_n(x_1, \dots, x_n)$ are such that $w_1(x_1, \dots, x_n)$, \dots, $w_n(x_1, \dots, x_n)$ generate a free subgroup of rank $n$ in the free group generated by $x_1$, \dots, $x_n$. Then for all $m \ge 1$ and all $i$ , $1 \le i \le n$ the iteration $w^i_{\circ m} \ne e$ in the free group generated by $x_1$, \dots, $x_n$. \end{rem} {\bf Proof.} Observe that the endomorphism $w: x_i \to w_i(x_1, \dots, x_n)$ is injective, since otherwise the free group $F_n$ would have a quotient over non-trivial normal subgroup with the image isomorphic to $F_n$. It is well known and not difficult to see that this can not happen, in other words, the free group (as any other residually finite gorup) is Hopfian (see e.g. Thm 6.1.12 in \cite{robinsonbook}). Therefore, any itetation $w_{\circ m} $ of the endomorphism $w$ is injective. Hence the image of $w^i_{\circ m}$ is isomorphic to $F_n$, that is, this image is a free group of rank $n$. This implies in particular that for all $m\ge 1$ and all $i$ $w^i_{\circ m} \ne e$ in the free group generated by $x_1$, \dots, $x_n$. \begin{rem} Suppose that $w^i_{\circ m} \ne e$ in the free group, for all $m\le n$ and all $i$. Then $w^i_{\circ m} \ne e$ for all $m$ and all $i$. \end{rem} {\bf Proof.} Consider images of the free group $F_n$ (generated by $x_1$, \dots, $x_n$) with respect to $w$, $w_{\circ 2}$, \dots, $w_{\circ n}$. If there exists at list some element , not equal to $e$, in the image of $w_{\circ n}$, then there exists $m<n$ such that the rank of the free group in the image of $w_{\circ m}$ is equal to that in the image of $w_{\circ m+1}$, and this rank is at least $1$. In this case the restriction of $w$ to the image of $w_{\circ m}$ is injective, that is, if $g,h$ in the image of $w_{\circ m}$ are such that $w(g)=w(h)$, then $g=h$. Arguing by induction we see that for all $t\ge 1$ the restriction of $w_{\circ t}$ to the image of $w_{\circ m}$ is injective. Therefore, if $w_{\circ m} (x_i) \ne e$ in the free group generated by $x_1$, \dots, $x_n$, then $w_{\circ m+t} (x_i) \ne e$ for all $t\ge 1$. 
\begin{cor} \label{cor:generaldynamics} $k\ge 1$, take words $w_1(a_1, \dots, a_k)$, \dots $w_k(a_1, \dots, a_s)$ and suppose that for all $m$ the words $w_{j, \circ m} $ are not freely equivalent to identity, for all $j : 1 \le s$. Then there exist a finite group $G$ such that for all $m\ge 1$ and all $j : 1 \le j \le s$ the iterations $w^i_{\circ m} \ne e$ in $G$. \end{cor} {\it Proof of Theorem \ref{thm:borsapmol} and Corollary \ref{cor:generaldynamics}} We know that for all $m$, and hence in particular for $m=4s$ that the words $w_{j, \circ m} $ are not freely equivalent to identity, for all $j : 1 \le s$. Consider $s$ matrices $M_j$ over $\mathbb{Z}[x_{i_1,i_2, j}])$, where $i_1,i_2: 1 \le i_1, i_2 \le 2$, $j: 1 \le j \le s$ and $x_{i,j}$ are independant variables: \[ M_j= \left( \begin{array}{ccc} x_{1,1,j} & x_{1,2,j} \\ x_{2,1,j} & x_{2,2,j} \end{array} \right) \] Consider rational functions $R^{(n)}_{r_1,r_2,t}$ in $x_{i_1,i_2,j}$ , $i_1,i_2: 1 \le i_1, i_2 \le 4$, $j: 1 \le j \le s$ and $r_1: 1 \le r_1,r_2 \le 2$, $s: 1 \le s \le s$ which are the entries of $w_{\circ n, t}(m_1,\dots, m_s)$, $t: 1 \le t \le s$, these rational functions are of the form $$ R^{(n)}_{r_1,r_2,t}= P^{(n)}_{r,t}/\prod _j (\det M_j)^{\alpha_j}, $$ where $P^{(n)}_{r,t}$ are polynomials in $x_{i_1,i_2,j}$ with integer coefficients, and $\alpha_j$ are some integers. We want to find a non-trivial solution over a finite field of the system of equations for some $n\ge 1$ \begin{equation} \label{systemR} R^{(n)}_{i_1,i_2,j} =x_{i_1, i_2,j}. \end{equation} To to this, we want to find a solution of the system of the equation \begin{equation} \label{systemP} P^{(n)}_{i_1,i_2,j}(x_{r,t}) =x_{i_1, i_2,j}, \end{equation} $r \le 4$, $t \le s$, where none of the two-times-two matrices $m_j$, $j: 1 \le j \le s$ is proportional to a diagonal matrix and satisfying $\det m_j = x_{1,j} x_{4,j} - x_{2,j} x_{3,j} \ne e$, for all $j: 1 \le j \le s$. To do this it is sufficient to find, for some large power $Q$ of $q$ a solution of the system of the equations, each $M_j$ has determinant not equal to zero, none of $M_j$ is proportional to an identity matrix, none of the coordinates of the $u$-th iteration ($u \le m$, $m$ is an appropriate function of $Q$, $n$, and $s$ ) of $w$ applied to $M_1, \dots, M_s$ is proportional to the identity matrix. \begin{equation} \label{systemPQ} P_{i_1, i_2,j} =x_{i_1, i_2,j}^Q \end{equation} over a finite field of characteristics $q$. \begin{rem} Let $W$ is a word in $x_1$, \dots, $x_n$ which is not freely equivalent to an empty word Then the entry $M_{1,2}$ in the upper-right corner of the matrix $M$ (which is a rational function in $x_1$, \dots, $x_n$) for the matrix $$ M = W(m_1, \dots, m_2) $$ is not equal to zero. \end{rem} \begin{rem} \label{re:nonzero} Let $W_1$, $W_2$, \dots, $W_N$ are words in $x_1$, \dots, $x_n$ such that none of these words is freely equivalent to an empty word. Let $F_1$, \dots, $F_L$ are integer valued polynomials in $x_1$, \dots, $x_n$, each of $F_i$ is not identically zero. Then there exists a finite field $\mathcal{K}$ and $x_1$, \dots, $x_n \in \mathcal{K}$ such that the (upper-right) entry $M_{1,2,j}$ of the matrix $$ M_j = W_j(M_1, \dots, M_2) $$ is not equal to $0$, for each $j: 1\le j\le N$ and such that $F_j(x_1, x_2, \dots, x_n) \ne 0 $ for all $j \le L$. \end{rem} Now consider $n=N=s$ , $W_{j}=w_{\circ 4s,j}$. 
From the assumption of the theorem we know that none of the words $W_j$ is freely equivalent to an empty word, and hence these words satisfy the assumption of Remark \ref{re:nonzero}. Consider $L=s$ and $F_j = x_{1,j} x_{4,j} -x_{2,j} x_{3,j}$. From Remark \ref{re:nonzero} we know that there exists a point $v_{i,j}$, $ i \le 4$, $j\le s$, in the image of the mapping corresponding to $w_{\circ 4s,j}$ such that $ v_{1,j} v_{4,j} -v_{2,j} v_{3,j} \ne 0$ for all $j$ and such that $v_{2,j} \ne 0$ for all $j$. Consider the polynomial in $x_{j,i}$, $j: 1 \le j \le s$, $i: 1 \le i \le 4$, $$ D =\prod_{j=1}^s x_{j,2} \prod_{j=1}^s (x_{j,1} x_{j,4} -x_{j,2} x_{j,3}) $$ The degree of $D$ is equal to $3s$, and $D(v_{i,j}) \ne 0$ for some $v_{i,j}$ in the image of the mapping corresponding to $w_{\circ 4s,j}$. Applying Theorem \ref{thm:borisovsapir} with $n=4s$ and $d$ equal to the length of $W$ we can conclude that there exists a solution $x_{i,j}$ of the system of equations (\ref{systemR}), such that $D(x_{i,j}) \ne 0$, as long as $Q$ satisfies $$ Q/3s > (4s)(4s+1) d^{16 s^2+1}. $$ \section{Open questions} \label{se:openquestions} We recall again that our main interest is the words in the commutator, with total number of $x$ which is not zero (and with total number of $y$ which is not zero). Take $w(x,y)$ such that the total number $X$ of $x$ is not zero. The total number of $x$ in the $n$-th iteration is equal to $X^n$, and the total number in $w_{\circ n}(x)x^{-1}$ is $X^n-1$. So if $X\ne 2$, already without taking any iteration $w(x,y) =x$ has a solution with $x\ne 0$ in an Abelian finite cyclic group, and if $X=2$, the equation for the second iteration $w(w (x,y),y)=x$ has a solution in a finite Abelian group $\mathbb{Z}/2\mathbb{Z}$. For example, if we take $w(x,y)= yx^2y^{-1}$, then for the first iteration we obtain the solvable Baumslag-Solitar group (so that the equation does not have a solution in finite groups with $x\ne e$), but for the second iteration we do obtain such solutions. Given a word $w$, one can ask what is the minimal $m$, which we denote by $m(w)$, such that $w_{\circ m}(x)=x$ has a non-identity solution in a finite group. What is the minimal size $M(w)$ of a finite group which does not satisfy the iterated identity $w$? Absence of (usual) identities in the class of finite quotients of a given group $G$ (for example absence of identities for all finite groups, or all finite nilpotent groups etc.\ for $G=F_m$) can be a corollary of residual finiteness of $G$. One can make the statement quantitative, by taking a word $w$ of length $l$ and asking for the minimal possible size of a finite quotient of $G$ which does not satisfy $w$. Or a weaker version: for the minimal possible size of a finite quotient in which $w(x_1, \dots, x_n) \ne e$ for a fixed finite set $x_1$, $x_2$, \dots, $x_n$ in $G$. This notion, introduced by Bou-Rabee in \cite{bourabeedef}, is called {\it the normal residual finiteness growth function}, see also \cite{bourabeemcreynolds, bourabee7, buskin, kassabovmatucci}, and it is called {\it residual finiteness growth function} in Thom \cite{thom} and Bradford and Thom \cite{bradfordthom} (not to be confused with the residual finiteness growth function in the terminology of \cite{bourabeeetal}, which measures the size of finite, not necessarily normal, subgroups not containing a given element), who have proven the lower bound $\ge C n ^{3/2}/\log^{9/2+\epsilon}{n}$, which holds for any $\epsilon>0$.
Kassabov and Matucci suggested in \cite{kassabovmatucci} that the argument of Hadad \cite{hadad} can give a close upper bound for the normal residual finiteness growth function, namely $n^{3/2}$. The upper bound known so far is $n^3$ \cite{bourabeedef}, which is a corollary of the estimate for $SL(2, \mathbb{Z})$, using an embedding of a free group into this group. The estimate of Bradford and Thom is a corollary of their result, stating that for any $\delta>0$ and all $n\ge 1$ there exists a word $w_n$ of length at most $n^{2/3} \ln^{3+\delta}(n)$ which is an identity in all finite groups of cardinality at most $n$. Now one can ask corresponding questions related to iterated identities. In particular, one can ask what is the minimal length of a word $w_n$ which is an iterated identity in all finite groups of cardinality at most $n$? Given a word $w$, we denote by $NI(w)$ the minimal cardinality of a group $G$ such that $w$ is not an iterated identity in $G$, and by $PE(w)$ the minimal cardinality of a group $G$ such that $w_{\circ m}(g) =g$ has at least one non-identity solution in $G$, for some $m\ge 1$. It is clear that $PE(w) \ge NI(w)$. We also denote by $PE_d(n)$ and $NI_d(n)$ the maximum of $PE(w)$ and $NI(w)$, where the maximum is taken over all words of length at most $n$ on $d$ letters, not freely reduced to an empty word. Finally, given a word $w$ we can ask what is the minimal $m$ such that $w_{\circ m}(g) =g$ has at least one non-identity solution in some finite group? Another question we can ask: what are the possible classes of finite groups with the property that for any $w$, not freely equivalent to the identity, there exists a group in this class which does not satisfy the iterated identity $w$? In particular, one can ask: for which subsets $\Omega \subset \mathbb{N}$ is it true that for any $w$, not freely equivalent to the identity, there exists a group $G$, with the cardinality of $G$ belonging to $\Omega$, such that $G$ does not satisfy the iterated identity $w$? We have seen in the proof of Theorem \ref{thm:noid} that it is sufficient to consider $SL(2,F_Q)$, for $Q$ which is a large power of a large prime $q$, and hence the set $\Omega$ containing the numbers $q^n-q$, for large enough $q$ and large enough $n$, has this property. We denote by $ \mathcal{O}_{int}$ the set of $\Omega \subset \mathbb{N}$ with the property above. By $\mathcal{O}$ we denote the set of subsets $\Omega \subset \mathbb{N}$ such that for any word $w$, not freely equivalent to the identity, there exists a finite group $G$, of cardinality belonging to $\Omega$, such that $w$ is not an identity in $G$. It is clear that $\mathcal{O}_{int} \subset \mathcal{O}$ and that $\mathcal{O}_{int} \ne\mathcal{O}$, since the set of powers of a given prime $p$ belongs to $\mathcal{O}$ for all $p$ and does not belong to $\mathcal{O}_{int}$. \end{document}
math
70,336
\begin{document} \title{Measurements of entanglement over a kilometric distance to test superluminal models of Quantum Mechanics: preliminary results.} \author{B Cocciaro, S Faetti and L Fronzoni} \address{Department of Physics Enrico Fermi, Largo Pontecorvo 3, I-56127 Pisa, Italy} \ead{ [email protected], [email protected], [email protected]} \begin{abstract} As shown in the \emph{EPR} paper (Einstein, Podolsky e Rosen, 1935), Quantum Mechanics is a non-local Theory. The Bell theorem and the successive experiments ruled out the possibility of explaining quantum correlations using only local hidden variables models. Some authors suggested that quantum correlations could be due to superluminal communications that propagate isotropically with velocity \emph{$v_{t}>c$} in a preferred reference frame. For finite values of \emph{$v_{t}$} and in some special cases, Quantum Mechanics and superluminal models lead to different predictions. So far, no deviations from the predictions of Quantum Mechanics have been detected and only lower bounds for the superluminal velocities \emph{$v_{t}$} have been established. Here we describe a new experiment that increases the maximum detectable superluminal velocities and we give some preliminary results. \end{abstract} \section{\label{sec:Introduction}Introduction} The non local character of Quantum Mechanics (\textit{QM}) has been object of a great debate starting from the famous Einstein-Podolsky-Rosen (\textit{EPR}) paper \cite{EPR}. Consider, for instance, a quantum system made by two photons \emph{a} and \emph{b} that are in the polarization entangled state \begin{equation} |\psi>=\frac{1}{\sqrt{2}}\left(|H,H>+e^{i\phi}|V,V>\right)\label{eq:1} \end{equation} where \textit{H} and \textit{V} stand for horizontal and vertical polarization, respectively, and $\phi$ is a constant phase coefficient. The two entangled photons are created at point \textit{O}, propagate in space far away one from the other (see Fig.\ref{fig:fotoni entangled}) and reach at the same time points \textit{A} (Alice) and \textit{B} (Bob) that are equidistant from \textit{O} as schematically drawn in Fig.\ref{fig:fotoni entangled}. Two polarizing filters $P_{A}$ and $P_{B}$ lie at points \textit{A} and \textit{B}, respectively. \begin{SCfigure}[50][h] \centering \includegraphics[width=0.5\textwidth]{graph1.jpg} \hspace{0.05in} \caption{\protect\rule{0ex}{7ex}\textit{O}: source of entangled photons (\emph{a} and \emph{b}); \textit{A}(Alice) and \textit{B}(Bob): points equidistant from \textit{O} ($d_{A}=d_{B}$); \textit{$P_{A}$} and \textit{$P_{B}$}: Polarizing filters centered at points $A$ and $B$, respectively.} \label{fig:fotoni entangled} \end{SCfigure} Suppose, now, that the polarizers axes are aligned along the horizontal direction. According to \textit{QM}, the passage of photon \emph{a} (or \emph{b}) through polarizer $P_{A}$ (or $P_{B}$) leads to the collapse of the entangled state to $|H,H>$ everywhere, then, also photon \emph{b} (or \emph{a}) collapses to the horizontal polarization. This behaviour\textit{ }suggests the existence of a sort of ``action at a distance'' between entangled particles in complete disagreement with any other classical physic phenomenon (Electromagnetism, Gravity ....). According to Gisin~\cite{Gisin2014,Gisin_QuantumChance}, classical correlations between far events have always due to two possible mechanisms: Common Cause or Communications. 
The Bell theorem~\cite{Bell} and many successive \emph{EPR} experiments~\cite{Feedman_PhysRevLett_1972,Aspect,Zeilinger_PLA_1986,Tittel_PhysRevLett_1998,Weihs_PhysRevLett_1998,Aspect_Nature_1999,Pan_Nature_2000,Grangier_Nature_2001,Rowe_Nature_2001,Matsukevich_PRL_2008} demonstrated that correlations cannot be due only to a common cause (\emph{hidden variables theories}) or to common cause + subluminal communications. According to Bell, ``\emph{in these EPR experiments there is the suggestion that behind the scenes something is going faster than light}''~\cite{Davies_ghost_1993}. Models of \textit{QM} based on the presence of superluminal communications (tachyons) have been proposed~\cite{Eberhard_1989,Bohm_undivided_1991}. Tachyons are known to lead to causal paradoxes (see, for instance, pages 52-53 in~\cite{moller_theory_1955}), but no causal paradox arises if tachyons propagate isotropically in a preferred frame (\emph{PF}) with velocity $v_{t}=\beta_{t}c\,(\beta_{t}>1)$~\cite{Kowalczynski_IntJThPhys_1984,Reuse_AnPhys_1984,Caban_PhysRevA_1999,maudlin_quantum_2001,Cocciaro_2013_ShutYourselfUp}. Suppose, now, that quantum correlations are due to superluminal communications and that an \emph{ideal experiment }is performed in the tachyon preferred frame $S'$ where two polarizing filters lie at the same optical distances $d'_{A}=d'_{B}$ from source \emph{O}. Photons \emph{a} and \emph{b} get the polarizers at the same time and no communication is possible. Then, correlations between entangled particles should differ from the predictions of \emph{QM} and should satisfy the Bell inequality. However, from the experimental point of view, equality $d'_{A}=d'_{B}$ can be only approximatively verified within a given uncertainty $\Delta d'$. Consequently, photons \emph{a} and \emph{b} could get the polarisers at two different times ($\Delta t'=\nicefrac{\Delta d'}{c}$) and could communicate if the tachyon velocity exceeds a lower bound $v_{t,min}=c\left(d'_{AB}/\Delta d'\right)$ where $d'_{AB}$ is the distance between polarizers $P_{A}$ and $P_{B}$ in the \emph{PF}. Two are the possible experimental results:\emph{ i)} a lack of quantum correlations is observed; \emph{ii)} quantum correlations are always satisfied. In the first case (\emph{i)}) one should conclude that quantum correlations are due to exchange of superluminal messages with velocity lower than $v_{t,min}$. In the second case (\emph{ii)}), due to the experimental uncertainty $\Delta d'$, one cannot invalidate the superluminal model of \emph{QM} but can only establish a lower bound $v_{t,min}=c\left(d'_{AB}/\Delta d'\right)$ for the superluminal velocities. It has been recently demonstrated an important theorem~\cite{Bancal_NatPhys_2012,Barnea_PhysRevA2013}: if \emph{QM} correlations are due to superluminal signals with finite velocity $v_{t}$, then also a \emph{macroscopic superluminal signalling} becomes possible provided that states of three or four entangled particles are involved. This means that the superluminal signals do not remain hidden but they could lead to macroscopic superluminal communications. In conclusion, there are two possible alternative situations both involving some upheaval of the common thought: \emph{a}) Nature is intrinsically non local and far events can be correlated without any common cause or communication (orthodox \emph{QM}); \emph{b}) Nature is local but, in this case, macroscopic superluminal signalling is possible (superluminal models). 
Physics is an experimental science and, thus, we think that only experiments can decide between these two alternatives. The correlations between entangled particles can be experimentally tested by measuring the number of coincidences $N(\alpha_{A},\alpha_{B})$ of photons passing through polarizers $P_{A}$ and $P_{B}$ for different values of the polarizer angles $\alpha_{A}$ and $\alpha_{B}$ with respect to the horizontal axis. In particular, two correlation parameters $S_{max}$ and $S_{min}$ can be measured (see equations (33) and (34) in reference~\cite{Aspect_2002}): \begin{equation} S_{max}=\frac{N(45\text{\textdegree},67.5\text{\textdegree})-N(0\text{\textdegree},67.5\text{\textdegree})-N(45\text{\textdegree},112.5\text{\textdegree})-N(90\text{\textdegree},22.5\text{\textdegree})}{N}\label{eq:Smax} \end{equation} and \begin{equation} S_{min}=\frac{N(135\text{\textdegree},202.5\text{\textdegree})-N(0\text{\textdegree},202.5\text{\textdegree})-N(135\text{\textdegree},157.5\text{\textdegree})-N(90\text{\textdegree},67.5\text{\textdegree})}{N},\label{eq:Smin} \end{equation} where $N$ is the number of coincidences with no polarizers (taking into account the polarizers transmission) that can be written as: \begin{equation} N=N(0\text{\textdegree},0\text{\textdegree})+N(0\text{\textdegree},90\text{\textdegree})+N(90\text{\textdegree},0\text{\textdegree})+N(90\text{\textdegree},90\text{\textdegree}).\label{eq:NTOT} \end{equation} Eqs. (\ref{eq:Smax}), (\ref{eq:Smin}) and (\ref{eq:NTOT}) have been obtained from equation (33) in reference~\cite{Aspect_2002} using the equalities: \begin{equation} \begin{array}{l} N(a',\infty)=N(a',b)+N\left(a',b+90\text{\textdegree}\right)\\ N(\infty,b)=N(a,b)+N\left(a+90\text{\textdegree},b\right). \end{array}\label{eq:Smax-1} \end{equation} Quantum Mechanics predicts $S_{max}=0.207$ and $S_{min}=-1.207$, respectively, whilst local theories must satisfy the inequalities $S_{max}\leq 0$ and $S_{min}\geq -1$. Then, the measurement of one of these parameters makes a direct test of the superluminal models possible. So far we have considered an ideal experiment performed in the preferred frame, but the \emph{PF} is unknown. A more complex \emph{EPR} experiment can still be performed on the Earth if \emph{A} and \emph{B} are aligned along the East-West axis and are equidistant (in the Earth frame) from the photon source at \emph{O}. Of course, the entangled photons reach polarizers $P_{A}$ and $P_{B}$ simultaneously in the Earth reference frame but not in the \emph{PF}. However, according to Relativity, these events become simultaneous also in the \emph{PF} if the velocity vector $\vec{V}=\vec{\beta}c$ of the \emph{PF} is orthogonal to the \emph{A}-\emph{B} axis (see, for instance, the appendix in~\cite{Cocciaro_DICE2013}). If the \emph{A}-\emph{B} axis coincides with the East-West direction, due to the Earth rotation around its axis, there are always two times $t_{1}$ and $t_{2}$ during each sidereal day when vector $\vec{V}$ becomes orthogonal to the \emph{A}-\emph{B} axis. If the \emph{A}-\emph{B} axis makes an angle $\gamma\neq0$ with the East-West axis, vector $\vec{V}$ becomes orthogonal to the \emph{A}-\emph{B} axis only if the angle $\theta$ between vector $\vec{V}$ and the Earth polar axis lies in the interval {[}$\gamma,\pi-\gamma${]}. If this condition is satisfied, a loss of quantum correlations should be observed at two unknown times $t_{1}$ and $t_{2}$ each day if the tachyon velocity $v_{t}$ is lower than the maximum detectable velocity $v_{t,min}$.
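The quoted quantum predictions $S_{max}=0.207$ and $S_{min}=-1.207$ can be recovered directly from Eqs. (\ref{eq:Smax})--(\ref{eq:NTOT}) by inserting the ideal joint detection probability $p(\alpha_{A},\alpha_{B})=\cos^{2}(\alpha_{A}-\alpha_{B})/2$ of the state (\ref{eq:1}) with $\phi=0$. The following minimal numerical cross-check (ours, assuming ideal polarizers and unit overall efficiency) is only meant as an illustration:

\begin{verbatim}
# Cross-check of S_max = 0.207 and S_min = -1.207 for the ideal entangled
# state with phi = 0, for which N(a, b) is proportional to cos^2(a - b)/2.
import math

def N(a_deg, b_deg, N_tot=1.0):
    """Expected coincidence count for polarizer angles a, b (in degrees)."""
    return N_tot * math.cos(math.radians(a_deg - b_deg))**2 / 2.0

N_norm = N(0, 0) + N(0, 90) + N(90, 0) + N(90, 90)     # the normalization N
S_max = (N(45, 67.5) - N(0, 67.5) - N(45, 112.5) - N(90, 22.5)) / N_norm
S_min = (N(135, 202.5) - N(0, 202.5) - N(135, 157.5) - N(90, 67.5)) / N_norm
print(round(S_max, 3), round(S_min, 3))                # -> 0.207 -1.207
\end{verbatim}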
However, there is an other important feature that can reduce the maximum detectable tachyons velocities in the Earth experiment. In fact, tachyons get simultaneously the polarizers also in the \emph{PF} only at the two well defined times $t_{1}$ and $t_{2}$ but the measure of the coincidences numbers $N(\alpha{}_{A},\alpha_{B})$ is not instantaneous and requires a finite acquisition time $\delta t$. This produces a further uncertainty on the equalization of the optical paths that is an increasing function of the acquisition time and the reduced velocity $\beta$ of the\emph{ PF}. Using the Relativity theory, it has been shown~\cite{Salart_nature_2008,Cocciaro_PLA_2011} that the lower limit of the detectable tachyon velocities in a Earth experiment is: \begin{equation} \beta_{t,min}=\sqrt{1+\frac{\left(1-\beta^{2}\right)\left[1-\bar{\rho}^{2}\right]}{\left[\bar{\rho}+\beta\sin\chi\sin\frac{\pi\delta t}{T}\right]^{2}}},\label{eq:betamin} \end{equation} where $\bar{\rho}=\nicefrac{\Delta d}{d_{AB}}$, $\Delta d$ is the uncertainty on the equalization of the optical paths in the Earth frame, \emph{T} is the duration of the sidereal day, $\delta t$ is the acquisition time, $\chi$ is the polar angle between the North-South axis and velocity \textbf{\emph{$\vec{V}$}} of the \emph{PF} and $\beta$ is the reduced\emph{ PF} velocity ($\beta=\nicefrac{V}{c}$). In typical experimental conditions~\cite{Salart_nature_2008,Cocciaro_PLA_2011,Cinesi_PhysRevLett2013}, the acquisition time $\delta t$ is much smaller than the sidereal day \emph{T }and $\beta_{t,min}$ is a decreasing function of both $\bar{\rho}$ and $\delta t$ that reaches a minimum value if $\chi=\nicefrac{\pi}{2}$. $\beta_{t,min}$ is also a decreasing function of $\beta$ that assumes its maximum value $\beta_{t,min}=\nicefrac{1}{\bar{\rho}}$ for $\beta=0$ and approaches the minimum value $\beta_{t,min}=1$ for $\beta\rightarrow1$. Our following considerations and figures will be restricted to $\delta t\ll T$ and to the most unfavourable condition $\chi=\nicefrac{\pi}{2}$. The typical plot of function $\beta_{t,min}$ versus the reduced velocity $\beta$ of the \emph{PF} for $\chi=\nicefrac{\pi}{2}$ and for some values of $\bar{\rho}$ and $\delta t$ is drawn in Fig.\ref{fig:2}. \begin{SCfigure}[50] \centering \includegraphics[width=0.5\textwidth]{Figura2.JPG} \hspace{0.05in} \caption{\protect\rule{0ex}{7ex}Function $\beta_{t,min}$ versus $\beta$ for the unfavourable case $\chi=\nicefrac{\pi}{2}$ and for some values of the experimental parameters $\bar{\rho}$ and $\delta t$. Curves \emph{a}, \emph{b} and \emph{c} correspond to the fixed acquisition time $\delta t=10^{-1}\left(\nicefrac{T}{\pi}\right)$ and to decreasing values of $\bar{\rho}$ ($a:\bar{\rho}=10^{-3},\, b:\bar{\rho}=10^{-5},\, c:\bar{\rho}=10^{-6}$). Curves \emph{c}, \emph{d} and \emph{e} correspond to a fixed value $\bar{\rho}=10^{-6}$ and to decreasing values of $\delta t$ ($c:\delta t=10^{-1}\left(\nicefrac{T}{\pi}\right),\, d:\delta t=10^{-3}\left(\nicefrac{T}{\pi}\right),\, e:\delta t=10^{-7}\left(\nicefrac{T}{\pi}\right)$).} \label{fig:2} \end{SCfigure} Experiments of this kind have been performed by some groups in the last years~\cite{Salart_nature_2008,Cocciaro_PLA_2011,Cinesi_PhysRevLett2013}. In all these experiments no loss of \emph{QM} correlations has been observed and, thus, only lower bounds $\beta_{t,min}$ for the tachyons reduced velocities have been established. 
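Equation (\ref{eq:betamin}) is straightforward to evaluate numerically. The short sketch below (ours, for illustration only) uses the value $\bar{\rho}\approx1.8\times10^{-7}$ quoted below for the present apparatus and the planned coincidence acquisition time $\delta t=0.1\thinspace\mathrm{s}$, in the unfavourable case $\chi=\nicefrac{\pi}{2}$; it reproduces the limiting value $\beta_{t,min}=\nicefrac{1}{\bar{\rho}}$ for $\beta=0$ and the decrease of the bound for increasing $\beta$.

\begin{verbatim}
# Illustrative evaluation of the lower bound beta_t,min given in the text,
# for chi = pi/2 (worst case) and delta_t << T.
import math

T_SIDEREAL = 86164.0905          # duration of the sidereal day, in seconds

def beta_t_min(beta, rho_bar, delta_t, chi=math.pi / 2, T=T_SIDEREAL):
    denom = rho_bar + beta * math.sin(chi) * math.sin(math.pi * delta_t / T)
    return math.sqrt(1.0 + (1.0 - beta**2) * (1.0 - rho_bar**2) / denom**2)

rho_bar = 1.8e-7                 # Delta d / d_AB for this experiment
for beta in (0.0, 1e-3, 0.1, 0.99):
    print(f"beta = {beta:5.3f}  ->  beta_t,min = {beta_t_min(beta, rho_bar, 0.1):.3e}")
# beta = 0 gives 1/rho_bar ~ 5.6e6; the bound decreases monotonically with beta
# and tends to 1 in the limit beta -> 1.
\end{verbatim}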
Recently~\cite{Cocciaro_DICE2013} we proposed a new experiment to increase the maximum detectable tachyon velocities by about two orders of magnitude. Here we describe our improved experimental apparatus and we report some very preliminary experimental results. The main features of the experiment are discussed in Section \ref{sec:The-main-features}. The preliminary experimental results are in Section \ref{sec:Critical-points-and}, whilst the conclusions are in Section \ref{sec:Conclusions}. \section{\label{sec:The-main-features}The experimental apparatus and the main sources of error.} \subsection{\label{Entangled source}Production and detection of entangled photons.} The main goal of our experiment is to make the parameters $\bar{\rho}=\nicefrac{\Delta d}{d_{AB}}$ and $\delta t$ as small as possible to increase the lower bound $\beta_{t,min}$. Small values of $\bar{\rho}=\nicefrac{\Delta d}{d_{AB}}$ ($\bar{\rho}\approx1.8\times10^{-7}$) are obtained using a large distance $d_{AB}$ ($d_{AB}=1200\thinspace\mathrm{m}$) and a small uncertainty $\Delta d$ ($\Delta d<220\thinspace\mathrm{\mu m}$). A high intensity source of entangled photons provides a high coincidence rate ($15000\thinspace\mathrm{coinc/s}$) and, thus, a small minimum acquisition time that is estimated to be $\delta t\approx0.1\thinspace\mathrm{s}$. The experiment is performed in the ``East-West'' gallery of the European Gravitational Observatory (\emph{EGO}~\cite{EGO}) of Cascina that hosts the \emph{VIRGO} experiment on the detection of gravitational waves. Unfortunately, this gallery makes an angle $\gamma=18\text{\textdegree}$ with the actual East-West axis. Then, vector $\vec{V}$ becomes orthogonal to the gallery axis at two times $t_{1}$ and $t_{2}$ only if the angle $\theta$ between vector $\vec{V}$ and the Earth polar axis lies in the interval {[}$\gamma,\pi-\gamma${]}. This means that we do not look at the entire celestial sphere but only at a $\approx95\%$ fraction of it. In fact, the excluded solid angle is $\Omega=2\int_{0}^{\gamma}2\pi\sin\theta\, d\theta\approx5\%$ of the $4\pi$ total solid angle. \begin{figure} \caption{\label{fig:3} Schematic view of the experimental apparatus.} \end{figure} The experimental apparatus is schematically shown in figure \ref{fig:3}. A $220\thinspace\mathrm{mW}$ diode laser beam ($\lambda=406.5\,\mathrm{nm}$) is polarized (polarizer $P_{0}$) and the polarization axis can be rotated by a motorized $\lambda/2$ plate. All the measurements reported in this paper have been performed with the polarization axis making a 45\textdegree{} angle with the horizontal axis. The beam passes through a Babinet-Soleil compensator and impinges at normal incidence on two thin ($0.56\,\mathrm{mm}$) adjacent non-linear optical crystals (\textit{BBO}) cut for type-I phase matching~\cite{Kwiat_PhysRevA_1999}. The beam is focused on the \emph{BBO} plates with the beam waist having a $0.6\thinspace\mathrm{mm}$ diameter. The optic axes of the \emph{BBO} plates are tilted at the angle $29.05\text{\textdegree}$ and lie in planes perpendicular to each other, with the first plane being horizontal. The pump beam induces down conversion at the wavelength $\lambda=813\,\mathrm{nm}$ in each crystal~\cite{Kwiat_PhysRevA_1999} with maximum emission at the two symmetric angles $\gamma_{A}=-\gamma_{B}=2.42\text{\textdegree}$ with respect to the pump laser beam. Suitable optical diaphragms select the entangled beams that are emitted within cones of aperture 0.8\textdegree{} centred at the maximum emission angles.
The down converted photons are created in the maximally entangled state $\left(|H,H>+e^{i\phi}|V,V>\right)/\sqrt{2}$, where phase $\phi$ can be changed moving the motorized Babinet-Soleil compensator. Plates $C$, $C_{A}$ and $C_{B}$ in figure \ref{fig:3} are suitable compensating plates that provide a compensation of spurious effects due to the poor coherence of the pump beam ($C$) and to the anisotropy of the \emph{BBO} plates($C_{A}$ and $C_{B}$)~\cite{Kwiat_OptExpr_2005,Kwiat_OptExpr_2007,Kwiat_OptExpr_2009}. With these compensating plates we obtain a high intensity source of entangled photons with high fidelity. All the components described above lie on a central optical table that is entirely enclosed in an insulating box. One of the lateral internal walls is made by a $50\thinspace\mathrm{cm\times150\thinspace\mathrm{cm}}$ aluminium plate ($5\thinspace\mathrm{mm-\mathrm{thickness}}$) in thermal contact with copper tube coils where a paraflu fluid circulates. Two $80\thinspace\mathrm{W}$ fans inside the box move the air and homogenize the temperature everywhere. In this way, the internal temperature can be maintained fixed better than $\pm0.1\thinspace\text{\textdegree C}$. Two couples of specially designed achromatic lenses $L_{A},\thinspace L_{B},\thinspace L'_{A}$ and $L'_{B}$ (diameter = $15\thinspace\mathrm{cm}$, focal length = $6\thinspace\mathrm{m}$) allow us to obtain two 1:1 images of the entangled photons source on two thin near infrared polarizing films (\emph{LPNIR}, Thorlabs) $P_{A}$ and $P_{B}$ that lie at a $600\thinspace\mathrm{m}$ distance from the source. The entangled photons pass through polarizers $P_{A}$ and $P_{B}$ and through the optical sets $CO_{A}$ and $CO_{B}$ that will be described below. Then, they are transmitted (98\% transmission) by dichroic mirrors $DM_{A}$ and $DM_{B}$ (Chroma T760lpxr) and by two Chroma Techn. Corp. filtering sets $F_{A}$ and $F_{B}$ each composed by a bandpass filters ET810/40m ($\lambda=810\,\mathrm{nm}\pm20\,\mathrm{nm}$) and two low pass ET765lp filters ($\lambda_{c}=765\,\mathrm{nm}$). Finally, two identical optical lenses $O_{A}$ and $O_{B}$ focus the entangled photons on two Thorlabs multi mode optical fibres having a large diameter core ($200\,\mathrm{\mu m}$) and high numerical aperture (0.39). The ends of fibres are connected to the inputs of the single photons counters $D_{A}$ and $D_{B}$ (Perkin Elmer SPCM-AQ4C) that generate output voltage pulses with a $25\thinspace\mathrm{ns}$ width. The voltage pulses are transformed into optical pulses by LCM155EW4932-64 modules of Nortel Networks (\emph{V} to \emph{O} module in fig. \ref{fig:3}) that propagate in single mode optical fibres up to the central optical table where they are converted into electric pulses (\emph{O} to \emph{V} module in fig.\ref{fig:3}) and sent to an electronic monostable circuit that provides output squared voltage pulses together with coincidences pulses. Before starting the measurements we have measured the light spectral absorption due to air and we have verified that the adsorption in the wavelengths interval {[}$790\thinspace\mathrm{nm-}830\thinspace\mathrm{nm}${]} is essentially due to water vapour. The total adsorbed light in this interval is a fraction lower than 3\% of the incident light for a 45\% air relative humidity. The coincidence rate measured by counters versus phase $\phi$ is shown in figure \ref{fig:4}. Note the satisfactory contrast of the fringes that is obtained using the Kwiat et al. 
compensating plates~\cite{Kwiat_OptExpr_2005,Kwiat_OptExpr_2007,Kwiat_OptExpr_2009}. \begin{SCfigure}[50] \centering \includegraphics[width=0.5\textwidth]{Figura4.JPG} \hspace{0.05 in} \caption{\protect\rule{0ex}{7ex}Dependence of the coincidence rate on phase $\phi$ when all the polarizer angles are fixed at 45 degrees. Full squares correspond to the experimental values with the statistical coincidences subtracted. The open squares are the experimental results with the statistical coincidences subtracted but without using the Kwiat compensators \textit{$C_{A}, C_{B} $} and \textit{$C $}. The full line corresponds to the prediction of Quantum Mechanics for the entangled state of eq.~(\ref{eq:1}). } \label{fig:4} \end{SCfigure} \subsection{\label{reference beams}Compensation of the beam deflections.} An interferometric method is used to equalize the optical paths from the source of the entangled photons to polarizers $P_{A}$ and $P_{B}$. The method exploits two reference beams (beams \emph{I} in figure \ref{fig:3}) of wavelength $\lambda=681\,\mathrm{nm}$ and coherence length $L_{c}=28.1\,\mathrm{\mu m}$ produced by a superluminescent diode (\emph{SLED} in figure \ref{fig:3}). Due to the occurrence of vertical temperature gradients up to $3\,\mathrm{\text{\textdegree C/m}}$ in the \emph{EGO} gallery produced by sunlight, it has been necessary to use two couples of different reference beams (beams \emph{I} and \emph{II}) in each arm of the interferometer. It can be easily shown that a uniform vertical temperature gradient generates a vertical gradient of the air refractive index that produces the same effect as a diffused optical prism, leading to a continuous deviation of the optical beams \emph{I} and \emph{II} up to about $1\,\mathrm{m}$ at a $600\,\mathrm{m}$ distance. The full curve in Figure \ref{figure:5-1}(b) shows the average trajectory of beam \emph{I} when a vertical temperature gradient occurs. A parabolic shape of the trajectory is predicted if the vertical temperature gradient is everywhere constant in the gallery. Furthermore, the small non-uniformity of the vertical gradient of the air refractive index simulates a diffused cylindrical lens that leads to astigmatism of the images. \begin{figure} \caption{\label{figure:5-1} Average trajectory of reference beam \emph{I} in the presence of a vertical temperature gradient and positions of the beam spots with the feedbacks off and on (panels discussed in the text).} \end{figure} The accurate compensation of these effects is needed to collect a great number of entangled photons on the photon counting detectors. Beam \emph{I} follows the same optical path as the entangled photons and provides an interferometric signal, whilst beam \emph{II} is horizontally displaced with respect to beam \emph{I} and allows us to compensate the vertical deflections of the beams (up to $1\,\mathrm{m}$ at a $600\,\mathrm{m}$ distance) produced by the air refractive index gradients. The method to generate the reference beams has been greatly improved with respect to that proposed in~\cite{Cocciaro_DICE2013}. Here we obtain the reference beams \emph{I} and \emph{II} using the beam displacer (Thorlabs BDY12U) in figure \ref{fig:3} to split the incident \emph{SLED} beam into two parallel beams at a $1.2\,\mathrm{mm}$ horizontal distance. The two beams pass through a beam splitter and are focused on a transmission phase grating that produces +1 and -1 order diffracted beams with 35\% intensity with respect to the incident beam and at the average diffraction angles $+2.43\text{\textdegree}$ and $-2.43\text{\textdegree}$ that are virtually coincident with the maximum emission angles of the entangled photons ($\pm2.42\text{\textdegree}$).
The beam waists of the two reference beams spots on the optical grating have a $0.3\thinspace\text{mm}$ diameter and behave as two sources localized on the grating at a $1.2\thinspace\mathrm{mm}$ horizontal distance. The optical rays emitted by these sources at the angles $+2.43\text{\textdegree}$ and $-2.43\text{\textdegree}$ pass through an achromatic lens having a $150\thinspace\mathrm{mm}$ focal length, are reflected by a $565\thinspace\mathrm{nm}$ short pass dichroic mirror (Chroma T565spxe) and produce 1:1 images of the grating spots on the \emph{BBO} plates. Using a suitable optical method, the image of the reference source \emph{I} on the \emph{BBO} plates is centred with respect to the spot of the pump beam where the entangled photons are generated. The procedure above ensures that the reference beams \emph{I }outgoing from the \emph{BBO} plates are initially in phase and are superimposed to the entangled photons. This provides the easy alignment of the optical apparatus and the control of the optical paths of the entangled photons. Achromatic lenses $L_{A},\thinspace L_{B},\thinspace L'_{A}$ and $L'_{B}$ have been built to have the same $6000\thinspace\mathrm{mm}$ focal length (within $\pm10\thinspace\mathrm{mm}$) for the pump laser and for the \emph{SLED}. 1:1 images of the reference source \emph{I} and of the entangled photons source are produced by lenses $L_{A},\thinspace L_{B},\thinspace L'_{A}$ and $L'_{B}$ on polarizers $P_{A}$ and $P_{B}$ at a $600\thinspace\mathrm{m}$ distance from the source. Reference beams\emph{ II} are horizontally deflected by lenses $L_{A}$ and $L_{B}$ and produce two spots at a horizontal distance of $12\thinspace\mathrm{cm}$ from the centres of lenses $L'_{A}$ and $L'_{B}$ on two diffusing screens horizontally adjacent to the lenses (see figure \ref{fig:3}). Two optical objectives collect the diffused beams and produce images of the spots on two webcams. All lenses $L_{A},\thinspace L_{B},\thinspace L'_{A}$ and $L'_{B}$ can be moved horizontally and vertically using Sigma Koki \emph{PC} controlled motors. A labview program measures the position of the beams spots on the webcams and produces feedback signals that move lenses $L_{A}$ and $L_{B}$ to maintain the spots positions fixed. In such a way also the reference beams \emph{I }(and the entangled photons) always remain fixed at the centre of lenses $L'_{A}$ and $L'_{B}$ (see Figure \ref{figure:5-1}(c)). A $1\thinspace\mathrm{cm}$ movement of lenses $L_{A}$ and $L_{B}$ produces a $1\thinspace\mathrm{m}$ displacement of the beams spots at a $600\thinspace\mathrm{m}$ distance. The feedback procedure leads to a complete control of the slow drifts of the beams but it cannot eliminate the rapid changes of the beam trajectories occurring within a few seconds time. This leads to a residual fluctuation of the beam spots at the centres of lenses $L'_{A}$ and $L'_{B}$ that is lower than $\pm30\thinspace\mathrm{mm}$. These residual displacements are appreciably smaller than the radius of the lenses ($75\thinspace\mathrm{mm}$), then all the entangled photons are collected by them. The beams impinge at the centres of lenses $L'_{A}$ and $L'_{B}$ but the incidence angles change with time, due to the vertical refractive index gradient. Then, the images of the source of beams \emph{I }(and of the entangled photons) that occur on polarizers $P_{A}$ and $P_{B}$ do not remain fixed at the centre of the polarizers (see figure \ref{figure:5-1}c)). 
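The lens-centring feedback described above (webcams reading the spot positions and motorized translations of lenses $L_{A}$ and $L_{B}$) is conceptually very simple; the following sketch (ours; \texttt{read\_spot\_position} and \texttt{move\_lens} are placeholders, not the actual LabVIEW interfaces) summarizes one correction step, using the quoted lever arm of $1\thinspace\mathrm{cm}$ of lens travel per $1\thinspace\mathrm{m}$ of spot displacement at a $600\thinspace\mathrm{m}$ distance.

\begin{verbatim}
# Conceptual sketch of the slow centring feedback (not the actual LabVIEW code).
# A 1 cm lens movement produces a 1 m spot displacement at 600 m, hence the gain.
import time

GAIN = 0.01       # metres of lens travel per metre of spot error

def feedback_step(read_spot_position, move_lens):
    """One correction step: read the spot error and move the lens against it."""
    dx, dy = read_spot_position()            # spot displacement from the target
    move_lens(-GAIN * dx, -GAIN * dy)        # horizontal and vertical corrections

def run(read_spot_position, move_lens, period_s=1.0):
    """Repeat the correction; only slow drifts are removed, fast jitter is not."""
    while True:
        feedback_step(read_spot_position, move_lens)
        time.sleep(period_s)
\end{verbatim}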
To stabilize these images at the centre of the polarizers, the reference beams \emph{I} transmitted by the polarizers pass through two systems of cylindrical lenses $CO_{A}$ and $CO_{B}$, are reflected by the $760\thinspace\mathrm{nm}$ long pass dichroic mirrors $DM_{A}$ and $DM_{B}$ (Chroma T760lpxr) and impinge on two optical position control systems. Each optical position control system produces two 1:1 images of the beam spots occurring on the polarizers: one image is collected by a position sensing detector (Thorlabs PDP90A) and the other by a webcam. Labview feedback programs read the output of the position sensing detectors and move lenses $L'_{A}$ and $L'_{B}$ to keep the position of the beam spots fixed at the centre of the two polarizers (see Figure \ref{figure:5-1}(d)). In this case, too, slow drifts are completely removed but not the rapid displacements of the spots from the polarizer centres. These latter residual fluctuations always remain restricted below $\pm0.4\thinspace\mathrm{mm}$. Other labview feedback programs acquire the images of the webcams and measure the astigmatism of the images induced by non-uniformities of the refractive index gradients. Each system of cylindrical lenses $CO_{A}$ and $CO_{B}$ is composed of a fixed cylindrical lens and a movable cylindrical lens that provide an effective cylindrical lens with a variable focal length. Suitable feedback signals generated by the labview program move the motorized cylindrical lenses to correct the astigmatism of the images. These procedures ensure that the spots of the reference beams \emph{I} remain virtually fixed at the centre of polarizers $P_{A}$ and $P_{B}$ with a circular shape having a $\approx0.3\thinspace\mathrm{mm}$ diameter. Two images of the spot of the reference beam on polarizer $P_{A}$ are shown in figures \ref{figure:5}a) and \ref{figure:5}b). Figure \ref{figure:5}a) shows the spot for moderate sunlight with feedback \emph{OFF}, whilst figure \ref{figure:5}b) shows the same image with feedback \emph{ON}. It must be remarked that our method stabilizes the spots of the 681 nm reference beams, but the wavelengths of the entangled photons ($790\thinspace\mathrm{nm}-830\thinspace\mathrm{nm}$) are different from those of the \emph{SLED} beam. However, the differences of the air refractive indices corresponding to the reference beams and to the entangled photons are very small and it can be shown that the spots of the entangled photons on the polarizers also remain very close to the centres of the polarizers, within $0.4\thinspace\mathrm{mm}$. In conclusion, our compensation procedure maintains the spot of the entangled photons restricted to a circular region of small diameter ($\approx0.6\thinspace\mathrm{mm}$) close to the centre of the polarizers and ensures that virtually all the entangled photons passing through the polarizers are collected by the photon counting detectors even in conditions of strong sunlight. \begin{figure} \caption{\label{figure:5} Images of the reference beam spot on polarizer $P_{A}$ with feedback \emph{OFF} (a) and \emph{ON} (b).} \end{figure} \subsection{\label{reference beams-1} Equalization of the optical paths.} To equalize the optical paths we exploit the reflections of the reference beams \emph{I} from polarizers $P_{A}$ and $P_{B}$. The reflected beams travel back along the same path and impinge at the angles $+2.43\text{\textdegree}$ and $-2.43\text{\textdegree}$ on the optical phase grating, where diffraction occurs again.
The output beams that are diffracted orthogonally to the grating are reflected by the beam splitter and impinge on photodetector \emph{Ph} where interference occurs. Air density fluctuations inside the \emph{EGO} gallery induce oscillations of the optical path difference and, thus, an oscillating output voltage of the photodetector. The variations of the optical path differences are always greater than the optical wavelength and the output voltage oscillates from a minimum value (destructive interference) to a maximum value (constructive interference). The peak-to-peak amplitude $V_{pp}$ is measured by a simple electronic circuit. $V_{pp}$ is maximized when the path difference is zero whilst $V_{pp}$ tends to vanish if the path difference becomes greater than the \emph{SLED} coherence length $L_{c}=28.1\,\mathrm{\mu m}$. Polarizer $P_{B}$ is moved by a precision linear motorized stage (Physik Instruments M-406.22s) that is controlled by a \emph{PC} through a labview program that generates a sweep of the $P_{B}$ position and acquires the corresponding $V_{pp}$ values. The typical dependence of $V_{pp}$ on the polarizer position \emph{x} during a summer night is shown in figure \ref{fig:6}(a) whilst the dependence during a summer day at the maximum sunlight is shown in figure \ref{fig:6}(b). Note that the curve in figure \ref{fig:6}(b) has a two-bell profile. This behaviour can be explained assuming that the path difference oscillates with time around the average value $\Delta d_{av}$ with a mean oscillation amplitude \emph{A}. In these conditions, the typical Gaussian behaviour due to the finite coherence length of the \emph{SLED} is expected to split into two nearly Gaussian profiles at distance 2\emph{A}. The central point between the two Gaussian peaks corresponds to the position of polarizer $P_{B}$ where the average optical path difference is zero, whilst the semi-distance between the two Gaussian maxima corresponds to the average amplitude \emph{A} of the fluctuations of the path difference. The full lines in figures \ref{fig:6}(a) and \ref{fig:6}(b) are the labview best fits of the experimental results with two Gaussians having the width $w=0.020\thinspace\mathrm{mm}$ that characterizes the \emph{SLED} source. From these best fits we deduce that the mean amplitude of the oscillations of the path difference is smaller than $10\thinspace\mu m$ during the night but it becomes $33\thinspace\mu m$ at the maximum sunlight. \begin{figure} \caption{\label{fig:6} Dependence of $V_{pp}$ on the position $x$ of polarizer $P_{B}$ during a summer night (a) and during a summer day at maximum sunlight (b).} \end{figure} The labview feedback program operates in this way: first of all a large amplitude sweep is made to localize the central point $x_{c}$ between the two Gaussians, then the sweep amplitude is reduced to $200\thinspace\mu m$ around $x_{c}$ and the new $x_{c}$ value is recorded and plotted. This latter procedure with a $200\thinspace\mu m$ sweep is repeated continuously every $15\,\mathrm{s}$ for the entire measurement time (24 hours) and, thus, the difference between the optical paths remains restricted to $\Delta d=\pm100\thinspace\mathrm{\mu m}$ at all times. The time-dependence of $x_{c}$ due to the temperature variations for an entire summer day is shown in figure \ref{fig:7}. \begin{SCfigure}[50] \centering \includegraphics[width=0.5\textwidth]{FIG7.JPG} \hspace{0.05 in} \caption{\protect\rule{0ex}{12ex}Variation of the equalization position $x_{c}$ of the motorized polarizer during an entire summer day. The maximum variation during the entire day is about 0.4 mm.
Note the sharp and high-amplitude variations due to the air turbulence during the maximum sunlight hours (from 10 h to 16 h). } \label{fig:7} \end{SCfigure} Note that the complete interference pattern is somewhat more complex than the small portion shown in figure \ref{fig:6}, since the \emph{LPNIR} Thorlabs polarizers are made of a thin polarizing film ($280\thinspace\mathrm{\mu m}$ thickness) sandwiched between two glass plates. We have accurately measured the thickness of the glass plates and found it to be $904\thinspace\mathrm{\mu m}\pm5\thinspace\mathrm{\mu m}$. Due to the sandwich structure of the polarizers, there are many interfaces giving reflected beams that can interfere. Even in the optimal night conditions we observe five main interference peaks that are separated from each other by about $1.5\thinspace\mathrm{mm}$. After a careful analysis, the central peak has been clearly identified as the one which corresponds to the equalization of the optical paths from the source to the polarizing thin layers. Then, our analysis has been restricted only to the central peak region corresponding to figure \ref{fig:6}. As shown in Section \ref{sec:Introduction}, one of the most important parameters of our experiment is the uncertainty $\Delta d$ on the equalization of the optical paths. Here we summarize the main contributions to this uncertainty (see also~\cite{Cocciaro_DICE2013}): a) the above-described uncertainty due to the motor sweep, $\Delta d_{sweep}=100\thinspace\mathrm{\mu m}$, which corresponds to half of the total sweep excursion; b) the uncertainty $\Delta d_{pol}$ due to the finite thickness ($280\thinspace\mu m$) of the \emph{LPNIR} Thorlabs polarizing layers. Since the extinction ratio of these polarizers at the entangled photon wavelengths is greater than $10^{5}$, we can estimate that 99\% of the photons with orthogonal polarization are absorbed in a layer having a thickness $\approx120\thinspace\mathrm{\mu m}$ and we can assume the corresponding uncertainty value $\Delta d_{pol}=120\thinspace\mathrm{\mu m}$. c) The optical paths are equalized using the reference beams \emph{I}, which do not have the same wavelength as the entangled photons. If the temperature were uniform in the \emph{EGO} gallery and the thicknesses of lenses $L_{A},\thinspace L_{B},\thinspace L'_{A}$ and $L'_{B}$ were the same, the entangled photon paths would also be automatically equalized. This is no longer true if there is an average temperature difference $\Delta T$ between the two arms of the interferometer or if there are differences between the thicknesses of the lenses. Calculations of these effects are somewhat complicated and require knowledge of the temperature dependence of the refractive indices of the air and of the lenses at different wavelengths, and of the thickness differences between lenses $L_{A},\thinspace L_{B},\thinspace L'_{A}$ and $L'_{B}$. The lens thickness differences have been measured to be smaller than 0.1 mm and they do not appreciably affect the uncertainty. The calculations show that an average temperature difference $\Delta T=1\text{\textdegree C}$ between the two arms of the interferometer produces an optical path difference slightly smaller than $10\thinspace\mathrm{\mu m}$. Since the horizontal temperature differences in the gallery are always smaller than 2-3 degrees, we get $\Delta d_{\Delta T}<30\thinspace\mathrm{\mu m}$.
d) In our experiment we detect entangled photons with wavelengths from $790\thinspace\mathrm{nm}$ to $830\thinspace\mathrm{nm}$. Due to the optical dispersion, photons of different wavelengths see different optical paths in air and in the lenses, although this latter contribution is negligible. The difference of the optical paths due to the air optical dispersion is given by $\Delta d_{disp}=\frac{\partial n}{\partial\lambda}\Delta\lambda d$, where \emph{d} is the distance from the source to the polarizers (600 m), $\Delta\lambda=40$ nm is the bandwidth of the bandpass filters and \emph{n} is the air refractive index. Substituting in the expression of $\Delta d_{disp}$ the value $\frac{\partial n}{\partial\lambda}=5.87\times10^{-9}\,\mathrm{nm}^{-1}$ calculated using the Ciddor equation~\cite{CiddorEquation} at room conditions and with humidity = 50\% and $CO_{2}$ = 450 micromol/mol, we get $\Delta d_{disp}=144\,\mathrm{\mu m}$. The resulting uncertainty in the optical path difference is then \begin{equation} \Delta d=\sqrt{\Delta d_{sweep}^{2}+\Delta d_{pol}^{2}+\Delta d_{\Delta T}^{2}+\Delta d_{disp}^{2}}=215\thinspace\mathrm{\mu m}.\label{eq:betamin-1} \end{equation} \section{\label{sec:Critical-points-and} Preliminary experimental results.} In this Section we report some very preliminary experimental results concerning the \emph{EPR} measurements. The main objective of these measurements is to verify that the experimental method provides accurate measurements of the \emph{EPR} correlations with very short coincidence acquisition times. As shown in the Introduction, the presence of superluminal communications can be detected by looking at the time dependence of the two correlation parameters $S_{max}$ and $S_{min}$. Counts $N_{A}$ and $N_{B}$ of photons transmitted by polarizers $P_{A}$ and $P_{B}$ are detected and background counts due to unwanted external light and to dark noise of the detectors are subtracted. Furthermore, the statistical spurious coincidences $N_{sp}=N_{A}N_{B}\delta't$ are subtracted from the measured coincidences, where $\delta't$ denotes the pulse duration $\delta't=25\thinspace\mathrm{ns}$ of the output pulses generated by the photon counting modules. The numbers $N(\alpha_{A},\alpha_{B})$ of measured coincidences that appear in the expressions of the correlation parameters $S_{max}$ and $S_{min}$ are affected by the time-variations of the transmission coefficients in the two arms. The main causes of a variation of the transmission coefficients are: the residual displacements of the transmitted beams that are not completely eliminated by the feedback method and that reduce the collection efficiency of the entangled photons; and the variation of the air relative humidity, which induces a change of the light absorption.
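As a quick numerical check (ours), the four contributions listed in a)--d) combine in quadrature as in Eq. (\ref{eq:betamin-1}):

\begin{verbatim}
# Quadrature combination of the four contributions to Delta d (micrometres).
contributions = {
    "motor sweep":          100.0,   # half of the 200 um sweep excursion
    "polarizer thickness":  120.0,   # effective absorbing layer of the LPNIR films
    "arm temperature":       30.0,   # residual 2-3 degC imbalance between the arms
    "air dispersion":       144.0,   # 40 nm bandwidth over the 600 m path
}
delta_d = sum(v**2 for v in contributions.values()) ** 0.5
print(f"Delta d = {delta_d:.0f} um")   # -> Delta d = 215 um
\end{verbatim}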
In fact, counts $N_{A}(\alpha_{A})$ and $N_{B}(\alpha_{B})$ and coincidences $N(\alpha_{A},\alpha_{B})$ are related to the transmission coefficients according to relations (\ref{eq:Smax-1-1}): \begin{equation} \begin{array}{l} N_{A}(\alpha_{A})=N\tau_{A}(\alpha_{A})\epsilon_{A}(\alpha_{A})p_{A}(\alpha_{A})\\ N_{B}(\alpha_{B})=N\tau_{B}(\alpha_{B})\epsilon_{B}(\alpha_{B})p_{B}(\alpha_{B})\\ N(\alpha_{A},\alpha_{B})=N\tau_{A}(\alpha_{A})\epsilon_{A}(\alpha_{A})\tau_{B}(\alpha_{B})\epsilon_{B}(\alpha_{B})p(\alpha_{A},\alpha_{B}) \end{array}\label{eq:Smax-1-1} \end{equation} where \emph{N} is the number of generated entangled photons, $\tau_{A}(\alpha_{A})$ and $\tau_{B}(\alpha_{B})$ are the transmission coefficients, $\epsilon_{A}(\alpha_{A})$ and $\epsilon_{B}(\alpha_{B})$ are the efficiencies of the photon counting detectors, $p_{A}(\alpha_{A})$ and $p_{B}(\alpha_{B})$ are the probabilities that photons \emph{a} and \emph{b} pass through polarizers $P_{A}$ and $P_{B}$ (they are $p_{A}(\alpha_{A})=p_{B}(\alpha_{B})=1/2$ for the entangled state) and $p(\alpha_{A},\alpha_{B})$ is the joint probability. We see that by dividing the coincidence counts $N(\alpha_{A},\alpha_{B})$ by the product $N_{A}(\alpha_{A})N_{B}(\alpha_{B})$ and multiplying by the product of the average values $\bar{N}_{A}(\alpha_{A})$ and $\bar{N}_{B}(\alpha_{B})$ one obtains a coincidence number that is no longer affected by changes of the transmission coefficients and of the photodetector efficiencies. Here below we will indicate by $N(\alpha_{A},\alpha_{B})$ the coincidences corrected according to the procedure outlined above. The correlation parameters $S_{max}$ and $S_{min}$ are obtained by repeating measurements of the coincidences $N(\alpha_{A},\alpha_{B})$ with the proper values of the angles $\alpha_{A}$ and $\alpha_{B}$ that appear in equations (\ref{eq:Smax}) and (\ref{eq:Smin}). According to equations (\ref{eq:Smax}), (\ref{eq:Smin}) and (\ref{eq:NTOT}), 12 different couples of values $\alpha_{A}$ and $\alpha_{B}$ have to be selected by rotating the motorized polarizers $P_{A}$ and $P_{B}$. The rotations of polarizers $P_{A}$ and $P_{B}$ are controlled by a \emph{PC} through a labview program that operates in this way: a couple of angles $\alpha_{A}$ and $\alpha_{B}$ is set (for instance the first angles 45\textdegree{} and 67.5\textdegree{} of the first contribution in equation (\ref{eq:Smax})), then the polarizer axes are rotated until they reach the set angles. The corresponding numbers of coincidences $N(\alpha_{A},\alpha_{B})$ in the acquisition time $\delta t$ are measured. Then, the angles are changed according to equations (\ref{eq:Smax}), (\ref{eq:Smin}) and (\ref{eq:NTOT}) and the corresponding values of the coincidences are measured. When all the 12 values of coincidences that are needed to calculate parameters $S_{max}$ and $S_{min}$ have been measured, the labview program calculates these correlation parameters. Unfortunately, the average time that is needed to rotate the polarizers is of the order of 8 seconds and, thus, a single measurement of $S_{max}$ and $S_{min}$ requires a time $\Delta t\approx100\thinspace\mathrm{s}$ that is much longer than the coincidence acquisition time $\delta t$. Then, the maximum superluminal velocity $\beta_{t,min}$ that can be detected in the present experiment is not limited by the coincidence acquisition time $\delta t$ but by the much larger effective acquisition time $\Delta t=100\thinspace\mathrm{s}$.
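The drift correction described above amounts to the following simple rescaling (a sketch of ours; the variable names are illustrative and do not correspond to the actual acquisition software):

\begin{verbatim}
# Sketch of the drift correction: since N(a_A, a_B) and the single counts
# N_A, N_B share the same transmission and efficiency factors, dividing by
# N_A * N_B and rescaling by their averages removes slow drifts of those factors.
def corrected_coincidences(N_raw, N_A, N_B, N_A_mean, N_B_mean):
    return N_raw / (N_A * N_B) * (N_A_mean * N_B_mean)

# Example: a 10% transmission drop in arm A lowers both N_raw and N_A by 10%,
# so the corrected coincidence number is unchanged.
print(corrected_coincidences(900.0, 0.9e5, 1.0e5, 1.0e5, 1.0e5))   # -> 1000.0
print(corrected_coincidences(1000.0, 1.0e5, 1.0e5, 1.0e5, 1.0e5))  # -> 1000.0
\end{verbatim}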
\begin{figure} \caption{\label{fig:8} Correlation parameters $S_{max}$ (a) and $S_{min}$ (b) versus time during a sidereal day.} \end{figure} Figures \ref{fig:8}a) and \ref{fig:8}b) show the values of the correlation parameters $S_{max}$ and $S_{min}$ versus time during a sidereal day when the coincidence acquisition time was $\delta t=1\thinspace\mathrm{s}$ but the effective acquisition time was $\Delta t=100\thinspace\mathrm{s}$. The full horizontal lines in figures \ref{fig:8}a) and \ref{fig:8}b) correspond to the prediction of \emph{QM} and to the maximum (figure \ref{fig:8}a)) and minimum (figure \ref{fig:8}b)) values allowed by local theories. According to the Introduction, parameter $S_{max}$ would become lower than 0 and $S_{min}$ would become greater than $-1$ at two times each sidereal day if the superluminal signals had velocities lower than $\beta_{t,min}$. This behaviour is not observed in figures \ref{fig:8}a) and \ref{fig:8}b) and, thus, we can conclude that, if superluminal signals are responsible for the \emph{QM} correlations, then the superluminal velocities are greater than the maximum measurable values ($\beta_{t}>\beta_{t,min}$). The results in figures \ref{fig:8}a) and \ref{fig:8}b) were obtained using the coincidence acquisition time $\delta t=1\thinspace\mathrm{s}$ but we have verified that sufficiently accurate results are also obtained using the much shorter acquisition time $\delta t=0.1\thinspace\mathrm{s}$, where the relative statistical noise increases by a factor $\sqrt{10}$. Notice that parameter $S_{min}$ exhibits much greater fluctuations than $S_{max}$ and, thus, this latter parameter provides a much more accurate test of the \emph{EPR} correlations. This behaviour is probably due to the fact that the absolute value of $S_{max}$ is about six times lower than that of $S_{min}$. For this reason the planned final measurements will be made using the $S_{max}$ parameter alone, which is affected by a much smaller noise. Furthermore, it is important to remark that the measurements shown in figures \ref{fig:8}a) and \ref{fig:8}b) were obtained on a July day (30 July 2016) with very strong sunlight. The residual noise effects due to sunlight are evident looking at the experimental points between times \emph{t} = 9 h and \emph{t} = 18 h in the figures. All these effects are absent in conditions of a fully overcast sky. \begin{figure} \caption{\label{fig:9} Lower bounds $\beta_{t,min}$ obtained in previous experiments and in the present preliminary measurements (a), and planned for the final experiment (b).} \end{figure} Substituting the effective acquisition time $\Delta t=100\thinspace\mathrm{s}$ in place of $\delta t$ in equation (\ref{eq:betamin}) with the uncertainty $\Delta d=215\thinspace\mathrm{\mu m}$, we obtain the lower bound $\beta_{t,min}$ that corresponds to our preliminary results. In figure \ref{fig:9}a) we show the lower bounds already found in some previous experiments~\cite{Salart_nature_2008,Cocciaro_PLA_2011,Cinesi_PhysRevLett2013} together with that obtained here. The filled region represents the new region of superluminal velocities investigated here. In figure \ref{fig:9}b) we also show the planned values of $\beta_{t,min}$ that should be obtained in our final experiment with a $0.1\thinspace\mathrm{s}$ effective acquisition time. The filled region in figure \ref{fig:9}b) corresponds to the new region of superluminal velocities that will become accessible in the final experiment. The experimental method that will be used to bypass the problems related to the polarizer movement will be briefly outlined in the Conclusions below.
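The factor $\sqrt{10}$ quoted above for the increase of the relative statistical noise when passing from $\delta t=1\thinspace\mathrm{s}$ to $\delta t=0.1\thinspace\mathrm{s}$ follows from Poisson counting statistics; a one-line estimate (ours, assuming purely Poissonian fluctuations and the quoted rate of $15000\thinspace\mathrm{coinc/s}$):

\begin{verbatim}
# Poisson estimate of the relative statistical noise on the coincidence counts.
rate = 15000.0                        # coincidences per second
for dt in (1.0, 0.1):                 # acquisition times (seconds)
    counts = rate * dt
    print(f"dt = {dt} s: relative noise ~ {counts**-0.5:.4f}")
# The 0.1 s value is sqrt(10) times larger than the 1 s value.
\end{verbatim}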
\section{\label{sec:Conclusions}Conclusions} In the present paper we have developed an accurate and stable method to equalize the optical paths of the entangled photons over a kilometric distance. Due to vertical gradients of the air refractive index in the \emph{EGO} gallery induced by sunlight it has been needed to greatly modify the experimental apparatus proposed in~\cite{Cocciaro_DICE2013} introducing a complex feedback procedure to correct the deviations of the beams and the astigmatism of the images. In such a way we were able to obtain two virtually stable 1:1 images of the source of entangled photons at the centre of two polarizers lying at a distance of $\approx600\thinspace\mathrm{m}$ from the source. This ensures that virtually all the entangled photons transmitted by the polarizers are collected by the photon counting detectors also in the unfavourable conditions of maximum sunlight. The interference method used to equalize the optical paths exploits two reference beams reflected by the polarizers. The reference beams follow the average paths of the entangled photons. The method to produce the reference beam has been greatly improved with respect to the original project~\cite{Cocciaro_DICE2013} thanks to the use of a suitable optical grating. The new method automatically ensures that the reference beams are superimposed to the entangled ones and that they have the same phase at the \emph{BBO} plates without using the complex equalization procedure outlined in our original paper~\cite{Cocciaro_DICE2013}. Finally, a suitable design of the optical components and the use of the compensation procedure developed by the Kwiat group~\cite{Kwiat_OptExpr_2005,Kwiat_OptExpr_2007,Kwiat_OptExpr_2009} provides a great number of measured coincidences and makes possible to use a very short acquisition time $\delta t=0.1\thinspace\mathrm{s}$. Using this experimental apparatus we have continuously measured the correlation parameters $S_{max}$ and $S_{min}$ for an entire sidereal day to obtain some very preliminary results. Our experimental results are greatly affected by the long time that is needed to rotate the polarizers that leads to an effective acquisition time $\Delta t=100\thinspace\mathrm{s}$ much greater than the minimum acquisition time of coincidences $\delta t=0.1\thinspace\mathrm{s}$. For this reason, the new explored region of the velocities of the superluminal signals investigated here (see figure \ref{fig:9} a)) is much smaller than the planned one (see figure \ref{fig:9} b)). Due to the strong vertical temperature gradients it has not been possible to perform the experiment along the true East-West direction that was proposed in our previous paper. In the present experiment the measures have been performed in the so called ``East-West'' gallery of \emph{EGO} that makes the angle $\gamma=18\text{\textdegree}$ with the actual East-West axis. Then, only a 95\% portion of the celestial sphere is accessible with our experimental apparatus. In order to become insensitive to the long time needed to rotate the polarizers and to reach an effective acquisition time $\delta t=0.1\thinspace\mathrm{s}$, it is needed to fully change the acquisition method. With this improved method we should obtain the planned results in figure \ref{fig:9} b). Here we describe only the main idea of the new method. 
Using an NTP+PTP GPS Network Time Server (TM2000A) and knowing the values of the difference \emph{UTC}-\emph{UT1} provided on the Web by \emph{IERS}~\cite{IERS}, we will be able to synchronize the measurements of the coincidences with the Earth rotation angle with respect to the fixed stars to within a few milliseconds. Such an accurate synchronization of the measurements cannot be obtained using a \emph{PC} but requires a real-time acquisition system. This will be obtained using a \emph{Compact DAQ} (National Instruments 9132) in place of the \emph{PC} to acquire the coincidences and a Real-Time LabVIEW program to control every aspect of the acquisition. In this way, it will be possible to acquire the coincidences on successive days at the same Earth rotation angles. The measurement method will follow the steps below: i) the real-time LabVIEW program rotates the polarizers to the angles $\alpha_{A}$ and $\alpha_{B}$ that correspond to the first contribution $N(45\text{\textdegree},\thinspace67.5\text{\textdegree})$ in the expression of $S_{max}$ in equation (\ref{eq:Smax}); ii) when the Earth rotation angle reaches a well defined value, the acquisition of coincidences starts and $2^{20}$ coincidence values are acquired over a full Earth rotation day. On the following day, the real-time LabVIEW program sets the polarizer angles to the values corresponding to the second term in the expression of $S_{max}$ and the acquisition of coincidences starts in sidereal synchronism with the measurements of the previous day. The same procedure will be repeated until all the contributions present in the expression of $S_{max}$ have been acquired. With this procedure, the effective acquisition time coincides with the coincidence acquisition time $\delta t$ and the planned region of superluminal velocities (filled region in figure \ref{fig:9} b)) should become accessible. If the revolution of the Earth around the Sun and the precession and nutation of the Earth's axis were absent, the losses of quantum correlations would occur at exactly the same Earth rotation angles each sidereal day and, thus, one could use the coincidences $N(\alpha_{A},\alpha_{B})$ measured on successive days at the same Earth rotation angles to calculate the time dependence of the correlation parameter $S_{max}$ using equations (\ref{eq:Smax}) and (\ref{eq:NTOT}). Unfortunately, the analysis of the experimental data will be much more complex due to the revolution of the Earth around the Sun and to the precession and nutation motions. In fact, the losses of quantum correlations should occur when the relative velocity $\vec{V}$ of the preferred frame with respect to the Earth frame becomes orthogonal to the \emph{A}-\emph{B} axis. Because of the revolution of the Earth around the Sun and of the precession and nutation motions, the angle between the relative velocity $\vec{V}$ of the preferred frame and the \emph{A}-\emph{B} axis is not exactly periodic with the period of the Earth's rotation and, thus, the orthogonality condition will not be satisfied at exactly the same Earth rotation angles on different days. The analysis of the experimental results will therefore require more complex procedures that will not be discussed here. \end{document}
\begin{document} \title{Transfer of linear momentum from the quantum vacuum to a magnetochiral molecule} \author{M. Donaire$^{1}$, B.A. van Tiggelen$^{2}$ and G.L.J.A. Rikken$^{3}$} \address{$^{1}$Laboratoire Kastler-Brossel, CNRS, ENS and UPMC, Case 74, F-75252 Paris, France} \address{$^{2}$Universit\'{e} Grenoble 1/CNRS, LPMMC UMR 5493, B.P.166, 38042 Grenoble, France} \address{$^{3}$LNCMI, UPR 3228 CNRS/INSA/UJF Grenoble 1/UPS, Toulouse \& Grenoble, France} \ead{[email protected]} \begin{abstract} In a recent publication \cite{PRLDonaire} we have shown, using a QED approach, that in the presence of a magnetic field the quantum vacuum coupled to a chiral molecule provides a kinetic momentum directed along the magnetic field. Here we explain the physical mechanisms which operate in the transfer of momentum from the vacuum to the molecule. We show that the variation of the molecular kinetic energy originates from the magnetic energy associated with the vacuum correction to the magnetization of the molecule. We carry out a semiclassical calculation of the vacuum momentum and compare the result with the QED calculation. \end{abstract} \pacs{42.50.Ct, 32.10.Fn, 32.60.+i} \submitto{Journal of Physics: Condensed Matter} \maketitle \section{Introduction} It is well known that the quantum fluctuations of the electromagnetic (EM) field coupled to electric charges generate an observable interaction energy \cite{Casimir,Lamoreaux,Milonnibook}. The fluctuations which mediate the self-interaction of electrons bound to atomic nuclei give rise to the Lamb shift of atomic levels; the fluctuations which mediate the interaction between nearby molecules generate van der Waals energies; and finally the fluctuations between macroscopic dielectrics generate the Casimir energy. Direct observation of these energies is possible by spectroscopy, atomic interferometry or nanomechanical means \cite{Alex,Gorza,Bressi,Capasso}.\\ \indent Less well known is the fact that other observable quantities, functions of the EM field, can be influenced by quantum fluctuations under certain symmetry conditions. That is, when the time-space symmetries of the medium to which the fluctuations couple are compatible with the symmetries of some observable operator, the expectation value of that operator in the vacuum state of the combined medium--EM-field system may take a non-zero value. This is the case for the linear momentum of the EM field when quantum fluctuations couple to a medium in which both parity (P) and time-reversal (T) symmetries are broken. In particular, the constitutive equations of a magneto-electric medium contain a non-reciprocal (\emph{nr}) EM susceptibility \cite{Barron84}, $\chi_{EM}^{nr}$, which results from broken P and T. Generally $\chi_{EM}^{nr}$ is an antisymmetric T,P-odd tensor which generates an electric polarization in response to a magnetic field, $\Delta\mathbf{P}=\chi^{nr}_{EM}\cdot\mathbf{B}$, and conversely, a magnetization as a response to an electric field, $\Delta\mathbf{M}=-\chi^{nr}_{EM}\cdot\mathbf{E}$. As a result of the matter-field coupling and momentum conservation, linear momentum can be transferred from the EM vacuum to matter during the process that controls the breakdown of the symmetries. A nonzero tensor $\chi_{EM}^{nr}$ is found in any medium in crossed external electric and magnetic fields, in a moving dielectric medium and in a chiral medium exposed to a magnetic field \cite{vTg2008}. 
In this article we concentrate on the latter case.\\ \indent It is a generic phenomenon in field theory that the breakdown of a symmetry is accompanied by a non-zero vacuum expectation value (VEV) of some physical observable associated with the symmetry. In our case the P and T symmetries happen to be broken explicitly by the presence of a chiral molecule and the action of an external magnetic field, $\mathbf{B}_{0}$. Correspondingly, a non-zero VEV of the EM momentum shows up along the direction in which the symmetries are broken, $\mathbf{B}_{0}$. The question arises whether it could be possible to take advantage of this phenomenon for practical purposes. To this end we will show that, due to the conservation of total linear momentum, there exists necessarily a transfer of kinetic momentum to the chiral molecule of equal magnitude and opposite sign to the VEV of the EM momentum.\\ \indent Also, in the context of high energy physics, it is known that the Electro-Weak interaction violates the P and T symmetries \cite{Wu,Alavi}, and CP (i.e., T) is also expected to be naturally broken in QCD. In the latter case, the existence of a light pseudo-scalar particle, the axion, has been postulated as a solution of the so-called \emph{strong CP problem} \cite{Perccei-Quinn}, and indirect observations of the axion through its coupling to the EM field have been suggested \cite{Zavattini} and put into practice \cite{PVLAS}. In the PVLAS experiment \cite{PVLAS} signatures of the axion-photon interaction are investigated through the anomalous rotation of the polarization direction of optical light in the presence of an intense magnetic field, similar to the vacuum birefringence effect expected from the polarization of the QED vacuum \cite{Klein,Rizzo}. Nonetheless, the rotation of the polarization axis of light may be caused by the breakdown of the P or T symmetries separately, while the effect described in the present article requires the simultaneous breakdown of both symmetries and is independent of polarization.\\ \indent Already in classical electrodynamics, a kinetic momentum is acquired by matter under external time-dependent electric and magnetic fields perpendicular to each other \cite{Rikken1,Rikken2}. Conservation of total momentum implies that the classical EM momentum and the kinetic momentum must be equal in magnitude and opposite in sign. Several expressions can be found in the literature for this classical EM momentum, all derived from some kind of phenomenological assumption. This controversy is known as the Abraham-Minkowski problem. According to Abraham's prescription, the density of EM momentum is $\mathbf{G}_{A}=c^{-2}\mathbf{E}\wedge\mathbf{H}$, while Minkowski's formula reads $\mathbf{G}_{M}=\mathbf{D}\wedge\mathbf{B}$. 
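To set the scale of the ambiguity, a minimal textbook-style comparison may be useful (an illustration of ours, assuming a monochromatic plane wave in a lossless, non-magnetic dielectric of refractive index $n$; this assumption is not made elsewhere in this article): using $B=nE/c$, $H=B/\mu_{0}$ and $D=\epsilon_{0}n^{2}E$ one finds
\begin{equation*}
|\mathbf{G}_{A}|=\frac{|\mathbf{E}\wedge\mathbf{H}|}{c^{2}}=\frac{n\epsilon_{0}E^{2}}{c},\qquad |\mathbf{G}_{M}|=|\mathbf{D}\wedge\mathbf{B}|=\frac{n^{3}\epsilon_{0}E^{2}}{c},
\end{equation*}
so the two prescriptions already differ by a factor $n^{2}$ in this simplest case. 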
More recently, the microscopic approach of Nelson \cite{Nelson}, which appeals to the Lorentz force on bound and free charges and currents for the coupling between matter and radiation, predicts a momentum density of an EM wave in a material medium, $\mathbf{G}_{N}=\epsilon_{0}\mathbf{E}\wedge\mathbf{B}$.\\ \indent Starting with a classical Lagrangian that reproduces Maxwell's equations and using the constitutive equations of a homogeneous and isotropic medium of density $\rho$ moving at velocity $\mathbf{v}$, Feigel \cite{Feigel} found a total conserved momentum density $\mathbf{G}_{tot}=\rho\mathbf{v}+\mathbf{G}_{A}$, from which the kinetic momentum density of matter is found, $\rho\mathbf{v}=\epsilon_{0}\frac{\epsilon_{r}\mu_{r}-1}{\mu_{r}}\mathbf{E}\wedge\mathbf{B}$ \cite{Rikken1}. However, following an approach close to Nelson's, van Tiggelen has arrived at another conservation law \cite{vTg2008}. On the one hand, the combination of Maxwell's equations and the constitutive equations for a moving medium of mass density $\rho_{m}$ yields $\partial_{t}(\mathbf{D}\wedge\mathbf{B})=\nabla\cdot\mathbb{T}-\mathbf{F}_{L}$, with $\mathbb{T}_{ij}=(B_{i}H_{j}+D_{i}E_{j})-(\epsilon E^{2}+\mu^{-1}B^{2}+\rho_{m}v^{2})\delta_{ij}/2$ and $\mathbf{F}_{L}=\rho_{q}\mathbf{E}+\mathbf{J}\wedge\mathbf{B}$ the Lorentz force density, $\rho_{q}$ being the density of charge. On the other hand, Newton's law reads $\partial_{t}(\rho_{m}\mathbf{v})+(\mathbf{v}\cdot\nabla\rho_{m})\mathbf{v}=\mathbf{F}_{L}$. Lastly, combining both equations and integrating over space one obtains the total conserved momentum $m\mathbf{v}+\int\textrm{d}^{3}r\,\mathbf{G}_{N}$. The equation of motion which derives from this approach is \cite{Rikken1,Nelson,Kawka2} $\rho_{m}\mathbf{v}=\epsilon_{0}(\epsilon_{r}-1)\mathbf{E}\wedge\mathbf{B}$.\\ \indent In addition to the classical EM momentum coming from external crossed fields, EM quantum fluctuations can generate extra terms when the constitutive equations contain non-reciprocal susceptibilities, $\chi_{EM}^{nr}$ \cite{vTg2008,Feigel,Croze}. The basic argument is that a non-zero $\chi_{EM}^{nr}$ generates what is called spectral non-reciprocity. That is, there exists a difference in the frequency of normal modes propagating in opposite directions along the axis where the P symmetry is broken, $\tilde{\omega}(\mathbf{k})\neq \tilde{\omega}(-\mathbf{k})$. This implies that the linear momenta carried by these counter-propagating modes do not cancel out. The total momentum of the normal modes is the momentum of the vacuum fluctuations and is therefore referred to as the Casimir momentum.\\ \indent As pointed out in Refs.\cite{vTg2008,KrsicvTgRikken}, Feigel's approach suffers from a number of problems. In the first place, since $\mathbf{E}\wedge\mathbf{H}$ is proportional to the energy current, one expects that in thermodynamic equilibrium $\langle\mathbf{G}_{A}\rangle=\mathbf{0}$, and so no variation of the kinetic momentum can be obtained from variations of $\langle\mathbf{G}_{A}\rangle$. Second, the final result obtained by Feigel for the Casimir momentum density, $\sim\hbar\int\chi_{EM}^{nr}k^{3}\textrm{d}k$, seems to lack a reference frame, since this expression is not Lorentz or even Galilean invariant \cite{KrsicvTgRikken}.\\ \indent In addition, the derivations of the above expressions for the conserved linear momentum are purely phenomenological and lack an explicit quantum interaction Hamiltonian between matter and radiation. 
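Although these derivations are phenomenological, the scale they set is worth keeping in mind. As a rough numerical aside of ours, a minimal sketch evaluating the Nelson-type result $\rho_{m}\mathbf{v}=\epsilon_{0}(\epsilon_{r}-1)\mathbf{E}\wedge\mathbf{B}$ for crossed static fields; every number below is an assumed, illustrative value, not a parameter taken from the cited experiments:
\begin{verbatim}
# Hedged order-of-magnitude sketch; all numbers are assumed,
# illustrative values, not data from the references.
eps0  = 8.854e-12   # F/m
E     = 1.0e5       # V/m, assumed static electric field
B     = 1.0         # T,   assumed static magnetic field
eps_r = 2.0         # assumed relative permittivity
rho_m = 1.0e3       # kg/m^3, assumed mass density

g_matter = eps0 * (eps_r - 1.0) * E * B   # momentum density, kg m^-2 s^-1
v = g_matter / rho_m                      # resulting drift velocity, m/s
print(f"momentum density ~ {g_matter:.1e} kg/(m^2 s); velocity ~ {v:.1e} m/s")
\end{verbatim}
With these assumed values the velocity is of the order of a nanometre per second, which illustrates why the classical effect, let alone its quantum counterpart, is so hard to detect.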
Also, a macroscopic approach can easily lead to an erroneous prediction. For instance, it is shown in Ref.\cite{vTg2008} that for a chiral distribution of electrically polarizable molecules in an external magnetic field, $\langle\mathbf{E}(\mathbf{r})\wedge\mathbf{B}^{*}(\mathbf{r})\rangle=\mathbf{0}$ should hold locally \footnote{The reason for this result is basically that $\mathbf{B}(\mathbf{r})=\mu_{0}\mathbf{H}(\mathbf{r})$ for a non-magnetic medium, and the energy flow $\langle\mathbf{E}(\mathbf{r})\wedge\mathbf{H}^{*}(\mathbf{r})\rangle$ vanishes.}, although the medium contains a nonzero nonreciprocal effective (\emph{eff}) response which would make $\langle\mathbf{E}_{eff}\wedge\mathbf{B}_{eff}^{*}\rangle\neq\mathbf{0}$. Another problem that should be resolved in a quantum microscopic treatment is the UV divergence of the expression for the quantum EM momentum density obtained in a homogeneous medium \cite{Feigel}. This line of investigation was started by Kawka, van Tiggelen and Rikken in Refs.\cite{Kawka2,Kawka1}, where they computed the Casimir momentum of an atom in crossed external electric and magnetic fields. The divergences were found to disappear upon mass renormalization, and the leading quantum contribution was found to be a factor $\alpha^{2}$ smaller than the classical EM momentum.\\ \indent In Ref.\cite{PRLDonaire} we reported the quantum computation of the Casimir momentum for a chiral molecule in a uniform external magnetic field. In our model chirality breaks the parity symmetry and the magnetic field breaks time reversal, so that a non-vanishing momentum of the electromagnetic field is allowed by symmetry. In this case no classical contribution to the Abraham force exists, since there is no external electric field. In this article we perform a semiclassical computation of the Casimir momentum for the same model and compare it with the quantum result. We explain the physical mechanisms which mediate the transfer of linear momentum from the EM vacuum to the chiral molecule and we analyse the transfer of energy.\\ \indent The article is organized as follows. In Section \ref{sec2} we present the model. Next, we perform a semiclassical computation of the Casimir momentum in Section \ref{sec3}. In Section \ref{Qapproach} we review the quantum computation of Ref.\cite{PRLDonaire}, paying special attention to the physical mechanisms involved in the transfer of momentum. In Section \ref{sec5} we explain the exchange of energy between the molecule, the quantum vacuum and the magnetic field. In Section \ref{sec6} we summarize our conclusions. \section{The model}\label{sec2} We propose the simplest model for a chiral molecule that exhibits all the necessary features to leading order in perturbation theory: broken mirror symmetry, Zeeman splitting of the energy levels and coupling to the quantum vacuum; relativistic effects are neglected. In our model the optical activity of the molecule is determined by a single chromophoric electron within a chiral object, which is further simplified to a two-particle system in which the chromophoric electron of charge $q_{e}=-e$ and mass $m_{e}$ is bound to a nucleus of effective charge $q_{N}=e$ and mass $m_{N}\gg m_{e}$. The binding interaction is modeled by a harmonic oscillator potential, $V^{HO}=\frac{\mu}{2}(\omega_{x}^{2}x^{2}+\omega_{y}^{2}y^{2}+\omega_{z}^{2}z^{2})$, to which we add a term $V_{C}=C\:xyz$ that breaks the mirror symmetry perturbatively at first order. 
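As a one-line check of the statement that $V_{C}$ breaks mirror symmetry (a remark of ours, not an addition to the model): under any single mirror reflection, e.g. $x\rightarrow -x$, or under the full parity operation $\mathbf{r}\rightarrow-\mathbf{r}$,
\begin{equation*}
V^{HO}\rightarrow V^{HO},\qquad V_{C}=C\,xyz\rightarrow -C\,xyz=-V_{C},
\end{equation*}
so the harmonic part is parity-even while $V_{C}$ is parity-odd, consistent with the parameter $C$ being a pseudo-scalar. 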
The coordinates $x$, $y$, $z$ are those of the relative position vector, $\mathbf{r}=\mathbf{r}_{N}-\mathbf{r}_{e}$, and $\mu=\frac{m_{N}m_{e}}{M}$ with $M=m_{N}+m_{e}$. The center of mass position vector is $\mathbf{R}=(m_{N}\mathbf{r}_{N}+m_{e}\mathbf{r}_{e})/M$. The conjugate momentum of $\mathbf{r}$ is $\mathbf{p}=\mu(\mathbf{p}_{N}/m_{N}-\mathbf{p}_{e}/m_{e})$, while the conjugate momentum of $\mathbf{R}$ is the total conjugate momentum, $\mathbf{P}=\mathbf{p}_{e}+\mathbf{p}_{N}$. $V_{C}$ was first introduced by Condon \emph{et al}. \cite{Condon1,Condon2} to explain the rotatory power of chiral compounds with a single-oscillator model. Both the anisotropy in $V^{HO}$ and the chiral potential $V_{C}$ are determined by the Coulomb interaction of the two-body system with the rest of the atoms in the molecule. In particular, the parameter $C$ is the sum of all third-order coefficients of the expansion of the Coulomb interaction of the chromophoric group with the surrounding charges around their mean distance [see Eq.(42) of Ref.\cite{Condon2}]. It is a pseudo-scalar which does not necessarily vanish for a chiral environment. In principle, all parameters of this model can be computed \emph{ab initio} from the experimental data \cite{Condon2}.\\ \indent When an external uniform and constant magnetic field $\mathbf{B}_{0}$ is applied, the total Hamiltonian of the system reads $H=H_{0}+H_{EM}+W$, with \begin{align} H_{0}&=\sum_{i=e,N}\frac{1}{2m_{i}}[\mathbf{p}_{i}-q_{i}\mathbf{A}_{0}(\mathbf{r}_{i})]^{2}+V^{HO}+V_{C},\label{H0}\\ H_{EM}&=\sum_{\mathbf{k},\mathbf{\epsilon}}\hbar\omega_{\mathbf{k}}(a^{\dagger}_{\mathbf{k}\mathbf{\epsilon}}a_{\mathbf{k}\mathbf{\epsilon}}+\frac{1}{2}) +\frac{1}{2\mu_{0}}\int\textrm{d}^{3}r\mathbf{B}_{0}^{2},\label{HEM}\\ W&=\sum_{i=e,N}\frac{-q_{i}}{m_{i}}[\mathbf{p}_{i}-q_{i}\mathbf{A}_{0}(\mathbf{r}_{i})]\cdot\mathbf{A}(\mathbf{r}_{i}) +\frac{q^{2}_{i}}{2m_{i}}\mathbf{A}^{2}(\mathbf{r}_{i}),\label{elW} \end{align} where $W$ is the minimal coupling interaction potential. In the vector potential we have separated the contribution of the external classical field, $\mathbf{A}_{0}(\mathbf{r}_{i})=\frac{1}{2}\mathbf{B}_{0}\wedge\mathbf{r}_{i}$, from that of the quantum field operator, $\mathbf{A}(\mathbf{r}_{i})$. Note that, having incorporated the internal electrostatic interaction of the two-body system within $V^{HO}$ and the electrostatic interaction of the system with its surroundings within $V_{C}$, the EM vector potential in $W$ is purely transverse. In the sum, $a^{\dagger}_{\mathbf{k}\mathbf{\epsilon}}$ and $a_{\mathbf{k}\mathbf{\epsilon}}$ are the creation and annihilation operators of photons with momentum $\hbar\mathbf{k}$, frequency $\omega_{\mathbf{k}}=ck$ and polarization vector $\mathbf{\epsilon}$, respectively. The magnetostatic energy is an irrelevant constant that we will discard.\\ \indent In the absence of coupling to the vacuum field, the system with Hamiltonian $H_{0}$ possesses a conserved pseudo-momentum, $\mathbf{K}_{0}=\mathbf{P}_{\textrm{kin}}+e\mathbf{B}_{0}\wedge\mathbf{r}$, which satisfies $[H_{0},\mathbf{K}_{0}]=\mathbf{0}$ and has continuous eigenvalues $\mathbf{Q}$. Here $\mathbf{P}_{\textrm{kin}}$ is the kinetic momentum of the center of mass, $\mathbf{P}_{\mathrm{kin}}=M\dot{\mathbf{R}}$, which relates to $\mathbf{P}$ through $\mathbf{P}_{\textrm{kin}}=\mathbf{P}-\frac{e}{2}\mathbf{B}_{0}\wedge\mathbf{r}-e[\mathbf{A}(\mathbf{r}_{N})-\mathbf{A}(\mathbf{r}_{e})]$. 
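A quick classical illustration of why $\mathbf{K}_{0}$ is conserved may be useful here (a sketch of ours, neglecting the quantum field and treating the charges classically; the binding forces derived from $V^{HO}+V_{C}$ depend only on $\mathbf{r}$ and cancel in the sum): Newton's law for the two charges $q_{N}=e$ and $q_{e}=-e$ in the uniform field $\mathbf{B}_{0}$ gives
\begin{equation*}
\frac{\textrm{d}\mathbf{P}_{\textrm{kin}}}{\textrm{d}t}=e\,\dot{\mathbf{r}}_{N}\wedge\mathbf{B}_{0}-e\,\dot{\mathbf{r}}_{e}\wedge\mathbf{B}_{0}=e\,\dot{\mathbf{r}}\wedge\mathbf{B}_{0}=-\frac{\textrm{d}}{\textrm{d}t}\bigl(e\,\mathbf{B}_{0}\wedge\mathbf{r}\bigr),
\end{equation*}
so that $\textrm{d}(\mathbf{P}_{\textrm{kin}}+e\,\mathbf{B}_{0}\wedge\mathbf{r})/\textrm{d}t=\mathbf{0}$, the classical counterpart of $[H_{0},\mathbf{K}_{0}]=\mathbf{0}$. 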
The unitary operator U$=\exp{[i(\mathbf{Q}-\frac{e}{2}\mathbf{B}_{0}\wedge\mathbf{r})\cdot\mathbf{R}/\hbar]}$ maps the Hamiltonian $H_{0}$ into $\tilde{H}_{0}=$U$^{\dagger}H_{0}$U, which conveniently separates the motion of the center of mass from the relative motion \cite{Herold,Dippel}, \begin{equation} \tilde{H}_{0}=\frac{1}{2M}\mathbf{Q}^{2}+\frac{1}{2\mu}\mathbf{p}^{2}+V^{HO}+V_{C}+V_{Z}+\Delta V.\label{effective} \end{equation} In this equation $V_{Z}=\frac{e}{2\mu^{*}}(\mathbf{r}\wedge\mathbf{p})\cdot\mathbf{B}_{0}$ is the Zeeman potential with $\mu^{*}=\frac{m_{N}m_{e}}{m_{N}-m_{e}}$. Terms of order $\textrm{Q}\textrm{B}_{0}$ and $\textrm{B}_{0}^{2}$ are cast in $\Delta V=(e^{2}/2)(1/M+\mu/\mu^{*2})(\mathbf{r}\wedge\mathbf{B}_{0})^{2}+e\mathbf{Q}\cdot(\mathbf{r}\wedge\mathbf{B}_{0})/M$. In the following, both $V_{C}$ and $V_{Z}$ will be considered first-order perturbations to the harmonic oscillator potential, and higher-order perturbative terms like those in $\Delta V$ will be neglected.\\ \indent The ground state of the Hamiltonian $\tilde{H}_{0}$ is, up to order $C$B$_{0}$ in stationary perturbation theory, \begin{align} |\tilde{\Omega}_{0}\rangle&=|0\rangle-\mathcal{C}|111\rangle -i\mathcal{B}_{0}^{z}\eta^{yx}|110\rangle\nonumber\\ &+i\mathcal{B}_{0}^{z}\mathcal{C}\eta^{yx}\left(|001\rangle+2|221\rangle\right)\nonumber\\ &-\sqrt{2}i\mathcal{B}_{0}^{z}\mathcal{C}\left(\frac{2\omega_{x}-\omega_{z}\eta^{yx}}{\omega_{z}+2\omega_{x}}|201\rangle -\frac{2\omega_{y}+\omega_{z}\eta^{yx}}{\omega_{z}+2\omega_{y}}|021\rangle\right)\nonumber\\ &+\sum\textrm{ cyclic permutations}.\label{vacuum} \end{align} Correspondingly, the ground state of $H_{0}$ is $|\Omega_{0}\rangle=$U$|\tilde{\Omega}_{0}\rangle$, with a pseudo-momentum $\mathbf{Q}_{0}$ to be fixed. The fact that $\langle\Omega_{0}|\mathbf{r}|\Omega_{0}\rangle=\mathbf{0}$ implies that $\mathbf{Q}_{0}$ is the kinetic momentum of the oscillator, i.e., no other contribution exists to the pseudo-momentum $\langle\mathbf{K}_{0}\rangle$ of the bare molecule in its ground state. In the above equation the states $|n_{x} n_{y} n_{z}\rangle$ refer to the eigenstates of the harmonic oscillator Hamiltonian. The dimensionless parameters are $\mathcal{B}_{0}^{i}=\frac{eB_{0}^{i}}{4\mu^{*}\sqrt{\omega_{j}\omega_{k}}}$ with $i\neq j\neq k$ and $i\neq k$, $\mathcal{C}=\frac{C\hbar^{1/2}}{(2\mu)^{3/2}(\omega_{x}+\omega_{y}+\omega_{z})(\omega_{x}\omega_{y}\omega_{z})^{1/2}}$, and $\eta^{ij}=\frac{\omega_{i}-\omega_{j}}{\omega_{i}+\omega_{j}}$. Here, the indices $i,j,k$ take on the three spatial directions, $x,y,z$. The $\eta$ factors are assumed to be small quantities which quantify the anisotropy of the oscillator. They \emph{all} have to be nonzero for the optical activity of the molecule to survive rotational averaging. The dimensionless parameters $\mathcal{C}$, $\mathcal{B}_{0}^{i}$ and $\eta^{ij}$ are the expansion parameters of our perturbative calculations. We will restrict ourselves to the lowest order in all of them. \section{Semiclassical approach}\label{sec3} In Ref.\cite{vTg2008} van Tiggelen has derived an expression for the Casimir momentum using a Green's function formalism which is compatible with Nelson's prescription. That formalism was later applied in Ref.\cite{BabingtonvTg2010} to compute the Casimir momentum of magnetic dipoles arranged in a twisted H configuration.\\ \indent In our case we are interested in the Casimir momentum in the presence of a single molecule. 
As noticed in Ref.\cite{Rikken1}, the Casimir momenta which derive from Abraham's and Nelson's prescriptions coincide approximately for a diluted medium (the same applies to the forces generated by time-varying fields). For the case of a single particle the so-called Abraham momentum reads $\mathbf{d}\wedge\mathbf{B}$, $\mathbf{d}$ being the electric dipole moment of the molecule. Here we apply a linear response formalism to $\mathbf{d}\wedge\mathbf{B}$, following a semiclassical treatment similar to the one used for the Lamb shift of a single dipole or for the van der Waals forces between atoms \cite{MilonniPhysScripta,Craigbook,CraigThiru}. This is equivalent to the Green's function formalism of Refs.\cite{vTg2008,BabingtonvTg2010} considering a single magnetochiral scatterer.\\ \indent We aim to compute the vacuum expectation value $\mathbf{P}^{\textrm{Cas}}=\Re{\{\langle\mathbf{d}\wedge\mathbf{B}\rangle\}}$. We start the calculation by decomposing both the electric dipole operator and the magnetic field into induced (\emph{ind}) and free (\emph{fr}) components. The induced operators relate to the free operators through the linear susceptibilities, i.e., polarizabilities and Green's functions. Any space-time dependent free operator $X_{fr}(\mathbf{R};t)$ can be written in Fourier space as a sum of positive frequency and negative frequency components, $X_{fr}(\mathbf{R};t)=\int_{0}^{\infty}\textrm{d}\omega[X_{fr}(\mathbf{R};\omega)e^{-i\omega t} +X_{fr}^{\dagger}(\mathbf{R};\omega)e^{i\omega t}]$, where $X_{fr}^{\dagger}(\mathbf{R};\omega)$ and $X_{fr}(\mathbf{R};\omega)$ are the $\omega$-mode creation and annihilation operators of $X_{fr}$, respectively. In the following we will be interested in the EM field and dipole moment operators evaluated at the position of the molecule, $\mathbf{R}_{0}$. Therefore, unless necessary, we will omit the dependence of the operators on the position. In the case of the electric field of a plane wave propagating in free space in the direction $\hat{\mathbf{k}}$, we find $\mathbf{E}_{fr}(\mathbf{R};\omega)\propto e^{i\mathbf{k}\cdot\mathbf{R}}$, with $\mathbf{k}=\omega\hat{\mathbf{k}}/c$, and likewise for the magnetic field. Therefore, together with the dependence on $\omega$ we will add the dependence on $\mathbf{k}$ to the electric field annihilation/creation operators of a plane wave, $\mathbf{E}_{fr}(\mathbf{k},\omega)$ --\emph{idem} for the magnetic field.\\ \indent Since the relation between induced and free operators is linear and the action of annihilation operators on the vacuum vanishes, it follows from $\mathbf{P}^{\textrm{Cas}}=\Re{\{\langle\mathbf{d}\wedge\mathbf{B}\rangle\}}$ that \begin{equation} \mathbf{P}^{\textrm{Cas}}=\Re{\{\langle\mathbf{d}_{fr}\wedge\mathbf{B}_{ind}\rangle\}}+\Re{\{\langle\mathbf{d}_{ind}\wedge\mathbf{B}_{fr}\rangle\}}.\label{PCASO} \end{equation} In Ref.\cite{EPJDonaire} we have obtained the constitutive equations of our magneto-chiral model by computing the response of the molecule to a monochromatic EM plane wave of frequency $\omega$ and wave vector $\mathbf{k}$. 
Up to electric quadrupole contributions, the induced electric dipole moment operator reads \begin{align} \textrm{d}^{ind}_{i}(\omega)&=\alpha_{E}\delta_{ij}\textrm{E}_{fr}^{j}(\mathbf{k},\omega) +\chi\epsilon_{ijk}\textrm{B}^{j}_{0}\dot{\textrm{E}}_{fr}^{k}(\mathbf{k},\omega) \nonumber\\&-\beta\delta_{ij}\dot{\textrm{B}}^{j}_{fr}(\mathbf{k},\omega)+\gamma\epsilon_{ijk}\textrm{B}^{j}_{0}\textrm{B}_{fr}^{k}(\mathbf{k},\omega)\nonumber\\&+ \frac{1}{2}\xi[(\mathbf{B}_{0}\cdot\mathbf{k})\textrm{E}^{fr}_{i}(\mathbf{k},\omega)+(\mathbf{B}_{0}\cdot\mathbf{E}_{fr}(\mathbf{k},\omega))\textrm{k}_{i}],\label{dind} \end{align} where a rotational average is implicit in this equation. The factor $\alpha_{E}$ is the ordinary electric polarizability, $\chi$ describes the Faraday effect, $\beta$ is the molecular rotatory factor responsible for the natural optical activity, and $\gamma$ and $\xi$ give rise to the magnetochiral anisotropy. The expressions of all these factors are summarized in \ref{appendB}. It has been shown in Ref.\cite{EPJDonaire} that, once the rotatory power and the refractive index of a compound are given --i.e., the parameters $\alpha_{E}$ and $\beta$-- all other parameters of our model can be deduced.\\ \indent Defining the effective electric polarizability and the crossed magneto-electric polarizability tensors respectively as \begin{align} \alpha_{EE}^{ij}(\mathbf{k},\omega)&=[\alpha_{E}+\frac{1}{2}\xi(\mathbf{B}_{0}\cdot\mathbf{k})]\delta^{ij}+\frac{1}{2}\xi\,\textrm{k}^{i}\textrm{B}_{0}^{j}-i\omega\chi\epsilon_{ikj}\textrm{B}^{k}_{0}, \nonumber\\ \alpha_{EM}^{ij}(\omega)&=i\omega\beta\delta^{ij}+\gamma\epsilon_{ikj}\textrm{B}^{k}_{0}, \end{align} we can write Eq.(\ref{dind}) as \begin{equation} \textrm{d}^{ind}_{i}(\omega)=\alpha_{EE}^{ij}(\mathbf{k},\omega)\textrm{E}^{fr}_{j}(\mathbf{k},\omega)+\alpha_{EM}^{ij}(\omega)\textrm{B}^{fr}_{j}(\mathbf{k},\omega).\label{dresp} \end{equation} Using Faraday's law for the free fields, $\dot{\mathbf{B}}_{fr}=-\mathbf{\nabla}\wedge\mathbf{E}_{fr}$, we can write $\mathbf{d}_{ind}(\omega)$ as a response to the free electric field alone, \begin{align} \mathbf{d}^{ind}(\omega)&=\alpha_{E}\mathbf{E}_{fr}(\mathbf{k},\omega)-i\omega\chi\mathbf{B}_{0}\wedge\mathbf{E}_{fr}(\mathbf{k},\omega) +i\beta\mathbf{k}\wedge\mathbf{E}_{fr}(\mathbf{k},\omega)\nonumber\\&+(\xi/2-\gamma/\omega)(\mathbf{B}_{0}\cdot\mathbf{k})\mathbf{E}_{fr}(\mathbf{k},\omega) \nonumber\\&+(\xi/2+\gamma/\omega)[\mathbf{B}_{0}\cdot\mathbf{E}_{fr}(\mathbf{k},\omega)]\mathbf{k}. \end{align} From here it is obvious that the non-reciprocal response comes from the fourth term on the r.h.s., which depends on the relative direction of the wave vector, $\mathbf{k}$, with respect to the external magnetic field. We will denote this non-reciprocal (\emph{nr}) polarizability by $\alpha_{nr}(\mathbf{k},\omega)=(\xi/2-\gamma/\omega)(\mathbf{B}_{0}\cdot\mathbf{k})$. It is at the origin of the magnetochiral birefringence \cite{EPJDonaire}. 
The induced magnetic field can be written as a linear response to the free dipole located at $\mathbf{R}_{0}$, \begin{align} \mathbf{B}_{ind}(\mathbf{R},\omega)&=\epsilon_{0}^{-1}c^{-1}[\mathbb{G}^{(0)}_{me}(\mathbf{R},\mathbf{R}_{0};\omega)\cdot\mathbf{d}_{fr}(\mathbf{R}_{0},\omega) \nonumber\\&+c^{-1}\mathbb{G}^{(0)}_{mm}(\mathbf{R},\mathbf{R}_{0};\omega)\cdot\mathbf{m}_{fr}(\mathbf{R}_{0},\omega)],\label{bind} \end{align} where $\mathbf{m}_{fr}(\mathbf{R}_{0},\omega)$ is the $\omega$-mode of the free magnetic dipole moment operator and the Green functions $\mathbb{G}^{(0)}_{mm}$ and $\mathbb{G}^{(0)}_{me}$ relate to the Green function of Maxwell's equation for the EM vector potential in free space, \begin{equation}\label{Maxwellb} \Bigl[k^{2}\mathbb{I}-\mathbf{\nabla}\wedge\mathbf{\nabla}\wedge\Bigr]\mathbb{G}^{(0)}(\mathbf{R}-\mathbf{R}';\omega) =\delta^{(3)}(\mathbf{R}-\mathbf{R}')\mathbb{I},\quad k=\omega/c, \end{equation} \begin{align} &\textrm{through }\qquad\quad \mathbb{G}^{(0)}_{mm}(\mathbf{R},\mathbf{R}';\omega)=\mathbf{\nabla}_{\mathbf{R}}\wedge\mathbb{G}^{(0)}(\mathbf{R},\mathbf{R}';\omega)\wedge\mathbf{\nabla}_{\mathbf{R}'}, \nonumber\\&\textrm{and }\qquad\quad \mathbb{G}^{(0)}_{me}(\mathbf{R},\mathbf{R}';\omega)=ik\mathbf{\nabla}_{\mathbf{R}}\wedge\mathbb{G}^{(0)}(\mathbf{R},\mathbf{R}';\omega).\nonumber \end{align} \indent When substituting the induced operators in Eq.(\ref{PCASO}) as functions of the free operators we find the vacuum expectation values of bilinear operators. Those expectation values (dipole and field quadratic fluctuations) relate to the imaginary parts of their respective linear response functions through the fluctuation-dissipation theorem \cite{Butcher}, \begin{align} \langle \mathbf{B}_{fr}(\mathbf{R},\omega)\otimes\mathbf{E}^{\dagger}_{fr}(\mathbf{R}',\omega')\rangle&= \frac{\hbar}{\pi\epsilon_{0}c}\Im{\{\mathbb{G}^{(0)}_{me}(\mathbf{R},\mathbf{R}';\omega)\}}\delta(\omega-\omega'),\nonumber\\ \langle \mathbf{B}_{fr}(\mathbf{R},\omega)\otimes\mathbf{B}_{fr}^{\dagger}(\mathbf{R}',\omega')\rangle&= \frac{\hbar}{\pi\epsilon_{0}c^{2}}\Im{\{\mathbb{G}^{(0)}_{mm}(\mathbf{R},\mathbf{R}';\omega)\}}\delta(\omega-\omega'),\nonumber \end{align} \begin{align} \langle\mathbf{d}_{fr}(\omega)\otimes\mathbf{d}^{\dagger}_{fr}(\omega')\rangle&=\frac{\hbar}{\pi} \Im{\{\alpha_{EE}\}}\delta(\omega-\omega'),\nonumber\\ \langle d^{fr}_{i}(\omega)\otimes m^{fr\dagger}_{j}(\omega')\rangle&= \frac{\hbar}{2\pi i}[\alpha_{EM}-\alpha_{ME}^{\dagger}]_{ij}\delta(\omega-\omega')\nonumber\\&= \frac{\hbar}{\pi}\epsilon_{ikj}\textrm{B}^{k}_{0}\Im{\{\gamma\}}\delta(\omega-\omega')\nonumber\\&+\textrm{reciprocal terms}\label{13}, \end{align} where it is understood that the dipole moment operators act at the location of the single molecule, $\mathbf{R}_{0}$. In the last equation above we use the result obtained in Ref.\cite{EPJDonaire}, $\alpha_{EM}=-\alpha_{ME}$, and we restrict ourselves to the nonreciprocal terms for simplicity. 
Using the linear response relations of Eqs.(\ref{dresp}) and (\ref{bind}) in Eq.(\ref{PCASO}) and applying the expectation values of Eq.(\ref{13}) we end up with the relation, \begin{align} P^{\textrm{Cas}}_{i}&=\frac{\hbar}{\pi\epsilon_{0}c}\varepsilon_{ij}^{\:\:\:k}\Im\int_{0}^{\infty}\textrm{d}\omega\int\frac{\textrm{d}^{3}q}{(2\pi)^{3}} \big[\alpha_{EE}^{jp}(\mathbf{q},\omega)\tilde{G}_{em,pk}^{(0)}(\mathbf{q},\omega)\nonumber\\ &+c^{-1}\alpha_{EM}^{jp}(\omega)\tilde{G}_{mm,pk}^{(0)}(\mathbf{q},\omega)\big],\label{cc} \end{align} where $\tilde{\mathbb{G}}_{em}^{(0)}(\mathbf{q},\omega)$ and $\tilde{\mathbb{G}}_{mm}^{(0)}(\mathbf{q},\omega)$ are the Fourier transforms of $\mathbb{G}_{em}^{(0)}(\mathbf{R},\mathbf{R}^{'};\omega)$ and $\mathbb{G}_{mm}^{(0)}(\mathbf{R},\mathbf{R}^{'};\omega)$ in $\mathbf{q}$-space respectively, \begin{eqnarray} \tilde{G}_{em,pk}^{(0)}(\mathbf{q},\omega)&=&\frac{-k}{k^{2}-q^{2}+i\mu}\varepsilon_{pkr}q^{r},\nonumber\\ \tilde{G}_{mm,pk}^{(0)}(\mathbf{q},\omega)&=&\frac{-q^{2}}{k^{2}-q^{2}+i\mu}(\delta_{pk}-q_{p}q_{k}/q^{2}),\:\mu\rightarrow0^{+}, \end{eqnarray} which are necessary to account for the dependence of $\alpha_{EE}$ on the wave vector. Symmetry considerations imply that only the nonreciprocal terms of $\alpha_{EE}$ and $\alpha_{EM}$ in $\alpha_{nr}$ survive the angular integration. After integrating over $\mathbf{q}$ we arrive at \begin{equation} \mathbf{P}^{\textrm{Cas}}=\frac{\hbar\mathbf{B}_{0}}{6\pi^{2}\epsilon_{0}c^{5}}\Re\int_{0}^{\infty}\textrm{d}\omega\:\omega^{4}(\xi/2-\gamma/\omega).\label{ec1} \end{equation} This expression is nothing but Feigel's formula \cite{Feigel} simplified to the case of a diluted medium in the one-particle limit $\mathcal{V}\rho\rightarrow1$, $\mathcal{V}$ being the total volume and $\rho$ the numerical density of molecules \cite{MilonniPhysScripta}. Hence, following Croze's argument \cite{Croze} we can write the above integral as the sum over the momenta of 'dressed' normal modes, $n\hbar\mathbf{k}$, with $n$ being the index of refraction, \begin{equation} \mathbf{P}^{\textrm{Cas}}=\mathcal{V}\sum_{\mathbf{k},\mathbf{\epsilon}} n\hbar\mathbf{k}=\mathcal{V}\sum_{\mathbf{k},\mathbf{\epsilon}} \hbar\delta n_{MCh}\mathbf{k},\label{ec2} \end{equation} with $\delta n_{MCh}$ the nonreciprocal part of the refractive index due to magnetochiral birefringence. At leading order in $\rho$ it reads \cite{EPJDonaire}, $\delta n_{MCh}=\rho(\xi/2-\gamma/\omega)(\mathbf{B}_{0}\cdot\mathbf{k})/\epsilon_{0}$. In the one-particle limit, $\mathcal{V}\rho\rightarrow1$, and turning the summation into an integral in the continuum limit, Eqs.(\ref{ec1}) and (\ref{ec2}) coincide. From the semiclassical result it follows that $\mathbf{P}^{\textrm{Cas}}$ is the momentum of radiative modes only. This result would reinforce the interpretation that the Casimir momentum originates in the non-reciprocity of the spectrum of normal modes propagating in the effective medium, i.e. $\tilde{\omega}(\mathbf{k})\neq\tilde{\omega}(-\mathbf{k})$, with $\tilde{\omega}(\pm\mathbf{k})$ the 'effective' frequencies of photons propagating in opposite directions with bare momentum vectors $\pm\hbar\mathbf{k}$. 
\indent To conclude, we use the explicit formulas for $\gamma$ and $\xi$ in \ref{appendB} and integrate over frequencies, \begin{equation}\label{PCascalss} \mathbf{P}^{\textrm{Cas}}=-\frac{\hbar^{2}e^{3}C\mathbf{B}_{0}}{1458\pi^{2}c^{5}\epsilon_{0}\omega_{0}\mu^{2}\mu^{*2}}\eta^{zy}\eta^{yx}\eta^{xz}, \end{equation} where $\omega_{0}=(\omega_{x}+\omega_{y}+\omega_{z})/3$. It is remarkable that the frequency integral does not present any UV divergence, unlike the expression for $\mathbf{P}^{\textrm{Cas}}$ in Ref.\cite{Feigel} for a magneto-electric medium in crossed fields. We will see later on that this semiclassical result is included in the microscopic QED result, but it enters as a small correction to the leading term. \section{Quantum approach}\label{Qapproach} The system of the Hamiltonian $H$ possesses a conserved pseudo-momentum \cite{Kawka1,Herold,Dippel}, \begin{equation}\label{Po} \mathbf{K}=\mathbf{P}+\frac{e}{2}\mathbf{B}_{0}\wedge\mathbf{r}+\sum_{\mathbf{k},\mathbf{\epsilon}}\hbar\mathbf{k}(a^{\dagger}_{\mathbf{k}\mathbf{\epsilon}}a_{\mathbf{k}\mathbf{\epsilon}}+\frac{1}{2}), \end{equation} which satisfies $[H,\mathbf{K}]=\mathbf{0}$. Its eigenvalues are therefore good quantum numbers. The terms in $\mathbf{K}$ can be also arranged as, \begin{equation}\label{P} \mathbf{K}=\mathbf{P}_{\textrm{kin}}+\mathbf{P}_{\textrm{Abr}}+\mathbf{P}^{\textrm{Cas}}_{\parallel}+\mathbf{P}^{\textrm{Cas}}_{\perp}, \end{equation} where, beside the kinetic momentum, $\mathbf{P}_{\textrm{Abr}}=e\mathbf{B}_{0}\wedge\mathbf{r}$ is the Abraham momentum and we define the Casimir momentum operator, $\mathbf{P}^{\textrm{Cas}}=\mathbf{P}^{\textrm{Cas}}_{\parallel}+\mathbf{P}^{\textrm{Cas}}_{\perp}$, as the momentum operator of the vacuum field. $\mathbf{P}^{\textrm{Cas}}$ is composed of a longitudinal Casimir momentum, $\mathbf{P}^{\textrm{Cas}}_{\parallel}=e[\mathbf{A}(\mathbf{r}_{N})-\mathbf{A}(\mathbf{r}_{e})]$, and a transverse Casimir momentum, $\mathbf{P}^{\textrm{Cas}}_{\perp}=\sum_{\mathbf{k},\mathbf{\epsilon}}\hbar\mathbf{k}(a^{\dagger}_{\mathbf{k}\mathbf{\epsilon}}a_{\mathbf{k}\mathbf{\epsilon}}+\frac{1}{2})$ \cite{Cohen}. The latter is just the sum of the momenta $\hbar \mathbf{k}$ of radiative photons. The longitudinal momentum is more subtle. It stems from the transverse electromagnetic gauge field coupled to electric charges. Note that in the Coulomb gauge $\mathbf{A}$ is fully transverse. $\mathbf{P}^{\textrm{Cas}}_{\parallel}$ is referred to as "longitudinal" component since it can be written as the integral over the vector product of the longitudinal Coulomb electric field and the magnetic field generated by the charges, $\epsilon_{0}\int\textrm{d}^{3}r\mathbf{E}^{Coul}\wedge\mathbf{B}$ \cite{Cohen}.\\ \indent Under the action of a time-varying magnetic field, $\mathbf{B}_{0}(t)$, the time derivative of the expectation value of $\mathbf{K}$ in the ground state, $\langle\mathbf{K}\rangle$, vanishes, \begin{equation} \frac{\textrm{d}\langle\mathbf{K}\rangle}{\textrm{d}t}=i\hbar^{-1}\langle[H,\mathbf{K}]\rangle+e\frac{\partial\mathbf{B}_{0}(t)}{\partial t}\wedge\langle\mathbf{r}\rangle=\mathbf{0}.\nonumber \end{equation} This follows from $[H,\mathbf{K}]=\mathbf{0}$ and $\langle\mathbf{r}\rangle=\mathbf{0}$ for the chiral but unpolarized ground state \footnote{The oscillator in motion generates a dipole moment $\alpha_{E}(0)\mathbf{Q}_{0}\wedge\mathbf{B}_{0}/M$ \cite{Herold,Dippel}.}. The latter ensures also that the variation of the Abraham momentum vanishes in the ground state. 
From the expectation value of Eq.(\ref{P}) it follows that, for an arbitrary variation of the magnetic field, the variation of the kinetic momentum of the chiral oscillator is equivalent in magnitude and opposite in sign to the variation of the Casimir momentum of the vacuum field, \begin{equation}\label{dPs} \delta\langle\mathbf{P}_{\textrm{kin}}\rangle=-\delta\langle\mathbf{P}^{\textrm{Cas}}\rangle. \end{equation} \indent Let us consider the molecule initially at rest in its ground state at zero magnetic field. During the switching of the magnetic field Eq.(\ref{dPs}) implies that $\langle\mathbf{P}_{\textrm{kin}}\rangle=-\langle\mathbf{P}^{\textrm{Cas}}\rangle$ at any time. This relation makes the Casimir momentum an observable quantity. In particular this equality holds well after the switching has ended and the magnetic field achieves a stationary value $\mathbf{B}_{0}$. In the following we evaluate the Casimir momentum in the asymptotically stationary situation in which the molecule is in its ground state at constant kinetic momentum, $\mathbf{Q}_{0}$, constant magnetic field, $\mathbf{B}_{0}$ and once coupled to the EM vacuum. We denote by $|\Omega\rangle$ this asymptotic state of the molecule, which we will compute applying up to second order perturbation theory to $|\Omega_{0}\rangle$ with the interaction potential $W$. We define $\langle\mathbf{P}^{\textrm{Cas}}\rangle=\langle\Omega|\mathbf{P}^{\textrm{Cas}}|\Omega\rangle$, and calculate separately the transverse momentum, $\langle\mathbf{P}_{\perp}^{\textrm{Cas}}\rangle=\sum_{\mathbf{k},\mathbf{\epsilon}}\hbar\mathbf{k}\langle\Omega|a^{\dagger}_{\mathbf{k}\mathbf{\epsilon}}a_{\mathbf{k}\mathbf{\epsilon}}|\Omega\rangle$, and the longitudinal momentum, $\langle\mathbf{P}_{\parallel}^{\textrm{Cas}}\rangle=e\langle\Omega|\mathbf{A}(\mathbf{r}_{N})-\mathbf{A}(\mathbf{r}_{e})|\Omega\rangle$.\\ \subsection{Transverse Casimir momentum} For the computation of $\langle\mathbf{P}_{\perp}^{\textrm{Cas}}\rangle$ at $\mathcal{O}(\mathcal{C}\mathcal{B}_{0})$ and lowest order in the fine structure constant, $\alpha=e^{2}/4\pi\epsilon_{0}\hbar c$, we need to compute $|\Omega\rangle$ applying second order perturbation theory to $|\Omega_{0}\rangle$. Note that this implies applying up to fourth-order perturbation theory in $V_{Z}+V_{C}+W$ to the ground state of the harmonic oscillator Hamiltonian, $|0\rangle$. 
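At this point a short numerical aside may help to justify the perturbative hierarchy; the field strength and oscillator energy used below are illustrative assumptions of ours, not values taken from Ref.\cite{PRLDonaire}. A minimal sketch evaluating the dimensionless Zeeman parameter $\mathcal{B}_{0}=eB_{0}/(4\mu^{*}\sqrt{\omega_{j}\omega_{k}})$ defined in Section \ref{sec2}:
\begin{verbatim}
# Hedged estimate of the dimensionless Zeeman parameter of Section 2.
# All numbers are illustrative assumptions; mu* ~ m_e because m_N >> m_e.
e_charge = 1.602e-19        # C
m_e      = 9.109e-31        # kg, used in place of mu*
hbar     = 1.055e-34        # J s
B0       = 10.0             # T, assumed field strength
hw       = 5.0 * 1.602e-19  # J, assumed oscillator quantum (~5 eV)
omega    = hw / hbar        # rad/s, taking w_j ~ w_k ~ omega

zeeman_param = e_charge * B0 / (4.0 * m_e * omega)
print(f"dimensionless Zeeman parameter ~ {zeeman_param:.1e}")  # ~ 6e-5
\end{verbatim}
Values of this size, together with the assumed smallness of $\mathcal{C}$ and of the anisotropy factors $\eta^{ij}$, are what make the expansion to lowest order in $\mathcal{C}\mathcal{B}_{0}$ meaningful.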
We use the U-transformed states and the U-transformed potential, with U$=\exp{[-i\frac{e}{2\hbar}(\mathbf{B}_{0}\wedge\mathbf{r})\cdot\mathbf{R}]}$, \begin{eqnarray}\label{tW} \tilde{W}&=&-\frac{e}{m_{N}}\left(\mathbf{p}+\frac{m_{N}}{M} \mathbf{P}-\frac{e}{2}\mathbf{B}_{0}\wedge\mathbf{r}\right)\cdot \mathbf{A}(\mathbf{R}+\frac{m_{e}}{M}\mathbf{r})\nonumber\\ &-&\frac{e}{m_{e}}\left(\mathbf{p}-\frac{m_{e}}{M} \mathbf{P}+\frac{e}{2}\mathbf{B}_{0}\wedge\mathbf{r}\right)\cdot \mathbf{A}(\mathbf{R}-\frac{m_{N}}{M}\mathbf{r})\nonumber\\ &+&\frac{e^{2}}{2m_{N}}\textrm{A}^{2}(\mathbf{R}+\frac{m_{e}}{M}\mathbf{r}) +\frac{e^{2}}{2m_{e}}\textrm{A}^{2}(\mathbf{R}-\frac{m_{N}}{M}\mathbf{r}),\label{tildeW} \end{eqnarray} to arrive at, \begin{eqnarray} \langle\mathbf{P}^{\textrm{Cas}}_{\perp}\rangle&=&\sum_{\mathbf{Q},I,\gamma_{\mathbf{k}\mathbf{\epsilon}}}\sum_{\mathbf{Q}',I',\gamma_{\mathbf{k}'\mathbf{\epsilon}'}} \sum_{\mathbf{k}'',\mathbf{\epsilon}''} \frac{\langle\mathbf{Q}_{0},\tilde{\Omega}_{0}|\tilde{W}|\mathbf{Q},I,\gamma\rangle}{\hbar^{2}Q_{0}^{2}/2M+E_{0}-E_{\mathbf{Q},I,\mathbf{k}}}\nonumber\\ &\times&\langle\mathbf{Q},I,\gamma|\hbar\mathbf{k}''a^{\dagger}_{\mathbf{k}''\mathbf{\epsilon}''}a_{\mathbf{k}''\mathbf{\epsilon}''} |\mathbf{Q}',I',\gamma'\rangle\nonumber\\ &\times&\frac{\langle\mathbf{Q}',I',\gamma'|\tilde{W}|\mathbf{Q}_{0},\tilde{\Omega}_{0}\rangle}{\hbar^{2}Q_{0}^{2}/2M+E_{0}-E_{\mathbf{Q}',I',\mathbf{k}'}},\label{PCos} \end{eqnarray} where $|\mathbf{Q}_{0},\tilde{\Omega}_{0}\rangle=\exp{(i\mathbf{Q}_{0}\cdot\mathbf{R}/\hbar)}|\tilde{\Omega}_{0}\rangle$, $E_{0}=\hbar(\omega_{x}+\omega_{y}+\omega_{z})/2\equiv\hbar\omega_{0}/2$ and $E_{Q,I,k},E_{Q',I',k'}$ are the energies of the intermediate states, $|\mathbf{Q},I,\gamma\rangle=|\mathbf{Q},I\rangle\otimes|\gamma_{\mathbf{k}\mathbf{\epsilon}}\rangle$ (\emph{idem} for the prime states). The atomic states $|\mathbf{Q},I\rangle$ are eigenstates of $\tilde{H}_{0}$ and may have a priori any pseudo-momentum $\mathbf{Q}$. The EM states, $|\gamma_{\mathbf{k}\mathbf{\epsilon}}\rangle$, are 1-photon states with momentum $\hbar\mathbf{k}$ and polarization vector $\mathbf{\epsilon}$. Zeros in the denominator are avoided in the summation. 
Writing the EM quantum field in Eq.(\ref{tildeW}) as usual \cite{Loudon}, \begin{equation}\label{AQ} \mathbf{A}(\mathbf{r})=\sum_{\mathbf{k},\mathbf{\epsilon}}\sqrt{\frac{\hbar}{2ck\mathcal{V}\epsilon_{0}}} [\mathbf{\epsilon}a_{\mathbf{k}}e^{i\mathbf{k}\cdot\mathbf{r}}+\mathbf{\epsilon}^{*}a^{\dagger}_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{r}}], \end{equation} with $\mathcal{V}$ a generic volume; passing the sums over $\mathbf{Q}$, $\mathbf{Q}'$, $\mathbf{k}$, $\mathbf{k}'$ and $\mathbf{k}''$ in Eq.(\ref{PCos}) to continuum integrals and by summing over polarization states we arrive at, \begin{align} \langle\mathbf{P}^{\textrm{Cas}}_{\perp}\rangle&=\frac{\hbar^{2} e^{2}}{2c\epsilon_{0}} \int\frac{\textrm{d}^{3}k\:\mathbf{k}}{(2\pi)^{3}k}\nonumber\\&\times\langle\tilde{\Omega}_{0}|\Bigl[ (\mathbf{p}/m_{e}-\mathbf{Q}_{0}/M+\frac{e}{2m_{e}}\mathbf{B}_{0}\wedge\mathbf{r}) e^{-i\frac{m_{N}}{M}\mathbf{k}\cdot\mathbf{r}}\nonumber\\&+ (\mathbf{p}/m_{N}+\mathbf{Q}_{0}/M-\frac{e}{2m_{N}}\mathbf{B}_{0}\wedge\mathbf{r}) e^{i\frac{m_{e}}{M}\mathbf{k}\cdot\mathbf{r}}\Bigr]\nonumber\\ &\times\frac{\cdot(\mathbb{I}-\frac{\mathbf{k}\otimes\mathbf{k}}{k^{2}})\cdot} {(\hbar^{2}k^{2}/2M+\hbar ck-\hbar\mathbf{k}\cdot\mathbf{\mathbf{Q}}_{0}/M-E_{0}+H_{0})^{2}}\nonumber\\ &\times\Bigl[e^{-i\frac{m_{e}}{M}\mathbf{k}\cdot\mathbf{r}}(\mathbf{p}/m_{N}+\mathbf{Q}_{0}/M-\frac{e}{2m_{N}}\mathbf{B}_{0}\wedge\mathbf{r})\nonumber\\ &+e^{i\frac{m_{N}}{M}\mathbf{k}\cdot\mathbf{r}}(\mathbf{p}/m_{e}-\mathbf{Q}_{0}/M+\frac{e}{2m_{e}}\mathbf{B}_{0}\wedge\mathbf{r})\Bigr]|\tilde{\Omega}_{0}\rangle. \nonumber \end{align} In this equation we can distinguish four terms. In two of them the exponentials to the r.h.s. and l.h.s. of the fraction compensate. They correspond to the Feynman diagrams in which a photon is created and annihilated at the position of one of the particles, i.e., either at the nucleus or at the electron position [Fig.\ref{fig1}i($a$) and Fig.\ref{fig1}i($b$), respectively]. In the other two terms the exponentials amount to $e^{\pm i\mathbf{k}\cdot\mathbf{r}}$. They correspond to the Feynman diagrams in which the virtual photons are created and annihilated in different particles [Fig.\ref{fig1}ii($a$) and Fig.\ref{fig1}ii($b$)]. In the latter the complex exponentials evaluated in the states of the harmonic oscillator yield an effective cut-off for the momentum integrals at $k_{max}\sim\sqrt{\mu\omega_{0}/\hbar}$, making their contribution negligible w.r.t. the other diagrams. Among the diagrams of Fig.\ref{fig1}i the dominant one is i($b$) in which virtual photons are created and annihilated at the electron position, as for the non-relativistic calculation of the Lamb shift \cite{Milonnibook}. 
It reads, \begin{figure} \caption{\label{fig1} i. Feynman diagrams of the dominant processes contributing to $\langle\mathbf{P}^{\textrm{Cas}}_{\perp}\rangle$, in which the virtual photon is created and annihilated at the same particle: ($a$) at the nucleus, ($b$) at the electron. ii. Diagrams in which the virtual photon is created and annihilated at different particles.} \end{figure} \begin{align} \langle\mathbf{P}^{\textrm{Cas}}_{\perp}\rangle&=\frac{\hbar^{2} e^{2}}{2cm_{e}^{2}\epsilon_{0}} \int\frac{\textrm{d}^{3}k\:\mathbf{k}}{(2\pi)^{3}k}\nonumber\\&\langle\tilde{\Omega}_{0}| (\frac{m_{e}}{M}\mathbf{Q}_{0}+\mathbf{p}+\frac{e}{2}\mathbf{B}_{0}\wedge\mathbf{r}) e^{-i\frac{m_{N}}{M}\mathbf{k}\cdot\mathbf{r}}\nonumber\\ &\times\frac{\cdot(\mathbb{I}-\frac{\mathbf{k}\otimes\mathbf{k}}{k^{2}})\cdot}{(\hbar^{2}k^{2}/2M+\hbar ck-\hbar\mathbf{k}\cdot\mathbf{\mathbf{Q}}_{0}/M-E_{0}+H_{0})^{2}}\nonumber\\ &\times e^{i\frac{m_{N}}{M}\mathbf{k}\cdot\mathbf{r}}(\frac{m_{e}}{M}\mathbf{Q}_{0}+\mathbf{p}+\frac{e}{2}\mathbf{B}_{0}\wedge\mathbf{r})|\tilde{\Omega}_{0}\rangle.\label{Pperpo} \end{align} For the above integral not to vanish, the product of the quantum operators in the integrand must be even in $\mathbf{k}$. We observe that when moving the complex exponential on the l.h.s. to the r.h.s. of the fraction, respecting the canonical commutation relations, the momentum $\mathbf{p}$ in $\tilde{H}_{0}$ gets shifted, $\mathbf{p}\rightarrow\mathbf{p}-\frac{m_{N}}{M}\hbar\mathbf{k}$, \begin{align} &e^{-i\frac{m_{N}}{M}\mathbf{k}\cdot\mathbf{r}}(\hbar^{2}k^{2}/2M+\hbar ck-\hbar\mathbf{k}\cdot\mathbf{\mathbf{Q}}_{0}/M-E_{0}+\tilde{H}_{0})^{-1}\nonumber\\ &\times e^{i\frac{m_{N}}{M}\mathbf{k}\cdot\mathbf{r}}= [\hbar^{2}k^{2}/2m_{e}+\hbar ck-\hbar\mathbf{k}\cdot\mathbf{p}/m_{e}\nonumber\\&-\frac{\hbar\mu}{2\mu^{*}m_{e}}(\mathbf{r}\wedge\mathbf{k})\cdot\mathbf{B}_{0} -\hbar\mathbf{k}\cdot\mathbf{\mathbf{Q}}_{0}/M-E_{0}+\tilde{H}_{0}]^{-1}.\nonumber \end{align} As a result, the recoil kinetic energy in the denominators of Eq.(\ref{Pperpo}) becomes $\hbar^{2}k^{2}/2m_{e}$ and an additional term $-\hbar\mathbf{k}\cdot\mathbf{p}/m_{e}$ shows up there. Because the internal velocity of the electron is generally much greater than the center of mass velocity, $p/m_{e}\gg Q_{0}/M$, the term $\hbar\mathbf{k}\cdot\mathbf{\mathbf{Q}}_{0}/M$ will be a higher-order correction. It is easy to see that the contribution of all the terms linear in $\mathbf{Q}_{0}$ is of the order of $(\alpha\hbar\omega_{0}/Mc^{2})\mathbf{Q}_{0}$ \cite{Kawka2}, which we interpret as a radiative correction to the total rest mass, $\delta M\sim \alpha\hbar\omega_{0}/c^{2}\ll M$. In contrast, the term $-\hbar\mathbf{k}\cdot\mathbf{p}/m_{e}$ generates a Doppler shift on the photon frequency which depends on the relative direction between $\mathbf{k}$ and the current of the chromophoric electron, $\mathbf{j}_{e}=-e\mathbf{p}/m_{e}$. It is the breakdown of time reversal and mirror symmetry that makes $\mathbf{p}$ take a non-vanishing transient value \footnote{By transient value of the operator $\mathbf{p}$ we mean the quantum amplitude of $\mathbf{p}$ evaluated between the ground state and an intermediate state, say $|I\rangle$, $\langle I|\mathbf{p}|\tilde{\Omega}_{0}\rangle$. The quantum amplitude $\langle\mathbf{p}\rangle$ in the expression for $\Delta\omega$ must be intended this way.} in the direction parallel to the external magnetic field. 
It is this term which generates the spectral non-reciprocity for photons propagating in opposite directions, $\Delta\omega(\mathbf{k})=\tilde{\omega}(\mathbf{k})-\tilde{\omega}(-\mathbf{k})\sim2\mathbf{k}\cdot\langle\mathbf{p}\rangle/m_{e}$, needed for the transfer of (transverse) linear momentum from the vacuum field to matter.\\ \indent The denominator of Eq.(\ref{Pperpo}) must be expanded up to first order in the Doppler shift term. At this order $\hbar\mathbf{k}\cdot\mathbf{p}/m_{e}$ is the non-relativistic Doppler shift, which does not clash with our non-relativistic approach \cite{RecoilRelativ}. This yields an even factor of $\mathbf{k}$ in the integrand, necessary for the integral not to vanish. The term $\frac{\hbar\mu}{\mu^{*}m_{e}}(\mathbf{r}\wedge\mathbf{k})\cdot\mathbf{B}_{0}$ generates a vanishing contribution. Disregarding the mass renormalization terms, we find, \begin{align} \langle\mathbf{P}^{\textrm{Cas}}_{\perp}\rangle&=\Re{}\frac{\hbar^{3} e^{2}}{cm_{e}^{3}\epsilon_{0}} \int\frac{\textrm{d}^{3}k\:\mathbf{k}}{(2\pi)^{3}k}\langle\tilde{\Omega}_{0}| (\mathbf{p}+\frac{e}{2}\mathbf{B}_{0}\wedge\mathbf{r})\nonumber\\&\times\frac{\cdot(\mathbb{I}-\frac{\mathbf{k}\otimes\mathbf{k}}{k^{2}})\cdot} {(\hbar^{2}k^{2}/2m_{e}+\hbar ck-E_{0}+\tilde{H}_{0})^{2}}(\mathbf{k}\cdot\mathbf{p})\nonumber\\ &\times\frac{1}{\hbar^{2}k^{2}/2m_{e}+\hbar ck-E_{0}+\tilde{H}_{0}}(\mathbf{p}+\frac{e}{2}\mathbf{B}_{0}\wedge\mathbf{r})|\tilde{\Omega}_{0}\rangle.\label{Pperpotro} \end{align} At the same time, since both the chiral interaction $V_{C}$ and the Zeeman potential $V_{Z}$ featuring in $\tilde{H}_{0}$ are treated as perturbations to the harmonic oscillator, the denominator is to be expanded up to order $V_{C}V_{Z}$ and only terms at $\mathcal{O}(C$B$_{0})$ must be retained. The calculation includes a number of finite integrals of the form, \begin{align} \Re&\int_{0}^{\infty}k^{3}\textrm{d}k[\frac{1}{(E^{m_{e}}_{k}-E_{1})(E^{m_{e}}_{k}-E_{2})^{2}}\nonumber\\&+ \frac{1}{(E^{m_{e}}_{k}-E_{2})(E^{m_{e}}_{k}-E_{1})^{2}}]\nonumber\\ &\simeq\frac{2m^{2}_{e}}{\hbar^{4}(E_{1}-E_{2})}\Bigl[\log{(E_{1}/E_{2})}+\mathcal{O}(\hbar\omega_{0}/m_{e}c^{2})\nonumber\\&+\mathcal{O}[(\hbar\omega_{0}/m_{e}c^{2})^{2}]+...\Bigr], \label{integral} \end{align} where $E^{m_{e}}_{k}=\hbar^{2}k^{2}/2m_{e}+\hbar ck$ and $E_{1,2}$ are the energies of the electronic transitions between intermediate states, which are of the order of $\hbar\omega_{0}$. Additional integrals involving the product of three and four fractions appear also in Eq.(\ref{Pperpotro}) which yield terms of the same orders. The final expression can be greatly simplified assuming small anisotropy factors, $\eta_{ij}\ll1$. 
Averaging over the molecule's orientations ($rot$) \cite{Craigbook} and expanding the result up to leading order in the anisotropy factors we obtain \begin{align} \langle\mathbf{P}^{\textrm{Cas}}_{\perp}\rangle_{rot}&=[20736\ln{(4/3)}-12928\ln{(2)}-14511]\nonumber\\ &\times\frac{Ce^{3}\mathbf{B}_{0}}{93312\pi^{2}c\epsilon_{0}m_{e}^{2}\omega_{x}\omega_{y}\omega_{z}} \eta^{zy}\eta^{xz}\eta^{yx}\nonumber\\&\simeq \frac{-1.06Ce^{3}\mathbf{B}_{0}}{144\pi^{2}c\epsilon_{0}m_{e}^{2}\omega_{x}\omega_{y}\omega_{z}} \eta^{zy}\eta^{xz}\eta^{yx}.\label{Pperptot} \end{align} \indent It is clear that $\langle\mathbf{P}^{\textrm{Cas}}_{\perp}\rangle_{rot}$ is the expression to be compared with the semiclassical result of Eq.(\ref{PCascalss}), since from Eq.(\ref{ec2}) we read that the semiclassical $\mathbf{P}^{\textrm{Cas}}$ is the momentum of the radiative (i.e., transverse) modes only. First we observe that $\langle\mathbf{P}^{\textrm{Cas}}_{\perp}\rangle_{rot}$ is a factor $(m_{e}c^{2}/\hbar\omega_{0})^{2}$ times larger than the semiclassical $\mathbf{P}^{\textrm{Cas}}$. From Eq.(\ref{integral}) we see that the semiclassical result is included in our quantum calculation (there are no other terms of the same order in the neglected diagrams), but it enters as a second-order correction, of relative order $(\hbar\omega_{0}/m_{e}c^{2})^{2}$, to the leading term. We conclude that the reason for this discrepancy is the failure of the semiclassical approach to account for the Doppler effect due to the relative motion of the internal charges, which is what generates the necessary spectral non-reciprocity; in the semiclassical approach, spectral non-reciprocity comes instead from the effective magnetochiral birefringence. We also note that the result of Eq.(\ref{Pperptot}) assumes an integration over $k$ up to infinity in Eq.(\ref{Pperpotro}), which violates the non-relativistic limit. However, had we assumed a cut-off $k_{max}$ of the order of the inverse of the electronic Compton wavelength, $k_{max}\sim m_{e}c/\hbar$, the expression on the r.h.s. of Eq.(\ref{Pperptot}) would change only by a factor of order unity. \subsection{Longitudinal Casimir momentum} The details of the computation of $\langle\mathbf{P}_{\parallel}^{\textrm{Cas}}\rangle$ have already been published in Ref.\cite{PRLDonaire} and in the Supplemental Material there. Here we concentrate on the underlying mechanism which gives rise to the transfer of momentum from the vacuum field to the molecule. The calculation at $\mathcal{O}(C$B$_{0})$ requires up to third-order perturbation theory in $V_{Z}+V_{C}+W$ applied to the ground state of the harmonic oscillator Hamiltonian, $|0\rangle$, \begin{equation} \langle\mathbf{P}^{\textrm{Cas}}_{\parallel}\rangle=\sum_{\mathbf{Q},I,\gamma_{\mathbf{k}\mathbf{\epsilon}}} \frac{\langle\mathbf{Q}_{0},\tilde{\Omega}_{0}|e\Delta\mathbf{A}|\mathbf{Q},I,\gamma\rangle \langle\mathbf{Q},I,\gamma|\tilde{W}|\mathbf{Q}_{0},\tilde{\Omega}_{0}\rangle}{\mathbf{Q}^{2}_{0}/2M+E_{0}-E_{Q,I,k}}+c.c.,\nonumber \end{equation} where $|\mathbf{Q}_{0},\tilde{\Omega}_{0}\rangle=\exp{(i\mathbf{Q}_{0}\cdot\mathbf{R}/\hbar)}|\tilde{\Omega}_{0}\rangle$ and $\Delta\mathbf{A}=\mathbf{A}(\mathbf{r}_{N})-\mathbf{A}(\mathbf{r}_{e})$. The $\mathbf{Q}_{0}$-dependent terms are shown in Ref.\cite{PRLDonaire} to give rise to mass renormalization factors. 
As for the rest we have \begin{align}\label{Plong} \langle\mathbf{P}^{\textrm{Cas}}_{\parallel}\rangle&=\Bigl[\frac{-\hbar e}{2c\epsilon_{0}}\int\frac{\textrm{d}^{3}k}{(2\pi)^{3}k}\langle\tilde{\Omega}_{0}| \frac{(\mathbb{I}-\frac{\mathbf{k}\otimes\mathbf{k}}{k^{2}})\cdot}{\hbar^{2}k^{2}/2m_{e}+\hbar ck-E_{0}+\tilde{H}_{0}}\nonumber\\ &e\mathbf{p}/m_{e}|\tilde{\Omega}_{0}\rangle+c.c.\Bigr]-[m_{e}\rightarrow m_{N}], \end{align} where $-[m_{e}\rightarrow m_{N}]$ means that the same expression within the square brackets must be evaluated with $m_{e}$ replaced by $m_{N}$ and then subtracted. In contrast to $\langle\mathbf{P}^{\textrm{Cas}}_{\perp}\rangle$, no Doppler shift term enters the calculation and no spectral non-reciprocity exists. Nonetheless the longitudinal momentum is also generated by the non-vanishing transient currents parallel to the external magnetic field. In this case, however, the currents are due to the internal motion of both the chromophoric electron and the nucleus, $\mathbf{j}_{e,N}=e\mathbf{p}/m_{e,N}$. They are the sources of the magnetic fields at some point $\mathbf{X}$ measured w.r.t. the center of mass, \begin{equation} \mathbf{B}_{e,N}(\mathbf{X})=\frac{-\mu_{0}}{4\pi}\mathbf{j}_{e,N}\wedge\frac{\mathbf{X}}{X^{3}}= -\frac{i\mu_{0}e}{m_{e,N}}\int\frac{\textrm{d}^{3}k}{(2\pi)^3}e^{i\mathbf{k}\cdot\mathbf{X}}\frac{\mathbf{k}\wedge\mathbf{p}}{k^{2}}.\label{Bfield} \end{equation} Each of these magnetic fields, combined with the Coulomb field ($Coul$) of the respective charge, $+e$ for the nucleus and $-e$ for the electron, \begin{equation} \mathbf{E}^{Coul}_{e,N}(\mathbf{X})=\frac{\pm e}{4\pi\epsilon_{0}}\frac{\mathbf{X}}{X^{3}}= \mp\frac{ie}{\epsilon_{0}}\int\frac{\textrm{d}^{3}k}{(2\pi)^3}e^{i\mathbf{k}\cdot\mathbf{X}}\frac{\mathbf{k}}{k^{2}}, \end{equation} gives rise to the longitudinal Casimir momentum of Eq.(\ref{Plong}), \begin{align} \langle\mathbf{P}^{\textrm{Cas}}_{\parallel}\rangle&=\epsilon_{0}\int\textrm{d}^{3}X\langle\mathbf{E}^{Coul}_{e}(\mathbf{X})\wedge\mathbf{B}^{*}_{e}(\mathbf{X})\rangle \nonumber\\&+\epsilon_{0}\int\textrm{d}^{3}X\langle\mathbf{E}^{Coul}_{N}(\mathbf{X})\wedge\mathbf{B}^{*}_{N}(\mathbf{X})\rangle.\label{Plongy} \end{align} In order to obtain an equivalence between equations (\ref{Plong}) and (\ref{Plongy}), one of the $k$ factors in the Fourier transform of the magnetic fields in Eq.(\ref{Bfield}) must be substituted by $k+\hbar k^{2}/2cm_{e,N}-(E_{0}-\tilde{H}_{0})/\hbar c$, where the momentum shift comes from the sum over the intermediate states which enter the transient currents. It is clear that the only difference between the two terms of Eqs.(\ref{Plong},\ref{Plongy}) lies in the sign of the charges which enter the electrostatic fields and in the masses which determine their respective currents. As a result, the sum of both terms must be proportional to $f(m_{e})-f(m_{N})$, with $f$ a real function of the masses. The result is \begin{equation}\label{Pkinparx} \langle \mathbf{P}^{\textrm{Cas}}_{\parallel}\rangle=\frac{Ce^{3}\ln{(m_{N}/m_{e})}}{96\pi^{2}c\epsilon_{0}\mu\mu^{*} (\omega_{x}+\omega_{y}+\omega_{z})}\sum_{i,j,k}\varepsilon_{ijk}\frac{\textrm{B}_{0}^{i}\eta^{kj}}{\omega_{k}\omega_{j}}\hat{i}, \end{equation} where $\varepsilon_{ijk}$ is the three-dimensional Levi-Civita tensor, the indices $i,j,k$ take on the axis labels $x,y,z$ and $\hat{i}$ is a unit vector along the $i$ axis. 
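The logarithm of the mass ratio appearing in Eq.(\ref{Pkinparx}) is only a modestly large number. As a hedged illustration of ours (the effective nuclear masses below are assumed, illustrative values):
\begin{verbatim}
# Hedged evaluation of ln(m_N/m_e) for a few assumed effective nuclear masses.
import math
m_e_amu = 1.0 / 1822.9                 # electron mass in atomic mass units
for m_N_amu in (1.0, 12.0, 50.0):      # illustrative effective nuclear masses (amu)
    print(m_N_amu, round(math.log(m_N_amu / m_e_amu), 1))
# prints approximately 7.5, 10.0 and 11.4
\end{verbatim}
Values of order ten for this logarithm suggest that, in the total Casimir momentum assembled below, the longitudinal contribution dominates in magnitude over the purely transverse one.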
After averaging over the molecule's orientations we end up with, \begin{equation}\label{PkinparAverage} \langle\mathbf{P}^{\textrm{Cas}}_{\parallel}\rangle_{rot}=\frac{Ce^{3}\ln{(m_{e}/m_{N})}\mathbf{B}_{0}}{144\pi^{2}c\epsilon_{0}\mu\mu^{*}\omega_{x}\omega_{y}\omega_{z}} \eta^{zy}\eta^{xz}\eta^{yx}. \end{equation} As for the transverse Casimir momentum, the above calculation holds if the upper limit of integration in Eq.(\ref{Plong}) is taken to infinity. Again, for a cut-off of the order of $k_{max}\sim m_{e}c/\hbar$, additional terms of the same order as those in Eq.(\ref{Pkinparx}) would be obtained. \subsection{Casimir momentum as a function of optical parameters} Finally, adding up the transverse and the longitudinal contributions we obtain, \begin{equation}\label{PCasAverage} \langle\mathbf{P}^{\textrm{Cas}}\rangle_{rot}=\frac{Ce^{3}\mathbf{B}_{0}[\ln{(m_{e}/m_{N})}+1]}{144\pi^{2}c\epsilon_{0}\mu\mu^{*}\omega_{x}\omega_{y}\omega_{z}} \eta^{zy}\eta^{xz}\eta^{yx}. \end{equation} This formula is a simple expression for $\langle\mathbf{P}^{\textrm{Cas}}\rangle_{rot}$ in terms of the chiral parameter $C$, the magnetic field and the natural frequencies of the oscillator. We can also write it in terms of the fine structure constant, $\alpha$, the static optical rotatory power, $\beta(0)$, and the static electric polarizability of the molecule, $\alpha_{E}(0)$. By comparing the above expression with the formulas for $\beta$ and $\alpha_{E}$ in \ref{appendB} we obtain,\footnote{We have fixed here an erroneous minus sign appearing in Ref.\cite{PRLDonaire}.} \begin{equation}\label{main1} \langle\mathbf{P}^{\textrm{Cas}}\rangle_{rot}\simeq\frac{2\alpha}{9\pi}\frac{\beta(0)}{\alpha_{E}(0)}[\ln{(m_{N}/m_{e})}+1]e\mathbf{B}_{0}. \end{equation} In Ref.\cite{PRLDonaire} we speculate that, apart from constants of order unity, this expression is model-independent. For a given set of natural frequencies, all the atomic lengths in the problem are determined by quantum mechanics. In particular, $\beta(0)/\alpha_{E}(0)$ is a length, a fraction of the electronic Compton wavelength, that we identify with a chiral length, $l_{ch}$. Therefore, we can write $\textrm{P}^{\textrm{Cas}}\sim\alpha\:e$B$_{0}l_{ch}$ and thus interpret it as the leading QED correction to the classical Abraham momentum. \section{Origin of Casimir-induced kinetic energy}\label{sec5} \indent In the preceding sections we have found that the chiral molecule acquires a kinetic momentum during the switching of the external magnetic field as a result of its interaction with the vacuum field. On the other hand, we have shown that, as a result of the conservation of the total momentum $\mathbf{K}$, there exists a transfer of linear momentum from the vacuum field to the molecule. The question arises whether the resultant variation of kinetic energy is provided by the vacuum field or by an external source. In the following we prove that this variation of kinetic energy is part of the energy provided or removed by the source of the external magnetic field to generate a vacuum correction to the molecular magnetization. That magnetization energy can be considered as part of the `magnetic Lamb energy'.\\ \indent To this aim, we make use of the Hellmann-Feynman-Pauli theorem \cite{Noztier}. According to this theorem, the variation of the total energy of a system in its ground state with respect to an adiabatic parameter, $\lambda$, can be computed from the expectation value of the functional derivative of the Hamiltonian w.r.t. that parameter. 
The variation of the total energy is the work done on the system of interest by an external source responsible for the adiabatic variation of the parameter. If the parameter enters an interaction potential, $W_{int}$, that work reads \begin{equation} \mathcal{W}_{\lambda}=\int_{0}^{\lambda}\delta\lambda' \langle \delta W_{int}/\delta\lambda'\rangle.\label{We} \end{equation} For instance, this approach is used to compute intermolecular forces by considering the center-of-mass position vectors of the molecules as adiabatic parameters \cite{Zhang}.\\ \indent In our case, the adiabatic parameter to be varied is the magnetic field, $\mathbf{B}_{0}$, and the variation of energy to be computed is a magnetic energy, $\mathcal{W}_{\mathbf{B}_{0}}$. Since $\mathbf{B}_{0}$ enters $V_{Z}$ and $W$ linearly, and $\Delta V$ quadratically in Eq.(\ref{effective}) \footnote{The term of order $QB_{0}$ in $\Delta V$, $e\mathbf{Q}\cdot(\mathbf{r}\wedge\mathbf{B}_{0})/M$, does not contribute to $\mathcal{W}_{\mathbf{B}_{0}}$.}, the variation of the total energy of the combined system of molecule and EM vacuum during the adiabatic switching of the magnetic field from $\mathbf{0}$ up to its final value $\mathbf{B}_{0}$ is, \begin{align} \mathcal{W}_{\mathbf{B}_{0}}&=\int_{\mathbf{0}}^{\mathbf{B}_{0}}\delta\mathfrak{B}_{0} \langle\delta(\tilde{W}+V_{Z}+\Delta V)/\delta\mathfrak{B}_{0}\rangle\nonumber\\&=\int_{\mathbf{0}}^{\mathbf{B}_{0}}\delta\mathfrak{B}_{0} [\langle \tilde{W}_{\mathfrak{B}_{0}}+V_{Z}\rangle/\mathfrak{B}_{0}+2\langle \Delta V\rangle/\mathfrak{B}_{0}],\label{Work} \end{align} where in the last equality $\tilde{W}_{\mathfrak{B}_{0}}=(e^{2}/2)(\mathbf{r}\wedge\mathfrak{B}_{0})\cdot[\mathbf{A}(\mathbf{R}-\frac{m_{N}}{M}\mathbf{r})/m_{e}- \mathbf{A}(\mathbf{R}+\frac{m_{e}}{M}\mathbf{r})/m_{N}]$ includes only the $\mathfrak{B}_{0}$-dependent terms of $\tilde{W}$. We note that $\mathcal{W}_{\mathbf{B}_{0}}$ is the work done by the external source that generates the uniform magnetic field $\mathbf{B}_{0}$ in the space occupied by the chiral molecule in the presence of the quantum vacuum. The expression within the square brackets in Eq.(\ref{Work}) is indeed the total magnetization of the molecule in its ground state, $\langle \mathbf{M}_{\mathfrak{B}_{0}}\rangle$, with a minus sign in front. Hence we can write $\mathcal{W}_{\mathbf{B}_{0}}=-\int_{\mathbf{0}}^{\mathbf{B}_{0}}\langle \mathbf{M}_{\mathfrak{B}_{0}}\rangle\cdot\delta\mathfrak{B}_{0}$.\\ \indent Up to our level of approximation, $\mathcal{W}_{\mathbf{B}_{0}}$ must include terms up to second order in $C$, $\mathbf{B}_{0}$ and $\alpha$. It follows that perturbation theory has to be applied up to order six in $V_{C}+V_{Z}+\Delta V+\tilde{W}$. We first observe that at leading order in $\Delta V$ and at quadratic order in $V_{Z}$, and at zero order in $V_{C}$ and $\tilde{W}$, $\mathcal{W}_{\mathbf{B}_{0}}$ reduces to the magnetization energy of the diamagnetic molecule in the absence of the vacuum field, $\sim-\alpha_{M}(0)B_{0}^{2}$, $\alpha_{M}(0)$ being the static magnetic polarizability given in \ref{appendB}. At higher order in perturbation theory, including the chiral potential and the vacuum field, we find a number of additional terms in $\mathcal{W}_{\mathbf{B}_{0}}$. Most of these terms amount to variations in the energy of internal atomic levels, which are not relevant to us. 
We are, however, interested in those terms of $\mathcal{W}_{\mathbf{B}_{0}}$ which are of the order of the variation of the kinetic energy of the chiral group, $\Delta E_{\textrm{kin}}(\mathbf{B}_{0})=[\mathbf{Q}_{0}-\langle\mathbf{P}^{\textrm{Cas}}(\mathbf{B}_{0})\rangle]^{2}/2M-\mathbf{Q}^{2}_{0}/2M$, for an initial kinetic momentum $\mathbf{Q}_{0}$ at zero magnetic field. In the following we search for these terms by inspection and show that altogether they add up to give $\Delta E_{\textrm{kin}}$. As a result, we can say that the variation of the kinetic energy is supplied by the source of the external magnetic field to produce a vacuum correction to the magnetization of the chiral molecule.\\ \indent First we note that, keeping the momentum of the molecule, $\mathfrak{Q}_{0}$, finite and bearing in mind that during the adiabatic switching of the magnetic field $\mathfrak{Q}_{0}$ equals $\mathbf{Q}_{0}-\langle\mathbf{P}^{\textrm{Cas}}(\mathfrak{B}_{0})\rangle$, $\mathcal{W}_{\mathbf{B}_{0}}$ contains terms quadratic in $C$, $\mathbf{B}_{0}$ and $\alpha$ arising from the application of perturbation theory to $e^{-i\mathfrak{Q}_{0}\cdot\mathbf{R}/\hbar}|0\rangle$ up to first order in $V_{C}$ and $V_{Z}$, and up to second order in $\tilde{W}$, as for the computation of the Casimir momentum. Following the same steps as for the calculation of $\langle\mathbf{P}^{\textrm{Cas}}\rangle$, we start with the dressed ground state, $|\mathfrak{Q}_{0},\tilde{\Omega}_{0}\rangle=e^{-i\mathfrak{Q}_{0}\cdot\mathbf{R}/\hbar}|\tilde{\Omega}_{0}\rangle$, and apply to it second-order perturbation theory with the interaction potential $\tilde{W}$. The calculation is therefore similar to that of the ordinary Lamb shift except for the fact that the ground state and the intermediate states in our case are dressed up to first order in $C$ and $\mathfrak{B}_{0}$. In addition to the diagram of Fig.\ref{fig1}i($b$) for the self-energy of the electron we must add that of Fig.\ref{fig1}i($a$) for the nucleus. We will denote the resultant energy by $E^{Lamb}(\mathfrak{B}_{0})$ and refer to it as the magnetic Lamb energy, \begin{equation} E^{Lamb}(\mathfrak{B}_{0})=\sum_{\mathbf{Q},I,\gamma_{\mathbf{k}\mathbf{\epsilon}}} \frac{\langle\mathfrak{Q}_{0},\tilde{\Omega}_{0}|\tilde{W}|\mathbf{Q},I,\gamma\rangle \langle\mathbf{Q},I,\gamma|\tilde{W}|\mathfrak{Q}_{0},\tilde{\Omega}_{0}\rangle}{\mathfrak{Q}^{2}_{0}/2M+E_{0}-E_{Q,I,k}}. \end{equation} Let us first reorganize the terms in $\tilde{W}$ as \begin{align}\label{tW} \tilde{W}&=-\frac{e}{m_{N}}\left(\mathbf{p}-\frac{e}{2}\mathfrak{B}_{0}\wedge\mathbf{r}\right)\cdot \mathbf{A}(\mathbf{r}_{N})\\ &-\frac{e}{m_{e}}\left(\mathbf{p}+\frac{e}{2}\mathfrak{B}_{0}\wedge\mathbf{r}\right)\cdot \mathbf{A}(\mathbf{r}_{e})-\frac{\mathbf{P}\cdot\mathbf{P}_{\parallel}^{\textrm{Cas}}}{M}+\mathcal{O}(\textrm{A}^{2}).\nonumber \end{align} This shows that the longitudinal Casimir momentum appears naturally coupled to the canonical momentum of the center of mass. In the following we disregard the terms of $E^{Lamb}$ associated with internal energies and restrict ourselves to those of the order of $E_{\textrm{kin}}$. 
In the first place, from the combination of the factors $-\mathbf{p}\cdot[\frac{e}{m_{N}}\mathbf{A}(\mathbf{r}_{N})+\frac{e}{m_{e}}\mathbf{A}(\mathbf{r}_{e})]$ and $-M^{-1}\mathbf{P}\cdot\mathbf{P}_{\parallel}^{\textrm{Cas}}$ in $\tilde{W}^{2}$ we have, \begin{align} E_{\parallel}^{Lamb}&=\big[-\sum_{\mathbf{Q},I,\gamma_{\mathbf{k}\mathbf{\epsilon}}} \frac{\langle\mathfrak{Q}_{0},\tilde{\Omega}_{0}|e\mathbf{p}\cdot\mathbf{A}(\mathbf{r}_{e})/m_{e}|\mathbf{Q},I,\gamma\rangle}{\mathfrak{Q}^{2}_{0}/2M+E_{0}-E_{Q,I,k}}\nonumber\\ &\times\langle\mathbf{Q},I,\gamma|e\mathbf{A}(\mathbf{r}_{e})\cdot\mathbf{P}/M|\mathfrak{Q}_{0},\tilde{\Omega}_{0}\rangle+c.c.\big]\nonumber\\ &-[m_{e}\rightarrow m_{N}]. \end{align} Since the ground state $|\mathfrak{Q}_{0},\tilde{\Omega}_{0}\rangle$ is an eigenstate of the center of mass canonical momentum, $\mathbf{P}|\mathfrak{Q}_{0},\tilde{\Omega}_{0}\rangle=\mathfrak{Q}_{0}|\mathfrak{Q}_{0},\tilde{\Omega}_{0}\rangle$, the above equation can be written as \begin{align} E_{\parallel}^{Lamb}&=\big[-\sum_{\mathbf{Q},I,\gamma_{\mathbf{k}\mathbf{\epsilon}}} \frac{\langle\mathfrak{Q}_{0},\tilde{\Omega}_{0}|e\mathbf{p}\cdot\mathbf{A}(\mathbf{r}_{e})/m_{e}|\mathbf{Q},I,\gamma\rangle} {\mathfrak{Q}^{2}_{0}/2M+E_{0}-E_{Q,I,k}}\nonumber\\ &\times\langle\mathbf{Q},I,\gamma|e\mathbf{A}(\mathbf{r}_{e})|\mathfrak{Q}_{0},\tilde{\Omega}_{0}\rangle\cdot\mathfrak{Q}_{0}/M+c.c.\big]\nonumber\\ &-[m_{e}\rightarrow m_{N}]. \end{align} The expression under the sum is exactly the one for the longitudinal Casimir momentum in Eq.(\ref{Plong}) with a minus sign in front. Therefore we find, \begin{equation} E_{\parallel}^{Lamb}=-\langle\mathbf{P}^{\textrm{Cas}}_{\parallel}(\mathfrak{B}_{0})\rangle\cdot\mathfrak{Q}_{0}/M.\label{Eparal} \end{equation} This relation explains the subscript $\parallel$ in $ E_{\parallel}^{Lamb}$.\\ \indent Likewise, from the combination of two factors $-\frac{e}{m_{e}}\left(\mathbf{p}+\frac{e}{2}\mathfrak{B}_{0}\wedge\mathbf{r}\right)\cdot \mathbf{A}(\mathbf{r}_{e})$ in $\tilde{W}^{2}$ we have, \begin{align} &\sum_{\mathbf{Q},I,\gamma_{\mathbf{k}\mathbf{\epsilon}}} \frac{|\langle\mathfrak{Q}_{0},\tilde{\Omega}_{0}|\frac{e}{m_{e}}\left(\mathbf{p}+\frac{e}{2}\mathfrak{B}_{0}\wedge\mathbf{r}\right)\cdot \mathbf{A}(\mathbf{r}_{e})|\mathbf{Q},I,\gamma\rangle|^{2}} {\mathfrak{Q}^{2}_{0}/2M+E_{0}-E_{Q,I,k}} \nonumber\\&=\frac{-\hbar e^{2}}{2cm_{e}^{2}\epsilon_{0}} \int\frac{\textrm{d}^{3}k}{(2\pi)^{3}k}\langle\tilde{\Omega}_{0}| (\mathbf{p}+\frac{e}{2}\mathfrak{B}_{0}\wedge\mathbf{r})\nonumber\\ &\times\frac{\cdot(\mathbb{I}-\frac{\mathbf{k}\otimes\mathbf{k}}{k^{2}})\cdot}{\hbar^{2}k^{2}/2m_{e}+\hbar ck-\hbar\mathbf{k}\cdot\mathbf{p}/m_{e} -\hbar\mathbf{k}\cdot\mathfrak{Q}_{0}/M-E_{0}+\tilde{H}_{0}}\nonumber\\ &\times(\mathbf{p}+\frac{e}{2}\mathfrak{B}_{0}\wedge\mathbf{r})|\tilde{\Omega}_{0}\rangle.\label{Pperpal} \end{align} Next we note in the denominator a Doppler shift term due to the motion of the center of mass, $-\hbar\mathbf{k}\cdot\mathfrak{Q}_{0}/M$. 
Expanding the fraction up to first order in this term and discarding the zero-order terms we find, \begin{align} E^{\perp}_{Lamb}&=\frac{-\hbar e^{2}}{2cm_{e}^{2}\epsilon_{0}} \int\frac{\textrm{d}^{3}k}{(2\pi)^{3}k}\langle\tilde{\Omega}_{0}|\frac{\hbar}{M}(\mathbf{k}\cdot\mathfrak{Q}_{0}) (\mathbf{p}+\frac{e}{2}\mathfrak{B}_{0}\wedge\mathbf{r})\nonumber\\ &\times\frac{\cdot(\mathbb{I}-\frac{\mathbf{k}\otimes\mathbf{k}}{k^{2}})\cdot}{(\hbar^{2}k^{2}/2m_{e}+\hbar ck-\hbar\mathbf{k}\cdot\mathbf{p}/m_{e} -E_{0}+\tilde{H}_{0})^{2}}\nonumber\\ &\times(\mathbf{p}+\frac{e}{2}\mathfrak{B}_{0}\wedge\mathbf{r})|\tilde{\Omega}_{0}\rangle.\label{Eperpal} \end{align} Lastly, by expanding the denominator up to first order in $-\hbar\mathbf{k}\cdot\mathbf{p}/m_{e}$ and comparing with Eq.(\ref{Pperpotro}) we end up with \begin{equation} E^{\perp}_{Lamb}=-\langle\mathbf{P}^{\textrm{Cas}}_{\perp}(\mathfrak{B}_{0})\rangle\cdot\mathfrak{Q}_{0}/M.\label{Eperp} \end{equation} \indent We have found in Sec.\ref{Qapproach} that $\langle\mathbf{P}^{\textrm{Cas}}(\mathfrak{B}_{0})\rangle\propto\mathfrak{B}_{0}$ under stationary conditions. Therefore, combining the results of equations (\ref{Eparal}) and (\ref{Eperp}) and considering Eq.(\ref{Work}) we can write the magnetic Lamb energy associated with these two terms as, \begin{equation}\label{ener} \int_{0}^{\mathbf{B}_{0}}\delta\mathfrak{B}_{0}(E_{\parallel}^{Lamb}+ E_{\perp}^{Lamb})/\mathfrak{B}_{0}= \langle\mathbf{P}^{\textrm{Cas}}(\mathbf{B}_{0})\rangle^{2}/2M-\mathbf{Q}_{0}\cdot\langle\mathbf{P}^{\textrm{Cas}}(\mathbf{B}_{0})\rangle/M, \end{equation} which equals $\Delta E_{\textrm{kin}}(\mathbf{B}_{0})=[\mathbf{Q}_{0}-\langle\mathbf{P}^{\textrm{Cas}}(\mathbf{B}_{0})\rangle]^{2}/2M-\mathbf{Q}^{2}_{0}/2M$ as anticipated. From this result we see that the variation of the kinetic energy of the chiral group, induced by the transfer of linear momentum from the vacuum to the molecule, has its origin in the magnetic Lamb energy (i.e., that part of the interaction energy that is induced by both the magnetic field and the quantum vacuum)\footnote{In our simplified model the chiral group is free to move within the molecule. The subsequent transfer of kinetic momentum from the chiral group to the rest of the molecule would be accompanied by a redistribution of mechanical energy.}. This kinetic energy is the magnetic energy associated with the vacuum correction to the magnetization of the molecule, $\Delta M_{\mathfrak{B}_{0}}=-(E_{\parallel}^{Lamb}+ E_{\perp}^{Lamb})/\mathfrak{B}_{0}$, $\Delta E_{\textrm{kin}}(\mathbf{B}_{0})=-\int_{0}^{\mathbf{B}_{0}}\Delta M_{\mathfrak{B}_{0}}\delta\mathfrak{B}_{0}$ \footnote{$\Delta M_{\mathfrak{B}_{0}}=-(E_{\parallel}^{Lamb}+ E_{\perp}^{Lamb})/\mathfrak{B}_{0}$ is shorthand for the vacuum correction to the magnetization, as is the expression in the integrand of Eq.(\ref{Work}) for the total magnetization. Strictly speaking, it should be written $\Delta M_{\mathfrak{B}_{0}}=-\delta(E_{\parallel}^{Lamb}+E_{\perp}^{Lamb})/\delta\mathfrak{B}_{0}+(\mathfrak{B}_{0}/2)\delta^{2}(E_{\parallel}^{Lamb}+E_{\perp}^{Lamb})/\delta\mathfrak{B}_{0}^{2}$.}. The sign of this energy depends on the magnitudes of the initial momentum and final magnetic field, $Q_{0}$ and $B_{0}$, as well as on their relative orientation. A positive sign means that the energy is provided by the external source which generates the magnetic field. 
A negative sign means that part of the initial kinetic energy of the molecule is removed by the external source during the magnetization process. \section{Conclusions}\label{sec6} We have derived an expression for the Casimir momentum transferred from the EM vacuum to a chiral molecule during the switching of an external magnetic field. We have modeled the chiral molecule using the single quantum oscillator model of Condon \emph{et al.} \cite{Condon1,Condon2,EPJDonaire}. We have applied both a semiclassical approach and a fully quantum non-relativistic approach.\\ \indent The quantum approach reveals that two distinct mechanisms operate in the transfer of momentum, both of them based on the production of transient internal currents in the direction parallel to the external magnetic field. The transverse Casimir momentum [Eq.(\ref{Pperpotro})] has its origin in the spectral non-reciprocity generated by a Doppler shift in the frequency of the vacuum photons. This shift is due to the internal momentum of the chromophoric electron, $\mathbf{p}$. The breakdown of time-reversal and mirror symmetry makes $\mathbf{p}$ take a non-vanishing transient value parallel to the external magnetic field. In contrast, the longitudinal Casimir momentum [Eq.(\ref{Plong})] has its origin in the combination of the electrostatic field of the internal charges and the magnetic field generated by their transient currents parallel to the external magnetic field [Eq.(\ref{Plongy})]. Therefore, we have found that the longitudinal Casimir momentum is the momentum of the source EM field, while the transverse momentum is the momentum of the sourceless vacuum field. In particular, the finding $\langle\mathbf{P}^{\textrm{Cas}}_{\perp}\rangle\neq\mathbf{0}$ proves that by modifying the symmetries of space-time it is possible to vary the vacuum expectation value of observable quantities.\\ \indent The quantum result in Eq.(\ref{main1}) is linear in $\mathbf{B}_{0}$ and proportional to the fine structure constant, $\alpha$, and to the molecular rotatory power. It is conjectured that this result is universal and model-independent, up to numerical prefactors of order unity which might include relativistic corrections.\\ \indent The semiclassical approach only applies to the calculation of the transverse Casimir momentum and relies on the non-reciprocity of the spectrum of effective normal modes. However, it fails to incorporate the net effect of the Doppler shift, which enters the microscopic quantum approach and is found to be largely dominant. The semiclassical result of Eq.(\ref{PCascalss}) appears as a second-order correction to the leading-order term of the quantum result.\\ \indent We have proved that the variation of the kinetic energy of the chromophoric group has its origin in the magnetic Lamb energy [Eq.(\ref{ener})]. This kinetic energy is found to be part of the magnetic energy associated with the vacuum correction to the magnetization of the molecule. When positive, this energy is provided by the external source which generates the uniform magnetic field. When negative, it is part of the initial kinetic energy of the molecule which is removed by the external source during the magnetization process.\\ \ack This work was supported by the ANR contract PHOTONIMPULS ANR-09-BLAN-0088-01. 
\appendix \section{Rotationally averaged polarizabilities}\label{appendB} \indent At leading order in the anisotropy factors, the polarizabilities that enter Eq.(\ref{dind}) are \cite{EPJDonaire}, \begin{align} &\alpha_{E}=\frac{e^{2}}{\mu(\omega_{0}^{2}-\omega^{2})},\quad\alpha_{M}=\frac{4e^{2}\hbar\omega_{0}\mathcal{N}_{xyz}}{9\mu^{*2}(4\omega_{0}^{2}-\omega^{2})},\nonumber\\ &\chi=\frac{e^{3}}{\mu\mu^{*}(\omega_{0}^{2}-\omega^{2})^{2}},\quad\zeta=\frac{e^{3}\hbar\omega_{0}(4\omega_{0}^{2}-3\omega^{2})\mathcal{N}_{xyz}} {18\mu^{*3}\omega(4\omega_{0}^{2}-\omega^{2})^{2}},\nonumber\\ &\beta=\frac{-2e^{2}\hbar C\omega^{3}_{0}(\omega^{4}+7\omega_{0}^{2}\omega^{2}+4\omega_{0}^{4})\mathcal{M}_{xyz}} {\mu^{2}\mu^{*}(\omega^{4}-5\omega_{0}^{2}\omega^{2}+4\omega_{0}^{4})^{3}},\nonumber\\ &\gamma=\frac{e^{3}\hbar C\omega^{3}_{0}\omega^{2}(\omega^{2}+12\omega_{0}^{2})\mathcal{M}_{xyz}} {\mu^{2}\mu^{*2}(\omega^{4}-5\omega_{0}^{2}\omega^{2}+4\omega_{0}^{4})^{3}},\nonumber\\ &\xi=\frac{-2e^{3}\hbar C\mathcal{M}_{xyz}\omega^{3}_{0}\omega(19\omega^{6}-842\omega^{4}\omega_{0}^{2}-224\omega^{2}\omega_{0}^{4} -672\omega_{0}^{6})} {15\mu^{2}\mu^{*2}(\omega^{2}-\omega_{0}^{2})^{3}(\omega^{2}-4\omega_{0}^{2})^{5}},\nonumber \end{align} where $\mathcal{M}_{xyz}$ and $\mathcal{N}_{xyz}$ are dimensionless functions of the anisotropy factors, $\mathcal{M}_{xyz}\equiv\eta^{zy}\eta^{yx}\eta^{xz}$, $\mathcal{N}_{xyz}\equiv\eta^{yx}\eta^{yz}+\eta^{zx}\eta^{zy}+\eta^{xy}\eta^{xz}$. \section*{References} \end{document}
\begin{document} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \begin{center} \vskip 1cm{\LARGE\bf The Generalization of Faulhaber's Formula to \vskip0.1 in Sums of Arbitrary Complex Powers} \vskip 1cm \large Raphael Schumacher\\ Department of Mathematics\\ ETH Zurich\\ R\"amistrasse 101\\ 8092 Zurich\\ Switzerland\\ \href{[email protected]}{\tt [email protected]}\\ \end{center} \vskip .2 in \begin{abstract} In this paper we present a generalization of Faulhaber's formula to sums of arbitrary complex powers $m\in\mathbb{C}$. These summation formulas for sums of the form $\sum_{k=1}^{\lfloor x\rfloor}k^{m}$ and $\sum_{k=1}^{n}k^{m}$, where $x\in\mathbb{R}^{+}$ and $n\in\mathbb{N}$, are based on a series acceleration involving Stirling numbers of the first kind. While it is well known that the corresponding expressions obtained from the Euler-Maclaurin summation formula diverge, our summation formulas are all very rapidly convergent. \end{abstract} \section{Introduction} \label{sec:Introduction} For two natural numbers $m,n\in\mathbb{N}_{0}$, the Faulhaber formula \cite{1}, given by \begin{equation} \sum_{k=0}^{n}k^{m}=\frac{1}{m+1}\sum_{k=0}^{m}(-1)^{k}{m+1\choose k}B_{k}n^{m-k+1}, \end{equation} where the $B_{k}$'s are the Bernoulli numbers, provides a very efficient way to compute the sum of the $m$-th powers of the first $n$ natural numbers. This formula was found by Jacob Bernoulli around 1700 and was first proved by Carl Gustav Jacobi in 1834.\\ We will prove a rapidly convergent exact generalization of Faulhaber's formula to finite sums of the form $\sum_{k=1}^{\lfloor x\rfloor}k^{m}$ and $\sum_{k=1}^{n}k^{m}$ for all exponents $m\in\mathbb{C}$. Our key tool will be the so-called Weniger transformation \cite[\text{(4.1)}]{2} found by J. Weniger, transforming an inverse power series into an inverse factorial series \cite[\text{(1.1)}]{2}. 
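As an illustrative aside (a minimal numerical sketch, not part of the formal development below), the elementary expansion on which this transformation rests, $\frac{1}{z^{l}}=\sum_{k\geq l}\frac{(-1)^{k-l}S_{k}^{(1)}(l)}{(z+1)(z+2)\cdots(z+k)}$ for $\re(z)>0$, where $S_{k}^{(1)}(l)$ are the Stirling numbers of the first kind, can be checked directly with a few lines of Python using only the standard library:
\begin{verbatim}
# Numerical sketch: partial sums of the inverse factorial series for 1/z^l,
#   1/z^l = sum_{k>=l} (-1)^(k-l) * S_k^(1)(l) / ((z+1)(z+2)...(z+k)),
# with the signed Stirling numbers of the first kind generated by the
# standard recurrence  s(k,l) = s(k-1,l-1) - (k-1)*s(k-1,l).
from fractions import Fraction

def stirling1(kmax):
    """Table s[k][l] of signed Stirling numbers of the first kind."""
    s = [[0] * (kmax + 1) for _ in range(kmax + 1)]
    s[0][0] = 1
    for k in range(1, kmax + 1):
        for l in range(1, k + 1):
            s[k][l] = s[k - 1][l - 1] - (k - 1) * s[k - 1][l]
    return s

def partial_sum(z, l, terms):
    """Truncation of the inverse factorial series for 1/z^l (exact rationals)."""
    s = stirling1(terms)
    total, poch = Fraction(0), Fraction(1)
    for k in range(1, terms + 1):
        poch *= z + k                # (z+1)(z+2)...(z+k)
        total += Fraction((-1) ** (k - l) * s[k][l]) / poch
    return total

z, l = 7, 2
for terms in (5, 10, 20, 40):
    print(terms, float(partial_sum(z, l, terms)), 1 / z ** l)
\end{verbatim}
For $z=7$ and $l=2$ the printed partial sums approach $1/49$ as more terms are retained, in accordance with the convergence statements recalled in Section \ref{sec:The Structure of Inverse Factorial Series Expansions}.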
This transformation of inverse power series was first found by Oskar Schl\"omilch around 1850 \cite{3,4,5} based on earlier work of James Stirling from 1730 \cite{6}.\\ In an expanded form, one of our summation formulas for the sum $\sum_{k=1}^{n}\sqrt{k}$, where $n\in\mathbb{N}$, looks like \begin{equation}\label{square roots formula} \begin{split} \sum_{k=1}^{n}\sqrt{k}&=\frac{2}{3}n^{\frac{3}{2}}+\frac{1}{2}\sqrt{n}-\frac{1}{4\pi}\zeta\left(\frac{3}{2}\right)+\sqrt{n}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}\frac{(2l-3)!!}{2^{l}(l+1)!}B_{l+1}S_{k}^{(1)}(l)}{(n+1)(n+2)\cdots(n+k)}\\ &=\frac{2}{3}n^{\frac{3}{2}}+\frac{1}{2}\sqrt{n}-\frac{1}{4\pi}\zeta\left(\frac{3}{2}\right)+\frac{\sqrt{n}}{24(n+1)}+\frac{\sqrt{n}}{24(n+1)(n+2)}+\frac{53\sqrt{n}}{640(n+1)(n+2)(n+3)}\\ &\quad+\frac{79\sqrt{n}}{320(n+1)(n+2)(n+3)(n+4)}+\ldots, \end{split} \end{equation} where the $B_{l}$'s are the Bernoulli numbers and $S_{k}^{(1)}(l)$ denotes the Stirling numbers of the first kind.\\ The above identity \eqref{square roots formula} is deduced by setting $x:=n\in\mathbb{N}$ in the more general formula \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}\sqrt{k}&=\frac{2}{3}x^{\frac{3}{2}}-\frac{1}{4\pi}\zeta\left(\frac{3}{2}\right)-\sqrt{x}B_{1}(\{x\})+\sqrt{x}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}(-1)^{l}\frac{(2l-3)!!}{2^{l}(l+1)!}S_{k}^{(1)}(l)B_{l+1}(\{x\})}{(x+1)(x+2)\cdots(x+k)}\\ &=\frac{2}{3}x^{\frac{3}{2}}-\frac{1}{4\pi}\zeta\left(\frac{3}{2}\right)+\left(\frac{1}{2}-\{x\}\right)\sqrt{x}+\frac{\left(\frac{1}{4}\{x\}^{2}-\frac{1}{4}\{x\}+\frac{1}{24}\right)\sqrt{x}}{(x+1)}\\ &\quad+\frac{\left(\frac{1}{24}\{x\}^{3}+\frac{3}{16}\{x\}^{2}-\frac{11}{48}\{x\}+\frac{1}{24}\right)\sqrt{x}}{(x+1)(x+2)}+\frac{\left(\frac{1}{64}\{x\}^{4}+\frac{3}{32}\{x\}^{3}+\frac{21}{64}\{x\}^{2}-\frac{7}{16}\{x\}+\frac{53}{640}\right)\sqrt{x}}{(x+1)(x+2)(x+3)}\\ &\quad+\frac{\left(\frac{1}{128}\{x\}^{5}+\frac{19}{256}\{x\}^{4}+\frac{109}{384}\{x\}^{3}+\frac{29}{32}\{x\}^{2}-\frac{977}{768}\{x\}+\frac{79}{320}\right)\sqrt{x}}{(x+1)(x+2)(x+3)(x+4)}+\ldots, \end{split} \end{equation} where this time the $B_{l}(\{x\})$'s are the Bernoulli polynomials evaluated at the fractional part $\{x\}$ and $x\in\mathbb{R}^{+}$ is a positive real number. All other formulas in this article have a similar shape when expanded.\\ We have searched the literature and the internet for our resulting formulas. We could find only two of them, namely equation \eqref{StirlingExpansion} and its analogues for the sums $\sum_{k=1}^{n}\frac{1}{k^{m}}$ with $m\in\mathbb{N}_{\geq2}$, which were already known to Stirling in 1730 \cite{3,7}, and equation \eqref{GregorioFontanaExpansion}, which was obtained by Gregorio Fontana around 1780 \cite{3,8}. Both of these formulas were originally found in another form without the use of Bernoulli numbers.\\ We believe that all other generalized Faulhaber formulas presented in this article are new and that our method of obtaining them has not been recognized before. \section{Definitions and Basic Facts} \label{sec:Definitions and Basic Facts} As usual, we denote the floor of $x$ by $\lfloor x\rfloor$ and the fractional part of $x$ by $\{x\}$.\\ The symbol $\mathbb{N}:=\{1,2,3,4,\ldots\}$ denotes the set of natural numbers and $\mathbb{R}^{+}:=\left\{x\in\mathbb{R}:x>0\right\}$ represents the set of positive real numbers. We also set $\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}$ and $\mathbb{R}_{0}^{+}:=\mathbb{R}^{+}\cup\{0\}$. 
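Before proceeding with the remaining notation, we note as a side remark that expansions such as \eqref{square roots formula} are straightforward to test numerically. The following sketch (an illustration only, not part of the proofs below; it assumes the third-party \texttt{mpmath} library for the Bernoulli numbers and for $\zeta\left(\frac{3}{2}\right)$) evaluates the right-hand side of \eqref{square roots formula} for $n=10$ with the series truncated after eight terms and compares it with the directly computed sum:
\begin{verbatim}
# Illustrative check of the expansion for sum_{k=1}^{n} sqrt(k):
#   (2/3) n^(3/2) + (1/2) sqrt(n) - zeta(3/2)/(4 pi)
#   + sqrt(n) * sum_k (-1)^(k+1)
#       [ sum_l (2l-3)!!/(2^l (l+1)!) * B_{l+1} * S_k^(1)(l) ]
#       / ((n+1)(n+2)...(n+k)).
# Assumes the mpmath library; Stirling numbers come from their recurrence.
import mpmath as mp

mp.mp.dps = 30                                   # working precision

def stirling1(kmax):
    s = [[0] * (kmax + 1) for _ in range(kmax + 1)]
    s[0][0] = 1
    for k in range(1, kmax + 1):
        for l in range(1, k + 1):
            s[k][l] = s[k - 1][l - 1] - (k - 1) * s[k - 1][l]
    return s

def double_factorial(n):                         # convention: (-1)!! = 1
    result = 1
    while n > 1:
        result, n = result * n, n - 2
    return result

def sqrt_sum_expansion(n, terms):
    s = stirling1(terms)
    total = mp.mpf(2) / 3 * mp.mpf(n) ** mp.mpf('1.5') \
            + mp.sqrt(n) / 2 - mp.zeta(mp.mpf('1.5')) / (4 * mp.pi)
    poch = mp.mpf(1)
    for k in range(1, terms + 1):
        poch *= n + k
        inner = sum(mp.mpf(double_factorial(2 * l - 3)) * mp.bernoulli(l + 1)
                    / (2 ** l * mp.factorial(l + 1)) * s[k][l]
                    for l in range(1, k + 1))
        total += (-1) ** (k + 1) * mp.sqrt(n) * inner / poch
    return total

n = 10
print(sum(mp.sqrt(k) for k in range(1, n + 1)))  # direct evaluation
print(sqrt_sum_expansion(n, 8))                  # truncated expansion
\end{verbatim}
For $n=10$ the two printed values agree to several significant digits, in line with the rapid convergence of these expansions.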
Moreover, we define $\mathbb{H}^{+}:=\{z\in\mathbb{C}:\re(z)>0\}$ and denote by $\zeta(s)$ the Riemann zeta function at the point $s\in\mathbb{C}\setminus\{1\}$. For a complex number $z=re^{i\varphi}\in\mathbb{C}$, we denote by $|z|=r\in\mathbb{R}_{0}^{+}$ its absolute value and by $\varphi=\arg(z)\in(-\pi,\pi]$ its argument or phase. We define for all $\theta\in\mathbb{R}$ the secant function by $\sec(\theta):=\frac{1}{\cos(\theta)}$.\\ The double factorial function for $n\in\mathbb{N}_{0}$ is defined by $n!!:=\prod_{k=0}^{\left\lfloor\frac{n-1}{2}\right\rfloor}(n-2k)$. \begin{definition}(Pochhammer symbol)\;\cite[\text{p.\;1429}]{2}\\ We define the \emph{Pochhammer symbol} (or rising factorial function) $(z)_{k}$ by \begin{equation} (z)_{k}:=z(z+1)(z+2)(z+3)\cdots(z+k-1)=\frac{\Gamma(z+k)}{\Gamma(z)}, \end{equation} where $\Gamma(z)$ is the gamma function \cite[\text{(5.2.1), p.\;136}]{9} defined as the meromorphic continuation of the integral \begin{equation} \Gamma(z):=\int_{0}^{\infty}e^{-t}t^{z-1}dt\;\;\text{for all $z\in\mathbb{C}$ with $\re(z)>0$} \end{equation} to the whole complex plane $\mathbb{C}$.\\ \end{definition} \begin{definition}(Stirling numbers of the first kind)\;\cite[\text{(A.2), p.\;1437}]{2}, \cite[\text{A008275}]{10}\\ Let $k,l\in\mathbb{N}_{0}$ be two non-negative integers such that $k\geq l\geq0$. We set the \emph{Stirling numbers of the first kind} $S_{k}^{(1)}(l)$ as the connecting coefficients in the identity \begin{equation} (z)_{k}=(-1)^{k}\sum_{l=0}^{k}(-1)^{l}S_{k}^{(1)}(l)z^{l}, \end{equation} where $(z)_{k}$ is the rising factorial function. Furthermore, we set $S_{k}^{(1)}(l)=0$ if $k,l\in\mathbb{N}_{0}$ with $l>k$. \end{definition} \begin{definition}(Binomial Coefficients)\;\cite{11}\\ We introduce the \emph{binomial coefficient} ${z\choose s}$ for all $z\in\mathbb{C}$ and $s\in\mathbb{C}$ by \cite[\text{(5) and (11), pp.\;8-9}]{11} \begin{equation} \begin{split} {z\choose s}:&=\frac{\Gamma(z+1)}{\Gamma(s+1)\Gamma(z-s+1)}. \end{split} \end{equation} Moreover, we have for $z\in\mathbb{C}\setminus\{0,-1,-2,-3,\ldots\}$ the following asymptotic expansion as $k\rightarrow\infty$ \cite[\text{(18), p.\;2 and p.\;35}]{11} \begin{equation}\label{binomial coefficient bound} {z\choose k}=\frac{(-1)^{k}}{\Gamma(-z)k^{z+1}}+O\left(\frac{1}{k^{z+2}}\right)\;\;\text{for all $z\in\mathbb{C}$ and $k\in\mathbb{N}$}. \end{equation} \end{definition} \begin{definition}(Bernoulli polynomials and Bernoulli numbers)\;\cite{1}, \cite{12}, \cite{13}, \cite{14}\\ We define for $n\in\mathbb{N}_{0}$ the \emph{$n$-th Bernoulli polynomial} $B_{n}(x)$ via the following exponential generating function \cite{1} as \begin{equation} \frac{te^{xt}}{e^{t}-1}=\sum_{n=0}^{\infty}\frac{B_{n}(x)}{n!}t^{n}\;\;\forall t\in\mathbb{C}\;\text{with $|t|<2\pi$}. \end{equation} We also define the \emph{$n$-th Bernoulli number} $B_{n}$ as the value of the $n$-th Bernoulli polynomial $B_{n}(x)$ at the point $x=0$, that is \begin{equation}\label{Bernoulli number} B_{n}:=B_{n}(0). \end{equation} It holds for all $n\in\mathbb{N}_{0}$ the explicit formula \cite[\text{Proposition 23.2, p.\;86}]{12} \begin{equation}\label{Bernoulli polynomial} \begin{split} B_{n}(x)&=\sum_{k=0}^{n}{n\choose k}B_{k}x^{n-k}. \end{split} \end{equation} It holds for all $0\leq y\leq1$ that \cite[\text{Corollary B.4, (B.21), p.\;500}]{13} \begin{equation} \left|B_{1}(y)\right|\leq\frac{1}{2}\;\;\text{and that}\;\;\left|B_{n}(y)\right|\leq\frac{2\zeta(n)n!}{(2\pi)^{n}}\;\;\text{for all $n\in\mathbb{N}_{\geq2}$}. 
\end{equation} We have that \cite[\text{(1.10), p.\;282}]{14} \begin{equation}\label{Bernoulli relation} (-1)^{k}B_{k}(1-y)=B_{k}(y)\;\;\text{for all $k\in\mathbb{N}_{0}$ and $0\leq y\leq1$}. \end{equation} \end{definition} \begin{definition}\label{digammafunction}(Digamma function)\;\cite[\text{pp.\;136-138}]{9}\\ We set the \emph{digamma function} $\psi(z)$ to \begin{equation} \begin{split} \psi(z):&=\frac{\Gamma'(z)}{\Gamma(z)}\;\;\text{for all $z\in\mathbb{C}\setminus\{0,-1,-2,-3,\ldots\}$}. \end{split} \end{equation} Therefore, $\psi(z)$ is an analytic function for all $z\in\mathbb{C}\setminus{(-\infty,0]}$. For all $z\in\mathbb{C}\setminus\{0,-1,-2,-3,\ldots\}$, we have the identity \cite[\text{(5.5.2), p.\;138}]{9} \begin{equation}\label{digamma identity} \begin{split} \psi(z+1)&=\psi(z)+\frac{1}{z} \end{split} \end{equation} and for all $n\in\mathbb{N}$ we have the formula \cite[\text{(5.4.14), p.\;137}]{9} \begin{equation}\label{digamma summation formula} \begin{split} \sum_{k=1}^{n}\frac{1}{k}&=\psi(n+1)+\gamma. \end{split} \end{equation} \end{definition} \begin{definition}\label{Hurwitzzetafunction}(Hurwitz zeta function)\;\cite[\text{p.\;607}]{9}\\ We define the \emph{Hurwitz zeta function} $\zeta(s,z)$ for all complex numbers $s\in\mathbb{C}$ with $\re(s)>1$ and all $z\in\mathbb{C}\setminus\{0,-1,-2,-3,\ldots\}$ by \begin{equation} \begin{split} \zeta(s,z):&=\sum_{k=0}^{\infty}\frac{1}{(k+z)^{s}}. \end{split} \end{equation} The function $\zeta(s,z)$ extends to an analytic function on $\mathbb{C}\setminus\{0,-1,-2,-3,\ldots\}$ in the $z$-variable and for every $z\notin\{0,-1,-2,-3,\ldots\}$ to a meromorphic function of $s$ on $\mathbb{C}$ with a single simple pole at $s=1$. It satisfies for all $s\in\mathbb{C}\setminus\{1\}$ and all $z\in\mathbb{C}\setminus\{0,-1,-2,-3,\ldots\}$ the identity \begin{equation}\label{Hurwitz zeta identity} \begin{split} \zeta(s,z+1)&=\zeta(s,z)-\frac{1}{z^{s}}. \end{split} \end{equation} For all $m\in\mathbb{C}\setminus\{-1\}$ and all $n\in\mathbb{N}$ we have the formula \cite[\text{(1.2), p.\;2}]{15} \begin{equation}\label{Hurwitz summation formula} \begin{split} \sum_{k=1}^{n}k^{m}&=\zeta(-m)-\zeta(-m,n+1). \end{split} \end{equation} \end{definition} \section{The Structure of Inverse Factorial Series Expansions} \label{sec:The Structure of Inverse Factorial Series Expansions} In this section we study the structure of inverse factorial series expansions for analytic functions possessing an asymptotic series expansion by applying a Theorem of G. N. Watson \cite[\text{Theorem\;2, p.\;45}]{16}. The main result of this section is Theorem \ref{Structure of inverse factorial series expansions}, from which we will later deduce convergent inverse factorial series expansions for the functions $\zeta(s,z+1-y)$ and $\psi(z+1-y)$, where $0\leq y\leq1$.\\ For this procedure, we need the following variant of a result found by J. Weniger \cite[\text{(4.1), p.\;1433}]{2} \begin{lemma}(finite Weniger transformation)\label{finite Weniger transformation}\;\cite{2}\\ For every finite inverse power series $\sum_{k=1}^{n}\frac{a_{k}}{z^{k}}$, where the $a_{k}$'s are any complex numbers and $n\in\mathbb{N}$, the following transformation formula holds \begin{equation} \label{finite Weniger transformation formula} \begin{split} \sum_{k=1}^{n}\frac{a_{k}}{z^{k}}&=\sum_{k=1}^{n}\frac{(-1)^{k}\sum_{l=1}^{k}(-1)^{l}S_{k}^{(1)}(l)a_{l}}{(z+1)(z+2)\cdots(z+k)}+O\left(\frac{1}{z^{n+1}}\right)\;\;\;\;\text{as $z\rightarrow\infty$}. 
\end{split} \end{equation} Moreover, we have that \begin{equation} \label{Weniger transformation formula} \begin{split} \sum_{k=1}^{n}\frac{a_{k}}{z^{k}}&=\sum_{k=1}^{\infty}\frac{(-1)^{k}\sum_{l=1}^{n}(-1)^{l}S_{k}^{(1)}(l)a_{l}}{(z+1)(z+2)\cdots(z+k)}. \end{split} \end{equation} \end{lemma} \begin{proof} We have for $l\in\mathbb{N}$ that \cite[\text{(A.14), p.\;1438}]{2}, \cite[\text{(6), p.\;78}]{17} \begin{displaymath} \begin{split} \frac{1}{z^{l}}&=\sum_{k=0}^{\infty}\frac{(-1)^{k}S_{k+l}^{(1)}(l)}{(z+1)(z+2)\cdots(z+k+l)}\\ &=\sum_{k=1}^{\infty}\frac{(-1)^{k-l}S_{k}^{(1)}(l)}{(z+1)(z+2)\cdots(z+k)}\\ &=\sum_{k=1}^{n}\frac{(-1)^{k-l}S_{k}^{(1)}(l)}{(z+1)(z+2)\cdots(z+k)}+O\left(\frac{1}{z^{n+1}}\right)\;\;\text{as $z\rightarrow\infty$}. \end{split} \end{displaymath} Therefore, we obtain that \begin{displaymath} \begin{split} \sum_{k=1}^{n}\frac{a_{k}}{z^{k}}=\sum_{l=1}^{n}\frac{a_{l}}{z^{l}}&=\sum_{l=1}^{n}a_{l}\sum_{k=1}^{n}\frac{(-1)^{k-l}S_{k}^{(1)}(l)}{(z+1)(z+2)\cdots(z+k)}+O\left(\frac{1}{z^{n+1}}\right)\\ &=\sum_{k=1}^{n}\frac{(-1)^{k}\sum_{l=1}^{n}(-1)^{l}S_{k}^{(1)}(l)a_{l}}{(z+1)(z+2)\cdots(z+k)}+O\left(\frac{1}{z^{n+1}}\right)\\ &=\sum_{k=1}^{n}\frac{(-1)^{k}\sum_{l=1}^{k}(-1)^{l}S_{k}^{(1)}(l)a_{l}}{(z+1)(z+2)\cdots(z+k)}+O\left(\frac{1}{z^{n+1}}\right)\;\;\text{as $z\rightarrow\infty$}, \end{split} \end{displaymath} which is the first claimed formula \eqref{finite Weniger transformation formula}.\\ The second formula \eqref{Weniger transformation formula} follows from the calculation \begin{displaymath} \begin{split} \sum_{k=1}^{n}\frac{a_{k}}{z^{k}}=\sum_{l=1}^{n}\frac{a_{l}}{z^{l}}&=\sum_{l=1}^{n}a_{l}\sum_{k=1}^{\infty}\frac{(-1)^{k-l}S_{k}^{(1)}(l)}{(z+1)(z+2)\cdots(z+k)}\\ &=\sum_{k=1}^{\infty}\frac{(-1)^{k}\sum_{l=1}^{n}(-1)^{l}S_{k}^{(1)}(l)a_{l}}{(z+1)(z+2)\cdots(z+k)}, \end{split} \end{displaymath} because we can always interchange a finite summation with an infinite summation. \end{proof} \begin{lemma}\label{Uniqueness of inverse factorial series expansions}(Uniqueness of inverse factorial series expansions)\\ If a function $f(z)$ has for all $z\in\mathbb{C}$ with $\re(z)>0$ the absolutely convergent series expansion \begin{displaymath} \begin{split} f(z)&=\sum_{k=1}^{\infty}\frac{b_{k}}{(z+1)(z+2)\cdots(z+k)} \end{split} \end{displaymath} and the asymptotic expansion \begin{displaymath} \begin{split} f(z)&=\sum_{k=1}^{n}\frac{c_{k}}{(z+1)(z+2)\cdots(z+k)}+O\left(\frac{1}{z^{n+1}}\right)\;\;\;\;\text{as $z\rightarrow\infty$}, \end{split} \end{displaymath} then we have that $c_{k}=b_{k}$ for all $k\in\mathbb{N}$ and we have the absolutely convergent series expansion \begin{displaymath} \begin{split} f(z)&=\sum_{k=1}^{\infty}\frac{c_{k}}{(z+1)(z+2)\cdots(z+k)}. \end{split} \end{displaymath} \end{lemma} \begin{proof} From the given absolutely convergent inverse factorial series expansion of $f(z)$, we deduce for all $n\in\mathbb{N}$ that \begin{displaymath} \begin{split} \sum_{k=n}^{\infty}\frac{b_{k}}{(z+1)(z+2)\cdots(z+k)}&\leq\sum_{k=n}^{\infty}\frac{|b_{k}|}{|(z+1)||(z+2)|\cdots|(z+k)|}=O\left(\frac{1}{z^{n}}\right)\;\;\text{as $z\rightarrow\infty$}, \end{split} \end{displaymath} which means that $\lim_{z\rightarrow\infty}(f(z))=0$ and that we have \begin{displaymath} \begin{split} z^{m}\sum_{k=n}^{\infty}\frac{b_{k}}{(z+1)(z+2)\cdots(z+k)}\longrightarrow0\;\;\text{as $z\rightarrow\infty$ for all $m\in\{0,1,2,\ldots,n-1\}$}. 
\end{split} \end{displaymath} The result now follows by induction on $n\in\mathbb{N}$ via a repeated application of the above limit.\\ For $n=1$, we get \begin{displaymath} \begin{split} f(z)&=\frac{b_{1}}{z+1}+\sum_{k=2}^{\infty}\frac{b_{k}}{(z+1)(z+2)\cdots(z+k)} =\frac{c_{1}}{z+1}+O\left(\frac{1}{z^{2}}\right)\;\;\;\;\text{as $z\rightarrow\infty$}, \end{split} \end{displaymath} which implies by multiplying both sides with $z+1$ and letting $z\rightarrow\infty$ that $c_{1}=b_{1}$.\\ Similarly, for $n=2$, we get using $c_{1}=b_{1}$ that \begin{displaymath} \begin{split} f(z)-\frac{c_{1}}{z+1}&=\frac{b_{2}}{(z+1)(z+2)}+\sum_{k=3}^{\infty}\frac{b_{k}}{(z+1)(z+2)\cdots(z+k)} =\frac{c_{2}}{(z+1)(z+2)}+O\left(\frac{1}{z^{3}}\right), \end{split} \end{displaymath} which implies by multiplying both sides with $(z+1)(z+2)$ and letting $z\rightarrow\infty$ that $c_{2}=b_{2}$.\\ In general, we can induct from $n-1$ to $n$ using that $c_{k}=b_{k}$ for all $k\in\{1,2,3,\ldots,n-1\}$ by the identity \begin{displaymath} \begin{split} f(z)-\sum_{k=1}^{n-1}\frac{c_{k}}{(z+1)(z+2)\cdots(z+k)}&=\frac{b_{n}}{(z+1)(z+2)\cdots(z+n)}+\sum_{k=n+1}^{\infty}\frac{b_{k}}{(z+1)(z+2)\cdots(z+k)}\\ &=\frac{c_{n}}{(z+1)(z+2)\cdots(z+n)}+O\left(\frac{1}{z^{n+1}}\right)\;\;\;\;\text{as $z\rightarrow\infty$}, \end{split} \end{displaymath} again by multiplying both sides with $(z+1)(z+2)\cdots(z+n)$ and letting $z\rightarrow\infty$ to conclude that $c_{k}=b_{k}$ holds also for $k=n$.\\ This proves that $c_{k}=b_{k}$ for all $k\in\mathbb{N}$. \end{proof} \noindent The key to our generalized Faulhaber formulas will be the following \begin{theorem}\label{Watson's Transformation Theorem}(Watson's Transformation Theorem)\;\cite[\text{Theorem\;2, p.\;45}]{16}\\ Let $f(z)$ be a function of $z\in\mathbb{C}$ which is analytic when $\re(z)>0$; and let $f(z)$ be also analytic in the region $D$ of the complex plane defined by \begin{displaymath} \begin{split} D:&=\left\{z\in\mathbb{C}:|z|>\gamma\;\text{and}\;|\arg(z)|\leq\frac{\pi}{2}+\alpha+3\delta\right\}, \end{split} \end{displaymath} where $\gamma\geq0$ is a finite number, $\alpha>0$, $\delta>0$ and $\alpha+3\delta<\frac{\pi}{2}$.\\ In the region $D$ let $f(z)$ possess the asymptotic expansion \begin{displaymath} \begin{split} f(z)&=\sum_{k=0}^{n}\frac{a_{k}}{z^{k}}+R_{n}(z)=a_{0}+\frac{a_{1}}{z}+\frac{a_{2}}{z^{2}}+\frac{a_{3}}{z^{3}}+\frac{a_{4}}{z^{4}}+\ldots+\frac{a_{n}}{z^{n}}+R_{n}(z), \end{split} \end{displaymath} where \begin{displaymath} \begin{split} |a_{n}|&<A\rho^{n}n!\;\;\;\;\text{and}\;\;\;\;|R_{n}(z)z^{n+1}|<B\sigma^{n}n!, \end{split} \end{displaymath} with some constants $A$, $B$, $\rho$ and $\sigma$, which are independent of $n$.\\ Let $M\leq M_{0}$ be any positive real number, where $M_{0}$ is the largest positive root of the equation \begin{displaymath} \begin{split} e^{-\frac{2\cos(\alpha)}{\rho M_{0}}}-2\cos\left(\frac{\sin(\alpha)}{\rho M_{0}}\right)\cdot e^{-\frac{\cos(\alpha)}{\rho M_{0}}}+1-p^{2}&=0, \end{split} \end{displaymath} where \begin{displaymath} \begin{split} 1&<p<1+e^{-\pi\cot(\alpha)}. \end{split} \end{displaymath} Then the function $f(z)$ can be expanded into the absolutely convergent series \begin{displaymath} \begin{split} f(z)&=b_{0}+\sum_{k=1}^{\infty}\frac{b_{k}}{(Mz+w+1)(Mz+w+2)\cdots(Mz+w+k)}, \end{split} \end{displaymath} when $\re(z)>0$ and $w\in\mathbb{C}$ with $\re(w)\geq0$. \end{theorem} \begin{proof} The proof of this Theorem is given in Watson's paper \cite{16}. 
\end{proof} It follows \begin{theorem}\label{Structure of inverse factorial series expansions}(Structure of inverse factorial series expansions)\\ Let $f(z)$ be a function of $z\in\mathbb{C}$ which is analytic when $\re(z)>0$; and let $f(z)$ be also analytic in the region $D$ of the complex plane defined by \begin{displaymath} \begin{split} D:&=\left\{z\in\mathbb{C}:|z|>0\;\text{and}\;|\arg(z)|\leq\pi-\varepsilon,\;\text{where $\varepsilon>0$ is arbitrarily small}\right\}. \end{split} \end{displaymath} In the region $D$ let $f(z)$ possess the asymptotic expansion \begin{displaymath} \begin{split} f(z)&=\sum_{k=1}^{n}\frac{a_{k}}{z^{k}}+R_{n}(z)=\frac{a_{1}}{z}+\frac{a_{2}}{z^{2}}+\frac{a_{3}}{z^{3}}+\ldots+\frac{a_{n}}{z^{n}}+R_{n}(z), \end{split} \end{displaymath} where \begin{displaymath} \begin{split} |a_{n}|&<A\rho^{n}n!\;\;\;\;\text{and}\;\;\;\;|R_{n}(z)z^{n+1}|<B\sigma^{n}n!, \end{split} \end{displaymath} with some constants $A$, $B$, $\rho<\frac{3}{\pi}$ and $\sigma$, which are independent of $n$.\\ Then the function $f(z)$ is equal to the absolutely convergent inverse factorial series \begin{equation}\label{structure of inverse factorial series expansion} \begin{split} f(z)&=\sum_{k=1}^{\infty}\frac{(-1)^{k}\sum_{l=1}^{k}(-1)^{l}S_{k}^{(1)}(l)a_{l}}{(z+1)(z+2)\cdots(z+k)} \end{split} \end{equation} for $\re(z)>0$. \end{theorem} \begin{proof} Let $f(z)$ and the region $D$ be as described in the above Theorem \ref{Structure of inverse factorial series expansions}. Because of the conditions on the function $f(z)$ and the region $D$, we can choose in Theorem \ref{Watson's Transformation Theorem} the variables $\gamma:=0$, $\alpha:=\frac{\pi}{2}-4\varepsilon$, $\delta:=\varepsilon$ and $p:=1+\varepsilon$ for $\varepsilon>0$ arbitrarily small by \cite[\text{beginning of p.\;85}]{16}. We have then that $M_{0}=\frac{3}{\pi\rho}-\varepsilon$ for some arbitrarily small number $\varepsilon>0$ and because $\rho<\frac{3}{\pi}$, we obtain that $M_{0}>1$. According to Watson's Theorem \ref{Watson's Transformation Theorem} with $M:=1<M_{0}$, $w:=0$ and $a_{0}=b_{0}=0$, we know that we can expand the function $f(z)$ into an absolutely convergent series of the form \begin{displaymath} \begin{split} f(z)&=\sum_{k=1}^{\infty}\frac{b_{k}}{(z+1)(z+2)\cdots(z+k)} \end{split} \end{displaymath} for some constants $b_{k}\in\mathbb{C}$ and all $z\in\mathbb{C}$ with $\re(z)>0$.\\ On the other hand, we have by applying a finite Weniger transformation \eqref{finite Weniger transformation formula} to the asymptotic expansion of $f(z)$ that\begin{displaymath} \begin{split} f(z)&=\sum_{k=1}^{n}\frac{(-1)^{k}\sum_{l=1}^{k}(-1)^{l}S_{k}^{(1)}(l)a_{l}}{(z+1)(z+2)\cdots(z+k)}+O\left(\frac{1}{z^{n+1}}\right)\;\;\text{as $z\rightarrow\infty$} \end{split} \end{displaymath} also holds.\\ Comparing the two above expressions for $f(z)$ by using Lemma \ref{Uniqueness of inverse factorial series expansions} with $c_{k}:=(-1)^{k}\sum_{l=1}^{k}(-1)^{l}S_{k}^{(1)}(l)a_{l}$, we conclude that we must have the absolutely convergent series \begin{displaymath} \begin{split} f(z)&=\sum_{k=1}^{\infty}\frac{(-1)^{k}\sum_{l=1}^{k}(-1)^{l}S_{k}^{(1)}(l)a_{l}}{(z+1)(z+2)\cdots(z+k)} \end{split} \end{displaymath} for all $z\in\mathbb{C}$ with $\re(z)>0$. 
\end{proof} \section{The Convergent Inverse Factorial Series Expansions for $\zeta(s,z+1-y)$ and $\psi(z+1-y)$} \label{sec:The Series Expansions for zeta(s,z+1-y) and psi(z+1-y)} In this section, we deduce in Theorem \ref{Inverse factorial series expansions for zeta(s,z+1-y) and psi(z+1-y)} the convergent inverse factorial series expansions for the functions $\zeta(s,z+1-y)$ and $\psi(z+1-y)$, where $0\leq y\leq1$.\\ For this, we need the following \begin{lemma}(Euler-Maclaurin summation formula)\cite[\text{Theorem B.5, pp.\;500-501}]{13}\label{Euler-Maclaurin formula}\\ Suppose that $n\in\mathbb{N}$ is a positive integer and that the function $f(t)$ has continuous derivatives through the $n$-th order on the interval $[a,b]$ where $a$ and $b$ are real numbers with $a<b$.\\ Then we have \begin{equation} \label{Euler-Maclaurin summation formula} \begin{split} \sum_{a<k\leq b}f(k)&=\int_{a}^{b}f(t)dt+\sum_{k=1}^{n}(-1)^{k}\frac{B_{k}(\{b\})}{k!}f^{(k-1)}(b)-\sum_{k=1}^{n}(-1)^{k}\frac{B_{k}(\{a\})}{k!}f^{(k-1)}(a)\\ &\quad+\frac{(-1)^{n+1}}{n!}\int_{a}^{b}f^{(n)}(t)B_{n}(\{t\})dt. \end{split} \end{equation} \end{lemma} \begin{proof} The proof of this Lemma \ref{Euler-Maclaurin formula} is given in \cite[\text{p.\;501}]{13}. \end{proof} From the above Lemma \ref{Euler-Maclaurin formula}, it follows \begin{lemma}\label{Asymptotic series expansions for zeta(s,z+h) and psi(z+h) with 0leq hleq1}(Asymptotic series expansions for $\zeta(s,z+h)$ and $\psi(z+h)$ with $0\leq h\leq1$)\\ Let $n\in\mathbb{N}_{0}$ and let $s\in\mathbb{C}\setminus\{1\}$ such that $\re(s)>-n$. We have for $z\in\mathbb{C} $ with $|\arg(z)|<\pi$ and $0\leq h\leq1$ the asymptotic series expansions \begin{equation}\label{Hurwitz zeta expansion} \begin{split} \zeta(s,z+h)&=\frac{z^{1-s}}{s-1}+\frac{z^{1-s}}{s-1}\sum_{k=1}^{n}{1-s\choose k}\frac{B_{k}(h)}{z^{k}}+O_{n}(z) \end{split} \end{equation} with \begin{equation}\label{error term O_n(z)} \begin{split} \left|O_{n}(z)\right|&=\left|{1-s\choose n+2}\frac{n+2}{s-1}\int_{0}^{\infty}\frac{B_{n+1}\left(\{x-h\}\right)-(-1)^{n+1}B_{n+1}(h)}{(x+z)^{n+s+1}}dx\right|\\ &\leq\frac{2(n+2)}{|s-1|}\left|{1-s\choose n+2}\right|\frac{\left|B_{n+1}\right|\sec^{n+\re(s)+1}\left(\frac{1}{2}\arg(z)\right)}{(n+\re(s))|z|^{n+\re(s)}}\max\left\{1,e^{\im(s)\arg(z)}\right\} \end{split} \end{equation} and \begin{equation}\label{digamma expansion} \begin{split} \psi(z+h)&=\log(z)-\sum_{k=1}^{n}\frac{(-1)^{k}B_{k}(h)}{kz^{k}}+U_{n}(z) \end{split} \end{equation} with \begin{equation}\label{error term U_n(z)} \begin{split} \left|U_{n}(z)\right|&=\left|\int_{0}^{\infty}\frac{(-1)^{n+1}B_{n+1}(h)-B_{n+1}(\{x-h\})}{(x+z)^{n+2}}dx\right|\leq\frac{2\left|B_{n+1}\right|\sec^{n+2}\left(\frac{1}{2}\arg(z)\right)}{(n+1)|z|^{n+1}}. \end{split} \end{equation} \end{lemma} \begin{proof} Let $0<h\leq1$ and let $z\in\mathbb{C}$ with $\left|\arg(z)\right|<\pi$. 
Setting $a:=-h$, $b:=N$ and $f(x):=\frac{1}{(z+h+x)^{s}}$ with $\frac{d^{n}f(x)}{dx^{n}}=\frac{d^{n}}{dx^{n}}\left(\frac{1}{(z+h+x)^{s}}\right)=-\frac{(n+1)!}{s-1}{1-s\choose n+1}\frac{1}{(z+h+x)^{n+s}}$ into Lemma \ref{Euler-Maclaurin formula}, we obtain for $s\in\mathbb{C}$ with $\re(s)>1$ that \begin{displaymath} \begin{split} \zeta(s,z+h)&=\sum_{k=0}^{\infty}\frac{1}{(z+h+k)^{s}}=\lim_{N\rightarrow\infty}\left(\sum_{-h<k\leq N}\frac{1}{(z+h+k)^{s}}\right)\\ &=\int_{-h}^{\infty}\frac{1}{(z+h+x)^{s}}dx+\lim_{N\rightarrow\infty}\left(\sum_{k=1}^{n}(-1)^{k}\frac{B_{k}(\{N\})}{k!}\frac{d^{k-1}\left(\frac{1}{(z+h+x)^{s}}\right)}{dx^{k-1}}\Bigg|_{x=N}\right)\\ &\quad-\sum_{k=1}^{n}(-1)^{k}\frac{B_{k}(\{-h\})}{k!}\frac{d^{k-1}\left(\frac{1}{(z+h+x)^{s}}\right)}{dx^{k-1}}\Bigg|_{x=-h}+\frac{(-1)^{n+1}}{n!}\int_{-h}^{\infty}\frac{d^{n}\left(\frac{1}{(z+h+x)^{s}}\right)}{dx^{n}}\Bigg|_{x=t}B_{n}(\{t\})dt\\ &=\frac{z^{1-s}}{s-1}+\frac{z^{1-s}}{s-1}\sum_{k=1}^{n}(-1)^{k}{1-s\choose k}\frac{B_{k}(\{1-h\})}{z^{k}}+(-1)^{n}{1-s\choose n+1}\frac{n+1}{s-1}\int_{0}^{\infty}\frac{B_{n}(\{x-h\})}{(x+z)^{n+s}}dx. \end{split} \end{displaymath} This is equivalent to \begin{displaymath} \begin{split} \zeta(s,z+h)&=\frac{z^{1-s}}{s-1}+\frac{z^{1-s}}{s-1}\sum_{k=1}^{n}(-1)^{k}{1-s\choose k}\frac{B_{k}(1-h)}{z^{k}}\\ &\quad+(-1)^{n+1}{1-s\choose n+2}\frac{n+2}{s-1}\int_{0}^{\infty}\frac{B_{n+1}(\{x-h\})-B_{n+1}(1-h)}{(x+z)^{n+s+1}}dx. \end{split} \end{displaymath} We use the relation \eqref{Bernoulli relation} and deduce equation \eqref{Hurwitz zeta expansion}, which extends $\zeta(s,z+h)$ analytically to the whole punctured complex $s$-plane $\mathbb{C}\setminus\{1\}$. Therefore, the equation \eqref{Hurwitz zeta expansion} is also true for all $s\in\mathbb{C}\setminus\{1\}$. By using the identity \eqref{Hurwitz zeta identity}, we see that the formula \eqref{Hurwitz zeta expansion} is also true for $h=0$. The bound \eqref{error term O_n(z)} for the error term $O_{n}(z)$ follows from \cite[\text{p.\;294}]{14} and \cite[\text{p.\;6}]{15}. This proves the first part about $\zeta(s,z+h)$ of the above Lemma \ref{Asymptotic series expansions for zeta(s,z+h) and psi(z+h) with 0leq hleq1}.\\ Now, we prove the second part for $\psi(z+h)$. We have for $z\in\mathbb{C}$ with $\left|\arg(z)\right|<\pi$, $n\geq2$ and $0\leq h\leq1$ the series expansion \cite[\text{Ex.\;4.4, p.\;295}]{14} \begin{equation}\label{LogGammaExpansion} \begin{split} \ln\left(\Gamma(z+h)\right)&=\left(z+h-\frac{1}{2}\right)\ln(z)-z+\frac{1}{2}\ln(2\pi)+\sum_{k=2}^{n}\frac{(-1)^{k}B_{k}(h)}{k(k-1)z^{k-1}}-\frac{1}{n}\int_{0}^{\infty}\frac{B_{n}\left(\{x-h\}\right)}{(x+z)^{n}}dx\\ &\hspace{-1.8cm}=\left(z+h-\frac{1}{2}\right)\ln(z)-z+\frac{1}{2}\ln(2\pi)+\sum_{k=2}^{n}\frac{(-1)^{k}B_{k}(h)}{k(k-1)z^{k-1}}+\frac{1}{n+1}\int_{0}^{\infty}\tfrac{(-1)^{n+1}B_{n+1}(h)-B_{n+1}\left(\{x-h\}\right)}{(x+z)^{n+1}}dx. \end{split} \end{equation} Differentiating this identity with respect to the variable $z$, we get equation \eqref{digamma expansion}. The estimate \eqref{error term U_n(z)} for the error term $U_{n}(z)$ follows from \cite[\text{p.\;294 and Ex.\;4.2, p.\;295}]{14}. \end{proof} We get the following \begin{theorem}\label{Inverse factorial series expansions for zeta(s,z+1-y) and psi(z+1-y)}(Inverse factorial series expansions for $\zeta(s,z+1-y)$ and $\psi(z+1-y)$)\\ Let $0\leq y\leq1$ and $s\in\mathbb{C}\setminus\{1\}$. 
We have for $z\in\mathbb{H}^{+}$ and all $a\in\mathbb{N}_{0}$ the absolutely convergent inverse factorial series expansions \begin{equation}\label{inverse factorial series expansion for the Hurwitz zeta function} \begin{split} \zeta(s,z+1-y)&=\frac{z^{1-s}}{s-1}+\frac{z^{1-s}}{s-1}\sum_{k=1}^{a}(-1)^{k}{1-s\choose k}\frac{B_{k}(y)}{z^{k}}\\ &\quad+\frac{z^{1-s-a}}{s-1}\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}{1-s\choose l+a}S_{k}^{(1)}(l)B_{l+a}(y)}{(z+1)(z+2)\cdots(z+k)} \end{split} \end{equation} and \begin{equation}\label{inverse factorial series expansion for the digamma function} \begin{split} \psi(z+1-y)&=\log(z)-\sum_{k=1}^{a}\frac{B_{k}(y)}{kz^{k}}+\frac{1}{z^{a}}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}}{l+a}S_{k}^{(1)}(l)B_{l+a}(y)}{(z+1)(z+2)\cdots(z+k)}. \end{split} \end{equation} \end{theorem} \begin{proof} Let $s\in\mathbb{C}\setminus\{1,0,-1,-2,-3,\ldots\}$ be a fixed complex number and let $z\in\mathbb{C}\setminus(-\infty,0]$ with $|\arg(z)|\leq\pi-\varepsilon<\pi$ for some arbitrarily small, but fixed $\varepsilon>0$. Setting $h:=1-y$ for $0\leq y\leq1$ into the identities \eqref{Hurwitz zeta expansion} and \eqref{digamma expansion}, we deduce by using the relation \eqref{Bernoulli relation} and by exchanging $n$ with $n+a$ that \begin{equation}\label{formula1} \begin{split} \zeta(s,z+1-y)&=\frac{z^{1-s}}{s-1}+\frac{z^{1-s}}{s-1}\sum_{k=1}^{n+a}(-1)^{k}{1-s\choose k}\frac{B_{k}(y)}{z^{k}}+O_{n+a}(z) \end{split} \end{equation} and that \begin{equation}\label{formula2} \begin{split} \psi(z+1-y)&=\log(z)-\sum_{k=1}^{n+a}\frac{B_{k}(y)}{kz^{k}}+U_{n+a}(z), \end{split} \end{equation} where $O_{n+a}(z)$ and $U_{n+a}(z)$ are as in the previous Lemma \ref{Asymptotic series expansions for zeta(s,z+h) and psi(z+h) with 0leq hleq1} with $h:=1-y$.\\ We can write the equations \eqref{formula1} and \eqref{formula2} in the forms \begin{equation}\label{formula3} \begin{split} \zeta(s,z+1-y)&=\frac{z^{1-s}}{s-1}+\frac{z^{1-s}}{s-1}\sum_{k=1}^{a}(-1)^{k}{1-s\choose k}\frac{B_{k}(y)}{z^{k}}\\ &\quad+\frac{z^{1-s-a}}{s-1}\sum_{k=1}^{n}(-1)^{k+a}{1-s\choose k+a}\frac{B_{k+a}(y)}{z^{k}}+O_{n+a}(z) \end{split} \end{equation} and \begin{equation}\label{formula4} \begin{split} \psi(z+1-y)&=\log(z)-\sum_{k=1}^{a}\frac{B_{k}(y)}{kz^{k}}-\frac{1}{z^{a}}\sum_{k=1}^{n}\frac{B_{k+a}(y)}{(k+a)z^{k}}+U_{n+a}(z). 
\end{split} \end{equation} In the following calculations, we will use that the function $g(k):=2^{k}$ grows faster than any polynomial $p(k)$ as $k\rightarrow\infty$.\\ From equation \eqref{formula3}, we get \eqref{inverse factorial series expansion for the Hurwitz zeta function} for $s\in\mathbb{C}\setminus\{1,0,-1,-2,-3,\ldots\}$ by applying Theorem \ref{Structure of inverse factorial series expansions} with $R_{n}(z):=(s-1)z^{s+a-1}O_{n+a}(z)$ to the analytic function $f_{1}(z)$ defined by \begin{displaymath} \begin{split} f_{1}(z):&=(s-1)z^{s+a-1}\left[\zeta(s,z+1-y)-\frac{z^{1-s}}{s-1}-\frac{z^{1-s}}{s-1}\sum_{k=1}^{a}(-1)^{k}{1-s\choose k}\frac{B_{k}(y)}{z^{k}}\right]\\ &=\sum_{k=1}^{n}(-1)^{k+a}{1-s\choose k+a}\frac{B_{k+a}(y)}{z^{k}}+(s-1)z^{s+a-1}O_{n+a}(z) \end{split} \end{displaymath} on $z\in\mathbb{C}\setminus(-\infty,0]$ with $|\arg(z)|\leq\pi-\varepsilon<\pi$, because using that $|x^{s}|=x^{\re(s)}$ for $x\in\mathbb{R}_{0}^{+}$, we have by employing the identity \eqref{binomial coefficient bound} that \begin{displaymath} \begin{split} \left|{1-s\choose k+a}B_{k+a}(y)\right|&\leq\frac{2\zeta(k+a)(k+a)!(k+a)^{\re(s)-2}}{(2\pi)^{k+a}\left|\Gamma(s-1)\right|}+O\left(\frac{2\zeta(k+a)(k+a)!(k+a)^{\re(s)-3}}{(2\pi)^{k+a}}\right)\\ &<\frac{C_{1}(a)k!}{\pi^{k}} \end{split} \end{displaymath} and with $A_{1}:=\max\left\{1,e^{\im(s)\arg(z)}\right\}$, $A_{2}:=\max\left\{1,e^{-\im(s)\arg(z)}\right\}$, as well as $\re(s)>-n$ that \begin{displaymath} \begin{split} \left|(s-1)z^{s+a-1}O_{n+a}(z)\right|&\leq2(n+a+2)A_{1}\left|{1-s\choose n+a+2}\right|\frac{\left|B_{n+a+1}\right|e^{-\im(s)\arg(z)}\sec^{n+a+\re(s)+1}\left(\frac{1}{2}\arg(z)\right)}{\left|n+a+\re(s)\right|\cdot|z|^{n+1}}\\ &\hspace{-3.6cm}\leq\frac{4(n+a+2)A_{2}}{\left|n+a+\re(s)\right|}\cdot\frac{\zeta(n+a+1)(n+a+1)!(n+a+2)^{\re(s)-2}\sec^{n+a+\re(s)+1}\left(\frac{1}{2}\arg(z)\right)}{(2\pi)^{n+a+1}\left|\Gamma(s-1)\right||z|^{n+1}}\\ &\hspace{-3.6cm}\quad+O\left(\frac{4(n+a+2)A_{2}}{\left|n+a+\re(s)\right|}\cdot\frac{\zeta(n+a+1)(n+a+1)!(n+a+2)^{\re(s)-3}\sec^{n+a+\re(s)+1}\left(\frac{1}{2}\arg(z)\right)}{(2\pi)^{n+a+1}|z|^{n+1}}\right)\\ &\hspace{-3.6cm}<\frac{C_{2}(a)\sec^{n}\left(\frac{1}{2}\arg(z)\right)n!}{\pi^{n}|z|^{n+1}} \end{split} \end{displaymath} for some positive constants $C_{1}(a)$, $C_{2}(a)$ depending on $a$ and independent of $n$. 
In the last computation above, we have used the relation $|z^{s}|=|z|^{\re(s)}e^{-\im(s)\arg(z)}$.\\ The above bound for $\left|(s-1)z^{s+a-1}O_{n+a}(z)\right|$ is also true if $\re(s)\leq-n$ by taking $C_{2}(a)$ large enough, because $\re(s)\leq-n$ is only possible for finitely many $n$'s and in each case we have that $\left|(s-1)z^{s+a-1}O_{n+a}(z)\right|\leq\frac{C(n)}{|z|^{n+1}}$ for all $n\in\mathbb{N}$ and some positive constants $C(n)$.\\ To get formula \eqref{inverse factorial series expansion for the Hurwitz zeta function} also for all $s\in\{0,-1,-2,-3,\ldots\}$, we apply the Weniger transformation formula \eqref{Weniger transformation formula} directly to the function $f_{1}(z)$ with $n:=1-s-a$ and $O_{n+a}(z)=O_{1-s}(z)=0$.\\ Similarly from equation \eqref{formula4}, we obtain the formula \eqref{inverse factorial series expansion for the digamma function} by applying Theorem \ref{Structure of inverse factorial series expansions} with $R_{n}(z):=z^{a}U_{n+a}(z)$ to the analytic function $f_{2}(z)$ defined by \begin{displaymath} \begin{split} f_{2}(z):&=z^{a}\left[\log(z)-\psi(z+1-y)-\sum_{k=1}^{a}\frac{B_{k}(y)}{kz^{k}}\right]=\sum_{k=1}^{n}\frac{B_{k+a}(y)}{(k+a)z^{k}}+z^{a}U_{n+a}(z) \end{split} \end{displaymath} on $z\in\mathbb{C}$ with $|\arg(z)|\leq\pi-\varepsilon<\pi$, because we have \begin{displaymath} \begin{split} \left|\frac{B_{k+a}(y)}{k+a}\right|\leq\frac{2\zeta(k+a)(k+a)!}{(2\pi)^{k+a}(k+a)}<\frac{C_{3}(a)k!}{\pi^{k}} \end{split} \end{displaymath} and \begin{displaymath} \begin{split} \left|z^{a}U_{n+a}(z)\right|&\leq\frac{2\left|B_{n+a+1}\right|\sec^{n+a+2}\left(\frac{1}{2}\arg(z)\right)}{(n+a+1)|z|^{n+1}}\leq\frac{4\zeta(n+a+1)\sec^{n+a+2}\left(\frac{1}{2}\arg(z)\right)(n+a+1)!}{(2\pi)^{n+a+1}(n+a+1)|z|^{n+1}}\\ &<\frac{C_{4}(a)\sec^{n}\left(\frac{1}{2}\arg(z)\right)n!}{\pi^{n}|z|^{n+1}} \end{split} \end{displaymath} for some positive constants $C_{3}(a)$, $C_{4}(a)$ depending on $a$ and independent of $n$. \end{proof} \section{The generalized Faulhaber formulas} \label{sec:The generalized Faulhaber formulas} In this section we will prove our generalized versions of Faulhaber's formula, which all converge very rapidly. For their deductions, we will use the above Theorem \ref{Inverse factorial series expansions for zeta(s,z+1-y) and psi(z+1-y)}. \begin{theorem}\label{extended generalized Faulhaber formulas}(extended generalized Faulhaber formulas)\\ For every complex number $m\in\mathbb{C}\setminus\{-1\}$ and every positive real number $x\in\mathbb{R}^{+}$, we have \begin{equation}\label{Faulhaber's formula 1} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}k^{m}=\frac{1}{m+1}x^{m+1}+\zeta\left(-m\right)+\frac{x^{m+1}}{m+1}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}{m+1\choose l}S^{(1)}_{k}(l)B_{l}(\{x\})}{(x+1)(x+2)\cdots(x+k)}. 
\end{split} \end{equation} More generally, for every $x\in\mathbb{R}^{+}$ and every $a\in\mathbb{N}_{0}$, we have that \begin{equation}\label{Faulhaber's formula 2} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}k^{m}&=\frac{1}{m+1}x^{m+1}+\zeta\left(-m\right)+\frac{1}{m+1}\sum_{k=1}^{a}(-1)^{k}{m+1\choose k}B_{k}(\{x\})x^{m-k+1}\\ &\quad+\frac{x^{m-a+1}}{m+1}\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}{m+1\choose l+a}S^{(1)}_{k}(l)B_{l+a}\left(\left\{x\right\}\right)}{(x+1)(x+2)\cdots(x+k)} \end{split} \end{equation} and for $m=m_{1}+im_{2}\in\mathbb{C}\setminus\{-1\}$ with $m_{1}=\re(m)\geq-1$ the special case \begin{equation}\label{Faulhaber's formula 3} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}k^{m}&=\frac{1}{m+1}x^{m+1}+\zeta\left(-m\right)+\frac{1}{m+1}\sum_{k=1}^{\lfloor m_{1}+1\rfloor}(-1)^{k}{m+1\choose k}B_{k}(\{x\})x^{m-k+1}\\ &\quad+(-1)^{\lfloor m_{1}+1\rfloor}\frac{x^{m-\lfloor m_{1}+1\rfloor+1}}{m+1}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}{m+1\choose l+\lfloor m_{1}+1\rfloor}S^{(1)}_{k}(l)B_{l+\lfloor m_{1}+1\rfloor}(\{x\})}{(x+1)(x+2)\cdots(x+k)}. \end{split} \end{equation} Moreover, if $m=-1$, we have for every positive real number $x\in\mathbb{R}^{+}$ and every $a\in\mathbb{N}_{0}$ that \begin{equation}\label{Faulhaber's formula 4} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}\frac{1}{k}&=\log(x)+\gamma-\sum_{k=1}^{a}\frac{B_{k}(\{x\})}{kx^{k}}+\frac{1}{x^{a}}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}}{l+a}S^{(1)}_{k}(l)B_{l+a}(\{x\})}{(x+1)(x+2)\cdots(x+k)}. \end{split} \end{equation} In particular, we have for $x\in\mathbb{R}^{+}$ that \begin{equation}\label{Faulhaber's formula 5} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}\frac{1}{k}&=\log(x)+\gamma+\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}}{l}S_{k}^{(1)}(l)B_{l}(\{x\})}{(x+1)(x+2)\cdots(x+k)} \end{split} \end{equation} and that \begin{equation}\label{Faulhaber's formula 6} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}\frac{1}{k}&=\log(x)+\gamma-\frac{B_{1}(\{x\})}{x}+\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}}{l+1}S_{k}^{(1)}(l)B_{l+1}(\{x\})}{x(x+1)(x+2)\cdots(x+k)}. \end{split} \end{equation} \end{theorem} \begin{proof} From the formula \eqref{inverse factorial series expansion for the Hurwitz zeta function} with the parameters $s:=-m$, $z:=x$ and $y:=\{x\}$, we get \begin{displaymath} \begin{split} \sum_{k=1}^{\left\lfloor x\right\rfloor}k^{m}-\zeta(-m)&=-\zeta(-m,x+1-\{x\})\\ &=\frac{1}{m+1}x^{m+1}+\frac{1}{m+1}\sum_{k=1}^{a}(-1)^{k}{m+1\choose k}B_{k}(\{x\})x^{m-k+1}\\ &\quad+\frac{x^{m-a+1}}{m+1}\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}{m+1\choose l+a}S^{(1)}_{k}(l)B_{l+a}\left(\{x\}\right)}{(x+1)(x+2)\cdots(x+k)} \end{split} \end{displaymath} by using the formula \eqref{Hurwitz summation formula} with $n:=\left\lfloor x\right\rfloor=x-\{x\}$ in the first step. 
This gives the above identity \eqref{Faulhaber's formula 2} with its special cases \eqref{Faulhaber's formula 1} and \eqref{Faulhaber's formula 3}.\\ Similarly, we now use the formula \eqref{inverse factorial series expansion for the digamma function} again with the variables $z:=x$ and $y:=\{x\}$, and then we get \begin{displaymath} \begin{split} \sum_{k=1}^{\left\lfloor x\right\rfloor}\frac{1}{k}-\gamma&=\psi(x+1-\{x\})\\ &=\log(x)-\sum_{k=1}^{a}\frac{B_{k}(\{x\})}{kx^{k}}+\frac{1}{x^{a}}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}}{l+a}S^{(1)}_{k}(l)B_{l+a}(\{x\})}{(x+1)(x+2)\cdots(x+k)} \end{split} \end{displaymath} by employing the formula \eqref{digamma summation formula} with $n:=\left\lfloor x\right\rfloor=x-\{x\}$ in the first line of the above calculation. This gives the above identity \eqref{Faulhaber's formula 4} with its special cases \eqref{Faulhaber's formula 5} and \eqref{Faulhaber's formula 6}. \end{proof} By setting $x:=n\in\mathbb{N}$ into Theorem \ref{extended generalized Faulhaber formulas}, we obtain the following \begin{corollary}(generalized Faulhaber formulas)\\ For every complex number $m\in\mathbb{C}\setminus\{-1\}$ and every natural number $n\in\mathbb{N}$, we have \begin{equation} \begin{split} \sum_{k=1}^{n}k^{m}=\frac{1}{m+1}n^{m+1}+\zeta\left(-m\right)+\frac{n^{m+1}}{m+1}\sum_{k=1}^{\infty}\frac{(-1)^{k}\sum_{l=1}^{k}{m+1\choose l}B_{l}S^{(1)}_{k}(l)}{(n+1)(n+2)\cdots(n+k)} \end{split} \end{equation} and more generally when $a\in\mathbb{N}_{0}$ that \begin{equation} \begin{split} \sum_{k=1}^{n}k^{m}&=\frac{1}{m+1}n^{m+1}+\zeta\left(-m\right)+\frac{1}{m+1}\sum_{k=1}^{a}(-1)^{k}{m+1\choose k}B_{k}n^{m-k+1}\\ &\quad+\frac{n^{m-a+1}}{m+1}\sum_{k=1}^{\infty}\frac{(-1)^{k+a}\sum_{l=1}^{k}{m+1\choose l+a}B_{l+a}S^{(1)}_{k}(l)}{(n+1)(n+2)\cdots(n+k)}. \end{split} \end{equation} We have again when $m=m_{1}+im_{2}\in\mathbb{C}\setminus\{-1\}$ with $m_{1}=\re(m)\geq-1$ the special case \begin{equation} \begin{split} \sum_{k=1}^{n}k^{m}&=\frac{1}{m+1}n^{m+1}+\zeta\left(-m\right)+\frac{1}{m+1}\sum_{k=1}^{\lfloor m_{1}+1\rfloor}(-1)^{k}{m+1\choose k}B_{k}n^{m-k+1}\\ &\quad+(-1)^{\lfloor m_{1}+1\rfloor}\frac{n^{m-\lfloor m_{1}+1\rfloor+1}}{m+1}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}{m+1\choose l+\lfloor m_{1}+1\rfloor}B_{l+\lfloor m_{1}+1\rfloor}S^{(1)}_{k}(l)}{(n+1)(n+2)\cdots(n+k)}. \end{split} \end{equation} For $m=-1$, we have for every natural number $n\in\mathbb{N}$ and every $a\in\mathbb{N}_{0}$ that \begin{equation} \begin{split} \sum_{k=1}^{n}\frac{1}{k}&=\log(n)+\gamma-\sum_{k=1}^{a}\frac{B_{k}}{kn^{k}}+\frac{1}{n^{a}}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}\sum_{l=1}^{k}\frac{(-1)^{l}}{l+a}B_{l+a}S^{(1)}_{k}(l)}{(n+1)(n+2)\cdots(n+k)}. \end{split} \end{equation} In particular, we have for every $n\in\mathbb{N}$ that \begin{equation} \begin{split} \sum_{k=1}^{n}\frac{1}{k}&=\log(n)+\gamma+\sum_{k=1}^{\infty}\frac{(-1)^{k+1}\sum_{l=1}^{k}\frac{(-1)^{l}}{l}B_{l}S_{k}^{(1)}(l)}{(n+1)(n+2)\cdots(n+k)}\\ &=\log(n)+\gamma+\frac{1}{2(n+1)}+\frac{5}{12(n+1)(n+2)}+\frac{3}{4(n+1)(n+2)(n+3)}\\ &\quad+\frac{251}{120(n+1)(n+2)(n+3)(n+4)}+\ldots \end{split} \end{equation} and that \begin{equation}\label{GregorioFontanaExpansion} \begin{split} \sum_{k=1}^{n}\frac{1}{k}&=\log(n)+\gamma+\frac{1}{2n}+\sum_{k=1}^{\infty}\frac{(-1)^{k}\sum_{l=1}^{k}\frac{B_{l+1}}{l+1}S_{k}^{(1)}(l)}{n(n+1)(n+2)\cdots(n+k)}\\ &=\log(n)+\gamma+\frac{1}{2n}-\frac{1}{12n(n+1)}-\frac{1}{12n(n+1)(n+2)}-\frac{19}{120n(n+1)(n+2)(n+3)}\\ &\quad-\frac{9}{20n(n+1)(n+2)(n+3)(n+4)}-\ldots. 
\end{split} \end{equation} \end{corollary} \noindent For every positive real number $x\in\mathbb{R}^{+}$ and for every natural number $n\in\mathbb{N}$, we list the following $8$ most used generalized Faulhaber summation formulas: \begin{itemize} \item[1.)]{{\bf Generalized Faulhaber summation formula for the partial sums of $\zeta(2)$:}\\ For every natural number $n\in\mathbb{N}$, we have that \begin{equation}\label{StirlingExpansion} \begin{split} \sum_{k=1}^{n}\frac{1}{k^{2}}&=\zeta(2)-\frac{1}{n}+\sum_{k=1}^{\infty}\frac{(-1)^{k+1}\sum_{l=1}^{k}(-1)^{l}B_{l}S_{k}^{(1)}(l)}{n(n+1)(n+2)\cdots(n+k)}\\ &=\zeta(2)-\frac{1}{n}+\sum_{k=1}^{\infty}\frac{1}{k+1}\cdot\frac{(k-1)!}{n(n+1)(n+2)\cdots(n+k)}\\ &=\zeta(2)-\frac{1}{n}+\frac{1}{2n(n+1)}+\frac{1}{3n(n+1)(n+2)}+\frac{1}{2n(n+1)(n+2)(n+3)}\\ &\quad+\frac{6}{5n(n+1)(n+2)(n+3)(n+4)}+\ldots. \end{split} \end{equation}} \item[2.)]{{\bf Extended generalized Faulhaber summation formula for the partial sums of $\zeta(3)$:}\\ For every real number $x\in\mathbb{R}^{+}$, we obtain \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}\frac{1}{k^{3}}&=\zeta(3)-\frac{1}{2x^{2}}+\frac{1}{2x}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}(-1)^{l}(l+1)S_{k}^{(1)}(l)B_{l}(\{x\})}{x(x+1)(x+2)\cdots(x+k)}. \end{split} \end{equation}} \item[3.)]{{\bf Extended generalized Faulhaber summation formula for the sum of the square roots:}\\ For every real number $x\in\mathbb{R}^{+}$, we get \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}\sqrt{k}&=\frac{2}{3}x^{\frac{3}{2}}-\frac{1}{4\pi}\zeta\left(\frac{3}{2}\right)+x\sqrt{x}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}(2l-5)!!}{2^{l-1}l!}S_{k}^{(1)}(l)B_{l}(\{x\})}{(x+1)(x+2)\cdots(x+k)}. \end{split} \end{equation}} \item[4.)]{{\bf Generalized Faulhaber summation formula for the partial sums of $\zeta(-3/2)$:}\\ For every natural number $n\in\mathbb{N}$, we have that \begin{equation} \begin{split} \sum_{k=1}^{n}k\sqrt{k}&=\frac{2}{5}n^{\frac{5}{2}}+\frac{1}{2}n^{\frac{3}{2}}+\frac{1}{8}\sqrt{n}-\frac{3}{16\pi^{2}}\zeta\left(\frac{5}{2}\right)+3\sqrt{n}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}\frac{(2l-3)!!}{2^{l+1}(l+2)!}B_{l+2}S_{k}^{(1)}(l)}{(n+1)(n+2)\cdots(n+k)}\\ &=\frac{2}{5}n^{\frac{5}{2}}+\frac{1}{2}n^{\frac{3}{2}}+\frac{1}{8}\sqrt{n}-\frac{3}{16\pi^{2}}\zeta\left(\frac{5}{2}\right)+\frac{\sqrt{n}}{1920(n+1)(n+2)}+\frac{\sqrt{n}}{640(n+1)(n+2)(n+3)}\\ &\quad+\frac{611\sqrt{n}}{107520(n+1)(n+2)(n+3)(n+4)}+\frac{275\sqrt{n}}{10752(n+1)(n+2)(n+3)(n+4)(n+5)}\\ &\quad+\frac{159157\sqrt{n}}{1146880(n+1)(n+2)(n+3)(n+4)(n+5)(n+6)}+\ldots. \end{split} \end{equation}} \item[5.)]{{\bf Generalized Faulhaber summation formula for the partial sums of $\zeta(-5/2)$:}\\ For every natural number $n\in\mathbb{N}$, we obtain that \begin{equation} \begin{split} \sum_{k=1}^{n}k^{2}\sqrt{k}&=\frac{2}{7}n^{\frac{7}{2}}+\frac{1}{2}n^{\frac{5}{2}}+\frac{5}{24}n^{\frac{3}{2}}+\frac{15}{64\pi^{3}}\zeta\left(\frac{7}{2}\right)+15\sqrt{n}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}\frac{(2l-3)!!}{2^{l+2}(l+3)!}B_{l+3}S_{k}^{(1)}(l)}{(n+1)(n+2)\cdots(n+k)}\\ &=\frac{2}{7}n^{\frac{7}{2}}+\frac{1}{2}n^{\frac{5}{2}}+\frac{5}{24}n^{\frac{3}{2}}+\frac{15}{64\pi^{3}}\zeta\left(\frac{7}{2}\right)-\frac{\sqrt{n}}{384(n+1)}-\frac{\sqrt{n}}{384(n+1)(n+2)}\\ &\quad-\frac{37\sqrt{n}}{7168(n+1)(n+2)(n+3)}-\frac{55\sqrt{n}}{3584(n+1)(n+2)(n+3)(n+4)}\\ &\quad-\frac{1995\sqrt{n}}{32768(n+1)(n+2)(n+3)(n+4)(n+5)}-\ldots. 
\end{split} \end{equation}} \item[6.)]{{\bf Generalized Faulhaber summation formula for the sum of the inverses of the square roots:}\\ For every natural number $n\in\mathbb{N}$, we get that \begin{equation} \begin{split} \sum_{k=1}^{n}\frac{1}{\sqrt{k}} &=2\sqrt{n}+\zeta\left(\frac{1}{2}\right)+\frac{1}{2\sqrt{n}}+\frac{1}{\sqrt{n}}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}\frac{(2l-1)!!}{2^{l}(l+1)!}B_{l+1}S_{k}^{(1)}(l)}{(n+1)(n+2)\cdots(n+k)}\\ &=2\sqrt{n}+\zeta\left(\frac{1}{2}\right)+\frac{1}{2\sqrt{n}}-\frac{1}{24\sqrt{n}(n+1)}-\frac{1}{24\sqrt{n}(n+1)(n+2)}\\ &\quad-\frac{31}{384\sqrt{n}(n+1)(n+2)(n+3)}-\frac{15}{64\sqrt{n}(n+1)(n+2)(n+3)(n+4)}-\ldots. \end{split} \end{equation}} \item[7.)]{{\bf Extended generalized Faulhaber summation formula for the partial sums of $\zeta(3/2)$:}\\ For every real number $x\in\mathbb{R}^{+}$, we have \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}\frac{1}{k\sqrt{k}} &=\zeta\left(\frac{3}{2}\right)-\frac{2}{\sqrt{x}}-\frac{B_{1}(\{x\})}{x\sqrt{x}}+\frac{2}{\sqrt{x}}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}(2l+1)!!}{2^{l+1}(l+1)!}S_{k}^{(1)}(l)B_{l+1}(\{x\})}{x(x+1)(x+2)\cdots(x+k)}. \end{split} \end{equation}} \item[8.)]{{\bf Extended generalized Faulhaber summation formula for the partial sums of $\zeta(5/2)$:}\\ For every real number $x\in\mathbb{R}^{+}$, we obtain \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}\frac{1}{k^{2}\sqrt{k}}&=\zeta\left(\frac{5}{2}\right)-\frac{2}{3x^{\frac{3}{2}}}+\frac{4}{3\sqrt{x}}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}(2l+1)!!}{2^{l+1}l!}S_{k}^{(1)}(l)B_{l}(\{x\})}{x(x+1)(x+2)\cdots(x+k)}. \end{split} \end{equation}} \end{itemize} \noindent From Theorem \ref{Structure of inverse factorial series expansions} and the proof of Theorem \ref{extended generalized Faulhaber formulas}, it also follows \begin{theorem}\label{other generalized Faulhaber formula versions}(other generalized Faulhaber formula versions)\\ For every $x\in\mathbb{R}^{+}$ and every $a\in\mathbb{N}_{0}$, we have that \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}k^{m}&=\frac{1}{m+1}x^{m+1}+\zeta\left(-m\right)+\frac{1}{m+1}\sum_{k=1}^{a}(-1)^{k}{m+1\choose k}B_{k}(\{x\})x^{m-k+1}\\ &\quad+\frac{x^{m+1}}{m+1}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}{m+1\choose l+a}S^{(1)}_{k}(l+a)B_{l+a}\left(\left\{x\right\}\right)}{(x+1)(x+2)\cdots(x+k)} \end{split} \end{equation} and for $m=m_{1}+im_{2}\in\mathbb{C}\setminus\{-1\}$ with $m_{1}=\re(m)\geq-1$ the special case \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}k^{m}&=\frac{1}{m+1}x^{m+1}+\zeta\left(-m\right)+\frac{1}{m+1}\sum_{k=1}^{\lfloor m_{1}+1\rfloor}(-1)^{k}{m+1\choose k}B_{k}(\{x\})x^{m-k+1}\\ &\quad+\frac{x^{m+1}}{m+1}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}{m+1\choose l+\lfloor m_{1}+1\rfloor}S^{(1)}_{k}\left(l+\lfloor m_{1}+1\rfloor\right)B_{l+\lfloor m_{1}+1\rfloor}(\{x\})}{(x+1)(x+2)\cdots(x+k)}. \end{split} \end{equation} If $m=-1$, we have for every positive real number $x\in\mathbb{R}^{+}$ and every $a\in\mathbb{N}_{0}$ that \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}\frac{1}{k}&=\log(x)+\gamma-\sum_{k=1}^{a}\frac{B_{k}(\{x\})}{kx^{k}}+\sum_{k=1}^{\infty}(-1)^{k+a+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}}{l+a}S^{(1)}_{k}(l+a)B_{l+a}(\{x\})}{(x+1)(x+2)\cdots(x+k)}. 
\end{split} \end{equation} In particular, we have for $x\in\mathbb{R}^{+}$ that \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}\frac{1}{k}&=\log(x)+\gamma-\frac{B_{1}(\{x\})}{x}+\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}}{l+1}S_{k}^{(1)}(l+1)B_{l+1}(\{x\})}{(x+1)(x+2)\cdots(x+k)}\\ &=\log(x)+\gamma-\frac{\{x\}-\frac{1}{2}}{x}-\frac{\frac{1}{2}\{x\}^{2}-\frac{1}{2}\{x\}+\frac{1}{12}}{(x+1)(x+2)}-\frac{\frac{1}{3}\{x\}^{3}+\{x\}^{2}-\frac{4}{3}\{x\}+\frac{1}{4}}{(x+1)(x+2)(x+3)}\\ &\quad-\frac{\frac{1}{4}\{x\}^{4}+\frac{3}{2}\{x\}^{3}+\frac{11}{4}\{x\}^{2}-\frac{9}{2}\{x\}+\frac{109}{120}}{(x+1)(x+2)(x+3)(x+4)}-\ldots. \end{split} \end{equation} \\ By setting $x:=n\in\mathbb{N}$, we obtain the following:\\ \\ For every $n\in\mathbb{N}$ and every $a\in\mathbb{N}_{0}$, we have that \begin{equation} \begin{split} \sum_{k=1}^{n}k^{m}&=\frac{1}{m+1}n^{m+1}+\zeta\left(-m\right)+\frac{1}{m+1}\sum_{k=1}^{a}(-1)^{k}{m+1\choose k}B_{k}n^{m-k+1}\\ &\quad+\frac{n^{m+1}}{m+1}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}{m+1\choose l+a}B_{l+a}S^{(1)}_{k}(l+a)}{(n+1)(n+2)\cdots(n+k)} \end{split} \end{equation} and for $m=m_{1}+im_{2}\in\mathbb{C}\setminus\{-1\}$ with $m_{1}=\re(m)\geq-1$ the special case \begin{equation} \begin{split} \sum_{k=1}^{n}k^{m}&=\frac{1}{m+1}n^{m+1}+\zeta\left(-m\right)+\frac{1}{m+1}\sum_{k=1}^{\lfloor m_{1}+1\rfloor}(-1)^{k}{m+1\choose k}B_{k}n^{m-k+1}\\ &\quad+\frac{n^{m+1}}{m+1}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}{m+1\choose l+\lfloor m_{1}+1\rfloor}B_{l+\lfloor m_{1}+1\rfloor}S^{(1)}_{k}\left(l+\lfloor m_{1}+1\rfloor\right)}{(n+1)(n+2)\cdots(n+k)}. \end{split} \end{equation} If $m=-1$, we have for every natural number $n\in\mathbb{N}$ and every $a\in\mathbb{N}_{0}$ that \begin{equation} \begin{split} \sum_{k=1}^{n}\frac{1}{k}&=\log(n)+\gamma-\sum_{k=1}^{a}\frac{B_{k}}{kn^{k}}+\sum_{k=1}^{\infty}(-1)^{k+a+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}}{l+a}B_{l+a}S^{(1)}_{k}(l+a)}{(n+1)(n+2)\cdots(n+k)}. \end{split} \end{equation} In particular, we have for $n\in\mathbb{N}$ that \begin{equation} \begin{split} \sum_{k=1}^{n}\frac{1}{k}&=\log(n)+\gamma+\frac{1}{2n}+\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}}{l+1}B_{l+1}S_{k}^{(1)}(l+1)}{(n+1)(n+2)\cdots(n+k)}\\ &=\log(n)+\gamma+\frac{1}{2n}-\frac{1}{12(n+1)(n+2)}-\frac{1}{4(n+1)(n+2)(n+3)}\\ &\quad-\frac{109}{120(n+1)(n+2)(n+3)(n+4)}-\ldots. \end{split} \end{equation} \end{theorem} \section{Conclusion} \label{sec:Conclusion} We have proved a rapidly convergent generalization of Faulhaber's formula to sums of arbitrary complex powers $m\in\mathbb{C}$. In our view, these formulas are useful chiefly because of their rapid convergence. We believe that they will also find applications in physics \cite{18}, as the extended version of Faulhaber's formula already has \cite{19,20}.
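To make the rapid convergence concrete, we include a short numerical sanity check; it is only an illustration and is not used in any of the proofs above. The script below (plain Python, with the ad hoc helper routines \texttt{bernoulli\_numbers} and \texttt{stirling1\_signed} written only for this purpose) evaluates the truncated inverse factorial series in the generalized Faulhaber summation formula for the partial sums of $\zeta(2)$, equation \eqref{StirlingExpansion}, using the conventions $B_{1}=-\frac{1}{2}$ and signed Stirling numbers of the first kind for $S^{(1)}_{k}(l)$, which are the conventions consistent with the displayed numerical coefficients; it also verifies the closed form $(-1)^{k+1}\sum_{l=1}^{k}(-1)^{l}B_{l}S^{(1)}_{k}(l)=\frac{(k-1)!}{k+1}$ for small $k$.
\begin{verbatim}
# Numerical sanity check of the inverse factorial series for sum_{k<=n} 1/k^2.
# Conventions: B_1 = -1/2, signed Stirling numbers of the first kind s(k,l).
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli_numbers(N):
    # B_0, ..., B_N via sum_{j=0}^{m} C(m+1,j) B_j = 0 (m >= 1), so B_1 = -1/2.
    B = [Fraction(0)] * (N + 1)
    B[0] = Fraction(1)
    for m in range(1, N + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B

def stirling1_signed(N):
    # s(k,l) via the recurrence s(k,l) = s(k-1,l-1) - (k-1)*s(k-1,l), s(0,0) = 1.
    s = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
    s[0][0] = Fraction(1)
    for k in range(1, N + 1):
        for l in range(1, k + 1):
            s[k][l] = s[k - 1][l - 1] - (k - 1) * s[k - 1][l]
    return s

K = 25                         # truncation order of the series
B = bernoulli_numbers(K + 1)
s = stirling1_signed(K + 1)

# Closed form of the coefficients (second line of the zeta(2) formula),
# checked here for k = 1,...,6.
for k in range(1, 7):
    lhs = (-1) ** (k + 1) * sum((-1) ** l * B[l] * s[k][l] for l in range(1, k + 1))
    assert lhs == Fraction(factorial(k - 1), k + 1)

n = 10
partial = sum(1.0 / j ** 2 for j in range(1, n + 1))   # left-hand side
approx = pi ** 2 / 6 - 1.0 / n                         # zeta(2) - 1/n
for k in range(1, K + 1):
    coeff = (-1) ** (k + 1) * sum((-1) ** l * B[l] * s[k][l]
                                  for l in range(1, k + 1))
    denom = 1
    for j in range(k + 1):                             # n(n+1)...(n+k)
        denom *= n + j
    approx += float(coeff) / denom
print(abs(approx - partial))                           # well below 1e-8
\end{verbatim}
For $n=10$ and $25$ series terms, the printed residual is already well below $10^{-8}$, in line with the rapid decay of the terms of the inverse factorial series.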
With the universal technique, explained in this paper, one can obtain other summation formulas of this type \cite{21,22}, as for example with Theorem \ref{Structure of inverse factorial series expansions} and equation \eqref{LogGammaExpansion} we obtain:\\ \\ \noindent{\bf Generalized convergent Stirling summation formulas for the sums $\sum_{k=1}^{\left\lfloor x\right\rfloor}\ln(k)$ and $\sum_{k=1}^{n}\ln(k)$:}\\ For every real number $x\in\mathbb{R}^{+}$ and every natural number $a\in\mathbb{N}_{0}$, we have that \begin{equation}\label{StirlingExpansion1} \begin{split} \sum_{k=1}^{\left\lfloor x\right\rfloor}\ln(k)&=x\ln(x)-x+\frac{1}{2}\ln(2\pi)-\ln(x)B_{1}(\{x\})+\sum_{k=1}^{a}\frac{B_{k+1}(\{x\})}{k(k+1)x^{k}}\\ &\quad+\frac{1}{x^{a}}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}S_{k}^{(1)}(l)}{(l+a)(l+a+1)}B_{l+a+1}(\{x\})}{(x+1)(x+2)\cdots(x+k)}, \end{split} \end{equation} or \begin{equation} \begin{split} \sum_{k=1}^{\left\lfloor x\right\rfloor}\ln(k)&=x\ln(x)-x+\frac{1}{2}\ln(2\pi)-\ln(x)B_{1}(\{x\})+\sum_{k=1}^{a}\frac{B_{k+1}(\{x\})}{k(k+1)x^{k}}\\ &\quad+\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}S_{k}^{(1)}(l+a)}{(l+a)(l+a+1)}B_{l+a+1}(\{x\})}{(x+1)(x+2)\cdots(x+k)}, \end{split} \end{equation} or \begin{equation} \begin{split} \sum_{k=1}^{\left\lfloor x\right\rfloor}\ln(k)&=x\ln(x)-x+\frac{1}{2}\ln(2\pi)-\ln(x)B_{1}(\{x\})+\sum_{k=1}^{a}\frac{B_{k+1}(\{x\})}{k(k+1)x^{k}}\\ &\quad+x\sum_{k=1}^{\infty}(-1)^{k+a+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}S_{k}^{(1)}(l+a+1)}{(l+a)(l+a+1)}B_{l+a+1}(\{x\})}{(x+1)(x+2)\cdots(x+k)} \end{split} \end{equation} and for every natural number $n\in\mathbb{N}$ and every natural number $a\in\mathbb{N}_{0}$ that \begin{equation}\label{StirlingExpansion2} \begin{split} \sum_{k=1}^{n}\ln(k)&=n\ln(n)-n+\frac{1}{2}\ln(2\pi)+\frac{1}{2}\ln(n)+\sum_{k=1}^{a}\frac{B_{k+1}}{k(k+1)n^{k}}\\ &\quad+\frac{1}{n^{a}}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}(-1)^{l}\frac{B_{l+a+1}S_{k}^{(1)}(l)}{(l+a)(l+a+1)}}{(n+1)(n+2)\cdots(n+k)}, \end{split} \end{equation} or \begin{equation} \begin{split} \sum_{k=1}^{n}\ln(k)&=n\ln(n)-n+\frac{1}{2}\ln(2\pi)+\frac{1}{2}\ln(n)+\sum_{k=1}^{a}\frac{B_{k+1}}{k(k+1)n^{k}}\\ &\quad+\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}S_{k}^{(1)}(l+a)}{(l+a)(l+a+1)}B_{l+a+1}}{(n+1)(n+2)\cdots(n+k)}, \end{split} \end{equation} or \begin{equation} \begin{split} \sum_{k=1}^{n}\ln(k)&=n\ln(n)-n+\frac{1}{2}\ln(2\pi)+\frac{1}{2}\ln(n)+\sum_{k=1}^{a}\frac{B_{k+1}}{k(k+1)n^{k}}\\ &\quad+n\sum_{k=1}^{\infty}(-1)^{k+a+1}\frac{\sum_{l=1}^{k}\frac{(-1)^{l}S_{k}^{(1)}(l+a+1)}{(l+a)(l+a+1)}B_{l+a+1}}{(n+1)(n+2)\cdots(n+k)}. 
\end{split} \end{equation} For many other functions $f(t)$, we can prove with Theorem \ref{Structure of inverse factorial series expansions} the following summation formulas:\\ \\ {\bf Convergent version of the Euler-Maclaurin summation formula:}\\ For many functions $f(t)$ it holds the following: For every real number $x\in\mathbb{R}^{+}$ and every $a\in\mathbb{N}_{0}$, we have \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}f(k)&=\int_{1}^{x}f(t)dt+C_{f}+\sum_{k=1}^{a}(-1)^{k}\frac{B_{k}(\{x\})}{k!}f^{(k-1)}(x)\\ &\quad+\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}\frac{S^{(1)}_{k}(l)}{(l+a)!}f^{(l+a-1)}(x)B_{l+a}(\{x\})x^{l}}{(x+1)(x+2)\cdots(x+k)}, \end{split} \end{equation} or \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}f(k)&=\int_{1}^{x}f(t)dt+C_{f}+\sum_{k=1}^{a}(-1)^{k}\frac{B_{k}(\{x\})}{k!}f^{(k-1)}(x)\\ &\quad+x^{a}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}\frac{S^{(1)}_{k}(l+a)}{(l+a)!}f^{(l+a-1)}(x)B_{l+a}(\{x\})x^{l}}{(x+1)(x+2)\cdots(x+k)} \end{split} \end{equation} and for $n\in\mathbb{N}$ and $a\in\mathbb{N}_{0}$ we have \begin{equation} \begin{split} \sum_{k=1}^{n}f(k)&=\int_{1}^{n}f(t)dt+C_{f}+\sum_{k=1}^{a}(-1)^{k}\frac{B_{k}}{k!}f^{(k-1)}(n)\\ &\quad+\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}\frac{S^{(1)}_{k}(l)}{(l+a)!}f^{(l+a-1)}(n)B_{l+a}n^{l}}{(n+1)(n+2)\cdots(n+k)}, \end{split} \end{equation} or \begin{equation} \begin{split} \sum_{k=1}^{n}f(k)&=\int_{1}^{n}f(t)dt+C_{f}+\sum_{k=1}^{a}(-1)^{k}\frac{B_{k}}{k!}f^{(k-1)}(n)\\ &\quad+n^{a}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}\frac{S^{(1)}_{k}(l+a)}{(l+a)!}f^{(l+a-1)}(n)B_{l+a}n^{l}}{(n+1)(n+2)\cdots(n+k)}, \end{split} \end{equation} where the constant $C_{f}$ is given by \begin{equation} \begin{split} C_{f}&=f(1)-\sum_{k=1}^{a}(-1)^{k}\frac{B_{k}}{k!}f^{(k-1)}(1)-\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}\frac{S^{(1)}_{k}(l)}{(l+a)!}f^{(l+a-1)}(1)B_{l+a}}{(k+1)!}\;\;\forall a\in\mathbb{N}_{0}. 
\end{split} \end{equation} \noindent{\bf Convergent version of the Boole summation formula:}\\ For many functions $f(t)$ it holds the following: For every real number $x\in\mathbb{R}^{+}$ and every $a\in\mathbb{N}_{0}$, we have \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}(-1)^{k+1}f(k)&=C_{f}+\frac{(-1)^{x-\{x\}}}{2}\sum_{k=1}^{a}(-1)^{k}\frac{E_{k-1}(\{x\})}{(k-1)!}f^{(k-1)}(x)\\ &\quad+\frac{(-1)^{x-\{x\}}}{2}\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}\frac{S^{(1)}_{k}(l)}{(l+a-1)!}f^{(l+a-1)}(x)E_{l+a-1}(\{x\})x^{l}}{(x+1)(x+2)\cdots(x+k)}, \end{split} \end{equation} or \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}(-1)^{k+1}f(k)&=C_{f}+\frac{(-1)^{x-\{x\}}}{2}\sum_{k=1}^{a}(-1)^{k}\frac{E_{k-1}(\{x\})}{(k-1)!}f^{(k-1)}(x)\\ &\quad+\frac{(-1)^{x-\{x\}}}{2}x^{a}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}\frac{S^{(1)}_{k}(l+a)}{(l+a-1)!}f^{(l+a-1)}(x)E_{l+a-1}(\{x\})x^{l}}{(x+1)(x+2)\cdots(x+k)}, \end{split} \end{equation} where $E_{n}(\{x\})$ denotes the fractional Euler polynomials \cite{22} and for $n\in\mathbb{N}$ and $a\in\mathbb{N}_{0}$ we have \begin{equation} \begin{split} \sum_{k=1}^{n}(-1)^{k+1}f(k)&=C_{f}+(-1)^{n+1}\sum_{k=1}^{a}(-1)^{k}\frac{(2^{k}-1)B_{k}}{k!}f^{(k-1)}(n)\\ &\quad+(-1)^{n+1}\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}\frac{S^{(1)}_{k}(l)}{(l+a)!}(2^{l+a}-1)f^{(l+a-1)}(n)B_{l+a}n^{l}}{(n+1)(n+2)\cdots(n+k)}, \end{split} \end{equation} or \begin{equation} \begin{split} \sum_{k=1}^{n}(-1)^{k+1}f(k)&=C_{f}+(-1)^{n+1}\sum_{k=1}^{a}(-1)^{k}\frac{(2^{k}-1)B_{k}}{k!}f^{(k-1)}(n)\\ &\quad+(-1)^{n+1}n^{a}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}\frac{S^{(1)}_{k}(l+a)}{(l+a)!}(2^{l+a}-1)f^{(l+a-1)}(n)B_{l+a}n^{l}}{(n+1)(n+2)\cdots(n+k)}, \end{split} \end{equation} where the constant $C_{f}$ is given by \begin{equation} \begin{split} C_{f}&=f(1)-\sum_{k=1}^{a}(-1)^{k}\frac{(2^{k}-1)B_{k}}{k!}f^{(k-1)}(1)-\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}\frac{(2^{l+a}-1)S^{(1)}_{k}(l)}{(l+a)!}f^{(l+a-1)}(1)B_{l+a}}{(k+1)!}\;\;\forall a\in\mathbb{N}_{0}. 
\end{split} \end{equation} A generalization of Faulhaber's formula for alternating sums can be found in \cite{22}; extended forms of it are given by the following.\\ {\bf Alternating versions of Faulhaber's formula:}\\ For every $x\in\mathbb{R}^{+}$, it is given by \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}(-1)^{k+1}k^{m}&=\eta(-m)+\frac{(-1)^{x-\{x\}}}{2}\sum_{k=1}^{m+1}(-1)^{k}{m\choose k-1}E_{k-1}(\{x\})x^{m-k+1}\;\;\forall m\in\mathbb{N}_{0} \end{split} \end{equation} and for $n\in\mathbb{N}$ by \begin{equation} \begin{split} \sum_{k=1}^{n}(-1)^{k+1}k^{m}&=\eta(-m)+(-1)^{n+1}\sum_{k=1}^{m+1}(-1)^{k}\frac{2^{k}-1}{k}{m\choose k-1}B_{k}n^{m-k+1}\;\;\forall m\in\mathbb{N}_{0}, \end{split} \end{equation} as well as for $x\in\mathbb{R}^{+}$ and $a\in\mathbb{N}_{0}$ by \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}(-1)^{k+1}k^{m}&=\eta(-m)+\frac{(-1)^{x-\{x\}}}{2}\sum_{k=1}^{a}(-1)^{k}{m\choose k-1}E_{k-1}(\{x\})x^{m-k+1}\\ &\quad+\frac{(-1)^{x-\{x\}}x^{m-a+1}}{2}\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}{m\choose l+a-1}S^{(1)}_{k}(l)E_{l+a-1}(\{x\})}{(x+1)(x+2)\cdots(x+k)}\;\;\forall m\in\mathbb{C}, \end{split} \end{equation} or \begin{equation} \begin{split} \sum_{k=1}^{\lfloor x\rfloor}(-1)^{k+1}k^{m}&=\eta(-m)+\frac{(-1)^{x-\{x\}}}{2}\sum_{k=1}^{a}(-1)^{k}{m\choose k-1}E_{k-1}(\{x\})x^{m-k+1}\\ &\quad+\frac{(-1)^{x-\{x\}}x^{m+1}}{2}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}{m\choose l+a-1}S^{(1)}_{k}(l+a)E_{l+a-1}(\{x\})}{(x+1)(x+2)\cdots(x+k)}\;\;\forall m\in\mathbb{C} \end{split} \end{equation} and for $n\in\mathbb{N}$ and $a\in\mathbb{N}_{0}$ by \begin{equation} \begin{split} \sum_{k=1}^{n}(-1)^{k+1}k^{m}&=\eta(-m)+(-1)^{n+1}\sum_{k=1}^{a}(-1)^{k}\frac{2^{k}-1}{k}{m\choose k-1}B_{k}n^{m-k+1}\\ &\quad+(-1)^{n+1}n^{m-a+1}\sum_{k=1}^{\infty}(-1)^{k+a}\frac{\sum_{l=1}^{k}\frac{2^{l+a}-1}{l+a}{m\choose l+a-1}B_{l+a}S^{(1)}_{k}(l)}{(n+1)(n+2)\cdots(n+k)}\;\;\forall m\in\mathbb{C}, \end{split} \end{equation} or \begin{equation} \begin{split} \sum_{k=1}^{n}(-1)^{k+1}k^{m}&=\eta(-m)+(-1)^{n+1}\sum_{k=1}^{a}(-1)^{k}\frac{2^{k}-1}{k}{m\choose k-1}B_{k}n^{m-k+1}\\ &\quad+(-1)^{n+1}n^{m+1}\sum_{k=1}^{\infty}(-1)^{k}\frac{\sum_{l=1}^{k}\frac{2^{l+a}-1}{l+a}{m\choose l+a-1}B_{l+a}S^{(1)}_{k}(l+a)}{(n+1)(n+2)\cdots(n+k)}\;\;\forall m\in\mathbb{C}. \end{split} \end{equation} In the above six equations $\eta(s):=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{s}}$ denotes the Dirichlet eta function. \section{Acknowledgment} \label{sec:Acknowledgement} This work was supported by SNSF (Swiss National Science Foundation) under grant 169247. \hrule \noindent 2010 {\it Mathematics Subject Classification}: Primary 65B15; Secondary 11B68. \noindent\emph{Keywords: }generalization of Faulhaber's formula, extended Faulhaber formula, finite Weniger transformation, Stirling number of the first kind, Bernoulli polynomial, Bernoulli number, generalized convergent Stirling summation formula, alternating Faulhaber formula. \end{document}
\begin{document} \maketitle \begin{abstract} The wavefront set is a fundamental invariant arising from the Harish-Chandra-Howe local character expansion of an admissible representation. We prove a precise formula for the wavefront set of an irreducible Iwahori-spherical representation with `real infinitesimal character' and determine a lower bound for this invariant in terms of the Deligne-Langlands-Lusztig parameters. In particular, for the Iwahori-spherical representations with real infinitesimal character, we deduce that the algebraic wavefront set is a singleton, as conjectured by M\oe glin and Waldspurger. As a corollary, we obtain an explicit description of the wavefront set of an irreducible spherical representation with real Satake parameter. \end{abstract} \tableofcontents \section{Introduction} \subsection{The local Langlands classification}\label{sec:Arthur} We begin with a brief overview of the local Langlands classification. Let $\mathsf k$ be a nonarchimedean local field of characteristic $0$ with ring of integers $\mathfrak o$, finite residue field $\mathbb F_q$ of cardinality $q$ and valuation $\mathsf{val}_{\mathsf k}$. Fix an algebraic closure $\bar{\mathsf k}$ of $\mathsf k$ and let $K\subset \bar{\mathsf k}$ be the maximal unramified extension of $\mathsf k$ in $\bar{\mathsf k}$. Let $W_{\mathsf k}$ be the Weil group of $\mathsf k$ \cite[(1.4)]{Tate1979}. Let $\mathbf{G}(\mathsf k)$ be the group of $\mathsf k$-rational points of a connected reductive algebraic group $\mathbf{G}$ defined and split over $\mathsf k$. Let $G^{\vee}$ denote the complex Langlands dual group associated to $\mathbf{G}$, see \cite[\S2.1]{Borel1979}. (Since we assume throughout that $\mathbf G$ is $\mathsf k$-split, we can work with $G^\vee$ in place of the L-group.) Let $W_{\mathsf k}'=W_{\mathsf k}\ltimes \mathbb C$ denote the Weil-Deligne group associated to $\mathsf k$ \cite[\S 8.3.6]{Deligne1972}. The semidirect product is defined via the action \[w x w^{-1}=\|w\| x,\qquad x\in \mathbb C,\ w\in W_k, \] where $\|w\|$ is the norm of $w\in W_k$. \begin{definition}\label{def:Langlandsparams} A \emph{Langlands parameter} is a continuous homomorphism \begin{equation} \phi: W_{\mathsf k}'\rightarrow G^\vee \end{equation} which respects the Jordan decompositions in $W_{\mathsf k}'$ and $G^\vee$ (\cite[\S8.1]{Borel1979}). \end{definition} For any Langlands parameter $\phi$, let $\Pi^{\mathsf{Lan}}_{\phi}(\mathbf{G}(\mathsf k))$ denote the associated $L$-packet of irreducible admissible $\mathbf{G}(\mathsf k)$-representations \cite[\S10]{Borel1979}. The $L$-packets have not yet been defined in general. A discussion of this problem is beyond the scope of this paper---we refer the reader to \cite{Vogan1993}, \cite{Arthur2013}, or \cite{Kaletha2022}. Of particular interest for us is the following special case. We say that a Langlands parameter is \emph{unramified} if it is trivial when restricted to the inertia subgroup $I_{\mathsf k}$ of $W_{\mathsf k}$. If we fix a generator $\mathsf{Fr}$ of the infinite cyclic group $W_{\mathsf k}/I_{\mathsf k}$, we get a bijection between the set of unramified Langlands parameters and pairs of the form \begin{equation}\label{e:unramified-Langlands} (s,u) \in G^{\vee} \times G^{\vee},\qquad s\text{ semisimple},\ u \text{ unipotent}, \ sus^{-1}=u^q. \end{equation} This bijection is defined by $\phi \mapsto (s,u) = (\phi(\mathsf{Fr}),\phi(1))$, where $1\in\mathbb{C}\subset W'_{\mathsf k}$. 
Replacing $u$ with $n \in \fg^{\vee}$ such that $q^n=u$, we get a further bijection onto pairs of the form \begin{equation}\label{e:unramified-Langlands2}(s, n) \in G^{\vee} \times \fg^{\vee}, \qquad s\text{ semisimple},\ n \text{ nilpotent}, \ \Ad(s)n=qn.\end{equation} If $\phi$ is an unramified Langlands parameter which corresponds to the pair $(s,n)$, the $L$-packet $\Pi_{\phi}^{\mathsf{Lan}}(\mathbf{G}({\mathsf k}))$ is in bijection with the set of irreducible representations $\rho$ of the component group $A(s,n)$ of the mutual centralizer in $G^\vee$ of $s$ and $n$, such that the center $Z(G^\vee)$ acts trivially on $\rho$. This is known as the Deligne-Langlands-Lusztig correspondence, see Theorem \ref{thm:Langlands} below. Of fundamental importance are the \emph{tempered} Langlands parameters. For unramified parameters, `tempered' means that the semisimple element $s$ in (\ref{e:unramified-Langlands}) is `relatively compact', see Theorem \ref{thm:Langlands}(1) for a more precise condition. \subsection{Unipotent representations: wavefront sets}\label{sec:results} The main result of the paper is a formula for the wavefront set in terms of the Langlands parameters for the class of representations with \emph{unipotent cuspidal support} introduced by Lusztig \cite{Lu-unip1} (see Section \ref{s:unip-cusp} for the precise definition). In terms of the Langlands correspondence, see Section \ref{sec:Arthur}, these are exactly the representations which correspond to unramified Langlands parameters. Hence, via (\ref{e:unramified-Langlands2}), the set of irreducible representations with unipotent cuspidal support is in bijection with $G^{\vee}$-conjugacy classes of triples $(s,n,\rho)$. Write $X(s,n,\rho)$ for the irreducible representation corresponding to $(s,n,\rho)$. Since triples are considered up to conjugation by $G^{\vee}$, we may assume without loss of generality that $s$ belongs to a fixed maximal torus $T^\vee$ of $G^\vee$. There is a polar decomposition $T^\vee=T_c^\vee T_{\mathbb R}^\vee$, where $T^{\vee}_c$ is compact and $T^{\vee}_{\RR}$ is a vector group, see (\ref{eq:real}). We say that $s$ is \emph{real} if $s\in T^\vee_{\mathbb R}$, and in this case, we say that $X(s,n,\rho)$ has \emph{real infinitesimal character}. As is well-known, the category of smooth $\bfG(\mathsf k)$-representation decomposes as a product of full subcategories, called \emph{Bernstein blocks}, see \cite[(2.10)]{Bernstein1984}. The category of representations with unipotent cuspidal support is a finite product of Bernstein blocks. The block containing the trivial representation (known as the \emph{principal block}) is precisely the category of \emph{Iwahori-spherical representations} (i.e. smooth representations generated by their Iwahori-fixed vectors). Under the Deligne-Langlands-Lusztig correspondence, the irreducible Iwahori-spherical representations correspond to the triples $(s,n,\rho)$ for which $\rho$ is of \emph{Springer type}. This means that $\rho$ occurs in the permutation representation of $A(s,n)$ in the top cohomology group of the variety of Borel subalgebras of $\mathfrak g^\vee$ that contain $n$ and are invariant under $\mathrm{Ad}(s)$. Notably, the set of irreducible representations with unipotent cuspidal support is the smallest class of representations which is a union of $L$-packets and contains all Iwahori-spherical representations, see Theorem \ref{thm:Langlands}. 
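By way of a standard example, included here only for orientation (it will not be used later): let $\mathbf G=\mathrm{PGL}_2$, so that $G^\vee=\mathrm{SL}_2(\mathbb C)$. Setting
\[ s=\begin{pmatrix} q^{1/2} & 0\\ 0 & q^{-1/2}\end{pmatrix}, \qquad n=\begin{pmatrix} 0 & 1\\ 0 & 0\end{pmatrix}, \]
a direct computation gives $\mathrm{Ad}(s)n=qn$, so the pair $(s,n)$ is an unramified Langlands parameter in the sense of (\ref{e:unramified-Langlands2}), and $(s,0)$ is a second unramified parameter with the same semisimple part. Under the Deligne-Langlands-Lusztig correspondence recalled below, these two parameters correspond (with trivial $\rho$) to the Steinberg representation and to the trivial representation of $\mathbf{G}(\mathsf k)$, respectively.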
An important role will be played by the \emph{Aubert-Zelevinsky} involution \cite{Au}, denoted $X \mapsto \mathrm{AZ}(X)$, see Section \ref{sec:AZduality}. This is an involution on the Grothendieck group of finite-length smooth $\mathbf{G}(\sfk)$-representations which preserves the Grothendieck group of each Bernstein block and carries irreducibles to irreducibles (up to a sign). In the case of representations with unipotent cuspidal support, it preserves the semisimple parameter $s$. For example, $\mathrm{AZ}$ takes the trivial representation to the Steinberg discrete series representation, and more generally, a spherical representation to the unique generic representation (in the sense of admitting Whittaker models) with the same semisimple parameter. A fundamental invariant attached to an admissible representation $X$ is its \emph{wavefront set}. In its classical form, the wavefront set $\WF(X)$ of $X$ is a collection of nilpotent $\bfG(\sfk)$-orbits in the Lie algebra $\mathfrak g(\sfk)$: these are the maximal orbits for which the Fourier transforms of the corresponding orbital integrals contribute to the Harish-Chandra-Howe local character expansion of the distribution character of $X$, \cite[Theorem 16.2]{HarishChandra1999}. In this paper, we consider two coarser invariants (see Section \ref{s:wave} for the precise definitions). The first of these invariants is the \emph{algebraic wavefront set}, denoted $\hphantom{ }^{\bar{\sfk}}\WF(X)$. This is a collection of nilpotent orbits in $\mathfrak{g}(\bark)$, see for example \cite[p. 1108]{Wald18} (where it is simply referred to as the `wavefront set' of $X$). The second invariant is a natural refinement $^K\WF(X)$ of $\hphantom{ }^{\bar{\sfk}}\WF(X)$ called the \emph{canonical unramified wavefront set}, defined recently in \cite{okada2021wavefront}. This is a collection of nilpotent orbits $\mathfrak g(K)$ (modulo a certain equivalence relation $\sim_A$). The relationship between these three invariants is as follows: the algebraic wavefront set $\hphantom{ }^{\bar{\sfk}}\WF(X)$ is deducible from the usual wavefront set $\WF(X)$ as well as the canonical unramified wavefront set $^K\WF(X)$. It is not known whether the canonical unramified wavefront set is deducible from the usual wavefront set, but we expect this to be the case, see \cite[Section 5.1]{okada2021wavefront} for a careful discussion of this topic. The final ingredient that we need in order to state our main result is the duality map $d$ defined by Spaltenstein in \cite[Proposition 10.3]{Spaltenstein} (see also \cite[\S13.3]{Lusztig1984} and \cite[Appendix A]{BarbaschVogan1985}), and its refinement $d_A$ due to Achar \cite{Acharduality}. Suppose $G$ is the complex reductive group with dual $G^\vee$. Let $\mathcal N_o$ be the set of nilpotent orbits in the Lie algebra $\mathfrak g$ and let $\mathcal N_{o,\bar c}$ be the set of pairs $(\mathbb O,\bar C)$ consisting of a nilpotent orbit $\mathbb O\in \mathcal N_o$ and a conjugacy class $\bar C$ in $\bar A(\mathbb O)$, Lusztig's canonical quotient of the $G$-equivariant fundamental group $A(\OO)$ of $\OO$. Let $\mathcal N^\vee_o$ and $\mathcal N^\vee_{o,\bar{c}}$ be the corresponding sets for $G^\vee$. The duality $d$ is a map $d: \cN^{\vee}_o \to \cN_o$ whose image consists of Lusztig's special orbits. The duality $d_A$ is a map \[d_A: \mathcal N^\vee_{o,\bar c}\rightarrow \mathcal N_{o,\bar c} \] satisfying certain properties, see section \ref{subsec:nilpotent}. 
One of these properties is: \[d_A(\OO^\vee,1)=(d(\OO^\vee),\bar C'), \] where $\bar{C}'$ is a conjugacy class in $\bar{A}(d(\OO^{\vee}))$ (which is trivial in the case when $\OO^\vee$ is special in the sense of Lusztig). In \cite[Section 5.1]{okada2021wavefront}, it is shown that there is bijection between $\mathcal N_{o,\bar c}$ and the set of $\sim_A$-classes of unramified nilpotent orbits $\mathcal N_o(K)$ of $\bfG(K)$. This is recalled in section \ref{subsec:nilpotent}. This way, we can think of $d_A(\OO^\vee, \bar{C})$ as a class in $\mathcal N_o(K)/\sim_A$. \begin{theorem}[See Theorems \ref{thm:realwf}, \ref{cor:wfbound} below]\label{t:main} Let $X=X(s,n,\rho)$ be an irreducible smooth Iwahori-spherical $\bfG(\mathsf k)$-representation with real infinitesimal character and let $\AZ(X) = X(s,n',\rho')$. \begin{enumerate} \item The canonical unramified wavefront set $^K\WF(X)$ is a singleton, and \[^K\WF(X) = d_A(\OO^{\vee}_{\AZ(X)},1),\] where $\OO^{\vee}_{\AZ(X)}$ is the $G^\vee$-orbit of $n'$ and $d_A$ is the duality map defined by Achar. In particular, \[\hphantom{ }^{\bar{\sfk}}\WF(X) = d(\OO^\vee_{\AZ(X)}).\] \item Suppose $X=X(q^{\frac 12 h^\vee},n,\rho)$ where $h^\vee$ is the neutral element of a Lie triple attached to a nilpotent orbit $\OO^\vee\subset \mathfrak g^\vee$. Then \[d_A(\OO^{\vee}, 1) \leq_A \hphantom{ } ^K\WF(X), \] where $\leq_A$ is the partial order defined by Achar. In particular, \[d(\OO^{\vee}) \le \hphantom{ }^{\bar{\sfk}}\WF(X).\] \end{enumerate} \end{theorem} The proof relies on the local character expansion ideas of Barbasch and Moy \cite{barmoy} and the subsequent refinements of the third-named author \cite{okada2021wavefront}, together with a detailed analysis of the Springer correspondence, nilpotent orbits, and the combinatorics of the Achar duality. We believe that Theorem \ref{t:main} should remain true for all irreducible representations with unipotent cuspidal support with real infinitesimal character. {However, the combinatorics in the more general case are more involved as they rely on the generalized Springer correspondence; moreover, in order to obtain the necessary branching results to parahoric subgroups, one needs to use the correspondences with Lusztig's graded affine Hecke algebras, and this adds another layer of complexity.} {Additionally, there are examples that show that the statement of Theorem \ref{t:main}(1) does not hold if $s$ is not real.} We intend to address these generalizations in future work. There is a vast literature on the computation of the (mainly algebraic) wavefront set of representations of $p$-adic groups. We mention only two examples related to the unipotent representations. In \cite{MW87}, M\oe glin and Waldspurger introduced the notion of generalized Whittaker models, which they used to compute the wavefront sets for representations of $\mathrm{GL}(n)$, small-rank (in the sense of Howe) representations of $\mathrm{Sp}(2n)$, and, in the notation of Theorem \ref{t:main}, the case $\frac 12 h^\vee=\rho$, the infinitesimal character where the trivial $\mathbf G(\mathsf k)$-representation occurs. For $\bfG=\mathrm{SO}(2n+1)$, Waldspurger has computed the algebraic wavefront sets of all irreducible tempered representations with unipotent cuspidal support \cite[Th\'eor\`eme 2]{Wald20} and of their $\AZ$-duals \cite[Th\'eor\`eme, p.1108]{Wald18}. To use Theorem \ref{t:main} as a tool for computing wavefront sets, we need an algorithm for computing the $\AZ$-dual of an irreducible representation. 
For example, if $X$ is an irreducible spherical representation (in the sense of having nonzero fixed vectors under the action of $\mathbf G(\mathfrak o)$) with Satake parameter $s$, then $\mathsf{AZ}(X)=X(s,n',\rho')$ admits nonzero Whittaker models, and it is known that this is equivalent to the condition that $G^\vee(s)n'$ is the unique open orbit in $\mathfrak g^\vee_q$ (section \ref{s:unip-cusp}). Denote $\OO^\vee_s=G^\vee n'$, i.e., the $G^\vee$-saturation of the open $G^\vee(s)$-orbit in $\mathfrak g^\vee_q$. Then an immediate consequence of Theorem \ref{t:main} is the following. \begin{cor} The algebraic wavefront set of the irreducible spherical $\mathbf G(\mathsf k)$-representation $X(s)$ with real Satake parameter $s$ is \[\hphantom{ }^{\bar{\sfk}}\WF(X(s)) = d(\OO^\vee_s).\] \end{cor} While this result was expected for a long time (by analogy with real groups for example), as far as we know, this is the first proof for all split $p$-adic groups. In general, it is a difficult problem to compute the $\mathsf{AZ}$-dual. Evens and Mirkovi\'c \cite[Theorem 0.1]{EM} proved that for Iwahori-spherical representations, $\AZ$ admits a geometric description in terms of the Fourier-Deligne transform on irreducible perverse sheaves which is computable in principle via the algorithms in \cite[\S2]{Lusztig2010}. For $\bfG=\mathrm{GL}(n)$, an algorithm involving Zelevinsky multisegments was given in \cite{MW-duality}. For split symplectic and odd orthogonal groups, an explicit combinatorial algorithm for computing $\AZ$ was recently announced in \cite{AtobeMinguez}. We also mention the recent progress by Waldspurger \cite{Wald19} on the generalized Springer correspondence for $\mathrm{Sp}(2n)$ which provides another perspective on $\AZ$-duality (more in line with our methods in this paper). In a sequel to this paper, we will use the wavefront set results in order to give a new characterization of the anti-tempered unipotent Arthur packets and a $p$-adic analogue of the weak Arthur packets defined for real groups in \cite{AdamsBarbaschVogan}. \subsection{Acknowledgments} The authors would like to thank Kevin McGerty and David Vogan for many helpful conversations. The authors would also like to thank Anne-Marie Aubert, Colette M\oe glin, David Renard, and Maarten Solleveld for their helpful comments and corrections on an earlier draft of this paper. The first and second authors were partially supported by the Engineering and Physical Sciences Research Council under grant EP/V046713/1. The third author was supported by Aker Scholarship. \section{Preliminaries}\label{sec:preliminaries} Let $\bfG$ be a connected reductive algebraic group defined over $\mathbb{Z}$, and let $\bfT \subset \mathbf{G}$ be a maximal torus. For any field $F$, we write $\mathbf{G}(F)$, $\mathbf{T}(F)$, etc. for the groups of $F$-rational points. The $\CC$-points are denoted by $G$, $T$, etc. Write $X^*(\mathbf{T},\bark)$ (resp. $X_*(\mathbf{T},\bark)$) for the lattice of algebraic characters (resp. co-characters) of $\mathbf{T}(\bark)$, and write $\Phi(\mathbf{T},\bark)$ (resp. $\Phi^{\vee}(\mathbf{T},\bark)$) for the set of roots (resp. co-roots). Let $$\mathcal R=(X^*(\mathbf{T},\bark), \ \Phi(\mathbf{T},\bark),X_*(\mathbf{T},\bark), \ \Phi^\vee(\mathbf{T},\bark), \ \langle \ , \ \rangle)$$ be the root datum corresponding to $\mathbf{G}$, and let $W$ be the associated (finite) Weyl group. Let $\mathbf{G}^\vee$ be the Langlands dual group of $\bfG$, i.e.
the connected reductive algebraic group corresponding to the root datum $$\mathcal R^\vee=(X_*(\mathbf{T},\bark), \ \Phi^{\vee}(\mathbf{T},\bark), X^*(\mathbf{T},\bark), \ \Phi(\mathbf{T},\bark), \ \langle \ , \ \rangle).$$ Set $T^\vee=X^*(\bfT,\bark)\otimes_\ZZ \CC^\times$, regarded as a maximal torus in $G^\vee$ with Lie algebra $\mathbf{\mathfrak t}^\vee=X^*(\bfT,\bark)\otimes_{\mathbb Z} \mathbb C$, a Cartan subalgebra of the Lie algebra $\mathbf{\mathfrak g}^\vee$ of $\bfG^\vee$. Define \begin{align}\label{eq:real} \begin{split} T^\vee_{\mathbb R} &=X^*(\bfT,\bark)\otimes_{\mathbb Z} {\mathbb R}_{>0}\\ \mathbf{\mathfrak t}_{\mathbb R}^\vee &= X^*(\bfT,\bark)\otimes_{\mathbb Z} \mathbb R\\ T^\vee_c &=X^*(\bfT,\bark)\otimes_{\mathbb Z} S^1 \end{split} \end{align} There is a polar decomposition $T^\vee=T^\vee_c T ^\vee_{\mathbb R}$. If $H$ is a complex group and $x$ is an element of $H$ or $\fh$, we write $H(x)$ for the centralizer of $x$ in $H$, and $A_H(x)$ for the group of connected components of $H(x)$. If $S$ is a subset of $H$ or $\fh$ (or indeed, of $H \cup \fh$), we can similarly define $H(S)$ and $A_H(S)$. We will sometimes write $A(x)$, $A(S)$ when the group $H$ is implicit. Write $\mathcal B^\vee$ for the flag variety of $G^\vee$, i.e. the variety of Borel subgroups $B^{\vee} \subset G^{\vee}$. Note that $\mathcal{B}^{\vee}$ has a natural left $G^{\vee}$-action. For $g\in G^\vee$, write $$\mathcal B^\vee_g = \{B^\vee\in \mathcal B^\vee \mid g\in B^\vee \}.$$ (this coincides with the subvariety of Borels fixed by $g$). Similarly, for $x\in \mathfrak g^\vee$, write $$\mathcal B^\vee_x = \{B^\vee\in \mathcal B^\vee \mid x\in \mathfrak b^\vee \}.$$ If $S$ is a subset of $G^{\vee}$ or $\fg^{\vee}$ (or indeed of $G^{\vee} \cup \fg^{\vee}$), write $$\mathcal B^\vee_S = \bigcap_{x\in S} \mathcal{B}^{\vee}_x.$$ The singular cohomology group $H^i(\mathcal B^\vee_S,\CC) = H^i(\mathcal{B}^{\vee}_S)$ carries an action of $A(S)=A_{G^\vee}(S)$. For an irreducible representation $\rho\in\mathrm{Irr}(A(S)))$, let $H^i(\mathcal B^\vee_S)^\rho := \Hom_{A(S)}(\rho,H^i(\mathcal{B}^{\vee}_S))$. Write $H^{\mathrm{top}}(\mathcal{B}^{\vee}_S)$ for the top-degree nonzero cohomology group and $H^{\bullet}(\mathcal{B}_S^{\vee})$ for the alternating sum of all cohomology groups. We will often consider the subset \begin{equation}\label{eq:defofIrr0} \mathrm{Irr}(A(S))_0 := \{\rho \in \mathrm{Irr}(A(S)) \mid H^{\mathrm{top}}(\mathcal{B}_S^{\vee})^{\rho} \neq 0\}. \end{equation} Let $\mathcal C(\bfG(\mathsf k))$ be the category of smooth complex $\bfG(\mathsf k)$-representations and let $\Pi(\mathbf{G}(\mathsf k)) \subset \mathcal C(\bfG(\mathsf k))$ be the set of irreducible objects. Let $R(\bfG(\mathsf k))$ denote the Grothendieck group of $\mathcal C(\bfG(\mathsf k))$. \subsection{Nilpotent orbits}\label{subsec:nilpotent} Let $\mathcal N$ be the functor which takes a field $F$ to the set of nilpotent elements of $\mf g(F)$. By `nilpotent' in this context we mean the unstable points (in the sense of GIT) with respect to the adjoint action of $\bfG(F)$, see \cite[Section 2]{debacker}. For $F$ algebraically closed this coincides with all the usual notions of nilpotence. Let $\mathcal N_o$ be the functor which takes $F$ to the set of orbits in $\mathcal N(F)$ under the adjoint action of $\bfG(F)$. When $F$ is $\sfk$ or $K$, we view $\mathcal N_o(F)$ as a partially ordered set with respect to the closure ordering in the topology induced by the topology on $F$. 
When $F$ is algebraically closed, we view $\mathcal N_o(F)$ as a partially ordered set with respect to the closure ordering in the Zariski topology. For brevity we will write $\mathcal N(F'/F)$ (resp. $\mathcal N_o(F'/F)$) for $\mathcal N(F\to F')$ (resp. $\mathcal N_o(F\to F')$) where $F\to F'$ is a morphism of fields. For $(F,F')=(\sfk,K)$ (resp. $(\sfk,\bark)$, $(K,\bark)$), the map $\mathcal N_o(F'/F)$ is strictly increasing (resp. strictly increasing, non-decreasing). We will simply write $\mathcal N$ for $\mathcal N(\CC)$ and $\mathcal N_o$ for $\mathcal N_o(\CC)$. In this case we also define $\mathcal N_{o,c}$ (resp. $\mathcal N_{o,\bar c}$) to be the set of all pairs $(\OO,C)$ such that $\OO\in \mathcal N_o$ and $C$ is a conjugacy class in the fundamental group $A(\OO)$ of $\OO$ (resp. Lusztig's canonical quotient $\bar A(\OO)$ of $A(\OO)$, see \cite[Section 5]{Sommers2001}). There is a natural map \begin{equation} \mf Q:\mathcal N_{o,c}\to\mathcal N_{o,\bar c}, \qquad (\OO,C)\mapsto (\OO,\bar C) \end{equation} where $\bar C$ is the image of $C$ in $\bar A(\OO)$ under the natural homomorphism $A(\OO)\twoheadrightarrow \bar A(\OO)$. There are also projection maps $\pr_1: \cN_{o,c} \to \cN_o$, $\pr_1: \cN_{o,\bar c} \to \cN_o$. We will typically write $\mathcal N^\vee$, $\mathcal N^\vee_o, \cN^{\vee}_{o,c}$, and $\cN^{\vee}_{o,\bar c}$ for the sets $\mathcal N$, $\mathcal N_o, \cN_{o,c}$, and $\cN_{o,\bar c}$ associated to the Langlands dual group $G^\vee$. When we wish to emphasise the group we are working with we include it as a superscript e.g. $\mathcal N_o^\bfG(k)$. Recall the following classical result. \begin{lemma}[Corollary 3.5, \cite{Pommerening} and Theorem 1.5, \cite{Pommerening2}]\label{lem:Noalgclosed} Let $F$ be algebraically closed with good characteristic for $\bfG$. Then there is canonical isomorphism of partially ordered sets $\Theta_F:\mathcal N_o(F)\xrightarrow{\sim}\mathcal N_o$. \end{lemma} Write \begin{equation}\label{eq:dBV} d: \cN_0 \to \cN_0^{\vee}, \qquad d: \cN_0^{\vee} \to \cN_0. \end{equation} for the \emph{Barbasch-Lusztig-Spaltenstein-Vogan duality maps} (see \cite[Appendix A]{BarbaschVogan1985}). If $F$ is algebraically closed, we will also write \begin{equation} d: \cN_o(F) \to \cN_o^{\vee}(F), \qquad d: \cN_o^{\vee}(F) \to \cN_o(F) \end{equation} for the maps obtained by composing the maps (\ref{eq:dBV}) with the natural identifications $\mathcal N_o(F)\simeq \mathcal N_o$, $\mathcal N_o^\vee(F)\simeq \mathcal N_o^\vee$ of Lemma \ref{lem:Noalgclosed}. Write \begin{equation} d_S: \cN_{o,c} \twoheadrightarrow \cN^{\vee}_o, \qquad d_S: \cN^{\vee}_{o,c} \twoheadrightarrow \cN_o \end{equation} for the duality maps defined by Sommers in \cite[Section 6]{Sommers2001} and \begin{equation} d_A: \cN_{o,\bar c} \to \cN^{\vee}_{o,\bar c}, \qquad d_A: \cN^{\vee}_{o,\bar c} \to \cN_{o,\bar c} \end{equation} for the duality maps defined by Achar in (\cite[Section 1]{Acharduality}). We have the following compatibilities, which are clear from the definitions: \begin{itemize} \item $d(\OO) = d_S(\OO,1)$ . \item $d_S(\OO,C) = \pr_1\circ d_A(\OO,C)$. \end{itemize} There is a natural pre-order $\leq_A$ on $\mathcal N_{o,c}$ defined by $$(\OO,C)\le_A(\OO',C') \iff \OO\le \OO' \text{ and } d_S(\OO,C)\ge d_S(\OO',C').$$ Write $\sim_A$ for the equivalence relation on $\cN_{o,c}$ induced by this pre-order, i.e. 
$$(\OO_1,C_1) \sim_A (\OO_2,C_2) \iff (\OO_1,C_1) \leq_A (\OO_2,C_2) \text{ and } (\OO_2,C_2) \leq_A (\OO_1,C_1)$$ Write $[(\OO,C)]$ for the equivalence class of $(\OO,C) \in \cN_{o,c}$. The $\sim_A$-equivalence classes in $\cN_{o,c}$ coincide with the fibres of the projection map $\mf Q:\mathcal N_{o,c}\to\mathcal N_{o,\bar c}$ \cite[Theorem 1]{Acharduality}. So $\le_A$ descends to a partial order on $\mathcal N_{o,\bar c}$, also denoted by $\le_A$. The maps $d,d_S,d_A$ are all order reversing with respect to the relevant pre/partial orders. We also have the following easy result. \begin{lemma} \label{lem:injectiveachar} Let $\OO^\vee_1,\OO^\vee_2\in\mathcal N_o^\vee$. Then the following are true: \begin{enumerate} \item $\OO^\vee_1\le\OO^\vee_2$ if and only if $d_A(\OO^\vee_1,1)\ge_Ad_A(\OO^\vee_2,1)$, \item $\OO^\vee_1=\OO^\vee_2$ if and only if $d_A(\OO^\vee_1,1)=d_A(\OO^\vee_2,1)$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item $(\Rightarrow)$ Suppose $\OO^\vee_1\le\OO^\vee_2$. Then $d(\OO^\vee_1)\ge d(\OO^\vee_2)$. So we have \begin{align*}\pr_1(d_A(\OO^\vee_1,1)) &= d(\OO^\vee_1)\ge d(\OO^\vee_2) = \pr_1(d_A(\OO^\vee_2,1))\\ d_S(d_A(\OO^\vee_1,1)) &= \OO^\vee_1 \le \OO^\vee_2 = d_S(d_A(\OO^\vee_2,1)).\end{align*} Thus indeed $d_A(\OO^\vee_1,1)\ge_Ad_A(\OO^\vee_2,1)$. $(\Leftarrow)$ If $d_A(\OO^\vee_1,1)\ge_Ad_A(\OO^\vee_2,1)$ then $\OO^\vee_1 = d_S(d_A(\OO^\vee_1,1)) \le d_S(d_A(\OO^\vee_2,1)) = \OO^\vee_2$. \item This follows from part (i). \end{enumerate} \end{proof} \subsection{The Bruhat-Tits building} Let $\mathcal B(\bfG)$ denote the (enlarged) Bruhat-Tits building for $\bfG(\sfk)$. We use the notation $c\subseteq \mathcal B(\bfG)$ to indicate that $c$ is a face of $\mathcal B$. Given a maximal torus $\bfT$ defined and split over $\sfk$, write $\mathcal A(\bfT,\sfk)$ for the corresponding apartment in $\mathcal B(\bfG)$. For an apartment $\mathcal A$ of $\mathcal B(\bfG)$ and $\Omega\subseteq \mathcal A$ we write $\mathcal A(\Omega,\mathcal A)$ for the smallest affine subspace of $\mathcal A$ containing $\Omega$. We write $\Phi(\bfT,\sfk)$ (resp. $\Psi(\bfT,\sfk)$) for the set of roots of $\bfG(\sfk)$ (resp. affine roots) of $\bfT(\sfk)$ on $\bfG(\sfk)$. For $\psi\in \Psi(\bfT,\sfk)$ write $\dot\psi\in \Phi(\bfT,\sfk)$ for the gradient of $\psi$, and $W=W(\bfT,\sfk)$ for the Weyl group of $\bfG(\sfk)$ with respect to $\bfT(\sfk)$. Let $\widetilde W=W\ltimes X_*(\mathbf{T},\sfk)$ be the (extended) affine Weyl group. Write \begin{equation} \widetilde W\to W, \qquad w\mapsto \dot w \end{equation} for the natural projection map. Fix a special point $x_0$ of $\mathcal A(\bfT,k)$. The choice of special point $x_0$ of $\mathcal B(\bfG,K)$ fixes an inclusion $\Phi(\bfT_1,K)\to \Psi(\bfT_1,K)$ and an isomorphism between $\widetilde W$ and $N_{\bfG(K)}(\bfT_1(K))/\bfT_1(\mf O^\times)$. For a face $c\subseteq \mathcal A$ let $W_c$ be the subgroup of $\widetilde{W}$ generated by reflections in the hyperplanes through $c$ (equivalently $\mathcal A(c,\mathcal A)$). For a face $c\subseteq \mathcal B(\bfG)$ there is a subgroup $\bfP_c^+$ of $\bfG$ defined over $\mf o$ such that $\bfP_c^+(\mf o)$ is the stabiliser of $c$ in $\bfG(k)$. There is an exact sequence \begin{equation}\label{eq:parahoricses} 1 \to \bfU_c(\mf o) \to \bfP_c^+(\mf o) \to \bfL_c^+(\mathbb F_q) \to 1, \end{equation} where $\bfU_c(\mf o)$ is the pro-unipotent radical of $\bfP_c^+(\mf o)$ and $\bfL_c^+$ is the reductive quotient of the special fibre of $\bfP_c^+$ . 
Let $\bfL_c$ denote the identity component of $\bfL_c^+$, and let $\bfP_c$ be the subgroup of $\bf P_c^+$ defined over $\mf o$ such that $\bfP_c(\mf o)$ is the inverse image of $\bfL_c(\mathbb F_q)$ in $\bfP_c^+(\mf o)$. The torus $\bfT$ is in fact defined over $\mf o$, is a subgroup of $\bfP_c$ and the special fibre of $\bfT$, denoted $\bar\bfT$, is a $\mathbb F_q$-split torus of $\bfL_c$. Write $\Phi_c(\bar\bfT,\mathbb F_q)$ for the root system of $\bfL_c$ with respect to $\bar\bfT$. Then $\Phi_c(\bar\bfT,\mathbb F_q)$ naturally identifies with the set of $\psi\in\Psi(\bfT,k)$ that vanish on $\mathcal A(c,\mathcal A)$, and the Weyl group of $\bar\bfT$ in $\bfL_c$ is naturally isomorphic to $W_c$. The groups $\bfP_c$ obtained in this manner are called ($\sfk$-)\emph{parahoric subgroups} of $\bfG$. When $c$ is a chamber in the building, then we call $\bfP_c$ an \emph{Iwahori subgroup} of $G$. Now suppose $\mathbf{T}$ is defined over $\ZZ$ and let $\mathcal A = \mathcal A(\bfT,\sfk)$. Fix a special point $x_0$ in $\mathcal A$. Choose a set of simple roots $\Delta \subset \Phi(\bfT,\sfk)$, and let $\tilde\Delta\subseteq \Psi(\bfT,\sfk)$ be the corresponding set of extended simple roots (this depends on the choice of $x_0$). Let $c_0$ denote the chamber cut out by $\tilde\Delta$. Each subset $J\subseteq \tilde\Delta$ cuts out a face of $c_0$ which we denote by $c(J)$. We will need the following result from \cite{okada2021wavefront}. Recall that a \emph{pseudo-Levi} subgroup $L$ of $G$ is a subgroup arising as the centraliser of a semisimple group element. A pseudo-levi subgroup is \emph{standard} if contains our fixed maximal torus $T=\mathbf{T}(\CC)$. We will write $Z_L$ for the center of $L$. \begin{lemma} \cite[Proposition 4.17]{okada2021wavefront} There is a natural $W$-equivariant map \begin{equation} \Xi:\{\text{faces of } \mathcal A\} \rightarrow\{(L,tZ_L^\circ) \mid L\text{ a standard pseudo-Levi}, \ Z_{G}^\circ(tZ_L^\circ)=L\} \end{equation} where $c_1,c_2$ lie in the same fibre iff $\mathcal A(c_1,\mathcal A)+X_*(\bfT, \sfk)=\mathcal A(c_2,\mathcal A)+X_*(\bfT,\sfk)$. Moreover, if $\Xi(c) =(L,tZ_L^\circ)$ then $L$ is the complex reductive group with the same root datum as $\bfL_c(\mathbb F_q)$ (with respect to the tori $T$ and $\bar \bfT$ respectively). \end{lemma} \begin{rmk}\label{rmk:rootdata} In the statement of the lemma, the weight lattices naturally identify since $T=\bfT(\CC)$ and $\bar\bfT(\mathbb F_q)=\bfT(\mf o)/\bfT(1+\mf p)$. The root data identify via the map $$\Phi_c(\bar\bfT,\mathbb F_q)\xrightarrow{\sim}\{\dot\psi \mid \psi\in\Psi(\bfT,\sfk), \ \psi(c)=0\}\subseteq\Phi(\bfT,k)\xrightarrow{\sim}\Phi(\bfT,\CC).$$ \end{rmk} For a face $c\subseteq\mathcal B(\bfG)$ and $\Xi(c) = (L,tZ_L^\circ)$ there is a natural bijection of partially ordered sets $$\Theta_c:\mathcal N_o^{\bfL_c}(\overline{\mathbb F}_q)\xrightarrow{\sim} \mathcal N_o^{L}$$ induced by the isomorphism of root data in Remark \ref{rmk:rootdata}. \subsection{Nilpotent orbits over a maximal unramified extension} Let $\bfT$ be a maximal $\sfk$-split torus of $\bfG$ and $x_0$ be a special point in $\mathcal A(\bfT,K)$. 
In \cite[Section 2.1.5]{okada2021wavefront} the third-named author constructs a bijection $$\theta_{x_0,\bfT}:\mathcal N_o^{\bfG}(K)\xrightarrow{\sim}\mathcal N_{o,c}.$$ \begin{theorem} \label{lem:paramNoK} \cite[Theorem 2.20, Theorem 2.27, Proposition 2.29]{okada2021wavefront} The bijection $$\theta_{x_0,\bfT}:\mathcal N_o^{\bfG}(K)\xrightarrow{\sim}\mathcal N_{o,c}$$ is natural in $\bfT$, equivariant in $x_0$, and makes the following diagram commute: \begin{equation} \begin{tikzcd}[column sep = large] \mathcal N_o^{\bfG}(K) \arrow[r,"\theta_{x_0,\bfT}"] \arrow[d,"\mathcal N_o(\bar k/K)",swap] & \mathcal N_{o,c} \arrow[d,"\pr_1"] \\ \mathcal N_o(\bar k) \arrow[r,"\Lambda^{\bar k}"] & \mathcal N_o. \end{tikzcd} \end{equation} \end{theorem} The composition $$d_{S,\bfT}:= d_S\circ \theta_{x_0,\bfT}:\mathcal N_o(K) \to \mathcal N^\vee_o$$ is independent of the choice of $x_0$ and natural in $\bfT$ \cite[Proposition 2.32]{okada2021wavefront}. For $\OO_1,\OO_2\in \mathcal N_o(K)$ define $\OO_1\le_A\OO_2$ by $$\OO_1\le_A \OO_2 \iff \mathcal N_o(\bar k/K)(\OO_1) \le \mathcal N_o(\bar k/K)(\OO_2),\text{ and } d_{S,\bfT}(\OO_1)\ge d_{S,\bfT}(\OO_2)$$ and let $\sim_A$ denote the equivalence relation induced by this pre-order. This pre-order is independent of the choice of $\bfT$ and by Theorem \ref{lem:paramNoK}, the map $$\theta_{x_0,\bfT}:(\mathcal N_o(K),\le_A) \to (\mathcal N_{o,c},\le_A)$$ is an isomorphism of pre-orders. \begin{theorem} \label{thm:unramclasses} The composition $\mf Q\circ \theta_{x_0,\bfT}:\mathcal N_o(K)\to \mathcal N_{o,\bar c}$ descends to a (natural in $\bfT$) bijection $$\bar\theta_{\bfT}:\mathcal N_o(K)/\sim_A\to \mathcal N_{o,\bar c}$$ which does not depend on $x_0$. \end{theorem} By the construction of these maps, we have the following commutative diagram: \begin{equation} \label{eq:square} \begin{tikzcd} \mathcal N_o(K) \arrow[r,"{[\bullet]}"] \arrow[d,"\theta_{x_0,\bfT}",swap] & \mathcal N_o(K)/\sim_A \arrow[d,"\bar\theta_{\bfT}"] \\ \mathcal N_{o,c} \arrow[r,"\mf Q"] & \mathcal N_{o,\bar c}. \end{tikzcd} \end{equation} Define \begin{equation} \mathcal I_o = \{(c,\OO) \mid c\subset \mathcal B(\bfG),\OO\in\cN_o^{\bfL_c}(\overline{\mathbb F}_q)\}. \end{equation} There is a partial order on $\mathcal{I}_o$, defined by $$(c_1,\OO_1)\le(c_2,\OO_2) \iff c_1=c_2 \text{ and } \OO_1\le\OO_2.$$ In \cite[Section 4]{okada2021wavefront} the third author defines a strictly increasing surjective map $$\mathscr L:(\mathcal I_o,\le)\to(\mathcal N_o(K),\le).$$ The composition $[\bullet]\circ \mathscr L:(\mathcal I_o,\le)\to (\mathcal N_o(K)/\sim_A,\le_A)$ is also strictly increasing \cite[Corollary 4.7, Lemma 5.3]{okada2021wavefront}. Write $L_c$ for $\pr_1\circ\Xi(c)$. Define \begin{align} \mathcal{I}_{\tilde\Delta}&=\{(J,\OO) \mid J\subsetneq\tilde\Delta, \ \OO\in\mathcal{N}_o^{\bfL_{c(J)}}(\overline{\mathbb F}_q)\}, \\ \mathcal{K}_{\tilde\Delta}&=\{(J,\OO) \mid J\subsetneq\tilde\Delta, \ \OO\in\mathcal{N}_o^{L_{c(J)}}(\CC)\}. \end{align} Then $\mathcal I_{\tilde\Delta}\xrightarrow{\sim}\mathcal K_{\tilde\Delta}$ via $(J,\OO)\mapsto(J,\Theta_{c(J)}(\OO))$. Define \begin{equation} \mathbb L:\mathcal K_{\tilde\Delta} \to \mathcal N_{o,c} \end{equation} to be the map that sends $(J,\OO)$ to $(Gx,tZ_{G}^\circ(x))$ where $x\in\OO$ and $(L,tZ_{L}^\circ) = \Xi(c(J))$. Define $\overline{\mathbb L}=\mf Q\circ \mathbb L$.
\begin{prop} \label{prop:square} The diagram \begin{equation} \begin{tikzcd} \mathcal I_{\tilde\Delta} \arrow[r,"\sim"] \arrow[d,"\mathscr L"] & \mathcal K_{\tilde\Delta} \arrow[d,"\mathbb L"] \\ \mathcal N_o(K) \arrow[r,"\theta_{x_0,\bfT}"] & \mathcal N_{o,c} \end{tikzcd} \end{equation} commutes. \end{prop} \begin{proof} It is a straightforward observation from the definition of $\Xi$ in \cite[Theorem 4.16]{okada2021wavefront} that if $c_1\subseteq c_2$ are faces of $\mathcal A$, and $(L_i,t_iZ_{L_i}^\circ) = \Xi(c_i)$, $i=1,2$, then under the natural map $Z_{L_1}/Z_{L_1}^\circ\to Z_{L_2}/Z_{L_2}^\circ$, we have that $t_1Z_{L_1}^\circ\mapsto t_2Z_{L_2}^\circ$. Now suppose $(J_1,\OO_1),(J_2,\OO_2)\in \mathcal I_{\tilde\Delta}$ are such that $J_1\subseteq J_2$ and the saturation of $\OO_1$ in $\bfL_{c(J_2)}$ (which contains $\bfL_{c(J_1)}$ as a Levi) is $\OO_2$. Let $(L_i,t_iZ_{L_i}^\circ) = \Xi(c(J_i))$ and $\OO'_i = \Theta_{c(J_i)}(\OO_i)$ for $i=1,2$. Then the saturation of $\OO_1'$ in $L_2$ is $\OO_2'$. Let $x\in \OO_1'\subseteq \OO_2'$. We have that $\mathbb L(J_1,\OO_1') = (Gx,t_1Z_{G}^\circ(x))$ and $\mathbb L(J_2,\OO_2') = (Gx,t_2Z_G^\circ(x))$. But since $Z_{L_2}^\circ\subseteq Z_{L_1}^\circ \subseteq Z_G^\circ(x)$, we have that $t_1Z_G^\circ(x) = t_2Z_G^\circ(x)$. Therefore $\mathbb L(J_1,\OO_1') = \mathbb L(J_2,\OO_2')$. Since also $\mathscr L(c(J_1),\OO_1) = \mathscr L(c(J_2),\OO_2)$ we can reduce to checking that the diagram commutes for pairs $(J,\OO)$ where $\OO$ is distinguished. But the distinguished case follows by construction of the map $\theta_{x_0,\bfT}$ in \cite[Section 4]{okada2021wavefront}. \end{proof} \subsection{$W$-representations}\label{subsec:Wreps} Let $\sim$ denote the equivalence relation on $\mathrm{Irr}(W)$ defined by Lusztig in \cite[Section 4.2]{Lusztig1984}. The relation $\sim$ partitions $\mathrm{Irr}(W)$ into subsets called \emph{families}. Each family contains a unique special representation which we denote by $E(\phi)$. For $E\in\mathrm{Irr}(W)$ we write $\phi(E)$ for the family containing $E\otimes\mathrm{sgn}$. Let \begin{equation}\label{eq:Springer} \mathrm{Springer}:\mathrm{Irr}(W)\hookrightarrow \{(\OO,\rho) \mid \OO\in\mathcal N_o,\ \rho\in\mathrm{Irr}(A(\OO))\} \end{equation} be the \emph{Springer correspondence}. Define the subset \begin{equation}\label{eq:IrrA0}\mathrm{Irr}(A(\OO))_0 \subset \mathrm{Irr}(A(\OO))\end{equation} as in (\ref{eq:defofIrr0}) (taking $S=\{n\}$, where $n \in \OO$). Then the image of the map (\ref{eq:Springer}) is the subset $$\{(\OO,\rho) \mid \OO \in \cN_o, \ \rho \in \mathrm{Irr}(A(\OO))_0\}.$$ If $\mathrm{Springer}(E) = (\OO,\rho)$, we write $E(\OO,\rho):=E$ and $\OO(E,\CC) :=\OO$. We call $\OO(E,\CC)$ the \emph{Springer support} of $E$ (with respect to $G$). We write $\OO^\vee(E,\CC)$ for the Springer support of $E$ with respect to $G^\vee$, where we view $E$ as a representation of $W^\vee$ via the canonical isomorphism $W\xrightarrow{\sim} W^\vee$. It will always be clear from context which reductive group (up to root datum) we are working with, but occasionally we will have two reductive groups with isomorphic root data defined over different fields (e.g. $\bfL_c$ and $L_c$), and so we include the base field in the second argument to distinguish between the two cases. Thus if $E\in\mathrm{Irr}(W_c)$ then we write $\OO(E,\CC)$ for the Springer support with respect to $L_c$ and $\OO(E,\overline{\mathbb F}_q)$ for the Springer support with respect to $\bfL_c(\overline{\mathbb F}_q)$.
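For orientation, we recall the most familiar instance of this correspondence (a standard example, recorded here purely for illustration). If $G = GL_n(\CC)$, then $A(\OO)$ is trivial for every nilpotent orbit, and the map (\ref{eq:Springer}) becomes a bijection between $\mathrm{Irr}(S_n)$ and the set of nilpotent orbits, i.e. the set of partitions of $n$. In the normalization used in this paper, where Springer representations are realized in the top cohomology of Springer fibres (cf. the proof of Theorem \ref{thm:realwf}), the trivial representation corresponds to the regular orbit and the sign representation to the zero orbit.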
For each $J \subsetneq \widetilde{\Delta}$, write $W_J \subset W$ for the subgroup generated by the simple reflections $\dot J$. Note that $W_J = \dot W_{c(J)}$, the Weyl group of $L_{c(J)}$ and $\bfL_{c(J)}$. Define the set \begin{equation} \mathcal F_{\tilde\Delta}=\{(J,\phi) \mid J\subsetneq\tilde\Delta, \ \phi \text{ a family of $W_J$}\}. \end{equation} For $J\subsetneq\tilde\Delta$ and a family $\phi$ of $W_J$ write $\OO(\phi,\bullet)$ for $\OO(E(\phi),\bullet)$, where $\bullet\in\{\overline{\mathbb F}_q,\CC\}$. Clearly $\Theta_{c(J)}(\OO(\phi,\overline{\mathbb F}_q)) = \OO(\phi,\CC)$. \subsection{Representations with unipotent cuspidal support}\label{s:unip-cusp} For every face $c \subset \mathcal{B}(\mathbf{G})$, we get a parahoric subgroup $\mathbf{P}_c(\mf o) \subset \mathbf{G}(\mf o)$ with pro-unipotent radical $\bfU_c(\mf o)$ and reductive quotient $\bfL_c(\mathbb F_q)$ (see the paragraph preceding (\ref{eq:parahoricses})). If $X$ is a smooth admissible $\mathbf{G}(\sfk)$-representation, the space of invariants $X^{\bfU_c(\mf o)}$ is a finite-dimensional $\bfL_c(\mathbb F_q)$-representation. \begin{definition} Let $X$ be an irreducible $\mathbf{G}(\mathsf{k})$-representation. We say that $X$ has \emph{unipotent cuspidal support} if there is a parahoric subgroup $\bfP_c \subset \bfG$ such that $X^{\mathbf U_c(\mf o)}$ contains an irreducible Deligne-Lusztig cuspidal unipotent representation of ${\mathbf L}_c(\mathbb F_q)$. Write $\Pi^{\mathsf{Lus}}(\bfG(\mathsf k))$ for the subset of $\Pi(\bfG(\mathsf k))$ consisting of all such representations. \end{definition} Recall that an irreducible $\mathbf{G}(\mathsf{k})$-representation $V$ is \emph{Iwahori-spherical} if $V^{\mathbf{I}(\mathfrak {o})} \neq 0$ for some Iwahori subgroup $\bfI$ of $\bfG$. We note that all such representations have unipotent cuspidal support, corresponding to the case $\mathbf P_c=\mathbf I$ and the trivial representation of $\mathbf T(\mathbb F_q)$. We will now recall the classification of irreducible representations with unipotent cuspidal support. Write $\Phi(\bfG(\mathsf k))$ for the set of $G^\vee$-orbits (under conjugation) of triples $(s,n,\rho)$ such that \begin{itemize} \item $s\in G^\vee$ is semisimple, \item $n\in \mathfrak g^\vee$ such that $\operatorname{Ad}(s) n=q n$, \item $\rho\in \mathrm{Irr}(A_{G^{\vee}}(s,n))$ such that $\rho|_{Z(G^\vee)}$ is a multiple of the identity. \end{itemize} Without loss of generality, we may assume that $s\in T^\vee$. Note that $n\in\mathfrak g^\vee$ is necessarily nilpotent. The group $G^\vee(s)$ acts with finitely many orbits on the $q$-eigenspace of $\Ad(s)$ $$\mathfrak g_q^\vee=\{x\in\mathfrak g^\vee\mid \operatorname{Ad}(s) x=qx\}.$$ In particular, there is a unique open $G^\vee(s)$-orbit in $\mathfrak g_q^\vee$. Fix an $\mathfrak{sl}(2)$-triple $\{n^-,h,n\} \subset \fg^{\vee}$ with $h\in \mathfrak t^\vee_{\mathbb R}$ and set $$s_0:=sq^{-\frac{h}{2}}.$$ Then $\operatorname{Ad}(s_0)n=n$. The following theorem is a combination of several results: \cite[Theorems 7.12, 8.2, 8.3]{KL} for $\bfG$ adjoint and Iwahori-spherical representations, \cite[Corollary 6.5]{Lu-unip1} and \cite[Theorem 10.5]{Lu-unip2} for $\bfG$ adjoint and representations with unipotent cuspidal support, \cite[Theorem 3.5.4]{Re-isogeny} for $\bfG$ of arbitrary isogeny and Iwahori-spherical representations, and \cite{Sol-LLC} for $\bfG$ of arbitrary isogeny and representations with unipotent cuspidal support. See \cite[\S2.3]{AMSol} for a discussion of the compatibility between these classifications.
Define the subset $\mathrm{Irr}(A(s,n))_0 \subset \mathrm{Irr}(A(s,n))$ as in (\ref{eq:defofIrr0}) (taking $S=\{s,n\}$). \begin{theorem}[{Deligne-Langlands-Lusztig correspondence}]\label{thm:Langlands} Suppose that $\bfG$ is $\mathsf k$-split. There is a bijection $$\Phi(\bfG(\mathsf k))\xrightarrow{\sim} \Pi^{\mathsf{Lus}}(\bfG(\mathsf k)),\qquad (s,n,\rho)\mapsto X(s,n,\rho),$$ such that \begin{enumerate} \item $X(s,n,\rho)$ is tempered if and only if $s_0\in T_c^\vee$ and $\overline {G^\vee(s)n}=\mathfrak g_q^\vee$, \item $X(s,n,\rho)$ is square integrable (modulo the center) if and only if it is tempered and $Z_{G^{\vee}}(s,n)$ contains no nontrivial torus. \item $X(s,n,\rho)^{\mathbf I(\mf o)}\neq 0$ if and only if $\rho\in \mathrm{Irr}(A(s,n))_0$. \end{enumerate} \end{theorem} Denote by $\Phi(\bfG(\mathsf k))_0$ the subset of $ \Phi(\bfG(\mathsf k))$ for which $\rho\in \mathrm{Irr}(A(s,n))_0$. \begin{rmk} We note that the condition $\overline {G^\vee(s)n}=\mathfrak g_q^\vee$ in (1) is superfluous---it is a consequence of the condition $s_0 \in T_c^{\vee}$, see Lemma \ref{lem:orbitclosure}. It is included in Theorem \ref{thm:Langlands} for expository purposes. \end{rmk} \begin{rmk}\label{r:real} The semisimple parameter $s$ in Theorem \ref{thm:Langlands} plays a similar role in the representation theory of $\mathbf{G}(\sfk)$ as the \emph{infinitesimal character} $\lambda \in \mathfrak{t}^*/W$ of an irreducible $\mathbf{G}(\RR)$-representation. If $X = X(s,n,\rho)$, it is thus customary to call $s$ the \emph{infinitesimal character} of $X$. Pursuing this analogy, we say, following \cite{BM1}, that $s$ is \emph{real} (or that $X$ has \emph{real infinitesimal character}) if $s \in T_{\RR}^{\vee}$, see (\ref{eq:real}). \end{rmk} For $s\in W\backslash T^\vee$, write \begin{equation}\label{e:inf-char-packet} \begin{aligned} \Pi^{\mathsf{Lus}}_{s}(\bfG(\mathsf k)) &:= \{X(s,n,\rho) \mid (s,n,\rho) \in \Phi(\mathbf{G}(\mathsf{k})) \} \subset \Pi^{\mathsf{Lus}}(\mathbf{G}(\mathsf{k})),\\ \Pi^{\mathsf{Lus}}_{s}(\bfG(\mathsf k))_0 &:= \{X(s,n,\rho)\in \Pi^{\mathsf{Lus}}_{s}(\bfG(\mathsf k)) \mid \rho\in \mathrm{Irr}(A(s,n))_0\}. \end{aligned} \end{equation} By Theorem \ref{thm:Langlands}, there is a bijection between $\Pi^{\mathsf{Lus}}_{s}(\bfG(\mathsf k))$ and pairs $(\mathcal C,\mathcal E)$, where $\mathcal C$ ranges over the finite set of $G^\vee(s)$-orbits on $\mathfrak g^\vee_q$, and $\mathcal E$ is an irreducible $G^\vee(s)$-equivariant local system on $\mathcal C$ with trivial $Z(G^\vee)$-action. Recall the \emph{Iwahori-Hecke algebra} associated to $\mathbf{G}(\sfk)$ $$\mathcal H_{\mathbf{I}}=\{f\in C^\infty_c(\bfG(\mathsf k))\mid f(i_1gi_2)=f(g),\ i_1,i_2\in \mathbf I(\mf o)\}.$$ Multiplication in $\mathcal H_{\mathbf{I}}$ is given by convolution with respect to a fixed Haar measure of $\bfG(\mathsf k)$. Let $\mathcal C_{\mathbf{I}}(\bfG(\mathsf k))$ denote the Iwahori category, i.e. the full subcategory of $\mathcal C(\bfG(\mathsf k))$ consisting of representations $X$ such that $X$ is generated by $X^{\mathbf I(\mf o)}$. The simple objects in this category are the (irreducible) Iwahori-spherical representations. By the Borel-Casselman Theorem \cite[Corollary 4.11]{Bo}, there is an exact equivalence of categories \begin{equation}\label{eq:mI} m_{\mathbf{I}}: \mathcal C_{\mathbf{I}}(\bfG(\mathsf k))\to \mathrm{Mod}(H_{\mathbf{I}}), \qquad m_{\mathbf{I}}(V) = V^{\mathbf I(\mf o)}. 
\end{equation} This equivalence induces a group isomorphism \begin{equation}\label{eq:mIhom} m_{\mathbf{I}}: R_{\mathbf{I}}(\mathbf{G}(\sfk)) \xrightarrow{\sim} R(\mathcal{H}_{\mathbf{I}}), \end{equation} where $R_{\mathbf{I}}(\mathbf{G}(\sfk))$ (resp. $R(\mathcal{H}_{\mathbf{I}})$) is the Grothendieck group of $\mathcal{C}_{\mathbf{I}}(\mathbf{G}(\sfk))$ (resp. $\mathrm{Mod}(\mathcal{H}_{\mathbf{I}})$). The irreducible representations occurring in the sets $\Pi^{\mathsf{Lus}}_{s}(\bfG(\mathsf k))_0$ are precisely the irreducible objects in $ \mathcal C_{\mathbf{I}}(\bfG(\mathsf k))$. We note that the classification of $\bfG(\mathsf k)$-representations with real infinitesimal character is invariant under isogenies. More precisely, let \[f:\bfG'\to\bfG\] be an isogeny. Then $f$ induces an isogeny $T^\vee\to T'^\vee$ and hence a map $W\backslash T^\vee\to W\backslash T'^\vee$, see \cite[\S 5.3]{Re-euler}. Suppose $\tilde s\in W\backslash T'^\vee_{\mathbb R}$ and $s\in W\backslash T^\vee_{\mathbb R}$ correspond. We may identify $\mathfrak g^\vee= \mathfrak g'^\vee$ and $ \mathfrak g^\vee_q= \mathfrak g'^\vee_q$. Let $n\in \mathfrak g^\vee_q$ and $x\in \mathfrak t^\vee_{\mathbb R}$, $[x,n]=n$, such that $s$ (resp. $\tilde s$) equals the $q$-exponential of $x$ in $G^\vee$ (resp. $G'^\vee$). Then $\mathcal B^\vee_{s,n}=\mathcal B'^\vee_{\tilde s,n}=\mathcal B'^\vee_{x,n}.$ Since the center of $G^\vee$ (resp. $G'^\vee$) acts trivially on this variety, it follows that $\mathrm{Irr}(A(s,n))_0=\mathrm{Irr}(A(\tilde s,n))_0$. Hence there is a bijection \begin{equation}\label{e:real-identify} \Pi^{\mathsf{Lus}}_{\tilde s}(\bfG'(\mathsf k))_0\cong \Pi^{\mathsf{Lus}}_{s}(\bfG(\mathsf k))_0, \quad X'(\tilde s,n,\rho)\leftrightarrow X(s,n,\rho). \end{equation} More precisely, in terms of the underlying representation theory, let $H_{\mathbf{I}}$ and $H_{\mathbf{I}}'$ be the Iwahori-Hecke algebras of $\bfG$ and $\bfG'$, respectively. The isogeny $f$ induces an embedding of Iwahori-Hecke algebras \cite[\S1.4,1.5]{Re-isogeny} \[f_*: H_{\mathbf{I}}'\hookrightarrow H_{\mathbf{I}}. \] \begin{prop}[{\cite[Lemma 5.3.1]{Re-euler}}]\label{p:real-identify} With the notation above, the inclusion $f_*$ induces a bijection $\Pi^{\mathsf{Lus}}_{s}(\bfG(\mathsf k))_0\cong \Pi^{\mathsf{Lus}}_{\tilde s}(\bfG'(\mathsf k))_0$: the restriction of the irreducible $H_{\mathbf{I}}$-module $X(s,n,\rho)^{\mathbf{I}(\mathfrak o)}$ to $H_{\mathbf{I}}'$ is irreducible and it equals $X'(\tilde s,n,\rho)$. \end{prop} For each parameter $(s,n,\rho) \in \Phi(\mathbf{G}(\mathsf{k}))_0$, there is an associated \emph{standard representation} $Y(s,n,\rho) \in \mathcal{C}(\mathbf{G}(\mathsf{k}))$. If $\bfG$ is adjoint, the relevant results are \cite[Theorems 7.12, 8.2, 8.3]{KL}, see also \cite[\S8.1]{Chriss-Ginzburg}. 
There is an identity in the Grothendieck group $R(\mathbf{G}(\mathsf{k}))$ (see \cite[Theorem 8.6.15]{Chriss-Ginzburg} or \cite[Proposition 10.5]{Lu-gradedII}): \begin{equation}\label{e:multi} Y(s,n,\rho)=X(s,n,\rho)+\sum_{(n',\rho')} m_{(n,\rho),(n',\rho')} X(s,n',\rho'), \end{equation} where $m_{(n,\rho),(n',\rho')}\in \mathbb Z_{\ge 0}$ and $n'$ ranges over the set of representatives of $G^\vee(s)$-orbits in $\mathfrak g^\vee_q$ such that $n\in \partial(G^\vee(s)n')$, and $\rho'\in \mathrm{Irr}(A(s,n'))_0.$ If $s$ is real, in light of Proposition \ref{p:real-identify}, we can extend (\ref{e:multi}) to $\bfG'$ by defining the standard $\bfG'(\sfk)$-module \[Y'(\tilde s,n,\rho):=m_{\bfI'}^{-1}( Y(s,n,\rho)^{\bfI}|_{\mathcal H_{\bfI}'}).\] \subsection{The Iwahori-Hecke algebra}\label{s:Hecke} Let $c_0$ be the chamber of $\mathcal A(\bfT,k)$ cut out by $\tilde \Delta$ and let $\bfI$ be the Iwahori subgroup corresponding to $c_0$. Suppose $\mathbf{P}_c$ is a parahoric subgroup containing $\mathbf{I}$ with pro-unipotent radical $\mathbf{U}_c$ and reductive quotient $\mathbf{L}_c$. The finite Hecke algebra $\mathcal H_{c}$ of $\bfL_c(\mathbb F_q)$ embeds as a subalgebra of $\mathcal H_{\bfI}$. For $X\in \mathcal C_{\bfI}(\bfG(\sfk))$, the Moy-Prasad theory of unrefined minimal $K$-types \cite{moyprasad} implies that the finite dimensional $\bfL_c(\mathbb F_q)$-representation $X^{\bfU_{c}(\mf o)}$ is a sum of principal series unipotent representations and so corresponds to an $\mathcal H_{c}$-module with underlying vector space $$(X^{\bfU_c(\mf o)})^{\bfI(\mf o)/\bfU_c(\mf o)} = X^{\bfI(\mf o)}.$$ The $\mathcal H_c$-module structure obtained in this manner coincides naturally with that of $$\Res_{\mathcal H_c}^{\mathcal H_\bfI}m_\bfI(X).$$ Let $$\mathfrak J_c:\mathcal H_c\to \CC[W_c]$$ be the isomorphism introduced by Lusztig in \cite{lusztigdeformation}. Given any $\mathcal H_c$-module $M$ we can use the isomorphism $\mathfrak J_c$ to obtain a $W_c$-representation which we denote by $M_{q\to1}$. Define \begin{equation} \label{eq:Wcqto1} X|_{W_c}:=(\Res_{\mathcal H_c}^{\mathcal H_\bfI}m_\bfI(X))_{q\to 1}. \end{equation} We will need to recall some structural facts about the Iwahori-Hecke algebra. Let $\CX:=X_*(\mathbf T,\bar{\mathsf k})=X^*(\mathbf{T}^\vee,\bar{\sfk})$ and consider the (extended) affine Weyl group $\widetilde{W} := W \ltimes \CX$. Let $$S := \{s_{\alpha} \mid \alpha \in \Delta\} \subset W$$ denote the set of simple reflections in $W$. For each $x \in \CX$, write $t_x \in \widetilde{W}$ for the corresponding translation. If $W$ is irreducible, let $\alpha_0$ be the highest root and set $s_0=s_{\alpha_0} t_{-\alpha_0^\vee}$, $S^a=S\cup\{s_0\}$. If $W$ is a product, define $S^a$ by adjoining to $S$ the reflections $s_0$, one for each irreducible factor of $W$. Consider the length function $\ell: \widetilde{W} \to \ZZ_{\geq 0}$ extending the usual length function on the affine Weyl group $W^a=W\ltimes \mathbb Z \Phi^\vee$ \[\ell(w t_x)=\sum_{\substack{\alpha\in \Phi^+\\w(\alpha)\in \Phi^-}} |\langle x,\alpha\rangle+1|+\sum_{\substack{\alpha\in \Phi^+\\w(\alpha)\in \Phi^+}} |\langle x,\alpha\rangle|. \] For each $w \in \widetilde{W}$, choose a representative $\dot w$ in the normalizer $N_{\bfG(\mathsf k)}(\mathbf I(\mf o))$. 
Recall the Bruhat decomposition \[\bfG(\mathsf k)=\bigsqcup_{w\in \widetilde W} \mathbf I(\mf o) \dot w \mathbf I(\mf o), \] For each $w \in \widetilde{W}$, write $T_w \in \mathcal{H}_{\mathbf{I}}$ for the characteristic function of $\mathbf I(\mf o) \dot w \mathbf I(\mf o) \subset \mathbf{G}(\sfk)$. Then $\{T_w\mid w\in \widetilde W\}$ forms a $\mathbb C$-basis for $\mathcal H_{\mathbf{I}}.$ The relations on the basis elements $\{T_w \mid w \in \widetilde{W}\}$ were computed in \cite[Section 3]{IM}: \begin{equation}\label{eq:relations} \begin{aligned} &T_w\cdot T_{w'}=T_{ww'}, \qquad \text{if }\ell(ww')=\ell(w)+\ell(w'),\\ &T_s^2=(q-1) T_s+q,\qquad s\in S^a. \end{aligned} \end{equation} Let $R$ be the ring $\CC[v,v^{-1}]$ and for $a\in \CC^*$ let $\CC_a$ be the $R$-module $R/(v-a)$. Let $\mathcal H_{\bfI,v}$ denote the Hecke algebra with base ring $R$ instead of $\CC$ and where $q$ is replaced with $v^2$ in the relations (\ref{eq:relations}). By specializing $v$ to $\sqrt{q}$, $1$, we obtain \begin{equation} \mathcal H_{\bfI,v}\otimes_R\CC_{\sqrt q} \cong \mathcal H_{\bfI}, \qquad \mathcal H_{\bfI,v}\otimes_R\CC_1 \cong \CC \widetilde W. \end{equation} Suppose $Y=Y(s,n,\rho)$ is a standard Iwahori-spherical representation, see Section \ref{s:unip-cusp}, and let $M=m_\bfI(Y)$. By \cite[Section 5.12]{KL} there is a $\mathcal H_{\bfI,v}$ module $M_v$, free over $R$, such that $$M_v\otimes_R\CC_{\sqrt q} \cong M$$ as $\mathcal H_\bfI$-modules. We can thus construct the $\widetilde W$-representation $$Y_{q\to 1}:= M_v\otimes_R \CC_{1}.$$ Let $R(\widetilde{W})$ be the Grothendieck group of $\mathrm{Rep}(\widetilde W)$. Since the standard modules form a $\ZZ$-basis for $R_\bfI(\bfG(\sfk))$, the Grothendieck group of $\mathcal C_\bfI(\bfG(\sfk))$, there is a unique homomorphism \begin{equation}\label{eq:qto1hom}(\bullet)_{q\to 1}: R_\bfI(\bfG(\sfk)) \to R(\widetilde W)\end{equation} extending $Y \mapsto Y_{q \to 1}$. Moreover, since $$\Res_{W_c}^{\widetilde W} Y_{q\to 1} = Y|_{W_c}$$ for the Iwahori-spherical standard modules we have that \begin{equation} \label{eq:heckerestriction} \Res_{W_c}^{\widetilde W}X_{q\to 1} = X|_{W_c} \end{equation} for all $X\in R_\bfI(\bfG(\sfk))$. Finally, let $c$ be the face of $c_0$ corresponding to $\Delta\subseteq\tilde\Delta$. Then $W_c = W$ and equation \ref{eq:Wcqto1} in this special case gives rise to a map \begin{equation} \label{eq:restrictiontoW} |_W:R_\bfI(\bfG(\sfk))\to R(W) \end{equation} where $R(W)$ is the Grothendieck group of $\mathrm{Rep}(W)$. There is a second description of $\mathcal{H}_{\mathbf{I}}$ in terms of generators and relations, called \emph{the Bernstein presentation} \cite{Lu-graded}. Set \[\CX_+=\{x\in \CX\mid \langle\alpha,x\rangle\ge 0,\text{ for all }\alpha\in \Phi^+\}. \] For $x\in \CX_+$, set $\theta_x=q^{-\ell(t_x)/2} T_{t_x}$. If $x\in \CX$, write $x=x_1-x_2$ for $x_1,x_2\in \CX_+$ and define \[\theta_x=\theta_{x_1}\theta_{x_2}^{-1}. \] Then $\theta_x\theta_{x'}=\theta_{x+x'}$ for all $x,x'\in \CX$ and $\theta_0=1$. Let $\mathcal A$ denote the abelian subalgebra of $\mathcal H_{\mathbf{I}}$ generated by the elements $\theta_x$. The Bernstein basis of $\mathcal H_{\mathbf{I}}$ is given by $T_w \theta_x$, for $w\in W$ and $x\in \CX$. 
The cross-relations are \cite{Lu-graded} \begin{equation} T_{s_\alpha}\theta_x-\theta_{ s_\alpha(x)}T_{ s_\alpha}=(q-1)\frac{\theta_x-\theta_{s_\alpha(x)}}{1-\theta_{-\alpha^\vee}}, \qquad s_{\alpha} \in S. \end{equation} \subsection{Aubert-Zelevinsky duality}\label{sec:AZduality} There are three interrelated involutions that appear in the study of smooth $\mathbf{G}(\sfk)$-representations. The first is an algebra involution $\tau$ of the Hecke algebra $\mathcal{H}_{\mathbf{I}}$. It is defined on generators by \[\tau(T_w)=(-1)^{\ell(w_1)}q^{\ell(w)} T_{w^{-1}}^{-1}, \qquad w=w_1 t_x,\ w_1\in W,\ x\in \CX. \] Let $w_0$ denote the longest element of $W$. Since $$\ell(w_0 t_x)=\ell(w_0)+\ell(t_x)=\ell(t_{w_0(x)})+\ell(w_0), \text{ for every } x\in \CX_+$$ we have that \[T_{t_{w_0(x)}}=T_{w_0} T_{t_x} T_{w_0}^{-1},\qquad x\in \CX_+. \] Then if $x\in \CX_+$ \[\tau(T_{t_{w_0(x)}})=q^{\ell(t_{w_0(x)})} T_{t_{-w_0(x)}}^{-1}=q^{\ell(t_x)/2} \theta_{w_0(x)}. \] Hence \[\tau(\theta_x)= T_{w_0} \theta_{w_0(x)} T_{w_0}^{-1}. \] So on the Bernstein generators, $\tau$ is given by \begin{equation}\label{e:tau-Bernstein} \tau(T_w)=(-q)^{\ell(w)} T_{w^{-1}}^{-1},\ w\in W,\qquad \tau(\theta_x)=T_{w_0} \theta_{w_0(x)} T_{w_0}^{-1},\ x\in \CX. \end{equation} If $M$ is a simple $\mathcal H_{\mathbf{I}}$-module, write $M^\tau$ for the module which is equal to $M$ as a vector space but with $\mathcal{H}_{\mathbf{I}}$-action twisted by $\tau$. The center of $\mathcal H_{\mathbf{I}}$ is (\cite[Proposition 3.11]{Lu-graded}) \[Z(\mathcal H_{\mathbf{I}})=\mathcal A^W, \] and therefore by (\ref{e:tau-Bernstein}) \begin{equation} \tau(z)=z, \text{ for all }z\in Z(\mathcal H_{\mathbf{I}}). \end{equation} Every simple $\mathcal H_{\mathbf{I}}$-module $M$ is necessarily finite dimensional. So by Schur's Lemma, $Z(\mathcal H_{\mathbf{I}})$ acts on $M$ by a central character $\chi_M\in W\backslash T^\vee$. Then \begin{equation}\label{e:tau-cc} \chi_{M^\tau}=\chi_{M}. \end{equation} The second involution we will consider is an involution $D$ of the Grothendieck group $R(\mathcal H_{\mathbf{I}})$ of $\mathcal H_{\mathbf{I}}$-modules. This involution can be defined in the following manner. For every $J\subset S$, let $\mathcal H_{\mathbf{I}}(J)$ denote the (parabolic) subalgebra of $\mathcal H_{\mathbf{I}}$ generated by $\mathcal A$ and $\{T_{s} \mid s\in J\}$. If $M$ is an $\mathcal H_{\mathbf{I}}$-module, let $\mathsf{Res}_J(M)$ denote the restriction of $M$ to $\mathcal H_{\mathbf{I}}(J)$. If $N$ is an $\mathcal H_{\mathbf{I}}(J)$-module, let $\mathsf{Ind}_J(N)=\mathcal H_{\mathbf{I}}\otimes_{\mathcal H_{\mathbf{I}}(J)} N$ denote the induced $\mathcal{H}_{\mathbf{I}}$-module. Then $D$ is the involution \begin{equation} D: R(\mathcal{H}_{\mathbf{I}}) \to R(\mathcal{H}_{\mathbf{I}}), \qquad D(M)=\sum_{J\subset S} (-1)^{|J|} ~\mathsf{Ind}_J(\mathsf{Res}_J(M)). \end{equation} By \cite[Theorem 2]{Kato} \begin{equation} D(M)=M^\tau, \text{ for all }\mathcal H_{\mathbf{I}}\text{-modules }M. \end{equation} The involution $D$ has nice behavior with respect to unitarity. Let $*:\mathcal H_{\mathbf{I}}\to \mathcal H_{\mathbf{I}}$ be the conjugate-linear anti-involution defined on generators by \begin{equation} T_w^*=T_{w^{-1}},\qquad w\in \widetilde W. \end{equation} A non-degenerate sesquilinear form $(~,~)_M$ on $M$ is \emph{Hermitian} if \[(h\cdot m,m')_M=(m,h^*\cdot m')_M,\qquad m,m'\in M,\ h\in \mathcal H_{\mathbf{I}}.
\] The final involution we will consider is an involution $\AZ$ on the Grothendieck group $R(\mathbf{G}(\sfk))$ of smooth $\mathbf{G}(\sfk)$-representations, called \emph{Aubert-Zelevinsky duality} \cite[\S1]{Au}. This involution can be defined in the following manner. Let $\mathcal Q$ denote the set of parabolic subgroups of $\bfG$ defined over $\mathsf k$ and containing $\mathbf B$. For every $\mathbf Q\in\mathcal Q$, let $i_{\mathbf Q(\mathsf k)}^{\mathbf{G}(\mathsf k)}$ and $\mathsf{r}_{\mathbf Q(\mathsf k)}^{\mathbf{G}(\mathsf k)}$ denote the normalized parabolic induction functor and the normalized Jacquet functor, respectively. Then $\AZ$ is defined by \begin{equation} \mathsf{AZ}: R(\mathbf{G}(\mathsf{k})) \to R(\mathbf{G}(\mathsf{k})), \qquad \mathsf{AZ}(X)=\sum_{\mathbf Q\in \mathcal Q} (-1)^{r_{\mathbf Q}} ~i_{\mathbf Q(\mathsf k)}^{\bfG(\mathsf k)}(\mathsf{r}_{\mathbf Q(\mathsf k)}^{\bfG(\mathsf k)}(X)), \end{equation} where $r_{\mathbf Q}$ is the semisimple rank of the reductive quotient of $\mathbf Q$. If a class $X \in R(\mathbf{G}(\sfk))$ is irreducible, irreducible with unipotent cuspidal support, or Iwahori-spherical, so is $\AZ(X)$ (up to multiplication by a sign). More generally, $\AZ$ is an involution on the set of irreducible representations (up to sign) in any Bernstein component, see for example \cite[\S3.2]{BBK}. For the basic properties of parabolic induction and Jacquet functors relative to Bernstein components, see \cite[\S1]{Au} or \cite{Roc-parabolic}. Under the Borel-Casselman equivalence (\ref{eq:mI}) the induction/restriction functors correspond \[i_{\mathbf Q(\mathsf k)}^{\mathbf{G}(\mathsf k)}\leftrightarrow \mathsf{Ind}_J,\qquad \mathsf{r}_{\mathbf Q(\mathsf k)}^{\mathbf{G}(\mathsf k)}\leftrightarrow \mathsf{Res}_J. \] Here, $J$ is the subset of $S$ corresponding to the parabolic $\mathbf Q$. In particular, \begin{equation} m_{\mathbf{I}}(\mathsf{AZ}(X))=D(m_{\mathbf{I}}(X))=(m_{\mathbf{I}}(X))^\tau, \end{equation} for all irreducible objects $X\in \mathcal C_{\mathbf{I}}(\bfG(\mathsf k))$. \subsection{Wavefront sets}\label{s:wave} Let $X$ be an admissible smooth representation of $\bfG(\sfk)$. Recall that for each nilpotent orbit $\OO\in \mathcal N_o(\sfk)$ there is an associated distribution $\mu_\OO$ on $C_c^\infty(\mf g(\sfk))$ called the \emph{nilpotent orbital integral} of $\OO$ \cite{rangarao}. Write $\hat\mu_\OO$ for the Fourier transform of this distribution. By the Harish-Chandra-Howe local character expansion \cite[Theorem 16.2]{HarishChandra1999} there is an open neighbourhood $\mathcal{V}$ of the identity $1 \in \mathbf{G}(\sfk)$ and (unique) complex numbers $c_{\OO}(X)\in \CC$ for each $\OO\in \mathcal N_o(\sfk)$ such that \begin{equation} \Theta_{X}(f) = \sum_{\OO \in \mathcal N_o(\sfk)}c_{\OO}(X) \hat \mu_{\OO}(f\circ \exp) \end{equation} for all $f\in C_c^\infty(\mathcal V)$. The \emph{($p$-adic) wavefront set} of $X$ is $$\WF(X) := \max\{\OO \mid c_{\OO}(X)\ne 0\} \subseteq \mathcal N_o(\sfk).$$ The \emph{algebraic wavefront set} of $X$ is $$^{\bar k}\WF(X) := \max \{\mathcal N_o(\bark/\sfk)(\OO) \mid c_{\OO}(X)\ne 0\} \subseteq \mathcal N_o(\bark),$$ see \cite[p. 1108]{Wald18} (note: in \cite{Wald18}, the invariant $^{\bar k}\WF(X)$ is called simply the `wavefront set' of $X$).
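Two standard examples may help to orient the reader (we record them only for illustration). For the trivial representation of $\bfG(\sfk)$, the local character expansion has $c_{\OO}(\mathrm{triv})\neq 0$ only for $\OO = \{0\}$, so $\WF(\mathrm{triv}) = \{0\}$ and $\hphantom{ }^{\bar k}\WF(\mathrm{triv}) = \{0\}$. For the Steinberg representation $\mathrm{St}$, which is generic, $\hphantom{ }^{\bar k}\WF(\mathrm{St})$ is the regular nilpotent orbit. These two examples are consistent with Theorem \ref{thm:realwf} below, since $\AZ$ interchanges (up to sign) the trivial and Steinberg representations.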
By analogy with the case of real reductive groups (\cite[Theorem 3.10]{Joseph1985}) and finite groups of Lie type (\cite[Theorem 11.2]{lusztigunip}) it is expected that when $X$ is irreducible, $^{\bark}\WF(X)$ is a singleton $\{\OO\}$ and $$\mathcal N_o(\bark/\sfk)(\OO') = \OO \text{ for all } \OO'\in \WF(X).$$ In \cite[Section 5.1]{okada2021wavefront} the third author has introduced a third type of wavefront set for depth $0$ representations, called the \emph{canonical unramified wavefront set}. This invariant is a natural refinement of $\hphantom{ }^{\bar{\sfk}}\WF(X)$. We will now define $^K\WF(X)$ and explain how to compute it. Fix the notation of Sections \ref{subsec:nilpotent} and \ref{subsec:Wreps}, e.g. $\mathscr{L}$, $\mathbb{L}$, $\sim_A$, $\OO(\bullet,\bullet)$, $\phi(\bullet)$, and so on. For every face $c \subset \mathcal{B}(\mathbf{G})$, the space of invariants $X^{\bfU_c(\mf o)}$ is a (finite-dimensional) $\bfL_c(\mathbb F_q)$-representation. Let $\WF(X^{\bfU_c(\mf o)}) \subseteq \cN^{\bfL_c}_o(\overline{\mathbb F}_q)$ denote the Kawanaka wavefront set \cite{kawanaka} and let \begin{equation} ^K\WF_c(X) := \{\mathscr L (c,\OO) \mid \OO\in \WF(X^{\bfU_{c}(\mf o)})\} \subseteq \cN_o(K). \end{equation} \begin{definition}\label{def:CUWF} The \emph{canonical unramified wavefront set} of $X$ is \begin{equation} ^K\WF(X) := \max \{[\hphantom{ }^K\WF_c(X)] \mid c\subseteq \mathcal B(\bfG)\} \subseteq \cN_o(K)/\sim_A. \end{equation} \end{definition} \begin{rmk} Definition \ref{def:CUWF} yields an invariant which is a bit coarser than the one given in \cite{okada2021wavefront}, but is arguably more natural, and the two are straightforwardly related by an application of the map $[\bullet]:\mathcal N_o(K)\to \mathcal N_o(K)/\sim_A$. \end{rmk} \begin{theorem} \cite[Theorem 5.4]{okada2021wavefront}\label{thm:OkadaWFset} Let $c_0$ be a chamber of $\mathcal B(\bfG)$. Then \begin{equation} ^K\WF(X) = \max \{\hphantom{ }^K\WF_c(X) \mid c\subseteq c_0\} \end{equation} where $c\subseteq c_0$ means that $c$ is a face of $c_0$. \end{theorem} We will often want to view $^K\WF(X)$ and $[^K\WF_c(X)]$ as subsets of $\mathcal N_{o,\bar c}$ using the identification $$\bar\theta_{\bfT}:\mathcal N_o(K)/\sim_A\to \mathcal N_{o,\bar c}.$$ We will write $^{K}\WF(X,\CC)$ (resp. $^K\WF_c(X,\CC)$) for $\bar \theta_{\bfT}(\hphantom{ }^K\WF(X))$ (resp. $\bar\theta_{\bfT}([\hphantom{ }^K\WF_c(X)])$). We will also want to view $\hphantom{ }^{\bar{\sfk}}\WF(X)$, which is naturally contained in $\mathcal N_o(\bar k)$, as a subset of $\mathcal{N}_o$ and so will write $\hphantom{ }^{\bar{\sfk}}\WF(X,\CC)$ for $\Theta_{\bar k}(\hphantom{ }^{\bar{\sfk}}\WF(X))$. The canonical unramified wavefront set is a refinement of the usual wavefront set in the following sense. \begin{theorem} \cite[Theorem 5.2]{okada2021wavefront} Let $X$ be a depth $0$ representation. Then \begin{equation} \hphantom{ }^{\bar{\sfk}}\WF(X,\CC) = \max (\pr_1(\hphantom{ }^{K}\WF(X,\CC))). \end{equation} \end{theorem} The proof uses the test functions from \cite[Theorem 4.5]{barmoy}. If $^{K}\WF(X)$ is a singleton, $\hphantom{ }^{\bark}\WF(X)$ is also a singleton and \begin{equation} \label{eq:wfcompatibility} \hphantom{ }^{K}\WF(X,\CC) = (\hphantom{ }^{\bark}\WF(X,\CC),\bar C) \end{equation} for some conjugacy class $\bar C$ in $\bar A(\hphantom{ }^\bark\WF(X,\CC))$. We will use the following theorem to compute the local wavefront sets.
\begin{theorem} \cite[Theorem 3.9]{okada2021wavefront} \label{thm:locwf} Suppose $X\in\mathcal C_\bfI(\bfG(\sfk))$ and $m_\bfI(X)$ is deformable. Let $J\subsetneq \tilde\Delta$ and $c = c(J)$. Then \begin{equation} \label{eq:locwf} [^K\WF_{c}(X)]= \max\{[\mathscr L(c,\OO(\phi(E),\overline{\mathbb F}_q))] \mid E\in\mathrm{Irr}(W_{c}), \ \Hom(E,\Res_{W_{c}}^{\widetilde W}(X_{q\to1}))\ne 0\}. \end{equation} \end{theorem} The notion of `deformable' in the statement of the theorem is a technical condition on $m_\bfI(X)$ (see \cite[Section 3.3]{okada2021wavefront}). In light of the extension of the $q\to 1$ operation (\ref{eq:qto1hom}) to the full Grothendieck group $R(\mathcal H_\bfI)$ and Equation (\ref{eq:heckerestriction}), we may drop the requirement that $m_\bfI(X)$ is deformable from Theorem \ref{thm:locwf}. In this paper we will primarily be concerned with $^K\WF_{c}(X,\CC)$. By applying $\bar\theta_{\bfT}$ to Equation (\ref{eq:locwf}) we get \begin{equation} ^K\WF_c(X,\CC)= \max\{\overline{\mathbb L}(J,\OO(\phi(E),\CC)) \mid E\in\mathrm{Irr}(W_c), \ \Hom(E,\Res_{W_c}^{\widetilde W}(X_{q\to1}))\ne 0\} \end{equation} since $\bar\theta_{\bfT}$ is an isomorphism of partial orders and \begin{align} \bar\theta_{\bfT}([\mathscr L(c,\OO(\phi(E'),\overline{\mathbb F}_q))]) &= \mf Q(\theta_{x_0,\bfT}(\mathscr L(c,\OO(\phi(E'),\overline{\mathbb F}_q)))) &&\text{(Equation (\ref{eq:square}))} \nonumber \\ &= \mf Q(\mathbb L(J,\Theta_{c(J)}(\OO(\phi(E'),\overline{\mathbb F}_q)))) &&\text{(Proposition \ref{prop:square})} \nonumber \\ &= \overline{\mathbb L}(J,\OO(\phi(E'),\CC)). \end{align} \section{Computation of wavefront sets}\label{sec:main-result} Our main result is a general formula for the (canonical unramified) wavefront set of an irreducible Iwahori-spherical representation with real infinitesimal character. To prove this result, it will be convenient to define the \emph{wavefront set} of a Weyl group representation (analogous to the invariant $^K\WF(X)$ defined in Section \ref{s:wave}). Fix the notation of Sections \ref{subsec:nilpotent} and \ref{subsec:Wreps}, e.g. $\overline{\mathbb{L}}$, $\phi(\bullet)$, $\OO(\bullet,\CC)$, and so on. For $E\in\mathrm{Irr}(W)$ and $J\subsetneq\tilde\Delta$, consider the subset $$\WF_J(E) = \max\{\overline{\mathbb L}(J,\OO(\phi(E'),\CC)) \mid E'\in\mathrm{Irr}(W_J), \ \Hom(E',E|_{W_J})\ne 0\} \subset \cN_{o,\bar c}.$$ \begin{definition} \label{def:wWF} The \emph{wavefront set} of $E$ is $$\WF(E) = \max \{\WF_J(E) \mid J\subsetneq \tilde\Delta\} \subset \cN_{o, \bar c}.$$ \end{definition} \begin{lemma} \label{lem:upperbound} Let $E\in \mathrm{Irr}(W)$ and $\OO^\vee = \OO^{\vee}(E,\CC)$. Then for any $J\subsetneq \tilde\Delta$ we have that \begin{equation} \WF_J(E)\le_A d_A(\OO^\vee,1). \end{equation} \end{lemma} \begin{proof} If $E'$ is an irreducible constituent of $E|_{W_J}$, then by \cite[Proposition 4.3]{acharaubert} we have that $\OO^\vee\le d_S(\mathbb L(J,\OO(\phi(E'),\CC)))$. Thus for all $(\OO,\bar C) \in \WF_J(E)$ we have that $d_S(\OO,\bar C)\ge \OO^\vee$ and so by \cite[Theorem 5.5 (3)]{okada2021wavefront} we have that $(\OO,\bar C)\le_A d_A(\OO^\vee,1)$. \end{proof} We now introduce the notion of a \emph{faithful nilpotent orbit}, which will play a central role in our calculation of $\WF(E)$. \begin{definition}\label{def:faithful} Let $\OO^{\vee} \subset \cN^{\vee}$ be a nilpotent orbit.
We say that $\OO^{\vee}$ is \emph{faithful} if there is a pair $(J, \phi) \in \mathcal{F}_{\tilde\Delta}$ such that \begin{itemize} \item[(i)] $\overline{\mathbb L}(J,\OO(\phi,\CC)) = d_A(\OO^{\vee},1)$. \item[(ii)] If $E$ is an irreducible representation of $W$ with $\OO^{\vee}(E,\CC) = \OO^{\vee}$, there is a representation $F \in \phi \otimes \mathrm{sgn}$ such that $\Hom_{W_J}(F,E|_{W_J}) \neq 0$. \end{itemize} \end{definition} \begin{theorem}\label{thm:faithful} Every nilpotent orbit $\OO^{\vee} \subset \cN^{\vee}$ is faithful. \end{theorem} The proof of Theorem \ref{thm:faithful} will be postponed until Section \ref{sec:faithful}. Using this result, and Lemma \ref{lem:upperbound}, we can compute the wavefront set of any irreducible $W$-representation. \begin{cor} \label{cor:wWF} Let $E\in\mathrm{Irr}(W)$. Then $$\WF(E) = d_A(\OO^{\vee}(E,\CC),1).$$ \end{cor} \begin{proof} By Lemma \ref{lem:upperbound}, $\WF(E)$ is bounded above by $d_A(\OO^\vee(E,\CC),1)$. It suffices to show that this bound is achieved. For this let $\OO^\vee = \OO^\vee(E,\CC)$. By Theorem \ref{thm:faithful}, $\OO^\vee$ is faithful. So there is a pair $(J,\phi)\in \mathcal F_{\tilde\Delta}$ such that $\overline{\mathbb L}(J,\OO(\phi,\CC)) = d_A(\OO^\vee,1)$ and there is a representation $F\in \phi\otimes \mathrm{sgn}$ such that $\Hom(F,E|_{W_J})\ne0$. But since $F\in\phi\otimes\mathrm{sgn}$ we have that $\phi(F) = \phi$ and so $\overline{\mathbb L}(J,\OO(\phi(F),\CC)) = d_A(\OO^\vee,1)$ as required. \end{proof} We now turn to the task of computing wavefront sets of Iwahori-spherical $\mathbf{G}(\sfk)$-representations. Recall the group homomorphisms (\ref{eq:qto1hom}, \ref{eq:restrictiontoW}) $$(\bullet)_{q\to 1}: R_{\mathbf{I}}(\mathbf{G}(\sfk)) \to R(\widetilde{W}), \qquad |_W: R_{\mathbf{I}}(\mathbf{G}(\sfk)) \to R(W).$$ \begin{lemma}\label{l:WF-factors} Let $X \in \mathcal{C}_{\mathbf{I}}(\mathbf{G}(\sfk))$ and suppose \begin{equation} \Res_{W_c}^{\widetilde W}X_{q\to 1} = \Res_{\dot W_c}^{W}X|_W \end{equation} for all $c\subseteq c_0$. Let \begin{equation} \mathscr O^\vee = \min\{\OO^\vee(E,\CC) \mid E\in \mathrm{Irr}(W),\Hom(E,X|_{W})\ne0\}. \end{equation} Then \begin{equation} ^K\WF(X,\CC) = \{d_A(\OO^\vee,1) \mid \OO^\vee\in\mathscr O^\vee\}. \end{equation} \end{lemma} \begin{proof} Let $J\subsetneq \tilde\Delta$. For $E \in \mathrm{Irr}(W)$, let $n_E(X)$ denote the multiplicity of $E$ in $X|_W$, so that \begin{equation}\label{eq:isotypic}X|_W \simeq \bigoplus_{E\in \mathrm{Irr}(W)}n_E(X) E.\end{equation} Since $\Res_{W_c}^{\widetilde W}X_{q\to 1} = \Res_{\dot W_c}^{W}X|_W$, by Theorem \ref{thm:locwf} we have that \begin{equation} ^K\WF_{c(J)}(X,\CC) = \max\{\overline{\mathbb L}(J,\OO(\phi(E'),\CC)) \mid E'\in\mathrm{Irr}(W_c), \ \Hom(E',\Res_{\dot W_c}^{W}(X|_{W}))\ne 0\}. \end{equation} But by (\ref{eq:isotypic}), we have $\Hom(E',\Res_{\dot W_c}^{W}(X|_{W}))\ne 0$ if and only if there is a representation $E\in\mathrm{Irr}(W)$ with $n_E(X)\ne0$ such that $\Hom(E',E|_{\dot W_c})\ne 0$. We can therefore rewrite $^K\WF_{c(J)}(X,\CC)$ as \begin{align} \label{eq:locWFsimplification} \begin{split} ^K\WF_{c(J)}(X,\CC) &= \max\{\overline{\mathbb L}(J,\OO(\phi(E'),\CC)) \mid E\in \mathrm{Irr}(W), \ E'\in\mathrm{Irr}(W_c), \ n_E(X)\ne 0, \ \Hom(E',E|_{\dot W_c})\ne 0 \} \\ &= \max \{\WF_J(E)\mid E\in \mathrm{Irr}(W), \ n_E(X)\ne0\}.
\end{split} \end{align} Thus \begin{align*} ^K\WF(X,\CC) &= \max\{\hphantom{ }^K\WF_{c(J)}(X,\CC) \mid J\subsetneq\tilde\Delta\}\\ &= \max \{\WF_J(E) \mid J\subsetneq\tilde\Delta, \ n_E(X)\ne0\}&& \text{(Equation \ref{eq:locWFsimplification})}\\ &= \max \{\WF(E) \mid n_E(X)\ne0\} && \text{(Definition \ref{def:wWF})}\\ &= \max \{d_A(\OO^{\vee}(E,\CC),1) \mid n_E(X)\ne0\} && \text{(Corollary \ref{cor:wWF})}. \end{align*} Then by Lemma \ref{lem:injectiveachar} we get \begin{align*}^K\WF(X,\CC) &= \{d_A(\OO^\vee,1)\mid \OO^\vee\in \min\{\OO^{\vee}(E,\CC) \mid E\in\mathrm{Irr}(W), \ n_E(X)\ne0\}\}\\ &= \{d_A(\OO^{\vee},1) \mid \OO^{\vee} \in \mathscr{O}^{\vee}\} \end{align*} as required. \end{proof} The next result is essentially contained in \cite[\S5]{Re-euler}. \begin{lemma}\label{l:std-real} Let $Y$ be an Iwahori-spherical standard module with real infinitesimal character. Then $\Res_{W_c}^{\widetilde W}Y_{q\to 1} = \Res_{\dot W_c}^{W}Y|_W$, in the notation of the previous lemma. \end{lemma} \begin{proof} For every $x\in \CX$ and irreducible tempered representation $X$ with real infinitesimal character, the Bernstein operator $\theta_x$ acts on $m_{\mathbf{I}}(X)$ with eigenvalues of the form $v^m$, $m\in \mathbb Z$, see \cite[\S5.7]{Re-euler}. Then \cite[Lemma 5.8.1]{Re-euler} implies that the claim holds when $Y=X$ is a tempered Iwahori-spherical standard module with real infinitesimal character. It is well known, see for example \cite[Proof of Theorem 5.3]{BM1}, that for any (Iwahori-spherical) standard module $Y$ there exists a continuous deformation $Y^\epsilon$, $\epsilon\in [0,1]$, $Y^1=Y$, such that $Y^0$ is a (possibly reducible, Iwahori-spherical) tempered module. Moreover, if $Y$ has real infinitesimal character, so does $Y^0$. Then both $\Res_{W_c}^{\widetilde W}Y^\epsilon_{q\to 1}$ and $\Res_{\dot W_c}^{W}Y^\epsilon|_W$ are independent of $\epsilon$, being continuous deformations of finite dimensional representations of finite groups. This completes the proof. \end{proof} \begin{theorem} \label{thm:realwf} Suppose $X=X(s,n,\rho)$ is Iwahori-spherical with $s\in T^\vee_\mathbb R$. Write $\OO^{\vee}_X = G^{\vee}n \subset \mathfrak{g}^{\vee}$. Then $^K\WF(\AZ(X))$, $\hphantom{ }^{\bar{\sfk}}\WF(\AZ(X))$ are singletons, and \begin{align*} ^K\WF(\mathsf{AZ}(X)) &= d_A(\OO^{\vee}_X,1)\\ \hphantom{ }^{\bar{\sfk}}\WF(\AZ(X)) &= d(\OO^{\vee}_X).
\end{align*} \end{theorem} \begin{proof} By Equation (\ref{eq:wfcompatibility}) and the identity \begin{equation} \pr_1(d_A(\OO^\vee_X,1)) = d(\OO^\vee_X) \end{equation} the formula for $\hphantom{ }^{\bar{\sfk}}\WF(\AZ(X))$ follows from the formula for $^K\WF(\AZ(X))$. So it suffices to show that $^K\WF(\AZ(X)) = d_A(\OO^{\vee}_X,1)$. By (\ref{e:multi}), there is an equality in $R(\mathbf{G}(\sfk))$ \[X = Y(s,n,\rho) + \sum_{n \in \partial (G^{\vee}n')} m_{s,n',\rho'}Y(s,n',\rho') \] for standard modules $Y(s,n',\rho')$ and $m_{s,n',\rho'} \in \ZZ$. By Lemma \ref{l:std-real}, \[\Res_{W_c}^{\widetilde W}Y(s,n',\rho')_{q\to 1} = \Res_{\dot W_c}^{W}Y(s,n',\rho')|_W \] for each $Y(s,n',\rho')$ and hence $$\Res_{W_c}^{\widetilde W}X_{q\to 1} = \Res_{\dot W_c}^{W}X|_W $$ so $X$ satisfies the hypotheses of Lemma \ref{l:WF-factors}. By the argument in \cite[\S10.13]{Lu-gradedII} \begin{equation}\label{e:Y-coh1} Y(s,n',\rho')|_W = H^{\bullet}(\mathcal{B}^{\vee}_{n'})^{\rho'} \otimes \mathrm{sgn}. \end{equation} So as (virtual) $W$-representations (note that $m_{\bfI}(\AZ(X))=m_{\bfI}(X)^\tau$, and at $q=1$ the involution $\tau$ becomes the sign automorphism $T_w\mapsto(-1)^{\ell(w)}T_w$ of $\CC\widetilde W$, so passing from $X$ to $\AZ(X)$ amounts, after restriction to $W$, to tensoring with $\mathrm{sgn}$) \[\mathsf{AZ}(X)|_W = H^{\bullet}(\mathcal{B}^{\vee}_n)^{\rho} + \sum_{n \in \partial (G^{\vee}n')} m_{s,n',\rho'}H^{\bullet}(\mathcal{B}_{n'}^{\vee})^{\rho'}. \] By \cite[Corollaire 2]{BoMac1981}, every cohomology group $H^\bullet(\mathcal{B}_{n'}^{\vee})$ decomposes as a $W$-representation \begin{equation}\label{e:Y-coh2} H^\bullet(\mathcal{B}_{n'}^{\vee})=H^{\mathrm{top}}(\mathcal{B}^{\vee}_{n'})+\sum_{\substack{E \in \mathrm{Irr}(W)\\ n' \in \partial \OO^\vee(E,\CC)}} m_{E} E, \end{equation} where $m_E \in \ZZ_{\geq 0}$ for every term in the sum. Combining (\ref{e:Y-coh1}) and (\ref{e:Y-coh2}), we get that \[\mathsf{AZ}(X)|_W = H^{\mathrm{top}}(\mathcal{B}^{\vee}_n)^{\rho} + \sum_{\substack{E \in \mathrm{Irr}(W)\\ n \in \partial \OO^\vee(E,\CC)}} m_{E}'E, \] for some integers $m_E'$. Since $s\in T^\vee_\mathbb R$, it follows that $s_0\in T^\vee_\mathbb R$ as well. Because $Z_{G^\vee}(s_0)$ is a Levi subgroup, the inclusion $Z_{G^\vee}(s_0,n)\to Z_{G^\vee}(n)$ induces an inclusion $A(s_0,n)\hookrightarrow A(n)$. Finally $A(s_0,n)=A(s,n)$, see for example \cite[Lemma 4.3.1]{Re-isogeny}. Hence we obtain \[\mathsf{AZ}(X)|_W = \sum_{\psi \in \mathrm{Irr}(A(n))_0} [\psi:\rho]_{A(s,n)} E(n,\psi) + \sum_{\substack{E \in \mathrm{Irr}(W)\\ n \in \partial \OO^\vee(E,\CC)}} m_{E}' E \] where $E(n,\psi)$ is the irreducible Springer representation in $H^{\mathrm{top}}(\mathcal{B}_n^{\vee})^{\psi}$ and $[\psi:\rho]_{A(s,n)}$ denotes the multiplicity of $\rho$ in $\psi|_{A(s,n)}$. Since there exists at least one $\psi$ such that $[\psi:\rho]_{A(s,n)}\neq 0$, this implies the result by Lemma \ref{l:WF-factors}. \end{proof} Now let $\OO^{\vee} \subset \mathfrak{g}^{\vee}$ be a nilpotent orbit. Choose an $\mathfrak{sl}(2)$-triple $(e^{\vee},f^{\vee},h^{\vee})$ with $e^{\vee} \in \OO^{\vee}$. The semisimple operator $\ad(h^{\vee})$ induces a Lie algebra grading $$\fg^{\vee} = \bigoplus_{n \in \ZZ}\fg^{\vee}[n], \qquad \fg^{\vee}[n] := \{x \in \fg^{\vee} \mid [h^{\vee},x] = nx\}.$$ Write $L^{\vee}$ for the connected (Levi) subgroup corresponding to the centralizer $\fg^{\vee}[0]$ of $h^{\vee}$. If we set $s := q^{\frac{1}{2}h^{\vee}}$, then $$L^{\vee} = G^{\vee}(s), \qquad \fg^{\vee}[2] = \fg^{\vee}_q$$ where $G^{\vee}(s)$ and $\fg^{\vee}_q$ are as defined in Section \ref{s:unip-cusp}. Note that $L^{\vee}$ acts by conjugation on each $\fg^{\vee}[n]$, and in particular on $\fg^{\vee}[2]$.
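As a simple illustration of this setup (included only for orientation), take $\fg^{\vee} = \mathfrak{sl}_2$ and let $\OO^{\vee}$ be the regular orbit, with $e^{\vee} = \left(\begin{smallmatrix} 0 & 1 \\ 0 & 0\end{smallmatrix}\right)$, $h^{\vee} = \left(\begin{smallmatrix} 1 & 0 \\ 0 & -1\end{smallmatrix}\right)$, $f^{\vee} = \left(\begin{smallmatrix} 0 & 0 \\ 1 & 0\end{smallmatrix}\right)$. Then $\fg^{\vee}[2]$ (resp. $\fg^{\vee}[-2]$) consists of the strictly upper (resp. lower) triangular matrices, $\fg^{\vee}[0] = \mathfrak t^{\vee}$, $s = q^{\frac{1}{2}h^{\vee}} = \mathrm{diag}(q^{1/2},q^{-1/2})$, $L^{\vee} = G^{\vee}(s) = T^{\vee}$, and $\fg^{\vee}[2] = \fg^{\vee}_q = \CC e^{\vee}$, on which $T^{\vee}$ acts with exactly two orbits, $\{0\}$ and $\CC^{\times}e^{\vee}$.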
We will need the following well-known facts, see \cite[Section 4]{Kostant1959} or \cite[Prop 4.2]{Lusztigperverse}. \begin{lemma}\label{lem:orbitclosure} The following are true: \begin{itemize} \item[(i)] $L^{\vee}e^{\vee}$ is an open subset of $\fg^{\vee}[2]$ (and hence the unique open $L^{\vee}$-orbit therein). \item[(ii)] $L^{\vee}e^{\vee} = \OO^{\vee} \cap \fg^{\vee}[2]$. \item[(iii)] $G^{\vee}\fg^{\vee}[2] \subseteq \overline{\OO^{\vee}}$. \end{itemize} \end{lemma} \begin{cor}\label{cor:wfbound} Suppose $X \in \Pi^{\mathsf{Lus}}_{q^{\frac{1}{2}h^{\vee}}}(\mathbf{G}(\sfk))$ is Iwahori-spherical. Then \begin{align}\label{eq:WFsetbound} \begin{split} d_A(\OO^{\vee}, 1) &\leq_A \hphantom{ } ^K\WF(X)\\ d(\OO^{\vee}) &\leq \hphantom{ }^{\bar{\sfk}}\WF(X) \end{split} \end{align} \end{cor} \begin{proof} The second inequality follows from the first by applying $\mathrm{pr}_1$ to both sides. So it suffices to show that $d_A(\OO^{\vee}, 1) \leq_A \hphantom{ } ^K\WF(X)$. Let $X\in \Pi^{\mathsf{Lus}}_{q^{\frac{1}{2}h^{\vee}}}(\mathbf{G}(\sfk))$ be Iwahori-spherical. Since $\AZ$ is an involution on the Iwahori-spherical representations in $\Pi^{\mathsf{Lus}}_{q^{\frac{1}{2}h^{\vee}}}(\mathbf{G}(\sfk))$, we have $$X=\mathsf{AZ}(X') \text{ for some Iwahori-spherical } X'=X(q^{\frac12h^\vee},n',\rho')\in \Pi^{\mathsf{Lus}}_{q^{\frac{1}{2}h^{\vee}}}(\mathbf{G}(\sfk)).$$ By definition, $n' \in \fg^{\vee}_q = \fg^{\vee}[2]$. Now (iii) of Lemma \ref{lem:orbitclosure} implies that $\OO^\vee_{X'}:=G^\vee n'\le \OO^\vee$. In particular $(\OO^\vee_{X'},1)\le_A(\OO^\vee,1)$ and so $d_A(\OO^\vee,1)\le_A d_A(\OO^\vee_{X'},1)$ since $d_A$ is order-reversing. But by Theorem \ref{thm:realwf}, $d_A(\OO^\vee_{X'},1) = \hphantom{ }^K\WF(X)$. This completes the proof. \end{proof} \section{Proof of faithfulness} \label{sec:faithful} Fix the notation of Sections \ref{subsec:nilpotent}-\ref{subsec:Wreps}, i.e. $G$, $G^{\vee}$, $\cN$, $\cN^{\vee}$, $\mathcal{F}_{\tilde{\Delta}}$, $\bar{s}$, and so on.
Recall that to each irreducible representation $E$ of $W$, we associate nilpotent orbits $\OO(E,\CC) \subset \cN$ and $\OO^{\vee}(E,\CC) \subset \cN^{\vee}$. In this section all of our objects will be defined over $\CC$---so we will simplify the notation by writing $\OO(E) := \OO(E,\CC)$, $\OO^{\vee}(E) :=\OO^{\vee}(E,\CC)$. Similarly, for a family $\phi$ in $\mathrm{Irr}(W)$, we will write $\OO(\phi)$, $\OO^{\vee}(\phi)$ for $\OO(\phi,\CC)$, $\OO^{\vee}(\phi,\CC)$. Recall the notion of a \emph{faithful} nilpotent orbit, see Definition \ref{def:faithful}. In this section we will prove that all nilpotent orbits are faithful (this is Theorem \ref{thm:faithful}). First, note that conditions (i) and (ii) of Definition \ref{def:faithful} depend only on the Lie algebra $\fg = \mathrm{Lie}(G)$. Moreover, if $\OO^{\vee}_1, \OO^{\vee}_2$ are faithful (for $(J_1,\phi_1)$, $(J_2,\phi_2)$ respectively), then $\OO^{\vee}_1 \times \OO^{\vee}_2$ is faithful (for $(J_1 \times J_2, \phi_1 \otimes \phi_2)$). Thus, it suffices to prove Theorem \ref{thm:faithful} for $\mathfrak{g}$ a simple Lie algebra. In many cases, we can appeal to the following proposition. \begin{lemma}\label{lem:triviallocalsystem} Suppose $\overline{\mathbb L}(J, \OO(\phi)) = d_A(\OO^{\vee},1)$, $E = E(\OO^{\vee},1)$ (as representations of $W$), and $F = \mathrm{sp}(\phi \otimes \mathrm{sgn})$. Then $j^W_{W_J} F = E$. \end{lemma} \begin{proof} Let $(\OO,C) := \mathbb L(J,\OO_{\phi})$. Then, by definition of $d_S$ we have that $E(d_S(\OO,C),1) = j_{W_J}^W F$. But $d_S(d_A(\OO^\vee,1)) = \OO^\vee$ as required. \end{proof} \begin{prop}\label{prop:easycase} Suppose there is a unique irreducible representation $E$ of $W$ such that $\OO^{\vee}(E) = \OO^{\vee}$. Then $\OO^{\vee}$ is faithful. \end{prop} \begin{proof} By the uniqueness condition, we must have $E = E(\OO^{\vee},1)$. Recall that $\overline{\mathbb L}: \mathcal{K}_{\widetilde{\Delta}} \to \cN_{o,\bar c}$ is surjective. Choose any $(J,\phi)$ such that $\overline{\mathbb L}(J,\OO_{\phi}) = d_A(\OO^{\vee},1)$ and let $F = \mathrm{sp}(\phi \otimes \mathrm{sgn})$. By Lemma \ref{lem:triviallocalsystem}, we have $j^W_{W_J}F = E$, and hence $\Hom_{W_J}(F, E|_{W_J}) \neq 0$ by Frobenius reciprocity. \end{proof} Proposition \ref{prop:easycase} implies Theorem \ref{thm:faithful} for $\fg = \mathfrak{sl}(n)$ and for $\fg = \mathfrak{so}(2n)$ and $\OO^{\vee}$ a very even orbit (see Section \ref{subsec:orbits}). The simple exceptional Lie algebras are handled in Section \ref{subsec:exceptional} (where we make repeated use of Proposition \ref{prop:easycase} to simplify the arguments). It remains to prove Theorem \ref{thm:faithful} for $\fg$ a simple Lie algebra of type B/C/D (and $\OO^{\vee}$ not very even in the type D case). The proof will require a fairly lengthy detour into the combinatorics of partitions and symbols. \subsection{Preliminaries on partitions}\label{subsec:partitions} Let $\mathcal{P}(n)$ denote the set of partitions of $n$. For $\lambda \in \mathcal{P}(n)$, we will typically write $\lambda = (\lambda_1 \geq ... \geq \lambda_k)$, and we assume $\lambda_k \neq 0$ (unless otherwise stated). We will write $\#\lambda$ for the number of (nonzero) parts of $\lambda$ and $|\lambda|$ for their sum. If $x \in \ZZ_{\geq 0}$, we will write $m_{\lambda}(x)$ for the multiplicity of $x$ in $\lambda$ and $\mathrm{ht}_{\lambda}(x)$ for its `height,' i.e. $\mathrm{ht}_{\lambda}(x) = \sum_{y \geq x} m_{\lambda}(y)$. 
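For example (simply to fix the notation): if $\lambda = (4,2,2,1)$, then $\#\lambda = 4$, $|\lambda| = 9$, $m_{\lambda}(2) = 2$, $m_{\lambda}(3) = 0$, and $\mathrm{ht}_{\lambda}(2) = 3$, the parts $\geq 2$ being $4,2,2$.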
A partition $\lambda$ is \emph{very even} if all parts are even, occurring with even multiplicity. There is a partial order on partitions, defined by the formula $$\lambda \leq \mu \iff \sum_{i \leq j} \lambda_i \leq \sum_{i \leq j} \mu_i \ \text{ for all } j.$$ Given an arbitrary partition $\lambda = (\lambda_1,...,\lambda_k)$, we define two new partitions (of $|\lambda|+1$ and $|\lambda|-1$) $$\lambda^+ := (\lambda_1+1,\lambda_2,...,\lambda_k), \qquad \lambda^- := (\lambda_1,\lambda_2,...,\lambda_k-1)$$ and we write $\lambda^t$ for the transpose of $\lambda$. Given $\lambda \in \mathcal{P}(n)$ and $\mu \in \mathcal{P}(m)$, we define $\lambda \cup \mu \in \mathcal{P}(m+n)$ by `adding multiplicities' $$m_{\lambda \cup \mu}(x) = m_{\lambda}(x) + m_{\mu}(x), \qquad \forall x \in \ZZ_{\geq 0}.$$ We write $\alpha = \lambda \cup_{\geq} \mu$ if $\alpha = \lambda \cup \mu$ and the smallest part of $\lambda$ is at least as large as the largest part of $\mu$. We write $\mu \subseteq \lambda$ if $m_{\mu}(x) \leq m_{\lambda}(x)$ for every $x \in \ZZ_{\geq 0}$. In this case, there is a unique subpartition $\lambda \setminus \mu \subseteq \lambda$ such that $\lambda = \mu \cup (\lambda \setminus \mu)$. A \emph{decorated} partition of $n$ is a partition $\lambda \in \mathcal{P}(n)$ together with a decoration $\kappa \in \{0,1\}$. We write $(\lambda,\kappa)$ as $\lambda^{\kappa}$. If $\lambda$ is \emph{not} very even, we declare $\lambda^0 = \lambda^1$. Otherwise we regard $\lambda^0$ and $\lambda^1$ as distinct. Write $\mathcal{P}^d(n)$ for the set of decorated partitions of $n$, with the equivalence relation just defined. An \emph{(ordered) bipartition} of $n$ is a pair of partitions $(\lambda,\mu)$ such that $|\lambda|+|\mu|=n$. An \emph{unordered bipartition} of $n$ is an unordered pair $\{\lambda, \mu\}$ with the same property. Write $\overline{\mathcal{P}}(n)$ (resp. $\overline{\mathcal{P}}^u(n)$) for the set of ordered (resp. unordered) bipartitions of $n$. A \emph{decorated unordered bipartition} of $n$ is an unordered bipartition $\{\lambda,\mu\}$ of $n$ together with a decoration $\kappa\in\{0,1\}$. We write $(\{\lambda, \mu\},\kappa)$ as $\{\lambda,\mu\}^\kappa$. If $\lambda\ne \mu$ we declare $\{\lambda,\mu\}^0 = \{\lambda,\mu\}^1$. Otherwise we regard $\{\lambda,\mu\}^0$ and $\{\lambda,\mu\}^1$ as distinct. Write $\overline{\mathcal P}^{d,u}(n)$ for the set of decorated unordered bipartitions of $n$, with the equivalence relation just defined. \subsection{Preliminaries on classical Lie algebras} \label{sec:classicalliealgebras} Let $X \in \{B,C,D\}$ and let $\fg_X(n)$ denote the complex simple Lie algebra of type $X$ and rank $n$. Denote the corresponding Weyl group by $W_X(n)$. If $X \in \{B,C\}$, there is a bijection between the set of irreducible representations of $W_X(n)$ and the set $\overline{\mathcal{P}}(n)$ of ordered bipartitions. If $X=D$, there is a bijection between the irreducible representations of $W_X(n)$ and the set $\overline{\mathcal{P}}^{d,u}(n)$ of decorated unordered bipartitions.
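To illustrate the operations just introduced (a small example, included for concreteness): if $\lambda = (3,2,2)$ and $\mu = (2,1)$, then $\lambda^{+} = (4,2,2)$, $\lambda^{-} = (3,2,1)$, $\lambda^{t} = (3,3,1)$, and $\lambda \cup \mu = (3,2,2,2,1)$; moreover $\lambda \cup \mu = \lambda \cup_{\geq} \mu$, since the smallest part of $\lambda$ is equal to the largest part of $\mu$. As for the bijections above, when $n=2$ the five ordered bipartitions $((2),\emptyset)$, $((1,1),\emptyset)$, $((1),(1))$, $(\emptyset,(2))$, $(\emptyset,(1,1))$ index the five irreducible representations of $W_B(2) = W_C(2)$, the dihedral group of order $8$.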
We label the extended Dynkin diagrams using the standard Bourbaki conventions: \begin{align*} \fg_B(n): \qquad \qquad \qquad & \dynkin[extended,labels={\alpha_0,\alpha_1,\alpha_2,\alpha_3,\alpha_{n-2},\alpha_{n-1},\alpha_n},edge length = 1 cm, root radius = .075cm] B{} \\ \fg_C(n): \qquad \qquad \qquad& \dynkin[extended,labels={\alpha_0,\alpha_1,\alpha_2,\alpha_3,\alpha_{n-1},\alpha_n},edge length = 1 cm, root radius = .075cm] C{} \\ \fg_D(n): \qquad \qquad \qquad& \dynkin[extended,labels={\alpha_0,\alpha_1,\alpha_2,\alpha_3,\alpha_{n-3},\alpha_{n-2},\alpha_{n-1},\alpha_n},edge length = 1 cm, root radius = .075cm] D{} \end{align*} For $0 \leq k \leq n$, define the subset $$J_X(k,n) := \{\alpha_0,...,\alpha_{k-1},\alpha_{k+1},...,\alpha_n\} \subsetneq \widetilde{\Delta},$$ and write $\mathfrak{l}_X(k,n) \subset \fg$ for the corresponding (standard) pseudo-Levi subalgebra. Then \begin{align}\label{eq:pseudolevis} \begin{split} \fl_B(k,n) &\simeq \begin{cases} \fg_D(k) \times \fg_B(n-k), & k \notin \{1\}\\ \fg_B(n), & k \in \{1\} \end{cases} \\ \fl_C(k,n) &\simeq \fg_C(k) \times \fg_C(n-k)\\ \fl_D(k,n) &\simeq \begin{cases} \fg_D(k) \times \fg_D(n-k), & k \notin \{1,n-1\} \\ \fg_D(n), & k \in \{1,n-1\} \end{cases} \end{split} \end{align} \subsection{Preliminaries on nilpotent orbits in classical types}\label{subsec:orbits} Define the sets \begin{align*} \mathcal{P}_B(n) &= \{\lambda \in \mathcal{P}(2n+1) \mid x \text{ even} \implies m_{\lambda}(x) \text{ even}\}\\ \mathcal{P}_C(n) &= \{\lambda \in \mathcal{P}(2n) \mid x \text{ odd} \implies m_{\lambda}(x) \text{ even}\}\\ \mathcal{P}_D(n) &= \{\lambda^{\kappa} \in \mathcal{P}^d(2n) \mid x \text{ even} \implies m_{\lambda}(x) \text{ even}\} \end{align*} For $\fg = \fg_X(n)$, $X \in \{B,C,D\}$, there is a well-known bijection \begin{equation}\label{eq:orbitspartitions}\cN_o^{\fg} \xrightarrow{\sim} \mathcal{P}_X(n),\end{equation} see \cite[Section 5.1]{CM}. The elements of $\mathcal{P}_X(n)$ are called $X$-partitions of $n$ (they are decorated in the case when $X=D$). If $X=D$ and $\OO \in \cN_o^{\fg}$ corresponds to $\lambda^{\kappa}$, we say that $\OO$ is very even if $\lambda$ is very even, see Section \ref{subsec:partitions}. For $X \in \{B,C,D\}$, the $X$-collapse of $\lambda$ is the unique largest $X$-partition $\lambda_X$ such that $\lambda_X \leq \lambda$ (in type $B$ (resp. $C$, $D$), this is sensible provided $|\lambda|=2n+1$ (resp. $|\lambda|=2n$)). A formula for $\lambda_X$ is provided in \cite[Lem 6.3.8]{CM}. We will recall the details for the case when $X=C$ (the other cases are analogous). First, write $\lambda$ as a union of even and odd parts $$\lambda = (\lambda^e_1,...,\lambda^e_p) \cup (\lambda^o_1,...,\lambda^o_{2q}), \qquad \lambda_1^e \geq ... \geq \lambda_p^e, \ \lambda_1^o \geq ... \geq \lambda_{2q}^o.$$ For $1 \leq i \leq q$, define $$(\lambda_{2i-1}^o,\lambda_{2i}^o)' = \begin{cases} (\lambda_{2i-1}^o,\lambda_{2i}^o) &\mbox{if } \lambda^o_{2i-1}=\lambda^o_{2i}\\ (\lambda_{2i-1}^o-1,\lambda_{2i}^o+1) &\mbox{otherwise} \end{cases}$$ Then \begin{equation}\label{eq:Ccollapse}\lambda_C = (\lambda_1^e,...,\lambda_p^e) \cup \bigcup_{i=1}^q (\lambda_{2i-1}^o,\lambda_{2i}^o)'.\end{equation} If $\lambda \in \mathcal{P}_X(n)$, define the \emph{dual} partition $d(\lambda)$ of $\lambda$ $$d(\lambda) = \begin{cases} ((\lambda^t)^-)_C \in \mathcal{P}_C(n)& \mbox{if } X =B\\ ((\lambda^t)^+)_B \in \mathcal{P}_B(n) &\mbox{if } X=C\\ (\lambda^t)_D \in \mathcal{P}_D(n) &\mbox{if } X=D.
\end{cases}$$ (If $X=D$ and $\lambda$ is very even, there is a decoration $\kappa$ to keep track of--the behavior of $\kappa$ under $d$ is explained in \cite[Cor 6.3.5]{CM}, but we will not need this result). For $X \in \{B,C,D\}$, we write $\mathcal{P}^*_X(n) \subset \mathcal{P}_X(n)$ for the set of (decorated) partitions corresponding to \emph{special nilpotent orbits}---briefly, for the set of \emph{special $X$-partitions}. We will use the following characterization found in \cite[Section 6.3]{CM}: a partition $\lambda \in \mathcal{P}_X(n)$ (resp. decorated partition $\lambda^{\kappa} \in \mathcal{P}_D(n)$) is special if and only if it contains: \begin{align*} X=B: &\text{ an even number of odd parts between every pair of consecutive even parts}\\ &\text{ and an odd number of odd parts greater than the largest even part.}\\ X=C: &\text{ an even number of even parts between every pair of consecutive odd parts}\\ &\text{ and an even number of even parts larger than the greatest odd part.}\\ X=D: &\text{ an even number of odd parts between every pair of consecutive even parts}\\ &\text{ and an even number of odd parts greater than the largest even part.} \end{align*} Note that if $X=D$ and $\lambda \in \mathcal{P}(2n)$ is very even, then $\lambda^0, \lambda^1 \in \mathcal{P}_X^*(n)$. Define the sets \begin{align*} \overline{\mathcal P}_B(p,n) &= \begin{cases} \mathcal P_D(p) \times \mathcal P_B(n-p) & \mbox{if } p\ne 1 \\ \mathcal P_B(n) & \mbox{if } p = 1 \end{cases} \\ \overline{\mathcal P}_C(p,n) &= \mathcal{P}_C(p) \times \mathcal{P}_C(n-p) \\ \overline{\mathcal P}_D(p,n) &= \begin{cases} \mathcal P_D(p) \times \mathcal P_D(n-p) & \mbox{if } p\ne 1,n-1 \\ \mathcal P_D(n) & \mbox{if } p = 1,n-1 \end{cases} \end{align*} and define $$\overline{\mathcal{P}}_X(n) = \coprod_{k=0}^n\overline{\mathcal P}_X(k,n).$$ Analogously define $\overline{\mathcal P}^*_X(k,n)$ and $\overline{\mathcal P}^*_X(n)$ by replacing all instances of $\mathcal P$ with $\mathcal P^*$ in the definitions above. Following Achar, we will often write $^{\langle\mu\rangle}(\mu \cup \nu)$ for the bipartition $(\mu,\nu) \in \overline{\mathcal{P}}_X(n)$ (except when $\mu$ or $\nu$ is very even and this notation is ambiguous). Note that for $\lambda\in \mathcal P_X(n)$ we have $(\emptyset,\lambda)\in \overline{\mathcal P}_X(p,n)$ for \begin{align*} X = B:&\quad p\in \{0,1\} \\ X = C:&\quad p\in \{0\} \\ X = D:&\quad p\in \{0,1\} \end{align*} and $(\lambda,\emptyset)\in \overline{\mathcal P}_D(p,n)$ for $X=D$, $p\in\{n-1,n\}$. Thus there is ambiguity when we write $(\emptyset,\lambda)\in\overline{\mathcal P}_X(n)$. We will always interpret this statement as $(\emptyset,\lambda)\in \overline{\mathcal P}_X(0,n)$. Similarly, we interpret $(\lambda,\emptyset)\in\overline{\mathcal P}_D(n)$ to mean $(\lambda,\emptyset)\in\overline{\mathcal P}_D(n,n)$. For $J = J_X(p,n)$, we have natural bijections \begin{align}\label{eq:pseudolevisorbits} \cN_o^{L_J} &\xrightarrow{\sim} \overline{\mathcal{P}}_X(p,n) \end{align} defined via (\ref{eq:orbitspartitions}) and (\ref{eq:pseudolevis}). Let \begin{equation} \mathcal K_{\tilde\Delta}^{max} := \{(J,\OO)\in\mathcal K_{\tilde\Delta} \mid |\tilde\Delta|-|J|=1\}. \end{equation} There is a natural bijection \begin{equation}\label{eq:KPbijection}\mathcal K_{\tilde\Delta}^{max}\xrightarrow{\sim} \overline{\mathcal P}_X(n)\end{equation} defined via the bijections (\ref{eq:pseudolevisorbits}).
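To fix ideas, here is a small worked example of the collapse, the duality map $d$, and the specialness criterion recalled above (the example is ours, added for concreteness; the computations are routine applications of these definitions). \begin{example} Let $\lambda = (5,4,3,2)$, a partition of $14$. Its even parts are $(4,2)$ and its odd parts are $(5,3)$; since $5 \neq 3$, formula (\ref{eq:Ccollapse}) replaces $(5,3)$ by $(4,4)$, so $\lambda_C = (4,4,4,2) \in \mathcal{P}_C(7)$. Next, let $\lambda = (3,1,1) \in \mathcal{P}_B(2)$. Then $\lambda^t = (3,1,1)$, $(\lambda^t)^- = (3,1)$, and the $C$-collapse of $(3,1)$ is $(2,2)$, so $d(\lambda) = (2,2) \in \mathcal{P}_C(2)$. Finally, applying the type $C$ criterion above: $(3,3,2) \in \mathcal{P}_C(4)$ is special (no even parts lie between the two parts equal to $3$, and no even part exceeds $3$), whereas $(2,1,1) \in \mathcal{P}_C(2)$ is not special (exactly one even part, namely $2$, exceeds its largest odd part). \end{example}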
Consider the composition $$s: \overline{\mathcal{P}}_X(n) \xrightarrow{\sim} \mathcal{K}_{\tilde{\Delta}}^{max} \hookrightarrow \mathcal{K}_{\tilde{\Delta}} \overset{\mathbb{L}}{\twoheadrightarrow} \cN_{o,c},$$ and the further composition $$\bar s: \overline{\mathcal{P}}_X(n) \overset{s}{\twoheadrightarrow} \cN_{o,c} \overset{\mathfrak{Q}}{\twoheadrightarrow} \cN_{o,\bar c}.$$ The fibers of $\bar s$ are described in \cite[Section 3.4]{Acharduality}. Achar's description requires some additional terminology. If $\lambda \in \mathcal{P}_X(n)$ and $x \in \ZZ_{\geq 0}$, we say that $x$ is \emph{markable in} $\lambda$ (or simply \emph{markable}, if $\lambda$ is understood) if $m_{\lambda}(x) \geq 1$ and \begin{align*} X=B: & \text{ $x$ is odd and $\mathrm{ht}_{\lambda}(x)$ is odd}.\\ X=C: & \text{ $x$ is even and $\mathrm{ht}_{\lambda}(x)$ is even}.\\ X=D: & \text{ $x$ is odd and $\mathrm{ht}_{\lambda}(x)$ is even}. \end{align*} Write $x_k > x_{k-1} > ... > x_1$ for the markable parts of $\lambda$. Then for $\mu$ a partition, the \emph{reduction} of $\mu$ is a subpartition $r_{\lambda}(\mu) \subseteq \lambda$ defined by $$m_{r_{\lambda}(\mu)}(x_i) = \begin{cases} 1 &\mbox{if } \mathrm{ht}_{\mu}(x_i) - \mathrm{ht}_{\mu}(x_{i+1}) \text{ is odd}.\\ 0 &\mbox{if } \mathrm{ht}_{\mu}(x_i) - \mathrm{ht}_{\mu}(x_{i+1}) \text{ is even}. \end{cases}, \qquad m_{r_{\lambda}(\mu)}(y) = 0 \text{ if } y \text{ is not markable in } \lambda,$$ where we set $\mathrm{ht}_{\mu}(x_{k+1}) = 0$. \begin{prop}[Section 3.4, \cite{Acharduality}]\label{prop:Acharequivalence} Let $\lambda \in \mathcal{P}_X(n)$ (if $X=D$, we assume $\lambda$ is not very even and so ignore the decoration). Choose $^{\langle\mu\rangle}\lambda, ^{\langle\mu'\rangle}\lambda \in \overline{\mathcal{P}}_X(n)$. Then $\bar{s}(^{\langle\mu\rangle}\lambda) = \bar{s}(^{\langle\mu'\rangle}\lambda)$ if and only if $r_{\lambda}(\mu) = r_{\lambda}(\mu')$. \end{prop} \subsection{Preliminaries on s-symbols} Write $a\ll b$ to mean $a+2\le b$. For a sequence $a = (a_1,a_2,\dots,a_k)$ define $\#a:=k$. For an integer $x$ we write $x+a$ for the sequence $(x+a_1,\dots,x+a_k)$. Write $a(i)$ for the $i$th entry from the right of $a$. Define $z^{0,k}=(0,1,\dots,k-1)$. An s-symbol $\Lambda:=(a;b)$ is a pair of sequences $a=(a_1\ll a_2 \ll \cdots \ll a_k)$, $b=(b_1 \ll b_2 \ll \dots \ll b_l)$ with $a_1,b_1\ge 0$. Define $\#\Lambda = \#a+\#b$. The defect of $\Lambda$ is defined to be $\#a-\#b$. We say $\Lambda$ is an s-symbol of type $X$ if \begin{align*} X=B: &\text{ $\Lambda$ has defect 1}\\ X=C: &\text{ $\Lambda$ has defect 1 and $b_1>0$}\\ X=D: &\text{ $\Lambda$ has defect 0.} \end{align*} We will introduce two different equivalence relations on s-symbols, denoted $\sim$ and $\approx$. For $\Lambda=((a_1,\dots,a_k);(b_1,\dots,b_l))$ and $\Lambda'=((a_1',\dots,a_{k'}');(b_1',\dots,b_{l'}'))$ we write $\Lambda\sim\Lambda'$ if $$\{a_1,\dots,a_k,b_1,\dots,b_{l}\} = \{a_1',\dots,a_{k'}',b_1',\dots,b_{l'}'\}$$ as multisets. On the other hand, $\approx$ is defined to be the equivalence relation generated by \begin{align*} X=B,D&:((a_1,\dots,a_s);(b_1,\dots,b_t))\approx ((0,a_1+2,\dots,a_s+2);(0,b_1+2,\dots,b_t+2))\\ X=C&:((a_1,\dots,a_s);(b_1,\dots,b_t))\approx ((0,a_1+2,\dots,a_s+2);(1,b_1+2,\dots,b_t+2)). \end{align*} For a set of s-symbols $S$ of type $X$ and $\Lambda\in S$ write $[\Lambda;S]$ (resp. $\phi(\Lambda;S)$) for the set of $\Lambda'\in S$ such that $\Lambda\approx\Lambda'$ (resp. $\Lambda\sim\Lambda'$).
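The following small example (ours, for illustration only) may help to keep the two equivalence relations apart. \begin{example} The s-symbols $((1,3);(0,2))$ and $((0,2);(1,3))$ have defect $0$ (type $D$) and the same multiset of entries $\{0,1,2,3\}$, so they are $\sim$-equivalent but not equal. On the other hand, for $X=B$ the generating move for $\approx$ gives $$((1,4);(1)) \approx ((0,3,6);(0,3)),$$ and these two symbols are not $\sim$-equivalent. \end{example}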
For a sequence $a=(a_1\ll a_2 \ll \cdots \ll a_k)$ define \begin{align*} \rho_{s,0}(a) &= \sum_i\left(a_i-2(i-1)\right) \\ \rho_{s,1}(a) &= \sum_i\left(a_i-2(i-1)-1\right) \text{ provided $a_1> 0$}. \end{align*} Write $\Lambda_X(n;k)$ for the set of s-symbols $(a;b)$ of type $X$ with $\#b=k$ and \begin{align*} X=B: & \ \rho_{s,0}(a)+\rho_{s,0}(b) = n\\ X=C: & \ \rho_{s,0}(a)+\rho_{s,1}(b) = n\\ X=D: & \ \rho_{s,0}(a)+\rho_{s,0}(b) = n. \end{align*} For an s-symbol $\Lambda$, define $\bar \Lambda$ to be the sequence $$\bar \Lambda := \begin{cases}(a_1,b_1,a_2,b_2,\dots) & \mbox{ if $X \in \{B,C\}$}\\ (b_1,a_1,b_2,a_2,\dots) & \mbox{ if $X=D$}. \end{cases} $$ We say $\Lambda$ is monotonic if $\bar \Lambda$ is a non-decreasing sequence. Note every $\sim$ equivalence class in $\Lambda_X(n;k)$ contains a monotonic s-symbol. Define $$\Lambda_X(n) = \coprod_{k\ge0} \Lambda_X(n;k)/\approx.$$ For $\Lambda\in \coprod_{k\ge0} \Lambda_X(n;k)$ we will simply write $[\Lambda]$ for $\left[\Lambda;\coprod_{k\ge0} \Lambda_X(n;k)\right]$. Define $[\Lambda_1]\sim[\Lambda_2]$ if there exists $\Lambda_1'\in[\Lambda_1],\Lambda_2'\in[\Lambda_2]$ such that $\Lambda_1'\sim\Lambda_2'$. Write $\phi([\Lambda])$ for the set of $[\Lambda']\in \Lambda_X(n)$ such that $[\Lambda]\sim[\Lambda']$. For $k\ge 0$ and $\Lambda\in\Lambda_X(n;k)$ the map \begin{equation} \label{eq:familybijection} \phi(\Lambda;\Lambda_X(n;k))\xrightarrow{\sim}\phi([\Lambda]), \qquad \Lambda'\mapsto [\Lambda'] \end{equation} is a bijection. For $X\in\{B,C\}$, recall from Section \ref{sec:classicalliealgebras} that $\mathrm{Irr}(W_X(n))$ is parameterised by $\overline{\mathcal P}(n)$. Define a map \begin{equation}\label{eq:LambdaE}\Lambda:\mathrm{Irr}(W_X(n))\to \coprod_{k\ge0}\Lambda_X(n;k),\qquad E\mapsto \Lambda(E)\end{equation} as follows. Suppose $E\in\mathrm{Irr}(W_X(n))$ corresponds to $(\lambda,\mu)\in\overline{\mathcal P}(n)$ with $\lambda=(\lambda_1\le\lambda_2\le\cdots\le\lambda_k)$, $\mu=(\mu_1\le\mu_2\le\cdots\le\mu_l)$. If $k > l$ let $\lambda'=\lambda$, $\mu'=(0^{k-l-1})\cup_\le \mu$ (where $\cup_\le$ is defined analogously to $\cup_\ge$, with the largest part of the first partition at most the smallest part of the second) and define \begin{equation} \Lambda(E) := \begin{cases} (\lambda'+2z^{0,k};\mu'+2z^{0,k-1}) & \mbox{ if $X=B$} \\ (\lambda'+2z^{0,k};\mu'+2z^{0,k-1}+1) & \mbox{ if $X=C$}. \end{cases} \end{equation} If $k\le l$ let $\lambda'=(0^{l-k+1})\cup_\le\lambda$, $\mu'=\mu$ and define $\Lambda(E)$ as above but with $k$ replaced by $l+1$. We get a bijection $$\mathrm{Irr}(W_X(n))\xrightarrow{\sim} \Lambda_X(n), \qquad E \mapsto [\Lambda(E)].$$ This bijection has the following properties \cite[Section 13.3]{Carter1993}: \begin{enumerate} \item $\OO(E_1)=\OO(E_2)$ if and only if $[\Lambda(E_1)]\sim [\Lambda(E_2)]$, \item $\Lambda(E)$ is monotonic if and only if for all $\Lambda\in [\Lambda(E)]$, $\Lambda$ is monotonic if and only if $E = E(\OO,1)$ for some $\OO\in\cN_o$. \end{enumerate} The situation in type $D$ is a bit more delicate. Firstly we must consider unordered s-symbols. These are s-symbols, except they consist of unordered pairs $\Lambda:=\{a;b\}$ instead of ordered pairs. We say $\Lambda$ has defect $|\#a-\#b|$. For an ordered pair $\Omega=(a;b)$ write $\{\Omega\}$ for the unordered pair $\{a;b\}$. We say $\Lambda$ is of type $D$ if it has defect $0$. We will also require a decoration for $\Lambda=\{a;b\}$. We call $\Lambda^\kappa$ a decorated s-symbol where $\kappa\in \{0,1\}$. If $a\ne b$ then we declare $\Lambda^0=\Lambda^1$, otherwise they are different. We define $\sim$ for unordered s-symbols the same as for s-symbols.
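As a quick illustration of the map $\Lambda$ of (\ref{eq:LambdaE}) (a small example of ours, added for concreteness): \begin{example} Let $X = B$, $n = 4$ and let $E \in \mathrm{Irr}(W_B(4))$ correspond to the bipartition $(\lambda,\mu) = ((1,2),(1)) \in \overline{\mathcal{P}}(4)$. Here $k = 2 > l = 1$, so $\lambda' = (1,2)$, $\mu' = (1)$ and $$\Lambda(E) = ((1,2)+2z^{0,2};(1)+2z^{0,1}) = ((1,4);(1)).$$ Indeed $\rho_{s,0}((1,4)) + \rho_{s,0}((1)) = 3 + 1 = 4$, so $\Lambda(E) \in \Lambda_B(4;1)$, and applying the $\approx$-move once produces the representative $((0,3,6);(0,3)) \in \Lambda_B(4;2)$ (cf. the example above). \end{example}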
We define $\sim$ for decorated unordered s-symbols by \begin{equation} \{a;b\}^{\kappa} \sim \{a';b'\}^{\kappa'} \text{ if } \begin{cases} \{a;b\} = \{a';b'\}, \kappa=\kappa' & \mbox{if } a=b \\ \{a;b\} \sim \{a';b'\} & \mbox{if } a\ne b. \end{cases} \end{equation} We define $\approx$ for unordered s-symbols to be the equivalence relation generated by \begin{equation} \{(a_1,\dots,a_s);(b_1,\dots,b_t)\}\approx \{(0,a_1+2,\dots,a_s+2);(0,b_1+2,\dots,b_t+2)\}. \end{equation} We define $\approx$ for decorated unordered s-symbols by \begin{equation} \{a;b\}^{\kappa} \approx\{a';b'\}^{\kappa'} \text{ if } \begin{cases} \{a;b\} \approx \{a';b'\}, \kappa=\kappa' & \mbox{if } a=b \\ \{a;b\} \approx \{a';b'\} & \mbox{if } a\ne b. \end{cases} \end{equation} For a set $S$ of unordered s-symbols (resp. decorated unordered s-symbols) and $\Lambda\in S$ (resp. $\Lambda^\kappa\in S$) we define $[\Lambda;S]$ and $\phi(\Lambda;S)$ analogously to the ordered case. Write $\tilde\Lambda_D(n;k)$ (resp. $\tilde\Lambda_D^\bullet(n;k)$) for the set of unordered s-symbols (resp. decorated unordered s-symbols) of type $D$ with $\#b=k$ and $\rho_{s,0}(a)+\rho_{s,0}(b) = n$. Define \begin{equation} \tilde\Lambda_D(n) = \coprod_{k\ge0} \tilde\Lambda_D(n;k)/\approx, \qquad \tilde\Lambda_D^\bullet(n) = \coprod_{k\ge0} \tilde\Lambda_D^\bullet(n;k)/\approx. \end{equation} For $\Lambda \in \coprod_{k\ge0} \tilde\Lambda_D(n;k)$ (resp. $\Lambda^\kappa\in\coprod_{k\ge0} \tilde\Lambda_D^\bullet(n;k)$) we will simply write $[\Lambda]$ (resp. $[\Lambda^\kappa]$) for $[\Lambda;\coprod_{k\ge0} \tilde\Lambda_D(n;k)]$ (resp. $[\Lambda^\kappa;\coprod_{k\ge0} \tilde\Lambda^\bullet_D(n;k)]$). Write $\phi([\Lambda])$ (resp. $\phi([\Lambda^\kappa])$) for the set of $[\Lambda']\in \tilde\Lambda_D(n)$ (resp. $[\Lambda'^{\kappa'}]\in\tilde\Lambda_D^\bullet(n)$) such that $[\Lambda]\sim[\Lambda']$ (resp. $[\Lambda^\kappa]\sim[\Lambda'^{\kappa'}]$). For $k\ge0$ and $\Lambda\in\tilde\Lambda_D(n;k)$ (resp. $\Lambda^\kappa\in\tilde\Lambda_D^\bullet(n;k)$) the map \begin{align} \phi(\Lambda;\tilde\Lambda_D(n;k)) &\to \phi([\Lambda]), \qquad \Lambda'\mapsto[\Lambda'] \\ \text{resp. }\phi(\Lambda^\kappa;\tilde\Lambda_D^\bullet(n;k)) &\to \phi([\Lambda^\kappa]), \qquad \Lambda'^{\kappa'}\mapsto[\Lambda'^{\kappa'}] \end{align} is a bijection. Recall from Section \ref{sec:classicalliealgebras} that $\mathrm{Irr}(W_D(n))$ is parameterised by $\overline{\mathcal P}^{d,u}(n)$. Define $$\tilde\Lambda^\bullet:\mathrm{Irr}(W_D(n))\to \coprod_{k\ge0}\tilde\Lambda_D^\bullet(n;k),\qquad E\mapsto \tilde\Lambda^\bullet(E)$$ as follows. Suppose $E\in\mathrm{Irr}(W_D(n))$ corresponds to $\{\lambda,\mu\}^\kappa\in\overline{\mathcal P}^{d,u}(n)$. Let $\lambda',\mu'$ be $\lambda,\mu$ padded with $0$'s in a similar manner to types $B$ and $C$ but to ensure they have the same number of entries. Define \begin{equation} \tilde\Lambda^\bullet(E) = \{\lambda'+2z^{0,k};\mu'+2z^{0,k}\}^\kappa \end{equation} where $k=\#\lambda'=\#\mu'$. Define the map $$\tilde\Lambda:\mathrm{Irr}(W_D(n))\to \coprod_{k\ge0}\tilde\Lambda_D(n;k),\qquad E\mapsto \tilde\Lambda(E),$$ by setting $\tilde\Lambda(E)$ to be the underlying unordered symbol for $\tilde{\Lambda}^{\bullet}(E)$. We get a bijection \begin{equation} \label{eq:ssymbD} \mathrm{Irr}(W_D(n)) \xrightarrow{\sim} \tilde\Lambda_D^\bullet(n), \qquad E \mapsto [\tilde{\Lambda}^{\bullet}(E)]. \end{equation} For certain identities we will require an ordering on the rows of $\Lambda$.
Define \begin{equation} \underline\Lambda = \begin{cases} (a;b) & \mbox{if } \sum_ia_i\ge \sum_ib_i \\ (b;a) & \mbox{if } \sum_ia_i< \sum_ib_i. \end{cases} \end{equation} and \begin{equation} \underline{\overline\Lambda} = \begin{cases} (b_1,a_1,b_2,a_2,\dots) & \mbox{if } \sum_ia_i\ge \sum_ib_i \\ (a_1,b_1,a_2,b_2,\dots) & \mbox{if } \sum_ia_i< \sum_ib_i. \end{cases} \end{equation} For $E\in\mathrm{Irr}(W_D(n))$ write $\Lambda(E)$ for $\underline{\tilde\Lambda(E)}$. We say $\Lambda^\kappa=\{a;b\}^\kappa\in \tilde\Lambda_D^\bullet(n)$ is monotonic if $\underline{\overline\Lambda}$ is a non-decreasing sequence. The bijection in Equation (\ref{eq:ssymbD}) has the following properties \cite[Section 13.3]{Carter1993}: \begin{enumerate} \item $\OO(E_1)=\OO(E_2)$ if and only if $[\tilde\Lambda^\bullet(E_1)]\sim [\tilde\Lambda^\bullet(E_2)]$, \item $\tilde\Lambda^\bullet(E)$ is monotonic if and only if for all $\Lambda^\kappa\in [\tilde\Lambda^\bullet(E)]$, $\Lambda^\kappa$ is monotonic if and only if $E = E(\OO,1)$ for some $\OO\in\cN_o$. \end{enumerate} For $\Lambda\in\Lambda_X(n;k)$ monotonic we have the following convenient description for $[\Lambda;\Lambda_X(n;k)]$. Decompose $\bar\Lambda$ into subsequences $(I_1,I_2,\dots,I_n)$ by first extracting all pairs, and then taking intervals of consecutive integers (`intervals' for short). We call this decomposition the \textit{refinement} of $\bar\Lambda$. Now decompose $a,b$ as $a=(A_1,A_2,\dots,A_n),b=(B_1,B_2,\dots,B_n)$ where $A_i$ (resp. $B_i$) consists (in increasing order) of the elements of $a$ (resp. $b$) in $I_i$. We call these the \textit{refinements} of $a$ and $b$ respectively. \begin{example} Suppose \begin{equation} \Lambda=\begin{pmatrix} 0 & & 2 & & 3 & & 7 & & 10 & & 13 \\ & 1 & & 3 & & 6 & & 8 & & 11 & \end{pmatrix} \end{equation} then $\bar \Lambda = (0,1,2,3,3,6,7,8,10,11,13)$ and $I_1=(0,1,2),I_2=(3,3),I_3=(6,7,8),I_4=(10,11),I_5=(13)$. The refinements of $a$ and $b$ are then $a=(A_1,\dots,A_5),b=(B_1,\dots,B_5)$ where \begin{align*} A_1=(0,2),\qquad A_2=(3),\qquad A_3=(7),\qquad A_4=(10),\qquad A_5=(13) \\ B_1=(1),\qquad B_2=(3),\qquad B_3=(6,8),\qquad B_4=(11),\qquad B_5=\emptyset. \end{align*} \end{example} The following lemma is trivial and we omit the proof. \begin{lemma} \label{lem:familyflips} Let $\Lambda=(a;b),\Lambda'\in\Lambda_X(n;k)$, $X\in\{B,C,D\}$ and suppose $\Lambda$ is monotonic. Form the refinements $a=(A_1,\dots,A_n),b=(B_1,\dots,B_n)$ of $a$ and $b$. Then $\Lambda'\sim\Lambda$ if and only if $$\Lambda' = ((X_1,X_2,\dots,X_n);(Y_1,Y_2,\dots,Y_n))$$ where $\{X_i,Y_i\} = \{A_i,B_i\}$. \end{lemma} In the statement of the lemma, we say that the index $i$ for $\Lambda$ has been `flipped' if $X_i=B_i,Y_i=A_i$. So in other words one can obtain all $\Lambda'\sim\Lambda$ by \begin{align*} X=B: &\text{ flipping indices in a manner that yields a symbol of defect 1.}\\ X=C: &\text{ flipping indices in a manner that yields a symbol of defect 1,}\\ &\text{except that the index $1$ cannot be flipped if $A_1$ starts with a $0$} \\ X=D: &\text{ flipping some indices in a manner that yields a symbol of defect 0.} \end{align*} For decorated unordered s-symbols in type $D$ we have a similar description. Let $\Lambda^\kappa=\{a;b\}^\kappa$ be monotonic. The refinements of $\overline{\underline \Lambda}, a, b$ are defined the same as in the ordered case. \begin{lemma} Let $\Lambda^\kappa=\{a;b\}^\kappa,\Lambda'^{\kappa'}\in\tilde\Lambda_D^\bullet(n;k)$. Form the refinements $a=(A_1,A_2,\dots,A_n)$ and $b=(B_1,B_2,\dots,B_n)$.
We have that $\Lambda'^{\kappa'}\sim\Lambda^\kappa$ iff $$\Lambda' = \{(X_1,X_2,\dots,X_n);(Y_1,Y_2,\dots,Y_n)\}$$ where $\{X_i,Y_i\} = \{A_i,B_i\}$ and \begin{align*} \begin{cases} \kappa=\kappa' & \mbox{ if } a=b \\ \text{no restriction on } \kappa,\kappa' & \mbox{ otherwise}. \end{cases} \end{align*} \end{lemma} \subsection{Preliminaries on a-symbols} An a-symbol is a pair $\alpha:=(a;b)$ of sequences $a=(a_1< a_2 <\cdots <a_k)$, $b=(b_1 <b_2 <\dots <b_l)$ with $a_1,b_1\ge 0$. The defect of $\alpha$ is $\#a-\#b$. We say $\alpha$ is an a-symbol of type $X$ if \begin{align*} X=B: &\text{ $\alpha$ has defect 1}\\ X=C: &\text{ $\alpha$ has defect 1}\\ X=D: &\text{ $\alpha$ has defect 0}. \end{align*} Define $\sim$ the same as for s-symbols. On the other hand, $\approx$ is defined for all types to be the equivalence relation generated by \begin{align*} ((a_1,\dots,a_s);(b_1,\dots,b_t))\approx ((0,a_1+1,\dots,a_s+1);(0,b_1+1,\dots,b_t+1)). \end{align*} Define $[\bullet;\bullet]$ and $\phi(\bullet;\bullet)$ for a-symbols to be the same as for s-symbols. For a sequence $a=(a_1< a_2< \cdots < a_k)$ define \begin{align*} \rho_{a,0}(a) &= \sum_i\left(a_i-(i-1)\right) \\ \rho_{a,1}(a) &= \sum_i\left(a_i-(i-1)-1\right) \text{ provided $a_1>0$}. \end{align*} We define $\bar \alpha$ the same as for s-symbols, and we say $\alpha$ is monotonic if $\bar\alpha$ is a non-decreasing sequence. Write $\alpha_X(n;k)$ for the set of a-symbols $(a;b)$ of type $X$ with $\#b=k$ and $\rho_{a,0}(a)+\rho_{a,0}(b) = n$. Define $$\alpha_X(n) = \coprod_{k\ge0} \alpha_X(n;k)/\approx.$$ For $\alpha\in \coprod_{k\ge0} \alpha_X(n;k)$ we will simply write $[\alpha]$ for $\left[\alpha;\coprod_{k\ge0} \alpha_X(n;k)\right]$. Define $[\alpha_1]\sim[\alpha_2]$ if there exists $\alpha_1'\in[\alpha_1],\alpha_2'\in[\alpha_2]$ such that $\alpha_1'\sim\alpha_2'$. Write $\phi([\alpha])$ for the set of $[\alpha']\in \alpha_X(n)$ such that $[\alpha]\sim[\alpha']$. For $k\ge 0$ and $\alpha\in\alpha_X(n;k)$ the map \begin{equation} \phi(\alpha;\alpha_X(n;k))\to \phi([\alpha]), \qquad \alpha'\mapsto [\alpha'] \end{equation} is a bijection. For $X\in\{B,C\}$ define the map $$\alpha:\mathrm{Irr}(W_X(n))\to \coprod_{k\ge0}\alpha_X(n;k),\qquad E\mapsto \alpha(E)$$ as follows. Suppose $E\in\mathrm{Irr}(W_X(n))$ corresponds to the bipartition $(\lambda,\mu)$ with $\lambda=(\lambda_1\le\lambda_2\le\cdots\le\lambda_k),\mu=(\mu_1\le\mu_2\le\cdots\le\mu_l)$. If $k > l$ let $\lambda'=\lambda$, $\mu'=(0^{k-l-1})\cup_\le \mu$ and define \begin{equation}\label{eq:alphaE} \alpha(E) := (\lambda'+z^{0,k};\mu'+z^{0,k-1}) \end{equation} If $k\le l$ let $\lambda'=(0^{l-k+1})\cup_\le\lambda$, $\mu'=\mu$ and define $\alpha(E)$ as above but with $k$ replaced by $l+1$. We get a bijection $$\mathrm{Irr}(W_X(n))\xrightarrow{\sim} \alpha_X(n), \qquad E \mapsto [\alpha(E)].$$ This bijection has the following properties \cite[Section 13.2]{Carter1993}: \begin{enumerate} \item $E_1\sim E_2$ if and only if $[\alpha(E_1)]\sim [\alpha(E_2)]$, \item $\alpha(E)$ is monotonic if and only if for all $\alpha\in [\alpha(E)]$, $\alpha$ is monotonic if and only if $E$ is special. \end{enumerate} The situation in type $D$ is much like the situation for s-symbols in type $D$. We define unordered a-symbols, the defect, type $D$ unordered a-symbols, decorated unordered a-symbols, $\sim$ (resp. $\approx$, resp. $[\bullet;\bullet]$, resp. $\phi(\bullet;\bullet)$) for unordered a-symbols, $\sim$ (resp. $\approx$, resp. $[\bullet;\bullet]$, resp. $\phi(\bullet;\bullet)$) for decorated unordered a-symbols, monotonicity for decorated unordered a-symbols, and $\{\bullet\}$ analogously to the same notions for s-symbols, but with s replaced with a. Write $\tilde\alpha_D(n;k)$ (resp.
$\tilde\alpha_D^\bullet(n;k)$) for the set of unordered a-symbols $\{a;b\}$ (resp. decorated unordered a-symbols $\{a;b\}^\kappa$) of type $D$ with $\#b=k$ and $\rho_{a,0}(a)+\rho_{a,0}(b) = n$. Define \begin{equation} \tilde\alpha_D(n) = \coprod_{k\ge0} \tilde\alpha_D(n;k)/\approx, \qquad \tilde\alpha_D^\bullet(n) = \coprod_{k\ge0} \tilde\alpha_D^\bullet(n;k)/\approx. \end{equation} For $\alpha \in \coprod_{k\ge0} \tilde\alpha_D(n;k)$ (resp. $\alpha^\kappa\in\coprod_{k\ge0} \tilde\alpha_D^\bullet(n;k)$) we will simply write $[\alpha]$ (resp. $[\alpha^\kappa]$) for $[\alpha;\coprod_{k\ge0} \tilde\alpha_D(n;k)]$ (resp. $[\alpha^\kappa;\coprod_{k\ge0} \tilde\alpha^\bullet_D(n;k)]$). Write $\phi([\alpha])$ (resp. $\phi([\alpha^\kappa])$) for the set of $[\alpha']\in \tilde\alpha_D(n)$ (resp. $[\alpha'^{\kappa'}]\in\tilde\alpha_D^\bullet(n)$) such that $[\alpha]\sim[\alpha']$ (resp. $[\alpha^\kappa]\sim[\alpha'^{\kappa'}]$). For $k\ge0$ and $\alpha\in\tilde\alpha_D(n;k)$ (resp. $\alpha^\kappa\in\tilde\alpha_D^\bullet(n;k)$) the map \begin{align} \begin{split} \phi(\alpha;\tilde\alpha_D(n;k)) &\to \phi([\alpha]), \qquad \alpha'\mapsto[\alpha'] \\ \left(\text{resp. }\phi(\alpha^\kappa;\tilde\alpha_D^\bullet(n;k))\right. &\left.\to \phi([\alpha^\kappa]), \qquad \alpha'^{\kappa'}\mapsto[\alpha'^{\kappa'}]\right) \end{split} \end{align} is a bijection. Define $$\tilde\alpha^\bullet:\mathrm{Irr}(W_D(n))\to \coprod_{k\ge0}\tilde\alpha_D^\bullet(n;k),\qquad E\mapsto \tilde\alpha^\bullet(E)$$ as follows. Suppose $E\in\mathrm{Irr}(W_D(n))$ corresponds to $\{\lambda,\mu\}^\kappa$. Let $\lambda',\mu'$ be $\lambda,\mu$ padded with $0$'s in a similar manner to types $B$ and $C$ but to ensure they have the same number of entries. Define \begin{equation} \tilde\alpha^\bullet(E) = \{\lambda'+z^{0,k};\mu'+z^{0,k}\}^\kappa \end{equation} where $k$ is the number of entries in $\lambda'$ (and hence also $\mu'$). Define a map $$\tilde\alpha:\mathrm{Irr}(W_D(n))\to \coprod_{k\ge0}\tilde\alpha_D(n;k),\qquad E\mapsto \tilde\alpha(E),$$ where $\tilde\alpha(E)$ is the underlying unordered symbol for $\tilde{\alpha}^{\bullet}(E)$. We get a bijection $$\mathrm{Irr}(W_D(n)) \xrightarrow{\sim} \tilde\alpha_D^\bullet(n), \qquad E \mapsto [\tilde{\alpha}^{\bullet}(E)].$$ This bijection has the following properties \cite[Section 13.2]{Carter1993}: \begin{enumerate} \item $E_1\sim E_2$ if and only if $[\tilde\alpha^\bullet(E_1)]\sim [\tilde\alpha^\bullet(E_2)]$, \item $\tilde\alpha^\bullet(E)$ is monotonic if and only if for all $\alpha^\kappa\in [\tilde\alpha^\bullet(E)]$, $\alpha^\kappa$ is monotonic if and only if $E$ is special. \end{enumerate} For $E\in \mathrm{Irr}(W_D(n))$ write $\alpha(E)$ for $\underline{\tilde\alpha(E)}$. For certain identities we will need to modify unordered defect $0$ a-symbols into defect $1$ a-symbols. Write $\alpha_D^1(n;k)$ for the set of defect 1 symbols $(a;b)$ with $\#b=k$, $a_1=0, b_1>0$, and $\rho_{a,0}(a)+\rho_{a,1}(b) = n$. Given $\alpha\in \alpha_D(n;k)$, define $\alpha^!$ by \begin{equation} \alpha^! := ((0,a_1+1,a_2+1,\dots,a_k+1);(b_1+1,b_2+1,\dots,b_k+1)). \end{equation} This map induces a bijection $\alpha_D(n;k)\to\alpha_D^1(n;k)$. Given $\alpha\in \tilde\alpha_D(n;k)$, define $\alpha^! := (\underline\alpha)^!$. Note that for $\alpha,\beta\in\alpha_D(n;k)$ (resp. $\tilde\alpha_D(n;k)$) we have $\alpha\sim\beta$ iff $\alpha^!\sim\beta^!$.
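Continuing with small illustrative examples (again ours, added only for concreteness), we record the a-symbol analogue of the computation above together with an instance of the map $\alpha \mapsto \alpha^!$. \begin{example} For $X = B$ and the bipartition $((1,2),(1)) \in \overline{\mathcal{P}}(4)$ considered earlier, formula (\ref{eq:alphaE}) gives $$\alpha(E) = ((1,2)+z^{0,2};(1)+z^{0,1}) = ((1,3);(1)),$$ with $\rho_{a,0}((1,3)) + \rho_{a,0}((1)) = 3 + 1 = 4$. For the modification map: the a-symbol $\alpha = ((0,2);(1,3)) \in \alpha_D(4;2)$ satisfies $\rho_{a,0}((0,2)) + \rho_{a,0}((1,3)) = 1 + 3 = 4$, and $$\alpha^! = ((0,1,3);(2,4)) \in \alpha_D^1(4;2),$$ which indeed has defect $1$, $a_1 = 0$, $b_1 > 0$ and $\rho_{a,0}((0,1,3)) + \rho_{a,1}((2,4)) = 1 + 3 = 4$. \end{example}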
\begin{lemma} \label{lem:flips} Let $\alpha=(a;b)$ be an a-symbol (of any defect) and suppose $\bar \alpha$ is decomposed into subsequences $(I_1,I_2,\dots,I_n)$. Let $a=(A_1,A_2,\dots,A_n)$ and $b=(B_1,B_2,\dots,B_n)$ be the corresponding decompositions of $a$ and $b$. We use the convention that $I_i=A_i=B_i=\emptyset$ for $i<1$ and $i>n$. Suppose for all $1\le j\le n$ and all $i<j<k$ and all $x\in I_i,y\in I_j,z\in I_k$ that $x<y<z$. Then flipping any number of indices yields an a-symbol (of possibly different defect). \end{lemma} \begin{proof} The condition guarantees that for $1\le j\le n$ and all $i<j<k$ and all $x\in A_i\cup B_i=I_i,y\in A_j\cup B_j=I_j,z\in A_k\cup B_k=I_k$ we have $x<y<z$. Since each $A_i,B_i$ for $1\le i\le n$ is also increasing, flipping any number of indices yields an a-symbol. \end{proof} \begin{lemma} \label{lem:jump1} Let $\Lambda=(x;y)$ be a monotonic s-symbol of type $X^\vee$ and $\alpha_1=(a_1;b_1),\alpha_2=(a_2;b_2)$ be monotonic a-symbols of the same size and defect as $\Lambda$ such that $\Lambda = \alpha_1+\alpha_2$ (by which we mean $x=a_1+a_2$ and $y=b_1+b_2$). Let $(I_1,\dots,I_n)$ be the refinement of $\bar\Lambda$, and let $T_i=|I_i|$. For $s \in \{1,2\}$, let $\bar\alpha_s=(J_1^s,\dots,J_n^s)$ be the decomposition of $\alpha_s$ corresponding to the refinement of $\bar\Lambda$ (i.e. $|J_i^s| = T_i$ for all $i$). Suppose for all $i$ such that $I_i,I_{i-1}$ are intervals we have that $$J_{i-1}^1(1) + 1 = J_{i}^1(T_{i}).$$ Then the decomposition $\bar\alpha_s=(J_1^s,\dots,J_n^s)$ satisfies the conditions of Lemma \ref{lem:flips} for $s\in\{1,2\}$. \end{lemma} \begin{proof} If $n=1$ there is nothing to prove so suppose $n>1$. Since $\alpha_1,\alpha_2$ are both monotonic it suffices to show that $J_{i-1}^s(1)<J_i^s(T_i)$ for all $1<i\le n,s\in\{1,2\}$. Thus let $1<i\le n$. We consider three cases: $I_{i-1},I_i$ are both intervals; $I_{i-1}$ is a pair; $I_i$ is a pair. Suppose $I_{i-1},I_i$ are both intervals. Then $I_{i-1}(1)\ll I_{i}(T_{i})$ since otherwise $I_{i-1}(1) + 1 = I_{i}(T_{i})$ and so $(I_{i-1},I_{i})$ would itself be an interval which would contradict the definition of the $I_j$'s. Since $I_{j}(k)=J_{j}^1(k)+J_{j}^2(k)$ for all $1\le j\le n,1\le k\le T_j$ we must have that $J_{i-1}^2(1) < J_{i}^2(T_{i})$. Thus we have that $J_{i-1}^s(1) < J_{i}^s(T_{i})$ for $s\in\{1,2\}$. Now suppose $I_{i-1}$ is a pair say $I_{i-1}=(x,x)$. Since $\alpha_1,\alpha_2$ are monotonic we must have $J_{i-1}^s=(u^s,u^s)$ for some $u^s$ where $s\in\{1,2\}$ and $u^1+u^2 = x$. Write $J_i^s=(v_1^s,\dots,v_{T_i}^s)$. Since $\alpha_s$ is an a-symbol, $u^s=J_{i-1}^s(2)<J_i^s(T_i)=v_1^s$. In particular $J_{i-1}^s(1)<J_i^s(T_i)$ for $s\in\{1,2\}$ as required. The case when $I_i$ is a pair is analogous. \end{proof} \subsection{Springer correspondence via symbols} In this section we describe how to compute the map $\lambda \mapsto [\alpha(E(\lambda,1))]$ in types $B/C/D$. \subsubsection{Type $B$} \label{sec:springerB} Let $\lambda\in \mathcal P_B(n)$. Let $\lambda'$ be equal to $\lambda$ padded with $0$'s so that $\#\lambda'$ is odd. Note that $\#\lambda$ is already necessarily odd, but the recipe still works if we pad $\lambda$ with $0$'s. Write $\#\lambda'$ as $2k+1$ and $\lambda' = (\lambda_1'\le\lambda_2'\le\cdots\le\lambda_{2k+1}')$ in non-decreasing order. Let $2\xi_1+1<2\xi_2+1<\cdots<2\xi_{k+1}+1$ and $2\eta_1<2\eta_2<\cdots<2\eta_k$ be an enumeration of the odd and even parts of $\lambda' + z^{0,2k+1}$.
Let $\xi = (\xi_1,\dots,\xi_{k+1})$ and $\eta = (\eta_1,\dots,\eta_k)$. Then $[\alpha(E(\lambda,1))] = [(\xi;\eta)]$. \subsubsection{Type $C$} \label{sec:springerC} Let $\lambda\in \mathcal P_C(n)$. Let $\lambda'$ be equal to $\lambda$ padded with $0$'s so that $\#\lambda'$ is odd. Note that in type $C$, $\#\lambda$ may be either odd or even. Write $\#\lambda'$ as $2k+1$ and $\lambda' = (\lambda_1'\le\lambda_2'\le\cdots\le\lambda_{2k+1}')$ in non-decreasing order. Let $2\xi_1+1<2\xi_2+1<\cdots<2\xi_{k}+1$ and $2\eta_1<2\eta_2<\cdots<2\eta_{k+1}$ be an enumeration of the odd and even parts of $\lambda' + z^{0,2k+1}$. Let $\xi = (\xi_1,\dots,\xi_{k})$ and $\eta = (\eta_1,\dots,\eta_{k+1})$. Then $[\alpha(E(\lambda,1))] = [(\eta;\xi)]$. \subsubsection{Type $D$} \label{sec:springerD} Let $\lambda^\kappa\in \mathcal P_D(n)$. Let $\lambda'$ be equal to $\lambda$ padded with $0$'s so that $\#\lambda'$ is even. Note that $\#\lambda$ is already necessarily even, but the recipe still works if we pad $\lambda$ with $0$'s. Write $\#\lambda'$ as $2k$ and $\lambda' = (\lambda_1'\le\lambda_2'\le\cdots\le\lambda_{2k}')$ in non-decreasing order. Let $2\xi_1+1<2\xi_2+1<\cdots<2\xi_{k}+1$ and $2\eta_1<2\eta_2<\cdots<2\eta_k$ be an enumeration of the odd and even parts of $\lambda' + z^{0,2k}$. Let $\xi = (\xi_1,\dots,\xi_{k})$ and $\eta = (\eta_1,\dots,\eta_k)$. Then $[\tilde\alpha^\bullet(E(\lambda^\kappa,1))] = [\{\xi;\eta\}^\kappa]$. \begin{rmk} By the above constructions we have that $[\Lambda(E(\lambda,1))]\cap\Lambda_X(n;k)\ne\emptyset$ (resp. $[\Lambda(E(\lambda^\kappa,1))]\cap\Lambda_X(n;k)\ne\emptyset$) for all $k\ge \lfloor \#\lambda/2 \rfloor$ and $X\in\{B,C\}$ (resp. $X=D$). \end{rmk} \begin{lemma} \label{lem:asymbolfordcollapse} Suppose $\kappa\in\{0,1\}$, $\lambda \in \mathcal{P}_C(n)$ and $\lambda^t \in \mathcal{P}_D(n)$. Write $E=E((\lambda_D)^\kappa,1)$. Pad $\lambda$ with $0$'s so that $\#\lambda$ is even. Let $2\xi_1+1<2\xi_2+1<\cdots<2\xi_{k}+1$ and $2\eta_1<2\eta_2<\cdots<2\eta_k$ be an enumeration of the odd and even parts of $\lambda + z^{0,\#\lambda}$. Then \begin{equation} \label{eq:aSymbolComputation} [\alpha(E)] = [(\xi;\eta)]. \end{equation} \end{lemma} \begin{proof} Since $\lambda$ is a $C$-partition, it admits a decomposition $$\lambda = \lambda^1 \cup_{\leq} \lambda^2 \cup_{\leq} ... \cup_{\leq} \lambda^k$$ where $\lambda^1,...,\lambda^k$ are `elementary' partitions of the following two types: \begin{itemize} \item[(i)] $\lambda^i = (e_1,o_1,...,o_{2m},e_2)$, where $e_1$, $e_2$ are even and $o_1,...,o_{2m}$ are odd (note: it might be the case that $m=0$) \item[(ii)] $\lambda^i = (o_1,...,o_{2m})$, where $o_1,...,o_{2m}$ are odd. \end{itemize} We will show that, in fact, all $\lambda^i$ are type (i). Since $\lambda^t \in \mathcal{P}_D(n)$, the largest part of $\lambda$ must be odd. So $\lambda^k$ is type (i). Suppose, for contradiction, that $\lambda^{i-1}$ is type (ii) for some $2 \leq i \leq k$. Concatenating if necessary, we can assume that $\lambda^i$ is type (i). Write $$\lambda^{i-1} = (o_1,...,o_{2p}), \qquad \lambda^i = (e_1,o'_1,...,o'_{2q},e_2), \qquad o_{2p} < e_1.$$ Then $$m_{\lambda^t}(\mathrm{ht}_{\lambda}(e_1)) = e_1-o_{2p}.$$ Since $\mathrm{ht}_{\lambda}(e_1)$ is even and $\lambda^t \in \mathcal{P}_D(n)$, it follows that $e_1-o_{2p}$ is even. But $e_1$ is even and $o_{2p}$ is odd, so $e_1-o_{2p}$ is odd. Contradiction. So all $\lambda^i$ are type (i), as asserted.
For each $\lambda^i = (e_1,o_1,...,o_{2p},e_2)$, write $$\overline{\lambda}^i := (e_1+x_i,o_1+x_i,...,o_{2p}+x_i,e_2+x_i) \qquad x_i = \#\lambda - \mathrm{ht}_{\lambda}(e_1).$$ Since $x_i$ is even, $\overline{\lambda}^i$ is type (i). By (the type $D$ analogue of) equation (\ref{eq:Ccollapse}), we have $$\lambda_D = (\overline{\lambda}^1)_D \cup_{\leq} ... \cup_{\leq} (\overline{\lambda}^k)_D.$$ So it suffices to prove the lemma for each $\overline{\lambda}^i$. Since each $\overline{\lambda}^i$ is a type (i) partition, we can reduce to the case in which $\lambda = (e_1,o_1,...,o_{2p},e_2)$. In this case, we have $$\lambda_D = (e_1+1,o_1,...,o_{2p},e_2-1).$$ The right hand side of Equation (\ref{eq:aSymbolComputation}) can then readily be computed to be $$\begin{pmatrix} \lfloor \frac{o_1+1}{2}\rfloor & \lfloor \frac{o_3+3}{2} \rfloor & \ldots & \lfloor \frac{o_{2p-1}+2p-1}{2} \rfloor & \lfloor \frac{e_2+2p}{2} \rfloor \\ \lfloor \frac{e_1+1}{2}\rfloor & \lfloor \frac{o_2+2}{2} \rfloor & \ldots & \lfloor \frac{o_{2p-2}+2p-2}{2} \rfloor & \lfloor \frac{o_{2p}+2p}{2} \rfloor \end{pmatrix}. $$ But by the algorithm in Section \ref{sec:springerD} $$\tilde\alpha(E) \approx \begin{Bmatrix} \lfloor \frac{e_1+1}{2}\rfloor & \lfloor \frac{o_2+2}{2} \rfloor & \ldots & \lfloor \frac{o_{2p-2}+2p-2}{2} \rfloor & \lfloor \frac{o_{2p}+2p}{2} \rfloor \\ \lfloor \frac{o_1+1}{2}\rfloor & \lfloor \frac{o_3+3}{2} \rfloor & \ldots & \lfloor \frac{o_{2p-1}+2p-1}{2} \rfloor & \lfloor \frac{e_2+2p}{2} \rfloor \end{Bmatrix} $$ and therefore $$\alpha(E) \approx \begin{pmatrix} \lfloor \frac{o_1+1}{2}\rfloor & \lfloor \frac{o_3+3}{2} \rfloor & \ldots & \lfloor \frac{o_{2p-1}+2p-1}{2} \rfloor & \lfloor \frac{e_2+2p}{2} \rfloor\\ \lfloor \frac{e_1+1}{2}\rfloor & \lfloor \frac{o_2+2}{2} \rfloor & \ldots & \lfloor \frac{o_{2p-2}+2p-2}{2} \rfloor & \lfloor \frac{o_{2p}+2p}{2} \rfloor \end{pmatrix} $$ as required. \end{proof} \subsection{Restriction and induction via symbols} Let $J\subsetneq\tilde\Delta$ be maximal. Then $W_J$ is of the form \begin{align*} X=B: &\qquad W_D(m)\times W_B(n-m)\\ X=C: &\qquad W_C(m)\times W_C(n-m)\\ X=D: &\qquad W_D(m)\times W_D(n-m) \end{align*} for some $0\le m\le n$. Define \begin{equation} Y(X) = \begin{cases} D & \mbox{ if } X=B,D \\ C & \mbox{ if } X=C. \end{cases} \end{equation} \begin{prop} \cite[Sections 4.5, 5.3, 6.3]{lusztig09} \label{prop:jinduction} Let $E_1\otimes E_2$ be an (irreducible) special representation of $W_J$ and $E = j_{W_J}^{W}(E_1\otimes E_2)$. Then for $k$ sufficiently large so that we can find $\Lambda \in [\Lambda(E)]\cap\Lambda_{X^\vee}(n;k), \alpha_1\in [\alpha(E_1)]\cap\alpha_{Y(X)}(m;k)$ and $\alpha_2\in [\alpha(E_2)]\cap\alpha_X(n-m;k)$ we have that \begin{align*} X=B: &\qquad \Lambda = \alpha_1^!+\alpha_2\\ X=C: &\qquad \Lambda = \alpha_1 +\alpha_2\\ X=D: &\qquad \Lambda = \alpha_1 +\alpha_2. \end{align*} \end{prop} \begin{prop} \label{prop:sumtohom} Let $E_1\otimes E_2\in\mathrm{Irr}(W_J)$ and $E\in\mathrm{Irr}(W_{X^\vee}(n))$. If $X=D$, assume that $\OO^{\vee}(E)$ is not very even. Let $\Lambda\in\Lambda_{X^\vee}(n;k)$ and \begin{itemize} \item [(B)] let $\alpha\in\alpha_D(m;k),\beta\in\alpha_B(n-m;k)$, and suppose $\Lambda = \alpha^!+\beta$. If $[\tilde\alpha(E_1)] = [\{\alpha\}]$, $[\alpha(E_2)]=[\beta]$, $[\Lambda(E)]=[\Lambda]$ then $\Hom(E_1\otimes E_2,E^\vee|_{W_J})\ne0$. \item [(C)] let $\alpha\in\alpha_C(m;k),\beta\in\alpha_C(n-m;k)$, and suppose $\Lambda = \alpha+\beta$.
If $[\alpha(E_1)] = [\alpha]$, $[\alpha(E_2)]=[\beta]$, $[\Lambda(E)]=[\Lambda]$ then $\Hom(E_1\otimes E_2,E^\vee|_{W_J})\ne0$. \item [(D)] let $\alpha\in\alpha_D(m;k),\beta\in\alpha_D(n-m;k)$, and suppose $\Lambda = \alpha+\beta$. If $[\tilde\alpha(E_1)] = [\{\alpha\}]$, $[\tilde\alpha(E_2)]=[\{\beta\}]$, $[\tilde\Lambda(E)]=[\{\Lambda\}]$ then $\Hom(E_1\otimes E_2,E^\vee|_{W_J})\ne0$. \end{itemize} \end{prop} \begin{proof} We will prove the type $B$ case (the other cases are analogous). Write $(\mu,\nu)$, $\{\mu_1,\nu_1\}^{\kappa}$, $(\mu_2,\nu_2)$ for the bipartitions corresponding to $E$, $E_1$, $E_2$. In view of the formulas (\ref{eq:LambdaE}) and (\ref{eq:alphaE}) for the a- and s-symbols associated to a given bipartition, the assertion in (B) is equivalent to the following $$(\mu,\nu) = (\mu_1,\nu_1) + (\mu_2,\nu_2) \implies \Hom(E_1 \otimes E_2,E^{\vee}|_{W_J}) \neq 0.$$ For a partition $\lambda$ of $k$, write $V(\lambda)$ for the irreducible representation of $S_k$ corresponding to $\lambda$. Let $g(\mu_1,\mu_2,\mu)$ denote the multiplicity of $V(\mu_1) \otimes V(\mu_2)$ in the restriction of $V(\mu)$ to $S_{|\mu_1|} \times S_{|\mu_2|}$. By \cite[Theorem 3.2]{GeissengerKinch}, the multiplicity of $E_1 \otimes E_2$ in $E^{\vee}|_{W_J}$ is the product $g(\mu_1,\mu_2,\mu)g(\nu_1,\nu_2,\nu)$. Thus it suffices to show that $g(\mu_1,\mu_2,\mu),g(\nu_1,\nu_2,\nu) \neq 0$. This is an immediate consequence of the Littlewood-Richardson rule. We leave the details to the reader. \end{proof} \subsection{Construction of $(J,\phi)$ in classical types}\label{subsection:Jphi} Now let $\mathfrak{g} = \mathfrak{g}_X(n)$ for $X \in \{B,C,D\}$ and fix a nilpotent orbit $\OO^{\vee} \subset \cN^{\vee}$ (which is not very even in type D). Let $\lambda \in \mathcal{P}_{X^\vee}(n)$ denote the partition corresponding to $\OO^{\vee}$. To show that $\OO^{\vee}$ is faithful, we must exhibit a pair $(J,\phi) \in \mathcal{F}_{\tilde\Delta}$ satisfying conditions (i) and (ii) of Definition \ref{def:faithful}. In this subsection, we will construct such a pair and verify condition (i) (condition (ii) will be verified in Section \ref{sec:prooffaithful}). We will construct $(J,\phi)$ by defining an explicit bipartition $^{\langle\mu\rangle}d(\lambda)$. The definition is as follows (we also define an auxiliary bipartition $^{\langle\pi\rangle}d(\lambda)$ which is needed for the proofs).
\begin{definition}\label{def:pimu} Define subpartitions $\pi(\lambda) \subseteq \lambda^t$ and $\mu(\lambda) \subseteq \lambda^t$ as follows \begin{itemize} \item[(i)] If $X=C$, then $$m_{\pi(\lambda)}(x) = \begin{cases} 1 &\mbox{if } x \text{ is even and } m_{\lambda^t}(x) \text{ is}\\ & \mbox{odd}\\ 0 &\mbox{otherwise} \end{cases}, \qquad m_{\mu(\lambda)}(x) = \begin{cases} 2 &\mbox{if } x \text{ is even and } m_{\lambda^t}(x) \text{ is}\\ &\mbox{even and nonzero}\\ 1 & \mbox{if } x \text{ is even and } m_{\lambda^t}(x) \text{ is}\\ & \mbox{odd}\\ 0 &\mbox{otherwise} \end{cases} $$ \item[(ii)] If $X=B$ or $X=D$, then $$m_{\pi(\lambda)}(x) = \begin{cases} 1 &\mbox{if } x \text{ is odd and } m_{\lambda^t}(x) \text{ is}\\ &\mbox{odd}\\ 0 &\mbox{otherwise} \end{cases}, \qquad m_{\mu(\lambda)}(x) = \begin{cases} 2 &\mbox{if } x \text{ is odd and } m_{\lambda^t}(x) \text{ is}\\ &\mbox{even and nonzero}\\ 1 & \mbox{if } x \text{ is odd and } m_{\lambda^t}(x) \text{ is}\\ &\mbox{odd}\\ 0 &\mbox{otherwise} \end{cases} $$ \end{itemize} \end{definition} We will shortly show that $\mu(\lambda)\subseteq d(\lambda)$, but note that care must be taken in the $X=B$ and $X=D$ cases since if $\mu(\lambda)\in\mathcal P(2)$ for $X\in\{B,D\}$ or $\mu(\lambda)\in\mathcal P(2n-2)$ for $X=D$, the bipartition $^{\langle \mu(\lambda) \rangle}d(\lambda)$ cannot be an element of $\overline{\mathcal P}(n)$. These are inconvenient edge cases which we will treat separately now and ignore for the rest of the section. \begin{prop} \label{prop:edgecases} Let $X\in\{B,D\}$ and $\lambda\in\mathcal P_{X^\vee}(n)$. Then \begin{enumerate} \item $\mu(\lambda) \in \mathcal P(2)$ if and only if $\lambda$ is of the form $\lambda=(\lambda_1,\lambda_2^{o_2},\lambda_3^{e_3},\dots,\lambda_k^{e_k})$ where $\lambda_1>\lambda_2\ge \lambda_3\ge \cdots\lambda_k\ge 0$, $\lambda_1-\lambda_2$ is even, $o_2>0$ is odd, and $e_3,\dots,e_k>0$ are even. \item if $X=D$, then $\mu(\lambda)\in\mathcal P(2n-2)$ if and only if $\lambda \in \{(1,1),(3,1)\}$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item $(\Rightarrow)$ If $\mu(\lambda) \in \mathcal P(2)$ then since $\mu(\lambda)$ only has odd parts, $\mu(\lambda) = (1,1)$. By definition this means that $\lambda^t$ has only even parts except a non-zero even number of $1$'s. Thus $\lambda$ must take the form given. $(\Leftarrow)$ The only odd part of $\lambda^t$ is $1$ and it occurs with even multiplicity and so $\mu(\lambda) = (1,1)$. \item $(\Rightarrow)$ Suppose that $\mu(\lambda) \in\mathcal P(2n-2)$. Let $\nu = \lambda^t\setminus \mu(\lambda)$. Then $\nu \in \mathcal P(2)$ and so $\nu = (2)$ or $(1,1)$. Since $\mu$ only has odd parts, if $\nu = (1,1)$ then $\lambda^t$ only has odd parts. So $\lambda$ is of the form $(\lambda_1^{a_1},\lambda_2^{a_2},\dots,\lambda_k^{a_k})$ where $a_1$ is odd (in fact $=1$), and $a_i$ is even for all $i\ge 2$. Since $\lambda$ is of type $D$ this implies that $\lambda_1$ is odd. But $\lambda$ is a partition of $2n$ which is impossible since $\sum_i a_i\lambda_i \equiv 1 \pmod 2$. If $\nu = (2)$ then there are two cases to consider: $\lambda^t$ only has entries $\ge 2$ or $1$ is a part of $\lambda^t$. In the former case $\lambda$ must be of the form $(\lambda_1^{2},\lambda_2^{o_2},\lambda_3^{e_3},\dots,\lambda_k^{e_k})$ where $\lambda_1>\lambda_2\ge\lambda_3\ge\cdots\ge \lambda_k\ge 0$, $o_2>0$ is odd, $e_3,\dots,e_k>0$ are even. By parity considerations, this can only be a partition of $2n$ in the case $\lambda = (\lambda_1^2)$. 
But the multiplicity of $2$ in $\lambda^t$ is $1$ and so $\lambda_1 = 1$. Finally, suppose $1$ is a part of $\lambda^t$. Then $\lambda$ is of the form $(\lambda_1,\lambda_2,\lambda_3^{o_3},\lambda_4^{e_4},\dots,\lambda_k^{e_k})$ where $\lambda_1>\lambda_2>\lambda_3\ge\lambda_4\ge\cdots \ge\lambda_k\ge 0$, $o_3>0$ is odd, and $e_4,\dots,e_k>0$ are even. Again by parity considerations this is a partition of $2n$ only in the case when $\lambda = (\lambda_1,\lambda_2)$. Since $2$ has multiplicity $1$ in $\lambda^t$ we must have $\lambda_2 = 1$. Since all the parts of $\mu$ have multiplicity at most $2$, $\lambda=(3,1)$ or $(2,1)$. But we can rule out the $(2,1)$ case since $2+1$ is odd. $(\Leftarrow)$ This is trivial to check. \end{enumerate} \end{proof} Note that since $D_1\simeq A_1$ and $D_2 \simeq A_1\times A_1$, the orbits appearing in Proposition \ref{prop:edgecases} (2) are faithful since we have already shown that nilpotent orbits of simple Lie algebras of type $A$ are faithful. We now prove the remaining case. \begin{prop} Let $X\in\{B,D\}$ and $\lambda$ be of the form $\lambda=(\lambda_1,\lambda_2^{o_2},\lambda_3^{e_3},\dots,\lambda_k^{e_k})$ where $\lambda_1>\lambda_2\ge \lambda_3\ge \cdots\ge\lambda_k\ge 0$, $\lambda_1-\lambda_2$ is even, $o_2>0$ is odd, and $e_3,\dots,e_k>0$ are even. Then $\OO^\vee$ is faithful. \end{prop} \begin{proof} Let $(J,\phi) = (\Delta,\phi(E(\OO^\vee,1)))$. Since $\lambda$ is special we have that $$\overline{\mathbb L}(J,\OO(\phi)) = \overline{\mathbb L}(\Delta,d(\OO^\vee)) = (d(\OO^\vee),1) = d_A(\OO^\vee,1).$$ This verifies condition (i) for faithfulness. Since $W_J = W$ for $J=\Delta$, proving condition (ii) for our choice of $(J,\phi)$ is equivalent to showing that all representations $E\in\mathrm{Irr}(W)$ with $\OO^\vee(E) = \OO^\vee$ lie in the same family as $E(\OO^\vee,1)$. This is an easy exercise in the combinatorics of s-symbols and a-symbols and is omitted. \end{proof} For the remainder of this section we assume that $\lambda$ is not equal to one of the edge cases mentioned. \begin{lemma}\label{lem:propsofmu} The following are true: \begin{itemize} \item[(i)] $\pi \subseteq d(\lambda)$. \item[(ii)] $\mu \subseteq d(\lambda)$. \item[(iii)] $^{\langle\pi\rangle}d(\lambda) \in \overline{\mathcal{P}}^*(n)$. \item[(iv)] $^{\langle\mu\rangle}d(\lambda) \in \overline{\mathcal{P}}^*(n)$. \end{itemize} \end{lemma} \begin{proof} We will prove (i)-(iv) for $X=C$ (the proofs for $X=B,D$ are analogous, and are omitted). Since $\pi \subseteq \mu$, (ii) implies (i). And since all parts of $\mu \setminus \pi$ have even multiplicity, (iv) implies (iii). So it is enough to prove (ii) and (iv). Choose $\nu$ so that $\mu \cup \nu = \lambda^t$. Note that $|\nu|$ is odd. We will first show that \begin{equation}\label{eq:dlambdamu} d(\lambda) = \mu \cup (\nu^-)_C \end{equation} There are two cases to consider. (a) $\mu \subseteq (\lambda^t)^-$. In this case we have $(\lambda^t)^- = \mu \cup \nu^-$. Since $\mu$ consists only of even parts, equation (\ref{eq:Ccollapse}) implies $$d(\lambda) = ((\lambda^t)^-)_C = \mu \cup (\nu^-)_C,$$ as desired. (b) $\mu \nsubseteq (\lambda^t)^-$. Write $$\nu = \nu' \cup_{\geq} (x,y_1,...,y_r),$$ where $x$ is the smallest odd part of $\nu$ (note that $r$ might be $0$). Then by (\ref{eq:Ccollapse}) we have \begin{equation}\label{eq:nu'}(\nu^-)_C = (\nu')_C \cup_{\geq} (x-1,y_1,...,y_r)\end{equation} Now let $z$ be the smallest part of $\mu$ and choose $\mu'$ such that $\mu = \mu' \cup (z)$.
Then $$(\lambda^t)^- = \mu' \cup [\nu' \cup_{\geq} (x,y_1,...,y_r,z-1)].$$ So by (\ref{eq:Ccollapse}) we have \begin{align}\label{eq:dlambda} \begin{split} d(\lambda) &= ((\lambda^t)^-)_C\\ &= \mu' \cup [(\nu')_C \cup_{\geq} (x-1,y_1,...,y_r,z)]\\ &= \mu \cup [(\nu')_C \cup_{\geq} (x-1,y_1,...,y_r)] \end{split} \end{align} Substituting (\ref{eq:nu'}) into the right hand side of (\ref{eq:dlambda}), we obtain $$d(\lambda) = \mu \cup (\nu^-)_C,$$ as desired. Thus, we have proved (\ref{eq:dlambdamu}). (ii) follows immediately. In view of (\ref{eq:dlambdamu}), (iv) amounts to the assertion that $(\nu^-)_C$ is special. This will require some notation. For $\alpha \in \mathcal{P}(2n)$, define $$e_{\alpha}(i,j) := \#\{k \mid i \leq k \leq j \text{ and } \alpha_k \text{ is even}\}$$ Let $\alpha_{t_1},...,\alpha_{t_p}$ be the odd parts of $\alpha$ (for $t_1 < t_2 < ... < t_p$). We say that \begin{itemize} \item $\alpha$ is \emph{special} if $e_{\alpha}(1,t_1)$ is even and $e_{\alpha}(t_i,t_{i+1})$ is even for $1 \leq i \leq p-1$. \item $\alpha$ is \emph{quasi-special} if $e_{\alpha}(1,t_1)$ is even, $e_{\alpha}(t_i,t_{i+1})$ is even for $1 \leq i \leq p-2$, and the smallest part of $\alpha$ is odd, with multiplicity $1$. \end{itemize} By (\ref{eq:Ccollapse}), the odd parts of $\alpha_C$ are precisely those of the form $(\alpha_C)_{t_{2i-1}}$ or $(\alpha_C)_{t_{2i}}$ for $i$ satisfying $\alpha_{t_{2i-1}}=\alpha_{t_{2i}}$. Choose $i < j$ such that $$\alpha_{t_{2i-1}} = \alpha_{t_{2i}} \qquad \text{and} \qquad \alpha_{t_{2j-1}} = \alpha_{t_{2j}}.$$ Then by (\ref{eq:Ccollapse}) $$e_{\alpha_C}(t_{2i},t_{2j-1}) = e_{\alpha}(t_{2i},t_{2j-1}) + 2(j-i-1).$$ In particular, $$e_{\alpha_C}(t_{2i},t_{2j-1}) \equiv e_{\alpha}(t_{2i},t_{2j-1}) \mod{2}$$ The following implications are immediate \begin{itemize} \item $\alpha \text{ special} \implies \alpha_C \text{ special}$. \item $\alpha \text{ quasi-special} \implies \alpha_C \text{ special}.$ \end{itemize} Returning to the situation at hand, there are two cases to consider. First, suppose the smallest part of $\nu$ is even. In this case, $\nu^-$ is quasi-special. So $(\nu^-)_C$ is special. Next, suppose the smallest part of $\nu$ is odd. In this case, $\nu^-$ is special. So $(\nu^-)_C$ is special. In either case, $(\nu^-)_C$ is special, as desired. \end{proof} \begin{rmk} If $X=D$, then $\mu$ consists of only odd parts. In particular, it is never very even. \end{rmk} Write $(J,\OO) \in \mathcal{K}_{\tilde{\Delta}}^{max}$ (resp. $(J',\OO') \in \mathcal{K}_{\tilde{\Delta}}^{max}$) for the pairs corresponding to the bipartition $^{\langle\mu\rangle}d(\lambda) \in \overline{\mathcal{P}}_X(n)$ (resp. $^{\langle\pi\rangle}d(\lambda) \in \overline{\mathcal{P}}_X(n)$) under the bijection (\ref{eq:KPbijection}). And write \begin{equation}\label{eq:Jphi}(J,\phi) := (J,\phi(E(\OO,1) \otimes \mathrm{sgn})) \in \mathscr{F}_{\tilde{\Delta}}, \qquad (J',\phi') := (J', \phi(E(\OO',1) \otimes \mathrm{sgn})) \in \mathscr{F}_{\tilde{\Delta}}.\end{equation} It is easy to show that both $(J,\phi)$ and $(J',\phi')$ satisfy condition (i) of Definition \ref{def:faithful}. \begin{lemma}\label{lem:conditioniclassical} $\overline{\mathbb L}(J,\OO(\phi)) = \overline{\mathbb L}(J',\OO'(\phi')) = d_A(\OO^{\vee},1)$. \end{lemma} \begin{proof} Note that $\phi(E(\OO,1)\otimes \mathrm{sgn})$ is the family containing $E(\OO,1)$ and so $\OO(\phi) = \OO$ since $\OO$ is special (and similarly for $\OO'$). 
So it suffices to show $$\overline{\mathbb L}(J,\OO) = \overline{\mathbb L}(J',\OO') = d_A(\OO^{\vee},1).$$ These are equivalent to the equalities $$\bar{s}(^{\langle\mu\rangle}d(\lambda)) = \bar{s}(^{\langle\pi\rangle}d(\lambda)) = d_A(\OO^{\vee},1).$$ The second equality follows immediately from Achar's definition of $d_A$ (see Equation (9) in \cite[Section 4]{Acharduality}). The first equality, by Proposition \ref{prop:Acharequivalence}, is equivalent to the following $$r_{d(\lambda)}(\mu) = r_{d(\lambda)}(\pi).$$ But this is clear from the definitions: $\mu$ is obtained from $\pi$ by adding parts of even multiplicity. So for each $x \in \ZZ_{\geq 0}$, we have $$\mathrm{ht}_{\mu}(x) \equiv \mathrm{ht}_{\pi}(x) \mod{2},$$ and therefore $$m_{r_{d(\lambda)}(\mu)}(x) = m_{r_{d(\lambda)}(\pi)}(x),$$ as required. \end{proof} \subsection{Proof of faithfulness in classical types}\label{sec:prooffaithful} Continue with the notations of Subsection \ref{subsection:Jphi}, i.e. $\OO^{\vee}$, $^{\langle\mu\rangle}d(\lambda)$, $(J,\phi)$, and so on. In this subsection, we will show that $(J,\phi)$ satisfies condition (ii) of Definition \ref{def:faithful}. This will allow us to complete the proof of Theorem \ref{thm:faithful} for types B/C/D. The proof of condition (ii) will require a sequence of technical lemmas. Define \begin{equation} \omega_X = \begin{cases} 1 & \mbox{ if $X=B$} \\ 0 & \mbox{ if $X=C$} \\ 1 & \mbox{ if $X=D$} \end{cases} \end{equation} and \begin{equation} \chi(x) = \begin{cases} 0 & \mbox{ if $x$ is even} \\ 1 & \mbox{ if $x$ is odd}. \end{cases} \end{equation} Write $\lambda=(\lambda_1^{p_1},\lambda_2^{p_2},\dots,\lambda_l^{p_l})$ where $p_i>0$ and $0<\lambda_1<\lambda_2<\cdots<\lambda_l$. Note that if $X=C$ then $\#\lambda$ is odd, if $X=D$ then $\#\lambda$ is even, and if $X=B$ then $\#\lambda$ can be either even or odd. However, in type $B$, when computing a/s-symbols we will want to pad $\lambda$ with 0's to ensure it has an odd number of entries. Thus we define \begin{equation} \lambda' = (\lambda_0^{p_0},\lambda_1^{p_1},\dots,\lambda_l^{p_l}) \end{equation} where $\lambda_0 = 0$ and $p_0=0$ if $X\in\{C,D\}$ and $p_0$ is some integer $>0$ of the opposite parity to $\#\lambda$ if $X=B$. Let $P_i = \sum_{j=0}^ip_j$ and $Q_i = \sum_{j=i}^lp_j$. We have that $\#\lambda' = P_{i-1}+Q_{i}$ for all $0\le i\le l+1$. \begin{lemma}\label{lem:technicallemma1} Let $\Lambda \in [\Lambda(E(\lambda,1))]\cap \Lambda_{X^\vee}(n;k)$ where $k>\lfloor \#\lambda/2\rfloor$. Let $(I_1,I_{2},\dots,I_n)$ be the refinement of $\bar\Lambda$. Let $1\le a_1 < \cdots < a_u \le n$ be the indices $a$ such that $I_a$ is an interval. Let $0\le b_1 < \cdots < b_v \le l$ be the indices $b$ such that $\lambda_b \equiv \omega_{X^\vee}\pmod 2$. Then $u = v$ and $I_{a_i}=(\overline\Lambda(Q_{b_i}),\dots,\overline\Lambda(Q_{b_i+1}+1))$ for all $i$. When $X=B$ we additionally have $b_1 = 0$ and $I_1(\#I_1) = 0$. \end{lemma} \begin{proof} First consider the case when $X=B$. The condition $k>\lfloor \#\lambda/2\rfloor$ translates to the $p_0>0$ condition from the paragraph above. Thus we need only compute $\Lambda(E(\lambda,1))$ using the algorithm in Section \ref{sec:springerB} with $p_0>0$ as above. Write $\#\lambda' = 2k+1$. We can write $\lambda' + z^{0,\#\lambda'}$ as $(K_0,\dots,K_l)$ where $K_i = \lambda_i + P_{i-1} + (0,1,\dots,p_i-1)$.
Then the even entries of $K_i$ are $2x_i$ and the odd entries of $K_i$ are $2y_i+1$ where \begin{align} x_i &= \lfloor (\lambda_i+P_{i-1})/2\rfloor + (0,1,\dots) + \chi(\lambda_i+P_{i-1}) \\ y_i &= \lfloor (\lambda_i+P_{i-1})/2\rfloor + (0,1,\dots) \end{align} and \begin{align} \#x_i = \begin{cases} \lceil p_i/2\rceil & \mbox{ if $\lambda_i + P_{i-1}$ is even} \\ \lfloor p_i/2\rfloor & \mbox{ if $\lambda_i + P_{i-1}$ is odd} \end{cases}, \qquad \#y_i = \begin{cases} \lfloor p_i/2\rfloor & \mbox{ if $\lambda_i + P_{i-1}$ is even} \\ \lceil p_i/2\rceil & \mbox{ if $\lambda_i + P_{i-1}$ is odd}. \end{cases} \end{align} Note that $\#x_i+\#y_i = p_i$ for all $i$. Let $x = (x_0,\dots,x_l)$ and $y = (y_0,\dots,y_l)$. Then $\Lambda := (x+z^{0,k+1};y+z^{0,k}+1)$ is an s-symbol for $E(\lambda,1)$. Decompose $\Lambda$ as $((x_0',\dots,x_l');(y_0',\dots,y_l'))$ where $\# x_i' = \# x_i$ and $\# y_i' = \# y_i$. We claim that $\bar \Lambda = (B_0,\dots,B_l)$ where \begin{equation} B_i = \begin{cases} \overline{(x_i';y_i')} & \mbox{ if $P_{i-1}$ is even} \\ \overline{(y_i';x_i')} & \mbox{ if $P_{i-1}$ is odd}. \end{cases} \end{equation} We prove this by induction on $i$, computing the first $P_i$ entries of $\bar \Lambda$. For $i=0$, we have $P_{i-1} = P_{-1} = 0$. If $p_0$ is odd then $\#x_0 = \#y_0+1$ and $B_0 = \overline{(x_0';y_0')}$. If $p_0$ is even then $\#x_0 = \#y_0$ and so again $B_0 = \overline{(x_0';y_0')}$. Now suppose $i>0$. If $P_{i-1}$ is even then by the inductive hypothesis the first entry of $B_i$ must be the first entry of $x_i$ and so since $p_i = \#x_i'+\#y_i'$ we only need to show that $x_i'$ has $0$ or $1$ more entries than $y_i'$. If $p_i$ is odd then $\lambda_i$ is even and so $\#x_i = \#y_i + 1$ as required. If $p_i$ is even then $\#x_i = \#y_i$ as required. The case when $P_{i-1}$ is odd is analogous. It follows from this that \begin{align} x_i' &= \lfloor P_{i-1}/2 \rfloor + (0,1,\dots) + x_i + \chi(P_{i-1}) \nonumber \\ &= \lfloor \lambda_i/2\rfloor + P_{i-1} + (0,2,\dots) + \chi(P_{i-1})\vee \chi(\lambda_i) \\ y_i' &= \lfloor P_{i-1}/2 \rfloor + (0,1,\dots) + y_i + 1 \nonumber \\ &= \lfloor \lambda_i/2\rfloor + P_{i-1} + (1,3,\dots) - \chi(P_{i-1})\wedge \chi(\lambda_i+P_{i-1}). \end{align} and so \begin{equation} \label{eq:blockform} B_i = \lfloor \lambda_i/2 \rfloor + P_{i-1} + \begin{cases} (0,1,2,3,\dots) & \mbox{ $\lambda_i$ is even} \\ (1,1,3,3,\dots) & \mbox{ $\lambda_i$ is odd} \end{cases} \end{equation} where $\#B_i$ is even when $\lambda_i$ is odd since then $p_i = \#B_i$ is even. Finally, if both $\lambda_{i-1},\lambda_i$ are even then $B_{i-1}(1) = \lfloor \lambda_{i-1}/2\rfloor + P_{i-1} - 1$ and $B_i(p_i) = \lfloor \lambda_i/2\rfloor + P_{i-1}$. Since $\lambda_{i-1}\ll \lambda_i$ it follows in particular that $(B_{i-1},B_i)$ is not an interval. Thus the intervals of $\bar\Lambda$ are exactly the $B_i$ where $\lambda_i$ is even. Since $B_i = (\bar\Lambda(Q_{i}),\dots,\bar\Lambda(Q_{i+1}+1))$, the result follows. For the observation that $b_1 = 0$ simply note that $\lambda_0 = 0 \equiv \omega_{C} = 0 \pmod 2$. Then $I_1(\#I_1)=0$ follows from Equation (\ref{eq:blockform}). The cases $X=C,D$ are similar (in fact simpler) and left to the reader. We remark only that in these cases $\approx$ only adds/removes pairs to the start of $\bar\Lambda$ which does not affect the position (indexed from the right) of the intervals in $\bar\Lambda$. Thus it suffices to prove the claim for a single s-symbol in the $\approx$ equivalence class.
\end{proof} Let \begin{equation} \alpha_1 := \begin{cases} \alpha(E(d_{LS}(\mu),1))^! & \mbox{ if $X=B$}\\ \alpha(E(d_{LS}(\mu),1)) & \mbox{ if $X\in\{C,D\}$}. \end{cases} \end{equation} We will now provide a more explicit description of $\alpha_1$. We have that $\lambda^t=(Q_l^{q_l},Q_{l-1}^{q_{l-1}},\dots,Q_1^{q_1})$ where $q_i = \lambda_i-\lambda_{i-1}$. Let $1\le c_1<c_2<\cdots<c_r\le l$ be all the indices $c$ such that $Q_c\equiv \omega_X\pmod 2$. Let \begin{equation} \eta_i = \begin{cases} 1 & \mbox{ if } q_i \text{ is odd} \\ 2 & \mbox{ if } q_i \text{ is even}. \end{cases} \end{equation} Then by definition $\mu = (Q_{c_r}^{\eta_{c_r}},Q_{c_{r-1}}^{\eta_{c_{r-1}}},\dots,Q_{c_1}^{\eta_{c_1}})$. Define $H_i = \sum_{j\le i}\eta_{c_j}$ and $t_i = Q_{c_i}-Q_{c_{i+1}}$ with the convention that $c_{r+1} = l+1$. Then $\mu^t = (H_1^{t_1},H_2^{t_2},\dots,H_r^{t_r})$. Let $d_i = c_i-1$. Then $t_i = \#\lambda-P_{d_i}-\#\lambda+P_{d_{i+1}} = P_{d_{i+1}}-P_{d_i}$. Thus $p(\mu^t) = \#\lambda-P_{d_1}$. It will be useful to pad $\mu^t$ with 0's so that it has the same number of parts as $\lambda'$ so we write it as $\mu^t = (H_0^{t_0},H_1^{t_1},\dots,H_r^{t_r})$ with the convention that $c_0=0$. Write $t_i'=\lfloor t_i/2\rfloor$, $H_i'=\lfloor H_i/2\rfloor$, $P_i'=\lfloor P_i/2\rfloor$. Note that we always have $P_{d_{i-1}}'+t_{i-1}'=P_{d_i}'$ for all $1\le i\le r$ since if $i=1$, $P_{d_{i-1}}=0$ and if $i>1$, $t_{i-1}$ is even. We now record $\alpha_1$ for the various types. We only show the calculation in type $B$ explicitly as this is the most difficult case. The calculations in types $C$ and $D$ are similar, but more straightforward (there is no need to define $\tilde t_i$, $\tilde P_i$ etc.). \paragraph{Type $B$} \label{par:typeB} By Lemma \ref{lem:propsofmu} we have that $^{\langle\mu\rangle}d(\lambda)\in \overline{\mathcal P}^*(n)$ and so $\mu$ is a special type $D$ partition. By \cite[Proposition 6.3.7]{CM} this means that $\mu^t$ is a partition of type $C$ and so we can compute $\alpha(E(d_{LS}(\mu),1))$ using Lemma \ref{lem:asymbolfordcollapse}. Write $\mu^t$ as $\mu^t=(H_0^{\tilde t_0},H_1^{\tilde t_1},\dots,H_r^{\tilde t_r})$ where $\tilde t_0= t_0-1\ge0$ and $\tilde t_i= t_i$ for $1\le i \le r$ so that $\mu^t$ has an even number of entries. Define $\tilde p_0 = p_0-1$ and $\tilde p_i = p_i$ for $1\le i\le l$ and set $\tilde P_{i} = \sum_{j=0}^i\tilde p_j$ and $\tilde Q_{i} = \sum_{j=i}^l\tilde p_j$. We have $\tilde t_i = \tilde P_{d_{i+1}}-\tilde P_{d_i}$. Write $\tilde t_i'=\lfloor \tilde t_i/2\rfloor$ and $\tilde P_i'=\lfloor \tilde P_i/2\rfloor$. Note that similar to before we always have $\tilde P_{d_{i-1}}'+\tilde t_{i-1}'=\tilde P_{d_i}'$ for all $1\le i\le r$. Then $\mu^t + z^{0,\#\mu^t}$ can be decomposed into subsequences $(K_0,K_1,\dots,K_r)$ where $K_i = H_i + \tilde P_{d_i} + (0,1,\dots,\tilde t_i-1)$. Since $\tilde P_{d_i}+ \tilde Q_{c_i} = \#\lambda'-1$ we have that $\tilde P_{d_i}\equiv \tilde Q_{c_i} \pmod 2$. But $\tilde Q_{c_i} = Q_{c_i}$ and $Q_{c_i}\equiv \omega_B \equiv 1 \pmod 2$ for $1\le i \le r$ and so $\tilde P_{d_i}$ is odd for $1 \le i \le r$, $\tilde P_{d_0}=0$ is even, $\tilde t_i$ is even for $1 \le i < r$ and $\tilde t_0,\tilde t_r$ are odd.
Thus the odd entries of $K_i$ are $2x_i+1$ where \begin{equation} x_0 = (0,1,\dots), \qquad x_i = \tilde P_{d_i}' + H_i' + (0,1,\dots) + \chi(H_i), \text{ for } 1\le i\le r, \end{equation} the even entries are $2y_i$ where \begin{equation} y_0 = (0,1,\dots), \qquad y_i = \tilde P_{d_i}'+H_i'+(0,1,\dots)+1, \text{ for } 1\le i\le r, \end{equation} and \begin{equation} \# x_i = \begin{cases} \tilde t_i' & \mbox{ if } i<r \\ \tilde t_r'+1 & \mbox{ if } i=r \end{cases}, \qquad \# y_i = \begin{cases} \tilde t_0'+1 & \mbox{ if } i=0 \\ \tilde t_i' & \mbox{ if } i>0. \end{cases} \end{equation} Therefore \begin{equation} \alpha_1 = ((a_0,a_1,\dots,a_r);(b_0,b_1,\dots,b_r)) \end{equation} where \begin{align*} a_0&=(0,1,\dots,\tilde t_0')\\ b_0&=(0,1,\dots,\tilde t_0') + 1 \\ a_i&=\tilde P_{d_i}'+H_i'+(1,2,\dots,\tilde t_i') + \chi(H_i)\\ b_i&=\tilde P_{d_i}'+H_i'+(1,2,\dots,\tilde t_i') + 1 \\ a_r&=\tilde P_{d_r}'+H_r'+(1,2,\dots,\tilde t_r',\tilde t_r'+1) + \chi(H_r)\\ b_r&=\tilde P_{d_r}'+H_r'+(1,2,\dots,\tilde t_r') + 1 \end{align*} for $0<i<r$. Using the fact that $t_0' = \tilde t_0'+1$, $t_i' = \tilde t_i'$, and $P_{d_i}' = \tilde P_{d_i}'+1$ for $1\le i \le r$ we get that \begin{align*} a_0&=(0,1,\dots,t_0'-1)\\ b_0&=(0,1,\dots,t_0'-1) + 1 \\ a_i&=P_{d_i}'+H_i'+(0,1,\dots,t_i'-1) + \chi(H_i)\\ b_i&=P_{d_i}'+H_i'+(0,1,\dots,t_i'-1) + 1 \\ a_r&=P_{d_r}'+H_r'+(0,1,\dots,t_r'-1,t_r') + \chi(H_r)\\ b_r&=P_{d_r}'+H_r'+(0,1,\dots,t_r'-1) + 1. \end{align*} We also have \begin{equation} \overline{\alpha_1} = (B_0,B_1,\dots,B_r) \end{equation} where $B_i = \overline{(a_i;b_i)}$ for $0\le i\le r$. \paragraph{Type $C$} \begin{equation} \alpha_1 = ((a_0,a_1,\dots,a_r);(b_0,b_1,\dots,b_r)) \end{equation} where \begin{align*} a_0&=(0,1,\dots,t_0')\\ b_0&=(0,1,\dots,t_0'-1)\\ a_i&=P_{d_i}'+H_i'+(0,1,\dots,t_i'-1) + 1\\ b_i&=P_{d_i}'+H_i'+(0,1,\dots,t_i'-1) + \chi(H_i). \end{align*} for $0<i\le r$. We also have \begin{equation} \overline{\alpha_1} = (B_0,B_1,\dots,B_r) \end{equation} where $B_0=\overline{(a_0;b_0)}$ and $B_i=\overline{(b_i;a_i)}$ for $0<i\le r$. \paragraph{Type $D$} \begin{equation} \alpha_1 = ((a_0,a_1,\dots,a_r);(b_0,b_1,\dots,b_r)) \end{equation} \begin{align*} a_0&=(0,1,\dots,t_0'-1) \\ b_0&=(0,1,\dots,t_0'-1,t_0')\\ a_i&=P_{d_i}'+H_i'+(0,1,\dots,t_i'-1) + \chi(H_i)\\ b_i&=P_{d_i}'+H_i'+(0,1,\dots,t_i'-1) + 1\\ a_r&=P_{d_r}'+H_r'+(0,1,\dots,t_r'-1,t_r') + \chi(H_r)\\ b_r&=P_{d_r}'+H_r'+(0,1,\dots,t_r'-1) + 1 \end{align*} for $0<i<r$. We also have \begin{equation} \overline{\alpha_1} = (B_0,B_1,\dots,B_r) \end{equation} where $B_0=\overline{(b_0;a_0)}$ and $B_i=\overline{(a_i;b_i)}$ for $0<i\le r$. \begin{lemma} \label{lem:endparity} For all $1\le i< r$ we have $\lambda_{c_i}\equiv\lambda_{c_{i+1}-1} \pmod 2$. \end{lemma} \begin{proof} Suppose first $c_{i+1}-c_i=1$. Then $\lambda_{c_i}=\lambda_{c_{i+1}-1}$ so the lemma holds trivially. Now consider the case $c_{i+1}-c_i>1$. Then by definition $Q_c\equiv\omega_X\pmod 2$ for $c=c_i,c_{i+1}$ and $Q_c\nequiv \omega_X\pmod 2$ for all $c_{i}<c<c_{i+1}$. For this to be the case we must have that $p_{c_i},p_{c_{i+1}-1}$ are both odd. But then $\lambda_{c_i},\lambda_{c_{i+1}-1}\equiv \omega_{X^\vee}\pmod 2$. In all cases we have $\lambda_{c_i}\equiv\lambda_{c_{i+1}-1}\pmod 2$ as required. 
\end{proof} For $0\le i<r$ define \begin{equation} \Delta_i = \begin{cases} 0 & \mbox{ if $\lambda_{c_{i+1}-1}$ is even} \\ 1 & \mbox{ if $\lambda_{c_{i+1}-1}$ is odd}, \end{cases} \end{equation} and \begin{equation} \Delta_r = \begin{cases} 0 & \mbox{ if $\lambda_{c_{r}}$ is even} \\ 1 & \mbox{ if $\lambda_{c_{r}}$ is odd}. \end{cases} \end{equation} Then for all $0\le i\le r$, we have $\Delta_i\equiv \lambda_{c_i} \pmod 2$ (where we use the lemma for $1\le i<r$) and so $\eta_{c_i} \equiv \lambda_{c_{i}}-\lambda_{c_{i}-1} \equiv \Delta_i-\Delta_{i-1}\pmod 2$ for all $1\le i\le r$. It follows that $H_i = \sum_{j\le i}\eta_{c_j}\equiv \Delta_i-\Delta_0\pmod 2$. \begin{lemma} We have that $\Delta_0=\omega_{X^\vee}$. \end{lemma} \begin{proof} First note that $c_1=1$ if and only if $\#\lambda = Q_1 \equiv \omega_X \pmod 2$. But if $X=C$ then $\#\lambda$ is odd, hence $\nequiv \omega_C = 0 \pmod 2$, and if $X=D$ then $\#\lambda$ is even, hence $\nequiv \omega_D = 1 \pmod 2$. Thus $c_1=1$ if and only if $X=B$ and $\#\lambda$ is odd. Now suppose $c_1>1$. Then $Q_{c_1}\equiv\omega_X \pmod 2$ and $Q_c\nequiv\omega_X\pmod 2$ for all $1\le c<c_1$. It follows that $p_{c_1-1}$ must be odd and so $\lambda_{c_1-1} \equiv \omega_{X^\vee}\pmod 2$ as required. If $c_1=1$ then as argued above, we must have $X=B$. Thus $\lambda_{c_1-1} = \lambda_0 = 0 \equiv \omega_{C}\pmod 2$ as required. \end{proof} \begin{lemma}\label{lem:technicallemma2} $\overline{\alpha_1}(Q_i+1)+1=\overline{\alpha_1}(Q_i)$ for all $1\le i\le l$ such that $\lambda_i,\lambda_{i-1}\equiv\omega_{X^\vee}\pmod 2$. \end{lemma} \begin{proof} Let $0\le e_1 < e_2 < \cdots < e_s \le l$ be the indices $e$ such that $\lambda_e\equiv \omega_{X^\vee}\pmod 2$ (note that $e_1=0$ only in type $B$). Let $i>0$ be such that $e_{i} = e_{i-1}+1$. Write $e$ for $e_i$. We must show that $\overline\alpha_1(Q_{e}+1) + 1 = \overline\alpha_1(Q_{e})$. Let $j$ be such that $c_{j}\le e< c_{j+1}$. We have that \begin{align} \label{eq:intervalpos1} \overline\alpha_1(Q_e) &= B_j(Q_{e}-Q_{c_{j+1}})\\ \overline\alpha_1(Q_e+1) &= \begin{cases} B_j(Q_{e}-Q_{c_{j+1}}+1) & \mbox{ if } c_j<e \\ B_{j-1}(1) & \mbox{ if } c_j=e. \end{cases} \label{eq:intervalpos2} \end{align} Moreover, since $0\le e_{i-1}$ we must have $1\le e$ and so $c_0<e$. Thus $e=c_j$ implies that $j>0$. We consider two cases: $e=c_j$ and $e>c_j$. Suppose first that $e = c_{j}$ (and hence $j>0$). Then (in all cases) \begin{equation} \overline\alpha_1(Q_{e}+1) = B_{j-1}(1) = P_{d_{j-1}}'+H_{j-1}'+t_{j-1}' \end{equation} and \begin{equation} \overline\alpha_1(Q_{e}) = B_j(Q_e-Q_{c_{j+1}}) = P_{d_{j}}'+H_{j}'+\chi(H_j). \end{equation} Since $j>0$, we have $P_{d_{j-1}}'+t_{j-1}' = P_{d_{j}}'$. Also $\eta_{c_{j}} \equiv \lambda_{c_{j}}-\lambda_{c_{j}-1} = \lambda_{e}-\lambda_{e-1}\equiv 0 \pmod 2$ and so $\eta_{c_j} = 2$ and $H_j'=H_{j-1}'+1$. Moreover as $\lambda_{c_j-1} = \lambda_{e_{i-1}}\equiv \omega_{X^\vee}\pmod 2$ we have that $\Delta_{j-1}=\omega_{X^\vee}$ and so $H_{j-1} \equiv \Delta_{j-1}-\Delta_0 = 0$. Thus $\chi(H_j) = 0$. So we have that $\overline\alpha_1(Q_e+1) = \overline\alpha_1(Q_e)-1$ as required. For the second case we suppose $e > c_j$. Then $c_j\ll c_{j+1}$. Thus as in Lemma \ref{lem:endparity} we have that $\Delta_j = \omega_{X^\vee}$. Note this holds for $j=0,r$ too. It follows that $H_j$ is even and so $\chi(H_j) = 0$.
Therefore, by inspecting the expressions for the a-symbol in the various cases, we have that $B_j(x) = B_j(x+1)+1$ whenever $x\equiv\sigma_j\pmod 2$, where $\sigma_i = 1$ for $0\le i<r$ and $\sigma_r = 1-\omega_{X}$. It is thus sufficient to check that $Q_{e}-Q_{c_{j+1}}\equiv \sigma_j\pmod 2$ for all $0\le j\le r$. But since $c_j<e<c_{j+1}$, we must have that $Q_e\nequiv\omega_X\pmod 2$. Since $Q_{c_{j+1}}$ has the opposite parity to $Q_e$ if $j<r$ and equals $0$ if $j=r$, we have that $Q_e-Q_{c_{j+1}}\equiv 1\pmod 2$ for $j<r$ and $\equiv \sigma_r\pmod 2$ if $j=r$, as required. \end{proof} Recall the pair $(J,\phi) \in \mathscr{F}_{\tilde{\Delta}}$ constructed in (\ref{eq:Jphi}). By Lemma \ref{lem:conditioniclassical}, $(J,\phi)$ satisfies condition (i) of Definition \ref{def:faithful}. We will now show that it satisfies condition (ii) as well. This will complete the proof of Theorem \ref{thm:faithful} for classical types. \begin{lemma} If $E$ is any irreducible representation of $W$ with $\OO^\vee(E) = \OO^{\vee}$, there is a representation $F \in \phi \otimes \mathrm{sgn}$ such that $\Hom(F, E|_{W_J})\ne0$. \end{lemma} \begin{proof} Let $E_0 = E(\OO^\vee,1)$ and $F_0 = \mathrm{sp}(\phi)$. By Lemma \ref{lem:conditioniclassical} we have that $\overline{\mathbb L}(J,\OO(\phi)) = d_A(\OO^\vee,1)$. Thus by Lemma \ref{lem:technicallemma1} and Frobenius reciprocity we have $\Hom(F_0,E_0|_{W_J})\ne0$. But $F_0 = G_1\otimes G_2$ where $G_1=E(d_{LS}(\mu),1)$ and $G_2 = E(d_{LS}(\nu),1)$. Let $\Lambda_0' := \Lambda(E_0)$, $\alpha_1' := \alpha(G_1)$ and $\alpha_2' := \alpha(G_2)$. If $X=D$ let $\kappa$ be the decoration for $\tilde\alpha_D^\bullet(G_2)$. Let $k>\lfloor \#\lambda/2\rfloor$ be sufficiently large so that there exist $\Lambda_0=(a;b) \in [\Lambda_0']\cap \Lambda_{X^\vee}(n;k)$, $\alpha_1''\in [\alpha_1']\cap \alpha_{Y(X)}(n;k)$ and $\alpha_2\in [\alpha_2']\cap \alpha_X(n;k)$. Let $\alpha_1 = (\alpha_1'')^!$ if $X=B$ and $\alpha_1 = \alpha_1''$ otherwise. Then by Proposition \ref{prop:jinduction}, $\Lambda_0 = \alpha_1 + \alpha_2$. Let $(I_1,\dots,I_k)$ be the refinement of $\bar\Lambda_0$ and let $a=(A_1,\dots,A_k)$, $b=(B_1,\dots,B_k)$ be the corresponding refinements of $a$ and $b$. Let $\alpha_i = (a^i;b^i)$ and $a^i = (A_1^i,\dots,A_k^i)$, $b^i = (B_1^i,\dots,B_k^i)$ be the corresponding decompositions of $\alpha_i$ for $i\in\{1,2\}$. Now let $E\in\mathrm{Irr}(W)$ be any representation with $\OO^\vee(E) = \OO^\vee$ and let $\Lambda \in [\Lambda(E)]\cap \Lambda_X(n;k)$ (non-empty by Equation (\ref{eq:familybijection})). Then by Lemma \ref{lem:familyflips}, $\Lambda = ((X_1,\dots,X_k);(Y_1,\dots,Y_k))$ where $\{A_i,B_i\} = \{X_i,Y_i\}$ for all $1\le i\le k$. For $i\in\{1,2\}$, let $\beta_i = ((X_1^i,\dots,X_k^i);(Y_1^i,\dots,Y_k^i))$ where \begin{equation} (X_j^i,Y_j^i) = \begin{cases} (A_j^i,B_j^i) & \mbox{ if } (X_j,Y_j)=(A_j,B_j) \\ (B_j^i,A_j^i) & \mbox{ if } (X_j,Y_j)=(B_j,A_j). \end{cases} \end{equation} By construction, $\beta_1$ and $\beta_2$ have the same defect and number of entries as $\Lambda$ and $\Lambda = \beta_1+\beta_2$. By Lemma \ref{lem:technicallemma2} we can apply Lemma \ref{lem:jump1} and so both $\beta_1,\beta_2$ are a-symbols of type $X$. Note that when $X=B$, since $k>\lfloor \#\lambda/2\rfloor$, by Lemma \ref{lem:technicallemma1}, $A_1(\#A_1) = 0$ which implies that $(X_1,Y_1)=(A_1,B_1)$ (since $\Lambda$ is an s-symbol of type $C$).
By the expression for $\alpha_1$ given in Paragraph \ref{par:typeB}, we also have $A_1^1(\#A_1^1)=0,B_1^1(\#B_1^1) = 1$ and so $\beta_1 \in \alpha_D^1(n;k)$ and so there is a $\gamma_1\in \alpha_D(n;l)$ such that $\gamma_1^! = \beta_1$. Since $\mu$ is not very even for $X\in\{B,D\}$, there is a unique representation $F_1\in\mathrm{Irr}(W_{Y(X)}(|\mu|))$ such that \begin{align*} X=B: &[\tilde\alpha(F_1)] = [\{\gamma_1\}] \\ X=C: &[\alpha(F_1)] = [\beta_1] \\ X=D: &[\tilde\alpha(F_1)] = [\{\beta_1\}]. \end{align*} Let $F_2\in \mathrm{Irr}(W_X(|\nu|))$ be the unique representation such that \begin{align*} X&=B,C: [\alpha(F_2)] = [\beta_2] \\ X&=D: [\tilde\alpha^\bullet(F_2)] = [\{\beta_2\}^\kappa]. \end{align*} Let $F = F_1\otimes F_2$. Then, since $\beta_i\sim\alpha_i$ (and $\gamma_1\sim\alpha_1$ when $X=B$) we have that $F_i\sim G_i$, and so $F\sim F_0$. This implies $F\in \phi\otimes\mathrm{sgn}$. Finally, since $\Lambda = \beta_1+\beta_2$, we have by Proposition \ref{prop:sumtohom} that $\Hom(F,E|_{W_J})\ne0$ as required. \end{proof} \subsection{Proof of faithfulness in exceptional types}\label{subsec:exceptional} Let $\fg$ be a simple exceptional Lie algebra and let $\OO^{\vee} \subset \fg^{\vee}$ be a nilpotent orbit. If $A(\OO^\vee)$ is trivial, then $\OO^{\vee}$ is faithful by Proposition \ref{prop:easycase}. If $A(\OO^{\vee})$ is nontrivial, we use GAP to exhibit an explicit pair $(J,\phi) \in \mathscr{F}_{\tilde{\Delta}}$ satisfying conditions (i) and (ii) of Definition \ref{def:faithful}. In many cases, we may take $(J,\phi) = (\Delta,\phi(E(\OO^\vee,1)))$ (indeed this can be done precisely when $\OO^\vee$ is special and all the representations $E\in\mathrm{Irr}(W)$ with $\OO^\vee(E) = \OO^\vee$ lie in the same family as $E(\OO^\vee,1)$). However, in some cases, a less obvious choice is required. These (less obvious) cases are listed in Tables \ref{table:1}, \ref{table:2}, and \ref{table:3}. Note that in these tables we list $(J,\OO(\phi))$ instead of $(J,\phi)$, but the map $\phi\mapsto \OO(\phi)$ induces a bijection between families and special nilpotent orbits, so $\phi$ can be recovered from the information provided. Note also that there are no tables for $G_2$ or $E_6$---in these cases, there are no exceptions to consider. 
\begin{table}[H] \begin{tabular}{ |c||c|c|c| } \hline $\OO^\vee$ & $J$ & Type & $\OO(\phi,\CC)$\\ \hline $A_2$ & \dynkin[extended,labels={\times,\times,\times,\times,}] F4 & $B_4$ & $(711)$ \\ $B_2$ & \dynkin[extended,labels={\times,\times,\times,\times,}] F4 & $B_4$ & $(531)$ \\ $C_3(a_1)$ & \dynkin[extended,labels={\times,,\times,\times,\times}] F4 & $A_1+C_3$ & $(2)\times (42)$ \\ $F_4(a_2)$ & \dynkin[extended,labels={\times,\times,\times,\times,}] F4 & $B_4$ & $(32211)$ \\ \hline \end{tabular} \caption{$(J,\phi)$ for $\fg = F_4$}\label{table:1} \end{table} \begin{table}[H] \begin{tabular}{ |c||c|c|c| } \hline $\OO^\vee$ & $J$ & Type & $\OO(\phi,\CC)$\\ \hline $A_3+A_2$ & \dynkin[extended,labels={\times,\times,\times,\times,\times,\times,,\times}] E7 & $D_6+A_1$ & $(7311) \times (2)$ \\ $E_7(a_4)$ & \dynkin[extended,labels={\times,\times,\times,\times,\times,\times,,\times}] E7 & $D_6+A_1$ & $(332211) \times (2)$ \\ \hline \end{tabular} \caption{$(J,\phi)$ for $\fg =E_7$}\label{table:2} \end{table} \begin{longtable} { |c||c|c|c| } \hline $\OO^\vee$ & $J$ & Type & $\OO(\phi,\CC)$\\ \hline $A_3+A_2$ & \dynkin[extended,labels={\times,,\times,\times,\times,\times,\times,\times,\times}] E8 & $D_8$ & $(11,3,1,1)$ \\ $D_4+A_2$ & \dynkin[extended,labels={\times,,\times,\times,\times,\times,\times,\times,\times}] E8 & $D_8$ & $(7711)$ \\ $D_6(a_2)$ & \dynkin[extended,labels={\times,,\times,\times,\times,\times,\times,\times,\times}] E8 & $D_8$ & $(7531)$ \\ $E_6(a_3)+A_1$ & \dynkin[extended,labels={\times,\times,\times,\times,\times,\times,\times,,\times}] E8 & $E_6+A_2$ & $E_6(a_3) \times (3)$ \\ $E_7(a_5)$ & \dynkin[extended,labels={\times,\times,\times,\times,\times,\times,\times,\times,}] E8 & $E_7+A_1$ & $E_7(a_5) \times (2)$ \\ $E_7(a_4)$ & \dynkin[extended,labels={\times,,\times,\times,\times,\times,\times,\times,\times}] E8 & $D_8$ & $(732211)$ \\ $D_5+A_2$ & \dynkin[extended,labels={\times,,\times,\times,\times,\times,\times,\times,\times}] E8 & $D_8$ & $(5533)$ \\ $E_8(b_6)$ & \dynkin[extended,labels={\times,\times,\times,\times,\times,\times,\times,,\times}] E8 & $E_6+A_2$ & $D_4(a_1) \times (3)$ \\ $D_7(a_1)$ & \dynkin[extended,labels={\times,,\times,\times,\times,\times,\times,\times,\times}] E8 & $D_8$ & $(443311)$ \\ $E_8(b_4)$ & \dynkin[extended,labels={\times,,\times,\times,\times,\times,\times,\times,\times}] E8 & $D_8$ & $(33222211)$ \\ \hline \caption{$(J,\phi)$ for $\fg = E_8$} \label{table:3} \end{longtable} \begin{comment} \section{Index of Notation}\label{sec:notation} \begin{longtable}{l l l} $\Pi_{\psi}^{\mathsf{Art}}(\mathbf{G}(k))$ & Arthur packet attached to Arthur parameter $\psi$ \\ $\Pi_{\OO^{\vee}}^{\mathsf{Art}}(\mathbf{G}(k))$ & Unipotent Arthur packet attached to nilpotent orbit $\OO^{\vee}$ for $G^{\vee}$\\ $\Pi^{\mathsf{Lus}}(\mathbf{G}(k))$\\ $\Pi^{\mathsf{Lus}}_{s}(\mathbf{G}(k))$\\ $\mathrm{Irr}(H)$ & \\ $E(\OO,\mathcal L)$ & Springer\\ $\cN(F)$ & Set of nilpotent elements of $\fg(F)$ \\ $\cN_o(F)$ & Set of $\mathbf{G}(F)$-orbits on $\cN(F)$\\ $\cN$\\ $\cN_o$\\ $A(\OO)$\\ $\bar{A}(\OO)$\\ $\cN_{o,c}$\\ $\cN_{o,\bar c}$\\ $\mathfrak{Q}$\\ $\Theta_F$\\ $\theta$\\ $d$\\ $d_S$\\ $d_A$\\ $\leq_A$\\ $\bar \theta_{\bfT}$\\ $\mathcal{B}(\mathbf{G})$\\ $\cA(c,\cA)$\\ $W$\\ $\widetilde{W}$\\ $\mathbb{P}_c^+, \mathbb{M}_c, \mathbb{U}_c, \mathbb{P}_c$\\ $\Phi_c$\\ $\Psi$\\ $\Delta$\\ $\tilde{\Delta}$\\ $Z_G$\\ $\Xi$\\ $\Theta_c$\\ $\mathcal{I}_o$\\ $\mathcal{I}_{\tilde{\Delta}}$\\ $\mathcal{K}_{\tilde{\Delta}}$\\ $\Xi_{\tilde{\Delta}}$\\ $s$\\ $\overline{\mathbb 
L}$\\ $\mathcal{F}_{\tilde{\Delta}}$\\ $\OO_E(F)$\\ $\OO_{\phi}(F)$\\ $^K\WF_c(X)$\\ $^K\WF(X)$\\ $^K\WF_c(X;\CC)$\\ $^K\WF(X;\CC)$\\ $\WF(X)$ \\ $\WF(X;\CC)$ \\ $\mathbb{I}$\\ \end{longtable} \end{comment} \begin{sloppypar} \printbibliography[title={References}] \end{sloppypar} \end{document}
\begin{document}
\begin{abstract} We prove an abstract KAM theorem adapted to space-multidimensional Hamiltonian PDEs with regularizing nonlinearities. It applies in particular to the singular perturbation problem studied in the first part of this work. \begin{center} {\bf\large 8/2/2015} \end{center} \end{abstract}
\subjclass{ } \keywords{KAM theory, Hamiltonian systems, multidimensional PDEs.} \thanks{ } \maketitle \tableofcontents
\section{Introduction}
\subsubsection{The phase space}\label{ssThePhaseSpace} Let $\mathcal{A}$ and $\mathcal{F}$ be two finite sets in $\mathbb{Z}^{d_*}$ and let $\mathcal{L}_\infty$ be an infinite subset of $\mathbb{Z}^{d_*}$. Let $\mathcal{L}$ be the disjoint union $\mathcal{A}\sqcup\mathcal{F}\sqcup\mathcal{L}_{\infty}$ and consider $(\mathbb{C}^2)^{\mathcal{L}}$. For any subset $X$ of $\mathcal{L}$, consider the projection $$\pi_X:(\mathbb{C}^2)^{\mathcal{L}}\to(\mathbb{C}^2)^{X}=\{\zeta\in(\mathbb{C}^2)^{\mathcal{L}}:\zeta_a=0\ \forall a\notin X\}.$$ We can thus write $(\mathbb{C}^2)^{\mathcal{L}}=(\mathbb{C}^2)^{X}\times(\mathbb{C}^2)^{\mathcal{L}\setminus X}$, $\zeta=(\zeta_X,\zeta_{\mathcal{L}\setminus X})$, and when $X$ is finite this gives an injection $$\iota_X:(\mathbb{C}^2)^{\#X}\hookrightarrow(\mathbb{C}^2)^{\mathcal{L}}$$ whose image is $(\mathbb{C}^2)^{X}$. Let $\gamma=(\gamma_1,\gamma_2)\in\mathbb{R}^2$ and let $Y_\gamma$ be the space of sequences $\zeta\in(\mathbb{C}^2)^{\mathcal{L}}$ such that $$\|\zeta\|_{\gamma}=\sqrt{\sum_{a\in\mathcal{L}}|\zeta_a|^2e^{2\gamma_1|a|}\langle a\rangle^{2\gamma_2}}<\infty$$ -- here $\langle a\rangle=\max(|a|,1)$ and $|\cdot|$ is the standard Hermitian norm on $\mathbb{C}^n$ associated with the standard scalar product $\langle\cdot,\cdot\rangle_{\mathbb{C}^n}$. Write $\zeta_a=(p_a,q_a)$ and let $$\Omega(\zeta,\zeta')=\sum_{a\in\mathcal{L}}p_aq'_a-q_ap'_a.$$ $\Omega$ is an anti-symmetric bi-linear form which is continuous on $$Y_\gamma\times Y_{-\gamma}\cup Y_{-\gamma}\times Y_{\gamma}\to\mathbb{C}$$ with norm $\|\Omega\|=1$. The subspaces $(\mathbb{C}^2)^{\{a\}}$ are symplectic subspaces of two (complex) dimensions carrying the canonical symplectic structure. $\Omega$ defines as usual (by contraction on the first factor) a bounded bijective operator $$Y_\gamma\ni\zeta\mapsto\Omega(\zeta,\cdot)\in Y^*_{-\gamma}.\footnote{\ $Y^*_{\gamma}$ denotes the Banach space dual of $Y_{\gamma}$}$$ We shall denote its inverse by $$J:Y^*_{-\gamma}\to Y_\gamma.$$
\begin{NB*} There is another common way to identify $Y^*_{-\gamma}$ with $Y_\gamma$, the $L^2$-pairing. This pairing defines an isomorphism $\nabla:Y^*_{-\gamma}\to Y_\gamma$ such that $$J\circ\nabla^{-1}\zeta=\{\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)\zeta_a:a\in\mathcal{L}\}.$$ The operator $J\circ\nabla^{-1}$ is a complex structure compatible with $\Omega$ which is customarily denoted by $J$, and we shall follow this tradition. This abuse of notation will cause no confusion since the two $J$'s act on different objects: one acts on one-forms and the other on vectors, and which is the case will be clear from the context. \end{NB*}
A bounded map $A:Y_\gamma\to Y_\gamma$, $\gamma\ge(0,0)$,\footnote{\ $(\gamma_1',\gamma_2')\le(\gamma_1,\gamma_2)$ if, and only if, $\gamma_1'\le\gamma_1$ and $\gamma_2'\le\gamma_2$} is {\it symplectic} if, and only if, it extends to a bounded map $A:Y_{-\gamma}\to Y_{-\gamma}$ and verifies $$\Omega(A\zeta,A\zeta')=\Omega(\zeta,\zeta'),\quad\zeta\in Y_{\gamma},\ \zeta'\in Y_{-\gamma},$$ or, equivalently, $A^*\circ J^{-1}\circ A=J^{-1}$ on $Y_{\gamma}$ and on $Y_{-\gamma}$. If $A$ is bijective, then it is symplectic if, and only if, $A^*\circ J^{-1}\circ A=J^{-1}$ on $Y_{\gamma}$ (see \cite{K00}). Let $$\mathbb{A}^{\mathcal{A}}=\mathbb{C}^{\mathcal{A}}\times(\mathbb{C}/2\pi\mathbb{Z})^{\mathcal{A}}$$ and consider the Banach manifold $\mathbb{A}^{\mathcal{A}}\times\pi_{\mathcal{L}\setminus\mathcal{A}}Y_\gamma$ whose elements are denoted $x=(r,\theta=[z],w)$.\footnote{\ $[z]$ being the class of $z\in\mathbb{C}^{\mathcal{A}}$} We provide this manifold with the metric $$\|x-x'\|_\gamma=\inf_{p\in\mathbb{Z}^{d_*}}\|(r,z+2\pi p,w)-(r',z',w')\|_\gamma.$$ We provide $\mathbb{A}^{\mathcal{A}}\times\pi_{\mathcal{L}\setminus\mathcal{A}}Y_\gamma$ with the symplectic structure $\Omega$. To any $C^{1}$-function $f(r,\theta,w)$ on (some open set in) $\mathbb{A}^{\mathcal{A}}\times\pi_{\mathcal{L}\setminus\mathcal{A}}Y_{\gamma}$ this structure associates a vector field $X_f=-J(df)$ -- the Hamiltonian vector field of $f$\footnote{\ there is no agreement as to the sign of the Hamiltonian vector field -- we have used the choice of Arnold \cite{Arn}} -- which in the coordinates $(r,\theta,w)$ takes the form $$\left(\begin{array}{c}\dot r_a\\ \dot\theta_a\end{array}\right)=J\left(\begin{array}{c}\frac{\partial}{\partial r_a}f(r,\theta,w)\\ \frac{\partial}{\partial\theta_a}f(r,\theta,w)\end{array}\right)\qquad\left(\begin{array}{c}\dot p_a\\ \dot q_a\end{array}\right)=J\left(\begin{array}{c}\frac{\partial}{\partial p_a}f(r,\theta,w)\\ \frac{\partial}{\partial q_a}f(r,\theta,w)\end{array}\right).$$
\subsubsection{An integrable Hamiltonian system in $\infty$ many dimensions} In this paper we are considering an infinite dimensional Hamiltonian system given by a function $h(r,w,\rho)$ of the form \begin{equation}\label{equation1.1} \langle r,\omega(\rho)\rangle+\frac12\langle w,A(\rho)w\rangle=\langle r,\omega(\rho)\rangle+\frac12\langle w_{\mathcal{F}},H(\rho)w_{\mathcal{F}}\rangle+\frac12\sum_{a\in\mathcal{L}_{\infty}}\lambda_a(p_a^2+q_a^2), \end{equation} where $w_a=(p_a,q_a)$ and \begin{equation}\label{properties}\left\{\begin{array}{ll}\omega:\mathcal{D}\to\mathbb{R}^{\mathcal{A}}&\\ \lambda_a:\mathcal{D}\to\mathbb{R},&\quad a\in\mathcal{L}_\infty\\ H:\mathcal{D}\to gl(\mathbb{R}^{\mathcal{F}}\times\mathbb{R}^{\mathcal{F}}),&\quad{}^t\!H=H\end{array}\right. \end{equation} are $\mathcal{C}^{{s_*}}$, ${s_*}\ge 1$, functions of $\rho\in\mathcal{D}$, the unit ball in $\mathbb{R}^{\mathcal{P}}$, parametrized by some finite subset $\mathcal{P}$ of $\mathbb{Z}^{d_*}$. The Hamiltonian vector field of $h$ is not $\mathcal{C}^{1}$ on $\mathbb{A}^{\mathcal{A}}\times\pi_{\mathcal{L}\setminus\mathcal{A}}Y_{\gamma}$, but its Hamiltonian system still has a well defined flow with a {\it finite-dimensional invariant torus} $$\{0\}\times\mathbb{T}^{\mathcal{A}}\times\{0,0\}$$ which is {\it reducible}, i.e. the linearized equation on this torus (is conjugated to a system that) does not depend on the angles $\theta$. This linearized equation has infinitely many elliptic directions with purely imaginary eigenvalues $$\{{\mathbf i}\lambda_a(\rho):a\in\mathcal{L}_\infty\}$$ and finitely many other directions given by the system $$\dot\zeta_{\mathcal{F}}=JH(\rho)\zeta_{\mathcal{F}}.$$
\subsubsection{A perturbation problem} The question here is whether this invariant torus for $h$ persists under perturbations $h+f$ and, if so, whether the persisting torus is reducible. In finite dimension the answer is yes under very general conditions -- for the first proof in the purely elliptic case see \cite{E88}, and for a more general case see \cite{Y99}. These statements say that, under general conditions, the invariant torus persists and remains reducible under sufficiently small perturbations for a subset of parameters $\rho$ of large Lebesgue measure. Since the unperturbed problem is linear, parameter selection cannot be avoided here. In infinite dimension the situation is more delicate, and results can only be proven under quite severe restrictions on the normal frequencies (i.e. the eigenvalues ${\mathbf i}\lambda_a$). Such restrictions are fulfilled for many PDE's in one space dimension -- the first such result was obtained in \cite{K87}. For PDE's in higher space dimension the behavior of the normal frequencies is much more complicated and the results are more sparse. A result for the Beam equation (which is a simpler model than the Schr\"odinger equation and the Wave equation, and frequently considered in works on nonlinear PDE's) was first obtained in \cite{GY06a} and \cite{GY06b}. For other results on PDE's in higher space dimension see the discussion in the first part of this work \cite{EGK}.
\subsubsection{Conditions on the unperturbed Hamiltonian} The function $h$ we shall consider will verify several assumptions. \begin{itemize} \item[A1] {\it -- spectral asymptotics.} There exist constants $0<c',c\le 1$ and exponents $\beta_1=2$, $\beta_2\ge 0$, $\beta_3>0$ such that for all $\rho\in\mathcal{D}$: \begin{equation}\label{la-lb-ter} \big|\lambda_a(\rho)-|a|^{\beta_1}\big|\leq c\frac1{\langle a\rangle^{\beta_2}},\quad a\in\mathcal{L}_{\infty}; \end{equation} \begin{multline}\label{la-lb} \big|(\lambda_a(\rho)-\lambda_b(\rho))-(|a|^{\beta_1}-|b|^{\beta_1})\big|\leq\\ \le c'c\max\Big(\frac1{\langle a\rangle^{\beta_3}},\frac1{\langle b\rangle^{\beta_3}}\Big),\quad a,b\in\mathcal{L}_{\infty}\,; \end{multline} \begin{equation}\label{laequiv} \left\{\begin{array}{l}\lambda_a(\rho)\geq c',\quad a\in\mathcal{L}_\infty\\ \|(JH(\rho))^{-1}\|\leq\frac1{c'};\end{array}\right. \end{equation} \begin{equation}\label{la-lb-bis} \left\{\begin{array}{ll}|\lambda_a(\rho)-\lambda_b(\rho)|\ge c'&a,b\in\mathcal{L}_\infty,\ |a|\not=|b|\\ \|(\lambda_a(\rho)I-{\mathbf i}JH(\rho))^{-1}\|\leq\frac1{c'}&a\in\mathcal{L}_{\infty}.\end{array}\right. \end{equation} \item[A2] {\it -- transversality.} We refer to section \ref{ssUnperturbed} for the precise formulation of this condition which describes how the eigenvalues vary with the parameter $\rho$. \end{itemize}
\subsubsection{Conditions on the perturbation} For $\sigma>0$ we let $\mathcal{O}_\gamma(\sigma,\mu)$ be the set $$\{x=(r,\theta,w)\in\mathbb{A}^{\mathcal{A}}\times\pi_{\mathcal{L}\setminus\mathcal{A}}Y_{\gamma}:\ \|(\tfrac r\mu,{\mathbf i}\tfrac{\Im\theta}\sigma,\tfrac w\mu)-0\|_\gamma<1\}.$$ It is often useful to scale the action variables by $\mu^2$ and not by $\mu$, but in our case $\mu$ will be $\approx 1$, and then there is no difference. We shall consider perturbations $$f:\mathcal{O}_{\gamma_*}(\sigma,\mu)\to\mathbb{C},\quad\gamma_*=(0,m_*)\ge(0,0),$$ that are real holomorphic up to the boundary (rhb). This means that $f$ gives real values to real arguments and extends holomorphically to a neighborhood of the closure of $\mathcal{O}_{\gamma_*}(\sigma,\mu)$. $f$ is clearly also rhb on $\mathcal{O}_{\gamma'}(\sigma,\mu)$ for any $\gamma'\ge\gamma_*$, and $$Jdf:\mathcal{O}_{\gamma'}(\sigma,\mu)\to Y_{-\gamma'}$$ is rhb. But we shall require more: \begin{itemize} \item[R1] {\it -- first differential} $$Jdf:\mathcal{O}_{\gamma'}(\sigma,\mu)\to Y_{\gamma'}$$ is rhb for any $\gamma_*\le\gamma'\le\gamma$. \end{itemize} This is a natural smoothness condition on the space of holomorphic functions on $\mathcal{O}_{\gamma_*}(\sigma,\mu)$, and it implies, in particular, that $Jd^2f(x)\in\mathcal{B}(Y_{\gamma};Y_{\gamma})$ for any $x\in\mathcal{O}_{\gamma}(\sigma,\mu)$.
That $Jd^2f(x)\in\mathcal{B}(Y_{\gamma};Y_{\gamma})$ implies in turn that $$|Jd^2f(x)[e_a,e_b]|\le\operatorname{Cte}\,e^{-\gamma_1\big||a|-|b|\big|}\min\Big(\frac{\langle a\rangle}{\langle b\rangle},\frac{\langle b\rangle}{\langle a\rangle}\Big)^{\gamma_2}$$ for any two unit vectors $e_a\in(\mathbb{C}^2)^{\{a\}}$ and $e_b\in(\mathbb{C}^2)^{\{b\}}$. But many Hamiltonian PDE's verify other, and stronger, decay conditions in terms of $$\min(|a-b|,|a+b|).$$ Such decay conditions do not seem to be naturally related to any smoothness condition of $f$, but they may be instrumental in the KAM-theory for multidimensional PDE's: see for example \cite{EK10} where such conditions were used to build a KAM-theory for some multidimensional non-linear Schr\"odinger equations. The decay condition needed in this work depends on a parameter $0\le\varkappa\le m_*$ and defines a Banach sub-algebra $\mathcal{M}_{\gamma,\varkappa}^b$ of $\mathcal{B}(Y_{\gamma};Y_{\gamma})$, with norm $\|\cdot\|_{\gamma,\varkappa}$ -- its precise definition will be given in section \ref{ssMatrixAlgebra}. We shall require: \begin{itemize} \item[R2] {\it -- second differential} $$Jd^2f:\mathcal{O}_{\gamma'}(\sigma,\mu)\to\mathcal{M}_{\gamma',\varkappa}^b$$ is rhb for any $\gamma_*\le\gamma'\le\gamma$. \end{itemize} Denote by $\mathcal{T}_{\gamma,\varkappa}(\sigma,\mu)$ the space of functions $$f:\mathcal{O}_{\gamma_*}(\sigma,\mu)\to\mathbb{C},$$ real holomorphic up to the boundary, verifying R1 and R2. We provide $\mathcal{T}_{\gamma,\varkappa}(\sigma,\mu)$ with the norm \begin{equation}\label{norm} |f|_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa\end{subarray}}=\max\left\{\begin{array}{l}\sup_{x\in\mathcal{O}_{\gamma_*}(\sigma,\mu)}|f(x)|\\ \sup_{\gamma_*\le\gamma'\le\gamma}\sup_{x\in\mathcal{O}_{\gamma'}(\sigma,\mu)}\|Jdf(x)\|_{\gamma'}\\ \sup_{\gamma_*\le\gamma'\le\gamma}\sup_{x\in\mathcal{O}_{\gamma'}(\sigma,\mu)}\|Jd^2f(x)\|_{\gamma',\varkappa}\end{array}\right. \end{equation} making it into a Banach space. Notice that the first two ``components'' of this norm are related to the smoothness of $f$, while the third ``component'' imposes a further decay condition on $Jd^2f(x)$.
\subsubsection{The normal form theorem} For any $a\in\mathcal{L}_\infty$, let $$[a]=\{b\in\mathcal{L}_\infty:|b|=|a|\}.$$ \begin{theorem*} Let $h$ be a Hamiltonian defined by \eqref{equation1.1} and verifying Assumptions A1-2. Let $f:\mathcal{O}_{\gamma_*}(\sigma,\mu)\to\mathbb{C}$ be real holomorphic and verifying Assumptions R1-2 with $$\gamma=(\gamma_1,m_*)>\gamma_*=(0,m_*)\quad\textrm{and}\quad 0<\varkappa\le m_*.$$ If $\varepsilon=|f|_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa\end{subarray}}$ is sufficiently small, then there is a set $\mathcal{D}'\subset\mathcal{D}$ with $$\operatorname{Leb}(\mathcal{D}\setminus\mathcal{D}')\to 0,\quad\varepsilon\to 0,$$ and a $\mathcal{C}^{{s_*}}$ mapping $$\Phi:\mathcal{O}_{\gamma_*}(\sigma/2,\mu/2)\times\mathcal{D}\to\mathcal{O}_{\gamma_*}(\sigma,\mu),$$ real holomorphic and symplectic for each parameter $\rho\in\mathcal{D}$, such that $$(h+f)\circ\Phi=h'+f'\in\mathcal{T}_{\gamma_*,\varkappa},$$ and \begin{itemize} \item[(i)] for $\rho\in\mathcal{D}'$ and $\zeta=r=0$ $$d_rf'=d_\theta f'=d_{\zeta}f'=d^2_{\zeta}f'=0;$$ \item[(ii)] $h':\mathcal{O}_{\gamma_*}(\sigma/2,\mu/2)\times\mathcal{D}\to\mathbb{C}$ is a $\mathcal{C}^{{s_*}}$-function, real holomorphic for each parameter $\rho\in\mathcal{D}$, and $$[\partial_\rho^j(h'(\cdot,\rho)-h(\cdot,\rho))]_{\begin{subarray}{c}\sigma,\mu\\ \gamma_*,\varkappa\end{subarray}}<C\varepsilon,\quad|j|\le{s_*}-1;$$ \item[(iii)] $h'$ has the form \begin{multline*} \langle r,\omega'(\rho)\rangle+\frac12\langle\zeta_{\mathcal{F}},H'(\rho)\zeta_{\mathcal{F}}\rangle+\\ +\frac12\sum_{[a]}\big(\langle p_{[a]},A'_{[a]}(\rho)p_{[a]}\rangle+\langle q_{[a]},A'_{[a]}(\rho)q_{[a]}\rangle\big) \end{multline*} where the matrix $A'_{[a]}$ is symmetric; \item[(iv)] for any $x\in\mathcal{O}_{\gamma_*}(\sigma/2,\mu/2)$ $$\|\partial_\rho^j(\Phi(x,\rho)-x)\|_{\gamma_*}\le C\varepsilon,\quad|j|\le{s_*}-1.$$ \end{itemize} The constant $C$ depends on $\#\mathcal{A},\#\mathcal{F},\#\mathcal{P},d_*,{s_*},m_*,\varkappa$ and $h$, but not on $\varepsilon$. \end{theorem*} We shall give a more precise formulation of this result in Theorem \ref{main} and its Corollary \ref{cMain}.
\subsubsection{A singular perturbation problem} We want to apply this theorem to construct small-amplitude solutions of the multi-dimensional beam equation on the torus: $$u_{tt}+\Delta^2u+mu=-g(x,u)\,,\quad u=u(t,x),\quad x\in\mathbb{T}^{d_*}.$$ Here $g$ is a real analytic function satisfying $$g(x,u)=4u^3+O(u^4).$$ Writing it in Fourier components, and introducing action-angle variables for the modes in (an arbitrary finite subset) $\mathcal{A}$, the linear part becomes a Hamiltonian system with a Hamiltonian $h$ of the form \eqref{equation1.1}, with $\mathcal{F}$ void. $h$ satisfies (for all $m>0$) condition A1, but not condition A2. The way to improve on $h$ is to use a (partial) Birkhoff normal form around $u=0$ in order to extract a piece from the non-linear part which improves on $h$. This leads to a situation where the assumptions A1 and A2 and the size of the perturbation are linked -- a singular perturbation problem. In order to apply the theorem to such a singular situation one needs a careful and precise description of how the smallness requirement depends on the (parameters determining) assumptions A1 and A2. Dealing with this is quite a serious complication; it is carried out in this paper, and the precise description of the smallness requirement is given in Theorem \ref{main}. This normal form theorem improves on the result in \cite{GY06a} and \cite{GY06b} in two respects. \begin{itemize} \item We have imposed no ``conservation of momentum'' on the perturbation -- this has the effect that our normal form is not diagonal in the purely elliptic directions. In this respect it resembles the normal form for the non-linear Schr\"odinger equation obtained in \cite{EK10}, and the block diagonal form is the same. \item We have a finite-dimensional, possibly hyperbolic, component, whose treatment requires higher smoothness in the parameters. \end{itemize} The proof has no real surprises. It is a classical KAM-theorem carried out in a complex situation. The main part is, as usual, the solution of the homological equation with reasonable estimates. The fact that the block structure is not diagonal complicates matters, but this was also studied in, for example, \cite{EK10}. The iteration combines a finite linear iteration with a ``super-quadratic'' infinite iteration. This has become quite common in KAM theory and was also used, for example, in \cite{EK10}.
\subsubsection{Notation and agreements.} $\langle a\rangle=\max(|a|,1)$. ${\mathbf i}$ will denote the complex imaginary unit. $\overline{z}$ is the complex conjugate of $z\in\mathbb{C}$. By $\langle\zeta,\zeta'\rangle_{\mathbb{C}^n}$ we denote the standard Hermitian scalar product in $\mathbb{C}^n$, conjugate-linear in the first variable and linear in the second variable. All Euclidean spaces, unless otherwise stated, are provided with the Euclidean norm denoted by $|\cdot|$. For two subsets $X$ and $Y$ of a Euclidean space we denote $$\underline{\operatorname{dist}}(X,Y)=\inf_{x\in X,\ y\in Y}|x-y|$$ and $$\operatorname{diam}(X)=\sup_{x,y\in X}|x-y|.$$ The space of bounded linear operators between two Banach spaces $X$ and $Y$ is denoted $\mathcal{B}(X;Y)$. Its operator norm will usually be denoted $\|\cdot\|$ without specification of the spaces. Our complex Banach spaces will be complexifications of some ``natural'' real Banach spaces which in general are implicit. An analytic function between domains of two complex Banach spaces is called real holomorphic if it gives real values to real arguments. The sets $\mathcal{A},\mathcal{F},\mathcal{L}_\infty,\mathcal{P}$, as well as the ``starred'' constants $d_*,{s_*},m_*$, will be fixed in this paper -- the dependence on them is usually not indicated. Constants depending only on the dimensions $\#\mathcal{A},\#\mathcal{F},\#\mathcal{P}$, on ${d_*},{s_*},m_*$ and on the choice of finite-dimensional norms are regarded as {\it absolute constants}. An absolute constant only depending on $\beta$ is thus a constant that only depends on $\beta$, besides these factors. Arbitrary constants will often be denoted by $\operatorname{Cte}$, ${\operatorname{ct.}}$ and, when they occur as an exponent, $\exp$. Their values may change from line to line. For example we allow ourselves to write $2\operatorname{Cte}\le\operatorname{Cte}$.
\subsubsection{Acknowledgement} The authors acknowledge the support from the project ANR-10-BLAN 0102 of the Agence Nationale de la Recherche.
\section{Preliminaries.}
\subsection{A matrix algebra}\label{ssMatrixAlgebra}\ The mapping \begin{equation}\label{pdist} (a,b)\mapsto[a-b]=\min(|a-b|,|a+b|) \end{equation} is a pseudo-metric on $\mathbb{Z}^{d_*}$, i.e. it verifies all the relations of a metric with the only exception that $[a-b]$ may be $=0$ for some $a\not=b$. This is most easily seen by observing that $[a-b]=\mathrm{d}_{\operatorname{Hausdorff}}(\{\pm a\},\{\pm b\})$. We have $[a-0]=|a|$. Define, for any $\gamma=(\gamma_1,\gamma_2)\ge(0,0)$ and $\varkappa\ge0$, $$e_{\gamma,\varkappa}(a,b)=Ce^{\gamma_1[a-b]}\max([a-b],1)^{\gamma_2}\min(\langle a\rangle,\langle b\rangle)^\varkappa.$$ \begin{lemma}\label{lWeights}\ \begin{itemize} \item[(i)] If $\gamma_1,\gamma_2-\varkappa\ge0$, then $$e_{\gamma,\varkappa}(a,b)\le e_{\gamma,0}(a,c)e_{\gamma,\varkappa}(c,b),\quad\forall a,b,c,$$ if $C$ is sufficiently large (bounded with $\gamma_2,\varkappa$). \item[(ii)] If $-\gamma\le\tilde\gamma\le\gamma$, then $$e_{\tilde\gamma,\varkappa}(a,0)\le e_{\gamma,\varkappa}(a,b)e_{\tilde\gamma,\varkappa}(b,0),\quad\forall a,b,$$ if $C$ is sufficiently large (bounded with $\gamma_2,\varkappa$). \end{itemize} \end{lemma} \begin{proof} (i) Since $[a-b]\le[a-c]+[c-b]$ it is sufficient to prove this for $\gamma_1=0$. If $\gamma_2=0$ then the statement holds for any $C\ge1$, so it is sufficient to consider $\gamma_2>0$ and, hence, $\gamma_2=1$. Then we want to prove \begin{multline*} \max([a-b],1)\min(\langle a\rangle,\langle b\rangle)^\varkappa\\ \le C\max([a-c],1)\max([c-b],1)\min(\langle c\rangle,\langle b\rangle)^\varkappa. \end{multline*} Now $\max([a-b],1)\le\max([a-c],1)+\max([c-b],1)$, $$\max([c-b],1)\min(\langle c\rangle,\langle b\rangle)^\varkappa\gtrsim\langle b\rangle^\varkappa,$$ and $$\max([a-c],1)\min(\langle c\rangle,\langle b\rangle)^\varkappa\gtrsim\min(\langle a\rangle,\langle b\rangle)^\varkappa.$$ This gives the estimate. (ii) Again it suffices to prove this for $\gamma_1=0$ and $\gamma_2=1$. Then we want to prove $$\max(|a|,1)^{\tilde\gamma_2}\le C\max([a-b],1)\min(\langle a\rangle,\langle b\rangle)^\varkappa\max(|b|,1)^{\tilde\gamma_2}.$$ The inequality is fulfilled with $C\ge1$ if $a$ or $b$ equals $0$. Hence we need to prove $$|a|^{\tilde\gamma_2}\le C\max([a-b],1)\min(\langle a\rangle,\langle b\rangle)^\varkappa|b|^{\tilde\gamma_2}.$$ Suppose $\tilde\gamma_2\ge0$. If $|a|\le 2|b|$ then this holds for any $C\ge2$. If $|a|\ge2|b|$ then $[a-b]\ge\frac12|a|$ and the statement holds again for any $C\ge2$. If instead $\tilde\gamma_2<0$, then we get the same result with $a$ and $b$ interchanged. \end{proof}
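As a quick illustration of the pseudo-metric $[a-b]$ and of the weights $e_{\gamma,\varkappa}$, the following small Python sketch (ours, not part of the original development; the helper names are hypothetical and the constant $C$ is normalised to $1$) evaluates both quantities on a finite box in $\mathbb{Z}^2$ and estimates empirically how large the constant in Lemma \ref{lWeights}(i) would need to be there.
\begin{verbatim}
# Illustrative sketch: the pseudo-metric [a-b] and the weights
# e_{gamma,kappa}(a,b) with C normalised to 1, plus an empirical estimate of
# the constant required in Lemma (lWeights)(i) on a finite box of Z^2.
import itertools, math

def bracket(a, b):
    """Pseudo-metric [a-b] = min(|a-b|, |a+b|)."""
    dist = lambda u, v: math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))
    return min(dist(a, b), dist(a, tuple(-x for x in b)))

def weight(a, b, gamma1, gamma2, kappa):
    """e_{gamma,kappa}(a,b) with the constant C set to 1."""
    br = bracket(a, b)
    ang = lambda u: max(math.sqrt(sum(x * x for x in u)), 1.0)  # <a> = max(|a|,1)
    return math.exp(gamma1 * br) * max(br, 1.0) ** gamma2 * min(ang(a), ang(b)) ** kappa

# Largest constant needed so that e_{g,k}(a,b) <= C e_{g,0}(a,c) e_{g,k}(c,b)
# for all a, b, c in the box (here gamma2 >= kappa, as in the lemma).
g1, g2, k = 0.1, 1.0, 0.5
box = list(itertools.product(range(-3, 4), repeat=2))
C_needed = max(weight(a, b, g1, g2, k) /
               (weight(a, c, g1, g2, 0.0) * weight(c, b, g1, g2, k))
               for a, b, c in itertools.product(box, repeat=3))
print(f"constant needed on this box: {C_needed:.3f}")  # of order one
\end{verbatim}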
\subsubsection{The space $\mathcal{M}_{\gamma,\varkappa}$}\label{sM2} We shall consider matrices $A:\mathcal{L}\times\mathcal{L}\to gl(2,\mathbb{C})$, formed by $2\times2$-blocks (each $A_a^{b}$ is a $2\times2$-matrix). Define $$|A|_{\gamma,\varkappa}=\max\left\{\begin{array}{l}\sup_a\sum_{b}|A_a^b|e_{\gamma,\varkappa}(a,b)\\ \sup_b\sum_{a}|A_a^b|e_{\gamma,\varkappa}(a,b),\end{array}\right.$$ where the norm on $A_a^{b}$ is the matrix operator norm. Let $\mathcal{M}_{\gamma,\varkappa}$ denote the space of all matrices $A$ such that $|A|_{\gamma,\varkappa}<\infty$. Clearly $|\cdot|_{\gamma,\varkappa}$ is a norm on $\mathcal{M}_{\gamma,\varkappa}$. It follows by well-known results that $\mathcal{M}_{\gamma,\varkappa}$, provided with this norm, is a Banach space. Transposition -- $({}^tA)_a^b={}^t\!A_b^a$ -- and $\mathbb{C}$-conjugation -- $(\overline{A})_a^b=\overline{A_a^b}$ -- do not change this norm. The identity matrix is in $\mathcal{M}_{\gamma,\varkappa}$ if, and only if, $\varkappa=0$, and then $|I|_{\gamma,0}=C$.
\subsubsection{Matrix multiplication} We define (formally) the {\it matrix product} $$(AB)_a^b=\sum_{c}A_a^cB_c^b.$$ Notice that complex conjugation, transposition and taking the adjoint behave in the usual way under this formal matrix product. \begin{proposition}\label{pMatrixProduct} Let $\gamma_2\ge\varkappa$. If $A\in\mathcal{M}_{\gamma,0}$ and $B\in\mathcal{M}_{\gamma,\varkappa}$, then $AB$ and $BA\in\mathcal{M}_{\gamma,\varkappa}$ and $$|AB|_{\gamma,\varkappa}\ \textrm{and}\ |BA|_{\gamma,\varkappa}\le|A|_{\gamma,0}|B|_{\gamma,\varkappa}.$$ \end{proposition} \begin{proof} We have, by Lemma \ref{lWeights}(i), $$\sum_{b}|(AB)_a^b|e_{\gamma,\varkappa}(a,b)\le\sum_{b,c}|A_a^c||B_c^b|e_{\gamma,\varkappa}(a,b)\le$$ $$\le\sum_{b,c}|A_a^c||B_c^b|e_{\gamma,0}(a,c)e_{\gamma,\varkappa}(c,b)$$ which is $\le|A|_{\gamma,0}|B|_{\gamma,\varkappa}$. This implies in particular the existence of $(AB)_a^b$. The sum over $a$ is shown to be $\le|A|_{\gamma,0}|B|_{\gamma,\varkappa}$ in a similar way. The estimate of $BA$ is the same. \end{proof} Hence $\mathcal{M}_{\gamma,0}$ is a Banach algebra, and $\mathcal{M}_{\gamma,\varkappa}$ is a closed ideal in $\mathcal{M}_{\gamma,0}$.
\subsubsection{The space $\mathcal{M}_{\gamma,\varkappa}^b$} We define (formally) on $Y_\gamma$ (see section \ref{ssThePhaseSpace}) $$(A\zeta)_a=\sum_{b}A_a^b\zeta_b.$$ \begin{proposition}\label{pMatrixBddOp} Let $-\gamma\le\tilde\gamma\le\gamma$. If $A\in\mathcal{M}_{\gamma,\varkappa}$ and $\zeta\in Y_{\tilde\gamma}$, then $A\zeta\in Y_{\tilde\gamma}$ and $$\|A\zeta\|_{\tilde\gamma}\le|A|_{\gamma,\varkappa}\|\zeta\|_{\tilde\gamma}.$$ \end{proposition} \begin{proof} Let $\zeta'=A\zeta$. We have $$\sum_{a}|\zeta'_a|^2e_{\tilde\gamma,0}(a,0)^2\le\sum_{a}\big(\sum_{b}|A_a^b||\zeta_{b}|e_{\tilde\gamma,0}(a,0)\big)^2.$$ Write $$|A_a^b||\zeta_{b}|e_{\tilde\gamma,0}(a,0)=I\times(I|\zeta_{b}|e_{\tilde\gamma,0}(b,0))\times J$$ where $$I=I_{a,b}=\sqrt{|A_a^b|e_{\gamma,\varkappa}(a,b)}.$$ Since, by Lemma \ref{lWeights}(ii), $$J=\frac{e_{\tilde\gamma,0}(a,0)}{e_{\gamma,\varkappa}(a,b)e_{\tilde\gamma,0}(b,0)}\le 1,$$ we get, by H\"older, \begin{multline*} \sum_{a}|\zeta'_a|^2e_{\tilde\gamma,0}(a,0)^2\le\sum_{a}(\sum_{b}I_{a,b}^2)(\sum_{b}I_{a,b}^2|\zeta_{b}|^2e_{\tilde\gamma,0}(b,0)^2)\\ \le|A|_{\gamma,\varkappa}\sum_{a,b}I_{a,b}^2|\zeta_{b}|^2e_{\tilde\gamma,0}(b,0)^2\le|A|_{\gamma,\varkappa}\sum_{b}|\zeta_{b}|^2e_{\tilde\gamma,0}(b,0)^2\sum_{a}I_{a,b}^2\le \end{multline*} $$\le|A|_{\gamma,\varkappa}^2\|\zeta\|_{\tilde\gamma}^2.$$ This shows that $\zeta'_a$ exists for all $a$, and it also proves the estimate. \end{proof} We have thus, for any $-\gamma\le\tilde\gamma\le\gamma$, a continuous embedding of $\mathcal{M}_{\gamma,\varkappa}$, $$\mathcal{M}_{\gamma,\varkappa}\hookrightarrow\mathcal{M}_{\gamma,0}\to\mathcal{B}(Y_{\tilde\gamma};Y_{\tilde\gamma}),$$ into the space of bounded linear operators on $Y_{\tilde\gamma}$. Matrix multiplication in $\mathcal{M}_{\gamma,\varkappa}$ corresponds to composition of operators. \smallskip For our applications we must consider a larger sub-algebra with somewhat weaker decay properties. For $\gamma=(\gamma_1,\gamma_2)\ge\gamma_*=(0,m_*)$, let $$\mathcal{M}_{\gamma,\varkappa}^b=\mathcal{B}(Y_{\gamma};Y_{\gamma})\cap\mathcal{M}_{(\gamma_1,\gamma_2+\varkappa-m_*),\varkappa}$$ which we provide with the norm $$\|A\|_{\gamma,\varkappa}=\|A\|_{\mathcal{B}(Y_{\gamma};Y_{\gamma})}+|A|_{(\gamma_1,\gamma_2+\varkappa-m_*),\varkappa}.$$ This norm makes $\mathcal{M}_{\gamma,0}^b$ into a Banach sub-algebra of $\mathcal{B}(Y_{\gamma};Y_{\gamma})$ and $\mathcal{M}_{\gamma,\varkappa}^b$ becomes a closed ideal in $\mathcal{M}_{\gamma,0}^b$.
\subsection{Functions} Let $$0\le\varkappa\le m_*$$ and let $$\gamma=(\gamma_1,m_*)\ge\gamma_*=(0,m_*).$$
\subsubsection{The function space $\mathcal{T}_{\gamma,\varkappa}$} Consider the space of functions $f:\mathcal{O}_{\gamma_*}(\sigma,\mu)\to\mathbb{C}$ which are real holomorphic up to the boundary (rhb) of $\mathcal{O}_{\gamma_*}(\sigma,\mu)$. This implies that for any $\gamma\ge\gamma_*$ $$f:\mathcal{O}_{\gamma}(\sigma,\mu)\to\mathbb{C},$$ $$Jdf:\mathcal{O}_\gamma(\sigma,\mu)\to Y_{-\gamma}$$ and $$Jd^2f:\mathcal{O}_\gamma(\sigma,\mu)\to\mathcal{B}(Y_\gamma,Y_{-\gamma})$$ are also rhb. We define $\mathcal{T}_{\gamma,\varkappa}(\sigma,\mu)$ to be the space of such functions such that for any $\gamma_*\le\gamma'\le\gamma$, $$\begin{array}{lll} R_1 & - & Jdf:\mathcal{O}_{\gamma'}(\sigma,\mu)\to Y_{\gamma'}\\ R_2 & - & Jd^2f:\mathcal{O}_{\gamma'}(\sigma,\mu)\to\mathcal{M}_{\gamma',\varkappa}^b \end{array}$$ are rhb. We provide $\mathcal{T}_{\gamma,\varkappa}(\sigma,\mu)$ with the norm \eqref{norm}. The higher differentials $d^{k+2}f$ can be estimated by Cauchy estimates on some smaller domain in terms of this norm. This norm makes $\mathcal{T}_{\gamma,\varkappa}(\sigma,\mu)$ into a Banach space and a Banach algebra with the constant function $f=1$ as unit. \begin{remark*} The differential forms $d^{k+2}f(x)$, $x\in\mathcal{O}_{\gamma'}(\sigma,\mu)$, are canonically identified with three bounded linear maps $$\begin{array}{lll} R_0 & - & \bigotimes_{k+2}Y_{\gamma'}=\underbrace{Y_{\gamma'}\otimes\dots\otimes Y_{\gamma'}}_{k+2\ \text{times}}\longrightarrow\mathbb{C}\\ R_1 & - & \bigotimes_{k+1}Y_{\gamma'}\longrightarrow Y_{-\gamma'}^*\overset{J}{\longrightarrow}Y_{\gamma'}\\ R_2 & - & \bigotimes_{k}Y_{\gamma'}\longrightarrow J^{-1}\mathcal{M}_{\gamma',\varkappa}^b\overset{J}{\longrightarrow}\mathcal{M}_{\gamma',\varkappa}^b, \end{array}$$ where $\otimes$ is the symmetric tensor product (over $\mathbb{C}$). \end{remark*}
\subsubsection{The function space $\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}$} Let $\mathcal{D}$ be the unit ball in $\mathbb{R}^{\mathcal{P}}$. We shall consider functions $$f:\mathcal{O}_{\gamma_*}(\sigma,\mu)\times\mathcal{D}\to\mathbb{C}$$ which are of class $\mathcal{C}^{{s_*}}$. We say that $f\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$ if, and only if, $$\frac{\partial^jf}{\partial\rho^j}(\cdot,\rho)\in\mathcal{T}_{\gamma,\varkappa}(\sigma,\mu)$$ for any $\rho\in\mathcal{D}$ and any $|j|\le{s_*}$. We provide this space with the norm $$|f|_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}=\max_{|j|\le{s_*}}\sup_{\rho\in\mathcal{D}}\Big|\frac{\partial^jf}{\partial\rho^j}(\cdot,\rho)\Big|_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa\end{subarray}}.$$ This norm makes $\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$ into a Banach space and a Banach algebra.
\subsubsection{Jets of functions.}\label{ss5.1} For any function $f\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$ we define its jet $f^T(x)$, $x=(r,\theta,w)$, as the following Taylor polynomial of $f$ at $r=0$ and $w=0$: \begin{equation}\label{jet} f(0,\theta,0)+d_rf(0,\theta,0)[r]+d_wf(0,\theta,0)[w]+\frac12d^2_wf(0,\theta,0)[w,w]. \end{equation} Functions of the form $f^T$ will be called {\it jet-functions}.
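To make the definition \eqref{jet} concrete, here is a small symbolic sketch (ours, using a toy finite-dimensional $f$ with one action variable $r$, one angle $\theta$ and two normal coordinates $w=(w_1,w_2)$; it only illustrates the truncation, not the norm estimates): the jet keeps the $\theta$-dependent constant term, the part linear in $r$, the part linear in $w$ and the part quadratic in $w$, and discards everything else.
\begin{verbatim}
# Illustrative sketch: the jet f^T of equation (jet) for a toy polynomial f.
import sympy as sp

r, theta, w1, w2 = sp.symbols('r theta w1 w2', real=True)
w = (w1, w2)

f = sp.cos(theta) + r**2*w1 + r*sp.sin(theta) + w1**3 + w1*w2*sp.cos(theta) + w2**2

def jet(f):
    at0 = {r: 0, w1: 0, w2: 0}
    fT = f.subs(at0) + sp.diff(f, r).subs(at0)*r           # f(0,th,0) + d_r f [r]
    fT += sum(sp.diff(f, wi).subs(at0)*wi for wi in w)      # + d_w f [w]
    fT += sp.Rational(1, 2)*sum(sp.diff(f, wi, wj).subs(at0)*wi*wj
                                for wi in w for wj in w)    # + 1/2 d_w^2 f [w,w]
    return sp.expand(fT)

# The terms r**2*w1 and w1**3 are discarded by the truncation:
print(jet(f))   # equals cos(theta) + r*sin(theta) + w1*w2*cos(theta) + w2**2
\end{verbatim}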
This function $g$ is real holomorphic up to the boundary (rhb) on $ \mathcal{O} _{\gamma _*}( \sigma , \mu)$, being a composition of such functions. Its sup-norm is obtained by a Cauchy estimate of $f$: $$\aa{d^2_wf(p(x))}_{ \mathcal{B} (Y_{\gamma _*},Y_{\gamma _*}; \mathbb{C} )}\aa{w}^2_{\gamma '} \le \mathbb{C} te\frac1{\mu^2} \sigma up_{ \mathcal{O} _{\gamma _*}( \sigma , \mu)}\ab{f(y)}\aa{w}_{\gamma _*}^2\le \mathbb{C} te \sigma up_{y\in \mathcal{O} _{\gamma _*}( \sigma , \mu)}\ab{f(y)}.$$ Since $Jd g(x)[\cdot]$ equals $$ \beta ig(J dd^2_w f\circ p(x)[w,w] \beta ig)[dp[\cdot]]+ 2 \beta ig(Jd^2_w f\circ p(x)[ w] \beta ig)[\cdot], $$ and $$Jd^2_w f: \mathcal{O} _{\gamma '}( \sigma , \mu)\to \mathcal{B} (Y_{\gamma '};Y_{\gamma '})$$ and $$Jd d^2_w f=J d^2_w df: \mathcal{O} _{\gamma '}( \sigma , \mu)\to \mathcal{B} (Y_{\gamma '},Y_{\gamma '};Y_{\gamma '})$$ are rhb, it follows that $d g$ verifies R1 and is rhb. The norm $\aa{Jd g(x)}_{\gamma '}$ is less than $$\aa{J d^2_w d f(p(x))}_{ \mathcal{B} (Y_{\gamma '},Y_{\gamma '};Y_{\gamma '})}\aa{w}^2_{\gamma '} +2\aa{Jd^2_wf(p(x))}_{ \mathcal{B} (Y_{\gamma '};Y_{\gamma '})}\aa{w}_{\gamma '},$$ which is $\le \mathbb{C} te \sigma up_{y\in \mathcal{O} _{\gamma '}( \sigma , \mu)}\aa{Jd f(y)}_{\gamma '}$ -- this follows by Cauchy estimates of derivatives of $Jd f$. Since $Jd^2 g(x)[\cdot,\cdot]$ equals $$ \beta ig(J d^2 d_w^2f\circ p(x)[w,w] \beta ig)[dp[\cdot],dp[\cdot]]+ 2J \beta ig( d d^2_w f\circ p(x)[w] \beta ig)[\cdot,dp[\cdot]+ 2Jd_w^2 f\circ p(x)[\cdot,\cdot],$$ and $$Jd^2_w f: \mathcal{O} _{\gamma '}( \sigma , \mu)\to \mathcal{M} _{\gamma ', \varkappa }^b,$$ $$J d d^2_w f=J d^2_w df: \mathcal{O} _{\gamma '}( \sigma , \mu)\to \mathcal{B} (Y_{\gamma '}; \mathcal{M} _{\gamma ', \varkappa }^b)$$ and $$J d^2 d_w^2f=J d_w^2 d^2f : \mathcal{O} _{\gamma '}( \sigma , \mu)\to \mathcal{B} (Y_{\gamma '},Y_{\gamma '}; \mathcal{M} _{\gamma ', \varkappa }^b)$$ are rhb, it follows that $Jd g^2$ verifies R2 and is rhb. The norm $\aa{Jd^2 g}_{\gamma ', \varkappa }$ is less than $$\aa{J d_w^2 d^2f(p(x))}_{ \mathcal{B} (Y_{\gamma '},Y_{\gamma '}; \mathcal{M} _{\gamma ', \varkappa }^b)}\aa{w}^2_{\gamma '} +2\aa{J d_w d^2f(p(x))}_{ \mathcal{B} (Y_{\gamma '}; \mathcal{M} _{\gamma ', \varkappa }^b)}\aa{w}_{\gamma '}+$$ $$ +2\aa{Jd^2f(x)}_{\gamma ', \varkappa },$$ which is $\le \mathbb{C} te \sigma up_{y\in \mathcal{O} _{\gamma '}( \sigma , \mu)}\aa{Jd^2 f(y)}_{\gamma ', \varkappa }$ -- this follows by a Cauchy estimate of $Jd^2 f$. The derivatives with respect to $ \rho $ are treated alike. \text{e} nd{proof} \sigma ubsection{Flows} \sigma ubsubsection{ Poisson brackets.} \lambda_a bel{ss5.2} The Poisson bracket $\{f,g\}$ of two $ \mathcal{C} ^1$-functions $f$ and $g$ is (formally) defined by $$ \mathcal{O} mega (Jdf,Jdg)=df[Jdg]=-dg[Jdf]$$ If one of the two functions verify condition R1, this product is well-defined. Moreover, if both $f$a nd $g$ are jet-function, then $\{f,g\}$ is also a jet-function. \beta egin{equation}gin{proposition} \lambda_a bel{lemma:poisson} Let $f,g\in \mathbb{T} c_{\gamma , \varkappa , \mathcal{D} }( \sigma ,\mu)$, and let $ \sigma '< \sigma $ and $\mu'<\mu\le1$. 
Then \beta egin{equation}gin{itemize} \item[(i)] $\{g,f\}\in \mathbb{T} c_{\gamma , \varkappa , \mathcal{D} }( \sigma ,\mu)$ and $$ \ab{\{g,f\}}_{ \beta egin{equation}gin{subarray}{c} \sigma ',\mu' \ \\ \gamma , \varkappa , \mathcal{D} \text{e} nd{subarray}}\leq C_{ \sigma - \sigma '}^{\mu-\mu'}\ab{g}_{ \beta egin{equation}gin{subarray}{c} \sigma ,\mu\ \ \\ \gamma , \varkappa , \mathcal{D} \text{e} nd{subarray}} \ab{f}_{ \beta egin{equation}gin{subarray}{c} \sigma ,\mu \ \ \\ \gamma , \varkappa , \mathcal{D} \text{e} nd{subarray}}$$ for $$C_{ \sigma - \sigma '}^{\mu-\mu'}=C \beta ig(\frac1{( \sigma igma- \sigma igma')} + \frac1{ (\mu-\mu') } \beta ig).$$ \item[(ii)] the n-fold Poisson bracket $ P_g^n f \in \mathbb{T} c_{\gamma , \varkappa , \mathcal{D} }( \sigma ,\mu)$ and $$\ab{ P_g^n f }_{ \beta egin{equation}gin{subarray}{c} \sigma ',\mu' \ \\ \gamma , \varkappa , \mathcal{D} \text{e} nd{subarray}}\leq \beta ig(C_{ \sigma - \sigma '}^{\mu-\mu'}\ab{g}_{ \beta egin{equation}gin{subarray}{c} \sigma ,\mu \ \ \\ \gamma , 0, \mathcal{D} \text{e} nd{subarray}} \beta ig)^n \ab{f}_{ \beta egin{equation}gin{subarray}{c} \sigma ,\mu \ \ \\ \gamma , 0, \mathcal{D} \text{e} nd{subarray}}$$ where $P_g f=\{g,f\}$. \text{e} nd{itemize} $C$ is an absolute constant. \text{e} nd{proposition} \beta egin{equation}gin{proof} (i) We must first consider the function $h= \mathcal{O} mega (Jdg,Jdf)$ on $ \mathcal{O} _{\gamma _*}( \sigma , \mu)$ Since $J dg,\ J df: \mathcal{O} _{\gamma _*}( \sigma , \mu)\to Y_{\gamma _*}$ are real holomorphic up to the boundary (rhb), it follows that $h: \mathcal{O} _{\gamma _*}( \sigma , \mu)\to \mathbb{C} $ is rhb, and $$\ab{h(x)}\le \aa{J dg (x)}_{\gamma _*}\aa{J df(x)}_{\gamma _*}.$$ The vector $J d h(x)$ is a sum of $$J \mathcal{O} mega (Jd^2g(x),Jdf(x))=Jd^2g(x)[Jdf(x)]$$ and another term with $g$ and $f$ interchanged. Since $Jd^2g: \mathcal{O} _{\gamma '}( \sigma , \mu)\to \mathcal{B} (Y_{\gamma '};Y_{\gamma '})$ and $Jdg,\ Jdf: \mathcal{O} _{\gamma '}( \sigma , \mu)\to Y_{\gamma '}$ are rhb, it follows that $Jd h$ verifies R1 and is rhb. Moreover $$\aa{Jd^2g(x)[Jdf(x),\cdot]}_{\gamma '}\le \aa{J d^2g (x)}_{ \mathcal{B} (Y_{\gamma '},Y_{\gamma '}) }\aa{J df(x)}_{\gamma '}$$ and, by definition of $ \mathcal{M} _{\gamma , \varkappa }^b$ , $$\aa{J d^2g (x)}_{ \mathcal{B} (Y_{\gamma '},Y_{\gamma '}) }\le \aa{J d^2g (x)}_{\gamma ',0}.$$ The operator $Jd^2h(x)=d(Jdh) (x)$ is a sum of $$Jd^3g(x)[Jdf(x)]$$ and $$Jd^2g(x)[Jd^2f(x)]$$ and two other terms with $g$ and $f$ interchanged. Since $Jd^3g: \mathcal{O} _{\gamma '}( \sigma , \mu)\to \mathcal{B} (Y_{\gamma '}; \mathcal{M} _{\gamma ', \varkappa }^b)$ and $Jdf: \mathcal{O} _{\gamma '}( \sigma , \mu)\to Y_{\gamma '}$ are rhb, it follows that the first function $ \mathcal{O} _{\gamma '}( \sigma , \mu)\to \mathcal{B} (Y_{\gamma '}; \mathcal{M} _{\gamma ', \varkappa }^b)$ is rhb. It can be estimated on a smaller domain using a Cauchy estimate for $Jd^3g(x)$ The second term is treated differently. Since $$Jd^2f,\ Jd^2g: \mathcal{O} _{\gamma '}( \sigma , \mu)\to \mathcal{M} _{\gamma , \varkappa }^b$$ are rhb, and since, by Proposition \rho ef{pMatrixProduct}, taking products is a bounded bi-linear maps with norm $\le 1$, it follows that the second function $ \mathcal{O} _{\gamma '}( \sigma , \mu)\to \mathcal{M} _{\gamma ', \varkappa }^b$ is rhb and $$\aa{Jd^2g(x)[Jd^2f(x)]}_{\gamma ', \varkappa }\le \aa{Jd^2g(x)}_{\gamma ', \varkappa }\aa{Jd^2f(x)}_{\gamma ', \varkappa }.$$ The derivatives with respect to $ \rho $ are treated alike. 
(ii) That $P_g^nf\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$ follows from (i), but the estimate does not follow from the estimate in (i). The estimate follows instead from Cauchy estimates of the $n$-fold product $P_g^nf$.
\end{proof}

\begin{remark}
The proof shows that the assumptions can be relaxed when $g$ is a jet-function: it suffices then to assume that $g\in\mathcal{T}_{\gamma,0,\mathcal{D}}(\sigma,\mu)$ and $g-\hat g(\cdot,0,\cdot)\footnote{\ $\hat g(\cdot,0,\cdot)$ is the $0$:th Fourier coefficient of the function $\theta\mapsto g(\cdot,\theta,\cdot)$.}\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$. Then $\{g,f\}$ will still be in $\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$, but with the bound
$$\ab{\{g,f\}}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\leq C_{\sigma-\sigma'}^{\mu-\mu'}\big(\ab{g}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,0,\mathcal{D}\end{subarray}}+\ab{g-\hat g(\cdot,0,\cdot)}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\big)\ab{f}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}.$$
To see this it is enough to consider a jet-function $g$ which does not depend on $\theta$. The only difference with respect to case (i) is for the second differential. The second term is fine since, by Proposition \ref{pMatrixProduct}, $\mathcal{M}_{\gamma',\varkappa}^b$ is a two-sided ideal in $\mathcal{M}_{\gamma',0}^b$ and
$$\aa{Jd^2g(x)[Jd^2f(x)]}_{\gamma',\varkappa}\le\aa{Jd^2g(x)}_{\gamma',0}\aa{Jd^2f(x)}_{\gamma',\varkappa}.$$
For the first term we must consider $Jd^3g(x)[Jdf(x)]$ which, a priori, takes its values in $\mathcal{M}_{\gamma',0}^b$ and not in $\mathcal{M}_{\gamma',\varkappa}^b$. But since $g$ is a jet-function independent of $\theta$, this term is $=0$.
\end{remark}

\subsubsection{Hamiltonian flows}\label{ss5.3}

The Hamiltonian vector field of a $\mathcal{C}^1$-function $g$ on (some open set in) $Y_\gamma$ is $-Jdg$. Without further assumptions it is an element of $Y_{-\gamma}$, but if $g\in\mathcal{T}_{\gamma,\varkappa}$, then it is an element of $Y_\gamma$ and has a well-defined local flow $\Phi_g$.

\begin{proposition}\label{Summarize}
Let $g\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$, and let $\sigma'<\sigma$ and $\mu'<\mu\le1$. If
$$\ab{g}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\leq\frac1{C}\min(\sigma-\sigma',\mu-\mu'),$$
then
\begin{itemize}
\item[(i)] the Hamiltonian flow map $\Phi^t=\Phi^t_g$ is, for all $\ab{t}\le1$ and all $\gamma_*\le\gamma'\le\gamma$, a $\mathcal{C}^{s_*}$-map
$$\mathcal{O}_{\gamma'}(\sigma',\mu')\times\mathcal{D}\to\mathcal{O}_{\gamma'}(\sigma,\mu)$$
which is real holomorphic and symplectic for any fixed $\rho\in\mathcal{D}$.
Moreover,
$$\aa{\partial_\rho^j(\Phi^t(x,\rho)-x)}_{\gamma'}\le C\ab{g}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}$$
and
$$\aa{\partial_\rho^j(d\Phi^t(x)-I)}_{\gamma',\varkappa}\le C\ab{g}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},$$
for any $x\in\mathcal{O}_{\gamma'}(\sigma',\mu')$, $\gamma_*\le\gamma'\le\gamma$, and $0\le\ab{j}\le s_*$.
\item[(ii)] $f\circ\Phi_g^t\in\mathcal{T}_{\gamma,\varkappa}(\sigma',\mu',\mathcal{D})$ for $\ab{t}\le1$ and
$$\ab{f\circ\Phi_g^t}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\leq C\ab{f}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}.$$
\end{itemize}
$C$ is an absolute constant.
\end{proposition}

\begin{proof}
It follows by general arguments that $\Phi=\Phi_g:U\to\mathcal{O}_\gamma(\sigma,\mu)$ is real holomorphic in $(t,\zeta)\in U\subset\mathbb{C}\times\mathcal{O}_\gamma(\sigma,\mu)$ and depends smoothly on any smooth parameter in the vector field. Clearly, for $\ab{t}\le1$ and $x\in\mathcal{O}_\gamma(\sigma',\mu')$,
$$\aa{\Phi^t(x,\rho)-x}_\gamma\le\sup_{x\in\mathcal{O}_\gamma(\sigma,\mu)}\aa{Jdg(x)}_\gamma\le\ab{g}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,0,\mathcal{D}\end{subarray}}$$
as long as $\Phi^t(x)$ stays in the domain $\mathcal{O}_\gamma(\sigma,\mu)$. It follows by classical arguments that this is the case if
$$\ab{g}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,0,\mathcal{D}\end{subarray}}\le\operatorname{ct.}\min(\sigma-\sigma',\mu-\mu').$$

{\it The differential.} We have
$$\frac{d}{dt}d\Phi^t(x)=-Jd^2g(\Phi^t(x))d\Phi^t(x)=B(t)d\Phi^t(x),$$
where $B(t)\in\mathcal{M}_{\gamma,\varkappa}^b$. By re-writing this equation in the integral form $d\Phi^t(x)=\operatorname{Id}+\int_0^tB(s)\,d\Phi^s(x)\,\text{d}s$ and iterating this relation, we get that $d\Phi^t(x)-\operatorname{Id}=B^{\infty}(t)$ with
$$B^{\infty}(t)=\sum_{k\geq1}\int_0^t\int_0^{t_1}\cdots\int_0^{t_{k-1}}\prod_{j=1}^kB(t_j)\,\text{d}t_k\cdots\text{d}t_2\,\text{d}t_1.$$
We get, by Proposition \ref{pMatrixProduct}, that $d\Phi^t(x)-\operatorname{Id}\in\mathcal{M}_{\gamma,\varkappa}^b$ and, for $\ab{t}\le1$,
$$\aa{d\Phi^t(x)-\operatorname{Id}}_{\gamma,\varkappa}\le\sum_{k\geq1}\aa{Jd^2g(\Phi^t(x))}^k_{\gamma,\varkappa}\frac{t^k}{k!}\le\aa{Jd^2g(\Phi^t(x))}_{\gamma,\varkappa}.$$
In particular, $A=d\Phi^t(x)$ is a bounded bijective operator on $Y_\gamma$. Since $B(t)=-Jd^2g(\Phi^t(x))$ is the linearization of the Hamiltonian vector field $-Jdg$ along the flow, we clearly have that
$$\Omega(A\zeta,A\zeta')=\Omega(\zeta,\zeta'),\quad\forall\zeta,\zeta'\in Y_\gamma,$$
so $A$ is symplectic.

{\it Parameter dependence.}
For $\ab{j}=1$, we have
$$\frac{d}{dt}Z(t)=\frac{d}{dt}\frac{\partial^j\Phi^t(x,\rho)}{\partial\rho^j}=B(t,\rho)Z(t)-\frac{\partial^jJdg(x,\rho)}{\partial\rho^j}=B(t)Z(t)+A(t).$$
Since
$$\aa{A(t)}_\gamma+\aa{B(t)}_{\gamma,\varkappa}\le Cte\,\ab{g}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},$$
it follows by classical arguments, using Gronwall, that
$$\aa{Z(t)}_{\gamma,0}\le Cte\,\ab{g}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\ab{t}.$$
The higher order derivatives (with respect to $\rho$) of $\Phi^t(x,\rho)$, and the derivatives of $d\Phi^t(x,\rho)$, are treated in the same way. The same argument applies to any $\gamma_*\le\gamma'\le\gamma$.

Since
$$f\circ\Phi_g^t=\sum_{n\ge0}\frac1{n!}t^nP^n_{-g}f,$$
(ii) is a consequence of Proposition \ref{lemma:poisson}(ii).
\end{proof}

\section{Normal Form Hamiltonians and the KAM theorem}

\subsection{Block decomposition, normal form matrices.}

In this subsection we recall two notions introduced in \cite{EK10} for the nonlinear Schr\"odinger equation. They are essential to overcome the problems of small divisors in a multidimensional context. Since the structure of the spectrum of the beam equation, $\{\sqrt{|a|^4+m},\ a\in\mathbb{Z}^{d_*}\}$, is similar to that of the NLS equation, $\{|a|^2+\hat V_a,\ a\in\mathbb{Z}^{d_*}\}$, to study the beam equation we will use tools similar to those used for the NLS equation.

\noindent{\bf Block decomposition:} For any $\Delta\in\mathbb{N}\cup\{\infty\}$ we define an equivalence relation on $\mathbb{Z}^{d_*}$, generated by the pre-equivalence relation
$$a\sim b\ \Longleftrightarrow\ \left\{\begin{array}{l}|a|=|b|\\ {[a-b]}\leq\Delta\end{array}\right.$$
(see \eqref{pdist}). Let $[a]_\Delta$ denote the equivalence class of $a$ -- the {\it block} of $a$. For further reference we note that
\begin{equation}\label{a-b}
|a|=|b|\ \text{ and }\ [a]_\Delta\neq[b]_\Delta\ \Rightarrow\ [a-b]\geq\Delta.
\end{equation}
The crucial fact is that the blocks have a finite maximal ``diameter''
$$d_\Delta=\max_{[a]=[b]}[a-b],$$
which does not depend on $a$ but only on $\Delta$.

\begin{proposition}\label{blocks}
\begin{equation}\label{block}
d_\Delta\leq C\Delta^{\frac{(d_*+1)!}2}.
\end{equation}
The constant $C$ only depends on $d_*$.
\end{proposition}

\begin{proof}
In \cite{EK10} the equivalence relation on $\mathbb{Z}^{d_*}$ generated by the pre-equivalence
$$a\approx b\quad\text{if}\quad|a|=|b|\quad\text{and}\quad|a-b|\le\Delta$$
was considered. Denote by $[a]^o_\Delta$ and $d^o_\Delta$ the corresponding equivalence class and its diameter (with respect to the usual distance). Since $a\sim b$ if and only if $a\approx b$ or $a\approx-b$, we have
\begin{equation}\label{union}
[a]_\Delta=[a]^o_\Delta\cup-[a]^o_\Delta,
\end{equation}
provided that the union in the r.h.s. is disjoint.
It is proved in \cite{EK10} that $d^o_\Delta\le D_\Delta=:C\Delta^{\frac{(d_*+1)!}2}$. Accordingly, if $|a|\ge D_\Delta$, then the union above is disjoint, \eqref{union} holds and the diameter of $[a]_\Delta$ satisfies \eqref{block}. If $|a|<D_\Delta$, then $[a]_\Delta$ is contained in a sphere of radius $<D_\Delta$, so the block's diameter is at most $2D_\Delta$. This proves \eqref{block} if we replace there $C_{d_*}$ by $2C_{d_*}$.
\end{proof}

If $\Delta=\infty$ then the block of $a$ is the sphere $\{b:|b|=|a|\}$. Each block decomposition is a sub-decomposition of the trivial decomposition formed by the spheres $\{|a|=\text{const}\}$.

\noindent{\bf Normal form matrices.} Let $\mathcal{E}_\Delta$ be the decomposition of $\mathcal{L}=\mathcal{F}\sqcup\mathcal{L}_{\infty}$ into the subsets
$$[a]_\Delta=\left\{\begin{array}{ll}[a]_\Delta\cap\mathcal{L}_{\infty}&a\in\mathcal{L}_{\infty}\subset\mathbb{Z}^{d_*}\\ \mathcal{F}&a\in\mathcal{F}.\end{array}\right.$$

\begin{remark}\label{remark-blocks}
Now the diameter of each block $[a]_\Delta$ is bounded by
$$d_\Delta\leq C\Delta^{\frac{(d_*+1)!}2}$$
if moreover we let $C\ge\#\mathcal{F}$.
\end{remark}

On the space of $2\times2$ complex matrices we introduce a projection
$$\Pi:\operatorname{Mat}(2\times2,\mathbb{C})\to\mathbb{C}I+\mathbb{C}J,$$
orthogonal with respect to the Hilbert-Schmidt scalar product. Note that $\mathbb{C}I+\mathbb{C}J$ is the space of matrices commuting with the symplectic matrix $J$.

\begin{definition}\label{d_31}
We say that a matrix $A:\mathcal{L}\times\mathcal{L}\to\operatorname{Mat}(2\times2,\mathbb{C})$ is on normal form with respect to $\Delta$, $\Delta\in\mathbb{N}\cup\{\infty\}$, and write $A\in\mathcal{NF}_\Delta$, if
\begin{itemize}
\item[(i)] $A$ is real valued,
\item[(ii)] $A$ is symmetric, i.e. $A_b^a\equiv{}^t\!A_a^b$,
\item[(iii)] $A$ is block diagonal over $\mathcal{E}_\Delta$, i.e. $A_b^a=0$ if $[a]_\Delta\neq[b]_\Delta$,
\item[(iv)] $A$ satisfies $\Pi A^a_b\equiv A^a_b$ for all $a,b\in\mathcal{L}_{\infty}$.
\end{itemize}
\end{definition}

Any quadratic form ${\mathbf q}(w)=\frac12\langle w,Aw\rangle$, $w=(p,q)$, can be written as
$$\frac12\langle p,A_{11}p\rangle+\langle p,A_{12}q\rangle+\frac12\langle q,A_{22}q\rangle+\frac12\langle w_{\mathcal{F}},H(\rho)w_{\mathcal{F}}\rangle,$$
where $A_{11},\ A_{22}$ and $H$ are real symmetric matrices and $A_{12}$ is a real matrix. We now pass from the real variables $w_a=(p_a,q_a)$ to the complex variables $z_a=(\xi_a,\eta_a)$ by $w=Uz$, defined through
\begin{equation}\label{transf}
\xi_a=\frac1{\sqrt2}(p_a+iq_a),\quad\eta_a=\frac1{\sqrt2}(p_a-iq_a),
\end{equation}
for $a\in\mathcal{L}_\infty$, and acting like the identity on $(\mathbb{C}^2)^{\mathcal{F}}$.
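For illustration, on a single mode $a\in\mathcal{L}_\infty$ the transformation \eqref{transf} gives
$$p_a^2+q_a^2=2\xi_a\eta_a\qquad\text{and}\qquad dp_a\wedge dq_a={\mathbf i}\,d\xi_a\wedge d\eta_a$$
(assuming the two-form on the $a$:th mode is $dp_a\wedge dq_a$), which is consistent with the relation $UJ\,{}^t\!U={\mathbf i}J$ on the infinite-dimensional part recorded below.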
Then we have
$${\mathbf q}(Uz)=\frac12\langle\xi,P\xi\rangle+\frac12\langle\eta,\overline P\eta\rangle+\langle\xi,Q\eta\rangle+\frac12\langle z_{\mathcal{F}},H(\rho)z_{\mathcal{F}}\rangle,$$
where
$$P=\frac12\Big((A_{11}-A_{22})-{\mathbf i}(A_{12}+{}^tA_{12})\Big)$$
and
$$Q=\frac12\Big((A_{11}+A_{22})+{\mathbf i}(A_{12}-{}^tA_{12})\Big).$$
Hence $P$ is a complex symmetric matrix and $Q$ is a Hermitian matrix. If $A$ is on normal form, then $P=0$. Notice that this change of variables is not symplectic, but
$$U\left(\begin{array}{cc}J_\infty&0\\0&J_{\mathcal{F}}\end{array}\right){}^t\!U=\left(\begin{array}{cc}{\mathbf i}J_\infty&0\\0&J_{\mathcal{F}}\end{array}\right).$$

\subsection{The unperturbed Hamiltonian}\label{ssUnperturbed}

Let $h$ be a function as in \eqref{equation1.1}, i.e.
\begin{equation}\label{unperturbed}
h(r,w,\rho)=\langle r,\omega(\rho)\rangle+\frac12\langle w,A(\rho)w\rangle,
\end{equation}
where
$$\langle w,A(\rho)w\rangle=\langle w_{\mathcal{F}},H(\rho)w_{\mathcal{F}}\rangle+\frac12\big(\langle p_{\infty},Q(\rho)p_{\infty}\rangle+\langle q_{\infty},Q(\rho)q_{\infty}\rangle\big)$$
and
$$Q(\rho)=\operatorname{diag}\{\lambda_a(\rho):a\in\mathcal{L}_\infty\}.$$
Assume $\omega,\ \lambda_a,\ H$ verify \eqref{properties}. We shall write
$$\chi=|\nabla_\rho\omega|_{\mathcal{C}^{s_*-1}(\mathcal{D})}+\sup_{a\in\mathcal{L}}|\nabla_\rho\lambda_a|_{\mathcal{C}^{s_*-1}(\mathcal{D})}+\|\nabla_\rho H\|_{\mathcal{C}^{s_*-1}(\mathcal{D})},$$
since this quantity will play an important role in our analysis.

\subsubsection{Assumption A1 -- spectral asymptotics}

There exist constants $0<c,c'\le1$ and exponents $\beta_1=2$, $\beta_2\ge0$, $\beta_3>0$ such that for all $\rho\in\mathcal{D}$ the relations \eqref{laequiv}, \eqref{la-lb}, \eqref{la-lb-bis} and \eqref{la-lb-ter} hold.

\subsubsection{Assumption A2 -- transversality.}

Let $[a]=[a]_\infty$, so that $[a]$ equals $\{b:\ab{b}=\ab{a}\}$ when $a\in\mathcal{L}_\infty$ and equals $\mathcal{F}$ when $a\in\mathcal{F}$. Denote by $Q_{[a]}$ the restriction of the matrix $Q$ to $[a]\times[a]$ and let $Q_{[\emptyset]}=0$. Let also $JH(\rho)_{[\emptyset]}=0$ and $m=2\#\mathcal{F}$.

There exists a $1\ge\delta_0>0$ such that for all $\mathcal{C}^{s_*}$-functions
$$\omega':\mathcal{D}\to\mathbb{R}^n,\quad|\omega'-\omega|_{\mathcal{C}^{s_*}(\mathcal{D})}<\delta_0,$$
the following hold for each $k\in\mathbb{Z}^n\setminus0$:
\begin{itemize}
\item[$(i)$] for any $a,b\in\mathcal{L}_\infty\cup\{\emptyset\}$ let
$$L(\rho):X\mapsto\langle k,\omega'(\rho)\rangle X+Q_{[a]}(\rho)X\pm XQ_{[b]};$$
then either $L(\rho)$ is {\it $\delta_0$-invertible} for all $\rho\in\mathcal{D}$, i.e.
$$\aa{L(\rho)^{-1}}\le\frac1{\delta_0},\quad\forall\rho\in\mathcal{D},$$
or there exists a unit vector ${\mathfrak z}$ such that
$$\ab{\langle v,\partial_{\mathfrak z}L(\rho)v\rangle}\ge\delta_0,\quad\forall\rho\in\mathcal{D},\footnote{\ $\partial_{\mathfrak z}$ denotes here the directional derivative in the direction ${\mathfrak z}\in\mathbb{R}^p$.}$$
and for any unit vector $v$ in the domain of $L(\rho)$;
\item[$(ii)$] let
$$L(\rho,\lambda):X\mapsto\langle k,\omega'(\rho)\rangle X+\lambda X+{\mathbf i}XJH(\rho)$$
and
$$P(\rho,\lambda)=\det L(\rho,\lambda);$$
then either $L(\rho,\lambda_a(\rho))$ is $\delta_0$-invertible for all $\rho\in\mathcal{D}$ and $a\in[a]_\infty$, or there exists a unit vector ${\mathfrak z}$ such that
$$\ab{\partial_{\mathfrak z}P(\rho,\lambda_a(\rho))+\partial_{\lambda}P(\rho,\lambda_a(\rho))\langle v,\partial_{\mathfrak z}Q(\rho)v\rangle}\ge\delta_0\aa{L(\cdot,\lambda_a(\cdot))}_{\mathcal{C}^{1}(\mathcal{D})}^{m-1}$$
for all $\rho\in\mathcal{D}$, $a\in[a]_\infty$ and for any unit vector $v\in(\mathbb{C}^2)^{[a]}$;
\item[$(iii)$] for any $a,b\in\mathcal{F}\cup\{\emptyset\}$ let
$$L(\rho):X\mapsto\langle k,\omega'(\rho)\rangle X-{\mathbf i}JH(\rho)_{[a]}X+{\mathbf i}XJH(\rho)_{[b]};$$
then either $L(\rho)$ is $\delta_0$-invertible for all $\rho\in\mathcal{D}$, or there exists a unit vector ${\mathfrak z}$ and an integer $1\le j\le s_*$ such that
$$\ab{\partial_{\mathfrak z}^j\det L(\rho)}\ge\delta_0\aa{L(\rho)}_{\mathcal{C}^{j}(\mathcal{D})}^{m-1},\quad\forall\rho\in\mathcal{D}.$$
\end{itemize}

\begin{remark}\label{remro}
The dichotomy in A2 is imposed not only on $\omega$ but also on $\mathcal{C}^{s_*}$-perturbations $\omega'$ of $\omega$, because, in general, the dichotomy for $\omega$ does not imply that for its perturbations. If, however, any $\mathcal{C}^{s_*}$-perturbation $\omega'$ of $\omega$ can be written as $\omega'=\omega\circ f$ for some diffeomorphism $f=id+\mathcal{O}(\delta_0)$ -- this is for example the case when $\omega(\rho)=\rho$ -- then the dichotomy for $\omega$ implies a dichotomy for $\mathcal{C}^{s_*}$-perturbations.
\end{remark}

\subsection{Normal form Hamiltonians}

Consider now the function \eqref{unperturbed} defined on the set $\mathcal{D}$. This function will be fixed throughout this paper and we shall denote it and its ``components'' by
$$h_{\textrm{up}},\ \omega_{\textrm{up}},\ A_{\textrm{up}},\ Q_{\textrm{up}},\ H_{\textrm{up}}.$$

\begin{remark}\label{rConvention}
The essential properties of $h_{\textrm{up}}$ are given by the constants
$$\chi,\ c',\ c,\ \beta=(\beta_1,\beta_2,\beta_3),\ \delta_0.$$
These will be fixed now, once and for all. All estimates will depend on $h_{\textrm{up}}$ only through these constants. Since it will be important for our analysis of the beam equation, we shall track the dependence on $c',\delta_0,\chi$. In order to simplify the estimates a little we shall assume that
\begin{equation}\label{Conv}
0<c'\le\delta_0\le\chi\le c.
\end{equation}
\end{remark}

We shall consider functions of the form
\begin{equation}\label{normform}
h(r,w,\rho)=\langle\omega(\rho),r\rangle+\frac12\langle w,A(\rho)w\rangle
\end{equation}
which satisfy

\noindent{\bf Hypothesis $\omega$:} $\omega$ is of class $\mathcal{C}^{s_*}$ on $\mathcal{D}$ and
\begin{equation}\label{hyp-omega}
|\omega-\omega_{\textrm{up}}|_{\mathcal{C}^{s_*}(\mathcal{D})}\le\delta.
\end{equation}

\noindent{\bf Hypothesis B:} $A-A_{\textrm{up}}:\mathcal{D}\to\mathcal{M}_{0,\varkappa}^b$ is of class $\mathcal{C}^{s_*}$, $A(\rho)$ is on normal form, $A(\rho)\in\mathcal{NF}_{\Delta}$, for all $\rho\in\mathcal{D}$ and
\begin{equation}\label{hypoB}
\|\partial_\rho^j(A(\rho)-A_{\textrm{up}}(\rho))_{[a]}\|\le\delta\frac1{\langle a\rangle^{\varkappa}}
\end{equation}
for $|j|\le s_*$, $a\in\mathcal{L}$ and $\rho\in\mathcal{D}$.\footnote{\ Here it is important that $\|\ \|$ is the matrix operator norm.} We also require that
\begin{equation}\label{varkappa}
\varkappa>0.
\end{equation}
A function verifying these assumptions is said to be on {\it normal form}, and we shall denote this by
$$h\in\mathcal{NF}_{\varkappa,h_{\textrm{up}}}(\Delta,\delta).$$
Since the unperturbed Hamiltonian $h_{\textrm{up}}$ will be fixed in this paper, we shall often suppress it in this notation, writing simply $h\in\mathcal{NF}_{\varkappa}(\Delta,\delta)$.

\subsection{The normal form theorem}

In this section we state an abstract KAM result for perturbations of normal form Hamiltonians by a function in $\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$, $0<\sigma,\mu\le1$.
Let
$$\gamma=(\gamma_1,m_*)>\gamma_*=(0,m_*)\quad\textrm{and}\quad\varkappa>0.$$

\begin{theorem}\label{main}
There exist positive constants $C$, $\alpha$, $\exp$ and $\exp_3$ such that, for any $h\in\mathcal{NF}_{\varkappa,h_{\textrm{up}}}(\Delta,\delta)$ and any $f\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$, with
$$\varepsilon=\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\quad\textrm{and}\quad\xi=\ab{f}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},$$
if
$$\delta\le\frac1{2C}c'$$
and
\begin{equation}\label{epsi}
\varepsilon\big(\log\frac1\varepsilon\big)^{\exp}\le\frac1{C}\Big(\frac{\max(\gamma_1^{-1},d_{\Delta})}{\sigma\mu}\Big)^{-\exp}\Big(\frac{c'}{\chi+\xi}\Big)^{\exp_3}c',\footnote{\ Remember the convention \eqref{Conv}.}
\end{equation}
then there exist a set $\mathcal{D}'=\mathcal{D}'(h,f)\subset\mathcal{D}$,
$$\operatorname{Leb}(\mathcal{D}\setminus\mathcal{D}')\leq C\Big(\log\frac1{\varepsilon}\,\frac{\max(\gamma_1^{-1},d_{\Delta})}{\sigma\mu}\Big)^{\exp}\Big(\frac{\chi+\xi}{\delta_0}\Big)^{1+\alpha}\Big(\frac{\varepsilon}{\delta_0}\Big)^{\alpha},$$
and a $\mathcal{C}^{s_*}$ mapping
$$\Phi:\mathcal{O}_{\gamma_*}(\sigma/2,\mu/2)\times\mathcal{D}\to\mathcal{O}_{\gamma_*}(\sigma,\mu),$$
real holomorphic and symplectic for each parameter $\rho\in\mathcal{D}$, such that
$$(h+f)\circ\Phi=h'+f'$$
and
\begin{itemize}
\item[(i)] for $\rho\in\mathcal{D}'$ and $\zeta=r=0$
$$d_rf'=d_\theta f'=d_{\zeta}f'=d^2_{\zeta}f'=0;$$
\item[(ii)] $h'\in\mathcal{NF}_{\varkappa}(\infty,\delta')$, $\delta'\le\frac{c'}2$, and
$$\ab{h'-h}_{\begin{subarray}{c}\sigma/2,\mu/2\\ \gamma_*,\varkappa,\mathcal{D}\end{subarray}}\le C;$$
\item[(iii)] for any $x\in\mathcal{O}_{\gamma_*}(\sigma/2,\mu/2)$ and $\ab{j}\le s_*$
$$\|\partial_\rho^j(\Phi(x,\cdot)-x)\|_{\gamma_*}+\aa{\partial_\rho^j(d\Phi(x,\cdot)-I)}_{\gamma_*,\varkappa}\le C;$$
\item[(iv)] if $\tilde\rho=(0,\rho_2,\dots,\rho_p)$ and $f^T(\cdot,\tilde\rho)=0$ for all $\tilde\rho$, then $h'=h$ and $\Phi(x,\cdot)=x$ for all $\tilde\rho$.
\end{itemize}
$C$ is an absolute constant that only depends on $\beta,\varkappa,c$ and $\sup_{\mathcal{D}}\ab{\omega}$. The exponent $\exp$ is an absolute constant that only depends on $\beta$ and $\varkappa$. The exponent $\exp_3$ only depends on $s_*$. The exponent $\alpha$ is a positive constant only depending on $s_*,\frac{d_*}{\varkappa},\frac{d_*}{\beta_3}$.
\end{theorem}

The condition on $\Phi$ and $h'-h$ may look bad, but it is not.

\begin{corollary}\label{cMain}
Under the assumptions of the theorem, let $\varepsilon_*$ be the largest positive number such that \eqref{epsi} holds.
Then, for any $\rho\in\mathcal{D}$ and $\ab{j}\le s_*-1$,
\begin{itemize}
\item[$(ii)'$]
$$\ab{\partial_\rho^j(h'(\cdot,\rho)-h(\cdot,\rho))}_{\begin{subarray}{c}\sigma/2,\mu/2\\ \gamma_*,\varkappa\end{subarray}}\le\frac{C}{\varepsilon_*}\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}};$$
\item[$(iii)'$]
$$\|\partial_\rho^j(\Phi(x,\rho)-x)\|_{\gamma_*}+\aa{\partial_\rho^j(d\Phi(x,\rho)-I)}_{\gamma_*,\varkappa}\le\frac{C}{\varepsilon_*}\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},$$
for any $x\in\mathcal{O}_{\gamma_*}(\sigma/2,\mu/2)$.
\end{itemize}
The constant $C$ is an absolute constant that also depends on $\beta$.
\end{corollary}

\begin{proof}
Let us denote $\rho$ here by $\tilde\rho$. If $|f^T|_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\varepsilon_*$, then we can apply the theorem to $\varepsilon f$ for any $|\varepsilon|\le1$. Let now $\rho=(\varepsilon,\tilde\rho)$ and consider $h$ and $h_{\textrm{up}}$ as functions depending on this new parameter $\rho$ -- they will still verify the assumptions of the theorem, which then provides us with a mapping $\Phi$ with a $\mathcal{C}^{s_*}$ dependence on $\rho=(\varepsilon,\tilde\rho)$, equal to the identity when $\varepsilon=0$. The bound on the derivative now implies that
$$\|\Phi(x,\varepsilon,\tilde\rho)-x\|_{\gamma_*}\le C\varepsilon\le\frac C{\varepsilon_*}|f^T|_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}$$
for any $x\in\mathcal{O}_{\gamma_*}(\sigma/2,\mu/2)$. The same estimate holds for all derivatives with respect to $\tilde\rho$ up to order $s_*-1$. The argument for $h'-h$ is the same.
\end{proof}

We can take $\delta=0$ here, in which case $h$ equals the unperturbed Hamiltonian $h_{\textrm{up}}$ -- this is the case described in the theorem of the Introduction.

\begin{remark}
The assumption \eqref{epsi} on $\varepsilon$ involves many constants and parameters. For fixed $\gamma,\sigma,\mu$ and $\Delta$, \eqref{epsi} takes the form
$$\varepsilon\big(\log\frac1\varepsilon\big)^{\exp}\le C'\Big(\frac{c'}{\chi+\xi}\Big)^{\exp_3}c',$$
where $C'$ depends on $C,\gamma,\sigma,\mu,\Delta$. If we assume that
$$\xi,\chi=O(\delta_0^{1-\aleph})\quad\text{and}\quad c'=O(\delta_0^{1+\aleph})$$
for some $\aleph>0$, then assumption \eqref{epsi} reduces to
$$\varepsilon\big(\log\frac1\varepsilon\big)^{\exp}\lesssim C'\delta_0^{1+\aleph+2\aleph\exp_3},$$
which implies
$$\varepsilon_*\gtrsim C'\delta_0^{1+2\aleph+2\aleph\exp_3}$$
when $\delta_0$ is sufficiently small. This is the context in which Theorem \ref{main} is used in \cite{EGK}.
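Indeed, with these choices the right-hand side of \eqref{epsi} behaves, up to the fixed constants, like
$$\Big(\frac{c'}{\chi+\xi}\Big)^{\exp_3}c'\sim\Big(\frac{\delta_0^{1+\aleph}}{\delta_0^{1-\aleph}}\Big)^{\exp_3}\delta_0^{1+\aleph}=\delta_0^{1+\aleph+2\aleph\exp_3},$$
and the additional factor $\delta_0^{\aleph}$ in the lower bound for $\varepsilon_*$ is there to absorb the logarithmic factor $(\log\frac1\varepsilon)^{\exp}$ once $\delta_0$ is small.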
\end{remark}

\section{Small Divisors}\label{s6}

For a mapping $L:\mathcal{D}\to gl(\dim,\mathbb{R})$ define, for any $\kappa>0$,
$$\Sigma(L,\kappa)=\{\rho\in\mathcal{D}:\|L(\rho)^{-1}\|>\frac1{\kappa}\}.$$
Let
$$h(r,w,\rho)=\langle r,\omega(\rho)\rangle+\frac12\langle w,A(\rho)w\rangle$$
be a normal form Hamiltonian in $\mathcal{NF}_{\varkappa}(\Delta,\delta)$. Recall the convention in Remark \ref{rConvention} and assume $\varkappa>0$ and
\begin{equation}\label{ass}
\delta\le\frac1{C}c',
\end{equation}
where $C$ is to be determined.

\begin{lemma}\label{lSmallDiv1}
Let
$$L_{k}=\langle k,\omega(\rho)\rangle.$$
There exists a constant $C$ such that if \eqref{ass} holds, then
$$\operatorname{Leb}\big(\bigcup_{0<\ab{k}\le N}\Sigma(L_k,\kappa)\big)\le CN^{\exp}\frac{\chi+\delta}{\delta_0}\frac{\kappa}{\delta_0}$$
and
$$\underline{\operatorname{dist}}(\mathcal{D}\setminus\Sigma(L_k,\kappa),\Sigma(L_k,\frac\kappa2))>\frac1{C}\frac{\kappa}{N(\chi+\delta)}\footnote{\ This is assumed to be fulfilled if $\Sigma(L_k,\frac\kappa2)=\emptyset$.}$$
for any $\kappa>0$. The exponent $\exp$ only depends on $\#\mathcal{A}$. $C$ is an absolute constant.
\end{lemma}

\begin{proof}
Since $\delta\le\delta_0$, using Assumption A2$(i)$ with $a=b=\emptyset$, we have, for each $k\not=0$, either that
$$|\langle\omega(\rho),k\rangle|\ge\delta_0\ge\kappa\quad\forall\rho\in\mathcal{D}$$
or that
$$\partial_{\mathfrak z}\langle\omega(\rho),k\rangle\geq\delta_0\quad\forall\rho\in\mathcal{D}$$
(for some suitable choice of a unit vector $\mathfrak z$). The first case implies $\Sigma(L_{k},\kappa)=\emptyset$. The second case implies that $\Sigma(L_k,\kappa)$ has Lebesgue measure
$$\lesssim\frac{N(\chi+\delta)}{\delta_0}\frac{\kappa}{\delta_0}.$$
Summing up over all $0<\ab{k}\le N$ gives the first statement. The second statement follows from the mean value theorem and the bound
$$\ab{\nabla_\rho L_k(\rho)}\le N(\chi+\delta).$$
\end{proof}

\begin{lemma}\label{lSmallDiv2}
Let
$$L_{k,[a]}=\big(\langle k,\omega\rangle I-JA\big)_{[a]}.$$
There exists a constant $C$ such that if \eqref{ass} holds, then
$$\operatorname{Leb}\big(\bigcup_{\begin{subarray}{c}0<\ab{k}\le N\\ [a]\end{subarray}}\Sigma(L_{k,[a]},\kappa)\big)\le CN^{\exp}\Big(\frac{\chi+\delta}{\delta_0}\Big)\Big(\frac{\kappa}{\delta_0}\Big)^{\frac1{s_*}}$$
and
$$\underline{\operatorname{dist}}(\mathcal{D}\setminus\Sigma(L_{k,[a]},\kappa),\Sigma(L_{k,[a]},\frac\kappa2))>\frac1{C}\frac{\kappa}{N(\chi+\delta)},$$
for any $\kappa>0$. $\exp$ only depends on $\frac{d_*}{\beta_1}$ and $\#\mathcal{A}$. $C$ is an absolute constant that depends on $c$ and $\sup_{\mathcal{D}}\ab{\omega}$.
\end{lemma}

\begin{proof}
Consider first $a\in\mathcal{L}_\infty$.
Then $L_{k,[a]}$ decouples into a sum of two Hermitian operators of the form
$$\langle k,\omega\rangle I+Q_{[a]}$$
-- denoted $L=L_{k,[a]}$ -- where $Q_{[a]}$ is the restriction of $Q$ to $[a]\times[a]$. Since $L$ is Hermitian:
\begin{itemize}
\item
$$\|L(\rho)^{-1}\|\leq\max\|\big(\langle k,\omega(\rho)\rangle+\lambda(\rho)\big)^{-1}\|,$$
where the maximum is taken over all eigenvalues $\lambda(\rho)$ of $Q(\rho)$;
\item for any $\rho_0\in\mathcal{D}$,
$$\partial_{\mathfrak z}\langle v(\rho),L(\rho)v(\rho)\rangle\big|_{\rho=\rho_0}=\partial_{\mathfrak z}\langle v(\rho_0),L(\rho)v(\rho_0)\rangle\big|_{\rho=\rho_0}$$
for any eigenvector $v(\rho)$ of $L(\rho)$ (associated to an eigenvalue which is $\mathcal{C}^1$ in the direction $\mathfrak z$).
\end{itemize}
If we let
$$L_{\textrm{up}}=\langle k,\omega\rangle I+(Q_{\textrm{up}})_{[a]},$$
where $Q_{\textrm{up}}$ comes from the unperturbed Hamiltonian, then it follows from \eqref{hypoB} and \eqref{ass} that
$$\aa{L-L_{\textrm{up}}}_{\mathcal{C}^{s_*}(\mathcal{D})}\leq\delta\leq\frac{\delta_0}2$$
and, hence,
$$\textrm{d}_{\operatorname{Hausdorff}}(\sigma(L),\sigma(L_{\textrm{up}}))<\delta.$$
If now $L_{\textrm{up}}$ is $\delta_0$-invertible, then this implies that $L$ is $\frac{\delta_0}2$-invertible. Otherwise
$$\partial_{\mathfrak z}\big(\langle k,\omega(\rho)\rangle+\lambda(\rho)\big)\big|_{\rho=\rho_0}=\langle v(\rho),\partial_{\mathfrak z}L(\rho)v(\rho)\rangle\big|_{\rho=\rho_0}=\langle v(\rho),\partial_{\mathfrak z}L_{\textrm{up}}(\rho)v(\rho)\rangle\big|_{\rho=\rho_0}+\mathcal{O}(\delta),$$
where $v(\rho)$ is a unit eigenvector of $L(\rho)$ associated with the eigenvalue $\lambda(\rho)$. If $L_{\textrm{up}}$ is not $\delta_0$-invertible, then, by Assumption A2$(i)$, there exists a unit vector ${\mathfrak z}$ such that
$$\ab{\partial_{\mathfrak z}\big(\langle k,\omega(\rho)\rangle+\lambda(\rho)\big)}\big|_{\rho=\rho_0}\ge\delta_0-\delta\ge\frac{\delta_0}2.$$
Hence, the Lebesgue measure of $\Sigma(L_{k,[a]},\kappa)$ is $\lesssim d_{\Delta}^{d_*}\frac{\kappa}{\delta_0}$ -- recall that, by Remark \ref{remark-blocks}, the operator is of dimension $\lesssim d_{\Delta}^{d_*}$. (This argument is valid if $\lambda(\rho)$ is $\mathcal{C}^1$ in the direction $\mathfrak z$, which can always be assumed when $Q$ is analytic in $\rho$. The non-analytic case follows by analytic approximation.)

Since $|\langle k,\omega(\rho)\rangle|\lesssim\ab{k}\lesssim N$, it follows by \eqref{la-lb-ter} that
$$|\langle k,\omega(\rho)\rangle+\lambda(\rho)|\geq\ab{\lambda_a(\rho)}-\delta-Cte\,\ab{k}\ge\ab{a}^{\beta_1}-c\langle a\rangle^{-\beta_2}-\delta-Cte\,\ab{k}$$
for some appropriate $a\in[a]$. Hence, $\Sigma(L_{k,[a]},\kappa)=\emptyset$ for $|a|\gtrsim N^{\frac1{\beta_1}}$.
Summing up over all $0<\ab{k}\le N$ and all $|a|\lesssim N^{\frac1{\beta_1}}$ gives the first estimate.

Consider now $a\in\mathcal{F}$. Then $L(\rho)=\big(\langle k,\omega\rangle I-{\mathbf i}JH\big)$ and it follows, by \eqref{hypoB} and \eqref{ass}, that
$$\aa{L-L_{\textrm{up}}}_{\mathcal{C}^{s_*}}\le\delta\leq\frac12\delta_0,$$
where $L_{\textrm{up}}(\rho)=\big(\langle k,\omega\rangle I-{\mathbf i}JH_{\textrm{up}}\big)$. If now $L_{\textrm{up}}$ is $\delta_0$-invertible, then $L$ will be $\frac{\delta_0}2$-invertible. Otherwise,
$$\ab{\det L-\det L_{\textrm{up}}}_{\mathcal{C}^{j}}\le Cte\,\delta\big(\aa{L_{\textrm{up}}}_{\mathcal{C}^{j}}+\delta\big)^{m-1}$$
and, by Assumption A2(iii), there exists a unit vector ${\mathfrak z}$ and an integer $1\le j\le s_*$ such that
$$\ab{\partial_{\mathfrak z}^j\det L_{\textrm{up}}(\rho)}\ge\delta_0\aa{L_{\textrm{up}}}_{\mathcal{C}^{j}(\mathcal{D})}^{m-1},\quad\forall\rho\in\mathcal{D}.$$
This implies that $|L_{\textrm{up}}|_{\mathcal{C}^{j}}\ge\operatorname{ct.}\delta_0$ and, hence,
$$\ab{\det L-\det L_{\textrm{up}}}_{\mathcal{C}^{j}}\le Cte\,\delta\aa{L_{\textrm{up}}}_{\mathcal{C}^{j}}^{m-1}.$$
Thus
$$\ab{\partial_{\mathfrak z}^j\det L(\rho)}\ge(\delta_0-Cte\,\delta)\aa{L}_{\mathcal{C}^{j}(\mathcal{D})}^{m-1},\quad\forall\rho\in\mathcal{D},$$
and $\delta_0-Cte\,\delta\ge\frac{\delta_0}2$. Then, by Lemma \ref{lTransv1},
$$\frac{\det L(\rho)}{\aa{L}_{\mathcal{C}^{j}}^{m-1}}\ge\kappa$$
outside a set $\Sigma'$ of Lebesgue measure
$$\le Cte\,\frac{|\nabla_\rho L|_{\mathcal{C}^{j-1}(\mathcal{D})}}{\delta_0}\Big(\frac{\kappa}{\delta_0}\Big)^{\frac1j}.$$
Hence, by Cramer's rule,
$$\operatorname{Leb}\Sigma(L,\kappa)\le Cte\,\frac{|\nabla_\rho L|_{\mathcal{C}^{j-1}(\mathcal{D})}}{\delta_0}\Big(\frac{\kappa}{\delta_0}\Big)^{\frac1j}\le Cte\,\frac{N(\chi+\delta)}{\delta_0}\Big(\frac{\kappa}{\delta_0}\Big)^{\frac1j}.$$
Summing up over all $\ab{k}\le N$ gives the first estimate. The second estimate follows from the mean value theorem and the bound
$$\ab{\nabla_\rho L_{k,[a]}(\rho)}\le N(\chi+\delta).$$
\end{proof}

\begin{lemma}\label{lSmallDiv3}
Let
$$L_{k,[a],[b]}=(\langle k,\omega\rangle I-{\mathbf i}\operatorname{ad}_{JA})_{[a]}^{[b]}.$$
There exists a constant $C$ such that if \eqref{ass} holds, then
$$\operatorname{Leb}\big(\bigcup_{\begin{subarray}{c}0<\ab{k}\le N\\ \ab{a-b}\le\Delta'\end{subarray}}\Sigma(L_{k,[a],[b]},\kappa)\big)\le Cte\,N^{\exp}\Big(\frac{\chi+\delta}{\delta_0}\Big)\Big(\frac{\kappa}{\delta_0}\Big)^{\alpha}$$
and
$$\underline{\operatorname{dist}}(\mathcal{D}\setminus\Sigma(L_{k,[a],[b]},\kappa),\Sigma(L_{k,[a],[b]},\frac\kappa2))>\frac1{C}\frac{\kappa}{N(\chi+\delta)},$$
for any $\kappa>0$. The exponent $\exp$ only depends on $\frac{d_*}{\beta_1}$ and $\#\mathcal{A}$. The exponent $\alpha$ is a positive constant only depending on $s_*,\frac{d_*}{\varkappa},\frac{d_*}{\beta_3}$. $C$ is an absolute constant that depends on $c$ and $\sup_{\mathcal{D}}\ab{\omega}$.
\end{lemma}

\begin{proof}
Consider first $a,b\in\mathcal{F}$. This case is treated as the operator $L(\rho)=\big(\langle k,\omega\rangle I-{\mathbf i}JH\big)$ in the previous lemma.

Consider then $a\in\mathcal{L}_\infty$ and $b\in\mathcal{F}$, so that
$$L_{k,[a]}(\rho):X\mapsto\langle k,\omega(\rho)\rangle X+Q_{[a]}(\rho)X+X{\mathbf i}JH(\rho).$$
Let
$$L(\rho,\lambda):X\mapsto\langle k,\omega(\rho)\rangle X+\lambda X+{\mathbf i}XJH(\rho)$$
and
$$P(\rho,\lambda)=\det L(\rho,\lambda).$$
Since $L_{k,[a]}(\rho)$ is ``partially'' Hermitian,
$$\|L_{k,[a]}(\rho)^{-1}\|\leq\max\aa{L(\rho,\lambda(\rho))^{-1}},$$
where the maximum is taken over all eigenvalues $\lambda(\rho)$ of $Q(\rho)$. If we let
$$L_{\textrm{up}}(\rho,\lambda):X\mapsto\langle k,\omega(\rho)\rangle X+\lambda X+X{\mathbf i}JH_{\textrm{up}}(\rho),$$
then it follows from \eqref{hypoB} and \eqref{ass} that
$$\aa{L-L_{\textrm{up}}}_{\mathcal{C}^{s_*}(\mathcal{D})}\leq\delta\leq\frac{\delta_0}2.$$
If $L_{\textrm{up}}$ is $\delta_0$-invertible, then this implies that $L$ is $\frac{\delta_0}2$-invertible. Otherwise
$$\frac{d}{d{\mathfrak z}}P(\rho,\lambda(\rho))=\partial_{\mathfrak z}P(\rho,\lambda(\rho))+\partial_{\lambda}P(\rho,\lambda(\rho))\langle v(\rho),\partial_{\mathfrak z}Q(\rho)v(\rho)\rangle=$$
$$=\partial_{\mathfrak z}P_{\textrm{up}}(\rho,\lambda_a(\rho))+\partial_{\lambda}P_{\textrm{up}}(\rho,\lambda_a(\rho))\langle v(\rho),\partial_{\mathfrak z}Q_{\textrm{up}}(\rho)v(\rho)\rangle+\mathcal{O}(\delta\,\|L\|_{\mathcal{C}^{1}(\mathcal{D})}^{m-1}).$$
By Assumption A2$(ii)$ there exists a unit vector ${\mathfrak z}$ such that
$$\ab{\frac{d}{d{\mathfrak z}}P(\rho,\lambda(\rho))}\ge\frac{\delta_0}2\|L\|_{\mathcal{C}^{1}(\mathcal{D})}^{m-1}.$$
Hence, the Lebesgue measure of $\Sigma(L_{k,[a]},\kappa)$ is $\lesssim d_{\Delta}^{d_*}\frac{\kappa}{\delta_0}$ -- recall that, by Remark \ref{remark-blocks}, the operator is of dimension $\lesssim d_{\Delta}^{d_*}$. (This argument is valid if $\lambda(\rho)$ is $\mathcal{C}^1$ in the direction $\mathfrak z$, which can always be assumed when $Q$ is analytic in $\rho$. The non-analytic case follows by analytic approximation.)

Let $\lambda(\rho)$ be an eigenvalue of $Q_{[a]}$. Since $|\langle k,\omega(\rho)\rangle|\lesssim\ab{k}\lesssim N$, it follows by \eqref{la-lb-ter} that
$$|\langle k,\omega(\rho)\rangle+\lambda(\rho)|\geq\ab{\lambda_a(\rho)}-\delta-Cte\,\ab{k}\ge\ab{a}^{\beta_1}-c\langle a\rangle^{-\beta_2}-\delta-Cte\,\ab{k}$$
for some appropriate $a\in[a]$. Hence, $\Sigma(L_{k,[a]},\kappa)=\emptyset$ for $|a|\gtrsim N^{\frac1{\beta_1}}$. Summing up over all $0<\ab{k}\le N$ and all $|a|\lesssim N^{\frac1{\beta_1}}$ gives the first estimate.

Consider finally $a,b\in\mathcal{L}_\infty$.
Then $L_{k,[a],[b]}$ decouples into a sum of four Hermitian operators of the forms
$$L_{k,[a],[b]}(\rho):X\mapsto\langle k,\omega\rangle X+Q_{[a]}X+X{}^tQ_{[b]}$$
and
$$L_{k,[a],[b]}(\rho):X\mapsto\langle k,\omega\rangle X+Q_{[a]}X-XQ_{[b]}.$$
The first one is treated exactly as the operator $X\mapsto\langle k,\omega\rangle X+Q_{[a]}X$ in the previous lemma, so let us concentrate on the second one.

It follows as in the previous lemma that the Lebesgue measure of $\Sigma(L_{k,[a],[b]},\kappa')$ is $\lesssim d_{\Delta}^{2d_*}\frac{\kappa'}{\delta_0}$ -- recall that, by Remark \ref{remark-blocks}, the operator is of dimension $\lesssim d_{\Delta}^{2d_*}$. The problem now is the measure estimate of $\bigcup\Sigma(L_{k,[a],[b]},\kappa)$ since, a priori, there may be infinitely many $\Sigma(L_{k,[a],[b]},\kappa)$ that are non-void.

We can assume without restriction that $\ab{a}\le\ab{b}$. Since $|\langle k,\omega(\rho)\rangle|\le Cte\,\ab{k}\le Cte\,N$, it is enough to consider $\ab{b}-\ab{a}\le Cte\,N$ (because $\beta_1\ge1$). Since $\beta_1=2$, $\ab{a}^{\beta_1}-\ab{b}^{\beta_1}$ is an integer, and outside a set $\Sigma(2\kappa')$ of Lebesgue measure
$$\le Cte\,N\frac{\kappa'}{\delta_0}$$
we have
$$|\langle k,\omega(\rho)\rangle+\ab{a}^{\beta_1}-\ab{b}^{\beta_1}|\ge2\kappa'.$$
Then, by \eqref{la-lb},
$$|\langle k,\omega(\rho)\rangle+\alpha(\rho)-\beta(\rho)|\ge2\kappa'-2\frac{\delta}{\langle a\rangle^{\varkappa}}-2\frac{c'c}{\langle a\rangle^{\beta_3}}$$
for any $\alpha(\rho)$ and $\beta(\rho)$, eigenvalues of $Q_{[a]}(\rho)$ and $Q_{[b]}(\rho)$, respectively. Now this is $\ge\kappa'$ unless
$$\ab{a}\le Cte\,\min\Big(\big(\frac{\delta}{\kappa'}\big)^{\frac1{\varkappa}},\big(\frac{c'}{\kappa'}\big)^{\frac1{\beta_3}}\Big)=:M.$$
Hence, if $\kappa'\ge\kappa$, then
$$\bigcup_{[a],[b],k}\Sigma(L_{k,[a],[b]},\kappa)\subset\Sigma(2\kappa')\cup\bigcup_{\ab{a},\ab{b}\le M+Cte\,N,\ k}\Sigma(L_{k,[a],[b]},\kappa).$$
This set has measure
$$\lesssim N\frac{\kappa'}{\delta_0}+\Big(N\frac{\delta_0}{\kappa'}\Big)^{2d_*\max(\frac1{\beta_3},\frac1{\varkappa})}\frac{\kappa}{\delta_0}.$$
By an appropriate choice of $\kappa'\in[\kappa,\delta_0]$, this becomes
$$\le Cte\,N^{\exp}\Big(\frac{\kappa}{\delta_0}\Big)^{\alpha}$$
for some $\alpha>0$ depending on $\frac{d_*}{\beta_3},\frac{d_*}{\varkappa}$.
\end{proof}

\section{Homological equation}\label{s6}

Let $h$ be a normal form Hamiltonian \eqref{normform},
$$h(r,w,\rho)=\langle\omega(\rho),r\rangle+\frac12\langle w,A(\rho)w\rangle\in\mathcal{NF}_{\varkappa}(\Delta,\delta),$$
and assume $\varkappa>0$ and
\begin{equation}\label{ass1}
\delta\le\frac1{C}c',
\end{equation}
where $C$ is to be determined. Let
$$\gamma=(\gamma,m_*)\ge\gamma_*=(0,m_*).$$

\begin{remark}\label{rAbuse}
Notice the abuse of notation here.
It will be clear from the context when $\gamma$ is a two-vector, like in $\aa{\cdot}_{\gamma,\varkappa}$, and when it is a scalar, like in $e^{\gamma d}$.
\end{remark}

Let $f\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$. In this section we shall construct a jet-function $S$ that solves the {\it non-linear homological equation}
\begin{equation}\label{eqNlHomEq}
\{h,S\}+\{f-f^T,S\}^T+f^T=0
\end{equation}
as well as possible -- the reason for this will be explained at the beginning of the next section. In order to do this we shall start by analyzing the {\it homological equation}
\begin{equation}\label{eqHomEq}
\{h,S\}+f^T=0.
\end{equation}
We shall solve this equation modulo some ``cokernel'' and modulo an ``error''.

\subsection{Three components of the homological equation}\label{ssFourComponents}

Let us write
$$f^T(\theta,r,w)=f_r(r,\theta)+\langle f_w(\theta),w\rangle+\frac12\langle f_{ww}(\theta)w,w\rangle$$
and recall that, by Proposition \ref{lemma:jet}, $f^T\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$. Let
$$S(\theta,r,w)=S_r(r,\theta)+\langle S_w(\theta),w\rangle+\frac12\langle S_{ww}(\theta)w,w\rangle,$$
where $f_r$ and $S_r$ are affine functions in $r$ -- here we have not indicated the dependence on $\rho$. Then the Poisson bracket $\{h,S\}$ equals
\begin{multline*}
-\Big(\partial_{\omega}S_r(r,\theta)+\langle\partial_{\omega}S_w(\theta),w\rangle+\frac12\langle\partial_{\omega}S_{ww}(\theta)w,w\rangle\\
+\langle AJS_w(\theta),w\rangle+\frac12\langle AJS_{ww}(\theta)w,w\rangle-\frac12\langle S_{ww}(\theta)JAw,w\rangle\Big),
\end{multline*}
where $\partial_{\omega}$ denotes the derivative with respect to the angles $\theta$ in the direction $\omega$. Accordingly, the homological equation \eqref{eqHomEq} decomposes into three linear equations:
$$\left\{\begin{array}{l}
\partial_{\omega}S_r(r,\theta)=f_r(r,\theta),\\
\partial_{\omega}S_w(\theta)-AJS_w(\theta)=f_w(\theta),\\
\partial_{\omega}S_{ww}(\theta)-AJS_{ww}(\theta)+S_{ww}(\theta)JA=f_{ww}(\theta).
\end{array}\right.$$

\subsection{The first equation}\label{homogene}

\begin{lemma}\label{prop:homo12}
There exists a constant $C$ such that if \eqref{ass1} holds, then, for any $N\ge1$ and $\kappa>0$, there exists a closed set $\mathcal{D}_1=\mathcal{D}_1(h,\kappa,N)\subset\mathcal{D}$, satisfying
$$\operatorname{Leb}(\mathcal{D}\setminus\mathcal{D}_1)\leq CN^{\exp}\frac{\chi+\delta}{\delta_0}\frac{\kappa}{\delta_0},$$
and there exist $\mathcal{C}^{s_*}$ functions $S_r$ and $R_r:\mathbb{C}^{\mathcal{A}}\times\mathbb{T}^{\mathcal{A}}\times\mathcal{D}\to\mathbb{C}$, real holomorphic in $r,\theta$, such that for all $\rho\in\mathcal{D}_1$
\begin{equation}\label{homo1}
\partial_{\omega(\rho)}S_r(r,\theta,\rho)=f_r(r,\theta,\rho)-\hat f_r(r,0,\rho)\footnote{\ $\hat f_r(r,0,\rho)$ is the $0$:th Fourier coefficient, or the mean value, of the function $\theta\mapsto f_r(r,\theta,\rho)$.}-R_r(\theta,\rho)
\end{equation}
and for all $(r,\theta,\rho)\in\mathbb{C}^{\mathcal{A}}\times\mathbb{T}^{\mathcal{A}}_{\sigma'}\times\mathcal{D}$, $\ab{r}<\mu$, $\sigma'<\sigma$, and $|j|\le s_*$,
\begin{align}
\label{homo1S}
|\partial_\rho^jS_r(r,\theta,\rho)|\leq&\ C\frac1{\kappa(\sigma-\sigma')^{n}}\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}|f^T|_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},\\
\label{homo1R}
|\partial_\rho^jR_r(r,\theta,\rho)|\leq&\ C\frac{e^{-(\sigma-\sigma')N}}{(\sigma-\sigma')^{n}}|f^T|_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}.
\end{align}
The exponent $\exp$ only depends on $n=\#\mathcal{A}$, and $C$ is an absolute constant.
\end{lemma}

\begin{proof}
Written in Fourier components, equation \eqref{homo1} becomes, for $k\in\mathbb{Z}^{\mathcal{A}}$,
$$L_k(\rho)\hat S(k)=:\langle k,\omega(\rho)\rangle\hat S(k)=-{\mathbf i}(\hat F(k)-\hat R(k)),$$
where we have written $S$, $F$ and $R$ for $S_r$, $(f_r-\hat f_r)$ and $R_r$ respectively. Therefore \eqref{homo1} has the (formal) solution
$$S(r,\theta,\rho)=\sum\hat S(r,k,\rho)e^{{\mathbf i}\langle k,\theta\rangle}\quad\textrm{and}\quad R(r,\theta,\rho)=\sum\hat R(r,k,\rho)e^{{\mathbf i}\langle k,\theta\rangle}$$
with
$$\hat S(r,k,\rho)=\left\{\begin{array}{ll}-L_k(\rho)^{-1}{\mathbf i}\hat F(r,k,\rho)&\textrm{ if }0<|k|\le N\\ 0&\textrm{ if not}\end{array}\right.$$
and
$$\hat R(r,k,\rho)=\left\{\begin{array}{ll}\hat F(r,k,\rho)&\textrm{ if }|k|>N\\ 0&\textrm{ if not.}
\end{array}\right.$$
By Lemma \ref{lSmallDiv1},
$$\|(L_k(\rho))^{-1}\|\le\frac1{\kappa}$$
for all $\rho$ outside some set $\Sigma(L_k,\kappa)$ such that
$$\underline{\operatorname{dist}}(\mathcal{D}\setminus\Sigma(L_k,\kappa),\Sigma(L_k,\frac\kappa2))\ge\operatorname{ct.}\frac{\kappa}{N(\chi+\delta)},$$
and
$$\mathcal{D}_1=\mathcal{D}\setminus\bigcup_{0<|k|\le N}\Sigma(L_k,\kappa)$$
fulfills the estimate of the lemma. For $\rho\notin\Sigma(L_k,\frac\kappa2)$ we get
$$|\hat S(r,k,\rho)|\le Cte\,\frac1{\kappa}|\hat F(r,k,\rho)|.$$
Differentiating the formula for $\hat S(r,k,\rho)$ once we obtain
$$\partial^j_\rho\hat S(r,k,\rho)=\Big(-\frac{{\mathbf i}}{\langle\omega,k\rangle}\partial^j_\rho\hat F(r,k,\rho)+\frac{{\mathbf i}}{\langle\omega,k\rangle^2}\langle\partial^j_\rho\omega,k\rangle\hat F(r,k,\rho)\Big),$$
which gives, for $\rho\notin\Sigma(L_k,\frac\kappa2)$,
$$|\partial^j_\rho\hat S(r,k,\rho)|\le Cte\,\frac1{\kappa}\Big(N\frac{\chi+\delta}{\kappa}\Big)\max_{0\le l\le j}|\partial^l_\rho\hat F(r,k,\rho)|.$$
(Here we used that $|\partial_\rho\omega(\rho)|\le\chi+\delta$.) The higher order derivatives are estimated in the same way, and this gives
$$|\partial_\rho^j\hat S(r,k,\rho)|\le Cte\,\frac1{\kappa}\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}\max_{0\le l\le j}|\partial^l_\rho\hat F(r,k,\rho)|$$
for any $|j|\le s_*$, where $Cte$ is an absolute constant.

By Lemma \ref{lExtension}, there exists a $\mathcal{C}^\infty$-function $g_k:\mathcal{D}\to\mathbb{R}$, being $=1$ outside $\Sigma(L_k,\kappa)$ and $=0$ on $\Sigma(L_k,\frac\kappa2)$, and such that for all $j\ge0$
$$|g_k|_{\mathcal{C}^j(\mathcal{D})}\le\Big(Cte\,\frac{N(\chi+\delta)}{\kappa}\Big)^j.$$
Multiplying $\hat S(r,k,\rho)$ with $g_k(\rho)$ gives a $\mathcal{C}^{s_*}$-extension of $\hat S(r,k,\rho)$ from $\mathcal{D}\setminus\Sigma(L_k,\kappa)$ to $\mathcal{D}$ satisfying the same bound as $\hat S(r,k,\rho)$.

It follows now, by a classical argument, that the formal solution converges and that $|\partial_\rho^jS(r,\theta,\rho)|$ and $|\partial_\rho^jR(r,\theta,\rho)|$ fulfill the estimates of the lemma. When summing up the series for $|\partial_\rho^jR(r,\theta,\rho)|$ we get a term $e^{-\frac1C(\sigma-\sigma')N}$, but the factor $\frac1C$ disappears by replacing $N$ by $CN$. By construction, $S$ and $R$ solve equation \eqref{homo1} for any $\rho\in\mathcal{D}_1$.
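For the reader's convenience we record the elementary estimate behind the summation: for $0<t\le1$,
$$\sum_{k\in\mathbb{Z}^n\setminus\{0\}}e^{-t|k|}\le C_n\sum_{m\ge1}m^{n-1}e^{-tm}\le\frac{C_n'}{t^{n}},$$
applied with $t=\sigma-\sigma'$ to the Fourier series of $S$ and $R$ (whose coefficients, by the analyticity of $f^T$ on $\ab{\Im\theta}<\sigma$, decay like $e^{-\sigma|k|}$ up to the sup-norm of $f^T$, while $e^{{\mathbf i}\langle k,\theta\rangle}$ is evaluated at $\ab{\Im\theta}<\sigma'$); this is what produces the factors $(\sigma-\sigma')^{-n}$ in \eqref{homo1S} and \eqref{homo1R}.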
\end{proof}

\subsection{The second equation}\label{s5.3}

Concerning the second component of the homological equation we have

\begin{lemma}\label{prop:homo3}
There exists an absolute constant $C$ such that if \eqref{ass1} holds, then, for any $N\ge1$ and
$$0<\kappa\le c',$$
there exists a closed set $\mathcal{D}_2=\mathcal{D}_2(h,\kappa,N)\subset\mathcal{D}$, satisfying
$$\operatorname{Leb}(\mathcal{D}\setminus\mathcal{D}_2)\leq CN^{\exp}\Big(\frac{\chi+\delta}{\delta_0}\Big)\Big(\frac{\kappa}{\delta_0}\Big)^{\frac1{s_*}},$$
and there exist $\mathcal{C}^{s_*}$-functions $S_w$ and $R_w:\mathbb{T}^{\mathcal{A}}\times\mathcal{D}\to Y_\gamma$, real holomorphic in $\theta$, such that for $\rho\in\mathcal{D}_2$
\begin{equation}\label{homo2}
\partial_{\omega(\rho)}S_w(\theta,\rho)-A(\rho)JS_w(\theta,\rho)=f_w(\theta,\rho)-R_w(\theta,\rho)
\end{equation}
and for all $(\theta,\rho)\in\mathbb{T}^{\mathcal{A}}_{\sigma'}\times\mathcal{D}$, $\sigma'<\sigma$, and $|j|\le s_*$,
\begin{align}
\label{homo2S}
\|\partial_\rho^jS_w(\theta,\rho)\|_{\gamma}\leq&\ C\frac1{\kappa(\sigma-\sigma')^{n}}\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}|f^T|_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},\\
\label{homo2R}
\|\partial_\rho^jR_w(\theta,\rho)\|_{\gamma}\leq&\ C\frac{e^{-(\sigma-\sigma')N}}{(\sigma-\sigma')^{n}}|f^T|_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}.
\end{align}
The exponent $\exp$ only depends on $\frac{d_*}{\beta_1}$ and $n=\#\mathcal{A}$, and $C$ is an absolute constant that depends on $c$ and $\sup_{\mathcal{D}}\ab{\omega}$.
\end{lemma}

\begin{proof}
Let us re-write \eqref{homo2} in the complex variables $\xi$ and $\eta$ described in Section \ref{ssUnperturbed}. The quadratic form $\frac12\langle w,A(\rho)w\rangle$ gets transformed, by $w=Uz$, to
$$\langle\xi,Q(\rho)\eta\rangle+\frac12\langle z_{\mathcal{F}},H'(\rho)z_{\mathcal{F}}\rangle,$$
where $Q$ is a Hermitian matrix and $H'$ is a real symmetric matrix. Then we make in \eqref{homo2} the substitution $S={}^t\!US_w$, $R={}^t\!UR_w$ and $F={}^t\!Uf_w$, where $S={}^t(S^\xi,S^\eta,S^{\mathcal{F}})$, etc. In this notation eq.~\eqref{homo2} decouples into the equations
\begin{align*}
\partial_{\omega}S^\xi+{\mathbf i}QS^\xi&=F^\xi-R^\xi,\\
\partial_{\omega}S^\eta-{\mathbf i}\,{}^t\!QS^\eta&=F^\eta-R^\eta,\\
\partial_{\omega}S^{\mathcal{F}}-HJS^{\mathcal{F}}&=F^{\mathcal{F}}-R^{\mathcal{F}}.
\end{align*}
Let us consider the first equation. Written in Fourier components it becomes
\begin{equation}\label{homo2.10}
(\langle k,\omega(\rho)\rangle I+Q)\hat S^\xi(k)=-{\mathbf i}(\hat F^\xi(k)-\hat R^\xi(k)).
\end{equation}
This equation decomposes into its ``components'' over the blocks $[a]=[a]_{\Delta}$ and takes the form
\begin{equation}\label{homo2.1}
L_{k,[a]}(\rho)\hat S_{[a]}(k)=:\big(\langle k,\omega(\rho)\rangle+Q_{[a]}\big)\hat S_{[a]}(k)=-{\mathbf i}\big(\hat F_{[a]}(k)-\hat R_{[a]}(k)\big)
\end{equation}
-- the matrix $Q_{[a]}$ being the restriction of $Q^\xi$ to $[a]\times[a]$, the vector $F_{[a]}$ being the restriction of $F^\xi$ to $[a]$, etc. Equation \eqref{homo2.10} has the (formal) solution
$$ \hat S_{[a]}(k,\rho)=\left\{\begin{array}{ll} -(L_{k,[a]}(\rho))^{-1}{\mathbf i}\hat F_{[a]}(k,\rho) & \textrm{if } |k|\le N,\\ 0 & \textrm{if not,}\end{array}\right.$$
and
$$ \hat R_{a}(k,\rho)=\left\{\begin{array}{ll} \hat F_{a}(k,\rho) & \textrm{if } |k|> N,\\ 0 & \textrm{if not.}\end{array}\right.$$
For $k\not=0$, by Lemma \ref{lSmallDiv2},
$$ \|(L_{k,[a]}(\rho))^{-1}\|\le\frac1\kappa $$
for all $\rho$ outside some set $\Sigma(L_{k,[a]},\kappa)$ such that
$$\underline{\operatorname{dist}}\big(\mathcal{D}\setminus\Sigma(L_{k,[a]},\kappa),\Sigma(L_{k,[a]},\tfrac\kappa2)\big)\ge{\operatorname{ct.}}\,\frac{\kappa}{N(\chi+\delta)},$$
and
$$\mathcal{D}_2=\mathcal{D}\setminus\bigcup_{\begin{subarray}{c}0<|k|\le N\\ [a]\end{subarray}}\Sigma_{k,[a]}(\kappa)$$
fulfills the required estimate. For $k=0$, it follows by \eqref{ass1} and \ref{laequiv} that
$$ \|(L_{k,[a]}(\rho))^{-1}\|\le\frac1{c'}\le\frac1\kappa\,.$$
We then get, as in the proof of Lemma \ref{prop:homo12}, that $\hat S_{[a]}(k,\cdot)$ and $\hat R_{[a]}(k,\cdot)$ have $\mathcal{C}^{s_*}$-extensions to $\mathcal{D}$ satisfying
$$\|\partial_\rho^j\hat S_{[a]}(k,\rho)\|\le\operatorname{Cte}\frac1\kappa\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}\max_{0\le l\le j}\|\partial_\rho^l\hat F_{[a]}(k,\rho)\|$$
and
$$\|\partial_\rho^j R_{[a]}(k,\rho)\|\le\operatorname{Cte}\,\|\partial_\rho^j\hat F_{[a]}(k,\rho)\|,$$
and satisfying \eqref{homo2.1} for $\rho\in\mathcal{D}_2$. These estimates imply that
$$\|\partial_\rho^j\hat S^\xi(k,\rho)\|_\gamma\le\operatorname{Cte}\frac1\kappa\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}\max_{0\le l\le j}\|\partial_\rho^l\hat F^\xi(k,\rho)\|_\gamma$$
and
$$\|\partial_\rho^j R^\xi(k,\rho)\|_\gamma\le\operatorname{Cte}\,\|\partial_\rho^j F^\xi(k,\rho)\|_\gamma.$$
Summing up the Fourier series, as in Lemma \ref{prop:homo12}, we get
$$\|\partial_\rho^j S^\xi(\theta,\rho)\|_\gamma\le\operatorname{Cte}\frac1{\kappa(\sigma-\sigma')^{n}}\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}\max_{0\le l\le j}\sup_{|\Im\theta|<\sigma}\|\partial_\rho^l F^\xi(\cdot,\rho)\|_\gamma$$
and
$$\|\partial_\rho^j R^\xi(\theta,\rho)\|_\gamma\le\operatorname{Cte}\frac{e^{-\frac1{\operatorname{Cte}}(\sigma-\sigma')N}}{(\sigma-\sigma')^{n}}\sup_{|\Im\theta|<\sigma}\|\partial_\rho^j F^\xi(\cdot,\rho)\|_\gamma$$
for $(\theta,\rho)\in\mathbb{T}^{\mathcal{A}}_{\sigma'}\times\mathcal{D}$, $0<\sigma'<\sigma$, and $|j|\le s_*$.
This implies the estimates \eqref{homo2S} and \eqref{homo2R} -- the factor $\frac1{\operatorname{Cte}}$ disappears by replacing $N$ by $\operatorname{Cte}N$. The other two equations are treated in exactly the same way.
\endproof

\subsection{The third equation}\label{s5.4}

Concerning the third component of the homological equation, \eqref{eqHomEq}, we have the following result, where for a solution $S_{ww}(\theta,\rho)$ we estimate separately its mean value $\hat S_{ww}(0,\rho)$ and the deviation from the mean value $S_{ww}(\theta,\rho)-\hat S_{ww}(0,\rho)$.

\begin{lemma}\label{prop:homo4}
There exists an absolute constant $C$ such that if \eqref{ass1} holds, then, for any $N\ge1$, $\Delta'\ge\Delta\ge1$, and
$$\kappa\le\frac1C c',$$
there exists a subset $\mathcal{D}_3=\mathcal{D}_3(h,\kappa,N,\Delta')\subset\mathcal{D}$, satisfying
$$\operatorname{Leb}(\mathcal{D}\setminus\mathcal{D}_3)\le C N^{\exp}\Big(\frac{\chi+\delta}{\delta_0}\Big)\Big(\frac{\kappa}{\delta_0}\Big)^{\alpha},$$
and there exist real $\mathcal{C}^{s_*}$-functions $B_{ww}:\mathcal{D}\to\mathcal{M}_{\gamma,\varkappa}\cap\mathcal{NF}_{\Delta'}$ and $S_{ww},R_{ww}:\mathbb{T}^{\mathcal{A}}\times\mathcal{D}\to\mathcal{M}_{\gamma,\varkappa}$, real holomorphic in $\theta$, such that for all $\rho\in\mathcal{D}_3$
\begin{multline}\label{homo3}
\partial_{\omega(\rho)} S_{ww}(\theta,\rho)-A(\rho)JS_{ww}(\theta,\rho)+S_{ww}(\theta,\rho)JA(\rho)=\\
f_{ww}(\theta,\rho)-B_{ww}(\rho)-R_{ww}(\theta,\rho)
\end{multline}
and for all $(\theta,\rho)\in\mathbb{T}^{\mathcal{A}}_{\sigma'}\times\mathcal{D}$, $\sigma'<\sigma$, and $|j|\le s_*$
\begin{equation}\label{homo3S}
\aa{\partial_\rho^j S_{ww}(\theta,\rho)}_{\gamma',\varkappa}\le C\Delta'\frac{\Delta^{\exp_2}e^{2\gamma d_\Delta}}{\kappa(\sigma-\sigma')^{n}}\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},
\end{equation}
\begin{equation}\label{homo3R}
\aa{\partial_\rho^j R_{ww}(\theta,\rho)}_{\gamma',\varkappa}\le C\Delta'\Delta^{\exp_2}\left(\frac{e^{-(\sigma-\sigma')N}+e^{-(\gamma-\gamma')\Delta'}}{(\sigma-\sigma')^{n}}\right)\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},
\end{equation}
\begin{equation}\label{homo3B}
\aa{\partial_\rho^j B_{ww}(\rho)}_{\gamma',\varkappa}\le C\Delta'\Delta^{\exp_2}\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},
\end{equation}
for any $\gamma_*\le\gamma'\le\gamma$. The exponent $\exp$ only depends on $\frac{d_*}{\beta_1}$ and $n=\#\mathcal{A}$. The exponent $\exp_2$ only depends on $d_*,m_*$. The exponent $\alpha$ is a positive constant only depending on $s_*$, $\frac{d_*}{\varkappa}$, $\frac{d_*}{\beta_3}$.
$C$ is an absolute constant that depends on $c$ and $\sup_\mathcal{D}\ab{\omega}$.
\end{lemma}

\proof
It is enough to find complex solutions $S_{ww}$, $R_{ww}$ and $B_{ww}$ verifying the estimates, because then their real parts will do the job. As in the previous section, and using the same notation, we re-write \eqref{homo3} in complex variables. So we introduce $S={}^t\!U S_{\zeta\zeta}U$, $R={}^t\!U R_{\zeta\zeta}U$, $B={}^t\!U B_{\zeta\zeta}U$ and $F={}^t\!U Jf_{\zeta\zeta}U$. In appropriate notations \eqref{homo3} decouples into the equations
\begin{align*}
&\partial_{\omega} S^{\xi\xi}+{\mathbf i}QS^{\xi\xi}+{\mathbf i}S^{\xi\xi}\,{}^tQ=F^{\xi\xi}-B^{\xi\xi}-R^{\xi\xi},\\
&\partial_{\omega} S^{\xi\eta}+{\mathbf i}QS^{\xi\eta}-{\mathbf i}S^{\xi\eta}Q=F^{\xi\eta}-B^{\xi\eta}-R^{\xi\eta},\\
&\partial_{\omega} S^{\xi z_{\mathcal{F}}}+{\mathbf i}QS^{\xi z_{\mathcal{F}}}+S^{\xi z_{\mathcal{F}}}JH=F^{\xi z_{\mathcal{F}}}-B^{\xi z_{\mathcal{F}}}-R^{\xi z_{\mathcal{F}}},\\
&\partial_{\omega} S^{z_{\mathcal{F}}z_{\mathcal{F}}}+HJS^{z_{\mathcal{F}}z_{\mathcal{F}}}-S^{z_{\mathcal{F}}z_{\mathcal{F}}}JH=F^{z_{\mathcal{F}}z_{\mathcal{F}}}-B^{z_{\mathcal{F}}z_{\mathcal{F}}}-R^{z_{\mathcal{F}}z_{\mathcal{F}}},
\end{align*}
and equations for $S^{\eta\eta},S^{\eta\xi},S^{z_{\mathcal{F}}\xi},S^{\eta z_{\mathcal{F}}},S^{z_{\mathcal{F}}\eta}$. Since those latter equations are of the same type as the first four, we shall concentrate on these first.
\smallskip

{\it First equation.} Written in the Fourier components it becomes
\begin{equation}\label{homo3.10}
(\langle k,\omega(\rho)\rangle I+Q)\hat S^{\xi\xi}(k)+\hat S^{\xi\xi}(k)\,{}^tQ=-{\mathbf i}\big(\hat F^{\xi\xi}(k)-\delta_{k,0}B-\hat R^{\xi\xi}(k)\big).
\end{equation}
This equation decomposes into its ``components'' over the blocks $[a]\times[b]$, $[a]=[a]_\Delta$, and takes the form
\begin{multline}\label{homo3.1}
L(k,[a],[b],\rho)\hat S_{[a]}^{[b]}(k)=:\langle k,\omega(\rho)\rangle\,\hat S_{[a]}^{[b]}(k)+Q_{[a]}(\rho)\hat S_{[a]}^{[b]}(k)+\\
\hat S_{[a]}^{[b]}(k)\,{}^tQ_{[b]}(\rho)=-{\mathbf i}\big(\hat F_{[a]}^{[b]}(k,\rho)-\hat R_{[a]}^{[b]}(k)-\delta_{k,0}B_{[a]}^{[b]}\big)
\end{multline}
-- the matrix $Q_{[a]}$ being the restriction of $Q^{\xi\xi}$ to $[a]\times[a]$, the vector $F_{[a]}^{[b]}$ being the restriction of $F^{\xi\xi}$ to $[a]\times[b]$, etc. Equation \eqref{homo3.10} has the (formal) solution
$$\hat S_{[a]}^{[b]}(k,\rho)=\left\{\begin{array}{ll} -L(k,[a],[b],\rho)^{-1}{\mathbf i}\hat F_{[a]}^{[b]}(k,\rho) & \textrm{if } \underline{\operatorname{dist}}([a],[b])\le\Delta'\ \textrm{and}\ |k|\le N,\\ 0 & \textrm{if not,}\end{array}\right.$$
$$\hat R_{a}^{b}(k,\rho)=\left\{\begin{array}{ll} \hat F_{a}^{b}(k,\rho) & \textrm{if } \underline{\operatorname{dist}}([a],[b])\ge\Delta'\ \textrm{or}\ |k|>N,\\ 0 & \textrm{if not.}\end{array}\right.$$
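For orientation we record the standard observation about such Sylvester-type operators; it is not used quantitatively here, and $\{\alpha_i\}$, $\{\beta_j\}$ below merely denote the eigenvalues of the Hermitian blocks $Q_{[a]}(\rho)$ and $Q_{[b]}(\rho)$. On the block $[a]\times[b]$ the operator $L(k,[a],[b],\rho)$ acts by
$$M\longmapsto\langle k,\omega(\rho)\rangle M+Q_{[a]}(\rho)M+M\,{}^tQ_{[b]}(\rho),$$
so its eigenvalues are the sums $\langle k,\omega(\rho)\rangle+\alpha_i+\beta_j$; these sums are the small divisors that have to be bounded away from zero.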
For $k\not=0$, by Lemma \ref{lSmallDiv3},
$$\|(L_{k,[a],[b]}(\rho))^{-1}\|\le\frac1\kappa$$
for all $\rho$ outside some set $\Sigma_{k,[a],[b]}(\kappa)$ such that
$$\underline{\operatorname{dist}}\big(\mathcal{D}\setminus\Sigma_{k,[a],[b]}(\kappa),\Sigma_{k,[a],[b]}(\tfrac\kappa2)\big)\ge{\operatorname{ct.}}\,\frac{\kappa}{N(\chi+\delta)},$$
and
$$\mathcal{D}_3=\mathcal{D}\setminus\bigcup_{\begin{subarray}{c}0<|k|\le N\\ [a],[b]\end{subarray}}\Sigma_{k,[a],[b]}(\kappa)$$
fulfills the required estimate. For $k=0$, it follows by \eqref{ass1} and \ref{laequiv} that
$$\|(L_{k,[a],[b]}(\rho))^{-1}\|\le\frac1{c'}\le\frac1\kappa\,.$$
We then get, as in the proof of Lemma \ref{prop:homo12}, that $\hat S_{[a]}^{[b]}(k,\cdot)$ and $\hat R_{[a]}^{[b]}(k,\cdot)$ have $\mathcal{C}^{s_*}$-extensions to $\mathcal{D}$ satisfying
$$\|\partial_\rho^j\hat S_{[a]}^{[b]}(k,\rho)\|\le\operatorname{Cte}\frac1\kappa\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}\max_{0\le l\le j}\|\partial_\rho^l\hat F_{[a]}^{[b]}(k,\rho)\|$$
and
$$\|\partial_\rho^j R_{a}^{b}(k,\rho)\|\le\operatorname{Cte}\,\|\partial_\rho^j\hat F_{a}^{b}(k,\rho)\|,$$
and satisfying \eqref{homo3.1} for $\rho\in\mathcal{D}_3$. These estimates imply that, for any $\gamma_*\le\gamma'\le\gamma$,
$$\|\partial_\rho^j\hat S^{\xi\xi}(k,\rho)\|_{\mathcal{B}(Y_{\gamma'},Y_{\gamma'})}\le\operatorname{Cte}\,\Delta'\frac{\Delta^{\exp}e^{2\gamma d_\Delta}}{\kappa}\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}\max_{0\le l\le j}\|\partial_\rho^l\hat F^{\xi\xi}(k,\rho)\|_{\mathcal{B}(Y_{\gamma'},Y_{\gamma'})}$$
and
$$\|\partial_\rho^j\hat R^{\xi\xi}(k,\rho)\|_{\mathcal{B}(Y_{\gamma'},Y_{\gamma'})}\le\operatorname{Cte}\,\Delta'\Delta^{\exp}\|\partial_\rho^j\hat F^{\xi\xi}(k,\rho)\|_{\mathcal{B}(Y_{\gamma'},Y_{\gamma'})}.$$
The factor $\Delta^{\exp}e^{2\gamma d_\Delta}$ occurs because the diameter of the blocks, which is $\le d_\Delta$, interferes with the exponential decay and influences the equivalence between the $l^1$-norm and the operator norm. The factor $\Delta'\Delta^{\exp}$ occurs because the truncation at distance $\sim\Delta'+d_\Delta$ from the diagonal influences the equivalence between the sup-norm and the operator norm. These estimates give estimates for the matrix norms and, for any $\gamma_*\le\gamma'\le\gamma$,
$$\|\partial_\rho^j\hat S^{\xi\xi}(k,\rho)\|_{\gamma,\varkappa}\le\operatorname{Cte}\frac{\Delta^{\exp}e^{2\gamma d_\Delta}}{\kappa}\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}\max_{0\le l\le j}\|\partial_\rho^l\hat F^{\xi\xi}(k,\rho)\|_{\gamma,\varkappa}$$
and
$$\|\partial_\rho^j R^{\xi\xi}(k,\rho)\|_{\gamma,\varkappa}\le\operatorname{Cte}\,\|\partial_\rho^j F^{\xi\xi}(k,\rho)\|_{\gamma,\varkappa}.$$
Summing up the Fourier series, as in Lemma \ref{prop:homo3}, we get that $S^{\xi\xi}(\theta,\rho)$ satisfies the estimate \eqref{homo3S} and that $R^{\xi\xi}(\theta,\rho)$ satisfies the estimate \eqref{homo3R}.
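To illustrate the second point (this is only an illustration, stated for the unweighted $\ell^2$ operator norm; the precise constants are those of the matrix norms used in this paper): if a matrix $M=(M_a^b)$ vanishes whenever $\underline{\operatorname{dist}}([a],[b])>\Delta'$, then Schur's test gives
$$\|M\|\le\Big(\sup_a\#\{b:\underline{\operatorname{dist}}([a],[b])\le\Delta'\}\Big)\,\sup_{a,b}|M_a^b|,$$
and since this count grows only polynomially in $\Delta'+d_\Delta$, a factor of the type $\Delta'\Delta^{\exp}$ appears when passing from the sup-norm to the operator norm.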
\smallskip
{\it The third equation.} We write the equation in Fourier components and decompose it into its ``components'' on each product block $[a]\times[b]$, $[b]=\mathcal{F}$:
\begin{multline*}
L(k,[a],[b],\rho)\hat S_{[a]}^{[b]}(k):=\langle k,\omega(\rho)\rangle\,\hat S_{[a]}^{[b]}(k)+Q_{[a]}(\rho)\hat S_{[a]}^{[b]}(k)-\\
{\mathbf i}\hat S_{[a]}^{[b]}(k)JH(\rho)=-{\mathbf i}\big(\hat F_{[a]}^{[b]}(k,\rho)-\delta_{k,0}B_{[a]}^{[b]}-\hat R_{[a]}^{[b]}(k)\big)
\end{multline*}
-- here we have suppressed the upper index $\xi z_{\mathcal{F}}$. The formal solution is the same as in the previous case and it converges to functions verifying \eqref{homo3S}, \eqref{homo3R} and \eqref{homo3B}, by Lemma \ref{lSmallDiv3} and by \eqref{la-lb-bis}.
\smallskip

{\it The fourth equation.} We write the equation in Fourier components:
\begin{multline*}
L(k,[a],[b],\rho)\hat S_{[a]}^{[b]}(k):=\langle k,\omega(\rho)\rangle\,\hat S_{[a]}^{[b]}(k)-{\mathbf i}HJ(\rho)\hat S_{[a]}^{[b]}(k)+\\
{\mathbf i}\hat S_{[a]}^{[b]}(k)JH(\rho)=-{\mathbf i}\big(\hat F_{[a]}^{[b]}(k,\rho)-\delta_{k,0}B_{[a]}^{[b]}-\hat R_{[a]}^{[b]}(k)\big),
\end{multline*}
where $[a]=[b]=\mathcal{F}$ -- here we have suppressed the upper index $z_{\mathcal{F}}z_{\mathcal{F}}$. The equation is solved (formally) by
$$\hat S_{[a]}^{[b]}(k,\rho)=\left\{\begin{array}{ll} -L(k,[a],[b],\rho)^{-1}{\mathbf i}\hat F_{[a]}^{[b]}(k,\rho) & \textrm{if } 0<|k|\le N,\\ 0 & \textrm{if not,}\end{array}\right.$$
$$\hat R_{[a]}^{[b]}(k,\rho)=\left\{\begin{array}{ll} \hat F_{[a]}^{[b]}(k,\rho) & \textrm{if } |k|>N,\\ 0 & \textrm{if not;}\end{array}\right.$$
and
$$B_{[a]}^{[b]}(\rho)=\hat F_{[a]}^{[b]}(0,\rho).$$
The formal solution now converges to a solution verifying \eqref{homo3S}, \eqref{homo3R} and \eqref{homo3B} by Lemma \ref{lSmallDiv3}.
\smallskip

{\it The second equation.} We write the equation in Fourier components and decompose it into its ``components'' on each product block $[a]\times[b]$:
\begin{multline*}
L(k,[a],[b],\rho)\hat S_{[a]}^{[b]}(k)=:\langle k,\omega(\rho)\rangle\,\hat S_{[a]}^{[b]}(k)+Q_{[a]}(\rho)\hat S_{[a]}^{[b]}(k)-\\
\hat S_{[a]}^{[b]}(k)Q_{[b]}(\rho)=-{\mathbf i}\big(\hat F_{[a]}^{[b]}(k,\rho)-\hat R_{[a]}^{[b]}(k)-\delta_{k,0}B_{[a]}^{[b]}\big)
\end{multline*}
-- here we have suppressed the upper index $\xi\eta$. This equation is now solved (formally) by
$$S_{[a]}^{[b]}(\theta,\rho)=\sum\hat S_{[a]}^{[b]}(k,\rho)e^{{\mathbf i}k\cdot\theta}\quad\textrm{and}\quad R_{[a]}^{[b]}(\theta,\rho)=\sum\hat R_{[a]}^{[b]}(k,\rho)e^{{\mathbf i}k\cdot\theta},$$
with
$$\hat S_{[a]}^{[b]}(k,\rho)=\left\{\begin{array}{ll} L(k,[a],[b],\rho)^{-1}{\mathbf i}\hat F_{[a]}^{[b]}(k,\rho) & \textrm{if } \underline{\operatorname{dist}}([a],[b])\le\Delta'\ \textrm{and}\ 0<|k|\le N,\\ 0 & \textrm{if not,}\end{array}\right.$$
$$\hat R_{a}^{b}(k,\rho)=\left\{\begin{array}{ll} \hat F_{a}^{b}(k,\rho) & \textrm{if } \underline{\operatorname{dist}}([a],[b])\ge\Delta'\ \textrm{or}\ |k|>N,\\ 0 & \textrm{if not,}\end{array}\right.$$
and
$$B_{a}^{b}(\rho)=\left\{\begin{array}{ll} \hat F_{a}^{b}(0,\rho) & \textrm{if } \underline{\operatorname{dist}}([a],[b])\le\Delta'\ \textrm{and}\ k=0,\\ 0 & \textrm{if not.}\end{array}\right.$$
We have to distinguish two cases, depending on whether $k=0$ or not.
\smallskip

{\it The case $k\not=0$.} We have, by Lemma \ref{lSmallDiv3},
$$\|(L_{k,[a],[b]}(\rho))^{-1}\|\le\frac1\kappa$$
for all $\rho$ outside some set $\Sigma_{k,[a],[b]}(\kappa)$ such that
$$\underline{\operatorname{dist}}\big(\mathcal{D}\setminus\Sigma_{k,[a],[b]}(\kappa),\Sigma_{k,[a],[b]}(\tfrac\kappa2)\big)\ge{\operatorname{ct.}}\,\frac{\kappa}{N(\chi+\delta)},$$
and
$$\mathcal{D}_3=\mathcal{D}\setminus\bigcup_{\begin{subarray}{c}0<|k|\le N\\ [a],[b]\end{subarray}}\Sigma_{k,[a],[b]}(\kappa)$$
fulfills the required estimate.
\smallskip

{\it The case $k=0$.} In this case we consider the block decomposition $\mathcal{E}_{\Delta'}$ and we distinguish whether $|a|=|b|$ or not. If $|a|>|b|$, we use \eqref{ass1} and \eqref{la-lb-bis} to get
$$|\alpha(\rho)-\beta(\rho)|\ge c'-\frac{\delta}{\langle a\rangle^{\varkappa}}-\frac{\delta}{\langle b\rangle^{\varkappa}}\ge\frac{c'}2\ge\kappa.$$
This estimate allows us to solve the equation by choosing
$$B_{[a]}^{[b]}=\hat R_{[a]}^{[b]}(0)=0$$
and
$$\hat S_{[a]}^{[b]}(0,\rho)=L(0,[a],[b],\rho)^{-1}\hat F_{[a]}^{[b]}(0,\rho)$$
with
$$\|\partial_\rho^j\hat S_{[a]}^{[b]}(0,\rho)\|\le\operatorname{Cte}\frac1\kappa\Big(N\frac{\chi+\delta}{\kappa}\Big)^{|j|}\max_{0\le l\le j}\aa{\partial_\rho^l\hat F_{[a]}^{[b]}(0,\rho)},$$
which implies \eqref{homo3S}. If $|a|=|b|$, we cannot control $|\alpha(\rho)-\beta(\rho)|$ from below, so we then define
$$\hat S_{[a]}^{[b]}(0)=0$$
and
\begin{align*}
B_{a}^{b}(\rho)=\hat F_{a}^{b}(0,\rho),\quad \hat R_{a}^{b}(0)=0\quad&\text{for } [a]_{\Delta'}=[b]_{\Delta'},\\
\hat R_{a}^{b}(0,\rho)=\hat F_{a}^{b}(0,\rho),\quad B_{a}^{b}=0,\quad&\text{for } [a]_{\Delta'}\not=[b]_{\Delta'}.
\end{align*}
Clearly $R$ and $B$ verify the estimates \eqref{homo3R} and \eqref{homo3B}.
\smallskip

Hence, the formal solution converges to functions verifying \eqref{homo3S}, \eqref{homo3R} and \eqref{homo3B} by Lemma \ref{lSmallDiv3}. Moreover, for $\rho\in\mathcal{D}_3$, these functions solve the second equation.
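In the same spirit as the observation made for the first equation (again only a qualitative comment): for the $\xi\eta$-component the block operator is
$$M\longmapsto\langle k,\omega(\rho)\rangle M+Q_{[a]}(\rho)M-MQ_{[b]}(\rho),$$
whose eigenvalues are the differences $\langle k,\omega(\rho)\rangle+\alpha_i-\beta_j$. For $k=0$ and $|a|=|b|$ these differences may vanish, which is precisely why the corresponding Fourier coefficients are not removed but are absorbed into the normal form term $B$.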
\endproof

\subsection{The homological equation}

\begin{lemma}\label{thm-homo}
There exists a constant $C$ such that if \eqref{ass1} holds, then, for any $N\ge1$, $\Delta'\ge\Delta\ge1$ and
$$\kappa\le\frac1C c',$$
there exists a subset $\mathcal{D}'=\mathcal{D}(h,\kappa,N)\subset\mathcal{D}$, satisfying
$$\operatorname{Leb}(\mathcal{D}\setminus\mathcal{D}')\le C N^{\exp_1}\Big(\frac{\chi+\delta}{\delta_0}\Big)\Big(\frac{\kappa}{\delta_0}\Big)^{\alpha},$$
and there exist real jet-functions $S,R\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$ and $h_+$ verifying, for $\rho\in\mathcal{D}'$,
\begin{equation}\label{eqHomEqbis}
\{h,S\}+f^T=h_++R,
\end{equation}
and such that
$$h+h_+\in\mathcal{NF}_{\varkappa}(\Delta',\delta_+)$$
and, for all $0<\sigma'<\sigma$,
\begin{equation}\label{estim-B}
\ab{h_+}_{\begin{subarray}{c}\sigma',\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le X\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},
\end{equation}
\begin{equation}\label{estim-S}
\ab{S}_{\begin{subarray}{c}\sigma',\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\frac1\kappa X\Big(N\frac{\chi+\delta}{\kappa}\Big)^{s_*}\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},
\end{equation}
\begin{equation}\label{estim-R}
\ab{R}_{\begin{subarray}{c}\sigma',\mu\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le X\left(e^{-(\sigma-\sigma')N}+e^{-(\gamma-\gamma')\Delta'}\right)\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},
\end{equation}
for $\gamma_*\le\gamma'\le\gamma$, where
$$X=C\Delta'\Big(\frac{\Delta}{\sigma-\sigma'}\Big)^{\exp_2}e^{2\gamma d_\Delta}\max(1,\mu^2).$$
The exponent $\exp_1$ only depends on $\frac{d_*}{\beta_1}$ and $\#\mathcal{A}$. The exponent $\exp_2$ only depends on $d_*,m_*$ and $\#\mathcal{A}$. The exponent $\alpha$ is a positive constant only depending on $s_*$, $\frac{d_*}{\varkappa}$, $\frac{d_*}{\beta_3}$. $C$ is an absolute constant that depends on $c$ and $\sup_\mathcal{D}\ab{\omega}$.
\end{lemma}

\begin{remark}
The estimate \eqref{estim-B} provides an estimate of $\delta_+$. Indeed, for any $a,b\in[a]_{\Delta'}$,
$$\ab{\partial_\rho^j B_a^b}\le\frac1C\,\|\partial_\rho^j B\|_{\gamma,\varkappa}\,e_{\gamma,\varkappa}(a,b)^{-1}\le\operatorname{Cte}\,(\Delta')^{\varkappa}\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\frac1{\langle a\rangle^{\varkappa}}.$$
Since $\#[a]_{\Delta'}\le(\Delta')^{\exp}$ we get
$$\|\partial_\rho^j B(\rho)_{[a]_{\Delta'}}\|\le\operatorname{Cte}\,(\Delta')^{\exp}\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\frac1{\langle a\rangle^{\varkappa}}.$$
This gives the estimate of $\delta_+-\delta$.
\end{remark}

\begin{proof}
The set $\mathcal{D}'$ will now be given by the intersection of the sets in the three previous lemmas of this section. We set
$$h_+(r,w)=\hat f_r(r,0)+\frac12\langle w,Bw\rangle,$$
$$S(r,\theta,w)=S_r(\theta,r)+\langle S_w(\theta),w\rangle+\frac12\langle S_{ww}(\theta)w,w\rangle$$
and
$$R(r,\theta,w)=R_r(r,\theta)+\langle R_w(\theta),w\rangle+\frac12\langle R_{ww}(\theta)w,w\rangle.$$
These functions also depend on $\rho\in\mathcal{D}$ and they verify equation \eqref{eqHomEqbis} for $\rho\in\mathcal{D}'$. If $x=(r,\theta,w)\in\mathcal{O}_{\gamma_*}(\sigma,\mu)$, then
$$|h_+(x)|\le\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}+\frac12\|Bw\|_{\gamma_*}\|w\|_{\gamma_*}.$$
Since
$$\aa{B}_{\gamma,\varkappa}\ge\aa{B}_{\gamma_*,\varkappa}\ge\aa{B}_{\mathcal{B}(Y_{\gamma_*},Y_{\gamma_*})},$$
it follows that
$$|h_+(x)|\le\operatorname{Cte}\,\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\max(1,\mu^2).$$
We also have, for any $x=(r,\theta,w)\in\mathcal{O}_{\gamma'}(\sigma,\mu)$, $\gamma_*\le\gamma'\le\gamma$,
$$\|Jdh_+(x)\|_{\gamma'}\le\operatorname{Cte}\,\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}+\|Bw\|_{\gamma'}.$$
Since
$$\aa{B}_{\gamma,\varkappa}\ge\aa{B}_{\gamma',\varkappa}\ge\aa{B}_{\mathcal{B}(Y_{\gamma'},Y_{\gamma'})},$$
it follows that
$$\|Jdh_+(x)\|_{\gamma'}\le\operatorname{Cte}\,\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\max(1,\mu).$$
Finally, $Jd^2h_+(x)$ equals $JB$, which satisfies the required bound. The estimates of the derivatives with respect to $\rho$ are obtained in the same way. The functions $S(\theta,r,\zeta)$ and $R(\theta,r,\zeta)$ are estimated in the same way.
\end{proof}

\subsection{The non-linear homological equation}

The equation \eqref{eqNlHomEq} can now be solved easily.
\begin{proposition}\label{thm-Eq}
There exists a constant $C$ such that for any
$$h\in\mathcal{NF}_{\varkappa}(\Delta,\delta),\quad\delta\le\frac1C c',$$
and for any
$$N\ge1,\quad\Delta'\ge\Delta\ge1,\quad\kappa\le\frac1C c',$$
there exists a subset $\mathcal{D}'=\mathcal{D}(h,\kappa,N)\subset\mathcal{D}$, satisfying
$$\operatorname{Leb}(\mathcal{D}\setminus\mathcal{D}')\le C N^{\exp_1}\Big(\frac{\chi+\delta}{\delta_0}\Big)\Big(\frac{\kappa}{\delta_0}\Big)^{\alpha},$$
and, for any $f\in\mathcal{T}_{\gamma,\varkappa}(\sigma,\mu,\mathcal{D})$, $\mu\le1$,
$$\varepsilon=\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\quad\textrm{and}\quad\xi=\ab{f}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},$$
there exist real jet-functions $S,R\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$ and $h_+$ verifying, for $\rho\in\mathcal{D}'$,
\begin{equation}\label{eqNlHomEqbis}
\{h,S\}+\{f-f^T,S\}^T+f^T=h_++R,
\end{equation}
and such that
$$h+h_+\in\mathcal{NF}_{\varkappa}(\Delta',\delta_+)$$
and, for all $\sigma'<\sigma$ and $\mu'<\mu$,
\begin{equation}\label{estim-B2}
\ab{h_+}_{\begin{subarray}{c}\sigma',\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le CXY\varepsilon,
\end{equation}
\begin{equation}\label{estim-S2}
\ab{S}_{\begin{subarray}{c}\sigma',\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le C\frac1\kappa XY\varepsilon,
\end{equation}
\begin{equation}\label{estim-R2}
\ab{R}_{\begin{subarray}{c}\sigma',\mu\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le C\left(e^{-(\sigma-\sigma')N}+e^{-(\gamma-\gamma')\Delta'}\right)XY\varepsilon,
\end{equation}
for $\gamma_*\le\gamma'\le\gamma$, where
$$X=\Big(\frac{N\Delta'e^{\gamma d_\Delta}}{(\sigma-\sigma')(\mu-\mu')}\Big)^{\exp_2}$$
and
$$Y=\Big(\frac{\chi+\delta+\xi}{\kappa}\Big)^{4s_*+3}.$$
The exponent $\exp_1$ only depends on $\frac{d_*}{\beta_1}$ and $\#\mathcal{A}$. The exponent $\exp_2$ only depends on $d_*,m_*,s_*$ and $\#\mathcal{A}$. The exponent $\alpha$ is a positive constant only depending on $s_*$, $\frac{d_*}{\varkappa}$, $\frac{d_*}{\beta_3}$. $C$ is an absolute constant that depends on $c$ and $\sup_\mathcal{D}\ab{\omega}$.
\end{proposition}

\begin{remark}
Notice that the ``loss'' of $S$ with respect to $\kappa$ is of ``order'' $4s_*+3$. However, if $\chi$, $\delta$ and $\xi=\ab{f}_{\begin{subarray}{c}\sigma,\mu\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}$ are of size $\sim\kappa$, then the loss is only of ``order'' $1$.
\end{remark}

\begin{proof}
Let $S=S_0+S_1+S_2$ be a jet-function such that $S_1$ starts with terms of degree $1$ in $r,w$ and $S_2$ starts with terms of degree $2$ in $r,w$ -- jet-functions are polynomials in $r,w$ and we give (as is usual) $w$ degree $1$ and $r$ degree $2$.
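A word on why this decomposition closes after three steps; here we only use the grading just described, together with the facts that the Poisson bracket pairs $r$ with $\theta$ and $w$ with itself (so that $\partial_\theta$ preserves the degree, while $\partial_w$ lowers it by one and $\partial_r$ by two) and that $f-f^T$ consists of terms of degree $\ge3$. The bracket of $f-f^T$ with a function whose terms have degree $\ge d$ then has terms of degree at least
$$\min\big((3-2)+d,\ 3+(d-2),\ (3-1)+(d-1)\big)=d+1.$$
Hence $f_1=\{f-f^T,S_0\}$ starts at degree $1$, $f_2=\{f-f^T,S_1\}$ starts at degree $2$, and $\{f-f^T,S_2\}$ has zero jet, so no fourth homological equation is needed.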
Let now $\sigma'=\sigma_5<\sigma_4<\sigma_3<\sigma_2<\sigma_1<\sigma_0=\sigma$ be a (finite) arithmetic progression, i.e. $\sigma_j-\sigma_{j+1}$ does not depend on $j$, and let $\mu'=\mu_5<\mu_4<\mu_3<\mu_2<\mu_1<\mu_0=\mu$ be another arithmetic progression. Then $\{h,S\}+\{f-f^T,S\}^T+f^T=h_++R$ decomposes into the three homological equations
$$\{h,S_0\}+f^T=(h_+)_0+R_0,$$
$$\{h,S_1\}+f_1^T=(h_+)_1+R_1,\quad f_1=\{f-f^T,S_0\},$$
$$\{h,S_2\}+f_2^T=(h_+)_2+R_2,\quad f_2=\{f-f^T,S_1\}.$$
By Lemma \ref{thm-homo} we have for the first equation
$$\ab{(h_+)_0}_{\begin{subarray}{c}\sigma_1,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le X\varepsilon,\quad\ab{R_0}_{\begin{subarray}{c}\sigma_1,\mu\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le XZ\varepsilon,$$
$$\ab{S_0}_{\begin{subarray}{c}\sigma_1,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\frac1\kappa XY\varepsilon,$$
where
$$X=C\Delta'\Big(\frac{5\Delta}{\sigma-\sigma'}\Big)^{\exp}e^{2\gamma_1 d_\Delta},$$
and where $Y,Z$ are defined by the right-hand sides in the estimates \eqref{estim-S} and \eqref{estim-R}. By Proposition \ref{lemma:poisson} we have
$$\xi_1=\ab{f_1}_{\begin{subarray}{c}\sigma_2,\mu_2\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\frac1\kappa XYW\xi\varepsilon,$$
where
$$W=C\Big(\frac5{\sigma-\sigma'}+\frac5{\mu-\mu'}\Big).$$
By Proposition \ref{lemma:jet}, $\varepsilon_1=\ab{f_1^T}_{\begin{subarray}{c}\sigma_2,\mu_2\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}$ satisfies the same bound as $\xi_1$. By Lemma \ref{thm-homo} we have for the second equation
$$\ab{(h_+)_1}_{\begin{subarray}{c}\sigma_3,\mu_2\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le X\varepsilon_1,\quad\ab{R_1}_{\begin{subarray}{c}\sigma_3,\mu_2\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le XZ\varepsilon_1,$$
$$\ab{S_1}_{\begin{subarray}{c}\sigma_3,\mu_2\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\frac1\kappa XY\varepsilon_1.$$
By Propositions \ref{lemma:jet} and \ref{lemma:poisson} we have
$$\xi_2=\ab{f_2}_{\begin{subarray}{c}\sigma_4,\mu_4\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\frac1\kappa XYW\xi_1\varepsilon_1,$$
and $\varepsilon_2=\ab{f_2^T}_{\begin{subarray}{c}\sigma_4,\mu_4\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}$ satisfies the same bound. By Lemma \ref{thm-homo} we have for the third equation
$$\ab{(h_+)_2}_{\begin{subarray}{c}\sigma_5,\mu_4\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le X\varepsilon_2,\quad\ab{R_2}_{\begin{subarray}{c}\sigma_5,\mu_4\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le XZ\varepsilon_2,$$
$$\ab{S_2}_{\begin{subarray}{c}\sigma_5,\mu_4\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\frac1\kappa XY\varepsilon_2.$$
Putting this together we find that
$$\varepsilon+\varepsilon_1+\varepsilon_2\le\Big(1+\frac1\kappa XYW\xi\Big)^3\varepsilon=T\varepsilon$$
and
$$\ab{h_+}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le XT\varepsilon,\quad\ab{R}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le XZT\varepsilon,$$
$$\ab{S}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\frac1\kappa XYT\varepsilon.$$
Renaming $X$ and $Y$ now gives the estimates.
\end{proof}

\section{Proof of the KAM Theorem}

Theorem \ref{main} is proved by an infinite sequence of changes of variables typical for KAM theory. The changes of variables will be done by the classical Lie transform method, which is based on a well-known relation between the composition of a function with a Hamiltonian flow $\Phi^t_S$ and Poisson brackets:
$$\frac{d}{dt}f\circ\Phi^t_S=\{f,S\}\circ\Phi^t_S,$$
from which we derive
$$f\circ\Phi^1_S=f+\{f,S\}+\int_0^1(1-t)\{\{f,S\},S\}\circ\Phi^t_S\,\text{d}t$$
(indeed, $f\circ\Phi^1_S=f+\int_0^1\{f,S\}\circ\Phi^t_S\,\text{d}t$, and an integration by parts in $t$ produces the quadratic remainder). Given now three functions $h,k$ and $f$, we have
\begin{multline*}
(h+k+f)\circ\Phi^1_S=\\
h+k+f+\{h+k+f,S\}+\int_0^1(1-t)\{\{h+k+f,S\},S\}\circ\Phi^t_S\,\text{d}t.
\end{multline*}
If now $S$ is a solution of the equation
\begin{equation}\label{eq-homobis}
\{h,S\}+\{f-f^T,S\}^T+f^T=h_++R,
\end{equation}
then
$$(h+k+f)\circ\Phi^1_S=h+k+h_++f_+$$
with
\begin{multline}\label{f+}
f_+=R+(f-f^T)+\{k+f^T,S\}+\{f-f^T,S\}-\{f-f^T,S\}^T+\\
+\int_0^1(1-t)\{\{h+k+f,S\},S\}\circ\Phi^t_S\,\text{d}t
\end{multline}
and
\begin{equation}\label{f+T}
f_+^T=R+\{k+f^T,S\}^T+\Big(\int_0^1(1-t)\{\{h+k+f,S\},S\}\circ\Phi^t_S\,\text{d}t\Big)^T.
\end{equation}
If we assume that $S$ is ``as small as'' $f^T$, then $f_+^T$ is ``as small as'' $kf^T$ -- this is the basis of a linear iteration scheme with (formally) linear convergence.\footnote{It was first used by Poincar\'e, credited by him to the astronomer Delaunay, and it has been used many times since then in different contexts.} But if also $k$ is of the size of $f^T$, then $f_+^T$ is ``as small as'' the square of $f^T$ -- this is the basis of a quadratic iteration scheme with (formally) quadratic convergence. We shall combine both of them. First we shall give a rigorous version of the change of variables described above.

\subsection{The basic step}

Let $h\in\mathcal{NF}_{\varkappa}(\Delta,\delta)$ and assume $\varkappa>0$ and
\begin{equation}\label{ass1}
\delta\le\frac1C c',
\end{equation}
where $C$ is to be determined. Let
$$\gamma=(\gamma,m_*)\ge\gamma_*=(0,m_*)$$
and recall Remark \ref{rAbuse}.
Let $N\ge1$, $\Delta'\ge\Delta\ge1$ and
$$\kappa\le\frac1C c'.$$
Proposition \ref{thm-Eq} then gives, for any $f\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$, $\mu\le1$,
$$\varepsilon=\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\quad\textrm{and}\quad\xi=\ab{f}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},$$
a set $\mathcal{D}'=\mathcal{D}'(h,\kappa,N)\subset\mathcal{D}$ and functions $S,h_+,R$ satisfying \eqref{estim-B2}--\eqref{estim-R2} and solving the equation \eqref{eq-homobis},
$$\{h,S\}+\{f-f^T,S\}^T+f^T=h_++R,$$
for any $\rho\in\mathcal{D}'$. Let now $0<\sigma'=\sigma_4<\sigma_3<\sigma_2<\sigma_1<\sigma_0=\sigma$ and $0<\mu'=\mu_4<\mu_3<\mu_2<\mu_1<\mu_0=\mu$ be (finite) arithmetic progressions.

\smallskip
{\it The flow $\Phi^t_S$.} We have, by \eqref{estim-S2},
$$\ab{S}_{\begin{subarray}{c}\sigma_1,\mu_1\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\frac1\kappa XY\varepsilon,$$
where $X,Y$ and $\operatorname{Cte}$ are given in Proposition \ref{thm-Eq}, i.e.
$$X=\Big(\frac{\Delta'e^{\gamma d_\Delta}N}{(\sigma_0-\sigma_1)(\mu_0-\mu_1)}\Big)^{\exp_2}=\Big(\frac{4^2\Delta'e^{\gamma d_\Delta}N}{(\sigma-\sigma')(\mu-\mu')}\Big)^{\exp_2},\qquad Y=\Big(\frac{\chi+\delta+\xi}{\kappa}\Big)^{4s_*+3}$$
-- we can assume without restriction that $\exp_2\ge1$. If
\begin{equation}\label{hyp-f1}
\varepsilon\le\frac1C\frac{\kappa}{X^2Y},
\end{equation}
and $C$ is sufficiently large, then we can apply Proposition \ref{Summarize}(i). By this proposition it follows that for any $0\le t\le1$ the Hamiltonian flow map $\Phi^t_S$ is a $\mathcal{C}^{s_*}$-map
$$\mathcal{O}_{\gamma'}(\sigma_{i+1},\mu_{i+1})\times\mathcal{D}\to\mathcal{O}_{\gamma'}(\sigma_i,\mu_i),\quad\forall\gamma_*\le\gamma'\le\gamma,\quad i=1,2,3,$$
real holomorphic and symplectic for any fixed $\rho\in\mathcal{D}$. Moreover,
$$\|\partial_\rho^j(\Phi^t_S(x,\cdot)-x)\|_{\gamma'}\le\operatorname{Cte}\frac1\kappa XY\varepsilon$$
and
$$\aa{\partial_\rho^j(d\Phi^t_S(x,\cdot)-I)}_{\gamma',\varkappa}\le\operatorname{Cte}\frac1\kappa XY\varepsilon$$
for any $x\in\mathcal{O}_{\gamma'}(\sigma_2,\mu_2)$, $\gamma_*\le\gamma'\le\gamma$, and $0\le|j|\le s_*$.

\smallskip
{\it A transformation.} Let now $k\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$ and set
$$\eta=\ab{k}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}.$$
Then we have
$$(h+k+f)\circ\Phi^1_S=h+k+h_++f_+,$$
where $f_+$ is defined by \eqref{f+}, i.e.
\begin{multline*}
f_+=R+(f-f^T)+\{k+f^T,S\}+\{f-f^T,S\}-\{f-f^T,S\}^T+\\
+\int_0^1(1-t)\{\{h+k+f,S\},S\}\circ\Phi^t_S\,\text{d}t.
\end{multline*}
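For the reader's convenience we spell out how \eqref{f+} and the splitting used just below arise (no new ingredients enter here). By \eqref{eq-homobis}, $\{h,S\}=h_++R-f^T-\{f-f^T,S\}^T$, hence
$$\{h+k+f,S\}=\big(h_++R-f^T\big)+\big(\{k+f,S\}-\{f-f^T,S\}^T\big),$$
and inserting this into the Lie expansion of $(h+k+f)\circ\Phi^1_S$, together with $\{k+f,S\}=\{k+f^T,S\}+\{f-f^T,S\}$, gives exactly \eqref{f+}.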
The integral term is the sum
$$\int_0^1(1-t)\{h_++R-f^T,S\}\circ\Phi^t_S\,\text{d}t+\int_0^1(1-t)\{\{k+f,S\}-\{f-f^T,S\}^T,S\}\circ\Phi^t_S\,\text{d}t.$$

\smallskip
{\it The estimates of $\{k+f^T,S\}$ and $\{f-f^T,S\}$.} By Proposition \ref{lemma:poisson}(i),
$$\ab{\{k+f^T,S\}}_{\begin{subarray}{c}\sigma_2,\mu_2\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\,X\ab{S}_{\begin{subarray}{c}\sigma_1,\mu_1\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\ab{k+f^T}_{\begin{subarray}{c}\sigma_1,\mu_1\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}.$$
Hence
\begin{equation}
\ab{\{k+f^T,S\}}_{\begin{subarray}{c}\sigma_2,\mu_2\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\frac1\kappa X^2Y(\eta+\varepsilon)\varepsilon.
\end{equation}
Similarly,
\begin{equation}
\ab{\{f-f^T,S\}}_{\begin{subarray}{c}\sigma_2,\mu_2\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\frac1\kappa X^2Y(\xi+\varepsilon)\varepsilon.
\end{equation}

{\it The estimate of $\{h_+-f^T,S\}\circ\Phi^t_S$.} The estimate of $h_+$ is given by \eqref{estim-B2}:
$$\ab{h_+}_{\begin{subarray}{c}\sigma_1,\mu_1\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\,XY\varepsilon.$$
This gives, again by Proposition \ref{lemma:poisson}(ii),
$$\ab{\{h_+-f^T,S\}}_{\begin{subarray}{c}\sigma_2,\mu_2\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\frac1\kappa X^3Y^2\varepsilon^2.$$
Let now $F=\{h_+-f^T,S\}$. If $\varepsilon$ verifies \eqref{hyp-f1} for a sufficiently large constant $C$, then we can apply Proposition \ref{Summarize}(ii). By this proposition, for $|t|\le1$, the function $F\circ\Phi_S^t\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma_3,\mu_3)$ and
\begin{equation}
\ab{\{h_+-f^T,S\}\circ\Phi_S^t}_{\begin{subarray}{c}\sigma_3,\mu_3\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\frac1\kappa X^3Y^2\varepsilon^2.
\end{equation}

{\it The estimate of $\{R,S\}\circ\Phi^t_S$.} The estimate of $R$ is given by \eqref{estim-R2}:
$$\ab{R}_{\begin{subarray}{c}\sigma_2,\mu_1\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\,XYZ_{\gamma'}\varepsilon,$$
where
$$Z_{\gamma'}=\left(e^{-(\sigma-\sigma_2)N}+e^{-(\gamma-\gamma')\Delta'}\right).$$
Then, as in the previous case,
\begin{equation}
\ab{\{R,S\}\circ\Phi_S^t}_{\begin{subarray}{c}\sigma_3,\mu_3\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\frac1\kappa X^3Y^2Z_{\gamma'}\varepsilon^2.
\end{equation}

{\it The estimate of $\{\{k+f,S\}-\{f-f^T,S\}^T,S\}\circ\Phi^t_S$.} This function is estimated as above.
If $F=\{\{k+f,S\}-\{f-f^T,S\}^T,S\}$, then, by Proposition \ref{lemma:jet} and Proposition \ref{lemma:poisson}(i),
$$\ab{F}_{\begin{subarray}{c}\sigma_3,\mu_3\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\Big(\frac1\kappa X^2Y\Big)^2(\eta+\xi)\varepsilon^2,$$
and by Proposition \ref{Summarize}(ii)
\begin{equation}
\ab{\{\{k+f,S\}-\{f-f^T,S\}^T,S\}\circ\Phi_S^t}_{\begin{subarray}{c}\sigma_4,\mu_4\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\operatorname{Cte}\Big(\frac1\kappa X^2Y\Big)^2(\eta+\xi)\varepsilon^2.
\end{equation}
Renaming $X$ and $Y$ and replacing $N$ by $2N$ now gives the following lemma.

\begin{lemma}\label{basic}
There exists an absolute constant $C$ such that, for any
$$h\in\mathcal{NF}_{\varkappa}(\Delta,\delta),\quad\varkappa>0,\quad\delta\le\frac1C c',$$
and for any
$$N\ge1,\quad\Delta'\ge\Delta\ge1,\quad\kappa\le\frac1C c',$$
there exists a subset $\mathcal{D}'=\mathcal{D}(h,\kappa,N)\subset\mathcal{D}$, satisfying
$$\operatorname{Leb}(\mathcal{D}\setminus\mathcal{D}')\le C N^{\exp_1}\Big(\frac{\chi+\delta}{\delta_0}\Big)\Big(\frac{\kappa}{\delta_0}\Big)^{\alpha},$$
and, for any $f\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$, $\mu\le1$,
$$\varepsilon=\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\quad\textrm{and}\quad\xi=[f]_{\sigma,\mu,\mathcal{D}}^{\gamma,\varkappa},$$
satisfying
$$\varepsilon\le\frac1C\frac{\kappa}{XY},\qquad\left\{\begin{array}{ll} X=\Big(\frac{N\Delta'e^{\gamma d_\Delta}}{(\sigma-\sigma')(\mu-\mu')}\Big)^{\exp_2}, & \sigma'<\sigma,\ \mu'<\mu,\\ Y=\Big(\frac{\chi+\delta+\xi}{\kappa}\Big)^{\exp_3}, & \end{array}\right.$$
and for any $k\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$, there exists a $\mathcal{C}^{s_*}$ mapping
$$\Phi:\mathcal{O}_{\gamma'}(\sigma',\mu')\times\mathcal{D}\to\mathcal{O}_{\gamma'}\big(\sigma-\tfrac{\sigma-\sigma'}2,\mu-\tfrac{\mu-\mu'}2\big),\quad\forall\gamma_*\le\gamma'\le\gamma,$$
real holomorphic and symplectic for each fixed parameter $\rho\in\mathcal{D}$, and functions $f_+,R_+\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma',\mu')$ and
$$h+h_+\in\mathcal{NF}_{\varkappa}(\Delta',\delta_+),$$
such that
$$(h+k+f)\circ\Phi=h+k+h_++f_++R_+,\quad\forall\rho\in\mathcal{D}',$$
and
$$\ab{h_+}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}+\ab{f_+-f}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le CXY\varepsilon,$$
$$\ab{f_+^T}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le C\frac1\kappa XY\big(\ab{k}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}+\kappa e^{-(\sigma-\sigma')N}+\varepsilon\big)\varepsilon,$$
and
$$\ab{R_+}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le CXYe^{-(\gamma-\gamma')\Delta'}\varepsilon$$
for any $\gamma_*\le\gamma'\le\gamma$. Moreover,
$$\|\partial_\rho^j(\Phi(x,\rho)-x)\|_{\gamma'}+\aa{\partial_\rho^j(d\Phi(x,\rho)-I)}_{\gamma',\varkappa}\le C\frac1\kappa XY\varepsilon$$
for any $x\in\mathcal{O}_{\gamma'}(\sigma',\mu')$, $\gamma_*\le\gamma'\le\gamma$ and $|j|\le s_*$, and for any $\rho\in\mathcal{D}$. Finally, if $\tilde\rho=(0,\rho_2,\dots,\rho_p)$ and $f^T(\cdot,\tilde\rho)=0$ for all $\tilde\rho$, then $f_+-f=R_+=h_+=0$ and $\Phi(x,\cdot)=x$ for all $\tilde\rho$.

The exponent $\exp_1$ only depends on $\frac{d_*}{\beta_1}$ and $\#\mathcal{A}$. The exponent $\exp_2$ only depends on $d_*,m_*,s_*$ and $\#\mathcal{A}$. The exponent $\exp_3$ only depends on $s_*$. The exponent $\alpha$ is a positive constant only depending on $s_*$, $\frac{d_*}{\varkappa}$, $\frac{d_*}{\beta_3}$. $C$ is an absolute constant that depends on $c$ and $\sup_\mathcal{D}\ab{\omega}$.
\end{lemma}

\subsection{A finite induction}

We shall first make a finite iteration, without changing the normal form, in order to strongly decrease the size of the perturbation. We shall restrict ourselves to the case $N=\Delta'$.
\begin{lemma}\label{Birkhoff}
There exists a constant $C$ such that, for any
$$h\in\mathcal{NF}_{\varkappa}(\Delta,\delta),\quad\varkappa>0,\quad\delta\le\frac1C c',$$
and for any
$$\Delta'\ge\Delta\ge1,\quad\kappa\le\frac1C c',$$
there exists a subset $\mathcal{D}'=\mathcal{D}(h,\kappa,\Delta')\subset\mathcal{D}$, satisfying
$$\operatorname{Leb}(\mathcal{D}\setminus\mathcal{D}')\le C(\Delta')^{\exp_1}\Big(\frac{\chi+\delta}{\delta_0}\Big)\Big(\frac{\kappa}{\delta_0}\Big)^{\alpha},$$
and, for any $f\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$, $\mu\le1$,
$$\varepsilon=\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\quad\textrm{and}\quad\xi=[f]_{\sigma,\mu,\mathcal{D}}^{\gamma,\varkappa},$$
satisfying
$$\varepsilon\le\frac1C\frac{\kappa}{XY},\quad\left\{\begin{array}{ll} X=\Big(\frac{\Delta'e^{\gamma d_\Delta}}{(\sigma-\sigma')(\mu-\mu')}\log\frac1\varepsilon\Big)^{\exp_2}, & \sigma'<\sigma,\ \mu'<\mu,\\ Y=\Big(\frac{\chi+\delta+\xi}{\kappa}\Big)^{\exp_3}, & \end{array}\right.$$
there exists a $\mathcal{C}^{s_*}$ mapping
$$\Phi:\mathcal{O}_{\gamma'}(\sigma',\mu')\times\mathcal{D}\to\mathcal{O}_{\gamma'}\big(\sigma-\tfrac{\sigma-\sigma'}{2},\mu-\tfrac{\mu-\mu'}{2}\big),\quad\forall\gamma_*\le\gamma'\le\gamma,$$
real holomorphic and symplectic for each fixed parameter $\rho\in\mathcal{D}$, and functions $f'\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma',\mu')$ and
$$h'\in\mathcal{NF}_{\varkappa}(\Delta',\delta'),$$
such that
$$(h+f)\circ\Phi=h'+f',\quad\forall\rho\in\mathcal{D}',$$
and
$$\ab{h'-h}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le CXY\varepsilon,$$
$$\xi'=\ab{f'}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le\xi+CXY\varepsilon,$$
and
$$\varepsilon'=\ab{(f')^T}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le CXY\big(e^{-\frac12(\sigma-\sigma')\Delta'}+e^{-\frac12(\gamma-\gamma')\Delta'}\big)\varepsilon,$$
for any $\gamma_*\le\gamma'\le\gamma$. Moreover,
$$\|\partial_\rho^j(\Phi(x,\rho)-x)\|_{\gamma'}+\aa{\partial_\rho^j(d\Phi(x,\rho)-I)}_{\gamma',\varkappa}\le C\frac1\kappa XY\varepsilon$$
for any $x\in\mathcal{O}_{\gamma'}(\sigma',\mu')$, $\gamma_*\le\gamma'\le\gamma$ and $|j|\le s_*$, and for any $\rho\in\mathcal{D}$. Finally, if $\tilde\rho=(0,\rho_2,\dots,\rho_p)$ and $f^T(\cdot,\tilde\rho)=0$ for all $\tilde\rho$, then $f'-f=h'-h=0$ and $\Phi(x,\cdot)=x$ for all $\tilde\rho$.

The exponent $\exp_1$ only depends on $\frac{d_*}{\beta_1}$ and $\#\mathcal{A}$. The exponent $\exp_2$ only depends on $d_*,m_*,s_*$ and $\#\mathcal{A}$. The exponent $\exp_3$ only depends on $s_*$. The exponent $\alpha$ is a positive constant only depending on $s_*$, $\frac{d_*}{\varkappa}$, $\frac{d_*}{\beta_3}$.
$C$ is an absolute constant that depends on $c$ and $\sup_\mathcal{D}\ab{\omega}$.
\end{lemma}

\begin{proof}
Let $N=\Delta'$. Let $\sigma_1=\sigma-\frac{\sigma-\sigma'}2$, $\mu_1=\mu-\frac{\mu-\mu'}2$ and $\sigma_{K+1}=\sigma'$, $\mu_{K+1}=\mu'$, and let $\{\sigma_j\}_1^{K+1}$ and $\{\mu_j\}_1^{K+1}$ be arithmetic progressions. We take $K$ such that
$$\kappa e^{-(\sigma_j-\sigma_{j+1})N}\le\varepsilon,$$
i.e. $K\le(\sigma-\sigma')\Delta'(\log\frac\kappa\varepsilon)^{-1}$ (note that $\sigma_j-\sigma_{j+1}=\frac{\sigma-\sigma'}{2K}$). We let $f_1=f$ and $k_1=0$, and we let $\varepsilon_1=[f_1^T]_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}=\varepsilon$, $\xi_1=[f_1]_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}=\xi$, $\delta_1=\delta$ and $\eta_1=[k_1]_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}=0$. Define now
$$\varepsilon_{j+1}=C\frac1\kappa X_jY_j(\eta_j+\varepsilon_1+\varepsilon_j)\varepsilon_j,$$
$$\xi_{j+1}=\xi_j+CX_jY_j\varepsilon_j,\quad\eta_{j+1}=\eta_j+CX_jY_j\varepsilon_j,$$
with
$$X_j=\Big(\frac{N\Delta'e^{\gamma d_\Delta}}{(\sigma_j-\sigma_{j+1})(\mu_j-\mu_{j+1})}\Big)^{\exp_2},\quad Y_j=\Big(\frac{\chi+\delta+\xi_j}{\kappa}\Big)^{\exp_3},$$
where $C,\exp_2,\exp_3$ are given in Lemma \ref{basic}. One verifies by an immediate induction the following.

\begin{sublem*}
There exists an absolute constant $C'$ such that if
$$\varepsilon_1\le\frac1{C'}\frac{\kappa}{X_1^2Y_1^2},$$
then, for all $j\ge1$,
$$\varepsilon_j\le\frac1C\frac{\kappa}{X_j^2Y_j^2}\quad\textrm{and}\quad\varepsilon_j\le\Big(\operatorname{Cte}\frac{X_1^2Y_1^2}{\kappa}\varepsilon_1\Big)^{j-1}\varepsilon_1,$$
$$(\xi_j-\xi_1)+(\eta_j-\eta_1)\le C'X_1Y_1\varepsilon_1.$$
The constant $\operatorname{Cte}$ only depends on $C$ and $\exp_3$.
\end{sublem*}

We can then apply Lemma \ref{basic} $K$ times to get
$$\Phi_j:\mathcal{O}_{\gamma'}(\sigma_{j+1},\mu_{j+1})\times\mathcal{D}'\to\mathcal{O}_{\gamma'}\big(\sigma_j-\tfrac{\sigma_j-\sigma_{j+1}}2,\mu_j-\tfrac{\mu_j-\mu_{j+1}}2\big),\quad\gamma_*\le\gamma'\le\gamma,$$
and $f_{j+1}$ and $R_{j+1}$ such that, for $\rho\in\mathcal{D}'$,
$$(h+k_j+f_j+S_j)\circ\Phi_j=h+k_j+h_{j+1}+f_{j+1}+R_{j+1}+S_j\circ\Phi_j$$
with $k_{j+1}=k_j+h_{j+1}$, $k_1=0$, $S_{j+1}=R_{j+1}+S_j\circ\Phi_j$, $S_1=0$. We then take $\Phi=\Phi_1\circ\dots\circ\Phi_K$, $h'=h+k_{K+1}$ and $f'=f_{K+1}+S_{K+1}$.
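Unwinding this telescoping (with $k_1=0$ and $S_1=0$, and using that $k_{j+1}=k_j+h_{j+1}$ and $S_{j+1}=R_{j+1}+S_j\circ\Phi_j$) gives by induction, for $j=1,\dots,K$,
$$(h+f)\circ\Phi_1\circ\dots\circ\Phi_j=h+k_{j+1}+f_{j+1}+S_{j+1},\qquad\rho\in\mathcal{D}',$$
so that indeed $(h+f)\circ\Phi=h'+f'$ on $\mathcal{D}'$.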
Then
$$\ab{h'-h}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le C'X_1Y_1\varepsilon,\qquad\delta'\le C(\Delta')^{\exp_2}X_1Y_1\varepsilon,$$
and
$$\xi'=\ab{f'}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}\le\xi+C'X_1Y_1\varepsilon,$$
and, for $\rho\in\mathcal{D}'$,
$$\varepsilon'=\ab{(f')^T}_{\begin{subarray}{c}\sigma',\mu'\\ \gamma',\varkappa,\mathcal{D}\end{subarray}}\le\Big(\operatorname{Cte}\frac{X_1^2Y_1^2}{\kappa}\varepsilon_1\Big)^K\varepsilon+CX_1Y_1e^{-(\gamma-\gamma')\Delta'}\varepsilon.$$
For the estimates of $\Phi$, write $\Psi_j=\Phi_j\circ\dots\circ\Phi_K$ and $\Psi_{K+1}=\operatorname{id}$. For $(x,\rho)\in\mathcal{O}_{\gamma'}(\sigma',\mu')\times\mathcal{D}$ we then have
$$\|\Phi(x,\rho)-x\|_{\gamma'}\le\sum_{j=1}^K\|\Psi_j(x,\rho)-\Psi_{j+1}(x,\rho)\|_{\gamma'}.$$
Now
$$\|\Psi_j(x,\rho)-\Psi_{j+1}(x,\rho)\|_{\gamma'}=\|\Phi_j(\Psi_{j+1}(x,\rho),\rho)-\Psi_{j+1}(x,\rho)\|_{\gamma'}\le\operatorname{Cte}\frac1\kappa X_1Y_1\varepsilon_j\max\Big(\frac1{\sigma-\sigma'},\frac1{\mu-\mu'}\Big),$$
by a Cauchy estimate. Hence
$$\|\Phi(x,\cdot)-x\|_{\gamma'}\le\operatorname{Cte}\frac1\kappa X_1Y_1\max\Big(\frac1{\sigma-\sigma'},\frac1{\mu-\mu'}\Big)\varepsilon.$$
The derivatives with respect to $\rho$ are obtained in the same way, as are the estimates of $d\Phi$. The result now follows if we take $C'$ sufficiently large and increase the exponent $\exp_2$.
\end{proof}

{\it Proof of the sublemma.} The estimates are true for $j=1$, so we proceed by induction on $j$. Let us assume that the estimates hold up to $j$. Then, for $k\le j$,
$$Y_k\le\Big(\frac{\chi+\delta+\xi_1+2C'X_1Y_1\varepsilon_1}{\kappa}\Big)^{\exp_3}\le2^{\exp_3}Y_1$$
and
$$\varepsilon_{j+1}\le2^{\exp_3}\frac{X_1Y_1}{\kappa}\big[2CX_1Y_1\varepsilon_1+\varepsilon_1+\varepsilon_1\big]\varepsilon_j\le\operatorname{Cte}\frac{X_1^2Y_1^2}{\kappa}\varepsilon_1\varepsilon_j.$$
Then
$$\xi_{j+1}\le\xi_1+\operatorname{Cte}\,X_1Y_1(\varepsilon_1+\dots+\varepsilon_{j+1})\le\xi_1+2\operatorname{Cte}\,X_1Y_1\varepsilon_1,$$
and similarly for $\eta_{j+1}$.

\subsection{The infinite induction}

We are now in position to prove our main result, Theorem \ref{main}. Let $h$ be a normal form Hamiltonian in $\mathcal{NF}_{\varkappa}(\Delta,\delta)$ and let $f\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$ be a perturbation such that
$$0<\varepsilon=\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},\quad\xi=\ab{f}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}}.$$
We construct the transformation $\Phi$ as the composition of infinitely many transformations as in Lemma \ref{Birkhoff}. We first specify the choice of all the parameters for $j\ge1$. Let $C,\exp_1,\exp_2,\exp_3$ and $\alpha$ be the constants given in Lemma \ref{Birkhoff}.
\subsubsection{Choice of parameters}

We assume (to simplify) $\gamma,\sigma,\mu\le1$ and $\Delta\ge1$. By decreasing $\gamma$ or increasing $\Delta$ we can also assume $\gamma=(d_{\Delta})^{-1}$. We choose for $j\ge1$
$$\mu_j=\Big(\frac12+\frac1{2^j}\Big)\mu\quad\textrm{and}\quad\sigma_{j}=\Big(\frac12+\frac1{2^j}\Big)\sigma.$$
We define inductively the sequences $\varepsilon_j$, $\Delta_j$, $\delta_j$ and $\xi_j$ by
\begin{equation}
\left\{\begin{array}{ll}
\varepsilon_{j+1}=\varepsilon^{K_j}CX_1Y_1\varepsilon & \varepsilon_1=\varepsilon\\
\Delta_{j+1}=4K_j\max\big(\frac1{\sigma_j-\sigma_{j+1}},d_{\Delta_j}\big)\log\frac1{\varepsilon} & \Delta_1=\Delta\\
\gamma_{j+1}=(d_{\Delta_{j+1}})^{-1} & \gamma_1=\gamma\\
\delta_{j+1}=\delta_j+CX_jY_j\varepsilon_j & \delta_1=\delta\ge0\\
\xi_{j+1}=\xi_j+CX_jY_j\varepsilon_j & \xi_1=\xi\ge\varepsilon,
\end{array}\right.
\end{equation}
where
$$\left\{\begin{array}{ll}
X_j=\Big(\frac{\Delta_{j+1}e^{\gamma_j d_{\Delta_j}}}{(\sigma_j-\sigma_{j+1})(\mu_j-\mu_{j+1})}\log\frac1{\varepsilon_j}\Big)^{\exp_2} &=\Big(\frac{K_j\Delta_{j+1}e\,4^{j+1}}{\sigma\mu}\log\frac1{\varepsilon}\Big)^{\exp_2}\\
Y_j=\Big(\frac{\chi+\delta_j+\xi_j}{\kappa_j}\Big)^{\exp_3}. &
\end{array}\right.$$
The $\kappa_j$ is defined implicitly by
$$\varepsilon_j=\frac1{C}\frac{\kappa_j}{X_jY_j}.$$
These sequences depend on the choice of $K_j$. We shall let $K_j$ increase like
$$K_{j}=K^{j}$$
for some $K$ sufficiently large.

\begin{lemma}\label{numerical2}
There exist constants $C'$ and $\exp'$ such that, if
$$K\ge C'$$
and
$$\varepsilon\le\frac1{C'}\Big(\frac{\sigma\mu}{K\Delta\log\frac1\varepsilon}\Big)^{\exp'}\Big(\frac{c'}{\chi+\delta+\xi}\Big)^{\exp_3}c',$$
then
\begin{itemize}
\item[(i)] $$\sum_{k=1}^\infty CX_kY_k\varepsilon_k\le 2CX_1Y_1\varepsilon\le\frac1{2C}c'.$$
\item[(ii)] $$\sum_{k=1}^\infty C\max\Big(\frac1{\sigma_k-\sigma_{k+1}},\frac1{\mu_k-\mu_{k+1}}\Big)X_kY_k\varepsilon_k\le 5C\max\Big(\frac1{\sigma},\frac1{\mu}\Big)X_1Y_1\varepsilon\le\frac1{2C}c'.$$
\item[(iii)] $$\sum_{j\ge1}C\Delta_{j+1}^{\exp_1}\Big(\frac{\chi+\delta_j}{\delta_0}\Big)\Big(\frac{\kappa_j}{\delta_0}\Big)^{\alpha}\le C'\Big(\frac{K\Delta\log\frac1\varepsilon}{\sigma\mu}\Big)^{\exp'}\Big(\frac{\chi+\delta+\xi}{\delta_0}\Big)^{\beta}\Big(\frac{\varepsilon}{\delta_0}\Big)^{\alpha'}.$$
\end{itemize}
$C'$ is an absolute constant that only depends on $\beta,\varkappa,c$ and $\sup_{\mathcal{D}}\ab{\omega}$. The exponent $\exp'$ is an absolute constant that only depends on $\beta$ and $\varkappa$. The exponents $\alpha'$ and $\beta$ are positive constants only depending on $s_*,\frac{d_*}{\varkappa},\frac{d_*}{\beta_3}$.
\end{lemma}

Notice that (i) implies that
$$CX_jY_j\big(e^{-\frac12(\sigma_j-\sigma_{j+1})\Delta_{j+1}}+e^{-\frac12(\gamma_j-\gamma_{j+1})\Delta_{j+1}}\big)\varepsilon_j\le\varepsilon_{j+1}.$$

\begin{proof}
$\Delta_{j+1}$ is equal to
$$4K_j\max\Big(\frac1{\sigma_j-\sigma_{j+1}},d_{\Delta_j}\Big)\log\frac1{\varepsilon}\le\Big(\mathrm{Cte}\,\frac1{\sigma}\log\frac1{\varepsilon}\Big)(2K)^{j^2}\Delta_j^{a}=A(2K)^{j^2}\Delta_j^{a},$$
which, by an induction, is seen to be, by assumption on $\varepsilon$,
$$\le\big(A(2K)^a\Delta\big)^{a^j}\le\Big(\frac1\varepsilon\Big)^{a^j}$$
if $a$ is, say, at least $6$. In the same way one sees that
$$X_j\le\Big(\frac1\varepsilon\Big)^{2\exp_2 a^j}.$$
(i). For $j=1$, (i) holds by assumption. Indeed, by definition
$$(CX_1Y_1\varepsilon_1)^{1+\exp_3}=\kappa_1^{1+\exp_3}=CX_1Y_1\kappa_1^{\exp_3}\varepsilon_1\le\mathrm{Cte}\,\Big(\frac{K\Delta\log\frac1\varepsilon}{\sigma\mu}\Big)^{\exp_2'}(\chi+\delta+\xi)^{\exp_3}\varepsilon,$$
which is
$$\le\Big(\frac1{4C}c'\Big)^{\exp_3+1}$$
by assumption on $\varepsilon$. Assume now (i) holds up to $j-1\ge1$. Then $\delta_j\le\delta+2CX_1Y_1\varepsilon$ and $\xi_j\le\xi+2CX_1Y_1\varepsilon$, and hence
$$Y_j\le\Big(\frac{\chi+\delta+\xi+4CX_1Y_1\varepsilon}{\kappa_j}\Big)^{\exp_3}\le\mathrm{Cte}\,Y_1\Big(\frac{\kappa_1}{\kappa_j}\Big)^{\exp_3},$$
and, by the definition of $\kappa_j$,
$$\kappa_j^{1+\exp_3}=CX_jY_j\varepsilon_j\kappa_j^{\exp_3}\le\mathrm{Cte}\,Y_1\kappa_1^{\exp_3}X_j\varepsilon_j\le X_j\varepsilon^{K_{j}}$$
by assumption on $\varepsilon$. Hence
$$CX_jY_j\varepsilon_j=\kappa_j\le X_j\varepsilon^{2bK_{j}}\le\varepsilon^{2bK_{j}-2\exp_2 a^j}\le\varepsilon^{bK_{j}},\quad b=\frac1{2(\exp_3+1)},$$
if $K$ is large enough -- notice that $j\ge2$. This implies that
$$\sum_{k=2}^j CX_kY_k\varepsilon_k\le 2\varepsilon^{bK_{2}}\le\varepsilon\le CX_1Y_1\varepsilon_1$$
if $K$ is large enough. The proof of (ii) is similar. To see (iii) we have for $j\ge2$
$$\Delta_{j+1}^{\exp_1}\kappa_j^{\alpha}=\big(\Delta_{j+1}^{\exp'}\kappa_j\big)^{\alpha}\le\big(X_j^{\exp'}\kappa_j\big)^{\alpha}\le\big(\varepsilon^{2bK_{j}-2(\exp'+1)\exp_2 a^j}\big)^{\alpha}$$
which is
$$\le\varepsilon^{bK_{j}\alpha}$$
if $K$ is large enough. Therefore
$$\sum_{j\ge1}\Delta_{j+1}^{\exp_1}\kappa_j^{\alpha}\le\Delta_{2}^{\exp_1}\kappa_1^{\alpha}+2\varepsilon^{bK_{2}\alpha}\le 2\Delta_{2}^{\exp_1}\kappa_1^{\alpha}$$
if $K$ is large enough.
\end{proof}

\subsubsection{The iteration}

\begin{proposition}
There exist positive constants $C'$, $\alpha'$ and $\exp'$ such that, for any $h\in\mathcal{NF}_{\varkappa}(\Delta,\delta)$ and for any $f\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma,\mu)$, $0<\gamma,\sigma,\mu\le 1$,
$$\varepsilon=\ab{f^T}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},\quad \xi=\ab{f}_{\begin{subarray}{c}\sigma,\mu\\ \gamma,\varkappa,\mathcal{D}\end{subarray}},$$
if
$$\delta\le\frac1{C'}c'$$
and
$$\varepsilon\Big(\log\frac1\varepsilon\Big)^{\exp'}\le\frac1{C}\Big(\frac{\max(\gamma^{-1},d_{\Delta})}{\sigma\mu}\Big)^{-\exp'}\Big(\frac{c'}{\chi+\delta+\xi}\Big)^{\exp_3}c'$$
then there exists a set $\mathcal{D}'=\mathcal{D}'(h,f)\subset\mathcal{D}$,
$$\operatorname{Leb}(\mathcal{D}\setminus\mathcal{D}')\leq C\Big(\log\frac1{\varepsilon}\,\frac{\max(\gamma^{-1},d_{\Delta})}{\sigma\mu}\Big)^{\exp'}\Big(\frac{\chi+\delta+\xi}{\delta_0}\Big)^{\beta}\Big(\frac{\varepsilon}{\delta_0}\Big)^{\alpha'},$$
and a $\mathcal{C}^{{s_*}}$ mapping
$$\Phi:\mathcal{O}_{\gamma_*}(\sigma/2,\mu/2)\times\mathcal{D}\to\mathcal{O}_{\gamma_*}(\sigma,\mu),$$
real holomorphic and symplectic for each given parameter $\rho\in\mathcal{D}$, and
$$h'\in\mathcal{NF}_{\varkappa}(\infty,\delta'),\quad \delta'\le\frac{c'}2,$$
such that
$$(h+f)\circ\Phi=h'+f'$$
verifies
$$\ab{f'-f}_{\begin{subarray}{c}\sigma/2,\mu/2\\ \gamma_*,\varkappa,\mathcal{D}\end{subarray}}\le C'$$
and, for $\rho\in\mathcal{D}'$, $(f')^T=0$. Moreover,
$$\ab{h'-h}_{\begin{subarray}{c}\sigma/2,\mu/2\\ \gamma_*,\varkappa,\mathcal{D}\end{subarray}}\le C'$$
and
$$\|\partial_{\rho}^j(\Phi(x,\cdot)-x)\|_{\gamma_*}+\aa{\partial_{\rho}^j(d\Phi(x,\cdot)-I)}_{\gamma_*,\varkappa}\le C'$$
for any $x\in\mathcal{O}_{(0,m_*)}(\sigma',\mu')$ and $\ab{j}\le{s_*}$, and for any $\rho\in\mathcal{D}$. Finally, if $\tilde\rho=(0,\rho_2,\dots,\rho_p)$ and $f^T(\cdot,\tilde\rho)=0$ for all $\tilde\rho$, then $h'=h$ and $\Phi(x,\cdot)=x$ for all $\tilde\rho$.

$C'$ is an absolute constant that only depends on $\beta,\varkappa,c$ and $\sup_{\mathcal{D}}\ab{\omega}$. The exponent $\exp'$ is an absolute constant that only depends on $\beta$ and $\varkappa$. The exponent $\exp_3$ only depends on $s_*$. The exponent $\alpha'$ is a positive constant only depending on $s_*,\frac{d_*}{\varkappa},\frac{d_*}{\beta_3}$.
\end{proposition}

\begin{proof}
Assume first that $\gamma=d_{\Delta}^{-1}$. Choose the numbers $\mu_j,\sigma_j,\varepsilon_j,\Delta_j,\gamma_j,\delta_j,\xi_j,X_j,Y_j,\kappa_j$ as above in Lemma \ref{numerical2} with $K=C'$. By the assumption on $\varepsilon$ we can apply Lemma \ref{numerical2}. Let $h_1=h$, $f_1=f$ and $\mathcal{D}_1=\mathcal{D}$.
Lemma \ref{numerical2} now implies that we can apply Lemma \ref{Birkhoff} iteratively to get for all $j\ge1$ a set $\mathcal{D}_{j+1}\subset\mathcal{D}_j$ such that
$$\operatorname{Leb}(\mathcal{D}_j\setminus\mathcal{D}_{j+1})\leq C\Delta_{j+1}^{\exp_2}\Big(\frac{\chi+\delta_j}{\delta_0}\Big)\Big(\frac{\kappa_j}{\delta_0}\Big)^{\alpha},$$
a $\mathcal{C}^{{s_*}}$ mapping
$$\Phi_{j+1}:\mathcal{O}^{\gamma'}(\sigma_{j+1},\mu_{j+1})\times\mathcal{D}_{j+1}\to\mathcal{O}^{\gamma'}\big(\sigma_j-\tfrac{\sigma_j-\sigma_{j+1}}{2},\mu_j-\tfrac{\mu_j-\mu_{j+1}}{2}\big),\quad\forall\,\gamma_*\le\gamma'\le\gamma_{j+1},$$
real holomorphic and symplectic for each fixed parameter $\rho$, and functions $f_{j+1}\in\mathcal{T}_{\gamma,\varkappa,\mathcal{D}}(\sigma_{j+1},\mu_{j+1})$ and
$$h_{j+1}\in\mathcal{NF}_{\varkappa}(\Delta_{j+1},\delta_{j+1})$$
such that
$$(h_j+f_j)\circ\Phi_{j+1}=h_{j+1}+f_{j+1},\quad\forall\,\rho\in\mathcal{D}_{j+1},$$
with
$$\ab{f_{j+1}^T}_{\begin{subarray}{c}\sigma_{j+1},\mu_{j+1}\\ \gamma_{j+1},\varkappa,\mathcal{D}\end{subarray}}\le\varepsilon_{j+1}$$
and
$$\ab{f_{j+1}}_{\begin{subarray}{c}\sigma_{j+1},\mu_{j+1}\\ \gamma_{j+1},\varkappa,\mathcal{D}\end{subarray}}\le\xi_{j+1}.$$
Moreover,
$$\ab{h_{j+1}-h_j}_{\begin{subarray}{c}\sigma_{j+1},\mu_{j+1}\\ \gamma_{j+1},\varkappa,\mathcal{D}\end{subarray}}\le CX_jY_{j}\varepsilon_j$$
and
$$\|\partial_{\rho}^l(\Phi_{j+1}(x,\cdot)-x)\|_{\gamma'}+\aa{\partial_{\rho}^l(d\Phi_{j+1}(x,\cdot)-I)}_{\gamma',\varkappa}\le C\frac1{\kappa_j}X_jY_j\varepsilon_j$$
for any $x\in\mathcal{O}_{\gamma'}(\sigma_{j+1},\mu_{j+1})$, $\gamma_*\le\gamma'\le\gamma_{j+1}$ and $\ab{l}\le{s_*}$.

We let $h'=\lim h_j$, $f'=\lim f_j$ and $\Phi=\Phi_2\circ\Phi_3\circ\dots$. Then $(h+f)\circ\Phi=h'+f'$ and $h'$ and $f'$ verify the statement. The convergence of $\Phi$ and its estimates follow by Cauchy estimates as in the proof of Lemma \ref{Birkhoff}. The last statement is obvious.

If $\gamma>(d_{\Delta})^{-1}$, then we can just decrease $\gamma$ and we obtain the same result. If $\gamma<(d_{\Delta})^{-1}$, then we increase $\Delta$ and we obtain the same result.
\end{proof}

Theorem \ref{main} now follows from this proposition.

\section{Examples}

\subsection{Beam equation with a convolutive potential}\label{s4.1}

Consider the $d_*$-dimensional beam equation on the torus
\begin{equation}\label{beamm}
u_{tt}+\Delta^2 u+V\star u+\varepsilon g(x,u)=0,\quad x\in\mathbb{T}^{d_*}.
\end{equation}
Here $g$ is a real analytic function on $\mathbb{T}^{d_*}\times I$, where $I$ is a neighborhood of the origin in $\mathbb{R}$, and the convolution potential $V:\mathbb{T}^{d_*}\to\mathbb{R}$ is supposed to be analytic with real Fourier coefficients $\hat V(a)$, $a\in\mathbb{Z}^{d_*}$. Let $\mathcal{A}$ be any subset of cardinality $n$ in $\mathbb{Z}^{d_*}$.
We set $\mathcal{L}=\mathbb{Z}^{d_*}\setminus\mathcal{A}$, $\rho=(\hat V_a)_{a\in\mathcal{A}}$, and treat $\rho$ as a parameter of the equation,
$$\rho=(\rho_{a_1},\dots,\rho_{a_n})\in\mathcal{D}=[\rho_{a_1}',\rho_{a_1}'']\times\dots\times[\rho_{a_n}',\rho_{a_n}'']$$
(all other Fourier coefficients are fixed). We denote $\mu_a=|a|^4+\hat V(a)$, $a\in\mathbb{Z}^{d_*}$, and assume that $\mu_a>0$ for all $a\in\mathcal{A}$, i.e. $|a|^4+\rho_a'>0$ if $a\in\mathcal{A}$. We also suppose that
$$\mu_l\ne0,\quad \mu_{l_1}\ne\mu_{l_2}\qquad\forall\, l,l_1,l_2\in\mathcal{L},\ l_1\ne l_2.$$
Denote
$$\mathcal{F}=\{a\in\mathcal{L}:\mu_a<0\},\;\;|\mathcal{F}|=:N,\quad\mathcal{L}_\infty=\mathcal{L}\setminus\mathcal{F}\,,$$
consider the operator
$$\Lambda=|\Delta^2+V\star\,|^{1/2}=\operatorname{diag}\{\lambda_a,\ a\in\mathbb{Z}^{d_*}\}\,,\quad\lambda_a=\sqrt{|\mu_a|}\,,$$
and the following operator $\Lambda^{\#}$, linear over the real numbers:
$$\Lambda^{\#}(ze^{i\langle a,x\rangle})=\left\{\begin{array}{ll} z\lambda_a e^{i\langle a,x\rangle}, & a\in\mathcal{L}_{\infty}\,,\\ -\bar z\lambda_a e^{i\langle a,x\rangle}, & a\in\mathcal{F}.\end{array}\right.$$
Introducing the complex variable
$$\psi=\frac1{\sqrt 2}\big(\Lambda^{1/2}u-i\Lambda^{-1/2}\dot u\big)=(2\pi)^{-d_*/2}\sum_{a\in\mathbb{Z}^{d_*}}\psi_a e^{i\langle a,x\rangle}\,,$$
we get for it the equation (cf. \cite[Section 1.2]{EGK})
\begin{equation}\label{k1}
\dot\psi=i\Big(\Lambda^{\#}\psi+\varepsilon\frac1{\sqrt2}\Lambda^{-1/2}g\Big(x,\Lambda^{-1/2}\Big(\frac{\psi+\bar\psi}{\sqrt 2}\Big)\Big)\Big)\,.
\end{equation}
Writing $\psi_a=(u_a+iv_a)/\sqrt2$ we see that eq.~\eqref{k1} is a Hamiltonian system with respect to the symplectic form $\sum dv_s\wedge du_s$ and the Hamiltonian $h=h_{\textrm{up}}+\varepsilon P$, where
$$P=\int_{\mathbb{T}^{d_*}}G\Big(x,\Lambda^{-1/2}\Big(\frac{\psi+\bar\psi}{\sqrt 2}\Big)\Big)\,{\rm d}x\,,\qquad\partial_u G(x,u)=g(x,u)\,,$$
and $h_{\textrm{up}}$ is the quadratic Hamiltonian
$$h_{\textrm{up}}(u,v)=\sum_{a\in\mathcal{A}}\lambda_a|\psi_a|^2+\Big\langle\left(\begin{array}{c}u_{\mathcal{F}}\\ v_{\mathcal{F}}\end{array}\right),H\left(\begin{array}{c}u_{\mathcal{F}}\\ v_{\mathcal{F}}\end{array}\right)\Big\rangle+\sum_{a\in\mathcal{L}_\infty}\lambda_a|\psi_a|^2\,.$$
Here $u_{\mathcal{F}}={}^t(u_a,\ a\in\mathcal{F})$ and $H$ is a symmetric $2N\times2N$-matrix. The $2N$ eigenvalues of the Hamiltonian operator with the matrix $H$ are the real numbers $\{\pm\lambda_a,\ a\in\mathcal{F}\}$. So the linear system \eqref{beamm}${}\mid_{\varepsilon=0}$ is stable if and only if $N=0$. Let us fix any $n$-vector $I=\{I_a>0,\ a\in\mathcal{A}\}$ with positive components.
The $n$-dimensional torus
$$\left\{\begin{array}{ll} |\psi_a|^2=I_a, & a\in\mathcal{A},\\ \psi_a=0, & a\in\mathcal{L}=\mathbb{Z}^{d_*}\setminus\mathcal{A},\end{array}\right.$$
is invariant for the unperturbed linear equation; it is linearly stable if and only if $N=0$. In the linear space ${\rm span}\{\psi_a,\ a\in\mathcal{A}\}$ we introduce the action-angle variables $(r_a,\theta_a)$ through the relations $\psi_a=\sqrt{(I_a+r_a)}\,e^{i\theta_a}$, $a\in\mathcal{A}$. The unperturbed Hamiltonian becomes
$$h_{\textrm{up}}=\text{const}+\langle r,\omega(\rho)\rangle+\Big\langle\left(\begin{array}{c}u_{\mathcal{F}}\\ v_{\mathcal{F}}\end{array}\right),H\left(\begin{array}{c}u_{\mathcal{F}}\\ v_{\mathcal{F}}\end{array}\right)\Big\rangle+\sum_{a\in\mathcal{L}_\infty}\lambda_a|\psi_a|^2\,,$$
with $\omega(\rho)=(\omega_a=\lambda_a,\ a\in\mathcal{A})$, and the perturbation becomes
$$P=\varepsilon\int_{\mathbb{T}^{d_*}}G\big(x,\hat u(r,\theta;\zeta)(x)\big)\,{\rm d}x,\quad \hat u(r,\theta;\zeta)(x)=\Lambda^{-1/2}\Big(\frac{\psi+\bar\psi}{\sqrt2}\Big),$$
i.e.
$$\hat u=\sum_{a\in\mathcal{A}}\frac{\sqrt{(I_a+r_a)}\,(e^{i\theta_a}\phi_a+e^{-i\theta_a}\phi_{-a})}{\sqrt{2\lambda_a}}+\sum_{a\in\mathcal{L}}\frac{\psi_a\phi_a+\bar\psi_a\phi_{-a}}{\sqrt{2\lambda_a}}.$$
In the symplectic coordinates $((u_a,v_a),\ a\in\mathcal{L})$ the Hamiltonian $h_{\textrm{up}}$ has the form \eqref{equation1.1}, and we wish to apply to the Hamiltonian $h=h_{\textrm{up}}+\varepsilon P$ Theorem \ref{main} and Corollary \ref{cMain}. The assumption A1 with constants $c,c'$ of order one and $\beta_1=\beta_2=\beta_3=2$ holds trivially. The assumption A2 also holds, since for each case (i)--(iii) the second alternative with $\omega(\rho)=\rho$ is fulfilled for some $\delta_0\sim1$. Finally, the assumptions R1 and R2 with $\varkappa=1$ and suitable constants $\gamma_1,\sigma,\mu>0$ and $\gamma_2=m_*$ are valid in view of Lemma~3.2 in \cite{EGK}. More exactly, the validity of the assumption R1 is a part of the lemma's assertion. The lemma also states that the second differential $Jd^2 f$ defines holomorphic mappings
$$Jd^2f:\mathcal{O}_{\gamma'}(\sigma,\mu)\to M_{\gamma'}^D\,,\qquad\gamma'\le\gamma\,,$$
where $M_{\gamma}^D$ is the space of matrices $A$, formed by $2\times 2$-blocks $A_a^b$, such that
$$|A|_\gamma^D:=\sup_{a,b}\langle a\rangle\langle b\rangle|A_a^b|\max([a-b],1)^{\gamma_2}e^{\gamma_1[a-b]}<\infty\,.$$
It is easy to see that $M_{\gamma}^D\subset\mathcal M^b_{\gamma,\varkappa}$ if $\varkappa=1$ and $m_*$, entering the definition of $\mathcal M^b_{\gamma,\varkappa}$, is sufficiently big. So R2 also holds.

Let us set $u_0(\theta,x)=\hat u(0,\theta;0)(x)$. Then for every $I\in\mathbb{R}_+^n$ and $\theta_0\in\mathbb{T}^{n}$ the function $(t,x)\mapsto u_0(\theta_0+t\omega,x)$ is a solution of \eqref{beamm} with $\varepsilon=0$.
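This last fact is immediate to check mode by mode. The short sympy sketch below is our own verification, for a single Fourier mode in dimension $d_*=1$ and with an inessential choice of normalisation: it uses $\lambda_a^2=\mu_a=|a|^4+\hat V(a)$ and the fact that convolution with $V$ acts on the mode $e^{iax}$ as multiplication by $\hat V(a)$.
\begin{verbatim}
import sympy as sp

# Check that one mode of u0(theta0 + t*omega, x) solves the linear beam
# equation u_tt + u_xxxx + V*u = 0 (d_* = 1).  For a single Fourier mode,
# convolution with V is multiplication by the Fourier coefficient Vhat(a).
x, t, theta0, I_a, Vhat = sp.symbols('x t theta0 I_a Vhat', real=True)
a = sp.symbols('a', integer=True, positive=True)

mu = a**4 + Vhat               # mu_a = |a|^4 + Vhat(a), assumed positive here
lam = sp.sqrt(mu)              # lambda_a, the frequency omega_a of the mode

u = sp.sqrt(2*I_a/lam)*sp.cos(a*x + theta0 + lam*t)   # one real mode of u0

residual = sp.diff(u, t, 2) + sp.diff(u, x, 4) + Vhat*u
print(sp.simplify(residual))   # prints 0
\end{verbatim}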
Application of Theorem \ref{main} and Corollary \ref{cMain} gives us the following result:
\begin{theorem}\label{t72}
For $\varepsilon$ sufficiently small there is a Borel subset ${\mathcal{D}_\varepsilon}\subset\mathcal{D}$, $\operatorname{meas}(\mathcal{D}\setminus{\mathcal{D}_\varepsilon})\leq C\varepsilon^{\alpha}$, $\alpha>0$, such that for $\rho\in{\mathcal{D}_\varepsilon}$ there is a function $u_1(\theta,x)$, analytic in $\theta\in\mathbb{T}^n_{\frac\sigma2}$ and $H^{d_*}$-smooth in $x\in\mathbb{T}^{d_*}$, satisfying
$$\sup_{|\Im\theta|<\frac\sigma2}\|u_1(\theta,\cdot)-u_0(\theta,\cdot)\|_{H^{d_*}(\mathbb{T}^{d_*})}\leq\beta\varepsilon,$$
and there is a mapping $\omega':{\mathcal{D}_\varepsilon}\to\mathbb{R}^n$, $\|\omega'-\omega\|_{C^1({\mathcal{D}_\varepsilon})}\leq\beta\varepsilon$, such that for $\rho\in{\mathcal{D}_\varepsilon}$ the function $u(t,x)=u_1(\theta+t\omega'(\rho),x)$ is a solution of the beam equation \eqref{beamm}. Equation \eqref{k1}, linearised around its solution $\psi(t)$ corresponding to the solution $u(t,x)$ above, has exactly $N$ unstable and $N$ stable directions.
\end{theorem}
The last assertion of this theorem follows from item (iii) of Theorem \ref{main}, which implies that the linearised equation, in the directions corresponding to $\mathcal{L}$, reduces to a linear equation with a coefficient matrix which can be written as $B=B_{\mathcal{F}}\oplus B_\infty$. The operator $B_{\mathcal{F}}$ is close to the Hamiltonian operator with the matrix $H$, so it has $N$ stable and $N$ unstable directions, while the matrix $B_\infty$ is skew-symmetric, so it has imaginary spectrum.
\begin{remark}
This result was proved by Geng and You \cite{GY06a} for the case when the perturbation $g$ does not depend on $x$ and the unperturbed linear equation is stable.
\end{remark}

\subsection{NLS equation with a smoothing nonlinearity}\label{s4.2}

Consider the NLS equation with the Hamiltonian
$$g(u)=\tfrac12\int|\nabla u|^2\,dx+\frac{m}2\int|u(x)|^2\,dx+\varepsilon\int f\big(t,(-\Delta)^{-\alpha}u(x),x\big)\,dx,$$
where $m\ge0$, $\alpha>0$, $u(x)$ is a complex function on the torus $\mathbb{T}^{d_*}$ and $f$ is a real-analytic function on $\mathbb{R}\times\mathbb{R}^2\times\mathbb{T}^{d_*}$ (here we regard $\mathbb{C}$ as $\mathbb{R}^2$). The corresponding Hamiltonian equation is
\begin{equation}\label{-2.1}
\dot u=i\big(-\Delta u+mu+\varepsilon(-\Delta)^{-\alpha}\nabla_2 f(t,(-\Delta)^{-\alpha}u(x),x)\big)\,,
\end{equation}
where $\nabla_2$ is the gradient with respect to the second variable, $u\in\mathbb{R}^2$. We have to introduce in this equation a vector-parameter $\rho\in\mathbb{R}^n$. To do this we can either assume that $f$ is time-independent and add a convolution-potential term $V(x,\rho)*u$ (cf. \eqref{beamm}), or assume that $f$ is a quasiperiodic function of time, $f=F(\rho t,u(x),x)$, where $\rho\in\mathcal{D}\Subset\mathbb{R}^n$. Cf. \cite{BB}. Let us discuss the second option.
In this case the non-autonomous equation \eqref{-2.1} can be written as an autonomous system on the extended phase-space $\mathcal{O}\times\mathbb{T}^n\times L_2=\{(r,\theta,u(\cdot))\}$, where $L_2=L_2(\mathbb{T}^{d_*};\mathbb{R}^2)$ and $\mathcal{O}$ is a ball in $\mathbb{R}^n$, with the Hamiltonian
\begin{equation*}\begin{split}
&g(r,u,\rho)=h_{\textrm{up}}(r,u,\rho)+\varepsilon\int F\big(\theta,(-\Delta)^{-\alpha}u(x),x\big)\,dx,\\
&h_{\textrm{up}}(r,u,\rho)=\langle\rho,r\rangle+\tfrac12\int|\nabla u|^2\,dx+\frac{m}2\int|u(x)|^2\,dx.
\end{split}\end{equation*}
Assume that $m>0$\footnote{\ If undesirable, the term $imu$ can be removed from eq.~\eqref{-2.1} by means of the substitution $u(t,x)=u'(t,x)e^{imt}$.} and take for $A_{\textrm{up}}$ the operator $-\Delta+m$ with the eigenvalues $\lambda_a=|a|^2+m$. Then the Hamiltonian $g(r,u,\rho)$ has the form required by Theorem \ref{main} with
$$\mathcal{L}=\mathbb{Z}^{d_*},\quad\mathcal{F}=\emptyset,\quad\varkappa=\min(2\alpha,1),\quad\beta_1=2,\quad\beta_2=0,\quad\beta_3=2$$
(any $\beta_3$ will do here in fact) and suitable $\sigma,\mu,\gamma_1>0$ and $\gamma_2=m_*$. The theorem applies and implies that, for a typical $\rho$, equation \eqref{-2.1} has time-quasiperiodic solutions of order $\varepsilon$. The equation, linearised about these solutions, reduces to constant coefficients and all its Lyapunov exponents are zero.

If $\alpha=0$, equation \eqref{-2.1} becomes significantly more complicated. Still the assertions above remain true, since they follow from the KAM-theorem in \cite{EK10}. Cf. \cite{EK09}, where the nonautonomous linear Schr\"odinger equation is considered; this is equation \eqref{-2.1} with the perturbation $\varepsilon(-\Delta)^{-\alpha}\nabla_2f$ replaced by $\varepsilon V(\rho t,x)u$, and it is proved that this equation reduces to an autonomous equation by means of a time-quasiperiodic linear change of the variable $u$. In \cite{BB} equation \eqref{-2.1} with $\alpha=0$ and $f=F(\rho t,(-\Delta)^{-\alpha}u(x),x)$ is considered for the case when the constant-potential term $mu$ is replaced by $V(x)u$ with an arbitrary sufficiently smooth potential $V(x)$. It is proved that for a typical $\rho$ the equation has small time-quasiperiodic solutions, but not that the linearised equations are reducible to constant coefficients.

\appendix
\section{}
\subsubsection{Transversality}\label{ssTransversality}
Let $\mathcal{D}$ be the unit ball in $\mathbb{R}^p$. For any matrix-valued function
$$f:\mathcal{D}\to gl(\dim,\mathbb{C}),$$
let
$$\Sigma(f,\varepsilon)=\Big\{\rho\in\mathcal{D}:\aa{f(\rho)^{-1}}>\frac1{\varepsilon}\Big\},$$
where $\aa{\,\cdot\,}$ is the operator norm.
\begin{lemma}\label{lTransv1}
Let $f:\mathcal{D}\to\mathbb{C}$ be a $\mathcal{C}^{{s_*}}$-function which is $({\mathfrak z},j,\delta_0)$-transverse, $1\le j\le{s_*}$. Then
$$\operatorname{Leb}\{\rho\in\mathcal{D}:\ab{f(\rho)}<\varepsilon\}\le C\frac{|\nabla_{\rho}f|_{\mathcal{C}^{{s_*}-1}(\mathcal{D})}}{\delta_0}\Big(\frac\varepsilon{\delta_0}\Big)^{\frac1j}.$$
$C$ is a constant that only depends on ${s_*}$ and $p$.
\end{lemma}
\begin{proof}
It is enough to prove this for ${\mathfrak z}=(1,0,\dots,0)$, i.e. for a scalar $\rho$. It is a well-known result, see for example Lemma B.1 in \cite{E02}, that
$$\operatorname{Leb}(\Sigma(f,\varepsilon))\le C\frac{|f|_{\mathcal{C}^{{s_*}}(\mathcal{D})}}{\delta_0}\Big(\frac\varepsilon{\delta_0}\Big)^{\frac1j}.$$
This implies the claim.
\end{proof}

\subsubsection{Extension}\label{ssExtension}
\begin{lemma}\label{lExtension}
Let $X\subset Y$ be subsets of $\mathcal{D}_0$ such that
$$\underline{\operatorname{dist}}(\mathcal{D}_0\setminus Y,X)\ge\varepsilon.$$
Then there exists a $\mathcal{C}^\infty$-function $g:\mathcal{D}_0\to\mathbb{R}$, equal to $1$ on $X$ and to $0$ outside $Y$, such that for all $j\ge 0$
$$|g|_{\mathcal{C}^j(\mathcal{D}_0)}\le C\Big(\frac C{\varepsilon}\Big)^j.$$
$C$ is an absolute constant.
\end{lemma}
\begin{proof}
This is a classical result, obtained by convolving the characteristic function of $X$ with a $\mathcal{C}^\infty$-approximation of the Dirac delta supported in a ball of radius $\le\frac{\varepsilon}2$.
\end{proof}

\begin{thebibliography}{99}
\bibitem{Arn} V.I. Arnold. \newblock Mathematical Methods of Classical Mechanics, 3rd edition. \newblock Springer-Verlag, Berlin, 2006.
\bibitem{BB} M. Berti and P. Bolle. \newblock Quasi-periodic solutions with Sobolev regularity of NLS on $\mathbb{T}^d$ and a multiplicative potential. \newblock{\em J. European Math. Society} \textbf{15} (2013), 229--286.
\bibitem{E88} L.H. Eliasson. \newblock Perturbations of stable invariant tori for Hamiltonian systems. \newblock{\em Annali della Scuola Normale Superiore di Pisa} \textbf{15} (1988), 115--147.
\bibitem{E02} L.H. Eliasson. \newblock Perturbations of linear quasi-periodic systems. \newblock{\em Lecture Notes in Mathematics} \textbf{1784} (2002), 1--60.
\bibitem{EGK} L.H. Eliasson, B. Gr\'ebert and S.B. Kuksin. \newblock KAM for the non-linear beam equation 1: small-amplitude solutions. \newblock{\em arXiv:1412.2803}.
\bibitem{EK09} L.H. Eliasson and S.B. Kuksin. \newblock On reducibility of Schr\"odinger equations with quasiperiodic in time potentials. \newblock{\em Comm. Math. Phys.} \textbf{286} (2009), no. 1, 125--135.
\bibitem{EK10} L.H. Eliasson and S.B. Kuksin. \newblock KAM for the nonlinear Schr\"odinger equation. \newblock{\em Ann. Math.} \textbf{172} (2010), 371--435.
\bibitem{Y99} J. Geng and J. You. \newblock Perturbations of lower dimensional tori for Hamiltonian systems. \newblock{\em J. Diff. Eq.} \textbf{152} (1999), 1--29.
\bibitem{GY06a} J. Geng and J. You. \newblock A KAM theorem for Hamiltonian partial differential equations in higher dimensional spaces. \newblock{\em Comm. Math. Phys.} \textbf{262} (2006), 343--372.
\bibitem{GY06b} J. Geng and J. You. \newblock KAM tori for higher dimensional beam equations with constant potentials. \newblock{\em Nonlinearity} \textbf{19} (2006), 2405--2423.
\bibitem{K87} S.B. Kuksin. \newblock Hamiltonian perturbations of infinite-dimensional linear systems with an imaginary spectrum. \newblock{\em Funct. Anal. Appl.} \textbf{21} (1987), 192--205.
\bibitem{K00} S.B. Kuksin. \newblock Analysis of Hamiltonian PDEs. \newblock Oxford University Press, 2000.
\end{thebibliography}
\end{document}
\begin{document} \title{Entanglement enhanced atomic gyroscope} \author{J.J. Cooper, D.W. Hallwood, and J. A. Dunningham} \affiliation{School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT, United Kingdom} \pacs{03.75.-b, 03.75.Dg, 03.75.Lm, 37.25.+k, 67.85.-d} \begin{abstract} The advent of increasingly precise gyroscopes has played a key role in the technological development of navigation systems. Ring-laser and fibre-optic gyroscopes, for example, are widely used in modern inertial guidance systems and rely on the interference of unentangled photons to measure mechanical rotation. The sensitivity of these devices scales with the number of particles used as $1/\sqrt{N}$. Here we demonstrate how, by using sources of entangled particles, it is possible to do better and even achieve the ultimate limit allowed by quantum mechanics where the precision scales as $1/N$. We propose a gyroscope scheme that uses ultra-cold atoms trapped in an optical ring potential. \end{abstract} \maketitle \section{Introduction} Optical interferometers have revolutionised the field of metrology, enabling path length differences to be measured, for the first time, to less than the wavelength of the light being used. As well as their more familiar linear versions, interferometers can also be used in ring geometries to make accurate measurements of angular momentum. These interferometric gyroscopes surpass the precision of their mechanical counterparts and form a key component of many modern navigation systems. They work by exploiting the different path lengths experienced by light as it propagates in opposite directions around a rotating ring. For instance, in the Sagnac geometry \cite{Sagnac1913} photons are put into a superposition of travelling in opposite directions around a ring and, when rotated, the two directions acquire different phases. This phase difference is directly related to the rate of rotation and can be measured by recombining the two components at a beam splitter and recording the intensity at each of the outputs. Such schemes use streams of photons that are independent of one another, i.e. not entangled. In this case the measurement accuracy is fundamentally limited by the jitter in the recorded intensities due to the fact that photons come in discrete packages. This is known as the shot-noise limit and restricts the measurement to a precision that scales inversely with the square root of the total number of photons. One possible way to beat this precision limit is to use entangled particles \cite{Yurke86}. In fact it has long been known that the precision of optical interferometers is improved with the use of squeezed or entangled states of light \cite{Caves1981, Lee2002, Giovannetti2004}. In principle, this should enable us to reach the Heisenberg limit whereby the precision scales inversely with the total number of particles. The precision of gyroscopic devices can be improved further still with the use of entangled atomic states due to their mass enhancement factor over equivalent photonic devices \cite{Dowling1998}. The use of entangled atomic states to make precision measurements of rotations was first proposed in \cite{Dowling1998}. Since then, and with the experimental realization of Bose-Einstein condensates, the use of entangled atoms for precision measurements has been widely researched. As in the optical case, of key importance to the ultimate precision afforded by an atomic device is the input state used. 
The use of number-squeezed atomic states allows for Heisenberg-limited precision and, as such, there has been much research into the generation and uses of these squeezed states \cite{Orzel2001, Li2007, Esteve2008, Haine2009}. Several proposals have already been made to use these squeezed, and other entangled atomic states, such as maximally entangled `NOON' states, to make general phase measurements with sub-shot-noise sensitivities \cite{Bouyer1997, Dunningham04, Pezze2005, Pezze2006, Dunningham01a, Pezze2007, Pezze2009}. It has also been shown that uncorrelated atoms can achieve sub-shot-noise sensitivities for rotational phase shifts \cite{Search2009} by using a chain of matter-wave interferometers, or a chain of gyroscopes. Here we propose a gyroscope scheme that uses squeezed and entangled atomic inputs to push the sensitivity of rotational phase measurements below the shot noise limit. We extend the investigation of optimal input states to an experimentally accessible atomic gyroscope scheme capable of measuring small rotations. It works by trapping ultra-cold atoms in a one-dimensional optical lattice in a ring geometry and carefully evolving the trapping potential. We investigate the precision achieved with different inputs and show that although a so-called NOON state produces the best precision, another state (sometimes referred to as a `bat' state), created by passing a number-squeezed state through a beam splitter, is far more robust and might therefore be a preferred candidate. It should be noted that it is possible to beat the Heisenberg precision scaling of $1/N$ in some measurement schemes. In fact, precision scalings of $N^{-3/2}$ can be achieved even when the initial state is unentangled in a few specific metrology protocols \cite{Boixo2008, Boixo2009a}. This is achieved through a nonlinear coupling between the quantum probe and the parameter to be measured. Therefore BECs, with their particle interactions, may naturally lend themselves to this. However, the challenge would be to find ways of coupling the angular momentum we wish to measure to the scattering length of the atoms. While this may provide an interesting future direction for this work, in light of this difficulty, we concentrate for now on attaining the Heisenberg limit through optimizing the input state.

\section{The system}
Our system consists of a collection of ultra-cold atoms trapped by the dipole force in an optical lattice loop of three sites. Rings with this geometry have already been experimentally demonstrated \cite{Boyer2006, Henderson2009}. For a sufficiently cold system, we need only consider a single level in each site and so can describe the system using the Bose-Hubbard Hamiltonian,
\begin{equation} \label{ham1} \frac{H}{\hbar}=\sum_{j=0}^{2}\epsilon_{j}a_{j}^{\dag}a_{j} -\sum_{j=0}^{2}J_{j}\left(a_{j}^{\dag}a_{j+1} + a_{j+1}^{\dag}a_{j} \right)+\sum_{j=0}^{2}V_{j}a_{j}^{\dag}{}^{2}a_{j}^{2}, \end{equation}
where $a_{j}$ is the annihilation operator for an atom at site $j$ and $J_j$ is the coupling strength between sites $j$ and $j+1$. The ring geometry means that $a_{j}=a_{j+3}$. The parameter $V_{j}$ is the strength of the interaction between atoms on site $j$ and $\epsilon_{j}$ accounts for the energy offset of site $j$. In general, we take the zero point energy to be the same for each site and so set $\epsilon_j=0$.
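For readers who wish to experiment numerically, the short Python sketch below builds the Hamiltonian of Eq.~(\ref{ham1}) in the three-site Fock basis for a small atom number. It is our illustration rather than part of the scheme itself, and for simplicity it assumes uniform couplings, $J_j=J$ and $V_j=V$, with placeholder values.
\begin{verbatim}
import itertools
import numpy as np

def fock_basis(n_atoms, n_sites=3):
    """All occupations (n0, n1, n2) with n0 + n1 + n2 = n_atoms."""
    return [s for s in itertools.product(range(n_atoms + 1), repeat=n_sites)
            if sum(s) == n_atoms]

def bose_hubbard_ring(n_atoms, J, V, eps=(0.0, 0.0, 0.0)):
    """H/hbar of Eq. (ham1) for a 3-site ring with uniform J and V."""
    basis = fock_basis(n_atoms)
    index = {s: i for i, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for s in basis:
        i = index[s]
        # on-site offsets and interactions: eps_j*n_j + V*n_j*(n_j - 1)
        H[i, i] = sum(eps[j]*s[j] + V*s[j]*(s[j] - 1) for j in range(3))
        # hopping -J (a_j^dag a_{j+1} + h.c.) around the ring (a_3 = a_0)
        for j in range(3):
            k = (j + 1) % 3
            if s[k] > 0:
                t = list(s); t[k] -= 1; t[j] += 1
                amp = -J*np.sqrt(s[k]*(s[j] + 1))
                H[i, index[tuple(t)]] += amp
                H[index[tuple(t)], i] += amp
    return np.array(basis), H

basis, H = bose_hubbard_ring(n_atoms=4, J=1.0, V=0.0)
print(len(basis))            # 15 Fock states for N = 4 atoms on 3 sites
print(np.allclose(H, H.T))   # True: the Hamiltonian is Hermitian
\end{verbatim}
For $N=4$ the basis contains only 15 Fock states, so the splitting and evolution steps discussed below can, if desired, be checked by exact diagonalisation of this matrix.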
For the purposes of this work it will be convenient to describe the system in terms of the quasi-momentum (or flow) basis which is related to the site basis by, \begin{equation} \alpha_k = \frac{1}{\sqrt{3}}\sum_{j=0}^{2} e^{i2\pi jk/3}a_{j}, \label{fourier} \end{equation} where $\alpha_k$ corresponds to the annihilation of an atom with $k$ quanta of flow. We shall use positive subscripts to refer to clockwise flow and negative subscripts to refer to anti-clockwise flow. \section{Scheme 1: Uncorrelated particles} The first scheme we present uses unentangled atoms to achieve shot noise limited precisions. To begin with, the potential barriers between the sites are high and $N$ atoms are contained within one site, say site zero. The initial state of the system is therefore $|\psi\rangle_{U0}=|N,0,0\rangle$ where the terms in the ket represent the number of atoms in sites zero, one and two respectively. The first step is to rapidly reduce the potential barrier between just two sites, we choose sites zero and one, in such a way that the two sites remain separate but there is strong coupling between them. This must be done rapidly with respect to the tunneling time, but slowly with respect to the energies associated with excited states in order to ensure the system remains in the ground state. This separation of timescales has already been demonstrated experimentally \cite{Greiner2002a}. In this regime the coupling between the two sites is much larger than their on site interactions and the Hamiltonian describing the two sites is, \begin{equation} \frac{H_{2J}}{\hbar}=-J(a_0^\dag a_1+a_1^\dag a_0) . \label{H2J} \end{equation} Importantly, the remaining two barriers are high ($V \gg J$) and so prevent tunneling between sites one and two and sites two and zero. The system is left to evolve for time $t=\pi/4J$ whilst this barrier is low. This is equivalent to applying a two port 50:50 beam splitter to our initial state (as shown in \cite{Dunningham01a}) and so transforms $|\psi\rangle_{U0}$ to, \begin{equation} |\psi\rangle_{U1} = \frac{1}{\sqrt{2^{N}N!}}(a_0^\dag + ia_1^\dag)^N|0,0,0\rangle . \label{psi1s1} \end{equation} Each individual atom is now equally likely to be on site zero or one. In other words we have $N$ single-particle superpositions on the two sites. The next step is to apply a three port beam splitter, or tritter. This splitting procedure is described in detail in \cite{Cooper09}. Essentially to achieve a tritter in this system we immediately lower the two remaining potential barriers, on the same timescale as before, and allow the system to evolve for a further $t=2\pi/9J$. This tritter operation is given by, \begin{equation} S_3= \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & e^{i2\pi /3} & e^{i2\pi /3} \\ e^{i2\pi /3} & 1 & e^{i2\pi /3} \\ e^{i2\pi /3} & e^{i2\pi /3} & 1 \end{pmatrix} \label{tritter} \end{equation} from which we can see $|\psi\rangle_{U1}$ is transformed to, \begin{equation} |\psi\rangle_{U2} = \frac{1}{\sqrt{2^{N}3^{N}N!}}\left( (a_0^\dag + e^{i2\pi /3}a_1^\dag + e^{i2\pi /3}a_2^\dag)+i(e^{i2\pi /3}a_0^\dag + a_1^\dag + e^{i2\pi /3}a_2^\dag) \right)^{N}|0,0,0\rangle . \end{equation} At this point we rapidly raise the potential barriers, `freezing' the atoms in the lattice sites. Comparing $|\psi\rangle_{U2}$ with equation (\ref{fourier}) we see that applying a $2\pi/3$ phase to site two results in a superposition of the $\alpha_{-1}$ and $\alpha_{1}$ flow states. 
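Before describing how the $2\pi/3$ phase is imprinted, we note that the splitting operations above admit a simple numerical check at the single-particle level (our own aside, not part of the original scheme): evolving the mode amplitudes under the two-site coupling of Eq.~(\ref{H2J}) for $t=\pi/4J$ reproduces the 50:50 beam splitter behind Eq.~(\ref{psi1s1}), and evolving under the full ring coupling for $t=2\pi/9J$ reproduces the tritter matrix $S_3$ of Eq.~(\ref{tritter}) up to an unobservable global phase.
\begin{verbatim}
import numpy as np

def evolve(H, t):
    """exp(-i H t) for a Hermitian matrix H, via its eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j*w*t)) @ v.conj().T

J = 1.0  # coupling strength (units with hbar = 1)

# Two coupled sites: single-particle Hamiltonian is -J*sigma_x.
U2 = evolve(-J*np.array([[0, 1], [1, 0]], dtype=complex), np.pi/(4*J))
print(np.round(np.sqrt(2)*U2, 6))   # [[1, i], [i, 1]] up to rounding

# Full ring: single-particle Hamiltonian is -J times the 3-cycle adjacency.
H3 = -J*np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=complex)
U3 = evolve(H3, 2*np.pi/(9*J))

w = np.exp(2j*np.pi/3)
S3 = np.array([[1, w, w], [w, 1, w], [w, w, 1]])/np.sqrt(3)

phase = U3[0, 0]/S3[0, 0]           # strip the global phase
print(np.allclose(U3, phase*S3))    # True
print(np.round(abs(phase), 12))     # 1.0, i.e. a pure phase
\end{verbatim}
The global phase is irrelevant because it multiplies every mode amplitude equally and cancels in all measured probabilities; the inverse tritter, obtained with twice the hold time, can be checked in the same way.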
This phase is achieved by applying an energy offset, $\epsilon_2$, to site two, whilst the barriers are high, for time $t_{\epsilon}=4\pi/(3\epsilon_2)$. Offset application times of $500$ns have been demonstrated experimentally \cite{Denschlag2000} and it is this time we shall use in section \ref{Interactions} when we assess the impact of non-zero interactions. We then immediately lower the barriers again so the atoms can flow around the loop. The resulting superposition can be written as, \begin{equation} |\psi\rangle_{U3} = \frac{1}{\sqrt{2^{N}N!}}(\alpha_{-1}^{\dag} + i\alpha_{1}^{\dag})^{N}|0,0,0\rangle \end{equation} where the terms in the ket now represent the number of atoms in each of the possible flow states, $\alpha_{-1}$, $\alpha_0$ and $\alpha_1$ respectively. This is now an $N$ single-particle flow superposition. At this point the $\alpha_{-1}$ and $\alpha_{1}$ states are degenerate, so $|\psi\rangle_{U3}$ does not evolve. However, we now apply the rotation we wish to measure, $\omega$, to the ring, which causes a phase, $\theta$, to be applied around it. The energies of the two flow states now change according to the Hamiltonian, \begin{equation} \frac{H_k}{\hbar} = -2J\sum_{k=-1}^1 \cos\left(\theta/3 - 2\pi k/3\right)\alpha_k^{\dag}\alpha_k . \end{equation} After a time $t_{\omega}$, and ignoring global phases, the state has evolved to \begin{equation} |\psi\rangle_{U4} = \frac{1}{\sqrt{2^{N}N!}} \left(e^{i2Jt_{\omega}\cos(\theta/3+2\pi/3)}(\alpha_{-1}^{\dag})+ie^{i2Jt_{\omega}\cos(\theta/3-2\pi/3)}(\alpha_{1}^{\dag}) \right)^{N}|0,0,0\rangle \end{equation} and so a phase difference of $\phi = 2\sqrt{3}Jt_{\omega}\sin(\theta/3)$ is established between the two flows, since $\cos(\theta/3-2\pi/3)-\cos(\theta/3+2\pi/3)=2\sin(\theta/3)\sin(2\pi/3)=\sqrt{3}\sin(\theta/3)$. We now wish to read out this phase difference, from which we can directly determine $\omega$ since $\omega = h\theta/(L^2 m)$ where $m$ is the mass of the atom and $L$ is the circumference of the ring. The read-out procedure involves sequentially undoing all the operations performed prior to the phase shift. This is analogous to standard Mach-Zehnder interferometry where an (inverse) beam splitter is placed after the phase shift to undo the initial beam splitting operation. The undoing process begins with the application of a $-2\pi/3$ phase to site two giving, \begin{equation} |\psi\rangle_{U5} = \frac{1}{\sqrt{2^{N}3^{N}N!}}\left((a_0^\dag + e^{i2\pi /3}a_1^\dag + e^{i2\pi /3}a_2^\dag)+ie^{i\phi}(e^{i2\pi /3}a_0^\dag + a_1^\dag + e^{i2\pi /3}a_2^\dag) \right)^N|0,0,0\rangle \end{equation} where the terms in the kets now, once again, represent the number of atoms in sites zero, one and two. Next we undo the tritter by applying an inverse tritter, $S_{3}^{-1}=S_{3}^\dag$. The inverse tritter operation is discussed in detail in \cite{Cooper09} but essentially it is achieved by lowering all three barriers and allowing the system to evolve for time $t=4\pi/9J$ (i.e. twice as long as for a tritter) giving, \begin{equation} |\psi\rangle_{U6} = \frac{1}{\sqrt{2^{N}N!}} \left(a_0^\dag + ie^{i\phi}a_1^\dag\right)^N |0,0,0\rangle \label{output} \end{equation} which is equivalent to $|\psi\rangle_{U1}$ but with a phase difference, $\phi$. Finally we apply an inverse two port 50:50 beam splitter. This is achieved in just the same way as the two port beam splitting operation described above, but with a hold time of $t=3\pi/4J$ rather than $t=\pi/4J$.
The resulting state is, \begin{equation} |\psi\rangle_{U7} = \frac{1}{\sqrt{N!}} \left( \cos \left(\frac{\phi}{2}\right)(a_0^\dag) - \sin \left(\frac{\phi}{2}\right)(a_1^\dag) \right)^N |0,0,0\rangle \end{equation} meaning the probabilities of detecting each atom at site zero and site one are, \begin{eqnarray} P_0 = \cos^2\left(\frac{\phi}{2} \right) \\ \nonumber P_1 = \sin^2\left(\frac{\phi}{2} \right) . \label{probS3} \end{eqnarray} Since the atoms are independent the total number detected in the two sites is given by a binomial distribution. The mean number of atoms detected at site zero is therefore $\langle n_0 \rangle = N\cos^2(\phi/2)$ and at site one it is $\langle n_1 \rangle = N\sin^2(\phi/2)$. By counting the number of atoms detected at each site we can determine $\phi$, and hence $\omega$, just as in a typical Mach-Zehnder interferometer. The precision with which this scheme enables us to measure $\omega$ can be found by calculating the quantum Fisher information, $F_Q$. This is a tool for evaluating the precision limits of quantum measurements and is independent of the measurement procedure. For a pure state $|\Psi(\phi)\rangle$ it is given by~\cite{Braunstein1994}, \begin{equation} F_Q=4\left[ \langle\Psi'(\phi)|\Psi'(\phi)\rangle - \left| \langle\Psi'(\phi)|\Psi(\phi)\rangle \right|^2 \right] . \label{Fq} \end{equation} We convert this into an uncertainty in $\phi$ using the Cramer-Rao lower bound \cite{Rao1945, Cramer1946, Helstrom1976}, \begin{equation} \Delta \phi \geq 1/ \sqrt{F_Q}. \label{Cramer} \end{equation} Using $|\psi\rangle_{U4}$ we find the maximum resolution scaling of $\phi$ with $N$ is $N^{-1/2}$, or equivalently $\Delta\theta \sim \sqrt{3}/(2Jt_{\omega}\cos(\theta/3)\sqrt{N})$. This translates to an uncertainty in $\omega$ of, \begin{equation} \Delta \omega \sim \left(\frac{h}{L^2 m}\right)\frac{\sqrt{3}}{2Jt_{\omega}\sqrt{N}} \end{equation} where we have made the approximation that $\theta/3 \ll 1$. This has the well-known $1/\sqrt{N}$ scaling that is a signature of the shot-noise limit. To summarize, shot-noise limited measurements of rotations are made as follows:\\ 1. Apply a two port 50:50 beam splitter to the first two modes of the state $|N,0,0\rangle$.\\ 2. Perform a three port beam splitter (tritter) operation to the state.\\ 3. Apply a $2\pi/3$ phase to site two.\\ 4. Leave the system to evolve for time $t_{\omega}$ under the rotation, $\omega$.\\ 5. Apply a $-2\pi/3$ phase to site two.\\ 6. Perform an inverse tritter operation on the state.\\ 7. Apply an inverse two port 50:50 beam splitter to the first two modes.\\ 8. Count the number of atoms in each site. We will now show how our scheme can be modified to create entangled states and will investigate the effect of this entanglement on the precision scaling of our rotation measurements. \section{Scheme 2: The bat state}\label{bat} We begin with $N/2$ atoms on site zero and on site one, i.e. $|\psi\rangle_{B0} = |N/2,N/2,0\rangle$. The production of dual Fock state BECs in a double well potential was first proposed in \cite{Spekkens1999}. This number squeezed state could be achieved by slowly applying a double well trapping potential to a condensate so that a phase transition occurs to the Mott insulator state and has been demonstrated experimentally in three-dimensional optical lattices \cite{Greiner2002}. Initially the barriers between the three sites are high and, as in scheme 1, a two port beam splitter is applied to the first two modes of $|\psi\rangle_{B0}$. 
The resulting output is sometimes referred to as a `bat' state since a plot of the amplitudes in the number basis resembles the ears of a bat. Steps two to seven are identical to scheme 1, resulting in, \begin{equation} |\psi\rangle_{B4} = \frac{1}{\sqrt{2^N}(N/2)!} \left((\alpha_{-1}^\dag)^2 + e^{i2\phi}(\alpha_{1}^\dag)^2 \right)^{N/2}|0,0,0\rangle \end{equation} and \begin{equation} |\psi\rangle_{B7} = \frac{1}{2^N \left(N/2\right)!} \left( \left(a_0^{\dag} -ia_1^{\dag}\right)^2 + e^{i2\phi} \left(-ia_0^{\dag} +a_1^{\dag}\right)^2 \right)^{N/2} |0,0,0\rangle \end{equation} where a global phase has been ignored and $\phi$ is again given by $\phi = 2\sqrt{3}Jt_{\omega}\sin(\theta/3)$. This is very similar to the scheme in \cite{Dunningham04}, where a bat state is used to measure a phase difference in a general Mach-Zehnder interferometer set-up, and in fact results in the same output. The difference between the schemes is that ours has been adapted to measure a rotation around a ring of lattice sites rather than a general phase between two paths. To determine $\phi$ we could count the number of atoms detected at each site and repeat. However, this requires nearly perfect detector efficiencies \cite{Kim1999} and so is experimentally challenging. Instead we use the read-out scheme described (in detail) in \cite{Dunningham04} to make our measurements. Essentially, after step seven the system is left to evolve with the barriers high for $\tau= \pi/(16V)$ (note in the original paper $\tau=\pi/(8U)$ because $U=2V$ here). The trapping potentials are then switched off and after some expansion time interference fringes are recorded. These two steps are the read-out steps and are what we shall refer to as step eight. The scheme is repeated many times and the visibility, $\mathcal{V}$, of the fringes is calculated as in \cite{Dunningham04}. From these visibility measurements we determine $\phi$ directly. Note that the quantum Fisher information depends only on the final state of the system and not on the measurement procedure. As such we use $|\psi\rangle_{B4}$ to determine the precision scaling of $\theta$ with $N$ and find $\Delta\theta \sim \sqrt{3}/(2Jt_{\omega}\cos(\theta/3)\sqrt{N(N/2+1)})$. This means the uncertainty in $\omega$ is, \begin{equation} \Delta\omega \sim \left(\frac{h}{L^2 m}\right) \frac{\sqrt{3}}{2Jt_{\omega} \cos(\theta/3)\sqrt{N(N/2+1)}} \sim \left(\frac{h}{L^2 m}\right) \frac{\sqrt{3}}{\sqrt{2}Jt_{\omega}N}, \label{omegaBAT} \end{equation} where we have made the approximations $N\gg 1$ and $\theta/3 \ll 1$. This has the same number scaling as the Heisenberg limit. We now present a third scheme which offers a slight numerical improvement in the precision of our measurements. We then consider the advantages and disadvantages of schemes 2 and 3 and discuss their experimental limitations.

\section{Scheme 3: The NOON state}\label{cat}
This scheme is very similar to scheme 1; the only difference is that the two port 50:50 beam splitter (and its inverse) is replaced with a two port quantum beam splitter (and an inverse two port quantum beam splitter). A two port quantum beam splitter (QBS) is defined as a device \cite{Dunningham} that outputs $(|N, 0\rangle+e^{i\xi}|0, N\rangle)/\sqrt{2}$ when $|N, 0\rangle$ is inputted. The terms in the kets represent the number of particles in the two modes. This output is called a NOON state. As in scheme 1, the three potential barriers are initially high and $N$ atoms are inputted into site zero, $|\psi\rangle_{N0}=|N,0,0\rangle$.
The first step is to apply the QBS. This is done using a scheme proposed in \cite{Dunningham01a}. The QBS begins with the application of a two port 50:50 beam splitter between sites zero and one, as previously described. A $\pi/2$ phase is then applied to one of the two sites using an energy offset as above. At this stage $V \gg J$ and the interactions are tuned such that their strength on one site is an integer multiple of the strength on the second site \footnote{This ensures that the required superposition is created independent of the total number of atoms, $N$}. The system is left to evolve for $t=\pi/2V$ in this regime after which a second two port beam splitter is applied. These steps output, \begin{equation} |\psi\rangle_{N1} \to \frac{1}{\sqrt{2N!}}\left((a_0^\dag)^N + e^{i\xi}(a_1^\dag)^N \right)|0,0,0\rangle . \label{2site} \end{equation} where $\xi$ is some relative phase established by the splitting procedure. Here we have a superposition of all $N$ atoms on site zero and all on site one. At the equivalent stage in scheme 1 we had $N$ single particle superpositions (see equation \ref{psi1s1}). It is this difference that is responsible for the improved precision. Steps two to six are identical to those in scheme 1 giving, \begin{equation} |\psi\rangle_{N4} = \frac{1}{\sqrt{2N!}} \left((\alpha_{-1}^\dag)^N + e^{i\xi}e^{iN\phi}(\alpha_{1}^\dag)^N \right)|0,0,0\rangle . \label{output} \end{equation} and \begin{equation} |\psi\rangle_{N6} = \frac{1}{\sqrt{2N!}} \left((a_0^\dag)^N + e^{i\xi}e^{iN\phi}(a_1^\dag)^N \right)|0,0,0\rangle . \label{output} \end{equation} Finally we apply an inverse two port quantum beam splitter to the system. This is achieved by sequentially undoing all the steps of the QBS. So,\\ 1. Apply an inverse two port beam splitter. \\ 2. Raise the barriers and tune the interactions as before. \\ 3. Leave the system to evolve for $t=\pi/2V$. \\ 4. Apply a $-\pi/2$ phase to the same site as before. \\ 5. Apply a second inverse two port beam splitter. \\ 6. Raise the barriers. This results in all $N$ atoms detected at site zero or all at site one with respective probabilities, \begin{eqnarray} P_0 = \cos^2\left(\frac{N\phi}{2} \right) \\ \nonumber P_1 = \sin^2\left(\frac{N\phi}{2} \right) . \end{eqnarray} By repeating the scheme many times, each time recording the site on which all $N$ atoms are detected, $\phi$, and hence $\omega$, can be determined. Using the quantum Fisher information and the Cramer-Rao lower bound the maximum resolution of scheme 3 is $\Delta \theta \sim \sqrt{3}/(2Jt_{\omega}\cos(\theta/3)N)$ meaning, for $\theta/3 \ll 1$, \begin{equation} \Delta \omega \sim \left(\frac{h}{L^2 m}\right)\frac{\sqrt{3}}{2Jt_{\omega}N}. \label{omegaNOON} \end{equation} We see the NOON state offers a slight improvement in resolution scaling over scheme 2. It has the same number scaling, but the numerical factor is $\sqrt{2}$ better. However, this slight improvement in resolution may come at great experimental expense. We now investigate the effect of experimental limitations on schemes 2 and 3. \section{Comparisons and practical limitations}\label{practical} Both schemes 2 and 3 allow Heisenberg limited precision measurements of small rotations with the precision capabilities of scheme 3 being marginally favorable. Our descriptions thus far, however, have only considered the idealized case and we have neglected important physical processes that may limit the experimental feasibility of the schemes. 
We now reevaluate the schemes when these limitations are accounted for. \subsection{Particle loss} We need to account for the fact that atoms may be lost during the schemes. It is well known that NOON states undergoing particle loss decohere quickly and so soon lose their Heisenberg limited sensitivity. However, it has previously been shown that certain entangled Fock states, that allow sub-shot noise phase measurements, are much more robust to these losses \cite{Huver2008}. Here we wish to see how resistant the bat state is to particle loss in comparison to the NOON state by determining the precisions afforded by both schemes in the presence of loss. We investigate the effects of loss using a procedure described in \cite{Demkowicz09, Dorner09} which models loss from the middle section (between the two beam splitters) of an ordinary two path interferometer using fictitious beam splitters whose transmissivities, $\eta$, are directly related to the rate of loss. The equivalent loss in our schemes is loss from the momentum modes during $t_{\omega}$. It was shown in \cite{Demkowicz09, Dorner09} that the time at which loss occurs during this interval is irrelevant. Since losses are equally likely from both modes we consider equal loss rates, $\eta$, from each momentum mode. To compare the effects of particle loss on our two schemes we determine $F_Q$ and $\Delta \phi$ for each scheme for different loss rates, or different $\eta$. As shown in \cite{Demkowicz09, Dorner09} $F_Q$ varies with $\eta$ as, \begin{equation} F_Q= \sum_{l=0}^{N}F_Q\left[ \sum_{l_{\alpha_1}=0}^{l}p_{l_{\alpha_1},l-l_{\alpha_1}}|\xi_{l_{\alpha_1},l-l_{\alpha_1}}(\phi)\rangle \langle \xi_{l_{\alpha_1},l-l_{\alpha_1}}(\phi)| \right] \end{equation} where $l$ is the total number of atoms lost, $l_{\alpha_1}$ is the number lost from the $\alpha_1$ mode, $p_{l_{\alpha_1},l-l_{\alpha_1}}$ is the probability of each loss event and \begin{equation} |\xi_{l_{\alpha_1},l-l_{\alpha_1}}(\phi)\rangle = \frac{1}{\sqrt{p_{l_{\alpha_1},l-l_{\alpha_1}}}} \sum_{m=l_{\alpha_1}}^{N-(l-l_{\alpha_1})}\beta_me^{im\phi}\sqrt{B_{l_{\alpha_1},l-l_{\alpha_1}}^m} |m-l_{\alpha_1}, N-m-(l-l_{\alpha_1})\rangle. \end{equation} Here \begin{equation} B_{l_{\alpha_1},l-l_{\alpha_1}}^m=\binom{m}{l_{\alpha_1}}\binom{N-m}{l-l_{\alpha_1}}\eta^N(\eta^{-1}-1)^l \end{equation} and $\beta_m$ is \begin{equation} \beta_m=\frac{i^{N/2}\sqrt{m!}\sqrt{(N-m)!}}{2^{N/2}(m/2)!(N/2-m/2)!}\times \frac{1+(-1)^m}{2} \end{equation} for the bat state and \begin{equation*} \beta_m = \left\{ \begin{array}{rl} 1/\sqrt{2} & \text{for} \ m=0,N\\ 0 & \text{for} \ m\ne0,N \end{array} \right. \end{equation*} for the NOON state. $F_Q$ is found numerically using \begin{equation} F_Q = Tr[\rho(\phi)A^2] \end{equation} where $A$ is the symmetric logarithmic derivative which is defined by, \begin{equation} \frac{\partial \rho(\phi)}{\partial \phi} = \frac{1}{2}[A\rho(\phi) + \rho(\phi)A]. \end{equation} It is given by, \begin{equation} (A)_{ij} = \frac{2}{\lambda_i + \lambda_j}\left[\frac{\partial\rho(\phi)}{\partial \phi}\right]_{ij} \end{equation} in the eigenbasis of $\rho(\phi)$, where $\lambda_{i,j}$ are the eigenvalues of $\rho(\phi)$. Combining these equations gives \cite{Ping2007}, \begin{equation} F_Q = \sum_{i,j}\frac{2}{\lambda_i+\lambda_j}\left|\left \langle \Psi_i \left| \frac{\partial \rho(\phi)}{\partial \phi} \right| \Psi_j \right \rangle \right|^2 \end{equation} where $\Psi_{i,j}$ are the eigenvectors of $\rho(\phi)$. 
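The eigenbasis expression above translates directly into a few lines of numerics. The sketch below is our illustration (not the authors' code): it evaluates $F_Q$ for an arbitrary $\phi$-dependent density matrix, using a symmetric finite difference for $\partial\rho/\partial\phi$ and skipping eigenvalue pairs with $\lambda_i+\lambda_j\approx0$, which do not contribute. As a cross-check it reproduces $F_Q=N^2$ for a lossless NOON state.
\begin{verbatim}
import numpy as np

def qfi(rho_of_phi, phi, dphi=1e-6, tol=1e-12):
    """F_Q = sum_{i,j} 2/(lam_i+lam_j) |<Psi_i| drho/dphi |Psi_j>|^2."""
    rho = rho_of_phi(phi)
    drho = (rho_of_phi(phi + dphi) - rho_of_phi(phi - dphi))/(2*dphi)
    lam, vec = np.linalg.eigh(rho)
    d = vec.conj().T @ drho @ vec        # drho in the eigenbasis of rho
    fq = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            if lam[i] + lam[j] > tol:
                fq += 2*abs(d[i, j])**2/(lam[i] + lam[j])
    return fq

def noon_rho(phi, N=4):
    """Lossless NOON state (|N,0> + e^{iN phi}|0,N>)/sqrt(2), basis |m, N-m>."""
    psi = np.zeros(N + 1, dtype=complex)
    psi[N] = 1/np.sqrt(2)                    # |N, 0>
    psi[0] = np.exp(1j*N*phi)/np.sqrt(2)     # |0, N>
    return np.outer(psi, psi.conj())

print(qfi(lambda p: noon_rho(p, N=4), phi=0.3))   # ~16 = N^2
\end{verbatim}
In principle, the $\eta$-dependence compared in Fig.~\ref{BSloss} can be reproduced by applying the same routine block by block to the lossy states defined above, one block for each loss event $(l_{\alpha_1},l-l_{\alpha_1})$.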
Figure \ref{BSloss} shows how $\Delta\phi$ varies with $\eta$ for the two schemes when $N=10$. As expected the NOON state achieves the best precision when $\eta=1$, or when there are no losses. However, as $\eta$ decreases the bat state soon becomes the favored scheme. The lower bound of the shaded area is the Heisenberg limit and the upper bound is the precision achievable when an uncorrelated, or classical, input state is used, as in scheme 1. We see the uncorrelated state soon outperforms the NOON state. However, the bat state outperforms the uncorrelated input for approximately half the loss rates shown. Since it is unlikely half the atoms would be lost in an experiment, the bat state, unlike the NOON state, appears to offer an experimentally feasible increase in precision over classical precision measurement experiments. In the remainder of the paper we therefore assess the impact of experimental limitations on just scheme 2. \begin{figure} \caption{The uncertainty of $\phi$ for different rates of loss, $\eta$, for $N=10$. The blue dashed line shows $\Delta\phi$ for scheme 2 and the green solid line shows $\Delta\phi$ for scheme 3. The upper bound of the shaded region is the precision afforded by scheme 1 (the classical precision limit - black dashed-dotted line) and the lower bound shows the Heisenberg limit. Scheme 3 soon becomes less favorable than scheme 1, whilst scheme 2 is much more robust to losses.} \label{BSloss} \end{figure} \subsection{Variations in N between experimental runs} Our scheme requires many repetitions of the gyroscope procedure in order to build up interference fringes from which $\phi$ can be determined. We have assumed thus far that each run involves exactly $N$ atoms. However, in an experiment $N$ is likely to fluctuate between runs. The effect of fluctuations of order $\sqrt{N}$ on the bat state input are discussed in \cite{Dunningham04}. In that case, an ordinary two-path linear interferometer is used but the same results apply here. It is shown that while the interference fringe signal is degraded by these fluctuations, the approximate Heisenberg limited sensitivity of the scheme is not destroyed. As expected, the larger $N$, the smaller the fluctuation effects, which is good since we would ideally work in the limit of large $N$ since this gives the best improvement in precision. \subsection{Interactions}\label{Interactions} So far we have considered only the idealized system setting $V=0$ (apart from in the detection step, step eight, where we require large interactions to minimize small coupling effects), and $J=0$ in the low coupling regime. While interactions can be tuned to extremely small values using Feshbach resonances it is an unrealistic assumption to discount them altogether. Likewise it is unrealistic to completely neglect coupling effects in the low coupling regime. Here we consider the effect of non-zero interactions and non-zero coupling strengths in the low coupling regime on scheme 2. 
To determine experimental orders of magnitude for $V$ and $J$ we use the approximations \cite{Scheel06}, \begin{equation} V \approx \frac{2a_{s}V_{0}^{\frac{3}{4}}E_{R}^{\frac{1}{4}}}{\hbar\sqrt{\lambda D}} \end{equation} \begin{equation} J \approx \frac{E_R}{2\hbar}\exp\left(-\left(\frac{\pi^2}{4}\right)\sqrt{\frac{V_0}{E_R}}\right)\left(\sqrt{\frac{V_0}{E_R}}+\left(\sqrt{\frac{V_0}{E_R}}\right)^3\right) \end{equation} where $a_{s}$ is the scattering length, $V_{0}$ the barrier height, $E_{R}$ the recoil energy, $\lambda$ the wavelength of the lattice light and $D$ the transverse width of the lattice sites. Feshbach resonances can tune $a_{s}$ to values smaller than the Bohr radius for some BECs \cite{Chin08} and in the high coupling regime barrier heights of order $V_{0}=2E_{R}$ have been demonstrated \cite{Jona03}. Using light of wavelength $\lambda=D=10\mu$m and ${}^{87}$Rb atoms, interactions can therefore be tuned to $V \sim 10^{-3}$Hz and $J \sim 10$Hz in this regime. In the low coupling regime, where $V_{0}=35E_{R}$ \cite{Greiner2002a}, we find $V \sim 10^{-2}$Hz and $J \sim 10^{-2}$Hz. Note that in the detection process the system evolves for $t=\pi/(16V)$ with high potential barriers. Here we require large interactions to minimize small coupling effects. Taking $a_s=9000a_0$ \cite{Cornish2000} gives $V \sim 100$Hz. Using these values we assess the impact of non-zero interactions and coupling strengths. As $N$ increases the occupation number per site increases and as such the effect of non-zero interactions becomes more pronounced. We would therefore like to determine the maximum number of atoms our system can tolerate before these effects become too destructive. To do this we measure the fidelity between the output of the gyroscope in the idealized case, in which $V=0$ (except in step eight) and $J=0$ in the low coupling regime, and the output when $V \sim 10^{-3}$Hz in the high coupling regime, $V \sim 10^{-2}$Hz in the low coupling regime and $J \sim 10^{-2}$Hz in the low coupling regime. The maximum $N$ that can be tolerated is taken to be the first $N$ for which this fidelity falls below 0.99. In this simulation we have taken $t_{\omega} = 1$s and $\theta=\pi/100$. Figure \ref{interactionN} shows how the fidelities decrease as $N$ increases. We see that by our definition the maximum number of atoms the system can tolerate is approximately 60. Squeezed states with larger numbers of atoms have been demonstrated experimentally \cite{Esteve2008} and as such interactions are likely to be one of the main limiting factors of this scheme. \begin{figure} \caption{The fidelity between the output of scheme 2 in the idealized case (where $V=0$, and $J=0$ in the low coupling regime) and the output in the non-idealized case (where $V \sim 10^{-3}$Hz in the high coupling regime, and $V \sim 10^{-2}$Hz and $J \sim 10^{-2}$Hz in the low coupling regime), as a function of $N$.} \label{interactionN} \end{figure} \subsection{Metastability} Our gyroscope scheme relies on the two counterpropagating superfluid states being stable at least over the timescale of the measurement that we want to make. There has been a lot of discussion and analysis of what conditions need to be met so that a superfluid current persists (or is metastable) in BECs. In a toroidal trap, for example, the condition for metastability is that the mean interaction energy per particle is greater than the single particle quantization energy $h^2/mL^2$ \cite{Leggett2001a}.
This can be shown to be equivalent to the condition that the superfluid velocity is less than the velocity of sound, which is just the Landau criterion for the metastability of superfluid flow. Persistent superfluid flows have recently been experimentally observed for BECs in toroidal traps \cite{Helmerson2007a}. There has also been an analysis of the metastability requirements for a system that is directly relevant to our scheme, i.e. a superfluid Bose gas trapped in a ring optical lattice \cite{Kolovsky2006a}. In this work, it was shown that any disorder due to energy offsets, $\epsilon$, on the lattice sites in the ring can cause dissipation by providing a coupling pathway between states with opposite quasi-momentum. If, however, there are interactions between the atoms, the energy levels are shifted and this mismatch means that the different supercurrent states become effectively decoupled and so the supercurrent should persist. Kolovsky showed that for states with low values of quasimomentum (such as the $\pm$1 unit of angular momentum states that we consider) the minimum strength of the interactions must be $V_{\rm min} \approx 8\epsilon/N$. Taking parameters from our scheme, in the strong tunneling region, we have $J \sim10$Hz and $V \sim 10^{-3}$Hz. This means that we require $\epsilon < N V/8 \approx 10^{-2}$Hz, for $N\approx 60$. By stabilizing the energy offsets to this level (or increasing the interactions) persistent supercurrents should be able to be achieved. \subsection{Comparison with other schemes} At this point we note that our precision analysis is for the case of a single shot, i.e. $N$ atoms are loaded into the lattice and a single measurement is made over time $t_{\omega}$ corresponding to a particle flux of $N/t_{\omega}$. In reality, the results of many runs will be combined to give a measurement of the rotation. Suppose we repeat the measurement $n$ times to give a total integration time of $\tau = nt_{\omega}$\footnote{Note that there are still $N$ atoms on each run, which lasts for time $t_{\omega}$, so the particle flux is the same as for the case of a single run.}. In this case, we get \begin{equation} \Delta\omega \approx \left(\frac{h}{L^2 m}\right) \frac{\sqrt{3}}{\sqrt{2n}Jt_{\omega}N} = \left(\frac{h}{L^2 m}\right) \frac{\sqrt{3}}{\sqrt{2t_{\omega} \tau}JN} = \frac{S}{\sqrt{\tau}}, \end{equation} where the short-time sensitivity is given by, \begin{equation} S = \left(\frac{h}{L^2 m}\right) \frac{\sqrt{3}}{\sqrt{2t_{\omega}}JN}. \end{equation} Substituting in approximate values for our setup in the strong coupling regime (i.e. $J\approx 10$Hz, $N\approx 60$, $t_{\omega}\approx 1$s and $L=2\pi \times 20 \mu$m \cite{Henderson2009}), we get $S\approx 10^{-3}$ rads$^{-1}/\sqrt{\rm{Hz}}$. This compares unfavorably with other atom interferometry schemes which can achieve sensitivities better than $10^{-8}$ rads$^{-1}/\sqrt{\rm{Hz}}$ \cite{Gustavson1997}. These other schemes achieve improved sensitivities by having much larger particle fluxes (e.g. $6\times 10^{8}$ atoms/s) and much larger areas enclosed by their interferometer paths (e.g. 22 mm${}^2$) \cite{Gustavson1997}. The scheme presented here is therefore unlikely to challenge the overall precision offered by other techniques, except perhaps in specialised cases where the number of atoms available is restricted to a small number or the area of the interferometer must be very small. 
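The order-of-magnitude estimates used in this and the preceding subsections can be reproduced with a few lines of code. The sketch below is illustrative only: it evaluates the expressions for $V$ and $J$ quoted in Section~\ref{Interactions} and the short-time sensitivity $S$ above, taking as assumptions the standard recoil energy $E_R=h^2/(2m\lambda^2)$ and a scattering length tuned to roughly one Bohr radius; the remaining parameter values are those quoted in the text.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34            # J s
h = 2 * np.pi * hbar              # J s
m = 86.909 * 1.66053907e-27       # mass of 87Rb in kg
a0 = 5.29177e-11                  # Bohr radius in m

lam = D = 10e-6                   # lattice light wavelength and transverse width (m)
E_R = h**2 / (2 * m * lam**2)     # recoil energy (assumed standard definition)

def V_onsite(a_s, V0):
    """On-site interaction, V ~ 2 a_s V0^(3/4) E_R^(1/4) / (hbar sqrt(lam D))."""
    return 2 * a_s * V0**0.75 * E_R**0.25 / (hbar * np.sqrt(lam * D))

def J_coupling(V0):
    """Tunnelling, J ~ (E_R/2 hbar) exp(-(pi^2/4) s) (s + s^3) with s = sqrt(V0/E_R)."""
    s = np.sqrt(V0 / E_R)
    return E_R / (2 * hbar) * np.exp(-(np.pi**2 / 4) * s) * (s + s**3)

print("high coupling (V0 = 2 E_R):  V ~ %.0e Hz, J ~ %.0e Hz"
      % (V_onsite(a0, 2 * E_R), J_coupling(2 * E_R)))
print("low  coupling (V0 = 35 E_R): V ~ %.0e Hz, J ~ %.0e Hz"
      % (V_onsite(a0, 35 * E_R), J_coupling(35 * E_R)))

# Short-time sensitivity of scheme 2 for J ~ 10 Hz, N ~ 60, t_w ~ 1 s, L = 2 pi x 20 um
J, N, t_w, L = 10.0, 60, 1.0, 2 * np.pi * 20e-6
S = (h / (L**2 * m)) * np.sqrt(3) / (np.sqrt(2 * t_w) * J * N)
print("short-time sensitivity S ~ %.0e rad s^-1 / sqrt(Hz)" % S)
\end{verbatim}
With these inputs the script returns the orders of magnitude quoted above, $V\sim10^{-3}$--$10^{-2}$Hz, $J\sim10$Hz and $10^{-2}$Hz in the two regimes, and $S\sim10^{-3}$ rads$^{-1}/\sqrt{\rm{Hz}}$.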
The main interest of this scheme, however, is that it proposes a means of creating macroscopic superpositions of persistent superfluid flows and that these could show evidence of Heisenberg scaling of measurement precision. This in itself would be of fundamental interest. Improving its short-term sensitivity, however, would require either entangled states with very large numbers of particles, which are likely to be difficult to create, or a different configuration that greatly enhances the enclosed area of the interferometer. Alternatively, nonlinear couplings between the atoms and the rotation could potentially be used to achieve precision scalings of $N^{-3/2}$ using unentangled atoms \cite{Boixo2008, Boixo2009a}. The reliance of such a scheme on unentangled atoms would allow for much larger atom numbers than in our scheme and as such seems to offer a possible way of improving upon current gyroscope precisions. However, the challenge for such a scheme would be to couple the angular momentum to the atomic scattering lengths. \section{Conclusion} We have presented three schemes to measure small rotations applied to a ring of lattice sites by creating superpositions of ultra-cold atoms flowing in opposite directions around the ring. Two of these schemes are capable of Heisenberg-limited precision measurements where the precision scales as $1/N$. The two schemes use different entangled states. While the scheme that used a NOON state gave slightly better precision in the idealized case, after consideration of experimental limitations it was shown that the bat state is likely to be the preferred candidate largely due to its robustness to particle loss. Importantly, the bat state outperformed the case of unentangled particles for modest loss rates. The effects of non-zero interactions were shown to limit the preferred scheme to approximately 60 atoms and as such this scheme is not capable of outperforming the precision of existing atomic gyroscopes at present. However, the interesting result is the Heisenberg scaling of the precision. All the steps in this scheme should be within reach of current technologies, which is promising for its experimental implementation. This work was supported by the United Kingdom EPSRC through an Advanced Research Fellowship GR/S99297/01, an RCUK Fellowship, and the EuroQUASAR programme EP/G028427/1. \end{document}
math
40,160
\begin{document} \title[Extension of a result of Dickson and Fuller]{Additive Unit Representations in Endomorphism Rings and an Extension of a result of Dickson and Fuller} \author{Pedro A. Guil Asensio} \address{Departamento de Mathematicas, Universidad de Murcia, Murcia, 30100, Spain} \email{[email protected]} \author{Ashish K. Srivastava} \address{Department of Mathematics and Computer Science, St. Louis University, St. Louis, MO-63103, USA} \email{[email protected]} \keywords{automorphism-invariant modules, injective modules, quasi-injective modules} \subjclass[2000]{16D50, 16U60, 16W20} \dedicatory{Dedicated to T. Y. Lam on his 70th Birthday} \begin{abstract} A module is called automorphism-invariant if it is invariant under any automorphism of its injective hull. Dickson and Fuller have shown that if $R$ is a finite-dimensional algebra over a field $\mathbb F$ with more than two elements then an indecomposable automorphism-invariant right $R$-module must be quasi-injective. In this note, we extend and simplify the proof of this result by showing that any automorphism-invariant module over an algebra over a field with more than two elements is quasi-injective. Our proof is based on the study of the additive unit structure of endomorphism rings. \end{abstract} \maketitle \section{Introduction.} \noindent The study of the additive unit structure of rings has a long tradition. The earliest instance may be found in the investigations of Dieudonn\'{e} on Galois theory of simple and semisimple rings \cite{D}. In \cite{H}, Hochschild studied additive unit representations of elements in simple algebras and proved that each element of a simple algebra over any field is a sum of units. Later, Zelinsky \cite{Zelinsky} proved that every linear transformation of a vector space $V$ over a division ring $D$ is the sum of two invertible linear transformations except when $V$ is one-dimensional over $\mathbb F_2$. Zelinsky also noted in his paper that this result follows from a previous result of Wolfson \cite{Wolfson}. The above mentioned result of Zelinsky has been recently extended by Khurana and Srivastava in \cite{KS} where they proved that any element in the endomorphism ring of a continuous module $M$ is a sum of two automorphisms if and only if $\operatorname{End}(M)$ has no factor ring isomorphic to the field of two elements $\mathbb F_2$. In particular, this means that, in order to check if a module $M$ is invariant under endomorphisms of its injective hull $E(M)$, it is enough to check it under automorphisms, provided that $\operatorname{End}(E(M))$ has no factor ring isomorphic to $\mathbb F_2$. Recall that a module $M$ is called \textit{quasi-injective} if every homomorphism from a submodule $L$ of $M$ to $M$ can be extended to an endomorphism of $M$. Johnson and Wong characterized quasi-injective modules as those that are invariant under any endomorphism of their injective hulls \cite{JW}. A module $M$ which is invariant under automorphisms of its injective hull is called an {\it automorphism-invariant module}. This class of modules was first studied by Dickson and Fuller in \cite{DF} for the particular case of finite-dimensional algebras over fields $\mathbb F$ with more than two elements. They proved that if $R$ is a finite-dimensional algebra over a field $\mathbb F$ with more than two elements then an indecomposable automorphism-invariant right $R$-module must be quasi-injective. 
It has recently been shown in \cite{SS} that this result fails to hold if $\mathbb F$ is the field with two elements. Let us recall that a ring $R$ is said to be of {\it right invariant module type} if every indecomposable right $R$-module is quasi-injective. Thus, the result of Dickson and Fuller states that if $R$ is a finite-dimensional algebra over a field $\mathbb F$ with more than two elements, then $R$ is of right invariant module type if and only if every indecomposable right $R$-module is automorphism-invariant. Examples of automorphism-invariant modules which are not quasi-injective can be found in \cite{ESS} and \cite{Teply}. Recently, it has been shown in \cite{ESS} that a module $M$ is automorphism-invariant if and only if every monomorphism from a submodule of $M$ extends to an endomorphism of $M$. For more details on automorphism-invariant modules, see \cite{ESS}, \cite{LZ}, \cite{SS}, and \cite{AS}. The purpose of this note is to exploit the above-mentioned result of Khurana and Srivastava \cite{KS} in order to extend Dickson and Fuller's result, and to give a much simpler proof of it, by showing that if $M$ is any right $R$-module such that there are no ring homomorphisms from $\operatorname{End}_R(M)$ into the field of two elements $\mathbb{F}_2$, then $M_R$ is automorphism-invariant if and only if it is quasi-injective. In particular, we deduce that if $R$ is an algebra over a field $\mathbb F$ with more than two elements, then a right $R$-module $M$ is automorphism-invariant if and only if it is quasi-injective. Throughout this paper, $R$ will always denote an associative ring with identity element and modules will be right unital. We refer to \cite{AF} for any undefined notion arising in the text. \section*{Results.} We begin this section by proving a couple of lemmas that we will need in our main result. \begin{lemma} \label{basic} Let $M$ be a right $R$-module such that $\operatorname{End}(M)$ has no factor isomorphic to $\mathbb{F}_2$. Then $\operatorname{End}(E(M))$ has no factor isomorphic to $\mathbb{F}_2$ either. \end{lemma} \begin{proof} Let $M$ be any right $R$-module such that $\operatorname{End}(M)$ has no factor isomorphic to $\mathbb{F}_2$ and let $S=\operatorname{End}(E(M))$. We want to show that $S$ has no factor isomorphic to $\mathbb{F}_2$. Assume to the contrary that $\psi: S\rightarrow \mathbb{F}_2$ is a ring homomorphism. As $\mathbb{F}_2\cong\operatorname{End}_{\mathbb Z}(\mathbb{F}_2)$, the above ring homomorphism endows $\mathbb{F}_2$ with a right $S$-module structure. Under this right $S$-module structure, $\psi:S\rightarrow \mathbb{F}_2$ becomes a homomorphism of $S$-modules. Moreover, as $\mathbb{F}_2$ is simple as a $\mathbb Z$-module, it is also simple as a right $S$-module. Therefore, $\ker(\psi)$ contains the Jacobson radical $J(S)$ of $S$ and thus $\psi$ factors through a ring homomorphism $\psi':S/J(S)\rightarrow \mathbb{F}_2$. On the other hand, given any endomorphism $f:M\rightarrow M$, it extends, by the injectivity of $E(M)$, to a (non-unique) endomorphism $\varphi_f: E(M)\rightarrow E(M)$ \[ \xymatrix{ M \ar[d]^{} \ar[r]^{f} &M \ar[d]^{}\\ E(M) \ar[r]^{\varphi_f} &E(M).} \] Now define $\eta: \operatorname{End}(M)\rightarrow \frac{S}{J(S)}$ by $\eta(f)=\varphi_f+J(S)$. It may be easily checked that $\eta$ is a ring homomorphism. Then $\psi' \circ \eta: \operatorname{End}(M)\rightarrow \mathbb{F}_2$ is a ring homomorphism. This shows that $\operatorname{End}(M)$ has a factor isomorphic to $\mathbb{F}_2$, a contradiction to our hypothesis.
Hence, $\operatorname{End}(E(M))$ has no factor isomorphic to $\mathbb{F}_2$. \end{proof} \begin{lemma} $($\cite{KS}$)$ \label{2good} Let $M$ be a continuous right module over any ring $S$. Then each element of the endomorphism ring $R=\operatorname{End}(M_S)$ is the sum of two units if and only if $R$ has no factor isomorphic to $\mathbb{F}_2$. \end{lemma} We can now prove our main result. \begin{theorem} \label{char} Let $M$ be any right $R$-module such that $\operatorname{End}(M)$ has no factor isomorphic to $\mathbb{F}_2$. Then $M$ is quasi-injective if and only if $M$ is automorphism-invariant. \end{theorem} \begin{proof} Let $M$ be an automorphism-invariant right $R$-module such that $\operatorname{End}(M)$ has no factor isomorphic to $\mathbb{F}_2$. Then by Lemma \ref{basic}, $\operatorname{End}(E(M))$ has no factor isomorphic to $\mathbb{F}_2$. Now by Lemma \ref{2good}, each element of $\operatorname{End}(E(M))$ is a sum of two units. Therefore, for every endomorphism $\lambda \in \operatorname{End}(E(M))$, we have $\lambda=u_1+u_2$ where $u_1, u_2$ are automorphisms in $\operatorname{End}(E(M))$. As $M$ is an automorphism-invariant module, it is invariant under both $u_1$ and $u_2$, and we get that $M$ is invariant under $\lambda$. This shows that $M$ is quasi-injective. The converse is obvious. \end{proof} \begin{lemma} \label{z2} Let $R$ be any ring and $S$ a subring of its center $Z(R)$. If $\mathbb{F}_2$ does not admit a structure of right $S$-module, then for any right $R$-module $M$, the endomorphism ring $\operatorname{End}(M)$ has no factor isomorphic to $\mathbb{F}_2$. \end{lemma} \begin{proof} Assume to the contrary that there is a ring homomorphism $\psi: \operatorname{End}_R(M) \rightarrow \mathbb{F}_2$. Now, define a map $\varphi: S\rightarrow \operatorname{End}_R(M)$ by the rule $\varphi(r)=\varphi_r$, for each $r\in S$, where $\varphi_r:M\rightarrow M$ is given as $\varphi_r(m)=mr$. Clearly $\varphi$ is a ring homomorphism since $S\subseteq Z(R)$, and so the composition $\psi \circ \varphi$ gives a nonzero ring homomorphism from $S$ to $\mathbb{F}_2$, yielding a contradiction to the assumption that $\mathbb{F}_2$ does not admit a structure of right $S$-module. \end{proof} We can now extend the above-mentioned result of Dickson and Fuller. \begin{theorem} \label{main} Let $A$ be an algebra over a field $\mathbb F$ with more than two elements. Then any right $A$-module $M$ is automorphism-invariant if and only if $M$ is quasi-injective. \end{theorem} \begin{proof} Let $M$ be an automorphism-invariant right $A$-module. Since $A$ is an algebra over a field $\mathbb F$ with more than two elements and $\mathbb F\subseteq Z(A)$, the field $\mathbb{F}_2$ does not admit a structure of right $Z(A)$-module, and therefore, by Lemma \ref{z2}, $\operatorname{End}(M)$ has no factor isomorphic to $\mathbb{F}_2$. Now, by Theorem \ref{char}, $M$ must be quasi-injective. The converse is obvious. \end{proof} As a consequence, we have the following. \begin{corollary} Let $R$ be any algebra over a field $\mathbb F$ with more than two elements. Then $R$ is of right invariant module type if and only if every indecomposable right $R$-module is automorphism-invariant. \end{corollary} \begin{corollary} If $A$ is an algebra over a field $\mathbb F$ with more than two elements such that $A$ is automorphism-invariant as a right $A$-module, then $A$ is right self-injective.
\end{corollary} It is well-known that a group ring $R[G]$ is right self-injective if and only if $R$ is right self-injective and $G$ is finite (see \cite{C}, \cite{R}). Thus, in particular, we have the following \begin{corollary} Let $K[G]$ be automorphism-invariant, where $K$ is a field with more than two elements. Then $G$ must be finite. \end{corollary} \end{document}
math
10,820
\begin{document} \title{Breather Solutions for a Quasilinear $(1+1)$-dimensional Wave Equation} \author{Simon Kohler and Wolfgang Reichel} \address{Institute for Analysis, Karlsruhe Institute of Technology (KIT), D-76128 Karlsruhe, Germany} \mathrm{e}mail{[email protected], [email protected]} \date{\today} \begin{abstract} We consider the $(1+1)$-dimensional quasilinear wave equation $g(x)w_{tt}-w_{xx}+h(x) (w_t^3)_t=0$ on $\mathbb{R}\times\mathbb{R}$ which arises in the study of localized electromagnetic waves modeled by Kerr-nonlinear Maxwell equations. We are interested in time-periodic, spatially localized solutions. Here $g\mathrm{i}n\Leb{\mathrm{i}nfty}{\mathbb{R}}$ is even with $g\not\mathrm{e}quiv 0$ and $h(x)=\gamma\,\delta_0(x)$ with $\gamma\mathrm{i}n\mathbb{R}\backslash\{0\}$ and $\delta_0$ the delta-distribution supported in $0$. We assume that $0$ lies in a spectral gap of the operators $L_k=-\difff{x}{2}-k^2\omega^2g$ on $\Leb{2}{\mathbb{R}}$ for all $k\mathrm{i}n 2\mathbb{Z}+1$ together with additional properties of the fundamental set of solutions of $L_k$. By expanding $w$ into a Fourier series in time we transfer the problem of finding a suitably defined weak solution to finding a minimizer of a functional on a sequence space. The solutions that we have found are exponentially localized in space. Moreover, we show that they can be well approximated by truncating the Fourier series in time. The guiding examples, where all assumptions are fulfilled, are explicitely given step potentials and periodic step potentials $g$. In these examples we even find infinitely many distinct breathers. \mathrm{e}nd{abstract} \maketitle \section{Introduction and Main Results}\label{results} We study the $(1+1)$-dimensional quasilinear wave equation \begin{align} g(x)w_{tt}-w_{xx}+h(x)(w_t^3)_t=0 \quad \text{for } (x,t)\mathrm{i}n\mathbb{R}\times\mathbb{R}, \label{quasi} \mathrm{e}nd{align} and we look for real-valued, time-periodic and spatially localized solutions $w(x,t)$. At the end of this introduction we give a motivation for this equation arising in the study of localized electromagnetic waves modeled by Kerr-nonlinear Maxwell equations. We also cite some relevant papers. To the best of our knowledge for \mathrm{e}qref{quasi} in its general form no rigorous existence results are available. A first result is given in this paper by taking an extreme case where $h(x)$ is a spatial delta distribution at $x=0$. Our basic assumption on the coefficients $g$ and $h$ is the following: \begin{align} g\mathrm{i}n\Leb{\mathrm{i}nfty}{\mathbb{R}} \text{ even, } g \not \mathrm{e}quiv 0 \text{ and } h(x)=\gamma\delta_0(x) \mbox{ with } \gamma\neq0 \tag{C0}\label{C0} \mathrm{e}nd{align} where $\delta_0$ denotes the delta-distribution supported in $0$. We have two prototypical examples for the potential $g$: a step potential (Theorem~\ref{step}) and a periodic step potential (Theorem~\ref{w is a weak solution in expl exa}). The general version is given in Theorem~\ref{w is a weak solution general} below. \begin{thm}\label{step} For $a,b,c>0$ let \[ g(x)\coloneqq\left\{\begin{array}{ll} -a, & \mbox{ if }\, \abs{x}>c,\\ b, & \mbox{ if }\, \abs{x}<c. \mathrm{e}nd{array}\right. \] For every frequency $\omega$ such that $\sqrt{b}\omega c \frac{2}{\pi}\mathrm{i}n \frac{2\mathbb{N}+1}{2\mathbb{N}+1}$ and $\gamma<0$ there exist infinitely many nontrivial, real-valued, spatially localized and time-periodic weak solutions of \mathrm{e}qref{quasi} with period $T=\frac{2\pi}{\omega}$. 
For each solution $w$ there are constants $C,\rho>0$ such that $\abs{w(x,t)}\leq C\mathrm{e}^{-\rho\abs{x}}$. \mathrm{e}nd{thm} \begin{thm}\label{w is a weak solution in expl exa} For $a,b>0$, $a\not =b$ and $\mathbb{T}heta\mathrm{i}n(0,1)$ let \[ g(x)\coloneqq\left\{\begin{array}{ll} a, & \mbox{ if }\, \abs{x}<\pi\mathbb{T}heta,\\ b, & \mbox{ if }\, \pi\mathbb{T}heta<\abs{x}<\pi \mathrm{e}nd{array}\right. \] and extend $g$ as a $2\pi$-periodic function to $\mathbb{R}$. Assume in addition \begin{align*} \sqrt{\frac{b}{a}}\,\frac{1-\mathbb{T}heta}{\mathbb{T}heta}\mathrm{i}n \frac{2\mathbb{N}+1}{2\mathbb{N}+1}. \mathrm{e}nd{align*} For every frequency $\omega$ such that $4\sqrt{a}\theta\omega\mathrm{i}n \frac{2\mathbb{N}+1}{2\mathbb{N}+1}$ there exist infinitely many nontrivial, real-valued, spatially localized and time-periodic weak solutions of \mathrm{e}qref{quasi} with period $T=\frac{2\pi}{\omega}$. For each solution $w$ there are constants $C,\rho>0$ such that $\abs{w(x,t)}\leq C\mathrm{e}^{-\rho\abs{x}}$. \mathrm{e}nd{thm} Weak solutions of \mathrm{e}qref{quasi} are understood in the following sense. We write $D\coloneqq\mathbb{R}T$ and denote by $\mathbb{T}_T$ the one-dimensional torus with period $T$. \begin{defn}\label{Defn of weak Sol to (quasi)} Under the assumption \mathrm{e}qref{C0} a function $w\mathrm{i}n\SobH{1}{\mathbb{R}T}$ with $\partial_tw(0,\cdot)\mathrm{i}n\Leb{3}{\mathbb{T}_T}$ is called a weak solution of \mathrm{e}qref{quasi} if for every $\psi\mathrm{i}n\mathbb{C}ontc{\mathrm{i}nfty}{\mathbb{R}T}$ \begin{align} \mathrm{i}nt_D-g(x)\partial_tw\,\partial_t\psi +\partial_xw\,\partial_x\psi\dd{(x,t)} -\gamma\mathrm{i}nt_{0}^{T} (\partial_t w(0,t))^3 \partial_t \psi(0,t)\dd{t}=0. \label{WeakEquation for (quasi)} \mathrm{e}nd{align} \mathrm{e}nd{defn} Theorem~\ref{step} and Theorem~\ref{w is a weak solution in expl exa} are special cases of Theorem~\ref{w is a weak solution general}, which applies to much more general potentials $g$. In Section~\ref{details_example_step} and Section~\ref{explicit example Bloch Modes_WR} of the Appendix we will show that the special potentials $g$ from these two theorems satisfy the conditions \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik} of Theorem~\ref{w is a weak solution general}. The basic preparations and assumptions for Theorem~\ref{w is a weak solution general} will be given next. Since we are looking for time-periodic solutions, it is appropriate to make the Fourier ansatz $w(x,t)=\sum_{k\mathrm{i}n\mathbb{Z}odd} w_k(x) \mathrm{e}^{\mathrm{i} k\omega t}$ with $\mathbb{Z}odd\coloneqq2\mathbb{Z}+1$. The reason for dropping even Fourier modes is that the $0$-mode belongs to the kernel of the wave operator $L=g(x)\partial_t^2 - \partial_x^2$. The restriction to odd Fourier modes generates $T/2=\pi/\omega$-antiperiodic functions $w$, is therefore compatible with the structure of \mathrm{e}qref{quasi} and in particular the cubic nonlinearity. Notice the decomposition $(Lw)(x,t)=\sum_{k\mathrm{i}n \mathbb{Z}odd} L_k w_k(x) \mathrm{e}^{\mathrm{i} k\omega t}$ with self-adjoint operators \begin{align*} L_k = -\frac{d^2}{dx^2} - k^2\omega^2 g(x): H^2(\mathbb{R})\subset L^2(\mathbb{R})\rightarrow L^2(\mathbb{R}). \mathrm{e}nd{align*} Clearly $L_k=L_{-k}$ so that is suffices to study $L_k$ for $k\mathrm{i}n \mathbb{N}odd$. 
Our first assumption is concerned with the spectrum $\sigma(L_k)$: \begin{align}\label{spectralcond} \forall\,k\mathrm{i}n \mathbb{N}odd, 0\not \mathrm{i}n \sigma_{\mathrm{ess}}(L_k)\cup \sigma_{\mathrm{D}}(L_k), \tag{C1} \mathrm{e}nd{align} where by $\sigma_{\mathrm{D}}(L_k)$ we denote the spectrum of $L_k$ with an extra Dirichlet condition at $0$, i.e., the spectrum of $L_k$ restricted to $\{\varphi\mathrm{i}n\SobH{2}{\mathbb{R}}~|~\varphi(0)=0\}$. This is the same as the spectrum of $L_k$ restricted to functions which are odd around $x=0$. \begin{lemma} \label{exp_decaying_sol} Under the assumption \mathrm{e}qref{C0} and \mathrm{e}qref{spectralcond} there exists for every $k\mathrm{i}n \mathbb{N}odd$ a function $\Phi_k\mathrm{i}n H^2(0,\mathrm{i}nfty)$ with $L_k\Phi_k=0$ on $(0,\mathrm{i}nfty)$ and $\Phi_k(0)=1$. \mathrm{e}nd{lemma} \begin{proof} We have either that $0$ is in the point spectrum (but not the Dirichlet spectrum) or that $0$ is in the resolvent set of $L_k$. In the first case there is an eigenfunction $\Phi_k\mathrm{i}n H^2(\mathbb{R})$ with $L_k\Phi_k=0$ and w.l.o.g. $\Phi_k(0)=1$. In the second case $0\mathrm{i}n \rho(L_k)$ so that there exists a unique solution $\tilde \Phi_k$ of $L_k \tilde \Phi_k = 1_{[-2,-1]}$ on $\mathbb{R}$. Clearly, if restricted to $(0,\mathrm{i}nfty)$, the function $\tilde\Phi_k$ solves $L_k \tilde\Phi_k=$ on $(0,\mathrm{i}nfty)$. Moreover, $\tilde\Phi_k(0)\not =0$ since otherwise $\tilde\Phi_k$ would be an odd eigenfunction of $L_k$ which is excluded due to $0\mathrm{i}n \rho(L_k)$. Thus a suitably rescaled version of $\tilde\Phi_k$ satisfies the claim of the lemma. \mathrm{e}nd{proof} Our second set of assumptions concerns the structure of the decaying fundamental solution according to Lemma~\ref{exp_decaying_sol}. \begin{align}\label{FurtherCond_phik} \mbox{There exist $\rho, M>0$ such that for all } k\mathrm{i}n \mathbb{N}odd\colon |\Phi_k(x)|\leq Me^{-\rho x} \mbox{ on } [0,\mathrm{i}nfty). \tag{C2} \mathrm{e}nd{align} Now we can formulate our third main theorem as a generalization of Theorem~\ref{step} and Theorem~\ref{w is a weak solution in expl exa}. The fact that the solutions which we find, can be well approximated by truncation of the Fourier series in time, is explained in Lemma~\ref{lemma_approximation} below. Moreover, a further extension yielding infinitely many different solutions is given in Theorem~\ref{multiplicity abstract} in Section~\ref{infinitely_many_breathers}. \begin{thm}\label{w is a weak solution general} Assume \mathrm{e}qref{C0}, \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik} for a potential $g$ and a frequency $\omega>0$. Then \mathrm{e}qref{quasi} has a nontrivial, $T$-periodic weak solution $w$ in the sense of Definition~\ref{Defn of weak Sol to (quasi)} with $T=\frac{2\pi}{\omega}$ provided \begin{itemize} \mathrm{i}tem[(i)] $\gamma<0$ and the sequence $\left(\Phi'_k(0)\right)_{k\mathrm{i}n\mathbb{N}odd}$ has at least one positive element, \mathrm{i}tem[(ii)] $\gamma>0$ and the sequence $\left(\Phi'_k(0)\right)_{k\mathrm{i}n\mathbb{N}odd}$ has at least one negative element. \mathrm{e}nd{itemize} Moreover, there is a constant $C>0$ such that $\abs{w(x,t)}\leq C\mathrm{e}^{-\rho\abs{x}}$ for all $(x,t)\mathrm{i}n \mathbb{R}^2$ with $\rho$ as in \mathrm{e}qref{FurtherCond_phik}. 
\mathrm{e}nd{thm} \begin{rmk} (a) \label{remark_Dr} It turns out that the above assumptions can be weakened as follows: it suffices to verify \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik} and (i), (ii) for all integers $k\mathrm{i}n r\cdot \mathbb{Z}odd$ for some $r\mathrm{i}n \mathbb{N}odd$. We will prove this observation in Section~\ref{infinitely_many_breathers}. (b) Our variational approach also works if we consider \mathrm{e}qref{quasi} with Dirichlet boundary conditions on a bounded interval $(-l,l)$ instead of the real line. There are many possible results. For illustration purposes we just formulate the simplest one. E.g., if we assume that $\frac{\omega l}{\pi}\mathrm{i}n\frac{\mathbb{Z}odd}{4\mathbb{Z}}$ then \begin{align*} w_{tt}-w_{xx}+\gamma\delta_0(x)(w_t^3)_t=0 \mbox{ on } (-l,l)\times\mathbb{R} \mbox{ with } w(\pm l,t)=0 \mbox{ for all } t \mathrm{e}nd{align*} has a nontrivial, real-valued time-periodic weak solution with period $T=\frac{2\pi}{\omega}$ both for $\gamma>0$ and $\gamma<0$. The operator $L_k=-\frac{d^2}{dx^2}-\omega^2k^2$ is now a self-adjoint operator on $H^2(-l,l)\cap H_0^1(-l,l)$. The assumption $\frac{\omega l}{\pi}\mathrm{i}n\frac{\mathbb{Z}odd}{4\mathbb{Z}}$ guarantees \mathrm{e}qref{spectralcond} for all $k\mathrm{i}n\mathbb{Z}odd$. The functions $\Phi_k$ are given by $\Phi_k(x)=\frac{\sin(\omega k(l-x))}{\sin(\omega kk)}$ so that $\Phi_k'(0)=-\omega k\cot(\omega kl)$. The assumption $\frac{\omega l}{\pi}\mathrm{i}n\frac{\mathbb{Z}odd}{4\mathbb{Z}}$ now guarantees that the sequence $\{\cot(\omega kl)~|~k\mathrm{i}n\mathbb{Z}odd\}$ is finite and does not contain $0$ or $\pm\mathrm{i}nfty$. Moreover $\frac{\omega l}{\pi}=\frac{2p+1}{4q}$ yields $\Phi_{k}'(0)\Phi_{k+2q}'(0)<0$, i.e., we also have the required sign-change which allows for both signs of $\gamma$. \mathrm{e}nd{rmk} We observe that the growth of $\left(\Phi'_k(0)\right)_{k\mathrm{i}n\mathbb{Z}odd}$ is connected to regularity properties of our solutions. \begin{thm}\label{w is even more regular} Assume \mathrm{e}qref{C0}, \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik} and additionally $\Phi'_k(0) = O(k)$. Then the weak solution $w$ from Theorem \ref{w is a weak solution general} belongs to $\SobHper{1+\nu}{\mathbb{T}_T,\Leb{2}{\mathbb{R}}}\cap\SobHper{\nu}{\mathbb{T}_T,\SobH{1}{\mathbb{R}}}$ for any $\nu\mathrm{i}n(0,\frac{1}{4})$. \mathrm{e}nd{thm} Here, for $\nu\mathrm{i}n\mathbb{R}$ the fractional Sobolev spaces of time-periodic functions are defined by \begin{align*} \SobHper{\nu}{\mathbb{T}_T,\Leb{2}{\mathbb{R}}}&\coloneqq\left\{ u(x,t)=\sum_{k\mathrm{i}n\mathbb{Z}}\hat{u}_k(x)\mathrm{e}^{\mathrm{i}\omega kt} ~\bigg|~ \sum_{k\mathrm{i}n\mathbb{Z}}\left(1+\abs{k}^2\right)^{\nu}\norm{\hat{u}_k}^2_\Leb{2}{\mathbb{R}}<\mathrm{i}nfty \right\}, \\ \SobHper{\nu}{\mathbb{T}_T,\SobH{1}{\mathbb{R}}}&\coloneqq\left\{ u(x,t)=\sum_{k\mathrm{i}n\mathbb{Z}}\hat{u}_k(x)\mathrm{e}^{\mathrm{i}\omega kt} ~\bigg|~ \sum_{k\mathrm{i}n\mathbb{Z}}\left(1+\abs{k}^2\right)^{\nu}\norm{\hat{u}_k}^2_\SobH{1}{\mathbb{R}}<\mathrm{i}nfty \right\}. \mathrm{e}nd{align*} We shortly motivate \mathrm{e}qref{quasi} and give some references to the literature. Consider Maxwell's equations in the absence of charges and currents \begin{align*} \nabla\cdot\mathbf{D}&=0, &\nabla\times\mathbf{E}\,=&-\partial_t\mathbf{B}, &\mathbf{D}=&\varepsilon_0\mathbf{E}+\mathbf{P}(\mathbf{E}), \\ \nabla\cdot\mathbf{B}&=0, &\nabla\times\mathbf{H}=&\,\partial_t\mathbf{D}, &\mathbf{B}=&\mu_0\mathbf{H}. 
\mathrm{e}nd{align*} We assume that the dependence of the polarization $\mathbf{P}$ on the electric field $\mathbf{E}$ is instantaneous and it is the sum of a linear and a cubic term given by $\mathbf{P}(\mathbf{E})=\varepsilon_0\chi_1(\mathbf{x})\mathbf{E}+\varepsilon_0\chi_3(\mathbf{x})\abs{\mathbf{E}}^2\mathbf{E}$, cf. \cite{agrawal}, Section~2.3 (for simplicity, more general cases where instead of a factor multiplying $\abs{\mathbf{E}}^2\mathbf{E}$ one can take $\chi_3$ as an $\mathbf{x}$-dependent tensor of type $(1,3)$ are not considered here). Here $\varepsilon_0, \mu_0$ are constants such that $c^2=(\varepsilon_0\mu_0)^{-1}$ with $c$ being the speed of light in vacuum and $\chi_1, \chi_3$ are given material functions. By direct calculations one obtains the quasilinear curl-curl-equation \begin{align} 0=\nabla\times\nabla\times\mathbf{E} +\partial_t^2\left( V(\mathbf{x})\mathbf{E}+\Gamma(\mathbf{x})\abs{\mathbf{E}}^2\mathbf{E}\right), \label{curlcurl} \mathrm{e}nd{align} where $V(\mathbf{x})=\mu_0\varepsilon_0\left(1+\chi_1(\mathbf{x})\right)$ and $\Gamma(\mathbf{x})=\mu_0\varepsilon_0\chi_3(\mathbf{x})$. Once \mathrm{e}qref{curlcurl} is solved for the electric field $\mathbf{E}$, the magnetic induction $\mathbf{B}$ is obtained by time-integration from $\nabla\times\mathbf{E}=-\partial_t\mathbf{B}$ and it will satisfy $\nabla\cdot\mathbf{B}=0$ provided it does so at time $t=0$. By construction, the magnetic field $\mathbf{H}=\frac{1}{\mu_0} \mathbf{B}$ satisfies $\nabla\times\mathbf{H}=\partial_t\mathbf{D}$. In order to complete the full set of nonlinear Maxwell's equations one only needs to check Gauss's law $\nabla\cdot\mathbf{D}=0$ in the absence of external charges. This will follow directly from the constitutive equation $\mathbf{D}=\varepsilon_0(1+\chi_1(\mathbf{x}))\mathbf{E}+\varepsilon_0\chi_3(\mathbf{x})\abs{\mathbf{E}}^2\mathbf{E}$ and the two different specific forms of $\mathbf{E}$ given next: \begin{align*} \mathbf{E}(\mathbf{x},t)&=(0,u(x_1-\kappa t,x_3),0)^T &&\hspace*{-2cm}\mbox{ polarized wave traveling in $x_1$-direction } \\ \mathbf{E}(\mathbf{x},t)&=(0,u(x_1,t),0)^T &&\hspace*{-2cm}\mbox{ polarized standing wave} \mathrm{e}nd{align*} In the first case $\mathbf{E}$ is a polarized wave independent of $x_2$ traveling with speed $\kappa$ in the $x_1$ direction and with profile $u$. If additionally $V(\mathbf{x})=V(x_3)$ and $\Gamma(\mathbf{x})=\Gamma(x_3)$ then the quasilinear curl-curl-equation \mathrm{e}qref{curlcurl} turns into the following equation for $u=u(\tau,x_3)$ with the moving coordinate $\tau=x_1-\kappa t$: \begin{align*} -u_{x_3 x_3} + (\kappa^2 V(x_3)-1) u_{\tau\tau} + \kappa^2\Gamma(x_3)(u^3)_{\tau\tau}=0. \mathrm{e}nd{align*} Setting $u=w_\tau$ and integrating once w.r.t. $\tau$ we obtain \mathrm{e}qref{quasi}. In the second case $\mathbf{E}$ is a polarized standing wave which is independent of $x_2, x_3$. If we assume furthermore that $V(\mathbf{x})=V(x_1)$ and $\Gamma(\mathbf{x})=\Gamma(x_1)$ then this time the quasilinear curl-curl-equation \mathrm{e}qref{curlcurl} for $u=w_t$ turns (after one time-integration) directly into \mathrm{e}qref{quasi}. In the literature, \mathrm{e}qref{curlcurl} has mostly been studied by considering time-harmonic waves $\mathbf{E}(\mathbf{x},t)= \mathbf{U}(\mathbf{x})e^{\mathrm{i}\kappa t}$. 
This reduces the problem to the stationary elliptic equation \begin{equation} \label{curl_curl_stat} 0=\nabla\times\nabla\times\mathbf{U} -\kappa^2\left( V(\mathbf{x})\mathbf{U}+\Gamma(\mathbf{x})\abs{\mathbf{U}}^2\mathbf{U}\right) \mbox{ in } \mathbb{R}^3. \mathrm{e}nd{equation} Here case $\mathbf{E}$ is no longer real-valued. This may be justified by extending the ansatz to $\mathbf{E}(\mathbf{x},t)= \mathbf{U}(\mathbf{x})e^{\mathrm{i}\kappa t}+c.c.$ and by either neglecting higher harmonics generated from the cubic nonlinearity or by assuming the time-averaged constitutive relation $\mathbf{P}(\mathbf{E})=\varepsilon_0\chi_1(\mathbf{x})\mathbf{E}+\varepsilon_0\chi_3(\mathbf{x})\frac{1}{T}\mathrm{i}nt_0^T\abs{\mathbf{E}}^2\,dt \mathbf{E}$ with $T=2\pi/\kappa$, cf. \cite{stuart_1993}, \cite{Sutherland03}. For results on \mathrm{e}qref{curl_curl_stat} we refer to \cite{BDPR_2016}, \cite{Mederski_2015} and in particular to the survey \cite{Bartsch_Mederski_survey}. Time-harmonic traveling waves have been found in a series of papers \cite{stuart_1990, stuart_1993,stuart_zhou_2010}. The number of results for monochromatic standing polarized wave profiles $U(\mathbf{x})=(0,u(x_1),0)$ with $u$ satisfying $0=-u''-\kappa^2\left( V(x_1)u+\Gamma(x_1)|u|^2u\right)$ on $\mathbb{R}$ is too large to cite so we restrict ourselves to Cazenave's book \cite{cazenave}. Our approach differs substantially from the approaches by monochromatic waves described above. Our ansatz $w(x,t)=\sum_{k\mathrm{i}n\mathbb{Z}odd} w_k(x) \mathrm{e}^{\mathrm{i} k\omega t}$ with $\mathbb{Z}odd\coloneqq2\mathbb{Z}+1$ is automatically polychromatic since it couples all integer multiples of the frequency $\omega$. A similar polychromatic approach is considered in \cite{PelSimWeinstein}. The authors seek spatially localized traveling wave solutions of the 1+1-dimensional quasilinear Maxwell model, where in the direction of propagation $\chi_1$ is a periodic arrangement of delta functions. Based on a multiple scale approximation ansatz, the field profile is expanded into infinitely many modes which are time-periodic in both the fast and slow time variables. Since the periodicities in the fast and slow time variables differ, the field becomes quasiperiodic in time. To a certain extent the authors of \cite{PelSimWeinstein} analytically deal with the resulting system for these infinitely many coupled modes through bifurcation methods, with a rigorous existence proof still missing. However, numerical results from \cite{PelSimWeinstein} indicate that spatially localized traveling waves could exist. With our case of allowing $\chi_1$ to be a bounded function but taking $\chi_3$ to be a delta function at $x=0$ we consider an extreme case. On the other hand our existence results (possibly for the first time) rigorously establish localized solutions of the full nonlinear Maxwell problem \mathrm{e}qref{curlcurl} without making the assumption of either neglecting higher harmonics or of assuming a time-averaged nonlinear constitutive law. The existence of localized breathers of the quasilinear problem \mathrm{e}qref{quasi} with bounded coefficients $g, h$ remains generally open. We can, however, provide specific functions $g$, $h$ for which \mathrm{e}qref{quasi} has a breather-type solution that decays to $0$ as $|x|\to \mathrm{i}nfty$. 
Let \begin{align*} b(x) \coloneqq (1+x^2)^{-1/2}, \quad h(x) \coloneqq \frac{1-2x^2}{1+x^2}, \quad g(x) \coloneqq \frac{2+x^4}{(1+x^2)^2} \mathrm{e}nd{align*} and consider a time-periodic solution $a$ of the ODE \begin{align*} -a'' - (a'^3)' =a \mathrm{e}nd{align*} with minimal prescribed period $T\mathrm{i}n (0,2\pi)$. Then $w(x,t) \coloneqq a(t)b(x)$ satisfies \mathrm{e}qref{quasi}. Note that $h$ is sign-changing and $w$ is not exponentially localized. We found this solution by inserting the ansatz for $w$ with separated variables into \mathrm{e}qref{quasi}. We then defined $b(x)\coloneqq(1+x^2)^{-1/2}$ and set $g(x)\coloneqq -b''(x)/b(x)$ and $h(x)\coloneqq -b''(x)/b(x)^3$. The remaining equation for $a$ then turned out to be the above one. The paper is structured as follows: In Section~\ref{variational_approach} we develop the variational setting and give the proof of Theorem~\ref{w is a weak solution general}. The proof of the additional regularity results of Theorem~\ref{w is even more regular} is given in Section~\ref{further_regularity}. In Section~\ref{infinitely_many_breathers} we give the proof of Theorem~\ref{multiplicity abstract} on the existence of infinitely many different breathers. In Section~\ref{approximation} we show that our breathers can be well approximated by truncation of the Fourier series in time. Finally, in the Appendix we give details on the background and proof of Theorem~\ref{step} (Section~\ref{details_example_step}) and Theorem~\ref{w is a weak solution in expl exa} (Section~\ref{explicit example Bloch Modes_WR}) as well as a technical detail on a particular embedding of H\"older spaces into Sobolev spaces (Section~\ref{embedding}). \section{Variational Approach and Proof of Theorem~\ref{w is a weak solution general}} \label{variational_approach} The main result of our paper is Theorem~\ref{w is a weak solution general} which will be proved in this section. It is a consequence of Lemma~\ref{breathers} and Theorem~\ref{J attains a minimum and its properties} below. Formally \mathrm{e}qref{quasi} is the Euler-Lagrange-equation of the functional \begin{equation} \label{def_I} I(w)\coloneqq\mathrm{i}nt_D-\frac{1}{2}g(x)\abs{\partial_tw}^2+\frac{1}{2}\abs{\partial_xw}^2\dd{(x,t)} -\frac{1}{4}\gamma\mathrm{i}nt_{0}^{T}\abs{\partial_tw(0,t)}^4\dd{t} \mathrm{e}nd{equation} defined on a suitable space of $T$-periodic functions. Instead of directly searching for a critical point of this functional we first rewrite the problem into a nonlinear Neumann boundary value problem under the assumption that $w$ is even in $x$. 
In this case \mathrm{e}qref{quasi} amounts to the following linear wave equation on the half-axis with nonlinear Neumann boundary conditions: \begin{gather} \begin{cases} g(x) w_{tt}-w_{xx}=0 & \text{for } (x,t)\mathrm{i}n(0,\mathrm{i}nfty)\times\mathbb{R},\\ 2w_x(0_+,t)=\gamma\left(w_t(0,t)^3\right)_t & \text{for }t\mathrm{i}n\mathbb{R} \mathrm{e}nd{cases}\label{nonlinNeuBVP} \mathrm{e}nd{gather} where solutions $w\mathrm{i}n\SobH{1}{[0,\mathrm{i}nfty)\times\mathbb{T}_T}$ with $\partial_tw(0,\cdot)\mathrm{i}n\Leb{3}{\mathbb{T}_T}$ of \mathrm{e}qref{nonlinNeuBVP} are understood in the sense that \begin{align} 2\mathrm{i}nt_{D_+}-g(x)\partial_tw\,\partial_t\psi +\partial_xw\,\partial_x\psi\dd{(x,t)} -\gamma\mathrm{i}nt_{0}^{T} (\partial_t w(0,t))^3 \partial_t \psi(0,t)\dd{t}=0 \label{WeakEquation for nlinNeuBVP} \mathrm{e}nd{align} for all $\psi\mathrm{i}n\mathbb{C}ontc{\mathrm{i}nfty}{[0,\mathrm{i}nfty)\times\mathbb{T}_T}$ with $D_+=(0,\mathrm{i}nfty)\times\mathbb{T}_T$. It is clear that evenly extended solutions $w$ of \mathrm{e}qref{WeakEquation for nlinNeuBVP} also satisfy \mathrm{e}qref{WeakEquation for (quasi)}. To see this note that every $\psi\mathrm{i}n\mathbb{C}ontc{\mathrm{i}nfty}{\mathbb{R}\times\mathbb{T}_T}$ can be split into an even and an odd part $\psi=\psi_{e}+\psi_{o}$ both belonging to $\mathbb{C}ontc{\mathrm{i}nfty}{\mathbb{R}\times\mathbb{T}_T}$. Testing with $\psi_o$ in \mathrm{e}qref{WeakEquation for (quasi)} produces zeroes in all spatial integrals due to the evenness of $w$ and also in the temporal integral since $\psi_{o}(0,\cdot)\mathrm{e}quiv 0$ due to oddness. Testing with $\psi_e$ in \mathrm{e}qref{WeakEquation for (quasi)} produces twice the spatial integrals appearing in \mathrm{e}qref{WeakEquation for nlinNeuBVP}. In the following we concentrate on finding solutions of \mathrm{e}qref{nonlinNeuBVP} for the linear wave equation with nonlinear Neumann boundary conditions. Motivated by the linear wave equation in \mathrm{e}qref{nonlinNeuBVP} we make the ansatz that \begin{equation} \label{ansatz} w(x,t)=\sum_{k\mathrm{i}n\mathbb{Z}odd}\frac{\hat{\alpha}_k}{k}\Phi_k(\abs{x})e_k(t), \mathrm{e}nd{equation} where $e_k(t)\coloneqq\frac{1}{\sqrt{T}}\mathrm{e}^{\mathrm{i}\omega kt}$ denotes the $\Leb{2}{\mathbb{T}_T}$-orthonormal Fourier base of $\mathbb{T}_T$, and where $\Phi_k$ are the decaying fundamental solutions $\Phi_k$ of $L_k$, cf. Lemma~\ref{exp_decaying_sol}. Such a function $w$ will always solve the linear wave equation in \mathrm{e}qref{nonlinNeuBVP} and we will determine real sequences $\hat{\alpha}= (\hat\alpha_k)_{k\mathrm{i}n \mathbb{Z}odd}$ such that the nonlinear Neumann condition is satisfied as well. The additional factor $\frac{1}{k}$ is only for convenience, since $\partial_t$ generates a multiplicative factor $\mathrm{i}\omega k$. The convolution between two sequences $\hat{z},\hat{y}\mathrm{i}n\mathbb{R}^\mathbb{Z}$ is defined pointwise (whenever it converges) by $(\hat{z}*\hat{y})_k\coloneqq \sum_{l\mathrm{i}n\mathbb{Z}}\hat{z}_l\hat{y}_{k-l}$. In order to obtain real-valued functions $w$ by the ansatz \mathrm{e}qref{ansatz} we require the sequence $\hat{\alpha}$ to be real and odd in $k$, i.e., $\hat{\alpha}_k\mathrm{i}n \mathbb{R}$ and $\hat{\alpha}_k = -\hat{\alpha}_{-k}$. 
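The convolution is the form in which the cubic nonlinearity appears on the Fourier side: if $u(t)=\sum_{k}\hat{z}_k e_k(t)$, then the coefficient of $e_n$ in $u^3$ is $\frac{1}{T}(\hat{z}*\hat{z}*\hat{z})_n$, which is exactly the identity used in the next step. For finitely supported sequences this bookkeeping can be checked numerically; the following sketch is a purely illustrative sanity check (the test sequence, truncation and period are arbitrary) and is not part of the analysis.
\begin{verbatim}
import numpy as np

T = 2 * np.pi
omega = 2 * np.pi / T
K = 5                                    # truncation: modes |k| <= K

# real test sequence, odd in k (z_{-k} = -z_k), supported on odd k only
kk = np.arange(-K, K + 1)
zk = np.zeros(2 * K + 1)
for k, val in [(1, 1.0), (3, -0.4), (5, 0.15)]:
    zk[kk == k] = val
    zk[kk == -k] = -val

# u(t) = sum_k z_k e_k(t) with e_k(t) = exp(i*omega*k*t)/sqrt(T)
M = 4096
t = np.arange(M) * (T / M)
e = lambda k: np.exp(1j * omega * k * t) / np.sqrt(T)
u = sum(z * e(k) for z, k in zip(zk, kk))        # purely imaginary-valued

# coefficient of e_n in u^3, once by quadrature and once via the triple convolution
n = 3
coeff_quad = np.sum(u**3 * np.conj(e(n))) * (T / M)
conv3 = np.convolve(np.convolve(zk, zk), zk)     # supported on k = -3K, ..., 3K
coeff_conv = conv3[n + 3 * K] / T

print(np.round(coeff_quad, 10), np.round(coeff_conv, 10))   # the two agree
\end{verbatim}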
Since \mathrm{e}qref{ansatz} already solves the wave equation in \mathrm{e}qref{nonlinNeuBVP}, it remains to find $\hat{\alpha}$ such that \begin{align*} 2w_x(0_+,t) = 2\sum_{k\mathrm{i}n \mathbb{Z}odd} \frac{\hat{\alpha}_k}{k} \Phi_k'(0)e_k(t) \stackrel{!}{~=~} \frac{1}{T}\sum_{k\mathrm{i}n \mathbb{Z}odd} \gamma\omega^4 k (\hat\alpha*\hat\alpha*\hat\alpha)_k e_k(t) = \gamma(w_t(0,t)^3)_t, \mathrm{e}nd{align*} where we have used $\Phi_k(0)=1$. As the above identity needs to hold for all $t\mathrm{i}n \mathbb{R}$ we find \begin{equation} \label{euler_lagrange_alpha} (\hat\alpha*\hat\alpha*\hat\alpha)_k = \frac{2T\Phi_k'(0)}{\gamma\omega^4 k^2} \hat \alpha_k \quad\mbox{ for all } k \mathrm{i}n \mathbb{Z}odd. \mathrm{e}nd{equation} This will be accomplished by searching for critical points $\hat{\alpha}$ of the functional \begin{align*} J(\hat{z})\coloneqq\frac{1}{4} (\hat{z}*\hat{z}*\hat{z}*\hat{z})_0+\frac{T}{\gamma \omega^4}\sum_k\frac{\Phi'_k(0)}{k^2}\hat{z}_k^2. \mathrm{e}nd{align*} defined on a suitable Banach space of real sequences $\hat{z}$ with $\hat{z}_k = -\hat{z}_{-k}$. Indeed, computing (formally) the Fr\'{e}chet derivative of $J$ at $\hat{\alpha}$ we find \begin{equation} J'(\hat{\alpha})[\hat{y}]=\left(\hat{\alpha}*\hat{\alpha}*\hat{\alpha}*\hat{y}\right)_0+\frac{2T}{\gamma\omega^4}\sum_k\frac{\Phi'_k(0)}{k^2}\hat{\alpha}_k\hat{y}_k. \label{frechet} \mathrm{e}nd{equation} Let us indicate how \mathrm{e}qref{frechet} amounts to \mathrm{e}qref{euler_lagrange_alpha}. For fixed $k_0\mathrm{i}n \mathbb{Z}odd$ we define the test sequence $\hat{y}\coloneqq(\delta_{k,k_0}-\delta_{k,-k_0})_{k\mathrm{i}n \mathbb{Z}odd}$ which has exactly two non-vanishing entries at $k_0$ and at $-k_0$. Thus, $\hat{y}$ belongs to the same space of odd, real sequences as $\hat{\alpha}$ and can therefore be used as a test sequence in $J'(\hat{\alpha})[\hat{y}]=0$. After a short calculation using $\hat{\alpha}_k=-\hat{\alpha}_{-k}$, $\Phi_k'=\Phi_{-k}'$ we obtain \mathrm{e}qref{euler_lagrange_alpha} for $k_0$. It turns out that a real Banach space of real-valued sequences which is suitable for $J$ can be given by \begin{align*} \Dom{J}\coloneqq\left\{ \hat{z}\mathrm{i}n\mathbb{R}^\mathbb{Z}odd ~\big|~ \mathbb{N}ORM{\hat{z}}<\mathrm{i}nfty,~ \hat{z}_k=-\hat{z}_{-k} \right\} \mbox{ where } \mathbb{N}ORM{\hat{z}}\coloneqq \norm{\hat{z}*\hat{z}}_\seq{2}^\frac{1}{2}. \mathrm{e}nd{align*} The relation between the function $I$ defined in \mathrm{e}qref{def_I} and the new functional $J$ is formally given by \begin{align*} I\left(\sum_{k\mathrm{i}n\mathbb{Z}odd}\frac{\hat{z}_k}{k}\Phi_k(\abs{x})e_k(t)\right) =-\frac{\gamma\omega^4}{T} J\left(\hat{z}\right). \mathrm{e}nd{align*} \begin{lemma} \label{Charakterization Dom(J)} The space $(\Dom{J},\mathbb{N}ORM{\cdot})$ is a separable, reflexive, real Banach space and isometrically embedded into the real Banach space $\Leb{4}{\mathbb{T}_T,\mathrm{i}\mathbb{R}}$ of purely imaginary-valued measurable functions. Moreover for $\hat{u}, \hat{v}, \hat{w}, \hat{z} \mathrm{i}n \Dom{J}$ we have \begin{align} (\hat{u}*\hat{u}*\hat{u}*\hat{u})_0 & = \mathbb{N}ORM{\hat{u}}^4, \\ \abs{(\hat{u}*\hat{v}*\hat{w}*\hat{z})_0} & \leq \mathbb{N}ORM{\hat{u}}\,\mathbb{N}ORM{\hat{v}}\,\mathbb{N}ORM{\hat{w}}\,\mathbb{N}ORM{\hat{z}}, \label{conv_multilinear} \\ \norm{\hat{z}}_{\seq{2}} &\leq\mathbb{N}ORM{\hat{z}}. 
\label{l2_l4} \mathrm{e}nd{align} \mathrm{e}nd{lemma} \begin{proof} We first recall the correspondence between real-valued sequences $\hat{z}\mathrm{i}n l_2$ with $\hat{z}_k=-\hat{z}_{-k}$ and purely imaginary-valued functions $z\mathrm{i}n\Leb{2}{\mathbb{T}_T,\mathrm{i}\mathbb{R}}$ by setting \begin{align*} \hat{z}_k\coloneqq\skp{z}{e_k}_{L^2(\mathbb{T}_T)} \mbox{ and } z(t)\coloneqq\sum_{k\mathrm{i}n\mathbb{Z}}\hat{z}_ke_k(t) \mathrm{e}nd{align*} Parseval's identity provides the isomorphism $\norm{z}_{\Leb{2}{\mathbb{T}_T}}=\|\hat{z}\|_\seq{2}$. The following identity \begin{align*} T\norm{z}_{\Leb{4}{\mathbb{T}_T}}^4 = T\mathrm{i}nt_0^T z(t)^4\,dt = (\hat{z}*\hat{z}*\hat{z}*\hat{z})_0 = \|\hat{z}*\hat{z}\|_\seq{2}^2 = \mathbb{N}ORM{\hat{z}}^4 \mathrm{e}nd{align*} shows that $\mathbb{N}ORM{\cdot}$ is indeed a norm on $\Dom{J}$ and its provides the isometric embedding of $\Dom{J}$ into a subspace of $L^4(\mathbb{T}_T,\mathrm{i}\mathbb{R})$. By Parseval's equality and H\"older's inequality we see that \begin{align*} \norm{\hat{z}}_{\seq{2}}=\norm{z}_{\Leb{2}{\mathbb{T}_T}}\leq T^\frac{1}{4}\norm{z}_{\Leb{4}{\mathbb{T}_T}}=\mathbb{N}ORM{\hat{z}} \mathrm{e}nd{align*} so that $\Dom{J}$ is indeed a subspace of $l^2$. Finally, for any $\hat{u}, \hat{v}, \hat{w}, \hat{z} \mathrm{i}n \Dom{J}$ we see that \begin{align*} \abs{(\hat{u}*\hat{v}*\hat{w}*\hat{z})_0} = T \abs{\mathrm{i}nt_0^T u(t)v(t)w(t)z(t) \,dt} \leq T\norm{u}_{L^4}\norm{v}_{L^4}\norm{w}_{L^4}\norm{z}_{L^4} = \mathbb{N}ORM{\hat{u}}\,\mathbb{N}ORM{\hat{v}}\,\mathbb{N}ORM{\hat{w}}\,\mathbb{N}ORM{\hat{z}}. \mathrm{e}nd{align*} This finishes the proof of the lemma. \mathrm{e}nd{proof} For $\frac{T}{2}$-anti-periodic functions $\psi\colon D\to \mathbb{R}$ of the space-time variable $(x,t)\mathrm{i}n D$ we use the notation \begin{equation} \label{ansatz_phi} \psi(x,t)=\sum_{k\mathrm{i}n\mathbb{Z}odd}\hat{\psi}_k(x)e_k(t) = \sum_{k\mathrm{i}n\mathbb{Z}odd} \frac{1}{k} \Psi_k(x) e_k(t) \mathrm{e}nd{equation} with $\frac{1}{k}\Psi_k(x)=\hat{\psi}_k(x)\coloneqq\skp{\psi(x,\cdot)}{e_k}_\Leb{2}{\mathbb{T}_T}$. The Parseval identity and the definition of $\mathbb{N}ORM{\cdot}$ immediately lead to the following lemma. \begin{lemma} \label{characterization} For $\psi:D\to \mathbb{R}$ as in \mathrm{e}qref{ansatz_phi} the following holds: \begin{itemize} \mathrm{i}tem[(i)] $\|\psi_x\|_{L^2(D)}^2=\sum_{k} \frac{1}{k^2} \|\Psi_k'\|_{\Leb{2}{\mathbb{R}}}^2$, \mathrm{i}tem[(ii)] $\|\psi_t\|_{L^2(D)}^2=\omega^2\sum_{k} \|\Psi_k\|_{\Leb{2}{\mathbb{R}}}^2$, \mathrm{i}tem[(iii)] $T\|\psi_t(0,\cdot)\|_\Leb{4}{\mathbb{T}_T}^4 = \omega^4\mathbb{N}ORM{\hat{y}}^4$ where $\hat{y}_k = \Psi_k(0)$ for $k\mathrm{i}n \mathbb{Z}odd$. \mathrm{e}nd{itemize} \mathrm{e}nd{lemma} The next result give some estimates on the growth of norms of $\Phi_k$. It serves as a preparation for the proof of regularity properties for functions $w$ as in \mathrm{e}qref{ansatz} stated in Lemma~\ref{breathers}. \begin{lemma} \label{norm_estimates} Assume \mathrm{e}qref{C0}, \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik}. Then \begin{equation} \label{phi_k_estimates} \|\Phi_k\|_{L^2(0,\mathrm{i}nfty)}= O(1), \quad \|\Phi_k'\|_{L^2(0,\mathrm{i}nfty)}=O(k), \quad \|\Phi_k'\|_{L^\mathrm{i}nfty(0,\mathrm{i}nfty)} = O(k^\frac{3}{2}). \mathrm{e}nd{equation} In particular $|\Phi_k'(0)|=O(k^\frac{3}{2})$. \label{asymptotik} \mathrm{e}nd{lemma} \begin{proof} The first part of \mathrm{e}qref{phi_k_estimates} is a direct consequence of \mathrm{e}qref{FurtherCond_phik}. 
Multiplying $L_k\Phi_k=0$ with $\Phi_k$, $\Phi_k'$ and integrating from $a\geq 0$ to $\mathrm{i}nfty$ we get \begin{align} \mathrm{i}nt_a^\mathrm{i}nfty -\omega^2 k^2 g(x)\Phi_k(x)^2+\Phi_k'(x)^2\,dx & = -\Phi_k(a)\Phi_k'(a), \label{mult1}\\ \mathrm{i}nt_a^\mathrm{i}nfty -2 \omega^2 k^2 g(x) \Phi_k(x)\Phi_k'(x)\,dx & = -\Phi_k'(a)^2, \label{mult2} \mathrm{e}nd{align} respectively. Applying the Cauchy-Schwarz inequality to \mathrm{e}qref{mult2} and using the first part of \mathrm{e}qref{phi_k_estimates} we find \begin{equation} \label{mult3} \|\Phi_k'\|_{L^\mathrm{i}nfty(0,\mathrm{i}nfty)}^2 \leq O(k^2) \|\Phi_k'\|_{L^2(0,\mathrm{i}nfty)} \mathrm{e}nd{equation} and from \mathrm{e}qref{mult1}, \mathrm{e}qref{mult3} we get \begin{align*} \|\Phi_k'\|_{L^2(0,\mathrm{i}nfty)}^2 & \leq O(k^2) + \|\Phi_k\|_{L^\mathrm{i}nfty(0,\mathrm{i}nfty)} \|\Phi_k'\|_{L^\mathrm{i}nfty(0,\mathrm{i}nfty)}\\ & \leq O(k^2) + \|\Phi_k\|_{L^\mathrm{i}nfty(0,\mathrm{i}nfty)} O(k) \|\Phi_k'\|_{L^2(0,\mathrm{i}nfty)}^\frac{1}{2}. \mathrm{e}nd{align*} The $L^\mathrm{i}nfty$-assumption in \mathrm{e}qref{FurtherCond_phik} leads to \begin{align*} \|\Phi_k'\|_{L^2(0,\mathrm{i}nfty)}^2 \leq O(k^2) + O(k)\|\Phi_k'\|_{L^2(0,\mathrm{i}nfty)}^\frac{1}{2} \leq O(k^2) +C_\mathrm{e}psilon O(k^\frac{4}{3}) + \mathrm{e}psilon \|\Phi_k'\|_{L^2(0,\mathrm{i}nfty)}^2, \mathrm{e}nd{align*} where we have used Young's inequality with exponents $4/3$ and $4$. This implies the second inequality in \mathrm{e}qref{phi_k_estimates}. Inserting this into \mathrm{e}qref{mult3} we obtain the third inequality in \mathrm{e}qref{phi_k_estimates}. \mathrm{e}nd{proof} \begin{lemma}\label{breathers} Assume \mathrm{e}qref{C0}, \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik}. For $\hat{\alpha}\mathrm{i}n\Dom{J}$ and $w:D\to\mathbb{R}$ as in \mathrm{e}qref{ansatz} we have $w_x, w_t \mathrm{i}n L^2(D)$, $w_t(0,\cdot)\mathrm{i}n\Leb{4}{\mathbb{T}_T}$ and there are values $C>0$ and $\rho>0$ such that $\abs{w(x,t)}\leq C\mathrm{e}^{-\rho\abs{x}}$. \mathrm{e}nd{lemma} \begin{rmk} The lemma does not require $\hat{\alpha}$ to be a critical point of $J$. The smoothness and decay of $w$ as in \mathrm{e}qref{ansatz} is simply a consequence of $\hat{\alpha} \mathrm{i}n \Dom{J}$ and \mathrm{e}qref{FurtherCond_phik}. \mathrm{e}nd{rmk} \begin{proof} We use the characterization from Lemma~\ref{characterization}. Let us begin with the estimate for $\norm{\partial_t w}_{\Leb{2}{D}}$. By Lemma~\ref{norm_estimates} we have $\sup_k \|\Phi_k\|_{L^2(0,\mathrm{i}nfty)}<\mathrm{i}nfty$ so that \begin{align*} \norm{\partial_t w}_{\Leb{2}{D}}^2 &= 2\omega^2 \sum_k \hat{\alpha}_k^2\norm{\Phi_k}_\Leb{2}{0,\mathrm{i}nfty}^2 \leq 2\omega^2\Bigl(\sup_k\norm{\Phi_k}_\Leb{2}{0,\mathrm{i}nfty}\Bigr)^2 \norm{\hat{\alpha}}_\seq{2}^2 \\ & \leq 2\omega^2\Bigl(\sup_k\norm{\Phi_k}_\Leb{2}{0,\mathrm{i}nfty}\Bigr)^2 \mathbb{N}ORM{\hat{\alpha}}^2<\mathrm{i}nfty, \mathrm{e}nd{align*} which finishes our first goal. Next we estimate $\norm{\partial_x w}_{\Leb{2}{D}}$. Here we use again Lemma~\ref{norm_estimates} to find \begin{align*} \norm{\partial_x w}_{\Leb{2}{D}}^2 = 2\sum_k\frac{\hat{\alpha}_k^2}{k^2}\norm{\Phi_k'}_\Leb{2}{0,\mathrm{i}nfty}^2 \leq C\|\hat\alpha\|_{l^2}^2 \leq C\mathbb{N}ORM{\hat\alpha}^2<\mathrm{i}nfty \mathrm{e}nd{align*} which finishes our second goal. Next we show that $w_t(0,\cdot)\mathrm{i}n\Leb{4}{\mathbb{T}_T}$. 
Using $\Phi_k(0)=1$ we observe that \begin{align*} T\norm{w_t(0,\cdot)}_\Leb{4}{\mathbb{T}_T}^4 =T\mathrm{i}nt_{0}^T\Bigl(\sum_{k\mathrm{i}n\mathbb{Z}odd}\mathrm{i}\omega\hat{\alpha}_k\Phi_k(0)e_k(t)\Bigr)^4\dd{t}\\ =\omega^4\mathbb{N}ORM{\hat{\alpha}}^4<\mathrm{i}nfty. \mathrm{e}nd{align*} Finally we show the uniform-in-time exponential decay of $w$. By construction $w$ is even in $x$, hence we only consider $x>0$. By \mathrm{e}qref{FurtherCond_phik} we see that \begin{align*} \abs{w(x,t)} \leq\sum_k\frac{\abs{\hat{\alpha}_k}}{\abs{k}}\abs{\Phi_k(x)} =\sum_k\frac{\abs{\hat{\alpha}_k}}{\abs{k}}Ce^{-\alpha x} \leq \|\hat\alpha\|_{l^2}\left(\sum_k \frac{1}{k^2}\right)^{1/2} C e^{-\alpha x} \leq \tilde C e^{-\alpha x} \mathrm{e}nd{align*} which finishes the proof of the lemma. \mathrm{e}nd{proof} In the following result we will show that minimizers of $J$ on $\Dom{J}$ exist, are solutions of \mathrm{e}qref{euler_lagrange_alpha} and indeed correspond to weak solutions of \mathrm{e}qref{quasi}. \begin{thm}\label{J attains a minimum and its properties} Assume \mathrm{e}qref{C0}, \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik}. Then the functional $J$ is well defined on its domain $\Dom{J}$, Fr\'{e}chet-differentiable, bounded from below and attains its negative minimum provided \begin{itemize} \mathrm{i}tem[(i)] $\gamma<0$ and the sequence $\left(\Phi'_k(0)\right)_{k\mathrm{i}n\mathbb{N}odd}$ has at least one positive element, or \mathrm{i}tem[(ii)] $\gamma>0$ and the sequence $\left(\Phi'_k(0)\right)_{k\mathrm{i}n\mathbb{N}odd}$ has at least one negative element. \mathrm{e}nd{itemize} For every critical point $\hat{\alpha}\mathrm{i}n\Dom{J}$ the corresponding function $w(x,t)\coloneqq\sum_{k\mathrm{i}n\mathbb{Z}odd}\frac{\hat{\alpha}_k}{k}\Phi_k(\abs{x})e_k(t)$ is a nontrivial weak solution of \mathrm{e}qref{quasi}. \mathrm{e}nd{thm} \begin{proof} Note that $J(\hat z) = \frac{1}{4} \mathbb{N}ORM{\hat{z}}^4 + J_1(\hat z)$, where $J_1(\hat z)= \sum_k a_k \hat z_k^2$ with $a_k = \frac{T\Phi_k'(0)}{\gamma\omega^4 k^2}$. By Lemma~\ref{norm_estimates} the sequence $(a_k)_k$ is converging to $0$ as $|k|\to \mathrm{i}nfty$, so in particular it is bounded. Due to \mathrm{e}qref{l2_l4} one finds that $J$ is well defined and continuous on $\Dom{J}$, and moreover, that for $\hat{z}\mathrm{i}n \Dom{J}$ \begin{align*} J(\hat{z}) \geq\frac{1}{4}\mathbb{N}ORM{\hat{z}}^4 -\sup_k |a_k| \sum_k\hat{z}_k^2 \geq\frac{1}{4}\mathbb{N}ORM{\hat{z}}^4 - \sup_k |a_k|\mathbb{N}ORM{\hat{z}}^2. \mathrm{e}nd{align*} This implies that $J$ is coercive and bounded from below. The weak lower semi-continuity of $J$ follows from the convexity and continuity of the map $\hat z \mapsto \mathbb{N}ORM{\hat{z}}^4$ and the weak continuity of $J_1$. To see the latter take an arbitrary $\mathrm{e}psilon>0$. Then there is $k_0\mathrm{i}n \mathbb{N}$ such that $|a_k|\leq \mathrm{e}psilon$ for $|k|> k_0$ and this implies the inequality \begin{align} \label{estimate_J_1} |J_1(\hat z)- J_1(\hat{y})| \leq \sup_k |a_k| \sum_{|k|\leq k_0} |\hat z_k^2-\hat{y_k}^2| + \mathrm{e}psilon (\|\hat z\|_{l^2}^2 + \|\hat{y}\|_{l^2}^2) \quad\forall\,\hat{z},\hat{y}\mathrm{i}n\Dom{J}. \mathrm{e}nd{align} Since $(\Dom{J},\mathbb{N}ORM{\cdot})$ continuously embeds into $l^2$ any weakly convergent sequence in $(\Dom{J},\mathbb{N}ORM{\cdot})$ also weakly converges in $l^2$ and in particular pointwise. 
This pointwise convergence together with the boundedness of the sequence and \mathrm{e}qref{estimate_J_1} yields the weak continuity of $J_1$ and thus the weak lower semi-continuity of $J$. As a consequence, cf. Theorem 1.2 in \cite{struwe}, we get the existence of a minimizer. In order to check that the minimizer is nontrivial is suffices to verify that $J$ attains negative values. Here we distinguish between case (i) and (ii) in the assumptions of the theorem. In case (i) when $\gamma<0$ we find an index $k_0$ such that $\Phi_{k_0}'(0)>0$. In case (ii) when $\gamma>0$ we choose $k_0$ such that $\Phi_{k_0}'(0)<0$. In both cases we obtain that $\Phi_{k_0}'(0)/\gamma<0$. If we set $\hat{y}\coloneqq(\delta_{k,k_0}-\delta_{k,-k_0})_{k\mathrm{i}n \mathbb{Z}odd}$ then $\hat{y}$ has exactly two non-vanishing entries, namely $+1$ at $k_0$ and $-1$ at $-k_0$. Hence $\hat{y}\mathrm{i}n\Dom{J}$. Using the property $\Phi_{k_0}'=\Phi_{-k_0}'$ we find for $t\mathrm{i}n \mathbb{R}$ \begin{align*} J(t \hat{y})=t^4\frac{1}{4}\mathbb{N}ORM{\hat{y}}^4 +2t^2\frac{T\Phi'_{k_0}(0)}{\gamma\omega^4k_0^2} \mathrm{e}nd{align*} which is negative by the choice of $k_0$ provided $t>0$ is sufficiently small. Thus, $\mathrm{i}nf_{\Dom{J}}J<0$ and every minimizer $\hat\alpha$ is nontrivial. Next we show for every critical point $\hat\alpha$ of $J$ that $w(x,t)\coloneqq\sum_{k\mathrm{i}n\mathbb{Z}odd}\frac{\hat{\alpha}_k}{k}\Phi_k(\abs{x})e_k(t)$ is a weak solution of \mathrm{e}qref{quasi}. The regularity properties $w\mathrm{i}n\SobH{1}{\mathbb{R}T}$, $\partial_tw(0,\cdot)\mathrm{i}n\Leb{4}{\mathbb{T}_T}$ and the exponential decay have already been shown in Lemma~\ref{breathers}. We skip the standard proof that $J\mathrm{i}n\mathbb{C}ont{1}{\Dom{J},\mathbb{R}}$ and that its Fr\'{e}chet-derivative is given by \mathrm{e}qref{frechet}. We will show that \mathrm{e}qref{WeakEquation for (quasi)} holds for any $\psi$ as in \mathrm{e}qref{ansatz_phi} with even functions $\Psi_k\mathrm{i}n H^1(\mathbb{R})$, $\Psi_k=-\Psi_{-k}$ such that $\psi_x, \psi_t \mathrm{i}n L^2(D)$ and $\psi(0,\cdot)\mathrm{i}n L^4(\mathbb{T}_T)$ as described in Lemma~\ref{characterization}. We begin by deriving expressions and estimates for the functionals \begin{align*} H_1(\psi) = \mathrm{i}nt_D g(x) w_t \psi_t\,d(x,t), \quad H_2(\psi) = \mathrm{i}nt_D w_x \psi_x \,d(x,t), \quad H_3(\psi) = \mathrm{i}nt_{0}^{T} w_t(0,t)^3\psi_t(0,t)\,dt. \mathrm{e}nd{align*} In a first step we assume that the sum in \mathrm{e}qref{ansatz_phi} is finite in order to justify the exchange of summation and integration in the following. 
Then, starting with $H_1$ we find \begin{align*} H_1(\psi) &= -\omega^2 \mathrm{i}nt_D g(x)\sum_{k,l} \hat{\alpha}_k\Phi_k(\abs{x}) \Psi_l(|x|) e_k(t)e_l(t) \dd{(x,t)} \\ &=-2\omega^2 \sum_k \hat{\alpha}_k\mathrm{i}nt_0^\mathrm{i}nfty g(x)\Phi_k(x)\Psi_{-k}(x)\dd{x}\\ &=2\omega^2 \sum_k \hat{\alpha}_k\mathrm{i}nt_0^\mathrm{i}nfty g(x)\Phi_k(x)\Psi_k(x)\dd{x}, \\ |H_1(\psi)| &\leq 2\omega^2 \|g\|_{L^\mathrm{i}nfty(\mathbb{R})} \Bigl(\sum_k \hat{\alpha}_k^2 \|\Phi_k\|^2_{L^2(0,\mathrm{i}nfty)}\Bigr)^\frac{1}{2} \Bigl(\sum_k\|\Psi_k\|_{L^2(0,\mathrm{i}nfty)}^2\Bigr)^\frac{1}{2}= \|g\|_{L^\mathrm{i}nfty(\mathbb{R})} \|w_t\|_{L^2(D)}\|\psi_t\|_{L^2(D)} \mathrm{e}nd{align*} and similarly for $H_2$ we find using \mathrm{e}qref{eq:bloch} \begin{align*} H_2(\psi) &= \mathrm{i}nt_D \sum_{k,l} \frac{\hat{\alpha}_k}{k}\Phi_k'(\abs{x}) \frac{1}{l}\Psi_l'(|x|) e_k(t)e_l(t) \dd{(x,t)} \\ &= 2\sum_k \frac{\hat{\alpha}_k}{-k^2} \mathrm{i}nt_0^\mathrm{i}nfty \Phi_k'(x)\Psi_{-k}'(x)\dd{x}\\ &= 2\sum_k \frac{\hat{\alpha}_k}{k^2} \mathrm{i}nt_0^\mathrm{i}nfty \Phi_k'(x)\Psi_k'(x)\dd{x} \\ &= 2\omega^2\sum_k \hat{\alpha}_k \mathrm{i}nt_0^\mathrm{i}nfty g(x) \Phi_k(x)\Psi_k(x)\dd{x} - 2\sum_k \frac{\hat{\alpha}_k}{k^2}\Phi_k'(0)\Psi_k(0), \\ |H_2(\psi)| &\leq 2\Bigl(\sum_k \frac{\hat{\alpha}_k^2}{k^2}\|\Phi_k'\|_{L^2(0,\mathrm{i}nfty)}^2 \Bigr)^\frac{1}{2} \Bigl(\sum_k \frac{1}{k^2}\|\Psi_k'\|_{L^2(0,\mathrm{i}nfty)}^2 \Bigr)^\frac{1}{2} = \|w_x\|_{L^2(D)}\|\psi_x\|_{L^2(D)}. \mathrm{e}nd{align*} Moreover, considering $H_3$ and setting $\hat{y}_k \coloneqq \Psi_k(0)$ for $k\mathrm{i}n \mathbb{Z}odd$ one sees \begin{align*} H_3(\psi) &= \omega^4 \mathrm{i}nt_{0}^{T}\Bigl(\sum_k \hat{\alpha}_ke_k(t)\Bigr)^3\Bigl(\sum_l \Psi_l(0)e_l(t)\Bigr)\dd{t} \\ &= \frac{\omega^4}{T} (\hat{\alpha}*\hat{\alpha}*\hat{\alpha}*\hat{y})_0,\\ \abs{H_3(\psi)} & \leq \frac{\omega^4}{T} \mathbb{N}ORM{\hat{\alpha}}^3 \mathbb{N}ORM{\hat{y}} = \norm{w_t(0,\cdot)}_\Leb{4}{\mathbb{T}_T}^3\norm{\psi_t(0,\cdot)}_\Leb{4}{\mathbb{T}_T}. \mathrm{e}nd{align*} Hence $H_1, H_2$ and $H_3$ are bounded linear functionals of the variable $\psi$ as in \mathrm{e}qref{ansatz_phi} with $\psi_x, \psi_t \mathrm{i}n L^2(D)$ and $\psi_t(0,\cdot)\mathrm{i}n \Leb{4}{\mathbb{T}_T}$. For such $\psi$ we use the above formulae for $H_1, H_2, H_3$ and compute the linear combination \begin{align*} -H_1(\psi)+H_2(\psi)-\gamma H_3(\psi) = -2\sum_{k} \frac{\hat{\alpha}_k}{k^2} \Phi_k'(0)\Psi_k(0) - \frac{\gamma \omega^4}{T} (\hat{\alpha}*\hat{\alpha}*\hat{\alpha}*\hat{y})_0=0 \mathrm{e}nd{align*} due to the Euler-Lagrange equation for the functional $J$, i.e., the vanishing of $J'(\hat{\alpha})[\hat{y}]$ in \mathrm{e}qref{frechet} for all $\hat{y} \mathrm{i}n \Dom{J}$. The last equality means that $w$ is a weak solution of \mathrm{e}qref{quasi}. \mathrm{e}nd{proof} \section{Further Regularity} \label{further_regularity} Here we prove Theorem~\ref{w is even more regular}. We observe first that in the example of a periodic step-potential in Theorem~\ref{w is a weak solution in expl exa} we find that not only $\Phi'_k(0)=O(k^\frac{3}{2})$ holds (as Lemma~\ref{norm_estimates} shows) but even $\Phi'_k(0)=O(k)$ is satisfied. It is exactly this weaker growth that we can exploit in order to prove additional smoothness of the solutions of \mathrm{e}qref{quasi}. We begin by defining for $\nu>0$ the Banach space of sequences \begin{align*} \seqsobh{\nu}\coloneqq\Bigl\{\hat{z}\mathrm{i}n\seq{2} \mbox{ s.t. 
} \|\hat{z}\|_{h^\nu}^2 \coloneqq \sum_k (1+k^2)^{\nu}\abs{\hat{z}_k}^2<\mathrm{i}nfty\Bigr\}. \mathrm{e}nd{align*} Moreover, we use the isometric isomorphism between $h^\nu$ and \begin{align*} H^{\nu}(\mathbb{T}_T) = \Bigl\{z(t)=\sum_k \hat{z}_k e_k(t) \text{ s.t. } \hat{z}\mathrm{i}n\seqsobh{\nu} \Bigr\} \mathrm{e}nd{align*} by setting $\|z\|_{H^\nu} \coloneqq \|\hat z\|_{h^\nu}$. We also use the Morrey embedding $\SobH{1+\nu}{\mathbb{T}_T}\to \mathbb{C}ont{0,\frac{1}{2}+\nu}{\mathbb{T}_T}$ for $\nu \mathrm{i}n (0,1/2)$ and the following embedding: $\mathbb{C}ont{0,\nu}{\mathbb{T}_T}\to \SobH{\tilde\nu}{\mathbb{T}_T}$ for $0<\tilde\nu<\nu\leq 1$, cf. Lemma~\ref{unusual_embedd} in the Appendix. The latter embedding means that $\hat{z} \mathrm{i}n h^{\tilde\nu}$ provided $z\mathrm{i}n\mathbb{C}ont{0,\nu}{\mathbb{T}_T}$ and $0<\tilde\nu<\nu\leq 1$. \begin{thm}\label{smoothness alpha} Assume \mathrm{e}qref{C0}, \mathrm{e}qref{spectralcond}, \mathrm{e}qref{FurtherCond_phik} and in addition $\Phi'_k(0) = O(k)$. For every $\hat{\alpha}\mathrm{i}n \Dom{J}$ with $J'(\hat{\alpha})=0$ we have $\hat{\alpha}\mathrm{i}n h^\nu$ for every $\nu\mathrm{i}n (0,1/4)$. \mathrm{e}nd{thm} \begin{proof} Let $\hat{\alpha}\mathrm{i}n \Dom{J}$ with $J'(\hat{\alpha})=0$. Recall from \mathrm{e}qref{euler_lagrange_alpha} that \begin{equation} \label{el} (\hat{\alpha}*\hat{\alpha}*\hat{\alpha})_k = \hat{\mathrm{e}ta}_k \hat{\alpha}_k \quad\mbox{ where }\quad \hat{\mathrm{e}ta}_k \coloneqq \frac{2T\Phi'_k(0)}{\gamma\omega^4k^2} \mbox{ for } k \mathrm{i}n \mathbb{Z}odd \mathrm{e}nd{equation} so that $|\hat{\mathrm{e}ta}_k| \leq C/k$. If we define the convolution of two $T$-periodic functions $f,g\mathrm{i}n\Leb{2}{\mathbb{T}_T}$ on the torus $\mathbb{T}_T$ as \begin{align*} \left(f*g\right)(t)\coloneqq\frac{1}{\sqrt{T}}\mathrm{i}nt_{0}^{T}f(s)g(t-s)\dd{s} \mathrm{e}nd{align*} and if we set \begin{align*} \alpha(t) \coloneqq \sum_k \hat{\alpha}_k e_k(t), \quad \mathrm{e}ta (t) \coloneqq \sum_k \hat{\mathrm{e}ta}_k e_k(t) \mathrm{e}nd{align*} then the equation \begin{equation} \label{el_equiv} \alpha^3=\alpha*\mathrm{e}ta \mathrm{e}nd{equation} for the $T$-periodic function $\alpha\mathrm{i}n\Leb{4}{\mathbb{T}_T}$ is equivalent to the equation \mathrm{e}qref{el} for the sequence $\hat\alpha\mathrm{i}n \Dom{J}$. We will analyze \mathrm{e}qref{el_equiv} with a bootstrap argument. \mathrm{e}mph{Step 1:} We show that $\alpha \mathrm{i}n\mathbb{C}ont{0,\frac{1}{6}}{\mathbb{T}_T}$. The right hand side of \mathrm{e}qref{el_equiv} is an $\SobH{1}{\mathbb{T}_T}$-function since \begin{align*} \norm{\alpha*\mathrm{e}ta}_\SobH{1}{\mathbb{T}_T}^2 =\norm{\hat{\alpha}\hat{\mathrm{e}ta}}_\seqsobh{1}^2 \leq \sum_{k\mathrm{i}n\mathbb{Z}odd} (1+k^2)\hat{\alpha}_k^2\frac{C^2}{k^2} \leq 2C^2 \|\hat{\alpha}\|_{l^2}^2 <\mathrm{i}nfty. \mathrm{e}nd{align*} Therefore, using \mathrm{e}qref{el_equiv} we see that $\alpha^3\mathrm{i}n H^1(\mathbb{T}_T)$ and by the Morrey embedding that $\alpha^3\mathrm{i}n\mathbb{C}ont{0,\frac{1}{2}}{\mathbb{T}_T}$. Since the inverse of the mapping $x\mapsto x^3$ is given by $x\mapsto |x|^{-\frac{2}{3}}x$, which is a $\cont{0,\frac{1}{3}}(\mathbb{R})$-function, we obtain $\alpha\mathrm{i}n\mathbb{C}ont{0,\frac{1}{6}}{\mathbb{T}_T}$. 
\mathrm{e}mph{Step 2:} We fix $q\mathrm{i}n (0,1)$ and show that if $\alpha\mathrm{i}n \mathbb{C}ont{0,\nu_n}{\mathbb{T}_T}$ for some $\nu_n\mathrm{i}n (0,1/2)$ solves \mathrm{e}qref{el_equiv} then $\alpha \mathrm{i}n \mathbb{C}ont{0,\nu_{n+1}}{\mathbb{T}_T}$ with $\nu_{n+1}= \frac{q\nu_n}{3}+\frac{1}{6}$. For the proof we iterate the process from Step 1 and we start with $\alpha\mathrm{i}n\mathbb{C}ont{0,\nu_n}{\mathbb{T}_T}$. Then, according to Lemma~\ref{unusual_embedd} of the Appendix, $\alpha\mathrm{i}n\SobH{q\nu_n}{\mathbb{T}_T}$ and hence $\hat{\alpha} \mathrm{i}n \seqsobh{q\nu_n}$. Then as before the convolution of $\alpha$ with $\mathrm{e}ta$ generates one more weak derivative, namely \begin{align*} \norm{\alpha*\mathrm{e}ta}_\SobH{1+q\nu_n}{\mathbb{T}_T}^2 =\norm{\hat{\alpha}\hat{\mathrm{e}ta}}_\seqsobh{1+q\nu_n}^2 \leq \sum_k(1+k^2)^{1+q\nu_n}\hat{\alpha}_k^2\frac{C^2}{k^2} \leq C^2 \|\hat{\alpha}\|_{h^{q\nu_n}}<\mathrm{i}nfty. \mathrm{e}nd{align*} Hence by \mathrm{e}qref{el_equiv} we conclude $\alpha^3\mathrm{i}n\SobH{1+q\nu_n}{\mathbb{T}_T}$ and by the Morrey embedding $\alpha^3\mathrm{i}n\mathbb{C}ont{0,\frac{1}{2}+q\nu_n}{\mathbb{T}_T}$ provided $q\nu_n \mathrm{i}n (0,1/2)$. As in Step 1 this implies $\alpha\mathrm{i}n\mathbb{C}ont{0,\nu_{n+1}}{\mathbb{T}_T}$ with $\nu_{n+1} =\frac{1}{6}+\frac{q\nu_n}{3}$. Starting with $\nu_1=1/6$ from Step 1 we see by Step 2 that $\nu_n\nearrow\frac{1}{2(3-q)}$. Since $q\mathrm{i}n (0,1)$ can be chosen arbitrarily close to $1$ this finishes the proof. \mathrm{e}nd{proof} With this preparation the proof of Theorem \ref{w is even more regular} is now immediate. \begin{proof}[Proof of Theorem~ \ref{w is even more regular}] Let $w(x,t) = \sum_{k\mathrm{i}n \mathbb{Z}odd} \frac{\hat{\alpha}_k}{k} \Phi_k(|x|)e_k(t)$ with $\hat{\alpha} \mathrm{i}n \Dom{J}$ such that $J'(\hat{\alpha})=0$. Recall from assumption \mathrm{e}qref{FurtherCond_phik} that $C\coloneqq \sup_k\norm{\Phi_k}_\Leb{2}{0,\mathrm{i}nfty}^2<\mathrm{i}nfty$. Likewise, from Lemma~\ref{norm_estimates} we have $\norm{\Phi_k'}_\Leb{2}{0,\mathrm{i}nfty}^2 \leq \tilde Ck^2$ for all $k\mathrm{i}n \mathbb{Z}odd$ and some $\tilde C>0$. Therefore, using Theorem~\ref{smoothness alpha} we find for all $\nu<\frac{1}{4}$ \begin{align*} \norm{\partial_t^{1+\nu} w}_\Leb{2}{D}^2 =2\omega^{2+2\nu}\sum_k\hat{\alpha}_k^2|k|^{2\nu}\norm{\Phi_k}_\Leb{2}{0,\mathrm{i}nfty}^2 \leq2\omega^{2+2\nu}C \|\hat{\alpha}\|_{h^\nu}^2 <\mathrm{i}nfty \mathrm{e}nd{align*} and likewise \begin{align*} \norm{\partial_t^\nu w_x}_\Leb{2}{D}^2 =2\omega^{2\nu}\sum_k\hat{\alpha}_k^2|k|^{2\nu-2}\norm{\Phi_k'}_\Leb{2}{0,\mathrm{i}nfty}^2 \leq 2\omega^{2\nu}\tilde C\|\hat{\alpha}\|_{h^\nu}^2 <\mathrm{i}nfty. \mathrm{e}nd{align*} This establishes the claim. \mathrm{e}nd{proof} \section{Existence of Infinitely Many Breathers} \label{infinitely_many_breathers} In this section we extend Theorem~\ref{w is a weak solution general} by the following multiplicity result. \begin{thm}\label{multiplicity abstract} Assume \mathrm{e}qref{C0}, \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik}. 
Then \mathrm{e}qref{quasi} has infinitely many nontrivial, $T$-periodic weak solution $w$ in the sense of Definition~\ref{Defn of weak Sol to (quasi)} with $T=\frac{2\pi}{\omega}$ provided \begin{itemize} \mathrm{i}tem[(i)] $\gamma<0$ and there exists an integer $l_-\mathrm{i}n \mathbb{N}odd$ such that for infinitely many $j\mathrm{i}n \mathbb{N}$ the sequence $\Bigl(\Phi'_{m\cdot l_-^j}(0)\Bigr)_{m\mathrm{i}n\mathbb{N}odd}$ has at least one positive element, \mathrm{i}tem[(ii)] $\gamma>0$ and there exists an integer $l_+\mathrm{i}n \mathbb{N}odd$ such that for infinitely many $j\mathrm{i}n \mathbb{N}$ the sequence $\Bigl(\Phi'_{m\cdot l_+^j}(0)\Bigr)_{m\mathrm{i}n\mathbb{N}odd}$ has at least one negative element. \mathrm{e}nd{itemize} \mathrm{e}nd{thm} \begin{rmk} \label{remark_infinitely} In the above Theorem, conditions \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik} can be weakened: instead of requiring them for all $k\mathrm{i}n \mathbb{N}odd$ it suffices to require them for $k\mathrm{i}n l_-^j\mathbb{N}odd$, $k\mathrm{i}n l_+^j\mathbb{N}odd$ respectively. We prove this observation together with the one in Remark~\ref{remark_Dr} at the end of this section. \mathrm{e}nd{rmk} We start with an investigation about the types of symmetries which are compatible with our equation. The Euler-Lagrange equation \mathrm{e}qref{euler_lagrange_alpha} for critical points $\hat{\alpha}\mathrm{i}n\Dom{J}$ of $J$ takes the form $(\hat{\alpha}*\hat{\alpha}*\hat{\alpha})_k = \hat{\mathrm{e}ta}_k \hat{\alpha}_k$ with $\hat{\mathrm{e}ta}_k \coloneqq \frac{2T\Phi'_k(0)}{\gamma\omega^4k^2}$ for $k \mathrm{i}n \mathbb{Z}odd$. Next we describe subspaces of $\Dom{J}$ which are invariant under triple convolution and pointwise multiplication with $(\hat\mathrm{e}ta_k)_{k\mathrm{i}n \mathbb{Z}odd}$. It turns out that these subspaces are made of sequences $\hat{z}$ where only the $r^{th}$ entry modulus $2r$ is occupied. \begin{defn} For $r\mathrm{i}n \mathbb{N}odd, p \mathrm{i}n \mathbb{N}even$ with $r<p$ let \begin{align*} \Dom{J}_{r,p} = \{\hat{z}\mathrm{i}n \Dom{J}:\forall\,k\mathrm{i}n\mathbb{Z}, k\neq r ~\mathrm{mod}~p \colon\hat{z}_k=0 \}. \mathrm{e}nd{align*} \mathrm{e}nd{defn} \begin{lemma} For $r\mathrm{i}n \mathbb{N}odd, p\mathrm{i}n \mathbb{N}even$ with $r<p$ and $p\not = 2r$ we have $\Dom{J}_{r,p}=\{0\}$. \mathrm{e}nd{lemma} \begin{proof} Let $\hat{z}\mathrm{i}n \Dom{J}_{r,p}$. For all $k\not \mathrm{i}n r+p\mathbb{Z}$ we have $\hat{z}_k=0$ by definition of $\Dom{J}_{r,p}$. Let therefore $k=r+pl_1$ for some $l_1\mathrm{i}n \mathbb{Z}$. Then $-k=-r-pl_1 \not \mathrm{i}n r+p\mathbb{Z}$ because otherwise $2r=-p(l_1+l_2)=p|l_1+l_2|$ for some $l_2\mathrm{i}n \mathbb{Z}$. Since by assumption $p>r$ we get $|l_1+l_2|<2$. But clearly $|l_1+l_2|\not \mathrm{i}n \{0,1\}$ since $r\not= 0$ and $p\not = 2r$ by assumption. By this contradiction we have shown $-k\not \mathrm{i}n r+p\mathbb{Z}$ so that necessarily $0=\hat z_{-k}=-\hat z_{k}$. This shows $\hat z=0$. \mathrm{e}nd{proof} In the following we continue by only considering $\mathcal{D}_r \coloneqq\Dom{J}_{r,2r}$ for $r\mathrm{i}n\mathbb{N}odd$. \begin{prop} \label{zwei_eingenschaften} Let $r\mathrm{i}n\mathbb{N}odd$. \begin{itemize} \mathrm{i}tem[(i)] The elements $\hat z \mathrm{i}n \mathcal{D}_r$ are exactly those elements of $\Dom{J}$ which generate $\frac{T}{2r}$-antiperiodic functions $\sum_{k\mathrm{i}n \mathbb{Z}odd} \frac{\hat z_k}{k}\Phi_k(x)e_k(t)$. 
\mathrm{i}tem[(ii)] If $\hat z\mathrm{i}n \mathcal{D}_r$ then $(\hat z*\hat z*\hat z)_k=0$ for all $k\not\mathrm{i}n r+2r\mathbb{Z}$. \mathrm{e}nd{itemize} \mathrm{e}nd{prop} \begin{proof} (i) An element $\hat z\mathrm{i}n \Dom{J}$ generates a $\frac{T}{2r}$-antiperiodic function $z(x,t)= \sum_{k\mathrm{i}n \mathbb{Z}odd} \frac{\hat z_k}{k}\Phi_k(x)e_k(t)$ if and only if $z(x,t+\frac{T}{2r})=-z(x,t)$. Comparing the Fourier coefficients we see that this is the case if for all $k\mathrm{i}n\mathbb{Z}odd$ we have $\hat z_k\bigl(\mathrm{e}xp(\frac{\mathrm{i}\omega kT}{2r})+1\bigr)=0$, i.e., either $k\mathrm{i}n r+2r\mathbb{Z}$ or $\hat z_k=0$. This is exactly the condition that $\hat z \mathrm{i}n \mathcal{D}_r$. \\ (ii) Let $\hat z\mathrm{i}n \mathcal{D}_r$ and assume that there is $k\mathrm{i}n\mathbb{Z}$ such that $0\not = (\hat z*\hat z*\hat z)_k=\sum_{l,m} \hat z_l\hat z_{m-l}\hat z_{k-m}$. So there is $l_0, m_0\mathrm{i}n \mathbb{Z}odd$ such that $\hat z_{l_0}, \hat z_{m_0-l_0}, \hat z_{k-m_0}\not =0$ which means by the definition of $\mathcal{D}_r$ that $l_0, m_0-l_0, k-m_0\mathrm{i}n r+2r\mathbb{Z}$. Thus $k = l_0+m_0-l_0+k-m_0 \mathrm{i}n 3r+2r\mathbb{Z} = r+2r\mathbb{Z}$. \mathrm{e}nd{proof} \begin{proof}[Proof of Theorem~\ref{multiplicity abstract}] We give the proof in case (i); for case (ii) the proof only needs a trivial modi\-fication. Let $r=l^j$ where $j$ is an index such that the sequence $\Bigl(\Phi'_{k\cdot l^j}(0)\Bigr)_{k\mathrm{i}n \mathbb{N}odd}$ has a positive element (we have changed the notation from $l_-$ to $l$ for the sake of readability). Since $\mathcal{D}_r$ is a closed subspace of $\Dom{J}$ we have as before in Theorem~\ref{J attains a minimum and its properties} the existence of a minimizer $\hat\alpha^{(r)}\mathrm{i}n \mathcal{D}_r$, i.e., $J(\hat\alpha^{(r)})=\min_{\mathcal{D}_r}J<0$. Moreover, $\hat\alpha^{(r)}$ satisfies the restricted Euler-Lagrange-equation \begin{equation} 0=J'\left(\hat\alpha^{(r)}\right)\left[\hat{x}\right]=\left(\hat\alpha^{(r)}*\hat\alpha^{(r)}*\hat\alpha^{(r)}*\hat{x}\right)_0+\frac{2T}{\gamma\omega^4}\sum_k\frac{\Phi'_k(0)}{k^2}\hat\alpha^{(r)}_k\hat{x}_k \qquad\forall\,\hat{x}\mathrm{i}n \mathcal{D}_r. \label{frechet_symmetric} \mathrm{e}nd{equation} We need to show that \mathrm{e}qref{frechet_symmetric} holds for every $\hat z\mathrm{i}n \Dom{J}$. If for an arbitrary $\hat{z}\mathrm{i}n\Dom{J}$ we define $\hat{x}_k\coloneqq\hat{z}_k$ for $k\mathrm{i}n r+2r\mathbb{Z}$ and $\hat{x}_k\coloneqq0$ else then $\hat{x}\mathrm{i}n \mathcal{D}_r$. If we furthermore define $\hat{y}\coloneqq\hat{z}-\hat{x}$ then $\hat{y}_k=0$ for all $k\mathrm{i}n r+2r\mathbb{Z}$. This implies in particular that \begin{align*} \sum_k\frac{\Phi'_k(0)}{k^2}\hat\alpha^{(r)}_k\hat{y}_k = 0 \mathrm{e}nd{align*} and by using (ii) of Proposition~\ref{zwei_eingenschaften} also \begin{align*} (\hat\alpha*\hat\alpha*\hat\alpha*\hat y)_0=\sum_{k}\left(\hat\alpha^{(r)}*\hat\alpha^{(r)}*\hat\alpha^{(r)}\right)_k\hat{y}_{-k}=0. \mathrm{e}nd{align*} This implies $J'(\hat\alpha^{(r)})[\hat{y}]=0$ and since by \mathrm{e}qref{frechet_symmetric} also $J'(\hat\alpha^{(r)})[\hat{x}]=0$ we have succeeded in proving that $J'(\hat\alpha^{(r)})=0$. It remains to show the multiplicity result. For this purpose we only consider $r=l^{j_m}$ for $j_m\to \mathrm{i}nfty$ as $m\to\mathrm{i}nfty$ where $j_m$ is an index such that the sequence $\Bigl(\Phi'_{l^{j_m}k}(0)\Bigr)_{k\mathrm{i}n \mathbb{N}odd}$ has a positive element. 
First we observe that $\mathcal{D}_{l^{j_m}}\supsetneq \mathcal{D}_{l^{j_{m+1}}}$. Assume for contradiction that the set $\{\hat\alpha^{(l^{j_m})}\}$ is finite. Then we have a subsequence $(j_{m_n})_{n\mathrm{i}n \mathbb{N}}$ such that $\hat\alpha = \hat\alpha^{(l^{j_{m_n}})}$ is constant. But then \begin{align*} \hat\alpha \mathrm{i}n \bigcap_{n\mathrm{i}n\mathbb{N}} \mathcal{D}_{l^{j_{m_n}}} = \bigcap_{j\mathrm{i}n\mathbb{N}} \mathcal{D}_{l^j}=\{0\}. \mathrm{e}nd{align*} This contradiction shows the existence of infinitely many distinct critical points of the function $J$ and finishes the proof of the theorem. \mathrm{e}nd{proof} \begin{proof}[Proof of Remark~\ref{remark_Dr} and Remark~\ref{remark_infinitely}] The proof of Theorem~\ref{multiplicity abstract} works on the basis that it suffices to minimize the functional $J$ on $\mathcal{D}_r$. In this way a $\frac{T}{2r}$-antiperiodic breather is obtained. For $\hat z\mathrm{i}n \mathcal{D}_r$ only the entries $\hat z_k$ with $k\mathrm{i}n r\mathbb{Z}odd$ are nontrivial while all other entries vanish. Therefore, \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik} and the values of $\Phi_k'(0)$ are only relevant for $k\mathrm{i}n r\mathbb{Z}odd$. In the special case of Remark~\ref{remark_infinitely} we take $r=l_\pm^j$. \mathrm{e}nd{proof} \section{Approximation by Finitely Many Harmonics}\label{approximation} Here we give some analytical results on finite dimensional approximation of the breathers obtained in Theorem~\ref{w is a weak solution general}. The finite dimensional approximation is obtained by cutting-off the ansatz \mathrm{e}qref{ansatz} and only considering harmonics of order $|k|\leq N$. Here a summand in the series \mathrm{e}qref{ansatz} of the form $\Phi_k(|x|)e_k(t)$ is a called a harmonic since it satisfies the linear wave equation in \mathrm{e}qref{nonlinNeuBVP}. We will prove that $J$ restricted to spaces $\Dom{J^{(N)}}$ of cut-off ansatz functions still attains its minimum and that the sequence of the corresponding minimizers converges up to a subsequence to a minimizer of $J$ on $\Dom{J}$. \begin{defn} Let $N\mathrm{i}n\mathbb{N}odd$. Define \begin{align*} J^{(N)}\coloneqq J|_\Dom{J^{(N)}},\qquad \Dom{J^{(N)}}\coloneqq\left\lbrace \hat{z}\mathrm{i}n\Dom{J} ~\big|~ \forall\,\abs{k}>N\colon\hat{z}_k=0 \right\rbrace \mathrm{e}nd{align*} \mathrm{e}nd{defn} \begin{lemma} \label{lemma_approximation} Under the assumptions of Theorem~\ref{w is a weak solution general} the following holds: \begin{enumerate} \mathrm{i}tem[(i)] For every $N\mathrm{i}n \mathbb{N}odd$ sufficiently large there exists $\hat{\alpha}^{(N)}\mathrm{i}n\Dom{J^{(N)}}$ such that $J(\hat{\alpha}^{(N)})=\mathrm{i}nf J^{(N)}<0$ and $\lim_{N\to\mathrm{i}nfty}J(\hat{\alpha}^{(N)})=\mathrm{i}nf J$. \mathrm{i}tem[(ii)] There is $\hat{\alpha}\mathrm{i}n\Dom{J}$ such that up to a subsequence (again denoted by $(\hat{\alpha}^{(N)})_N$) we have \begin{align*} \hat{\alpha}^{(N)}\to\hat{\alpha} \qquad \text{ in }~\Dom{J} \mathrm{e}nd{align*} and $J(\hat{\alpha})=\mathrm{i}nf J$. \mathrm{e}nd{enumerate} \mathrm{e}nd{lemma} \begin{rmk} The Euler-Lagrange-equation for $\hat{\alpha}^{(N)}$ reads: \begin{align*} 0=J'\left(\hat{\alpha}^{(N)}\right)[\hat{y}]=\left(\hat{\alpha}^{(N)}*\hat{\alpha}^{(N)}*\hat{\alpha}^{(N)}*\hat{y}\right)_0+\frac{2T}{\gamma\omega^4}\sum_k\frac{\Phi'_k(0)}{k^2}\hat{\alpha}^{(N)}_k\hat{y}_k \qquad \forall\,\hat{y}\mathrm{i}n\Dom{J^{(N)}}. 
\end{align*}
This amounts to satisfying \eqref{WeakEquation for (quasi)} in Definition~\ref{Defn of weak Sol to (quasi)} for functions $\psi(x,t)= \sum_{k\in \Zodd, |k|\leq N} \hat \psi_k(x) e_k(t)$ with $\hat\psi_k\in H^1(\mathbb{R})$. Clearly, in general $\hat{\alpha}^{(N)}$ is not a critical point of $J$.
\end{rmk}
\begin{proof}
(i) We choose $N\in\Nodd$ so large that the finite sequence $\left(\Phi_k'(0)\right)_{\abs{k}\leq N}$ already contains an element with the assumed sign. The restriction of $J$ to the $\frac{N+1}{2}$-dimensional space $\Dom{J^{(N)}}$ preserves coercivity. The continuity of $J^{(N)}$ therefore guarantees the existence of a minimizer $\hat\alpha^{(N)}\in\Dom{J^{(N)}}$. As before we see that $J(\hat\alpha^{(N)})=\inf J^{(N)}<0$, so in particular $\hat{\alpha}^{(N)}\neq0$. Next we observe that $\Dom{J^{(N)}}\subset\Dom{J}$, i.e., $J(\hat{\alpha}^{(N)})\geq\inf J= J(\hat\beta)$ for a minimizer $\hat{\beta}\in\Dom{J}$ of $J$. Let us define $\hat{\beta}^{(N)}_k=\hat{\beta}_k$ for $\abs{k}\leq N$ and $\hat{\beta}^{(N)}_k=0$ otherwise. Since the Fourier series $\beta(t) = \sum_k \hat\beta_k e_k(t)$ converges in $L^4(\mathbb{T}_T)$, cf. Theorem~4.1.8 in \cite{GrafakosClass}, we see that $\hat{\beta}^{(N)}\rightarrow\hat{\beta}$ in $\Dom{J}$. By the minimality of $\hat{\alpha}^{(N)}\in \Dom{J^{(N)}}$ and continuity of $J$ we conclude
\begin{align*}
\inf_{\Dom{J}}J\leq J(\hat{\alpha}^{(N)})\leq J(\hat{\beta}^{(N)})\longrightarrow J(\hat{\beta})=\inf_{\Dom{J}}J.
\end{align*}
Hence $\lim_{N\to\infty} J(\hat{\alpha}^{(N)})=\inf J$ as claimed.
\noindent (ii) Since $\Dom{J^{(N)}}\subset\Dom{J^{(N+1)}}\subset\Dom{J}$ we see that $J(\hat{\alpha}^{(N)})\geq J(\hat{\alpha}^{(N+1)})\geq\inf J$ so that in particular the sequence $(J(\hat{\alpha}^{(N)}))_N$ is bounded. By coercivity of $J$ we conclude that $(\hat{\alpha}^{(N)})_N$ is bounded in $\Dom{J}$ so that there is $\hat{\alpha}\in\Dom{J}$ and a subsequence (again denoted by $(\hat{\alpha}^{(N)})_N$) such that
\begin{align*}
\hat{\alpha}^{(N)}\rightharpoonup\hat{\alpha} \qquad \text{ in }~\Dom{J}.
\end{align*}
By part (i) and weak lower semi-continuity of $J$ we obtain
\begin{align*}
\inf J=\lim_{N\to\infty}J(\hat{\alpha}^{(N)})\geq J(\hat{\alpha}),
\end{align*}
i.e., $\hat{\alpha}$ is a minimizer of $J$. Recall that $J(\cdot)= \frac{1}{4}\NORM{\cdot}^4+J_1(\cdot)$ where $J_1$ is weakly continuous, cf. proof of Theorem~\ref{J attains a minimum and its properties}. Therefore, since $\hat{\alpha}^{(N)}\rightharpoonup\hat{\alpha}$ and $J(\hat{\alpha}^{(N)})\to J(\hat{\alpha})$ we see that $\NORM{\hat\alpha^{(N)}}\to \NORM{\hat\alpha}$ as $N\to \infty$. Since $\Dom{J}$ is strictly uniformly convex, we obtain the norm-convergence of $(\hat\alpha^{(N)})_N$ to $\hat\alpha$.
\end{proof}
\section{Appendix}\label{appendix}
\subsection{Details on exponentially decreasing fundamental solutions for step potentials}\label{details_example_step}
Here we consider a second-order ordinary differential operator
\begin{align*}
L_k \coloneqq - \frac{d^2}{dx^2} -k^2\omega^2 g(x)
\end{align*}
with $g$ as in Theorem~\ref{step}. Clearly, $L_k$ is a self-adjoint operator on $L^2(\mathbb{R})$ with domain $H^2(\mathbb{R})$. Moreover, $\sigma_{ess}(L_k)=[k^2\omega^2 a,\infty)$.
By the assumption on $\omega$ we have
\begin{align*}
\sqrt{b}\omega c \frac{2}{\pi} = \frac{p}{q} \mbox{ with } p,q \in \Nodd.
\end{align*}
Hence, with $k\in q\Nodd$, $k\sqrt{b}\omega c$ is an odd multiple of $\pi/2$. In the following we shall see that $0$ is not an eigenvalue of $L_k$ for $k\in q\Nodd$ so that \eqref{spectralcond} as in Remark~\ref{remark_infinitely} is fulfilled. A potential eigenfunction $\phi_k$ for the eigenvalue $0$ would have to look like
\begin{equation}
\label{ansatz_ef}
\phi_k(x) =
\begin{cases}
-A\sin(k\omega\sqrt{b}c) e^{k\omega\sqrt{a}(x+c)}, & ~\phantom{-c<}x<-c,\\
A\sin(k\omega \sqrt{b} x)+B\cos(k\omega\sqrt{b}x), & -c<x<c,\\
A\sin(k\omega\sqrt{b}c) e^{-k\omega\sqrt{a}(x-c)}, & \phantom{-}c<x
\end{cases}
\end{equation}
with $A,B\in \mathbb{R}$ to be determined. Note that we have used $\cos(k\omega\sqrt{b}c)=0$. The $C^1$-matching of $\phi_k$ at $x=\pm c$ leads to the two equations
\begin{align*}
-Bk\omega\sqrt{b}\sin(k\omega\sqrt{b}c) &= -Ak\omega\sqrt{a}\sin(k\omega\sqrt{b}c),\\
Bk\omega\sqrt{b}\sin(k\omega\sqrt{b}c) &= -Ak\omega\sqrt{a}\sin(k\omega\sqrt{b}c)
\end{align*}
and since $\sin(k\omega\sqrt{b}c)=\pm 1$ this implies $A=B=0$ so that there is no eigenvalue $0$ of $L_k$. Next we need to find the fundamental solution $\phi_k$ of $L_k$ that decays to zero at $+\infty$ and is normalized by $\phi_k(0)=1$. Here we can use the same ansatz as in \eqref{ansatz_ef} and just ignore the part of $\phi_k$ on $(-\infty,0)$. Now the normalization $\phi_k(0)=1$ leads to $B=1$ and the $C^1$-matching at $x=c$ leads to $A=\sqrt{\frac{b}{a}}B=\sqrt{\frac{b}{a}}$ so that the decaying fundamental solution is completely determined. We find that
\begin{align*}
\abs{\phi_k(x)} \leq \left\{
\begin{array}{ll}
A+B, & 0\leq x \leq c \\
A, & c<x\leq 2c \\
A e^{-\frac{1}{2} k\omega\sqrt{a}x}, & x>2c
\end{array}\right.
\end{align*}
so that $|\phi_k(x)|\leq (A+B)e^{-\rho_k x}\leq Me^{-\rho x}$ on $[0,\infty)$ with $\rho_k = \frac{1}{2} k\omega\sqrt{a}$, $\rho=\frac{1}{2}\omega\sqrt{a}$ and $M=A+B$. This shows that also \eqref{FurtherCond_phik} holds. Finally, since $\phi_k'(0)=\frac{bk\omega}{\sqrt{a}}>0$ the existence of infinitely many breathers can only be shown for $\gamma<0$. At the same time, due to $|\phi_k'(0)|=O(k)$, Theorem~\ref{w is even more regular} applies.
\subsection{Details on Bloch Modes for periodic step potentials}
\label{explicit example Bloch Modes_WR}
Here we consider a second-order periodic ordinary differential operator
\begin{align*}
L \coloneqq - \frac{d^2}{dx^2} + V(x)
\end{align*}
with $V\in L^\infty(\mathbb{R})$ which we assume to be even and $2\pi$-periodic. Moreover, we assume that $0$ does not belong to the spectrum of $L:H^2(\mathbb{R})\subset L^2(\mathbb{R})\to L^2(\mathbb{R})$. We first describe what Bloch modes are and why they exist. Later we show that this is the situation which occurs in Theorem~\ref{w is a weak solution general} and we verify conditions \eqref{spectralcond} and \eqref{FurtherCond_phik}. A function $\Phi\in\Cont{1}{\mathbb{R}}$ which is twice almost everywhere differentiable such that
\begin{equation}
\label{eq:bloch}
L\Phi=0 \quad\text{ a.e. in } \mathbb{R}, \qquad \Phi(\cdot+2\pi)=\rho\Phi(\cdot).
\end{equation}
with $\rho\in (-1,1)\setminus\{0\}$ is called the (exponentially decreasing for $x\to \infty$) Bloch mode of $L$ and $\rho$ is called the Floquet multiplier. The existence of $\Phi$ is guaranteed by the assumption that $0\notin\sigma(L)$. This is essentially Hill's theorem, cf. \cite{Eastham}. Note that $\Psi(x)\coloneqq\Phi(-x)$ is a second Bloch mode of $L$, which is exponentially increasing for $x\to \infty$. The functions $\Phi$ and $\Psi$ form a fundamental system of solutions for the operator $L$ on $\mathbb{R}$. Next we explain how $\Phi$ is constructed, why it can be taken real-valued and why it does not vanish at $x=0$ so that we can assume w.l.o.g.\ $\Phi(0)=1$. According to \cite{Eastham}, Theorem 1.1.1, there are linearly independent functions $\Psi_{1},\Psi_{2}\colon\mathbb{R}\rightarrow\mathbb{C}$ and Floquet multipliers $\rho_{1},\rho_{2}\in\mathbb{C}$ such that $L\Psi_{j}=0$ a.e. on $\mathbb{R}$ and $\Psi_{j}(x+2\pi)=\rho_{j}\Psi_{j}(x)$ for $j=1,2$. We define $\phi_{j}$, $j=1,2$ as the solutions to the initial value problems
\begin{align*}
\begin{cases} L\phi_1=0,\\ \phi_{1}(0)=1,\quad \phi_{1}'(0)=0, \end{cases}
\quad\text{and}\qquad
\begin{cases} L\phi_2=0,\\ \phi_{2}(0)=0,\quad \phi_{2}'(0)=1 \end{cases}
\end{align*}
and consider the Wronskian
\begin{align}
\label{wronskian}
W(x)\coloneqq\begin{pmatrix} \phi_{1}(x) & \phi_{2}(x) \\ \phi'_{1}(x) & \phi'_{2}(x) \end{pmatrix}
\end{align}
and the monodromy matrix
\begin{align}
\label{monodromy}
A\coloneqq W(2\pi)=\begin{pmatrix} \phi_{1}(2\pi) & \phi_{2}(2\pi) \\ \phi'_{1}(2\pi) & \phi'_{2}(2\pi) \end{pmatrix}.
\end{align}
Then $\det A=1$ is the Wronskian determinant of the fundamental system $\phi_1, \phi_2$ and the Floquet multipliers $\rho_{1,2} = \frac{1}{2} \left(\tr(A)\pm \sqrt{\tr(A)^2-4}\right)$ are the eigenvalues of $A$ with corresponding eigenvectors $v_{1}=(v_{1,1}, v_{1,2})\in \mathbb{C}^2$ and $v_{2}=(v_{2,1}, v_{2,2})\in\mathbb{C}^2$. Thus, $\Psi_{j}(x)=v_{j,1}\phi_{1}(x)+v_{j,2}\phi_{2}(x)$. By Hill's theorem (see \cite{Eastham}) we know that
\begin{align*}
0\in\sigma(L) \qquad\Leftrightarrow\qquad \abs{\tr(A)}\leq 2.
\end{align*}
Due to the assumption that $0\not\in \sigma(L)$ we see that $\rho_1, \rho_2$ are real with $\rho_1, \rho_2\in\mathbb{R}\setminus\{-1,0,1\}$ and $\rho_1\rho_2=1$, i.e., one of the two Floquet multipliers has modulus smaller than one and the other one has modulus bigger than one. W.l.o.g.\ we assume $0<|\rho_2|<1<|\rho_1|$. Furthermore, since $\rho_1, \rho_2$ are real and $A$ has real entries we can choose $v_1, v_2$ to be real and so $\Psi_1, \Psi_2$ are both real-valued. As a result we have found a real-valued Bloch mode $\Psi_2(x)$ which is exponentially decreasing as $x\to \infty$ due to $|\rho_2|<1$. Let us finally verify that $\Psi_2(0)\not = 0$ so that we may assume by rescaling that $\Psi_2(0)=1$. Assume for contradiction that $\Psi_2(0)=0$. Since the potential $V(x)$ is even in $x$ this implies that $\Psi_2$ is odd and hence (due to the exponential decay at $+\infty$) in $L^2(\mathbb{R})$. But this contradicts that $0\not\in\sigma(L)$.
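Before turning to the explicit step potential, let us remark that the construction of the monodromy matrix $A$ described above is straightforward to reproduce numerically. The following sketch is an illustration only and not used anywhere in the proofs; the sample potential, the tolerances and the helper name \texttt{monodromy} are ad hoc choices. It integrates the two initial value problems for $\phi_1,\phi_2$ over one period, forms $A=W(2\pi)$ and reads off $\tr(A)$, Hill's criterion and the Floquet multipliers.
\begin{verbatim}
# Numerical sketch (illustration only): monodromy matrix of -u'' + V(x) u = 0
# over one period for a sample even, 2*pi-periodic potential V.
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(V, period=2 * np.pi):
    """Return A = W(period) for -u'' + V(x) u = 0."""
    def rhs(x, y):
        u, up = y
        return [up, V(x) * u]              # u'' = V(x) u
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):    # initial data of phi_1 and phi_2
        sol = solve_ivp(rhs, (0.0, period), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])          # (phi_j(period), phi_j'(period))
    return np.column_stack(cols)

V = lambda x: 1.5 * np.cos(x) - 0.3        # sample even periodic potential (ad hoc)
A = monodromy(V)
t = np.trace(A)
print("det A =", np.linalg.det(A))         # should be close to 1
print("tr A  =", t, "| 0 in sigma(L)?", abs(t) <= 2)
print("Floquet multipliers:", np.linalg.eigvals(A))
\end{verbatim}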
Now we explain how the precise choice of the data $a, b>0, \mathbb{T}heta\mathrm{i}n (0,1)$ and $\omega$ for the step-potential $g$ in Theorem~\ref{w is a weak solution in expl exa} allows to fulfill the conditions \mathrm{e}qref{spectralcond} and \mathrm{e}qref{FurtherCond_phik}. Let us define \begin{align*} \tilde g(x)\coloneqq \begin{cases} a, & x \mathrm{i}n [0,2\mathbb{T}heta\pi),\\ b, & x \mathrm{i}n (2\mathbb{T}heta\pi,2\pi). \mathrm{e}nd{cases} \mathrm{e}nd{align*} and extend $\tilde g$ as a $2\pi$-periodic function to $\mathbb{R}$. Then $\tilde g(x) = g(x-\mathbb{T}heta\pi)$, and the corresponding exponentially decaying Bloch modes $\tilde\phi_k$ and $\phi_k$ are similarly related by $\tilde\phi_k(x) = \phi_k(x-\mathbb{T}heta\pi)$. For the computation of the exponentially decaying Bloch modes, it is, however, more convenient to use the definition $\tilde g$ instead of $g$. Now we will calculate the monodromy matrix $A_k$ from \mathrm{e}qref{monodromy} for the operator $L_k$. For a constant value $c>0$ the solution of the initial value problem \begin{align*} -\phi''(x)-k^2\omega^2c\phi(x)=0, \quad\phi(x_0)=\alpha,\quad\phi'(x_0)=\beta \mathrm{e}nd{align*} is given by \begin{align*} \begin{pmatrix}\phi(x)\\\phi'(x)\mathrm{e}nd{pmatrix} = T_k(x-x_0,c)\begin{pmatrix}\alpha\\\beta\mathrm{e}nd{pmatrix} \mathrm{e}nd{align*} with the propagation matrix \begin{align*} T_k(s,c)\coloneqq\begin{pmatrix} \cos(k\omega\sqrt{c}s) & \frac{1}{k\omega\sqrt{c}}\sin(k\omega\sqrt{c}s) \\ -k\omega\sqrt{c}\sin(k\omega\sqrt{c}s) & \cos(k\omega\sqrt{c}s) \mathrm{e}nd{pmatrix}. \mathrm{e}nd{align*} Therefore we can write the Wronskian as follows \begin{align*} W_k(x) &=\begin{cases} T_k(x,a) & x\mathrm{i}n[0,2\mathbb{T}heta\pi] \\ T_k(x-2\mathbb{T}heta\pi,b)T_k(2\mathbb{T}heta\pi,a) & x\mathrm{i}n[2\mathbb{T}heta\pi,2\pi] \mathrm{e}nd{cases} \mathrm{e}nd{align*} and the monodromy matrix as \begin{align*} A_k=W_k(2\pi)=T_k(2\pi(1-\mathbb{T}heta),b)T_k(2\mathbb{T}heta\pi,a). \mathrm{e}nd{align*} To get the exact form of $A_k$ let us use the notation \begin{align*} l\coloneqq\sqrt{\frac{b}{a}}\,\frac{1-\mathbb{T}heta}{\mathbb{T}heta},\qquad m\coloneqq2\sqrt{a}\mathbb{T}heta\omega. \mathrm{e}nd{align*} Hence \begin{align*} A_k = & \sin(kml\pi)\sin(km\pi) \\ & \cdot \begin{pmatrix} \cot(kml\pi)\cot(km\pi)-\sqrt{\frac{a}{b}} & \frac{1}{k\omega\sqrt{a}} \cot(kml\pi) + \frac{1}{k\omega\sqrt{b}}\cot(km\pi) \\ -k\omega\sqrt{b}\cot(km\pi)-k\omega\sqrt{a}\cot(kml\pi) & -\sqrt{\frac{b}{a}}+\cot(kml\pi)\cot(km\pi) \mathrm{e}nd{pmatrix} \mathrm{e}nd{align*} and \begin{align*} \tr(A_k)= 2\cos(kml\pi)\cos(km\pi)-\Bigl(\sqrt{\frac{a}{b}}+\sqrt{\frac{b}{a}}\Bigr)\sin(kml\pi)\sin(km\pi). \mathrm{e}nd{align*} In order to verify \mathrm{e}qref{spectralcond} we aim for $\abs{\tr(A_k)}>2$. However, instead of showing $\abs{\tr(A_k)}>2$ for all $k\mathrm{i}n\mathbb{Z}odd$ we may restrict to $k\mathrm{i}n r\cdot\mathbb{Z}odd$ for fixed $r\mathrm{i}n\mathbb{N}odd$ according to Remark~\ref{remark_Dr}. Next we will choose $r\mathrm{i}n\mathbb{Z}odd$. Due to the assumptions from Theorem~\ref{w is a weak solution in expl exa} we have \begin{equation} \label{cond_on_lr} l=\frac{\tilde p}{\tilde q}, ~~2m=\frac{p}{q} \mathrm{i}n\frac{\mathbb{N}odd}{\mathbb{N}odd}. \mathrm{e}nd{equation} Therefore, by setting $r=\tilde q q$\footnote{Instead of $r=\tilde q q$ we may have chosen any odd multiple of $\tilde q q$, e.g. $r=(\tilde q q)^j$ for any $j\mathrm{i}n \mathbb{N}$. 
This is important for the applicability of Theorem~\ref{multiplicity abstract} to obtain infinitely many breathers.} we obtain $\cos(km\pi)=\cos(kml\pi)=0$ and $\sin(km\pi), \sin(kml\pi)\in\{\pm1\}$ for all $k\in r\cdot \Zodd$. Together with $a\not =b$ this implies $|\tr(A_k)|=\sqrt{\frac{a}{b}}+\sqrt{\frac{b}{a}}>2$ so that \eqref{spectralcond} holds and $A_k$ takes the simple diagonal form
\begin{align*}
A_k = \begin{pmatrix} -\sqrt{\frac{a}{b}}\sin(kml\pi)\sin(km\pi) & 0 \\ 0 & -\sqrt{\frac{b}{a}}\sin(kml\pi)\sin(km\pi) \end{pmatrix}.
\end{align*}
In the following we assume w.l.o.g.\ $0<a<b$, i.e., the Floquet multiplier with modulus less than $1$ is $\rho_k = -\sqrt{\frac{a}{b}}\sin(kml\pi)\sin(km\pi)$. Note that $|\rho_k|=\sqrt{a/b}$ is independent of $k$. Furthermore the Bloch mode $\tilde \phi_k$ that is decaying to $0$ at $+\infty$ and normalized by $\tilde\phi_k(\Theta\pi)=1$ is deduced from the upper left element of the Wronskian, i.e.,
\begin{align*}
\tilde \phi_k(x) = \frac{1}{\cos(k\omega\sqrt{a}\Theta\pi)}\left\{
\begin{array}{ll}
\cos(k\omega\sqrt{a}x), & x\in (0,2\Theta\pi), \\
\cos(k\omega\sqrt{b}(x-2\Theta\pi))\cos(k\omega\sqrt{a}2\Theta\pi) & \\
\quad -\sqrt{\frac{a}{b}}\sin(k\omega\sqrt{b}(x-2\Theta\pi))\sin(k\omega\sqrt{a}2\Theta\pi), & x\in (2\Theta\pi,2\pi)
\end{array}
\right.
\end{align*}
and on shifted intervals of length $2\pi$ one has $\tilde\phi_k(x+2m\pi)= \rho_k^{m}\tilde\phi_k(x)$. Notice that by \eqref{cond_on_lr} the expression $k\omega\sqrt{a}\Theta\pi=k\frac{p}{q}\frac{\pi}{4}$ is an odd multiple of $\pi/4$ since $k\in q\tilde q\Zodd$ and hence $|\cos(k\omega\sqrt{a}\Theta\pi)|=1/\sqrt{2}$. Therefore $\|\phi_k\|_{L^\infty(0,\infty)}=\|\tilde\phi_k\|_{L^\infty(\Theta\pi,\infty)}\leq \|\tilde\phi_k\|_{L^\infty(0,2\pi)}\leq \sqrt{2}(1+\sqrt{a/b})$. Thus we have shown that $|\phi_k(x)|\leq M e^{-\rho x}$ for $x\in [0,\infty)$ with $M>0$ and $\rho=\frac{1}{4\pi}(\ln b-\ln a)>0$. Finally, let us compute
\begin{align*}
\phi_k'(0)=\tilde\phi_k'(\Theta\pi)= -k\omega\sqrt{a}\tan(k\omega\sqrt{a}\Theta\pi)\in\{\pm k\omega\sqrt{a}\}.
\end{align*}
This shows that $|\phi_k'(0)|=O(k)$ holds, which allows us to apply Theorem~\ref{w is even more regular}. It also shows that the estimate $|\phi_k'(0)|=O(k^\frac{3}{2})$ from Lemma~\ref{norm_estimates} can be improved in special cases. To see that $\phi_k'(0)$ is alternating in $k$, observe that moving from $k\in r\Zodd$ to $k+2r\in r\Zodd$ the argument of $\tan$ changes by $2r\omega\sqrt{a}\Theta\pi$, which is an odd multiple of $\pi/2$. Since $\tan(x+\Zodd\frac{\pi}{2})=-1/\tan(x)$ we see that the sequence $\phi_k'(0)$ is alternating for $k\in r\Zodd$. This shows in particular that for any $j\in\mathbb{N}$ the sequence $(\phi_{hr^j}'(0))_{h\in \Nodd}$ contains infinitely many positive and negative elements, and hence Theorem~\ref{multiplicity abstract} for the existence of infinitely many breathers is applicable. This concludes the proof of Theorem~\ref{w is a weak solution in expl exa} since we have shown that the potential $g$ satisfies the assumptions \eqref{spectralcond} and \eqref{FurtherCond_phik} from Theorem~\ref{w is a weak solution general}.
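The explicit formulas of this subsection are easy to cross-check numerically. The following sketch is again an illustration only; the data $a=1$, $b=9$, $\Theta=\frac{1}{4}$, $\omega=1$ are sample values for which $l=9$ and $2m=1$, so that \eqref{cond_on_lr} holds with $\tilde q=q=1$ and hence $r=1$. It evaluates the propagation matrices, confirms that $A_k$ is diagonal with $|\tr(A_k)|=\sqrt{a/b}+\sqrt{b/a}$, and shows the alternating values $\phi_k'(0)\in\{\pm k\omega\sqrt{a}\}$ for $k\in r\cdot\Zodd$.
\begin{verbatim}
# Numerical cross-check (illustration only) of A_k, tr(A_k) and phi_k'(0)
# for the periodic step potential with sample data a=1, b=9, Theta=1/4, omega=1.
import numpy as np

def T(k, s, c, omega):
    # propagation matrix T_k(s,c) of  -phi'' - k^2 omega^2 c phi = 0
    w = k * omega * np.sqrt(c)
    return np.array([[np.cos(w * s),      np.sin(w * s) / w],
                     [-w * np.sin(w * s), np.cos(w * s)]])

a, b, Theta, omega = 1.0, 9.0, 0.25, 1.0
for k in (1, 3, 5):                        # k in r*Z_odd with r = 1
    A_k = T(k, 2*np.pi*(1 - Theta), b, omega) @ T(k, 2*np.pi*Theta, a, omega)
    print("k =", k,
          " tr A_k =", round(float(np.trace(A_k)), 6),        # equals -(1/3 + 3)
          " off-diag =", np.round([A_k[0, 1], A_k[1, 0]], 6),  # numerically zero
          " phi_k'(0) =", round(float(-k*omega*np.sqrt(a)
                                      * np.tan(k*omega*np.sqrt(a)*Theta*np.pi)), 6))
\end{verbatim}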
\subsection{Embedding of H\"older-spaces into Sobolev-spaces}\label{embedding}
\begin{lemma}
\label{unusual_embedd}
For $0<\tilde\nu<\nu<1$ there is the continuous embedding $\Cont{0,\nu}{\mathbb{T}_T}\to \SobH{\tilde\nu}{\mathbb{T}_T}$.
\end{lemma}
\begin{proof}
Let $z(t)= \sum_k \hat z_k e_k(t)$ be a function in $\Cont{0,\nu}{\mathbb{T}_T}$. We need to show the finiteness of the spectral norm $\|z\|_{H^{\tilde\nu}}$. For this we use the equivalence of the spectral norm $\|\cdot\|_{H^{\tilde\nu}}$ with the Slobodeckij norm, cf. Lemma~\ref{equivalence}. Therefore it suffices to check the estimate
\begin{align*}
\int_{\mathbb{T}_T} \int_{\mathbb{T}_T} \frac{|z(t)-z(\tau)|^2}{|t-\tau|^{1+2\tilde\nu}} \dd{t}\dd{\tau} \leq \|z\|_{C^\nu(\mathbb{T}_T)}^2 \int_{\mathbb{T}_T} \int_{\mathbb{T}_T} |t-\tau|^{-1+2(\nu-\tilde\nu)} \dd{t}\dd{\tau} \leq C(\nu,\tilde\nu)\|z\|_{C^\nu(\mathbb{T}_T)}^2
\end{align*}
where the double integral is finite due to $\nu>\tilde\nu$.
\end{proof}
For $0<s<1$ recall the definition of the Slobodeckij seminorm for a function $z:\mathbb{T}_T \to \mathbb{R}$
\begin{align*}
[z]_s \coloneqq \left(\int_{\mathbb{T}_T} \int_{\mathbb{T}_T} \frac{|z(t)-z(\tau)|^2}{|t-\tau|^{1+2s}} \dd{t}\dd{\tau}\right)^{1/2}.
\end{align*}
\begin{lemma}
\label{equivalence}
For functions $z\in H^s(\mathbb{T}_T)$, $0<s<1$ the spectral norm $\|z\|_{H^s} = (\sum_k (1+k^2)^s |\hat z_k|^2)^{1/2}$ and the Slobodeckij norm $\NORM{z}_{H^s}\coloneqq (\|z\|_{L^2(\mathbb{T}_T)}^2+ [z]_s^2)^{1/2}$ are equivalent.
\end{lemma}
\begin{proof}
The Slobodeckij space and the spectrally defined fractional Sobolev space are both Hilbert spaces. Hence, by the open mapping theorem, it suffices to verify the estimate $\NORM{z}_{H^s} \leq C\|z\|_{H^s}$. By direct computation we get
\begin{align*}
\int_{\mathbb{T}_T} \int_{\mathbb{T}_T} \frac{|z(t)-z(\tau)|^2}{|t-\tau|^{1+2s}}\dd{t}\dd{\tau} &= \int_0^{T} \int_{-\tau}^{T-\tau} \frac{|z(x+\tau)-z(\tau)|^2}{|x|^{1+2s}} \dd{x}\dd{\tau}\\
&= \int_0^{T} \left(\int_0^{T-\tau} \frac{|z(x+\tau)-z(\tau)|^2}{x^{1+2s}}\dd{x} + \int_{T-\tau}^{T} \frac{|z(x+\tau)-z(\tau)|^2}{(T-x)^{1+2s}}\dd{x}\right)\dd{\tau} \\
&= \int_0^{T}\int_0^{T} \frac{|z(x+\tau)-z(\tau)|^2}{g(x,\tau)^{1+2s}}\dd{x}\dd{\tau}
\end{align*}
with
\begin{align*}
g(x,\tau) = \left\{
\begin{array}{ll}
x & \mbox{ if } 0\leq x\leq T-\tau, \\
T-x & \mbox{ if } T-\tau \leq x \leq T.
\end{array}
\right.
\end{align*}
Since $g(x,\tau) \geq \dist(x,\partial\mathbb{T}_T)$ and due to Parseval's identity we find
\begin{align*}
\int_{\mathbb{T}_T} \int_{\mathbb{T}_T} \frac{|z(t)-z(\tau)|^2}{|t-\tau|^{1+2s}}\dd{t}\dd{\tau} & \leq \int_{\mathbb{T}_T} \frac{\| \widehat{z(\cdot+x)}-\hat z\|_{l^2}^2}{\dist(x,\partial\mathbb{T}_T)^{1+2s}}\dd{x} \\
&= \int_{\mathbb{T}_T} \sum_k \frac{|\exp(\mathrm{i} k\omega x)-1|^2 |\hat z_k|^2}{\dist(x,\partial\mathbb{T}_T)^{1+2s}}\dd{x} \\
&= 4 \int_0^{T/2}\sum_k \frac{1-\cos(k\omega x)}{x^{1+2s}} |\hat z_k|^2 \dd{x} \\
& \leq 4\tilde C \sum_k k^{2s} |\hat z_k|^2
\end{align*}
with $\tilde C=\int_0^\infty \frac{1-\cos(\omega\xi)}{\xi^{1+2s}}\dd{\xi}$. This finishes the proof.
\end{proof}
\section*{Acknowledgment}
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 258734477 – SFB 1173
\end{document}
\begin{document}
\title{On powers of Hamilton cycles in Ramsey-Tur\'{a}
\begin{abstract}
We prove that for $r\in \mathbb{N}$ with $r\geq 2$ and $\mu>0$, there exist $\alpha>0$ and $n_{0}$ such that for every $n\geq n_{0}$, every $n$-vertex graph $G$ with $\delta(G)\geq \left(1-\frac{1}{r}+\mu\right)n$ and $\alpha(G)\leq \alpha n$ contains an $r$-th power of a Hamilton cycle. We also show that the minimum degree condition is asymptotically sharp for $r=2, 3$ and the $r=2$ case was recently conjectured by Staden and Treglown.
\end{abstract}
\section{Introduction}\label{sec1}
A fundamental topic in graph theory is that of finding conditions under which a graph is Hamiltonian. The classical result of Dirac \cite{Dirac1952} states that any graph on $n\geq 3$ vertices with minimum degree at least $\frac{n}{2}$ contains a Hamilton cycle. The \emph{$r$-th power of a graph $H$} is the graph obtained from $H$ by joining every pair of vertices with distance at most $r$ in $H$. As a natural generalization of Dirac's theorem, the P\'{o}sa--Seymour Conjecture received much attention, which predicts that an $n$-vertex graph $G$ satisfying $\delta(G)\geq \frac{r}{r+1}n$ contains an $r$-th power of a Hamilton cycle. There had been many excellent results (see e.g. \cite{FGR1994},\cite{FK1995},\cite{HR1979}) until it was finally settled by Koml\'{o}s, S\'{a}rk\"{o}zy and Szemer\'{e}di \cite{KSGS1998} in 1997. Note that the minimum degree condition is sharp as seen by near-balanced complete $(r+1)$-partite graphs. As for many other problems in the area, this extremal example has the characteristic that it contains a large independent set. There has thus been significant interest in seeking variants of classical results in extremal graph theory, where one now forbids the host graph from containing a large independent set. Indeed, nearly 50 years ago, Erd\H{o}s, Hajnal, S\'{o}s and Szemer\'{e}di \cite{EHSS1983} initiated the study of the Tur\'{a}n problem under the additional assumption of small independence number. Formally, given a graph $H$ and natural numbers $m, n\in \mathbb{N}$, the \emph{Ramsey--Tur\'{a}n number} $\textbf{RT}(n, H, m)$ is the maximum number of edges in an $n$-vertex $H$-free graph $G$ with $\alpha(G)\leq m$ and the \emph{Ramsey--Tur\'an density} of $K_r$ is defined as $\varrho(K_r):=\lim\limits_{\alpha\to0}\lim\limits_{n\to\infty}\frac{\textbf{RT}(n, K_r, \alpha n)}{\binom{n}{2}}$. Erd\H{o}s--S\'{o}s \cite{EST1979} and Erd\H{o}s--Hajnal--S\'{o}s--Szemer\'{e}di \cite{EHSS1983} proved that
\begin{equation}\label{eq:rhoK}
\varrho(K_r) =
\begin{cases}
1-\frac{2}{r-1} \text{ if $r$ is odd,}\\
1-\frac{6}{3r-4} \text{ otherwise.}
\end{cases}
\end{equation}
For more literature on the Ramsey--Tur\'{a}n theory, we refer the readers to a comprehensive survey of Simonovits and S\'{o}s \cite{simsos2001}. More recently, there has been interest in similar questions but where now one seeks a spanning subgraph in an $n$-vertex graph with independence number $o(n)$ and large minimum degree. Recent results mostly focus on clique-factors (and in general, $F$-factors). In particular, Knierim and Su~\cite{MR4193066} showed that for fixed $\mu>0$, sufficiently small $\alpha>0$ and sufficiently large $n\in r\mathbb N$, an $n$-vertex graph $G$ with $\delta(G)\geq \left(1-\frac{2}{r}+\mu\right)n$ and $\alpha(G)\leq \alpha n$ contains a $K_{r}$-factor. The $r=3$ case was obtained earlier by Balogh, Molla and Sharifzadeh~\cite{MR3570984}, who initiated this line of research.
Nenadov and Pehova~\cite{MR4080942} also proposed similar problems for sublinear $\ell$-independence number for $\ell\ge 2$. For more problems and results on this topic, we refer to \cite{MR3570984,CHKWY,HHYW,HMWY2021,MR4193066,MR4080942,MR4170632}. Moving attention to \emph{connected} spanning subgraphs (in contrast to $F$-factors), one quickly observes that we cannot reduce the minimum degree condition in Dirac's theorem (``$n/2$'') significantly, as seen by the graph formed by two disjoint cliques of almost equal size. As a next step, Staden and Treglown \cite{MR4170632} conjectured that this minimum degree condition essentially guarantees a square of a Hamilton cycle.
\begin{conj}[\cite{MR4170632}]\label{conj3}
For every $\mu>0$, there exist $\alpha>0$ and $n_{0}\in \mathbb{N}$ such that the following holds. For every $n$-vertex graph $G$ with $n\geq n_{0}$, if $\delta(G)\geq \left(\frac{1}{2}+\mu\right)n$ and $\alpha(G)\leq \alpha n$, then $G$ contains a square of a Hamilton cycle.
\end{conj}
\subsection{Main results and lower bound constructions}
Our main result is the following, which in particular resolves Conjecture \ref{conj3}.
\begin{theorem}[]\label{thm2}
Given $\mu>0$ and $r\in \mathbb{N}$ with $r\geq 2$, there exists $\alpha>0$ such that the following holds for sufficiently large $n$. Let $G$ be an $n$-vertex graph with $\delta(G)\geq \left(1-\frac{1}{r}+\mu\right)n$ and $\alpha(G)\leq \alpha n$. Then $G$ contains an $r$-th power of a Hamilton cycle.
\end{theorem}
The minimum degree condition in Conjecture \ref{conj3} is asymptotically best possible, e.g. by considering a union of two disjoint cliques of almost equal size. Moreover, we cannot expect a result where $\mu$ does not depend on $\alpha$ (at least for $r=2$), as seen by the following proposition.
\begin{prop}[]\label{prop1.2}
Given $\alpha>0$ and $n\in \mathbb{N}$, there exists an $n$-vertex graph $G$ with $\delta(G)\geq (\frac{1}{2}+\frac{\alpha}{2})n-1$ and $\alpha(G)\leq \alpha n$ such that $G$ has no square of a Hamilton cycle.
\end{prop}
For a general lower bound on the minimum degree condition, we prove the following proposition by the \emph{connecting barrier}, which matches the minimum degree condition in Theorem~\ref{thm2} for the case $r=3$.
\begin{prop}[]\label{prop1.3}
Given $\mu, \alpha>0$ and $r\in \mathbb{N}$ with $r\geq 3$, the following holds for sufficiently large $n$. There exists an $n$-vertex graph $G$ with $\delta(G)\geq \frac{2-\varrho(K_r)}{3-2\varrho(K_r)}n-\mu n$ and $\alpha(G)\leq \alpha n$ such that $G$ has no $r$-th power of a Hamilton cycle.
\end{prop}
The proofs of Proposition \ref{prop1.2} and Proposition \ref{prop1.3} are given in Subsection \ref{sec2}. Note that $\varrho(K_3)=0$ and thus Proposition~\ref{prop1.3} complements Theorem~\ref{thm2} on the minimum degree condition for $r=3$. For general $r$, note that by~\eqref{eq:rhoK}, we have
\[ \frac{2-\varrho(K_r)}{3-2\varrho(K_r)} = 1 - \frac{2}{r + c} > 1- \frac2{r+1}, \]
where $c=3$ if $r$ is odd and $c=8/3$ otherwise. By the aforementioned result of Knierim and Su~\cite{MR4193066}, this shows a clear separation between the minimum degree thresholds for the $K_{r+1}$-factor problem and the $r$-th power of a Hamilton cycle problem in host graphs with sublinear independence number. This is in contrast to the problems in general host graphs, where the two problems share the same minimum degree threshold.
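The identity displayed above follows from \eqref{eq:rhoK} by a direct computation: for odd $r$ we have $\varrho(K_r)=1-\frac{2}{r-1}$ and thus
\begin{align*}
\frac{2-\varrho(K_r)}{3-2\varrho(K_r)}=\frac{1+\frac{2}{r-1}}{1+\frac{4}{r-1}}=\frac{r+1}{r+3}=1-\frac{2}{r+3},
\end{align*}
while for even $r$ we have $\varrho(K_r)=1-\frac{6}{3r-4}$ and thus
\begin{align*}
\frac{2-\varrho(K_r)}{3-2\varrho(K_r)}=\frac{1+\frac{6}{3r-4}}{1+\frac{12}{3r-4}}=\frac{3r+2}{3r+8}=1-\frac{2}{r+8/3}.
\end{align*}
In both cases $c>1$, which yields the strict inequality $1-\frac{2}{r+c}>1-\frac{2}{r+1}$.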
We suspect that the ``connecting barrier'' in Proposition \ref{prop1.3} indeed discloses the best possible minimum degree condition forcing an $r$-th power of a Hamilton cycle for every $r\geq 2$ and propose the following conjecture. \betagin{conj}\label{conj4} Given $\mu>0$ and $r\geq 4$, there exists $\alpha>0$ such that the following holds for sufficiently large $n$. Let $G$ be an $n$-vertex graph with $\deltalta(G)\geq \frac{2-\varrho(K_r)}{3-2\varrho(K_r)}n+\mu n$ and $\alpha(G)\leq \alpha n$. Then $G$ contains an $r$-th power of a Hamilton cycle. \end{conj} The smallest open case is $r=4$ where the minimum degree assumption in Theorem \ref{thm2} is roughly $\frac{3}{4}n$, while Proposition \ref{prop1.3} only provides a lower bound roughly $\frac{7}{10}n$. \subsection{Connecting barrier}\label{sec2} To obtain an $r$-th power of a Hamilton cycle, an essential condition is that every two vertices can be connected via an $r$-path. Motivated by this, we give two constructions for the proofs of Proposition \ref{prop1.2} and Proposition \ref{prop1.3}, in which the connecting property collapses. In the proofs, we apply a degree form of Ramsey--Tur\'{a}n number which was recently studied in \cite{CHKWY}, and their main result implies that every extremal construction for the function $\textbf{RT}(n, K_{r}, o(n))$ can be made almost regular. \betagin{coro}[\cite{CHKWY}, Proposition~1.5]\label{cor1} Given $r \in \mathbb{N}$ with $r \ge 3$ and constants $\mu,\alpha>0$, the following holds for all sufficiently large $n\in \mathbb{N}$. There is an $n$-vertex graph $G$ with $\deltalta(G)\ge \varrho(K_r) n-\mu n $ and $\alpha(G) \le \alpha n$ such that it does not contain any copy of $K_{r}$. \end{coro} The proofs of Proposition \ref{prop1.2} and Proposition \ref{prop1.3} go as follows. We build a graph $G$ with $V(G)=V_{1}\cup V_{2}\cup V_{3}$. Let $G[V_{2}, V_{1}]$ and $G[V_{2}, V_{3}]$ be two complete bipartite graphs, $E(G[V_{1}, V_{3}])=\emptyset$ and $G[V_{2}]$ be a $K_{r}$-free graph with large minimum degree. We shall prove that there is no $r$-th power of a Hamilton cycle in $G$. Then we optimize the size of $V_{i}$ for every $i\in [3]$ in order to maximize the minimum degree of $G$. \betagin{proof}[Proof of Proposition \ref{prop1.2}] Given $\alpha>0$ and $n\in \mathbb{N}$, let $G$ be an $n$-vertex graph with $V(G)=V_{1}\cup V_{2}\cup V_{3}$, $|V_{2}|=\alpha n$ and $|V_{1}|=|V_{3}|=\frac{1-\alpha}{2}n$. Let $E(G[V_{2}])=\emptyset$, $G[V_{i}]$ be a complete graph and $G[V_{2}, V_{i}]$ be a complete bipartite graph for every $i\in\{1, 3\}$. It holds that $\deltalta(G)\geq(\frac{1}{2}+\frac{\alpha}{2})n-1$ and $\alpha (G)=\alpha n$. Suppose for a contradiction that there exists a square of a Hamilton cycle in $G$, say $C$. Let $C_{u, v}$ be a shortest $2$-power of a path between $u$ and $v$ in $C$. We choose two vertices $u\in V_{1}$ and $v\in V_{3}$ such that $|C_{u, v}|$ is minimum among all such pairs in $V_1\times V_3$. Then as $uv\notin E(G)$, it holds that $V(C_{u, v})\backslash \{u, v\}\subseteq V_{2}$ and $|V(C_{u, v})\cap V_{2}|\geq 2$. Since $E(G[V_{2}])=\emptyset$, this is a contradiction. \end{proof} \betagin{proof}[Proof of Proposition \ref{prop1.3}]Given $\alpha,\mu>0$ and $r\in \mathbb{N}$ with $r\geq 3$, we choose $\frac{1}{n}\ll \alpha,\mu$. Let $G$ be an $n$-vertex graph with $V(G)=V_{1}\cup V_{2}\cup V_{3}$, $|V_{1}|=|V_{3}|=\frac{1-\varrho(K_r)}{3-2\varrho(K_r)}n$ and $|V_{2}|=\frac{n}{3-2\varrho(K_r)}$. 
Let $G[V_{i}]$ be a complete graph and $G[V_{2}, V_{i}]$ be a complete bipartite graph for every $i\in\{1, 3\}$. Let $G[V_{2}]$ be a $K_{r}$-free subgraph with $\alpha(G[V_{2}])\leq \alpha n$ and $\delta(G[V_{2}])\geq \varrho(K_r)|V_2|-\mu n$ given by Corollary~\ref{cor1}. For every vertex $v\in V_{1}$, it holds that $d(v)=n-|V_{3}|-1=\frac{2-\varrho(K_r)}{3-2\varrho(K_r)}n-1$. For every vertex $v\in V_3$, it holds that $d(v)=n-|V_{1}|-1=\frac{2-\varrho(K_r)}{3-2\varrho(K_r)}n-1$. For every vertex $v\in V_{2}$, it holds that
\begin{align*} d(v) & \geq |V_{1}|+|V_{3}|+\delta(G[V_{2}]) \\ & \geq n-|V_{2}|+\varrho(K_r)|V_2|-\mu n = \frac{2-\varrho(K_r)}{3-2\varrho(K_r)}n-\mu n. \end{align*}
Now $G$ has $\delta(G)\geq \frac{2-\varrho(K_r)}{3-2\varrho(K_r)}n-\mu n$ and $\alpha(G)\leq \alpha n$. Suppose for a contradiction that there exists an $r$-th power of a Hamilton cycle in $G$, say $C$. Let $C_{u, v}$ be a shortest $r$-path between $u$ and $v$ in $C$. We choose two vertices $u\in V_{1}$ and $v\in V_{3}$ such that $|C_{u, v}|$ is minimum among all such pairs in $V_1\times V_3$. Then, as $uv\notin E(G)$, we obtain that $V(C_{u, v})\backslash \{u, v\}\subseteq V_{2}$ and $|V(C_{u, v})\cap V_2|\geq r$, which forces a copy of $K_{r}$ in $G[V_2]$, a contradiction.
\end{proof}
\subsection{Proof strategy}
Our proof makes use of the absorption method and builds on the techniques developed in \cite{RM2014}. The absorption method was introduced by R\"{o}dl, Ruci\'{n}ski and Szemer\'{e}di about a decade ago in \cite{MR2500161}. Since then, it has turned out to be an important tool for studying the existence of spanning structures in graphs, digraphs and hypergraphs. Under this framework, we find the desired $r$-th power of a Hamilton cycle by means of four lemmas: the Absorber Lemma (Lemma \ref{lem2.1}), the Reservoir Lemma (Lemma \ref{lem2.2}), the Almost Path-cover Lemma (Lemma \ref{almost}) and the Connecting Lemma (Lemma \ref{connect}). The main difficulties lie in proving the Connecting Lemma. Koml\'{o}s, S\'{a}rk\"{o}zy and Szemer\'{e}di \cite{KSGS1998} also proved such a connecting lemma for any two copies of $K_r$, say $X$ and $Y$: they first use the regularity method to transfer the problem to the \emph{reduced graph}, and then use the \emph{cascade} technique to make the connection via an $r$-th power of a walk. The regularity method then yields an embedding of an $r$-path that connects $X$ and $Y$. Under our weaker minimum degree condition, their method only allows us to make such a connection in the reduced graph via an \emph{$(r-1)$-power} of a path. Then some delicate embedding, using the regularity and the assumption of sublinear independence number\footnote{For now, one can simply think of this as saying that every cluster in the regularity partition contains many edges inside.}, helps us to achieve the connection via an $r$-path of constant length. To reduce the technicality, we first present an easy way to connect (in the reduced graph) via a short sequence of copies of $K_{r+1}$ in which consecutive copies share $r-1$ vertices; secondly, to ease the embedding process, we introduce the notion of a \emph{good walk} (see Definition~\ref{def6.0}) and use it to model the way an $r$-path is embedded into a sequence of clusters. For the absorbing path, we first choose a random set $A$ of vertices and find an $r$-path $P$ containing $A$ from which one can free any subset of vertices of $A$; the freed vertices of $A$ can then be used to connect $r$-paths in later steps (see the schematic displayed below).
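To fix ideas, the following display is only a schematic of the absorbing mechanism; the formal definition of an absorber is given in Subsection~\ref{sec3.1}. If $P=u_{1}u_{2}\dots u_{2r}$ is an absorber for a vertex $v$, then both of
\[
u_{1}\cdots u_{r}\,u_{r+1}\cdots u_{2r}
\qquad\text{and}\qquad
u_{1}\cdots u_{r}\,v\,u_{r+1}\cdots u_{2r}
\]
are $r$-paths with the same ends, so $v$ can be inserted into, or removed from, the path at will; an absorbing path is obtained by concatenating many such absorbers.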
Here is a sketch of the proof of Theorem~\ref{thm2}. We apply the Reservoir Lemma to obtain a random set $A$. Then we apply the Absorber Lemma and find a collection of vertex-disjoint absorbers in $G-A$, one for each vertex of $A$. By applying the Connecting Lemma, we connect all these $|A|$ absorbers one by one using the vertices in $V(G)\backslash A$, and obtain an $r$-path, say $P_1$, which contains all these $|A|$ absorbers and every vertex in $A$. Note that the $r$-path $P_1$ is an absorbing path in the sense that we can remove any subset of vertices of $A$ while keeping the ends of the $r$-path unchanged. In $G-V(P_1)$, we apply the Almost Path-cover Lemma and obtain a family of vertex-disjoint $r$-paths, say $\{P_2, P_3, \dots, P_\ell\}$ for some $\ell = o(n)$, and let $R:=V(G)\setminus \bigcup^{\ell}_{i=1}V(P_i)$. By the choice of $A$, every vertex in $G$ has many neighbors in $A$ and many vertex-disjoint absorbers in $A$. We then choose a collection of vertex-disjoint short $r$-paths that contain the vertices of $R$ and whose remaining vertices lie in $A$ -- this can be achieved by simply choosing vertex-disjoint absorbers for the vertices of $R$. By applying the Connecting Lemma, we connect all these $|R|$ (short) $r$-paths and all these $\ell$ (long) $r$-paths one by one into an $r$-th power of a cycle, using the vertices in $A$. Since all the vertices used in these connections come from $A$ (and thus from $P_1$), and $P_1$ can spare any subset of $A$, we obtain an $r$-th power of a Hamilton cycle in $G$.
\subsection{Basic notation}
In this subsection, we include some notation used throughout the paper. For a graph $G:= G(V, E)$, we write $|G|=|V(G)|$ and $e(G)=|E(G)|$. For $U\subseteq V(G)$, $G[U]$ denotes the subgraph of $G$ induced on $U$. Let $G-U:=G[V(G)\backslash U]$. For two subsets $A, B\subseteq V(G)$, we use $E(A, B)$ to denote the set of edges with one endpoint in $A$ and the other in $B$. Given $A\subseteq V(G)$ and $v\in V(G)$, $N_{A}(v):=N(v)\cap A$, and $d_{A}(v):=|N_{A}(v)|$. When $A=V(G)$, we drop the subscript and simply write $d(v)$. For $A\subseteq V(G)$, $N(A):=\bigcap_{v\in A}N(v)$. For any integers $a\leq b$, $[a, b]:=\{i\in \mathbb{Z}: a\leq i\leq b\}$ and $[a]:= [1, a]$. Given $r\in \mathbb{N}$, we use $\mathbf{S}_{r}$ to denote the family of all permutations of $[r]$. Given $\pi\in \mathbf{S}_{r}$ and $i\neq j\in [r]$, we use $\pi\circ(i, j)$ to denote the permutation in $\mathbf{S}_{r}$ obtained from $\pi$ by swapping the $i$-th and $j$-th elements of $\pi$. Given a vertex set $V=\{v_{1}, \dots, v_{\ell}\}$, let $\mathbf{S}_{V}$ be the family of all permutations of the vertices $v_{1}, \dots, v_{\ell}$. Given an $r$-path $P=v_{1}v_{2}\dots v_{\ell}$ for some $\ell\in \mathbb{N}$ with $\ell\geq r$, the two \emph{ends} of $P$ are defined as the $r$-tuples $(v_{r}, v_{r-1}, \dots, v_{1})$ and $(v_{\ell-r+1}, \dots, v_{\ell-1}, v_{\ell})$. Given two disjoint $r$-tuples of vertices $\mf{x}, \mf{y}$ each inducing a copy of $K_{r}$, say $\mf{x}=(x_{1}, \dots, x_{r})$ and $\mf{y}=(y_{1},\dots, y_{r})$, and an $r$-path $P=v_{1}\dots v_{\ell}$ for some $\ell\in \mathbb{N}$ that is vertex disjoint from $\mf{x}$ and $\mf{y}$, we say $P$ \emph{connects} $\mf{x}$ and $\mf{y}$ if $v_{i}\in N(x_{i}, \dots, x_{r})$ and $y_{i}\in N(v_{\ell-r+i}, \dots, v_{\ell})$ for every $i\in [r]$. Moreover, we use $\mf{x} P \mf{y}$ to denote the resulting $r$-path $x_{1}\dots x_{r}v_1v_2\ldots v_{\ell}y_{1}\dots y_{r}$. Given two sequences $Q_{1}:=(S_1, \dots, S_{r})$ and $Q_{2}:=(T_{1}, \dots, T_{\ell})$, let $Q_{1}Q_2:=(S_1, \dots, S_{r}, T_{1}, \dots, T_{\ell})$.
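For instance, for $r=2$ the $2$-path $P=v_{1}v_{2}v_{3}v_{4}v_{5}$ has ends $(v_{2}, v_{1})$ and $(v_{4}, v_{5})$, and $P$ connects two disjoint edges $\mf{x}=(x_{1}, x_{2})$ and $\mf{y}=(y_{1}, y_{2})$ precisely when
\[
v_{1}\in N(x_{1}, x_{2}),\quad v_{2}\in N(x_{2}),\quad y_{1}\in N(v_{4}, v_{5}),\quad y_{2}\in N(v_{5}),
\]
in which case $\mf{x}P\mf{y}=x_{1}x_{2}v_{1}v_{2}v_{3}v_{4}v_{5}y_{1}y_{2}$ is again a $2$-path. This small example is only meant to illustrate the notation and is not used later.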
When we write $\beta\ll \gamma$, we always mean that $\beta, \gamma$ are constants in $(0, 1)$, and that there exists $\beta_{0}=\beta_{0}(\gamma)$ such that the subsequent statements hold for all $0<\beta\leq \beta_{0}$. Hierarchies of other lengths are defined analogously.
The rest of the paper is organized as follows. In Section \ref{sec3} we give the proof of Theorem \ref{thm2}. The proofs of the Absorber Lemma, the Reservoir Lemma and the Almost Path-cover Lemma are in Subsection \ref{sec4}. In Section \ref{sec6} we introduce the regularity lemma and prove the Connecting Lemma, whose proof comprises the majority of the paper.
\section{Proof of Theorem \ref{thm2}}\label{sec3}
In this section, we introduce the crucial lemmas used to prove our main result and explain how they work together to derive the proof of Theorem \ref{thm2}. The proofs of these lemmas are presented in full detail in Subsection \ref{sec4} and Section \ref{sec6}, respectively.
\subsection{Main tools}\label{sec3.1}
Before presenting the statements of these crucial lemmas, we need to introduce one more notion.
\begin{defn}[Absorber] Given $v\in V(G)$, we say that an $r$-path $P$ of order $2r$ is an \emph{absorber} for $v$ if $V(P)\cup\{v\}$ induces an $r$-path of order $2r+1$ (in $G$) which shares the same ends with $P$. For $v\in V(G)$ and $U\subseteq V(G)$, let $\mathcal{L}_{U}(v)$ be a maximum set of vertex-disjoint absorbers of $v$ in $G[U]$. We often omit the subscript $U$ if the graph is clear from the context.
\end{defn}
In fact, the following lemma holds under a weaker minimum degree condition, obtained by replacing $1-\frac{1}{r}$ in Theorem \ref{thm2} with $1-\frac{1}{f(r)}$ for some $0<f(r)\leq r$.
\begin{lemma}[Absorber Lemma]\label{lem2.1} Given $r\in \mathbb{N}$ and $\mu>0$, let $f(r):=\frac{r}{2}+1$ if $r\in 2\mathbb{N}$, and $f(r):=\frac{r}{2}+\frac{5}{6}$ if $r\in 2\mathbb{N}+1$. There exists $\alpha>0$ such that the following holds for sufficiently large $n$. Let $G$ be an $n$-vertex graph with $\delta(G)\geq \left(1-\frac{1}{f(r)}+\mu\right)n$, $\alpha(G)\leq \alpha n$, and $W\subseteq V(G)$ with $|W|\leq\frac{\mu}{2}n$. Then every $v\in V(G)$ has at least one absorber in $G[N(v)\backslash W]$.
\end{lemma}
For convenience, we will use the following immediate corollary of Lemma \ref{lem2.1}.
\begin{cor}\label{coro2.1} Given $r\in \mathbb{N}$ and $\mu>0$, let $f(r):=\frac{r}{2}+1$ if $r\in 2\mathbb{N}$, and $f(r):=\frac{r}{2}+\frac{5}{6}$ if $r\in 2\mathbb{N}+1$. There exists $\alpha>0$ such that the following holds for sufficiently large $n$. Let $G$ be an $n$-vertex graph with $\delta(G)\geq \left(1-\frac{1}{f(r)}+\mu\right)n$ and $\alpha(G)\leq \alpha n$. Then $|\mathcal{L}(v)|\geq \frac{\mu}{4r}n$ for every $v\in V(G)$.
\end{cor}
As mentioned earlier, we need to reserve a vertex subset in the original graph in order to connect a small number of $r$-paths. The following lemma provides such a vertex subset.
\begin{lemma}[Reservoir Lemma]\label{lem2.2} Given $c, \mu, \eta, \gamma>0$, there exists $\zeta>0$ such that the following holds for sufficiently large $n$. Let $G$ be an $n$-vertex graph with $\delta(G)\geq \left(c+\mu\right)n$, and $|\mathcal{L}(v)|\geq \eta n$ for every $v\in V(G)$. Then there exists a vertex subset $A\subseteq V(G)$ with $2r\zeta n\leq |A|\leq \gamma n$ such that for every $v\in V(G)$, it holds that $|N_A(v)|\geq \left(c+\frac{\mu}{2}\right)|A|$ and $|\mathcal{L}_{A}(v)|\geq \zeta n$.
\end{lemma}
The following lemma provides a family of vertex-disjoint $r$-paths that cover almost all vertices in $G$. In fact, it holds under a weaker minimum degree condition, obtained by replacing $1-\frac{1}{r}$ in Theorem \ref{thm2} with $1-\frac{1}{g(r)}$ for some $0<g(r)\leq r$.
\begin{lemma}[Almost Path-cover Lemma]\label{almost} Given $\mu, \delta>0$ and $r, \ell\in \mathbb{N}$, let $g(r):=\lfloor\frac{r}{2}\rfloor+1$. There exists $\alpha>0$ such that the following holds for sufficiently large $n$. Let $G$ be an $n$-vertex graph with $\delta(G)\geq \left(1-\frac{1}{g(r)}+\mu\right)n$ and $\alpha(G)\leq \alpha n$. Then $G$ contains a family of vertex-disjoint $r$-paths, all of the same length $\ell$, which covers all but at most $\delta n$ vertices of $G$.
\end{lemma}
We also require the following result to connect two vertex-disjoint $r$-paths. This lemma is the only part of the proof of Theorem \ref{thm2} that requires $\delta(G)\geq (1-\frac{1}{r}+\mu)n$ (elsewhere, $\delta(G)\geq (1-\frac{2}{r+2}+\mu)n$ suffices).
\begin{lemma}[Connecting Lemma]\label{connect} Given $r\in \mathbb{N}$ and $\mu>0$, there exists $\alpha>0$ such that the following holds for sufficiently large $n$. Let $G$ be an $n$-vertex graph with $\delta(G)\geq \left(1-\frac{1}{r}+\mu\right)n$ and $\alpha(G)\leq \alpha n$. For every two disjoint $r$-tuples of vertices, denoted as $\mf{x}$ and $\mf{y}$, each inducing a copy of $K_{r}$ in $G$, there exists an $r$-path in $G$ on at most $200r^{5}$ vertices which connects $\mf{x}$ and $\mf{y}$.
\end{lemma}
In what follows, given an $r$-tuple $\mf{x}=(x_1,x_2,\ldots,x_r)$, we write $\overleftarrow{\mf{x}}:=(x_r,x_{r-1},\ldots,x_1)$. Now we are ready to prove Theorem \ref{thm2} using Corollary \ref{coro2.1}, Lemma \ref{lem2.2}, Lemma \ref{almost} and Lemma \ref{connect}.
\begin{proof}[Proof of Theorem \ref{thm2}] Given $\mu>0$ and $r\in \mathbb{N}$, we choose
\begin{center} $\frac{1}{n}\ll \alpha\ll\frac{1}{\ell}, \delta\ll\zeta\ll\gamma\ll\mu, \frac{1}{r}$. \end{center}
Let $\eta=\frac{\mu}{4r}$ and let $G$ be an $n$-vertex graph with $\delta(G)\geq \left(1-\frac{1}{r}+\mu\right)n$ and $\alpha(G)\leq \alpha n$. Applying Corollary \ref{coro2.1} to $G$, we get that $|\mathcal{L}(v)|\geq \frac{\mu}{4r}n$ for every vertex $v\in V(G)$. By Lemma \ref{lem2.2}, we obtain a reservoir $A\subseteq V(G)$ with
\stepcounter{propcounter}
\begin{enumerate}[label = ({\bfseries \Alph{propcounter}\arabic{enumi}})]
\item\label{p1} $2r\zeta n\leq|A|\leq \gamma n$;
\item\label{p2} $|\mathcal{L}_{A}(v)|\geq \zeta n$ for every $v\in V(G)$;
\item\label{p3} $|N_A(v)|\geq \left(1-\frac{1}{r}+\frac{\mu}{2}\right)|A|$ for every $v\in V(G)$.
\end{enumerate}
Since $\gamma\ll \mu, \frac{1}{r}$ and $|\mathcal{L}(v)|\geq \frac{\mu}{4r}n$ for every $v\in V(G)$, we can greedily choose an absorber for every $v\in A$ such that these absorbers are pairwise vertex-disjoint and disjoint from $A$: during the process, since $|A|\leq \gamma n$, the number of vertices that we need to avoid is at most $(2r+1)|A|\leq (2r+1)\gamma n< \frac{\mu}{4r}n\leq |\mathcal{L}(v)|$. Let $\mathcal{F}_{1}:=\{H_{1}, H_{2}, \dots, H_{|A|}\}$ be the family of these $r$-paths, each of which is the $r$-path formed by a vertex in $A$ \emph{together with} its absorber\footnote{Each $H_i$ has the same ends as the corresponding absorber.}. Let $\mf{x}_i$ and $\mf{y}_i$ be the two ends of $H_i$ for every $i\in[|A|]$.
Then we connect all these $r$-paths $H_i$, one after another, into a single $r$-path. In every step of connecting $H_i$ and $H_{i+1}$, we find an $r$-path on at most $200r^5$ vertices connecting $\mf{y}_{i}$ and $\overleftarrow{\mf{x}_{i+1}}$, which is vertex-disjoint from all previous connections and from $V(\mathcal{F}_1)$. Since $\frac{1}{n}\ll \gamma\ll\mu, \frac{1}{r}$ and $|A|\leq \gamma n$, such a family of $|A|-1$ vertex-disjoint $r$-paths can be obtained iteratively by repeatedly applying Lemma~\ref{connect} to $G-W$ with $W$ being the set of vertices that are in $V(\mathcal{F}_1)$ or used in previous connections. After all these connections, we obtain an $r$-path, say $P_{1}$, which contains all the members of $\mathcal{F}_{1}$, and thus $A$. Since $\gamma \ll \mu, \frac{1}{r}$ and $|A|\leq \gamma n$, we have $|P_{1}|\leq 200r^{5}(|A|-1)+(2r+1)|A|<(3r+200r^{5})|A|\leq \frac{\mu}{4} n$. Observe that for every subset $A'\subseteq A$ there is an $r$-path on $V(P_1) \setminus A'$ with the same ends as $P_1$: each vertex of $A'$ can be removed from the corresponding member of $\mathcal{F}_1$ by the defining property of its absorber, and this affects neither the ends of that member nor the connecting paths. Note that we have $\delta(G-V(P_{1})) \geq \left(1-\frac{1}{r}+\mu\right)n-\frac{\mu}{4} n\geq \left(1-\frac{1}{r}+\frac{3}{4}\mu\right)n$. Applying Lemma \ref{almost} to $G-V(P_{1})$, we obtain a collection of vertex-disjoint $r$-paths, each of length $\ell$, denoted as $P_{2}, \dots, P_{\lambda}$ for some integer $\lambda\leq\frac{n}{\ell}$. Moreover, this collection covers all but a set $R$ of at most $\delta n$ vertices in $G-V(P_{1})$. Recall that by \ref{p2} and \ref{p3}, for every $v\in R$ we have $|\mathcal{L}_{A}(v)|\geq \zeta n$ and $|N_A(v)|\geq \left(1-\frac{1}{r}+\frac{\mu}{2}\right)|A|$. As $\delta\ll\zeta, \frac{1}{r}$ and $|R|\le \delta n$, we can greedily pick an absorber in $G[A]$ for every $v\in R$ such that all these absorbers are pairwise vertex-disjoint. This is possible since each time, for $v\in R$, the number of absorbers of $v$ touched in previous steps is at most $2r|R|\leq 2r\delta n<\zeta n \leq|\mathcal{L}_{A}(v)|$. Let $\mathcal{F}_{2}:=\{H_{|A|+1}, H_{|A|+2}, \dots, H_{|A|+|R|}\}$ be the family of $r$-paths each of which is formed by a vertex in $R$ together with its absorber. Now we connect all these $r$-paths into an $r$-th power of a (Hamilton) cycle using the vertices of $A$. In fact, since $\frac{1}{\ell}, \delta\ll\zeta\ll \mu, \frac{1}{r}$, $|A|\geq 2r\zeta n$, $|R|\le \delta n$ and $|N_A(v)|\ge \left(1-\frac{1}{r}+\frac{\mu}{2}\right)|A|$ for every $v\in V(G)$, this can be done similarly by iteratively applying Lemma~\ref{connect} to $G[A]-W$ with $W$ being the set of vertices in $V(\mathcal{F}_2)$ and those used in previous connections, using the fact that
\begin{center} $|W|\le (2r+1)|R|+200r^{5}(|R|+\lambda)\leq 300r^5(|R|+\lambda)\leq 300r^5(\delta n+\frac{n}{\ell})\leq \frac{\mu}{4}|A|$. \end{center}
Let $A'\subseteq A$ be the set of all the vertices used for these connections. By the observation about $P_1$ above, there is an $r$-path on $V(P_1)\setminus A'$ with the same ends as $P_1$. This gives an $r$-th power of a Hamilton cycle in $G$.
\end{proof}
\subsection{Proof of the lemmas}\label{sec4}
In this subsection, we provide the (short) proofs of Lemma \ref{lem2.1}, Lemma \ref{lem2.2} and Lemma~\ref{almost}. The proof of Lemma \ref{lem2.1} utilizes some results on the Ramsey--Tur\'{a}n number. We prove Lemma \ref{lem2.2} using standard probabilistic arguments. To prove Lemma \ref{almost}, we apply a recent result of Chen, Han, Wang and Yang in \cite{CHWY2022}.
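For completeness, here is one way to see that Corollary~\ref{coro2.1} follows from Lemma~\ref{lem2.1}. Fix $v\in V(G)$ and suppose that a maximal family $\mathcal{L}(v)$ of pairwise vertex-disjoint absorbers of $v$ satisfies $|\mathcal{L}(v)|<\frac{\mu}{4r}n$. Since each absorber has exactly $2r$ vertices, the set $W$ of vertices covered by $\mathcal{L}(v)$ satisfies
\[
|W|\leq 2r\,|\mathcal{L}(v)|<2r\cdot\frac{\mu}{4r}n=\frac{\mu}{2}n,
\]
so Lemma~\ref{lem2.1} yields a further absorber of $v$ in $G[N(v)\backslash W]$, which is vertex-disjoint from all members of $\mathcal{L}(v)$, contradicting the maximality of $\mathcal{L}(v)$.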
To prove Lemma \ref{lem2.1}, we need the following notation. We use $ar(H)$ to denote the \emph{vertex arboricity} of $H$, which is the least integer $t$ such that $V(H)$ can be partitioned into $t$ parts each of which induces a forest in $H$. The corresponding partition of $V(H)$ is an \emph{acyclic partition} of $H$.
\begin{defn}[\cite{EHSS1983}]\label{def1.5} The \emph{modified arboricity} $AR(H)$ of a graph $H$ is the least $\ell\in \mathbb{N}$ for which the following holds:
\begin{itemize}
\item either $\ell$ is even and $ar(H)=\frac{\ell}{2}$;
\item or $\ell$ is odd and there exists an independent set $V^{\ast}$ such that $ar(H-V^{\ast})\leq \frac{\ell-1}{2}$.
\end{itemize}
\end{defn}
\begin{theorem}[\cite{EHSS1983}]\label{thm0.3} Let $h, \ell\in \mathbb{N}$ and $H$ be an $h$-vertex graph. If $AR(H)\leq \ell$, then $\textbf{RT}(n, H, o(n))\leq \textbf{RT}(n, K_{\ell}, o(n))$.
\end{theorem}
\begin{prop}[]\label{prop3.3} Given $r\geq 2$, let $H:=P^{r}_{2r}$. Then $AR(H)=r+1$.
\end{prop}
\begin{proof} Let $H$ be an $r$-path $P=v_{1}\dots v_{2r}$. Since $K_{r+1}\subseteq H$, we have $AR(H)\ge AR(K_{r+1})=r+1$. Assume first that $r$ is even. Then
\[\{v_{1}, v_{2}, v_{r+2}, v_{r+3}\},\{v_{3}, v_{4}, v_{r+4}, v_{r+5}\}, \cdots, \{v_{r-1}, v_{r}, v_{2r}\},\{v_{r+1}\}\]
is an acyclic partition of $H$ with $\frac{r}{2}+1$ parts; moreover, $\{v_{r+1}\}$ is an independent set and the remaining $\frac{r}{2}$ parts form an acyclic partition of $H-v_{r+1}$. Thus, by definition, we obtain that $AR(H)=r+1$. Assume now that $r$ is odd. Then
\[\{v_{1}, v_{2}, v_{r+2}, v_{r+3}\}, \cdots, \{v_{r-2}, v_{r-1}, v_{2r-1}, v_{2r}\}, \{v_{r}, v_{r+1}\}\]
is an acyclic partition of $H$ with $\frac{r+1}{2}$ parts. Therefore, by definition, we obtain that $AR(H)=r+1$.
\end{proof}
We prove Lemma \ref{lem2.1} in the next subsubsection.
\subsubsection{Absorbers: Proof of Lemma \ref{lem2.1}}\label{sec2.2.1}
Before presenting the proof, we first recall some of the results mentioned earlier regarding Ramsey--Tur\'{a}n theory. In \cite{EST1979}, Erd\H{o}s and S\'{o}s proved that for $r\in 2\mathbb{N}+1$, $\textbf{RT}(n, K_{r}, o(n))=\frac{r-3}{2r-2}n^{2}+o(n^{2})$. In \cite{EHSS1983}, Erd\H{o}s, Hajnal, S\'{o}s and Szemer\'{e}di proved that for $r\in 2\mathbb{N}$, $\textbf{RT}(n, K_{r}, o(n))=\frac{3r-10}{6r-8}n^{2}+o(n^{2})$.
\begin{proof}[Proof of Lemma \ref{lem2.1}] Given $r\in \mathbb{N}$ and $\mu>0$, we choose $\frac{1}{n}\ll\alpha\ll \mu$. Recall that $f(r)=\frac{r}{2}+1$ if $r\in 2\mathbb{N}$, and $f(r)=\frac{r}{2}+\frac{5}{6}$ if $r\in 2\mathbb{N}+1$; write $f:=f(r)$. Let $G$ be an $n$-vertex graph with $\delta(G)\geq \left(1-\frac{1}{f}+\mu\right)n$, let $W\subseteq V(G)$ with $|W|\leq\frac{\mu}{2}n$ and let $v\in V(G)$. We shall find an absorber of $v$ in $G[N_G(v)\setminus W]$, that is, a copy of $P^{r}_{2r}$. Write $V'=N_G(v)\setminus W$ and $n'=|V'|$. Then $n'\ge \frac{f-1}{f}n$. By Theorem \ref{thm0.3} and Proposition \ref{prop3.3}, it suffices to prove that $e(G[V'])\geq \textbf{RT}(n', K_{r+1}, 2\alpha n')$. We have $\delta(G[V']) \ge n' - \frac{n}{f} + \mu n\ge n' - \frac{n'}{f-1} + \mu n'$. This implies that the density of $G[V']$ is at least $1-\frac{1}{f-1}+\mu$. By \eqref{eq:rhoK} and the definition of $f=f(r)$, this density exceeds $\varrho(K_{r+1})$, so $e(G[V'])\geq \textbf{RT}(n', K_{r+1}, 2\alpha n')$ for sufficiently large $n$, and hence $G[V']$ contains a copy of $P^{r}_{2r}$, that is, an absorber of $v$. This completes the proof.
\end{proof}
\subsubsection{Reservoir: Proof of Lemma \ref{lem2.2}}\label{sec2.2.2}
The proof of Lemma \ref{lem2.2} involves a standard probabilistic argument.
In order to carry out this argument, we will need the following Chernoff bound for the binomial distribution (see Corollary 2.3 in \cite{JSLT2000}). Recall that the binomial random variable with parameters $(n, p)$ is the sum of $n$ independent Bernoulli variables, each taking value $1$ with probability $p$ and $0$ with probability $1-p$.
\begin{prop}[\cite{JSLT2000}]\label{prop6.1} Suppose that $X$ has a binomial distribution and $0<a<\frac{3}{2}$. Then $\mathbb{P}[|X-\mathbb{E}[X]|\geq a\mathbb{E}[X]]\leq 2\exp(-a^{2}\mathbb{E}[X]/3)$.
\end{prop}
\begin{proof}[Proof of Lemma \ref{lem2.2}] Given $c, \mu, \eta, \gamma>0$, we choose $\frac{1}{n}\ll \eta, \mu, \gamma, c$ and let $c_{1}=c_{2}=\frac{\mu}{4}$. Let $G$ be an $n$-vertex graph with $\delta(G)\geq \left(c+\mu\right)n$ such that every $v\in V(G)$ has a set $\mathcal{L}(v)$ of at least $\eta n$ vertex-disjoint absorbers. We first note that $c+\mu\leq 1$. We choose a vertex subset $A\subseteq V(G)$ by including every vertex independently at random with probability $p=\frac{\gamma}{2}$. Notice that $\mathbb{E}[|A|]=np$, $\mathbb{E}[|\mathcal{L}_{A}(v)|]\ge\eta np^{2r}$ and $\mathbb{E}[d_A(v)]\ge\left(c+\mu\right)np$ for every $v\in V(G)$. Then by Proposition \ref{prop6.1}, it holds that:
\begin{enumerate}
\item $\mathbb{P}[|A|\geq(1+c_{1})\mathbb{E}[|A|]]\leq \mathbb{P}[||A|-\mathbb{E}[|A|]|\geq c_{1}\mathbb{E}[|A|]]\leq 2\exp\left(-c^{2}_{1}np/3\right)$;
\item $\mathbb{P}[|\mathcal{L}_{A}(v)|\leq \frac{1}{2}\mathbb{E}[|\mathcal{L}_{A}(v)|]]\leq 2\exp\left(-\eta np^{2r}/12\right)$;
\item $\mathbb{P}[d_A(v)\leq (1-c_{2})\mathbb{E}[d_A(v)]]\leq 2\exp\left(-c^{2}_{2}(c+\mu)np/3\right)$.
\end{enumerate}
Meanwhile, it holds that
\begin{center} $\exp\left(-c^{2}_{1}np/3\right)+n\exp\left(-c^{2}_{2}(c+\mu)np/3\right)+n\exp\left(-\eta np^{2r}/12\right)\leq\frac{1}{6}$. \end{center}
By the union bound, with probability at least $\frac{2}{3}$ the set $A$ satisfies the following properties:
\[ |A| \leq (1+\tfrac{\mu}{4})\mathbb{E}[|A|]=(1+\tfrac{\mu}{4})np=(1+\tfrac{\mu}{4})\tfrac{\gamma}{2}n\leq \gamma n;\]
\[ |\mathcal{L}_{A}(v)| \geq \tfrac{1}{2}\mathbb{E}[|\mathcal{L}_{A}(v)|]\geq\tfrac{\eta p^{2r}}{2}n=\tfrac{\eta \gamma^{2r}}{2^{2r+1}}n;\]
\[d_A(v) \geq (1-\tfrac{\mu}{4})\left(c+\mu\right)np \geq \left(c+\tfrac{\mu}{2}\right)(1+\tfrac{\mu}{4})np\geq \left(c+\tfrac{\mu}{2}\right)|A|,\]
where the second inequality holds since $c+\mu\leq 1$. Hence, $A$ is as desired by taking $\zeta=\frac{\eta\gamma^{2r}}{2^{2r+1}}$. The lower bound on $|A|$ follows from $|\mathcal{L}_{A}(v)|\geq \zeta n$: the absorbers in $\mathcal{L}_{A}(v)$ are vertex-disjoint and each has $2r$ vertices, so $|A|\geq 2r\zeta n$.
\end{proof}
\subsubsection{Almost path-cover: Proof of Lemma~\ref{almost}}\label{sec5}
To prove Lemma \ref{almost}, we employ a recent result of Chen, Han, Wang, and Yang~\cite{CHWY2022}. For simplicity of presentation, we state a weaker version of their result as follows.
\begin{lemma}[\cite{CHWY2022}, Lemma 3.1]\label{lem4.1} Given $\mu, \delta>0$ and an $h$-vertex graph $H$ with $h\in \mathbb{N}$ and $h\geq 3$, there exists $\alpha>0$ such that the following holds for sufficiently large $n$. Let $G$ be an $n$-vertex graph with $\delta(G)\geq \max\left\{\left(1-\frac{1}{ar(H)}+\mu\right)n, \left(\frac{1}{2}+\mu\right)n\right\}$ and $\alpha(G)\leq \alpha n$. Then $G$ contains an $H$-tiling which covers all but at most $\delta n$ vertices.
\end{lemma}
In fact, Lemma \ref{almost} can be easily derived by applying Lemma \ref{lem4.1} with $H=P^r_{\ell}$ and making the trivial observation that $ar(P^r_{\ell})=g(r)$.\footnote{Here, we remark that Lemma \ref{almost} can also be proved by combining a standard application of Szemer\'{e}di's regularity lemma with some involved embedding techniques as in \cite{CHWY2022}. In fact, the degree condition $(1-\tfrac{1}{r}+\mu )n$ enables us to apply the Hajnal--Szemer\'{e}di theorem and obtain an almost $K_{r}$-factor in the reduced graph. Then we suitably embed vertex-disjoint copies of $P^r_{\ell}$ into every collection of $r$ clusters that forms a copy of $K_r$ as above, where one needs some involved embedding techniques in the setting $\alpha(G)=o(n)$ (e.g.\ the tree-building lemma in \cite{EHSS1983}).}
\section{Connecting $r$-tuples}\label{sec6}
In this section we focus on Lemma~\ref{connect}, which allows us to ``connect'' two (ordered) copies of $K_r$ via an $r$-path of constant length. Following a standard application of the regularity lemma, we obtain an $\varepsilon$-regular partition and then build a \emph{reduced graph} $R$ (see Definition~\ref{def2.7}). We first use Lemma~\ref{lem6.2} to obtain a \emph{good walk} (see Definition \ref{def6.0}) in $R$ and then use some involved embedding techniques (see Lemma \ref{lem6.30} and Lemma \ref{lem6.31}) to finish the connection.
\subsection{Regularity}
In this paper, we make use of a degree form of the regularity lemma \cite{KS1996}. We first introduce the basic notion of a regular pair. Given a graph $G$ and a pair $(V_{1}, V_{2})$ of vertex-disjoint subsets of $V(G)$, the \emph{density} of $(V_{1}, V_{2})$ is defined as
\begin{center} $d(V_{1}, V_{2})=\frac{e(V_{1}, V_{2})}{|V_{1}||V_{2}|}$. \end{center}
\begin{defn}[]\label{def2.1} Given $\varepsilon>0$, a graph $G$ and a pair $(V_{1}, V_{2})$ of vertex-disjoint subsets of $V(G)$, we say that the pair $(V_1, V_2)$ is $\varepsilon$-\emph{regular} if for all $X\subseteq V_{1}$ and $Y\subseteq V_{2}$ satisfying
\[ |X| \ge \varepsilon |V_{1}| ~\text{and}~ |Y| \ge \varepsilon |V_{2}|, \]
we have
\[ |d(X,Y) - d(V_1,V_2)| \le \varepsilon. \]
Moreover, a pair $(X, Y)$ is called $(\varepsilon, \beta)$-\emph{regular} if $(X, Y)$ is $\varepsilon$-regular and $d(X, Y)\geq \beta$.
\end{defn}
\begin{lemma}[\cite{KS1996}, Slicing Lemma]\label{lem2.5} Assume $(V_{1}, V_{2})$ is $\varepsilon$-regular with density $d$. For some $\alpha\geq \varepsilon$, let $V_{1}'\subseteq V_{1}$ with $|V_{1}'|\geq \alpha|V_{1}|$ and $V_{2}'\subseteq V_{2}$ with $|V_{2}'|\geq \alpha|V_{2}|$. Then $(V_{1}', V_{2}')$ is $\varepsilon'$-regular with $\varepsilon':=\max\left\{2\varepsilon, \varepsilon/\alpha\right\}$, and for its density $d'$ we have $|d'-d|<\varepsilon$.
\end{lemma}
\begin{lemma}[\cite{KS1996}, Degree form of the Regularity Lemma]\label{lem2.6} For every $\varepsilon > 0$, there is an $N = N(\varepsilon )$ such that the following holds for any real number $\beta\in [0, 1]$ and $n\in \mathbb{N}$. Let $G$ be a graph with $n$ vertices.
Then there exists a partition $V(G)=V_{0}\cup \cdots \cup V_{k}$ and a spanning subgraph $G' \subseteq G$ with the following properties:
\begin{enumerate}
\item [$({\rm 1})$] $\frac{1}{\varepsilon}\leq k \le N$;
\item [$({\rm 2})$] $|V_{i}| \le \varepsilon n$ for $i\in [0, k]$ and $|V_{1}|=|V_{2}|=\cdots=|V_{k}| =m$ for some $m\in \mathbb{N}$;
\item [$({\rm 3})$] $d_{G'}(v) > d_{G}(v) - (\beta + \varepsilon )n$ for all $v \in V(G)$;
\item [$({\rm 4})$] each $V_{i}$ is an independent set in $G'$ for $i\in [k]$;
\item [$({\rm 5})$] all pairs $(V_{i}, V_{j})$ are $\varepsilon$-regular (in $G'$) with density $0$ or at least $\beta$ for distinct $i, j\neq0$.
\end{enumerate}
\end{lemma}
We call the partition in Lemma~\ref{lem2.6} an $(\varepsilon,\beta)$-\emph{regular} partition. A widely used auxiliary graph associated with a regular partition is the reduced graph.
\begin{defn}[Reduced graph]\label{def2.7} Let $k\in \mathbb{N}$, $\beta, \varepsilon>0$, $G$ be a graph with an $(\varepsilon, \beta)$-regular partition $V(G)=V_0\cup \ldots \cup V_k$ and $G'\subseteq G$ be a subgraph fulfilling the properties of Lemma \ref{lem2.6}. We denote by $R_{\varepsilon, \beta}$ the \emph{reduced graph} for the $(\varepsilon,\beta)$-regular partition, which is defined as follows. Let $V(R_{\varepsilon, \beta})=\{V_{1}, \ldots, V_{k}\}$, and for two distinct clusters $V_{i}$ and $V_{j}$ we draw an edge between $V_{i}$ and $V_{j}$ if $d_{G'}(V_i, V_j)\geq \beta$, and no edge otherwise.
\end{defn}
The following fact gives a lower bound on the minimum degree of the reduced graph.
\begin{fac}[]\label{fact2.8} Let $n\in \mathbb{N}$, $\varepsilon, \beta, c\in(0,1)$ and $G$ be an $n$-vertex graph with $\delta(G)\ge cn$. Let $V(G)=V_{0}\cup \cdots \cup V_{k}$ be a vertex partition of $V(G)$ satisfying Lemma \ref{lem2.6} $(1)$--$(5)$ and $R:=R_{\varepsilon, \beta}$ the reduced graph for the partition. Then for every $V_{i}\in V(R)$ we have $d_{R}(V_{i})\ge(c-2\varepsilon-\beta)k$.
\end{fac}
\begin{proof}[Proof] Note that $|V_{0}|\leq \varepsilon n$ and $|V_{i}|=m$ for every $i\in [k]$. Every edge in $R$ represents less than $m^{2}$ edges in $G'-V_{0}$. Thus we have
\begin{align}\label{al1} d_{R}(V_{i}) & \geq \frac{|V_{i}|(\delta(G)-(\beta+\varepsilon)n-\varepsilon n)}{m^{2}} \nonumber \\ & \ge(c-2\varepsilon-\beta) k.\nonumber \end{align}
\end{proof}
\subsection{Main tools}
\begin{defn}[Good walk]\label{def6.0} Given $r, k, \ell\in \mathbb{N}$, let $R$ be a $k$-vertex graph. We say $\mathcal{Q}=(S_{1}, \dots, S_{\ell})$ (allowing repetition) is a \emph{walk} of order $\ell$ in $R$, where $S_{i}\in V(R)$ for every $i\in [\ell]$, if every $r+1$ consecutive elements of $\mathcal{Q}$ induce a clique in $R$ of size either $r+1$ or $r$; moreover, in the latter case the repetition of a vertex is allowed only in adjacent positions. Let $\mathcal{Q}[i, j]:=(S_{i}, S_{i+1}, \dots, S_{j})$ and $\mathcal{Q}[i]:=\mathcal{Q}[i,i]=S_i$ for every $i, j \in[\ell]$. We call the $r$-tuples $\mathcal{Q}[1,r]$ and $\mathcal{Q}[\ell-r+1, \ell]$ the \emph{head} and \emph{tail} of $\mathcal{Q}$, respectively. We say $S_{i}$ is a \emph{lazy element} in $\mathcal{Q}$ if $S_{i}=S_{i+1}$. We say a walk $\mathcal{Q}$ is \emph{good} if its first lazy element (if any) is $S_{r+1}$, and for every two distinct lazy elements $S_{i}$ and $S_{j}$ (if any), we have $|i-j|\geq 20(r+1)$.
\end{defn}
Note that the parameter $20(r+1)$ in Definition~\ref{def6.0} is chosen to make the embedding process more convenient; it could be replaced by any larger constant. Next, we give an example illustrating Definition \ref{def6.0}. Fix $r=2$ and let $R$ be a graph with $V(R)=\{V_{1}, V_{2}, \dots, V_{6}, V_{7}\}$ such that $R[\{V_{i}, V_{i+1}, V_{i+2}\}]$ is a copy of $K_{3}$ for every $i\in\{1, 3, 4\}$ and $V_{6}V_{7}\in E(R)$. Let $\mathcal{Q}=(V_{1}, V_{2}, V_{3}, V_{3}, V_{4}, V_{5}, V_{6}, V_{6}, V_{7})$. Then by Definition \ref{def6.0}, $\mathcal{Q}$ is a walk with lazy elements $\mathcal{Q}[3]$ and $\mathcal{Q}[7]$, but it is not a good walk, since the two lazy elements are at distance $|7-3|=4<20(r+1)$ (see Figure \ref{L2}).
\begin{figure}[htbp] \centering \includegraphics[scale=0.6]{2.pdf} \caption{$V_{1}V_{2}V_{3}V_{3}V_{4}V_{5}V_{6}V_{6}V_{7}$ is not a good walk} \label{L2} \end{figure}
The following lemma provides a good walk with prescribed head and tail.
\begin{lemma}\label{lem6.2} For $\mu>0$ and $r\in\mathbb{N}$, the following holds for sufficiently large $k$. Let $R$ be a $k$-vertex graph with $\delta(R)\geq \left(1-\frac{1}{r}+\mu\right)k$ and $\mf{x}$, $\mf{y}$ be two disjoint $r$-tuples of vertices in $R$, each inducing a copy of $K_{r}$. Then there exists a good walk $\mathcal{Q}=(S_1,\ldots,S_{\ell})$ for some $\ell\leq 100r^{5}$ in $R$ with head $\mf{x}$ and tail $\mf{y}$. Moreover, the last lazy element of $\mathcal{Q}$, say $S_q$, satisfies $q\leq \ell-20(r+1)$.
\end{lemma}
The proof of Lemma \ref{lem6.2} will be presented in the next subsection. The remaining work in proving Lemma \ref{connect} is the embedding process. To elaborate on this, we need the following two lemmas (Lemma \ref{lem6.30} and Lemma \ref{lem6.31}). Figure \ref{L3} is an illustration of Lemma \ref{lem6.30}, and Figure \ref{L4} is an illustration of Lemma \ref{lem6.31}.
\begin{figure}[htbp] \centering \includegraphics[scale=0.6]{3.pdf} \caption{Illustration of Lemma \ref{lem6.30} with $r=2$} \label{L3} \end{figure}
\begin{lemma}\label{lem6.30} For $\beta,\eta>0$ and $r, k\in\mathbb{N}$, there exist $\varepsilon, \alpha>0$ such that the following holds for sufficiently large $m\in \mathbb{N}$. Let $G$ be a graph with an equipartition $V(G)=V_{1}\cup V_{2}\cup\cdots\cup V_{k}$, $\alpha(G)\leq \alpha |V(G)|$, and $|V_{i}|=m$ for every $i\in [k]$. Let $R$ be a graph with $V(R)=\{V_{1}, V_{2}, \dots, V_{k}\}$, and $V_{i}V_{j}\in E(R)$ if $(V_{i}, V_{j})$ is $(\varepsilon, \beta)$-regular in $G$. Let $\mathcal{Q}:=(S_{1}, S_{2}, \dots, S_{3r+2})$ be a good walk in $R$ with exactly one lazy element $S_{r+1}$, and let $T_i\subseteq S_i$ be a set of size at least $\eta m$ for every $i\in[3r+2]$ such that $T_{r+1}=T_{r+2}$. Then there exists an $r$-path in $G$, say $P=v_{1}v_{2}\dots v_{r}xyv_{r+3}\dots v_{2r+2}$, such that $\{x, y\}\subseteq T_{r+1}$, $v_{i}\in T_{i}$ for every $i\in[2r+2]\setminus \{r+1,r+2\}$, and $|N_{G}(v_{j-r}, \dots, v_{2r+2})\cap T_{j}|\geq \left(\frac{\beta}{2}\right)^{r}|T_j|$ for every $j\in [2r+3, 3r+2]$.
\end{lemma}
\begin{lemma}\label{lem6.31} For $\beta,\eta>0$ and $r, k, \ell\in\mathbb{N}$, there exists $\varepsilon>0$ such that the following holds for sufficiently large $m\in \mathbb{N}$. Let $G$ be a graph with an equipartition $V(G)=V_{1}\cup V_{2}\cup \cdots\cup V_{k}$, and $|V_{i}|=m$ for every $i\in [k]$. Let $R$ be a graph with $V(R)=\{V_{1}, V_{2}, \dots, V_{k}\}$, and $V_{i}V_{j}\in E(R)$ if $(V_{i}, V_{j})$ is $(\varepsilon, \beta)$-regular in $G$.
Let $\mathcal{Q}:=(S_{1}, \dots, S_{\ell+r})$ be a good walk in $R$ without any lazy element and let $T_i\subseteq S_i$ be a set of size at least $\eta m$ for every $i\in[\ell+r]$. Then there exists an $r$-path in $G$, say $P=v_{1}v_{2}\dots v_{\ell}$, such that $v_{i}\in T_{i}$ for every $i\in[\ell]$ and $|N_{G}(v_{j-r}, \dots, v_{\ell})\cap T_{j}|\geq \left(\frac{\beta}{2}\right)^{r}|T_j|$ for every $j\in [\ell+1, \ell+r]$.
\end{lemma}
\begin{figure}[htbp] \centering \includegraphics[scale=0.6]{4.pdf} \caption{Illustration of Lemma \ref{lem6.31} with $r=2$ and $\ell=5$} \label{L4} \end{figure}
\subsection{Proof of Lemma~\ref{connect}}
We now have all the necessary tools to prove Lemma~\ref{connect}.
\begin{proof}[Proof of Lemma \ref{connect}] Given $\mu>0$ and $r\in \mathbb{N}$, we set $\beta=\frac{\mu}{20}$ and choose
\begin{center} $\frac{1}{n}\ll \alpha\ll\frac{1}{k}\ll\varepsilon\ll\mu, \frac{1}{r}$. \end{center}
Let $G$ be an $n$-vertex graph with $\delta(G)\geq \left(1-\frac{1}{r}+\mu\right)n$ and $\alpha(G)\leq \alpha n$. Let $\mf{x}$, $\mf{y}$ be two disjoint $r$-tuples of vertices, each inducing a copy of $K_{r}$ in $G$. In order to simplify the notation, we write $\mf{x}=(x_{1}, \dots, x_{r})$ and $\mf{y}=(y_{r}, \dots, y_{1})$. Our goal is to find an $r$-path $P$ in $G$ which connects $(x_{1}, \dots, x_{r})$ and $(y_{r}, \dots, y_{1})$. Applying Lemma \ref{lem2.6} to $G$, we obtain an $(\varepsilon,\beta)$-regular partition $\mathcal{P}=\{V_{0}, V_{1}, \dots, V_{k}\}$ of $V(G)$ and write $m:=|V_{i}|$ for every $i\in [k]$. Let $R:=R_{\varepsilon, \beta}$ be the reduced graph for the $(\varepsilon,\beta)$-regular partition $\mathcal{P}$. Then Fact~\ref{fact2.8} implies that $\delta(R)\geq \left(1-\frac{1}{r}+\frac{\mu}{2}\right)k$. The following claim reduces the connecting process to finding a good walk in $R$ with fixed head and tail; its proof is postponed to the end of this subsection.
\begin{claim}\label{cl6.3} Let $G[\{v_{1}, \dots, v_{r}\}]$ be a copy of $K_{r}$ in $G$ and $W\subseteq V(R)$ with $|W|\leq r$. Then there exists a copy of $K_{r}$ in $R-W$, say $H$ with $V(H)=\{V_{1}, V_{2}, \dots, V_{r}\}$, such that $|N_{G}(v_{i}, \dots, v_{r})\cap V_{i}|\geq \mu m$ for every $i\in[r]$.
\end{claim}
Applying Claim \ref{cl6.3} with $W=\emptyset$, we obtain a copy of $K_r$ in $R$, whose vertex set, without loss of generality, is denoted as $\{V_{1}, \dots, V_{r}\}$. Then $|N_{G}(x_{i}, \dots, x_{r}) \cap V_{i}|\geq \mu m$ for every $i\in[r]$. Applying Claim \ref{cl6.3} again with $W=\{V_{1}, \dots, V_{r}\}$, we obtain a copy of $K_r$ in $R-W$, whose vertex set is denoted as $\{U_{1},\dots, U_{r}\}\subseteq V(R)\backslash W$. Then $|N_{G}(y_{i}, \dots, y_{r})\cap U_{i}|\geq \mu m$ for every $i\in[r]$. By applying Lemma \ref{lem6.2} to $R$, we obtain a good walk $\mathcal{Q}:=(S_1, \dots, S_\ell)$ for some $\ell\leq 100r^{5}$ with head $(V_{1}, \dots, V_{r})$ and tail $(U_{r},\dots, U_{1})$. Let $S_{t_{1}}, S_{t_{2}}, \dots, S_{t_{s}}$ be all the lazy elements in $\mathcal{Q}$ for some $s\in \mathbb{N}$. Then it follows that $t_{1}=r+1$, $t_{q+1}-t_{q}\geq 20(r+1)$ for every $q\in[s-1]$, $\ell-t_{s}\geq 20(r+1)$, and $s\leq\frac{100r^{5}}{20(r+1)}\leq 5r^{4}$. The desired $r$-path will be constructed piece by piece in the following phases, where we denote by $P_0$ the $r$-path $x_1\ldots x_r$.
\noindent\textbf{Phase $0$.} Let $S^{0}_{i}:=N_{G}(x_{i}, \dots, x_{r})\cap V_{i}$ and $S^{0}_{\ell-i+1}:=N_{G}(y_{i}, \dots, y_{r})\cap U_{i}$ for every $i\in [r]$.
Then by Claim~\ref{cl6.3} we obtain that $|S^{0}_{i}|, |S^{0}_{\ell-r+i}|\geq \mu m$ for every $i\in [r]$. Let $S^{0}_{i}:=S_{i}$ for every $i\in [r+1, \ell-r]$.
\noindent\textbf{Phase $1$.} We first consider the subwalk $\mathcal{Q}^{1}_{1}:=(S_{1}, \dots, S_{3r+2})$. Applying Lemma \ref{lem6.30} to $\mathcal{Q}^{1}_{1}$ with $\eta=\mu$ and $T_i=S_i^0$ for every $i\in[3r+2]$, we obtain an $r$-path, say $P^{1}_{1}=v_{1}\cdots v_{r}x^{1}y^{1}v_{r+3}\cdots v_{2r+2}$ with $v_{i}\in S^0_{i}$ for every $i\in [2r+2]\backslash \{r+1, r+2\}$ and $x^{1}, y^{1}\in S^0_{r+1}=S^0_{r+2}$. Moreover, we have $|N_{G}(v_{j-r}, \dots, v_{2r+2})\cap S^0_{j}|\geq \left(\frac{\beta}{2}\right)^{r}m$ for every $j\in[2r+3, 3r+2]$. Let $S^{1}_{j}=(N_{G}(v_{j-r}, \dots, v_{2r+2})\cap S^{0}_{j})\backslash V(P^{1}_{1})$ for every $j\in [2r+3, 3r+2]$. Then $|S^{1}_{j}|\geq \left(\tfrac{\beta}{2}\right)^{r}m-|V(P_{1}^{1})|\geq\left(\frac{\beta}{4}\right)^{r}m$.
Next, we consider the subwalk $\mathcal{Q}^{2}_{1}:=(S_{2r+3}, \dots, S_{t_{2}-1})$. Let $S^{1}_{j}:=S^{0}_{j}\backslash V(P_{1}^{1})$ for every $j\in [3r+3, t_{2}-1]$. Then $|S^{1}_{j}|\geq m-|V(P_{1}^{1})|\geq\frac{m}{2}$ for every $j\in [3r+3, t_{2}-1]$. Applying Lemma \ref{lem6.31} to $\mathcal{Q}^{2}_{1}$ with $\eta=\left(\frac{\beta}{4}\right)^r$ and $T_i=S_{i+2r+2}^1$ for every $i\in[t_2-2r-3]$, we obtain an $r$-path, say $P^{2}_{1}=v_{2r+3}\cdots v_{t_{2}-r-1}$ with $v_{i}\in S^1_{i}$ for every $i\in [2r+3, t_{2}-r-1]$, such that
\[|N_{G}(v_{j-r}, \dots, v_{t_{2}-r-1})\cap S^{1}_{j}|\geq \left(\tfrac{\beta}{2}\right)^{r}\tfrac{m}{2}~\text{for every}~ j\in [t_{2}-r, t_{2}-1].\]
By the definition of $S_j^0$ and $S_j^1$ for all $j\in[t_2-1]$, we end up with an $r$-path
\begin{center} $x_1\ldots x_rv_{1}\cdots v_{r}x^{1}y^{1}v_{r+3}\cdots v_{t_2-r-1}$, \end{center}
denoted as $P_{1}=P_0 P_{1}^{1} P_{1}^{2}$. We update
\[S_{j}^{0}\leftarrow(N_{G}(v_{j-r}, \dots, v_{t_{2}-r-1})\cap S^{0}_{j})\backslash V(P_{1})~\text{for every}~j\in [t_{2}-r, t_{2}-1].\]
For every $j\in [t_{2}-r, t_{2}-1]$, we observe that
\begin{align}\label{ali1} |S_{j}^{0}| & =|(N_{G}(v_{j-r}, \dots, v_{t_{2}-r-1})\cap S^{0}_{j})\backslash V(P_{1})|\geq \left(\tfrac{\beta}{2}\right)^{r}\tfrac{m}{2}-|V(P_{1})|\geq \left(\tfrac{\beta}{4}\right)^{r}m. \end{align}
For every $i\in [s-1]$, we keep updating $S_{j}^{0}$ for every $j\in [t_{i+1}-r, t_{i+1}-1]$, and for every $i\geq 1$, define ($\textbf{E}_{i}$) as follows and show that ($\textbf{E}_{i}$) holds in \textbf{Phase} $i$.
\begin{flushleft} ($\textbf{E}_{i}$). $|S^{0}_{j}|\geq \left(\frac{\beta}{4}\right)^{r}m$ for every $j\in [t_{i+1}-r, t_{i+1}-1]$. \end{flushleft}
Therefore, ($\textbf{E}_{1}$) holds by (\ref{ali1}). Suppose after the first $i-1$ ($i\geq 2$) phases, we obtain an $r$-path, say $P_{i-1}:=P_{0}P_{1}^{1} P_{1}^{2} \cdots P_{i-1}^{1} P_{i-1}^{2}$, and a sequence of subsets $S^{0}_{j}$ which satisfy ($\textbf{E}_{i-1}$). Next we move to \textbf{Phase} $i$.
\noindent\textbf{Phase $i$ ($i\geq 2$).} We write $S^{0}_{j}:=S_{j}\backslash V(P_{i-1})$ for every $j\in [t_{i}, t_{i+1}-1]$, and thus $|S^{0}_{j}|\geq m-|V(P_{i-1})|\geq\frac{m}{2}$. Similarly, we focus on the subwalks
\[\mathcal{Q}^{1}_{i}:=(S_{t_{i}-r}, \dots, S_{t_{i}+2r+1})~\text{and}~ \mathcal{Q}^{2}_{i}:=(S_{t_{i}+r+2}, \dots, S_{t_{i+1}-1}).\]
Recall $S_{t_{i}}$ is a lazy element.
By applying Lemma~\ref{lem6.30} and Lemma~\ref{lem6.31} as in \textbf{Phase} $1$ and the condition in ($\textbf{E}_{i-1}$), we end up with an $r$-path, denoted as $P_i$, such that
\[P_i=P_{i-1} v_{t_{i}-r}\cdots v_{t_{i}-1}x^{i}y^{i}v_{t_{i}+2}\cdots v_{t_{i}+r+1} v_{t_{i}+r+2}\dots v_{t_{i+1}-r-1},\]
where $x^{i}, y^{i}\in S^0_{t_{i}}$ and $v_{j}\in S^0_{j}$ for every $j\in[t_{i}-r, t_{i+1}-r-1]\backslash \{t_{i}, t_{i}+1\}$. Moreover, we obtain a sequence of subsets $N_{G}(v_{j-r}, \dots, v_{t_{i+1}-r-1})\cap S^{0}_{j}$, $j\in [t_{i+1}-r, t_{i+1}-1]$, each of size at least $\left(\tfrac{\beta}{2}\right)^{r}\tfrac{m}{2}$. Similarly, we update
\[S_{j}^{0}\leftarrow(N_{G}(v_{j-r}, \dots, v_{t_{i+1}-r-1})\cap S^{0}_{j})\backslash V(P_{i})~\text{for every}~j\in [t_{i+1}-r, t_{i+1}-1].\]
It holds that $|S_{j}^{0}|=|(N_{G}(v_{j-r}, \dots, v_{t_{i+1}-r-1})\cap S^{0}_{j})\backslash V(P_{i})|\geq \left(\tfrac{\beta}{2}\right)^{r}\tfrac{m}{2}-|V(P_{i})|\geq \left(\tfrac{\beta}{4}\right)^{r}m$ for every $j\in [t_{i+1}-r, t_{i+1}-1]$. Now ($\textbf{E}_i$) holds and we go on to \textbf{Phase} $i+1$.
After \textbf{Phase} $s$, we end up with an $r$-path, denoted as
\[P_{s}=P_{s-1} v_{t_{s}-r}\cdots v_{t_{s}-1}x^{s}y^{s}v_{t_{s}+2}\cdots v_{t_{s}+r+1} v_{t_{s}+r+2}\dots v_{\ell-r},\]
and a sequence of $r$ subsets $N_{G}(v_{j-r}, \dots, v_{\ell-r})\cap S^{0}_{j}$, $j\in [\ell-r+1, \ell]$, each of size at least $\left(\tfrac{\beta}{2}\right)^{r}|S_j^0|\ge \left(\tfrac{\beta}{2}\right)^{r}\mu m$. Again, we update
\begin{center} $S_{j}^{0}$ $\leftarrow$ $(N_G(v_{j-r}, \dots, v_{\ell-r})\cap S^{0}_{j})\backslash V(P_{s})$ for every $j\in [\ell-r+1, \ell]$. \end{center}
Then each $S^{0}_{j}$ has cardinality $|S^{0}_{j}|\ge \left(\tfrac{\beta}{2}\right)^{r}\mu m-|V(P_s)|\geq \left(\tfrac{\beta}{4}\right)^{r}\mu m$. Recall that $S^{0}_{\ell-i+1}\subseteq U_i$ for every $i\in[r]$ and that all pairs $(U_i,U_j)$ are $\varepsilon$-regular with density at least $\beta$ for distinct $i,j\in[r]$. Then, as $\varepsilon\ll \beta,\mu,\frac{1}{r}$, Lemma \ref{lem2.5} implies that all pairs $(S^{0}_{i}, S^{0}_{j})$ are $\varepsilon'$-regular with density at least $\beta-\varepsilon$ for some $\varepsilon':=\max\{2\varepsilon, \tfrac{|S_{i}|}{|S^{0}_{i}|}\varepsilon\}\le\left(\frac{4}{\beta}\right)^r\frac{\varepsilon}{\mu}$ for distinct $i, j\in [\ell-r+1, \ell]$. Then we can greedily find a copy of $K_{r}$, say $H_{3}$ with $V(H_{3})=\{v_{\ell-r+1}, \dots, v_{\ell}\}$, such that $v_{j}\in S^{0}_{j}$ for every $j\in [\ell-r+1, \ell]$. Since $S^{0}_{\ell-i+1}\subseteq N_{G}(y_{i}, \dots, y_{r})\cap U_i$ for every $i\in [r]$ (see \textbf{Phase} $0$), we obtain an $r$-path $P_{s} v_{\ell-r+1}\cdots v_{\ell} y_{r}\cdots y_{1}$ of length at most $2\ell\leq 200r^{5}$, as desired. This completes the proof of Lemma~\ref{connect}.
\end{proof}
Now it remains to prove Claim \ref{cl6.3}.
\begin{proof}[Proof of Claim \ref{cl6.3}] Since $\delta(G)\ge (1-\frac{1}{r}+\mu)n$, the vertices $v_{1}, \dots,v_{r}$ have at least $r\mu n$ common neighbors in $G$. Hence, there exists a cluster, say $V_{1}\in V(R)\backslash W$, such that $|N_G(v_{1}, \dots,v_{r})\cap V_{1}|\geq \frac{r\mu n-|W|m-|V_{0}|}{k-|W|}\geq\frac{r\mu n-rm-\varepsilon n}{k}\geq\mu m$, as $\frac{1}{k}\ll \mu$. Suppose we have obtained a maximal collection of clusters in $V(R)\backslash W$, say $V_{1}, \dots, V_s$ for some $s\ge 1$, such that they form a copy of $K_s$ in $R$ and $|N_G(v_{i}, \dots, v_{r})\cap V_{i}|\geq \mu m$ for every $i\in[s]$. Suppose for a contradiction that $s<r$.
For every $V'\in N_R(V_{1}, \dots, V_{s})\backslash W$, we have $|(N_G(v_{s+1}, \dots, v_{r})\backslash\bigcup_{V_{i}\in W}V_{i})\cap V'|<\mu m$. Since $\frac{1}{k}\ll \mu, \frac{1}{r}$, it follows that
\begin{center} $|N_R(V_{1}, \dots, V_{s})\backslash W|\geq s\left(1-\frac{1}{r}+\frac{\mu}{2}\right)k-(s-1)k-r\geq\left(\frac{r-s}{r}+\frac{s}{4}\mu\right)k>\frac{r-s}{r} k$. \end{center}
Then we have
\begin{align*} |(N_G(v_{s+1}, \dots, v_{r})\backslash \bigcup_{V_{i}\in W}V_{i})| & <\mu m|N_R(V_{1}, \dots, V_{s})\backslash W|+|V(G)\backslash(\bigcup_{V_{i}\in W\cup N_R(V_{1}, \dots, V_{s})}V_{i})|\\ &= \mu m|N_R(V_{1}, \dots, V_{s})\backslash W|+n-|W|m-|N_R(V_{1}, \dots, V_{s})\backslash W|m\\ &<(\mu-1)m\frac{(r-s)k}{r}+n-|W|m. \end{align*}
Meanwhile, we have
\begin{align*} |(N_G(v_{s+1}, \dots, v_{r})\backslash \bigcup_{V_{i}\in W}V_{i})|\geq & (r-s)(1-\tfrac{1}{r}+\mu)n-(r-s-1)n-|W|m\\=&n-|W|m-(r-s)(\tfrac{1}{r}-\mu)n. \end{align*}
Putting the bounds together, we obtain
\begin{center} $(r-s)(\frac{1}{r}-\mu)n>(1-\mu)m\frac{(r-s)k}{r}$. \end{center}
Since $mk\geq (1-\varepsilon)n$, we get $1-r\mu>(1-\mu)(1-\varepsilon)>1-2\mu$, a contradiction.
\end{proof}
\subsection{Proof of Lemma \ref{lem6.2}}
The proof of Lemma \ref{lem6.2} goes roughly as follows. Let $\mf{x},\mf{y}$ be two disjoint $r$-tuples of vertices in $R$, each inducing a copy of $K_r$, say $H_{\mf{x}}$ and $H_{\mf{y}}$, respectively. We first build a sequence of copies of $K_{r}$ starting from $H_{\mf{x}}$ and ending with $H_{\mf{y}}$, where any two consecutive copies share $r-1$ common vertices. Then we extend every copy of $K_{r}$ in the sequence to a copy of $K_{r+1}$ such that every two consecutive copies of $K_{r+1}$ share exactly $r-1$ common vertices (see Lemma~\ref{lem6.3}). Consider the resulting sequence of copies of $K_{r+1}$, say $H_1, H_2, \ldots, H_{s}$, with $|V(H_{i})\cap V(H_{i+1})|=r-1$ for every $i\in[s-1]$, and then define two permutations $\mf{x}^+,\mf{y}^+$ of $V(H_1), V(H_s)$, respectively, satisfying $\mf{x}^+[1,r]=\mf{x}$ and $\mf{y}^+[2,r+1]=\mf{y}$. The good walk comes from the following two phases:
\begin{enumerate}
\item [$(1)$] For every $i\in [s-1]$, we build a good walk $\mathcal{Q}_{i}$ of order $\ell_i$ (see Lemma~\ref{lem6.6}), such that
\begin{itemize}
\item $\mathcal{Q}_{1}[1,r+1]=\mf{x}^+$ and $\mathcal{Q}_{i}[1,r+1]=\mathcal{Q}_{i-1}[\ell_{i-1}-r, \ell_{i-1}]$ when $i\ge 2$;
\item $\mathcal{Q}_{i}[\ell_{i}-r, \ell_{i}]$ is a permutation of $V(H_{i+1})$, and $V(\mathcal{Q}_{i})\subseteq V(H_{i})\cup V(H_{i+1})$.
\end{itemize}
\item [$(2)$] We build a good walk $\mathcal{Q}_{s}$ which starts with $\mathcal{Q}_{s-1}[\ell_{s-1}-r, \ell_{s-1}]$, i.e.\ the permutation of $V(H_s)$ obtained in $(1)$, and ends with the fixed permutation $\mf{y}^+$ of $V(H_s)$ (see Lemma~\ref{lem6.5}).
\end{enumerate}
Hence we obtain a desired good walk with head $\mf{x}$ and tail $\mf{y}$ by piecing together all these walks $\mathcal{Q}_{1},\ldots,\mathcal{Q}_{s}$. Now we give the three lemmas.
\begin{lemma}\label{lem6.3} Given $\mu>0$ and $r\in\mathbb{N}$, the following holds for sufficiently large $k$. Let $R$ be a $k$-vertex graph with $\delta(R)\geq \left(1-\frac{1}{r}+\mu\right)k$, and let $H_{1}$ and $H_{2}$ be two vertex-disjoint copies of $K_{r}$ in $R$.
Then there exists a family of copies of $K_{r+1}$, say $\{B_{1}, B_{2},\dots, B_{\ell}\}$ for some $\ell\leq r^{2}$, with \betagin{itemize} \item $V(H_{1})\subseteq V(B_{1})$, $V(H_{2})\subseteq V(B_{\ell})$; \item $|V(B_{i})\cap V(B_{i+1})|=r-1$ for every $i\in [\ell-1]$. \end{itemize} \end{lemma} \betagin{proof}[Proof] Given $\mu >0$, $r\in \mathbb{N}$ and $r\geq 2$, we choose $\frac{1}{k}\ll \mu, \frac{1}{r}$. Let $R$ be a $k$-vertex graph with $\deltalta(R)\geq \left(1-\frac{1}{r}+\mu\right)k$. Let $V(H_{1})=\{v_{1}, \dots, v_{r}\}$ and $V(H_{2})=\{u_{1}, \dots, u_{r}\}$. It suffices to find a family of copies of $K_{r}$, say $\{A_{1}, \dots, A_{\ell}\}$ for some $\ell\leq r^{2}$ such that \betagin{itemize} \item $H_{1}=A_{1}$ and $H_{2}=A_{\ell}$; \item $|V(A_{i})\cap V(A_{i+1})|=r-1$ for every $i\in [\ell-1]$. \end{itemize} In fact, the desired copies $B_i$ can be easily obtained by extending $A_i$ to a copy of $K_{r+1}$ by a new vertex, which is possible because every $r$ vertices in $R$ have at least $r\mu k$ common neighbors. To achieve this, we first prove the following claim. We set $t_{0}:=0$, $t_{1}:=r-1$, $t_{i+1}:=t_{i}+(r-i-1)$ for every $i\in [r-2]$ and let $Q_{0}:=(v_{1}, \dots, v_{r})$. \betagin{figure} \betagin{center} \tikzset{every picture/.style={line width=0.75pt}} \betagin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (60,100) .. controls (60,86.19) and (89.1,75) .. (125,75) .. controls (160.9,75) and (190,86.19) .. (190,100) .. controls (190,113.81) and (160.9,125) .. (125,125) .. controls (89.1,125) and (60,113.81) .. (60,100) -- cycle ; \draw (320,100) .. controls (320,77.91) and (371.49,60) .. (435,60) .. controls (498.51,60) and (550,77.91) .. (550,100) .. controls (550,122.09) and (498.51,140) .. (435,140) .. controls (371.49,140) and (320,122.09) .. (320,100) -- cycle ; \draw [fill={rgb, 255:red, 66; green, 102; blue, 102 } ,fill opacity=0.1 ] (160,190) .. controls (160,178.95) and (182.39,170) .. (210,170) .. controls (237.61,170) and (260,178.95) .. (260,190) .. controls (260,201.05) and (237.61,210) .. (210,210) .. controls (182.39,210) and (160,201.05) .. (160,190) -- cycle ; \draw [fill={rgb, 255:red, 189; green, 126; blue, 74 } ,fill opacity=0.1 ] (310,180) .. controls (310,171.72) and (325.67,165) .. (345,165) .. controls (364.33,165) and (380,171.72) .. (380,180) .. controls (380,188.28) and (364.33,195) .. (345,195) .. controls (325.67,195) and (310,188.28) .. (310,180) -- cycle ; \draw [fill={rgb, 255:red, 16; green, 125; blue, 172 } ,fill opacity=0.1 ] (440,160) .. controls (440,154.48) and (448.95,150) .. (460,150) .. controls (471.05,150) and (480,154.48) .. (480,160) .. controls (480,165.52) and (471.05,170) .. (460,170) .. controls (448.95,170) and (440,165.52) .. 
(440,160) -- cycle ; \draw (80,100) ; \draw [shift={(80,100)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (170,100) ; \draw [shift={(170,100)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (110,100) ; \draw [shift={(110,100)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (140,100) ; \draw [shift={(140,100)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (360,100) ; \draw [shift={(360,100)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (410,100) ; \draw [shift={(410,100)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (460,100) ; \draw [shift={(460,100)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (510,100) ; \draw [shift={(510,100)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (180,190) ; \draw [shift={(180,190)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (210,190) ; \draw [shift={(210,190)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (240,190) ; \draw [shift={(240,190)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw [color={rgb, 255:red, 195; green, 39; blue, 43 } ,draw opacity=1 ] (110,100) -- (180,190) ; \draw [color={rgb, 255:red, 195; green, 39; blue, 43 } ,draw opacity=1 ] (140,100) -- (180,190) ; \draw [color={rgb, 255:red, 195; green, 39; blue, 43 } ,draw opacity=1 ] (170,100) -- (180,190) ; \draw [color={rgb, 255:red, 66; green, 102; blue, 102 } ,draw opacity=1 ] (155.37,213.18) .. controls (143.49,192.69) and (183.31,147.42) .. (244.3,112.08) .. controls (305.3,76.74) and (364.37,64.7) .. (376.24,85.2) .. controls (388.12,105.69) and (348.3,150.96) .. (287.31,186.3) .. controls (226.32,221.64) and (167.24,233.68) .. (155.37,213.18) -- cycle ; \draw [color={rgb, 255:red, 189; green, 126; blue, 74 } ,draw opacity=1 ] (311.03,132.97) .. controls (334.83,98.39) and (378.38,78.14) .. (408.3,87.75) .. controls (438.23,97.35) and (443.2,133.17) .. (419.4,167.75) .. controls (395.61,202.33) and (352.06,222.57) .. (322.14,212.97) .. controls (292.21,203.37) and (287.24,167.55) .. (311.03,132.97) -- cycle ; \draw [color={rgb, 255:red, 16; green, 125; blue, 172 } ,draw opacity=1 ] (358.96,129.22) .. controls (338.59,97.06) and (352.32,69.47) .. (389.64,67.6) .. 
controls (426.96,65.73) and (473.73,90.29) .. (494.1,122.44) .. controls (514.48,154.6) and (500.74,182.18) .. (463.43,184.06) .. controls (426.11,185.93) and (379.34,161.37) .. (358.96,129.22) -- cycle ; \draw (330,180) ; \draw [shift={(330,180)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (360,180) ; \draw [shift={(360,180)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw (460,160) ; \draw [shift={(460,160)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1.34, y radius= 1.34] ; \draw [color={rgb, 255:red, 195; green, 39; blue, 43 } ,draw opacity=1 ] (210,190) .. controls (226.44,230.33) and (311.78,229.67) .. (330,180) ; \draw [color={rgb, 255:red, 195; green, 39; blue, 43 } ,draw opacity=1 ] (240,190) .. controls (261.78,203.67) and (290,210) .. (330,180) ; \draw [color={rgb, 255:red, 195; green, 39; blue, 43 } ,draw opacity=1 ] (360,180) -- (460,160) ; \draw (70,80.4) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$v_{1}$}; \draw (100,80.4) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$v_{2}$}; \draw (130.67,80.4) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$v_{3}$}; \draw (160,80.4) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$v_{4}$}; \draw (360,80.4) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$u_{1}$}; \draw (410,80.4) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$u_{2}$}; \draw (460,80.4) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$u_{3}$}; \draw (510,80.4) node [anchor=north west][inner sep=0.75pt] [font=\footnotesize] {$u_{4}$}; \draw (225.33,152.07) node [anchor=north west][inner sep=0.75pt] {$Q_{1}$}; \draw (350.67,147.73) node [anchor=north west][inner sep=0.75pt] {$Q_{2}$}; \draw (477.33,139.07) node [anchor=north west][inner sep=0.75pt] {$Q_{3}$}; \end{tikzpicture} \end{center} \caption{Illustration of Claim \ref{cl5.12} with $r=4$} \end{figure} \betagin{claim}\label{cl5.12} There exist~$r-1$~sequences of vertices in $R$ (mutually vertex disjoint), say $Q_{1}, \dots, Q_{r-1}$ with $Q_{i}:=(x_{t_{i-1}+1}, \dots, x_{t_{i}})$, $|Q_{i}|=r-i$, satisfying that $V(Q_{i})\subseteq N_R(u_{1}, \dots, u_{i})$ and $Q_{i-1}Q_{i}$ induces an $(r-i)$-power of a path in $R$ for every $i\in [r-1]$. \end{claim} \betagin{pr} We shall iteratively build each $Q_i$ for $i\in[r-1]$ as required. For the case $i=1$, since $|N_{R}(v_{2}, \dots, v_{r}, u_{1})|\geq r\mu k$, we arbitrarily take a common neighbor of $v_{2}, \dots, v_{r}$ and $u_{1}$, say $x_{1}$. Similarly we can iteratively pick another $r-2$ distinct vertices say $x_{2},\ldots,x_{r-2}$, such that each $x_j$ is a common neighbor of $v_{j+1}, \dots, v_{r}, x_{1}, \dots, x_{j-1}$ and $u_{1}$ whilst avoiding all the vertices in $\{u_1,\ldots, u_r\}$. Then $Q_{1}:=(x_{1}, \dots, x_{r-1})$ is desired as $Q_{0}Q_{1}=v_1v_2\ldots v_r x_1\ldots x_{r-1}$ actually induces an $(r-1)$-power of a path in $R$. Suppose we have obtained $Q_1,\ldots,Q_q$ as above with $Q_{i}:=(x_{t_{i-1}+1}, \dots, x_{t_{i}})$, $V(Q_{i})\subseteq N_R(u_{1}, \dots, u_{i})$, and $Q_{i-1}Q_{i}$ induces an $(r-i)$-power of a path in $R$ for every $i\in[q]$. 
Now we show the existence of $Q_{q+1}$. Similarly, since $|\bigcup\limits^q_{i=1}U_i|\leq q(r-1)<r\mu k$, we can iteratively pick $r-q-1$ distinct vertices, say $x_{t_{q}+1}, \dots, x_{t_{q+1}}$, such that each $x_{t_{q}+j}$ ($j\in[r-q-1]$) is a common neighbor of the $r$ vertices $x_{t_{q-1}+j+1}, \dots, x_{t_{q}}, \dots, x_{t_{q}+j-1}, u_{1}, \dots, u_{q+1}$. This yields the desired $Q_{q+1}=(x_{t_{q}+1}, \dots, x_{t_{q+1}})$. \end{pr} Next, we find all desired copies of $K_{r}$. By Claim \ref{cl5.12}, for every $i\in [r-1]$, every consecutive $r-i+1$ vertices in the sequence $Q_{i-1}Q_{i}$ together with $\{u_{1}, \dots, u_{i-1}\}$ form a copy of $K_{r}$, which yields a sequence of copies of $K_{r}$ in order, say $\mathcal{L}^{i}:=(A^{i}_{1}, \dots, A^{i}_{r-i+1})$. Observe that every two consecutive copies of $K_{r}$ in $\mathcal{L}^{i}$ share $r-1$ vertices consisting of $u_{1}, \dots, u_{i-1}$ and $r-i$ consecutive vertices in the sequence $Q_{i-1}Q_{i}$. Also, since $V(Q_{i})\subseteq N_R(u_{1}, \dots, u_{i})$, we have that $V(Q_{r-1})\cup\{u_{1}, \dots, u_{r-1}\}$ induces a copy of $K_r$, denoted as $A^r_1$. We claim that the resulting sequence \[A^{1}_{1} \dots A^{1}_{r} \dots A^{r-1}_{1} A^{r-1}_{2} A^r_1 H_{2}\] is desired. In fact, since every two consecutive copies in each $\mathcal{L}^{i}$ share $r-1$ vertices, it remains to show that the last element of $\mathcal{L}^{i}$ and the first element of $\mathcal{L}^{i+1}$ share $r-1$ vertices. This easily follows as $V(A^{i}_{r-i+1})=\{x_{t_{i-1}}, x_{t_{i-1}+1}, \dots, x_{t_{i}}, u_{1}, \dots, u_{i-1}\}$ and $V(A^{i+1}_{1})=\{x_{t_{i-1}+1}, \dots, x_{t_{i}}, u_{1}, \dots, u_{i}\}$ for every $i\in [r-1]$. Observe that $|V(A^r_1)\cap V(H_{2})|=r-1$. The proof is completed by renaming the copies of $K_{r}$ in the above sequence as $A_{1}, \dots, A_{\ell}$ in order, where $\ell:=2+\sum^{r-1}_{i=1}|\mathcal{L}^{i}|=\frac{r^{2}+r+2}{2}\leq r^{2}$. \end{proof} \begin{lemma}\label{lem6.5} Let $H$ be a copy of $K_{r+1}$. Then for any $\pi_{1}, \pi_{2}\in \mathbf{S}_{V(H)}$, there exists a good walk $\mathcal{Q}$ in $H$ with order $\ell\leq 65r^{3}$ such that \begin{itemize} \item $\mathcal{Q}[1,r+1]=\pi_{1}$ and $\mathcal{Q}[\ell-r, \ell]=\pi_{2}$; \item There is no lazy element among the last $20r+20$ elements of the walk. \end{itemize} \end{lemma} \begin{proof}[Proof] Let $V(H)=\{v_{1}, \dots, v_{r+1}\}$. For any $\pi_{1}, \pi_{2}\in \mathbf{S}_{V(H)}$, we can transform $\pi_{1}$ to $\pi_{2}$ by recursively swapping two consecutive elements at most $r^{2}$ times. Let $q_{1}, \dots, q_{s}$ be the sequence of all the positions where we perform the swaps as above, with $q_{i}\in [r]$ for every $i\in[s]$. To prove Lemma \ref{lem6.5}, it suffices to prove that for every $q\in[r]$ and every $\pi_{3}\in \mathbf{S}_{V(H)}$, there exists a good walk $\mathcal{Q}_q$ with two ends $\pi_{3}$ and $\pi_{4}$, where $\pi_{4}=\pi_{3}\circ(q, q+1)$. Without loss of generality, we assume $\pi_{3}=(v_{1}, \dots, v_{q-1}, v_{q}, v_{q+1}, \dots, v_{r+1})$ and $\pi_{4}=(v_{1}, \dots, v_{q-1}, v_{q+1}, v_{q}, \dots, v_{r+1})$. We build the walk $\mathcal{Q}_q=(S_{1}, S_{2}, \dots)$ as follows. \begin{itemize} \item If $q\in[r-1]$, then we build $\mathcal{Q}_q=\pi_3 (v_{r+1})\pi_4\pi_4\cdots $ where we repeat $\pi_4$ 21 times to guarantee that there is no lazy element among the last $20r+20$ elements. Observe that $\mathcal{Q}_q$ is a good walk with two ends $\pi_{3}$ and $\pi_{4}$, exactly one lazy element $S_{r+1}$, and $|\mathcal{Q}_q|=22r+23$. 
\item If $q=r$, then let $\pi_5:=(v_{r+1}, v_2, \cdots, v_r, v_1)$, and we build $\mathcal{Q}_q=\pi_3\pi_5\cdots\pi_5\pi_4\cdots\pi_4$ where we repeat $\pi_5$ 21 times to guarantee that the distance between two lazy elements is bigger than $20(r+1)$, and repeat $\pi_4$ 21 times to guarantee that there is no lazy element among the last $20r+20$ elements. Observe that $\mathcal{Q}_q$ is a good walk with two ends $\pi_{3}$ and $\pi_{4}$, two lazy elements $S_{r+1}$ and $S_{22r+22}$, and $|\mathcal{Q}_r|=43r+43$. \end{itemize} Hence, for any $\pi_{1}, \pi_{2}\in \mathbf{S}_{V(H)}$, we can transform $\pi_{1}$ to $\pi_{2}$ by performing a sequence of at most $r^{2}$ switchings, each of which switches two consecutive elements as above. The concatenation of the corresponding good walks gives rise to a good walk $\mathcal{Q}$ with two ends $\pi_{1}$ and $\pi_{2}$, where we have $|\mathcal{Q}|\leq (43r+43)r^{2}\leq 65r^{3}$. \end{proof} \betagin{lemma}\label{lem6.6} Let $H_{1}, H_{2}$ be two copies of $K_{r+1}$ in $R$ with $|V(H_{1})\cap V(H_{2})|=r-1$. Then for any given permutation $\pi$ of $V(H_{1})$ there exists a good walk $\mathcal{Q}$ of order $\ell\le 100r^3$ such that $V(\mathcal{Q})=V(H_1)\cup V(H_2)$, $\mathcal{Q}[1,r+1]=\pi$ and $\mathcal{Q}[\ell-r, \ell]$ is a permutation of $V(H_{2})$. \end{lemma} \betagin{proof}[Proof] Let $V(H_{1})=\{v_{1}, v_{2}, \dots, v_{r+1}\}$ and $V(H_{2})=\{v_{3}, \dots, v_{r+1}, w_{1}, w_{2}\}$. Let $\pi_{1}$ be any permutation of $V(H_{1})$. We write $\pi_2=(v_{1}, v_{2}, \dots, v_{r+1})$ and $\pi_3=(v_{r+1}, w_{1}, w_{2}, v_{3}, \dots, v_{r})$. Then Lemma~\ref{lem6.5} applied to $H_1$ gives a good walk say $\mathcal{Q}_1$ with $|\mathcal{Q}_1|\leq 65r^{3}$ which starts with $\pi_{1}$ and ends with $\pi_2$ and contains no lazy element among the last $20r+20$ elements. Now we build $\mathcal{Q}_2=\pi_2\pi_3\pi_3\cdots$ where we repeat $\pi_3$ 21 times to guarantee that there is no lazy element among the last $20r+20$ elements. It is easy to verify that $\mathcal{Q}_2$ is a good walk with only one lazy element $S_{r+1}$. Thus by piecing together $\mathcal{Q}_1,\mathcal{Q}_2$ and identifying the segment $\pi_2$, one can obtain a desired good walk of order less than $65r^{3}+22r+22\leq 100r^3$. \end{proof} Now we are ready to prove Lemma~\ref{lem6.2}. \betagin{proof}[Proof of Lemma \ref{lem6.2}] Given $\mu>0$, $r\in\mathbb{N}$, we choose $\frac{1}{k}\ll\mu, \frac{1}{r}$. Let $R$ be a $k$-vertex graph with $\deltalta(R)\geq \left(1-\frac{1}{r}+\mu\right)k$ and $\mf{x}$, $\mf{y}$ be two disjoint $r$-tuples of vertices in $R$, each inducing a copy of $K_{r}$, say $H_{\mf{x}}, H_{\mf{y}}$ with $V(H_{\mf{x}})=\{v_{1}, \dots, v_{r}\}$ and $V(H_{\mf{y}})=\{u_{1}, \dots, u_{r}\}$. Without loss of generality, we write $\mf{x}=(v_{1}, \dots, v_{r})$ and $\mf{y}=(u_{r}, \dots, u_{1})$. Applying Lemma \ref{lem6.3} to $R$, there exists a family of copies of $K_{r+1}$, say $\{H_{1}, \dots, H_{s}\}$ for some $s\leq r^{2}$, with $V(H_{\mf{x}})\subseteq V(H_{1})$, $V(H_{\mf{y}})\subseteq V(H_s)$ and $|V(H_{i})\cap V(H_{i+1})|=r-1$ for every $i\in[s-1]$. Denote by $\mf{x}^+:=(v_{1}, \dots, v_{r}, v_{r+1})$, $\mf{y}^+:=(u_{r+1}, u_{r}, \dots, u_{1})$ the permutations of $V(H_1)$ and $V(H_s)$, respectively. 
As mentioned above, by iteratively applying Lemma \ref{lem6.6} $s-1$ times, we obtain a collection of good walks $\mathcal{Q}_{i}$ of order $\ell_i\le 100r^3$, $i\in[s-1]$, such that $\mathcal{Q}_{1}[1,r+1]=\mf{x}^+$ and $\mathcal{Q}_{i}[1,r+1]=\mathcal{Q}_{i-1}[\ell_{i-1}-r, \ell_{i-1}]$ when $i\ge 2$. In particular, $V(\mathcal{Q}_{i})\subseteq V(H_{i})\cup V(H_{i+1})$ for every $i\in[s-1]$ and $\mathcal{Q}_{s-1}[\ell_{s-1}-r, \ell_{s-1}]$ is a permutation of $V(H_{s})$. Furthermore, applying Lemma~\ref{lem6.5} to $H_s$ with $\pi_{1}=\mathcal{Q}_{s-1}[\ell_{s-1}-r, \ell_{s-1}]$, $\pi_{2}=\mf{y}^+$, we obtain a good walk, say $\mathcal{Q}_{s}$ with $|\mathcal{Q}_{s}|=:\ell_{s}\leq 65r^{3}$, such that $\mathcal{Q}_{s}[1,r+1]=\mathcal{Q}_{s-1}[\ell_{s-1}-r, \ell_{s-1}]$ and $\mathcal{Q}_{s}[\ell_{s}-r, \ell_{s}]=\mf{y}^+$. Thus we can piece the walks $\mathcal{Q}_{1},\ldots,\mathcal{Q}_{s}$ together by identifying the $r+1$ coordinates from the ending of $\mathcal{Q}_{i}$ and the beginning of $\mathcal{Q}_{i+1}$ for all $i\in[s-1]$. Then the resulting sequence $\mathcal{Q}$ is actually a good walk with head $\mf{x}$ and tail $\mf{y}$ as desired. Moreover, it is easy to see that $|\mathcal{Q}|=\sum_{i\in[s]}\ell_i\leq 100sr^3\le 100r^5$ and Lemma \ref{lem6.5} guarantees that there is no lazy element among the last $20r+20$ elements of $\mathcal{Q}$. This completes the proof. \end{proof} \subsection{Proof of Lemma \ref{lem6.30} and Lemma \ref{lem6.31}} In this subsection, we begin by proving Lemma \ref{lem6.31}. We then use Lemma \ref{lem6.31} to prove Lemma \ref{lem6.30}. Throughout the proofs, we will rely on the following trivial fact. \betagin{fac}[\cite{KS1996}]\label{fac5.10} Given $\betata, \varepsilon>0$, let $(A, B)$ be an $(\varepsilon, \betata)$-regular pair, and $Y\subseteq B$ with $|Y|\geq \varepsilon |B|$. Then there exists a subset $A'\subseteq A$ with $|A'|\leq \varepsilon |A|$ such that every vertex $v$ in $A\backslash A'$ has $|N(v)\cap Y|\geq (\betata-\varepsilon)|Y|$. \end{fac} In the following, we prove Lemma \ref{lem6.31}. \betagin{proof}[Proof of Lemma \ref{lem6.31}] Given $\betata,\eta>0$, and $r, k, \ell\in \mathbb{N}$, we choose $\frac{1}{m}\ll \varepsilon \ll \betata,\eta,\frac{1}{r}, \frac{1}{\ell}$. Let $G$ be a graph with an equipartition $V(G)=V_{1}\cup V_{2}\cup \cdots\cup V_{k}$, and $|V_{i}|=m$ for every $i\in [k]$. Let $R$ be a graph with $V(R)=\{V_{1}, V_{2}, \dots, V_{k}\}$, and $V_{i}V_{j}\in E(R)$ if $(V_{i}, V_{j})$ is $(\varepsilon, \betata)$-regular in $G$. Let $\mathcal{Q}=(S_{1}, \dots, S_{\ell+r})$ be a good walk in $R$ without any lazy element and $T_i\subseteqS_i$ be a set of size at least $\eta m$ for every $i\in[\ell+r]$. It suffices to prove that for every $s\in [\ell]$, there exists an $r$-path say $v_{1}v_2 \dots v_{s}$ such that $v_{i}\in T_{i}$ for every $i\in [s]$ and furthermore $|N_{G}(v_{s-t(s, j)+1}, \dots, v_{s})\cap T_{s+j}|\geq \left(\frac{\betata}{2}\right)^{t(s, j)}|T_{s+j}|$ for every $j\in [r]$, where $t(s, j)=\min\{s,r-j+1\}$. We shall prove this by induction on $s$. The base case $s=1$ is trivial. Recall that $\mathcal{Q}$ has no lazy element and thus $(S_{1}, S_{j+1})$ is $(\varepsilon, \betata)$-regular for every $j\in [r]$. Note that $|T_i|\ge \eta m> r\varepsilon m=r\varepsilon |S_{i}|$ for every $i\in [\ell+r]$. 
By Fact \ref{fac5.10}, there exists a subset $S'_{1}\subseteq S_{1}$ with $|S'_{1}|\leq r\varepsilon |S_{1}|$ such that every vertex $v\in T_{1}\backslash S'_{1}$ has $|N_{G}(v)\cap T_{j+1}|\geq (\betata-\varepsilon)|T_{j+1}|$ for every $j\in [r]$. As $|T_1|\ge \eta m> r\varepsilon m=r\varepsilon |S_{1}|\geq |S_1'|$, we choose an arbitrary vertex $v_{1}$ in $T_{1}\backslash S'_{1}$. Then $|N_{G}(v_1)\cap T_{j+1}|\geq (\betata-\varepsilon)|T_{j+1}|\geq \frac{\betata}{2}|T_{j+1}|$ for every $j\in [r]$. Next, we show that our claim holds for $s+1$ assuming it holds for $s\ge 1$. The induction hypothesis implies that there exists a vertex set $\{v_{1}, \dots, v_{s}\}$ such that $v_i\in T_i$ for $i\in[s]$ and the set $T_{s+j}^*:=N_{G}(v_{s-t(s, j)+1}, \dots, v_{s})\cap T_{s+j}$ has $|T_{s+j}^*|\ge \left(\frac{\betata}{2}\right)^{t(s, j)}|T_{s+j}|\geq \varepsilon |S_{s+j}|$ for every $j\in [r]$. Recall that $(S_{s+1}, S_{s+1+j})$ is $(\varepsilon, \betata)$-regular for every $j\in [r]$. By Fact \ref{fac5.10}, there exists a subset $S'_{s+1}\subseteqS_{s+1}$ with $|S'_{s+1}|\leq r\varepsilon |S_{s+1}|$ such that every vertex $v\in T^{\ast}_{s+1}\setminus S'_{s+1}$ satisfies that for every $j\in [r-1]$ \[|N_{G}(v)\cap T^*_{s+1+j}|\geq (\betata-\varepsilon)|T^*_{s+1+j}|\geq \frac{\betata}{2}\left(\frac{\betata}{2}\right)^{t(s,j+1)}|T_{s+1+j}|=\left(\frac{\betata}{2}\right)^{t(s+1,j)}|T_{s+1+j}|\] and $|N_{G}(v)\cap T_{s+1+r}|\ge (\betata-\varepsilon)|T_{s+1+r}|\geq \frac{\betata}{2}|T_{s+1+r}|=\left(\frac{\betata}{2}\right)^{t(s+1,r)}|T_{s+1+r}|$. As $\frac{1}{m}\ll\varepsilon\ll\betata,\eta,\frac{1}{r},\frac{1}{\ell}$ and $s\leq \ell$, it holds that \[|T^{*}_{s+1}\backslash (S'_{s+1}\cup\{v_1,v_2,\ldots,v_s\})|\geq \left(\frac{\betata}{2}\right)^{r}|T_{s+1}|-r\varepsilon|S_{s+1}|-s\geq\left(\frac{\betata}{2}\right)^{r}\eta m-r\varepsilon m-s>0.\] In $T^{*}_{s+1}\backslash (S'_{s+1}\cup\{v_1,v_2,\ldots,v_s\})$, we choose an arbitrary vertex $v_{s+1}$. Then as $t(s+1,j)=t(s,j+1)+1$, it follows that \[|N_{G}(v_{(s+1)-t(s+1,j)+1},\ldots,v_s,v_{s+1})\cap T_{s+1+j}|=|N_{G}(v_{s+1})\cap T^*_{s+1+j}|\ge\left(\frac{\betata}{2}\right)^{t(s+1,j)}|T_{s+1+j}|\] for every $j\in [r]$, where $T^{*}_{s+1+r}=T_{s+1+r}$. This finishes the proof. \end{proof} In the following, we use Lemma \ref{lem6.31} to prove Lemma \ref{lem6.30}. \betagin{proof}[Proof of Lemma \ref{lem6.30}] Given $r, k\in \mathbb{N}$ and $\betata,\eta>0$, we choose $\frac{1}{m}\ll \alpha\ll \varepsilon\ll\betata, \eta,\frac{1}{r}, \frac{1}{k}$. Let $G$ be a graph with an equipartition $V(G)=V_{1}\cup V_{2}\cup\cdots\cup V_{k}$, $\alpha(G)\leq \alpha |V(G)|$, and $|V_{i}|=m$ for every $i\in [k]$. Let $R$ be a graph with $V(R)=\{V_{1}, V_{2}, \dots, V_{k}\}$, and $V_{i}V_{j}\in E(R)$ if $(V_{i}, V_{j})$ is $(\varepsilon, \betata)$-regular in $G$. Let $\mathcal{Q}=(S_{1}, S_{2}, \dots, S_{3r+2})$ be a good walk in $R$ with exactly one lazy element $S_{r+1}$ and $T_i\subseteqS_i$ be a set of size at least $\eta m$ for every $i\in[3r+2]$ such that $T_{r+1}=T_{r+2}$. We first consider the subwalk $(S_{1}, S_{2}, \dots, S_{2r})$. Recall that $S_{r+1}$ is the lazy element. Applying Lemma \ref{lem6.31} with $\ell=r$, there exists an $r$-path, say $P=v_{1} \dots v_{r}$, such that $v_{i}\in S_{i}$ for every $i\in[r]$ and $|N_{G}(v_{j}, \dots, v_{r})\cap T_{r+j}|\geq \left(\frac{\betata}{2}\right)^{r}|T_{r+j}|$ for every $j\in [r]$. 
Define \[T^{0}_{r+j}:=N_{G}(v_{j}, \dots, v_{r})\cap T_{r+j}~\text{for every}~j\in [r]\] and $T^{0}_{r+j}:=T_{r+j}$ for every $j\in [r+1, 2r+2]$. To finish the proof, it suffices to prove the following claim. \betagin{claim}\label{cl5.11} There exists a vertex set $\{v_{r+3}, \dots, v_{2r+2}\}$ disjoint from $\{v_1,\ldots,v_r\}$ such that $v_{i}\in T^{0}_{i}$ for every $i\in [r+3,2r+2]$ and $\{v_{r+3}, \dots, v_{2r+2}\}$ induces a clique. Moreover, there exists a sequence of subsets $T^{1}_{r+1}:= N_{G}(v_{r+3}, \dots, v_{2r+2})\cap T^0_{r+1}$ and\[T^{1}_{j}:=N_{G}(v_{j-r}, \dots, v_{2r+2})\cap T^0_{j}~\text{for every}~j\in[2r+3, 3r+2]\] such that $|T^{1}_{j}|\geq \left(\frac{\betata}{2}\right)^{r}| T^0_{j}|$ for every $j\in[2r+3, 3r+2]\cup \{r+1\}$. \end{claim} \betagin{pr} The proof of the claim follows from the same argument as in that of Lemma~\ref{lem6.31}, except an additional requirement on the choice of $v_{r+3},\ldots,v_{2r+2}$ that $|T^{1}_{r+1}|\geq \left(\frac{\betata}{2}\right)^{r}| T^0_{r+1}|$, and we omit it. \end{pr} By Claim \ref{cl5.11}, there exists a subset $T^{1}_{r+1}\subseteq N_{G}(v_{1}, \dots, v_{r},v_{r+3}, \dots, v_{2r+2})$ with $|T^{1}_{r+1}|\geq \left(\frac{\betata}{2}\right)^{r}|T^0_{r+1}|\ge \left(\frac{\betata}{2}\right)^{2r}\eta m>\alpha km=\alpha|V(G)|$, as $\alpha\ll \betata,\eta,\frac{1}{r},\frac{1}{k}$. Hence, $G[T^{1}_{r+1}]$ contains an edge, say $xy$. Then the $r$-path $P=v_{1}\dots v_{r}xyv_{r+3}, \dots, v_{2r+2}$ is as desired. \end{proof} \end{document}
\begin{document} \setcounter{section}{0} \setcounter{tocdepth}{1} \title[Twisted Conjugacy Classes in Twisted Chevalley Groups]{Twisted Conjugacy Classes in Twisted Chevalley Groups} \author[Sushil Bhunia]{Sushil Bhunia} \author[Pinka Dey]{Pinka Dey} \author[Amit Roy]{Amit Roy} \thanks{Dey and Roy acknowledge financial support from UGC and CSIR, Govt. of India, respectively} \address{Indian Institute of Science Education and Research (IISER) Mohali, Knowledge City, Sector 81, S.A.S. Nagar 140306, Punjab, India} \email{[email protected]} \email{[email protected]} \email{[email protected], [email protected]} \subjclass[2010]{Primary 20E45, 20G15} \keywords{Twisted conjugacy, Chevalley groups, twisted Chevalley groups.} \date{\today} \begin{abstract} Let $G$ be a group and $\varphi$ be an automorphism of $G$. Two elements $x, y\in G$ are said to be $\varphi$-twisted conjugate if $y=gx\varphi(g)^{-1}$ for some $g\in G$. We say that a group $G$ has the $R_{\infty}$-property if the number of $\varphi$-twisted conjugacy classes is infinite for every automorphism $\varphi$ of $G$. In this paper, we prove that twisted Chevalley groups over a field $k$ of characteristic zero have the $R_{\infty}$-property as well as the $S_{\infty}$-property if $k$ has finite transcendence degree over $\mathbb Q$ or $\textup{Aut}(k)$ is periodic. \end{abstract} \maketitle \section{Introduction} Let $G$ be a group and $\varphi$ be an automorphism of $G$. Two elements $x, y$ of $G$ are said to be twisted $\varphi$-conjugate, denoted by $x\sim_{\varphi} y$, if $y=gx\varphi(g)^{-1}$ for some $g\in G$. Clearly, $\sim_{\varphi}$ is an equivalence relation on $G$. The equivalence classes with respect to this relation are called the \textit{$\varphi$-twisted conjugacy classes} or \textit{the Reidemeister classes} of $\varphi$. If $\varphi=\mathrm{Id}$, then the $\varphi$-twisted conjugacy classes are the usual conjugacy classes. The $\varphi$-twisted conjugacy class containing $x\in G$ is denoted by $[x]_{\varphi}$. The \textit{Reidemeister number} of $\varphi$, denoted by $R(\varphi)$, is the number of all $\varphi$-twisted conjugacy classes. A group $G$ has the \textit{$R_{\infty}$-property} if $R(\varphi)$ is infinite for every automorphism $\varphi$ of $G$. The Reidemeister number is closely related to the Nielsen number of a selfmap of a manifold, which is homotopy invariant. More precisely, for a compact connected manifold $ M $ of dimension at least $ 3 $ and a homeomorphism $f: M\to M$, the minimal number of the fixed-point set varying over all the maps homotopic to $ f $ is same as the Nielsen number $ N(f) $. This number is bounded above by the Reidemeister number $R(f_{\#})$, where $f_{\#} $ is the induced map on $ \pi_1(M) $. If all the essential fixed-point classes have the same fixed-point index, then either $ N(f)=0 $ or $ N(f)=R(f_{\#}) $ (we refer the reader to \cite[p. 37, Theorem 5.6]{Jiang} for details). Since $ N(f) $ is always finite, we get $ N(f)=0 $ when $ R(f_{\#}) $ is infinite. Thus, $ R(f_{\#}) $ is infinite implies that we can deform $f$ to a fixed-point free map, up to homotopy. The problem of determining which classes of groups have the $R_{\infty}$-property is an active area of research initiated by Fel'shtyn and Hill \cite{fh94}. This problem has a long list of references, for example, see \cite{fg08, levitt07, MS, timur12, timurnasy2016} just to name a few. For a nice introduction to the subject, we refer the readers to \cite{fel10, FN} and the references therein. 
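To fix ideas, we record a standard elementary example, included here only for illustration. If $G$ is abelian, written additively, then $x\sim_{\varphi} y$ if and only if $y-x\in \mathrm{Im}(\mathrm{Id}-\varphi)$, so that \[R(\varphi)=[G:\mathrm{Im}(\mathrm{Id}-\varphi)].\] For instance, for $G=\mathbb Z$ and $\varphi=-\mathrm{Id}$ we have $\mathrm{Im}(\mathrm{Id}-\varphi)=2\mathbb Z$, hence $R(\varphi)=2$; thus $\mathbb Z$ does not have the $R_{\infty}$-property, even though $R(\mathrm{Id})$ is infinite.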
In this context, we recall a classical result by Steinberg, which says that for a connected linear algebraic group $G$ over an algebraically closed field $k$ and a surjective endomorphism $\varphi$ of $G$, if $\varphi$ has a finite set of fixed points then $G=\{g\varphi(g)^{-1}\mid g\in G\}=[e]_{\varphi}$, i.e., $R(\varphi)=1$ (see \cite[Theorem 10.1]{St}). In particular, any semisimple linear algebraic group $G$ over an algebraically closed field $k$ with $\operatorname{char}k=p>0$ possesses an automorphism $\varphi$ (the Frobenius automorphism) with finitely many fixed points. By Steinberg's theorem, this implies that $R(\varphi)=1$. The theory of Chevalley groups was introduced by Claude Chevalley himself in \cite{chevalley55}, and further developed by Steinberg \cite{steinberg59}. The motivation for the present work comes from the results of Nasybullov and Fel'shtyn. They proved that a Chevalley group $G$ over a field $k$ of characteristic zero possesses the $R_{\infty}$-property if the transcendence degree of $k$ over $\mathbb{Q}$ is finite and $G$ is of type $\Phi$ (here $\Phi$ is a root system corresponding to $G$), or $\textup{Aut}(k)$ is periodic and $G$ is of type $\Phi\neq A_1$ (see \cite[Theorem 3.2]{FN}. Also, see \cite[Theorems 1, 2]{NaT} for details). It is worth mentioning that a reductive linear algebraic group $G$ over an algebraically closed field $k$ of characteristic zero possesses the $R_{\infty}$-property if the transcendence degree of $k$ over $\mathbb Q$ is finite and the radical of $G$ is a proper subgroup of $G$ (see \cite[Theorem 4.1]{FN}). Later Nasybullov generalized the results from Chevalley groups over fields to Chevalley groups (of classical types) over certain rings. A Chevalley group $G$ of type $A_l, B_l, C_l$ or $D_l$ over a local integral domain $k$ of zero characteristic possesses the $R_{\infty}$-property if $\textup{Aut}(k)$ is periodic (see \cite[Theorems 1, 2]{timur12} and \cite[Theorem 1]{timurnasy2016}). Recently, Nasybullov showed that Chevalley groups (of types $A_n, B_n, C_n, D_n$) over an algebraically closed field $k$ do not satisfy the $R_{\infty}$-property when $k$ has infinite transcendence degree over $\mathbb Q$ (see \cite[Theorem 7]{timur2019} and \cite[Theorem 8]{nasy2019}). So unless otherwise specified, we will assume that the field has characteristic zero. \vskip 2mm The following natural question arises in this context: \begin{question}\label{question1} What can we say about the $R_{\infty}$-property of the twisted Chevalley groups (also known as Steinberg groups)? \end{question} The main results of this paper are the following, which solve Question \ref{question1} over a field $k$ of characteristic zero whose transcendence degree over $\mathbb Q$ is finite or whose automorphism group $\textup{Aut}(k)$ is periodic. In a sequel, we will treat twisted Chevalley groups over an integral domain $k$ under the same assumptions on $k$ as Nasybullov did. Since we consider only fields of characteristic zero, non-trivial graph automorphisms exist only for the root systems of types $A_l \, (l\geq 2)$, $D_l\, (l\geq 4)$ and $E_6$. Suppose $\Phi$ is an irreducible root system corresponding to the adjoint Chevalley group $G$ of type either $A_l \, (l\geq 2)$, $D_l\, (l\geq 4)$ or $E_6$. Let $\sigma=\overline{\rho}f$ be a product of graph and field automorphisms of $G$, and let $G'_{\sigma}$ be the corresponding twisted Chevalley group of adjoint type (which is simple). 
In general, any twisted Chevalley group is of the form $G'_{\sigma}:=\widetilde{G}'_{\sigma}/Z$, where $\widetilde{G}'_{\sigma}$ is the twisted Chevalley group of universal type, and $Z\leq Z(\widetilde{G}'_{\sigma})$ is a central subgroup (for details, see \secref{tchev}). Now that we have the notations laid out, we state the main theorems of this paper. \begin{theorem}\label{mainthm1} Let $G'_{\sigma}$ be a twisted Chevalley group corresponding to the irreducible root system $\Phi$ over a field $k$ of $\mathrm{char}\; k=0$. If the transcendence degree of $k$ over $\mathbb{Q}$ is finite, then $G'_{\sigma}$ possesses the $R_{\infty}$-property. \end{theorem} Examples of such fields are $\mathbb Q(T_1, T_2, \ldots, T_n), \overline{\mathbb Q}, \overline{\mathbb Q}(T_1, T_2, \ldots, T_n)$, etc. \begin{theorem}\label{mainthm2} Let $G'_{\sigma}$ be a twisted Chevalley group corresponding to the irreducible root system $\Phi$ over a field $k$ of $\mathrm{char}\; k=0$. If $\textup{Aut}(k)$ is periodic, then $G'_{\sigma}$ possesses the $R_{\infty}$-property. \end{theorem} Some examples of such fields are: any number field, the real numbers $\mathbb{R}$, the $p$-adic fields $\mathbb Q_p$, etc. Further, Theorems \ref{mainthm1} and \ref{mainthm2} provide a characterization of the $S_{\infty}$-property in twisted Chevalley groups. Suppose $\Psi\in \textup{Out}(G):=\textup{Aut}(G)/\textup{Inn}(G)$. Two elements $\alpha, \beta\in \Psi$ are said to be \emph{isogredient} (or similar) if $\beta=i_g\circ \alpha \circ i_{g^{-1}}$ for some $g\in G$, where $i_g(h)=ghg^{-1}$. Clearly, this is an equivalence relation on $\Psi$. Let $S(\Psi)$ denote the number of isogredience classes of $\Psi$. A group $G$ has the \emph{$S_{\infty}$-property} if $S(\Psi)=\infty$ for all $\Psi\in \textup{Out}(G)$. (For details see \secref{isogred}. Also, see \cite{ft2015}). Any non-elementary hyperbolic group satisfies the $S_{\infty}$-property (see \cite{ll00}). It follows from the works of Nasybullov that Chevalley groups have the $S_{\infty}$-property under the same conditions on the field $k$ as above (although this was implicit there); for example, see \cite{FN}. In this direction, we have the following result: \begin{corollary}\label{maincor1} Let $G'_{\sigma}$ be a twisted Chevalley group corresponding to the irreducible root system $\Phi$ over a field $k$ of $\mathrm{char}\; k=0$. If the transcendence degree of $k$ over $\mathbb{Q}$ is finite or $\textup{Aut}(k)$ is periodic, then $G'_{\sigma}$ possesses the $S_{\infty}$-property. \end{corollary} The problem of determining when the identity class $[e]_{\varphi}$ is a subgroup of $G$ was initiated by Bardakov et al. in \cite{bnn2013}. Also, see \cite{gn2019} for some recent works. In \cite[Theorems 3, 4]{NaT}, Nasybullov proved that $[e]_{\varphi}$ is a subgroup of a Chevalley group $G$ over $k$ if and only if $\varphi$ is a central automorphism, provided that the field $k$ has finite transcendence degree over $\mathbb Q$ or $\textup{Aut}(k)$ is periodic. We extend this result to the twisted Chevalley groups and prove the following: \begin{corollary}\label{maincor2} Let $G'_{\sigma}$ be a twisted Chevalley group corresponding to the irreducible root system $\Phi$ over a field $k$ of $\mathrm{char}\; k=0$. Suppose that the transcendence degree of $k$ over $\mathbb{Q}$ is finite or $\textup{Aut}(k)$ is periodic. Then the $\varphi$-twisted conjugacy class $[e]_{\varphi}$ is a subgroup of $G'_{\sigma}$ if and only if $\varphi$ is a central automorphism of $G'_{\sigma}$. 
\end{corollary} \subsubsection*{Structure of the paper} This paper is the study of the $R_{\infty}$ and $S_{\infty}$-property of the twisted Chevalley groups. In \secref{preliminaries}, we cover the preliminaries. \secref{mainsection} contains the proof of \thmref{mainthm1} and \thmref{mainthm2}. The final section is devoted to the proof of \corref{maincor1} and \corref{maincor2}. \section{Preliminaries}\label{preliminaries} In this section, we fix some notations and terminologies, and recall some results which will be used throughout this paper. Most of the notions are from Carter \cite{ca}. Let $k$ be a field of characteristic zero. \subsection{Chevalley groups} \label{section21} We refer the interested reader to the original work of Chevalley \cite{chevalley55}. Let $\mathcal{L}$ be a complex simple Lie algebra and $\mathcal{H}$ be a Cartan subalgebra of $\mathcal{L}$. Consider the adjoint representation $\mathrm{ad} : \mathcal{L} \rightarrow \mathfrak{gl}(\mathcal{L})$ given by $\mathrm{ad}X(Y)=[X,Y]$. Thus we have the Cartan decomposition $$\mathcal{L}=\mathcal{H}\bigoplus \displaystyle\sum_{\alpha \in \Phi}\mathcal{L}_{\alpha},$$ where $\mathcal{L}_{\alpha}=\{X\in \mathcal{L}\mid \mathrm{ad}H(X)=\alpha(H)X,\text{ for all } H \in \mathcal{H}\}$ are root spaces of $\mathcal{L}$ and $\Phi$ is an irreducible root system with respect to $\mathcal{H}$. Also, we fix $\mathfrak Delta$ and $\Phi^+$ to be a simple (or fundamental) root system and a positive root system, respectively. Therefore $\dim_\mathbb C(\mathcal{L})=|\Phi|+|\mathfrak Delta|$. Chevalley proved that there exists a basis of $\mathcal{L}$ such that all the structure constants, which define $\mathcal{L}$ as a Lie algebra, are integers (for example, see \cite[Theorem 4.2.1]{ca}). This is a key theorem to define Chevalley groups. Let $h_{\alpha} \in \mathcal{H}$ be the co-root corresponding to the root $\alpha$. Then, for each root $\alpha \in \Phi$, an element $e_{\alpha}$ can be chosen in $\mathcal{L}_{\alpha}$ such that $[ e_{\alpha}, e_{-\alpha} ]=h_{\alpha}$. The elements \begin{align}\label{chevalleybasis} \{e_{\alpha}, \alpha \in \Phi;\; h_{\delta},\delta \in \mathfrak Delta\} \end{align} form a basis for $\mathcal{L}$, called a \textbf{Chevalley basis}. The map $\mathrm{ad}e_{\alpha}$ is a nilpotent linear map on $\mathcal{L}$. For $t \in \mathbb{C}$, the map $\mathrm{ad}(te_{\alpha})= t(\mathrm{ad}e_{\alpha})$ is also nilpotent. Thus $\mathrm{exp}(t(\mathrm{ad}e_{\alpha}))$ is an automorphism of $\mathcal{L}$. We denote by $\mathcal{L}_{\mathbb{Z}}$ the subset of $\mathcal{L}$ of all $\mathbb{Z}$-linear combinations of the Chevalley basis elements of $\mathcal{L}$. Thus $\mathcal{L}_{\mathbb{Z}}$ is a Lie algebra over $\mathbb{Z}$. We define $\mathcal{L}_{k}:=\mathcal{L}_{\mathbb{Z}}\otimes_{\mathbb{Z}} k$. Then $\mathcal{L}_{k}$ is a Lie algebra over $k$ with respect to the natural Lie multiplication. So we are in a position to define the Chevalley groups of adjoint type or the elementary Chevalley groups over any field $k$. The \textit{adjoint Chevalley group} of type $\Phi$ over the field $k$, denoted by $G(\Phi, k)$ or simply $G$, is defined to be the subgroup of automorphism group of the Lie algebra $\mathcal{L}_k$ generated by $x_\alpha(t):=\mathrm{exp}(t(\mathrm{ad}e_{\alpha}))$ for all $\alpha \in \Phi, t \in k$. In fact, the group $G$ over $k$ is determined up to isomorphism by the simple Lie algebra $\mathcal{L}$ over $\mathbb{C}$ and the field $k$. 
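For concreteness, we record the smallest instance of this definition (a standard computation, included only as an illustration). Let $\mathcal{L}=\mathfrak{sl}_2$ with Chevalley basis $\{e_{\alpha}, e_{-\alpha}; h_{\alpha}\}$, so that $[e_{\alpha},e_{-\alpha}]=h_{\alpha}$ and $[h_{\alpha},e_{\pm\alpha}]=\pm 2e_{\pm\alpha}$. Then $(\mathrm{ad}\,e_{\alpha})^{3}=0$, and with respect to the ordered basis $(e_{\alpha}, e_{-\alpha}, h_{\alpha})$ of $\mathcal{L}_k$ one computes \[x_{\alpha}(t)=\mathrm{exp}(t(\mathrm{ad}\,e_{\alpha}))=\begin{pmatrix}1&-t^{2}&-2t\\0&1&0\\0&t&1\end{pmatrix}, \qquad h_{\alpha}(t)=\begin{pmatrix}t^{2}&0&0\\0&t^{-2}&0\\0&0&1\end{pmatrix},\] with $h_{\alpha}(t)$ as defined in \eqref{torus} below. All entries of $x_{\alpha}(t)$ lie in $\mathbb{Z}[t]$, reflecting the integrality of the structure constants; these two matrices are precisely the images of $\left(\begin{smallmatrix}1&t\\0&1\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}t&0\\0&t^{-1}\end{smallmatrix}\right)$ under the adjoint representation of $\mathrm{SL}_2(k)$.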
Following the notations as in \cite[Theorem 12.1.1]{ca} (assume $\Phi\neq A_1$), let $\widetilde{G}$ be the abstract group generated by $\widetilde{x}_{\alpha}(t)$ ($\alpha\in \Phi, t\in k$) satisfying the following relations: \begin{enumerate}[leftmargin=*] \item $\widetilde{x}_{\alpha}(t)\widetilde{x}_{\alpha}(s)=\widetilde{x}_{\alpha}(t+s)$\; ($t,s\in k$), \item $[\widetilde{x}_{\beta}(s),\widetilde{x}_{\alpha}(t)] =\displaystyle\prod_{\substack{i,j>0\\i\alpha+j\beta\in \Phi}}\widetilde{x}_{i\alpha+j\beta}(C_{ij,\alpha\beta}(-t)^is^j)$\; ($\alpha,\beta\in \Phi$ and $t,s\in k$), \item $\widetilde{h}_{\alpha}(t)\widetilde{h}_{\alpha}(s)=\widetilde{h}_{\alpha}(ts)$ \;($t,s\in k^{\times}$), \end{enumerate} where $\widetilde{h}_{\alpha}(t)=\widetilde{n}_{\alpha}(t)\widetilde{n}_{\alpha}(-1)$, $\widetilde{n}_{\alpha}(t)=\widetilde{x}_{\alpha}(t)\widetilde{x}_{-\alpha}(-t^{-1})\widetilde{x}_{\alpha}(t)$ and $C_{ij,\alpha\beta}$ are certain integers. Then $\widetilde{G}/Z(\widetilde{G})\cong G$ (the adjoint Chevalley group). The group $\widetilde{G}$ is called \textit{universal Chevalley group} of type $\Phi$ over $k$. For detail constructions of $\widetilde{G}$ see \cite[Chapter 3]{St2}. If $Z$ is any subgroup of $Z(\widetilde{G})$, then the factor group $\widetilde{G}/Z$ is called \textit{Chevalley group}. For $\alpha\in \Phi, t\in k^{\times}$ set \begin{align} n_{\alpha}(t):&=x_{\alpha}(t)x_{-\alpha}(-t^{-1})x_{\alpha}(t) \\ h_{\alpha}(t):&=n_{\alpha}(t)n_{\alpha}(-1)\label{torus}. \end{align} \begin{notation}\label{uvhn} Let $G$ be an adjoint Chevalley group. For each $\alpha\in \Phi$, let $w_{\alpha}$ be the reflection in the hyperplane orthogonal to $\alpha$. The following notations shall be used throughout this paper. \begin{align*} U&=\langle x_{\alpha}(t)\mid \alpha \in \Phi^{+}, t\in k\rangle, \\ V&=\langle x_{\alpha}(t)\mid \alpha \in \Phi^{-}, t\in k\rangle, \\ H&=\langle h_{\alpha}(t)\mid \alpha \in \mathfrak Delta, t\in k^{\times}\rangle, \\ N&=\langle H, n_{\alpha}(1)\mid \alpha \in \Phi\rangle,\\ W&=\langle w_{\alpha} \mid \alpha\in \mathfrak Delta \rangle. \end{align*} \end{notation} Observe that $n_{\alpha}(1)Hn_{\alpha}(1)^{-1}=H$, i.e., $H$ is a normal subgroup of $N$ and $N/H\cong W$ given by $n_{\alpha}(1)H\mapsto w_\alpha$. \begin{example}\label{example1} Let $\mathcal{L}_k=\mathfrak{sl}_2(k)=\left\{\begin{pmatrix} a&b\\c&d\end{pmatrix}\in \mathrm{M}_2(k)\mid a+d=0\right\}$ be the simple Lie algebra of type $A_1$. A Chevalley basis for $\mathfrak{sl}_2(k)$ is $$\left\{e_\alpha=\begin{pmatrix}0&1\\0&0\end{pmatrix}, e_{-\alpha}=\begin{pmatrix}0&0\\1&0\end{pmatrix}; h_\alpha=\begin{pmatrix}1&0\\0&-1\end{pmatrix}\right\}.$$ \end{example} Now $\widetilde{x}_{\alpha}(t):=\exp(te_{\alpha})=\begin{pmatrix}1&t\\0&1\end{pmatrix}$ and $\widetilde{x}_{-\alpha}(t):=\exp(te_{-\alpha})=\begin{pmatrix}1&0\\t&1\end{pmatrix}$. Then \[ \widetilde{n}_{\alpha}(t):=\widetilde{x}_{\alpha}(t)\widetilde{x}_{-\alpha}(-t^{-1})\widetilde{x}_{\alpha}(t)=\begin{pmatrix}0&t\\-t^{-1}&0\end{pmatrix},\; \widetilde{h}_{\alpha}(t):=\widetilde{n}_{\alpha}(t)\widetilde{n}_{\alpha}(-1)=\begin{pmatrix}t&0\\0&t^{-1}\end{pmatrix}.\] Note that $\SL_2(k)=\langle\widetilde{x}_{\alpha}(t),\widetilde{x}_{-\alpha}(t)\mid t\in k\rangle$, and the adjoint Chevalley group $G(A_1, k)=\langle x_{\alpha}(t), x_{-\alpha}(t)\mid t\in k\rangle$. 
Thus there is a surjective homomorphism $\eta: \SL_2(k)\rightarrow G(A_1,k)$ given by $\eta(\widetilde{x}_{\alpha}(t))=x_{\alpha}(t),\; \eta(\widetilde{x}_{-\alpha}(t))=x_{-\alpha}(t),\; \eta(\widetilde{n}_{\alpha}(t))=n_{\alpha}(t)$ and $\eta(\widetilde{h}_{\alpha}(t))=h_{\alpha}(t)$ with $\ker(\eta)=\{\pm I_2\}$. Hence $G(A_1,k)\cong \mathrm{PSL}_2(k)$. \begin{definition}\label{hhat} Let $R:=\mathbb{Z}\Phi$ be the root lattice and $\chi:R \rightarrow k^{\times}$ be a $k$-character (i.e., a group homomorphism from the additive group $R$ to the multiplicative group $k^{\times}$). Then $\chi$ gives rise to an automorphism, denoted by $h(\chi)$, of the Lie algebra $\mathcal{L}_k$ given by $h(\chi)(e_\alpha)=\chi(\alpha)e_{\alpha} \;(\alpha\in \Phi)$ and $h(\chi)(h_{\delta})=h_{\delta} \;(\delta\in \mathfrak Delta)$. Here $e_\alpha, h_{\delta}$ as in equation \eqref{chevalleybasis}. Define $\widehat{H}:=\{h(\chi)\mid \chi:\mathbb{Z}\Phi \rightarrow k^{\times}\}$. \end{definition} For $\alpha\in \mathfrak Delta, t\in k^{\times}$ the element $h_{\alpha}(t)$ corresponds to the character $\chi:=\chi_{\alpha, t}$ given by $\chi_{\alpha, t}(\beta)=t^{\frac{2(\alpha, \beta)}{(\alpha, \alpha)}}$, where $\frac{2(\alpha, \beta)}{(\alpha, \alpha)}\in \mathbb{Z}$ are the Cartan integers (for all $\beta\in \Phi$). Therefore $H\leq \widehat{H}$. In other words, $H$ can be viewed as a subgroup of $G$ consisting of all automorphisms $h(\chi)$ of $\mathcal{L}_k$ for which a $k$-character $\chi$ of $\mathbb Z\Phi$ can be extended to a $k$-character of $\mathbb{Z}\langle q_1, q_2, \ldots, q_l\rangle $ (\textit{weight lattice}), where $q_1, \ldots, q_l$ are fundamental weights. Note that $\widehat{H}\leq N_{\textup{Aut}(\mathcal{L}_k)}(G)$. (For details, see \cite[Chapter 7]{ca}). \begin{example} For $\SL_2(k)$, we have $H=\{\textup{diag}(t,t^{-1})\mid t\in k^{\times}\}\cong k^{\times}$. In this special case, let $\mathfrak Delta=\{\alpha\}$ and $\Phi=\{\alpha, -\alpha\}$ (as rank is $1$). Therefore the root lattice is $R=\mathbb{Z}\langle\alpha\rangle\cong \mathbb{Z}$ and the weight lattice is (say) $P=\mathbb{Z}\langle\alpha/2\rangle\cong \mathbb{Z}$. Now let $\chi$ be a $k$-character of $R$ given by $\chi(\alpha)=t$ ($t\in k^{\times}$). Then $\chi(\alpha/2)^2=\chi(\alpha/2+\alpha/2)=\chi(\alpha)=t$. Hence, if $k=\mathbb Q$ (resp. $k=k_0(T), T$ transcendental over $k_0\subset k$) then $H\neq\widehat{H}$ since not all $\mathbb Q$-character $\chi$ (resp. $k_0(T)$-character) can be extended to its weight lattice $P$ as $\mathbb Q$-character (resp. $k_0(T)$-character). But if $k=\mathbb C$ then $H=\widehat{H}$. \end{example} \subsection{Twisted Chevalley groups}\label{tchev} Every semisimple linear group of classical type can be thought of as a Chevalley group or as a twisted Chevalley group. In general, the classical simple groups are the special linear, symplectic, special orthogonal and special unitary groups corresponding to forms whose Witt index is sufficiently large. The twisted groups were discovered independently by Steinberg (cf. \cite{steinberg59}) and Tits (cf. \cite{tits}). For our exposition, we will follow Steinberg's approach. The twisted Chevalley groups can be obtained as certain subgroups of the Chevalley groups $G=G(\Phi, k)\leq\textup{Aut}(\mathcal{L}_k)$, where $\mathcal{L}$ is a Lie algebra over $\mathbb{C}$ and $\mathcal{L}_k$ is the corresponding Lie algebra over $k$. We refer the reader to \cite{ca} and \cite{St2} for details. 
\begin{definition}[Graph automorphism] A symmetry of the Dynkin diagram induces this type of automorphisms. A symmetry of the Dynkin diagram of $\mathcal{L}$ is a permutation $\rho$ of the nodes of the diagram such that the number of edges joining nodes $i, j$ is the same as the number of edges joining nodes $\rho(i), \rho(j)$ for all $i\neq j$, i.e., if $n_{ij}$ is equal to the number of edges joining the nodes corresponding to $\alpha_i, \alpha_j \in \mathfrak Delta$ (simple roots), then $n_{ij}=n_{\rho(i)\rho(j)}$. Any graph automorphism will be denoted by $\overline{\rho}$. \end{definition} \begin{definition}[Field automorphism] Let $f$ be an automorphism of the field $k$, then the map \[\widetilde{f}:G\rightarrow G \text{ defined by }\widetilde{f}(x_{\alpha}(t))=x_{\alpha}(f(t)),\; \alpha \in \Phi, t\in k\] can be extended to an automorphism of $G$. The automorphisms obtained in this way are called field automorphisms. We shall abuse the notation slightly and denote the field automorphism of $G$ by $f$ itself. \end{definition} Since we are considering only $\mathrm{char}\;k=0$, then non-trivial graph automorphisms exist only for the root systems of types $A_l \, (l\geq 2)$, $D_l\, (l\geq 4), E_6$ and $D_4$, and the order of the graph automorphisms are $2, 2, 2$ and $3$, respectively. Let $\rho$ be a non-trivial symmetry of the Dynkin diagram of $\mathcal{L}$ then the order of $\rho$ is $2$ or $3$. For this article, by a Chevalley group of type $\Phi$ we always mean one of the following types: $A_l \, (l\geq 2)$, $D_l\, (l\geq 4)$ and $E_6$. Let $G$ be the Chevalley group of type $\Phi$. Then there is a graph automorphism $\overline{\rho}$ of $G$ such that \[\overline{\rho}(x_{\alpha}(t))=x_{\rho(\alpha)}(t),\] for all $\alpha \in \mathfrak Delta$ and $t\in k$. Let $f: G\rightarrow G$ be a field automorphism, i.e., $f(x_{\alpha}(t))=x_{\alpha}(f(t))$. Then \[(\overline{\rho} f)(x_{\alpha}(t))=\overline{\rho}(x_{\alpha}(f(t)))=x_{\rho(\alpha)}(f(t))=f(x_{\rho(\alpha)}(t))=f(\overline{\rho}(x_{\alpha}(t)))=(f \overline{\rho})(x_{\alpha}(t))\] for all $\alpha \in \mathfrak Delta$ and $t\in k$. Therefore $f\circ \overline{\rho}=\overline{\rho}\circ f$, as $G$ is generated by $x_{\alpha}(t)$. Let $n$ be the order of $\rho$, then \[\overline{\rho}^n\cdot x_{\alpha}(t)= x_{\rho^n(\alpha)}(t)=x_{\alpha}(t)\] for all $\alpha \in \mathfrak Delta$. Therefore $\overline{\rho}^n=\mathrm{Id}$. Hence for the automorphism $\sigma :=\overline{\rho}\circ f: G\rightarrow G$, we have $\sigma^n=\overline{\rho}^nf^n=f^n$. If $f$ is any non-trivial field automorphism such that $f^n=\mathrm{Id}$, then $\sigma^n=\mathrm{Id}$. With the terminologies as in Notation \ref{uvhn}, we have the following important result which will be used to describe the twisted Chevalley group. \begin{lemma}\cite[Proposition 13.4.1]{ca} Let $G$ be a Chevalley group of type $\Phi$ over a field $k$ of $\mathrm{char}\;k=0$, whose Dynkin diagram has a non-trivial symmetry $\rho$. Let $\overline{\rho}$ be the graph automorphism corresponding to $\rho$ and $f$ be a non-trivial field automorphism chosen such that $\sigma=\overline{\rho} f$ satisfies $\sigma^n=\mathrm{Id}$, i.e., $f$ is chosen so that $f^n=\mathrm{Id}$, where $n$ is the order of $\rho$ which is either $2$ or $3$. Then we have $\sigma(U)=U, \sigma(V)=V, \sigma(H)=H, \sigma(N)=N$ and $\sigma: N/H\cong W \rightarrow W$ is given by $\sigma(w_{\alpha})=w_{\rho({\alpha})}$ for all $\alpha \in \mathfrak Delta$. 
Here $w_{\alpha}$ denotes the reflection in the hyperplane orthogonal to $\alpha$. \end{lemma} Now we are in a position to define the twisted Chevalley groups as a certain subgroups of the Chevalley groups which are fixed elementwise by the automorphisms $\sigma=\overline{\rho}f$, where $\overline{\rho}$ is a graph automorphism and $f$ is a field automorphism of $G$. \begin{definition} The following notations will be used henceforth. \begin{enumerate} \item $U':=\{u\in U\mid \sigma(u)=u\}$. \item $V':=\{v\in V\mid \sigma(v)=v\}$. \item $G'_{\sigma}:=\langle U', V'\rangle\leq G$. \item $H':=H\cap G'_{\sigma}$. \item $N':=N\cap G'_{\sigma}$. \end{enumerate} \end{definition} The group $G'_{\sigma}$ is called the \textbf{twisted Chevalley group of adjoint type} with respect to the automorphism $\sigma$. Similarly, we can define $\widehat{H'}:=\widehat{H}\cap N_{\textup{Aut}(\mathcal{L})}(G'_{\sigma})$, which contains $H'$ and normalizes $G'_{\sigma}$ (cf. \defref{hhat}). This will be useful to describe the diagonal automorphisms of the twisted Chevalley group in the next section. In this context, we recall an important result by Steinberg, which says that all the twisted Chevalley groups of adjoint type are simple (with few exceptions), see \cite[p. 884, Theorem 8.1]{steinberg59}. Also, we can define the universal twisted Chevalley group $\widetilde{G}'_{\sigma}$. Generally twisted Chevalley groups are of the form $\widetilde{G}'_{\sigma}/Z$ (also denoted by $G'_{\sigma}$), where $Z\leq Z(\widetilde{G}'_{\sigma})$ is a central subgroup (cf. \secref{section21}). Then we have the following short exact sequence of groups \begin{align}\label{qtog} \xymatrix{1\ar[r]&Z(G'_{\sigma})\ar[r]&G'_{\sigma}\ar[r]&G'_{\sigma}/Z(G'_{\sigma})\ar[r]&1} \end{align} where $G'_{\sigma}/Z(G'_{\sigma})$ is the twisted Chevalley group of adjoint type. \begin{remark}\label{center} It follows from \cite[p. 29, Lemma 28 (d)]{St2} that the center $Z(\widetilde{G})$ of the universal Chevalley group $\widetilde{G}$ is finite. Also, observe that the center of the twisted Chevalley group of universal type is $Z(\widetilde{G}'_{\sigma})=Z(\widetilde{G})^{\sigma}=\{g\in Z(\widetilde{G})\mid \sigma(g)=g\}$ (see \cite[p. 108, Exercise]{St2}). Thus the center $Z(G'_{\sigma})$ of the twisted Chevalley group is finite. \end{remark} \begin{example} \begin{enumerate}\addtolength{\itemindent}{-6mm} \item\cite[p. 882, Section 6]{steinberg59} Let $\mathcal{L}=A_l\; (l\geq 2)$. Then the adjoint Chevalley group is $G\cong \mathrm{PSL}_{l+1}(k)$ and the universal Chevalley group is $\widetilde{G}\cong \mathrm{SL}_{l+1}(k)$. The twisted Chevalley group of adjoint type is $G'_{\sigma}\cong \mathrm{PSU}_{l+1}(k,J)$ and the universal twisted Chevalley group is $\widetilde{G}'_{\sigma}\cong \mathrm{SU}_{l+1}(k,J)$, where \[J=\epsilon\begin{pmatrix} &&&&1\\ &&&-1&\\&&1&&\\&-1&&&\\\reflectbox{$\ddots$}\end{pmatrix}.\] Here $\epsilon \in k$ such that $\epsilon+\bar{\epsilon}=0$ if $l$ is odd and $\epsilon=1$ if $l$ is even. Here $\bar\;:k\rightarrow k$ is an involutory automorphism of $k$ with fixed field $k_0$, i.e., $k$ is a degree two Galois extension of $k_0$. (A prototypical example is the Galois extension $\mathbb{C}$ over $\mathbb R$ with `bar' being the complex conjugate). \item\cite[p. 
886, Section 9]{steinberg59} Let $\mathcal{L}=D_l\; (l\geq 5)$. Then the adjoint Chevalley group is $G\cong \mathrm{P\Omega}_{2l}(k, B_D)$ and the universal Chevalley group is $\widetilde{G}\cong \mathrm{\Omega}_{2l}(k, B_D)$, where $B_D=x_1x_{-1}+\cdots+x_lx_{-l}$ (quadratic form over $k$). The adjoint twisted Chevalley group is $G'_{\sigma}\cong \mathrm{P\Omega}_{2l}(k_0, B)$ and the universal twisted Chevalley group is $\widetilde{G}'_{\sigma}\cong \mathrm{\Omega}_{2l}(k_0, B)$, where $B=x_1x_{-1}+\cdots+x_{l-1}x_{-(l-1)}+(x_l-dx_{-l})(x_l-\bar{d}x_{-l})$ (quadratic form over $k_0$), and $k=k_0(d)$. Here $\bar\; : k\rightarrow k$ is an order two automorphism of $k$ with $k_0$ being its fixed field. \end{enumerate} \end{example} \subsection{Automorphisms of twisted Chevalley groups} To study the $R_{\infty}$-property of a group, it is important to understand the automorphisms of the given group. In this section, we recall three fundamental types of automorphism of the twisted Chevalley group. We refer the reader to \cite[Section 12.2]{ca} for details. \begin{definition}[Inner automorphism] For any $x\in G'_{\sigma}$, the map \[i_x:G'_{\sigma}\rightarrow G'_{\sigma} \text{ given by }i_x(g)=xgx^{-1}\] is an automorphism of $G'_{\sigma}$. The automorphism $i_x$ is called the inner automorphism of $G'_{\sigma}$ induced by $x$. \end{definition} \begin{definition}[Diagonal automorphisms] Let $h\in \widehat{H'}\setminus H'$. Then the map \[d_h:G'_{\sigma}\rightarrow G'_{\sigma} \text{ given by }d_h(g)=hgh^{-1}\] is an automorphism of $G'_{\sigma}$, since $G'_{\sigma}$ is normalized by $\widehat{H'}$. The automorphism $d_h$ is called a diagonal automorphism. The diagonal automorphisms of $G'_{\sigma}$ are obtained by conjugating with suitable diagonal matrices. \end{definition} \begin{definition}[Field automorphism] Let $f$ be an automorphism of the field $k$. Then the map \[\overline{f}:G'_{\sigma}\rightarrow G'_{\sigma} \text{ given by }\overline{f}(x_{\alpha}(t))=x_{\alpha}(f(t)),\; \alpha \in \Phi, t\in k\] can be extended to an automorphism of $G'_{\sigma}$. The automorphisms obtained in this way are called field automorphisms. In terms of matrices, this amounts to replacing each entry of the matrix by its image under $f$. \end{definition} \par Now we are ready to state the following theorem due to Steinberg, which is the main tool to prove our results. \begin{theorem}\cite[p. 111, Theorem 36]{St2}\label{staut} With the preceding notations, let $\sigma=\overline{\rho}f\,(\neq \mathrm{Id})$ be the automorphism of the Chevalley group $G$. Then every automorphism $\varphi$ of the twisted Chevalley group $G_{\sigma}'$ is a product of an inner automorphism $i_g$, a diagonal automorphism $d_h$ and a field automorphism $\overline{f}$, i.e., $\varphi=\overline{f}\circ d_h \circ i_g$ for some $g\in G_{\sigma}'$ and $h\in \widehat{H'}$. \end{theorem} The following lemma will be used in the proof of our main theorems. \begin{lemma}\label{normality} Let $D$ be the group of all diagonal automorphisms of $G'_{\sigma}$ and let $\Gamma$ be the group generated by all field and diagonal automorphisms of $G'_{\sigma}$. Then $D$ is a normal subgroup of $\Gamma$. \end{lemma} \begin{proof} Clearly, $D$ is a subgroup of $\Gamma$. Let $\overline{f}$ be a field automorphism of $G'_{\sigma}$ and $d_h\in D$, where $h=\textup{diag}(h_1, \ldots, h_{|\mathfrak Delta|+|\Phi|})\in \widehat{H'}\setminus H', h_i\in k^{\times}$. 
Suppose $g=(g_{ij})\in G'_{\sigma}$, then we have \begin{align*} \overline{f}d_h\overline{f}^{-1}(g_{ij})&=\overline{f}d_h(f^{-1}(g_{ij}))=\overline{f}h(f^{-1}(g_{ij}))h^{-1} =\overline{f}(h_if^{-1}(g_{ij})h_j^{-1})\\ &=(f(h_if^{-1}(g_{ij})h_j^{-1}))=(f(h_i)g_{ij}f(h_j^{-1}))=d_{\widetilde{h}}(g_{ij}) \end{align*} where $\widetilde{h}=\textup{diag}(f(h_1), \ldots, f(h_{|\mathfrak Delta|+|\Phi|}))$. Hence $D$ is a normal subgroup of $\Gamma$. \end{proof} \subsection{Some useful results} The following two lemmas hold for arbitrary groups. Suppose $\varphi$ is an automorphism of a group $G$. Let $i_g$ be the inner automorphism of $G$ for some $g$ in $G$, i.e., $i_g(x)=gxg^{-1}$ for all $x\in G$. Let $\mathcal{R}(\varphi):=\{[g]_{\varphi}\mid g\in G \}$. Thus the Reidemeister number $R(\varphi)$ is the cardinality of $\mathcal{R}(\varphi)$. Now let $x,y\in G$ such that $[x]_{\varphi\circ i_g}=[y]_{\varphi\circ i_g}$. Then there exists a $z\in G$ such that $$y=zx(\varphi\circ i_g)(z^{-1})=zx\varphi(gz^{-1}g^{-1})=zx\varphi(g)\varphi(z^{-1})\varphi(g^{-1}).$$ This implies that $y\varphi(g)=zx\varphi(g)\varphi(z^{-1})$, i.e, $[x\varphi(g)]_{\varphi}=[y\varphi(g)]_{\varphi}$. Thus we get a well-defined map \[\widehat{\varphi}: \mathcal{R}(\varphi\circ i_g)\longrightarrow \mathcal{R}(\varphi)\] given by $\widehat{\varphi}([x]_{\varphi\circ i_g})=[x\varphi(g)]_{\varphi}$. The map $\widehat{\varphi}$ is bijective as well. We summarise this as follows: \begin{lemma}\cite[Corollary 3.2]{FLT}\label{inner} Suppose $\varphi \in \textup{Aut}(G)$ and $i_g \in \mathrm{Inn}(G)$ then $R(\varphi i_g)=R(\varphi)$. In particular, $R(i_g)=R(\mathrm{Id})$, i.e., the number of inner twisted conjugacy classes in $G$ is equal to the number of conjugacy classes in $G$. \end{lemma} The following result appears in \cite[Lemma 2.2]{MS}. We include a proof for the sake of completeness. \begin{lemma}\label{msqtog} Let $1\rightarrow N\overset{i}{\rightarrow} G\overset{\pi}{\rightarrow} Q\rightarrow 1$ be an exact sequence of groups. Suppose that $N$ is a characteristic subgroup of $G$, i.e., $\varphi(N)=N$ for all $\varphi\in \textup{Aut}(G)$. If $Q$ has the $R_{\infty}$-property, then $G$ also has the $R_{\infty}$-property. \end{lemma} \begin{proof} Suppose that $\varphi$ is any automorphism of $G$. Since $\varphi(N)=N$ then $\varphi$ induces an automorphism $\overline{\varphi}$ of $Q\cong G/N$ such that the following diagram commutes: \[\xymatrix{ 1\ar[r] & N\ar[r]^{i}\ar[d]_{\varphi|_{N}}& G\ar[r]^{\pi} \ar[d]^{\varphi}& Q\ar[r]\ar[d]^{\overline{\varphi}}&1 \\ 1\ar[r] & N\ar[r]^{i}& G\ar[r]^{\pi}& Q\ar[r]&1 }\] where $\varphi|_{N}$ is the automorphism of $N$. In particular, we have $\overline{\varphi}\circ \pi=\pi\circ \varphi$. Now, observe that $\pi$ induces a surjective map $\widehat{\pi}:\mathcal{R}(\varphi)\longrightarrow\mathcal{R}(\overline{\varphi})$ given by $\widehat{\pi}([x]_{\varphi})=[\pi(x)]_{\overline{\varphi}}$ for all $x\in G$. Hence $R(\varphi)\geq R(\overline{\varphi})$. Thus $G$ has the $R_{\infty}$-property since $Q$ has so. \end{proof} Let $\mathbb Q$ be the field of rational numbers, $S$ be the set of prime numbers and $2^{S}$ be the set of all subsets of $S$. Then define the following map \[\nu:\mathbb Q \rightarrow 2^{S}\] by $\nu(a/b)=\{\text{all prime divisors of }a\}\cup \{\text{all prime divisors of b}\}$, where $a$ and $b$ are mutually prime integers. Using the map $\nu$, Nasybullov \cite[Lemma 2.5]{FN} proved the following result. 
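For instance, as a purely illustrative computation, $\nu(20/9)=\{2,5\}\cup\{3\}=\{2,3,5\}$, while $\nu(p^{m})=\{p\}$ for every prime $p$ and every integer $m\geq 1$; in particular, rational numbers built from disjoint sets of primes have disjoint images under $\nu$, which is exactly the form in which the map enters the hypotheses below.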
\begin{lemma}\label{lemma5} Let $k$ be a field of characteristic zero such that the transcendence degree of $k$ over $\mathbb Q$ is finite. If the automorphism $f$ of the field $k$ acts on the elements $z_1, z_2, \ldots$ of the field $k$ by the rule \[f:z_i\mapsto \alpha a_iz_i, \] where $\alpha\in k$, $1\neq a_i\in \mathbb Q\subset k$ and $\nu(a_i)\cap \nu(a_j)=\emptyset$ for $i\neq j$, then there are only a finite number of non-zero elements among $z_1, z_2, \ldots$. \end{lemma} We include a proof of the following lemma, which can be found in \cite[Lemma 6]{NaT}. \begin{lemma}\label{lemma6} Let $R$ be an integral domain and $M$ be an infinite subset of $R$. Let $f(T)$ be a non-constant rational function with coefficients from the ring $R$. Then the set $P=\{f(a)\mid a\in M\}$ is infinite. \end{lemma} \begin{proof} Let $f(T)=\frac{g(T)}{h(T)}$, where $g(T),h(T)\in R[T]$ with $h(T)\neq 0$. If possible, suppose that the set $P$ is finite. Then, by the pigeonhole principle, there is an infinite subset $\{a_i\mid i=1,2,\dots\}$ of $M$ on which $f$ takes a single value, say $f(a_i)=c$ for all $i$. In that case, the polynomial $\alpha(T)=g(T)-ch(T)\in K[T]$ has infinitely many roots in $K$, where $K$ is the field of fractions of $R$. Thus $\alpha(T)=0$ and hence $f(T)=c$ is a constant function, a contradiction. \end{proof} The proof of the following result is similar to the Chevalley group case as in \cite[Lemma 7]{NaT} and hence we skip the details. \begin{lemma}\label{lemma7} Let $g(T)=h_{\alpha_{1}}(T)h_{\alpha_{2}}(T)\cdots h_{\alpha_{l}}(T)$ (here $h_{\alpha_i}(T)$ is as in equation \eqref{torus}) be an element of the twisted Chevalley group over $k(T)$ and $\chi: \mathbb{Z}\Phi \rightarrow k^{\times}$ be a homomorphism. Then for any $m$, the element $g(T)^mh(\chi)$ with respect to the Chevalley basis has a diagonal form such that its trace belongs to $k(T)\setminus k$. Here $k(T)$ denotes the field of rational functions in one variable $T$ over the field $k$. \end{lemma} \section{Proofs of the main results}\label{mainsection} We are now ready to establish our two main theorems. \subsection{Proof of \thmref{mainthm1}}\label{proof1} The argument in the case of Chevalley groups may be followed here verbatim. In view of equation \eqref{qtog} and \lemref{msqtog}, it is enough to prove this theorem for the twisted Chevalley group of adjoint type $G'_{\sigma}$. Let $\varphi \in \textup{Aut}(G'_{\sigma})$. Then by \thmref{staut} we have $\varphi=\overline{f}\circ d_h\circ i_g$, where $i_g$ is an inner automorphism for some $g\in G'_{\sigma}$, $d_h$ is a diagonal automorphism for some $h\in \widehat{H'}$ and $\overline{f}$ is a field automorphism. By Lemma \ref{inner}, we may assume that $\varphi=\overline{f}\circ d_h$. In view of Lemma \ref{normality}, we have $\varphi=d_{\widetilde{h}}\circ \overline{f}$ for some $\widetilde{h}\in \widehat{H'}$. \noindent \textbf{Claim:} $R(\varphi)=\infty$. Suppose, if possible, that $R(\varphi)<\infty$. Let \begin{align*} g_i=h_{\alpha_1}(p_{i1})h_{\alpha_2}(p_{i2})\cdots h_{\alpha_l}(p_{il}), \end{align*} where $p_{11}<p_{12}<\cdots <p_{1l}<p_{21}<p_{22}<\cdots$ are primes and $\{\alpha_{1}, \alpha_{2}, \ldots, \alpha_l\}=\mathfrak Delta$ is the set of all simple roots. 
Now writing $g_i$ with respect to the aforementioned Chevalley basis \eqref{chevalleybasis}, we get \[g_i=\textup{diag}(a_{i1}, a_{i2}, \ldots, a_{i{|\Phi|}};\underbrace{1, \ldots, 1}_l),\] where $a_{ij}\in \mathbb Q$ such that $\nu(a_{ij})\neq \emptyset$ and $\nu(a_{ij})\cap \nu(a_{rs})=\emptyset$ for $i\neq r$, since $\nu(a_{ij})\subset \{p_{i1}, \ldots, p_{il}\}$ for all $i, j$. Since $a_{ij}\in \mathbb Q$ and field automorphism acts identically on the prime subfield $\mathbb Q$ of $k$, then $\overline{f}(g_i)=g_i$. Also, the diagonal automorphism $d_h$ acts by conjugation. Here $h$ is an $(|\Phi|+|\mathfrak Delta|)\times (|\Phi|+|\mathfrak Delta|)$-diagonal matrix and $g_i$ is diagonal as well, so $d_h(g_i)=g_i$ for all $i$. Therefore $\varphi(g_i)=(\overline{f}\circ d_h)(g_i)=g_i$ for all $i$. Since we are assuming that the number of $\varphi$-twisted conjugacy classes is finite, without loss of generality, we may also assume that $g_i\sim_{\varphi} g_1$ for all $i=2, 3, \ldots$. Therefore \begin{align*} g_1&=z_ig_i\varphi(z_i^{-1})=z_ig_i(\overline{f}d_h)(z_i^{-1})=z_ig_i(d_{\widetilde{h}}\overline{f})(z_i^{-1})=z_ig_i\widetilde{h}\overline{f}(z_i^{-1}) \widetilde{h}^{-1} \end{align*} for some $z_i\in G'_{\sigma}$ for all $i=2,3, \ldots$. Hence $g_1\widetilde{h}=z_i(g_i\widetilde{h}) \overline{f}(z_i^{-1})$. This implies that \begin{align}\label{field} \overline{f}(z_i)&=(g_1\widetilde{h})^{-1}z_i(g_i\widetilde{h}). \end{align} Let \[\widetilde{h}=\textup{diag}(b_1, b_2, \ldots, b_{|\Phi|}; \underbrace{1, \ldots, 1}_l)\in \widehat{H'},\] then we have \begin{align*} g_i\widetilde{h}=\textup{diag}(a_{i1}b_1, a_{i2}b_2, \ldots, a_{i|\Phi|}b_{|\Phi|}; \underbrace{1, \ldots, 1}_l) \end{align*} for all $i=1,2,\ldots $. Let $z_i=\begin{pmatrix}Q_i&R_i\\S_i&T_i\end{pmatrix}$ be a block matrix, where $Q_i=(q_{i, mn})_{|\Phi|\times |\Phi|}$, $R_i=(r_{i, mn})_{|\Phi|\times |\mathfrak Delta|}, S_i=(s_{i, mn})_{|\mathfrak Delta|\times |\Phi|}$ and $T_i=(t_{i, mn})_{|\mathfrak Delta|\times |\mathfrak Delta|}$. Then by equation \eqref{field}, for all $m, n=1, 2, \ldots, |\Phi|$, we have \[f(q_{i, mn})=a_{1m}^{-1}b_m^{-1}a_{in}b_nq_{i, mn}=c_{mn}a_{in}q_{i, mn},\] where $c_{mn}=(a_{1m}b_mb_n^{-1})^{-1}$. Since $\nu(a_{in})\neq \emptyset$ and $\nu(a_{in})\cap \nu(a_{jn})=\emptyset$ for $i\neq j$, then we can apply \lemref{lemma5} to the elements $q_{2, mn}, q_{3, mn}, \ldots$ and we get $q_{j, mn}=0$ for all $j>N_{mn}$ for some integer $N_{mn}$. Choose $N=\mathrm{max}_{m, n=1, 2, \ldots, |\Phi|}N_{mn}$, then we have $Q_j=(q_{j, mn})_{|\Phi|\times |\Phi|}=O_{|\Phi|\times |\Phi|}$ for all $j>N$ (by $O_{m\times n}$, we mean the $m\times n$ matrix all of whose entries are zero). Using the similar arguments to the matrices $\{S_i\}_{i}$, we conclude that all the matrices $S_i$ reduces to the zero matrix for sufficiently large indices $i$. Hence the matrix $z_i$ has the following form for sufficiently large indices $i$: \[z_i=\begin{pmatrix}O_{|\Phi|\times |\Phi|}&R_i\\O_{|\mathfrak Delta|\times |\Phi|}&T_i\end{pmatrix}.\] Therefore, for $i$ sufficiently large, $\mathrm{det}(z_i)=0$, which is a contradiction as $z_i\in G'_{\sigma}\leq \textup{Aut}(\mathcal{L}_k)$. Hence $R(\varphi)=\infty$. This completes the proof. \subsection{Proof of \thmref{mainthm2}} In view of equation \eqref{qtog} and \lemref{msqtog}, it is enough to show this theorem for twisted Chevalley group of adjoint type $G'_{\sigma}$. 
Let $\varphi \in \textup{Aut}(G'_{\sigma})$, then as in the proof of Theorem 1.2 in \secref{proof1}, we may assume that $\varphi=\overline{f}\circ d_h$. Since $\textup{Aut}(k)$, the automorphism group of the field $k$, is periodic, the field automorphism $\overline{f}$ is of finite order, say $n$. \noindent \textbf{Claim:} $R(\varphi)=\infty$. Suppose if possible $R(\varphi)<\infty$. Let $g(T)=h_{\alpha_{1}}(T)h_{\alpha_{2}}(T)\cdots h_{\alpha_{l}}(T)$ be an element of the twisted Chevalley group over $k(T)$, where $\alpha_1, \ldots, \alpha_l$ are the simple roots. Consider the elements $g_i:=g(x_i)$, where $\{x_i\}$ is an infinite set of non-zero rational numbers. Since we are assuming that the number of $\varphi$-twisted conjugacy classes is finite, without loss of generality, we may also assume that $g_i\sim_{\varphi} g_1$ for all $i=2, 3, \ldots$. Therefore \[g_i=z_ig_1\varphi({z_i^{-1}})\] for some $z_i\in G'_{\sigma}$ for all $i=2,3, \ldots $. Then we have \begin{equation}\label{keyeq} \begin{split} g_i&=z_ig_1\varphi(z_i^{-1})\\ \varphi(g_i)&=\varphi(z_i)\varphi(g_1)\varphi^2(z_i^{-1})\\ \varphi^2(g_i)&=\varphi^2(z_i)\varphi^2{(g_1)}\varphi^3(z_i^{-1})\\ &\ldots\\ \varphi^{n-1}(g_i)&=\varphi^{n-1}(z_i)\varphi^{n-1}(g_1)\varphi^n(z_i^{-1}). \end{split} \end{equation} In view of \lemref{normality}, for all $r=1, 2, \ldots, n$, we get $\overline{f}^r\circ d_h=d_{\widetilde{h}}\circ \overline{f}^r$ for some $\widetilde{h}\in \widehat{H'}$. Therefore $\varphi^r=d_{\widetilde{h}}\circ \overline{f}^r$ for all $r$. Since the field automorphism acts as an identity on the prime subfield $\mathbb Q$ of $k$, then $\overline{f}(g_i)=g_i$. Also, the diagonal automorphism $d_h$ acts by conjugation, where $h$ is an $(|\Phi|+|\mathfrak Delta|)\times (|\Phi|+|\mathfrak Delta|)$-diagonal matrix and $g_i$ is diagonal as well, so $d_h(g_i)=g_i$ for all $i$. Therefore $\varphi(g_i)=(\overline{f}\circ d_h)(g_i)=g_i$ for all $i$. Therefore $\varphi^r(g_i)=g_i$ for all $r$. From equation \eqref{keyeq}, we get $g_i^n=z_ig_1^nd_{\widetilde{h}}(z_i^{-1})=z_ig_1^n\widetilde{h}z_i^{-1}\widetilde{h}^{-1}$, since $\overline{f}^n=\mathrm{Id}$. Then \[g_i^n\widetilde{h}=z_i(g_1^n\widetilde{h})z_i^{-1},\] where $z_i\in G'_{\sigma}$ and $i=2, 3, \ldots$. Since $g_i^n\widetilde{h}=g(x_i)^n\widetilde{h}$ and $g(x_1)^n\widetilde{h}=g_1^n\widetilde{h}$ are conjugate in $\GL_{|\Phi|+|\mathfrak Delta|}(k)$, their traces are equal for all $i=2, 3, \ldots$. In view of \lemref{lemma7}, we see that $\mathrm{trace}(g(x_i)^n\widetilde{h})=\mathrm{trace}(g(x_1)^n\widetilde{h})\in k(T)\setminus k$, which contradicts \lemref{lemma6}. Therefore $R(\varphi)=\infty$. This completes the proof. \section{Isogredience classes and the $S_{\infty}$-property}\label{isogred} Let $G$ be an arbitrary group. Suppose $\Psi\in \textup{Out}(G):=\textup{Inn}(G)\backslash\textup{Aut}(G)$. Two elements $\alpha, \beta\in \Psi$ are said to be \emph{isogredient} (or similar) if $\beta=i_h\circ \alpha \circ i_{h^{-1}}$ for some $h\in G$, where $i_h(g)=hgh^{-1}$ for all $g\in G$. Observe that this is an equivalence relation on $\Psi$. Fix a representative $\gamma\in \Psi$. Then $\alpha=i_a\circ \gamma$ and $\beta=i_b\circ \gamma$ for some $a,b\in G$. Therefore $i_b\circ \gamma=\beta=i_h\circ \alpha \circ i_{h^{-1}} =i_h\circ i_a\circ \gamma \circ i_{h^{-1}}$. 
Then we have \[i_b=i_h\circ i_a\circ (\gamma \circ i_{h^{-1}}\circ\gamma^{-1})=i_h\circ i_a\circ i_{\gamma(h^{-1})}=i_{ha\gamma(h^{-1})}.\] Thus, $\alpha$ and $\beta$ are isogredient if and only if $b=ha\gamma(h^{-1})c$ for some $c\in Z(G)$ the center of $G$. Let $S(\Psi)$ denote the number of isogredience classes of $\Psi$. If $\Psi=\overline{\textup{Id}}=\textup{Inn}(G)\textup{Id}_G$, then $S(\overline{\textup{Id}})$ is the number of usual conjugacy classes of $G/Z(G)$. A group $G$ has the \emph{$S_{\infty}$-property} if $S(\Psi)=\infty$ for all $\Psi\in \textup{Out}(G)$. Observe that if a group $G$ possesses the $S_{\infty}$-property then $G/Z(G)$ possesses the $R_{\infty}$-property, and hence by \lemref{msqtog} $G$ satisfies the $R_{\infty}$-property. The converse also holds if $Z(G)=\{e\}$. We record this as follows (cf. \cite[Theorem 3.4]{ft2015}): \begin{lemma}\label{sinfty} Let $G$ be an arbitrary group such that $Z(G)=\{e\}$. Then $G$ has the $R_{\infty}$-property if and only if $G$ has the $S_{\infty}$-property. \end{lemma} An automorphism $\varphi$ of $ G $ is said to be \textit{central} if $ g^{-1}\varphi(g)\in Z(G) $ for all $ g \in G.$ Thus corresponding to every central automorphism $\varphi $ one can associate a homomorphism $ f_\varphi: G\to Z(G) $ given by $ f_\varphi(g)=g^{-1}\varphi(g).$ The following two lemmas appear in \cite[Propositions 1, 5, 12]{bnn2013}. We include a proof for the sake of completeness. \begin{lemma}\label{lemma49} If the $\varphi$-conjugacy class $[e]_{\varphi}$ of the unit element $e$ of a group $G$ is a subgroup, then it is always a normal subgroup. If $\varphi$ is a central automorphism of $G$, then $[e]_{\varphi}$ is a subgroup of $G$. \end{lemma} \begin{proof} Suppose $[e]_{\varphi}=\{g\varphi(g)^{-1}\mid g\in G\}$ is a subgroup. Then for any $ h\in G $, we get \[h(g\varphi(g)^{-1})h^{-1}=hg\varphi(g)^{-1}\varphi(h)^{-1}\varphi(h)h^{-1}=hg\varphi(hg)^{-1}(h \varphi(h)^{-1})^{-1}\in [e]_{\varphi}. \] For the second statement, assume that $ \varphi $ is a central automorphism. Then we can write $ \varphi(g)= gf_\varphi(g)$, where $ f_\varphi: G\to Z(G) $ is a homomorphism given by $f_{\varphi}(g)=g^{-1}\varphi(g)$. This implies \[[e]_{\varphi}=\{g\varphi(g)^{-1}\mid g\in G\}=\{gf_\varphi(g^{-1})g^{-1}\mid g\in G\} =\{f_\varphi(g^{-1})\mid g\in G\}. \] Hence $ [e]_{\varphi} $ is a subgroup as $f_{\varphi}$ is a homomorphism. \end{proof} Let $N$ be a normal subgroup of $G$ which is stable under $\varphi$ (i.e., $\varphi(N)=N$), then we denoted by $\overline{\varphi}$, the automorphism induced by $\varphi$ on $G/N$ and by $\overline{e}$, the identity element of the factor group $G/N$. \begin{lemma}\label{lemma51} Let $G$ be a group such that for some automorphism $\varphi$ of $G$, the twisted conjugacy class $[e]_{\varphi}$ is a subgroup of $G$. Suppose that $N$ is a normal $\varphi$-stable subgroup of $G$. Then the $\overline{\varphi}$-conjugacy class $[\overline{e}]_{\overline{\varphi}}$ is a subgroup of $G/N$. \end{lemma} \begin{proof} Let $ x,y\in [\overline{e}]_{\overline{\varphi}}=\{g\varphi(g)^{-1}N\mid g\in G\}$. Then $ x= g\varphi(g)^{-1}N$ and $ y= h\varphi(h)^{-1}N$ for some $ g, h\in G $. This gives \[ xy=g\varphi(g)^{-1} h\varphi(h)^{-1}N=z\varphi(z)^{-1} N\in [\overline{e}]_{\overline{\varphi}},\] for some $ z\in G $ since $[e]_{\varphi}$ is a subgroup of $G$. Similarly it can be shown that $x^{-1}\in[\overline{e}]_{\overline{\varphi}}.$ Hence, the proof. 
\end{proof} The following result is similar to \lemref{msqtog} in the context of the $S_{\infty}$-property. \begin{lemma}\cite[Lemma 2.3]{FN}\label{qtogs} Let $1\rightarrow N\rightarrow G\rightarrow Q\rightarrow 1$ be an exact sequence of groups. Suppose that $N$ is a characteristic subgroup of $G$ and $Q$ has the $S_{\infty}$-property, then $G$ also has the $S_{\infty}$-property. \end{lemma} \subsection{Proof of \corref{maincor1}} Let $G'_{\sigma}$ be a twisted Chevalley group of adjoint type. Therefore $Z(G'_{\sigma})=\{e\}$. Now by \thmref{mainthm1} and \thmref{mainthm2} we know that $G'_{\sigma}$ has the $R_{\infty}$-property. Hence, in view of \lemref{sinfty}, the twisted Chevalley group $G'_{\sigma}$ (of adjoint type) has the $S_{\infty}$-property. Now let $G'_{\sigma}$ be any twisted Chevalley group, then we have the following short exact sequence (as in equation \eqref{qtog}) \[1\rightarrow Z(G'_{\sigma})\rightarrow G'_{\sigma}\rightarrow G'_{\sigma}/Z(G'_{\sigma})\rightarrow 1,\] where $Z(G'_{\sigma})$ is a characteristic subgroup of $G'_{\sigma}$. Then by virtue of \lemref{qtogs}, $G'_{\sigma}$ has the $S_{\infty}$-property as $G'_{\sigma}/Z(G'_{\sigma})$ possesses the $S_{\infty}$-property. This completes the proof. \subsection{Proof of \corref{maincor2}} Suppose that $[e]_{\varphi}$ is a subgroup of $G'_{\sigma}$, then in view of \lemref{lemma51}, $[\overline{e}]_{\overline{\varphi}}$ is a subgroup of $G'_{\sigma}/Z(G'_{\sigma})$. Therefore, by \lemref{lemma49}, $[\overline{e}]_{\overline{\varphi}}$ is a normal subgroup of $G'_{\sigma}/Z(G'_{\sigma})$. Once again recall that Steinberg proved that the twisted Chevalley group of adjoint type $G'_{\sigma}/Z(G'_{\sigma})$ is simple. Therefore either $[\overline{e}]_{\overline{\varphi}}=Z(G'_{\sigma})$ or $[\overline{e}]_{\overline{\varphi}}=G'_{\sigma}/Z(G'_{\sigma})$. Again by \thmref{mainthm1} and \thmref{mainthm2}, $G'_{\sigma}/Z(G'_{\sigma})$ possesses the $R_{\infty}$-property. Therefore $[\overline{e}]_{\overline{\varphi}}=Z(G'_{\sigma})$ which implies $\overline{\varphi}=\mathrm{Id}_{G'_{\sigma}/Z(G'_{\sigma})}$. Hence $\varphi$ is a central automorphism of $G'_{\sigma}$. Conversely, suppose that $\varphi$ is a central automorphism of $G'_{\sigma}$. Then in view of \lemref{lemma49}, the $\varphi$-twisted conjugacy class $[e]_{\varphi}$ is a subgroup of $G'_{\sigma}$. This completes the proof. \textbf{Acknowledgement:} We would like to thank Timur Nasybullov for his wonderful comments and suggestions on this work and also, for helping with the proof of \lemref{normality}. We thank Swathi Krishna for correcting our English. We also would like to thank the anonymous referee for his/her careful reading and for many helpful comments and suggestions which improved the readability of this paper. \end{document}
\begin{document} \title{Dominant poles and tail asymptotics in the \\critical Gaussian many-sources regime \thanks{This work was financially supported by The Netherlands Organization for Scientific Research (NWO) and by an ERC Starting Grant.} } \author{ A.J.E.M. Janssen \and J.S.H. van Leeuwaarden } \maketitle \footnotetext[1]{Eindhoven University of Technology, Department of Mathematics and Computer Science, P.O. Box 513, 5600 MB Eindhoven, The Netherlands. \{a.j.e.m.janssen,j.s.h.v.leeuwaarden\}@tue.nl. } \begin{abstract} The dominant pole approximation (DPA) is a classical analytic method to obtain from a generating function asymptotic estimates for its underlying coefficients. We apply DPA to a discrete queue in a critical many-sources regime, in order to obtain tail asymptotics for the stationary queue length. As it turns out, this regime leads to a clustering of the poles of the generating function, which renders the classical DPA useless, since the dominant pole is not sufficiently dominant. To resolve this, we design a new DPA method, which might also find application in other areas of mathematics, like combinatorics, particularly when Gaussian scalings related to the central limit theorem are involved. \end{abstract} \section{Introduction} Probability generating functions (PGFs) encode the distributions of discrete random variables. When PGFs are regarded as analytic objects, their singularities or poles contain crucial information about the underlying distributions. Asymptotic expressions for the tail distributions, related to large-deviations events, can typically be obtained in terms of the so-called dominant singularities, or dominant poles. The dominant pole approximation (DPA) for the tail distribution is then obtained from the partial fraction expansion of the PGF by retaining only the fraction associated with the dominant pole. Dominant pole approximations have been applied in many branches of mathematics, including analytic combinatorics \cite{flajolet} and queueing theory \cite{VanMieghem1996}. We apply DPA to a discrete queue that has an explicit expression for the PGF of the stationary queue length. Additionally, this queue is considered in a many-sources regime, a heavy-traffic regime in which both the demand on and the capacity of the system grow large, while their ratio approaches one. This many-sources regime combines high system utilization and short delays, due to economies of scale. The regime is similar in flavor to the QED (quality and efficiency driven) regime for many-server systems \cite{halfinwhitt}, although an important difference is that our discrete queue fed by many sources falls into the class of single-server systems and therefore leads to a manageable closed-form expression for the PGF of the stationary queue length $Q$. Denote this PGF by $Q(z)=\mathbb{E}(z^Q)$. PGFs can be represented as power series around $z=0$ with nonnegative coefficients (related to the probabilities). We assume that the radius of convergence of $Q(z)$ is larger than one (in which case all moments of $Q$ exist). This radius of convergence is in fact determined by the dominant singularity $Z_{0}$, the singularity in $|z|>1$ closest to the origin. For PGFs, due to Pringsheim's theorem \cite{flajolet}, $Z_{0}$ is always a positive real number larger than one.
Then DPA leads to the approximation \begin{equation}\label{tttijms} \mathbb{P}(Q> N)\approx \frac{c_0}{1-Z_{0}}\Big(\frac{1}{Z_{0}}\Big)^{N+1} \quad {\rm for} \ {\rm large} \ N \end{equation} with $c_0=\lim_{z\to Z_{0}}(z-Z_{0})Q(z)$. In many cases the approximation \eqref{tttijms} can be turned into a more rigorous asymptotic expansion (for $N$ large) for the tail probabilities $\mathbb{P}(Q> N)$. We shall now explain in more detail the many-sources regime, the discrete queue, and, when combining both, the mathematical challenges that arise when applying DPA. \noindent {\bf Many sources and a discrete queue.} Consider a stochastic system in which demand per period is given by some random variable $A$, with mean $\mu_A$ and variance $\sigma^2_A$. For systems facing large demand one can set the capacity according to the rule $s=\mu_A+\beta \sigma_A$, which consists of a minimally required part $\mu_A$ and a variability hedge $\beta \sigma_A$. Such a rule can lead to economies of scale, as we will now describe in terms of a setting in which the demand per period is generated by many sources. Consider a system serving $n$ independent sources and let $X$ denote the generic random variable that describes the demand per source per period, with mean $\mu$ and variance $\sigma^2$. Denote the service capacity by $s_n$, so that the system utilization is given by $\rho_n=n\mu/s_n$, where the index $n$ expresses the dependence on the scale at which the system operates. The traditional capacity sizing rule would then be \begin{equation}\label{aa} s_n=n\mu+\beta \sigma \sqrt{n} \end{equation} with $\beta$ some positive constant. The standard heavy-traffic paradigm \cite{britt1,nd1,nd2}, which builds on the Central Limit Theorem, then prescribes to consider a sequence of systems indexed by $n$ with associated loads $\rho_n$ such that (also using that $s=s_n\sim n\mu$) \begin{equation}\label{bb} \rho_n=\frac{n\mu}{s_n}\sim 1-\frac{\beta\sigma}{\mu\sqrt{n}}=1-\frac{\gamma}{\sqrt{s_n}}, \quad {\rm as} \ n\to\infty, \end{equation} where $\gamma=\beta\sigma/\sqrt{\mu}$. We shall apply the many-sources regime given by \eqref{aa} and \eqref{bb} to a discrete queue, in which we divide time into periods of equal length, and model the net input in consecutive periods as i.i.d.~samples from the distribution of $A$, with mean $n\mu$ and variance $n\sigma^2$. The capacity per period $s_n$ is fixed and integer valued. The scaling rule in \eqref{bb} thus specifies how the mean and variance of the demand per period, and simultaneously $s_n$, will all grow to infinity as functions of $n$. Many-sources scaling became popular through the Anick-Mitra-Sondhi model \cite{Anick1982}, as one of the canonical models for modern telecommunications networks, in which a switch may have hundreds of different input flows. But apart from communication networks, the concept of many sources can apply to any service system in which demand can be regarded as coming from many different inputs (see e.g.~\cite{Bruneel1993,johanthesis,Dai2014,Newell1960,vanLeeuwaarden2006} for specific applications). \noindent{\bf How to adapt classical DPA?} As it turns out, the many-sources regime drastically changes the nature of the DPA. As the queue is pushed into the many-sources regime by letting $n\to\infty$, the dominant pole becomes barely dominant, in the sense that all the other poles (the dominated ones) of the PGF approach the dominant pole.
For the partial fraction expansion of the PGF this means that it becomes hard, or impossible even, to simply discard the contributions of the fractions corresponding to what we call {\it dominated} poles: all poles other than the dominant pole. Moreover, the dominant pole itself approaches $1$ according to \begin{equation}\langlebel{dombeh} Z_{0}\sim 1+\frac{2\beta}{\sqrt{n\sigma^2}}, \quad {\rm as} \ n\to\infty. {\rm e}nd{equation} This implies that in {\rm e}nd{equation}ref{tttijms} the factor $c_0/(1-Z_{0})$ potentially explodes, while without imposing further conditions on $N$, the factor $Z_{0}^{-N-1}$ goes to the degenerate value 1. The many-sources regime thus has a fascinating effect on the location of the poles that renders a standard DPA useless for multiple reasons. We shall therefore adapt the DPA in order to make it suitable to deal with the complications that arise in the many-sources regime, with the goal to again obtain an asymptotic expansion for the tail distribution. First observe that the term $Z_{0}^{-N-1}$ in {\rm e}nd{equation}ref{tttijms} becomes non-degenerate when we impose that $N\sim K\sqrt{n\sigma^2}$, with $K$ some positive constant, in which case \begin{equation} \Big(\frac{1}{Z_{0}}\Big)^{N+1}\sim \Big(1+\frac{2\beta}{\sqrt{n\sigma^2}}\Big)^{-K \sqrt{n\sigma^2}}\to {\rm e}^{-2\beta K}\in(0,1) \quad {\rm as} \ n\to\infty. {\rm e}nd{equation} The condition $N\sim K\sqrt{n\sigma^2}$ is natural, because the fluctuations of our stochastic system are of the order $\sqrt{n\sigma^2}$. Of course, there are many ways in which $N$ and $n$ can be coupled, but due to {\rm e}nd{equation}ref{dombeh}, only couplings for which $N$ is proportional to $\sqrt{n}$ lead to a nondegenerate limit for $Z_{0}^{-N-1}$. Now let us turn to the other two remaining issues: The fact that $c_0/(1-Z_{0})$ potentially explodes and that the dominated poles converge to the dominant pole. To resolve these two issues we present in this paper an approach that relies on approximations of the type {\rm e}nd{equation}ref{dombeh} for all the poles (which are defined implicitly as the solutions to some equation). The approximations are accurate in the many-sources regime, and can then be substituted into the partial fraction expansion that describes the tail distribution. We replace the partial fraction expansion by a contour integral representation, and subsequently apply a dedicated saddle point method recently introduced in \cite{britt1}, with again a prominent role for the dominant pole (this time in relation to the saddle point). The key challenge is to bound the contributions of the contour integral when shifted beyond the dominant pole, a contribution which is substantial due to the relative large impact of the dominated poles. This saddle point method then provides a fully rigorous derivation of the asymptotic expression for $\mathbb{P}(Q> N)$ and is of the form \begin{equation} \langlebel{onesix} \mathbb{P}(Q> K \sqrt{n\sigma^2})\sim h(\beta)\cdot {\rm e}^{-2\beta K}, \quad {\rm as} \ n\to\infty. {\rm e}nd{equation} The function $h(\beta)$ in this asymptotic expression involves infinite series and Riemann zeta functions that are reminiscent of the reflected Gaussian random walk \cite{changperes,jllerch,cumulants}. Indeed, it follows from \cite[Theorem 3]{nd2} that our rescaled discrete queue converges under {\rm e}nd{equation}ref{bb} to a reflected Gaussian random walk. 
Hence, the tail distribution of our system in the regime {\rm e}nd{equation}ref{bb} should for large $n$ be well approximated by the tail distribution of the reflected Gaussian random walk. We return to this connection in Subsection \ref{subsec4.3}. Our approach thus relies on detailed knowledge about the distribution of all the poles of the PGF of $Q$, and in particular how this distribution scales with the asymptotic regime {\rm e}nd{equation}ref{aa}--{\rm e}nd{equation}ref{bb}. As it turns out, in contrast with classical DPA, this many-sources regime makes that all poles contribute to the asymptotic characterization of the tail behavior. Our saddle point method leads to an asymptotic expansion for the tail probabilities, of which the limiting form corresponds to the heavy-traffic limit, and pre-limit forms present refined approximations for pre-limit systems ($n<\infty$) in heavy traffic. Such refinements to heavy-traffic limits are commonly referred to as {{\rm e}m corrected diffusion approximations} \cite{siegmund,blanchetglynn,asmussen}. Compared with the studies that directly analyzed the Gaussian random walk \cite{changperes,jllerch,cumulants}, which is the scaling limit of our queue in the many-sources regime, we start from the pre-limit process description, and establish an asymptotic result which is valuable for a queue with a finite yet large number of sources. Starting this asymptotic analysis from the actual pre-limit process description is mathematically more challenging than directly analyzing the process limit, but in return gives valuable insights into the manner and speed at which the system starts displaying its limiting behavior. \noindent{\bf Outline of the paper.} In Section~\ref{sec2} we describe the discrete queue in more detail and present some preliminary results for its stationary queue length distribution. In Section~\ref{sec3} we give an overview of the results and the contour integration representation for the tail distribution. In Section~\ref{sec4}, we give a rigorous proof of the main result of the leading-order term using the dedicated saddle point method (Subsection~\ref{subsec4.1}), and of bounding the contour integral with integration paths shifted beyond the dominant pole (Subsection~\ref{subsec4.2}). In Section~\ref{subsec4.3} we elaborate on the connection between the discrete queue and the Gaussian random walk, and we present an asymptotic series for $\mathbb{P}(Q >N)$ comprising not only the dominant poles but also the dominated poles. \section{Model description and preliminaries}\langlebel{sec2} We consider a discrete stochastic model in which time is divided into periods of equal length. At the beginning of each period $k=1,2,3,...$ new demand $A_k$ arrives to the system. The demands per period $A_1,A_2,...$ are assumed independent and equal in distribution to some non-negative integer-valued random variable $A$. The system has a service capacity $s\in\mathbb{N}$ per period, so that the recursion \begin{equation} \langlebel{lind} Q_{k+1} = \max\{Q_k + A_k - s,0\},\qquad k=1,2,..., {\rm e}nd{equation} assuming $Q_1=0$, gives rise to a Markov chain $(Q_k)_{k\geq 1}$ that describes the congestion in the system over time. The PGF \begin{equation} \langlebel{e2} A(z)=\sum_{j=0}^{\infty} \mathbb{P}(A=j) z^j {\rm e}nd{equation} is assumed analytic in a disk $|z|<r$ with $r>1$, which implies that all moments of $A$ exist. 
We assume that $A_k$ is equal in distribution to the sum of work generated by $n$ sources, $X_{1,k}+...+X_{n,k}$, where the $X_{i,k}$ are for all $i$ and $k$ i.i.d.~copies of a random variable $X$, of which the PGF $X(z)=\sum_{j=0}^{\infty}\mathbb{P}(X=j)z^j$ has radius of convergence $r>1$, and \begin{equation} \label{e3} 0<\mu_A=n\mu=n X'(1)<s. \end{equation} Under the assumption (\ref{e3}) the stationary distribution $\lim_{k\to\infty}\mathbb{P}(Q_k=j)=\mathbb{P}(Q=j)$, $j=0,1,\ldots$ exists, with the random variable $Q$ defined as having this stationary distribution. We let \begin{equation} \label{e4} Q(z)=\sum_{j=0}^{\infty}\mathbb{P}(Q=j)z^j \end{equation} be the PGF of the stationary distribution. It is a well-known consequence of Rouch\'e's theorem that under \eqref{e3} $z^s-A(z)$ has precisely $s$ zeros in $|z|\leq1$, one of them being $z_0=1$. We proceed in this paper under the same assumptions as in \cite{britt1}. We assume that $|X(z)|<X(r_1)$, $|z|=r_1$, $z\neq r_1$, for any $r_1\in(0,r)$. Finally, we assume that the degree of $X(z)$ is larger than $s/n$. Under these conditions, $z_0$ is the only zero of $z^s-A(z)$ on $|z|=1$, and all others in $|z|\leq1$, denoted as $z_1,z_2,...,z_{s-1}$, lie in $|z|<1$. Furthermore, there are at most countably many zeros $Z_k$ of $z^s-A(z)$ in $1<|z|<r$, and there is precisely one, denoted by $Z_0$, with minimum modulus. There is the product form representation \cite{Bruneel1993,johanthesis} \begin{equation} \label{2.1} Q(z)=\frac{(s-\mu_A)(z-1)}{z^s-A(z)} \prod_{j=1}^{s-1} \frac{z-z_j}{1-z_j}, \end{equation} where the right-hand side of (\ref{2.1}) is analytic in $|z|<Z_0$ and has a first-order pole at $z=Z_0$. We have for the tail probability (using that $Q(1)=1$) for $N=0,1,...$ \begin{equation} \label{2.2} P(Q>N)=\sum_{i=N+1}^{\infty} P(Q=i)=C_{z^N} \Bigl[\frac{1-Q(z)}{1-z}\Bigr], \end{equation} where $C_{z^N}[f(z)]$ denotes the coefficient of $z^N$ of the function $f(z)$. By contour integration, Cauchy's theorem and $Q(1)=1$, we then get for $0<\varepsilon<Z_0-1$ \begin{eqnarray} \label{2.3} P(Q>N) & = & \frac{1}{2\pi i} \int\limits_{|z|=1+\varepsilon} \frac{1}{z^{N+1}}~\frac{1-Q(z)}{1-z} dz \nonumber \\[3.5mm] & = & \frac{1}{2\pi i} \int\limits_{|z|=R} \frac{1}{z^{N+1}}~\frac{1-Q(z)}{1-z} dz+\frac{c_0}{Z_0^{N+1}(1-Z_0)}, \end{eqnarray} where $c_0={\rm Res}_{z=Z_0}[Q(z)]$ and $R$ is any number between $Z_0$ and $\min_{k\neq0}|Z_k|$. When $n$ and $s$ are fixed, we have that the integral on the second line of (\ref{2.3}) is $O(R^{-N})$, and so there is the DPA \begin{equation} \label{2.4} P(Q>N)=\frac{c_0}{Z_0^{N+1}(1-Z_0)} (1+\mbox{exponentially small}),~~~~N\rightarrow\infty. \end{equation} In this paper we crucially rely on Pollaczek's integral representation for the PGF of $Q$ \begin{equation} \label{2.5} Q(z)=\exp\Big(\frac{1}{2\pi i} \int\limits_{|v|=1+\varepsilon} \ln \Bigl(\frac{z-v}{1-v}\Bigr) \frac{(v^s-A(v))'}{v^s-A(v)} dv\Big) \end{equation} that holds when $|z|<1+\varepsilon<Z_0$ (principal value of ln on $|v|=1+\varepsilon$). \section{Overview and results} \label{sec3} In order to force the discrete queue to operate in the critical many-sources regime, we shall assume throughout the paper the following relation between the number of sources $n$ and the capacity $s$: \begin{equation} \label{1.2} \frac{n\mu}{s}=1-\frac{\gamma}{\sqrt{s}} \end{equation} with $\gamma>0$ bounded away from 0 and $\infty$ as $s\rightarrow\infty$.
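As a concrete illustration of this setup (and not part of the analysis that follows), the short Python sketch below builds a toy instance with Bernoulli$(p)$ work per source, so that $A(z)=X(z)^n$ is a polynomial and the zeros of $z^s-A(z)$ can be computed directly. It then evaluates the classical DPA \eqref{2.4}, with the front factor obtained from the product form \eqref{2.1}, and compares it with tail probabilities obtained by iterating the recursion $Q_{k+1}=\max\{Q_k+A_k-s,0\}$. All parameter values ($p$, $n$, $\beta$), the truncation level and the tolerances are our own illustrative choices.
\begin{verbatim}
import numpy as np
from math import comb, sqrt

# Toy instance (illustrative choices only): Bernoulli(p) work per source.
p, n, beta = 0.5, 64, 1.0
mu, sig2 = p, p * (1.0 - p)                       # per-source mean and variance
s = int(np.ceil(n * mu + beta * sqrt(n * sig2)))  # capacity s = n*mu + beta*sigma*sqrt(n)

# A(z) = X(z)^n with X(z) = 1 - p + p*z, a polynomial of degree n.
a = np.array([comb(n, j) * p**j * (1.0 - p)**(n - j) for j in range(n + 1)])

# Zeros of z^s - A(z): s of them in |z| <= 1 (including z_0 = 1), the rest outside.
c = -a.copy(); c[s] += 1.0
roots = np.roots(c[::-1])                         # np.roots expects descending powers
roots = np.delete(roots, np.argmin(np.abs(roots - 1.0)))   # discard z_0 = 1
z_in = roots[np.abs(roots) < 1.0]                 # z_1, ..., z_{s-1}
Z_out = roots[np.abs(roots) > 1.0]
Z0 = Z_out[np.argmin(np.abs(Z_out))].real         # dominant pole, real and > 1

# Front factor c_0/(1 - Z_0) of the DPA, via the product form (2.1).
muA = n * mu
dZ0 = s * Z0**(s - 1) - np.polyval(np.polyder(a[::-1]), Z0)  # (z^s - A(z))' at Z_0
c0 = (s - muA) * (Z0 - 1.0) / dZ0 * np.prod((Z0 - z_in) / (1.0 - z_in)).real
front = c0 / (1.0 - Z0)

# Stationary distribution by iterating Q_{k+1} = max(Q_k + A_k - s, 0).
J = 400                                           # truncation of the state space
q = np.zeros(J); q[0] = 1.0
for _ in range(20000):
    r = np.convolve(q, a)                         # law of Q_k + A_k
    new = np.zeros(J)
    new[0] = r[: s + 1].sum()
    m = min(J - 1, len(r) - s - 1)
    new[1 : m + 1] = r[s + 1 : s + 1 + m]
    new /= new.sum()
    if np.abs(new - q).sum() < 1e-12:
        q = new
        break
    q = new

for N in (5, 10, 20, 40):
    print(f"N={N:3d}  exact={q[N + 1:].sum():.3e}  DPA={front * Z0**(-(N + 1)):.3e}")
\end{verbatim}
The same toy instance is reused in a second sketch below, following \eqref{319}, to compare the resulting many-sources asymptotics with these exact values.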
In this scaling regime, the zeros $z_j$ and $Z_k$ of $z^s-A(z)$ start clustering near $z=1$, as described in the next lemma (proved in the appendix). Let $z_j^*$ and $Z_k^*$ denote the complex conjugates of $z_j$ and $Z_k$, respectively. \begin{lem}\label{lemdis} For finite $j,k=1,2,...$ and $s\rightarrow\infty$, \begin{equation} \label{3.1} z_0=1,~~~~~~Z_0=1+\frac{2a_0b_0}{\sqrt{s}}+O(s^{-1}), \end{equation} \begin{equation} \label{3.2} z_j=z_{s-j}^{\ast}=1+\frac{a_0}{\sqrt{s}} (b_0-\sqrt{b_0^2-2\pi ij})+O(s^{-1}), \end{equation} \begin{equation} \label{3.3} Z_k=Z_{-k}^{\ast}=1+\frac{a_0}{\sqrt{s}} (b_0+\sqrt{b_0^2-2\pi ik})+O(s^{-1}) \end{equation} with \begin{equation} \label{3.4} a_0=\frac{\sqrt{2\mu}}{\sigma},~~~~~~b_0=\frac{\gamma\sqrt{\mu}}{\sigma\sqrt{2}} \end{equation} and principal roots in \eqref{3.2}-\eqref{3.4}. \end{lem} Due to this clustering phenomenon, the main reasoning that underpins the classical DPA cannot be carried over. Starting from the expression \eqref{2.4}, we need to investigate what becomes of the term $c_0/(1-Z_0)$; moreover, the validity of the exponentially small error term in (\ref{2.4}) and the actual $N$-range both become delicate matters that require detailed information about the distribution of the zeros, as in Lemma \ref{lemdis}. Let us first present a result that identifies the relevant $N$-range: \begin{prop} \label{prop4.6} \begin{equation} \label{4.42} \frac{1}{Z_0^{N+1}}=\exp\Bigl(\frac{-2L\gamma\mu}{\sigma^2}\Bigr)(1+O(s^{-1/2})) \end{equation} when $N+1=L\sqrt{s}$ with $L>0$ bounded away from 0 and $\infty$. \end{prop} \begin{proof} We have from (\ref{3.1}) and \eqref{3.4} that $Z_0=1+\frac{2\gamma\mu}{\sigma^2\sqrt{s}}+O(\frac{1}{s})$. Hence \begin{eqnarray} \label{4.43} \frac{1}{Z_0^{N+1}} = \exp\Bigl({-}L\sqrt{s} \ln \Bigl( 1+\frac{2\gamma\mu}{\sigma^2\sqrt{s}}+O(s^{-1})\Bigr)\Bigr) = \exp\Bigl(\frac{-2L\gamma\mu}{\sigma^2}+O(s^{-1/2})\Bigr) \end{eqnarray} when $L$ is bounded away from 0 and $\infty$, and this gives the result. \end{proof} From (\ref{2.1}) we obtain the representation \begin{equation} \label{3.5} \frac{c_0}{1-Z_0}={-} \frac{s-\mu_A}{sZ_0^{s-1}-A'(Z_0)} \prod_{j=1}^{s-1} \frac{Z_0-z_j}{1-z_j}. \end{equation} The next result will be proved in Section~\ref{sec4}. \begin{lem} \label{lem4.1} \begin{equation} \label{4.2} - \frac{s-\mu_A}{sZ_0^{s-1}-A'(Z_0)}=\Big(\frac{1}{Z_0}\Big)^{s-1}\Big(1+O(s^{-1/2})\Big). \end{equation} \end{lem} We thus get from (\ref{3.5}) and Lemma~\ref{lem4.1} \begin{equation} \label{4.10} \frac{c_0}{1-Z_0}=\frac{P(Z_0)}{P(1)} \Bigl(1+O(s^{-1/2})\Bigr), \end{equation} where \begin{equation} \label{4.11} P(Z)=\prod_{j=1}^{s-1} (1-z_j/Z)=\exp\Bigl(\sum_{j=1}^{s-1} \ln (1-z_j/Z)\Bigr) \end{equation} for $Z\in{\Bbb C}$, $|Z|\geq1$ (principal logarithm). To handle the product $P(Z)$, in Lemma~\ref{lem4.3} below, we evaluate $\ln P(Z)$ for $|Z|\geq1$ in terms of the contour integral \begin{equation} \label{4.12} I(Z)=\frac{1}{2\pi i} \int\limits_{|z|=1+\varepsilon} \frac{\ln (1-z^{-s}A(z))}{Z-z} dz, \end{equation} where $\varepsilon>0$ is such that $1<1+\varepsilon<Z_0$. \begin{lem} \label{lem4.3} Let $\varepsilon>0$, $1<1+\varepsilon<Z_0$ and $|Z|\geq1$.
Then \begin{equation} \label{4.13} \ln P(Z)=\left\{\begin{array}{lll} - \ln (1-Z^{-1})+I(Z), \ 1+\varepsilon < |Z|< r,\\ - \ln (1-Z^{-1})+\ln (1-Z^{-s}A(Z))+I(Z), \ 1<|Z|<1+\varepsilon,\\ ~~\:\ln (\gamma\sqrt{s})+I(1), \ Z=1. \end{array}\right. \end{equation} \end{lem} The dedicated saddle point method, as considered in \cite{britt1}, applied to $I(Z)$, with saddle point $z_{\rm sp}=1+\varepsilon$ of the function $g(z)={-}\ln z+\frac{n}{s} \ln (X(z))$, yields \begin{equation} \label{3.9} I(1)={-}\ln [Q(0)]+O(s^{-1/2}),\quad I(Z_0)=\ln [Q(0)]+O(s^{-1/2}). \end{equation} Combining (\ref{3.1}), (\ref{3.4}), (\ref{3.5}), (\ref{4.2}) and (\ref{3.9}) then gives one of our main results: \begin{prop} \label{prop4.4} \begin{equation} \label{4.16} \ln \Bigl(\frac{c_0}{1-Z_0}\Bigr)={-}\ln (4b_0^2)+2\ln [Q(0)]+O(s^{-1/2}). \end{equation} \end{prop} The next step consists of bounding the integral on the second line of (\ref{2.3}), which can be written as \begin{equation} \label{3.12} \frac{-1}{2\pi i} \int\limits_{|z|=R} \frac{Q(z)}{z^{N+1}(1-z)} dz, \end{equation} by choosing $R$ appropriately. To do this, we consider the product representation (\ref{2.1}) of $Q(z)$, and we want to choose $R$ such that $|z^s-A(z)|\geq C |z|^s$, $|z|=R$, for some $C>0$ independent of $s$. It will be shown in Section~\ref{sec4} that this is achieved by taking $R$ such that the curve $|z^s|=|A(z)|$, on which $Z_0$ and $Z_{\pm1}$ lie, is crossed near a point $z$ (also referred to as $Z_{\pm1/2}$), where $z^s$ and $A(z)$ have opposite sign. A further analysis, using again the dedicated saddle point method to bound the product $\prod_{j=1}^{s-1}$ in (\ref{2.1}), then yields that the integral in (\ref{3.12}) decays as $R^{-N}$. Finally, using the asymptotic information in (\ref{3.1})-(\ref{3.3}) for $Z_0$ and $Z_{\pm1}$, with $Z_{\pm1/2}$ lying midway between $Z_0$ and $Z_{\pm1}$, the integral on the second line of (\ref{2.3}) can be shown to have relative order $\exp({-}DN/\sqrt{s})$, for some $D>0$ independent of $s$, compared to the dominant-pole term in (\ref{2.4}). To summarize, we have now that \begin{equation}\label{317} \mathbb{P}(Q> N)= \frac{c_0}{1-Z_{0}}\Big(\frac{1}{Z_{0}}\Big)^{N+1} \Big(1+O({\rm e}^{-DN/\sqrt{s}})\Big), \end{equation} for some $D>0$ independent of $s$, $N=1,2,\ldots$. The DPA $c_0(1-Z_{0})^{-1}Z_{0}^{-N-1}$ of $\mathbb{P}(Q> N)$ thus has a relative error that decays exponentially fast. In Subsection~\ref{subGRW} the stationary queue length $Q$, considered in the many-sources regime, is shown to be connected to the Gaussian random walk. This connection will imply that the front factor of the DPA in \eqref{317} satisfies \begin{equation}\label{318} \frac{c_0}{1-Z_{0}}=H(b_0) \Big(1+O(s^{-1/2})\Big), \end{equation} where $\ln H(b_0)$ has a power series in $b_0$ with coefficients that can be expressed in terms of the Riemann zeta function. Combining this with Proposition \ref{prop4.6} and \eqref{317} yields \begin{equation}\label{319} \mathbb{P}(Q> N)=H(b_0)\exp\Big(-\frac{2L\gamma\mu}{\sigma^2}\Big)\Big(1+O(s^{-1/2})\Big) \end{equation} when $N+1=L\sqrt{s}$ with $L$ bounded away from $0$ and $\infty$.
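To indicate what \eqref{317}--\eqref{319} amount to numerically, the following continuation of the earlier sketch (again purely illustrative; it reuses the quantities computed there and anticipates the Riemann zeta series for the limiting front factor $H(b_0)=h(\beta)$ that appears later in the paper) prints, for a few values of $N$, the exact tail probability, the DPA \eqref{317} with the exact front factor, and the limiting form \eqref{319} with $L=(N+1)/\sqrt{s}$ as in Proposition \ref{prop4.6}. Since $s$ is only moderate in the toy instance, the $O(s^{-1/2})$ error in \eqref{319} remains visible, whereas \eqref{317} should already be quite sharp.
\begin{verbatim}
from math import exp, factorial, pi, sqrt
from mpmath import zeta

# Continuation of the earlier sketch: Z0, front, q, s, n, mu, sig2 are assumed
# to be in scope.  The series below is the zeta expansion of ln H(b0), valid
# for 0 < b0 < sqrt(2*pi); everything here is an illustration only.
gamma = (s - n * mu) / sqrt(s)                    # gamma from the scaling (1.2)
b0 = gamma * sqrt(mu) / (sqrt(2.0) * sqrt(sig2))  # b0 as in (3.4)
log_H = (2.0 * b0 / sqrt(pi)) * sum(
    float(zeta(0.5 - r)) * (-b0 * b0)**r / (factorial(r) * (2 * r + 1))
    for r in range(40))
H = exp(log_H)                                    # limiting front factor H(b0)

for N in (5, 10, 20, 40):
    L = (N + 1) / sqrt(s)
    exact = q[N + 1:].sum()                          # exact tail, earlier sketch
    dpa = front * Z0**(-(N + 1))                     # (317) with exact front factor
    limit = H * exp(-2.0 * L * gamma * mu / sig2)    # limiting form (319)
    print(f"N={N:3d}  exact={exact:.3e}  DPA={dpa:.3e}  limit={limit:.3e}")
\end{verbatim}
Printing \texttt{front} and \texttt{H} side by side likewise illustrates the front-factor comparison \eqref{318}.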
The leading term in {\rm e}nd{equation}ref{319} agrees with {\rm e}nd{equation}ref{onesix} when we identify \begin{equation}\langlebel{320} L=\sigma K/\sqrt{\mu}, \quad \gamma=\beta\sigma/\sqrt{\mu}, \quad s=n\mu+\beta\sigma\sqrt{n}\approx n\mu, \quad b_0=\beta/\sqrt{2} {\rm e}nd{equation} and $H(b_0)=h(\beta)$. In Subsection~\ref{subsubsec4.3.2} we extend for a fixed $M=1,2,\ldots$ the approach in Section \ref{sec4} by increasing the radius $R$ of the integration contour in {\rm e}nd{equation}ref{3.12} to $R_M$ such that the poles $Z_0, Z_{\pm 1},\ldots,Z_{\pm M}$ are inside $|z|=R_M$. this lead to \begin{eqnarray} \langlebel{321} P(Q>N)={\rm Re} \Bigl[\frac{c_0}{(1-Z_0) Z_0^{N+1}} + 2 \sum_{k=1}^M \frac{c_k}{(1-Z_k) Z_k^{N+1}}\Bigr] +O\Big(|Z_{M+1}|^{-N}\Big). {\rm e}nd{eqnarray} The front factors $c_k/(1-Z_k)$ in the series in {\rm e}nd{equation}ref{321} satisfy \begin{equation}\langlebel{322} \frac{c_k}{1-Z_k}=H_k(b_0) \Big(1+O(s^{-1/2})\Big), {\rm e}nd{equation} with $H_k(b_0)$ some explicitly defined integral. When $N+1=L\sqrt{s}$ with $L$ bounded away from $0$ and $\infty$, we find from {\rm e}nd{equation}ref{322} and Proposition \ref{prop4.6} that \begin{equation}\langlebel{323} \frac{c_k}{1-Z_k}=H(b_k){\rm e}xp\Big(-L a_0(\sqrt{b_0^2-2\pi i k}+b_0)\Big)\Big(1+O(s^{-1/2})\Big), {\rm e}nd{equation} compare with {\rm e}nd{equation}ref{318}, and it can be shown that this gives rise to an ${\rm e}xp(-DL/\sqrt{k})$ decay of the right-hand side of {\rm e}nd{equation}ref{323}. The results in {\rm e}nd{equation}ref{319} and {\rm e}nd{equation}ref{323} together give precise information as to how the DPA arises, with leading behavior from the dominant pole, and lower order refinements coming from the dominated poles. \section{DPA through contour integration} \langlebel{sec4} In this section we present the details of getting approximations of the tail probabilities using a contour integration approach as outlined in Section~\ref{sec3}. In Subsection~\ref{subsec4.1}, we concentrate on approximation of the front factor $c_0/(1-Z_0)$ and the dominant pole $Z_0$, and combine these to obtain an approximation of the leading-order term in (\ref{2.4}). This gives Lemma \ref{lem4.1}, Lemma \ref{lem4.3} and Proposition \ref{prop4.4}. In Subsection~\ref{subsec4.2} we assess and bound the integral on the second line of (\ref{2.3}) and thereby make precise what exponentially small in (\ref{2.4}) means in the present setting. \subsection{Approximation of the leading-order term} \langlebel{subsec4.1} \subsubsection{Proof of Lemma \ref{lem4.1}} From \begin{equation} \langlebel{4.3} Z_0^s=A(Z_0)=X^n(Z_0) ,~~\mu_A=n \mu=s\Bigl(1-\frac{\gamma}{\sqrt{s}}\Bigr) ,~~A'(Z_0)=n X'(Z_0) X^{n-1}(Z_0), {\rm e}nd{equation} we compute \begin{equation} \langlebel{4.4} \frac{s-\mu_A}{sZ_0^{s-1}-A'(Z_0)}=\frac{\gamma/\sqrt{s}}{1-(1-\gamma/\sqrt{s}) \dfrac{X'(Z_0)Z_0}{X'(1) X(Z_0)}}~\frac{1}{Z_0^{s-1}}. {\rm e}nd{equation} With the approximation {\rm e}nd{equation}ref{3.1}, written as \begin{equation} \langlebel{4.5} Z_0=1+\frac{d_0}{\sqrt{s}}+O(s^{-1}),~~~~~~d_0=\frac{2\gamma\mu}{\sigma^2}, {\rm e}nd{equation} we get \begin{eqnarray} \langlebel{4.6} X'(Z_0) = X'(1)+X''(1)(Z_0-1)+O(s^{-1}) = \mu+\frac{X''(1) d_0}{\sqrt{s}}+O(s^{-1}) {\rm e}nd{eqnarray} and \begin{eqnarray} \langlebel{4.7} X(Z_0) = X(1)+X'(1)(Z_0-1)+O(s^{-1}) = 1+\frac{\mu d_0}{\sqrt{s}}+O(s^{-1}). 
{\rm e}nd{eqnarray} Hence, by (\ref{4.5}--\ref{4.7}) and (\ref{A6}), \begin{eqnarray} \langlebel{4.8} & \mbox{} & 1-\Bigl(1-\frac{\gamma}{\sqrt{s}}\Bigr) \frac{X'(Z_0) Z_0}{X'(1) X(Z_0)} \nonumber \\[3mm] & & =~1-\Bigl(1-\frac{\gamma}{\sqrt{s}}\Bigr) \frac{\Bigl(\mu+\dfrac{X''(1) d_0}{\sqrt{s}}+O(s^{-1})\Bigr) \Bigl(1+\dfrac{d_0}{\sqrt{s}}+O(s^{-1})\Bigr)}{\mu\Bigl(1+\dfrac{\mu d_0}{\sqrt{s}}+O(s^{-1})\Bigr)} \nonumber \\[3mm] & & =~1-\bigl(1-\frac{\gamma}{\sqrt{s}}\Bigr)\Bigl(1+\frac{d_0}{\sqrt{s}} \Bigl(\frac{X''(1)}{\mu}+1-\mu\Bigr)+O(s^{-1})\Bigr) \nonumber \\[3mm] & & =~1-\bigl(1-\frac{\gamma}{\sqrt{s}}\Bigr)\Bigl(1+\frac{d_0}{\sqrt{s}}~\frac{\sigma^2}{\mu}+O(s^{-1})\Bigr)={-} \frac{\gamma}{\sqrt{s}}+O(s^{-1}), {\rm e}nd{eqnarray} where we have used $d_0$ of (\ref{4.5}) in the last step. This gives (\ref{4.2}). \subsubsection{Proof of Lemma \ref{lem4.3}} We have $|A(z)|<|z^s|$ when $z\neq1$, $1<|z|<Z_0$, and so $\ln (1-A(z)/z^s)$ is analytic in $z\neq1$, $1<|z|<Z_0$. When $|Z|>1+\varepsilon$, we have by partial integration and Cauchy's theorem \begin{eqnarray} \langlebel{4.14} I(Z) & = & \frac{1}{2\pi i} \int\limits_{|z|=1+\varepsilon} \ln \Bigl(1-\frac{z}{Z}\Bigr) \frac{(1-A(z)/z^s)'}{1-A(z)/z^s} dz \nonumber \\[3.5mm] & = & \sum_{j=0}^{s-1} \ln \Bigl(1-\frac{z_j}{Z}\Bigr)=\ln \bigl(1-\frac1Z\Bigr)+\ln P(Z). {\rm e}nd{eqnarray} This gives the upper-case formula in (\ref{4.13}), and the middle case follows in a similar manner by taking the residue at $z=Z$ inside $|z|=1+\varepsilon$ into account. For the lower case in (\ref{4.13}), we use the result of the middle case, in which we take $1<Z<1+\varepsilon$, $Z\downarrow1$. We have $I(Z)\rightarrow I(1)$ as $Z\downarrow1$, and \begin{eqnarray} \langlebel{4.15} & \mbox{} & \lim_{Z\downarrow1} \Bigl[{-}\ln \Bigl(1-\frac1Z\Bigr)+\ln \Bigl(1-\frac{X^n(Z)}{Z^s}\Bigr)\Bigr] \nonumber \\[3mm] & & =~\ln [(Z^s-X^n(Z))'|_{Z=1}]=\ln (s-n\mu)=\ln (\gamma\sqrt{s}), {\rm e}nd{eqnarray} and this completes the proof. \subsubsection{Proof of Proposition \ref{prop4.4}} By (\ref{4.10}) and Lemma~\ref{lem4.3} we have \begin{eqnarray} \langlebel{4.17} ]pace*{-1cm}\ln \Bigl(\frac{c_0}{1-Z_0}\Bigr) & = & \ln P(Z_0)-\ln P(1)+O(s^{-1/2}) \nonumber \\[3mm] & = & {-}\ln \Bigl(1-\frac{1}{Z_0}\Bigr)-\ln (\gamma\sqrt{s})+I(Z_0)-I(1)+O(s^{-1/2}) . {\rm e}nd{eqnarray} From {\rm e}nd{equation}ref{3.1} it follows that \begin{equation} \langlebel{4.18} -\ln \Bigl(1-\frac{1}{Z_0}\Bigr)-\ln (\gamma\sqrt{s})={-}\ln \bigl(\frac{2\gamma^2 \mu}{\sigma^2}\Bigr)+O(s^{-1/2}) ={-}\ln (4b_0^2)+O(s^{-1/2}), {\rm e}nd{equation} and so \begin{equation} \langlebel{4.19} \ln \Bigl(\frac{c_0}{1-Z_0}\Bigr)=I(Z_0)-I(1)-\ln (4b_0^2)+O(s^{-1/2}). {\rm e}nd{equation} Next, we consider the integral representation (\ref{4.12}) of $I(Z)$, where we take $\varepsilon$ such that \begin{equation} \langlebel{4.20} 1+\varepsilon=z_{\rm sp}=1+\frac{\gamma\mu}{\sigma^2\sqrt{s}}+O(s^{-1}), {\rm e}nd{equation} with $z_{\rm sp}$, see \cite[Section~3]{britt1}, the unique point $z\in(1,Z_0)$ such that \begin{equation} \langlebel{4.21} \frac{d}{dz} \Bigl[{-}\ln z+\frac{n}{s} \ln (X(z))\Bigr]=0. {\rm e}nd{equation} Observe that \begin{equation} \langlebel{4.22} z_{\rm sp}=\tfrac12 (1+Z_0)+O(s^{-1}),~~~~~~Z_0-z_{\rm sp}=z_{\rm sp}-1+O(s^{-1}), {\rm e}nd{equation} and this suggests that $I(Z_0)\approx{-}I(1)$, a statement made precise below, since the main contribution to $I(Z)$ comes from the $z$'s in (\ref{4.12}) close to $z_{\rm sp}$. 
\\ We have, see \cite{britt1}, \begin{equation} \langlebel{4.23} \ln [P(Q=0)]=\frac{1}{2\pi i} \int\limits_{|z|=z_{\rm sp}} \frac{\ln \Bigl(1-z^{-s}A(z)\Bigr)}{z(z-1)} dz. {\rm e}nd{equation} Now \begin{equation} \langlebel{4.24} \frac{1}{z-1}=\frac{1}{z(z-1)}+\frac1z, {\rm e}nd{equation} and \begin{equation} \langlebel{4.25} \int\limits_{|z|=z_{\rm sp}} \Bigl|\ln \Bigl(1-z^{-s}A(z)\Bigr)\Bigr| |dz|=O(s^{-1/2}), {\rm e}nd{equation} see \cite[Subsection~5.3]{britt1}. Hence, \begin{equation} \langlebel{4.26} I(1)=\frac{-1}{2\pi i} \int\limits_{|z|=z_{\rm sp}} \frac{\ln \Bigl(1-z^{-s}A(z)\Bigr)}{z-1} dz=\ln [P(Q=0)]+O(s^{-1/2}). {\rm e}nd{equation} As to $I(Z_0)$, we observe that, see (\ref{4.22}), \begin{equation} \langlebel{4.27} \frac{1}{Z_0-z}=\frac{1}{z-1}+2 \frac{z-\frac12 (1+Z_0)}{(Z_0-z)(z-1)}=\frac{1}{z-1}+2 \frac{z-z_{\rm sp}}{(Z_0-z)(z-1)}+O(1). {\rm e}nd{equation} Thus, \begin{equation} \langlebel{4.28} I(Z_0)={-}I(1)+2 \int\limits_{|z|=z_{\rm sp}} \frac{z-z_{\rm sp}}{(Z_0-z)(z-1)} \ln \Bigl(1-z^{-s}A(z)\Bigr) dz+O(s^{-1/2}). {\rm e}nd{equation} We next estimate the remaining integral in (\ref{4.28}). With the substitution $z=z(v)$, $-\frac12 \delta\leq v\leq\frac12 \delta$, we have $A(z(v))/z^s(v)=B {\rm e}xp({-}s{\rm e}ta v^2)$ with $0<B<1$ and ${\rm e}ta>0$ bounded away from 1 and 0, respectively, and \begin{equation} \langlebel{4.29} z(v)=z_{\rm sp}+iv+\sum_{n=2}^{\infty} c_n(iv)^n,~~~~~~-\tfrac12 \delta\leq v\leq\tfrac12 \delta, {\rm e}nd{equation} where $c_n$ are real. Then we get with exponentially small error \begin{eqnarray} \langlebel{4.30} \int\limits_{|z|=z_{\rm sp}} \frac{z-z_{\rm sp}}{(Z_0-z)(z-1)} \ln \Bigl(1-z^{-s}A(z)\Bigr) dz =\int\limits_{-\frac12\delta}^{\frac12\delta} \frac{(z(v)-z_{\rm sp}) z'(v)}{(Z_0-z(v))(z(v)-1)} \ln (1-B e^{-s{\rm e}ta v^2}) dv. {\rm e}nd{eqnarray} Now we get from (\ref{4.22}) and (\ref{4.29}) that \begin{eqnarray} \langlebel{4.31} (Z_0-z(v))(z(v)-1) & = & (z_{\rm sp}-1-iv+O(v^2))(z_{\rm sp}-1+iv+O(v^2)) \nonumber \\[3mm] & & +~O \Bigl(s^{-1} (z_{\rm sp}-1+iv+O(v^2))\Bigr) \nonumber \\[3mm] & = & |z_{\rm sp}-1|^2+v^2+O \Bigl(\Bigl(s^{-1}+v^2\Bigr)^{3/2}\Bigr). {\rm e}nd{eqnarray} Furthermore, \begin{equation} \langlebel{4.32} (z(v)-z_{\rm sp}) z'(v)={-}v+O(v^2). {\rm e}nd{equation} Thus \begin{equation} \langlebel{4.33} \frac{(z(v)-z_{\rm sp}) z'(v)}{(Z_0-z(v))(z(v)-1)}=\frac{-v+O(v^2)}{|z_{\rm sp}-1|^2+v^2+O\Bigl(\Bigl(s^{-1}+v^2\Bigr)^{3/2}\Bigr)}. {\rm e}nd{equation} Inserting this into the integral on the second line of (\ref{4.30}), we see that the $-v$ in (\ref{4.33}) cancels upon integration. Also \begin{equation} \langlebel{4.34} \int\limits_{-\frac12\delta}^{\frac12\delta} \frac{v^2}{|z_{\rm sp}-1|^2+v^2} \ln (1-B e^{-\frac12 s{\rm e}ta v^2}) dv=O(s^{-1/2}), {\rm e}nd{equation} and this finally shows that the integral in (\ref{4.30}) is $O(s^{-1/2})$. Then combining (\ref{4.19}), (\ref{4.26}), (\ref{4.28}), we get the result. \subsection{Bounding the remaining integral} \langlebel{subsec4.2} We have from (\ref{2.3}) \begin{equation} \langlebel{4.44} P(Q>N)=\frac{c_0}{Z_0^{N+1}(1-Z_0)}-\frac{1}{2\pi i} \int\limits_{|z|=R} \frac{Q(z)}{z^{N+1}(1-z)} dz, {\rm e}nd{equation} where $R\in(Z_0,|Z_{\pm1}|)$, and we intend to bound the integral at the right-hand side of (\ref{4.44}). We use in (\ref{4.44}) the $Q(z)$ as represented by the right-hand side of (\ref{2.1}) which is defined and analytic in $z$, $|z|<r$, $z\neq Z_k$. 
We write for $|z|<r$, $z\neq Z_k$ \begin{equation} \langlebel{4.45} \frac{Q(z)}{(1-z) z^{N+1}}=\frac{-1}{z^{N+2-s}}~\frac{s-\mu_A}{\rightarrowod_{j=1}^{s-1} (1-z_j)} ~\frac{1}{z^s-A(z)} \rightarrowod_{j=1}^{s-1} (1-z_j/z). {\rm e}nd{equation} Now $s-\mu_A=\gamma\sqrt{s}$, and by Lemma~\ref{lem4.3} and (\ref{4.26}), we have $\rightarrowod_{j=1}^{s-1} (1-z_j)=P(1)\geq C \gamma\sqrt{s}$ for some $C>0$ independent of $s$. Hence $(s-\mu_A)/\rightarrowod_{j=1}^{s-1} (1-z_j)$ is bounded in $s$. Next, for $|z|\geq Z_0$, we have by Lemma~\ref{lem4.3} \begin{equation} \langlebel{4.46} \rightarrowod_{j=1}^{s-1} (1-z_j/z)=\frac{z}{z-1} {\rm e}xp (I(z)), {\rm e}nd{equation} with $I(z)$ given by (\ref{4.12}) and admitting an estimate \begin{equation} \langlebel{4.47} |I(z)|=O \Big(|z-z_{\rm sp}|^{-1} \int\limits_{-\infty}^{\infty} \ln (1-B e^{-\frac12 s{\rm e}ta t^2}) dt\Big)=O(1) {\rm e}nd{equation} since $\sqrt{s}|z-z_{\rm sp}|$, $B\in(0,1)$ and ${\rm e}ta>0$ are all bounded away from 0. Therefore, there remains to be considered $(z^s-A(z))^{-1}$. We show below that there is a $C>0$, independent of $s$, such that \begin{equation} \langlebel{4.48} |z^s-A(z)|\geq C |z|^s {\rm e}nd{equation} when $z$ is on a contour $K$ as in Figure \langlebel{fig1}, consisting of a straight line segment \begin{equation} \langlebel{4.49} z=\xi+i{\rm e}ta,~~~~~~\xi={\rm Re} [\hat{Z}({\pm}\tfrac12)],~~~~~~{-} \frac{1}{\sqrt{s}} y_0\leq{\rm e}ta\leq\frac{1}{\sqrt{s}} y_0, {\rm e}nd{equation} and a portion of the circle \begin{equation} \langlebel{4.50} |z|=R=\sqrt{{\rm Re}^2 [\hat{Z}({\pm}\tfrac12)]+\dfrac{1}{s} y_0^2} {\rm e}nd{equation} that are joined at the points $({\rm Re}[\hat Z(\pm \tfrac12)],{\pm}\frac{1}{\sqrt{s}} y_0)$. Here \begin{equation} \langlebel{4.51} \hat{Z}(t)=1+\frac{a_0}{\sqrt{s}} ((b_0^2-2\pi it)^{1/2}+b_0), {\rm e}nd{equation} with $a_0,b_0>0$ given in (\ref{A10}) and independent of $s$, approximates the solution $z=Z(t)$, for real $t$ small compared to $s$, of the equation \begin{equation} \langlebel{4.52} \frac{n}{s} \ln X(z)-\ln z=\frac{2\pi it}{s} {\rm e}nd{equation} outside the unit disk, according to \begin{equation} \langlebel{4.53} Z(t)=\hat{Z}(t)+O\Bigl(\frac{t}{s}\Bigr). {\rm e}nd{equation} Thus on $K$ we have from (\ref{4.48}) \begin{equation} \langlebel{4.54} \Bigl|\frac{Q(z)}{(1-z) z^{N+1}}\Bigr|=O\Bigl(\frac{1}{(z-1) z^{N+1}}\Bigr), {\rm e}nd{equation} and we estimate \begin{align} \langlebel{4.55} & \Big|\frac{1}{2\pi i} \int\limits_{|z|=R} \frac{Q(z)}{z^{N+1}(1-z)} dz\Big| = \Big|\frac{1}{2\pi i} \int\limits_{z\in K} \frac{Q(z)}{z^{N+1}(1-z)} dz\Big| \nonumber \\ & = O \Big(\Bigl(\frac{1}{{\rm Re} [\hat{Z}({\pm}\frac12)]}\Bigr)^{N+1} \int\limits_{z\in K} \frac{|dz|}{|z-1|}\Big) = O \Big(\ln s\Bigl(\frac{1}{{\rm Re} [\hat{Z}({\pm}\frac12)]}\Bigr)^{N+1}\Big). {\rm e}nd{align} Here we have used that $|z-1|\geq{\rm Re} [\hat{Z}({\pm}\tfrac12)]-1\geq E/\sqrt{s}$, $z\in K$, for some $E>0$ independent of $s$. 
Observing that \begin{equation} \langlebel{4.56} \hat{Z}(0)=1+\frac{2a_0b_0}{\sqrt{s}}, {\rm e}nd{equation} \begin{equation} \langlebel{4.57} {\rm Re} [\hat{Z}({\pm}\tfrac12)]=1+\frac{a_0}{\sqrt{s}} [(\tfrac12 b_0^2+\tfrac12(b_0^4+\pi^2 )^{1/2})+b_0], {\rm e}nd{equation} we see that \begin{eqnarray} \langlebel{4.58} \bigg(\frac{\hat{Z}(0)}{{\rm Re} [\hat{Z}({\pm}\frac12)]}\bigg)^N & = & \Bigg(1-\frac{\dfrac{a_0}{\sqrt{s}} [(\frac12 b_0^2+\frac12(b_0^4+\pi^2)^{1/2})^{1/2}-b_0]} {1+\dfrac{a_0}{\sqrt{s}} [(\frac12 b_0^2+\frac12(b_0^4+\pi^2)^{1/2})^{1/2}+b_0]} \Bigg)^N \nonumber \\[3.5mm] & = & O({\rm e}xp({-}\hat{D}N/\sqrt{s})) {\rm e}nd{eqnarray} for some $\hat{D}>0$ independent of $s$. Hence, by (\ref{4.53}), we see that the relative error in (\ref{4.44}) due to ignoring the integral at the right-hand side is of order ${\rm e}xp({-}DN/\sqrt{s})$ with some $D>0$, independent of $s$. We show the inequality (\ref{4.48}) for $z\in K$ using the following property of $X$: there is a $\delta>0$ and a $\vartheta_1\in(0,\pi/2)$ such that for any $R\in[1,1+\delta]$ the function $|X(R e^{i\vartheta})|$ is decreasing in $|\vartheta|\in[0,\vartheta_1]$ while \begin{equation} \langlebel{4.59} \vartheta_1\leq|\vartheta|\leq\pi\Rightarrow |X(R e^{i\vartheta})|\leq |X(R e^{i\vartheta_1})|. {\rm e}nd{equation} This property follows from strict maximality of $|X(e^{i\vartheta})|$ in $\vartheta\in[{-}\pi,\pi]$ at $\vartheta=0$ and analyticity of $X(z)$ in the disk $|z|<r$ (with $r>1$). For the construction of the contour $K$ in (\ref{4.48}--\ref{4.49}), we consider the quantity \begin{equation} \langlebel{4.60} n \ln [X(1+v)]-\ln (1+v), {\rm e}nd{equation} where $v$ is of the form \begin{equation} \langlebel{4.61} v=\frac{2\gamma\mu}{\sigma^2\sqrt{s}}+\frac{x_0+iy}{\sqrt{s}}=\hat{Z}(0)-1+\frac{x_0+iy}{\sqrt{s}} {\rm e}nd{equation} with $x_0>0$ fixed and varying $y\in{\Bbb R}$. We choose $x_0$ such that the outer curve $Z(t)$ is crossed by $z=1+v$ near the points $Z({\pm}\frac12)$, where $z^s-A(z)$ equals $2z^s$. Thus, we choose \begin{equation} \langlebel{4.62} x_0={\rm Re} [\sqrt{s} (\hat{Z}({\pm}\tfrac12)-\hat{Z}_0)]={\rm Re} [a_0((b_0^2+\pi i)^{1/2}-b_0)]. {\rm e}nd{equation} We have, as in the analysis in the appendix, that \begin{eqnarray} \langlebel{4.63} n \ln [X(1+v)]-s \ln (1+v) =\frac{2\gamma\mu}{\sigma^2} x_0+(x_0^2-y^2)+2i\Bigl(\frac{\gamma\mu}{\sigma^2}-x_0\Bigr) y+O(sv^3)+O(v^2\sqrt{s}) . {\rm e}nd{eqnarray} With $x_0>0$ fixed and independent of $s$, see (\ref{4.62}), the leading part of the right-hand side in (\ref{4.63}) is independent of $s$ and describes, as a function of the real variable $y$, a parabola in the complex plane with real part bounded from above by its real value at $y=0$ and that passes the imaginary axis at the points $\pm\pi i$. Therefore, this leading part has a positive distance to all points $2\pi ik$, integer $k$. Now take $y_0$ such that \begin{equation} \langlebel{4.64} \frac{2\gamma\mu}{\sigma^2}+(x_0^2-y_0^2)={-} \Bigl(\frac{2\gamma\mu}{\sigma^2}+x_0^2\Bigr). {\rm e}nd{equation} In Figure \ref{fig1} we show the curve $K$ (heavy), the approximation $\hat Z(t)$ of the outer curve, and the choice $y_0={\rm e}ta_0\sqrt{s}$ for the case that $\gamma=1$, $\mu/\sigma^2=2$, $s=100$. It follows from the above analysis, with $v$ as in (\ref{4.61}), that \begin{equation} \langlebel{4.65} 1-\frac{X^n(1+v)}{(1+v)^s},~~~~~~-y_0\leq y\leq y_0, {\rm e}nd{equation} is bounded away from 0 and has a value $1-c,1-c^{\ast}$ at $y={\pm} y_0$, where $c$ is bounded away from 1 and $|c|<1$. 
Now write \begin{equation} \langlebel{4.66} R e^{i\vartheta_0}=\hat{Z}(0)+\frac{x_0+iy_0}{\sqrt{s}}=1+v_0. {\rm e}nd{equation} When $s$ is large enough, we have that $R\in[1,1+\delta]$ and $0\leq\vartheta_0\leq\vartheta_1$, where $\delta$ and $\vartheta_1$ are as above in (\ref{4.59}). We have \begin{equation} \langlebel{4.67} |X^n(1+v_0)|\leq |c| \ |1+v_0)|^s, {\rm e}nd{equation} and by (\ref{4.59}) and monotonicity of $|X(R e^{i\vartheta})|$, $\vartheta_0\leq|\vartheta|\leq\vartheta_1$, \begin{equation} \langlebel{4.68} |X^n(R e^{i\vartheta})|\leq |X^n(R e^{i\vartheta_0})|\leq |c| R^s,~~~~~~\vartheta_0\leq|\vartheta|\leq\pi. {\rm e}nd{equation} Therefore, (\ref{4.48}) holds on $K$ with \begin{equation} \langlebel{4.68a} C=\min \Bigl\{1-|c| , \min_{|y|\leq y_0} \Bigl| 1-\frac{X^n(1+v)}{(1+v)^s}\Bigr|\Bigr\} {\rm e}nd{equation} positive and bounded away from 0 as $s$ gets large. \begin{figure} \begin{center} \includegraphics[width= .5 \linewidth, angle =90 ]{fig1.pdf} {\rm e}nd{center} \caption{Integration curve $K$ consisting of line segment $z=\xi+i{\rm e}ta$, $-{\rm e}ta_0\leq{\rm e}ta\leq {\rm e}ta_0$, where $\xi={\rm Re}[\hat Z(\pm \tfrac12)]$ and ${\rm e}ta_0=y_0/\sqrt{s}$, and portion of the circle $|z|=R$ with $R=(\xi^2+{\rm e}ta_0^2)^{1/2}$. Choice of parameters: $\gamma=1$, $\mu/\sigma^2=2$, $s=100$. } \langlebel{fig1} {\rm e}nd{figure} \section{Correction terms and asymptotic expansion} \langlebel{subsec4.3} In this section we give a series expansion for the leading term in {\rm e}nd{equation}ref{4.16} involving the Riemann zeta function. We also show how to find an asymptotic series for $P(Q>N)$ as $N\rightarrow\infty$ of which the term involving the dominant pole is the leading term. Before we do so, we first discuss how this leading term is related to the Gaussian random walk and a result of Chang and Peres \cite{changperes}. \subsection{Connection with Gaussian random walk}\langlebel{subGRW} We know from \cite[Theorem 3]{nd2} that under the critical many-sources scaling, the rescaled queueing process converges to a reflected Gaussian random walk. The latter is defined as $(S_\beta(k))_{k\geq 0}$ with $S_\beta(0)=0$ and \begin{equation} S_\beta(k)=Y_1+\ldots+Y_k {\rm e}nd{equation} with $Y_1,Y_2,\ldots$ i.i.d.~copies of a normal random variable with mean $-\beta$ and variance 1. Assume $\beta>0$ (negative drift), and denote the all-time maximum of this random walk by ${M}_\beta$. Denote by $Q^{(s)}$ the stationary congestion level for a fixed $s$ (that arises from taking $k\to \infty$ in {\rm e}nd{equation}ref{lind}). Then, using $\rho_s=1-\gamma/\sqrt{s}$, with \begin{equation}\langlebel{gammachoice} \gamma=\frac{\beta\sigma\sqrt{s}}{\mu\sqrt{n}}, {\rm e}nd{equation} the spatially-scaled stationary queue length reaches the limit $Q^{(s)}/(\sigma\sqrt{n}) \stackrel{d}{\to} {M}_\beta$ as $s,n\to\infty$ (see \cite{jelenkovic,nd1,nd2}). The random variable ${M}_\beta$ was studied in \cite{changperes,jllerch,cumulants}. In particular, \cite[Thm.~1]{jllerch} yields, for $\beta<2\sqrt{\pi}$, \begin{equation} \langlebel{1.5} \mathbb{P}({M}_\beta=0)=\sqrt{2} \beta {\rm e}xp \Bigl\{\frac{\beta}{\sqrt{2\pi}} \sum_{r=0}^{\infty} \frac{\zeta(\frac12-r)({-}\frac12 \beta^2)^r} {r! (2r+1)}\Bigr\}, {\rm e}nd{equation} and from \cite{changperes} we have $\mathbb{P}({M}_\beta> K)=h(\beta,K) e^{-2\beta K}$ with \begin{equation} \langlebel{1.6} h(\beta,K)\rightarrow h(\beta)={\rm e}xp \Bigl\{\frac{\beta\sqrt{2}}{\sqrt{\pi}}\:\sum_{r=0}^{\infty} \frac{\zeta(\frac12-r)({-}\frac12 \beta^2)^r} {r! 
(2r+1)}\Bigr\}, {\rm e}nd{equation} exponentially fast as $K\rightarrow\infty$. Hence, there are the approximations \begin{equation} \langlebel{estimate} \mathbb{P}(Q> K \sqrt{n\sigma^2})\approx \mathbb{P}({M}_\beta>K)\approx h(\beta)\cdot {\rm e}^{-2\beta K}, \quad {\rm as} \ n\to\infty, {\rm e}nd{equation} where the second approximation holds for small values of $\beta$. We will now show how this second approximation in {\rm e}nd{equation}ref{estimate} follows from our leading term in the expansion. \begin{prop} \langlebel{prop4.5} \begin{equation} \langlebel{4.35} \ln \Bigl(\frac{c_0}{1-Z_0}\Bigr)=\frac{2b_0}{\sqrt{\pi}} \sum_{r=0}^{\infty} \frac{\zeta(\frac12-r)({-}b_0^2)^r}{r! (2r+1)}+ O(s^{-1/2}),~~~~~~0<b_0<\sqrt{2\pi}. {\rm e}nd{equation} {\rm e}nd{prop} \begin{proof}It is shown in \cite{britt1}, Subsection~5.3 that \begin{equation} \langlebel{4.37} \ln [Q(0)]=\ln [P(M_\beta=0)]+O(s^{-1/2}), {\rm e}nd{equation} in which we take the drift parameter $\beta$ according to \begin{equation} \langlebel{4.38} \beta=b_0\sqrt{2}=\gamma \sqrt{\dfrac{\mu}{\sigma^2}}. {\rm e}nd{equation} From \cite{cumulants} we have \begin{equation} \langlebel{4.39} \ln [P(M_\beta=0)]=\ln (2b_0)+\frac{b_0}{\sqrt{\pi}} \sum_{r=0}^{\infty} \frac{\zeta(\frac12-r)({-}b_0^2)^r}{r! (2r+1)},~~~~~0<b_0<\sqrt{2\pi}. {\rm e}nd{equation} Then from Proposition~\ref{prop4.4}, (\ref{4.37}) and (\ref{4.39}), we get the results in (\ref{4.35}). {\rm e}nd{proof} \subsection{Asymptotic series for $P(Q>N)$ as $N\rightarrow\infty$} \langlebel{subsubsec4.3.2} When inspecting the argument that leads to (\ref{2.3}), it is obvious that one can increase the radius $R$ of the integration contour to values $R_M$ between $|Z({\pm}M)|$ and $|Z({\pm}(M+1))|$ when $M=1,2,...$ is fixed. Here it must be assumed that $s$ is so large that $Z_k$ increases in $k=0,1,...,M+1$. Then, the poles of $Q(z)$ at $z=Z_{\pm k}$, $k=0,1,...,M $, are inside $|z|=R_M$, and we get \begin{eqnarray} \langlebel{4.69} P(Q>N)=\frac{c_0}{(1-Z_0) Z_0^{N+1}} + 2 \sum_{k=1}^M {\rm Re} \Bigl[\frac{c_k}{(1-Z_k) Z_k^{N+1}}\Bigr] - \frac{1}{2\pi i} \int\limits_{|z|=R_M} \frac{Q(z)}{(1-z) z^{N+1}} dz. {\rm e}nd{eqnarray} As in Subsection~\ref{subsec4.2}, one can argue that the integral on the second line of (\ref{4.69}) is relatively small compared to $|Z_M|^{-N-1}$ when $R_M$ is chosen between but away from $|Z_M|$ and $|Z_{M+1}|$. We now need the following result. \begin{lem} \langlebel{cor4.2} There holds \begin{equation} \langlebel{4.9} {-} \frac{s-\mu_A}{sZ_k^{s-1}-A'(Z_k)}=\frac{b_0}{\sqrt{b_0^2-2\pi ik}}~\frac{1}{Z_k^{s-1}} \Bigl(1+O\Bigl(\frac{1+|k|}{\sqrt{s}}\Bigr)\Bigr) {\rm e}nd{equation} when $k=o(s)$. {\rm e}nd{lem} \begin{proof} This follows from the appendix with a similar argument as in the proof of Lemma~\ref{lem4.1}. {\rm e}nd{proof} As to the terms in the series in (\ref{4.69}), we have for bounded $k$, see Lemma~\ref{cor4.2}, \begin{eqnarray} \langlebel{4.70} \frac{c_k}{1-Z_k} & = & - \frac{s-\mu_A}{sZ_k^{s-1}-A'(Z_k)} \rightarrowod_{j=1}^{s-1} \frac{Z_k-z_j}{1-z_j} \nonumber \\[3.5mm] & = & \frac{b_0}{\sqrt{b_0^2-2\pi ik}}\cdot\rightarrowod_{j=1}^{s-1} \frac{1-z_j/Z_k}{1-z_j}\cdot\Bigl( 1+O(s^{-1/2})\Bigr). {\rm e}nd{eqnarray} Furthermore, according to Lemma~\ref{lem4.3}, \begin{eqnarray} \langlebel{4.71} \ln \Bigl[\rightarrowod_{j=1}^{s-1} \frac{1-z_j/Z_k}{1-z_j}\Bigr] & = & I(Z_k)-I(1)-\ln \Bigl(1-\frac{1}{Z_k}\Bigr)-\ln (\gamma\sqrt{s}) \nonumber \\[3.5mm] & = & I(Z_k)-I(1)-\ln [2b_0(b_0+\sqrt{b_0^2-2\pi ik})]+ O(s^{-1/2}). 
{\rm e}nd{eqnarray} Thus, we get the following result. \begin{prop} \langlebel{prop4.7} For bounded $k\in{\Bbb Z}$, \begin{equation} \langlebel{4.72} \frac{c_k}{1-Z_k}=\frac{{\rm e}xp(I(Z_k)-I(1))}{2\sqrt{b_0^2-2\pi ik} (b_0+\sqrt{b_0^2-2\pi ik})} \Bigl(1+O(s^{-1/2})\Bigr). {\rm e}nd{equation} {\rm e}nd{prop} We aim at approximating $I(Z_k)$, showing, in particular, that $c_k/(1-Z_k)\neq0$ is bounded away from $0$ for bounded $k$ and large $s$. To that end, we conduct the dedicated saddle point analysis for $I(Z_k)$. We have for $|Z|\geq Z_0$, ${\rm Re}(Z)>z_{\rm sp}$, \begin{eqnarray} \langlebel{4.73} I(Z) & = & \frac{1}{2\pi i} \int\limits_{|z|=z_{\rm sp}} \frac{\ln \Bigl(1-z^{-s}A(z)\Bigr)}{Z-z} dz \nonumber \\[3.5mm] & = & \frac{1}{2\pi i} \int\limits_{-\frac12\delta}^{\frac12\delta} \frac{z(v)}{Z-z(v)} \ln (1-B e^{-\frac12 s{\rm e}ta v^2}) dv, {\rm e}nd{eqnarray} with exponentially small error in the last identity as $s\rightarrow\infty$. With $g(z)={-}\ln z+\frac{n}{s} \ln X(z)$, we let \begin{equation} \langlebel{4.74} B={\rm e}xp(sg(z_{\rm sp}))=e^{-b_0^2}\Bigl(1+O(s^{-1/2})\Bigr) ,~~~~{\rm e}ta=g''(z_{\rm sp})=\frac{\sigma^2}{\mu}+O(s^{-1/2}), {\rm e}nd{equation} and $z(v)$ is as in (\ref{4.29}) and defined implicitly by $g(z(v))=g(z_{\rm sp})-\frac12 v^2g''(z_{\rm sp})$. We then find, by using $z(v)=z_{\rm sp}+iv+O(v^2)$ and $z'(v)=i+O(v)$, that \begin{eqnarray} \langlebel{4.75} I(Z) & = & \frac{-1}{2\pi i} \int\limits_{-\infty}^{\infty} \frac{\ln (1-B e^{-\frac12 s{\rm e}ta v^2})}{v+i(Z-z_{\rm sp})} dv+O(s^{-1/2}) \nonumber \\[3.5mm] & = & \frac{-1}{2\pi i} \int\limits_{-\infty}^{\infty} \frac{\ln (1-B e^{-t^2})}{t+i(Z-z_{\rm sp}) \sqrt{s{\rm e}ta/2}} dt+ O(s^{-1/2}), {\rm e}nd{eqnarray} where in the last step the substitution $t=v \sqrt{s{\rm e}ta/2}$ has been made. Combining in the last integrand in (\ref{4.75}) the values at $t$ and $-t$ for $t\geq0$, we get the following result. \begin{prop} \langlebel{prop4.8} For $|Z|\geq Z_0$, ${\rm Re}(Z)>z_{\rm sp}$, \begin{equation} \langlebel{4.76} I(Z)=J(d)+O(s^{-1/2}), {\rm e}nd{equation} where $d=(Z-z_{\rm sp}) \sqrt{s{\rm e}ta/2}$, and \begin{equation} \langlebel{4.77} J(d)=\frac{1}{\pi} \int\limits_0^{\infty} \frac{d}{t^2+d^2} \ln (1-B e^{-t^2}) dt. {\rm e}nd{equation} {\rm e}nd{prop} In the context of Proposition~\ref{prop4.7}, we consider \begin{equation} \langlebel{4.78} d=d_k=(Z_k-z_{\rm sp}) \sqrt{ s{\rm e}ta/2 }, {\rm e}nd{equation} with \begin{equation} \langlebel{4.79} z_{\rm sp}=1+\frac{a_0b_0}{\sqrt{s}}+O(s^{-1}),~~~~~~Z_k=1+\frac{a_0}{\sqrt{s}} (\sqrt{b_0^2-2\pi ik}+b_0)+O(s^{-1}). {\rm e}nd{equation} Using the definitions of $a_0$, $b_0$ in (\ref{A10}) and ${\rm e}ta$ in (\ref{4.74}), we get \begin{equation} \langlebel{4.80} d_k=\hat{d}_k+O(s^{-1/2})~;~~~~~~\hat{d}_k=(b_0^2-2\pi ik)^{1/2}. {\rm e}nd{equation} We have that \begin{equation} \langlebel{4.81} |\hat{d}_k|\geq b_0,~~~~~~{\rm arg}(\hat{d}_k)\in({-}\tfrac14 \pi,\tfrac14 \pi), ~~~~~~k\in{\Bbb Z}. {\rm e}nd{equation} Since for $t\geq0$ and ${\rm arg}(d)\in({-}\frac14 \pi,\tfrac14 \pi)$, we have \begin{equation} \langlebel{4.82} \ln (1-B e^{-t^2})<0,~~~~~~{\rm arg}\Bigl(\frac{d}{t^2+d^2}\Bigr)\in({-}\tfrac14 \pi,\tfrac14 \pi), {\rm e}nd{equation} we see that we have complete control on the quantities $J(\hat{d}_k)$ (also note (\ref{4.74}) for this purpose). Using that $-I(1)=I(Z_0)+O(s^{-1/2})$, see (\ref{4.28}), we get the following result. 
\begin{prop} \label{prop4.9} For bounded $k\in{\Bbb Z}$, \begin{equation} \label{4.83} \frac{c_k}{1-Z_k}=\frac{\exp(J(\hat{d}_k)+J(\hat{d}_0))}{2\hat{d}_k(\hat{d}_k+\hat{d}_0)} +O(s^{-1/2}), \end{equation} where the leading quantity in (\ref{4.83}) is nonzero and finite, $\hat{d}_k$ is given in (\ref{4.80}) with $\hat{d}_0=b_0$, and $J$ is given in (\ref{4.77}). \end{prop} \begin{thm} \label{thm4.10} There is the asymptotic series \begin{equation} \label{4.84} P(Q>N)\sim{\rm Re} \Bigl[\frac{c_0}{(1-Z_0) Z_0^{N+1}}+2 \sum_{k=1}^{\infty} \frac{c_k}{(1-Z_k) Z_k^{N+1}}\Bigr], \end{equation} where the ratio of the terms in the series with index $M$ and $M-1$ is $O(|Z_{M-1}/Z_M|^N)$. \end{thm} \begin{proof} This follows from (\ref{4.69}), in which the integral is $o(|Z_M|^{-N})$ and the term with $k=M$ is $O(|Z_M|^{-N})$, while the reciprocal of the term with $k=M-1$ is $O(|Z_{M-1}|^{N})$ by Proposition~\ref{prop4.9}. In the consideration of the terms with $k=M-1,M$, it is tacitly assumed that $s$ is so large that $|Z_k|$, $k=0,1,...,M$, is a strictly increasing sequence. \end{proof} \appendix \section{Proof of Lemma \ref{lemdis}} We consider the zeros $z_j$, $j=0,1,...,s-1$, and $Z_k$, $k\in{\Bbb Z}$, of the function $z^s-A(z)$ in the unit disk $|z|\leq1$ and in the annulus $1<|z|<r$, respectively, in particular those that are relatively close to 1. These zeros are elements of the set $S_{A,s}=\{z\in{\Bbb C} | |z|<r,~|z^s|=|A(z)|\}$. For $z\in S_{A,s}$, we have that $\ln (z^s X^{-n}(z))$ is purely imaginary. We thus consider the equation \begin{equation} \label{A1} s \ln z=n \ln X(z)+2\pi it \end{equation} with $z$ near 1 and $t$ small compared to $s$. Writing \begin{equation} \label{A2} u=2\pi t, ~~~~~~z=1+v, \end{equation} we get by Taylor expansion around $z=1$ the equation \begin{equation} \label{A3} s(v-\tfrac12 v^2+O(v^3))=n \ln (1+X'(1)v+\tfrac12 X''(1)v^2+O(v^3))+iu. \end{equation} Dividing by $s$ and using that $\frac{n}{s} X'(1)=1-\gamma/\sqrt{s}$ yields \begin{equation} \label{A4} v-\tfrac12 v^2+O(v^3)=\Bigl(1-\frac{\gamma}{\sqrt{s}}\Bigr) \Bigl(v+\frac{X''(1)-(X'(1))^2}{2X'(1)} v^2+O(v^3)\Bigr)+i \frac{u}{s}, \end{equation} i.e., \begin{equation} \label{A5} - \frac{\gamma}{\sqrt{s}} v+\frac{\sigma^2}{2\mu} v^2+i \frac{u}{s}= O\Bigl(\frac{v^2}{\sqrt{s}}\Bigr)+O(v^3), \end{equation} where we have used that \begin{equation} \label{A6} \mu=X'(1), ~~~~~~\sigma^2=X''(1)-(X'(1))^2+X'(1)>0. \end{equation} Dividing in (\ref{A5}) by $\sigma^2/2\mu$ and completing a square, we get \begin{equation} \label{A7} \Bigl(v-\frac{\gamma\mu}{\sigma^2\sqrt{s}}\Bigr)^2=\Bigl(\frac{\gamma\mu}{\sigma^2\sqrt{s}}\Bigr)^2 \Bigl(1-\frac{2iu\sigma^2}{\gamma^2\mu}\Bigr)+O\Bigl(\frac{v^2}{\sqrt{s}}\Bigr)+O(v^3). \end{equation} Taking square roots on either side of (\ref{A7}) and using that the leading term at the right-hand side of (\ref{A7}) has order $(1+|u|)/s$, we get \begin{equation} \label{A8} v=\frac{\gamma\mu}{\sigma^2\sqrt{s}}\pm\frac{\gamma\mu}{\sigma^2\sqrt{s}} \Bigl( 1-\frac{2iu\sigma^2}{\gamma^2\mu}\Bigr)^{1/2}+O\Bigl(\frac{|v|^2+|v|^3\sqrt{s}} {\sqrt{1+|u|}}\Bigr). \end{equation} Irrespective of the $\pm$-sign, the leading part of the right-hand side of (\ref{A8}) has order $((1+|u|)/s)^{1/2}$ (and even $(|u|/s)^{1/2}$ in the case of the $-$-sign), and the $O$-term has order $(1+|u|)/s$, which is $o(((1+|u|)/s)^{1/2})$ as long as $|u|/s=o(1)$.
Thus, in that regime of $u$, we have \begin{eqnarray} \label{A9} z=1+v & = & 1+\frac{\gamma\mu}{\sigma^2\sqrt{s}} \Bigl(1\pm\Bigl( 1-\frac{2iu\sigma^2}{\gamma^2\mu}\Bigr)^{1/2}\Bigr)+O\Bigl(\frac{1+|u|}{s}\Bigr) \nonumber \\[3mm] & = & 1\pm\frac{a_0}{\sqrt{s}} ((b_0^2-iu)^{1/2}\pm b_0)+O\Bigl(\frac{1+|u|}{s}\Bigr), \end{eqnarray} where we have inserted \begin{equation} \label{A10} a_0=\Bigl(\frac{2\mu}{\sigma^2}\Bigr)^{1/2}, ~~~~~~b_0=\Bigl(\frac{\gamma^2\mu}{2\sigma^2}\Bigr)^{1/2}=\tfrac12 \gamma a_0. \end{equation} In the case of the minus sign in (\ref{A9}), the $O$-term may be replaced by $O(|u|/s)$. Choosing $u=2\pi j$ with $j=0,1,...$ and $j=o(s)$, we get from (\ref{A9}) with the minus sign, \eqref{3.2}. Choosing $u=2\pi k$ with $k\in{\Bbb Z}$ and $k=o(s)$, we get from (\ref{A9}) with the plus sign, \eqref{3.3}. \end{document}
math
55,906
\begin{document} \if11 { \title{\bf Nonstationary Nearest Neighbor Gaussian Process: hierarchical model architecture and MCMC sampling} \author{Sébastien Coube-Sisqueille$^{a,1}$ \thanks{ The authors gratefully acknowledge \textit{E2S: Energy and Environment Solutions}}\hspace{.2cm}, Sudipto Banerjee$^{b,2}$, and Benoît Liquet$^{a, c, 3}$ \\ \small $^a$ Laboratoire de Mathématiques et de leurs Applications,\\ \small Université de Pau et des Pays de l'Adour, Pau, France\\ \small $^b$ Department of Biostatistics, University of California, Los Angeles, United States of America\\ \small $^{c}$ School of Mathematical and Physical Sciences, Macquarie University, Sydney, Australia\\ \small $^{1}$ [email protected] $^{2}$ [email protected] $^{3}$ [email protected] } \maketitle } \fi \if01 { \begin{center} {\LARGE\bf Nonstationary Nearest Neighbor Gaussian Process: hierarchical model architecture\\ and MCMC sampling} \end{center} } \fi \begin{abstract} Nonstationary spatial modeling presents several challenges including, but not limited to, computational cost, the complexity and lack of interpretation of multi-layered hierarchical models, and the challenges in model assessment and selection. This manuscript develops a class of nonstationary Nearest Neighbor Gaussian Process (NNGP) models. NNGPs are a good starting point to address the problem of the computational cost because of their accuracy and affordability. We study the behavior of NNGPs that use a nonstationary covariance function, exploring their properties and the impact of ordering on the effective covariance induced by NNGPs. To simplify spatial data analysis and model selection, we introduce an interpretable hierarchical model architecture, where, in particular, we make parameter interpretation and model selection easier by integrating stationary range, nonstationary range with circular parameters, and nonstationary range with elliptic parameters within a coherent probabilistic structure. Given the NNGP approximation and the model framework, we propose a MCMC implementation based on Hybrid Monte-Carlo and nested interweaving of parametrizations. We carry out experiments on synthetic data sets to explore model selection and parameter identifiability and assess inferential improvements accrued from the nonstationary model. Finally, we use those guidelines to analyze a data set of lead contamination in the United States of America. \end{abstract} \noindent {\it Keywords:} Bayesian hierarchical models; Hybrid Monte-Carlo; Interweaving; Nearest-Neighbor Gaussian processes; Nonstationary spatial modeling. \def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{1} \spacingset{1.45} \section{Introduction} \label{sec:intro} Bayesian hierarchical models for analyzing spatially and temporally oriented data are widely employed in scientific and technological applications in the physical, environmental and health sciences \citep{cressie2015statistics, banerjee2014hierarchical, gelfand2019handbook}. Such models are constructed by embedding a spatial process within a hierarchical structure, \begin{equation}\label{eq: generic_paradigm} [\mbox{data}\,|\, \mbox{process},\; \mbox{parameters}]\times [\mbox{process}\,|\, \mbox{parameters}]\times [\mbox{parameters}]\;, \end{equation} which specifies the joint probability law of the data, an underlying spatial process and the parameters. 
The process in (\ref{eq: generic_paradigm}) is a crucial inferential component that introduces spatial and/or temporal dependence, allows us to infer about the underlying data generating mechanism and to carry out predictions over entire spatial-temporal domains. Point-referenced spatial data, which is our focus here, refers to measurements over a set of locations with fixed coordinates. These measurements are assumed to arise as a partial realization of a spatial process over the finite set of locations. A stationary Gaussian process is a conspicuous specification in spatial process models. Stationarity imposes a simplifying assumption on the dependence structure of the process, namely that the association between measurements at any two points is a function of the separation between the two points. While this assumption is unlikely to hold in most scientific applications, stationary Gaussian process models are easier to compute. Also, they can effectively capture spatial variation and substantially improve the predictive inference that is widely sought in environmental data sets. The aforementioned references provide several examples of stationary Gaussian process models and their effectiveness. Nonstationary spatial models relax assumptions of stationarity and can deliver wide-ranging benefits to inference. For example, when variability in the data is a complex function of space composed of multiple locally varying processes, the customary stationary covariance kernels may be inadequate. Here, richer and more informative covariance structures in nonstationary processes, while adding complexity, may be more desirable by improving smoothing, goodness of fit and predictive inference. Nonstationary spatial models have been addressed by a number of authors \citep[chapter 9]{higdon1998process, fuentes2002spectral, paciorek2003nonstationary, PP, cressie2008fixed, yang2021bayesian, risser2015regression, risser2016nonstationary, fuglstad2015does, Handbook_Spatial_Stats}. The richness sought in nonstationary models has been exemplified in a number of the above references. \citet{paciorek2003nonstationary} and \citet{kleiber2012nonstationary} introduce nonstationarity by allowing the parameters of the Mat\'{e}rn class to vary with location, yielding local variances, local ranges and local geometric anisotropies. Such ideas have been extended and further developed in a number of different directions but have not been devised for implementation on massive data sets of the order of $10^5$ locations or more. For example, recent works have addressed data sets of the order of hundreds \citep{risser2015regression, ingebrigtsen2015estimation, heinonen2016non} or thousands \citep{fuglstad2015does} of locations, but this is modest with respect to the size of commonly encountered spatial data \citep[see the examples in][]{NNGP, heaton2019case, General_Framework}. A second challenge with nonstationary models is overparametrization arising from complex space-varying covariance kernels. This can lead to weakly identifiable models that are challenging to interpret and difficult to estimate. This also complicates model evaluation and selection as inference is very sensitive to the specifications of the model.
We devise a new class of nonstationary spatial models for massive data sets that extends Bayesian hierarchical models based on directed acyclic graphs (DAGs), such as the Nearest Neighbor Gaussian Process models \citep{NNGP} and, more generally, the family of Vecchia approximations \citep{General_Framework}, to nonstationarity, which allows us to exploit their attractive computational and inferential properties \citep{General_Framework, finley2019efficient, Guinness_permutation_grouping}. The underlying idea is to endow the nonstationary process model from \citet{paciorek2003nonstationary} with NNGP specifications on the processes defining the parameters. Our approach relies upon matrix logarithms to specify processes for the elliptic covariance parameters of \citet{paciorek2003nonstationary}. The resulting parametrization is sparser than that of \citet{paciorek2003nonstationary} or \citet{risser2015regression} and is a natural extension of the usual logarithmic prior for positive parameters such as the marginal variance, the noise variance, and the range when it is not elliptic. We embed this nonstationary NNGP in a coherent and interpretable hierarchical Bayesian model framework as in \citet{heinonen2016non}, but differ in our focus on modeling large spatial data sets. A key challenge is learning about the nonstationary covariance processes. We pursue a Hamiltonian Monte Carlo (HMC) algorithm adapted from \citet{heinonen2016non}. Here, we draw distinctions from \citet{heinonen2016non}, who used a full GP and classical matrix calculus that are impracticable for handling massive data sets and, specifically, for NNGP or other DAG-based models. We devise such algorithms specifically for NNGP models to achieve computational efficiency. We also differ from \citet{heinonen2016non} in that we pursue hierarchical latent process modeling. Estimating the latent field \citep{finley2019efficient} allows us to model non-Gaussian responses as well. In order to obtain an efficient algorithm, we hybridize the approach of \citet{heinonen2016non} with the interweaving strategies of \citet{yu2011center, filippone2013comparative}. We implement a nested interweaving strategy that was envisioned by \cite{yu2011center}, but not applied to realistic models as far as we know. Our Gibbs sampler otherwise closely follows \citet{coube2020improving}, which is itself a tuned version of \citet{NNGP} using elements from \citet{yu2011center} and \citet{Gonzalez_parallel_gibbs} to improve the computational efficiency. We address the problem of interpretability with a parsimonious and readable parametrization of the nonstationary covariance structure that integrates random and fixed effects. We construct a nested family of models, where the simpler models are merely special states of the complex models. While we do not develop automatic model selection for nonstationarity, we observe through experiments on synthetic data sets that a complex model that is unduly used on simple data will not overfit but rather degenerate towards a state corresponding to a simpler model. This behavior allows over-modeling to be detected from the MCMC samples without waiting for full convergence. The balance of the article proceeds as follows. Section~\ref{section:nonstationary_NNGP} outlines the covariance and data models, and the properties of a nonstationary NNGP density. Those elements are put together into a Bayesian hierarchical model presented in Section \ref{sec:model}.
Section~\ref{sec:mcmc_strategy} details the MCMC implementation of the model, with two pillars: the Gibbs sampler architecture using interweaving of parametrizations in section~\ref{sec:interweaving}, and the use of HMC in section~\ref{sec:HMC_nonstat}. In Section \ref{section:nonstationary_data_analysis} we focus on application: we use experiments on synthetic data to test the properties of the model. We use the model to analyze a data set of lead contamination in the US mainland. Section \ref{section:conclusion} summarizes our proposal and lays out the open problems arising from where we stand. \section{Nonstationary Nearest Neighbor Gaussian Process Space Time Model}\label{section:nonstationary_NNGP} \subsection{Process and response models} Let $\mathcal{S} = \{s_1, s_2,\ldots,s_n\}$ be a set of $n$ spatial locations indexed in a spatial domain $\mathcal{D}$, where $\mathcal{D} \subset \mathbb{R}^d$ with $d \in \{1,2,3\}$. For any $s\in \mathcal{D}$ we envision a spatial regression model \begin{equation}\label{equation:nonstat_gaussian} z(s) = x(s)^{\scriptsize{\mathrm{T}}}\beta + w(s) + \epsilon(s)\;, \end{equation} where $z(s)$ is an outcome variable of interest, $x(s)$ is a $p\times 1$ vector of explanatory variables or predictors, $\beta$ is the corresponding $1 \times p$ vector of fixed effects coefficients, $w(s)$ is a latent spatial process and $\epsilon(s)$ is noise attributed to random disturbances. In full generality, the noise will be modeled as heteroskedastic so that $\epsilon(s)\stackrel{ind}{\sim} \mathcal{N}\left(0, \tau^2(s)\right)$ while the latent process $w(s)$ is customarily modeled using a Gaussian process over $\mathcal{D}$. Therefore, \begin{equation} \label{equation:nonstat_latent_field} w(\mathcal{S}) := (w(s_1), w(s_2),\ldots,w(s_n))^{\scriptsize{\mathrm{T}}} \sim\mathcal{N}(0, \Sigma(\mathcal{S}))\;, \end{equation} where the elements of the $n\times n$ covariance matrix $\Sigma(\mathcal{S})$ are determined from a spatial covariance function $K(s,s')$ defined for any pair of locations $s$ and $s'$ in $\mathcal{D}$. In full generality, and what this manuscript intends to explore, the covariance function can accommodate spatially varying parameters to obtain nonstationarity. The $(i,j)$-th element of $\Sigma(\mathcal{S})$ is \begin{equation} \label{equation:nonstat_covariance} \Sigma(s_i, s_j) = K(s_i, s_j) = \sigma(s_i)\sigma(s_j) K_0(s_i, s_j; \alpha(s_i), \alpha(s_j)), \end{equation} where $\sigma(s_1\ldots s_n) := \{\sigma(s_i) : i=1,\ldots,n\}$ is a collection of (positive) spatially varying marginal standard deviations, $K_0(s,s'; \{\alpha(s),\alpha(s')\})$ is a valid spatial correlation function defined for any pair of locations $s$ and $s'$ in $\mathcal{D}$ with two spatial range parameters $\alpha(s)$ and $\alpha(s')$ that vary with the locations. Later on, for two sets of locations $a \in \mathcal{S}$ and $b \in \mathcal{S}$, we call $\Sigma(a,b)$ the rectangular submatrix of $\Sigma$ obtained by picking the rows and columns whose indices correspond respectively to those of $a$ and $b$ in $\mathcal{S}$. We also abbreviate $\Sigma(a,a)$ into $\Sigma(a)$. These parameters can be either positive-definite matrices offering a locally anisotropic nonstationary covariance structure or positive real numbers specifying a locally isotropic nonstationary range. 
For example, \citet{paciorek2003nonstationary} proposed a valid class of nonstationary covariance functions \begin{equation}\label{equation:covfun_aniso} K_0(s, s'; A(s), A(s')) = \frac{2^{d/2}|A(s)|^{1/4}|A(s')|^{1/4}}{|A(s)+A(s')|^{1/2}} K_{i}\left(d_M\left(s, s', (A(s) + A(s'))/2 \right)\right), \end{equation} where $A(s)$ and $A(s')$ are anisotropic spatially-varying range matrices, $d$ is the dimension of the space-time domain, $d_M(\cdot, \cdot, \cdot)$ is the Mahalanobis distance and $K_i$ is an isotropic correlation function. If $A(\cdot)$ does not vary by location, the covariance structure is anisotropic but stationary. A nonstationary correlation function is obtained by setting $A(s) = \alpha(s) I_d$, \begin{equation}\label{equation:covfun_iso} K_0(s, s'; \alpha(s), \alpha(s')) = \left(\frac{\sqrt{2}\alpha(s)^{1/4}\alpha(s')^{1/4}}{(\alpha(s)+\alpha(s'))^{1/2}}\right)^d K_{i}\left(d_E(s, s')/\left((\alpha(s)+\alpha(s'))/2 \right)\right)\;, \end{equation} where $d_E(\cdot, \cdot)$ is the Euclidean distance (Mahalanobis distance with matrix $I_d$). Spatial process parameters in isotropic covariance functions are not consistently estimable under fixed-domain asymptotic paradigms \citep{zhang2004inconsistent}. Therefore, irrespective of sample size, no function of the data can converge in probability to the value of the parameter from an oracle model. Irrespective of how many locations we sample, the effect of the prior on these parameters will not be eliminated in Bayesian inference. This can be addressed using penalized complexity priors to reduce the ridge of the equivalent range-marginal variance combinations to one of its points \citep{pc_prior_fuglstad2015interpretable}. The covariance function sharply drops to $0$, so the observations that inform about the covariance parameters at a location tend to cluster around the site. Nonstationary models are significantly more complex. The parameters specifying the spatial covariance function are functions of every location in $\mathcal{D}$. These form uncountable collections and, hence, inference will require modeling them as spatial processes. This considerably exacerbates the challenges surrounding identifiability and inference for these completely unobserved processes. Asymptotic inference is precluded due to the lack of regularity conditions. Bayesian inference, while offering fully model-based solutions for completely unobserved processes, will also need to obviate the computational hurdles arising from (i) weakly identified processes, which result in poorly behaved MCMC algorithms, and (ii) scalability of inference to massive data sets. We address these issues using sparsity-inducing spatial process specifications. \begin{comment} \begin{enumerate} \item Combining a nonstationary marginal variance and range models sounds attractive, however we have concerns about the possibility to identify the two parameters. Identification is a problem for stationary models when the spatial domain is not large enough \citep{zhang2004inconsistent}. The problem can be addressed using PC priors in order to reduce the ridge of equivalent range-marginal variance combinations to one of its points \citep{pc_prior_fuglstad2015interpretable}. Due to the fact that covariance functions quickly drop to $0$, the locations that will have a non-null covariance with respect to one site are concentrated around it.
The observations that will effectively allow to infer the covariance parameters at this site will then be reduced to a cluster of points around the site, a situation that reminds of the fill-in asymptotic of \citet{zhang2004inconsistent}. \item We also suspect that a non-stationary model may overfit when the observations are not dense enough with respect to the spatial process range. Consider a situation where the observations are dense enough to tell $w(\cdot)$ apart from $\epsilon(\cdot)$ but not to have precise estimates of the latent field. The samples of $w(\cdot)$ will vary and give broad \textit{a posteriori} confidence intervals. In the case of a nonstationary model, this variability could be explained by the nonstationary marginal variance and/or range, leading to a poor identification between the latent field value and those parameters. \item Another point is to tell spatially variable process variance $\sigma^2(\cdot)$ apart from spatially variable noise variance $\tau^2(\cdot)$. The samples of the latent field $w(\mathcal{S})$ can be quite fuzzy, in particular when the correlation function $K_0$ has low smoothness (for example: exponential kernel, that is Matérn covariance with smoothness $\nu = 0.5$). A combination of sample fuzziness and spatially variable marginal variance could be difficult to distinguish from a heteroskedastic noise. \item Eventually, it is difficult to identify range and smoothness when a Matérn model is used, even for stationary models. It may be wise to leave smoothness as a hyperparameter and use special cases of the Matérn function such as the exponential kernel ($\nu = 1/2$) or the squared exponential kernel ($\nu = 1$) as isotropic correlation $K_0(\cdot)$. \end{enumerate} Solutions for problems 1, 2 and 3 would be: \begin{enumerate} \item Not to use full nonstationary model if the identification problems are confirmed \item To use priors to guarantee that the spatially variable parameters will have a strong, smooth, large-scale spatial cohesion. For example, in problem 2, a short-scale prior for $\sigma(\cdot)$ will allow $\sigma(s)$ to go along $w(s)$, while a large-scale prior will bound it to nearby realizations of $\sigma(\cdot)$, giving a restoring force that will prevent $\sigma(s)$ from moving around freely. The extreme of this approach would be a prior that is so stiff that it is practically equivalent to a stationary model. \end{enumerate} \end{comment} \subsection{Nonstationary NNGP}\label{subsection:nonstat_NNGP} The customary NNGP \citep{NNGP} specifies a valid Gaussian process in two steps. We begin with a stationary $GP(0, K(s,s'))$ with covariance parameter $\theta$ so that $w({\cal S})$ has the probability law in (\ref{equation:nonstat_latent_field}). Let $f(w({\cal S})\,|\, \theta)$ be the corresponding joint density. First, we build a sparse approximation of this joint density. Using a fixed ordering of the points in ${\cal S}$ we construct a nested sequence ${\cal S}_{i-1} \subset {\cal S}_{i}$, where ${\cal S}_i = \{s_1, s_2,\ldots, s_{i-1}\}$ for $i=2,3,\ldots,n$. 
The joint density of the NNGP is given by $\tilde f(w({\cal S})\,|\, \theta) = f(w(s_1)\,|\,\theta)\prod_{i=2}^n \tilde f(w(s_i)\,|\, w({\cal S}_{i-1}), \theta)$ \citep[also referred to as Vecchia's approximation][]{vecchia1988estimation, stein2004approximating}, where \begin{equation} \label{equation:NNGP} \tilde f(w(s_i)\,|\, w({\cal S}_{i-1}), \theta) = f(w(s_i)\,|\, w(pa(s_i)),\theta), \end{equation} and $pa(s_i)$ comprises the parents of $s_i$ from a DAG over ${\cal S}$. The resulting density is $\tilde f(w({\cal S})\,|\, \theta) = N(w({\cal S})\,|\, 0, \tilde{\Sigma}({\cal S};\theta))$, where $\tilde{\Sigma}({\cal S};\theta)^{-1} = (I-A)^{\scriptsize{\mathrm{T}}}D^{-1}(I-A)$, $D$ is diagonal with conditional variances $\bar{\sigma}_i^2 = \sigma(s_i)^2 - \Sigma(i, pa(s_i))\Sigma(pa(s_i), pa(s_i))^{-1}\Sigma(pa(s_i),i)$ and $A$ is lower-triangular whose elements in the $i$th row can be determined as $A(i, pa(s_i)) = \Sigma(i, pa(s_i))\Sigma(pa(s_i), pa(s_i))^{-1}$ and $0$ otherwise. In other words, if $j \in pa(s_i)$, then $A(i,j) \neq 0$ and its value is given by the corresponding element in $A(i, pa(s_i))$, while $A(i,j) = 0$ whenever $j\neq pa(s_i)$. We define the NNGP \emph{right factor} $\tilde{R} = D^{-1/2}(I-A)$, which is also lower-triangular, so $\tilde{\Sigma}({\cal S};\theta)^{-1} = \tilde{R}^{\scriptsize{\mathrm{T}}}\tilde{R}$. The elements of $A$, $D$ and $\tilde{R}$ all depend upon the parameters $\theta$, but we suppress this in the notation unless required. The number of nonzero elements in the $i$-th row of $\tilde{R}$ is bounded by $|pa(s_i)|$ \citep{NNGP, General_Framework}. In the second step, we extend to any arbitrary location $s\in \mathcal{D}\setminus {\cal S}$ by modifying (\ref{equation:NNGP}) to $\tilde f(w(s)\,|\, w({\cal S}), \theta) = f(w(s)\,|\, w({\cal N}(s)),\theta)$, where ${\cal N}(s)$ is the set of a fixed number of neighbors of $s$ in ${\cal D}$. This extends Vecchia's likelihood approximation to a valid spatial process, referred to as the NNGP. In practice the set ${\cal S}$ is taken to be the set of observed locations (can be very large), $|pa(s_i)| = \min (m, |{\cal S}_{i-1}|)$ where $m << n$ is a fixed small number of nearest spatial neighbors of $s_i$ among points in ${\cal S}_{i-1}$, and ${\cal N}(s)$ is the set of $m$ nearest neighbors of $s \in \mathcal{D}\setminus {\cal S}$ among the points in ${\cal S}$. \citep[See, e.g.,][for extensions and adaptations.]{General_Framework, peruzzi2020highly}. We pursue nonstationarity analogous to \cite{paciorek2003nonstationary} using spatially-varying covariance parameters. This arises from the following fairly straightforward, but key, property \begin{equation}\label{equation:nonstat_NNGP} \tilde f(w(s_i)\,|\, w({\cal S}_{i-1}), \theta(\mathcal{S})) = f(w(s_i)\,|\, w(pa(s_i)),\theta(s_i \cup pa(s_i)))\;, \end{equation} where the NNGP density is derived using covariance kernels as in (\ref{equation:covfun_aniso})~and~(\ref{equation:covfun_iso}), both of which accommodate spatially-variable parameters $\theta(\mathcal{S})$. Equation~(\ref{equation:nonstat_NNGP}) reveals scope for substantial dimension reduction in the parameter space from $\theta(\mathcal{S})$ to $\theta(s_i \cup pa(s_i))$. We derive (\ref{equation:nonstat_NNGP}) in Section~\ref{subsection:demo_recursive_NNGP} of the Supplement. Another useful relationship relates the NNGP derived from the covariance function in \eqref{equation:nonstat_covariance} and its corresponding correlation function. 
Let $\tilde R_0$ be the NNGP factor obtained from the precision matrix using the correlation function $K_0(\cdot)$ instead of the covariance function $K(\cdot)$ and let $\sigma(\mathcal{S})$ be the nonstationary standard deviations taken at all spatial locations. Then, $\tilde R = \tilde R_0 \mbox{\textrm{diag}}(\sigma(\mathcal{S}))^{-1}$; see Section~\ref{subsection:demo_variance_nngp} of the Supplement for the proof. In particular, computing the log density of $N(w({\cal S})\,|\, 0; \tilde{\Sigma}({\cal S; \theta}))$ will require the determinant and inverse of $\tilde{R}$. These can be computed using \begin{align} |(\tilde R^{\scriptsize{\mathrm{T}}}\tilde R)^{-1}|^{-1/2} &= |\tilde R| = |\tilde R_0 \mbox{\textrm{diag}}(\sigma(\mathcal{S}))^{-1}| = \prod_{i=1}^n (\tilde R_0)_{i,i}/\sigma(s_i); \label{equation:nonstat_NNGP_variance_det} \\ w^{\scriptsize{\mathrm{T}}}\tilde R^{\scriptsize{\mathrm{T}}}{\tilde R} w &= w^{\scriptsize{\mathrm{T}}}\mbox{\textrm{diag}}(\sigma(\mathcal{S}))^{-1}\tilde R_0^{\scriptsize{\mathrm{T}}}\tilde R_0\mbox{\textrm{diag}}(\sigma(\mathcal{S}))^{-1}w\;. \label{equation:nonstat_NNGP_variance_prod} \end{align} From (\ref{equation:nonstat_NNGP_variance_det})~and~(\ref{equation:nonstat_NNGP_variance_prod}) we conclude that if $w(\cdot) \sim NNGP(0, K(\cdot,\cdot))$, where $K(\cdot, \cdot)$ is a nonstationary covariance function such as (\ref{equation:covfun_aniso}) or (\ref{equation:covfun_iso}), then $w({\cal S}) \sim N(0, \tilde{\Sigma}({\cal S}))$, where $\tilde{\Sigma}({\cal S}) = \mbox{\textrm{diag}}(\sigma({\cal S}))(\tilde{R}_0^{\scriptsize{\mathrm{T}}}\tilde{R}_0)^{-1}\mbox{\textrm{diag}}(\sigma({\cal S}))$. Hence, we write $w(s)\sim NNGP(0, K(\cdot, \cdot))$ to mean $w({\cal S})\sim N(0, \tilde{\Sigma}({\cal S}))$, where $\tilde{\Sigma}({\cal S})$ is constructed from $K(s_i, s_j)$ as described above, and the law of $w({\cal S}')$ at a collection of arbitrary points ${\cal S}' = \{s_i': i=1,2,\ldots,n'\}$ outside of ${\cal S}$ is $\prod_{i=1}^{n'}f(w(s_i')\,|\, w({\cal N}(s_i')),\theta)$, where ${\cal N}(s_i')$ is the set of a fixed number of neighbors of $s_i'$ in ${\cal D}$, i.e., the elements of $w({\cal S}')$ are conditionally independent given $w({\cal S})$. Furthermore, Section~\ref{sec:details_KL} shows that the ordering heuristics developed and tested by \citet{Guinness_permutation_grouping} hold for the nonstationary models in \cite{paciorek2003nonstationary} . \begin{comment} \begin{equation}\label{equation:nonstat_NNGP_variance_det} |(\tilde R^{\scriptsize{\mathrm{T}}}\tilde R)^{-1}|^{-1/2} = |\tilde R| = |\tilde R_0 \mbox{\textrm{diag}}(\sigma(\mathcal{S}))^{-1}| = \Pi_{i=1}^n (\tilde R_0)_{i,i}/\sigma(s_i) \end{equation} and \begin{equation}\label{equation:nonstat_NNGP_variance_prod} w^{\scriptsize{\mathrm{T}}}\tilde R^{\scriptsize{\mathrm{T}}}{\tilde R} w = w^{\scriptsize{\mathrm{T}}}\mbox{\textrm{diag}}(\sigma(\mathcal{S}))^{-1}\tilde R_0^{\scriptsize{\mathrm{T}}}\tilde R_0\mbox{\textrm{diag}}(\sigma(\mathcal{S}))^{-1}w\;, \end{equation} which is proportional to a multivariate Gaussian log-density with mean $0$ and precision matrix $\mbox{\textrm{diag}}(\sigma({\cal S}))^{-1}\tilde R_0^{\scriptsize{\mathrm{T}}}\tilde R_0\mbox{\textrm{diag}}(\sigma({\cal S}))^{-1}$.\\ \end{comment} \begin{comment} \paragraph{Nonstationary NNGP on the sphere.} \citet{paciorek2003nonstationary} gives a general method to construct a nonstationary function on the sphere using truncated kernels. 
This approach seemed tedious to transpose to NNGP, so we took advantage of the fact that NNGP is defined locally to define nonstationary NNGP on the sphere without defining a nonstationary covariance on the sphere. If the ordering of the points \citep{Guinness_permutation_grouping} guarantees that the parents of a point $s_i$ are close (within a few hundred kilometers), they can be projected on the tangent plane intersecting the sphere in $s_i$ with little deformation. The nonstationary conditional Gaussian distribution can then be computed on the tangent plane, and a NNGP distribution arises from the local behaviors. Note that even though NNGP is widely used as an approximation of a full Gaussian process, here we are defining a NNGP without knowing the actual covariance function. \\ This approach is straightforward to apply in the case of \eqref{equation:covfun_iso}, since the Euclidean distance on the tangent plane is not affected by a rotation of the plane's basis. However, in the case of \eqref{equation:covfun_aniso}, the Mahalanobis distance is used and rotation of the plane's basis matters. For regions that exclude the poles (not necessarily the actual magnetic poles but any couple of opposed points on earth), the tangent plane can be parametrized using the North and East directions as a basis. We did not find an approach that allows to work on the whole sphere. \end{comment} \begin{comment} \end{comment} \begin{comment} \end{comment} \begin{comment} \subsection{Log GP priors for spatially variable covariance parameters} \label{subsection:priors_spatially_variable} Spatial processes arising in forestry, ecology and other environmental sciences have certain explanatory variables contributing to nonstationarity in spatial dependence. We can conceive space-varying parameter fields through a log-Gaussian Process (log-GP) \citep{heinonen2016non}. Here, the parameter field $\theta(\mathcal{S})$ is modeled using \begin{equation} \label{equation:parameter_field_analysis} \log(\theta(s)) = X_\theta(s)^{\scriptsize{\mathrm{T}}}\beta_\theta + w_{\theta}(s)\;, \end{equation} where $X_{\theta}(s)$ is a vector of possible explanatory variables (or simply an intercept), $\beta_{\theta}$ are the corresponding slopes, and $w_\theta(\mathcal{S}) \sim \mathcal{N}(0, \zeta_\theta)$ with $\zeta_\theta0$ being a spatial covariance matrix capturing further variation beyond what is explained by the explanatory variables. Examples of explanatory variables include spatial coordinates of $s$ or sinusoidal terms to capture seasonality. By construction of the space-time hierarchical model, there is only one realization of $w(\cdot)$ in each spatial location so there cannot be more than two range or marginal variance parameters at the same location. \end{comment} \begin{comment} Designs generating multiple observations at each spatial location generate multiple realizations of the Gaussian error process $\epsilon(\cdot)$. The Gaussian data model at site $s$ becomes \begin{equation} \label{equation:heterosk_multiple_obs} z(s, i) = X(s, i)^{\scriptsize{\mathrm{T}}}\beta + w(s) + \epsilon(s, i), s\in \mathcal{S}, 1\leq i \leq(n_{obs}(s))\;, \end{equation} where $n_{obs}(s)\geq 1$ is the number of observations at site $s$. While $X_{\tau^2}(s, i)$ can change within spatial site $s$, we assume the latent field $W_{\tau}(s) \sim GP( \zeta_{\tau^2})$ is shared by the replicates. 
This yields \begin{equation} \label{equation:heterosk_multiple_obs_eps} \epsilon(s, i)\sim \mathcal{N}\left(0, \tau^2(s, i)\right) \quad~\text{and}~\quad log(\tau^2(s, i)) = W_{\tau}(s)+X_{\tau^2}(s, i)^{\scriptsize{\mathrm{T}}} \beta_{\tau^2}. \end{equation} \end{comment} \begin{comment} We favor log-GP priors for various reasons. Logarithms map the ratio scale into the interval scale and allow interpretation of $w_\theta$ and $\beta_\theta$ \citep{stevens1946theory}. They enrich the modeling framework by allowing explanatory variables that can lead to broader scientific investigations with respect to nonstationarity of environmental processes. Modeling the spatial process hierarchically as a GP retains interpretation with respect to the spatial variogram. Finally, the GP renders itself to scalable approximations including low rank methods, NNGP and several other extensions to analyze massive spatial data sets. Using log-Gaussian priors eliminates certain issues. For example, should one use the variance, the precision, or the standard deviation in order to parametrize the heteroskedastic noise variance $\tau^2(\cdot)$ and the marginal variance $\sigma^2(\cdot)$? Is it better to compare the scalar range parameters $\alpha$ or some power such as $\alpha^d$ ? In a $d$-dimensional space-time domain, the covariance function's radius varies proportionally to $\alpha(\cdot)$, but its volume varies proportionally to $\alpha^d(\cdot)$. Once passed to the logarithm, these parametrizations only differ by a multiplicative constant. \end{comment} \section{Hierarchical space-varying covariance models}\label{sec:model} We build a hierarchical space-varying covariance model over a set of spatial locations ${\cal S} = \{s_1,\ldots,s_n\}$. In particular, we extend (\ref{equation:nonstat_gaussian}) to accommodate replicated measurements at each location. If $z(s,j)$ is the $j$-th measurement at location $s$, where $j=1,2,\ldots,n_{s}$, and $z(s)$ is the $n_s\times 1$ vector of all measurements at location $s$, then (\ref{equation:nonstat_gaussian}) is modified to $z(s) = X(s)^{\scriptsize{\mathrm{T}}}\beta + 1_{n_{s}}w(s) + \epsilon(s)$, where $X(s)^{\scriptsize{\mathrm{T}}}$ is $n_s\times p$ with the values of predictors or design variables corresponding to each location $s$; $1_{n_s}$ is the $n_s\times 1$ vector of ones; $\epsilon(s)$ is an $n_s\times 1$ vector with $j$-th element $\epsilon(s) \sim N(0, \mbox{\textrm{diag}}(\tau^2(s)))$, $\tau^2(s)$ now being an $n_s\times 1$ vector; and $\beta$ and $w(s)$ are exactly as in (\ref{equation:nonstat_gaussian}). Note that $X(s)^{\scriptsize{\mathrm{T}}}$ can include predictors that do not vary within $s$, e.g. the elevation, and that can vary within the spatial location, e.g. multiple technicians can record measurements at $s$ and the technician's indicator may be used as a covariate. 
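Before laying out the full hierarchical specification, the following minimal numerical sketch may help fix ideas (it is written in Python with NumPy/SciPy purely for illustration and is \emph{not} the implementation accompanying this paper; the spatially varying $\alpha(\cdot)$ and $\sigma(\cdot)$ below are arbitrary toy choices and $K_i$ is taken to be exponential). It spells out the locally isotropic kernel \eqref{equation:covfun_iso}, the NNGP right factor $\tilde R = D^{-1/2}(I-A)$ of Section~\ref{subsection:nonstat_NNGP}, and how a draw of $w(\mathcal{S})$ is obtained from $\tilde R$ by a single triangular solve.
\begin{verbatim}
# Toy sketch (not the reference implementation): nonstationary kernel,
# NNGP right factor R = D^{-1/2}(I - A), and a draw of w(S) given R.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
n, m, d = 500, 10, 2                      # locations, neighbors, dimension
S = rng.uniform(size=(n, d))              # ordered spatial locations

def K0_iso(s, t, a_s, a_t):
    # locally isotropic correlation with scalar ranges and exponential K_i
    pref = (np.sqrt(2.0) * a_s**0.25 * a_t**0.25 / np.sqrt(a_s + a_t)) ** d
    return pref * np.exp(-np.linalg.norm(s - t) / ((a_s + a_t) / 2.0))

alpha = np.exp(-2.0 + 0.5 * S[:, 0])      # toy spatially varying range
sigma = np.exp(0.3 * S[:, 1])             # toy spatially varying marginal sd

def cov(i, j):
    return sigma[i] * sigma[j] * K0_iso(S[i], S[j], alpha[i], alpha[j])

R = np.zeros((n, n))                      # dense here only for readability
for i in range(n):
    # parents: the m nearest previously ordered locations
    pa = np.argsort(np.linalg.norm(S[:i] - S[i], axis=1))[:m] if i else []
    pa = np.asarray(pa, dtype=int)
    if len(pa):
        C_pp = np.array([[cov(a, b) for b in pa] for a in pa])
        C_ip = np.array([cov(i, a) for a in pa])
        coef = np.linalg.solve(C_pp, C_ip)      # A(i, pa(s_i))
        d_i = cov(i, i) - C_ip @ coef           # conditional variance
        R[i, pa] = -coef / np.sqrt(d_i)
    else:
        d_i = cov(i, i)
    R[i, i] = 1.0 / np.sqrt(d_i)

# w(S) ~ N(0, (R'R)^{-1}): solve the triangular system R w = eps.
w = solve_triangular(R, rng.standard_normal(n), lower=True)
\end{verbatim}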
For scalar or vector valued $\theta(s)$ our proposed modeling framework is \begin{equation}\label{equation:hierarchical_nonstat_nngp} \begin{split} (a)~ z &\sim N(X\beta + Mw(\mathcal{S}), \textrm{diag}(\tau^2))\;; \quad(b)~ w({\cal S}) \sim N(0, \tilde{\Sigma}({\cal S}; \theta({\cal S})))\;; \\ (c)~\log (\theta({\cal S})) &= X_{\theta}({\cal S})\beta_{\theta} + W_{\theta}({\cal S})\;;\quad (d)~ W_{\theta}({\cal S}) \sim N(0,\zeta_{\gamma_{\theta}})\;; \\ (e)~ \log (\tau^2) &= X_{\tau}\beta_{\tau} + MW_{\tau}({\cal S}) \;;\quad (f)~ W_{\tau}({\cal S}) \sim N(0, \zeta_{\gamma_{\tau}})\;; \quad(g)~ \{\gamma_{\theta}, \gamma_{\tau}\}\sim p(\cdot, \cdot)\;, \end{split} \end{equation} where $z$ denotes the $|z|\times 1$ vector of all measurements and $X$ is $|z|\times p$, obtained by stacking up $X(s_i)^{\scriptsize{\mathrm{T}}}$ over locations in ${\cal S}$; thus $|z| = \sum_{i=1}^n n_{s_i}$ and $n=|\mathcal{S}|$ is the number of locations in $\mathcal{S}$. The link between the spatial sites and the observations of the response variable is operated by $M$, a matching matrix of size $|z| \times |\mathcal{S}|$ whose coefficients are $M_{i,j} = 1$ if the $i$-th element of $z$ corresponds to the $j$-th spatial location, and zero otherwise. Since one observation cannot be made at two spatial locations at the same time, each row of $M$ has \textit{exactly} one term equal to one. Also, since there is at least one observation in each location, each column of $M$ has \textit{at least} one term equal to one. This more general model yields \eqref{equation:nonstat_gaussian} as a special case with $|z| = |\mathcal{S}|$ and $M$ as a permutation matrix. Both matrices $X$ and $X_\tau$ have $|z|$ rows that vary with measurements, while $X_\theta(\mathcal{S})$ has $|\mathcal{S}|$ rows that correspond to the spatial site. Specifying $X_\theta$ this way is necessary to prevent situations where $w(s)$ would not have correlation $1$ with itself. On the other hand, $X_\tau$ accommodates modeling the error within one spatial site, e.g. to account for variability among the measurements from different technicians within a single spatial location. The specification for $\theta({\cal S})$ emerges from a log-NNGP specification of the space-varying covariance kernel parameters $\theta(s)$ through $W_{\theta}(s) \sim NNGP(0, K(\cdot, \cdot))$. This framework accommodates learning about $\theta(s)$ and $\tau$ by borrowing information from measurements of explanatory variables, $X_{\theta}(s)$ and $X_{\tau}$, that are posited to drive nonstationary behavior with fixed effects $\beta_{\theta}$ and $\beta_{\tau}$, respectively. If such variables are absent, then $X_{\theta}(s)$ and $X_{\tau}(s)$ can be taken simply as an intercept or even set to $0$. The covariance matrices $\zeta_{\theta}$ and $\zeta_{\tau}$ are constructed from a specified covariance kernel for the log-NNGP with parameters $\gamma_{\theta}$ and $\gamma_{\tau}$, respectively. For model fitting, these hyper-parameters can be fixed at reasonable values with respect to the geometry of the spatial domain. Finally, $\{\gamma_{\theta}, \gamma_{\tau}\}$ are assigned probability laws based upon their specific constructions. If $\theta(s)$ is a matrix, as occurs with anisotropic range parameters in (\ref{equation:covfun_aniso}), the above framework needs to be modified. Now $\theta(s) = A(s)$ is a $d\times d$ positive definite matrix with positive eigenvalues $\lambda_1(s), \lambda_2(s), \ldots, \lambda_d(s)$.
If $A(s) = P(s)\Lambda(s)P(s)^{\scriptsize{\mathrm{T}}}$ is the spectral decomposition, then we use the matrix logarithm $\log A(s) = P(s)\log(\Lambda(s))P(s)^{\scriptsize{\mathrm{T}}}$, where $\log(\Lambda(s))$ is the diagonal matrix with the logarithm of the diagonal elements of $\Lambda(s)$. It is clear that the matrix logarithm maps the positive definite matrices to the symmetric matrices (but not necessarily positive definite) and that $\log(A(s)^{-1}) = -\log(A(s))$, which is convenient for parametric specifications. The equation for $\log (\theta(s))$ in (\ref{equation:hierarchical_nonstat_nngp}) is now modified to $\log (A(s)) = \sum_{j=1}^{n_{X_A}} X_{\theta, j}(s) B_j + W_{\theta}(s)$, \begin{comment} \begin{equation}\label{equation:hierarchical_nonstat_nngp_modified} \log (A(s)) = \sum_{j=1}^{n_{X_A}} X_{\theta, j}(s) B_j + W_{\theta}(s)\;; \quad \mbox{vech}(W_{\theta}({\cal S})) \sim N(0, S_{\theta}\otimes \tilde{\Sigma}_{0})\;, \end{equation} \end{comment} where each $X_{\theta, j}(s)$ is a real-valued explanatory variable and each $B_j$ is a $d\times d$ symmetric matrix of fixed effects corresponding to $X_{\theta,j}(s)$ and each $W_{\theta}(s)$ is a $d\times d$ symmetric random matrix. Given that $A(s)$, $B_j$'s and $W_{\theta}(s)$ are symmetric, this specification contains redundancies that can be eliminated using half-vectorization of symmetric matrices, where we use the $\mbox{vech}(\cdot)$ operator on a matrix to stack the columns (from the first to the last) of its lower-triangular portion. Therefore, we rewrite the model in terms of the $d(d+1)/2$ distinct elements of $\log(A(s))$ as \begin{equation}\label{equation:hierarchical_nonstat_nngp_modified} \mbox{vech}(\log (A(s))) = \mbox{vech}(M(s)) + \mbox{vech}(W_{\theta}(s))\;, \end{equation} where the fixed effects of the $n_{X_A}$ covariates are collected in the $d\times d$ matrix $M(s) = \sum_{j=1}^{n_{X_A}} X_{\theta, j}(s) B_j$. We obtain $(\mbox{vech}(\log (A))({\cal S})) \sim N((\mbox{vech}(M)({\cal S})), \tilde{\Sigma}_{0}\otimes S_{\theta})$, where $(\mbox{vech}(\log (A))({\cal S}))$ is obtained by stacking the vectors in (\ref{equation:hierarchical_nonstat_nngp_modified}) over ${\cal S}$ and $(\mbox{vech}(M)({\cal S}))$ is defined analogously. The specifications are completed by assigning priors on $S_{\theta}$. This specification is subsumed in (\ref{equation:hierarchical_nonstat_nngp}) with the model for $\theta(s)$ modified to (\ref{equation:hierarchical_nonstat_nngp_modified}) and $\gamma_{\theta} = S_{\theta}$. An illustration of the kind of distributions obtained from the hierarchical model is presented in Figure \ref{fig:nonstat_ellipses}. Figure \ref{fig:matrix_log_nngp_ell} presents range ellipses generated with \eqref{equation:hierarchical_nonstat_nngp_modified}, while Figure \ref{fig:scalar_log_nngp_ell} presents range ellipses generated with \eqref{equation:hierarchical_nonstat_nngp}~(d). Figure \ref{fig:matrix_gp_samples_log_nngp} and Figure \ref{fig:scalar_gp_samples_log_nngp} represent one of their respective Gaussian Process sample paths, obtained with \eqref{equation:hierarchical_nonstat_nngp} (b).
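To make the matrix-log parametrization concrete, here is a small illustrative sketch (again Python/NumPy, for intuition only; the intercept matrix $B_1$ and the symmetric deviation below are hypothetical toy values, and the indexing used for the half-vectorization coincides with the column-stacking $\mbox{vech}(\cdot)$ of the text for $d=2$): a symmetric matrix $\log A(s)$ is stored through $\mbox{vech}(\cdot)$ and mapped back to a positive definite range matrix by exponentiating the eigenvalues of its spectral decomposition.
\begin{verbatim}
# Toy sketch of the matrix logarithm / exponential and vech for d = 2.
import numpy as np

d = 2
idx = np.tril_indices(d)            # lower-triangular index pattern

def vech(M):
    return M[idx]

def unvech(v):
    M = np.zeros((d, d))
    M[idx] = v
    return M + np.tril(M, -1).T     # symmetrize

def expm_sym(L):
    # matrix exponential of a symmetric matrix via eigendecomposition
    lam, P = np.linalg.eigh(L)
    return (P * np.exp(lam)) @ P.T

def logm_spd(A):
    # matrix logarithm of a symmetric positive definite matrix
    lam, P = np.linalg.eigh(A)
    return (P * np.log(lam)) @ P.T

B1 = np.diag([-1.0, -2.0])                      # hypothetical intercept B_1
W_theta = unvech(np.array([0.3, 0.1, -0.2]))    # symmetric random deviation
A = expm_sym(B1 + W_theta)                      # positive definite A(s)
assert np.allclose(logm_spd(A), B1 + W_theta)   # log and exp are inverses
assert np.allclose(logm_spd(np.linalg.inv(A)), -(B1 + W_theta))
\end{verbatim}
The last assertion checks the property $\log(A(s)^{-1}) = -\log(A(s))$ noted above.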
\begin{figure} \caption{Ellipses obtained with matrix log NNGP} \label{fig:matrix_log_nngp_ell} \caption{Circles obtained with scalar log NNGP} \label{fig:scalar_log_nngp_ell} \caption{NNGP samples corresponding to the ellipses} \label{fig:matrix_gp_samples_log_nngp} \caption{NNGP samples corresponding to the circles} \label{fig:scalar_gp_samples_log_nngp} \caption{Example of range ellipses and GP samples induced by the log-NNGP and matrix log-NNGP priors} \label{fig:nonstat_ellipses} \end{figure} \begin{comment} Denoting $\beta_{A_i}$ with $1\leq i\leq d(d+1)/2$ the vector obtained with the projections of $(B_1,\ldots, B_{n_{X_A}})$ on the $i^th$ element of the basis of symmetric matrices, and $log(A)_i$ the projection of $log(A)$ on the same element, we have \begin{equation} \label{equation:MLGP_normal_centered} (log(A(\mathcal{S}))_1, \ldots, log(A(\mathcal{S}))_{d(d+1)/2}) \sim \mathcal{N}(\mu, S \otimes \Sigma_0), \end{equation} the mean $\mu$ being obtained by stacking vertically $X_A \beta_{A_1},\ldots,X_A \beta_{A_{d(d+1)/2}}$. The stationary model and the nonstationary range model with scalar range are included in a model with matrix log-GP. If $S$ is null and that all $B_i$s are null except for the pseudo-intercept, then the induced correlation is stationary. As for the scalar range case, denote $v$ the coordinates of the matrix $I_d/\sqrt{d(d+1)/2}$ in the chosen basis of symmetric matrices. If $\sigma^2_\alpha\times v^Tv$, the random effect $w(s), \ldots, w_{d(d+1)/2}(s)$ is degenerate and its support restricted to the matrices that are proportional to $I_d$. If in addition $B_i = \beta_i \Sigma_{j=1}^{d(d+1)/2} v_j e_j = \beta_i I_d/\sqrt{d(d+1)/2}$, we recognize the nonstationary correlation with scalar range parameters of \eqref{equation:covfun_iso}. \end{comment} \begin{comment} \subsection{Hierarchical architecture using NNGPs} \label{subsection:LNNGP} In view of the good computational properties and accuracy of NNGP approximation \citep{Guinness_permutation_grouping, General_Framework}, we use log-NNGP and Matrix log-NNGP as priors for the spatially variable parameters. They are obtained by replacing the covariance matrix $\Sigma(\mathcal{S}, \theta)$ by a NNGP approximation $(\tilde R_\theta^T\tilde R_\theta)^{-1}$ in the log-GP and matrix log-GP priors. \paragraph{Estimating or not the covariance structure of log-NNGP priors.} The log-NNGP and matrix log-NNGP priors for the latent covariance parameters fields $\theta(\cdot)$ are themselves parametrized by at least a covariance matrix $(\tilde R_\theta^T \tilde R_\theta)^{-1}$ and an intercept that is integrated in $\beta_\theta$. In \citet{heinonen2016non}, the covariance parameters and the intercept are treated as hyper-parameters. We choose to leave only the hyperprior range as a user-chosen parameter, while estimating $\sigma^2_\theta$ (or its counterpart $S$ for elliptic range) and $\beta_\theta$. On the other hand, we chose not to sample $\alpha_\theta$. First, it is a costly operation since it involves to compute $\tilde R_\theta$. Moreover, in view of the identification problems that could occur in nonstationary models, we advocated for a prior with a high spatial coherence. In the case of a log-GP prior, this means that the range $\alpha_\theta$ should be high, maybe one tenth of the domain size. Given the lack of identification between range and variance when the spatial domain is small with respect to the process range \citep{zhang2004inconsistent}, . 
\paragraph{Prior distributions on high-level parameters.} The high-level parameters that are estimated by the model are the linear regression coefficients $\beta_\theta$ and the variance parameter $\sigma^2_\theta$ or $S$. We put an improper prior on $\beta_\theta$. This choice of an improper prior on the logarithm of a positive parameter is quite standard, but the literature of stationary spatial models generally advocates for stronger priors \citep{pc_prior_fuglstad2015interpretable, NNGP}. As for the variance parameter, we put a uniform prior on a $[-8; 3]$ window for $\log(\sigma^2_\theta)$, and for each log-eigenvalue of $S$ in the matrix case. The bottom of the interval induces a model that is practically stationary since the variance of the field of parameters will be very close to $0$. We chose not to let the variance fall any lower in order to avoid straying and numerical problems. On the other hand, $exp(3)\approx 20$, which means that the latent field can have a high variance too. We did not choose to allow the field to go any higher because of numerical problems, and because there would be no sensible interpretation of an extremely variable field of parameters. \end{comment} \section{MCMC algorithms} \label{sec:mcmc_strategy} \subsection{Ancillary-Sufficient Interweaving Strategy} \label{sec:interweaving} The problem of high correlation between latent fields and higher-level parameters is well-known in stationary spatial models, and several solutions exist such as blocking \citep{knorr2002block}, collapsing \citep{finley2019efficient} or interweaving \citep{filippone2013comparative}. Interweaving takes advantage of the discordance between two parametrizations of a latent field to sample high-level parameters. When those two parametrizations are an ancillary-sufficient couple, we have an Ancillary-Sufficient Interweaving Strategy (AS-IS) \citep[see][]{yu2011center}; see also Section~\ref{subsection:interweaving}. In our application, we interweave the whitened and natural parametrizations of the latent field $w(\mathcal{S})$ from \eqref{equation:hierarchical_nonstat_nngp} (a) in order to update the higher-level parameters $W_\theta(\mathcal{S})$ and $\beta_\theta$ impacting the covariance structure decomposed in \eqref{equation:hierarchical_nonstat_nngp} (c) and \eqref{equation:hierarchical_nonstat_nngp_modified}. The so-called \textit{natural parametrization} of the latent field is found in the decomposition \eqref{equation:hierarchical_nonstat_nngp} (a), and is a sufficient parametrization. The \textit{whitened parametrization} of the latent field is ancillary, and is obtained by multiplying the natural parametrization with the right NNGP factor of its prior precision matrix from \eqref{equation:hierarchical_nonstat_nngp} (b): $ w^*(\mathcal{S}) = (\tilde{\Sigma}({\cal S}; \theta({\cal S})))^{-1/2}w(\mathcal{S}).$ The consequence of whitening is that the prior distribution \eqref{equation:hierarchical_nonstat_nngp} (b) becomes a standard normal distribution, hence the method's name. The covariance parameters no longer affect the prior distribution of the latent field. In turn, they acquire a role in the decomposition of the data. In \eqref{equation:hierarchical_nonstat_nngp}(a), $w(s_i)$ is replaced by the $i^{th}$ element of $(\tilde{\Sigma}({\cal S}; \theta({\cal S})))^{1/2}w^*(\mathcal{S})$, while \eqref{equation:hierarchical_nonstat_nngp}(b) is replaced by $w^*(\mathcal{S})\sim \mathcal{N}(0, I_{|\mathcal{S}|})$.
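As a small illustration of the whitening map (a Python/NumPy/SciPy sketch for intuition only: a dense stand-in lower-triangular factor replaces the sparse NNGP factor, and taking $\tilde{\Sigma}^{-1/2} = \tilde R$ is one concrete choice of square root consistent with $\tilde{\Sigma}^{-1} = \tilde R^{T}\tilde R$), whitening and its inverse amount to one triangular multiplication and one triangular solve:
\begin{verbatim}
# Toy sketch of the whitened parametrization w* = R w, where R'R is the
# prior precision of w: the whitened field has a standard normal prior,
# and the covariance parameters only enter through the back-transform.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
n = 200
# stand-in lower-triangular right factor with positive diagonal
R = np.tril(0.1 * rng.standard_normal((n, n)), -1) \
    + np.diag(1.0 + rng.uniform(size=n))

w = solve_triangular(R, rng.standard_normal(n), lower=True)  # natural draw
w_star = R @ w                          # whitened field, distributed N(0, I)
w_back = solve_triangular(R, w_star, lower=True)             # back-transform
assert np.allclose(w, w_back)
\end{verbatim}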
Further developments concerning the behavior of $w^*$ can be found in \citet{coube2021mcmc}. The interest parameter is sampled in two steps, one for each parametrization of the latent field. Those individual steps can be full conditional draws or random walk Metropolis steps; in our case they are Hybrid Monte-Carlo (HMC) steps. That is why, later on, two potentials will be derived for the HMC steps of each parameter. Note that $\gamma_\theta$ and $\gamma_\tau$ are themselves covariance parameters for the latent fields $W_\theta$ (\ref{equation:hierarchical_nonstat_nngp} (d)) and $W_\tau$ (\ref{equation:hierarchical_nonstat_nngp} (f)). In order to update $\gamma_\theta$ and $\gamma_\tau$, interweaving is used as well, this time treating $W_\theta$ and $W_\tau$ as latent fields, and using their respective whitened parametrizations. In the case of $\gamma_\theta$, \textit{nested interweaving} allows us to use the whitened parametrizations of $W_\theta(\mathcal{S})$ and $w(\mathcal{S})$. Nested interweaving is also used to update $\beta_\theta$ and $\beta_\tau$, using centering from \citet{coube2020improving}. Those technical operations are laid out in detail in Section~\ref{subsection:nested_interweaving}. \subsection{Hybrid Monte-Carlo} \label{sec:HMC_nonstat} Hybrid Monte-Carlo (HMC) \citep{neal2011mcmc} has already been implemented successfully by \citet{heinonen2016non} for nonstationary Gaussian processes. Our approach differs in two aspects. First, we use an NNGP instead of a full GP and, hence, derive the non-trivial differentiation of the NNGP-induced potential. Second, we use an ``HMC within AS-IS'' algorithm and must find the gradients of the potential for the covariance parameters using both ancillary and sufficient parametrizations. We update the log-NNGP latent fields using HMC. Let $W_\lambda(\mathcal{S})$ be a field of parameters, either $\lambda = \theta$ in \eqref{equation:hierarchical_nonstat_nngp}(c) and \eqref{equation:hierarchical_nonstat_nngp_modified}, or $\lambda = \tau$ in \eqref{equation:hierarchical_nonstat_nngp}(e), and let $\zeta_\lambda$ be the associated log-NNGP prior covariance matrix, either $\zeta_\theta$ in \eqref{equation:hierarchical_nonstat_nngp}(d) or $\zeta_\tau$ in \eqref{equation:hierarchical_nonstat_nngp}(f). The negated log-likelihood with respect to the field $W_\lambda(\mathcal{S})$ will then be, up to an additive constant, $H_{W_\lambda} = -\log(f(W_\lambda(\mathcal{S}) | \zeta_\lambda)) -g_\lambda(W_\lambda(\mathcal{S}))$, where $f(\cdot)$ is the Normal density function involved in the log-NNGP prior and $g_\lambda(\cdot)$ is specified based upon the role of the parameter in the model and the chosen parametrization of the latent field in interweaving. Introducing the NNGP prior covariance of $W_\lambda$, we obtain $ \nabla_{W_\lambda} H_{W_\lambda} = \zeta_\lambda^{-1}W_\lambda - \nabla_{W_\lambda} g_\lambda(W_\lambda(\mathcal{S})). $ In order to improve the efficiency of the HMC step using prior whitening \citep{heinonen2016non, neal2011mcmc}, we consider the gradient with respect to $W_\lambda^*(\mathcal{S}) = \zeta_\lambda^{-1/2} W_\lambda(\mathcal{S})$ (where $\zeta_\lambda^{-T/2} = (\zeta_\lambda^{-1/2})^T$ and $\zeta_\lambda^{-T/2}\zeta_\lambda^{-1/2} = \zeta_\lambda^{-1}$). This gradient is given as \begin{equation} \label{equation:negated_ld_whitened} \nabla_{W_\lambda^*} H_{W_\lambda} = \zeta_\lambda^{-1/2} W_\lambda - \zeta_\lambda^{T/2}\nabla_{W_\lambda} g_\lambda(W_\lambda(\mathcal{S})).
\end{equation} Details are provided in Section~\ref{subsection:general_gradient}. Using NNGPs to specify $\zeta_\lambda$ makes $\zeta_\lambda^{-1/2}$ sparse and triangular, which enables fast solving and multiplication. This transform is the same as the whitening presented in Section~\ref{sec:interweaving}, but here we use whitening to achieve approximate decorrelation between the components, while in Section \ref{sec:interweaving} we constructed an ancillary-sufficient couple. What remains is computing $\nabla_{W_\lambda} g_\lambda(W_\lambda(\mathcal{S}))$. When the marginal variance of the NNGP field is considered, that is $\lambda = \sigma$, \eqref{equation:nonstat_NNGP_variance_det} and \eqref{equation:nonstat_NNGP_variance_prod} allow us to derive the gradients. Details are provided in Section~\ref{subsection:gradient_sigma2}. When the range of the NNGP field is considered, that is $\lambda = \alpha$, we find the gradient using a two-step method. The first part is to compute the derivatives of $\tilde R$ with respect to $W_\alpha$, $\tilde R$ being the NNGP factor such that, in equation \eqref{equation:hierarchical_nonstat_nngp}(b), we have $\tilde{\Sigma}({\cal S}; \theta({\cal S})) = (\tilde R^T\tilde R)^{-1}$. The details are presented in Section~\ref{subsection:derivative_tile_R}. In Section~\ref{subsection:cost_derivative_tile_R}, we give an estimate of the required flops and RAM. The second step, laid out in Section~\ref{subsection:gradient_alpha}, is to express the gradients using the derivatives of $\tilde R$. We estimate the cost of this second step in Section~\ref{subsection:cost_gradient_alpha}. The last case is the variance of the noise, $\lambda = \tau$, presented in Section~\ref{subsection:gradient_tau2}. In that case, only the natural parametrization of the latent field is used. After the latent fields, we focus on the HMC update of the linear effects coefficients, either $\beta_\theta$ in \eqref{equation:hierarchical_nonstat_nngp} (c), $\beta_\tau$ in \eqref{equation:hierarchical_nonstat_nngp} (e), or $B_j$ in \eqref{equation:hierarchical_nonstat_nngp_modified}. This method is especially useful for the range parameters since it avoids an unaffordable Metropolis-within-Gibbs sweep over $\beta_\alpha$. Since we put an improper constant prior on $\beta_\theta$, those parameters impact the negated log-density only through their role in the decomposition of $\lambda$. Using the Jacobian chain rule, for $\lambda = \theta, \tau,$ $$ \nabla_{\beta_\lambda}H = J_{\beta_\lambda}^T\log(\lambda) \cdot \nabla_{\log(\lambda)}H = X_{\lambda}^T \cdot \nabla_{\log(\lambda)}H.$$ In the case of the log-range and log-variance, there is a one-to-one correspondence between $\log(\theta)$ and $W_\theta$, so that it is possible to replace $\nabla_{\log(\theta)}H$ by $\nabla_{W_\theta}g_\theta(W_\theta)$. In the case of the noise variance, there can be several observations at the same spatial site. It is straightforward to derive $\nabla_{\log(\tau^2)}H$ using $\frac{\partial H}{\partial \tau^2_i(s)}= \frac{\partial l(z_i(s)|\tau^2_i(s), w(s), X_i(s), \beta) }{\partial \tau^2_i(s)}$ ($i$ being the index of the observation at site $s$). We now focus on one last point: the covariance parameters for $W_\theta$ and $W_\tau$, respectively denoted $\gamma_\theta$ and $\gamma_\tau$ in \eqref{equation:hierarchical_nonstat_nngp}. Like before, we denote $\lambda = \theta, \tau$. Here, the nested interweaving strategy is used.
When the ancillary parametrization $W^* = \zeta^{-1/2}_{\gamma_\lambda} W_\lambda$ is used, changing $\gamma_\lambda$ has an impact on $W_\lambda$. Using the Jacobian chain rule, $$ \nabla_{\gamma_\lambda}H = J_{\gamma_\lambda}^TW_\lambda\cdot \nabla_{W_\lambda} H = J_{\gamma_\lambda}^T(\zeta_{\gamma_\lambda}^{1/2}W_\lambda^*)\cdot \nabla_{W_\lambda} H. $$ In the case of elliptic range parameters, we have, by virtue of the ``Vec trick'': $$ \nabla_{S}H = J_{S}^Tw_A\cdot \nabla_{w_A} H = J_{S}^T\left((S^{1/2}\otimes\tilde R_{A_0}^{-1}) w_A^*\right) \cdot \nabla_{w_A} H= J_{S}^T\left(\mathrm{Vec}(\tilde R_{A_0}^{-1} w_A^*(S^{1/2})^T)\right)\cdot \nabla_{w_A} H.$$ In order to get the Jacobian, the derivatives of $S^{1/2}$ with respect to $S$ are obtained by finite differences. The derivatives of $\tilde R_{A_0}^{-1} w_A^*(S^{1/2})^T$ are in turn obtained by matrix multiplication, and plugged into the $\mathrm{Vec}(\cdot)$ operator. \section{APPLICATIONS OF THE MODEL} \label{section:nonstationary_data_analysis} Our model is implemented and available at the public repository \if11 {\url{https://github.com/SebastienCoube/Nonstat-NNGP}}\fi \if01 {\url{(blinded)}}\fi. \subsection{Synthetic experiments} \label{subsection:synthetic_experiments} The impact of nonstationary modeling is explored in an experiment on simulated data sets presented in Section~\ref{section:wrong_modelling}. Three indicators were monitored: the Deviance Information Criterion (DIC) \citep{spiegelhalter1998bayesian} (Figures \ref{fig:wrong_modelling_1}, \ref{fig:wrong_modelling_2}), the mean square error (MSE) of the predicted field at unobserved sites (Figures \ref{fig:wrong_modelling_pred_1}, \ref{fig:wrong_modelling_pred_2}), and the MSE of the smoothed field at observed sites (Figures \ref{fig:wrong_modelling_smooth_1}, \ref{fig:wrong_modelling_smooth_2}). In terms of DIC, it is clear that nonstationary modeling must be chosen over stationary modeling when the data is nonstationary. As for prediction at unobserved locations, a nonstationary noise variance sharply improves the MSE in the relevant case (Figure \ref{fig:wrong_modelling_pred_1_7}). The other parameters seem to have little effect. In terms of MSE at the observed locations, the nonstationary model brings a clear improvement in all cases. For all three aforementioned indicators, it seems that over-modeling does not hurt. The boxplots corresponding to the ``right'' models are at the same level as the boxplots corresponding to over-modeling. As we shall see in the next paragraph, over-modeling does not affect the performance of the model because the non-stationary model encompasses the stationary model, and boils down to stationarity when confronted with stationary data. When stationary data is analyzed with a non-stationary model, the marginal variance parameter of the log-NNGP prior sticks to $0$, inducing a degenerate distribution. The parameter latent field ends up being constant, effectively inducing a stationary model. However, the problem of wasting time and resources fitting a complex and costlier model remains. In those test data sets, over-modeling can be detected just by looking at the MCMC chains, without needing to wait for full convergence. For example, in Figure \ref{fig:range_log_scale} we can see the first $2000$ states, for $3$ separate chains, of the log-variance parameter for a range log-NNGP prior. On the left, the data is stationary, and the log-variance is very low.
On the right, the data is non-stationary, and the log-variance is high enough to allow the parameter to move in the parameter space. As for the model with anisotropic range parameters, it is also possible to detect over-modeling from the estimates. In order to do so, we look at the matrix logarithm of $S$ from \eqref{equation:hierarchical_nonstat_nngp_modified}. If $S \approx v^T\sigma^2v$, $v$ being the projection of $I_d/\sqrt{d(d+1)/2}$ in the chosen basis of symmetric matrices, then the model is effectively a nonstationary scalar range model. If $S$ is null, the model is stationary. We monitor three indicators: $$v\,\log(S)\,v^T, ~~~~~ u\,\log(S)\,u^T, ~~~~~ x\,\log(S)\,x^T $$ with $u, x$ being a completion of $v$ into a basis of the symmetric matrices. In Figure \ref{fig:range_log_scale_elliptic}, we show the behavior of the indicators for three data sets: a stationary data set (Figure \ref{fig:range_log_scale_over_stat}), a nonstationary data set with scalar range (Figure \ref{fig:range_log_scale_over_circ}), and a nonstationary data set with elliptic range (Figure \ref{fig:range_log_scale_over_ell}). We can see that all three components are very low in Figure \ref{fig:range_log_scale_over_stat}, implying $S\approx \textbf{0}_{~3\times3}$, which in turn makes $w_A$ constant, inducing a stationary prior for $w$. When the range is nonstationary with scalar parameters (Figure \ref{fig:range_log_scale_over_circ}), $v\,\log(S)\,v^T$ (in black) rises while the two other indicators are low. Finally, when the data is nonstationary with elliptic range parameters (Figure \ref{fig:range_log_scale_over_ell}), all three indicators are high. \begin{figure} \caption{Stationary data} \label{fig:range_log_scale_over} \caption{Data with nonstationary range} \label{fig:range_log_scale_right} \caption{Log-variance of the log-NNGP prior of the range parameter (locally isotropic model)} \label{fig:range_log_scale} \end{figure} \begin{figure} \caption{Stationary data} \label{fig:range_log_scale_over_stat} \caption{Data with nonstationary scalar range} \label{fig:range_log_scale_over_circ} \caption{Data with nonstationary elliptic range} \label{fig:range_log_scale_over_ell} \caption{Log-scale analysis of the matrix log-NNGP prior of the range parameter (locally anisotropic model)} \label{fig:range_log_scale_elliptic} \end{figure} A first approach to assess the identification of the parameters is to use model comparison criteria such as the DIC or MSE. If the parameters are not well-identified, then a change in the chosen model, for example replacing a model with nonstationary range by a model with nonstationary noise variance, should not affect the chosen criterion. From the experiment presented in Section~\ref{section:wrong_modelling}, it is clear that nonstationary noise variance is well-identified and that forgetting it in relevant cases leads to under-fitting. The identification of nonstationary scalar range and marginal variance of the latent NNGP process is less clear, even though omitting both leads to under-fitting. On the one hand, on data with nonstationary range, a model with nonstationary variance does not do as well as a model with nonstationary range in terms of DIC and smoothing MSE (see Figures \ref{fig:wrong_modelling_1_4}, \ref{fig:wrong_modelling_smooth_1_4}).
On the other hand, the converse is not true for data with nonstationary variance (Figures \ref{fig:wrong_modelling_1_6}, \ref{fig:wrong_modelling_smooth_1_6}); and on data with both non-stationary range and marginal variance, models with only either nonstationary range or variance do as well as the model with both (Figures \ref{fig:wrong_modelling_1_2} and \ref{fig:wrong_modelling_smooth_1_2}). This problem is not surprising: on small domains, range and variance are difficult to identify for stationary models \citep{zhang2004inconsistent}. However, an intriguing observation shows that some identification does occur: when given the possibility, our model is able to make the right choice between the two parameters. In Figure \ref{fig:alpha_sigma_log_scale} in Section~\ref{section:wrong_modelling}, we used boxplots to summarize results of the models that estimate both nonstationary marginal variance and range. On the left (Figure \ref{fig:alpha_log_scale}), we can see estimates for the log-variance of $W_\alpha$'s log-NNGP prior. On the right (Figure \ref{fig:sigma_log_scale}), we see its counterpart for $W_{\sigma}$. In both subfigures, the boxplots are separated according to the type of data, $(\emptyset)$ being stationary data, $(\alpha)$ being data with nonstationary range, $(\sigma^2)$ being data with nonstationary variance, and $(\alpha+\sigma^2)$ being data with both nonstationarities (Section~\ref{section:wrong_modelling} presents the naming system in detail). Recall that when the log-variance is low, the corresponding field is practically stationary. We can then see that the right kind of nonstationarity is detected for all four configurations: when the data is stationary, both log-variances are very low; when the data is $(\sigma^2)$, only the log-variance of $W_{\sigma}$ is high; and so on. \subsection{Real data analysis} \subsubsection{Empirical guidelines} The full log-NNGP and matrix-log NNGP models show pathological behavior when applied to real data. It is yet to be investigated whether this is due to an intrinsic incompatibility of real data with the model architecture, or to the fact that the MCMC algorithm is not robust enough to run a complex model with high-dimensional data. However, the linear effects used to explain the covariance parameters behave well with real data. It is therefore possible to explain the covariance structure using environmental covariates. It is also possible to integrate spatial basis functions in the regressors in order to capture spatial patterns; see Section~\ref{section:spatial_basis}. \subsubsection{Case study: lead concentration in the mainland United States of America} The lead data set presented by \citet{hengl2009practical} features various heavy metal measurements, including lead concentration. Various anthropic (density of air pollution, mining operations, toxic release, night lights, roads) and environmental (density of earthquakes, indices of green biomass, elevation, wetness, visible sky, wind effect) covariates are provided. Those variables may impact the emission of lead, its diffusion, or both. The lead concentration and the covariates have been observed at $58097$ locations, with a total of $64274$ observations. As we can see in Figure \ref{fig:lead_measure_sites}, the measurement locations are irregularly distributed, with large empty areas. The observations were log-transformed. \begin{figure} \caption{Measurement sites for lead concentration} \label{fig:lead_measure_sites} \end{figure} We used an NNGP with $10$ neighbors and the max-min order.
We tested four models: a full model with non-stationary circular range, marginal variance, and noise $(\alpha + \sigma^2 + \tau^2)$, a model with non-stationary circular range and noise $(\alpha + \tau^2)$, a model with just the noise $(\tau^2)$, and a stationary model $(\emptyset)$. Each of those four models is tested with two smoothness parameters $\nu$ for the Matérn function: a rough $\nu = 0.5$ (corresponding to the exponential kernel), and a smooth $\nu = 1.5$. The range and the marginal variance were modeled using $25$ spatial basis functions and the most spatially coherent regressors (elevation, green biomass index, and wetness index), while the noise was modeled with the basis functions and all the regressors. All runs were done with $4000$ iterations, the first $1000$ being discarded. The MCMC convergence was monitored using univariate Gelman-Rubin-Brooks diagnostics \citep{gelman1992inference, brooks1998general} of the high-level parameters (all the parameters of the models except the latent field $w(\mathcal{S})$). After $4000$ iterations, all the diagnostics reached a level of $1.2$ or below. The models are compared using the DIC, the log-density of the observations, and the log-density of the predictions on validation tests where $50000$ spatial locations were kept for training (see Table~\ref{tab:selection_criteria}). We can see that the smoother model does worse than the rougher model in terms of DIC and log-density at the observed locations, but does better at the predicted locations, regardless of the kind of nonstationarity. Therefore, we conclude that the exponential kernel is overfitting the data and we select the Matérn model with $\nu = 1.5$. As for the selection of the nonstationary model, we can see that even though the full nonstationary model $(\alpha + \sigma ^2 + \tau^2)$ does better in terms of log-density at observed locations and DIC, the model that does best in terms of predicted log-density is the model with nonstationary range and noise variance $(\alpha + \tau^2)$. We conclude that the full model over-fits the data and that a more parsimonious formulation does better. In addition, we note that $(\alpha + \tau^2)$ only brings a marginal improvement with respect to $(\tau^2)$. While the performances of the nonstationary model are overall much better than the stationary model's, it is worth noting that the former is not uniformly better than the latter. We focused on the observation-wise log-score for both smoothing and prediction, and compared the range-and-noise model with the stationary model. There was a clear spatial structure in the outcome taking value $1$ if the nonstationary model wins, and $0$ otherwise. This spatial structure closely corresponds to the noise parameters estimated by the model, and regressing the outcome on the estimated log-range and log-noise confirms this observation: a higher noise will deteriorate the relative performances of the nonstationary model, while a higher range will not improve the smoothing but will improve the predictions. This interesting issue warrants further investigation.
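As an aside, the diagnostic regression mentioned above can be sketched in a few lines (this is purely illustrative and not part of our implementation; the variable and function names are hypothetical):
\begin{verbatim}
# Illustrative sketch: regress the binary outcome "the nonstationary model
# has the better observation-wise log-score" on the estimated log-range and
# log-noise at the corresponding locations.
import numpy as np
from sklearn.linear_model import LogisticRegression

def log_score_diagnostic(log_range, log_noise, nonstat_wins):
    """log_range, log_noise: posterior means at the observation locations;
    nonstat_wins: array of 0/1 outcomes."""
    X = np.column_stack([log_range, log_noise])
    fit = LogisticRegression().fit(X, nonstat_wins)
    return dict(zip(["log_range", "log_noise"], fit.coef_[0]))
\end{verbatim}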
\begin{table} \caption{Selection criteria} \label{tab:selection_criteria} \centering \begin{tabular}{rcccccccc} \hline model & \multicolumn{2}{c}{$(\emptyset)$} & \multicolumn{2}{c}{$(\tau ^2)$} & \multicolumn{2}{c}{$(\alpha + \tau ^2)$} & \multicolumn{2}{c}{$(\alpha + \sigma^2 +\tau ^2)$} \\ smoothness & 0.5 & 1.5 & 0.5 & 1.5 & 0.5 & 1.5 & 0.5 & 1.5 \\ \hline log dens unobserved & -7955 & -7106 & -7611 & -6476 & -7459 & -6458 & -7968 & -6660 \\ log dens observed & -30957 & -35413 & -26761 & -31120 & -25843 & -29766 & -24358 & -28958 \\ DIC & 74508 & 78223 & 64792 & 68779 & 63460 & 66816 & 62132 & 66087 \\ \hline \end{tabular} \end{table} The predicted mean and standard deviation of the latent field are presented in Figure \ref{fig:lead_field}, with a comparison between stationary and nonstationary modeling. The mean nonstationary parameters are presented in Figure \ref{fig:lead_covparms}, with a decomposition between the part explained by environmental covariates and the part explained by spatial effects. The predicted means of lead contamination are quite similar between the stationary and nonstationary models. However, in regions where the range is higher, such as the Northeast, the nonstationary predictions tend to be smoother. In regions where the range is smaller, like Arizona, the nonstationary predictions are sharper and rougher. The predicted standard deviations differ markedly depending on whether the model is stationary or nonstationary (Figure \ref{fig:lead_sd}). In the stationary model, the only thing that matters is the spatial density of the observations (Figure \ref{fig:lead_measure_sites}). In the nonstationary model, regions with high spatial coherence such as the Northwest or the Western Midwest will have lower standard deviation, and other regions such as the West will have high standard deviation even if the measurements are dense there. \begin{figure} \caption{Predicted latent mean of the lead concentration} \label{fig:lead_mean} \caption{Predicted latent standard deviation of the lead concentration} \label{fig:lead_sd} \caption{Predicted mean and standard deviations of the latent field for the stationary and nonstationary models.} \label{fig:lead_field} \end{figure} \begin{figure} \caption{Predicted range} \label{fig:lead_range} \caption{Predicted noise} \label{fig:lead_noise} \caption{Predicted mean of the covariance parameters in the selected nonstationary model.} \label{fig:lead_covparms} \end{figure} \section{Summary and open problems} \label{section:conclusion} This paper undertook to generalize the NNGP hierarchical model to nonstationary covariance structures. We delivered a proposal that takes into account the problematic aspects of computational cost, model selection, and interpretation of the parameters. Along the way, we developed various tools that could be useful in other contexts. We found a flexible and interpretable parametrization for local anisotropy, embedding the nonstationary models in a coherent family \textit{à la} Russian doll. Thanks to the logarithmic transform, the user can easily interpret the parameters. This family of models seems quite resilient to over-modeling, could be useful in models that do not use NNGPs, and might combine well with additional regularizing priors. Another contribution is a closed form for the derivatives of the NNGP density with respect to spatially-indexed covariance parameters. Those derivatives can be used beyond HMC, for example in MAP or maximum-likelihood approaches.
Finally, in spite of problems with real-life examples, we took a step toward showing that nested AS-IS can be a viable strategy for multi-layered hierarchical models with large data augmentations. Notwithstanding our success in linking environmental covariates and spatial basis functions to the covariance structure, a problem that needs further investigation is the behavior of the matrix-log NNGP model on real data. The pathological behavior of the model may be due to an intrinsic incompatibility with the data, or to a lack of robustness of the MCMC implementation. It is worth noting that if the spatial basis functions are obtained from a GP covariance matrix, for example Predictive Process basis functions \citep{PP, coube2021mcmc} or Karhunen-Loève basis functions \citep{Handbook_Spatial_Stats}, then one only needs to put a Normal prior on the regression coefficients to obtain a low-rank GP prior. A low-rank log-GP or matrix log-GP, with fewer parameters and over-smoothing of the random effects, might be a good start to tackle the computational problems encountered by the model. A possible extension is an implementation of the model in more than two dimensions. In particular, elliptic covariances in three dimensions might prove useful to quantify drifts, for example rain moving across a territory. The point to keep in mind is that ellipses in higher dimensions incur more differentiation, since the matrix logarithm of the range parameters will have $6$ coordinates instead of $3$. A computational scale-up, discussed below, may be necessary. This, and the prospect of coordinate spaces of dimension $3$ or more, lead us to the third point. Given that we have found the gradients of the model density, the option of \textit{Maximum A Posteriori} (MAP) estimation should be considered seriously. One good starting point is that the high-level parameters seem to have unimodal distributions. The MAP could be reached by turning the Gibbs sampler we presented into a coordinate descent algorithm. While an Empirical Bayes approach would suffice for applications such as prediction of the response variable or smoothing, finding credible intervals around the MAP would be an interesting challenge. The computational effort in flops and RAM that is not spent on MCMC could be re-invested in doing NNGP with a richer Vecchia approximation. \appendix \section{DEMONSTRATIONS} \label{section:demo} \subsection{Recursive conditional form of nonstationary NNGP} \label{subsection:demo_recursive_NNGP} We begin with the conditional density on the left hand side of (\ref{equation:nonstat_NNGP}) and proceed as below: \begin{multline}\label{equation: nonstat_NNGP_deriv1} \tilde f(w(s_i)\,|\, w({\cal S}_{i-1}), \theta(\mathcal{S})) = f(w(s_i)\,|\, w(pa(s_i)),\theta(\mathcal{S})) \\ = f(w(s_i\cup pa(s_i))\,|\, \theta(\mathcal{S}))/ f(w(pa(s_i))\,|\, \theta(\mathcal{S})). \end{multline} The joint distributions $f(w(s_i\cup pa(s_i))\,|\, \theta(\mathcal{S}))$ and $f(w(pa(s_i))\,|\, \theta(\mathcal{S}))$ are fully specified by $\Sigma(s_i\cup pa(s_i), \theta(\mathcal{S}))$ and $\Sigma(pa(s_i), \theta(\mathcal{S}))$.
Since the covariance functions given by \eqref{equation:covfun_aniso} or \eqref{equation:covfun_iso} specify $\Sigma(s_i, s_j)$ using only $\{\theta(s_i), \theta(s_j)\}$ instead of $\theta(\mathcal{S})$, we obtain $f(w(s_i\cup pa(s_i))\,|\, \theta(\mathcal{S})) = f(w(s_i\cup pa(s_i))\,|\, \theta(s_i\cup pa(s_i)))$ and $f(w(pa(s_i))\,|\, \theta(\mathcal{S})) = f(w(pa(s_i))\,|\, \theta(pa(s_i)))$, which is equal to $f(w(pa(s_i))\,|\, \theta(s_i \cup pa(s_i)))$ since $w(pa(s_i))$ is conditionally independent of $\theta(s_i)$ given $\theta(pa(s_i))$. Substituting these expressions into the right hand side of (\ref{equation: nonstat_NNGP_deriv1}) yields (\ref{equation:nonstat_NNGP}). \subsection{Marginal variance of nonstationary NNGP} \label{subsection:demo_variance_nngp} Let $\Sigma({\cal S}) = (K(s_i,s_j))$ and let $\Sigma_0({\cal S}) = (K_0(s_i,s_j))$ be the spatial covariance and correlation matrices, respectively, constructed from the nonstationary covariance function $K(s_i,s_j)$ and the corresponding correlation function $K_0(s_i,s_j)$. Let $\tilde{\Sigma}({\cal S})^{-1} = \tilde{R}^{\top}\tilde{R}$ be the NNGP precision matrix using the nonstationary covariance $K(\cdot)$, where $\tilde{R}$ is the NNGP factor of $\tilde{\Sigma}({\cal S})^{-1}$. Analogously, let $\tilde{R}_0$ be the NNGP factor obtained using the nonstationary correlation $\Sigma_0$ from \eqref{equation:nonstat_covariance} and either \eqref{equation:covfun_aniso} or \eqref{equation:covfun_iso}. If $\bar \sigma_i = \sqrt{\mbox{var}(w(s_i)\,|\, w(pa(s_i)))}$, then a standard expression is $\bar \sigma_i = \left(\Sigma(s_i, s_i) - \Sigma(s_i, pa(s_i)) \Sigma(pa(s_i), pa(s_i))^{-1} \Sigma(pa(s_i), s_i)\right)^{1/2}$. Therefore, the $i$-th row of $\tilde R$ comprises (i) $1/\bar{\sigma}_i$ at index $i$; (ii) $-\Sigma(s_i, pa(s_i)) \Sigma(pa(s_i), pa(s_i))^{-1}/\bar \sigma_i$ at indices corresponding to $pa(s_i)$; and (iii) $0$ elsewhere. Letting $\bar{\sigma}_{0i}$ be the conditional standard deviation obtained from $\Sigma_0$ instead of $\Sigma$, it is easily seen that $\bar{\sigma}_i = \sigma(s_i)\bar{\sigma}_{0i}$ using the elementary observations that $\sigma(s_i)^2 = \Sigma(s_i,s_i)$ (by definition of $\sigma(s_i)$) and that $\Sigma({\cal A}, {\cal B}) = \textrm{diag}(\sigma({\cal A}))\Sigma_{0}({\cal A}, {\cal B})\textrm{diag}(\sigma({\cal B}))$, where ${\cal A}$ and ${\cal B}$ are any two non-empty subsets of ${\cal S}$.
Thus, \begin{equation} \begin{split} \bar \sigma_i &= \left(\Sigma(s_i, s_i) - \Sigma(s_i, pa(s_i)) \Sigma(pa(s_i), pa(s_i))^{-1} \Sigma(pa(s_i), s_i)\right)^{1/2}\\ &= \sigma(s_i)\left(\Sigma_0(s_i, s_i) - \Sigma_0(s_i, pa(s_i)) \Sigma_0(pa(s_i), pa(s_i))^{-1}\Sigma_0(pa(s_i), s_i)\right)^{1/2}\\ & = \sigma(s_i)\bar \sigma_{0i} \end{split} \end{equation} \begin{comment} \[ \begin{array}{lll} \bar \sigma_i & = & \left( \Sigma(s_i, s_i) - \Sigma(s_i, pa(s_i)) \Sigma(pa(s_i), pa(s_i))^{-1} \Sigma(pa(s_i), s_i)\right)^{1/2}\\ & = & \left(\sigma(s_i)^2\Sigma_0(s_i, s_i) - \sigma(s_i)\Sigma_0(s_i, pa(s_i)\right)~\textrm{diag}(\sigma(pa(s_i)))\\ & & ~\textrm{diag}(\sigma(pa(s_i)))^{-1}\Sigma_0(pa(s_i), pa(s_i))^{-1}~\textrm{diag}(\sigma(pa(s_i)))^{-1} \\ & & ~\textrm{diag}(\sigma(pa(s_i))) \Sigma_0(pa(s_i), s_i)\sigma(s_i))^{1/2}\\ & = & \sigma(s_i)(\bar \sigma_0)_i \end{array} \] \end{comment} Using this relationship, we can express the coefficients of row $i$ in $\tilde{R}$ as (i) $1/(\bar{\sigma}_{0i}\sigma(s_i))$ at position $i$; (ii) $- \Sigma_0(s_i, pa(s_i)) \Sigma_0(pa(s_i), pa(s_i))^{-1}\textrm{diag}(\sigma(pa(s_i)))^{-1}/\bar{\sigma}_{0i} = \tilde{R}_{0}(i,pa(i))\textrm{diag}(\sigma(pa(s_i)))^{-1}$ at the indices corresponding to $pa(s_i)$, which means that $\tilde{R}(i,j) = \tilde{R}_0(i,j)/\sigma(s_j)$ for all $s_j\in pa(s_i)$; and (iii) $0$ elsewhere. Comparing elements we obtain $\tilde{R} = \tilde{R}_{0}\textrm{diag}(\sigma({\cal S}))^{-1}$. \section{KL divergence between nonstationary NNGP and full nonstationary GP} \label{sec:details_KL} \subsection{Spatially indexed variances} The Kullback-Leibler (KL) divergence between two multivariate normal distributions $\mathcal{N}(\mu_1, \Sigma_1)$ and $\mathcal{N}(\mu_2,\Sigma_2)$ is $$ KL\left(\mathcal{N}_1 \parallel \mathcal{N}_2\right) = \frac{1}{2}\left( \operatorname{tr}\left(\Sigma_2^{-1}\Sigma_1\right) + \left(\mu_2 - \mu_1\right)^{\scriptsize{\mathrm{T}}} \Sigma_2^{-1}\left(\mu_2 - \mu_1\right) - k + \ln\left(\frac{|\Sigma_2|}{|\Sigma_1|}\right) \right).$$ Recalling that the NNGP precision is given by $\tilde{\Sigma}({\cal S})^{-1} = \textrm{diag}(\sigma(\mathcal{S}))^{-1}(\tilde R_0^{\scriptsize{\mathrm{T}}}\tilde R_0)\textrm{diag}(\sigma(\mathcal{S}))^{-1}$, that the full GP's covariance matrix is $\textrm{diag}(\sigma(\mathcal{S}))\Sigma_0 \textrm{diag}(\sigma(\mathcal{S}))$, and that the NNGP and GP mean are equal, the KL divergence between a nonstationary full GP with zero mean and the NNGP is $\displaystyle \frac{1}{2}\left(\operatorname{tr}\left(\Sigma_0\tilde{R}_0^{\scriptsize{\mathrm{T}}}\tilde{R}_0\right) - n + \ln\left(\frac{|\tilde{\Sigma}_0|}{|\Sigma_0|}\right)\right)$ with $\tilde{\Sigma}_0 = (\tilde{R}_0^{\scriptsize{\mathrm{T}}}\tilde{R}_0)^{-1}$, which is exactly the KL divergence between $N(0, \Sigma_0)$ and $N(0, \tilde{\Sigma}_0)$. It follows that spatially indexed variances do not affect the KL divergence between nonstationary full GPs and NNGPs. \subsection{Scalar range case} \label{subsection:details_KL_circ} Synthetic data sets with $10000$ observations were simulated on a domain with size $5\times 5$. The spatially variable log-range had mean $\log(.1)$. Three factors were tested: \begin{itemize} \item the intensity of nonstationarity, by letting the log-range's variance take different values ($0.1$, $0.3$, and $0.5$). \item the ordering (coordinate, max-min, random, middle-out). \item the number of parents ($5$, $10$, $20$). \end{itemize} Using a linear model with interactions shows that the first factor has almost no role. The most important factor is the number of parents.
Eventually, the NNGP approximation can be improved using the max-min and random order, joining \citet{Guinness_permutation_grouping}'s conclusions for stationary models in $2$ dimensions. See table \ref{tab:KL_circ} for more details about the effects of the factors. \subsection{Elliptic range case} \label{subsection:details_KL_elliptic} Synthetic data sets with $10000$ observations were simulated on a domain with size $5\times 5$. The spatially variable log-matrix range had mean $log(.1) \times I_2/\sqrt{2}$. Three factors were tested: \begin{itemize} \item the intensity of nonstationarity, by letting the variance of the coordinates of the log-range matrix take different values: ($0.1\times I_3$, $0.3\times I_3$, and $0.5\times I_3$). \item the ordering (coordinate, max-min, random, middleout). \item the number of parents ($5$, $10$, $20$). \end{itemize} The outcome is treated with a linear model, whose summary is presented in table \ref{tab:KL_elliptic}. Contrary to the first experiment, the intensity of the nonstationarity does play a role. \begin{table}[H] \centering \caption{Summary of linear regression of the KL divergence, in the scalar range case. } \label{tab:KL_circ} \footnotesize{The reference case has coordinate ordering, $5$ nearest neighbors, and a log-range variance of $0.1$} \\ \begin{tabular}{rrrrr} \hline & Estimate & Std. Error & t value & Pr($>|t|$) \\ \hline (Intercept) & 186.5156 & 0.4517 & 412.96 & 0.0000 \\ nonstat.intensity 0.3 & 1.4745 & 0.2957 & 4.99 & 0.0000 \\ nonstat.intensity 0.5 & 3.2509 & 0.2957 & 10.99 & 0.0000 \\ ordering max min & -47.6966 & 0.5914 & -80.66 & 0.0000 \\ ordering middle out & -4.7491 & 0.5914 & -8.03 & 0.0000 \\ ordering random & -47.2462 & 0.5914 & -79.89 & 0.0000 \\ 10 nearest neighbors & -135.9274 & 0.5914 & -229.86 & 0.0000 \\ 20 nearest neighbors & -176.4046 & 0.5914 & -298.31 & 0.0000 \\ max min: 10 & 25.9771 & 0.8363 & 31.06 & 0.0000 \\ middle out: 10 & 1.9064 & 0.8363 & 2.28 & 0.0228 \\ random: 10 & 25.7530 & 0.8363 & 30.79 & 0.0000 \\ max min: 20 & 41.4488 & 0.8363 & 49.56 & 0.0000 \\ middle out: 20 & 3.3930 & 0.8363 & 4.06 & 0.0001 \\ random: 20 & 40.9979 & 0.8363 & 49.02 & 0.0000 \\ \hline \end{tabular} \end{table} \begin{table}[H] \centering \caption{Summary of linear regression of the KL divergence, in the elliptic range case. } \label{tab:KL_elliptic} \footnotesize{The reference case has coordinate ordering, $5$ nearest neighbors, and a log-range variance of $0.1$} \\ \begin{tabular}{rrrrr} \hline & Estimate & Std. 
Error & t value & Pr($>|t|$) \\ \hline (Intercept) & 243.6011 & 1.8448 & 132.05 & 0.0000 \\ nonstat.intensity 0.3 & 23.8570 & 1.2077 & 19.75 & 0.0000 \\ nonstat.intensity 0.5 & 50.5490 & 1.2077 & 41.86 & 0.0000 \\ ordering max min & -50.5886 & 2.4154 & -20.94 & 0.0000 \\ ordering middle out & -0.0311 & 2.4154 & -0.01 & 0.9897 \\ ordering random & -50.6093 & 2.4154 & -20.95 & 0.0000 \\ 10 nearest neighbors & -176.3113 & 2.4154 & -72.99 & 0.0000 \\ 20 nearest neighbors & -238.4647 & 2.4154 & -98.73 & 0.0000 \\ max min: 10 & 19.4831 & 3.4159 & 5.70 & 0.0000 \\ middle out: 10 & -1.6965 & 3.4159 & -0.50 & 0.6195 \\ random: 10 & 19.5230 & 3.4159 & 5.72 & 0.0000 \\ max min: 20 & 38.3413 & 3.4159 & 11.22 & 0.0000 \\ middle out: 20 & -1.6284 & 3.4159 & -0.48 & 0.6337 \\ random: 20 & 38.3553 & 3.4159 & 11.23 & 0.0000 \\ \hline \end{tabular} \end{table} \section{Details about interweaving} \subsection{Interweaving} \label{subsection:interweaving} Interweaving is a method introduced by \cite{yu2011center}, which improves the convergence speed of models relying on data augmentation. Usually various parametrizations of the data augmentation are available. For example, in the context of our NNGP model, the latent field $$w~\stackrel{{a~priori}}{\sim}~ \mathcal{N}(0, \tilde\Sigma)$$ with $\tilde\Sigma = (\tilde R^T\tilde R)^{-1}$ can be re-parametrized as $$w^* = \tilde Rw~\stackrel{{a~priori}}{\sim}~\mathcal{N}(0, I_n).$$ The component-wise interweaving strategy of \cite{yu2011center} can be applied when two data augmentations $w_1$ and $w_2$ have a joint distribution $[\theta, w_1, w_2]$ (even if it is degenerate) such that its marginals $[\theta, w_1]$ and $[\theta, w_2]$ correspond to the two models with the different data augmentations. It takes advantage of the discordance between the two parametrizations to construct the following step in order to sample $\theta^{t+1}$: $$ [\theta, w_2|w_1^t, \ldots] \rightarrow[\theta^{t+1}, w_1^{t+0.5}|w_2, \ldots], $$ ``$\ldots$'' being the other parameters of the model. Since all the draws are done from full conditional distributions, the target joint distribution is always preserved. Joint sampling of the parameter and the data augmentation is much easier to implement when decomposed as: $$ \underbrace{ [\theta|w_1^t, \ldots]\rightarrow [w_2|w_1^t, \theta, \ldots]}_ {[\theta, w_2|w_1^t, \ldots]} \rightarrow \underbrace{[\theta^{t+1}|w_2, \ldots] \rightarrow[w_1^{t+0.5}|w_2, \theta^{t+1},\ldots]}_{[\theta^{t+1}, w_1^{t+0.5}|w_2, \ldots]}. $$ It is possible that the joint distribution is degenerate as long as it is well-defined, so that $[w_2|\theta, w_1]$ and $[w_1^{t+0.5}|w_2, \theta^{t+1},\ldots]$ are often deterministic transformations (in our application they are). For this reason even though the data augmentation is changed at the end of the sampling of $\theta$, $w$ still has to be updated in a separate step in order to have an irreducible chain: that is why we indexed it by $t+0.5$. The method builds its efficiency upon the fact that the parameter $\theta$ sampled in the first step, $[\theta, w_2|w_1^t, \ldots]$, is not used later in the algorithm. This first step is therefore equivalent to $[w_2|w_1^t, \ldots]$. If there is little correlation between the two parametrizations $w_1$ and $w_2$, the subsequent draw $[\theta^{t+1}|w_2, \ldots]$ can produce a $\theta^{t+1}$ far from $\theta^t$ even if there is a strong correlation between $\theta$ and either or both $w_1$ and $w_2$. 
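To fix ideas, here is a schematic sketch (ours; the callable names are illustrative) of one such interweaving sweep: the intermediate draw of $\theta$ is discarded, the changes of parametrization are deterministic, and the data augmentation is refreshed in a separate step.
\begin{verbatim}
# Schematic sketch of one interweaving sweep, as described above.
# All samplers and re-parametrization maps are supplied as callables.
def interweaving_sweep(w1, sample_theta_given_w1, sample_theta_given_w2,
                       w1_to_w2, w2_to_w1, sample_w1_given_theta):
    theta_tmp = sample_theta_given_w1(w1)   # [theta | w1, ...], discarded later
    w2 = w1_to_w2(w1, theta_tmp)            # [w2 | theta, w1, ...], deterministic
    theta_new = sample_theta_given_w2(w2)   # [theta^{t+1} | w2, ...], kept
    w1_half = w2_to_w1(w2, theta_new)       # [w1^{t+0.5} | w2, theta^{t+1}, ...]
    w1_new = sample_w1_given_theta(theta_new, w1_half)  # separate update of w
    return theta_new, w1_new
\end{verbatim}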
The strategy being based on the discordance between two parametrizations, it is a good choice to pick an ancillary-sufficient couple, giving an Ancillary-Sufficient Interweaving Strategy (AS-IS). Following the terminology of \citet{yu2011center}, $w$ is sufficient when \textit{a posteriori} $(\theta | w, z) = (\theta | w)$, $z$ being the observed data and $\theta$ being the target high-level parameter. It is ancillary when it is \textit{a priori} independent from $\theta$. AS-IS already proved its worth for GP models: \citet{filippone2013comparative} show empirically that updating covariance parameters in a Gaussian Process model benefits from interweaving the natural parametrization $w$ (sufficient) and the whitened parametrization $w^*$ (ancillary), while \cite{coube2020improving} use centered and non-centered parametrizations to efficiently sample the coefficients associated with the fixed effects. \subsection{Nested interweaving for high-level parameters} \label{subsection:nested_interweaving} The problem in our model is that there are latent fields at various layers of the model. Nested AS-IS is envisioned by \citet{yu2011center} for such models, even though the authors do not provide an application to realistic models. Consider a high-level parameter concerning the log-NNGP distributions of the covariance parameters. This high-level parameter may be the marginal variance of a log-NNGP distribution ($\gamma_\theta$, $\gamma_\tau$ from \eqref{equation:hierarchical_nonstat_nngp}, or $S_\theta$ from \eqref{equation:hierarchical_nonstat_nngp_modified}) or the regression coefficients ($\beta_\theta$, $\beta_\tau$ from \eqref{equation:hierarchical_nonstat_nngp}, or $B$ from \eqref{equation:hierarchical_nonstat_nngp_modified}). This parameter is denoted $\kappa$. Denote by $W_1$ and $W_2$ the parametrizations for the corresponding log-NNGP field of covariance parameters. Those parametrizations form a sufficient-ancillary pair, and can be a natural (sufficient) and whitened (ancillary) couple when we are working with marginal variance parameters \citep{filippone2013comparative}, or a centered (sufficient) and natural (ancillary) pair when working with the regression coefficients \citep{coube2020improving}. Finally, denote by $w$ and $w^*$ the natural and whitened parametrizations, respectively, of the NNGP latent field from \eqref{equation:hierarchical_nonstat_nngp_modified}(a). A nested interweaving step aiming to update $\kappa$ can be devised as \begin{equation} \label{equation:nested_inteweaving} \begin{array}{c} \\ \end{array} \underbrace{ \begin{array}{c}[\kappa, W_2, w^* | W_1, w, \ldots] \rightarrow [\kappa, W_1, w^* | W_2, w, \ldots]\\ \swarrow \\ ~\hspace{-3pt}[\kappa, W_2, w | W_1, w^*, \ldots] \rightarrow [\kappa, W_1, w | W_2, w^*, \ldots] \end{array} }_{\text{interweaving $W$}} \hspace{-20pt} \left.\begin{array}{c} \\ \\ \\ \end{array}\right\}\rotatebox[origin = c]{90}{\scriptsize{\tiny interweaving $w$}}\\ \end{equation} As before, it is much easier to sample sequentially; for example, the blocked draw \\ $[\kappa, W_2, w^* | W_1, w, \ldots]$ can be written as $$[\kappa| W_1, w, \ldots] \rightarrow \underbrace{[W_2|\kappa, W_1, w, \ldots]}_{\text{deterministic}} \rightarrow \underbrace{[w^*|\kappa, W_1, W_2, w, \ldots]}_{\text{deterministic}}. $$ As before, $W$ needs to be updated later, using an interweaving of parametrizations of $w$.
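The order of the four blocked draws of \eqref{equation:nested_inteweaving} can be made explicit with the following schematic sketch (ours; the callable names are illustrative, and \texttt{recompute} stands for the deterministic changes of parametrization performed after each draw of $\kappa$):
\begin{verbatim}
# Schematic sketch of the nested interweaving step of eq. (nested_inteweaving).
# `params` holds the four parametrizations W1, W2, w, w_star.
def nested_interweaving_step(kappa, params, sample_kappa_given, recompute):
    for w_key in ("w", "w_star"):           # outer interweaving, over w
        for W_key in ("W1", "W2"):          # inner interweaving, over W
            kappa = sample_kappa_given(params[W_key], params[w_key])
            # deterministically re-express the other parametrizations given kappa
            params = recompute(params, kappa, keep=(W_key, w_key))
    return kappa, params
\end{verbatim}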
\paragraph{Centering-upon-whitening nested interweaving for the log-NNGP regression coefficients.} \citet{coube2020improving} show that updating the regression coefficients of \eqref{equation:nonstat_gaussian} using an interweaving of $w$ and $w_{center} = w + X\beta^T$ considerably improves the behavior of the chains, in particular when some covariates have spatial coherence. We apply this strategy to update $\beta_\alpha$, $\beta_{\tau^2}$ and $\beta_{\sigma}$. In the case of the scalar range and the latent field's marginal variance, we are using nested interweaving. The two relevant parametrizations for the NNGP latent field of \eqref{equation:nonstat_gaussian} are the natural parametrization and the whitened latent field $w^* = \tilde R w$. So, for $\beta_\alpha$ and $\beta_{\sigma}$, the sampling step derived from \eqref{equation:nested_inteweaving} is $$[\beta_\theta|W, w, \ldots] \rightarrow [\beta_\theta|W_{centered}, w, \ldots] \rightarrow [\beta_\theta|W, w^*, \ldots] \rightarrow [\beta_\theta|W_{centered}, w^*, \ldots]. $$ For the sake of simplicity we do not write the implicit updates of the latent fields at each sampling of $\beta_\theta$. $W_{centered}$ being a sufficient augmentation, sampling from $[\beta_\theta|W_{centered}, w, \ldots]$ is the same as sampling from $[\beta_\theta|W_{centered}]$. The procedure is described in \citet{coube2020improving}. As for the updates conditionally on $W$, they can be done with a usual Metropolis-within-Gibbs sweep over the components of $\beta_\theta$ or with a Hybrid Monte-Carlo step detailed in Section~\ref{sec:HMC_nonstat}. \paragraph{Whitening-whitening nested interweaving for the log-NNGP variance.} In the case of the marginal variance $\sigma_\theta \in \gamma_\theta$ (see \eqref{equation:hierarchical_nonstat_nngp}(g)) of a log-NNGP prior, two parametrizations of $W_\theta$ are available. The sufficient parametrization is the natural parametrization, while the ancillary parametrization is the whitened $W^*_\theta = \tilde R_{0_\theta}W_\theta/\sigma_\theta$, $\tilde R_{0_\theta}$ being the hyperprior correlation NNGP factor. As before, for the latent field, we use $w$ and $w^*$. For the marginal variance $\sigma^2$, the circular range $\alpha$, and the elliptic range $A$, the step is: $$[\sigma_\theta|W_\theta, w, \ldots] \rightarrow [\sigma_\theta|W_\theta^*, w, \ldots] \rightarrow [\sigma_\theta|W_\theta, w^*, \ldots] \rightarrow [\sigma_\theta|W_\theta^*, w^*, \ldots]. $$ Since $W_\theta$ is a sufficient statistic for $\sigma_\theta$, $[\sigma_\theta|W_\theta, w, \ldots]$ or $[\sigma_\theta|W_\theta, w^*, \ldots]$ are equivalent to $[\sigma_\theta|W_\theta]$. The procedure to update a marginal variance with such a parametrization is well-known \citep{PP, NNGP}. When the ancillary parametrization $W_\theta^*$ is used, a Metropolis-Hastings step or an HMC step can be used. As before, only the sufficient parametrization of $w$ is used for the variance of the Gaussian noise. The step is: $$[\sigma_{\tau}|W_{\tau}, w, \ldots] \rightarrow [\sigma_{\tau}|W_{\tau}^*, w, \ldots].
$$ \section{Gradients for HMC updates of the covariance parameters} \subsection{General form of the gradient with respect to a whitened parameter field.} \label{subsection:general_gradient} Start from $$H(W_\lambda) =~ -\log(f(W_\lambda(\mathcal{S}) | \zeta_\lambda)) -g(W_\lambda(\mathcal{S})) ~~\propto~~ W_\lambda^T \zeta_\lambda^{-1} W_\lambda /2~ -g(W_\lambda(\mathcal{S})), $$ $\zeta_\lambda$ being the covariance matrix induced by the log-NNGP prior. Find the gradient of $H(W_\lambda)$ with respect to $W_\lambda$: $$\nabla_{W_\lambda} H(W_\lambda) = \zeta_\lambda^{-1} W_\lambda - \nabla_{W_\lambda} g(W_\lambda(\mathcal{S})).$$ Then, apply the Jacobian ($J$) chain rule $\nabla \psi\circ \phi (x) = (J^T \phi) (x) \cdot (\nabla \psi)(\phi(x))$ with $\psi = H(\cdot)$ and $\phi (W_\lambda^*) = \zeta_\lambda^{1/2}W_\lambda^*$. With $J^T \left(\zeta_\lambda^{1/2}W_\lambda^*(\mathcal{S})\right) = \zeta_\lambda^{T/2} $, we obtain $$\nabla_{W_\lambda^*} H(W_\lambda) = \zeta_\lambda^{T/2}\zeta_\lambda^{-1}\zeta_\lambda^{1/2} W_\lambda^* - \zeta_\lambda^{T/2} \nabla_{W_\lambda} g(\zeta_\lambda^{1/2}W_\lambda^*(\mathcal{S})) = W_\lambda^* - \zeta_\lambda^{T/2} \nabla_{W_\lambda} g(W_\lambda(\mathcal{S})).$$ \subsection{Gradient of the log-density of the observations with respect to the latent NNGP field.} \label{subsection:gradient_nngp_field} A technical point to obtain the gradient of the log-density of the observations with respect to the latent field is that there can be several observations at the same spatial site. Consider a site $s = s_j\in\mathcal{S}$, and denote by $obs(s) = \{i : M_{i,j}=1\}$, with $M$ as in \eqref{equation:hierarchical_nonstat_nngp} (a), the indices of the observations made at the spatial site $s$. We obtain, by virtue of the conditional independence of the observations of $z$ in \eqref{equation:hierarchical_nonstat_nngp} (a), $$ \begin{array}{ll} \frac{\partial l(z| w(\mathcal{S}), \beta, \tau(\mathcal{S}))}{\partial w(s)} &= \frac{\partial l(z_{obs(s)}| w(\mathcal{S}), \beta, \tau_{obs(s)})}{\partial w(s)}\\ &= -\frac{\partial\Sigma_{x\in obs(s)}(z(x)-X(x)\beta^T-w(s))^2/2\tau(x)^2 }{\partial w(s)} \\ &= \Sigma_{x\in obs(s)}(z(x)-X(x)\beta^T-w(s))/\tau(x)^2. \end{array} $$ \subsection{Gradient with respect to $W_{\sigma}$} \label{subsection:gradient_sigma2} In the following, assume a variance parametrization $\log(\sigma^2(s)) = W_{\sigma}(s) +X_{\sigma}(s)\beta_{\sigma}^T$. \paragraph{Sufficient augmentation.} When sufficient augmentation is used, the marginal variance intervenes in the NNGP density of the latent field. The resulting gradient is \begin{equation} -\nabla_{W_{\sigma}}g_{\sigma}^{sufficient}(W_{\sigma}) = (1/2, \ldots, 1/2) - \sigma^{-1}(\mathcal{S}) ~\circ~ \left(\textrm{diag}(w)\tilde R_0^T\tilde R_0~\textrm{diag}(w) ~~~\sigma^{-1}(\mathcal{S}) \right)/2. \end{equation} We start from $$g_{\sigma}^{sufficient}(W_{\sigma}) = \log\tilde f(w(\mathcal{S})| \alpha, \sigma(\mathcal{S})),$$ $\tilde f(\cdot)$ being the NNGP density from \eqref{equation:hierarchical_nonstat_nngp} (b). From \eqref{equation:nonstat_NNGP_variance_prod} and \eqref{equation:nonstat_NNGP_variance_det}, we write $$\tilde f(w(\mathcal{S})| \alpha, \sigma(\mathcal{S})) = \exp\left(-\sigma^{-1}(\mathcal{S})^T~\textrm{diag}(w)\tilde R_0^T\tilde R_0~\textrm{diag}(w)\sigma^{-1}(\mathcal{S})/2\right) \Pi_{i=1}^n (\tilde R_0)_{i,i}/\sigma(s_i).
$$ Passing to the negated log-density $$ \begin{array}{lll} -\log\left(\tilde f(w(\mathcal{S})| \alpha, \sigma(\mathcal{S}))\right) & = & cst +\Sigma_{i=1}^n \log(\sigma(s_i)) ~~ +\\ & & \sigma^{-1}(\mathcal{S})^T~\textrm{diag}(w)\tilde R_0^T\tilde R_0~\textrm{diag}(w)\sigma^{-1}(\mathcal{S})/2.\\ \end{array} $$ On the one hand, $$\nabla_{W_{\sigma}}\Sigma_{i=1}^n \log(\sigma(s_i)) = \nabla_{W_{\sigma}}\Sigma_{i=1}^n \log( (\sigma^2(s_i))^{1/2}) = \nabla_{W_{\sigma}}\Sigma_{i=1}^n \log( \sigma^2(s_i))/2 = (1/2, \ldots, 1/2). $$ On the other hand, using $\sigma^{-1}(s) = (\sigma^{2}(s))^{-1/2} = \exp(-(W_{\sigma}(s) +X_{\sigma}(s)\beta_{\sigma}^T)/2)$, we can write the Jacobian of $\sigma^{-1}$ with respect to $W_{\sigma}$: $$J_{W_{\sigma}}\sigma^{-1}(\mathcal{S}) = J_{W_{\sigma}}\exp(-(W_{\sigma}(\mathcal{S}) +X_{\sigma}(\mathcal{S})\beta_{\sigma}^T)/2) = -\textrm{diag}(\sigma^{-1}(\mathcal{S})/2). $$ We also find the following gradient: $$\nabla_{\sigma^{-1}} \sigma^{-1}(\mathcal{S})^T~\textrm{diag}(w)\tilde R_0^T\tilde R_0~\textrm{diag}(w)\sigma^{-1}(\mathcal{S})/2 = \textrm{diag}(w)\tilde R_0^T\tilde R_0~\textrm{diag}(w)\sigma^{-1}(\mathcal{S}).$$ With the Jacobian chain rule, we combine the two previous formulas to find $$-\nabla_{W_{\sigma}} \sigma^{-1}(\mathcal{S})^T~\textrm{diag}(w)\tilde R_0^T\tilde R_0~\textrm{diag}(w)\sigma^{-1}(\mathcal{S})/2 = \sigma^{-1}(\mathcal{S}) ~\circ~ \left(\textrm{diag}(w)\tilde R_0^T\tilde R_0~\textrm{diag}(w) ~~~\sigma^{-1}(\mathcal{S}) \right)/2, $$ with $\circ$ the Hadamard product. Combining the two terms, we have the result. \paragraph{Ancillary augmentation.} When ancillary augmentation is used, the marginal variance affects the likelihood of the observed field through the latent field. The gradient is \begin{equation} \label{equation:gradient_marginal_variance_ancillary} -\nabla_{W_{\sigma}} g_{\sigma}^{ancillary}(W_{\sigma}) = - \nabla_{w}~ l(z(\mathcal{S})|w, \beta, \tau) ~\circ (w/2), \end{equation} $\circ$ being the Hadamard product, and $\nabla_{w}l(z(\mathcal{S})|w(\mathcal{S}), \beta, \tau)$ being discussed in Section~\ref{subsection:gradient_nngp_field}. We start from $$-g_{\sigma}^{ancillary}(W_{\sigma}) = -l(z|w(\mathcal{S}) = \tilde R^{-1}w^* (\mathcal{S}) , \beta, \tau),$$ from \eqref{equation:hierarchical_nonstat_nngp} (a). The marginal variance affects the Gaussian density $l(\cdot)$ through $w(\mathcal{S}) = \tilde R^{-1}w^* (\mathcal{S})$. Using the Jacobian chain rule twice, $$\nabla_{W_{\sigma}(\mathcal{S})} l(z|w(\mathcal{S}) , \beta, \tau) = J^T_{W_{\sigma}(\mathcal{S})}\sigma^2(\mathcal{S}) J^T_{\sigma^2(\mathcal{S})}w(\mathcal{S}) \nabla_{w(\mathcal{S})} l(z|w(\mathcal{S}), \beta, \tau).$$ From $\log(\sigma^2(s)) = W_{\sigma}(s) +X_{\sigma}(s)\beta_{\sigma}^T$, we have $$J^T_{W_{\sigma}(\mathcal{S})}\sigma^2(\mathcal{S}) = \textrm{diag}(\sigma^2(\mathcal{S})).$$ From the use of the ancillary parametrization, we have $$J^T_{\sigma^2(\mathcal{S})}w(\mathcal{S}) = J^T_{\sigma^2(\mathcal{S})}\left((\tilde R_0^{-1} w^*)\circ \sigma(\mathcal{S})\right)= \textrm{diag}((\tilde R_0^{-1} w^*)\circ \sigma^{-1}(\mathcal{S})/2). $$ Combining the two previous expressions, we have $$J^T_{W_{\sigma}(\mathcal{S})}\sigma^2(\mathcal{S}) J^T_{\sigma^2(\mathcal{S})}w(\mathcal{S}) = \textrm{diag}((\tilde R_0^{-1} w^*)\circ \sigma(\mathcal{S})/2) = \textrm{diag}(w/2), $$ eventually leading to the result.
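The two ingredients above combine into a few lines; the following sketch (ours; the array layout, with one entry per observation and an index mapping observations to sites, is an assumption) evaluates $\nabla_w l$ as in Section~\ref{subsection:gradient_nngp_field} and then the ancillary gradient \eqref{equation:gradient_marginal_variance_ancillary}.
\begin{verbatim}
# Sketch of grad_w l and of eq. (gradient_marginal_variance_ancillary):
#   -grad_{W_sigma} g^{ancillary} = -(grad_w l) o (w / 2).
import numpy as np

def grad_w_loglik(z, Xbeta, w, tau2, site_index):
    """z, Xbeta, tau2: one entry per observation; w: one entry per site;
    site_index maps each observation to its spatial site."""
    resid = (z - Xbeta - w[site_index]) / tau2
    grad = np.zeros_like(w)
    np.add.at(grad, site_index, resid)  # sum contributions of co-located observations
    return grad

def neg_grad_g_sigma_ancillary(z, Xbeta, w, tau2, site_index):
    return -grad_w_loglik(z, Xbeta, w, tau2, site_index) * (w / 2.0)
\end{verbatim}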
\subsection{General derivative of $\tilde R$ with respect to nonstationary range parameters}\label{subsection:derivative_tile_R} The aim is to find $\partial \tilde R / \partial W_\alpha(s_j)$ with $j \in 1 ,\ldots, n$. Let's focus on the $i^{th}$ row of $\tilde R$, noted $\tilde R_{i,\cdot}$. The index of the row $i$ can be different from $j$. To find the derivative of $\tilde R_{i,\cdot}$ with respect to $ W_\alpha(s_j)$, we need to use the covariance matrix between $s_i$ and its parents $pa(s_i)$. Let's note $\Sigma^i$ the covariance matrix corresponding to $(pa(s_i), s_i)$, and let's block it as $ \Sigma^i = \left[ \begin{array}{c|c} \Sigma^i_{11}&\Sigma^i_{12}\\ \hline \Sigma^i_{21} & \Sigma^i_{22} \end{array} \right] $ $\Sigma^i_{11}$ being a $m\times m$, with $m = |pa(s_i)|$, covariance matrix corresponding to $pa(s_i)$, and $\Sigma^i_{22}$ being a $1\times 1$ matrix corresponding to $s_i$. From its construction, $\tilde R_{i,\cdot}$ has non-null coefficients only for the column entries that correspond to $s_i$ and its parents $pa(s_i)$. Therefore there is no need to compute the gradient but for those coefficients. The diagonal element $\tilde R_{i,i}$ has value $1/\bar \sigma_i$, $\bar \sigma_i$ being the standard deviation of $w(s_i)$ conditionally on $w(pa(s_i))$. The elements that correspond to $pa(s_i)$ have value $-\Sigma^i_{21}(\Sigma^i_{11})^{-1}/\bar\sigma_i$. \\ Let's start by the diagonal coefficient $\tilde R _{i,i}$: \\ $ \begin{array}{lllr} \partial(\tilde R_{ii})/\partial W_\alpha(s_j)& = & \partial((\bar\sigma_i^2)^{-1/2})/\partial W_\alpha(s_j)\\ & & \text{(chain rule)} \\ & = & -(\bar\sigma_i^{-3}/2) \times \partial(\bar\sigma_i^2)/\partial W_\alpha(s_j)\\ & & \text{(using conditional variance formula)}\\ & = & -(\bar\sigma_i^{-3}/2)\times \partial(\Sigma^i_{22} - \Sigma^i_{21}(\Sigma^i_{11})^{-1}\Sigma^i_{12} )/\partial W_\alpha(s_j) \\ & & \text{(product rule)} \\ &=& -(\bar\sigma_i^{-3}/2)\times \partial(\Sigma^i_{22})/\partial W_\alpha(s_j) \\ & &+(\bar\sigma_i^{-3})~\times \partial(\Sigma^i_{21})/\partial W_\alpha(s_j)(\Sigma^i_{11})^{-1}\Sigma^i_{12} \\ & &+(\bar\sigma_i^{-3}/2)\times \Sigma^i_{21}\partial\left((\Sigma^i_{11})^{-1}\right)/\partial W_\alpha(s_j)\Sigma^i_{12} \\ & & \text{(derivative of inverse)} \\ &=& -(\bar\sigma_i^{-3}/2)\times \partial(\Sigma^i_{22})/\partial W_\alpha(s_j) &(a)\\ & &+(\bar\sigma_i^{-3})~\times \partial(\Sigma^i_{21})/\partial W_\alpha(s_j)(\Sigma^i_{11})^{-1}\Sigma^i_{12} & (b)\\ & &-(\bar\sigma_i^{-3}/2)\times \Sigma^i_{21}(\Sigma^i_{11})^{-1}\left(\partial(\Sigma^i_{11})/\partial W_\alpha(s_j)\right)(\Sigma^i_{11})^{-1}\Sigma^i_{12}& (c)\\ \\ \end{array} $\\ Let's now differentiate the coefficients that correspond to $pa(s_i)$, located on row $\tilde R_{i,\cdot}$ at the left of the diagonal: \\ $ \begin{array}{lllr} \partial(-\Sigma^i_{21}(\Sigma^i_{11})^{-1}/\bar\sigma_i)/\partial W_\alpha(s_j)& = & -\partial(\Sigma^i_{21}(\Sigma^i_{11})^{-1}\times \tilde R_{ii})/\partial W_\alpha(s_j) \\ & & \text{(product rule)}\\ & = & - \left(\partial\Sigma^i_{21}/\partial W_\alpha(s_j)\right)(\Sigma^i_{11})^{-1}\times \tilde R_{ii}\\ & & -\Sigma^i_{21}\left(\partial\left((\Sigma^i_{11})^{-1}\right)/\partial W_\alpha(s_j)\right)\times \tilde R_{ii} \\ & & -\Sigma^i_{21}(\Sigma^i_{11})^{-1}\partial\tilde R_{ii}/\partial W_\alpha(s_j) \\ & & \text{(derivative of inverse)}\\ & = & - \left(\partial\Sigma^i_{21}/\partial W_\alpha(s_j)\right)(\Sigma^i_{11})^{-1}\times \tilde R_{ii}& (d)\\ & & 
+\Sigma^i_{21}(\Sigma^i_{11})^{-1}\left(\partial\Sigma^i_{11}/\partial W_\alpha(s_j)\right)(\Sigma^i_{11})^{-1}\times \tilde R_{ii} & (e) \\ & & -\Sigma^i_{21}(\Sigma^i_{11})^{-1}\times \underbrace{\partial\tilde R_{ii}/\partial W_\alpha(s_j)}_{\text{already known}} &(f) \\ \end{array} $\\ From those derivatives, it appears that the elements that are needed to get the derivative of $\tilde R_{i,\cdot}$ are $(\Sigma^i)^{-1}$ and $\partial \Sigma^i/\partial W_\alpha(s_j)$ (with $s_j\in s_i \cup pa(s_i)$). The former is computed anyway in order to obtain $\tilde R$ and can be re-used. The latter can be approximated using finite differences: $$\partial \Sigma^i/\partial W_\alpha(s_j) \approx \left( \Sigma^i(W_\alpha(s_j)+dW_\alpha(s_j)) - \Sigma^i(W_\alpha(s_j))\right)/dW_\alpha(s_j).$$ \subsection{Computational cost of the derivative of $\tilde R$ with respect to nonstationary range parameters} \label{subsection:cost_derivative_tile_R} We can see that the differentiation of $\tilde R_{i,\cdot}$ is non-null only for $\alpha(pa(s_i)\cup s_i)$ because the entries of $\Sigma^i$ are given by $K(s, t, \alpha(s), \alpha(t))$ with $s,t\in s_i\cup pa(s_i)$. Conversely, if $ W_\alpha(s_j)$ moves, only the rows of $\tilde R$ that correspond to $s_j$ and its children on the DAG move as well. This means that in order to compute the derivative of $\tilde R$ with respect to $W_\alpha(s_j)$, the row differentiation operation must actually be done $|ch(s_j)|+1$ times and not $n$ times. Since $\Sigma_{j=1}^n |ch(s_j)| = \Sigma_{j=1}^n |pa(s_j)| = m\times n$ ($m$ being the number of nearest neighbors used in Vecchia approximation), we can see that row differentiation must be done $(m+1)\times n$ times in order to get all the derivatives of $\tilde R$ with respect to $\alpha (s_1,\ldots,s_n)$. Given the fact that one row has $m+1$ non-null terms and that $(m+1)\times n$ rows are differentiated, the cost in RAM to store the differentiation of $\tilde R$ will be $O((m+1)^2n)$. On the other hand, the flop cost of differentiation itself may seem daunting. However, the fact that spatially-variable covariance parameters enter only through pairwise covariances considerably simplifies the problem. In the derivatives, there are only $3$ terms that depend on $\alpha(s_j)$: they are $ \partial(\Sigma^i_{22})/\partial W_\alpha(s_j) $, $\partial(\Sigma^i_{12})/\partial W_\alpha(s_j)$, and $\partial(\Sigma^i_{11})/\partial W_\alpha(s_j)$. Let's separate the cases: \begin{enumerate} \item When $i\neq j$ \begin{enumerate} \item $\partial(\Sigma^i_{12})/\partial W_\alpha(s_j)$ has only one non-null coefficient. \item $\partial(\Sigma^i_{11})/\partial W_\alpha(s_j)$ is an $m\times m$ matrix with cross structure (non-null coefficients only for the row and the column corresponding to $s_j$). \item $ \partial(\Sigma^i_{22})/\partial W_\alpha(s_j)$ is a null $1\times 1$ matrix. \end{enumerate} \item When $i = j$ \begin{enumerate} \item $\partial(\Sigma^i_{12})/\partial W_\alpha(s_j)$ is a dense vector of length $m$. \item $\partial(\Sigma^i_{11})/\partial W_\alpha(s_j)$ is null. \item $ \partial(\Sigma^i_{22})/\partial W_\alpha(s_j)$ is null, because a change in $W_\alpha(s_i)$ does not affect the marginal variance of $w(s_i)$ (a change in $W_{\sigma}(s_i)$ does). \end{enumerate} \end{enumerate} The costliest part of the formulas is to compute $(\Sigma^i_{11})^{-1}$. However, this part only needs to be computed once since it is not affected by differentiation.
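For illustration, the finite-difference approximation of $\partial \Sigma^i/\partial W_\alpha(s_j)$ introduced above can be sketched as follows (ours; \texttt{cov\_fun} is a user-supplied nonstationary covariance function, and the perturbation is applied to the log-range, as in the recomputation described just below); only the row and column of $\Sigma^i$ that involve $s_j$ are recomputed.
\begin{verbatim}
# Sketch of the finite-difference approximation of d Sigma^i / d W_alpha(s_j).
import numpy as np

def d_sigma_d_walpha(cov_fun, sites, alpha, j, eps=1e-6):
    """sites: coordinates of s_i and its parents (one row per point);
    alpha: local ranges at those points; j: index of the perturbed point."""
    m1 = len(sites)
    alpha_j_pert = np.exp(np.log(alpha[j]) + eps)  # perturb W_alpha(s_j)
    d_sigma = np.zeros((m1, m1))
    for k in range(m1):
        if k == j:
            continue  # K(s_j, s_j) does not depend on the range at s_j
        base = cov_fun(sites[j], sites[k], alpha[j], alpha[k])
        pert = cov_fun(sites[j], sites[k], alpha_j_pert, alpha[k])
        d_sigma[j, k] = d_sigma[k, j] = (pert - base) / eps
    return d_sigma
\end{verbatim}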
Even better, $(\Sigma^i_{11})^{-1}$ and $\Sigma^i_{21}(\Sigma^i_{11})^{-1}$ can be used to compute $\tilde R$ and then recycled on the fly to compute the derivatives. The computational effort needed to get them can then be removed from the cost of the derivative and remain in the cost of $\tilde R$. \\ Applying all those remarks gives Table~\ref{tab:R_diff_alpha_cost}. \begin{table}[H] \caption{Costs to compute $\partial \tilde R_{i, \cdot}/\partial ( W_\alpha(s_j))$} \label{tab:R_diff_alpha_cost} \centering \begin{tabular}{c|cccccc} & (a) & (b) & (c) & (d) & (e) & (f) \\ $i = j$ & $O(1)$ & $O(m)$ & $0$ & $O(m^2)$ & $0$ & $0$\\ $s_i \in ch(s_j)$ & $0$ & $O(1)$ & $O(m)$ & $O(m)$ & $O(m)$ & $0$\\ \end{tabular} \end{table} \noindent Using Table~\ref{tab:R_diff_alpha_cost} and again $\Sigma_{j=1}^n |ch(s_j)| = \Sigma_{j=1}^n |pa(s_j)| = m\times n$, we can see that the matrix operations should have a total cost of $O(m^2\times n)$. The cost of the finite difference approximation to $\partial \Sigma^i/\partial W_\alpha(s_j)$ must be added to this. The cost of computing the finite differences in one coefficient of $\Sigma^i$ depends on whether isotropic or anisotropic range parameters are used. In the case of isotropic range parameters, only a recomputation of the covariance function \eqref{equation:covfun_iso} with range $\exp(\log(\alpha(s))+dW_\alpha(s))$ instead of $\exp(\log(\alpha(s)))$ will be needed. In the other case, the SVD of $\log(A)$ must be computed again. What is more, the covariance function \eqref{equation:covfun_aniso} involves the Mahalanobis distance instead of the Euclidean distance. The cost will then depend on $d$, and be higher than in the case with isotropic covariance parameters. \\ However, due to \eqref{equation:nonstat_covariance}, it appears that if $W_\alpha (s_j)$ moves, only the row and column of $\Sigma^i$ that correspond to $s_j$ will be affected. Moreover, due to the symmetry of $\Sigma^i$, the row and the column will be changed exactly the same way. Therefore, computing $\partial \Sigma^i/\partial W_\alpha(s_j)$ involves only $m+1$ finite differences since $\Sigma^i$ is of size $(m+1)\times(m+1)$. \\ The finite difference $\partial \Sigma^i/\partial W_\alpha(s_j)$ must be computed $m+1$ times for each row of $\tilde R$, and there are $n$ rows. The total cost of the finite differences should therefore be $O((m+1)^2n)$. \\ We can thus hope that careful implementation of the derivative $\partial \tilde R/\partial (\alpha(s_1,\ldots, s_n))$ will cost $O(n(m+1)^2)$ operations, in the same order as computing $\tilde R$ itself \citep{Guinness_permutation_grouping}. \subsection{Gradient of the negated log-density with respect to $W_\alpha$} \label{subsection:gradient_alpha} \paragraph{Sufficient augmentation.} In the case of the sufficient augmentation, the range intervenes in the NNGP prior of the latent field. The gradient is: \begin{equation} \label{equation:gradient_range_sufficient} -\frac{\partial g_{\alpha}^{sufficient}(W_{\alpha})}{\partial W_{\alpha}(s_i)} = \left(w^T\tilde R^T\right)\left(\partial \tilde R /\partial W_\alpha(s_i)\right) w + \Sigma_{j/s_j\in \{s_i\cup ch(s_i)\}} \left(\partial \tilde R_{j,j}/\partial W_\alpha(s_i)\right)/\tilde R_{j,j}.
\end{equation} Start from the negated log density of the latent field with sufficient augmentation: $$-g_{\alpha}^{sufficient}(W_{\alpha}(\mathcal{S})) = log\left(|\tilde R\left(\mathcal{S},\alpha(\mathcal{S})\right)|\right)+w^T\tilde R\left(\mathcal{S},\alpha(\mathcal{S})\right)^T\tilde R\left(\mathcal{S},\alpha(\mathcal{S})\right)w \times 1/2.$$ Let's write the derivative of the log-determinant $log(|\tilde R|)$:\\ $\begin{array}{lll} \partial log(|\tilde R|)/\partial W_\alpha(s_j)& = & \partial (\Sigma_{i=1}^n log(\tilde R_{i,i}))/\partial W_\alpha(s_j) \text{(because $\tilde R$ is triangular)}\\ & = & \Sigma_{i=1}^n \partial log(\tilde R_{i,i})/\partial W_\alpha(s_j) \\ & & \text{(only the rows corresponding to $s_j$ and its children are affected)} \\ & = & \Sigma_{i/s_i\in \{s_j\cup ch(s_j)\}} \partial log(\tilde R_{i,i})/\partial W_\alpha(s_j) \\ & & \text{log-function derivative} \\ & = & \Sigma_{i/s_i\in \{s_j\cup ch(s_j)\}} \left(\partial \tilde R_{i,i}/\partial W_\alpha(s_j)\right)/R_{i,i} \\ \end{array}$\\ Let's write the derivative of $w^T\tilde R^T\tilde Rw \times 1/2$: \\ $\begin{array}{lll} \partial \left(w^T\tilde R^T\tilde Rw\times 1/2 \right )/\partial W_\alpha(s_j) &=& \partial \left( (w^T\tilde R^T) (\tilde R w) \times1/2\right )\partial W_\alpha(s_j) \\ &=& \partial(w^T\tilde R^T)/\partial W_\alpha(s_j) (\tilde R w)\times1/2 + \\ &&(w^T\tilde R^T)\partial(\tilde R w)/\partial W_\alpha(s_j) \times1/2 \\ &=& (w^T\tilde R^T)\partial(\tilde R w)/\partial W_\alpha(s_j) \\ &=& (w^T\tilde R^T)(\partial \tilde R /\partial W_\alpha(s_j) w) \\ \end{array}$\\ \paragraph{Ancillary Augmentation.} When ancillary augmentation is used, the covariance parameters intervene in the density of the observations knowing the latent field. This induces: \begin{equation} \label{equation:gradient_range_ancillary} -\partial g_{\alpha}^{ancillary}(W_{\alpha})/\partial W_{\alpha}(s_i) = \nabla_w l(z(s_i)| w,\beta,\tau)^T \tilde R^{-1}\left(\partial\tilde R/\partial( W_\alpha(s_i))\right) w, \end{equation} Start from $$ -g_{\alpha}^{ancillary}(W_{\alpha}) = -l(z|w = \tilde R^{-1} w^*, X, \beta,\tau).$$ Applying differentiation, we get\\ $\begin{array}{l} \partial\left(-l(z|w = \tilde R^{-1} w^*, X, \beta,\tau)\right)/\partial( W_\alpha(s_j))\\ \text{(Conditional independence)}\\ = \Sigma_{i=1}^n\Sigma_{x\in obs(s_i)} -\partial\left(l(z_x| w(s_i) =\left(\tilde R^{-1} w^*\right)_i, X, \beta,\tau)\right)/\partial( W_\alpha(s_j))\\ \text{(Chain rule)} \\ = \Sigma_{i=1}^n - \partial \left(\tilde R^{-1} w^*\right)_i/\partial( W_\alpha(s_j)) \times \\ ~~~\Sigma_{x\in obs(s_i)}\partial\left(l(z_x| w(s_i) = \left(\tilde R^{-1} w^*\right)_i, X, \beta,\tau)\right)/\partial(w(s_i))\\ \text{($w^*$ is not changed by $\theta$)}\\ = \Sigma_{i=1}^n - \left(\partial\tilde R^{-1}/\partial( W_\alpha(s_j)) w^*\right)_i \times \\ ~~~\Sigma_{x\in obs(s_i)}\partial\left(l(z_x| w(s_i) = \left(\tilde R^{-1} w^*\right)_i, X, \beta,\tau)\right)/\partial(w(s_i)) \\ \text{(Differentiation of inverse)}\\ = \Sigma_{i=1}^n \left(\tilde R^{-1}\partial\tilde R/\partial( W_\alpha(s_j)) \tilde R^{-1} w^*\right)_i \times \\ ~~~\Sigma_{x\in obs(s_i)}\partial\left(l(z_x| w(s_i) = \left(\tilde R^{-1} w^*\right)_i, X, \beta,\tau)\right)/\partial(w(s_i)) \\ \text{(Recognising gradient of $l(\cdot)$ in $w$)}\\ = \nabla_w l(z| \tilde R^{-1} w^*, X, \beta,\tau) \tilde R^{-1}\partial\tilde R/\partial( W_\alpha(s_j)) \tilde R^{-1} w^*. 
\\ \end{array}$\\ \subsection{Computational cost of the gradient of the negated log-density with respect to $W_\alpha$} \label{subsection:cost_gradient_alpha} Both sufficient and ancillary formulations have a partial derivative with a term of the form: $$u^T \partial\tilde R/\partial( W_\alpha(s_j)) v,$$ with $u$ and $v$ two vectors that are affordable to compute. \\ Due to its construction, $\partial\tilde R/\partial( W_\alpha(s_j))$ has non-null rows only at the rows that correspond to $s_j$ and $ch(s_j)$, and each of those rows has itself at most $m+1$ non-null coefficients. The sparse matrix-vector multiplication $(\partial \tilde R/\partial( W_\alpha(s_j))) v$ therefore costs $O((m+1)\times (1+|ch(s_j)|))$ operations. Given the fact that $\Sigma_{j=1}^n|ch(s_j)| = \Sigma_{j=1}^n|pa(s_j)| = n\times m$, we can expect that the computational cost needed to compute $(\partial \tilde R/\partial( W_\alpha(s_j))) v$ for $j\in 1,\ldots, n$ will be $O(n\times (m+1)^2)$ operations, which is affordable. \\ Moreover, due to the fact that $\partial\tilde R/\partial( W_\alpha(s_j))$ has non-null rows only at the rows that correspond to $s_j$ and $ch(s_j)$, we can deduce that $(\partial \tilde R/\partial( W_\alpha(s_j))) v$ has non-null terms only in the slots that correspond to $s_j$ and its children. Computing $u^T (\partial \tilde R/\partial( W_\alpha(s_j))) v$ will then cost $O(|ch(s_j)|+1)$ operations. Using again $\Sigma_{j=1}^n|ch(s_j)| = \Sigma_{j=1}^n|pa(s_j)| = n\times m$, we can deduce that (if $(\partial \tilde R/\partial( W_\alpha(s_j))) v$ is already known) computing $u^T (\partial \tilde R/\partial( W_\alpha(s_j))) v$ for $j \in 1 ,\ldots, n$ will cost $O(n(m+1))$. \subsection{Gradient of the negated log-density with respect to $W_{\tau}$} \label{subsection:gradient_tau2} Note that we use a variance parametrization $log(\tau^2) = W_\tau+X_\tau\beta_\tau^T$. Here, only the sufficient parametrization is used. Due to the fact that there can be more than one observation per spatial site, we give the following partial derivative, for $s\in\mathcal{S}$: \begin{equation} -\partial g_{\tau}^{sufficient}(W_{\tau})/\partial W_{\tau}(s) = \Sigma_{x\in obs(s)} \left(1/2 -(z(x)-w(s)-X(x)\beta^T)^2/(2\tau^2(x))\right). \end{equation} We start from the log-density of the observed field knowing its mean and its variance as described in \eqref{equation:hierarchical_nonstat_nngp}: $$g_{\tau}^{sufficient}(W_{\tau}) = l(z| w(\mathcal{S}), \beta, \tau).$$ Using the conditional independence of $z$ knowing the parameters of the model, we have $$\partial l(z| w(\mathcal{S}), \beta, \tau)/\partial (W_\tau(s)) = \partial \Sigma_{x\in obs(s)}l(z_{x}| w(s), \beta, \tau_x)/\partial (W_\tau(s)).$$ Introducing the Gaussian formula for $l(\cdot)$ (up to an additive constant), we get $$\partial \Sigma_{x\in obs(s)}l(z_{x}| w(s), \beta, \tau_x)/\partial (W_\tau(s)) = \partial \Sigma_{x\in obs(s)}\left(-\frac{(z_x - X_x\beta -w(s))^2}{2\tau_x^2}- \frac{log(\tau_x^2)}{2}\right) /\partial (W_\tau(s)).$$ Differentiating with respect to $W_{\tau}(s)$, using $log(\tau^2(x)) = W_\tau(s)+X_\tau(x)\beta_\tau^T$, yields the result. \section{EXPERIMENTS ON SYNTHETIC DATA SETS} \label{section:wrong_modelling} We would like to investigate the improvements brought by nonstationary modeling when it is relevant, the problems caused by nonstationary modeling when it is irrelevant, and the potential identification and overfitting problems of the model we devised. Our general approach to answer those questions is to run our implementation on synthetic data sets and analyze the results.
Following the nonstationary process and data model we defined using \eqref{equation:hierarchical_nonstat_nngp} and \eqref{equation:hierarchical_nonstat_nngp_modified}, there are $12$ possible configurations, counting the fully stationary case: $2$ marginal variance models, $2$ noise variance models, and $3$ range models. In order to keep the Section readable, we use the following notation for the different models: \begin{itemize} \item $(\emptyset)$ is the stationary model. \item $(\sigma^2)$ is a model with nonstationary marginal variance. \item $(\tau^2)$ is a model with heteroskedastic noise variance. \item $(\alpha)$ is a model with nonstationary range and isotropic range parameters. \item $(A)$ is a model with nonstationary range and elliptic range parameters. \item Composite models are denoted using ``$+$''. For example, a model with nonstationary marginal variance and heteroskedastic noise variance is denoted $(\sigma^2+ \tau^2)$. \end{itemize} Our approach here is to use a possibly misspecified model and see what happens. Four cases are possible: \begin{itemize} \item The ``right'' model, in the sense that it matches perfectly the process used to generate the data (however, potential identification and overfitting problems may make it a bad model in practice). \item ``Wrong'' models, where some parameters that are stationary in the data are non-stationary in the model, and some parameters that are stationary in the model are non-stationary in the data. \item Under-modeling, where some parameters that are stationary in the model are non-stationary in the data, but all parameters that are stationary in the data are stationary in the model. \item Over-modeling, where some parameters that are stationary in the data are non-stationary in the model, but all parameters that are stationary in the model are stationary in the data. \end{itemize} If a nonstationary model actually helps to analyze nonstationary data, the ``right'' model should do better than under-modeling. The problem of overfitting will be assessed by comparing over-modeling, under-modeling, and the ``right'' model: if there is some overfitting, over-modeling or even ``right'' modeling will perform worse than simpler models. Identification problems will be monitored by looking at the ``wrong'' models and at under-modeling. If some model formulations are interchangeable, then some of the ``wrong'' models should perform as well as the ``right'' model. Also, if two parametrizations are equivalent, then using either parametrization should do as well as using both; therefore, under-modeling should do as well as the ``right'' model. The models are compared using the Deviance Information Criterion (DIC) \citep{spiegelhalter1998bayesian}, the smoothing MSE, and the prediction MSE. Remember that the observations of the model at the observed sites $\mathcal{S}$ are perturbed by the white noise $\epsilon$. The true latent field, used to generate the data, is named $w_{true}$. The estimated latent field is named $\hat w$. The smoothing MSE is $$MSE_{smooth} = \frac{1}{\#\mathcal{S}}~\Sigma_{s\in \mathcal{S}}(\hat w(s) -w_{true}(s))^2.$$ The field is also predicted at unobserved sites $\mathcal{P}$, giving the prediction MSE $$MSE_{pred} = \frac{1}{\#\mathcal{P}}~\Sigma_{s\in \mathcal{P}}(\hat w(s) -w_{true}(s))^2.$$ The following method was used to create synthetic data sets. \begin{enumerate} \item $12000$ locations are drawn uniformly on a square whose sides have length $5$. \item The first $10000$ locations are kept for training.
$20000$ observations are made at these locations. First, each location receives one observation. Then, each of the $10000$ remaining observations is assigned to a location chosen following a uniform multinomial distribution. \item The marginal variance of the nonstationary NNGP is defined. In the case of a stationary model, $log(\sigma^2) = 0$. In the case of a nonstationary model, $log(\sigma^2) = W_\sigma$, $W_\sigma$ being a Matérn field with range $0.5$, marginal variance $0.5$, and smoothness $1$. \item The range of the nonstationary NNGP is defined. In the case of a stationary model, $log(\alpha) = log(0.1)$. In the case of a nonstationary model with isotropic range, $log(\alpha) = log(0.1) + W_\alpha$, $W_\alpha$ being a Matérn field with range $0.5$, marginal variance $0.5$, and smoothness $1$. In the case of a nonstationary model with elliptic range, the three components of $vech(log(A))$ (\ref{equation:hierarchical_nonstat_nngp_modified}) are modeled independently: $ \left\{\begin{array}{ll} vech(log(A))_1 & = W_\alpha^1 + log(0.1),\\ vech(log(A))_2 & = W_\alpha^2,\\ vech(log(A))_3 & = W_\alpha^3 + log(0.1), \end{array}\right.$ where $W_\alpha^{1, 2, 3}$ are Matérn fields with range $0.5$, marginal variance $0.5$, and smoothness $1$. Note that $log(A) = (log(0.1))\times I_2$ corresponds to the isotropic case with range equal to $0.1$. \item The nonstationary NNGP latent field is sampled using $\alpha$ and $\sigma$, with an exponential kernel as the isotropic function in \eqref{equation:covfun_aniso}. \item The response variable is sampled by adding uncorrelated Gaussian noise with variance $\tau^2$ to the latent field. \end{enumerate} We started with the eight models obtained by combining $(\sigma^2)$, $(\tau^2)$, and $(\alpha)$, giving us $(\emptyset)$, $(\sigma^2)$, $(\tau^2 )$, $(\alpha)$, $(\sigma^2 + \tau^2)$, $(\tau^2 + \alpha )$, $(\sigma^2+\alpha)$, and $(\sigma^2+\tau^2+\alpha)$. We tested each data-model configuration, yielding $64$ situations in total. Each case was replicated $30$ times. The respective results for DIC, prediction MSE, and smoothing MSE are summarized by box-plots in Figures \ref{fig:wrong_modelling_1}, \ref{fig:wrong_modelling_pred_1}, and \ref{fig:wrong_modelling_smooth_1}. Second, we focused on the case of elliptic range parameters with the three models obtained by combining $(\alpha)$ and $(A)$, giving us $(\emptyset)$, $(\alpha)$, and $(A)$. As before, we tested the $9$ data-model configurations $30$ times each. The results are summarized by box-plots in Figures~\ref{fig:wrong_modelling_2}, \ref{fig:wrong_modelling_pred_2}, and \ref{fig:wrong_modelling_smooth_2}.
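To make the recipe above concrete, the following minimal Python sketch reproduces steps 1--3 on a scaled-down grid; the sizes, the random seed, the Matérn parametrization and all function names are illustrative choices and not those of the actual implementation used for the experiments.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import kv

rng = np.random.default_rng(0)

def matern_nu1(coords, rho=0.5, s2=0.5):
    # Matern covariance with smoothness nu = 1, one common parametrization:
    # C(d) = s2 * (sqrt(2) d / rho) * K_1(sqrt(2) d / rho), with C(0) = s2.
    d = cdist(coords, coords)
    x = np.where(d > 0, np.sqrt(2.0) * d / rho, 1.0)
    C = s2 * x * kv(1, x)
    C[d == 0.0] = s2
    return C

n_all, n_train, n_obs = 1200, 1000, 2000   # scaled down from 12000 / 10000 / 20000
S = rng.uniform(0.0, 5.0, size=(n_all, 2))      # step 1: locations on a 5 x 5 square
S_train, S_pred = S[:n_train], S[n_train:]      # step 2: training / prediction split

# step 2 (continued): one observation per training site, the rest assigned uniformly
obs_site = np.concatenate([np.arange(n_train),
                           rng.integers(0, n_train, size=n_obs - n_train)])

# step 3: nonstationary log marginal variance, W_sigma ~ GP(0, Matern(0.5, 0.5, nu=1))
L = np.linalg.cholesky(matern_nu1(S_train) + 1e-8 * np.eye(n_train))
log_sigma2 = L @ rng.standard_normal(n_train)   # stationary case: all zeros instead

# steps 4-6 (range field(s), latent NNGP field, heteroskedastic noise) are analogous
\end{verbatim}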
\thispagestyle{empty} \begin{figure} \caption{$(\sigma^2+\tau^2+\alpha)$ data} \label{fig:wrong_modelling_1_1} \caption{$(\sigma^2+\alpha)$ data} \label{fig:wrong_modelling_1_2} \caption{$(\tau^2+\alpha)$ data} \label{fig:wrong_modelling_1_3} \caption{$(\alpha)$ data} \label{fig:wrong_modelling_1_4} \caption{$(\sigma^2+\tau^2)$ data} \label{fig:wrong_modelling_1_5} \caption{$(\sigma^2)$ data} \label{fig:wrong_modelling_1_6} \caption{$(\tau^2)$ data} \label{fig:wrong_modelling_1_7} \caption{$(\emptyset)$ data} \label{fig:wrong_modelling_1_8} \caption{DIC of the models for the different simulated scenarios} \label{fig:wrong_modelling_1} \end{figure} \thispagestyle{empty} \begin{figure} \caption{$(\sigma^2+\tau^2+\alpha)$ data} \label{fig:wrong_modelling_pred_1_1} \caption{$(\sigma^2+\alpha)$ data} \label{fig:wrong_modelling_pred_1_2} \caption{$(\tau^2+\alpha)$ data} \label{fig:wrong_modelling_pred_1_3} \caption{$(\alpha)$ data} \label{fig:wrong_modelling_pred_1_4} \caption{$(\sigma^2+\tau^2)$ data} \label{fig:wrong_modelling_pred_1_5} \caption{$(\sigma^2)$ data} \label{fig:wrong_modelling_pred_1_6} \caption{$(\tau^2)$ data} \label{fig:wrong_modelling_pred_1_7} \caption{$(\emptyset)$ data} \label{fig:wrong_modelling_pred_1_8} \caption{Prediction MSE of the models for the different simulated scenarios} \label{fig:wrong_modelling_pred_1} \end{figure} \thispagestyle{empty} \begin{figure} \caption{$(\sigma^2+\tau^2+\alpha)$ data} \label{fig:wrong_modelling_smooth_1_1} \caption{$(\sigma^2+\alpha)$ data} \label{fig:wrong_modelling_smooth_1_2} \caption{$(\tau^2+\alpha)$ data} \label{fig:wrong_modelling_smooth_1_3} \caption{$(\alpha)$ data} \label{fig:wrong_modelling_smooth_1_4} \caption{$(\sigma^2+\tau^2)$ data} \label{fig:wrong_modelling_smooth_1_5} \caption{$(\sigma^2)$ data} \label{fig:wrong_modelling_smooth_1_6} \caption{$(\tau^2)$ data} \label{fig:wrong_modelling_smooth_1_7} \caption{$(\emptyset)$ data} \label{fig:wrong_modelling_smooth_1_8} \caption{Smoothing MSE of the models for the different simulated scenarios} \label{fig:wrong_modelling_smooth_1} \end{figure} \begin{figure} \caption{$(A)$ data} \label{fig:wrong_modelling_2_1} \caption{$(\alpha)$ data} \label{fig:wrong_modelling_2_2} \caption{$(\emptyset)$ data} \label{fig:wrong_modelling_2_3} \caption{DIC of the models for the different simulated scenarios, in the anisotropy model} \label{fig:wrong_modelling_2} \end{figure} \begin{figure} \caption{$(A)$ data} \label{fig:wrong_modelling_pred_2_1} \caption{$(\alpha)$ data} \label{fig:wrong_modelling_pred_2_2} \caption{$(\emptyset)$ data} \label{fig:wrong_modelling_pred_2_3} \caption{Prediction MSE of the models for the different simulated scenarios, in the anisotropy model} \label{fig:wrong_modelling_pred_2} \end{figure} \begin{figure} \caption{$(A)$ data} \label{fig:wrong_modelling_smooth_2_1} \caption{$(\alpha)$ data} \label{fig:wrong_modelling_smooth_2_2} \caption{$(\emptyset)$ data} \label{fig:wrong_modelling_smooth_2_3} \caption{Smoothing MSE of the models for the different simulated scenarios, in the anisotropy model} \label{fig:wrong_modelling_smooth_2} \end{figure} \begin{figure} \caption{Estimates of the log variance for $W_\alpha$} \label{fig:alpha_log_scale} \caption{Estimates of the log variance for $W_\alpha$} \label{fig:sigma_log_scale} \caption{Estimates of the log-variance of $W_\alpha$ and $w_\sigma$ in the model $(\alpha+\sigma^2)$ following the type of the data} \label{fig:alpha_sigma_log_scale} \end{figure} \section{Getting a spatial basis from a (large) NNGP factor} 
\label{section:spatial_basis} The basis consists of a truncated Karhunen-Loève decomposition (KLD) of a Predictive Process \citep[PP,][]{PP} basis obtained from the NNGP used in the log-NNGP or matrix log-NNGP priors. While the PP approximation is prone to over-smoothing \citep[see the discussion in ][]{NNGP}, this is not a problem here since the hyperprior range is supposed to be large, inducing a smooth, large-scale prior. Start by generating a Predictive Process spatial basis of size $k$, given as $$B = \tilde R_\theta^{-1} M,$$ $\tilde R_\theta$ being the NNGP factor (the same that would be used to define a log-NNGP or matrix log-NNGP prior) and $M$ being a matrix of size $n\times k$ such that $M_{i, j}=1$ if $i=j$ and $M_{i, j}=0$ everywhere else. Using fast solving that relies on the sparsity and triangularity of $\tilde R_\theta$, this step is affordable. See \citet{coube2021mcmc} for developments concerning the link between NNGP and PP. Note that the first locations of $\mathcal{S}$ must be well spread over the space in order to get a satisfactory PP basis. This can be obtained with the max-min or random ordering heuristics \citep{Guinness_permutation_grouping}. The number of vectors $k$ should be large enough (a few hundred), so that the $n-k$ last locations are strongly conditioned. By virtue of the PP approximation, the NNGP covariance can be approximated as $$(\tilde R_\theta^T\tilde R_\theta)^{-1}\approx BB^T.$$ In order to ease computation and avoid pathological MCMC behaviors that may occur with too many covariates, $B$ is summarized using a truncated SVD \citep[with for example the \textsf{R} package \textsf{irlba} by][]{lewis2019irlba} as $B \approx UDV$, giving: \begin{equation} \label{eq:approx_KL_basis} (\tilde R_\theta^T\tilde R_\theta)^{-1}\approx BB^T \approx UDVV^TDU^T= UD^2U^T. \end{equation} $UD^2U^T$ is an approximate Karhunen-Loève decomposition of $(\tilde R_\theta^T\tilde R_\theta)^{-1}$, and the empirical orthogonal functions (EOFs) of $U$ will be used as spatial covariates. Following \citet{Handbook_Spatial_Stats}, the first EOFs capture large-scale spatial variations, and the subsequent EOFs represent smaller, more local changes. The number of vectors of $U$ can be selected using the values of $D^2$ and/or by looking at spatial plots of the EOFs. Prediction at new locations can be done by extending the PP basis and retrieving the extended truncated KLD basis by linear recombination. Start by appending the prediction locations below the observed locations, and by computing a joint NNGP factor (note that the upper left corner of this factor is none other than $\tilde R_\theta$). Compute a PP basis at the prediction locations $B_{pred}$ by applying the same linear solving as before, and by removing the first $n$ rows of the basis (they correspond to the observed locations). Then, using the SVD from \eqref{eq:approx_KL_basis}, the KLD basis at the prediction locations $U_{pred}$ is obtained through $$U_{pred} = B_{pred}V^TD^{-1}.$$ We see two potential improvements for this approach. The first is to get rid of the PP and to find a way to compute a truncated KLD of $(\tilde R_\theta^T\tilde R_\theta)^{-1}$ directly. The second is to use Gaussian priors to make this approach equivalent to a degenerate GP prior defined from a full-rank NNGP. This might lead to a more frugal and more robust version of our log-NNGP prior. \end{document}
math
131,592
\begin{document} \title{Quantized Media with Absorptive Scatterers and Modified Atomic Emission Rates} \author{L.G.~Suttorp and A.J.~van~Wonderen} \address{Instituut voor Theoretische Fysica, Universiteit van Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands} \begin{abstract} Modifications in the spontaneous emission rate of an excited atom that are caused by extinction effects in a nearby dielectric medium are analyzed in a quantummechanical model, in which the medium consists of spherical scatterers with absorptive properties. Use of the dyadic Green function of the electromagnetic field near a a dielectric sphere leads to an expression for the change in the emission rate as a series of multipole contributions for which analytical formulas are obtained. The results for the modified emission rate as a function of the distance between the excited atom and the dielectric medium show the influence of both absorption and scattering processes. \end{abstract} \maketitle \setlength{\mathindent}{0cm} \renewcommand{\thesection.\arabic{equation}}{\thesection.\arabic{equation}} \section{Introduction}\label{sec1} The emission rate of an excited atom is modified if the electromagnetic properties of its surroundings differ from that of vacuum \cite{P46}. For an atom in front of a dielectric medium filling a half-space, the rate varies with the distance between the atom and the medium \cite{A75} - \cite{KSW01}. Usually, the medium is taken to be homogeneous on the scale of the atomic wavelength, so that its electromagnetic properties are fully described by a susceptibility, which does not vary appreciably on the scale of the wavelength. In general, it will be complex to account for absorption and dispersion. For such a configuration modifications of the atomic radiative properties have been confirmed experimentally a few years ago \cite{ICLS04}. If the structure of the medium cannot be neglected, scattering effects play a role as well, so that extinction in such a medium is driven by both absorption and scattering. Extinction by scattering in material media is quite common, owing to the presence of impurities and defects. The interplay between the two types of extinction in atomic decay rates can be investigated in a model in which both of these features occur simultaneously. In a recent paper \cite{SvW10} we have studied the change in the decay rate of an atom in the presence of a medium consisting of non-overlapping spheres that are made of absorptive dielectric material. The spheres are distributed randomly in a half-space, with a uniform average density. In order to describe the absorptive dielectric material of the spheres in a quantummechanically consistent way a damped-polariton model has been employed \cite{HB92b}. By introducing an effective susceptibility for the composite medium and after a detailed analysis of surface contributions, we could derive an asymptotic expression for the change in the emission rate at relatively large distances between the atom and the medium. In the present paper we take a somewhat different approach so as to derive an analytic expression for the emission rate that is valid for all distances between the atom and the medium. We shall start from exact expressions for the electromagnetic Green function in the presence of a dielectric sphere of arbitrary radius. As before, the absorptive dielectric material of the spheres will be described by a damped-polariton model. 
For simplicity we shall assume that the density of the spherical scatterers in the medium is low, so that multiple-scattering effects can be neglected. \section{Spontaneous emission in the presence of absorbing dielectrics} In the damped-polariton model an absorptive linear dielectric medium is described by a polarization density that is coupled to a bath of harmonic oscillators with a continuous range of frequencies~\cite{HB92b}. The Hamiltonian of the damped-polariton model can be diagonalized exactly, as has been shown both for the case of a uniform dielectric \cite{HB92b} and for a dielectric with arbitrary inhomogeneities \cite{SWo04}. Diagonalization of the Hamiltonian in the general non-homogeneous case yields \begin{equation} H_d=\int d {\bf r}\int_0^{\infty} d \omega\, \hbar\omega \, {\bf C}^{\dagger}({\bf r},\omega)\cdot {\bf C}({\bf r},\omega)\, , \label{2.1} \end{equation} with annihilation operators ${\bf C}({\bf r},\omega)$ and associated creation operators. The electric field can be expressed in terms of these operators as \cite{SWo04}: \begin{equation} {\bf E}({\bf r})=\int d {\bf r}'\int_0^{\infty} d \omega \, \mbox{\sffamily\bfseries{f}}_E({\bf r},{\bf r}',\omega)\cdot{\bf C}({\bf r}',\omega) +{\rm h.c.} \, ,\label{2.2} \end{equation} with a tensorial coefficient: \begin{equation} \mbox{\sffamily\bfseries{f}}_E({\bf r},{\bf r}',\omega)=-i\, \frac{\omega^2}{c^2} \left(\frac{\hbar \, {\rm Im}\,\varepsilon({\bf r}',\omega+i0)}{\pi\varepsilon_0}\right)^{1/2}\, \mbox{\sffamily\bfseries{G}}({\bf r},{\bf r}',\omega+i 0) \, . \label{2.3} \end{equation} Here $\varepsilon$ is the complex local (relative) dielectric constant, which follows from the parameters of the model. Furthermore, $\mbox{\sffamily\bfseries{G}}$ is the tensorial Green function, which satisfies the differential equation \begin{eqnarray} && -\nabla\times [\nabla\times \mbox{\sffamily\bfseries{G}} ({\bf r},{\bf r}',\omega+i 0)]\nonumber\\ &&+\frac{\omega^2}{c^2}\, \varepsilon({\bf r},\omega+i 0)\, \mbox{\sffamily\bfseries{G}}({\bf r},{\bf r}',\omega+i 0) =\mbox{\sffamily\bfseries{I}}\, \delta({\bf r}-{\bf r}')\, , \label{2.4} \end{eqnarray} with $\mbox{\sffamily\bfseries{I}}$ the unit tensor. The atomic decay rate in the presence of an absorbing dielectric follows from the inhomogeneous damped-polariton mod\-el in its diagonalized form by employing perturbation theory in leading order \cite{SvW10}. It can be expressed as an integral over a product of the coefficients (\ref{2.3}) and suitable atomic matrix elements: \begin{eqnarray} \Gamma=\frac{2\pi}{\hbar^2 \omega_a^2} \int d {\bf r} \int d {\bf r}' \int d {\bf r}'' \, \langle e|{\bf J}_a({\bf r}')|g\rangle \cdot \mbox{\sffamily\bfseries{f}}_E({\bf r}',{\bf r},\omega_a) \cdot &&\nonumber\\ \cdot\tilde{\mbox{\sffamily\bfseries{f}}}_E^\ast({\bf r}'',{\bf r},\omega_a)\cdot \langle g|{\bf J}_a({\bf r}'')|e\rangle\, , && \label{2.5} \end{eqnarray} with $e$ and $g$ denoting the excited and the ground state of the atom, $\omega_a$ the atomic transition frequency, and the tilde denoting the tensor transpose. Furthermore, ${\bf J}_a({\bf r})$ is the atomic local current density $-\half e\sum_i\{{\bf p}_i/m,\delta({\bf r}-{\bf r}_i)\}$, with ${\bf r}_i\, , \, {\bf p}_i$ the positions and momenta of the electrons and curly brackets denoting the anticommutator. 
In the electric-dipole approximation the atomic decay rate can be expressed in terms of the Green function as: \begin{equation} \Gamma= -\frac{2\omega_a^2}{\varepsilon_0 \hbar c^2} \,\langle e|\bfmu|g\rangle \cdot {\rm Im}\, \mbox{\sffamily\bfseries{G}}({\bf r}_a,{\bf r}_a,\omega_a+i 0)\cdot \langle g|\bfmu|e\rangle \, , \label{2.6} \end{equation} with ${\bf r}_a$ the atomic position and $\bfmu=-e\sum_i({\bf r}_i-{\bf r}_a)$ the atomic electric dipole moment. The above expression for the decay rate of an excited atom in the presence of an inhomogeneous absorptive dielectric can be obtained as well by invoking the fluctuation-dissipation theorem \cite{BHLM96,SKW99}. \section{Green functions}\label{sec2} \setcounter{equation}{0} The Green function in vacuum fulfils the differential equation (\ref{2.4}) with $\varepsilon=1$. It follows from the scalar Green function $G_s({\bf r},{\bf r}',\omega)={\rm exp}(i\omega |{\bf r}-{\bf r}'|/c)/(4\pi|{\bf r}-{\bf r}'|)$ as $\mbox{\sffamily\bfseries{G}}_0({\bf r},{\bf r}',\omega)=-[\mbox{\sffamily\bfseries{I}}+(c^2/\omega^2)\nabla\nabla]G_s({\bf r},{\bf r}',\omega)$. Its explicit form in spherical coordinates is obtained from the expansion of $G_s$ in spherical harmonics and spherical Bessel functions. The ensuing form for the vacuum Green function is \cite{T71,C95} \begin{eqnarray} &&\mbox{\sffamily\bfseries{G}}_0({\bf r},{\bf r}',\omega+i0)=-ik\sum_{\ell=1}^{\infty} \sum_{m=-\ell}^{\ell}\frac{(-1)^m}{\ell(\ell+1)}\nonumber\\ &&\times\left\{\theta(r-r')\left[ {\bf M}_{\ell, m}^{(h)}({\bf r}){\bf M}_{\ell,-m}({\bf r'})+ {\bf N}_{\ell, m}^{(h)}({\bf r}){\bf N}_{\ell,-m}({\bf r'}) \right]\right.\nonumber\\ &&+\left. \theta(r'-r)\left[ {\bf M}_{\ell, m}({\bf r}){\bf M}_{\ell,-m}^{(h)}({\bf r'})+ {\bf N}_{\ell, m}({\bf r}){\bf N}_{\ell,-m}^{(h)}({\bf r'}) \right]\right\}\nonumber\\ &&+k^{-2}\, {\bf e}_r{\bf e}_r\, \delta({\bf r}-{\bf r}')\, , \label{3.1} \end{eqnarray} with $k=\omega/c$ the wavenumber, ${\bf e}_r$ a unit vector in the direction of ${\bf r}$ and $\theta(r)$ a step function that equals 1 for positive and 0 for negative argument. The vector harmonics are defined as \begin{eqnarray} &&{\bf M}_{\ell,m}({\bf r})=\nabla\wedge[{\bf r}\psi_{\ell,m}({\bf r})]\, ,\label{3.2}\\ &&{\bf N}_{\ell,m}({\bf r})=k^{-1}\nabla\wedge[\nabla\wedge[{\bf r}\psi_{\ell,m}({\bf r})]]\, ,\label{3.3} \end{eqnarray} where $\psi_{\ell,m}({\bf r})$ stands for $j_\ell(kr)\, Y_{\ell,m}(\theta,\phi)$, with $j_\ell$ spherical Bessel functions and $Y_{\ell,m}$ spherical harmonics. The superscripts $(h)$ in (\ref{3.1}) denote the analogous vector harmonics with spherical Hankel functions $h_\ell^{(1)}$ instead of $j_\ell$. The expression (\ref{3.1}) may be checked by substitution in the differential equation (\ref{2.4}). Differentiation of the step functions yields singular terms, which together with the last term lead to the right-hand side of (\ref{2.4}). The Green function in the presence of a dielectric sphere is the sum of the vacuum Green function and a correction term. For a sphere centered at the origin the latter has the form \cite{T71}-\cite{LKLY94} \begin{eqnarray} &&\mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r}',\omega+i0)= k\sum_{\ell=1}^{\infty}\sum_{m=-\ell}^{\ell} \frac{(-i)^{\ell}(-1)^m}{2\ell+1} \nonumber\\ &&\times\left[ B^e_\ell\, {\bf N}_{\ell,m}^{(h)}({\bf r}){\bf N}_{\ell,-m}^{(h)}({\bf r}')+ B^m_\ell\, {\bf M}_{\ell,m}^{(h)}({\bf r}){\bf M}_{\ell,-m}^{(h)}({\bf r}') \right] \label{3.4} \end{eqnarray} for ${\bf r}$ and ${\bf r}'$ both outside the sphere. 
The electric and magnetic multipole amplitudes read \cite{M08}-\cite{BW99}: \begin{equation} B^p_\ell=i^{\ell+1}\, \frac{2\ell+1}{\ell(\ell+1)}\, \frac{N^p_\ell}{D^p_\ell}\, , \label{3.5} \end{equation} with $p=e,m$. The numerators and denominators are given as \begin{eqnarray} &&N^e_\ell=\varepsilon\, f_\ell(q)\, j_\ell(q')- j_\ell(q)\,f_\ell(q')\, , \nonumber\\ && N^m_\ell=f_\ell(q)\, j_\ell(q')-j_\ell(q)\,f_\ell(q')\, , \nonumber\\ &&D^e_\ell=\varepsilon\, f^{(h)}_\ell(q)\, j_\ell(q')-h^{(1)}_\ell(q)\,f_\ell(q')\, , \nonumber\\ &&D^m_\ell= f^{(h)}_\ell(q)\, j_\ell(q')-h^{(1)}_\ell(q)\,f_\ell(q')\, , \label{3.6} \end{eqnarray} with $f_\ell(q)=(\ell+1)\, j_\ell(q)-q\, j_{\ell+1}(q)$ and $f^{(h)}_\ell(q)=(\ell+1)\, h^{(1)}_\ell(q)-q\, h^{(1)}_{\ell+1}(q)$. The spherical Bessel and Hankel functions depend on $q=k a$ and $q'=\sqrt{\varepsilon}\, q$, with $a$ the radius of the sphere. To determine the change in the atomic decay rate due to the presence of a dielectric sphere one needs the components of the Green function $\mbox{\sffamily\bfseries{G}}_c$ for coinciding arguments. The non-vanishing components follow from (\ref{3.4}) as: \begin{eqnarray} &&{\bf e}_r\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega+i0)\cdot {\bf e}_r = \nonumber\\ && =\frac{1}{4\pi k r^2}\sum_{\ell=1}^\infty (-i)^\ell\, [\ell(\ell+1)]^2\, B^e_\ell \, [h_\ell^{(1)}(kr)]^2 \label{3.7} \end{eqnarray} and \begin{eqnarray} &&{\bf e}_\theta\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega+i0)\cdot {\bf e}_\theta = {\bf e}_\phi\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega+i0)\cdot {\bf e}_\phi = \nonumber\\ && =\frac{1}{8\pi k r^2}\sum_{\ell=1}^\infty (-i)^\ell\, \ell(\ell+1) \left\{ B^e_\ell \, \left[\frac{d}{dr}[rh_\ell^{(1)}(kr)]\right]^2\right.\nonumber\\ &&\left. \rule{3.5cm}{0cm}+ B^m_\ell\, \left[kr\, h_\ell^{(1)}(kr) \right]^2 \right\}\, , \label{3.8} \end{eqnarray} in agreement with \cite{DKW01}. Here ${\bf e}_r$, ${\bf e}_\theta$ and ${\bf e}_\phi$ are unit vectors in a spherical coordinate system. \section{Decay near a half-space of absorptive scatterers} \setcounter{equation}{0} We consider a halfspace $z<0$ filled with a dilute set of spherical scatterers. The non-overlapping spheres are randomly distributed with a uniform average density. An excited atom is located at ${\bf r}_a=(0,0,z_a)$, with $z_a>a$, so that the minimal distance between the atom and the scatterers is positive. The decay rate is given by the sum of the vacuum decay rate and a correction term. The modified rate depends on the orientation of the dipole-moment transition matrix element. If the dipole moment is oriented perpendicular to the $z$-axis, the vacuum rate is $\Gamma_{0,\perp}=\omega_a^3\, |\langle e|\bfmu_\perp|g\rangle|^2/(3\pi \varepsilon_0 \hbar c^3)$. A similar formula is valid for a dipole moment oriented parallel to the $z$-axis, with $\bfmu_\perp$ replaced by $\bfmu_\parallel$. If multiple-scattering effects are neglected, the correction term in the decay rate is given by the sum of the correction terms due to all spheres. For the perpendicular orientation one finds \begin{eqnarray} &&\Gamma_{c,\perp}= -\frac{2\omega_a^2}{\varepsilon_0 \hbar c^2} \,\sum_i\langle e|\bfmu_\perp|g\rangle \cdot\nonumber\\ &&\cdot {\rm Im}\, \mbox{\sffamily\bfseries{G}}_c({\bf r}_a-{\bf R}_i,{\bf r}_a-{\bf R}_i,\omega_a+i 0)\cdot \langle g|\bfmu_\perp|e\rangle\, , \label{4.1} \end{eqnarray} with $\mbox{\sffamily\bfseries{G}}_c$ given by (\ref{3.4}) and ${\bf R}_i$ the positions of the centers of the spheres. 
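Before averaging over the sphere positions, it is worth noting that the multipole amplitudes (\ref{3.5})-(\ref{3.6}) are straightforward to evaluate numerically. The Python sketch below (the function names are ours; spherical Bessel and Hankel functions of complex argument are obtained from the cylindrical ones) computes $B^e_\ell$ and $B^m_\ell$ and checks the electric-dipole limit $B^e_1\simeq iq^3(\varepsilon-1)/(\varepsilon+2)$ used below for small $q$.
\begin{verbatim}
import numpy as np
from scipy.special import jv, hankel1

def sph_jn(l, z):
    # spherical Bessel j_l of complex argument, via j_l(z) = sqrt(pi/2z) J_{l+1/2}(z)
    z = np.asarray(z, dtype=complex)
    return np.sqrt(np.pi / (2 * z)) * jv(l + 0.5, z)

def sph_h1(l, z):
    # spherical Hankel h_l^(1), via h_l^(1)(z) = sqrt(pi/2z) H^(1)_{l+1/2}(z)
    z = np.asarray(z, dtype=complex)
    return np.sqrt(np.pi / (2 * z)) * hankel1(l + 0.5, z)

def B_amplitudes(l, q, eps):
    # electric and magnetic multipole amplitudes B^e_l and B^m_l of Eqs. (3.5)-(3.6)
    qp = np.sqrt(eps) * q                                  # q' = sqrt(eps) q
    f = lambda g, x: (l + 1) * g(l, x) - x * g(l + 1, x)   # f_l and f_l^(h)
    Ne = eps * f(sph_jn, q) * sph_jn(l, qp) - sph_jn(l, q) * f(sph_jn, qp)
    Nm =       f(sph_jn, q) * sph_jn(l, qp) - sph_jn(l, q) * f(sph_jn, qp)
    De = eps * f(sph_h1, q) * sph_jn(l, qp) - sph_h1(l, q) * f(sph_jn, qp)
    Dm =       f(sph_h1, q) * sph_jn(l, qp) - sph_h1(l, q) * f(sph_jn, qp)
    pref = 1j**(l + 1) * (2 * l + 1) / (l * (l + 1))
    return pref * Ne / De, pref * Nm / Dm

q, eps = 0.05, 1.5 + 0.5j
Be1, Bm1 = B_amplitudes(1, q, eps)
print(Be1, 1j * q**3 * (eps - 1) / (eps + 2))   # agree up to O(q^2) relative terms
\end{verbatim}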
Choosing the $x$-axis to be parallel to the transition matrix element and averaging over the positions of the spheres we get \begin{eqnarray} &&\langle\Gamma_{c,\perp}\rangle=-\frac{6\pi n c}{\omega_a}\, \Gamma_{0,\perp}\, {\rm Im} \int_{z<0} d{\bf r} \nonumber\\ &&\times \, {\bf e}_x\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r}_a-{\bf r},{\bf r}_a-{\bf r},\omega_a+i0)\cdot{\bf e}_x\, , \label{4.2} \end{eqnarray} with $n$ the uniform density of the spheres and ${\bf e}_x$ a unit vector along the $x$-axis. The volume integral can be written as a triple integral, viz.\ over $|{\bf r}-{\bf r}_a|$, $z$ and an azimuthal angle. Upon carrying out the latter two of these integrals one finds for the volume integral in (\ref{4.2}): \begin{eqnarray} &&\pi \int_{z_a}^\infty dr\left[ \left(\frac{z_a^3}{3r}-z_a r+\frac{2r^2}{3}\right) {\bf e}_r\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega_a+i0)\cdot {\bf e}_r \right.\nonumber\\ &&\left. +\left(-\frac{z_a^3}{3r}+\frac{r^2}{3}\right) {\bf e}_\theta\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega_a+i0)\cdot {\bf e}_\theta \right.\nonumber\\ &&\left.+\left(-z_ar+r^2\right) {\bf e}_\phi\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega_a+i0)\cdot {\bf e}_\phi\right]\, . \label{4.3} \end{eqnarray} Insertion of (\ref{3.7}) and (\ref{3.8}) yields \begin{eqnarray} &&\langle\Gamma_{c,\perp}\rangle=-\frac{3\pi n c^3}{4\omega_a^3}\, \Gamma_{0,\perp}\, {\rm Im} \sum_{\ell=1}^{\infty} (-i)^\ell \ell(\ell+1)\nonumber\\ &&\times\left[B^e_{\ell}\, J^e_{\ell,\perp}(\zeta_a)+ B^m_{\ell}\, J^m_{\ell,\perp}(\zeta_a)\right]\, ,\label{4.4} \end{eqnarray} with multipole amplitudes $B_\ell^p$ given by (\ref{3.5})-(\ref{3.6}) with $k=\omega_a/c$, and with the integrals \begin{eqnarray} && J^e_{\ell,\perp}(\zeta)=2\ell(\ell+1)\int_{\zeta}^\infty dt\, \left(\frac{\zeta^3}{3t^3}-\frac{\zeta}{t}+\frac{2}{3}\right) \left[ h^{(1)}_\ell(t)\right]^2\nonumber\\ &&+\int_{\zeta}^\infty dt\, \left(-\frac{\zeta^3}{3t^3}-\frac{\zeta}{t}+\frac{4}{3}\right) \left[ \frac{d}{dt}\left[th^{(1)}_\ell(t)\right]\right]^2 \label{4.5} \end{eqnarray} and \begin{eqnarray} &&J^m_{\ell,\perp}(\zeta)=\int_{\zeta}^\infty dt\, \left(-\frac{\zeta^3}{3t}-\zeta t+\frac{4t^2}{3}\right) \left[ h^{(1)}_\ell(t)\right]^2\, , \label{4.6} \end{eqnarray} with $\zeta$ equal to $\zeta_a=(\omega_a+i0) z_a/c$. The derivative of the spherical Hankel function in (\ref{4.5}) can be rewritten in terms of Hankel functions with a different index \cite{AS65}. For large $\zeta$ the asymptotic forms of these integrals are \begin{equation} J^e_{\ell,\perp}(\zeta)\simeq (-1)^{\ell+1}\frac{e^{2i\zeta}}{2\zeta} \, , \quad J^m_{\ell,\perp}(\zeta)\simeq (-1)^{\ell}\frac{e^{2i\zeta}}{2\zeta} \, , \label{4.7} \end{equation} so that (\ref{4.4}) becomes \begin{equation} \langle\Gamma_{c,\perp}\rangle\simeq \frac{3\pi n c^3}{8\omega_a^3\zeta_a}\, \Gamma_{0,\perp}\, {\rm Im} \sum_{\ell=1}^{\infty} i^\ell \ell(\ell+1) \left(B^e_{\ell} - B^m_{\ell}\right)\, e^{2i\zeta_a}\, , \label{4.8} \end{equation} which falls off proportionally to $1/\zeta_a$. Substituting the leading terms of $B^e_1$, $B^e_2$ and $B^m_1$ for small values of $q$ and $\varepsilon-1$ one recovers a result found before \cite{SvW10}. Similar expressions may be obtained for the correction to the decay rate of an excited atom with a dipole moment parallel to the $z$-axis. Instead of (\ref{4.2}) one gets a formula with the $zz$-component of $\mbox{\sffamily\bfseries{G}}_c$ . 
Upon carrying out the integrals one arrives at the analogue of (\ref{4.4}), with the integrals \begin{eqnarray} && J^e_{\ell,\parallel}(\zeta)=2\ell(\ell+1)\int_{\zeta}^\infty dt\, \left(-\frac{2\zeta^3}{3t^3}+\frac{2}{3}\right) \left[ h^{(1)}_\ell(t)\right]^2\nonumber\\ &&+\int_{\zeta}^\infty dt\, \left(\frac{2\zeta^3}{3t^3}-\frac{2\zeta}{t}+\frac{4}{3}\right) \left[ \frac{d}{dt}\left[th^{(1)}_\ell(t)\right]\right]^2 \label{4.9} \end{eqnarray} and \begin{eqnarray} &&J^m_{\ell,\parallel}(\zeta)=\int_{\zeta}^\infty dt\, \left(\frac{2\zeta^3}{3t}-2\zeta t+\frac{4t^2}{3}\right) \left[ h^{(1)}_\ell(t)\right]^2 \, . \label{4.10} \end{eqnarray} For large $\zeta$ their asymptotic forms are \begin{equation} J^e_{\ell,\parallel}(\zeta)\simeq (-1)^{\ell+1}\frac{ie^{2i\zeta}}{2\zeta^2} \, , \quad J^m_{\ell,\parallel}(\zeta)\simeq (-1)^{\ell}\frac{ie^{2i\zeta}}{2\zeta^2} \, , \label{4.11} \end{equation} so that the decay rate for large $\zeta_a$ becomes \begin{eqnarray} &&\langle\Gamma_{c,\parallel}\rangle\simeq \frac{3\pi n c^3}{8\omega_a^3\zeta_a^2}\, \Gamma_{0,\parallel}\, {\rm Im} \sum_{\ell=1}^{\infty} i^{\ell+1} \ell(\ell+1) \left(B^e_{\ell} - B^m_{\ell}\right)\, e^{2i\zeta_a}\, .\nonumber\\ &&\mbox{} \label{4.12} \end{eqnarray} In contrast to (\ref{4.8}) the right-hand side is proportional to the inverse square of $\zeta_a$. For general values of $\zeta_a$ we have to evaluate the integrals in (\ref{4.5})-(\ref{4.6}) and (\ref{4.9})-(\ref{4.10}), as will be done in the following section. \section{Evaluation of integrals} \setcounter{equation}{0} The integrals $J^p_{\ell,\perp}$ and $J^p_{\ell,\parallel}$ are linear combinations of integrals of the general form \begin{eqnarray} && I_{\ell_1,\ell_2,n}(\zeta)=\int_1^\infty du\, u^{-n}\, h^{(1)}_{\ell_1}(\zeta u)\, h^{(1)}_{\ell_2}(\zeta u)\, , \label{5.1} \end{eqnarray} which is symmetric in $\ell_1,\ell_2$. In fact, upon inspecting (\ref{4.5})-(\ref{4.6}) and (\ref{4.9})-(\ref{4.10}) we find that explicit expressions are needed for the integrals $I_{\ell,\ell,n}(\zeta)$ with $n=-2, -1, 0, 1, 3$ and for $I_{\ell,\ell-1,n}(\zeta)$ for $n=-1,0,2$. With the use of standard identities \cite{AS65} for spherical Hankel functions and by means of a partial integration we may derive several relations connecting these integrals for different values of the parameters: \begin{eqnarray} && I_{\ell_1-1,\ell_2,n}(\zeta)+I_{\ell_1+1,\ell_2,n}(\zeta)= \frac{2\ell_1+1}{\zeta}\, I_{\ell_1,\ell_2,n+1}(\zeta)\, , \label{5.2}\\ &&(n-\ell_1-\ell_2)\,I_{\ell_1-1,\ell_2,n}(\zeta)+ (n+\ell_1-\ell_2+1)\, I_{\ell_1+1,\ell_2,n}(\zeta)\nonumber\\ &&+(2\ell_1+1)\, I_{\ell_1,\ell_2+1,n}(\zeta) =\frac{2\ell_1+1}{\zeta}\,h^{(1)}_{\ell_1}(\zeta)\, h^{(1)}_{\ell_2}(\zeta)\, . \label{5.3} \end{eqnarray} In order to obtain explicit expressions for $I_{\ell_1,\ell_2,n}$ with $n=0,1,3$ we start from a result \cite{AS65} that is valid for $n=0$ and $\ell_1\neq\ell_2$: \begin{eqnarray} &&(\ell_1+\ell_2+1)I_{\ell_1,\ell_2,0}(\zeta)=\nonumber\\ &&=\frac{\zeta}{\ell_1-\ell_2} \left( h^{(1)}_{\ell_1}h^{(1)}_{\ell_2-1}-h^{(1)}_{\ell_1-1}h^{(1)}_{\ell_2}\right) +h^{(1)}_{\ell_1}h^{(1)}_{\ell_2} \, , \label{5.4} \end{eqnarray} as may be checked by differentiation. We omit the argument $\zeta$ of the spherical Hankel functions from now on. To obtain the corresponding expression for $\ell_1=\ell_2$ we put $\ell_1=\ell+1$, $\ell_2=\ell$ and $n=0$ in (\ref{5.3}) and use (\ref{5.4}) in the second term. 
In this way we obtain a recursion relation connecting $I_{\ell,\ell,0}$ for consecutive values of $\ell$. Solving this relation by employing the identity $I_{0,0,0}(\zeta)=-(2i/\zeta)\, E_1(-2i\zeta)+[h^{(1)}_0]^2$ (with $E_1$ the exponential integral \cite{AS65}) as an initial condition, we find \begin{eqnarray} &&I_{\ell,\ell,0}(\zeta)=-\frac{2i}{(2\ell+1)\zeta}E_1(-2i\zeta) \nonumber\\ &&+\frac{2}{2\ell+1}\sum_{k=0}^{\ell}\left[h_k^{(1)}\right]^2- \frac{1}{2\ell+1}\left[h_\ell^{(1)}\right]^2 \label{5.5} \end{eqnarray} for all $\ell\geq 0$. The exponential integral of purely imaginary argument can be expressed in terms of sine and cosine integrals as $E_1(-2i\zeta)=-{\rm Ci}(2\zeta)-i{\rm Si}(2\zeta)+i\pi/2$. With the help of the identities (\ref{5.2}), (\ref{5.4}) and the recursion relations for the spherical Hankel functions one derives expressions for $I_{\ell,\ell,1}$ (with $(\ell\geq 1$) and $I_{\ell,\ell,3}$ (with $\ell\geq 2$), in the form of linear combinations of products of spherical Hankel functions: \begin{eqnarray} &&I_{\ell,\ell,1}(\zeta)=\left[-\frac{\zeta^2}{2\ell(\ell+1)}+\frac{1}{2(\ell+1)}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&-\frac{\zeta^2}{2\ell(\ell+1)}\left[h_{\ell-1}^{(1)}\right]^2 +\frac{\zeta}{\ell+1}h_\ell^{(1)}h_{\ell-1}^{(1)} \label{5.6} \end{eqnarray} for $\ell\geq 1$, and \begin{eqnarray} &&I_{\ell,\ell,3}(\zeta)=\left[-\frac{\zeta^4}{3(\ell-1)\ell(\ell+1)(\ell+2)} -\frac{\zeta^2}{6(\ell+1)(\ell+2)}\right.\nonumber\\ &&\left.+\frac{1}{2(\ell+2)}\right] \left[h_\ell^{(1)}\right]^2 +\left[-\frac{\zeta^4}{3(\ell-1)\ell(\ell+1)(\ell+2)}\right.\nonumber\\ &&\left.-\frac{\zeta^2}{6(\ell-1)(\ell+2)}\right]\left[h_{\ell-1}^{(1)}\right]^2 +\left[\frac{2\zeta^3}{3(\ell-1)(\ell+1)(\ell+2)}\right.\nonumber\\ &&\left.+\frac{\zeta}{3(\ell+2)}\right]h_\ell^{(1)}h_{\ell-1}^{(1)} \label{5.7} \end{eqnarray} for $\ell\geq 2$. It turns out that these formulas cannot be used for $I_{0,0,1}$ and $I_{1,1,3}$, since the expressions diverge in these cases. However, these special cases can be obtained straightforwardly by connecting them to the exponential integral of the same argument as in (\ref{5.5}). Expressions for $I_{\ell,\ell,n}$ with $n=-2$ follow by choosing $\ell_1=\ell+1$, $\ell_2=\ell$ and $n=-2$ in (\ref{5.3}). The second term at the left-hand side drops out for these values of the parameters. As a result a simple recurrence relation for $I_{\ell,\ell,-2}$ is found, which may be solved for all $\ell\geq 0$ by employing the identity $I_{0,0,-2}(\zeta)=-ie^{2i\zeta}/(2\zeta^3)$ as a starting point. One gets for $\ell\geq 0$: \begin{eqnarray} &&I_{\ell,\ell,-2}(\zeta)=-\frac{1}{2} \left[h_\ell^{(1)}\right]^2- \frac{1}{2}\left[h_{\ell+1}^{(1)}\right]^2+\frac{2\ell+1}{2\zeta} h_\ell^{(1)}h_{\ell+1}^{(1)} \, . \label{5.8} \end{eqnarray} Furthermore, by choosing in (\ref{5.3}) the parameters as $n=-2$ and $\ell_1=\ell_2$ as either $\ell$ or $\ell+1$, one gets two identities, which may be combined with (\ref{5.2}) so as to obtain a recursion relation for $I_{\ell,\ell,-1}$. Solving that relation with the initial condition $I_{0,0,-1}(\zeta)=-\zeta^{-2}E_1(-2i\zeta)$, we find for all $\ell\geq 0$: \begin{eqnarray} &&I_{\ell,\ell,-1}(\zeta)=-\zeta^{-2}E_1(-2i\zeta)+\sum_{k=1}^\ell \frac{2k+1}{2k(k+1)} \left[h_k^{(1)}\right]^2 \nonumber\\ &&+\frac{1}{2} \left[h_0^{(1)}\right]^2-\frac{1}{2(\ell+1)} \left[h_{\ell}^{(1)}\right]^2\, . \label{5.9} \end{eqnarray} It should be noted that the sum drops out for $\ell=0$. 
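As a check on the algebra, the closed form (\ref{5.5}) can be compared with a direct numerical evaluation of the defining integral (\ref{5.1}). In the sketch below (our own helper names), the integrand is evaluated on the rotated path $u=1+is$, on which it decays exponentially, and $E_1$ is taken from \textsf{scipy} on its principal branch, which for the argument $-2i\zeta$ coincides with the combination of sine and cosine integrals given above.
\begin{verbatim}
import numpy as np
from scipy.special import hankel1, exp1
from scipy.integrate import trapezoid

def h1(l, z):
    z = np.asarray(z, dtype=complex)
    return np.sqrt(np.pi / (2 * z)) * hankel1(l + 0.5, z)

def I_num(l1, l2, n, zeta, s_max=25.0, n_s=6000):
    # I_{l1,l2,n}(zeta) of Eq. (5.1), evaluated on the rotated contour u = 1 + i s
    s = np.linspace(0.0, s_max, n_s)
    u = 1.0 + 1j * s
    return 1j * trapezoid(u**(-n) * h1(l1, zeta * u) * h1(l2, zeta * u), s)

def I_ll0_closed(l, zeta):
    # closed form (5.5) for I_{l,l,0}(zeta)
    hk2 = np.array([h1(k, zeta)**2 for k in range(l + 1)])
    return (-2j / ((2 * l + 1) * zeta) * exp1(-2j * zeta)
            + 2.0 / (2 * l + 1) * hk2.sum() - hk2[l] / (2 * l + 1))

l, zeta = 3, 2.0
print(I_num(l, l, 0, zeta))       # the two values should coincide
print(I_ll0_closed(l, zeta))      # to quadrature accuracy
\end{verbatim}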
Finally, we need expressions for $I_{\ell,\ell-1,p}$ for $p=-1$ and $p=2$. Once more we use the identity (\ref{5.3}), now for the choice $\ell_1=\ell_2=\ell$ and $n=-1$. It yields a recursion relation for $I_{\ell,\ell-1,-1}$ from which we get for $\ell\geq 1$: \begin{eqnarray} && I_{\ell,\ell-1,-1}(\zeta)=-i\zeta^{-2}E_1(-2i\zeta)+\zeta^{-1} \sum_{k=0}^{\ell-1} \left[h_k^{(1)}\right]^2\, . \label{5.10} \end{eqnarray} Turning to the case $p=2$, one derives a result for $I_{\ell,\ell-1,2}$ by a repeated use of (\ref{5.2}) in combination with (\ref{5.4}). Once again linear combinations of products of two spherical Hankel functions are found, at least for $\ell\geq 2$: \begin{eqnarray} && I_{\ell,\ell-1,2}(\zeta)=\left[-\frac{\zeta^3}{3(\ell-1)\ell(\ell+1)}-\frac{\zeta}{6(\ell+1)}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&+\left[-\frac{\zeta^3}{3(\ell-1)\ell(\ell+1)}-\frac{\zeta}{6(\ell-1)}\right]\left[h_{\ell-1}^{(1)}\right]^2 \nonumber\\ &&+\left[\frac{2\zeta^2}{3(\ell-1)(\ell+1)}+\frac{1}{3}\right]h_\ell^{(1)}h_{\ell-1}^{(1)}\, . \label{5.11} \end{eqnarray} For $\ell=1$ this expression is singular and cannot be used. From a direct evaluation of the integral for this special case one finds that an exponential integral shows up, as before. \section{Results} \setcounter{equation}{0} The explicit expressions for the basic integrals (\ref{5.1}) that we derived in the previous section can be employed now to determine the corrections to the decay rate. For a perpendicular orientation of the excited atom the correction (\ref{4.4}) to the decay rate is governed by the integrals (\ref{4.5})-(\ref{4.6}) for which we get upon substitution of the relevant contributions: \begin{eqnarray} &&J^e_{\ell,\perp}(\zeta)=\zeta E_1(-2i\zeta)\nonumber\\ && +\left[-\frac{\zeta^5}{6\ell(\ell+1)}-\frac{\zeta^3(2\ell^2-2\ell-3)}{6\ell(\ell+1)} +\frac{\zeta\ell}{2(\ell+1)}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&+\left[-\frac{\zeta^5}{6\ell(\ell+1)}-\frac{\zeta^3(2\ell^2+2\ell-3)}{6\ell(\ell+1)} \right]\left[h_{\ell-1}^{(1)}\right]^2\nonumber\\ &&+\left[\frac{\zeta^4}{3(\ell+1)}+\frac{\zeta^2(2\ell^2+3\ell-2)}{3(\ell+1)} \right]h_\ell^{(1)}h_{\ell-1}^{(1)}\nonumber\\ &&-\zeta^3\sum_{k=1}^\ell \frac{2k+1}{2k(k+1)}\left[h_k^{(1)}\right]^2 -\frac{1}{2}\zeta^3\left[h_0^{(1)}\right]^2 \label{6.1} \end{eqnarray} and \begin{eqnarray} &&J^m_{\ell,\perp}(\zeta)=\zeta E_1(-2i\zeta)\nonumber\\ && +\left[\frac{\zeta^5}{6\ell(\ell+1)}-\frac{\zeta^3(2\ell+1)}{3(\ell+1)}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&+\left[\frac{\zeta^5}{6\ell(\ell+1)}-\frac{2\zeta^3}{3} \right]\left[h_{\ell-1}^{(1)}\right]^2\nonumber\\ &&+\left[-\frac{\zeta^4}{3(\ell+1)}+\frac{2\zeta^2(2\ell+1)}{3} \right]h_\ell^{(1)}h_{\ell-1}^{(1)}\nonumber\\ &&-\zeta^3\sum_{k=1}^\ell \frac{2k+1}{2k(k+1)}\left[h_k^{(1)}\right]^2 -\frac{1}{2}\zeta^3\left[h_0^{(1)}\right]^2\, , \label{6.2} \end{eqnarray} for all $\ell\geq 1$. 
Likewise, the results for a parallel atomic orientation read \begin{eqnarray} &&J^e_{\ell,\parallel}(\zeta)=2\zeta E_1(-2i\zeta)\nonumber\\ && +\left[\frac{\zeta^5}{3\ell(\ell+1)}-\frac{\zeta^3(4\ell^2+2\ell-3)}{3\ell(\ell+1)} +\frac{\zeta\ell}{\ell+1}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&+\left[\frac{\zeta^5}{3\ell(\ell+1)}-\frac{\zeta^3(4\ell^2+4\ell-3)}{3\ell(\ell+1)} \right]\left[h_{\ell-1}^{(1)}\right]^2\nonumber\\ &&+\left[-\frac{2\zeta^4}{3(\ell+1)}+\frac{2\zeta^2(4\ell^2+6\ell-1)}{3(\ell+1)} \right]h_\ell^{(1)}h_{\ell-1}^{(1)}\nonumber\\ &&-\zeta^3\sum_{k=1}^\ell \frac{2k+1}{k(k+1)}\left[h_k^{(1)}\right]^2 -\zeta^3\left[h_0^{(1)}\right]^2 \label{6.3} \end{eqnarray} and \begin{eqnarray} &&J^m_{\ell,\parallel}(\zeta)=2\zeta E_1(-2i\zeta)\nonumber\\ && +\left[-\frac{\zeta^5}{3\ell(\ell+1)}-\frac{2\zeta^3(\ell-1)}{3(\ell+1)}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&+\left[-\frac{\zeta^5}{3\ell(\ell+1)}-\frac{2\zeta^3}{3} \right]\left[h_{\ell-1}^{(1)}\right]^2\nonumber\\ &&+\left[\frac{2\zeta^4}{3(\ell+1)}+\frac{2\zeta^2(2\ell+1)}{3} \right]h_\ell^{(1)}h_{\ell-1}^{(1)}\nonumber\\ &&-\zeta^3\sum_{k=1}^\ell \frac{2k+1}{k(k+1)}\left[h_k^{(1)}\right]^2 -\zeta^3\left[h_0^{(1)}\right]^2\, , \label{6.4} \end{eqnarray} again for all $\ell\geq 1$. As remarked above, the exponential integrals can be expressed in sine and cosine integrals. After insertion of the expressions (\ref{6.1})-(\ref{6.2}) and the multipole amplitudes (\ref{3.5}) into (\ref{4.4}), the average correction to the decay rate for the perpendicular configuration is found in terms of well-known special functions depending on $\zeta_a$, $q$ and $\varepsilon$. It may be plotted as a function of $\zeta_a$, for various choices of $q$ and $\varepsilon$. To facilitate comparison with our previous results \cite{ SvW10} we introduce the decay rate correction function $f_\perp(\zeta_a,q,\varepsilon)=-16 \langle\Gamma_{c,\perp}\rangle /(3nv_0\Gamma_{0,\perp})$ with $v_0=4\pi a^3/3$ the volume of the spheres. We shall first consider two special cases that we have treated in \cite{SvW10}: purely scattering spheres and purely absorbing spheres. In the former case we choose the dielectric constant to be real ($\varepsilon=1.5$) and the spherical radius to be finite on the scale of the wavelength ($q=0.5$). For the purely absorbing case with vanishingly small spheres ($q\rightarrow 0$), we take the dielectric constant to be complex with the value $\varepsilon=1.5+i\, 0.5$, as in \cite{SvW10}. Since for small $q$ the multipole amplitudes $B^e_\ell$ and $B^m_\ell$ behave as $q^{2\ell+1}$ and $q^{2\ell+3}$, respectively, only the electric dipole amplitude $B^e_1=iq^3(\varepsilon-1)/(\varepsilon+2)$ contributes to (\ref{4.4}) for the purely absorbing case. In Figs.~\ref{fig1} and \ref{fig2} the curves for $f_\perp(\zeta_a)$ are compared to their asymptotic counterparts for large $\zeta_a$ that follow from (\ref{4.8}). 
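For completeness, the correction function can also be assembled numerically from the ingredients above. The sketch below is our own bookkeeping of the prefactors: combining (\ref{4.4}) with the definition of $f_\perp$ and $v_0=4\pi a^3/3$ gives $f_\perp=(3/q^3)\,{\rm Im}\sum_\ell(-i)^\ell\ell(\ell+1)\left[B^e_\ell J^e_{\ell,\perp}(\zeta_a)+B^m_\ell J^m_{\ell,\perp}(\zeta_a)\right]$, a normalization that reproduces the small-$q$, small-$\zeta_a$ behaviour $-\tfrac{3}{2}\,{\rm Im}[(\varepsilon-1)/(\varepsilon+2)]/\zeta_a^3$ discussed below. The $+i0$ prescription in $\zeta_a$ is implemented by deforming the integration path of (\ref{4.5})-(\ref{4.6}) to the vertical ray $t=\zeta_a+is$, and the series is truncated at a finite $\ell_{max}$.
\begin{verbatim}
import numpy as np
from scipy.special import jv, hankel1
from scipy.integrate import trapezoid

def sph(fun, l, z):
    z = np.asarray(z, dtype=complex)
    return np.sqrt(np.pi / (2 * z)) * fun(l + 0.5, z)

def B_lp(l, q, eps):                       # multipole amplitudes, Eqs. (3.5)-(3.6)
    j = lambda n, x: sph(jv, n, x)
    h = lambda n, x: sph(hankel1, n, x)
    f = lambda g, x: (l + 1) * g(l, x) - x * g(l + 1, x)
    qp = np.sqrt(eps) * q
    pref = 1j**(l + 1) * (2 * l + 1) / (l * (l + 1))
    Be = pref * (eps * f(j, q) * j(l, qp) - j(l, q) * f(j, qp)) \
              / (eps * f(h, q) * j(l, qp) - h(l, q) * f(j, qp))
    Bm = pref * (f(j, q) * j(l, qp) - j(l, q) * f(j, qp)) \
              / (f(h, q) * j(l, qp) - h(l, q) * f(j, qp))
    return Be, Bm

def J_perp(l, zeta, s_max=40.0, n_s=8000):  # Eqs. (4.5)-(4.6) on t = zeta + i s
    s = np.linspace(0.0, s_max, n_s)
    t = zeta + 1j * s
    h2 = sph(hankel1, l, t)**2
    # d/dt [t h_l^(1)(t)] = t h_{l-1}^(1)(t) - l h_l^(1)(t)
    dth2 = (t * sph(hankel1, l - 1, t) - l * sph(hankel1, l, t))**2
    Je = 1j * trapezoid(2*l*(l+1)*(zeta**3/(3*t**3) - zeta/t + 2/3) * h2
                        + (-zeta**3/(3*t**3) - zeta/t + 4/3) * dth2, s)
    Jm = 1j * trapezoid((-zeta**3/(3*t) - zeta*t + 4*t**2/3) * h2, s)
    return Je, Jm

def f_perp(zeta_a, q, eps, l_max=12):
    total = 0.0
    for l in range(1, l_max + 1):
        Be, Bm = B_lp(l, q, eps)
        Je, Jm = J_perp(l, zeta_a)
        total += np.imag((-1j)**l * l * (l + 1) * (Be * Je + Bm * Jm))
    return 3.0 / q**3 * total

q, eps = 0.5, 1.5 + 0.5j                   # the mixed case of Fig. 3
for z in (1.0, 2.0, 4.0, 8.0):
    print(z, f_perp(z, q, eps))
\end{verbatim}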
As can be seen from these figures, the asymptotic curves are quite adequate \begin{figure} \caption{Decay rate correction function $f_\perp(\zeta_a)$ (solid line) and its asymptotic form at large distances (dashed line), for a medium with scattering spheres (with $q=0.5$, $\varepsilon(\omega_a)=1.5$).} \label{fig1} \end{figure} \begin{figure} \caption{Decay rate correction function $f_\perp(\zeta_a)$ (solid line) and its asymptotic forms at small and large distances (dashed lines), for a medium with absorbing spheres (with $q=0$, $\varepsilon(\omega_a)=1.5+i \, 0.5$).} \label{fig2} \end{figure} already for $\zeta_a\approx 3$. In the asymptotic regime the results given in \cite{SvW10} are corroborated. (It should be noted that the curves given in \cite{SvW10} erroneously represent $-f_\perp$ instead of $f_\perp$.) For small distances the behaviour of the atomic decay rates in the two cases differ considerably. In fact, for the purely scattering case of Fig.~\ref{fig1} the decay rate attains a finite value when $\zeta_a$ approaches its minimum value $q$. On the other hand, for the purely absorbing case of Fig.~\ref{fig2} the decay rate correction function is governed by $J^e_{1,\perp}(\zeta)$, which according to (\ref{6.1}) has the asymptotic form $-1/(4\zeta^3)$ for small $\zeta$. Hence, the decay rate correction function diverges as $-(3/2)\, {\rm Im}[(\varepsilon-1)/(\varepsilon+2)] /\zeta_a^3$ for $\zeta_a\rightarrow 0$ in this case. For a more general situation in which both scattering and absorption take place, we choose $q=0.5$ and $\varepsilon=1.5+i\, 0.5$, with results presented in Fig.~\ref{fig3}. For large $\zeta_a$ the decay \begin{figure} \caption{Decay rate correction function $f_\perp(\zeta_a)$ (solid line) and its asymptotic forms at small and large distances (dashed lines), for a medium with scattering and absorbing spheres (with $q=0.5$, $\varepsilon(\omega_a)=1.5+i \, 0.5$).} \label{fig3} \end{figure} rate falls off like $\zeta_a^{-1}$, in agreement with (\ref{4.8}). For small distances the decay rate diverges, as in Fig.~\ref{fig2}. In fact, as $\zeta_a \rightarrow q$, the rate is found to be proportional to $(\zeta_a-q)^{-1}$. This follows from (\ref{4.4}), since the series converges increasingly slowly when $\zeta_a$ approaches $q$. Indeed, for large $\ell$ the electric multipole amplitudes $B^e_\ell$ are given by $[i^\ell/(l^2[(2\ell-1)!!]^2)]\,[(\varepsilon-1)/(\varepsilon+1)]\, q^{2\ell+1}$, while the integral (\ref{6.1}) gets the form $-[(2\ell-1)!!]^2/(2\zeta_a^{2\ell+1})$. Hence, the electric multipole contribution to the $\ell$-th term in the series of (\ref{4.4}) is $-\half[(\varepsilon-1)/(\varepsilon+1)]\, (q/\zeta_a)^{2\ell+1}$. Since the magnetic multipole contributions turn out to be negligible for large $\ell$, the asymptotic form of (\ref{4.4}) for $\zeta_a$ tending to $q$ is proportional to $\sum_{\ell=1}^\infty (q/\zeta_a)^{2\ell+1}\simeq q/[2(\zeta_a-q)]$, so that the asymptotic form of $f_\perp$ reads \begin{equation} f_\perp\simeq -\frac{3}{4q^2(\zeta_a-q)}\, {\rm Im} \left[\frac{\varepsilon-1}{\varepsilon+1}\right] \label{6.5} \end{equation} for $\zeta_a\rightarrow q$. The physical mechanism for the divergence in the decay rates of Figs.~\ref{fig2} and \ref{fig3} for small $\zeta_a$ is the efficient non-radiative energy transfer from the atom to the absorbing spheres that dominates the atomic decay in the near zone. 
A similar divergent behaviour has been found in a classical treatment of the energy transfer between an excited molecule and a homogeneous absorbing medium filling a halfspace \cite{CPS74}. It should be noted that (\ref{6.5}) loses its meaning when $\zeta_a-q$ becomes so small that the approximations made in deriving it are no longer valid. In particular, perturbation theory in lowest order and the electric-dipole approximation are not adequate to describe the decay for very small distances. Furthermore, the notion of scatterers with a structureless surface gets lost as well in that case. For the parallel configuration we have likewise evaluated $f_\parallel(\zeta_a,q,\varepsilon)=-16 \langle\Gamma_{c,\parallel}\rangle /(3nv_0\Gamma_{0,\parallel})$. The result for the mixed case of both scattering and absorption is given in Fig.~\ref{fig4} for the same choice of the parameters $q$ and $\varepsilon$ as in Fig.~\ref{fig3}. \begin{figure} \caption{Decay rate correction function $f_\parallel(\zeta_a)$ (solid line) and its asymptotic forms at small and large distances (dashed lines), for a medium with scattering and absorbing spheres (with $q=0.5$, $\varepsilon(\omega_a)=1.5+i\, 0.5$).} \label{fig4} \end{figure} The figure clearly shows that for the parallel configuration the correction to the atomic decay rate goes faster to zero with increasing $\zeta_a$ than for the perpendicular configuration. This is in accordance with the findings of section 4, where it has been seen that in the asymptotic regime the correction to the atomic decay rate is proportional to the inverse distance in the perpendicular configuration, but to its square in the parallel configuration. As before, the asymptotic expression is adequate from $\zeta_a\approx 3$ onwards. For small distances, with $\zeta_a\rightarrow q$, the asymptotic form of $f_\parallel$ is twice that of $f_\perp$, as follows by comparing the asymptotic forms of (\ref{6.1}) and (\ref{6.3}) for large $\ell$. In conclusion, we have shown how absorption and scattering processes in a medium may cooperate in modifying the emission rate of an excited atom in its vicinity. The explicit expressions for the decay rate that we have obtained permit a detailed analysis of the behaviour of the emission rate for arbitrary distances between the atom and the medium. As we have seen, the effects of absorption and of scattering are qualitatively different, when the atom approaches the medium. \end{document}
math
36,321
\begin{document} \begin{abstract} We study the slices of the parameter space of cubic polynomials where we fix the multiplier of a fixed point to some value $\lambda$. The main object of interest here is the radius of convergence of the linearizing parametrization. The opposite of its logarithm turns out to be a sub-harmonic function of the parameter whose Laplacian $\mu_\lambda$ is of particular interest. We relate its support to the Zakeri curve in the case the multiplier is neutral with a bounded type irrational rotation number. In the attracting case, we define and study an analogue of the Zakeri curve, using work of Petersen and Tan. In the parabolic case, we define an analogue using the notion of asymptotic size. We prove a convergence theorem of $\mu_{\lambda_n}$ to $\mu_\lambda$ for $\lambda _n= \exp(2\pi i \pqn)$ and $\lambda = \exp(2\pi i\theta)$ where $\theta$ is a bounded type irrational and $\pqn$ are its convergents. \end{abstract} \maketitle \tableofcontents \section*{Structure of the document} The first section defines and studies the parameter spaces of cubic polynomials under several normalizations related to different markings, and the relations between these different spaces. \Cref{sec:phi} recalls generalities on the linearizing power series and maps, in the case of an attracting fixed point. If the dynamics is given by a polynomial, we define a special subset of the basin that we call $U(P)$ and that is the image of the disk of convergence of the linearizing parametrization. It is strictly contained in the basin of the fixed point and will play a role for attracting multipliers similar to the role played by Siegel disks for neutral multipliers. \Cref{sec:3} proves generalities about the radius of convergence $r$ of the linearizing power series, with a focus on its dependence on the polynomial. We prove that, if $\lambda$ is fixed, $-\log r$ is a subharmonic function of the remaining parameter. The measure $\mu_\lambda = \Delta(-\log r)$ is introduced. We prove that its total mass is $2\pi$. \Cref{sec:attr} studies attracting slices when both critical points are marked. We deduce from the work of Petersen and Tan that the set of parameters for which both critical points are attracted to the fixed point is an annulus. We define the set $Z_\lambda$ for which both critical points are on $\partial U(P)$. We prove that the support of $\Delta(-\log r)$ is equal to $Z_\lambda$. \Cref{sec:5} studies similar slices but in the case when $\theta$ is a bounded type number. Zakeri proved that the set of parameters for which both critical points are on the boundary of the Siegel disk is a Jordan curve $Z_\lambda$. We prove that the support of $\mu_\lambda$ is $Z_\lambda$. \Cref{sec:parabo} is about parabolic slices, i.e.\ $\lambda$ is a root of unity. We use the asymptotic size $L$ of parabolic points as an analogue of the conformal radius of Siegel disks. We prove that $-\log L$ is a subharmonic function of the parameter and that its Laplacian is a sum of Dirac masses situated at parameters for which the fixed point is degenerate, i.e.\ has too many petals. \Cref{sec:7} proves the convergence of $\mu_{\lambda_n}$ to $\mu_\lambda$ in the weak-$\ast$ topology for $\lambda _n= \exp(2\pi i \pqn)$ and $\lambda = \exp(2\pi i\theta)$ where $\theta$ is a bounded type irrational and $\pqn$ are its convergents.
\section{Normalisations}\label{sec:normz} Seminal works in the study of cubic polynomials include \cite{art:BH1,art:BH2,art:Zakeri}. We assume here that the reader is familiar with holomorphic dynamics. We will consider conjugacy classes of cubic polynomials with or without marked points. The conjugacies will be by \emph{affine maps}, i.e.\ maps of the form $z\in\mathbb{C} \longmapsto az+b$ with $a\in\mathbb{C}^*$ and $b\in \mathbb{C}$, and will have to respect the markings. We will consider three different markings and their relations. A priori the quotient spaces are just sets. However, since we quotient an analytic manifold by analytic relations, there is more structure. It is not our object here to develop a general theory of such quotients. In our case there will be families of representatives $P_{a,b}$, defined for complex numbers $(a,b)$ varying in an open subset of $\mathbb{C}^2$, whose $4$ coefficients vary holomorphically with $(a,b)$, whose marked points vary holomorphically too, such that all equivalence classes are represented, and such that equivalence classes have either only one representative, or at most two. The equivalence relation is still analytic in $(a,b)$ and the quotient is a priori only an orbifold. These orbifolds turn out to still be analytic manifolds in our three particular cases. The choices of our three families are called here \emph{normalizations}. A more general notion of normalization can certainly be developed, but this is not the object of the present article. \subsection{First family}\label{sub:norm_1st} We first consider the family of cubic polynomials with one non-critical fixed point marked and both critical points marked, up to affine conjugacy. We choose the following representative: the point $0$ is fixed with multiplier $\lambda \in \mathbb{C}^*$. The critical points are $1$ and $c \in \mathbb{C}^\ast$; the second one is taken as a parameter. It follows from an easy (and classical) computation that \[ P_{\lambda,c}(z) = \lambda z \left( 1 - \frac{(1 + \sfrac{1}{c})}{2} z + \frac{\sfrac{1}{c}}{3} z^2 \right) \] and that any marked polynomial is uniquely represented (see \cite{art:Zakeri}). When $\lambda$ is a root of unity, $\lambda = \o{\pq}$, we will sometimes, for simplicity, denote by $P_{\pq,c}$ the polynomial $P_{\lambda,c}$. \begin{proposition} Some easy remarks: \begin{itemize} \item Switching the roles of the critical points $1$ and $c$ is equivalent to replacing $c$ by $1/c$. We have the symmetry \begin{equation}\label{eq:sym} c^{-1}P_{\lambda,c}(cz) = P_{\lambda,1/c} (z). \end{equation} \item The map has a double critical point if and only if $c=1$. \item When $c \to \infty$, $P_{\lambda,c}$ converges uniformly on compact subsets of $\mathbb{C}$ to a quadratic polynomial which fixes $0$ with multiplier $\lambda$, namely to $Q_\lambda(z) := \lambda z \left( 1 - \frac{z}{2} \right)$. \end{itemize} \end{proposition} \subsection{Second family}\label{sub:norm_2nd} We consider the set of cubic polynomials with one non-critical fixed point marked but with the critical points not marked. This amounts to identifying $c$ and $1/c$. Since $c\neq 0$, because we assume that the fixed point is not critical, we can take \[ v:=\frac{c+c^{-1}}{2} \] as a parameter.
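Explicitly, \eqref{eq:sym} and the invariance of $v$ under the exchange of the critical points can be checked by a direct expansion, using only the normal form of $P_{\lambda,c}$ above:
\[
c^{-1}P_{\lambda,c}(cz) \;=\; \lambda z \left( 1 - \frac{1 + \sfrac{1}{c}}{2}\, cz + \frac{\sfrac{1}{c}}{3}\, c^2z^2 \right) \;=\; \lambda z \left( 1 - \frac{1 + c}{2}\, z + \frac{c}{3}\, z^2 \right) \;=\; P_{\lambda,1/c}(z),
\]
while $v(1/c) = \tfrac{1}{2}\left(\tfrac{1}{c}+c\right) = v(c)$, so that $v$ is indeed unchanged when $c$ is replaced by $1/c$.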
The map $c\longrightarrowto v$ is a ramified cover from $\mathbb{C}^*$ to $\mathbb{C}$, ramified at $c=1$ and $c=-1$ which are mapped respectively to $v=1$ and $v=-1$.\footnote{These are the same values but we are reluctant to call them fixed points.} The first case $v=1$ corresponds to maps with a double critical point, as noticed above. The second case $v=-1$ corresponds to the maps in the family $P_{\lambda,c}$ that commute with $z\longrightarrowto -z$: \[ P_{\lambda,-1}(z) = \lambda (z- \frac{1}{3}z^3)\] This map is conjugated to $z\longrightarrowto \lambda z + z^3$, which is a form that may or may not be more familiar to the reader. Finally, note that $v=0$ is in the range of the map $c\longrightarrowto v$: it corresponds to $c=\pm i$. However we do not have a special dynamical interpretation for this value of $c$. \subsection{Third family}\label{sub:norm_3rd} We consider the set of unmarked cubic polynomials up to affine conjugacy. It can be parametrized by the set of monic centred cubic polynomials up to conjugacy by $z\longrightarrowto -z$, i.e.\ \[ P(z)= z^3+az+b \] with $(a,b)\sim (a,-b)$. The pair of parameters $(a,b^2)$ then gives a bijection from this family with $\mathbb{C}^2$. We will sometimes denote $b_2 = b^2$ so that \[(a,b_2) = (a,b^2)\] The map $P$ is unicritical if and only if $a=0$. The map $P$ commutes with a non-trivial affine map if and only if it commutes with $z\longrightarrowto -z$, if and only if $b=0$. \subsection{Topological aspect} We have identified sets of affine conjugacy equivalence classes of polynomials (marked on not) with open subsets $U\subset\mathbb{C}^2$. We now interpret the sequential convergence for the topology of $\mathbb{C}^2$ in terms of those classes. \mathbf{b}egin{proposition} For any of the three families, a sequence of (marked) polynomials classes $[P_n]$ converges in $U$ to the (marked) polynomial class $[P]$ if and only if there exists representatives $Q_n\sim P_n$ and $Q\sim P$ such that $Q_n$ tends to $Q$ as degree $3$ polynomials (i.e.\ coefficient by coefficient; there are $4$ such coefficients) and such that each marked point of $Q_n$ converges to the corresponding marked point of $Q$. \end{proposition} \mathbf{b}egin{proof} The ``only if'' direction ($\mathbb{R}ightarrow$) of the equivalence is the easiest. We have families parametrized analytically, hence continuously, by either $(\lambda,c)$ or $(a,b)$. For one choice of the parameter $(\lambda,v)$ there is at most two values of $(\lambda,c)$ that realize it. Similarly, for one choice of the parameter $(a,b^2)$ there is at most two values of $(a,b)$ that realize it. Moreover, given a converging sequence of parameters, one can choose the realization so that it converges too. Last, the marked points depend continuously on the corresponding parameters: $0$ does not move at all and $c\longrightarrowto c$ is trivially continuous\ldots For the ``if'' direction ($\Leftarrow$), we treat first the $(a,b^2)$-parameter space. The affine conjugacies from any cubic polynomial $Q=q_3 z^3 + q_2 z^2 + q_1 z + q_0$ to a monic centred polynomial correspond to the change of variable $z=\alpha w +\mathbf{b}eta$ such that $\alpha^2 q_3 = 1$ and that send the centre $-q_2/3q_3$ to $0$, a condition that, once $\alpha$ is fixed, determines a unique $\mathbf{b}eta$, more precisely $\mathbf{b}eta = \alpha q_2/3q_3$. Then $Q$ is conjugated by this affine map to $w\longrightarrowto w^3 + aw+b$ with $a = q_1-q_2^2/3q_3$ and $b = \frac{2q_2^3/q_3^2+9(1-q_1)q_2/q_3+27q_0}{27\alpha}$. 
In particular, since $\alpha^2 = 1/q_3$, $(a,b^2)$ is a holomorphic, hence continuous, function of the coefficients $(q_0,\ldots,q_3)\in \mathbb{C}^3\times\mathbb{C}^*$. It follows that given $Q_n\longrightarrow Q$, the corresponding conjugate maps will have values of $(a_n,b_n^2)$ that converge to $(a,b^2)$. Now we go for the $(\lambda,c)$ and $(\lambda,v)$ spaces. Obviously if $Q_n\longrightarrow Q$ and the marked fixed point of $Q_n$ converges then $\lambda_n\longrightarrow \lambda$. Moreover, the set of critical points of $Q_n$ converges, by Rouché's theorem, to that of $Q$ (there is no drop of degree that would allow one critical point to tend to infinity). Now if the set of critical points of $Q$ is ${c_0,c_1}$ and the fixed point of $Q$ is $p$ then parameter $v$ is given by $\frac12\left(\frac{c_0-p}{c_1-p}+\frac{c_1-p}{c_0-p}\right)$, which implies the convergence for the $v$-parameter. If moreover critical point are marked then $c= \frac{c_0-p}{c_1-p}$. This implies the convergence of $c$ if the convergence also concerns critical marked points. \end{proof} A more satisfying approach to the topology would be to consider the quotient topologies but we prefer to stay at more basic level. \subsection{Correspondence between the normalizations} There is a map associated to forgetting markings, from the first family to the second, and a similar map from the second to the third: they induce maps \[ (\lambda,c) \longrightarrowto (\lambda, v) \longrightarrowto (a,b_2) \] where $\lambda$, $c$, $v$, $a$, and $b_2 = b^2$ were introduced in the preceding sections. Recall that \[ v=\frac{c+c^{-1}}{2} \] A computation gives \mathbf{b}egin{align} \label{eq:lvab1} a &=\frac{\lambda(1-v)}{2} \\ \label{eq:lvab2} b_2 &= \frac{\lambda}{3}\cdot\frac{v+1}{2}\cdot\left(1+(v-2)\frac{\lambda}{3}\right)^2 \end{align} Let us denote \[ \Theta : \left\{ \mathbf{b}egin{array}{rcl} \mathbb{C}^*\times\mathbb{C}&\to& \mathbb{C}^2 \\ (\lambda,v)&\longrightarrowto& (a,b_2) \end{array} \right. \] and stress that even though $\Theta$ has a polynomial expression, we only consider its \emph{restriction} to non-vanishing values of $\lambda$. Of particular interest is the set of \emph{unicritical polynomials}, which as we have seen in $(\lambda,c)$-space correspond exactly to those polynomials for which $c=1$, in $(\lambda,v)$-space to those for which $v=1$ and in $(a,b_2)$-space those for which $a=0$. We have $\Theta(\lambda,1) = (0,\frac{\lambda}{3}\left(\frac{\lambda}{3}-1\right)^2)$ i.e.\ \[ (\lambda, c = 1) \longrightarrowto (\lambda, v=1) \longrightarrowto (a = 0, b_2 = \frac{\lambda}{3}\left(\frac{\lambda}{3}-1\right)^2) \] Every cubic polynomial in the third family can be marked in at most $3$ different ways as an element of the second family, since there is at most three fixed points: hence the preimage of an element of $\mathbb{C}^2$ by $\Theta$ has at most $3$ elements. Also, a cubic polynomial cannot have all its fixed points critical, in particular $\Theta$ is surjective. The following lemma sums up a precise analysis. \mathbf{b}egin{lemma} Let $P$ an affine conjugacy class of cubic polynomials be represented by $(a,b_2)=(a,b^2)\in\mathbb{C}^2$. 
The fibre $\Theta^{-1}(P)$ (in $(\lambda,v)$-space) has $3$ elements unless one of the following occurs: \mathbf{b}egin{enumerate} \item $(a,b^2)=(0,0)$ i.e.\ $P$ is conjugate to $z^3$; then the fibre has $1$ element $(\lambda,v)=(3,1)$; \item $(a,b^2)=(1,0)$ i.e.\ $P$ is conjugate to $z+z^3$ which has a triple fixed point; then the fibre has $1$ element $(\lambda,v)=(1,-1)$; \item $(a,b^2)=(3/2,0)$, then $P$ has a symmetry and both critical points are fixed; the fibre has $1$ element $(\lambda,v)=(3/2,-1)$; \item $b=0$ and $a\notin\{0,1,3/2\}$, then $P$ has a symmetry and the fibre has $2$ elements; \item $(a,b^2)=(4/3,-4/3^6)$, then $P$ has a double fixed point and a critical fixed point; the fibre has $1$ element $(\lambda,v) = (1,-5/3)$; \item $P$ has a double fixed point and another fixed point that is not critical; then the fibre has $2$ elements. \end{enumerate} \end{lemma} \mathbf{b}egin{proof} (We only indicate the method of the proof, the details of the computations are not relevant.) The fibre has less than three element if and only if either less than three different fixed points can be marked, or there exists an affine self-conjugacy sending two different marked points one to the other. The first case occurs if and only if two fixed points coincide or if a fixed point is critical. In the second case we must have $b=0$. \end{proof} The map from the first to the second family is much simpler. We recall it takes the form $(\lambda,c)\longrightarrowto(\lambda,v=\frac{c+c^{-1}}{2})$. The map $c\longrightarrowto v$ is well-known:\footnote{The map $z\longrightarrowto z+z^{-1}$ it is nowadays often referred to as the \emph{Joukowsky transform}.} it is surjective from $\mathbb{C}^*$ to $\mathbb{C}$ and fibres have two elements unless $v=1$ or $v=-1$. Every polynomial in the second family corresponds to at most $2$ polynomials in the first because there is at most two critical points. Hence a cubic polynomial has at most $6$ representatives in $(\lambda,c)$-space. The map $(\lambda,c)\longrightarrowto(a,b_2)$ is surjective because it is the composition of the surjective maps $\Theta$ and $(\lambda,c)\longrightarrowto (\lambda,v)$. Recall that a mapping is open when it maps open subsets to open subsets.\footnote{Recall that a characterization of continuous functions is that the \emph{preimage} of an open subset is open.} The following theorem can be found in \cite{book:cha}, Corollary page~328, Section~54. \mathbf{b}egin{theorem}[Osgoode] Let $n\geq 1$ and consider a holomorphic map $f$ from an open subset $U\subset \mathbb{C}^n$ to $\mathbb{C}^n$. If all fibres of $f$ are discrete then $f$ is open. \end{theorem} It follows that the map $\Theta$ is an open mapping. The simpler map $(\lambda,c)\longrightarrowto (\lambda,\frac {c+c^{-1}}{2})$ is open too. \mathbf{b}egin{definition}\label{def:E} Let $\cal E$ be the set of affine conjugacy classes of polynomials which have a fixed critical point. \end{definition} The set $\cal E$ is characterized by the equation \[b_2 + \frac{a}{3}\left(1-\frac{2a}{3}\right)^2= 0\] Its preimage in $(\lambda,c)$-space has equation $3-6\lambda^{-1} \in\{ c,c^{-1}\}$. Recall that $\lambda=0$ is not part of this space. Its preimage in $(\lambda,v)$-space has equation $\frac{3\lambda-6}\lambda + \frac\lambda{3\lambda-6} = 2v$ with $\lambda\neq 2$. \mathbf{b}egin{proposition}\label{prop:proper} The map $\Theta$ is proper over $\mathbb{C}^2\setminus {\cal E}$. 
\end{proposition} \mathbf{b}egin{proof} Consider a sequence of marked polynomials such that their unmarked equivalence classes $(a_n,b_n^2)$ converge in $\mathbb{C}^2\setminus {\cal E}$. Let $(a,b^2)$ be the limit class and $P=z^3+az+b$. We can extract a subsequence such that $a_n$ and $b_n$ converge. Replacing $b$ by $-b$ if necessary, we have $a=\lim a_n$ and $b=\lim b_n$. Let $P_n(z) = z^3+a_n z+b_n$. The fixed points of $P_n$ remain in a bounded subset of $\mathbb{C}$. We can extract a subsequence so that the marked fixed point converges. The limit will then be a fixed point of $P$. Since we assumed that $(a,b^2)$ is not in $\cal E$, it follows that we have a valid marking for $P$. The eigenvalue $\lambda_n$ of the fixed point obviously converges to that of $P$ and by Rouché's theorem the pair of critical points converge as a compact subset of $\mathbb{C}$ to that of $P$. By the sequential characterization of compact subsets of metric spaces, it follows that $\Theta$ is proper from $\Theta^{-1}(\mathbb{C}^2\setminus {\cal E})$ to $\mathbb{C}^2\setminus {\cal E}$. \end{proof} \mathbf{b}egin{remark*} Two remarks \mathbf{b}egin{itemize} \item If we had decided that $\Theta$ is defined on all of $\mathbb{C}^2$ by the same polynomial formulae as in \eqref{eq:lvab1} and \eqref{eq:lvab2}, then $\Theta$ would be proper over all the target set $\mathbb{C}^2$. \item But we defined $\Theta$ on $\mathbb{C}^*\times\mathbb{C}$ and it cannot be proper over a neighborhood of a point $[P]\in\cal E$. Indeed we can find nearby polynomials $P_n\longrightarrow P$ with $[P_n]\notin \cal E$ and which have a fixed point of very small eigenvalue, and if we mark this fixed point, the corresponding marked polynomial will not converge (they will converge to a polynomial with $\lambda = 0$ which we decided is outside of the domain of $\Theta$). \end{itemize} \end{remark*} \subsection{Some special subsets of parameter space}\label{sub:spec} Recall that a polynomial is hyperbolic when all critigcal points lie in basins of attracting periodic points and that this condition is stable by perturbation: the set of parameters for which the polynomials are hyperbolic forms an open subset in the parameter space. By a \emph{hyperbolic component} for a given parametrization is meant a connected component of this subset. \mathbf{b}egin{definition}\label{def:clmc} The \emph{connectivity locus} of the cubic polynomials denoted by $\mathbb{C}on_3$ is the set of cubic polynomials up to affine conjugacy which have a connected Julia set.\\ The \emph{principal hyperbolic component}, $\mathcal{H}_0$ is the hyperbolic component containing the affine conjugacy class of the polynomial $P(z) = z^3$. \end{definition} We recall a classical theorem\footnote{Whose attribution is not easy to determine.} \mathbf{b}egin{theorem} The affine class of a polynomial belongs to $\mathcal{H}_0$ if and only if both critical points belong to the immediate basin of an attracting fixed point. Its Julia set is a quasicircle. \end{theorem} \mathbf{b}egin{proof} Let us consider a path from $z^3$ to $P$ whose class remains in $\mathcal{H}_0$. By the Mañé-Sad-Sullivan theory, the Julia set follows a holomorphic motion while the class of $P$ remains in $\mathcal{H}_0$. One consequence is that it is a quasicicle. The other is that the critical points cannot jump out of the immediate basin. The converse follows from a theorem of Milnor in \cite{art:mil}: every hyperbolic component has a centre, i.e.\ a map that is post critically finite. 
For the same reason as in the previous paragraph, this other polynomial map has both critical points in the immediate basin of an attracting fixed point $a$. We will prove below that the two critical points coincide with $a$. Since there is only one affine conjugacy class of polynomial with a fixed double critical point (the class of $P(z)=z^3$), the result will follow. We will use the following topological lemma: given a \emph{connected and non empty} ramified covering $U\to D$ over a topological disk $D$ and a point $b\in D$, if the set of ramification values is contained in $\{b\}$, then $b$ has a unique preimage and $U$ is a topological disk. Since $a$ is attracting (possibly superattracting), there is a disk $D=B(a,\varepsilonilon)$ small enough so that every point in $D\setminus\{a\}$ have infinite orbit and $P(D)\subset D$. In particular, since $P$ is post critically finite, the first time a critical orbit enters this disk must be by hitting $a$ directly. By the topological lemma, it follows by induction that there exists a sequence $D_n$ of topological disks containing $a$ such that $D_0=D$, $D_{n+1}$ is the connected component of $P^{-1}(D_n)$ containing $a$ and $a$ is the only preimage of $a$ in $D_{n+1}$. We will also use the fact, proved by an easy induction, that $D_n$ is a connected component of $P^{-n}(D)$. From $P(D)\subset D$ it follows that $D_{n} \subset D_{n+1}$. The basin $B$ of $a$ is equal to the union $U$ of all $D_n$: indeed $U$ is open and the complement of $U$ in $B$ is open (see the next paragraph), hence empty by connectedness of $B$. Since $a$ is the only element of $P^{-1}(a)$ in $D_{n+1}$ it follows that $a$ is the only element of $P^{-1}(a)$ in $B$. We have seen that the critical points, which both belong to $B$ by hypothesis, eventually map to $a$. Hence they are both equal to $a$, Q.E.D. Let us justify the claim made in the previous paragraph: let $z\in B\setminus U$. Consider $n$ such that $P^n(z)\in D$. Then there is an open disk $V$ of centre $z$ and such that $P^n(V)\subset D$, and thus for all $k\geq 0$, $P^{n+k}(V)\subset D$. Hence the connected set $V$ is contained in a connected component of $P^{-(n+k)}(D)$. It follows that $V \cap D_{n+k} = \varnothing$ for otherwise $V$ would be contained in $D_{n+k}$ hence $z\in D_k$, contradicting $z\notin U$. Hence $V \cap U = \varnothing$. \end{proof} \mathbf{b}egin{lemma}\label{lem:top1} In metric spaces, if a function is continuous open and proper from a non-empty set to a connected set then it is surjective. \end{lemma} \mathbf{b}egin{proof} The image is everything because it is non-empty open and closed. The last point is proved by taking a convergent sequence $f(x_n)$ and the compact set $\{f(x_n)\}_{n\in\mathbb{N}}\cup\{\lim f(x_n)\}$. Its preimage being compact we can extract a subsequence of $x_n$ so that $x_n$ converges. Then by continuity $\lim f(x_n)=f(\lim x_n)$. \end{proof} The sets $\mathcal{H}_0$, $\mathbb{C}on_3$ and $\cal E$ live in the $(a,b_2)$-space (see \mathbb{C}ref{def:E,def:clmc}: they are respectively the principal hyperbolic domain, the connectivity locus and the maps with a fixed critical point). We denote by $\mathbb{C}on_3'$ the preimage of $\mathbb{C}on_3$ in the $(\lambda,c)$-space. It is not relevant here whether $\mathbb{C}on_3'$ is connected or not. \mathbf{b}egin{proposition}\label{prop:0ra} The preimage of $\mathcal{H}_0$ in $(\lambda,v)$-space has two connected components. 
One contains $(\lambda=3,v=1)$ and is contained in $``|\lambda|>1"$ (call it $\mathcal{H}^r$) and the other one contains $(|\lambda|<1, v=1)$ and is contained in $``|\lambda|<1"$ (call it $\mathcal{H}^a$). The map from $\mathcal{H}^a$ to $\mathcal{H}_0\setminus\cal E$ is a homeomorphism. \end{proposition} \mathbf{b}egin{proof} The polynomial $(\lambda=3,v=1)$ corresponds to $P(z)=z^3$ and is hence in the preimage of $\mathcal{H}_0$ and satisfies $|\lambda|>1$. Let us denote $\mathcal{H}^r$ the connected component of the preimage of $\mathcal{H}_0$ that contains it. The polynomials $(|\lambda|<1, v=1)$ have a unique critical point an it must be in the basin of the attracting fixed point by Fatou's theorem. Let us denote $\mathcal{H}^a$ the connected component of the preimage of $\mathcal{H}_0$ that contains this connected subset. Consider the open subset $\mathcal{H}_0$ of $\mathbb{C}^2$. Its preimage by $\Theta$ is an open subset of $\mathbb{C}^*\times \mathbb{C}$, its connected components are hence open subsets of $\mathbb{C}^*\times \mathbb{C}$. The image by $(\lambda,v)\longrightarrowto(a,b_2)$ of these components are open because $\Theta$ is an open mapping. By classical theorems of Fatou, a map in $\mathcal{H}_0$ cannot have a neutral cycle. Hence the preimage of $\mathcal{H}_0$ cannot meet $|\lambda|=1$: any connected component must be contained either in $|\lambda|<1$ or $|\lambda|>1$. The map $\Theta$ is injective on the intersection of $|\lambda|<1$ with the preimage of $\mathcal{H}_0$, because there is only one attracting point to mark, and its image is $\mathcal{H}_0\setminus{\cal E}$: the attracting fixed point cannot be marked if and only if it is critical. By the \emph{invariance of domain} theorem, it follows that $\Theta$ is a homeomorphism from $``|\lambda|<1" \cap\Theta^{-1}(\mathcal{H}_0)$ to $\mathcal{H}_0\setminus{\cal E}$. (Alternatively we could have used the fact that $\Theta$ is proper over $\mathbb{C}^2\setminus{\cal E}$---hence over $\mathcal{H}_0\setminus{\cal E}$---because in metric spaces, a proper continuous bijective map is necessarily a homeomorphism.) In particular $``|\lambda|<1" \cap\Theta^{-1}(\mathcal{H}_0)$ is connected and coincides with the set $\mathcal{H}^a$ defined at the beginning of this proof. Let us prove that $\Theta$ is proper from $``|\lambda|>1"\cap \Theta^{-1}(\mathcal{H}_0)$ to $\mathcal{H}_0$. This has already been proved over $\mathcal{H}_0\setminus{\cal E}$, see \mathbb{C}ref{prop:proper}. The extension to all of $\mathcal{H}_0$ essentially follows from the facts that in the marked point $z=0$ cannot have a multiplier that tend to $0$ since we are in $``|\lambda|>1"$, and that there is a uniformly bounded number of fixed points. Here is a detailed proof: Let us assume that a sequence of polynomials $P_n$ in $\mathcal{H}_0$ with a marked repelling fixed point is such that the affine conjugacy class of $P_n$, unmarked, converge to some polynomial $P$ in $\mathcal{H}_0$. Recall that the two repelling fixed points depend holomorphically on polynomials near $P$: there is a neighborhood $V$ of $P$ and two holomorphic functions $\xi_1$, $\xi_2$ from $V$ to $\mathbb{C}$ such that the repelling fixed points of any $Q\in V$ are $\xi_1(Q)$ and $\xi_2(Q)$. The marked point may be any of these two fixed points and may occasionally jump from one to the other as $n$ varies. But we can extract a subsequence so that the marked point is always $\xi_1(P_n)$ or $\xi_2(P_n)$. It then converges to $\xi_i(P)$ for $i=1$ or $2$. 
By the sequential characterization of compact subsets of metric spaces, this proves the claim. By \mathbb{C}ref{lem:top1}, any component of ``$|\lambda|>1\cap \Theta^{-1}(\mathcal{H}_0)$'' surjects to $\mathcal{H}_0$. Since $z^3$ has only one preimage in ``$|\lambda|>1\cap \Theta^{-1}(\mathcal{H}_0)$'', it follows that ``$|\lambda|>1\cap \Theta^{-1}(\mathcal{H}_0)$'' is connected. \end{proof} \mathbf{b}egin{proposition} Each of the two components $\mathcal{H}^a$, $\mathcal{H}^r$ in the previous proposition, which sit in $(\lambda,v)$-space, has a preimage in $(\lambda,c)$-space that is connected. The first one contains $(\lambda=3,c=1)$ and is contained in ``$|\lambda|>1$''and the other one contains $(|\lambda|<1, c=1)$ and is contained in ``$|\lambda|<1$''. The map $(\lambda,c)\longrightarrowto (\lambda, v=\frac{c+c^{-1}}{2})$ is a two-to-one covering over these sets minus $``v=1"\cup``v=-1''$. \end{proposition} \mathbf{b}egin{proof} The claim on covering properties follows from $c\longrightarrowto v$ being a $2:1$ covering from $\hat \mathbb{C}\setminus\{-1,1\}$ to itself. Moreover this map is proper from $\mathbb{C}^*$ to $\mathbb{C}$, and thus so is too the map $(\lambda,c)\longrightarrowto(\lambda,v)$ from $(\mathbb{C}^*)^2$ to $\mathbb{C}^*\times \mathbb{C}$. The claim on connectedness then follows from the fact that both preimages contain a point with $c=1$: consider for instance a component $A'$ of the preimage of $\mathcal{H}^a$. Since the map is proper, it is proper from $A'$ to $\mathcal{H}^a$. By \mathbb{C}ref{lem:top1} the restriction $A'\to \mathcal{H}^a$ is surjective. But since $(\lambda,v=1)$ has only one preimage, $(\lambda, c=1)$, there can be only one such component. \end{proof} Let us define the following subset of $(\lambda,c)$-space: \[ \mathcal{H}_0' = \left\{(\lambda,c)\,;\, |\lambda|<1,\ \text{both critical points of $P_{\lambda,c}$ lie in the immediate basin of $0$}\right\} \] Note that $\mathcal{H}_0'$ contains $(\lambda,c=-1)$ for any $\lambda\in \mathbb{D}$. Indeed, the polynomial $P_{\lambda,-1}$ commutes with $z\longrightarrowto -z$, which swaps both critical points. Since at least one critical point is in the immediate basin, which contains $0$ hence is invariant too by $-z$, it follows that both critical points are in the immediate basin. Recall that $\cal E$ denotes the subset of polynomials with a fixed critical point in the set $\mathbb{C}^2$ of unmarked classes. By the analysis above: \mathbf{b}egin{itemize} \item $\mathcal{H}_0'$ contains $(|\lambda|<1,c=-1)$ and $(|\lambda|<1,c=1)$. \item $\mathcal{H}_0'$ is connected and the map $(\lambda,c)\longrightarrowto (a,b_2)$ sends it to $\mathcal{H}_0\setminus{\cal E}$ as a $2:1$ ramified cover. \item More precisely the map $(\lambda,c)\longrightarrowto (a,b_2)$ is a \mathbf{b}egin{itemize} \item $1:1$ homeomorphism from $``c=1"\cap \mathcal{H}_0'$ to $a=0 \cap (\mathcal{H}_0\setminus{\cal E})$, \item $1:1$ homeomorphism from $``c=-1"\cap \mathcal{H}_0'$ to $b=0\cap (\mathcal{H}_0\setminus{\cal E})$, \item $2:1$ covering from $\mathcal{H}_0'\setminus ``c\in\{-1,1\}"$ to $(\mathcal{H}_0\setminus{\cal E})\setminus ``a=0, \text{ or } b=0"$. \end{itemize} \end{itemize} We prove the following lemma here, for future reference in this document. \mathbf{b}egin{lemma}\label{lem:cesc} For each $\lambda\in\mathbb{C}^*$ there exists $\rho>0$ such that if $|c|>\rho$ then $c$ belongs to the basin of infinity for $P_{\lambda,c}$. For $|c|<\rho^{-1}$, then $1$ belongs to the basin of infinity for $P_{\lambda,c}$. 
\end{lemma} \mathbf{b}egin{proof} Given a polynomial $f(z) = a_1 z + \cdots + a_3 z^3$, a trap in the basin of infinity is given by $|z|>R$ with \[ R=\max(\sqrt{\frac{2}{|a_3|}}, \frac{4|a_2|}{|a_3|}, \sqrt{\frac{4|a_1|}{|a_3|}}).\] The first lower bound on $R$ ensures that the term $a_3 z^3$ has modulus $>2|z|$. The other two that the rest has modulus $<\frac{1}{2}|a_3 z^3|$. Given the formula of $P_{\lambda,c}$, which we recall here: \[ P_{\lambda,c}(z) = \lambda z \left( 1 - \frac{(1 + \sfrac{1}{c})}{2} z + \frac{\sfrac{1}{c}}{3} z^2 \right) \] this yields \[ R = \max(\sqrt{\frac{6|c|}{|\lambda|}} , 6|c+1| , \sqrt{12 |c|})\] The point $c$ is not in the trap for $c$ big, but its first iterate is: indeed one computes \[P_{\lambda,c}(c) = \lambda c \frac{3-c}{6}.\] This proves the first claim. The second claim follows from the symmetry relation $c^{-1}P_{\lambda,c}(cz) = P_{\lambda,1/c} (z)$. \end{proof} \section{The linearizing power series}\label{sec:phi} Given a map $f(z) = \lambda z + \cdots$, with $\lambda\neq 0$, we define by abuse of notations the function \[ \lambda(z)= \lambda z \] There are two variants of the linearizing maps: it is either a map $\varphi(z) = z +\cdots$ such that \[ \varphi \circ f = \lambda \circ \varphi \] holds near $0$, or a map $\psi(z) = z + \cdots$ such that \[ f\circ \psi = \psi \circ \lambda \] holds near $0$. Douady used to call the first one a \emph{linearizing coordinate}, and the second one a \emph{linearizing parametrization}. We shall use both variants and, by imitation of Douady's conventions for Fatou coordinates, we use the symbol $\varphi$ for the first and $\psi$ for the second. We recall\footnote{without proof, this is very classical} that if $\lambda$ is not a root of unity, then as formal power series normalized by the condition to be of the form $z + {\cal O}(z^2)$ and satisfying the above equations, $\psi$ and $\varphi$ exist and are unique. (If $\lambda$ is a root of unity then $\varphi$ and $\psi$ may or may not exist, and that if they exist, then they are not unique.) We recall without proof the following classical facts, valid for all $f$. \mathbf{b}egin{proposition} Assume that $\lambda\in\mathbb{C}^*$ is not a root of unity. Let $r\in[0,+\infty]$ denote the radius of convergence of $\psi$ and $r'$ the radius of convergence of $\varphi$. \mathbf{b}egin{itemize} \item $r>0$ iff $r'>0$ \item $r>0$ iff the origin is linearizable \item if $|\lambda|\neq 1$ then the origin is always linearizable \item if $|\lambda|<1$ then a holomorphic extended linearizing coordinate $\varphi$ with $\varphi'(0)=1$ is defined on the basin of attraction of $f$; it satisfies $\varphi\circ f = \lambda \circ \varphi$ on the basin but is not necessarily injective if $f$ is not injective; $\varphi(z) = \lim f^n(z)/\lambda^n$, which converges locally uniformly on the basin \item if $|\lambda|=1$ and $f$ is linearizable then a holomorphic linearizing coordinate $\varphi$ with $\varphi'(0)=1$ is defined on the Siegel disk of $f$, is necessarily injective thereon and maps it to a Euclidean disk \item in the last two cases above, the power series expansion at the origin of the holomorphic map $\varphi$ coincides with the formal linearizing power series $\varphi$ introduced earlier \end{itemize} \end{proposition} We will be mainly interested in $r(\lambda,c) := r(P_{\lambda,c})$ the radius of convergence of the power series $\psi$ associated to $f = P_{\lambda,c}$. If context makes it clear, we will use the shorter notations $r(c)$ or even $r$. 
Even though $\psi$ originally designates a formal power series, we will use the same symbol $\psi$ to also denote the holomorphic function defined on its disk of convergence $B(0,r)$ by the sum of the power series. We recall the following facts, specific to polynomials. Let $P$ be a polynomial of degree at least $2$ with $P(z) = \lambda z + {\cal O}(z^2)$ near $0$ and $\lambda\in\mathbb{C}^*$ which is not a root of unity: \mathbf{b}egin{itemize} \item If $|\lambda|\leq 1$ then the sum of the power series $\psi$ on its disk of convergence\footnote{Which may be empty\ldots\ in which case the statements gives no information.} defines an injective function. \item If $|\lambda|=1$ and $P$ is linearizable then $r'$ is equal to the distance from $0$ to the boundary of its Siegel disk and $r$ is equal to the conformal radius of the Siegel disk w.r.t.\ $0$. \item If $|\lambda|<1$, then $r'$ is equal to the distance from $0$ to the boundary of its attracting basin, and $r$ is equal to the conformal radius w.r.t.\ $0$ of the special subset $U$ defined below in \mathbb{C}ref{prop:U}. \end{itemize} \noindent{\mathbf{b}f Note:} Concerning the first point (which also trivially holds if $f$ is linear): if $\lambda$ has modulus one but is not a root of unity, there is a nice proof in \cite{book:Milnor}. And in the case $|\lambda|<1$ it follows from the following argument: the map $\psi$ satisfies $\psi (\lambda z) = f (\psi(z))$ on its disk of convergence. So if $\psi(x)=\psi(y)$ then $\psi(\lambda^n x) = \psi(\lambda^n y)$. Since $\psi'(0)=1$ the map $\psi$ is injective near $0$, so $\lambda^n x=\lambda^n y$ for $n$ big enough. Hence $x=y$. We recall the following classic fact, essentially a consequence of the maximum principle. \mathbf{b}egin{lemma}[folk.]\label{lem:folk} if $V\subset \mathbb{C}$ is a bounded Jordan domain and $P$ a non-constant polynomial then all the connected components $U$ of $P^{-1}(V)$ are (bounded) Jordan domains and $P:\partial U\to \partial V$ is a covering whose degree coincides with the degree of the proper map $P:U\to V$. \end{lemma} We recall a classical result, the holomorphic dependence of $\varphi_P$ on $P$: \mathbf{b}egin{lemma}[folk.]\label{lem:holodep} Let $U$ be a complex manifold (parameter set). Consider any analytic family of polynomials $\zeta\in U\longrightarrowto P_\zeta$ (not necessarily of constant degree), all fixing the origin with the same multiplier $\lambda$ with $0<|\lambda|<1$. Let $\cal B$ denote the fibred union of basins of $0$: $\cal B = \{(\zeta,z)\,;\,z\in B(P_\zeta)\}$. Then $\cal B$ is an open subset of $U\times \mathbb{C}$ and the map $(\zeta,z)\longrightarrowto \varphi_{P_\zeta}(z)$ is analytic. \end{lemma} \mathbf{b}egin{proof} Openness follows from the existence of a stable trap near $0$, for small perturbation of the parameter $\zeta$. Analyticity follows from the local uniform convergence of the following formula: \[\varphi_f(z)=\lim_{n\to\infty} \frac{ f^n(z)}{\lambda^n}\] \end{proof} \mathbf{b}egin{proposition}\label{prop:U} Assume that $P$ is a degree $\geq 2$ polynomial fixing $0$ with attracting multiplier $\lambda\neq 0$. Then the map $\psi$ is injective on $B(0,r)$. The set \[U:=\psi(B(0,r))\] is compactly contained in the basin of $P$. It is a Jordan domain and $P$ is injective on its boundary. There is no critical point of $f$ in $U$ and there is at least one critical point of $f$ on $\partial U$. 
\end{proposition} \mathbf{b}egin{proof} Since $P(\psi(z)) = \psi(\lambda z)$ is true at the level of power series, it is true on all $B(0,r)$. It follows that $P^n(\psi(z)) = \psi(\lambda^n z)\longrightarrow \psi (0) = 0$ as $n\to+\infty$. Thus the image of $\psi$ is contained in the basin of attraction of $0$ for $P$. Hence $\varphi$ is defined on the image of $\psi$. Since $\varphi$ has a power series expansion at the origin that is the inverse of that of $\varphi$ it follows that $\varphi \circ \psi$ is the identity near $0$, and hence on $B(0,r)$ by analytic continuation. In particular $\psi$ is injective on $B(0,r)$. The absence of critical point of $P$ in $U$ follows easily from this and $P\circ\psi = \psi\circ \lambda$. The set $U$ is the preimage by $P$ of $P(U) = \psi(B(0,|\lambda|r))$. Since the latter is compactly contained in $U$, hence in the basin, it follows that all components of $P^{-1}(P(U))$ are compactly contained in their respective Fatou components. In particular $U$ is compactly contained in the immediate basin. Also, from $P(U) = \psi(B(0,|\lambda|r))$ it follows that $P(U)$ is a Jordan domain (with analytic boundary) and hence $U$, like every connected component of $P^{-1}(P(U))$, must be a Jordan domain by the first part of \mathbb{C}ref{lem:folk}. Moreover, by the second part of the lemma, since $P$ is injective on $U$ it follows that $P$ is injective on $\partial U$. For the last point we proceed by contradiction. Consider the sets $U(\rho)=\psi(B(0,\rho))$ and let $U'(\rho)$ be the connected component containing $0$ of $P^{-1}(U(\rho))$. Then $U(r)=U = U'(|\lambda|r)$. Assume by way of contradiction that $P$ has no critical point on $\partial U$. Then $U$ sits at a positive distance from the other components of $P^{-1}(P(U))$. It follows that given a neighborhood $V$ of $\overline U$, then for $\varepsilonilon>0$ small enough and $\rho=|\lambda|r+\varepsilonilon$ we have $U'(\rho)\subset V$ and $U'(\rho)$ does not contain critical points of $P$. Since $P$ is is proper and without critical points from $U'(\rho)$ to $U(\rho)$, it is a cover, and since the image is a topological disk, it is injective. But then we get a contradiction with the definition of $r$: indeed one could extend $\psi$ to $B(0,\rho/|\lambda|)$ by letting $\psi(z)$ be the unique point of $P^{-1}(\psi(\lambda z))$ in $U'(\rho)$. \end{proof} Since $\psi'(0)=1$ it follows that $r$ is the conformal radius of $U:=\psi(B(0,r))$ w.r.t.\ the origin. Moreover, we have $r=|\varphi(z)|$ where $z$ is any point on $\partial U$. This will be particularly useful when we take $z$ to be (one of) the critical point(s) on $\partial U$ when we study how $r$ depends on the polynomial, so we number this equation for future reference: \mathbf{b}egin{equation}\label{eq:rpc} \forall \text{ critical point } c\in\partial U,\ r=|\varphi(c)|. \end{equation} If only one critical point is on $\partial U$ we call it the \emph{main critical point}. For each $\lambda\in\mathbb{D}^*$ there are values of $c$ such that there is more that one critical point on $\partial U$ for $P_{\lambda,c}$: for instance this is the case for $c=-1$, for which we recall that the polynomial commutes with $z\longrightarrowto -z$. Two remarks: \mathbf{b}egin{enumerate} \item Morally the main critical point, if there is one, is the closest to the attracting fixed point. It is not the closest for the Euclidean distance but it is indeed for some other notion of distance defined using $\varphi$. However we will not need this here. 
\item It is important to realize that the main critical point does not necessarily have the least value of $|\varphi|$ among all critical points: sometimes there is another critical point, possibly in the immediate basin, that maps under some iterate $P^k$ to a point that is ``closer'' to $0$ that the same iterate $P^k$ applied to the main critical point. It may even happen with $k=1$. \end{enumerate} \mathbf{b}egin{lemma}\label{lem:scsP} Let $P_n$ be a sequence of polynomials fixing $0$ with multiplier $\lambda_n\in \ov \mathbb{D} \setminus\{0\}$ not a root of unity and assume that $P_n$ tends to a polynomial $P$ of degree at least $2$ uniformly on compact subsets of $\mathbb{C}$. \mathbf{b}egin{itemize} \item Then \[r(P)\geq \limsup r(P_n)\] where $r(P)$ is defined to be $0$ if the the fixed point $0$ of $P$ is parabolic or superattracting. \item Moreover denoting $r_0=\liminf r(P_n)$, the sequence $\psi_{P_n}$ converges to $\psi_{P}$ on every compact subset of $B(0,r_0)$. \end{itemize} \end{lemma} \mathbf{b}egin{proof} Let $\psi_n=\psi_{P_n}$. The identity $P_n \circ \psi_n(z) = \psi_n (\lambda_n z)$ holds on $B(0,r(P_n))$. Let $\tilde r=\limsup r(P_n)$. If $\tilde r=0$ there is nothing to prove, so we assume $\tilde r>0$. To prove the first claim of the lemma, let us extract a subsequence such that $r(P_n)$ converges to $\tilde r$. The maps $\psi_n$ being univalent and normalized by $\psi_n(0)=0$ and $\psi_n'(0)=1$, they form a normal sequence on $B(0,\tilde r)$.\footnote{Usually a normal \emph{family} is defined for maps defined on a common set of definition. By a normal sequence on $B(0,\tilde r)$ we mean the following: the domain eventually contains every compact subset of $B(0,\tilde r)$ and any subsequence has a subsubsequence that converges uniformly on compact subsets of $B(0,\tilde r)$.} By continuity, any extracted limit $\ell$ must satisfy $P \circ \ell (z)= \ell (\lambda z)$ for $z\in B(0,\tilde r)$, $\ell(0)=0$ and $\ell'(0)=1$. In particular $P$ is linearizable, hence $P'(0)$ cannot be $0$ nor a root of unity. Hence $\ell$ must have the same power series expansion as $\psi_P$, so the limit is unique. Moreover, the radius of convergence of $\psi_P$ is at least $\tilde r$. Let $\hat r=\liminf r(P_n)$. The proof of the second claim is very similar, but this time we do not extract subsequences. The family $\psi_n$ is normal on $B(0,\hat r)$. Any extracted limit must linearize, hence this limit is unique and coincides with the restriction of $\psi_P$ to $B(0,\hat r)$. \end{proof} \mathbf{b}egin{lemma}\label{lem:psiext} The function $\psi$ extends as a homeromorphism $\ov\psi : \ov B(0,r) \to \ov U$. \end{lemma} \mathbf{b}egin{proof} In \mathbb{C}ref{prop:U} we saw that $P$ is injective on $\partial U$. Hence $P$ is injective on $\ov U$ and since $\ov U$ is compact, $P$ is a homeomorphism from $\ov U$ to its image. Let $g$ be the inverse homeomorphism and let $\ov{\psi}(z) = g\circ \psi (\lambda z)$ for $z\in \ov B(0,r)$. Then $\ov\psi$ is a continuous extension. \end{proof} \mathbf{b}egin{lemma}\label{lem:cP} Let $P_n$ be a sequence of polynomials of degree $\geq 2$, fixing $0$ with multiplier $\lambda_n \in \mathbb{D}^*$ and assume $P_n$ tends to a polynomial $P$ of degree at least $2$ uniformly on compact subsets of $\mathbb{C}$. Assume moreover that $\lambda = P'(0) \in \mathbb{D}^*$. 
Then \mathbf{b}egin{enumerate} \item $\varphi_{P_n}\longrightarrow \varphi_P$ uniformly on compact subsets of the basin of $0$ for $P$, \item $r(P_n) \longrightarrow r(P)$, \item for all sequence $z_n\in \ov B(0,r(P_n))$, such that $z_n$ converges to some $z_\infty\in\mathbb{C}$, then $\ov\psi_{P_n}(z_n) \longrightarrow \ov\psi_P(z_\infty)$, \item if $c_n\in\partial U(P_n)$ is a critical point such that $c_n$ converges, then its limit $c_\infty$ is a critical point of $P$ that belongs to $\partial U(P)$. \end{enumerate} \end{lemma} \mathbf{b}egin{proof} The first point follows from the local uniform convergence\footnote{The proof is classical so we omit it here.} of the following formula: \[\varphi_f(z)=\lim_{n\to\infty} \frac{ f^n(z)}{\lambda^n}.\] For the second point, we have seen in \mathbb{C}ref{lem:scsP} that $\limsup r(P_n)\leq r(P)$, so there remains to prove that $\liminf r(P_n) \geq r(P)$. The map $P$ is injective on $U(P)$, so by a variant of Hurwitz's theorem, for all compact subset $K$ of $U(P)$, for $n$ big enough the map $\varphi_{P_n}$ is injective on $K$. If we take $K=\psi_P(B(0,r(P)-\varepsilon/2))$ we get that for $n$ big enough, the image of the restriction $\varphi_{P_n}|_K$ contains $B'=B(0,r(P)-\varepsilon)$ and since it is injective, its reciprocal is defined on $B'$. As a formal power series, this reciprocal coincides with $\psi_{P_n}$ by uniqueness of the linearizing formal power series, and thus $r(P_n)\geq r(P)-\varepsilon$. Let us prove the third point. The case where $|z_\infty|<r(P)$ is already covered by the last point of \mathbb{C}ref{lem:scsP}, so we assume that $|z_\infty|=r(P)$. Denote $w_\infty = \ov\psi_P(z_\infty)$ and $w_n = \ov\psi_{P_n}(z_n)$. Then $w_\infty\in \partial U(P)$ and the objective is to prove that $w_n\longrightarrow w_\infty$. Let us first treat the case where $w_\infty$ is not a critical point of $\varphi_P$. Then there exists $\varepsilon$ such that for $n$ big enough, $\varphi_{P_n}$ has an inverse branch $h_n$ defined on $B(z_\infty,\varepsilon)$ that converges uniformly to an inverse branch $h$ of $\varphi_P$ that satisfies $h(z_\infty) = w_\infty$. Note that $h(z') = \psi_P(z')$ for all $z'$ in the non-empty connected set $L = B(0,r(P)) \cap B(z_\infty,\varepsilon)$. Then by the above and by by \mathbb{C}ref{lem:scsP}, $h_n$ and $\psi_{P_n}$ both tend to $h$ uniformly on compact subsets of $L$. Since $h$ is non-constant, we in particular have that for all ball $B$ compactly contained in $L$, then for $n$ big enough: $\psi_{P_n}(B) \cap h_n(B)\neq \varnothing$. But $\psi_{P_n}$ and $h_n$ are both inverse branches of the same map $\varphi_{P_n}$. It follows that they coincide on the connected component $W_n$ of the intersection of their domain that contains $B$. The set $W_n = B(z_\infty,\varepsilon) \cap B(0,r(P_n))$ eventually contains $z_n\longrightarrow z\infty$ and we have $\psi_{P_n}(z_n) = h_n(z_n) \longrightarrow h(z_\infty) = \psi_P(z_\infty)$. We now treat the case where $w_\infty$ is a critical point of $\varphi_P$. Note that $\varphi_P$ has only finitely many critical points on $\partial U$. For $\varepsilon>0$ let $K = \ov B(0,r(P)) \cap \ov B(z_\infty,\varepsilon)$ and choose $\varepsilon$ small enough so that in $\ov\psi_P(K)$ the point $w_\infty$ is the only critical point of $\varphi_P$ and the only preimage of $z_\infty$ by $\varphi_P$. Let us proceed by contradiction and assume that $w_n = \psi_{P_n}(z_n) \centernot\longrightarrow w_\infty$. 
Then either $w_n$ leaves every compact of the basin of $0$ for $P$, or it has a subsequence that converges to some $w'$ in this basin, and then by the first point of the current lemma, $\varphi_P(w') = z_\infty$. In both cases $w_n$ is eventually out of some fixed neighborhood $V$ of $\ov\psi_P(K)$. Since $K$ is connected, this means there exists another sequence $z'_n$ with $w'_n:=\psi_{P_n}(z'_n)$ that satisfies: $w'_n \longrightarrow w' \notin V \cup \varphi_P^{-1}(z_\infty)$. By the first point of the current lemma, $z'_n = \varphi_{P_n}(w'_n) \longrightarrow z':= \varphi_P(w')$. Since $K$ is closed, we have $z'\in K$. By definition $\varphi_P(w')\neq z_\infty$, i.e.\ $z'\neq z_\infty$. Let us prove that $w'=\ov\psi_P(z')$: if $|z|<r(P)$ then this is covered by the last point of \mathbb{C}ref{lem:scsP}; if $|z'|=r(P)$, since $z'\neq z_\infty$ then by the choice of $K$, the point $\ov\psi_P(z')$ is not a critical of $\varphi_P$ and by the analysis in the previous paragraph, $w'=\ov\psi_P(z')$. So $w' \in \ov\psi_P(K)$, which contradicts $w'\notin V$. Last, we prove the fourth point. By passing to the limit in $P_n(c_n)=0$ we get that $c_\infty$ is a critical point of $P$. Let $z_n = (\ov\psi_{P_n})^{-1}(c_n)$. By the first point of \mathbb{C}ref{lem:scsP}, $z_n$ is a bounded sequence and any extracted limit $z'$ satisfies $|z'|\leq r(P)$. By the third point of the present lemma, for the extracted sequence we have $c_n \ov \psi_{P_n}(z_n) \longrightarrow \ov \psi_P(z')$, hence $\ov\psi_P(z') = c_\infty$. Hence $c_\infty$ belongs to $\ov \psi_P(\ov B(0,r(P)) = U(P)$. Since it is critical it cannot belong $U(P)$ so $c_\infty \in \partial U(P)$. \end{proof} \subsection{Applications to our family of cubic polynomials} Let $\mathbb{U}_\mathbb{Q}$ denote the set of roots of unity. If $\lambda\in\mathbb{U}_\mathbb{Q}$ then the linearizing power series is not defined. We set \[r(P_{\lambda,c})=0\] in this case. This is a natural choice because we know in advance that $P_{\lambda,c}$ is not linearizable.\footnote{No rationally indifferent periodic point of a degree $\geq 2$ rational map can be linearizable.} The following lemma is a direct application of \mathbb{C}ref{lem:scsP}: \mathbf{b}egin{lemma} The map $(\lambda,c)\longrightarrowto r(\lambda,c)$ restricted to values of $\lambda \in \ov{\mathbb{D}}^*$ is upper semi-continuous. \end{lemma} \section{About the radius of convergence of the linearizing parametrization}\label{sec:3} Given $f(z)=\lambda z +\cal O(z^2)$ with $\lambda \neq 0$ nor equal to a root of unity, we noted $\psi$ the formal power series solution of $\psi = z + \cal O(z^2)$ and $f\circ \psi = \psi \circ \lambda$ where by abuse of notation $\lambda$ denotes the function $z\longrightarrowto \lambda z$. Let us write $\psi = \psi_f$ to highlight the dependence on $f$ and let \[f = \sum a_n z^n\] \[\psi_f = \sum b_n z^n\] be the respective (formal) power series expansions. We will sometimes write $b_n(f)$ to emphasize the dependence on $f$. Below we will denote, for a given formal power series $s$ in $z$, its $z^n$ coefficient by $[s]_n$. We recall here a few well-know facts: \mathbf{b}egin{itemize} \item (Cauchy-Hadamard formula) The radius of convergence $r$ of $\psi_f$ is given by \[\frac{1}{r}=\limsup |b_n|^{1/n}\] \item $b_n$ is uniquely determined by the strong recursion formula \[\lambda^n b_n = \lambda b_n + \sum_{k=2}^{n} a_n [\psi^k(z)]_n\] where $\psi^k$ stands for the multiplicative $k$-th power (not the $k$-th iterate). 
Note that the sum starts with $k=2$ and that $[\psi^k(z)]_n$ only depends on the coefficients $b_1,\ldots,b_{n-k+1}$ (here $b_1=1$). \end{itemize} Let us apply this to the family $P_{\lambda,c}$ with a fixed $\lambda$. We recall the definition: \[ P_{\lambda,c}(z) = \lambda z \left( 1 - \frac{(1 + \sfrac{1}{c})}{2} z + \frac{\sfrac{1}{c}}{3} z^2 \right) \] We thus have $a_1 = \lambda$, $a_2 = -\lambda\frac{1+c^{-1}}{2}$, $a_3 = \lambda c^{-1}/3$ and all other $a_n$ are equal to $0$. It follows that \mathbf{b}egin{lemma} For a fixed $\lambda$ that is not a root of unity, nor $0$, the coefficient $b_n(P_{\lambda,c})$ is a polynomial in $c^{-1}$, of degree at most $n-1$. \end{lemma} \mathbf{b}egin{proof} This is proved by induction on $n$. By definition $b_1=1$. Then $b_n = (\lambda^n-\lambda)^{-1}(a_2\sum_{i+j=n} b_ib_j + a_3\sum_{i+j+k=n}b_ib_jb_k)$. If $n\geq 2$ and the claim holds up to $n-1$ then the term involving $a_3$ has degree at most $1+(i-1)+(j-1)+(k-1) = n-2$ and the term involving $a_2$ at most $1+(i-1)+(j-1)=n-1$. \end{proof} \mathbf{b}egin{lemma}\label{lem:cont} Let $r(P_{\lambda,c})$ denote the radius of convergence of $\psi_{P_{\lambda,c}}$. If either $0<|\lambda| <1$ or $\theta\in\mathbb{R}$ is a Brjuno number and $\lambda=e^{2\pi i\theta}$, then \[c\longrightarrowto r(P_{\lambda,c})\] is a continuous function of $c\in\mathbb{C}^*$. \end{lemma} \mathbf{b}egin{proof} In the Brjuno case, it is a direct application of a theorem in \cite{thesis:Cheritat}: the proposition on page 79, Chapter 3. In the attracting case, by \cref{eq:rpc} we have $r=|\varphi_P(c_P)|$ where $c_P$ is one of the two critical points of $P=P_{\lambda,c}$.\footnote{So $c_P = c$ or $c_P =1$, depending on the value of $c$. It is \emph{not} true that $c_P$ depends continuously on $P$, not even locally (when the two critical points belong to $\partial U$ but are distinct, the point $c_P$ may jump from one to the other for nearby parameters. And it will: it follows from the analysis that we will make later of the curve $Z_\lambda$, see \mathbb{C}ref{sec:attr}.} We saw in \mathbb{C}ref{lem:holodep} that $\varphi_P$ depends continuously (holomorphically!) on $P$. The claim then follows from the fourth point of \mathbb{C}ref{lem:cP}. \end{proof} By the symmetry $c^{-1}P_{\lambda,c}(cz) = P_{\lambda,1/c} (z)$ valid for all $c\in\mathbb{C}^*$ and $z\in\mathbb{C}$, we get that \mathbf{b}egin{equation}\label{eq:symr} -\log r(P_{\lambda,c}) + \log |c|= -\log r(P_{\lambda,1/c}). \end{equation} We refer to \cite{Ra} for the definition of a subharmonic function (definition~2.2.1 page~28): it is a function from some open subset $U$ of $\mathbb{C}$ to $[-\infty,+\infty)$ that is upper semi-continuous and satisfies the \emph{local submean inequality}. \mathbf{b}egin{proposition}\label{prop:sh} Under the same assumptions, \[c\longrightarrowto -\log r(P_{\lambda,c})\] is a subharmonic function of $c\in\mathbb{C}^*$. \end{proposition} \mathbf{b}egin{proof} First it is upper semi-continuous: indeed it is continuous according to the previous lemma. We then have to check the local submean inequality. Recall that \[-\log r = \limsup \frac1n\log b_n\] So $-\log r$ is the decreasing limit of the functions $u_n = \sup_{k\geq n} \frac1k\log b_k$. We can't claim that $u_n$ is subharmonic because we do not know if it is upper semi-continuous. However we will still check that $u_n$, then $-\log r$, satisfy the submean inequality on any disk $B(0,\rho)$ compactly contained in $\mathbb{C}^*$. Consider such a disk. 
Then the continuous function $r$ reaches a minimum $r_0>0$ on its closure. Recall that the power series $\psi_r$ defines an \emph{injective} function on its domain of convergence. It follows then from the Bieberbach-De Branges theorem\footnote{Most previously known bounds on the coefficients of univalent function are easier to prove and are also sufficient for this purpose.} that $|b_n|\leq n/r_0^n$. In particular : $\frac1n\log b_n$ is a sequence of functions on $B(0,\rho)$ that is bounded from above by some constant $M\in \mathbb{R}$. The functions $u_n$ are hence bounded from above by the same $M$. Each function $\frac1k\log b_k$ being subharmonic, it satisfies the submean inequality on any disk contained in $\mathbb{C}$. It easily follows that $u_n$ does too. Since this weakly decreasing\footnote{By this we mean $u_{n+1}\leq u_n$.} sequence is bounded from above, the monotone convergence theorem holds and its limit thus satisfies the submean inequality. As an alternative proof, one can use the extension of the Brelot-Cartan theorem called Theorem~3.4.3 in \cite{Ra}. The limsup $u$ satisfies $u^*=u$ since $u$ is continuous. We saw in the previous paragraph that the sequence of functions is locally uniformly bounded from above. So the hypotheses of the theorem are satisfied. \end{proof} As $c\longrightarrow\infty$, the map $P_{\lambda,c}$ converges uniformly on every compact subset of $\mathbb{C}$ to $Q_\lambda:z\longrightarrowto \lambda z (1-\frac{z}{2})$. \mathbf{b}egin{lemma}\label{lem:qlr} If $|\lambda|\leq 1$ and \[|c|\geq \frac{7^2}{7^2/2-7-10}\left(\frac 12 + \frac{7}{3|\lambda|}\right)\] then $P_{\lambda,c}$ has a quadratic-like restriction $P_{\lambda,c}: V\longrightarrowto B(0,10/|\lambda|)$ where $V$ is the connected component containing $0$ of $P^{-1}(B(0,10/|\lambda|))$. The critical point of this quadratic-like restriction is $z=1$. \end{lemma} \mathbf{b}egin{proof} We first change variable with $z= \lambda^{-1} w$. Then $P_{\lambda,c}$ is conjugated to \[F(w) = \lambda w -\frac{w^2}{2} + c^{-1} w^2 \left(\frac{-1}{2} +\frac{w}{3\lambda}\right)\] The condition above ensures that the rightmost term has modulus $\leq 7^2/2-7-10$ whenever $|w|\leq 7$. When $|w|=7$ we have \[|\lambda w -\frac{w^2}{2}| \geq 7^2/2-7 \] and hence \[|F(w)|\geq 10.\] Moreover, by Rouché's theorem, $F(w)$ winds the same number of times around $0$ than $-\frac{w^2}{2}$ does, i.e.\ $2$ times. Let $V$ be the preimage of $B(0,10)$ by $F$ restricted to $B(0,7)$. Then $F$ is proper from $V$ to $B(0,10)$ and has degree $2$. Either set $V$ has two connected components and $F$ has no critical point on $V$, or $F$ has a critical point on $V$ and $V$ has one component. Now the point $w=\lambda$ is critical and \[F(\lambda) = \lambda^2\left(\frac12-\frac{c^{-1}}{6} \right).\] Note that \[|c|\geq \frac{7^2}{7^2/2-7-10}\left(\frac 12 + \frac{7}{3}\right) = \frac{833}{45}\] hence \[|F(\lambda)| \leq \frac{1}{2} + \frac{1}{6}\cdot\frac{45}{833} < 10\] and of course $|\lambda|<7$. Hence we are in the second case and $P:\lambda^{-1}V\to B(0,10/|\lambda|)$ is quadratic-like, with $\lambda^{-1}V \subset B(0,7/|\lambda|)$. By hypothesis $|c| \geq \frac{7^2}{7^2/2-7-10}\left(\frac 12 + \frac{7}{3|\lambda|}\right)$, hence $|c| \geq \frac{7^2}{7^2/2-7-10}\cdot\frac{7}{3\lambda } > 7$, hence $z=c$ cannot be the critical point of the quadratic-like restriction, so this has to be $z=1$. 
\end{proof} \mathbf{b}egin{remark*} In fact, if $|\lambda|\leq 1$, to ensure the existence of a quadratic-like restriction it is enough that one critical point escapes to infinity. We could then have used \mathbb{C}ref{lem:cesc}. One advantage of \mathbb{C}ref{lem:qlr} is that we have an explicit and simple range for the quadratic-like restriction. \end{remark*} According to \cite{Ra}, section 3.7, the Laplacian, in the sense of distributions, of a subharmonic function is represented by a Radon\footnote{The general definition of a Radon measure is elaborate, but on $\mathbb{R}^n$ it is just a (positive) measure on the Borel sets that is locally finite, i.e.\ finite on every compact set. There is a correspondence between such Radon measures and positive linear operators on the set of continuous functions $\mathbb{R}^n \to \mathbb{R}$ with compact support.} measure, let us call it $\mu_\lambda$. \mathbf{b}egin{proposition}\label{prop:harmo} Assume that we have an open subset $W$ of $\mathbb{C}$, and a family of quadratic-like maps for $c\in W$, $f_c : U_c \to V_c$ that all satisfy $f_c(z) = \lambda z + \cal O(z^2)$. Assume that the fibred union $\mathbf{b}igcup_{c\in W} \{c\}\times U_c$ is open in $\mathbb{C}^2$ and that $f_c(z)$ varies analytically with $c$. Then the function $c\longrightarrowto -\log r(f_c)$ is harmonic on $W$. \end{proposition} \mathbf{b}egin{proof} Case 1: $|\lambda|<1$. Then the immediate basin of $0$ for $f_c$ is the basin of $0$ for its restriction. We thus have $r(f_c) = |\varphi_{f_c}(1)|$, whence the harmonicity of $-\log(r)$ in $1/c$. Case 2: $|\lambda|=1$ and $\theta$ is Brjuno. The Julia set $J$ of the restriction undergoes a holomorphic motion as $c$ varies locally in $W$: its multiplier at the origin remains constant and non-repelling, so all other cycles remain repelling, hence undergo a holomorphic motion, so their closure $J$ too by the $\lambda$-lemma. We can then apply the analysis of Sullivan (see \cite{Zakeri2}, or \cite{BC1}, proposition~2.14): when a holomorphic family of maps with an indifferent fixed point $0$ has a Siegel disk whose boundary undergoes a holomorphic motion w.r.t.\ the parameter, then $-\log r$ is a harmonic function of $1/c$. \end{proof} \mathbf{b}egin{proposition}\label{prop:p2} Under the same assumptions as \mathbb{C}ref{lem:cont}, the function $f:c\in\mathbb{C}^*\longrightarrowto -\log r(P_{\lambda,c})$ is harmonic near $0$ and near $\infty$. The support of the measure $\mu_\lambda = \mathbb{D}elta f$ is bounded away from $0$ and $\infty$. The function $f$ has a limit as $c\longrightarrow \infty$. The function $f(1/c)$ has an harmonic extension near $0$ whose value is $-\log r(Q_\lambda)$. \end{proposition} \mathbf{b}egin{proof} The second claim follows from the first. The last claim follows from the third, but we will prove it directly. From \cref{eq:symr} we can deduce harmonicity near $0$ from the harmonicity near $\infty$. We will simlutaneously prove the existence of a limit as $c\to\infty$. These follow from the following remark, already done by Yoccoz in \cite{Yoccoz}. By \mathbb{C}ref{lem:qlr} there is some $R>0$, depending on $\lambda$, such that $|c|>R$ implies that the following map is quadratic-like with critical point $z=1$: the restriction $P_{\lambda,c}: V\longrightarrowto B(0,10/|\lambda|)$ where $V$ is the connected component containing $0$ of $P^{-1}(B(0,10/|\lambda|))$. 
Moreover, its domain converges as $c\longrightarrow\infty$ and the restriction depends holomorphically on $1/c$, including when $c=\infty$, as $P$ tends to $Q$ on every compact subset of $\mathbb{C}$. We can then apply \mathbb{C}ref{prop:harmo}. \end{proof} We stress the following fact, that is kind of hidden in the previous statement: \mathbf{b}egin{equation}\label{eq:limr} r(P_{\lambda,c}) \underset{c\to\infty}{\longrightarrow} r(Q_\lambda). \end{equation} Note that the proof of \mathbb{C}ref{prop:p2} and the formula in the statement of \mathbb{C}ref{lem:qlr} gives that the support of $\mu_\lambda$ is contained in the closed ball of radius \mathbf{b}egin{equation}\label{eq:R} R = \kappa_0 + \kappa_1/|\lambda| \end{equation} for some explicit positive constants $\kappa_0$, $\kappa_1$. From \mathbb{C}ref{prop:p2}, it follows that the measure $\mu_\lambda$ has a support that is a compact subset of $\mathbb{C}^*$. Since it is locally finite, it follows that it is finite. \mathbf{b}egin{proposition}\label{prop:totalmass} The total mass of $\mu_\lambda$ is $2\pi$. \end{proposition} \mathbf{b}egin{proof} As $c\to \infty$, the map $P_{\lambda,c}$ converges on every compact subset of $\mathbb{C}$ to the polynomial $Q_\lambda(z)=\lambda z(1-\frac{z}2)$, whose unique critical point is $z=1$. We saw that the function $-\log r$ has a limit \[a=-\log r(Q_\lambda)\] as $c\to\infty$. Using the symmetry $c^{-1}P_{\lambda,c}(cz) = P_{\lambda,1/c} (z)$ valid for all $c\in\mathbb{C}^*$ and $z\in\mathbb{C}$, we get that \[-\log r(P_{\lambda,c}) + \log |c|= -\log r(P_{\lambda,1/c}).\] Hence \[-\log r(P_{\lambda,c}) \underset{c\to 0}= -\log |c| +a+ o(1).\] Let \[f(c) = \mathbf{b}ig\langle \mu_\lambda , \frac{1}{2\pi}\log |z| \mathbf{b}ig\rangle = \int_{z\in\mathbb{C}} \frac{1}{2\pi}\log|c-z| d\mu_\lambda(z).\] Then $\mathbb{D}elta f = \mu_\lambda$ (\cite{Ra}, Theorem 3.7.4). By Weyl's lemma, \cite{Ra} Theorem 3.7.10, two subharmonic functions with the same Laplacian differ by a harmonic function. In particular $-\log r(P_{\lambda,c})+\log|c|-f(c)$ is a harmonic function of $c\in\mathbb{C}^*$ that has a limit when $c\to 0$ and is $= (1-\frac{\mathrm{mass}}{2\pi})\log|c|+o(\log|c|)$ when $c\to\infty$. Such a harmonic function is necessarily constant, hence the mass is $2\pi$. \end{proof} As a bonus, we get the representation formula below: \mathbf{b}egin{equation}\label{eq:logr} -\log r(P_{\lambda,c}) = -\log r(Q_\lambda) - \log|c| + \int_{z\in\mathbb{C}} \frac{1}{2\pi}\log|c-z| d\mu_\lambda(z) \end{equation} where $Q_\lambda(z)=\lambda z(1-\frac{z}2)$. \section{Attracting slices}\label{sec:attr} Recall the set $\cal H_0$, which is the principal hyperbolic component in the family of unmarked affine conjugacy classes of cubic polynomials, and one of whose charaterization is that there is an attracting fixed point whose immediate basin contains all critical points, see \mathbb{C}ref{sub:spec}. We see $\cal H_0$ as a subset of $\mathbb{C}\times\mathbb{C}$ via the $(a,b^2)$ parameterization of such conjugacy classes, see \mathbb{C}ref{sub:norm_3rd}. \mathbf{b}egin{figure}[ht] \mathbf{b}egin{center} \includegraphics[width=11.5cm]{png/bif-attr-c.png} \end{center} \caption{The bifurcation locus of the $\lambda$-slice for $\lambda = 0.4i$, represented in the $c$-coordinate. The critical point $z=1$ bifurcates on the dark red set. The critical point $z=c$ bifurcates on the dark blue set, barely visible in the centre of the picture. This set is the image of the dark red one by the inversion $c\longrightarrowto 1/c$. 
The isolated dark dots are artifacts of the method used to detect the bifurcation locus.} \label{fig:bifattrc} \end{figure} Petersen and Tan Lei described in \cite{PT} the fibres of the map: $\pi : \mathcal{H}_0 \longrightarrow \mathbb{D}$ which associate to each polynomial the multiplier of the attracting fixed point. Denote \[\mathcal{H}_0(\lambda) =\pi^{-1}(\lambda)\] \mathbf{b}egin{theorem}[Petersen and Tan] Let $\lambda \in \mathbb{D}^\ast$, then $\mathcal{H}_0(\lambda)$ is a topological disk. \label{thm:attr} \end{theorem} Recall the sets $\mathcal{H}_0^a$ and $\mathcal{H}_0'$ : they are the connected components contained in ``$|\lambda|<1$'' of the preimage of $\mathcal{H}_0$ by respectively the maps $(\lambda,v)\longrightarrowto (a,b^2)$ and $(\lambda,c)\longrightarrowto (a,b^2)$ introduced in \mathbb{C}ref{sec:normz}. For $\lambda\in\mathbb{D}^*$ let $\mathcal{H}_0'(\lambda) \subset \mathbb{C}$ denote the $\lambda$-slice of $\mathcal{H}_0'$, and $\mathcal{H}_0^a(\lambda)$ be defined similarly, i.e.\ \mathbf{b}egin{align*} \mathcal{H}_0^a(\lambda) & = \{v\in\mathbb{C}\,;\,(\lambda,v)\in\mathcal{H}_0^a\} \\ \mathcal{H}_0'(\lambda) & = \{c\in\mathbb{C}^*\,;\,(\lambda,c)\in\mathcal{H}_0'\} \end{align*} Let $F$ denote the rational map given by $F(z) = \frac{z+z^{-1}}{2}$. Then \mathbf{b}egin{equation}\label{eq:hap} \mathcal{H}_0'(\lambda) = F^{-1}\left(\mathcal{H}_0^a(\lambda)\right) \end{equation} \mathbf{b}egin{corollary} The set $\mathcal{H}_0^a(\lambda)$ is a topological disk. The set $\mathcal{H}_0'(\lambda)$ is a topological annulus. \end{corollary} \mathbf{b}egin{proof} Recall that ${\cal E}\subset \mathbb{C}^2$ denotes the unmarked polynomial classes that have a fixed critical point. This set is disjoint from $\mathcal{H}_0$. In the $(\lambda,v)$-coordinate, $\lambda$ is precisely the multiplier of the attracting cycle, so $\mathcal{H}_0^a(\lambda)$ is exactly the preimage of $\pi^{-1}(\lambda)$ by $(\lambda,v)\longrightarrowto (a,b^2)$. The first statement is thus immediate since $(\lambda,v)\longrightarrowto(a,b^2)$ is a homeomorphism from $\mathcal{H}_0^a$ to $\mathcal{H}_0$ by \mathbb{C}ref{prop:0ra}. The second statement then follows from topological properties of $F$ and the fact that $c=-1$ and $c=1$ both belong to $\mathcal{H}_0'(\lambda)$. Indeed $P_{\lambda,c=-1}$ is conjugate to $\lambda z+z^3$ which commutes with $z\longrightarrowto -z$; since one critical point $z_0$ is in the immediate basin $0$, the (distinct) critical point $-z_0$ is too; concerning the polynomial $P_{\lambda,c=1}$, it is unicritical and hence all critical points are in the immediate basin. The preimage by $F$ of a topological disk $D$ containing $-1$ and $1$ is connected (because it is a ramified cover over $D$ and $1$ has only one preimage) and we conclude using the Riemann Hurwitz formula. \end{proof} In \mathbb{C}ref{sec:phi} we introduced the quantity $r=r(P_{\lambda,c})$, which is the conformal radius of a special subset $U=U(P_{\lambda,c})$ of the basin of attraction of $0$ for $P_{\lambda,c}$. We recall that set $U$ contains $0$, its boundary contains at least one critical point but $U$ contains none and the linearizing coordinate $\varphi(z)=\varphi_{\lambda,c}(z) = z+\cdots$ is a bijection from $U$ to the round disk $B(0,r)$. Let \[ Z_\lambda = \{c\in\mathbb{C}^*\,;\,\text{both critical points of $P_{\lambda,c}$ belong to }\partial U\}. 
\] We call this a Z-curve, chosen after the name of Zakeri, who defined a similar set in the bounded type indifferent case instead of the attracting case, and proved, using quasiconformal surgery, that his set is a Jordan curve. Another name for this set could have been the Petersen-Tan set, as it is the pull-back of the seam that they define (see \mathbb{C}ref{sub:PT}). In this section we will prove the following: \mathbf{b}egin{theorem}\label{thm:ll} The set $Z_\lambda$ is a Jordan curve. Let $c\in\mathbb{C}^*$. If $c$ lies in the bounded component of $\mathbb{C}\setminus Z_\lambda$ then the unique critical point of $P_{\lambda,c}$ that belongs to $\partial U$ is $c$, and it is $1$ if $c$ belongs to the unbounded component. For each fixed $\lambda\in\mathbb{D}^*$, the function $c\in\mathbb{C}^*\longrightarrowto -\log r(P_{\lambda,c})$ is subharmonic and continuous. It is harmonic on $\mathbb{C}^* \setminus Z_\lambda$. Its Laplacian has total mass $2\pi$ and its support is equal to $Z_\lambda$. \end{theorem} Recall that $c$ and $1$ are the two critical points of $P_{\lambda,c}$ so on $Z_\lambda$ we have $|\varphi(c)|=|\varphi(1)|$, but the converse does not hold. We will see that $Z_\lambda$ is naturally parametrized by $\arg\left(\varphi(c)/\varphi(1)\right)$. \mathbf{b}egin{figure}[ht] \mathbf{b}egin{center} \includegraphics[width=12cm]{png/bif-attr-c-lines.png} \end{center} \caption{We enriched \mathbb{C}ref{fig:bifattrc} with lines corresponding to the locus where both critical points $z=1$ and $z=c$ belong to the (whole) basin of $0$ and have the following property: $|\varphi_P(c)/\varphi_P(1)|\in|\lambda|^\mathbb{Z}$.} \label{fig:bifattrclines} \end{figure} \mathbf{b}egin{figure}[ht] \mathbf{b}egin{center} \includegraphics[width=\textwidth]{png/zoom-1.png} \end{center} \caption{Zoom on the central part of \mathbb{C}ref{fig:bifattrclines}. The set $Z_\lambda$ is the outermost circular-shaped curve.} \label{fig:zoom1} \end{figure} \mathbf{b}egin{figure}[ht] \mathbf{b}egin{center} \includegraphics[width=12cm]{png/bif-attr-v-lines.png} \end{center} \caption{Analog of \mathbb{C}ref{fig:bifattrclines} but this time drawn in the $v$-coordinate. It is not anymore possible to decree which critical point is red or blue, so we used only red for the bifurcation locus. The scar is quite visible: it is the tail of the tadpole-shaped figure in the centre.} \label{fig:bifattrvlines} \end{figure} \subsection{About the Petersen-Tan theorem}\label{sub:PT} Recall that $\pi^{-1}(\lambda)$ is the set of affine conjugacy classes of cubic polynomials in ${\mathcal{H}}_0$ whose attracting fixed point has multiplier $\lambda$. In \cite{PT} is defined a bijection from $\pi^{-1}(\lambda)$ to a topological disk $D_\lambda$. More precisely $D_\lambda$ is obtained as follows: take the basin of attraction $B(Q_\lambda)$ of $0$ for the quadratic polynomial $Q_\lambda(z)=\lambda z + z^2$. To simplify notations we write \[P=P_{\lambda,c}\text{ and }Q=Q_\lambda.\] Remove $U(Q)$ from it (the set $U(\cdots)$ has been defined in \mathbb{C}ref{prop:U}). Close the hole thus created by gluing $\partial U(Q)$ to itself according to the following rule: $z_1\sim z_2$ iff ($z_1,z_2\in\partial U(Q)$ and $\varphi_\lambda(z_1)\varphi_\lambda(z_2) = \varphi_Q(c_Q)^2$) where: $c_Q$ denotes the critical point of $Q$, $\varphi_\lambda:B(P)\to \mathbb{C}$ the extended linearizing coordinate of $P$, and $\varphi_Q$ the analogue for $Q$ (see \mathbb{C}ref{sec:phi}). 
This last relation can be understood as follows: the three complex numbers $\varphi_\lambda(z_1)$, $\varphi_\lambda(z_2)$ and $\varphi_Q(c_Q)$ all belong to a circle of centre $0$ and we ask the first two to be symmetric with respect to the reflection along the line passing through $0$ and $\varphi_Q(c_Q)$.\footnote{There is a difference with \cite{PT} because we took another normalizing convention for the functions $\varphi$.} Let \[ D_\lambda:=(B(Q)\setminus U(Q))\,/\!\sim\] and let \[ \Pi: B(Q)\setminus U(Q) \to D_\lambda\] be the quotient map. The target $D_\lambda = (B(Q)\setminus U(Q))\,/\!\sim$ is a priori just a topological disk. We give it a complex structure with an atlas as follows: one chart is the identity on the following open subset of $\mathbb{C}$: $B(Q)\setminus \ov U(Q)$. For $z\in \partial U(Q)$ such that $\varphi_Q(z)/\varphi_Q(1)\neq \pm 1$, so that there is a point $z'\neq z$ on $\partial U(Q)$ that is equivalent to $z$. We can use the map $z\longrightarrowto F(\varphi_Q(z)/\varphi_Q(1))$, where $F(z)=(z+z^{-1})/2$, to define a chart near $z$ in the quotient, since the map $\varphi_Q$ is invertible near $z$ and $z'$ and maps nearby points in $B(Q)\setminus U(Q)$ to points in $\mathbb{C}\setminus\mathbb{D}$. See \mathbb{C}ref{fig:seam-2}. The same also works if $\varphi_Q(z)/\varphi_Q(1) = -1$ because $\varphi_Q$ is a bijection near $z=z'$, but will not work near $z=1$ where $\varphi_Q$ has a critical point. There, one can use instead a branch of $z\longrightarrowto\sqrt[3]{1-F(\varphi_Q(z)/\varphi_Q(1))}$. See \mathbb{C}ref{fig:seam-2}. With this atlas, the map $\Pi$ is holomorphic on $B(Q)\setminus\ov U(Q)$ and has a holomorphic extension to neighborhoods of points of $\partial U(Q)\setminus\{1\}$. \mathbf{b}egin{figure}[ht] \mathbf{b}egin{tikzpicture} \node at (0,0) {\includegraphics[width=\textwidth-.5cm]{seam-1.pdf}}; \node at (2.3,0.7) {$z\longrightarrowto \varphi_Q(z)/\varphi_Q(1)$}; \node at (1.3,-2.5) {$F: z\longrightarrowto \frac{z+z^{-1}}{2}$}; \end{tikzpicture} \caption{Holomorphic charts for the quotient $(B(Q)\setminus U(Q))\ /\sim $ where $Q(z)=\lambda (z-\frac{z^2}{2}$. Illustration near a point not in $\ov U(Q)$ (green point), near a cardinal two fibre on $\partial U(Q)$ (pair of black points), near the point on $\partial U(Q)$ such that $\varphi_1(z)/\varphi_Q(1)=-1$. The case of the critical point $z=1$ is treated in \mathbb{C}ref{fig:seam-2}} \label{fig:seam-1} \end{figure} \mathbf{b}egin{figure}[ht] \mathbf{b}egin{tikzpicture} \node at (0,0) {\includegraphics[width=\textwidth-.5cm]{seam-2.pdf}}; \node at (-1.6,-0.1) {$\varphi_Q/\varphi_Q(1)$}; \node at (-2.4,-1.2) {$F$}; \node at (2.3,-1.05) {$z\longrightarrowto \sqrt[3]{1-z}$}; \end{tikzpicture} \caption{Continuation of \mathbb{C}ref{fig:seam-1}; holomorphic charts near the critical point on $\partial U(Q)$.} \label{fig:seam-2} \end{figure} The \emph{seam} $\partial U(Q)\ / \sim$ is a Jordan arc (it is homeomorphic the quotient of a circle by a reflection). Petersen and Tan call it the \emph{scar}. \mathbf{b}egin{figure}[ht] \mathbf{b}egin{center} \includegraphics[width=10cm]{png/Basin_Q.png} \end{center} \caption{The filled-in Julia set of $Q:z\longrightarrowto \lambda(z-\frac{z^2}2)$ for $\lambda = 0.4i$ has been drawn in yellow and in its interior we represented ``equipotential'' lines, defined as the locus where $|\varphi_Q(z)|/|\varphi_Q(c_Q)| \in |\lambda|^\mathbb{Z}$, where $c_Q=1$ is the critical point of $Q$. 
The eye is the set $U(Q)$, which is the left lobe of the central lemniscate shaped curve. The attracting fixed point is at the centre of the pupil of the eye.} \label{pic:Q} \end{figure} \mathbf{b}egin{figure}[ht] \mathbf{b}egin{center} \includegraphics[width=7cm]{glue_Q.pdf} \end{center} \caption{To define the model space, remove the eye $U(Q)$ and glue the top and bottom eyelids together.} \label{pic:Qglue} \end{figure} Then they define a bijection from $\mathcal{H}_0(\lambda)$ to $D_\lambda = (B(Q)\setminus U(Q))\,/\!\sim$ and prove that it is holomorphic (they also prove more properties). We explain here without proof the definition of the bijection, the interested reader may look at \cite{PT} for more details. Let $P=P_{\lambda,c}$ with $[P] \in \mathcal{H}_0$, i.e. both critical points in the immediate basin of the attracting fixed point $0$. Denote $U = U(P)$. Recall that there must be at least one critical point on $\partial U$. Sometimes both are on $\partial U$. Denote by $c_0$ such a critical point and denote $c_1$ be the other one (possibly equal to $c_0$ if $P$ is unicritical). If there are two critical points on $\partial U$ then there is a choice of which one we call $c_0$. The point $c_0$ is called the \emph{first critical point}. We will also consider the \emph{co-critical} points $\mathit{co}_0$ and $\mathit{co}_1$, defined by $\{c_i,\mathit{co}_i\} = P^{-1}(P(c_i))$, for $i=1,2$. If $c_0=c_1$ then $\mathit{co}_0=c_0=c_1=\mathit{co}_1$. Recall that $\psi_P$ is the linearizing parametrization, maps its disks of convergence to $U$ and has a continuous extension $\ov \psi_P$ to a homeomorphism from the closed disk to $\ov U$, whose reciprocal is the restriction of $\varphi_P$ to $\ov U$. The same statements hold for $Q$. There is hence a natural conjugacy $\tilde \eta_P : \ov{U} \to \ov{U}(Q)$ of $P$ to $Q$ sending $c_0$ to $c_Q$ and obtained by \[\tilde\eta_P(z) = \ov\psi_Q \left(\frac{\varphi_Q(c_Q)}{\varphi_P(c_0)} \varphi_P(z)\right)\] It only depends on the affine conjugacy class of $P$: if $s\in\mathbb{C}^*$ and $f(z) = sP(s^{-1}z)$ then $\tilde\eta_f(z) = \tilde\eta_P(s^{-1}z)$. Petersen and Tan prove that there exist: \mathbf{b}egin{itemize} \item a special connected, simply connected and compact subset $\Omega$ of the basin $B(P)$ containing the closure of $U(P)$ and both critical points and such that $\mathit{co}_0$ is either not in $\Omega$, or is a non-separating point of $\Omega$ contained in its boundary; \item a semi-conjugacy $\eta_P:\Omega\to B(Q)$ of $P$ to $Q$, extending $\tilde \eta_P$. \end{itemize} The construction above only depends on the affine conjugacy class of $P$: if $s\in\mathbb{C}^*$ and $f(z) = sP(s^{-1}z)$ then $\eta_f(z) = \eta_P(s^{-1}z)$. \mathbf{b}egin{definition}[Petersen-Tan bijection] To a an affine class $[P]\in\mathcal{H}_0(\lambda)$ (without marked point), let $\Phi$ associate the point \[\Phi([P]):=\Pi(\eta_P(c_1))\in (B(Q)\setminus U(Q))\ /\sim\] where $\eta$ is the extension mentioned above. \end{definition} Note that this value does not depend on the chosen representative $P$ of the class. The following immediate consequence will be useful later. 
\mathbf{b}egin{lemma}\label{lem:ppu} If both critical points of $P$ are on $\partial U$, then \[\eta_P(c_1) = \ov\psi_Q \left(\frac{\varphi_Q(c_Q)\varphi_P(c_1)}{\varphi_P(c_0)}\right).\] \end{lemma} Since Petersen and Tan proved that $\Phi$ is a homeomorphism\footnote{They even proved that it is analytic for some natural complex structures on the domain and the range of the map $\Phi$.} it follows obviously that \mathbf{b}egin{equation}\label{eq:inj} \text{the map $\Phi$ is injective.} \end{equation} We numbered that fact for future reference. Curiously, the following statement is not present in \cite{PT}. For completeness we give here a proof using quasiconformal deformation and injectivity of the map $\Pi$. \mathbf{b}egin{proposition}\label{prop:PTo} Let $[P]\in\mathcal{H}_0(\lambda)$. Then $\Phi([P])$ belongs to the seam if and only if both critical points of $P$ belong to $\partial U(P)$. \end{proposition} \mathbf{b}egin{proof} If both critical points are on $\partial U(P)$ then $|\psi(c_1)|=|\psi(c_0)|$ hence $\frac{\varphi_Q(c_Q)\varphi_P(c_1)}{\varphi_P(c_0)}$ has the same modulus as $\varphi_Q(c_Q)$ hence $\eta_P(c_1)\in\partial U(Q)$, so $\Phi([P])$ belongs to the seam. For the converse, we will prove below that for all $\theta\in\mathbb{R}$ there is a cubic polynomial $P=P_\theta$ whose critical points $c_0$, $c_1$ both belong $\partial U(P)$ and such that $\varphi_P(c_1)/\varphi_P(c_0) = e^{i\theta}$. Once this claim is proved, consider any $z\in\partial U(Q)$. Then $\varphi_Q(z)/\varphi_Q(c_Q)$ has modulus one, hence is of the form $e^{i\theta}$ for some $\theta\in\mathbb{R}$. From \mathbb{C}ref{lem:ppu} we get $\Phi([P_\theta]) = z$. In other words: any point on the seam is the image by $\Pi$ of the class of some of the maps $P_\theta$ of the claim. Now injectivity of $\Pi$ implies that a cubic map $P$ whose class is mapped to the seam by $\Pi$ must be one of the $P_\theta$ so must have both critical points on $\partial U(P)$. \end{proof} The proposition above used the following fact, that we prove now. \mathbf{b}egin{lemma}\label{lem:alltheta} For all $\theta\in\mathbb{R}$ there is a cubic polynomial $P=P_\theta$ whose critical points $c_0$, $c_1$ both belong $\partial U(P)$ and such that $\varphi_P(c_1)/\varphi_P(c_0) = e^{i\theta}$. \end{lemma} \mathbf{b}egin{proof} For $\theta=0$ the map $P_{\lambda,c=1}$ satisfies the assumption: its critical points coincide. For $\theta=\pi$, the map $P_{\lambda,c=-1}$, for which $c_1=-c_0$, satisfies the assumption: it has at least one critical point $c_0\in\partial U$ and since $U$ is invariant by $z\longrightarrowto -z$, the other critical point $-c_0$ is also on $\partial U$. Moreover $\Phi_P$ commutes with $z\longrightarrowto -z$, whence the claim. For another value of $\theta$, we build $P$ by quasiconformal deformation of $P_0 = P_{\lambda,-1}$, i.e.\ the map $P$ will by the conjugate of $P_0$ by the straightening of a $P_0$-invariant Beltrami form $\mu_1$. Such a conjugate is holomorphic and is a self-map of $\mathbb{C}$ of topological degree $3$, hence a cubic polynomial. To find $\mu_1$, we proceed as follows: The map $\varphi_{P_0}$ sends $\ov U(P_0)$ to a closed round disk $B(0,R)$ and sends both critical points $c_0$, $c_1$ to antipodal points $a_0$, $a_1$ on its bounding circle. Let $f$ be a (Lipschitz) homeomorphism of $[0,2\pi]$ fixing both ends and sending $\pi$ to $\theta$. We can assume that $f$ is linear on $[0,\pi]$ and $[\pi,2\pi]$ but it is not necessary. 
We can periodize $f$ into a Lipschitz homeomorphism of $\mathbb{R}$ commuting with $x \longrightarrowto x+2\pi$. Let \[b = -\log \lambda\] (we can take any determination of its logarithm). Then $\mathbb{R}e b>0$. Let \[a = \frac{\operatorname{Im} b}{\mathbb{R}e b}.\] Then let \[W:\mathbf{b}egin{array}{rcl} \mathbb{C}&\to&\mathbb{C} \\ x+iy &\longrightarrowto& x+i (ax+f(y-ax)) \end{array}\] Which commutes with $z\longrightarrowto z+b$. The map $W$ is semi-conjugate via $\exp$ to a map $V:\mathbb{C}\to\mathbb{C}$: \[\exp\circ W = V \circ \exp\] The map $V$ has been designed to commute with $z\longrightarrowto \lambda z$, to be quasiconformal and to send $-1$ to $e^{i\theta}$. Let $\mu_V$ be the pull-back by $V$ of the null Beltrami form. Let $\mu_0$ be defined on $U(P_0)$ as the pull-back of $\mu$ by the map $z\longrightarrowto \varphi_P(z /\varphi_P(c_0))$. Since $\mu_0$ is invariant by $P_0$ on the forward invariant set $U(P_0)$, we can complete $\mu_0$ into a $P_0$-invariant Beltrami form $\mu_1$ on $\mathbb{C}$ in the usual way: it is null outside the basin of attraction of $0$ and in the basin it is obtained by iterated pull-backs of $\mu_0$ by $P_0$. Now let $S$ be the straightening of $\mu_1$, i.e.\ $S$ sends $\mu_1$ to the null form. Let $P_1 = S\circ P_0 \circ S^{-1}$. The map $H= V\circ \varphi_{P_0}\circ S^{-1}$ sends the null-form to the null-form hence is holomorphic. It is defined on the basin of $P_1$ and moreover a direct computation shows that it conjugates $P_1$ to the multiplication by $\lambda$. Hence by uniqueness of the linearizing maps, we get that $H = a' \varphi_{P_1}$ for some $a'\in\mathbb{C}^*$. In particular the image of $U(P_0)$ by $S$ is $U(P_1)$: indeed it contains a critical point (in fact, both) of $P_1$ and $H$ is injective on it and maps it to a round disk. One also checks that $H$ sends the critical points of $P_1$ to two points whose quotient is $e^{i\theta}$. The lemma follows. \end{proof} \subsubsection{About semi-conjugacies} Here, we discuss the impossibility of having a semi-conjugacy on the whole basin, this section has no application in the present document. Let $P=P_{\lambda,c}$ with both critical points in the immediate basin of the attracting fixed point $0$. Let $U_n = U_n(P_{\lambda,c})$ denote the connected component containing $0$ of $ P_{\lambda,c}^{-n}(U)$. One can prove by induction that $U_n$ is simply connected.\footnote{If $U_{n-1}$ simply connected but not $U_n$ then the complement of $U_n$ in the Riemann sphere would have a bounded component $C$, whose image $P(C)$ is disjoint from $U_{n-1}$ and whose boundary is contained in $\partial U_{n-1}$. With these properties, $P(C)$ must contain infinity, and since $P$ is a polynomial, $C$ must contain infinity, but $C$ is bounded, leading to a contradiction.} We have $U_0 = U$ and $U_{n-1}\Subset U_n$. Note that no critical points belong to $U_0$ and that at least one critical point belongs to $U_1$ because it contains $\partial U$. There is some $n_0\geq 1$ such that both critical points belong to $U_n$ iff $n\geq n_0$. There is at least one critical point of $P$ on $\partial U_0$, let $c_0$ denote one of them ($c_0$ is a first critical point according to the terminology above) and let $\{c_0,c_1\}$ be the set of critical points of $P$. 
The map $P$ is a ramified covering from $U_n$ to $U_{n-1}$ of degree $3$ if $n\geq n_0$, and of degree $2$ otherwise.\footnote{The value of degree can be deduced from the Riemann-Hurwitz formula: since the sets $U_n$ are simply connected, the degree is $1+$ the number of critical points of $P$ in $U_n$.} Note that $\mathit{co}_0$ and $\mathit{co}_1$ also belong to $U_{n_0}$, since $P$ has degree $3$ on $U_{n_0}$ and $U_{n_0-1}$ contains the two critical values. For $n=0$, there are non-unique conjugacies $\zeta_0:U_0\to U_0(Q)$ of $P$ to $Q$. They are also the conformal maps from $U_0$ to $U_0(Q)$ that map $0$ to $0$. They all have a continuous extension to $\partial U_0$ because $U_0$ and $U_0(Q)$ are Jordan domains. There is a unique conjugacy whose extension maps $c_0$ to the critical point of $Q$, and we now call this one $\zeta_0$. As long as $n<n_0$ there is a conjugacy $\zeta_n$ of $P_{\lambda,c}$ to $Q$, extending $\zeta_{0}$, defined on $U_n$ and mapping to $U_n(Q)$ and if $n>0$ then $\zeta_{n}$ extends $\zeta_{n-1}$. Now there is a complication: there is no conjugacy $\zeta_{n_0}:U_{n_0}\to U_{n_0}(Q)$ from $P$ to $Q$ extending $\zeta_{n_0}$. Indeed $P$ is a degree $3$ ramified covering from $U_{n_0}$ to its image whereas $Q$ is a degree $2$ ramified covering from $U_{n_0}(Q)$ to its image. One may hope that allowing $\zeta_{n_0}$ to have critical points may solve the problem, however: \mathbf{b}egin{lemma}There is no holomorphic extension $\zeta$ of $\zeta_{n_0-1}$ to $U_{n_0}$. \end{lemma} \mathbf{b}egin{proof} Let us work by contradiction and assume there is such a $\zeta$. By holomorphic continuation, the relation $\zeta \circ P = Q\circ \zeta$ holds on $U_{n_0}$. This implies, denoting $\deg(f,z)$ the local degree at $z$ of a holomorphic map $f$ : \mathbf{b}egin{equation}\deg(\zeta,P(z)) \deg(P,z) = \deg(Q,\zeta(z)) \deg(\zeta,z)\end{equation} Now since $\zeta_{n_0-1}$ is injective on $U_{n_0-1}=P(U_{n_0})$, it follows that $\forall z\in U_{n_0}$, we have $\deg(\zeta,P(z)) = 1$ so \mathbf{b}egin{equation}\label{eq:degs2} \deg(P,z) = \deg(Q,\zeta(z)) \deg(\zeta,z)\end{equation} in particular if $z\in U_{n_0}$ and $\zeta(z)$ is the critical point of $Q$ then $z$ is a critical point of $P$, and its local degree is even, thus equal to $2$. This immediately rules out the possiblity that $c_0=c_1$, for the local degree of this double critical point would be $3$. Otherwise $P(\mathit{co}_0) = P(c_0)$ hence $Q(\zeta(\mathit{co}_0)) = \zeta(P(\mathit{co}_0)) = \zeta(P(c_0)) = Q(\zeta(c_0)) = Q(c)$ where $c$ denotes the critical point of $Q$, so $\zeta(\mathit{co}_0) = c$ because $c$ is the only preimage of $Q(c)$ by $Q$. Hence $\mathit{co}_0$ must be critical as we already remarked. But this is not the case, leading to a contradiction. \end{proof} As the proof above shows, the obstruction is essentially due to the co-critical point $\mathit{co}_0$. This is why Petersen and Tan had to extend the conjugacy $\zeta_{n_0-1}$ into a semi-conjugcacy $\zeta$ defined only on some subset of $U_{n_0}$ containing $c_1$ and either not containing $\mathit{co}_0$ or at least with $\mathit{co}_0$ not ``in the way''. For this, they had to consider many cases, and we will not review them here. \subsection{Proof of Theorem~\ref{thm:ll}} First note that the claim on the total mass of $\mu_\lambda$ was proven in \mathbb{C}ref{prop:totalmass}. 
Recall that $0<|\lambda|<1$ and that \[ Z_\lambda = \{c\in\mathbb{C}^*\,;\,\text{both critical points of $P_{\lambda,c}$ belong to }\partial U\} \] where $U = U(P_{\lambda,c})$ is the set defined in \mathbb{C}ref{prop:U}. The critical points of $P_{\lambda,c}$ are $z=1$ and $z=c$. The set $Z_\lambda$ contains $1$ and when $c\in\mathbb{C}^* \setminus Z_\lambda$, there is only one critical point on $\partial U$. We invite the reader to read the statement of the fourt point of \mathbb{C}ref{lem:cP} again, about limits of critical points on $\partial U$ when the polynomial varies. \mathbf{b}egin{assertion} The set $Z_\lambda$ is closed in $\mathbb{C}^*$. On any connected component of the complement of $Z_\lambda$, it is always the same critical point that belongs to $\partial U$. \end{assertion} The two assertions follow from the fourth point of \mathbb{C}ref{lem:cP} and continuity of the two critical points of $P_{\lambda,c}$ with respect to $c$. Let us prove that $Z_\lambda$ is a Jordan curve (which gives an independent proof of the fact that it is closed). Recall that $P_{\lambda,c}$ and $P_{\lambda,1/c}$ are conjugate by an affine map fixing $0$, in particular $Z_\lambda$ is invariant by $c\longrightarrowto 1/c$. It contains $c=1$ because both critical points are then identical. It also contains $c=-1$ for then $P_{\lambda,-1}$ commutes with $z\longrightarrowto -z$, hence $U$, which contains $0$, is invariant by $-z$ too and the two critical point $-1$ and $1$ thus belong to $\partial U$ simultaneously. Let $I_\lambda$ denote the image of $Z_\lambda$ by $c\longrightarrowto v = \frac{c+c^{-1}}{2}$. It contains $v=1$ and $v=-1$ and $Z_\lambda$ is its whole preimage. \mathbf{b}egin{lemma}\label{lem:Il} The set $I_\lambda$ is a Jordan arc. \end{lemma} \mathbf{b}egin{proof} Recall that $\mathcal{H}_0(\lambda)$ can be seen as a subset of $\mathbb{C}^2$ via the $(a,b^2)$ coordinates. By \mathbb{C}ref{prop:0ra} this set is homeomorphic by $(\lambda,v)\longrightarrowto(a,b^2)$ to the subset that we denoted $\mathcal{H}^a(\lambda)$ of $(\lambda,v)$-space, of classes of polynomials with an attracting fixed point marked of multiplier $\lambda$. According to Petersen and Tan (see \mathbb{C}ref{sub:PT}, \mathbb{C}ref{prop:PTo}), the fact that both critical points are on $\partial U(P)$ is equivalent to the fact that $\Phi([P])$ belongs to the seam $\Pi(\partial U(Q))$. Since the seam is a Jordan curve, and $\Phi$ is a homeomorphism from $\mathcal{H}_0(\lambda)$ to its image, the lemma follows. \end{proof} \mathbf{b}egin{corollary} The set $Z_\lambda$ is a Jordan curve. \end{corollary} \mathbf{b}egin{proof} By lifting properties of coverings, the Jordan arc minus its ends has two disjoint lifts starting from its middle point by $c\longrightarrowto v$. The ends of each lift must converge to $-1$ and $1$. The union of these two Jordan arcs is then a simple closed curve and equal to $Z_\lambda$. \end{proof} In particular $Z_\lambda$ is bounded. Since it is invariant by $c\longrightarrowto 1/c$, it is also bounded away from $0$. In particular, the complement of $Z_\lambda$ in $\mathbb{C}$ has two components, one that is bounded and one that is unbounded. \mathbf{b}egin{assertion} The unique critical point of $P_{\lambda,c}$ that belongs to $\partial U$ is $c$ if $c\neq 0$ belongs to the bounded component of $\mathbb{C}\setminus Z_\lambda$, and it is $1$ if $c$ belongs to the unbounded component. 
\end{assertion} By the discussion at the beginning of this section, it is enough to prove that the unbounded component contains \emph{at least one parameter} for which $c\in \partial U$ and similarly that the bounded component minus $0$ contains at least one parameter for which $1\in \partial U$. When $c$ tends to infinity, the map $P_{\lambda,c}$ tends on every compact subset of $\mathbb{C}$ to the quadratic polynomial $Q_\lambda(z)=\lambda z(1-\frac{z}2)$. We have $J(Q_{\lambda}) \subset B(0,10)$. The restriction of $Q_\lambda$ as a map from $Q^{-1}(B(0,10))$ to $B(0,10)$ is quadratic-like. For $|c|$ big enough, there is a quadratic-like restriction of $P_{\lambda,c}$ whose domain contains $0$ and is contained in $B(0,10)$ (a perturbation of a quadratic like map is still quadratic like, up to reducing its domain). This restriction has an attracting fixed point $z=0$, hence there is a critical point of the restriction in its basin. This critical point must belong to $B(0,10)$ hence cannot be equal to $c$ if $c$ is big enough. Hence it must be the critical point $z=1$. The boundary of the basin is contained in the Julia set of the restriction, hence in the Julia set of the full polynomial. It implies that $c$ is not in the immediate basin of $0$ for $P_{\lambda,c}$, a fortiori not in $\partial U$. Recall that $P_{\lambda,1/c}$ is conjugate to $P_{\lambda,c}$ by $z\longrightarrowto cz$: \[c^{-1}P_{\lambda,c}(cz) = P_{\lambda,1/c} (z).\] The conjugacy $z\longrightarrowto cz$ sends respectively the critical points $1$ and $1/c$ of $P_{\lambda,1/c}$ to the critical points $c$ and $1$ of $P_{\lambda,c}$. Applying this change of variable, it follows from the above discussion that the critical point on $\partial U$ is $c$ when $|c|$ is small. \mathbf{b}egin{assertion} The function $c\longrightarrowto -\log r(P_{\lambda,c})$ defined on $\mathbb{C}^*$ is subharmonic and continuous. It is harmonic on $\mathbb{C}^*\setminus Z_\lambda$. \end{assertion} Continuity has been proven in \mathbb{C}ref{lem:cont} and subharmonicity in \mathbb{C}ref{prop:sh}. To prove harmonicity on the complement of $Z_\lambda$, we use \cref{eq:rpc} on page~\pageref{eq:rpc}, according to which $r(P)=|\varphi_P(c_P)|$ where $c_P$ is the critical point on $\partial U(P)$, and we use holomorphic dependence of $\varphi_P$ on $P$, \mathbb{C}ref{lem:holodep}. We saw that $c_P=1$ on one component and $c_P=c$ on the other component. The (distribution) Laplacian of a subharmonic function is known to be a Radon measure, let us call it $\mu_\lambda$. \mathbf{b}egin{lemma} The support of $\mu_\lambda$ is equal to $Z_\lambda$. \end{lemma} \mathbf{b}egin{proof} The support is the complement of the biggest open set on which $-\log r$ is harmonic. We proved that $-\log r$ is harmonic on the complement of the closed set $Z_\lambda$, hence the support of $\mu_\lambda$ is contained in $Z_\lambda$. To prove the converse inclusion, we adapt to our setting an argument that was explained to us by \'Avila in the setting of Siegel disks. We will proceed by contradiction and assume that there is some ball $B=B(c_0,\rho)$ with $c_0\in Z_\lambda$ and on which the function $-\log r$ is harmonic. Let us deduce from this that one of the critical points is on $\partial U$ for all $c\in B$, i.e.\ that either $\forall c\in B$, $1\in\partial U(P_{\lambda,c})$ or $\forall c\in B$, $c\in \partial U(P_{\lambda,c})$. 
This leads to a contradiction since $c_0\in Z_\lambda$ is accumulated by points in each of the two complementary components of $Z_\lambda$, and on one of those components the only critical point on $\partial U(P_{\lambda,c})$ is $z=1$ and on the other it is $z=c$. A harmonic function on a simply connected open set is the real part of some holomorphic function, so $\log r(P_{\lambda,c}) = \mathbb{R}e g(c)$ with $g : B\to\mathbb{C}$ holomorphic. In other words, \[\forall c\in B,\ r(P_{\lambda,c}) = | h(c) |\] for the non-vanishing holomorphic function $h=\exp \circ \,g$. On the other hand, $r(P) = |\varphi_P(c_P)|$ for any critical point $c_P\in\partial U(P)$. Recall that the critical point $c_P$ with $P=P_{\lambda,c}$ is unique if $c$ is not in $Z_\lambda$ and that is a holomorphic function of $c$ in the complement of $Z_\lambda$. Since $Z_\lambda$ is a Jordan curve, $c_0$ is in the closure of both complementary components of $Z_\lambda$. Taking the intersection of a complementary component with $B$ may disconnected it, but at least on each component $C'$ of this intersection, the function $\varphi(c_P)/h(c)$ has constant modulus equal to $1$, so is constant on $C'$. Choose one such component $C'$ and call $u$ this constant: \[|u|=1,\text{ and}\] \[\forall c\in C',\ \varphi(c_P) = u h(c).\] Recall that $\psi_P$ has a continuous extension $\ov\psi_P$ to a homeomorphism from $\ov B(0,r(P))$ to $\ov U(P)$ (\mathbb{C}ref{lem:psiext}). For $c\in B$ let \[\zeta(c) = \ov{\psi}_{P_{\lambda,c}}(uh(c)),\] which is defined since by assumption $r(P_{\lambda,c}) = |h(c)|$ and $|u|=1$. The function $\zeta$ is continuous by the third point of \mathbb{C}ref{lem:cP}. For $c\in C'$ we have $\zeta(c)=c_P$. Let us prove that the function $\zeta$ is homlomorphic. It is the pointwise limit as $\varepsilonilon\to 0$ of the holomorphic functions $c \longrightarrowto \ov{\psi}_{P_{\lambda,c}}(u(1-\varepsilonilon)h(c))$. These functions are uniformly bounded on compact subsets of $B$: one argument for that is that they take value in the filled-in Julia set, which are contained in a common ball when the parameter varies little. A uniformly bounded pointwise limit of holomorphic functions is holomorphic. It follows that $\zeta$ is locally holomorphic, hence holomorphic. Now note that $\zeta$ coincides with one of the two critical points $z=c$ or $z=1$ of $P$ on the component $C'$. By holomorphic continuation of equalities, $\zeta$ is this critical point on all $B$. It follows that one of the critical points is always on $\partial U$ for $c\in B$, leading to the aforementioned contradiction. \end{proof} \subsection{Parametrizing the Z-curve in the attracting case} Let $\mathbb{U}=\partial \mathbb{D}$ and consider the map \[\Psi:\mathbf{b}egin{array}{rcl} Z_\lambda & \to & \mathbb{U} \\ c & \longrightarrowto & \varphi_{P}(c)/\varphi_P(1) \end{array}\] where, as usual, $P = P_{\lambda,c}$ and $1$ and $c$ are the critical points of $P$. Recall that $c=1$ and $c=-1$ both belong to $Z_\lambda$. We have: \mathbf{b}egin{itemize} \item $\Psi(1) = 1$, since in this case, the two critical points coincide, \item $\Psi(-1) = -1$, because in this case the map $\varphi_P$ commutes with $z\longrightarrowto-z$. \end{itemize} Recall that $c\in Z_\lambda$ iff $1/c\in Z_\lambda$. \mathbf{b}egin{lemma}\label{lem:Psiinvc} $\Psi(1/c) = 1/\Psi(c)$. \end{lemma} \mathbf{b}egin{proof} Let $P=P_{\lambda,c}$ and $B = P_{\lambda,1/c}$. 
We have $B(z) = P(cz)/c$ and $\varphi_{B}(z)=\varphi_P(cz)/c$, hence $\Psi(1/c)$ $=$ $\varphi_{B}(1/c)/\varphi_{B}(1)$ $=$ $\varphi_{P}(c\times 1/c)/\varphi_P(c\times 1)$ $=$ $\varphi_{P}(1)/\varphi_P(c)$ $=$ $1/\Psi(c)$. \end{proof} Let us prove that: \mathbf{b}egin{lemma}The map $\Psi$ is a homeomorphism. \end{lemma} \mathbf{b}egin{proof}Note that \mathbf{b}egin{itemize} \item It is continuous, since $\varphi_P$ depends holomorphically on $P$. \item By \mathbb{C}ref{lem:alltheta}, the map $\Psi$ is surjective. \item To prove injectivity of $\Psi$ we will use injectivity of $\Phi$, see Eq.~\eqref{eq:inj} as follows: \end{itemize} Consider two maps $P_{\lambda,c}$, $P_{\lambda,c'}$ in $Z_\lambda$ that have the same image by $\Psi$. Then their affine class (without marked point) have the same image by $\Phi$ according to \mathbb{C}ref{lem:ppu}. It follows that the two maps are affine conjugate. Hence either $c'=c$ or $c' = 1/c$. In the latter case, by \mathbb{C}ref{lem:Psiinvc}, we have $\Psi(c')=1/\Psi(c)$. Since we assumed moreover that $\Psi(c')=\Psi(c)$, it follows that $\Psi(c) = 1/\Psi(c)$, hence $\Psi(c) = \pm 1$. We already know that $\Psi(1)=1$ and $\Psi(-1)=-1$ and hence we have that either $c$ or $1/c$ is equal to $\pm 1$ by the above analysis. But then $c=1/c$. \end{proof} \mathbf{b}egin{figure}[ht] \mathbf{b}egin{center} \includegraphics[width=\textwidth]{png/volcano-5.png} \end{center} \caption{3D rendering of the graph of the function $c\longrightarrowto \log r(P_{\lambda,c}) - \log |c|$ for $\lambda = 0.8i$ and $c$ varying in a bounded subset of $\mathbb{C}$. This graph, a smooth surface except along a curve where it is creased, is textured with the bifurcation locus of the family $c\longrightarrowto P_{\lambda,c}$. Compared to the function $-\log r$, we added $\log|c|$ and then took the opposite. This allows an elegant representation as a volcano looking scenery. The horizontal scale and the vertical scales have been chosen different to fine-tune this aspect. The modified function is defined on $\mathbb{C}$, has a limit as $c\longrightarrow 0$, is harmonic outside $Z_\lambda$ and tends to $-\infty$ when $c\longrightarrow \infty$. Its Laplacian is the opposite of the measure $\mu_\lambda$.} \label{pic:heightfield} \end{figure} \mathbf{b}egin{figure}[ht] \mathbf{b}egin{center} \includegraphics[width=12.5cm]{png/ba2.png} \end{center} \caption{The bifurcation locus together with equipotentials, c.f.\ \mathbb{C}ref{pic:heightfield,fig:bifattrclines}} \label{pic:ba2} \end{figure} \section{Siegel slices}\label{sec:5} \subsection{Introduction} Shishikura proved that all bounded type Siegel disks of polynomials are quasicircles with a critical point in the boundary. This applies to our family: when $\theta$ is a bounded type irrational and $\lambda=e^{2\pi i\theta}$, then for all $c\in\mathbb{C}^*$ the Siegel disk of $P_{\lambda,c}$ at $0$ is a quasidisk containing at least one critical point. Let us recall a theorem of Zakeri, in \cite{art:Zakeri}. \mathbf{b}egin{theorem}[Zakeri]\label{thm:Zak} if $\theta$ is a bounded type irrational number and $\lambda = e^{2\pi i\theta}$, let \[Z_\lambda = \{c\in\mathbb{C}^*\;|\;\text{both critical points belong to }\partial \mathbb{D}elta(P_{\lambda,c}).\}\] Then $Z_\lambda$ is a Jordan curve. Call $I$ and $E$ the bounded and unbounded components of its complement in $\mathbb{C}$. 
Then $I$ contains $0$ and for all $c\in I\setminus\{0\}$, the critical point on $\partial \mathbb{D}elta(P_{\lambda,c})$ is $z=c$; for all $c\in E$, the critical point on $\partial \mathbb{D}elta(P_{\lambda,c})$ is $z=1$. \end{theorem} The set $Z_\lambda$ is referred to here as the \emph{Zakeri curve}. Given a measure $\mu$ let $\Supp\mu$ denote its support. The object of this section is to prove: \mathbf{b}egin{theorem}\label{thm:bddType} Let $\theta$ be a bounded type irrational and $\lambda = e^{2\pi i\theta}$. Then the support of $\mu_\lambda$ is equal to the Zakeri curve: \[ \Supp \mu_\lambda = Z_\lambda.\] \end{theorem} \subsection{Proof} Let us denote by $\mathit{cr}_1(c)=1$ and $\mathit{cr}_2(c)=c$ the two holomorphic parametrizations of the critical points of $P_{\lambda,c}$. Given a simply connected open subset $\mathbb{D}elta$ of $\mathbb{C}$ containing $0$ we denote by $r(\mathbb{D}elta)$ its conformal radius with respect to $0$. \subsubsection{\texorpdfstring{$\Supp\mu_\lambda \subset Z_\lambda$}{Supp mu lambda is contained in Z lambda}} Let $c_0\in\mathbb{C}^*\setminus Z_{\lambda}$. We will prove that $c_0\notin \Supp\mu_\lambda$. The following result is due to D.\ Sullivan (see \cite{Zakeri2}). \mathbf{b}egin{proposition}[Sullivan]\label{prop:sullivan} Let $(f_a)_{a \in B(a_0, r)} : (U, 0) \longrightarrow (\mathbb{C}, 0)$ be a one parameter family of holomorphic maps with $f_a(z) = \lambda z + \cal O(z^2)$ with $|\lambda| =1$ and that depends analytically on $a$. Assume that for all $a$ the map $f_a$ has a Siegel disc $\mathbb{D}elta_a$ around $0$, that $\mathbb{D}elta_a$ has finite conformal radius w.r.t.\ $0$ for at least one parameter and that $\partial \mathbb{D}elta_a$ undergoes a holomorphic motion as $a$ varies. Then, $a \longrightarrowto \log r( \mathbb{D}elta_a )$ is harmonic. \end{proposition} \mathbf{b}egin{proof} A simply connected subset of $\mathbb{C}$ has finite conformal radius iff it is not the whole complex plane. As $\partial \mathbb{D}elta_a$ undergoes a holomorphic motion, if one $\mathbb{D}elta_a$ is different from $\mathbb{C}$, then they are all different from $\mathbb{C}$. By Slodkowsky's theorem, let us extend the motion to a holomorphic motion of all the plane $\mathbb{C}$. Let $z_n$ be a sequence points in the Siegel disk associated to $f_{a_0}$ converging to the boundary of the Siegel disk. For $a$ in a small neighborhood of $a_0$ let $z_n(a)$ be the point that the holomorphic motion transports $z_n$ to. Let $\psi_a : B(0,r_a) \longrightarrow \mathbb{D}elta_a$ be the linearizing parametrization, normalized by $\psi_a(z) = z + \cal O(z^2)$ near $0$. Note that $a\longrightarrowto r_a$ is continuous by a theorem of Caratheordory (\cite{Po}, Section~1.4, in particular Theorem~1.8 page~29). Now look at, \[ u_n(a) = \psi_a^{-1}(z_n(a)) \] defined for $a\in B(a_0,r)$. For each $a$, the sequence $(z_n(a))_{n \in \mathbb{N}}$ converges to a point in the boundary of the Siegel disk. Thus, $(|u_n(a)|)_{n \in \mathbb{N}}$ converges to the conformal radius of $\mathbb{D}elta_a$, $r(a)$. The central remark is that the map \[ (a,z) \longrightarrow (\psi_a(z), z) \] is bi-analytic: indeed, $\psi_a(z)$ is given by a power series in $z$ whose coefficients depend analytically on $a$. So $a \longrightarrow u_n(a)$ is also analytic. Therefore, the maps $a \longrightarrowto \log |u_n(a)|$ are harmonic. They are (locally) bounded away from $\infty$: indeed we have $|u_n(a)|\leq r_a$ and $a\longrightarrowto r_a$ is continuous. 
The map $a \longrightarrow \log r(a)$ is the pointwise limit of those maps, an so is harmonic too (by \cite{Ra}, a pointwise limit of positive harmonic maps $h_n$ is harmonic, this immediately adapts to families that are bounded above by $B\in\mathbb{R}$ by considering $B-h_n$.). \end{proof} Since $c_0\notin Z_\lambda$, then by \mathbb{C}ref{thm:Zak} one of the critical points $\mathit{cr}_i(c)$ remains on the boundary of the Siegel disk $\mathbb{D}elta$ for $c$ in a ball $B(c_0,r_0)$ disjoint from $Z_\lambda$. When $\mathbb{D}elta$ is a Jordan curve, which is the case when $\theta$ has bounded type, the analytic conjugacy $\psi$ from the rotation to $\mathbb{D}elta$ extends to a homeomorphism (see \cite{book:Milnor}, Lemma~18.7), which is still conjugating the rotation to the dynamics. It follows that the critical point above is not (pre)periodic for any parameter $c\in B(c_0,r_0)$. Hence its orbit undergoes a holomorphic motion. This motion commutes with the dynamics. This motion extends continously to the closure of the critical orbit by the $\lambda$-lemma (see~\cite{MSS}), and the extension still commutes with the dynamics. By the conjugacy above, the critical point has a dense orbit in $\partial \mathbb{D}elta$ hence the closure \emph{is} $\partial \mathbb{D}elta$. Then by \mathbb{C}ref{prop:sullivan} the function $c\longrightarrowto \log r(P_{\lambda,c})$ is harmonic on $B(c_0,r_0)$. \subsubsection{\texorpdfstring{$Z_\lambda\subset \Supp\mu_\lambda$}{Z lambda is contained in Supp mu lambda}} Let $c_0\in\mathbb{C}^*$ such that $c_0\notin \Supp\mu$. We will prove that $c_0\notin Z_{\lambda}$. Let start by proving the following assertion, which is valid for all Brjuno rotation number $\theta$ (not only bounded type ones). Still denoting $\lambda = e^{2\pi i\theta}$: \mathbf{b}egin{assertion} In the complement of $\Supp\mu_\lambda$, the Siegel disk undergoes a holomorphic motion. \end{assertion} It follows from the main theorem in \cite{Zakeri2} but we include a proof here for completeness. It follows from the following more general version, whose proof was communicated to us by \'Avila: \mathbf{b}egin{proposition}\label{prop:holomo} Let $\lambda$ be a complex number of modulus $1$. Let $(f_a)_{a \in W} : (U, 0) \longrightarrow (\mathbb{C}, 0)$ be a one parameter family of holomorphic maps with $f_a(z) = \lambda z + \cal O(z^2)$. Assume they all have a Siegel disk $\mathbb{D}elta_a$ and assume that $a \longrightarrowto \log r( \mathbb{D}elta_a )$ is harmonic. Then, $\ov\mathbb{D}elta_a$ undergoes a holomorphic motion w.r.t.\ $a\in W$ that commutes with the dynamics and which is holomorphic in $(a,z)$ in the interior of the Siegel disk. \end{proposition} \mathbf{b}egin{proof} Denote $r_a = r(\mathbb{D}elta_a)$. Let $\psi_a : B(0,r_a) \longrightarrow \mathbb{D}elta_a$ be the linearizing parametrization, normalized by $\psi_a(z) = z + \cal O(z^2)$ near $0$. Since $a\longrightarrowto\log r_a$ is harmonic in $W$, there exists a holomorphic map $h:W \longrightarrow \mathbb{C}$ such that: $\log r_a = h(a)$. Let $g=\exp \circ h$, so that $|g(a)| = r_a $. Now define on $W \times \mathbb{D}elta_{a_0}$ \[ \zeta(a,z) = \psi_a \left( g(a) \times \psi_{a_0}^{-1}(z) \right), \] $\zeta$ is a holomorphic motion, which hence extends to the boundary of the maximal linearization domain by the $\lambda$-lemma. This motion satisfies $\zeta(a,f_{a_0}(z)) = f_{a}(\zeta(a,z))$ when $z\in \mathbb{D}elta_{a_0}$, and this relation extends by continuity to all $z\in\partial \mathbb{D}elta_{a_0}$. 
\end{proof} \noindent The proposition above implies the assertion. We still assume here that $\theta\in \cal B$ and consider some $c_0\notin\Supp\mu_\lambda$ where $\lambda = e^{2\pi i\theta}$. Since the set $\Supp\mu_\lambda$ is closed, there exists a ball $B(c_0,r_0)$ that is disjoint from $\Supp\mu_\lambda$. Let \[\zeta_c(z)=\zeta(c,z)\] be the holomorphic motion given by \mathbb{C}ref{prop:holomo}, based on parameter $c_0$, i.e.\ with $\zeta_{c_0}(z)=z$. Consider any critical point $\mathit{cr}(c_0) \in \partial \mathbb{D}elta(P_{\lambda,c_0})$ with either $\mathit{cr}=\mathit{cr}_1$ or $\mathit{cr}=\mathit{cr}_2$ (there is at least one). \mathbf{b}egin{lemma}\label{lem:crfollows} The critical point $\mathit{cr}(c)$ follows the motion $\zeta$ over $B(c_0,r_0)$, i.e.\ $\zeta_c^{-1}(\mathit{cr}(c))$ is constant. \end{lemma} \mathbf{b}egin{proof} We will proceed by contradiction and assume that it is not constant. In other words, $\zeta_c^{-1}(\mathit{cr}(c)) \not\equiv \mathit{cr}(c_0)$. Then the holomorphic functions $c\longrightarrowto \mathit{cr}(c)$ and $f:c\longrightarrowto \zeta_c(\mathit{cr}(c_0))$ differ, though they take the same value for $c=c_0$. In particular the winding number of $\mathit{cr}-f$ around $0$ is different from $0$, as $c$ loops through the circle of centre $c_0$ and radius $r_0/2$. Consider now a point $z_0\in\mathbb{D}elta(c_0)$ that is very close to $\mathit{cr}(c_0)$. Then the function $g:c\longrightarrowto\zeta_c(z_0)$ has to be close to the function $f$. If it is close enough, then the winding number of $\mathit{cr}-g$ around $0$ will be the same as that of $\mathit{cr}-f$, as $c$ loops through the same circle as above. Hence $\mathit{cr}-g$ must vanish, say at some parameter $c_2$. Then $\mathit{cr}(c_2)$ belongs to $\mathbb{D}elta(c_2)$, which is a contradiction. \end{proof} \noindent So any critical point that is on $\partial\mathbb{D}elta(P_{\lambda,c_0})$ remains on $\partial \mathbb{D}elta$ when $c$ varies in $B(c_0,r_0)$. Now restrict to the case where $\theta$ has bounded type. If both critical points were on $\partial\mathbb{D}elta(P_{\lambda,c_0})$ then this would be so over $B(c_0,r_0)$, i.e.\ we would have $B(c_0,r_0)\subset Z_{\lambda}$. But $Z_\lambda$ has no interior: it is a Jordan curve. Hence there can be only one critical point on $\partial\mathbb{D}elta(P_{\lambda,c_0})$, i.e.\ $c_0\notin Z_\lambda$. \section{Parabolic slices}\label{sec:parabo} Here we assume that $\lambda = e^{2\pi i\theta}$ with $\theta =p/q$ a rational number, written in lowest terms. \subsection{Introduction} I defined in my thesis \cite{thesis:Cheritat} the \emph{asymptotic size} of a parabolic point and proposed it as an analogue of the conformal radii of Siegel disks. \mathbf{b}egin{definition}\label{def:asslen} Orbits attracted by petals of a fixed non-linearizable parabolic point $p$ of a holomorphic map $f$ satisfy \[|f^n(z)-p| \sim \frac{L}{n^{1/r}} \] for some constant $L>0$ called the asymptotic size of $p$ and some $r\in\mathbb{N}^*$ which coincides with the number of attracting petals. 
\end{definition} One can compute $L$ from the asymptotic expansion of an iterate of $f$, provided it is tangent to the identity at $p$: If \[f^k(p+z) = p + z + C z^{m+1} + \cal O(z^{q+2})\] then \[r = m\] and \[L = \left|\frac{k}{mC}\right|^{1/m}.\] Under a conjugacy $g = h \circ f \circ h^{-1}$, the factor $L$ scales as a length: \[L(g) = L(f) \times |h'(p)|.\] Correspondingly, the factor $C$ scales as follows: \[g(p'+z) = p'+z+ \frac{C}{h'(p)^m} z^{m+1} + \cal O(z^{m+2})\] where $p'=h(p)$. In the case of the family $P_{\lambda,c}$, the fixed point $z=0$ has a multiplier that is a primitive $q$-th root of unity and it is well-known in this case that the number of petals must be a multiple of $q$, because $P$ permutes the petals in groups of $q$. Moreover, each cycle of petals attracts a critical point by Fatou's theorems, hence there is at most two cycles of petals, so it follows that \[m=q\text{ or }m=2q.\] In fact, one can develop \[P_{\lambda,c}^q(z) = z + C_{p/q}(c) z^{q+1} + \cal O(z^{q+2})\] and \[m=2q \iff C_{p/q}(c)=0.\] From now on we abbreviate with \[C:=C_{p/q}\] to improve readability. Now recall that \[ P_{\lambda,c}(z) = \lambda z \left( 1 - \frac{(1 + \sfrac{1}{c})}{2} z + \frac{\sfrac{1}{c}}{3} z^2 \right).\] To take advantage of the tools of algebra, it makes sense to use $1/c$ as a variable, so let us call \[u = \frac{1}{c}.\] So that \[ P_{\lambda,c}(z) = \lambda z \left( 1 - \frac{(1 + u)}{2} z + \frac{u}{3} z^2 \right).\] It follows that $C(c)$ is a polynomial in $u$. We thus define \[\check C(u) = C(c) = C(1/u).\] A polynomial $P(z)=\sum a_n z^n$ of degree $d$ is called \emph{symmetric} when its coefficients satisfy $a_{d-k}=a_k$ for all $k$. \mathbf{b}egin{lemma}\label{lem:degC} The polynomial $\check C$ has degree $q$ and is symmetric. \end{lemma} \mathbf{b}egin{proof} When $u$ tends to infinity then $c$ tends to $0$, and the map $f_c(z) = c^{-1}P_{\lambda,c}(cz)$ converges uniformly on every compact subset of $\mathbb{C}$ to $Q(z)=\lambda z (1-\frac{z}2)$. Let $Q^q(z) = z + C_0 z^{q+1}+\cal O(z^{q+2})$. Then $C_0\neq 0$ because $Q$ can have only one cycle of petals by Fatou's theorem, since it has only one critical point. Moreover, the scaling factor $c$ implies that we have $f_c^q(z) = z + c^{q}C(c) z^{q+1} + \cdots$. Hence $u^{-q}C(1/u)$ has a non-zero limit as $u\longrightarrow\infty$, because the limit is $C_0$. So the degree is $q$. The symmetry comes from the symmetry $c^{-1}P_{\lambda,c}(cz) = P_{\lambda,1/c} (z)$ of our family. By the scaling law, $c^{q} C_{p/q}(c) = C_{p/q}(1/c)$, i.e.\ \[u^{q} C_{p/q}(1/u) = C_{p/q}(u)\] which is another way to express that a degree $q$ polynomial is symmetric. \end{proof} In the proof above we saw that the leading coefficient of $\check C$ is the coefficient $C(Q)$ in \[Q^q(z) = z + C(Q) z^{q+1}+\cal O(z^{q+2})\] where \[Q(z)=\lambda z (1-\frac{z}2).\] In particular, if we denote $u_i$ the roots of $\check C$, counted with multiplicity, we get \mathbf{b}egin{equation}\label{eq:cCprod} \check C(u) = C(Q) \prod_{i=1}^q (u-u_i) \end{equation} Consider the function \[r(c)=\frac{1}{|C(c)|^{1/q}}.\] This quantity is equal to the asymptotic size $L$ of $0$ for $P_{\lambda,c}$ except when $C(c)=0$, where $r(c)=+\infty$. The quantity $-\log r$ computes to \[-\log r(c) = - \frac{1}{q} \log |C(c)|.\] The function $-\log r$ is then harmonic in $\mathbb{C}^*$ minus the set of zeroes of $C$. By convenience, we set $\log +\infty = +\infty$, and $-\log r$ is then subharmonic in $\mathbb{C}^*$. 
From \cref{eq:cCprod} we get \mathbf{b}egin{equation}\label{eq:logL} -\log r(c) = -\log L(Q)-\log|c| + \frac{1}{q} \sum \log |c-c_i| \end{equation} where $L(Q)$ denotes the asymptotic size of $Q$ at $0$. The generalized Laplacian applied to $-\log r$ on $\mathbb{C}^*$ is a finite sum of dirac masses that we call $\mu_{\lambda}$ (recall that $\lambda = e^{2\pi i \pq}$): \[\mu_{\lambda} = \frac{2\pi}q \sum_{i=1}^q \delta_{c_i}\] where $c_i$ are the roots of $C$ counted with multiplicity. \mathbf{b}egin{remark*} In fact the roots of $\check C$ are simple: it should follow more or less immediately from Proposition~4.6 in \cite{Bo}. Note also that an analogue statement was proved in \cite{art:BEE} for the family of degree $2$ rational maps by a transversality arguments using quadratic differentials. We expect that \cite{art:BEE} adapts to the present situation with little modification. \end{remark*} By the symmetry of the family, i.e.\ \cref{eq:sym}, the following analogue of \cref{eq:symr} holds: \mathbf{b}egin{equation}\label{eq:symr2} -\log r(c) + \log |c|= -\log r(1/c) \end{equation} We also stress one consequence of the above computations, an analogue of \cref{eq:limr} on page~\pageref{eq:limr}: \mathbf{b}egin{equation}\label{eq:limr2} r(c) \underset{c\to\infty}\longrightarrow r(Q). \end{equation} To finish this section, let us note that \mathbf{b}egin{lemma}\label{lem:ka} There exist some $\kappa>0$ such that for all $\pq$, the roots of $\check C$ are contained in $\ov B(0,\kappa)$. The support of all the corresponding measures $\mu_{\lambda}$ are thus contained in $\ov B(0,\kappa)$. \end{lemma} \mathbf{b}egin{proof} If one critical point escapes to infinity, there can be only one cycle of petals by Fatou's theorem. For $c$ to escape, by the proof of \mathbb{C}ref{lem:cesc} and since $|\lambda|=1$, it is enough that $|c|>\kappa$ with $\kappa>3$ and $\frac{(\kappa-3)\kappa}{6} > \max\mathbf{b}ig(\sqrt{6\kappa},6(1+\kappa),\sqrt{12\kappa}\mathbf{b}ig)$. For instance, $\kappa=40$ is enough. \end{proof} \section{Limits of measures}\label{sec:7} A note on terminology : by measures we will always mean \emph{positive} measures. If we ever need other kind of measures, we will call them signed measures or complex measures. \subsection{Statement} How does $\mu_\lambda$ does depend on $\lambda$? \mathbf{b}egin{conjecture}[Buff] Let $\mathbb{U}_X = \{\,\exp(2i\pi x)\,;x\in X\,\}$ and let $\cal B$ denote the set of Brjuno numbers. The function $\lambda\in\mathbb{D}\cup\mathbb{U}_\mathbb{Q}\cup\mathbb{U}_{\cal B}\longrightarrowto \mu_\lambda$ has a continuous extension to $\ov{\mathbb{D}}$ for the weak-$\ast$ topology on measures. \end{conjecture} This states several things: that $\mu_\lambda$ depends continuously on $\lambda$, even at parameters for which $0$ is neutral with rational or Brjuno rotation number, but also that is has a limit at non-Brjuno irrational rotation numbers. Here we prove a special case of this conjecture: \mathbf{b}egin{theorem}\label{thm:main} Let $\theta$ be a bounded type irrational and $p_n/q_n$ its approximants. Then $\mu_{e^{2\pi i\pqn}}\longrightarrow \mu_{e^{2\pi i\theta}}$ for the weak-$\ast$ topology. \end{theorem} We recall that the locally finite Borel measures on $\mathbb{R}^n$ are called the Radon measures and are in natural bijection with positive linear functionals on the space of continuous real-valued function with compact support $C^0_c(\mathbb{R}^n)$ (no need to endow this space with a topology, thanks to positivity). 
Weak-$\ast$ convergence $\mu_n\longrightarrow \mu$ for locally finite Borel measures on $\mathbb{R}^n$ means that for all continuous function $\varphi:\mathbb{C}\to\mathbb{R}$ (dubbed test functions) with compact support, \[\int \varphi \mu_{n} \longrightarrow \int \varphi \mu.\] \subsection{Generalities} The proof of \mathbb{C}ref{thm:main} will use potential theory and we will recall a few generalities about this and other things. We first recall a classic fact: the space of Borel measures supported on $\ov B(0,R)$ and of mass $\leq M$ is compact for the weak-$\ast$ topology, which is metrizable. In particular for a sequence of such measures to converge, it is enough to prove the uniqueness of extracted limits. The \emph{potential} of a finite Borel measure $\mu$ with compact support in the plane is defined by \[ u(z) = \int \frac{1}{2\pi} \log|z-w| d\mu(w)\] It is a subharmonic function and satisfies \[\mathbb{D}elta u = \mu\] in the sense of distributions and better: \[\int (\mathbb{D}elta \varphi) u = \int \varphi \mu \text{ for all $\varphi\in C^0_c(\mathbb{C})$}\] i.e.\ the test functions can be taken $C^0$ instead of $C^\infty$. In the rest of this section on generalities, we will carefully avoid the language of distributions. \mathbb{C}ref{sub:pfthmmain}. \mathbf{b}egin{remark*} Note the difference of convention with \cite{Ra}: there, we have $u(z) = \int \log|z-w| d\mu(w)$ and $\mathbb{D}elta u = 2\pi\mu$. \end{remark*} \mathbf{b}egin{lemma} We have the following expansion: \mathbf{b}egin{equation}\label{eq:expanu} u(z) = \frac{\operatorname{mass} \mu}{2\pi}\log |z| + 0 + o(1). \end{equation} \end{lemma} \mathbf{b}egin{proof} This is well-know, we recall a proof here. When $z$ is not in the support then:\\ $\displaystyle\int \frac{1}{2\pi} \log|z-w| d\mu(w) = \int \frac{1}{2\pi} \left(\log|z| +\log \Big| 1-\frac wz \Big|\right) d\mu(w) = \frac{\operatorname{mass}\mu}{2\pi} \log|z| +$\\ $\displaystyle \frac{1}{2\pi}\int \log \Big| 1-\frac wz \Big| d\mu(w)$. Now if the support of $\mu$ is contained in $B(0,R)$ and $|z|>2R$ then for $w$ in the support, $\mathbf{b}ig|\log |1-w/z|\mathbf{b}ig| \leq C|w/z|$ with $C>0$ some constant, hence\\ $\displaystyle \left|\int \log \Big| 1-\frac wz \Big| d\mu(w) \right|\leq \frac{C'}{|z|}$ with $C' = C\int |w| d\mu(w)$. \end{proof} Let \[ \ell(z) = \frac{1}{2\pi}\log|z|\] This map is locally $L^1$, so for $\varphi\in\mathbb{C}^\infty_c(\mathbb{C})$, the convolution $\varphi * \ell$ is also $C^{\infty}$. However, it does not necessarily have compact support. The potential of a (finite with compact support) Borel measure defined above is just \[ u = \mu * \ell\] where $*$ refers to the convolution operator. Note that $\mu * \ell \in L^1_{\mathrm{loc}}(\mathbb{C})$, i.e.\ $\mu * \ell$ is locally integrable w.r.t.\ the Lebesgue measure (this can be deduced from the first part of the statement below, but this is also a classical fact for subharmonic functions, see the paragraph after remark following the lemma). 
We will need the following classical lemma: \mathbf{b}egin{lemma}\label{lem:fub} For a finite measure with compact support $\mu$ and a continuous test function $\varphi$ with compact support, then $\varphi\times (\mu * \ell)$ is integrable with respect to the Lebesgue measure and: \[\int \varphi\times (\mu * \ell) = \int (\varphi * \ell) \times \mu.\] \end{lemma} (Note: the left hand side is to be understood as an integral over the Lebesgue measure of the integrable function: $\varphi\times (\mu * \ell)$; the right hand side as the integral of the continuous function $(\varphi * \ell)$ over the measure $\mu$. The function $\varphi * \ell$ is continous since $\varphi$ is continuous with compact support and $\ell\inL^1_{\mathrm{loc}}(\mathbb{C})$.) \mathbf{b}egin{proof} This is a simple application of the Fubini theorems, but we will check it carefully. Let $\lambda$ be the Lebesgue measure. Consider the product measure $\lambda\times\mu$ and the measurable function $(z,w)\longrightarrowto \varphi(z)\ell(z-w)$. Fixing $w$, the integral of its absolute value with respect to $d\lambda(z)$ is $\int_\mathbb{R} |\varphi(z)\ell(z-w)| d\lambda(z) = \int_{\Supp \varphi} |\varphi(z)\ell(z-w)| d\lambda(z)$, i.e.\ we can restrict $z$ to the support of $\varphi$. Using $v=z-w$ we get $\int_{\Supp \varphi} |\varphi(z)\ell(z-w)| d\lambda(z) \leq \mathbf{b}ig(\!\max|\varphi| \mathbf{b}ig)\int_{v\in -w+\Supp\varphi} |\ell(v)|d\lambda(v)$. Decompose \[\ell(v) = \log |v| = \max(0,\log |v|) + \min(0,\log|v|) = \ell^+(v) - \ell^-(v).\] We have \[ |\ell(v)| = \ell^-(v) + \ell^+(v) \] so, denoting $S_w = -w+\Supp\varphi$: \mathbf{b}egin{eqnarray*} \int_{v\in S_w} |\ell(v)| & = & \int_{S_w} \ell^-(v) + \int_{S_w} \ell^+(v) \\ & \leq & \int_{\mathbb{C}} \ell^- + \lambda(\Supp\varphi) \max_{v\in S_w} \ell^+(v) \end{eqnarray*} One computes $\int_{\mathbb{C}} \ell^- = \pi/2 < +\infty$. Let $R>0$ so that $\Supp\varphi\subset B(0,R)$ and $R'>0$ so that $\Supp \mu \subset B(0,R')$. Then $\forall w \in\Supp\mu$, $\forall v\in S_w$: \[\ell^+(v) \leq \max\mathbf{b}ig (0,\log(|w|+R)\mathbf{b}ig) \leq \max\mathbf{b}ig (0,\log(R'+R)\mathbf{b}ig).\] Summing up: there is some $K>0$ such that for all $w$ in the support of $\mu$: \[\int_\mathbb{C} |\varphi(z)\ell(z-w)|d\lambda(z) \leq K.\] Since $\mu$ has finite mass it follows that \[\int_{\mathbb{C}\times\mathbb{C}} |\varphi(z)\ell(z-w)|d\lambda(z) d\mu(w)<+\infty.\] So we can apply the Fubini-Tonelli theorem, from which follows that the function $z\longrightarrowto \varphi(z) \left(\int_\mathbb{C} \ell(z-w) d\mu(w)\right)$ is $L^1$ with respect to the Lebesgue measure and that we have \[\int_\mathbb{C} \varphi(z) \left(\int_\mathbb{C} \ell(z-w) d\mu(w)\right)d\lambda(z) = \int_\mathbb{C} \left(\int_\mathbb{C} \varphi(z) \ell(z-w) d\lambda(z)\right) d\mu(w) .\] The left hand side is equal to $\int \varphi\times (\mu * \ell)$ and using $\ell(z-w) = \ell(w-z)$ the right hand side is equal to $\int (\varphi * \ell) \times \mu$. \end{proof} \mathbf{b}egin{remark*} This would be tempting to write the conclusion of the previous lemma as \[ \langle \mu * \ell, \varphi\rangle = \langle \mu, \varphi * \ell\rangle\] except that as a test function, $\varphi * \ell$ is indeed smooth but does not necessarily have compact support, so we prefer to avoid this notation. \end{remark*} We recall that a subharmonic function on a connected open subset $X$ of $\mathbb{R}^n$ that is not $\equiv -\infty$ is in $L^1_{\mathrm{loc}}(X)$, see theorem~2.5.1 in \cite{Ra} or corollary~3.2.8 in \cite{Ho}. 
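The value $\int_{\mathbb C}\ell^-=\pi/2$ quoted in the proof (for the convention $\ell(v)=\log|v|$ used in the displayed decomposition; the overall factor $\frac{1}{2\pi}$ in the definition of $\ell$ only rescales it) is easy to confirm numerically; a minimal sketch:
\begin{verbatim}
# Check  int_C max(0, -log|v|) dA(v) = pi/2 : the integrand vanishes for
# |v| >= 1, so in polar coordinates it equals the integral of
# 2*pi*(-log r)*r over r in [0, 1].
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda r: 2 * np.pi * (-np.log(r)) * r, 0, 1)
print(val, np.pi / 2)   # both ~ 1.5707963
\end{verbatim}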
Also, a subharmonic function is upper semi-continuous and takes values in $[-\infty,+\infty)$, hence it is always locally bounded from above. \subsection{Proof of Theorem~\ref{thm:main}}\label{sub:pfthmmain} Let us denote by \[u_{\theta} = \text{the potential of $\mu_\lambda$} = \mu_\lambda * \ell\] where $\lambda = e^{2\pi i \theta}$ for $\theta$ a Brjuno number or a rational number. Recall that when $\theta$ is a Brjuno number, we call $r(c)$ the conformal radius of the Siegel disk and when $\theta$ is rational, we set $r(c) =L$, the asymptotic size of the parabolic point when it has $q$ petals, and $r(c)=+\infty$ when it has $2q$ petals. We use $\log +\infty = +\infty$ as a convenience when speaking of $\log r$. In the rational case $\mu_\lambda$ is a sum of Dirac masses: $\mu_\lambda = \sum_{i=1}^{q} \frac{2\pi}{q} \delta_{c_i}$. It follows that $u_\pq(c) = \frac{1}{q}\sum \log |c-c_i|$. In the rational or Brjuno case, from \cref{eq:logr,eq:logL} it follows that, for $c\neq 0$: \mathbf{b}egin{equation}\label{eq:pot} u_\theta(c) = -\log r(P_{\lambda,c}) + \log|c| + \log r(Q_\lambda). \end{equation} From \cref{eq:symr,eq:symr2} we have \[ -\log r(P_{\lambda,1/c}) = -\log r(P_{\lambda,c}) + \log |c|. \] and since by \cref{eq:pot} we have \[ -\log r(P_{\lambda,c}) = u_\theta(c) - \log|c| - \log r(Q_\lambda)\] we get $\forall c\in\mathbb{C}^*$: \mathbf{b}egin{equation} u_\theta(c) = -\log r(P_{\lambda,1/c}) + \log r(Q_\lambda) \end{equation} which will interest us when $c\longrightarrow 0$ : indeed by \cref{eq:limr,eq:limr2} \mathbf{b}egin{corollary}\label{cor:zero} For $\theta$ a Brjuno number or a rational number: \[u_\theta(0) = 0\] \end{corollary} By the symmetry formulas \cref{eq:symr,eq:symr2} we have, for all $\theta$ Brjuno or rational: \mathbf{b}egin{equation}\label{eq:symu} u_\theta(c) = u_\theta(1/c) + \log |c| \end{equation} To alleviate the notations and in particular nested subscript and superscripts, which tend to be hard to read, we first set \[ \setlength{\arraycolsep}{3pt} \mathbf{b}egin{array}{rclrclrclrclrcl} P[\lambda,c] &=& P_{\lambda,c} & Q[\lambda] &=& Q_{\lambda} & u[\lambda] &=& u_{\lambda} & \mu[\lambda] &=& \mu_{\lambda} & r[\lambda](c) &=& r(P_{\lambda,c}) \end{array} \] and then we will use the following abuse of notations: \[ \setlength{\arraycolsep}{3pt} \mathbf{b}egin{array}{rclrclrclrcl} P_{n,c} &=& P[e^{2\pi i\pqn},c] & Q_{n} &=& Q[e^{2\pi i\pqn}] & u_n &=& u[e^{2\pi i\pqn}] & \mu_n &=& \mu[e^{2\pi i\pqn}] \\ P_{\theta,c} &=& P[e^{2\pi i\theta},c] & Q_{\theta} &=& Q[e^{2\pi i\theta}] & u_\theta &=& u[e^{2\pi i \theta}] & \mu_\theta &=& \mu[e^{2\pi i \theta}] \end{array} \] and denote \[ \mathbf{b}egin{array}{rcl} r_n(c) &=& r[e^{2\pi i\pqn}](c) \\ r_\theta(c) &=& r[e^{2\pi i\theta}](c). \end{array} \] One key point is the following bound. \mathbf{b}egin{proposition}\label{prop:maj} \[\limsup_{n\to\infty} \sup_\mathbb{C} (u_n-u_\theta) \leq 0.\] \end{proposition} \noindent Its proof is based on a study in \cite{thesis:Cheritat} and is postponed to \mathbb{C}ref{sub:pfmaj}. We then prove a partial result in the more general case when $\theta$ is a Brjuno number: \mathbf{b}egin{proposition}\label{prop:super} Let $\theta$ be a Brjuno number, $\lambda=e^{2\pi i \theta}$ and let $W_\theta$ be the component containing a neighborhood of $0$ of the complement of the support of $\mu_{\lambda}$. Then $u_{n} \longrightarrow u_\theta$ in $L^1_{\mathrm{loc}} (W_\theta)$. 
\end{proposition} \noindent Note that in this proposition, the convergence is only claimed on $W_\theta$. However, by the symmetry formula \cref{eq:symu}, the convergence also occurs on $1/W_\theta := \{\,1/z\,;\,z\in W_\theta\text{ and }z\neq 0\,\}$. In \mathbb{C}ref{sub:pfsuper} we deduce \mathbb{C}ref{prop:super} from \mathbb{C}ref{prop:maj}. In fact, we will only need a weaker result than \mathbb{C}ref{prop:maj}: that \[\limsup_{n\to\infty} \sup_K (u_n-u_\theta) \leq 0\] for all compact subset $K$ of the set $W_\theta$; but our method gives the stronger \mathbb{C}ref{prop:maj}. \mathbf{b}egin{remark*} We do not need the following fact but find it interesting: $1/W_\theta$ is disjoint from $W_\theta$: otherwise $W_\theta$ would be a neighborhood of $0$ and of $\infty$ and one of the critical point would remain on $\partial \mathbb{D}elta(P_{\theta,c})$ by \mathbb{C}ref{lem:crfollows}, which contradicts \mathbb{C}ref{lem:cesc}). \end{remark*} Let us denote \[D_n \xrightharpoonup{\ast} D\] the weak-$\ast$ convergence. We will then complete the job by the following: \mathbf{b}egin{corollary}\label{cor:c2} If $\theta$ is a Brjuno number and $\mathbb{C}= \ov{ W_\theta}\cup \ov{1/W_\theta}$ then $\mu_n \xrightharpoonup{\ast} \mu_\theta$. \end{corollary} \noindent In \mathbb{C}ref{sub:pfc2} we deduce \mathbb{C}ref{cor:c2} from \mathbb{C}ref{prop:super}. \mathbb{C}ref{cor:c2} applies in particular to the case when $\theta$ has bounded type since by \mathbb{C}ref{thm:bddType} we know that the support is a Jordan curve. This proves \mathbb{C}ref{thm:main}. In the subsequent sections, we prove the three statements above. \subsection{Proof of Corollary~\ref{cor:c2} from Proposition~\ref{prop:super}}\label{sub:pfc2} Recall that the measures $\mu_n$ and $\mu_\theta$ all have total mass $2\pi$ and support in a common ball $\ov B(0,R)$ for some $R>0$ by \mathbb{C}ref{lem:qlr,prop:harmo,lem:ka}. By weak-$\ast$ compactness of the set $E$ of Borel measures of mass $2\pi$ on $\ov B(0,R)$, it is enough to prove that for all subsequence of $\mu_n$ that has a weak-$\ast$ limit $\mu\in E$, then $\mu=\mu_\theta$. Recall that $u_n = \mu_n * \ell$ and $u_\theta = \mu_\theta * \ell$. Let \[u = \mu * \ell.\] Then $u$ is a subharmonic function. \mathbf{b}egin{lemma}\label{lem:eqW} Let $\lambda$ be the Lebesgue measure: \[u_n\lambda \xrightharpoonup{\ast} u\lambda\] \end{lemma} \mathbf{b}egin{proof} For all continuous test function $\varphi$, the function $\varphi * \ell$ is continuous (but not necessarily with compact support) and by \mathbb{C}ref{lem:fub}: \[\int (\varphi * \ell) \times \mu_n = \int \varphi \times (\ell * \mu_n) =\int \varphi u_n\lambda\] and \[\int (\varphi * \ell) \times \mu = \int \varphi \times (\ell * \mu) = \int \varphi u\lambda.\] Now $\int (\varphi * \ell) \times \mu_n \longrightarrow \int (\varphi * \ell) \times \mu$ by definition of weak-$\ast$ convergence of $\mu_n$ to $\mu$. \end{proof} By \mathbb{C}ref{prop:super}, $u_n\longrightarrow u_\theta$ in $L^1_{\mathrm{loc}}(W_\theta)$. By the symmetry relations \cref{eq:symu}, this also holds on $1/W_\theta$. This implies the following weaker statement: $u_{n}\lambda \xrightharpoonup{\ast} u_\theta\lambda$ on $W_\theta\cup 1/ W_\theta$. It follows that $u\lambda=u_\theta\lambda$ on $W_\theta\cup 1/ W_\theta$, so $u=u_\theta$ almost everywhere on $W_\theta\cup 1/ W_\theta$. 
Since both functions are subharmonic, this implies (see theorem~2.7.5 in \cite{Ra}) \[u = u_\theta \text{ on $W_\theta\cup 1/ W_\theta$.}\] To extend this equality to $\mathbb{C}$, we will use the notion of non-thin sets, see \cite{Ra}. \mathbf{b}egin{definition}[Def.\ 3.8.1 page~79 in \cite{Ra}] A subset $S$ of $\mathbb{C}$ is \emph{non-thin} at $\zeta\in \mathbb{C}$, if $\zeta \in \ov{S\setminus\{\zeta\}}$ and if for all subharmonic function defined in a neighborhood of $\zeta$, \[\limsup_{\substack{z\to\zeta \\ z\in S\setminus\{\zeta\}}} u(z) = u(\zeta).\] \end{definition} \mathbf{b}egin{theorem}[Thm.\ 3.8.3 page~79 of \cite{Ra}] A connected set containing more than one point is non-thin at every point of its closure. \end{theorem} By hypothesis, $\mathbb{C}=\ov{W_\theta}\cup \ov{1/W_\theta}$. We already know that $u=u_\theta$ on $W_\theta$ and on $1/W_\theta$. Now for all $\zeta \in \ov{W_\theta}$, \[ u(\zeta) = \limsup_{\substack{z \to \zeta \\ z \in W_\theta}} u(z) = \limsup_{\substack{z \to \zeta \\ z \in W_\theta}} u_\theta(z) = u_\theta(\zeta)\] and similarly with $1/W_\theta$ in place of $W_\theta$. Hence $u=u_\theta$ on $\mathbb{C}$, so their generalized Laplacians are equal: $\mu = \mu_\theta$. This ends the proof of \mathbb{C}ref{cor:c2}. \subsection{Proof of Proposition~\ref{prop:super} from Proposition~\ref{prop:maj}}\label{sub:pfsuper} We will use a nice trick suggested by Xavier Buff. The subharmonic function $u_\theta$ is harmonic on $W_\theta$, hence $u_n-u_\theta$ is subharmonic on $W_\theta$. Moreover by \mathbb{C}ref{cor:zero}, $u_n(0) = 0 = u_\theta(0)$ so $u_n-u_\theta$ vanishes at the origin. By \mathbb{C}ref{prop:maj}, the \mathbb{C}ref{prop:super} will be a consequence of the following proposition applied to the functions $f_n = u_n - u_\theta$ on $X=W_\theta$ and with $x_0=0$. \mathbf{b}egin{proposition}\label{prop:upper2} Assume that $f_n$ is a sequence of subharmonic functions on a connected open subset $X$ of $\mathbb{C}$ and assume that \[\exists x_0\in X\text{ such that } f_n(x_0) \longrightarrow 0 \] and that for every compact subset $K$ of $X$, \[ \limsup_{n\to\infty}\ (\sup_K f_n) \leq 0\] Then $f_n \longrightarrow 0$ in $L^1_{\mathrm{loc}}(X)$. \end{proposition} \mathbf{b}egin{proof} For $n$ big enough we have $f_n(x_0)\neq -\infty$, so $f_n \not\equiv - \infty$, so it is in $L^1_{\mathrm{loc}}(X)$. Consider the set $\cal A$ of points of $X$ which have an open ball neighborhood $B$ on which $\int_B |f_n|\longrightarrow 0$. Then we claim that relative to $X$, the set $\cal A$ is open, closed and non-empty. Open is immediate. Closed follows from the following argument: Let $x\in X$ such that $x$ is in the closure of $\cal A$. Let $B=B(x,\varepsilon)$ be compactly contained in $X$. By hypothesis, there is s sequence $M_n\in\mathbb{R}$ such that $M_n\to 0$ and such that for all $n$, \[f_n \leq M_n\] on $B$. We have $M_n-f_n\geq0$ and $|f_n| = |-M_n+M_n-f_n| \leq |M_n| + M_n-f_n$, so it is enough to prove that $\int_B(M_n-f_n) \longrightarrow 0$. For $y\in B$, let \[\varphi_y (z) = x + \frac{(z-x)+(y-x)}{1 + \ov{(y-x)}(z-x)/\varepsilon^2}\] which is a conformal automorphism of $B$ mapping $x$ to $y$. 
Since $f_n\circ \varphi_y$ is also subharmonic (by corollary~2.4.3 in \cite{Ra}, subharmonicity is invariant by an analytic change of variable in the domain), we have \begin{equation}\label{eq:e6} f_n(y) \lambda(B) \leq \int_B f_n\circ\varphi_y(z) d\lambda(z) \end{equation} Since $x\in \ov{\cal A}$, there is some $x'\in B$ such that $x'\in\cal A$, hence there is some open ball $B'\subset X$ containing $x'$ and a sequence $M'_n>0$ such that $M'_n\longrightarrow 0$ and such that $\forall n$, \[\int_{B'} |f_n|\leq M'_n.\] This is a fortiori true if we replace $B'$ by any open ball contained in $B'$, hence we can assume that $B'\Subset B$. For any $f\in L^1(B)$, we have by the change of variable $w=\varphi_y(z)$, $z=\varphi_y^{-1}(w)$: \[\int_{B} f(\varphi_y(z)) d\lambda(z) = \int_{B} f(w) |(\varphi_{y}^{-1})'(w)|^2 d\lambda(w)\] If we let $y$ vary in $B'\Subset B$ then the complicated term $|(\varphi_{y}^{-1})'(w)|$ remains bounded away from $0$ and $\infty$ by constants that depend only on $B$ and $B'$. Let $c\in(0,1)$ be a lower bound. Now recall that $f_n\leq M_n$, hence taking $f = M_n-f_n\geq 0$ above, \[ \int_{B} (M_n-f_n(\varphi_y(z))) d\lambda(z) \geq c^2 \int_{B} (M_n-f_n(w)) d\lambda(w)\] Whence \[\int_{B} (M_n-f_n(w)) d\lambda(w) \leq c^{-2}\left(M_n\lambda(B) - \int_B f_n\circ\varphi_y\right)\] and using \cref{eq:e6} we get \[\int_{B} M_n-f_n \leq c^{-2}(M_n - f_n(y))\lambda(B)\] Averaging the above over $y\in B'$ implies \begin{equation}\label{eq:e7} \frac{1}{\lambda(B)}\int_{B} M_n-f_n \leq c^{-2}\frac{1}{\lambda(B')} \int_{B'} M_n-f_n \end{equation} Now $\int_{B'} M_n-f_n$, which is non-negative, tends to $0$: indeed, $M_n-f_n\leq |f_n| + |M_n|$ and $\int_{B'} |f_n|\leq M'_n$. By \cref{eq:e7}, it follows that $\int_{B} M_n-f_n\longrightarrow 0$. This proves that $\cal A$ is closed. Non-empty: we prove that $x_0\in \cal A$. Let $B=B(x_0,\varepsilon)$ be compactly contained in $X$. By hypothesis, there is a sequence $M_n\longrightarrow 0$ such that for all $n$, $f_n \leq M_n$ on $B$. By subharmonicity, $f_n(x_0) \lambda(B) \leq \int_B f_n$. It follows that $\int_B |M_n-f_n| = \int_B M_n-f_n \leq (M_n-f_n(x_0))\lambda(B)$. Hence $\int_B |f_n| \leq \int_B |M_n| + \int_B |M_n-f_n| \leq (|M_n| + M_n - f_n(x_0))\pi \varepsilon^2$, which tends to $0$; hence $x_0\in\cal A$. By connectedness, $\cal A=X$. \end{proof} This ends the proof of \Cref{prop:super}. \subsection{Proof of Proposition~\ref{prop:maj}}\label{sub:pfmaj} To prove it, we will adapt an inequality in \cite{thesis:Cheritat}, for which the following lemma from~\cite{thesis:Jellouli} was crucial. \begin{lemma}[Jellouli]\label{lem:jellouli} Let: \begin{itemize} \item $f_\alpha : (U,0) \longrightarrow (\mathbb{C},0)$ be a family of holomorphic maps defined for $\alpha$ in an open interval $I$ on a common domain $U$ and fixing the origin with multiplier $e^{2\pi i\alpha}$, \item $\alpha_\ast\in I$ be irrational, \item $\alpha_n = \pqn$ be the convergents of $\alpha_\ast$. \end{itemize} We assume that $\forall \alpha,\beta\in I$, $\|f_\alpha-f_\beta\|_\infty \leq \Lambda |\alpha-\beta|$ and that $f_{\alpha_\ast}$ is linearizable at $0$. Then $f_{\alpha_n}^{q_n} \longrightarrow \id$ uniformly on every compact subset of $\Delta_{\alpha_\ast}$. 
\end{lemma} Its proof goes by conjugating $f_\alpha$ by the normalized linearizing map $\varphi_{\alpha_\ast}$ of $f_{\alpha_\ast}$, which gives $F_\alpha = \varphi_{\alpha_\ast} \circ f_\alpha\circ\varphi_{\alpha_\ast}^{-1}$, and proving the following two things by induction: let \[ r_\ast = r(\Delta(f_{\alpha_\ast})).\] For every compact $K \subset r_\ast \mathbb{D}$, there exist $N=N(K) \in \mathbb{N}$ and $C=C(K) > 0$ such that $\forall n \geq N$, the $q_n$ first iterates of $F_{\alpha_n}$ are defined on $K$ and do not leave $r(\Delta_{\alpha}) \mathbb{D}$, and $\forall z \in K, \, \forall n \geq N, \, \forall k \in \mathbb{N}$: \[ k \leq q_n \implies |F_{\alpha_n}^k (z) - e^{2 i \pi k \pqn} z | \leq \frac{C |z| k}{q_n^2} \] We will take advantage of one consequence of this: let us write the expansions \[f_{\alpha_n}^{q_n}(z) = z + C_n z^{q_n+1} + \cal O(z^{q_n+2})\] Then \begin{lemma}\label{lem:l1} Under the same assumptions, \[\limsup |C_n|^{1/q_n} \leq 1/r_\ast\] \end{lemma} \begin{proof} Since $\varphi_{\alpha_*}'(0)=1$, it follows that \[F_{\alpha_n}^{q_n}(z) = z + C_n z^{q_n+1} + \cal O(z^{q_n+2})\] for the same $C_n$ as for $f_{\alpha_n}$. Consider any $\rho<r_\ast$. For $n$ big enough, $F_{\alpha_n}^{q_n}$ is defined on $B(0,\rho)$. It has to take values in $B(0,r_\ast)$. By the Cauchy formula, \[ C_n = \frac{1}{i 2\pi} \int_{\partial B(0,\rho)} \frac{F_{\alpha_n}^{q_n}(z)}{z^{q_n+2}} dz \] consequently \[ |C_n| \leq \frac{r_\ast}{\rho^{1+q_n}} \] hence \[ |C_n|^{1/q_n} \leq \frac{1}{\rho}\left(\frac{r_\ast}{\rho}\right)^{1/q_n}\] so \[ \limsup |C_n|^{1/q_n}\leq \frac{1}{\rho} .\] Since this is valid for all $\rho<r_\ast$, the conclusion follows. \end{proof} Now, to prove \Cref{prop:maj}, we will need a form of uniformity of the above computations when the family depends on a supplementary parameter in Jellouli's lemma, so we will dig into its proof and pay attention to uniformity. Let $\psi_{c}$ be the linearizing parametrization of $P_{\theta,c}$ and $r(c)$ be its radius of convergence. Recall that $\psi_c$ is a holomorphic function defined on $B(0,r(c))$ whose image is the Siegel disk of $P_{e^{2\pi i \theta},c}$. Recall that I proved in my thesis \cite{thesis:Cheritat} that $r$ is a continuous function of $c$. Let \[f_{n,c}(z) = \psi_c^{-1} \circ P_{n,c} \circ \psi_c.\] This function is defined on some subset of $B(0,r(c))$. We will use \begin{theorem}\label{lem:d0} For all Schlicht functions $f$, \[\forall z \in\mathbb{D},\ d(f(z),\partial f(\mathbb{D}))\geq d_0(|z|) = \frac{1}{4}\left(\frac{1-|z|}{1+|z|}\right)^2\] where $d$ denotes the Euclidean distance. \end{theorem} \begin{proof} By corollary~1.4 in \cite{Po} we have $d(f(z),\partial f(\mathbb{D}))\geq \frac{1}{4}(1-|z|^2) |f'(z)|$ and by equation~(11) in theorem~1.6 in \cite{Po}, $|f'(z)|\geq \frac{1-|z|}{(1+|z|)^3}$, so $d(f(z),\partial f(\mathbb{D}))\geq \frac{1}{4}\left(\frac{1-|z|}{1+|z|}\right)^2$. \end{proof} In the case of our linearizing parametrization $\psi_\theta$, we can apply the above to $z\in\mathbb{D} \longmapsto r^{-1}\psi_\theta(r z)$ where $r$ is the conformal radius of $\Delta_\theta=\Delta(P_{\theta,c})$, which yields: \begin{equation}\label{eq:distB} d(\psi_\theta(z),\partial \Delta_\theta) \geq rd_0(r^{-1}|z|). 
\end{equation} \begin{theorem}\label{lem:d1} For all injective holomorphic $f:\mathbb{D}\to\mathbb{C}$, if $a\in f(\mathbb{D})$ and $b\in\mathbb{C}$ is such that $|b-a| \leq \frac{1}{2} d(a,\partial f(\mathbb{D}))$ then $b\in f(\mathbb{D})$ and \[|f^{-1}(b)-f^{-1}(a)| \leq 2 |b-a|/d(a,\partial f(\mathbb{D})).\] \end{theorem} \begin{proof} Let $a'=f^{-1}(a)$, $b' = f^{-1}(b)$ and $U=f(\mathbb{D})$. We use the Schwarz-Pick (hyperbolic) metric $\rho_U(w)|dw|$ on $U$. By classical estimates, $\rho_U(w) \leq \frac{1}{d(w,\partial U)} \leq \frac{2}{d(a,\partial U)}$ if $w\in B(a,\frac12 d(a,\partial U))$, so the straight segment from $a$ to $b$ has hyperbolic length $\leq 2 |b-a| / d(a,\partial U)$. It follows that the hyperbolic distance in $\mathbb{D}$ from $a'$ to $b'$ is $\leq 2 |b-a| /d(a,\partial U)$. The result then follows from the fact that on $\mathbb{D}$ the hyperbolic distance is greater than the Euclidean distance. \end{proof} In the case of our linearizing parametrization $\psi_\theta : r\mathbb{D} \to \Delta_\theta$, this gives \begin{equation}\label{eq:distB2} |\psi_\theta^{-1}(b) - \psi_\theta^{-1}(a)| \leq 2 r |b-a|/d(a,\partial \Delta_\theta) \end{equation} under the condition $|b-a| \leq \frac{1}{2} d(a,\partial \Delta_\theta)$. \begin{remark*} In the previous two theorems, we do not need the explicit bounds but only the existence of a bound, so a compactness argument could also have given a quick proof of them. \end{remark*} We have \begin{equation}\label{eq:e5} P_{n,c}-P_{\theta,c} = (e^{2\pi i \pqn}-e^{2\pi i \theta}) \left( 1 - \frac{(1 + \sfrac{1}{c})}{2} z + \frac{\sfrac{1}{c}}{3} z^2 \right) \end{equation} We will now prove inequalities for \[\fbox{$\displaystyle|c|\geq 1$}\] and deduce later inequalities for $|c|\leq 1$ using the symmetry of the family. \begin{lemma} There exists $M>0$ such that for every Brjuno number $\theta$ and every $c\in\mathbb{C}$ with $|c|\geq 1$, the Siegel disk of $P_{e^{2\pi i \theta},c}$ is contained in $B(0,M)$. \end{lemma} \begin{proof} By \Cref{lem:qlr}, if $|c|>833/45$ then the Siegel disk is contained in $B(0,10)$. Otherwise by \Cref{lem:cesc}, a trap in the basin of infinity is given by $|z|>R$ with $R = \max(6|c+1|,\sqrt{12|c|})$. If $1\leq|c|\leq R_1=833/45$, we have $R\leq R_2:=6(R_1+1)$, so $M=\max(10,R_2)$ works. \end{proof} It follows, using \cref{eq:e5} and $|c|\geq 1$, that on $\Delta(P_{\theta,c})$, \begin{equation}\label{eq:e4} |P_{n,c}-P_{\theta,c}| \leq M'\left|\pqn-\theta\right| \end{equation} with $M' = 2\pi \left( 1 + M + \frac{1}{3} M^2 \right)$, which is \emph{independent of $c$}. Let us denote $r=r_\theta(c)$. A point $z\in B(0,r)$ will be in the domain of $f_{n,c}$ iff $P_{n,c}(\psi_\theta(z))\in \Delta_\theta$. With $\theta$ in place of $n$, we have $P_{\theta,c}(\psi_\theta(z)) = \psi_\theta(R_\theta(z)) \in \Delta_\theta$. 
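The bound in Theorem~\ref{lem:d0} is sharp for the Koebe function $k(z)=z/(1-z)^2$, whose image is $\mathbb{C}\setminus(-\infty,-1/4]$. The following numerical sketch (our own illustration, specific to this one schlicht function and not part of the argument) samples the disk and checks that $d(k(z),\partial k(\mathbb{D}))\geq d_0(|z|)$, with near-equality on the negative real axis.
\begin{verbatim}
# Check d(k(z), boundary of k(D)) >= (1/4)*((1-|z|)/(1+|z|))**2 for the
# Koebe function k(z) = z/(1-z)**2, whose omitted set is the slit (-inf,-1/4].
import numpy as np

def koebe(z):
    return z / (1 - z) ** 2

def dist_to_slit(w):
    # Euclidean distance from w to the ray {x <= -1/4, y = 0}
    return abs(w.imag) if w.real <= -0.25 else abs(w + 0.25)

def d0(r):
    return 0.25 * ((1 - r) / (1 + r)) ** 2

rng = np.random.default_rng(0)
ratios = []
for _ in range(20000):
    r = rng.uniform(0.0, 0.999)
    z = r * np.exp(2j * np.pi * rng.uniform())
    ratios.append(dist_to_slit(koebe(z)) / d0(r))
print(min(ratios))   # >= 1; values close to 1 occur near z = -r
\end{verbatim}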
By \cref{eq:e4}, \[|P_{n,c}(\psi_\theta(z))-P_{\theta,c}(\psi_\theta(z))| \leq M'\left|\pqn-\theta\right|\] and by \cref{eq:distB}, \[d(\psi_\theta(R_\theta(z)) , \partial\mathbb{D}elta_\theta) \geq r d_0(r^{-1}|z|).\] For $f_{n,c}(z)$ to be defined, having $|P_{n,c}(\psi_\theta(z)) - \psi_\theta(R_\theta(z)) |\leq d(\psi_\theta(R_\theta(z)) , \partial\mathbb{D}elta_\theta) $ will be enough, hence it is enough that \[M'\left|\pqn-\theta\right| \leq r d_0(r^{-1}|z|).\] In particular, the domain of $f_{n,c}$ contains $B(0,(1-\varepsilon) r)$ as soon as \[M'\left|\pqn-\theta\right| \leq r d_0(1-\varepsilon).\] Consider any $z\in B(0,r)$. By applying \cref{eq:distB2} to $a = P_{\theta,c}(\psi_\theta(z)) = \psi_\theta(R_\theta(z))$ and $b=P_{n,c}(\psi_\theta(z))$ we get that if \[M'|\pqn-\theta| \leq \frac{1}{2} r d_0(r^{-1}|z|)\] then $M'|\pqn-\theta| \leq \frac{1}{2}d(a,\partial \mathbb{D}elta_\theta)$ and hence we can apply \cref{eq:distB2}: \mathbf{b}egin{align*} |f_{n,c}(z)-R_\theta(z)| & = |\psi_\theta^{-1}(b)-\psi_\theta^{-1}(a)| \\ & \leq 2 r |b-a| / d(a,\partial\mathbb{D}elta_\theta) \leq 2M'|\pqn-\theta|/d_0(r^{-1}|z|). \end{align*} To sum up: \mathbf{b}egin{corollary} For $|c|\geq 1$ and $0<\varepsilon<1$, for all $n$ such that \mathbf{b}egin{equation}\label{eq:cond} \frac{M'|\pqn-\theta|}{d_0(1-\varepsilon)} \leq \frac{1}{2} r_\theta(c) \end{equation} then $D := B(0,(1-\varepsilon) r_\theta(c)) \subset \operatorname{dom}f_{n,c}(z)$ and $\forall z\in D$ \mathbf{b}egin{equation} |f_{n,c}(z)-R_\theta(z)| \leq 2\frac{M'|\pqn-\theta|}{d_0(1-\varepsilon)}. \end{equation} \end{corollary} Recall that by the theory of continued fractions, \[\forall n\in\mathbb{N},\ |\pqn-\theta| \leq \frac{1}{q_n^2}.\] Consider now the condition \mathbf{b}egin{equation}\label{eq:cond2} \frac{M'/q_n}{d_0(1-\varepsilon)} \leq \frac{\varepsilon}{2}r_\theta(c). \end{equation} Note that $r_\theta(c)$ has a lower bound on $|c|\geq 1$, since it is continuous w.r.t.\ $c$ and has a limit when $c\longrightarrow\infty$ by \cref{eq:limr}. It follows that for a fixed $\varepsilon$, as soon as $n$ is big enough the condition above will be satisfied for all $c\in\mathbb{C}$ with $|c|\geq 1$. Let us still denote $r=r_\theta(c)$. Now if \cref{eq:cond2} is satisfied then a fortiori \cref{eq:cond} is satisfied. Assume now that $0<\varepsilon<1/2$. Then for $n$ big enough as above, we can then prove by induction on $k$ with $0\leq k \leq q_n$ that for all $z\in B(0,(1-2\varepsilon)r)$ (note the factor $2$ in front of $\varepsilon$), $f_{n,c}^k(z)$ is defined, \[ |f_{n,c}^k(z)-R_\theta^k(z)| \leq 2 \frac{M'k/q_n^2}{d_0(1-\varepsilon)}. \] and $f_{n,c}^k(z)\in B(0,(1-\varepsilon)r)$. By a similar computation as before, it follows that \[|C_n(c)| \leq \frac{r}{((1-2\varepsilon)r)^{1+q_n}} = \frac{1}{r^{q_n}} \cdot \frac{1}{(1-2\varepsilon)^{1+q_n}}\] Recall that $r_n(c) = \frac{1}{|C_n(c)|^{1/q_n}}$, hence \[-\log r_n(c) = \frac{\log|C_n(c)|}{q_n}\] so \[ -\log r_n(c) \leq - \log r_\theta(c) + \frac{1+q_n}{q_n} \log\frac{1}{1-2\varepsilon}.\] Now \[u_n(c)-u_\theta(c) = -\log r_n(c) + \log r_\theta(c) +\log L(Q_n) - \log r(Q_\theta),\] hence \[u_n(c)-u_\theta(c) \leq \frac{1+q_n}{q_n} \log\frac{1}{1-2\varepsilon} + \log L(Q_n) - \log r(Q_\theta)\] Now we use the following theorem from \cite{thesis:Cheritat}. 
\begin{theorem}[Chéritat] \[L(Q_n) \longrightarrow r(Q_\theta)\] \end{theorem} It follows that \[\limsup_{n\to\infty} \sup_{|c|\geq 1} (u_n(c)-u_\theta(c)) \leq \log\frac{1}{1-2\varepsilon}\] Since this is true for all $\varepsilon$: \[\limsup_{n\to\infty} \sup_{|c|\geq 1} (u_n(c)-u_\theta(c)) \leq 0.\] The case \[\fbox{$\displaystyle 0<|c|\leq 1$}\] immediately follows from \[u_n(1/c)-u_\theta(1/c) = u_n(c)-u_\theta(c)\] which is a consequence of \cref{eq:symu}. This ends the proof of \Cref{prop:maj}. \bibliographystyle{alpha} \bibliography{biblio} \end{document}
\begin{document} \title{Permutation-invariant codes encoding more than one qubit} \author{Yingkai \surname{Ouyang} } \affiliation{Singapore University of Technology and Design, 8 Somapah Road, Singapore} \email{yingkai\[email protected]} \author{Joseph \surname{Fitzsimons} } \affiliation{Singapore University of Technology and Design, 8 Somapah Road, Singapore} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore} \begin{abstract} A permutation-invariant code on $m$ qubits is a subspace of the symmetric subspace of the $m$ qubits. We derive permutation-invariant codes that can encode an increasing amount of quantum information while suppressing leading order spontaneous decay errors. To prove the result, we use elementary number theory with prior theory on permutation invariant codes and quantum error correction. \end{abstract} \maketitle The promise offered by the fields of quantum cryptography \cite{BB84,Eke91} and quantum computation \cite{nielsen-chuang} has fueled recent interest in quantum technologies. To implement such technologies, one needs a way to reliably transmit quantum information, which is inherently fragile and often decoheres because of unwanted physical interactions. If a decoherence-free subspace (DFS) \cite{ZaR97} of such interactions were to exist, encoding within it would guarantee the integrity of the quantum information. Indeed, in the case of the spurious exchange couplings \cite{Blundell}, the corresponding DFS is just the symmetric subspace of the underlying qubits. In practice, only approximate DFSs are accessible because of small unpredictable perturbations to the dominant physical interaction \cite{LBW99}, and using approximate DFSs necessitate a small amount of error correction. When the approximate DFS is the symmetric subspace, permutation-invariant codes can be used to negate the aforementioned errors \cite{Rus00,PoR04,ouyang2014permutation}. However, as far as we know, all previous permutation-invariant codes encode only one logical qubit \cite{Rus00,PoR04,ouyang2014permutation}. One may then wonder if there exist permutation-invariant codes that can encode strictly more quantum information than a single qubit whilst retaining some capability to be error-corrected. The first example of a permutation-invariant code which encodes one qubit into 9-qubits while being able to correct any single qubit error was given by Ruskai over a decade ago \cite{Rus00}. A few years later, Ruskai and Pollatshek found 7-qubit permutation invariant codes encoding a single qubit which correct arbitrary single qubit errors \cite{PoR04}. Recently permutation-invariant codes encoding a single qubit into $(2t+1)^2$ qubits that correct arbitrary $t$-qubit errors has been found \cite{ouyang2014permutation}. Here, we extend the theory of permutation-invariant codes. Our permutation-invariant code $\mathcal C$ has as its basis vectors the logical 1 of $D$ distinct permutation invariant codes given by \cite{ouyang2014permutation}, where each such code encodes only a single qubit. Surprisingly, this simple construction can yield a permutation-invariant code encoding more than a single qubit while correcting spontaneous decay errors to leading order. 
Permutation-invariant codes are particularly useful in correcting errors induced by {\em quantum permutation channels with spontaneous decay errors}, with Kraus decomposition $\mathcal N(\rho) = \mathcal A ( \mathcal P ( \rho) ) = \sum_{\alpha, \beta} A_\beta P_\alpha \rho P_\alpha ^\dagger A_\beta ^\dagger$, where $\mathcal P$ and $\mathcal A$ are quantum channels satisfying the completeness relation $\sum_{\alpha} P_\alpha ^\dagger P_\alpha = \sum_{\beta} A_\beta ^\dagger A_\beta = \mathbb 1 $ and $\mathbb 1$ is the identity operator on $m$ qubits. The channel $\mathcal P$ has each of its Kraus operators $P_\alpha$ proportional to $e^{i \theta_\alpha \hat a_\alpha}$, where $\theta_\alpha$ is the infinitesimal parameter and the infinitesimal generator $\hat a_\alpha$ is any linear combination of exchange operators. By a judicious choice of $\theta_\alpha$ and $\hat a_\alpha$, the channel $\mathcal P$ can model the stochastic reordering and coherent exchange of quantum packets as well as out-of-order delivery of classical packets \cite{Pax97}. The channel $\mathcal A$, on the other hand, models spontaneous decay errors, also known as amplitude damping errors, where the excited state of each qubit independently relaxes to the ground state with probability $\gamma$. Our permutation-invariant code is inherently robust against the effects of the channel $\mathcal P$, can suppress all errors of order $\gamma$ introduced by the channel $\mathcal A$, and is hence approximately robust against the composite noisy permutation channel $\mathcal N$. We quantify the error correction capabilities of our permutation-invariant code $\mathcal C$ with code projector $\Pi$, beginning from the approximate quantum error correction criterion of Leung {\em et\ al.} \cite{LNCY97}. Since the Kraus operators $P_\alpha$ of the permutation channel leave the codespace of any permutation-invariant code unchanged, it suffices to consider the effects of the amplitude damping channel $\mathcal A$. The optimal entanglement fidelity between an adversarially chosen state $\rho$ in the permutation-invariant codespace and its error-corrected noisy counterpart is just \begin{align} 1 -\epsilon = \sup_{\mathcal R} \inf_{\rho} \mathcal F_e (\rho, \mathcal R \circ \mathcal A), \label{eq:eps-def} \end{align} where $\epsilon$ is the {\em worst case error} \cite{ouyang2014permutation} that we need to suppress. Lower bounds for the above quantity can be found using various techniques from the theory of optimal recovery channels \cite{BaK02,Fletcher08,Yam09,Tys10,BeO10,BeO11,ouyang2014permutation}, but we restrict our attention to the simpler (but suboptimal) approach of \cite{LNCY97,ouyang2014permutation}. Suppose that we can find a truncated Kraus set $\Omega$ \cite{ouyang2013truncated} of the channel $\mathcal A$ such that for every distinct pair $A,B \in \Omega$, the spaces $A \mathcal C$ and $B \mathcal C$ are orthogonal. Then the truncated recovery map of Leung {\em et\ al.}, $\mathcal R_{\Omega,\mathcal C} (\mu):= \sum_{A \in \Omega} \Pi U_A ^\dagger \mu U_A \Pi $, is a valid quantum operation, where $U_A$ is the unitary in the polar decomposition of $A \Pi = U_A \sqrt{ \Pi A ^\dagger A \Pi }$. 
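The recovery map above is completely mechanical to construct once $\Pi$ and $\Omega$ are known. The following sketch (a generic helper with a toy, hypothetical code and Kraus operator of our own choosing, not the construction analysed later in the text) builds the recovery Kraus operators $\Pi U_A^\dagger$ from the polar decomposition $A\Pi=U_A\sqrt{\Pi A^\dagger A\Pi}$.
\begin{verbatim}
# Assemble the Leung et al. truncated recovery map
#   R(mu) = sum_{A in Omega} Pi U_A^dag mu U_A Pi
# from a code projector Pi and a truncated Kraus set Omega.
import numpy as np
from scipy.linalg import polar

def recovery_kraus(Pi, kraus_set):
    """Kraus operators  R_A = Pi @ U_A^dagger  of the recovery map."""
    ops = []
    for A in kraus_set:
        U_A, _ = polar(A @ Pi)          # A @ Pi = U_A @ sqrt(Pi A^dag A Pi)
        ops.append(Pi @ U_A.conj().T)
    return ops

def apply_channel(kraus_ops, rho):
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Toy usage: a hypothetical 2-dimensional code inside a 4-dimensional space,
# corrupted by a single diagonal damping-like Kraus operator.
Pi = np.zeros((4, 4)); Pi[0, 0] = Pi[3, 3] = 1.0
gamma = 0.1
A0 = np.diag([1.0, np.sqrt(1 - gamma), np.sqrt(1 - gamma), 1 - gamma])
rho = Pi / 2                            # a code state
recovered = apply_channel(recovery_kraus(Pi, [A0]), A0 @ rho @ A0.conj().T)
print(np.trace(recovered).real)         # weight recovered from this branch
\end{verbatim}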
Since $\mathcal R_{\Omega,\mathcal C} $ is now a special instance of a recovery channel in Eq.~(\ref{eq:eps-def}), we trivially get $\epsilon \le 1- \inf_{\rho} \mathcal F_e (\rho, \mathcal R_{\Omega,\mathcal C} \circ \mathcal A).$ As explained in \cite{ouyang2014permutation}, the analysis of Leung {\em et\ al.} \cite{LNCY97} allows one to show that \begin{align} \mathcal F_e (\rho, \mathcal R_{\Omega,\mathcal C} \circ \mathcal A) \ge \sum_{A \in \Omega}\lambda_A, \end{align} where $\lambda_A= \min_{\substack{|\psi\rangle \in \mathcal C \\ \langle\psi|\psi\rangle = 1 }} \langle\psi | A ^\dagger A |\psi\rangle$ quantifies the worst case deformation of each corrupted codespace $A \mathcal C $. The symmetric subspace of $m$ qubits is central to the study of permutation-invariant codes, and has a convenient choice of basis vectors, namely the {\em Dicke states} \cite{BGu13,MHT12,TGG09,ouyang2014permutation}. A Dicke state of weight $w$, denoted as $|{\rm D}^m_w\rangle$, is a normalized permutation-invariant state on $m$ qubits with a single excitation on $w$ qubits. Our code $\mathcal C$ is the span of the logical states $|d_L\rangle$ for $d = 1,\dots, D$, and these states can be written as superposition over Dicke states, with amplitudes proportional to the square root of the binomial distribution. Namely for positive integers $n_d$ and $g_d$, \begin{align} |d_L\rangle = \sum_{j \in \mathcal I_d } \sqrt{ \frac{\bi{ n_d}{j}}{2^{n_d-1}}} | {\rm D}_{g_d j}^m\rangle \label{eq:new-pi-states} \end{align} and the set $\mathcal I_d$ comprises of the odd integers from 1 to $2 \floor{\frac{n_d-1}{2}}+1$.. The states $|d_L\rangle, A |d_L\rangle$ can be made to be pairwise orthogonal via a judicious choice of constraints on the positive integer parameters $n_1, \dots, n_D$, $g_1, \dots g_D$ and $m$. We elucidate the case for $D \ge 3$ since permutation invariant codes encoding only one qubit \cite{ouyang2014permutation} are already known. Here, we require $n_1, \dots, n_D$ to be pairwise coprime integers with $n_1 \le \dots \le n_D$, and define their product to be $N = n_1 \dots n_D$. The length of our code is a polynomial in $N$, given by $m = N^q$ for any integer $q \ge 3$. Moreover we set $g_d = N/n_d$ so that for distinct $d$ and $d'$, the greatest common divisor of $g_d$ and $g_{d'}$ is precisely gcd$(g_d, g_{d'}) = N/(n_d n_{d'}) > 1$, so that $g_d$ and $g_{d'}$ are not coprime. Furthermore, we require that $g_d \ge 3 $, $n_d \ge 4$. The reason for requiring $g_d$ and $g_{d'}$ to not be coprime is that it allows the inner products $\langle d_L | d'_L\rangle$ and $\langle d_L | A^\dagger B |d'_L\rangle$ to be identically zero for distinct $d$ and $d'$ and for any operators $A,B$ acting nontrivially on strictly less than $\frac{\min_d g_d}{2}$ qubits when $N$ is even. To see this, we analyze the linear Diophantine equation \begin{align} x_{d,d'} g_d = y_{d,d'} g_{d'} + s \label{eq:linear-Diophantine}, \end{align} with $s=0,\pm 1 $. This linear Diophantine equation has a solution $(x_{d,d'},y_{d,d'})$ if and only if $s$ is a multiple of gcd($g_d,g_{d'}$). Having gcd($g_d,g_{d'})> 1$ ensures that Eq.~(\ref{eq:linear-Diophantine}) has no solution for non-zero $s$ such that $|s| < {\rm gcd}(g_d,g_{d'})$. When $s=0$, integer solutions $(x_{d,d'},y_{d,d'})$ where $0 < x_{d,d'} g_d = y_{d,d'} g_{d'} < N$ do not exist. 
To see this, note that the minimum positive solutions of Eq.~(\ref{eq:linear-Diophantine}) are precisely $x_{d,d'} = \frac{g_{d'}}{{\rm gcd}(g_d,g_{d'})}$ and $y_{d,d'} = \frac{g_d}{{\rm gcd}(g_d,g_{d'})}$, and hence we must require that $\frac{g_d g_{d'}}{{\rm gcd}(g_d,g_{d'})} < N$ be an invalid inequality. But our construction gives $\frac{g_d g_{d'}}{{\rm gcd}(g_d,g_{d'})} = \frac{g_d g_{d'} n_d n_{d'}}{N} = N$. This immediately implies several orthogonality conditions on the states given by Eq.~(\ref{eq:new-pi-states}) for large $n_1$. We use a sequence of large consecutive primes and an even number to construct our sequence of coprimes. We let $n_1 = p_k$, where $p_k$ denotes the $k$-th prime, and let $n_2 = n_1+1$. We also let $n_j = p_{k+j-2}$ for all $j = 3, \dots, D$, which gives us our $D$ coprime integers. The length of our code is $m =((p_k+1)(p_k \dots p_{k+D-2} ))^q$. In the special case when $D=3$, we can use the existence of twin primes $n_1$ and $n_3$ a bounded distance apart \cite{zhang2014bounded} (at most 600 apart \cite{maynard2013small}), and let $n_2 = n_1 + 1$, which yields $m = (n_1 n_3(n_1 + 1))^q$. The oft used Kraus operators for an amplitude damping channel on a single qubit are $A_0 = |0\rangle\langle0| + \sqrt{1 - \gamma} |1\rangle\langle1| $ and $A_1 = \sqrt{\gamma} |0\rangle\langle1| $ respectively, with $\gamma$ modeling the probability for a transition from the excited $|1\rangle$ state to the ground state $|0\rangle$. On $m$ qubits, the Kraus operators of the amplitude damping channel have a tensor product structure, given by $A_{x_1} \otimes \dots \otimes A_{x_m}$ where $x_1, \dots,x_m = 0,1$. We focus our attention on the Kraus operators $K_0 = A_0 ^{\otimes m}$, and $F_j$ which applies $A_1$ on the $j$-th qubit and applies $A_0$ everywhere else for $j =1,\dots, m$. The choice of Kraus operators for a quantum channel is not unique, and we can equivalently consider a subset of the Kraus operators in a Fourier basis. Namely, for $\ell =1, \dots, m$, we define $K_\ell = \frac{1}{\sqrt{m}} \sum_{j=1}^m \omega^{(\ell-1) (j-1) } F_j,$ where $\omega = e^{2\pi i /m }$. We choose the set of Kraus operators that we wish to correct to be $\Omega =\{ K_0 , K_1, \dots, K_m \}$. Now the spaces $A \mathcal C$ and $B \mathcal C$ are orthogonal for distinct $A,B \in \Omega$. Note that for $\ell, \ell' = 1,\dots, m$, \begin{align} &\langled_L | K_\ell ^\dagger K_{\ell'} |d_L\rangle\notag\\ =& \frac{1 }{m}\sum_{j=1}^m \sum_{j'=1}^m \omega^{ -(\ell-1)(j-1) + (\ell' - 1)(j'-1) } \langled_L | F_j ^\dagger F_{j'} |d_L\rangle \notag\\ =& \sum_{j=1}^m \omega^{(\ell'-\ell)(j-1)} \langled_L | F_j ^\dagger F_{j} |d_L\rangle \notag\\ +& \frac{1}{m} \sum_{d=1}^{m-1} \sum_{j=1}^m \omega^{-(\ell - 1)(j-1) + (\ell'-1)(j-1+d)} \langled_L | F_j ^\dagger F_{j+d} |d_L\rangle, \label{eq:Kraus-fourier} \end{align} where the addition in the subscript is performed modulo $m$. 
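To make the choice of coprime parameters described above concrete, the following sketch uses the hypothetical smallest admissible choice $n=(5,6,7)$ for $D=3$ (our own example, not one given in the text): it computes $N$ and $g_d=N/n_d$, checks that $\gcd(g_d,g_{d'})=N/(n_d n_{d'})>1$, and verifies by brute force that Eq.~(\ref{eq:linear-Diophantine}) has no solution for $s=\pm1$, nor for $s=0$ with $0<x_{d,d'}g_d=y_{d,d'}g_{d'}<N$.
\begin{verbatim}
# Parameter bookkeeping for a D = 3 instance: n = (5, 6, 7), N = n1*n2*n3,
# g_d = N/n_d, plus brute-force checks of the Diophantine conditions.
from math import gcd
from itertools import combinations

n = (5, 6, 7)                     # pairwise coprime, each n_d >= 4
N = n[0] * n[1] * n[2]
g = tuple(N // nd for nd in n)    # (42, 35, 30), each g_d >= 3
print("N =", N, " g =", g)

for d, dp in combinations(range(3), 2):
    gd, gdp = g[d], g[dp]
    assert gcd(gd, gdp) == N // (n[d] * n[dp]) > 1
    for s in (1, -1):             # gcd(gd, gdp) does not divide s = +/-1
        assert all(x * gd != y * gdp + s
                   for x in range(1, N) for y in range(1, N))
    # s = 0: the smallest common positive multiple of gd and gdp is already N
    assert all(not (0 < x * gd < N and x * gd % gdp == 0)
               for x in range(1, N // gd + 1))
print("all Diophantine checks passed")
\end{verbatim}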
Using the invariance of $\langled_L | F_j ^\dagger F_{j} |d_L\rangle$ and $\langled_L | F_j ^\dagger F_{j'} |d_L\rangle$ for distinct $j,j' = 1,\dots, m$ along with the identity \begin{align} \sum_{d=1}^{m-1} \sum_{j=1}^m \omega^{-(\ell - 1)(j-1) + (\ell'-1)(j-1+d)} &= (m \delta_{\ell', 1} - 1) m \delta_{\ell,\ell'} \notag, \end{align} one can simplify (\ref{eq:Kraus-fourier}) to get \begin{align} &\langled_L | K_\ell ^\dagger K_{\ell'} |d_L\rangle\notag\\ =&\delta_{\ell, \ell'} \left( \langled_L | F_1 ^\dagger F_1 |d_L\rangle + (m \delta_{\ell,1} -1) \langled_L | F_1 ^\dagger F_m |d_L\rangle \right), \label{eq:diagonalized-Krauses} \end{align} which completes the proof of the orthogonality of $A \mathcal C$ and $B \mathcal C$ for distinct $A,B \in \Omega$. Now we have \begin{align} \langled_L| K_0 ^\dagger K_0 |d_L \rangle &= \sum_{ t \in \mathcal I_d} \frac{ \bi{n_d}{t} } {2^{n_d-1}} (1-\gamma) ^{g_d t} \notag\\ \langled_L| F_1 ^\dagger F_1 |d_L \rangle &= \gamma\sum_{ t \in \mathcal I_d} \frac{ \bi{n_d}{t} } {2^{n_d-1}} (1-\gamma) ^{g_d t-1} \frac{g_d t}{m} \notag\\ \langled_L| F_1 ^\dagger F_m |d_L \rangle &= \gamma\sum_{ t \in \mathcal I_d} \frac{ \bi{n_d}{t} } {2^{n_d-1}} (1-\gamma) ^{g_d t-1} \frac{g_d t(m-g_dt)}{m(m-1)} . \end{align} Using the Taylor series $(1-\gamma)^{g_d t} = 1 - g_d t \gamma + \frac{g_d t(g_dt-1)}{2} \gamma^2 + O(\gamma^3)$ and $(1-\gamma)^{g_d t-1} = 1 - (g_d t-1) \gamma + O(\gamma^2)$ with the binomial identities $\sum_{t = 0}^{n_d} t\bi{n_d} {t} = 2^{n_d-1} n_d$, $ \sum_{t = 0}^{n_d} t^2 \bi{n_d} {t} = 2^{n_d-2} n_d ( n_d+1)$ and $ \sum_{t = 0}^{n_d} t^3 \bi{n_d} {t} = 2^{n_d-3} n_d^2 ( n_d+3)$ \cite{ouyang2014permutation,PBM86}, we get \begin{align} \langled_L| K_0 ^\dagger K_0 |d_L \rangle &= 1- \frac{N }{2} \gamma \notag\\ &\quad + \left( \frac{N^2 +N g_d }{8} - \frac{N}{4} \right) \gamma^2 + O(\gamma^3) \notag\\ \langled_L| F_1 ^\dagger F_1 |d_L \rangle &= \frac{N }{2m} \gamma - \left( \frac{N^2 +N g_d}{4m} - \frac{N}{2m} \right) \gamma^2 \notag\\ &\quad + O(\gamma^3) \notag\\ \langled_L| F_1 ^\dagger F_m |d_L \rangle &= \frac{\left( \frac{N }{2} - \frac{N^2 + N g_d}{4m} \right) }{m-1}\gamma \notag\\ &\quad + \frac{N^3 +3N^2 g_d}{8m(m-1)} \gamma^2 \notag\\ & \quad - \frac{ (N^2 +N g_d)\left( 1+ \frac{1}{m} \right)-2N}{4(m-1)} \gamma^2 \notag\\ & \quad + O(\gamma^3). \end{align} Now for all $|\psi\rangle \in \mathcal C$ where $\langle\psi|\psi\rangle = 1$, we can write $|\psi\rangle = \sum_{d=1 }^D a_d |d_L\rangle$ such that $\sum_{d =1 }^D |a_d|^2 = 1 + O(2^{-n_1}) $ \footnote{The term $O(2^{-n_1}) $ arises because of the slight non-orthogonality of the states $|d_L\rangle$.}. Hence for all $A \in \Omega$, $\langle\psi | A ^\dagger A | \psi\rangle = \sum_{d = 1 }^D |a_d|^2 \langle d_L | A ^\dagger A | d_L\rangle$ which implies that $\lambda_A \ge \min_{d=1, \dots, D} \langle d_L | A ^\dagger A | d_L \rangle { (1+O(2^{-n_1}) )}$. This implies that \begin{align} 1- \epsilon &\ge 1 - \frac{ N g_1 }{ 4m } \gamma - \frac{c N^2}{8} \gamma^2 + O(\gamma^3) { + O(2^{-n_1}) }, \end{align} where \begin{align} c = 1 + \frac{2g_D-g_1}{N} - \frac{2}{N} + \frac{ 3g_1 }{m} + \frac{4 g_1}{ N} . \end{align} Since $m = N^q$, $1-\epsilon \ge 1 - \frac{1}{4N^{q-2}} \gamma - \frac{c N^2}{8} \gamma^2 + O(\gamma^3) + O(2^{-n_1}) $ and for fixed $N$ and large $q$, the asymptotic error is second order in $\gamma$ with $\epsilon \sim \frac{ c' N^2}{8} \gamma^2 + O(\gamma^3) { + O(2^{-n_1}) }$, where $c' = 1 + \frac{2g_D-g_1}{N} - \frac{2}{N} + \frac{4 g_1}{ N}$. 
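Two ingredients of the computation above, the geometric-sum identity used to obtain Eq.~(\ref{eq:diagonalized-Krauses}) and the three binomial moment identities used in the Taylor expansions, are easy to confirm numerically. A minimal sketch (with small, arbitrary values of $m$ and $n_d$ chosen only for illustration):
\begin{verbatim}
# (i)  sum_{d=1}^{m-1} sum_{j=1}^{m} w^{-(l-1)(j-1)+(l'-1)(j-1+d)}
#        = (m*delta_{l',1} - 1) * m * delta_{l,l'},   w = exp(2*pi*i/m)
# (ii) sum_t t*C(n,t)   = 2^(n-1)*n
#      sum_t t^2*C(n,t) = 2^(n-2)*n*(n+1)
#      sum_t t^3*C(n,t) = 2^(n-3)*n^2*(n+3)
import numpy as np
from math import comb

m = 7
w = np.exp(2j * np.pi / m)
for l in range(1, m + 1):
    for lp in range(1, m + 1):
        s = sum(w ** (-(l - 1) * (j - 1) + (lp - 1) * (j - 1 + d))
                for d in range(1, m) for j in range(1, m + 1))
        assert abs(s - (m * (lp == 1) - 1) * m * (l == lp)) < 1e-8

nd = 11
assert sum(t * comb(nd, t) for t in range(nd + 1)) == 2 ** (nd - 1) * nd
assert sum(t**2 * comb(nd, t) for t in range(nd + 1)) == 2 ** (nd - 2) * nd * (nd + 1)
assert sum(t**3 * comb(nd, t) for t in range(nd + 1)) == 2 ** (nd - 3) * nd**2 * (nd + 3)
print("identities verified")
\end{verbatim}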
In summary, we have generalized the construction of permutation-invariant codes to enable the encoding of multiple qubits while suppressing leading-order spontaneous decay errors. These permutation-invariant codes might allow for the construction of new schemes in physical systems, such as improved quantum communication along isotropic Heisenberg spin-chains \cite{BuB05PRA,BGB05,BuB05NJP,SJBB07}. The symmetry of error-correcting codes has also recently been exploited to symmetrise prover strategies in the context of interactive proofs \cite{FV,Ji}, and so the extremely high symmetry of the codes studied here may also have theoretical implications. This research was supported by the Singapore National Research Foundation under NRF Award No. NRF-NRFF2013-01. Y. Ouyang also acknowledges support from the Ministry of Education, Singapore. \end{document}
\begin{document} \begin{titlepage} \begin{center} January 6, 1996 LBL-38129 \\ \vskip .15in {\large \bf Pole-Factorization Theorem in Quantum Electrodynamics} \vskip .1in Henry P. Stapp \\ {\em Lawrence Berkeley Laboratory\\ University of California\\ Berkeley, California 94720} \end{center} \vskip .05in \begin{abstract} In quantum electrodynamics a classical part of the S-matrix is normally factored out in order to obtain a quantum remainder that can be treated perturbatively without the occurrence of infrared divergences. However, this separation, as usually performed, introduces spurious large-distance effects that produce an apparent breakdown of the important correspondence between stable particles and poles of the S-matrix, and, consequently, lead to apparent violations of the correspondence principle and to incorrect results for computations in the mesoscopic domain lying between the atomic and classical regimes. An improved computational technique is described that allows valid results to be obtained in this domain, and that leads, for the quantum remainder, in the cases studied, to a physical-region singularity structure that, as regards the most singular parts, is the same as the normal physical-region analytic structure in theories in which all particles have non-zero mass. The key innovations are to define the classical part in coordinate space, rather than in momentum space, and to define there a separation of the photon-electron coupling into its classical and quantum parts that has the following properties: 1) The contributions from the terms containing only classical couplings can be summed to all orders to give a unitary operator that generates the coherent state that corresponds to the appropriate classical process, and 2) The quantum remainder can be rigorously shown to exhibit, as regards its most singular parts, the normal analytic structure. \end{abstract} \vskip .2in To appear in Annales de L'Institut Henri Poincare, 1996: Proceedings of Conference ``New Problems in the general theory of fields and particles''. \end{titlepage} \noindent{\bf 1. Introduction} The pole-factorization property is the analog in quantum theory of the classical concept of the stable physical particle. This property has been confirmed in a variety of rigorous contexts$^{1,2,3}$ for theories in which the vacuum is the only state of zero mass. But calculations$^{4,5,6}$ have indicated that the property fails in quantum electrodynamics, due to complications associated with infrared divergences. Specifically, the singularity associated with the propagation of a physical electron has been computed to be not a pole. Yet if the mass of the physical electron were $m$ and the dominant singularity of a scattering function at $p^2=m^2$ were not a pole then physical electrons would, according to theory, not propagate over laboratory distances like stable particles, contrary to the empirical evidence. This apparent difficulty with quantum electrodynamics has been extensively studied$^{7,8,9}$, but not fully clarified. It is shown here, at least in the context of a special case that is treated in detail, that the apparent failure in quantum electrodynamics of the classical-type spacetime behaviour of electrons and positrons in the macroscopic regime is due to approximations introduced to cope with infrared divergences. Those divergences are treated by factoring out a certain classical part, before treating the remaining part perturbatively. 
It can be shown, at least within the context of the case examined in detail, that if an accurate classical part of the photonic field is factored out then the required correspondence-principle and pole-factorization properties do hold. The apparent failure of these latter two properties in references $4$ through $7$ are artifacts of approximations that are not justified in the context of the calculation of macroscopic spacetime properties: some factors $\exp ikx$ are replaced by substitutes that introduce large errors for small $k$ but very large $x$. The need to treat the factor $\exp ikx$ approximately arises from the fact that the calculations are normally carried out in momentum space, where no variable $x$ occurs. The present approach is based on going to a mixed representation in which both $x$ and $k$ appear. This is possible because the variable $k$ refers to photonic degrees of freedom whereas the variable $x$ refers to electronic degrees of freedom. To have a mathematically well defined starting point we begin with processes that have no charged particles in the initial or final states: the passage to processes where charged particles are present initially or finally is to be achieved by exploiting the pole-factorization property that can be proved in the simpler case considered first. To make everything explicit we consider the case where a single charged particle runs around a spacetime closed loop: in the Feynman coordinate-space picture the loop passes through three spacetime points, $x_1, x_2,$ and $x_3$, associated with, for example, an interaction with a set of three localized external disturbances. Eventually there will be an integration over these variables. The three regions are to be far apart, and situated so that a triangular electron/positron path connecting them is physically possible. To make the connection to momentum space, and to the pole-factorization theorem and correspondence principle, we must study the asymptotic behaviour of the amplitude as the three regions are moved apart. Our procedure is based on the separation defined in reference 11 of the electromagnetic interaction operator into its ``classical'' and ``quantum'' parts. This separation is made in the following way. Suppose we first make a conventional energy-momentum-space separation of the (real and virtual photons) into ``hard'' and ``soft'' photons, with hard and soft photons connected at ``hard'' and ``soft'' vertices, respectively. The soft photons can have small energies and momenta on the scale of the electron mass, but we shall not drop any ``small'' terms. Suppose a charged-particle line runs from a hard vertex $x^-$ to a hard vertex $x^+$. Let soft photon $j$ be coupled into this line at point $x_j$, and let the coordinate variable $x_j$ be converted by Fourier transformation to the associated momentum variable $k_j$. Then the interaction operator $-ie\gamma_{\mu_j}$ is separated into its ``classical'' and ``quantum'' parts by means of the formula $$ -ie \gamma_{\mu_j}= C_{\mu_j} + Q_{\mu_j}, \eqno(1.1) $$ where $$ C_{\mu_j} = -ie{z_{\mu_j}\over z\cdot k_{j}} \slashash{k}_j, \eqno(1.2) $$ and $z=x^+ - x^-$. This separation of the interaction allows a corresponding separation of soft photons into ``classical'' and ``quantum'' photons: a ``quantum'' photon has a quantum coupling on at least one end; all other photons are called ``classical'' photons. The full contribution from all classical photons is represented in an extremely neat and useful way. 
Specialized to our case of a single charged-particle loop $L(x_1, x_2, x_3)$, the key formula reads $$ F_{op}(L(x_1,x_2, x_3))=:U(L(x_1,x_2,x_3)) F'_{op} (L(x_1,x_2,x_3)):.\eqno(1.3) $$ Here $F_{op} (L(x_1, x_2, x_3))$ is the Feynman {\it operator} corresponding to the sum of contributions from {\it all} photons coupled into the charged-particle loop $L(x_1, x_2, x_3)$, and $F'_{op}(L(x_1, x_2, x_3))$ is the analogous operator if all contributions from classical photons are excluded. The operators $F_{op}$ and $F'_{op}$ are both normal-ordered operators: i.e., they are operators in the asymptotic-photon Hilbert space, and the destruction operators of the incoming photons stand to the right of the creation operators of outgoing photons. On the right-hand side of $(1.3)$ all of the contributions corresponding to classical photons are included in the unitary-operator factor $U(L)$ defined as follows: $$ U(L) = e^{<a^*\cdot J(L)>} e^{-{1\over 2} <J^*(L)\cdot J(L)>} e^{-<J^*(L)\cdot a>}e^{i\Phi (L)}. \eqno(1.4) $$ Here, for any $a$ and $b$, the symbol $<a\cdot b >$ is an abbreviation for the integral $$ <a\cdot b>\equiv \int {d^4k\over (2\pi)^4} 2\pi \theta (k_0)\delta (k^2) a_\mu(k)(-g^{\mu\nu} )b_\nu(k),\eqno(1.5) $$ \noindent and $J(L,k)$ is formed by integrating $\exp ikx$ around the loop $L$: $$ J_\mu(L,k) \equiv \int_L dx_\mu e^{ikx}.\eqno(1.6) $$ This classical current $J_{\mu}(L)$ is conserved: $$ k^\mu J_\mu (L, k) =0. \eqno(1.7) $$ The $a^*$ and $a$ in $(1.4)$ are photon creation and destruction operators, respectively, and $\Phi (L)$ is the classical action associated with the motion of a charged classical particle along the loop $L$: $$ \Phi (L) = {(-ie)^2\over 8\pi} \int_L dx'_{\mu} g^{\mu\nu} \int_L dx''_\nu \delta((x' - x'')^2).\eqno(1.8) $$ The operator $ U(L)$ is {\it pseudo} unitary if it is written in explicitly covariant form, but it can be reduced to a strictly unitary operator by using $(1.7)$ to eliminate all but the two transverse components of $a_\mu (k),a^*_\mu (k), J_\mu(k)$, and $J^*_\mu (k)$. The colons in (1.3) indicate that the creation-operator parts of the normal-ordered operator $F'_{op}$ are to be placed on the left of $U(L)$. The unitary operator $U(L)$ has the following property: $$ U(L)|vac > = |C(L) >. \eqno(1.9) $$ Here $|vac>$ is the photon vacuum, and $|C(L)>$ represents the normalized coherent state corresponding to the classical electromagnetic field radiated by a charged classical point particle moving along the closed spacetime loop $L$, in the Feynman sense. The simplicity of (1.3) is worth emphasizing: it says that the complete effect of all classical photons is contained in a simple unitary operator that is independent of the quantum-photon contributions: this factor is a well-defined unitary operator that depends only on the (three) hard vertices $x_1, x_2$, and $x_3$. It is independent of the remaining details of $F'_{op}(L(x_1, x_2, x_3))$, even though the classical couplings are originally interspersed in all possible ways among the quantum couplings that appear in $F'_{op}(L(x_1, x_2, x_3))$. The operator $U(L)$ supplies the classical bremsstrahlung-radiation photons associated with the deflections of the charged particles that occur at the three vertices, $x_1,x_2,$ and $x_3$. Bloch and Nordsieck$^{12}$ have already emphasized that the infrared divergences arise from the classical aspects of the electromagnetic field. This classical component is exactly supplied by the factor $U(L)$. 
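Equation (1.7) simply expresses the fact that $e^{ikx}$ returns to its initial value after one circuit of the closed loop. The following numerical sketch (with an arbitrary triangular loop and soft-photon momentum of our own choosing, and metric signature $(+,-,-,-)$; purely illustrative) evaluates the definition (1.6) along straight segments and checks the conservation law:
\begin{verbatim}
# J_mu(L,k): line integral of exp(i k.x) dx_mu around the triangle
# x1 -> x2 -> x3 -> x1; charge conservation (1.7) says k^mu J_mu(L,k) = 0.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, (+,-,-,-)

def mdot(a, b):
    return a @ eta @ b

x1 = np.array([0.0, 0.0, 0.0, 0.0])         # hypothetical hard vertices
x2 = np.array([3.0, 1.0, 0.5, 0.0])
x3 = np.array([6.0, -1.0, 0.2, 0.3])
k = np.array([0.7, 0.2, -0.4, 0.1])         # hypothetical soft-photon momentum

def J(loop, k, steps=4000):
    """Midpoint-rule integral of dx^mu exp(i k.x) over the closed polygonal loop."""
    total = np.zeros(4, dtype=complex)
    for a, b in zip(loop, loop[1:] + loop[:1]):
        t = (np.arange(steps) + 0.5) / steps
        x = a[None, :] + t[:, None] * (b - a)[None, :]
        phases = np.exp(1j * x @ eta @ k)   # exp(i k.x) at the sample points
        total += (b - a) * phases.mean()
    return total

Jk = J([x1, x2, x3], k)
print(abs(mdot(k, Jk)))                     # ~ 0, up to discretisation error
\end{verbatim}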
One may therefore expect the remainder $F'_{op} (L(x_1, x_2, x_3))$ to be free of infrared problems: if we transform $F'_{op}(L(x_1,x_2,x_3))$ into momentum space, then it should satisfy the usual pole-factorization property. A primary goal of this work is to show that this pole-factorization property indeed holds. To recover the physics one transforms $F'_{op}$ to coordinate space, and then incorporates the real and virtual classical photons by using $1.3$ and $1.4$. The plan of the paper is as follows. In the following section 2 rules are established for writing down the functions of interest directly in momentum space. These rules are expressed in terms of operators that act on momentum--space Feynman functions and yield momentum--space functions, with classical or quantum interactions inserted into the charged-particle lines in any specified desired order. It is advantageous always to sum together the contributions corresponding to all ways in which a photon can couple with C--type coupling into each individual side of the triangle graph $G$. This sum can be expressed as a sum of just two terms. In one term the photon is coupled at one endpoint, $x^+$, of this side of $G$, and in the other term the photon is coupled into the other end point, $x^-$, of this side of $G$. Thus all C--type couplings become converted into couplings at the hard--photon vertices of the original graph $G$. This conversion introduces an important property. The charge--conservation (or gauge) condition $k^\mu J_\mu =0$ normally does not hold in quantum electrodynamics for individual graphs: one must sum over all ways in which the photon can be inserted into the graph. But in the form we use, with each quantum vertex $Q$ coupled into the interior of a line of $G$, but each classical vertex $C$ placed at a hard--photon vertex of $G$, the charge--conservation equation (gauge invariance) holds for each vertex separately: $k^\mu J_\mu =0$ for each vertex. In section 3 the modification of the charged--particle propagator caused by inserting a single quantum vertex $Q_\mu$ into a charged-particle line is studied in detail. The resulting (double) propagator is re--expressed as a sum of three terms. The first two are ``meromorphic'' terms having poles at $p^2=m^2$ and $p^2 = m^2-2pk -k^2$, respectively, in the variable $p^2$. Because of the special form of the quantum coupling $Q_\mu$ each residue is of first order in $k$, relative to what would have been obtained with the usual coupling $\gamma_\mu$. This extra power of $k$ will lead to the infrared convergence of the residues of the pole singularities. Our proof that this convergence property holds can be regarded as a systematization and confirmation of the argument for infrared convergence given by Grammer and Yennie$^{13}$. The third term is a nonmeromorphic contribution. It is a difference of two logarithms. This {\it difference} has a power of $k$ that renders the contribution infrared finite. \noindent {\bf 2. Basic Momentum--Space Formulas} The separation of the soft--photon interaction into its quantum and classical parts is defined in Eq. (1.1). This separation is defined in a mixed representation in which hard photons are represented in coordinate space and soft photons are represented in momentum space. In this representation one can consider a ``generalized propagator''. It propagates a charged particle from a hard--photon vertex $y$ to a hard--photon vertex $x$ with, however, the insertion of soft--photon interactions. 
Suppose, for example, one inserts the interactions with two soft photons of momenta $k_1$ and $k_2$ and vector indices $\mu_1$ and $\mu_2$. Then the generalized propagator is $$ \eqalignno{ P_{\mu_1, \mu_2} &(x,y; k_1, k_2)\cr &= \int {d^4p\over (2\pi )^4} e^{-ipx + i(p+k_1+k_2)y}\cr &\times {i\over \slash{p}-m+i0}\gamma_{\mu_1}{i\over \slash{p}+\slash{k}_1-m+i0}\gamma_{\mu_2}{i\over \slash{p}+\slash{k}_1+\slash{k}_2-m+i0}.&(2.1)\cr} $$ The generalization of this formula to the case of an arbitrary number of inserted soft photons is straightforward. The soft--photon interaction $\gamma_{\mu_j}$ is separated into its parts $Q_{\mu_j}$ and $C_{\mu_j}$ by means of (1.1), with the $x$ and $y$ defined as in (1.2). This separation of the soft--photon interaction into its quantum and classical parts can also be expressed directly in momentum space. Using (1.1) and (1.2), and the familiar identities $$ {1\over \slash{p}-m} \slash{k} {1\over \slash{p} + \slash{k} - m} = {1\over \slash{p}-m} - {1\over \slash{p}+ \slash{k}-m},\eqno(2.2) $$ and $$ \left( - {\partial\over \partial p^\mu}\right) {1\over \slash{p}-m} = {1\over \slash{p}-m} \gamma_\mu {1\over \slash{p}-m},\eqno(2.3) $$ one obtains for the (generalized) propagation from $y$ to $x$, with a single classical interaction inserted, the expression (with the symbol $m$ standing henceforth for $m-i0$) $$ \eqalignno{ P_\mu(x,y; C,k) &= \int {d^4p\over (2\pi )^4} \left( {i\over \slash{p}-m}\slash{k} {i\over \slash{p}+\slash{k}-m}\right) {z_\mu\over zk+i0} e^{-ipz+iky}\cr &= \int {d^4p \over (2\pi )^4} e^{-ipz+iky} \int^1_0 d\lambda\left(-i {\partial\over\partial p^\mu}\right) \left({i\over \slash{p}+\lambda\!\slash{k}-m} \right).\cr &&(2.4)\cr} $$ The derivation of this result is given in reference 14. Comparison of the result (2.4) to (2.1) shows that the result in momentum space of inserting a single classical vertex $j$ into a propagator $i(\slash{p}-m)^{-1}$ is produced by the action of the operator $$ \widehat{C}_{\mu_j} (k_j)= \int^1_0 d\lambda_j O(p\to p+ \lambda_j k_j)\left(-i{\partial\over \partial p^{\mu_j}}\right)\eqno(2.5) $$ upon the propagator $i(\slash{p}- m)^{-1}$ that was present {\it before} the insertion of the vertex $j$. One must, of course, also increase by $k_j$ the momentum entering the vertex at $y$. The operator $O(p\to p+\lambda_jk_j)$ replaces $p$ by $p+\lambda_jk_j$. This result generalizes to an arbitrary number of inserted classical photons, and also to an arbitrary generalized propagator: the momentum--space result of inserting in all orders into any generalized propagator $P_{\mu_1, \cdots , \mu_n} (p; k_1, \cdots , k_n)$ a set of $N$ classically interacting photons with $j= n+1, \cdots, n+N$ is $$ \eqalignno{ &\prod^{n+N}_{j=n+1} \widehat{C}_{\mu_j}(k_j) P_{\mu_1, \cdots , \mu_n} (p; k_1, \cdots , k_n) =\int^1_0 \ldots \int^1_0 d\lambda_{n+1}\ldots d\lambda_{n+N} \prod^{N}_{j=1} \left( -i{\partial\over \partial p^{\mu_{n+j}}}\right)\cr &\hbox{\hskip.25in} P_{\mu_1, \cdots, \mu_n} (p+a; k_1, \cdots , k_n)&(2.6)\cr} $$ where $a= \lambda_{n+1} k_{n+1} + \cdots + \lambda_{n+N} k_{n+N}$. The operations are commutative, and one can keep each $\lambda_j=0$ until the integration on $\lambda_j$ is performed. One may not wish to combine the results of making insertions in all orders.
The result of inserting the classical interaction at just one place, identified by the subscript $j\in\{1,\cdots ,n\}$, into a (generalized) propagator $P_{\mu_1 \cdots \mu_n} (p; k_1, \cdots , k_n )$, abbreviated now by $P_{\mu_j}$, is produced by the action of $$ \eqalignno{ \widetilde{C}_{\mu_j}(k_j)&\equiv\cr &\int^\infty_{0} d\lambda_j O(p_i\to p_i+ \lambda_jk_j )\left(-{\partial\over \partial p^{\mu_j}}\right)&(2.7)\cr} $$ upon $k_j^{\sigma_j}P_{\sigma_j}$. There is a form analogous to (2.7) for the Q interaction: the momentum--space result produced by the insertion of a Q coupling into $P_{\mu_1\cdots \mu_n}(p; k_1,\cdots, k_n) = P_{\mu_j}$ at the vertex identified by $\mu_j$ is given by the action of $$ \widetilde{Q}_{\mu_j}(k_j) \equiv (\delta_{\mu_j}^{\sigma_j}k_j^{\rho_j} - \delta_{\mu_j}^{\rho_j} k_j^{\sigma_j})\widetilde{C}_{\rho_j}(k_j)\eqno(2.8) $$ upon $P_{\sigma_j}$. An analogous operator can be applied for each quantum interaction. Thus the generalized momentum--space propagator represented by a line $L$ of a graph $G$ into which $n$ quantum interactions are inserted in a fixed order is $$ \eqalignno{ &P_{\mu_1\cdots \mu_n}(p; Q, k_1,Q, k_2, \cdots Q, k_n)=\cr &\prod^n_{j=1} \left[ \int^\infty_0 d\lambda_j(\delta_{\mu_j}^{\sigma_j}k_j^{\rho_j}- \delta^{\rho_j}_{\mu_j} k_j^{\sigma_j})\left( -{\partial\over \partial p^{\rho_j}}\right)\right]\cr &\Big({i\over \slash{p}+\slash{a}-m}\gamma_{\sigma_1}{i\over\slash{p}+\slash{a} + \slash{k}_1-m} \gamma_{\sigma_2}{i\over \slash{p}+\slash{a}+\slash{k}_1+\slash{k}_2-m}\cr &\cdots \times \gamma_{\sigma_n}{i\over \slash{p}+\slash{a}+\slash{k}_1+\cdots +\slash{k}_n - m}\Big),&(2.9)\cr} $$ where $$ a= \lambda_1 k_1 + \lambda_2 k_2 + \cdots + \lambda_n k_n.\eqno(2.10) $$ If some of the inserted interactions are classical interactions then the corresponding factors $(\delta_{\mu_j}^{\sigma_j}k_j^{\rho_j} - \delta_{\mu_j}^{\rho_j}k_j^{\sigma_j})$ are replaced by $(\delta_{\mu_j}^{\rho_j}k_j^{\sigma_j})$. These basic momentum--space formulas provide the starting point for our examination of the analyticity properties in momentum space, and the closely related question of infrared convergence. One point is worth mentioning here. It concerns the charge--conservation condition $k^\mu J_\mu (k) =0$. In standard Feynman quantum electrodynamics this condition is not satisfied by the individual photon--interaction vertex, but is obtained only by summing over all the different positions where the photon interaction can be coupled into a graph. This feature is the root of many of the difficulties that arise in quantum electrodynamics. Equation (2.9) shows that the conservation--law property holds for the individual {\it quantum} vertex: there is no need to sum over different positions. The classical interaction, on the other hand, has a form that allows one easily to sum over all possible locations along a generalized propagator, even before multiplication by $k^\mu$. This summation converts the classical interaction to a sum of two interactions, one located at each end of the line associated with the generalized propagator. (See, for example, Eq. (4.1) below). We always perform this summation. Then the classical parts of the interaction are shifted to the hard--photon interaction points, at which $k^{\mu}J_{\mu}(k)=0$ holds. \noindent{\bf 3. Residues of Poles in Generalized Propagators} Consider a generalized propagator that has only quantum--interaction insertions.
Its general form is, according to (2.9), $$ \eqalignno{ \prod^{n}_{j=1}&\left[\left( \delta_{\mu_j}^{\sigma_j} k^{\rho_j}_j - \delta_{\mu_j}^{\rho_j} k^{\sigma_j}_j \right) \int^\infty_0 d\lambda_j \left(-{\partial\over\partial p^{\rho_j}} \right)\right]\cr &( {i\over \slash{p} +\slash{a}-m} \gamma_{\sigma_1} {i\over \slash{p} +\slash{a}+\slash{k}_1-m} \gamma_{\sigma_2} {i\over\slash{p}+\slash{a}+\slash{k}_1+\slash{k}_2-m}\cr &\cdots \times \gamma_{\sigma_{n}} {i\over \slash{p}+ \slash{a} + \slash{k}_1 \cdots + \slash{k}_n -m} \bigg)&(3.1)\cr} $$ where $$ a=\lambda_1 k_1 + \cdots + \lambda_n k_n .\eqno(3.2) $$ The singularities of (3.1) that arise from the multiple end--point $\lambda_1 = \lambda_2 = \cdots \lambda_n =0$ lie on the surfaces $$ p^2_i = m^2,\eqno(3.3) $$ where $$ p_i = p + k_1 + k_2 +\cdots + k_i.\eqno(3.4) $$ At a point lying on only one of these surfaces the strongest of these singularities is a pole. The Feynman function appearing in (3.1) can be decomposed into a sum of poles times residues. At the point $a=0$ this gives $$ \eqalignno{ &{i(\slash{p}+m)\gamma_{\mu_1} i(\slash{p}+\slash{k}_1+m) \gamma_{\mu_2}\cdots \gamma_{\mu_n} i(\slash{p}+\cdots + \slash{k}_n+m)\over (p^2-m^2) ((p+k_1)^2-m^2)((p+\cdots + k_n)^2-m^2)}\cr & \ \ \ \ \ = \sum^n_{i=0} {N_{1i}\over D_{1i}} {i(\slash{p}_i+m)\over p^2_i-m^2} {N_{2i}\over D_{2i}},&(3.5)\cr} $$ where for each $i$ the numerator occurring on the right--hand side of this equation is identical to the numerator occurring on the left--hand side. The denominator factors are $$ D_{1i} = \prod_{j<i} (2p_i k_{ij}+(k_{ij})^2 + i0),\eqno(3.6a) $$ and $$ D_{2i} = \prod_{j>i}(2p_i k_{ij} + (k_{ij})^2 + i0),\eqno(3.6b) $$ where $$ \ k_{ij} = \sigma_{ij} [(k_1 + \cdots + k_j)-(k_1 + \cdots + k_i)].\eqno(3.7) $$ The sign $\sigma_{ij}=\pm$ in (3.7) is specified in reference 14, where it is also shown that that the dominant singularity on $p^2_i - m^2 =0$ is the function obtained by simply making the replacement $$ \int^\infty_0 d\lambda_j \left( - {\partial\over \partial p^{\rho_j}}\right) \left( O(p\to p+\lambda_j k_j)\right) \to p_{i\rho_j}(p_ik_j)^{-1}.\eqno(3.8) $$ Each value of $j$ can be treated in this way. Thus the dominant singularity of the generalized propagator (3.1) on $p_i^2 - m^2=0$ is $$ \eqalignno{ \prod^n_{j=1} &\left[ \left( \delta_{\mu_j}^{\sigma_j} k_j^{\rho_j} - \delta_{\mu_j}^{\rho_j} k_j^{\sigma_j}\right) p_{i\rho_j} (p_i k_j)^{-1}\right]\cr &\times {N_{1i} i(\slash{p}_i+m)N_{2i}\over D_{1i}(p^2_i-m^2)D_{2i}}. &(3.9)\cr} $$ The numerator in (3.9) has, in general, a factor $$ \eqalignno{ &\ \ \ \ \ i(\slash{p}_i- \slash{k}_i+m)\gamma_{\sigma_i}i(\slash{p}_i+m) \gamma_{\sigma_{i+1}}i(\slash{p}_i+\slash{k}_{i+1}+m)\cr &=i(\slash{p}_i-\slash{k}_i+m)\gamma_{\sigma_i} i((\slash{p}_i+m)i(2p_{i\sigma_{i+1}} + \gamma_{\sigma_{i+1}}\slash{k}_{i+1})\cr &\ \ \ \ \ +i(\slash{p}_i - \slash{k}_i +m) \gamma_{\sigma_i} \gamma_{\sigma_{i+1}}(p^2_i-m^2)\cr &=i(2p_{i\sigma_i}-\slash{k}_i\gamma_{\sigma_i})i(\slash{p}+m)i(2p_{i\sigma_{i+1}} + \gamma_{\sigma_{i+1}}\slash{k}_{i+1})\cr &\ \ \ \ \ +i(p^2_i-m^2)\gamma_{\sigma_i}(2p_{i\sigma_{i+1}}+\gamma_{\sigma_{i+1}} \slash{k}_{i+1})\cr &\ \ \ \ \ +i(\slash{p}_i-\slash{k}_i+m)\gamma_{\sigma_i}\gamma_{\sigma_{i+1}}(p^2_i-m^2) &(3.10)\cr} $$ The last two terms in the last line of this equation have factors $p^2_i-m^2$. Consequently, they do not contribute to the residue of the pole at $p_i^2-m^2=0$. 
The terms in (3.10) with a factor $2p_{i\sigma_{i+1}}$, taken in conjunction with the factor in (3.9) coming from $j=i+1$, give a dependence $2p_{i\rho_j} 2p_{i\sigma_j}$. This dependence upon the indices $\rho_j$ and $\sigma_j$ is symmetric under interchange of these two indices. But the other factor in (3.9) is antisymmetric. Thus this contribution drops out. The contribution proportional to $p_{i\sigma_i}$ drops out for similar reasons. Omitting these terms that do not contribute to the residue of the pole at $p^2_i - m^2=0$ one obtains in place of (3.10) the factor $$ (-i\slash{k}_i\gamma_{\sigma_i})i(\slash{p}_i+m)(i\gamma_{\sigma_{i+1}}\slash{k}_{i+1}) \eqno(3.11) $$ which is first--order in both $\slash{k}_i$ and $\slash{k}_{i+1}$. That these ``convergence factors'' actually lead to infrared convergence is shown in references 14 and 15. \noindent{\bf 4. Inclusion of the Classical Interactions} The arguments of the preceding section dealt with processes containing only $Q$--type interactions. In that analysis the order in which these $Q$--type interactions were inserted on the line $L$ of $G$ was held fixed: each such ordering was considered separately. In this section the effects of adding $C$--type interactions are considered. Each $C$--type interaction introduces a coupling $k^\sigma\gamma_\sigma =\slash{k}$. Consequently, the Ward identities, illustrated in (2.2), can be used to simplify the calculation, but only if the contributions from all orders of insertion are treated together. This we shall do. Thus for $C$--type interactions it is the operator $\widehat{C}$ defined in (2.5) that is to be used rather than the operator $\widetilde{C}$ defined in (2.7). Consider, then, the generalized propagator obtained by inserting on some line $L$ of $G$ a set of $n$ interactions of $Q$--type, placed in some definite order, and a set of $N$ $C$--type interactions, inserted in all orders. The meromorphic part of the function obtained after the action of the $n$ operators $\widetilde{Q}_j$ is given by (3.9). The action upon this of the $N$ operators $\widehat{C}_j$ of (2.5) is obtained by arguments similar to those that gave (3.9), but differing by the fact that (2.5) acts upon the propagator present {\it before\/} the action of $\widehat{C}_j$, and the fact that now both limits of integration contribute, thus giving for each $\widehat{C}_j$ two terms on the right--hand side rather than one.
Thus the action of $N$ such $\widehat{C}_j$'s gives $2^N$ terms: $$ \eqalignno{ \Bigg[ \prod^{n+N}_{j=n+1} &\widehat{C}_{\mu_j}(k_j) P_{\mu_1\cdots \mu_n} (p; Q, k_1, Q, k_2, \cdots Q, k_n) {\Bigg]}_{Mero}\cr &= \sum^{2^N}_{\Theta=1} S{gn}(\Theta )\sum^n_{i=0} \prod^{n+N}_{j=n+1} \left( {ip^\Theta_{i \mu_j}\over p^\Theta_i k_j} \right)\cr &\times \left\{ \prod^n_{j=1} \left[ \left( \delta_{\mu j}^{\sigma_j}k_j^{\rho_j} - \delta^{\rho_j}_{\mu_j}k_j^{\sigma_j} \right) \left( {p^\Theta_{i\rho_j}\over p^\Theta_ik_j} \right) \right] \right\} \cr &\times {N^\Theta_{1i}\over D^\Theta_{1i}} {i(\slash{p}^\Theta_i+m)\over (p^\Theta_i)^2-m^2} {N^\Theta_{2i}\over D^\Theta_{2i}}, &(4.1) \cr} $$ where $$ \eqalignno{ \Theta &= (\Theta_{n+1}, \cdots , \ \Theta_{n+N}),\cr \Theta_j &= +1 \ {\hbox{or}} \ 0,\cr S{gn}(\Theta )&= (-1)^{\Theta_{n+1}}(-1)^{\Theta_{n+2}}\cdots (-1)^{\Theta_{n+N}}\cr p_i^\Theta &= p_i + \Theta_{n+1}k_{n+1} +\cdots + \Theta_{n+N}k_{n+N},\cr p_i &= p+k_1 + \cdots + k_i, &(4.2) \cr} $$ and the superscript $\Theta$ on the $N$'s and $D$'s means that the argument $p_i$ appearing in (3.5) and (3.6) is replaced by $p_i^\Theta$. Note that even though the action of $\widehat{C}_j$ and $\widetilde{Q}_j$ involve integrations over $\lambda$ and differentiations, the meromorphic parts of the resulting generalized propagators are expressed by (4.1) in relatively simple closed form. These meromorphic parts turn out to give the dominant contributions in the mesoscopic regime. The essential simplification obtained by summing over all orders of the $C$--type insertions is that after this summation each $C$--type interaction gives just two terms. The first term is just the function before the action of $\widehat{C}_j$ multiplied by $ip_{i\mu_j} (p_i k_j)^{-1}$; the second is minus the same thing with $p_i$ replaced by $p_i +k_j$. Thus, apart from this simple factor, and, for one term, the overall shift in $p_i$, the function is just the same as it was before the action of $\widehat{C}_j$. Consequently, the power--counting arguments used for $Q$--type couplings go through essentially unchanged. Details can be found in references 14 and 15. \vskip 9pt \noindent{\bf 5. Comparison to Other Recent Works} \vskip 9pt The problem of formulating quantum electrodynamics in an axiomatic field-theoretic framework has been examined by Fr\"{o}hlich, Morchio, and Strocchi$^{8}$ and by D. Buchholz$^9$, with special attention to the non-local aspects arising from Gauss' law. Their main conclusion, as it relates to the present work, is that the energy-momentum spectrum of the full system can be separated into two parts, the first being the photonic asymptotic free-field part, the second being a remainder that: 1) is tied to charged particles, 2) is nonlocal relative to the photonic part, and 3) can have a discrete part corresponding to the electron/positron mass. This separation is concordant with the structure of the QED Hamiltonian, which has a photonic free-field part and an electron/positron part that incorporates the interaction term $eA^{\mu} J_{\mu}$, but no added term corresponding to the non-free part of the electromagnetic field. It is also in line with the separation of the classical electromagnetic field, as derived from the Li\'{e}nard-Wiechert potentials, into a ``velocity'' part that is attached (along the light cone) to the moving source particle, and an ``acceleration'' part that is radiated away. 
It is the ``velocity'' part, which is tied to the source particle, and which falls off only as $r^{-1}$, that is the origin of the ``nonlocal'' infraparticle structure that introduces peculiar features into quantum electrodynamics, as compared to simple local field theories. In the present approach, the quantum analog of this entire classical structure is incorporated into the formula for the scattering operator by the unitary factor $U(L)$. It was shown in ref. 11, Appendix C, that the non-free ``velocity'' part of the electromagnetic field generated by $U(L)$ contributes in the correct way to the mass of the electrons and positrons. It gives also the ``Coulomb'' or ``velocity'' part of the interaction between different charged particles, which is the part of the electromagnetic field that gives the main part of Gauss' law asymptotically. Thus our formulas supply in a computationally clean way these ``velocity field'' contributions that seem so strange when viewed from other points of view. Comparisons to the works in references 17 through 22 can be found in reference 14. \noindent{\bf References} \begin{enumerate} \item J. Bros {\it in} Mathematical Problems in Theoretical Physics: Proc. of the Int. Conf. in Math. Phys. Held in Lausanne Switzerland Aug 20-25 1979, ed. K. Osterwalder, Lecture Notes in Physics 116, Springer-Verlag (1980); H. Epstein, V. Glaser, and D. Iagolnitzer, Commun. Math. Phys. {\bf80}, 99 (1981). \item D. Iagolnitzer, {\it Scattering in Quantum Field Theory: The Axiomatic and Constructive Approaches}, Princeton University Press, Princeton NJ, in the series: Princeton Series in Physics. (1993); J. Bros, Physica {\bf 124A}, 145 (1984) \item D. Iagolnitzer and H.P. Stapp, Commun. Math. Phys. {\bf 57}, 1 (1977); D. Iagolnitzer, Commun. Math. Phys. {\bf 77}, 251 (1980) \item T. Kibble, J. Math. Phys. {\bf 9}, 315 (1968); Phys. Rev. {\bf 173}, 1527 (1968); {\bf 174}, 1883 (1968); {\bf 175}, 1624 (1968). \item D. Zwanziger, Phys. Rev. {\bf D7}, 1082 (1973). \item J.K. Storrow, Nuovo Cimento {\bf 54}, 15 (1968). \item D. Zwanziger, Phys. Rev. {\bf D11}, 3504 (1975); N. Papanicolaou, Ann. Phys.(N.Y.) {\bf 89}, 425 (1975) \item J. Fr\"{o}hlich, G. Morchio, and F. Strocchi, Ann.Phys.(N.Y) {\bf 119}, 241 (1979); Nucl. Phys. {\bf B211}, 471 (1983); G. Morchio and F. Strocchi, {\it in} Fundamental Problems in Gauge Field Theory, eds. G. Velo and A.S. Wightman, (NATO ASI Series) Series B:Physics {\bf 141}, 301 (1985). \item D. Buchholz, Commun. Math. Phys. {\bf 85}, 49 (1982); Phys. Lett. B {\bf 174}, 331 (1986); {\it in} Fundamental Problems in Gauge Field Theory, eds. G. Velo and A.S. Wightman, (NATO ASI Series) Series B: Physics {\bf 141}, 381 (1985); \item T. Kawai and H.P. Stapp, {\it in} 1993 Colloque International en l'honneur de Bernard Malgrange (Juin, 1993/ at Grenoble) Annales de l'Institut Fourier {\bf 43.5}, 1301 (1993) \item H.P. Stapp, Phys. Rev. {\bf 28D}, 1386 (1983). \item F. Block and A. Nordsieck, Phys. Rev. {\bf 52}, 54 (1937). \item G. Grammer and D.R. Yennie, Phys. Rev. {\bf D8}, 4332 (1973). \item T. Kawai and H.P. Stapp, Phys. Rev. D {\bf 52}, 2484 (1995). \item T. Kawai and H.P. Stapp, Phys. Rev. D {\bf 52}, 2505, 2517 (1995). \item T. Kawai and H.P. Stapp, {\it Quantum Electrodynamics at Large Distances}, Lawrence Berkeley Laboratory Report LBL-25819 (1993). \item J. Schwinger Phys. Rev. {\bf 76}, 790 (1949). \item D. Yennie, S. Frautschi, and H. Suura, Ann. Phys. (N.Y.) {\bf 13}, 379 (1961). \item K.T. Mahanthappa. Phys. Rev. 
{\bf 126}, 329 (1962); K.T. Mahanthappa and P.M. Bakshi, J. Math. Phys. {\bf 4}, 1 and 12 (1963). \item V. Chung, Phys. Rev. {\bf 140}, B1110 (1965). \item P.P. Kulish and L.D. Faddeev, Theor. Math. Phys. {\bf 4}, 745 (1971). \item E. d'Emilio and M. Mintchev, Fortschr. Phys. {\bf 32}, 473 (1984); Phys. Rev. {\bf 27}, 1840 (1983). \end{enumerate} \vskip .2in This work was supported by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of High Energy Physics of the U.S. Department of Energy under Contract DE-AC03-76SF00098. \end{document}
\begin{document} \title{The Heegaard genus of bundles over $S^1$} \begin{abstract} This paper explores connections between Heegaard genus, minimal surfaces, and pseudo-Anosov monodromies. Fixing a pseudo-Anosov map $\phi$ and an integer $n$, let $M_n$ be the 3--manifold fibered over $S^1$ with monodromy $\phi^n$. JH Rubinstein showed that for a large enough $n$ every minimal surface of genus at most $h$ in $M_n$ is homotopic into a fiber; as a consequence Rubinstein concludes that every Heegaard surface of genus at most $h$ for $M_n$ is standard, that is, obtained by tubing together two fibers. We prove this result and also discuss related results of Lackenby and Souto. \end{abstract} \section{Introduction} \label{sec:intro} The purpose of this article is to explore theorems of Rubinstein and Lackenby. Rubinstein's Theorem studies the Heegaard genus of certain hyperbolic 3--manifolds that fiber over $S^1$ and Lackenby's Theorem studies the Heegaard genus of certain Haken manifolds. Our target audience is 3--manifold theorists with a good understanding of Heegaard splittings but perhaps little experience with minimal surfaces. We will explain the background necessary for these theorems and prove them (in particular, in \fullref{sec:monotonicity} we explain the main tool needed for analyzing minimal surfaces). All manifolds considered in this paper are closed, orientable 3--manifolds and all surfaces considered are closed. By the genus of a 3--manifold $M$, denoted $g(M)$, we mean the genus of a minimal genus Heegaard surface for $M$. A {\it least area} surface is a map from a surface into a Riemannian 3--manifold that minimizes the area in its homotopy class. A {\it minimal surface} is a critical point of the area functional. Therefore a least area surface is always minimal, as a global minimum is always a critical point. A local minimum of the area functional is called a stable minimal surface and has index zero. However, some minimal surfaces (and in particular the minimal Heegaard surfaces we will study in this paper) are unstable and have positive index. This is similar to a saddle point of the area functional.
An easy example is the equatorial sphere $\{x_4=0\}$ in $S^3$ (where $S^3$ is the unit sphere in $\mathbb{R}^4$). One nice property that all minimal surfaces share is that their mean curvature is zero; in fact, this is equivalent to being minimal. It follows that the intrinsic curvature of a minimal surface is bounded above by the curvature of the ambient manifold. Thus, the curvature of a minimal surface $S$ in a hyperbolic manifold is bounded above by $-1$, and by Gauss--Bonnet the area of $S$ is at most $-2\pi\chi(S)$, where $\chi(S)$ is the Euler characteristic of $S$. We assume familiarity with the basic notions of 3--manifold theory (see, for example, Hempel \cite{hempel} or Jaco \cite{jaco}), the basic notions about Heegaard splittings (see, for example, \cite{scharlemann-review}), and Casson and Gordon's concept of {\it strong irreducibility/weak reducibility} \cite{casson-gordon}. A more refined notion, due to Scharlemann and Thompson, is {\it untelescoping} \cite{untel} (see also Saito, Scharlemann and Schultens \cite{schschsaito}). Untelescoping is, in essence, iterated application of weak reduction (indeed, in some cases a single weak reduction does not suffice; see Kobayashi \cite{kobayashi-ST}). In \fullref{sec:lackenby} we assume familiarity with this concept. In \cite{rubinstein} Rubinstein used minimal surfaces to study the Heegaard genus of hyperbolic manifolds that fiber over $S^1$, more precisely, of closed 3--manifolds that fiber over the circle with fiber a closed surface of genus $g$ and pseudo-Anosov monodromy (say $\phi$). We denote such a manifold by $M_\phi$ or simply $M$ when there is no risk of confusion. While there exist genus two manifolds that fiber over $S^1$ with fiber of arbitrarily high genus (for example, consider 0--surgery on 2--bridge knots with fibered exteriors; see Hatcher and Thurston \cite{hatcher-thurston}), Rubinstein showed that this is often not the case. A manifold that fibers over $S^1$ with genus $g$ fiber has a Heegaard surface of genus $2g+1$ that is obtained by taking two disjoint fibers and tubing them together once on each side. We call this surface and surfaces obtained by stabilizing it {\it standard}. $M$ has a cyclic cover of degree $d$ (denoted $M_{\phi^d}$ or simply $M_d$), dual to the fiber, whose monodromy is $\phi^d$. Rubinstein shows that for small $h$ and large $d$ any Heegaard surface for $M_d$ of genus at most $h$ is standard. In particular, the Heegaard genus of $M_d$ (for sufficiently large $d$) is $2g+1$. The precise statement of Rubinstein's Theorem is: \begin{thm}[Rubinstein] \label{thm:rubinstein} Let $M_\phi$ be a closed orientable 3--manifold that fibers over $S^1$ with pseudo-Anosov monodromy $\phi$. Let $M_d$ be the $d$--fold cyclic cover of $M_\phi$ dual to the fiber. Then for any integer $h \geq 0$ there exists an integer $n>0$ so that for any $d \geq n$, any Heegaard surface of genus at most $h$ for $M_d$ is standard. \end{thm} \begin{rmk} In \cite{BS} Bachman and Schleimer gave a combinatorial proof of \fullref{thm:rubinstein}. \end{rmk} Rubinstein's proof contains two components: the first component is a reduction to a statement about minimal surfaces. We state and prove this reduction in \fullref{sec:reduction}. It says that if $M_d$ has the property that every minimal surface of genus at most $h$ is disjoint from some fiber then every Heegaard surface for $M_d$ of genus at most $h$ is standard.
The second component of Rubinstein's proof is to show that for large enough $d$, this property holds for $M_d$; this was obtained independently by Lackenby \cite[Theorem~1.9]{lackenby1}. A statement and proof are given in \fullref{sec:main}; we describe it here. Let $M$ be a hyperbolic manifold and $F \subset M$ a non-separating surface (not necessarily a fiber in a fibration over $S^1$). Construct the $d$--fold cyclic cover dual to $F$, denoted $M_d$, as follows: let $M^*$ be $M$ cut open along $F$. Then $\partial M^*$ has two components, say $F_-$ and $F_+$. The identification of $F_-$ with $F_+$ in $M$ defines a homeomorphism $h\co F_- \to F_+$. We take $d$ copies of $M^*$ (denoted $M^*_i$, with boundaries denoted $F_{i,-}$ and $F_{i,+}$ ($i=1,\dots,d$)) and glue them together by identifying $F_{i,+}$ with $F_{i+1,-}$ (the indices are taken modulo $d$). The gluing maps are defined using $h$. The manifold obtained is $M_d$. In \fullref{thm:main} we prove that for any $M$ there exists $n$ so that if $d \geq n$ then any minimal surface of genus at most $h$ in $M_d$ is disjoint from at least one of the preimages of $F$. The proof is an area estimate. Let $S$ be a minimal surface in a hyperbolic manifold $M_d$ as above; denote the components of the preimage of $F$ by $F_1,\dots,F_n$. If $S$ intersects every $F_i$ we give a lower bound on its area by showing that there exists a constant $a > 0$ so that $S$ has area at least $a$ near every $F_i$ that it meets. Hence if $S$ intersects every $F_i$ it has area at least $ad$. Fixing $h$, if $d > \frac{2\pi(2h-2)}{a}$ then $S$ has area greater than $2\pi(2h-2)$. As mentioned above, the minimal surface $S$ inherits a metric with curvature bounded above by $-1$, and by Gauss--Bonnet the area of $S$ is at most $2 \pi (2g(S)-2)$. Thus $2\pi(2h-2) < \mbox{ area of } (S) \leq 2 \pi (2g(S)-2)$. Solving for $g(S)$ we see that $g(S) > h$ as required. We note that $a$ is determined by the geometry of $M$. The only tool needed for this is a simple consequence of the {\it Monotonicity Principle}. It says that any minimal surface in a hyperbolic ball of radius $R$ that intersects the center of the ball has at least as much area as a hyperbolic disk of radius $R$. We briefly explain this in \fullref{sec:monotonicity}. For the purpose of illustration we give two proofs in the case that the minimal surface is a disk. One of the proofs requires the following fact: the length of a curve on a sphere or radius $r$ that intersects every great circle is at least $2\pi r$, that is, such a curve cannot be shorter than a great circle. We give two proofs of this fact in Appendices~\ref{sec:cruves-on-spheres-1} and \ref{sec:cruves-on-spheres-2}. Let $N_1$ and $N_2$ be simple manifolds with $\partial N_1 \cong \partial N_2$ a connected surface of genus $g \geq 2$ (denoted $S_g$). We emphasize that by $\partial N_1 \cong \partial N_2$ we only mean that the surfaces are homeomorphic. Let $M'$ be a manifold obtained by gluing $N_1$ to $N_2$ along the boundary. Then the image of $\partial N_1 = \partial N_2$ (denoted $S$) in $M'$ is an essential surface. If $F \subset M'$ is any essential surface with $\chi(F) \geq 0$, then after isotoping $F$ to minimize $|F \cap S|$, any component of $F \cap N_1$ or $F \cap N_2$ is essential and has non-negative Euler characteristic (possibly, $F \cap S = \emptyset$). But simplicity of $N_1$ and $N_2$ implies that there are no such surfaces. 
We conclude that $M'$ is a Haken manifold with no essential surfaces of non-negative Euler characteristic. By Thurston's Uniformization of Haken Manifolds $M'$ is hyperbolic or Seifert fibered. If $M'$ is Seifert fibered then $S$ can be isotoped to be either vertical (that is, everywhere tangent to the fibers) or horizontal (that is, everywhere transverse to the fibers). Both cases contradict simplicity of $N_1$ and $N_2$; the details are left to the reader. We conclude that $M'$ is hyperbolic. Note however, that although $N_1$ and $N_2$ admit hyperbolic metrics, the restriction of the hyperbolic metric on $M'$ to $N_1$ and $N_2$ does not have to resemble them. After fixing parameterizations $i_1\co S_g \to \partial N_1$ and $i_2\co S_g \to \partial N_2$ any gluing between $\partial N_1$ and $\partial N_2$ is given by a map $i_2 \circ f \circ (i_1^{-1})$ for some map $f\co S_g \to S_g$. Fix $f\co S_g \to S_g$ a pseudo-Anosov map, let $M_f$ be the bundle over $S^1$ with fiber $S_g$ and monodromy $f$, and $M_\infty$ the infinite cyclic cover of $M_f$ dual to the fiber. For $n \in\mathbb{N}$, let $M_n$ be the manifold obtained by gluing $N_1$ to $N_2$ using the map $i_2 \circ f^n \circ (i_1^{-1})$. (Note that this is {\it not} $M_d$.) Soma \cite{soma} showed that for properly chosen points $x_n \in M_n$, $(M_n,x_n)$ converge geometrically (in the Hausdorff--Gromov sense) to $M_\infty$. In \cite{lackenby} Lackenby uses an area argument to show that for fixed $h$ and sufficiently large $n$ every minimal surface of genus at most $h$ in $M_n$ is disjoint from the image of $\partial N_1 = \partial N_2$ (denoted $S$). This implies that any Heegaard surface of genus at most $h$ weakly reduces to $S$, and in particular for sufficiently large $n$, by Schultens \cite{schultens} $g(M_n) = g(N_1) + g(N_2) - g(S)$. In \fullref{sec:lackenby} we discuss Lackenby's Theorem, following the same philosophy we used for \fullref{thm:rubinstein}. Finally we mention Souto's far reaching generalization of Lackenby's Theorem \cite{souto} and a related theorem of Namazi and Souto \cite{namazi-souto}; however, a detailed discussion and the proofs of these theorems are beyond the scope of this note. \noindent {\bf Acknowledgment}\qua We thank Hyam Rubinstein for helpful conversations and the anonymous referee for many helpful suggestions. \section{Reduction to minimal surfaces} \label{sec:reduction} In this section we reduce \fullref{thm:rubinstein} to a statement about minimal surfaces in $M_d$. We note that the result here applies to any hyperbolic bundle $M$, but for consistency with applications below we use the notation $M_d$. \begin{thm}[Rubinstein] \label{thm:reduction} Let $M_d$ be a hyperbolic bundle over $S^1$. Assume that every minimal surface of Euler characteristic $\geq 2-2h$ in $M_d$ is disjoint from some fiber. Then any Heegaard surface for $M_d$ of genus at most $h$ is standard. \end{thm} \begin{proof} Let $\Sigma \subset M_d$ be a Heegaard surface of genus at most $h$. By destabilizing $\Sigma$ if necessary we may assume $\Sigma$ is not stabilized. Assume first that $\Sigma$ is strongly irreducible. Then by Pitts and Rubinstein \cite{pitts-rubinstein} (see also Colding and De Lellis \cite{colding-lellis}) one of the following holds: \begin{enumerate} \item $\Sigma$ is isotopic to a minimal surface. \item $M_d$ contains a one-sided, non-orientable, incompressible surface (say $H$). Let $H^*$ denote $H$ with an open disk removed. Then $\Sigma$ is isotopic to $\partial N(H^*)$. 
Equivalently, $\Sigma$ is isotopic to the surface obtained by tubing $\partial N(H)$ once, inside $N(H)$, via a straight tube. \end{enumerate} Both cases lead to a contradiction: \begin{enumerate} \item Isotope $\Sigma$ to a minimal representative. Let $\gamma \subset M_d$ be a curve. Since $\Sigma \subset M_d$ is a Heegaard surface $\gamma$ is freely homotopic into $\Sigma$. By assumption, $\Sigma$ is disjoint from some fiber $F$. Thus after free homotopy $\gamma \cap F = \emptyset$, and in particular $\gamma$ has algebraic intersection zero with $F$. But this is absurd: clearly there exists a curve $\gamma$ that intersects $F$ algebraically once. \item Similarly, any curve $\gamma \subset M_d$ is isotopic into $\partial N(H^*)$. Since $\partial N(H^*) \subset N(H)$ and $N(H)$ is an $I$--bundle over $H$, $\gamma$ is isotopic into $H$. Since $H$ is essential, by Schoen and Yau \cite{schoen-yau} (see also Freedman, Hass and Scott \cite{freedman-hass-scott}) $H$ can be isotoped to be least area and in particular minimal. Note that $2(\chi(H) -1) = 2\chi(H^*) = 2\chi(N(H^*))= \chi(\partial N(H^*)) = \chi(\Sigma) = 2-2h$. Hence $\chi(H) = 2-h > 2-2h$. By assumption $H$ is disjoint from some fiber $F$. Thus $\gamma$ can be homotoped to be disjoint from $F$, a contradiction as above. \end{enumerate} \begin{rmk} It is crucial to our proof that $H$ is essential. Let $H \subset M_d$ be a non-separating surface so that $\mbox{cl}(M_d \setminus N(H))$ is a handlebody. Let $H^*$ be $H$ with $n$ disks removed, for some $n\geq 1$. It is easy to see that $\partial N(H^*)$ is a Heegaard splitting. However, if $H$ is compressible, or if $n>1$, then $\partial N(H^*)$ destabilizes. (The details are left to the reader.) The converse was recently studied by Bartolini and Rubinstein \cite{bartrubin}. \end{rmk} Next assume that $\Sigma$ is weakly reducible. By Casson and Gordon \cite{casson-gordon} a carefully chosen weak reduction of $\Sigma$ yields a (perhaps disconnected) essential surface $S$, and every component of $S$ has genus less than $g(\Sigma)$ (and hence less than $h$). By \cite{schoen-yau} (see also \cite{freedman-hass-scott}) $S$ is homotopic to a least area (and hence minimal) representative. By assumption $S$ is disjoint from some fiber, and in particular $S$ is embedded in fiber cross $[0,1]$. Hence $S$ is itself a collection of (say $n$) fibers and $\Sigma$ is obtained from $S$ by tubing. Note that since $\Sigma$ separates so does $S$. We conclude that $n$ is even. Denote the components of $S$ by $F_1,\dots,F_n$ and the components of $M_d$ cut open along $S$ by $C_i$ ($i=1,\dots,n$) so that $\partial C_i = F_i \sqcup F_{i+1}$ (indices taken mod $n$). Thus $C_i$ is homeomorphic to fiber cross $[0,1]$. Fix $i$ and let $\Sigma_i$ be the surface obtained by pushing $\partial C_i$ slightly into $C_i$ and then tubing along the tubes that are contained in $C_i$. It is easy to see that the component of $C_i$ cut open along $\Sigma_i$ that contains $\partial C_i$ is a compression body. The other component is homeomorphic to a component obtained by compressing one of the handlebodies of $M_d$ cut open along $\Sigma$. Hence it is a handlebody. We conclude that $\Sigma_i$ is a Heegaard splitting of $C_i$, and both components of $\partial C_i$ are on the same side of $\Sigma_i$. Scharlemann and Thompson \cite{ST-HS-of-CB} call $\Sigma_i$ a {\it type II} Heegaard splitting of $C_i$.
By \cite{ST-HS-of-CB} either $\Sigma_i$ is obtained by a single tube that is of the form $\{p\} \times [0,1]$ (for some $p$ in the fiber) or it is stabilized. Clearly, if $\Sigma_i$ is stabilized so is $\Sigma$. We conclude that $\Sigma$ is obtained from $S$ by a single, straight tube in each $C_i$. We complete the proof by showing that $n=2$. Suppose, for a contradiction, that $n > 2$. On $F_1$ we see two disks, say $D_0$ and $D_1$, where the tubes in $C_0$ and $C_1$ intersect it. Let $F_1^*$ be $F_1 \setminus (\mbox{int} D_0 \sqcup \mbox{int} D_1)$. For $i=0,1$ let $\alpha_i \subset F_1^*$ be a properly embedded arc with $\partial \alpha_i \subset \partial D_i$ and so that $|\alpha_0 \cap \alpha_1| =1$. Note that $\alpha_i \times [0,1]$ is a meridional disk in $C_i$ ($i=0,1$) and these disks intersect once on $F_1$. Since $n > 2$ these disks do not have another intersection. Hence $\Sigma$ destabilizes, contradicting our assumption. We conclude that $n=2$. \end{proof} \section{The Monotonicity Principle} \label{sec:monotonicity} The Monotonicity Principle studies the growth rate of minimal surfaces. All we need is a simple consequence of the Monotonicity Principle, \fullref{pro:monotonicity}, stated below. For illustration purposes, we give two proofs of \fullref{pro:monotonicity} in the special case when the minimal surface intersects the ball in a (topological) disk. A proof of the Monotonicity Principle for annuli is given in Lackenby \cite[Section~6]{lackenby1}. For the general case, see Simon \cite{simon} or Choe \cite{choe}. We will use the following facts about minimal surfaces: (1) if a minimal surface $F$ intersects a small totally geodesic disk $D$ and locally $F$ is contained on one side of $D$ then $D \subseteq F$. (2) If $D$ is a little piece of the round sphere $\partial B$ (for some metric ball $B$) and $F \cap D \neq \emptyset$, then locally $F \not\subset B$. Roughly speaking, these facts state that a minimal surface cannot have ``maxima'' (this is the maximum principle for minimal surfaces). In this section we use the following notation: $B(r)$ is a hyperbolic ball of radius $r$, which for convenience we identify with the ball of radius $r$ in the Poincar\'e ball model in $\mathbb{R}^3$, centered at $O = (0,0,0)$. The boundary of $B(r)$ is denoted $\partial B(r)$. A great circle in $\partial B(r)$ is the intersection of $\partial B(r)$ with a totally geodesic disk that contains $O$, or, equivalently, the intersection of $\partial B(r)$ with a 2--dimensional subspace of $\mathbb{R}^3$. For convenience, we use the horizontal circle (which we shall call the equator) as a great circle and denote the totally geodesic disk it bounds $D_0$. Note that $\partial D_0$ separates $\partial B(r)$ into two disks which we shall call the northern and southern hemispheres, and $D_0$ separates $B(r)$ into two (topological) balls which we shall call the northern and southern half balls. The ball $B(r)$ is foliated by geodesic disks $D_t$ ($-r \leq t \leq +r$), where $D_t$ is the intersection of $B(r)$ with the geodesic plane that is perpendicular to the $z$--axis and intersects it at $(0,0,t)$. Here and throughout this paper, we denote the area of a hyperbolic disk of radius $r$ by $a(r)$. In the first proof below we use the fact that if a curve on a sphere intersects every great circle then it is at least as long as a great circle (\fullref{pro:great-circles}). This is an elementary fact in spherical geometry.
In Appendices~\ref{sec:cruves-on-spheres-1} and \ref{sec:cruves-on-spheres-2} we give two proofs of this fact; however, we encourage the reader to find her/his own proof and send it to us. \begin{pro} \label{pro:monotonicity} Let $B(R)$ be a hyperbolic ball of radius $R$ centered at $O$ and $F \subset M$ a minimal surface so that $O \in F$. Then the area of $F$ is at least $a(R)$. \end{pro} \begin{rmk} Lackenby's approach \cite{lackenby1} does not require the full strength of the Monotonicity Principle. He only needs the statement for annuli, and in that case he gives a self-contained proof in \cite[Section~6]{lackenby1}. \end{rmk} We refer the reader to \cite{simon} or \cite{choe} for a proof. For the remainder of the section, assume $F \cap B(R)$ is topologically a disk. Then we have: \begin{proof}[First proof] Fix $r$, $0 < r \leq R$. Fix a great circle in $\partial B(r)$ (which for convenience we identify with the equator). Suppose that $F \cap \partial B(r)$ is not the equator; we will show that $F \cap \partial B(r)$ intersects both the northern and southern hemispheres. Suppose, for a contradiction, that for some $r$ this is not the case. Then one of the following holds: \begin{enumerate} \item $F \cap \partial B(r) = \emptyset$. \item $F \cap \partial B(r) \neq \emptyset$ and $F \cap \partial B(r)$ does not intersect one of the two hemispheres. \end{enumerate} Assume Case~(1) happens, and let $r' > 0$ be the largest value for which $F \cap \partial B(r') \neq \emptyset$. Then $F$ and $\partial B(r')$ contradict fact (2) mentioned above. Next assume Case~(2) happens (say $F$ does not intersect the southern hemisphere). Let $t$ be the most negative value for which $F \cap D_t \neq \emptyset$. Since $O \in F$, $-r < t \leq 0$. Then by fact~(1) above, $F$ must coincide with $D_t$. If $t < 0$ then $D_t$ intersects the southern hemisphere, contrary to our assumptions. Hence $t=0$ and $F$ is itself $D_0$; thus $F \cap \partial B(r)$ is the equator, again contradicting our assumptions. By assumption $F \cap B(R)$ is a disk and therefore $F \cap \partial B(r)$ is a circle. Clearly, a circle that intersects both the northern and the southern hemispheres must intersect the equator. We conclude that $F \cap \partial B(r)$ intersects the equator, and as the equator was chosen arbitrarily, $F \cap \partial B(r)$ intersects every great circle. By \fullref{pro:great-circles} $F \cap \partial B(r)$ is at least as long as a great circle in $\partial B(r)$. Since the intersection of a totally geodesic disk with $\partial B(r)$ is a great circle, integrating these lengths shows that the area of $F \cap B(r)$ grows at least as fast as the area of a geodesic disk, proving the proposition. \end{proof} \begin{proof}[Second proof] Restricting the metric from $M$ to $F$, distances can increase but cannot decrease. Therefore $F \cap \partial B(R)$ is at distance (on $F$) at least $R$ from $O$ and we conclude that $F$ contains an entire disk of radius $R$. The induced metric on $F$ has curvature at most $-1$ and therefore areas on $F$ are at least as big as areas in $\mathbb{H}^2$. In particular, the disk of radius $R$ about $O$ has area at least $a(R)$. \end{proof} \section{Main Theorem} \label{sec:main} By Theorem~\ref{thm:reduction} the main task in proving \fullref{thm:rubinstein} is showing that (for large enough $d$) a minimal surface of genus at most $h$ in $M_d$ is disjoint from some fiber $F$.
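For later use we record the two standard estimates that drive the quantitative part of the argument (both are elementary; the second is just the area of a hyperbolic disk computed in geodesic polar coordinates): $$\int_S K \, dA = 2\pi\chi(S), \quad K \leq -1 \quad \Longrightarrow \quad \mbox{Area}(S) \leq -2\pi\chi(S) = 2\pi(2g(S)-2),$$ $$a(r) = \int_0^{2\pi}\!\!\int_0^r \sinh\rho \, d\rho \, d\theta = 2\pi(\cosh r - 1) = 4\pi\sinh^2(r/2).$$ For instance (purely as an illustration; these particular values play no role in what follows), if $R = 0.1$ then $a(R) \approx 0.0314$, so for $h=2$ the proof below requires roughly $d > 2\pi(2h-2)/a(R) \approx 400$.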
Here we prove: \begin{thm} \label{thm:main} Let $M$ be a compact, orientable hyperbolic manifold and $F \subset M$ a non-separating, orientable surface. Let $M_d$ denote the cyclic cover of $M$ dual to $F$ of degree $d$ (as in the introduction). Then for any integer $h \geq 0$ there exists a constant $n$ so that for $d \geq n$, any minimal surface of genus at most $h$ in $M_d$ is disjoint from a component of the preimage of $F$. \end{thm} \begin{proof} Fix an integer $h$. Denote the distance in $M$ by $d(\cdot,\cdot)$. Push $F$ off itself to obtain $\widehat{F}$, a surface parallel to $F$ and disjoint from it. For each point $p \in F$ define: $$R(p) = \min \{\mbox{injectivity radius at }p, \ d(p,\widehat{F}) \}.$$ Since $\widehat{F}$ is compact $R(p) > 0$. Define: $$R = \min \{R(p) |p \in F\}.$$ Since $F$ is compact $R > 0$. Note that $R$ has the following property: for any $p \in F$, the set $\{q \in M: d(p,q) < R \}$ is an embedded ball and this ball is disjoint from $\widehat{F}$. As above, let $a(R)$ denote the area of a hyperbolic disk of radius $R$. Let $n$ be the smallest integer bigger than $\frac{2\pi (2h-2)}{a(R)}$. Fix an integer $d \geq n$. Denote the preimages of $F$ in $M_d$ by $F_1,\dots,F_d$. Let $S$ be a minimal surface in $M_d$. Suppose $S$ cannot be isotoped to be disjoint from $F_i$ for any $i$. We will show that $g(S) > h$, proving the theorem. Pick a point $p_i \in F_i \cap S$ ($i=1,\dots,d$) and let $B_i$ be the set $\{p \in M_d | d(p,p_i) < R\}$. By choice of $R$, for each $i$, $B_i$ is an embedded ball and the preimages of $\widehat{F}$ separate these balls; hence for $i \neq j$ we see that $B_i \cap B_j = \emptyset$. $S \cap B_i$ is a minimal surface in $B_i$ that intersects its center and by \fullref{pro:monotonicity} (the Monotonicity Principle) has area at least $a(R)$. Summing these areas we see that the area of $S$ satisfies: \begin{eqnarray*} \mbox{Area of } S &\geq& d \cdot a(R) \\ &\geq& n \cdot a(R) \\ &>& \frac{2\pi (2h-2)}{a(R)} \cdot a(R) \\ &=& 2 \pi (2h-2) \end{eqnarray*} But a minimal surface in a hyperbolic manifold has curvature $\leq -1$ and hence by the Gauss--Bonnet Theorem, the area of $S \leq -2 \pi \chi(S) = 2\pi(2g(S) - 2)$. Hence, the genus of $S$ is greater than $h$. \end{proof} \begin{rmkk}[Suggested project]{\rm In \fullref{thm:main} we treat the covers dual to a non-separating essential surface (denoted $M_d$ there). In the section titled ``Generalization'' of \cite{lackenby}, Lackenby shows (among other things) how to amalgamate along non-separating surfaces. Do his construction and \fullref{thm:main} give useful bounds on the genus of $M_d$, analogous to \fullref{thm:rubinstein}? }\end{rmkk} \section{Lackenby's Theorem} \label{sec:lackenby} Lackenby studied the Heegaard genus of manifolds containing separating essential surfaces. Here too, the result is asymptotic. We begin by explaining the set up. Let $N_1$ and $N_2$ be simple manifolds with $\partial N_1 \cong \partial N_2$ a connected surface of genus $g \geq 2$ (that is, $\partial N_1$ and $\partial N_2$ are homeomorphic). Let $S$ be a surface of genus $g$ and $\psi_i\co S \to \partial N_i$ parameterizations of the boundaries ($i=1,2$). Let $f\co S \to S$ be a pseudo-Anosov map. For any $n$ we construct the map $f_n = \psi_2 \circ f^n \circ (\psi_1)^{-1}\co \partial N_1 \to \partial N_2$. By identifying $\partial N_1$ with $\partial N_2$ by the map $f_n$ we obtain a closed hyperbolic manifold $M_n$.
Let $S \subset M_n$ be the image of $\partial N_1 =\partial N_2$. With this we are ready to state Lackenby's Theorem: \begin{thm}[Lackenby \cite{lackenby}] \label{thm:lackenby} With notation as in the previous paragraph, for any $h$ there exists $N$ so that for any $n \geq N$ any Heegaard surface of genus at most $h$ for $M_n$ weakly reduces to $S$. In particular, by setting $h = g(N_1) + g(N_2) - g(S)$ we see that there exists $N$ so that if $n \geq N$ then $g(M_n) = g(N_1) + g(N_2) - g(S)$. \end{thm} \begin{proof}[Sketch of proof] As in Sections~\ref{sec:reduction} and \ref{sec:main}, the proof has two parts which we bring here as two claims: \begin{clm} Suppose that every minimal surface in $M_n$ of genus at most $h$ can be homotoped to be disjoint from $S$. Then any Heegaard surface of genus at most $h$ weakly reduces to $S$. In particular, if $h \geq g(N_1) + g(N_2) - g(S)$ then $g(M_n) = g(N_1) + g(N_2) - g(S)$. \end{clm} \begin{clm} There exists $N$ so that if $n \geq N$ then any minimal surface of genus at most $h$ in $M_n$ can be homotoped to be disjoint from $S$. \end{clm} Clearly, Claims~1 and~2 imply Lackenby's Theorem. We now sketch their proofs. We paraphrase Lackenby's proof of Claim~1: let $\Sigma$ be a Heegaard surface of genus at most $h$. Then by Scharlemann and Thompson \cite{st-untel} $\Sigma$ untelescopes to a collection of connected surfaces $F_i$ and $\Sigma_j$ where $\cup_i F_i$ is an essential surface (with $F_i$ its components) and $\Sigma_j$ are strongly irreducible Heegaard surfaces for the components of $M_n$ cut open along $\cup_i F_i$; in particular $M_n$ cut open along $(\cup_i F_i) \cup (\cup_j \Sigma_j)$ consists of compression bodies and the images of the $F_i$'s form $\partial_-$ of these compression bodies. Since $F_i$ and $\Sigma_j$ are obtained by compressing $\Sigma$, they all have genus less than $h$. By \cite{schoen-yau}, \cite{freedman-hass-scott}, and \cite{pitts-rubinstein} the surfaces $F_i$ and $\Sigma_j$ can be made minimal. We explain this process here: since $F_i$ are essential surfaces they can be made minimal by \cite{schoen-yau} (see also \cite{freedman-hass-scott}). Next, since the $\Sigma_j$'s are strongly irreducible Heegaard surfaces for the components of $M_n$ cut open along $\cup_i F_i$, each $\Sigma_j$ can be made minimal within its component by \cite{pitts-rubinstein} (see also \cite{colding-lellis}). Note that the surfaces $F_i$ and $\Sigma_j$ are disjointly embedded. By assumption, $S$ can be isotoped to be disjoint from every $F_i$ and every $\Sigma_j$. Therefore, $S$ is an essential closed surface in a compression body and must be parallel to a component of $\partial_-$. Therefore, for some $i$, $S$ is isotopic to $F_i$. In Rieck and Kobayashi \cite[Proposition~2.13]{rieck-kobayashi} it was shown that if $\Sigma$ untelescopes to the essential surface $\cup_i F_i$, then $\Sigma$ weakly reduces to any {\it connected separating} component of $\cup_i F_i$; therefore $\Sigma$ weakly reduces to $S$. This proves the first part of Claim~1. Since $S$ is connected any minimal genus Heegaard splittings for $N_1$ and $N_2$ can be amalgamated (the converse of weak reduction \cite{schultens}). By amalgamating minimal genus Heegaard surfaces we see that for any $n$, $g(M_n) \leq g(N_1) + g(N_2) - g(S)$.
By applying the first part of Claim~1 with $h = g(N_1) + g(N_2) - g(S)$ we see that for sufficiently large $n$, a minimal genus Heegaard surface for $M_n$ weakly reduces to $S$; by \cite[Proposition~2.8]{rieck-kobayashi} $g(M_n)= g(N_1) + g(N_2) - g(S)$, completing the proof of Claim~1. We now sketch the proof of Claim~2. Fix $h$ and assume that for arbitrarily high values of $n$, $M_n$ contains a minimal surface (say $P_n$) of genus $g(P_n) \leq h$ that cannot be homotoped to be disjoint from $S$. Let $M_f$ be the bundle over $S^1$ with monodromy $f$ and fix two disjoint fibers $F$, $\widehat{F} \subset M_f$. Let $R$ be as in \fullref{sec:main}. Let $M_{\infty}$ be the infinite cyclic cover dual to the fiber. Soma \cite{soma} showed that there are points $x_n \in M_n$ so that $(M_n,x_n)$ converges in the sense of Hausdorff--Gromov to the manifold $M_\infty$. These points are near the minimal surface $S$, and the picture is that $M_n$ has a very long ``neck'' that looks more and more like $M_\infty$. For sufficiently large $n$ there is a ball $B(r) \subset M_n$ for arbitrarily large $r$ that is $1-\epsilon$ isometric to $B_\infty(r) \subset M_\infty$. Note that $B_\infty(r)$ contains arbitrarily many lifts of $F$ separated by lifts of $\widehat{F}$. Since $P_n$ cannot be isotoped to be disjoint from $S$, its image in $M_\infty$ cannot be isotoped off the preimages of $F$. As in \fullref{sec:main} we conclude that the images of $P_n$ have arbitrarily high area. However, areas cannot be distorted arbitrarily by a map that is $1-\epsilon$ close to an isometry. Hence the areas of $P_n$ are unbounded, contradicting Gauss--Bonnet; this contradiction completes our sketch. \end{proof} In \cite{souto} Souto generalized Lackenby's result (see also a recent paper by Li \cite{li}). Although his work is beyond the scope of this paper, we give a brief description of it here. Instead of powers of maps, Souto used a combinatorial condition on the gluings: fixing essential curves $\alpha_i \subset N_i$ ($i=1,2$) and $h>0$, Souto shows that if $\phi\co N_1 \to N_2$ fulfills the condition ``$d_{\mathcal{C}}(\phi(\alpha_1), \alpha_2)$ is sufficiently large'' then any Heegaard splitting for $N_1 \cup_\phi N_2$ of genus at most $h$ weakly reduces to $S$. The distance Souto uses---$d_{\mathcal{C}}$---is the distance in the ``curve complex'' (as defined by Hempel \cite{hempel-distance}) and {\it not} the hyperbolic distance. Following Kobayashi \cite{kobayashi-height} Hempel showed that raising a fixed monodromy $\phi$ to a sufficiently high power does imply Souto's condition. Hence Souto's condition is indeed weaker than Lackenby's, and it is in fact too weak for us to expect Soma-type convergence to $M_\infty$. However, using Minsky \cite{minsky} Souto shows that given a sequence of manifolds $M_{\phi_n}$ with $d_{\mathcal{C}}(\phi_n(\alpha_1), \alpha_2) \to \infty$, the manifolds $M_{\phi_n}$ are ``torn apart'' and the cores of $N_1$ and $N_2$ become arbitrarily far apart. For a precise statement \cite[Proposition~6]{souto}. Souto concludes that for sufficiently large $n$, any minimal surface for $M_n$ that intersects both $N_1$ and $N_2$ has high area and therefore genus greater than $h$. Souto's Theorem now follows from Claim~1 above. A similar result was obtained by Namazi and Souto \cite{namazi-souto} for gluing of handlebodies. 
They show that if $N_1$ and $N_2$ are genus $g$ handlebodies and $\partial N_1 \to \partial N_2$ is a generic pseudo-Anosov map (for a precise definition of ``generic'' in this case see \cite{namazi-souto}) then for any $\epsilon > 0$ and for large enough $n$ the manifold $M_{f^n}$ obtained by gluing $N_1$ to $N_2$ via $f^n$ admits a negatively curved metric with curvatures $K$ so that $-1 - \epsilon < K < -1 + \epsilon$. Namazi and Souto use this metric to conclude many things about $M_{f^n}$, for example, that both its Heegaard genus and its rank (that is, the number of generators needed for $\pi_1(M_{f^n})$) are exactly $g$. \appendix \section{Appendix: Short curves on round spheres: take one} \label{sec:cruves-on-spheres-1} In this section we prove the following proposition, which is a simple exercise in spherical geometry used in \fullref{sec:monotonicity}. Let $S^2(r)$ be a sphere of constant curvature $+(\frac{1}{r})^2$. We isometrically identify $S^2(r)$ with $\{(x,y,z) \in \mathbb{R}^3 | x^2 + y^2 + z^2 = r^2\}$ and refer to it as a round sphere of radius $r$. \begin{pro} \label{pro:great-circles} Let $S^2(r)$ be a round sphere of radius $r$ and $\gamma \subset S^2(r)$ a rectifiable closed curve. Suppose $l(\gamma) < 2\pi r$ (the length of great circles). Then $\gamma$ is disjoint from some great circle. \end{pro} \begin{rmk} The proof also shows that if $\gamma$ is a {\it smooth} curve that meets every great circle then $l(\gamma) = 2\pi r$ if and only if $\gamma$ is itself a great circle. \end{rmk} \begin{proof} Let $\gamma$ be a curve that intersects every great circle. Let $z_{\mathrm{min}}$ (for some $z_{\mathrm{min}} \in \mathbb{R}$) be the minimal value of the $z$--coordinate, taken over $\gamma$. Rotate $S^2(r)$ to maximize $z_{\mathrm{min}}$. If $z_{\mathrm{min}} > 0$ then $\gamma$ is disjoint from the equator, contradicting our assumption. We assume from now on $z_{\mathrm{min}} \leq 0$. Suppose first $z_{\mathrm{min}} = 0$. Suppose, for contradiction, that there exists a closed arc $\alpha$ on the equator so that $l(\alpha) = \pi r$ and $\alpha \cap \gamma = \emptyset$. By rotating $S^2(r)$ about the $z$--axis (if necessary) we may assume $\alpha = \{(x,y,0) \in S^2(r) | y \leq 0\}$. Then rotating $S^2(r)$ slightly about the $x$--axis pushes the points $\{(x,y,0) \in S^2(r) | y > 0\}$ above the $xy$--plane. By compactness of $\gamma$ and $\alpha$ there is some $\epsilon$ so that $d(\gamma,\alpha) > \epsilon$. Hence if the rotation is small enough, no point of $\gamma$ is moved to (or below) $\alpha$. Thus, after rotating $S^2(r)$, $z_{\mathrm{min}} > 0$, a contradiction. We conclude that every arc of the equator of length $\pi r$ contains a point of $\gamma$. Therefore there exists a sequence of points $p_i \in \gamma \cap \{(x,y,0)\}$ ($i=1,\dots,n$, for some $n \geq 2$), ordered by their order along the equator ({\it not} along $\gamma$), so that $d(p_i,p_{i+1})$ is at most half the equator (indices taken modulo $n$). The shortest path connecting $p_i$ to $p_{i+1}$ is an arc of the equator, and we conclude that $l(\gamma) \geq 2\pi r$ as required. If we assume, in addition, that $l(\gamma) = 2\pi r$ then either $\gamma$ is itself the equator or $\gamma$ consists of two arcs of great circles meeting at a pair of antipodal points. Note that this can in fact happen, but then $\gamma$ is not smooth. This completes the proof in the case $z_{\mathrm{min}} = 0$. Assume next $z_{\mathrm{min}} < 0$.
Let $c_{\mathrm{min}}$ be the circle of latitude of $S^2(r)$ at $z = z_{\mathrm{min}}$, and denote the length of $c_{\mathrm{min}}$ by $d_{\mathrm{min}}$. Suppose there is an open arc of $c_{\mathrm{min}}$ of length $\frac{1}{2} d_{\mathrm{min}}$ that does not intersect $\gamma$. Similarly to the above, by rotating $S^2(r)$ we may assume this arc is given by $\{(x,y,z_{\mathrm{min}}) \in c_{\mathrm{min}} | y < 0\}$. Then a tiny rotation about the $x$--axis increases the $z$--coordinate of all points $\{(x,y,z)| y \geq 0, \ z \leq 0\}$. As above, this increases $z_{\mathrm{min}}$, contradicting our choice of $z_{\mathrm{min}}$. Therefore there is a collection of points $p_i \in \gamma \cap c_{\mathrm{min}}$ ($i=1,\dots,n$, for some $n \geq 3$), ordered by their order along $c_{\mathrm{min}}$ ({\it not} along $\gamma$), so that $d(p_i,p_{i+1}) < \frac{1}{2} d_{\mathrm{min}}$ (indices taken modulo $n$). The shortest path connecting $p_i$ to $p_{i+1}$ is an arc of a great circle. However, such an arc has points with $z$--coordinate less than $z_{\mathrm{min}}$, and therefore cannot be a part of $\gamma$. The shortest path containing all the $p_i$'s within the cap $\{(x,y,z) \in S^2(r) | z \geq z_{\mathrm{min}}\}$ is the boundary of the cap, that is, $c_{\mathrm{min}}$ itself. Unfortunately, $l(c_{\mathrm{min}}) < 2\pi r$. Upper hemisphere to the rescue! $\gamma$ must have a point with $z$--coordinate at least $-z_{\mathrm{min}}$, for otherwise rotating $S^2(r)$ by $\pi$ about any horizontal axis would decrease $z_{\mathrm{min}}$. Then $l(\gamma)$ is at least the length of the shortest curve containing the $p_i$'s (which lie on $c_{\mathrm{min}}$, the circle at $z = z_{\mathrm{min}}$) and some point $p$ with $z$--coordinate at least $-z_{\mathrm{min}}$. Let $\gamma$ now denote such a shortest curve. By reordering the indices if necessary we may assume that $p$ is between $p_1$ and $p_2$. It is clear that moving $p$ so that its longitude is between the longitudes of $p_1$ and $p_2$ shortens $\gamma$ (note that since $d(p_1,p_2) < \frac{1}{2} d_{\mathrm{min}}$ this is well-defined). We now see that $\gamma$ intersects the equator in two points, say $x_1$ and $x_2$. Replacing the two arcs of $\gamma$ above the equator by the short arc of the equator decreases length. It is not hard to see that the same holds when we replace the arc of $\gamma$ below the equator with the long arc of the equator. We conclude that $l(\gamma) > l(\mbox{equator}) = 2\pi r$. \end{proof} \section{Appendix: Short curves on round spheres: take two} \label{sec:cruves-on-spheres-2} We now give a second proof of \fullref{pro:great-circles}. For convenience of presentation we take $S^2$ to be a sphere of radius 1. Let $\gamma$ be a closed curve that intersects every great circle. Every great circle is defined by two antipodal points; for example, the equator is defined by the poles. Thus, the space of great circles is $\mathbb{R}P^2$. Since $S^2$ has area $4\pi$, $\mathbb{R}P^2$ has area $2\pi$. Let $f\co S^2 \to \mathbb{R}P^2$ be the ``map'' that assigns to a point $p$ all the great circles that contain $p$; thus, for example, if $p$ is the north pole then $f(p)$ is the projection of the equator to $\mathbb{R}P^2$. Let $C$ be a great circle. We claim that $\gamma$ meets $C$ in at least two points. (If $\gamma$ is not embedded then the two may be the same point of $C$.) Suppose, for a contradiction, that $\gamma$ meets some great circle (say the equator) in one point only (say $(1,0,0)$). By the Jordan Curve Theorem, $\gamma$ does not cross the equator. 
By tilting the equator slightly about the $y$--axis it is easy to obtain a great circle disjoint from $\gamma$. Hence we see that $\gamma$ intersects every great circle at least twice. Equivalently, $f(\gamma)$ covers $\mathbb{R}P^2$ at least twice. Let $\alpha_i$ be a small arc of a great circle, of length $l(\alpha_i)$; note that this length is exactly the angle $\alpha_i$ subtends in radians. Say for convenience $\alpha_i$ starts at the north pole and goes towards the equator. The points that define great circles that intersect $\alpha_i$ are given by tilting the equator by up to $l(\alpha_i)$ radians. This gives a set whose area is $l(\alpha_i)/\pi$ of the total area of $S^2$. Since the area of $S^2$ is $4\pi$, it gives a set of area $4l(\alpha_i)$. This set is invariant under the antipodal map, and so projecting to $\mathbb{R}P^2$ the area is cut in half, and we get: \begin{equation} \label{eq:areas_and_lengths} \mbox{Area of } f(\alpha_i) = 2l(\alpha_i). \end{equation} Fix $\epsilon > 0$. Let $\alpha$ be an approximation of $\gamma$ by small arcs of great circles, say $\{\alpha_i\}_{i=1}^n$ are the segments of $\alpha$. We require $\alpha$ to approximate $\gamma$ well in the following two senses: \begin{enumerate} \item $l(\alpha) \leq l(\gamma) + \epsilon$. \item Under $f$, $\alpha$ covers $\mathbb{R}P^2$ as well as $\gamma$ does (except, perhaps, for a set of measure $\epsilon$); i.e., the area of $f(\alpha) \geq $ the area of $f(\gamma) - \epsilon$ (area measured with multiplicity). \end{enumerate} From this we get: \begin{eqnarray*} 4\pi - \epsilon &=& \mbox{twice the area of } \mathbb{R}P^2 - \epsilon \\ &\leq& \mbox{the area of } f(\gamma) - \epsilon \\ &\leq& \mbox{area of } f(\alpha) \\ &=& \sum_{i=1}^n \mbox{area of }f(\alpha_i) \\ &=& \sum_{i=1}^n 2l(\alpha_i) \\ &=& 2l(\alpha) \\ &\leq& 2(l(\gamma) + \epsilon). \end{eqnarray*} (In the fifth line we use Equation~\eqref{eq:areas_and_lengths}.) Since $\epsilon$ was arbitrary, dividing by 2 we get the desired result: $2\pi \leq l(\gamma)$. \end{document}
\begin{document} \title{On Equivalence of M$\sp{\natural}$-Concavity of a Set Function and Submodularity of Its Conjugate Function} \begin{abstract} A fundamental theorem in discrete convex analysis states that a set function is M$\sp{\natural}$-concave if and only if its conjugate function is submodular. This paper gives a new proof of this fact. \end{abstract} {\bf Keywords}: combinatorial optimization, discrete convex analysis, M$\sp{\natural}$-concave function, valuated matroid, submodularity, conjugate function \section{Introduction} \label{SCintro} Let $f: 2^{N} \to {\mathbb{R}} \cup \{ -\infty \}$ be a set function on a finite set $N = \{ 1,2,\ldots, n \}$, where the effective domain ${\rm dom\,} f = \{ X \subseteq N \mid f(X) > -\infty \}$ is assumed to be nonempty. The conjugate function $g: {\mathbb{R}}\sp{N} \to \mathbb{R}$ of $f$ is defined by \begin{align} g(p) &= \max\{ f(X) - p(X) \mid X \subseteq N \} \qquad ( p \in {\mathbb{R}}\sp{N}), \label{conjcave2vex01} \end{align} where $p(X) = \sum_{i \in X} p_{i}$ (see Remark \ref{RMconjdef}). A set function $f: 2^{N} \to {\mathbb{R}} \cup \{ -\infty \}$ with ${\rm dom\,} f \not= \emptyset$ is called {\em M$\sp{\natural}$-concave} \cite{Mdcasiam,MS99gp} if, for any $X, Y \in {\rm dom\,} f$ and $i \in X \setminus Y$, it holds that \footnote{ We use short-hand notations such as $X - i = X \setminus \{ i \}$, $Y + i = Y \cup \{ i \}$, $X - i + j =(X \setminus \{ i \}) \cup \{ j \}$, and $Y + i - j =(Y \cup \{ i \}) \setminus \{ j \}$. } \begin{equation} \label{mconcav1} f( X) + f( Y ) \leq f( X - i ) + f( Y + i ), \end{equation} or there exists some $j \in Y \setminus X$ such that \begin{equation} \label{mconcav2} f( X) + f( Y ) \leq f( X - i + j) + f( Y + i - j). \end{equation} Since $f( X) + f( Y ) > -\infty$ for $X, Y \in {\rm dom\,} f$, (\ref{mconcav1}) requires $X - i, Y + i \in {\rm dom\,} f$, and (\ref{mconcav2}) requires $X - i + j, Y + i - j \in {\rm dom\,} f$. A function $g: \mathbb{R}\sp{N} \to \mathbb{R}$ is called {\em submodular} if it satisfies the following inequality: \begin{equation} \label{gsubmR} g(p) + g(q) \geq g(p \vee q) + g(p \wedge q) \qquad (p, q \in {\mathbb{R}}\sp{N}), \end{equation} where $p \vee q$ and $p \wedge q$ are the componentwise maximum and minimum of $p$ and $q$, respectively. The following theorem states one of the most fundamental facts in discrete convex analysis \cite{Mdca98,Mdcasiam}, namely that M$\sp{\natural}$-concavity of a set function $f$ can be characterized by submodularity of the conjugate function $g$. \begin{theorem} \label{THmnatcavbyconjfn01} A set function $f: 2\sp{N} \to \mathbb{R} \cup \{ -\infty \}$ with ${\rm dom\,} f \not= \emptyset$ is M$\sp{\natural}$-concave if and only if its conjugate function $g: {\mathbb{R}}\sp{N} \to \mathbb{R}$ is submodular. \finbox \end{theorem} This theorem was first given by Danilov and Lang \cite{DL01gs} in Russian; it is cited by Danilov, Koshevoy, and Lang \cite{DKL03gs}. It can also be derived through a combination of Theorem 10 of Ausubel and Milgrom \cite{AM02auc} with the equivalence of gross substitutability and {M$^{\natural}$}-convexity due to Fujishige and Yang \cite{FY03gs}. A self-contained detailed proof can be found in a recent survey paper by Shioura and Tamura \cite[Theorem 7.2]{ST15jorsj}. The objective of this paper is to give yet another proof of the above theorem. The proof does not use polyhedral-geometric characterizations of {M$^{\natural}$}-convex sets and functions, nor does it depend on the M-L conjugacy theorem in discrete convex analysis. 
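\begin{remark} \rm As a quick computational illustration of Theorem \ref{THmnatcavbyconjfn01}, one may brute-force both the exchange property (\ref{mconcav1})--(\ref{mconcav2}) and the submodularity inequality (\ref{gsubmR}) on a small example. The following Python sketch does this for an ad hoc valuation (the sum of the two largest weights in $X$, a weighted rank function of a uniform matroid); the ground set, the weights, the random price vectors, and the numerical tolerance are illustrative choices only.
\begin{verbatim}
import itertools, random

N = range(4)
w = [5.0, 3.0, 2.0, 1.0]   # illustrative weights
EPS = 1e-9                 # numerical tolerance

def f(X):
    # sum of the two largest weights in X (weighted rank of a uniform matroid)
    return sum(sorted((w[i] for i in X), reverse=True)[:2])

subsets = [frozenset(S) for r in range(len(w) + 1)
           for S in itertools.combinations(N, r)]

def exchange_property_holds():
    # checks (mconcav1) or (mconcav2) for all X, Y and all i in X \ Y
    for X in subsets:
        for Y in subsets:
            for i in X - Y:
                ok = f(X - {i}) + f(Y | {i}) >= f(X) + f(Y) - EPS
                ok = ok or any(f((X - {i}) | {j}) + f((Y | {i}) - {j})
                               >= f(X) + f(Y) - EPS for j in Y - X)
                if not ok:
                    return False
    return True

def g(p):
    # conjugate function g(p) = max over X of f(X) - p(X)
    return max(f(X) - sum(p[i] for i in X) for X in subsets)

assert exchange_property_holds()
random.seed(0)
for _ in range(1000):
    p = [random.uniform(-3.0, 3.0) for _ in N]
    q = [random.uniform(-3.0, 3.0) for _ in N]
    join = [max(a, b) for a, b in zip(p, q)]   # p v q
    meet = [min(a, b) for a, b in zip(p, q)]   # p ^ q
    assert g(p) + g(q) >= g(join) + g(meet) - EPS
print("exchange property and submodularity of the conjugate verified")
\end{verbatim}
Such a finite check is, of course, no substitute for the proof below; it is meant only to make the two defining inequalities concrete on a small instance. \finbox \end{remark}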
Section~\ref{SCprelim} offers preliminaries from discrete convex analysis, and Section~\ref{SCproof} presents the proof. Section~\ref{SClocexcproof} is a technical appendix. \begin{remark} \rm \label{RMconjdef} The definition (\ref{conjcave2vex01}) of the conjugate function $g(p)$ here is consistent with its interpretation in economics. If $f(X)$ denotes the utility (or valuation) function for a bundle $X$, then $g(p)$ in (\ref{conjcave2vex01}) is the indirect utility function under the price vector $p$. In convex analysis, however, the conjugate of a concave function $f$ is more often defined as $f\sp{*}(p) = \min_{x} \{ p\sp{\top} x - f(x) \}$. \finbox \end{remark} \section{Preliminaries on M-concave Functions} \label{SCprelim} A set function $f: 2^{N} \to {\mathbb{R}} \cup \{ -\infty \}$ with ${\rm dom\,} f \not= \emptyset$ is called a {\em valuated matroid} \cite{DW90, DW92} if, for any $X, Y \in {\rm dom\,} f$ and $i \in X \setminus Y$, there exists some $j \in Y \setminus X$ such that \begin{equation} \label{valmatexc1} f( X) + f( Y ) \leq f( X - i + j) + f( Y + i - j). \end{equation} This property is referred to as the {\em exchange property}. A valuated matroid is also called an {\em M-concave set function} \cite{Mstein96,Mdcasiam}. The effective domain $\mathcal{B}$ of an M-concave function forms the family of bases of a matroid, and in particular, $\mathcal{B}$ consists of equi-cardinal subsets, i.e., $|X| = |Y|$ for all $X, Y \in \mathcal{B}$. As is obvious from the definitions, M-concave functions form a subclass of M$\sp{\natural}$-concave functions. \begin{proposition} \label{PRmcav=mnatcav+equicard} A set function $f$ is M-concave if and only if it is an M$\sp{\natural}$-concave function and $|X| = |Y|$ for all $X, Y \in {\rm dom\,} f$. \finbox \end{proposition} The concepts of M-concave and M$\sp{\natural}$-concave functions are in fact equivalent. With a function $f: 2^{N} \to {\mathbb{R}} \cup \{ -\infty \}$, we associate a function $\tilde{f}$ with an equi-cardinal effective domain. Denote by $r$ and $r'$ the maximum and minimum, respectively, of $|X|$ for $X \in {\rm dom\,} f$. Let $s \geq r-r'$, $S = \{ n+1,n+2,\ldots, n+s \}$, and $\tilde{N} = N \cup S = \{ 1,2,\ldots, \tilde n \}$, where $\tilde n =n+s$. We define $\tilde{f}: 2^{\tilde N} \to {\mathbb{R}} \cup \{ -\infty \}$ by \begin{align} \label{assocMdef} \tilde{f}(Z) = \left\{ \begin{array}{ll} f(Z \cap N) & (|Z| = r) , \\ -\infty & (\mbox{otherwise}) . \\ \end{array} \right. \end{align} Then, for $X \subseteq N$ and $U \subseteq S$, we have $\tilde{f}(X \cup U) = f(X)$ if $|U|=r - |X|$. \begin{proposition} \label{PRmnatequicardvalmat} A set function $f$ is M$\sp{\natural}$-concave if and only if $\tilde{f}$ is M-concave. \end{proposition} \begin{proof} This fact is well known among experts. Since $f$ is a projection of $\tilde{f}$, the ``if'' part follows from \cite[Theorem 6.15 (2)]{Mdcasiam}. A proof of the ``only-if'' part can be found, e.g., in \cite{Mmultexcstr17}. \end{proof} The exchange property for M-concave set functions is in fact equivalent to a local exchange property under some assumption on the effective domain. We say that a family $\mathcal{B}$ of equi-cardinal subsets is {\em connected} if, for any distinct $X, Y \in \mathcal{B}$, there exist $i \in X \setminus Y$ and $j \in Y \setminus X$ such that $Y + i - j \in \mathcal{B}$. 
As is easily seen, $\mathcal{B}$ is connected if and only if, for any distinct $X, Y \in \mathcal{B}$ there exist distinct $i_{1}, i_{2}, \ldots, i_{m} \in X \setminus Y$ and $j_{1}, j_{2}, \ldots, j_{m} \in Y \setminus X$, where $m = |X \setminus Y| = |Y \setminus X|$, such that $Y \cup \{ i_{1}, i_{2}, \ldots, i_{k} \} \setminus \{ j_{1}, j_{2}, \ldots, j_{k} \} \in \mathcal{B}$ for $k=1,2,\ldots,m$. The following theorem is a strengthening by Shioura \cite[Theorem 2]{Shi00lev} of the local exchange theorem of Dress--Wenzel \cite{DWperf92} and Murota \cite{Mmax97} (see also \cite[Theorem 5.2.25]{Mspr2000}, \cite[Theorem 6.4]{Mdcasiam}) \footnote{ In \cite{DWperf92,Mmax97}, the effective domain is assumed to be a matroid basis family, and the assumption is weakened to connectedness in \cite{Shi00lev}. It is well known that a matroid basis family is connected. }. \begin{theorem} \label{THmcavlocexc01} A set function $f: 2\sp{N} \to {\mathbb{R}} \cup \{ -\infty \}$ is M-concave if and only if {\rm (i)} ${\rm dom\,} f$ is a connected nonempty family of equi-cardinal sets, and {\rm (ii)} for any $X, Y \in {\rm dom\,} f$ with $|X \setminus Y|=2$, there exist some $i \in X \setminus Y$ and $j \in Y \setminus X$ for which {\rm (\ref{valmatexc1})} holds. \end{theorem} \begin{proof} The ``only-if'' part is obvious. For the ``if'' part, the proof of Theorem 5.2.25 in \cite[pp.295--297]{Mspr2000} works with the only modification in the proof of Claim 2 there. Since the proof is omitted in \cite{Shi00lev}, we include the proof in Section~\ref{SClocexcproof}. \end{proof} \section{A Proof of Theorem \ref{THmnatcavbyconjfn01}} \label{SCproof} We prove the characterization of M$\sp{\natural}$-concavity by submodularity of the conjugate function (Theorem \ref{THmnatcavbyconjfn01}). Let $f: 2\sp{N} \to {\mathbb{R}} \cup \{ -\infty \}$ be a set function with ${\rm dom\,} f \not= \emptyset$, and $g: {\mathbb{R}}\sp{N} \to \mathbb{R}$ be its conjugate function, which is defined as $g(p) = \max\{ f(X) - p(X) \mid X \subseteq N \}$ in (\ref{conjcave2vex01}). We first show that {M$^{\natural}$}-concavity of $f$ implies submodularity of $g$. \begin{lemma} \label{LMmnatTOconjsubm01} If $f$ is M$\sp{\natural}$-concave, then $g$ is submodular. \end{lemma} \begin{proof} As is well known, $g$ is submodular if and only if \begin{equation} \label{mTOcjsbm01prf1} g( p+ a \unitvec{i}) + g( p+ b \unitvec{j}) \geq g( p) + g( p + a \unitvec{i}+ b \unitvec{j}) \end{equation} for any $p \in {\mathbb{R}}\sp{N}$, distinct $i,j \in N$, and $a, b \geq 0$, where $\unitvec{i}$ and $\unitvec{j}$ are the $i$th and $j$th unit vectors, respectively. For simplicity of notation we assume $p={\bf 0}$, and write $p\sp{i}= a \unitvec{i}$, $p\sp{j}= b \unitvec{j}$, and $p\sp{ij}= a \unitvec{i}+ b \unitvec{j}$. Take $X, Y \subseteq N$ such that \[ g(p) = f(X) - p(X) = f(X), \quad g( p\sp{ij}) = f(Y) - p\sp{ij}(Y) = f(Y) - a | Y \cap \{ i \} | - b | Y \cap \{ j \} |. \] Note also that $g( p\sp{i}) = \max\ \{ f(Z) - a | Z \cap \{ i \} | \mid Z \subseteq N \} $ and similarly for $g( p\sp{j})$. \begin{itemize} \item If $| Y \cap \{ i,j \} | =2$, then $g(p)+g( p\sp{ij}) = ( f(X) - a ) + ( f(Y) - b ) \leq ( f(X) - a | X \cap \{ i \} | ) + ( f(Y) - b | Y \cap \{ j \} | ) \leq g( p\sp{i}) + g( p\sp{j})$. \item If $| Y \cap \{ i,j \} | =1$, we may assume $i \in Y$ and $j \not\in Y$. Then $g(p)+g( p\sp{ij}) = f(X) + ( f(Y) - a) \leq ( f(X) - a| X \cap \{ i \} | ) + ( f(Y) - b | Y \cap \{ j \} | ) \leq g( p\sp{i}) + g( p\sp{j})$. 
\item If $| Y \cap \{ i,j \} | =0$, then $g(p)+g( p\sp{ij}) = f(X) + f(Y)$. If $i \not\in X$, we have $f(X) + f(Y) = ( f(X) - a | X \cap \{ i \} | ) + ( f(Y) - b | Y \cap \{ j \} | ) \leq g( p\sp{i}) + g( p\sp{j})$. Similarly, if $j \not\in X$. Suppose $\{ i,j \} \subseteq X$. By the {M$^{\natural}$}-concave exchange property, we have $f(X) + f(Y) \leq f(X') + f(Y')$, where $(X',Y')=(X-i, Y+i)$ or $(X',Y')=(X-i+k, Y+i-k)$ for some $k \in Y \setminus X$. Since $i \not\in X'$ and $j \not\in Y'$, we have $f(X') + f(Y') = ( f(X') - a| X' \cap \{ i \} | ) + ( f(Y') - b | Y' \cap \{ j \} | ) \leq g( p\sp{i}) + g( p\sp{j})$. \end{itemize} \end{proof} Next, we show, in two steps, that submodularity of $g$ implies {M$^{\natural}$}-concavity of $f$. We treat the M-concave case in Lemmas \ref{LMmnatFRconjsubm01-0} to \ref{LMmnatFRconjsubm01-2}, and the {M$^{\natural}$}-concave case in Lemma~\ref{LMmnatFRconjsubm01-4}. It is emphasized that the combinatorial essence is captured in Lemma~\ref{LMmnatFRconjsubm01-1} for the M-concave case. \begin{lemma} \label{LMmnatFRconjsubm01-0} If ${\rm dom\,} f$ is a family of equi-cardinal sets and $g$ is submodular, then ${\rm dom\,} f$ is connected. \end{lemma} \begin{proof} To prove this by contradiction, suppose that ${\rm dom\,} f$ is not connected. Then there exist $X, Y \in {\rm dom\,} f$ such that $|X \setminus Y|=|Y \setminus X| \geq 2$ and there exists no $Z \in {\rm dom\,} f \setminus \{ X, Y \}$ satisfying $X \cap Y \subseteq Z \subseteq X \cup Y$. Let $i_{0}$ be any element of $X \setminus Y$ and $j_{0}$ be any element of $Y \setminus X$. Let $M$ be a sufficiently large positive number in the sense that $M \gg n$ and $M \gg F$ for $F = \max \{ |f(W)| \mid W \in {\rm dom\,} f \}$. Define $p, q \in {\mathbb{R}}^{N}$ by \begin{align*} & p_{i} = \left\{ \begin{array}{ll} -M & (i = i_{0}) , \\ 0 & (i \in (X \setminus Y) \setminus \{ i_{0} \}) , \\ \end{array} \right. \quad q_{i} = \left\{ \begin{array}{ll} 0 & (i = i_{0}) , \\ -M & (i \in (X \setminus Y) \setminus \{ i_{0} \}) ; \\ \end{array} \right. \end{align*} \begin{align*} & p_{i} = q_{i} = \left\{ \begin{array}{ll} -M & (i = j_{0}) , \\ 0 & (i \in (Y \setminus X) \setminus \{ j_{0} \}) , \\ - M\sp{2} & (i \in X \cap Y ), \\ + M\sp{2} & (i \in N \setminus (X \cup Y) ) . \\ \end{array} \right. \end{align*} Denote $m=|X \setminus Y|$ and $C = M\sp{2} |X \cap Y|$. Since there is no $Z \in {\rm dom\,} f \setminus \{ X, Y \}$ satisfying $X \cap Y \subseteq Z \subseteq X \cup Y$, we have \begin{align*} g(p) &= \max\{ f(X) - p(X), f(Y)-p(Y) \} \notag \\ &= \max\{ f(X) + M, f(Y)+ M \} + C \notag \\ &\leq F + M + C, \\ g(q) &= \max\{ f(X) - q(X), f(Y)-q(Y) \} \notag \\ &= \max\{ f(X) + (m-1)M, f(Y)+ M \} + C \notag \\ &\leq F + (m-1)M + C, \end{align*} and therefore \begin{equation} \label{domconnprfgpgq} g(p) + g(q ) \leq 2 F + mM + 2C. \end{equation} Similarly, we have \begin{align*} g(p \vee q) &= \max\{ f(X) - (p \vee q) (X), f(Y)- (p \vee q)(Y) \} \notag \\ &= \max\{ f(X) , f(Y)+ M \} + C \notag \\ & = f(Y)+ M + C, \\ g(p \wedge q ) &= \max\{ f(X) - (p \wedge q )(X), f(Y)-(p \wedge q )(Y) \} \notag \\ &= \max\{ f(X) + m M, f(Y)+ M \} + C \notag \\ & = f(X) + mM + C, \end{align*} and therefore \begin{equation} \label{domconnprfgpVqgpAq} g(p \vee q) + g(p \wedge q ) = f(X) + f(Y) + (m+1)M + 2C. 
\end{equation} Since $M \gg F$, it follows from (\ref{domconnprfgpgq}) and (\ref{domconnprfgpVqgpAq}) that $g(p) + g(q ) < g(p \vee q) + g(p \wedge q )$, which contradicts the submodularity of $g$. \end{proof} \begin{lemma} \label{LMmnatFRconjsubm01-1} If ${\rm dom\,} f$ is a family of equi-cardinal sets and $g$ is submodular, then $f$ has the local exchange property {\rm (ii)} in Theorem \ref{THmcavlocexc01}. \end{lemma} \begin{proof} To prove by contradiction, suppose that the local exchange property fails for \RED{ some $X, Y \in {\rm dom\,} f$ } with $|X \setminus Y|=|Y \setminus X|=2$. To simplify notations we assume $X \setminus Y = \{ 1 , 2 \}$ and $Y \setminus X = \{ 3 , 4 \}$, and write $\alpha_{ij} = f((X \cap Y) + i + j)$, etc. Then we have $\alpha_{12} + \alpha_{34} > \max\{\alpha_{13} + \alpha_{24}, \alpha_{14} + \alpha_{23}\}$ by the failure of the local exchange property. Consider an undirected graph $G =(V,E)$ on vertex set $V = \{ 1,2,3,4 \}$ and edge set $E = \{ (i,j) \mid \alpha_{ij} > -\infty \}$. The graph $G$ has a unique maximum weight perfect matching $M = \{ (1,2), (3,4) \}$ with respect the edge weight $\alpha_{ij}$. By duality (see Remark \ref{RMmnatconj01LPdual} below) there exists $\hat p = (\hat p_{1}, \hat p_{2}, \hat p_{3}, \hat p_{4}) \in {\mathbb{R}}\sp{4}$ such that $\alpha_{12} = \hat p_{1} + \hat p_{2}$, $\alpha_{34} = \hat p_{3} + \hat p_{4}$, and $\alpha_{ij} < \hat p_{i} + \hat p_{j}$ if $(i,j) \not= (1,2), (3,4)$. Define $\beta_{ij} = \alpha_{ij} - \hat p_{i} - \hat p_{j}$, to obtain $\beta_{12} = \beta_{34} = 0$ and $\beta_{ij} < 0$ if $(i,j) \not= (1,2), (3,4)$. \RED{ Let $B=\min \{ |\beta_{ij}| \mid (i,j) \ne (1,2), (3,4) \}$ $(>0)$. } To focus on \RED{ $I = \{ 1,2,3,4 \}$ } we partition $p$ into two parts as $p = (p', p'')$ with \RED{ $p' \in {\mathbb{R}}^{I}$ and $p'' \in {\mathbb{R}}^{ N \setminus I }$. } We express $p' = \hat p + q$ with \RED{ $q = (q_{1}, q_{2}, q_{3}, q_{4}) \in {\mathbb{R}}\sp{I}$, } while fixing $p''$ to the vector $\bar p$ defined by \begin{align*} \bar p_{i} = \left\{ \begin{array}{ll} - M & (i \in X \cap Y ), \\ + M & (i \in N \setminus (X \cup Y) ) \\ \end{array} {\rm ri\,}ght. \end{align*} with a sufficiently large positive number $M$. Let $h(q) = g(\hat p + q, \bar p) - M|X \cap Y|$. By the choice of $\hat p$ and $\bar p$ as well as the assumed equi-cardinality of ${\rm dom\,} f$, we have \RED{ \begin{align*} h(q) &= g(\hat p + q, \bar p) - M|X \cap Y| \\ &=\max \{ f(Z) - (\hat p + q)(Z \cap I) - \bar p(Z \setminus I) \mid Z \subseteq N \} - M|X \cap Y| \\ &=\max \{ f(Z) - (\hat p + q)(Z \cap I) \mid (X \cap Y) \subseteq Z \subseteq (X \cap Y) \cup I \} \\ &=\max \{ f((X \cap Y) \cup J) - (\hat p + q)(J) \mid J \subseteq I, |J|=2 \} \\ &= \max \{ \beta_{ij} - q_{i} - q_{j} \mid i,j \in I, \ i \ne j \} \end{align*} } if $\| q \|_{\infty}$ is small enough compared with $M$. \RED{ Furthermore, if $\| q \|_{\infty} \leq B/4$, we have \[ h(q) = \max \{ - q_{1} - q_{2}, - q_{3} - q_{4} \}. \] } Let $a$ \RED{ be a (small) positive number satisfying $a \leq B/4$. } Then $h(0,0,0,0) = h(a,-a,0,0) = h(a,0,0,0) = 0$ and $h(0,-a,0,0) = a$. This shows a violation of submodularity of $h$, and hence that of $g$. \end{proof} Lemmas \ref{LMmnatFRconjsubm01-0} and \ref{LMmnatFRconjsubm01-1} with Theorem \ref{THmcavlocexc01} show the following. \begin{lemma} \label{LMmnatFRconjsubm01-2} If ${\rm dom\,} f$ is a nonempty family of equi-cardinal sets and $g$ is submodular, then $f$ is an M-concave function. 
\finbox \end{lemma} \begin{remark} \rm \label{RMmnatconj01LPdual} In general, the perfect matching polytope of a graph $G=(V,E)$ is described by the following system of equalities for $x \in {\mathbb{R}}\sp{E}$: (i) $x_{e} \geq 0$ for each $e \in E$, (ii) $x(\delta(v)) = 1$ for each $v \in V$, (iii) $x(\delta(U)) \geq 1$ for each $U \subseteq V$ with $|U|$ being odd $\geq 3$, where $\delta(v)$ denotes the set of edges incident to a vertex $v$ and $\delta(U)$ the set of edges between $U$ and $V \setminus U$; see Schrijver \cite[Section 25.1]{Sch03}. In the proof of Lemma~\ref{LMmnatFRconjsubm01-1} we have $V = \{ 1,2,3,4 \}$, in which case the inequalities of type (iii) are not needed, since $\delta(U)= \delta(v)$ for $U$ with $|U|=3$ and the vertex $v \in V \setminus U$. Consider the maximum weight perfect matching problem on our $G=(V,E)$. This problem can be formulated in a linear program to maximize $\sum_{(i,j) \in E} \alpha_{ij} x_{ij}$ subject to $\sum_{j} x_{ij} = 1$ for $i=1,2,3,4$ and $x_{ij} \geq 0$ for $(i,j) \in E$. Our assumption $\alpha_{12} + \alpha_{34} > \max\{\alpha_{13} + \alpha_{24}, \alpha_{14} + \alpha_{23}\}$ means that this problem has a unique optimal solution $x$ with $x_{12}= x_{34}=1$ and $x_{ij}= 0$ for $(i,j) \not= (1,2), (3,4)$. The dual problem is to minimize $p_{1} + p_{2} + p_{3} + p_{4}$ subject to $p_{i} + p_{j} \geq \alpha_{ij}$ for $(i,j) \in E$. The strict complementary slackness guarantees the existence of a pair of optimal solutions $(x_{ij} \mid (i,j) \in E)$ and $(p_{i} \mid i=1,2,3,4)$ with the property that either $x_{ij} > 0$ or $p_{i} + p_{j} > \alpha_{ij}$ (exactly one of these) holds for each $(i,j) \in E$. Therefore, there exists $(\hat p_{1}, \hat p_{2}, \hat p_{3}, \hat p_{4})$ such that $\alpha_{12} = \hat p_{1} + \hat p_{2}$, $\alpha_{34} = \hat p_{3} + \hat p_{4}$, and $\alpha_{ij} < \hat p_{i} + \hat p_{j}$ for $(i,j) \not= (1,2), (3,4)$. \finbox \end{remark} Next we turn to the {M$^{\natural}$}-concave case. Consider the function $\tilde{f}: 2^{\tilde N} \to {\mathbb{R}} \cup \{ -\infty \}$ of (\ref{assocMdef}) associated with $f: 2^{N} \to {\mathbb{R}} \cup \{ -\infty \}$, where $\tilde{N} = N \cup S$ and ${\rm dom\,} \tilde{f} \subseteq \{ X \mid |X|=r \}$. We take $S$ with $|S| \geq r-r'+2$. Let $\tilde{g}(p,q)$ denote the conjugate of $\tilde{f}$, where $p \in {\mathbb{R}}\sp{N}$ and $q \in {\mathbb{R}}\sp{S}$. \begin{lemma} \label{LMmnatFRconjsubm01-4} If $g$ is submodular, then $\tilde{g}$ is submodular. \end{lemma} \begin{proof} By definition, \begin{equation} \label{mFRconjsubm01prf3} \tilde{g}(p,q) = \max\{ f(X) - p(X) - q(U) \mid X \subseteq N, \ U \subseteq S, \ |X| + |U| = r \} . \end{equation} It suffices to prove that \begin{equation} \label{mTOcjsbm01prf6} \tilde{g}( \tilde p + a \tilde{\unitvec{i}}) + \tilde{g}( \tilde p + b \tilde{\unitvec{j}}) \geq \tilde{g}(\tilde p) + \tilde{g}( \tilde p + a \tilde{\unitvec{i}}+ b \tilde{\unitvec{j}}) \end{equation} holds for any $\tilde p = (p,q) \in {\mathbb{R}}\sp{N \cup S}$, distinct $i,j \in N \cup S$, and $a, b \geq 0$, where $\tilde{\unitvec{i}}$ and $\tilde{\unitvec{j}}$ are the $i$th and $j$th unit vectors in ${\mathbb{R}}\sp{N \cup S}$, respectively. \RED{ Let $h_{ij}(a,b) = \tilde{g}( \tilde p + a \tilde{\unitvec{i}} + b \tilde{\unitvec{j}})$. Then \eqref{mTOcjsbm01prf6} can be rewritten as \begin{equation} \label{mFRconjsubm01prf7} h_{ij}(a,0) + h_{ij}(0,b) \geq h_{ij}(0,0) + h_{ij}(a,b). 
\end{equation} In the following we assume $\tilde p = (p,q) = {\bf 0}$ for notational simplicity (without essential loss of generality). Then it follows from \eqref{mFRconjsubm01prf3} that \begin{align} & h_{ij}(a,b) = \tilde{g}( \tilde p + a \tilde{\unitvec{i}} + b \tilde{\unitvec{j}}) \nonumber \\ &= \max\{ f(X) - a | (X \cup U) \cap \{ i \} | - b | (X \cup U) \cap \{ j \} | \mid |X| + |U| = r \} . \label{mFRconjsubm01prf8} \end{align} If $i, j \in S$, for example, the function to be maximized reduces to $f(X) - a | U \cap \{ i \} | - b | U \cap \{ j \} |$ while, for each $X \in {\rm dom\,} f$, there exists a subset $U$ of $S$ satisfying $|X| + |U| = r$ and $U \cap \{ i,j \} = \emptyset$ since $|S| \geq r-r'+2$ and $|X| \geq r'$. Therefore, $h_{ij}(a,b) = \max_{X} f(X) = g({\bf 0}) = g(p)$ for $i, j \in S$. In a similar way we obtain \[ h_{ij}(a,b) = \begin{cases} g(p + a \unitvec{i}+ b \unitvec{j}) & (i,j \in N), \\ g(p + a \unitvec{i}) & (i \in N, j \in S), \\ g(p + b \unitvec{j}) & (i \in S, j \in N), \\ g(p) & (i,j \in S) . \end{cases} \] Then \eqref{mFRconjsubm01prf7} follows from the assumed submodularity of $g$. } \OMIT{ For simplicity of notation we assume $\tilde p = {\bf 0}$, and write $\tilde p\sp{i}= a \tilde{\unitvec{i}}$, $\tilde p\sp{j}= b \tilde{\unitvec{j}}$, and $\tilde p\sp{ij}= a \tilde{\unitvec{i}}+ b \tilde{\unitvec{j}}$. Take $X, Y \subseteq N$ and $U, V \subseteq S$ such that $|X| + |U| = |Y| + |V| = r$, \begin{align*} & \tilde{g}(\tilde p) = f(X) - \tilde p (X \cup U) = f(X), \\ & \tilde{g}(\tilde p\sp{ij}) = f(Y) - \tilde p\sp{ij}(Y \cup V) = f(Y) - a | (Y \cup V) \cap \{ i \} | - b | (Y \cup V) \cap \{ j \} |. \end{align*} Note also that $\tilde{g}(\tilde p\sp{i}) = \max \{ f(Z) - a | (Z \cup W) \cap \{ i \} | \mid Z \subseteq N, \ W \subseteq S \} $ and similarly for $\tilde g(\tilde p\sp{j})$. If $\{ i,j \} \subseteq N$, (\ref{mTOcjsbm01prf6}) reduces to $g( p+ a \unitvec{i}) + g( p+ b \unitvec{j}) \geq g( p) + g( p + a \unitvec{i}+ b \unitvec{j})$, which holds since $g$ is assumed to be submodular. The remaining cases are easier (not essential). In case of $\{ i,j \} \subseteq S$, we can assume, by $|S| \geq r-r'+2$, that $V \cap \{ i,j \} = \emptyset$, which implies that $\tilde{g}(\tilde p\sp{ij}) = g(p)$. Similarly, we have $\tilde{g}(\tilde p\sp{i}) = \tilde{g}(\tilde p\sp{j}) = g(p)$ as well as $\tilde{g}(\tilde p) = g(p)$. Therefore, (\ref{mTOcjsbm01prf6}) holds. In case of $| N \cap \{ i,j \} | = | S \cap \{ i,j \} | =1$, we may assume $i \in N$ and $j \in S$ by symmetry. By $|S| \geq r-r'+2$, we have $\tilde{g}(\tilde p\sp{ij}) = \tilde{g}(\tilde p\sp{i}) = g(p\sp{i})$ and $\tilde{g}(\tilde p\sp{j}) = \tilde{g}(\tilde p) = g(p)$, where $p\sp{i}= a \unitvec{i} \in {\mathbb{R}}\sp{N}$ and $p\sp{j}= b \unitvec{j} \in {\mathbb{R}}\sp{N}$. Therefore, (\ref{mTOcjsbm01prf6}) holds. } \end{proof} We are now in the position to complete the proof of Theorem \ref{THmnatcavbyconjfn01}. If the conjugate function $g$ of $f$ is submodular, $\tilde{g}$ is also submodular by Lemma~\ref{LMmnatFRconjsubm01-4}. Then $\tilde{f}$ is M-concave by Lemma~\ref{LMmnatFRconjsubm01-2}, and therefore $f$ is {M$^{\natural}$}-concave by Proposition~\ref{PRmnatequicardvalmat}. \section{Appendix: Proof of Theorem \ref{THmcavlocexc01}} \label{SClocexcproof} A self-contained proof of Theorem \ref{THmcavlocexc01} is presented here. 
This is basically the same as the proof of Theorem 5.2.25 in \cite[pp.295--297]{Mspr2000} adapted to our present notation, with the difference only in the proof of Claim 2. Let $\mathcal{B} = {\rm dom\,} f$. For $p \in {\mathbb{R}}\sp{N}$ we define \[ f_{p}(X) = f(X)+p(X), \quad f_{p}(X,i,j) = f_{p}(X-i+j)-f_{p}(X) \qquad (X \in \mathcal{B}), \] where $f_{p}(X,i,j)= - \infty$ if $X-i+j \not\in \mathcal{B}$. For $X, Y \in \mathcal{B}$, $i \in X\setminus Y$, and $j \in Y\setminus X$, we have \begin{equation} \label{wpeq} f(X,i,j) + f(Y,j,i) = f_{p}(X,i,j) + f_{p}(Y,j,i). \end{equation} If $X \in \mathcal{B}$, $X \setminus Y = \{ i_{0}, i_{1} \}$, $Y \setminus X = \{ j_{0}, j_{1} \}$ (with $i_{0} \not= i_{1}$, $j_{0} \not= j_{1}$), the local exchange property (condition (ii) in Theorem \ref{THmcavlocexc01}) implies \footnote{ If $Y \not\in \mathcal{B}$, the inequality (\ref{vmprupper2}) is trivially true with $ f_{p}(Y) = -\infty$. } \begin{equation} f_{p}(Y) - f_{p}(X) \leq \max \{ f_{p}(X,i_{0},j_{0}) + f_{p}(X,i_{1},j_{1}), \ f_{p}(X,i_{0},j_{1}) + f_{p}(X,i_{1},j_{0}) \} . \label{vmprupper2} \end{equation} Define \begin{eqnarray*} \mathcal{D} &=& \{ (X,Y) \mid X, Y \in \mathcal{B}, \ \exists \, i_{*} \in X \setminus Y, \ \forall \, j \in Y\setminus X: \nonumber \\ & & \qquad\qquad \ f(X)+f(Y) > f(X-i_{*}+j) + f(Y+i_{*}-j) \}, \end{eqnarray*} which denotes the set of pairs $(X,Y)$ for which the exchange property (\ref{valmatexc1}) fails. We want to show $\mathcal{D} = \emptyset$. Suppose, to the contrary, that $\mathcal{D} \not= \emptyset$, and take $(X,Y) \in \mathcal{D}$ such that $|Y\setminus X|$ is minimum and let $i_{*} \in X \setminus Y$ be the element in the definition of $\mathcal{D}$. We have $|Y\setminus X| > 2$. Define $p \in {\mathbb{R}}\sp{N}$ by \[ p_{j} = \left\{ \begin{array}{ll} - f(X,i_{*},j) & (j \in Y\setminus X, \ X-i_{*}+j \in \mathcal{B}) , \\ f(Y,j,i_{*}) + \varepsilon & (j \in Y\setminus X, \ X-i_{*}+j \not\in \mathcal{B}, \ Y+i_{*}-j \in \mathcal{B}) , \\ 0 & (\mbox{otherwise}) \end{array} \right. \] with some $\varepsilon > 0$. \begin{description} \item[Claim 1:] \quad \vspace*{-\baselineskip} \begin{eqnarray} f_{p}(X,i_{*},j) &=& 0 \qquad \mbox{if $j \in Y\setminus X, \ X-i_{*}+j \in \mathcal{B}$}, \label{wpB1} \\ f_{p}(Y,j,i_{*}) & < & 0 \qquad \mbox{for $j \in Y\setminus X$}. \label{wpB2} \end{eqnarray} \end{description} The inequality (\ref{wpB2}) can be shown as follows. If $X-i_{*}+j \in \mathcal{B}$, we have $f_{p}(X,i_{*},j)=0$ by (\ref{wpB1}) and \[ f_{p}(X,i_{*},j) + f_{p}(Y,j,i_{*}) = f(X,i_{*},j) + f(Y,j,i_{*}) < 0 \] by (\ref{wpeq}) and the definition of $i_{*}$. Otherwise we have $f_{p}(Y,j,i_{*})= - \varepsilon$ or $-\infty$ according to whether $Y+i_{*}-j \in \mathcal{B}$ or not. 
Otherwise, again by connectedness, there exist $i_{2} \in X \setminus Z$ and $j_{2} \in Z \setminus X$ such that $W = Z + i_{2} - j_{2} \in \mathcal{B}$. Since $|W \setminus Y| = 2$ with $W = Y + \{ i_{1}, i_{2} \} - \{ j_{1}, j_{2} \}$, we obtain $ Y + i_{2} - j_{1} \in \mathcal{B}$ or $ Y + i_{2} - j_{2} \in \mathcal{B}$ from (\ref{valmatexc1}). Hence we can take $(i_{0},j) =(i_{2},j_{1})$ or $(i_{0},j) =(i_{2},j_{2})$; note that $i_{2}$ is distinct from $i_{*}$. Next we choose the element $j_{0}$. By the choice of $i_{0}$, we have $f_{p}(Y,j,i_{0}) > -\infty$ for some $j \in Y\setminus X$. By letting $j_{0}$ to be an element $j \in Y\setminus X$ that maximizes $f_{p}(Y,j,i_{0})$, we obtain (\ref{wpmax}). Thus Claim 2 is established under the connectedness assumption. \begin{description} \item[Claim 3:] $(X,Z) \in \mathcal{D}$ with $Z=Y+i_{0}-j_{0}$. \end{description} To prove this it suffices to show \[ f_{p}(X,i_{*},j) + f_{p}(Z,j,i_{*}) < 0 \qquad (j \in Z\setminus X). \] We may restrict ourselves to $j$ with $X-i_{*}+j \in \mathcal{B}$, since otherwise the first term $f_{p}(X,i_{*},j)$ is equal to $-\infty$. For such $j$ the first term is equal to zero by (\ref{wpB1}). For the second term it follows from (\ref{vmprupper2}), (\ref{wpB2}), and (\ref{wpmax}) that \begin{eqnarray*} f_{p}(Z,j,i_{*}) & = & f_{p}(Y+\{ i_{0},i_{*} \} - \{ j_{0},j \} ) - f_{p}(Y+ i_{0} - j_{0}) \nonumber \\ & \leq & \max \left[ f_{p}(Y,j_{0},i_{0})+f_{p}(Y,j,i_{*}), f_{p}(Y,j,i_{0})+f_{p}(Y,j_{0},i_{*}) {\rm ri\,}ght] - f_{p}(Y,j_{0},i_{0}) \nonumber \\ & < & \max \left[ f_{p}(Y,j_{0},i_{0}), f_{p}(Y,j,i_{0}) {\rm ri\,}ght] - f_{p}(Y,j_{0},i_{0}) \ = 0. \end{eqnarray*} Since $|Z\setminus X| = |Y\setminus X|-1$, Claim 3 contradicts our choice of $(X,Y) \in \mathcal{D}$. Therefore we conclude $\mathcal{D} = \emptyset$. This completes the proof of Theorem \ref{THmcavlocexc01}. \begin{remark} \rm \label{RMproofmodif} For the ease of reference, we describe here the necessary change in the proof of \cite[Theorem 5.2.25]{Mspr2000} in the notation there. The necessary change is localized to the proof of \begin{quote} Claim 2: There exist $u_{0} \in B\setminus B'$ and $v_{0} \in B'\setminus B$ such that $u_{0} \not= u_{*}$, $B'+u_{0}-v_{0} \in \mathcal{B}$, \begin{equation} \label{wpmax-1} \omega_{p}(B',v_{0},u_{0}) \geq \omega_{p}(B',v,u_{0}) \qquad (v \in B'\setminus B) . \end{equation} \end{quote} We now assume connectedness of $\mathcal{B}$, instead of its exchange property. First, we show the existence of $u_{0} \in B\setminus B'$ and $v \in B'\setminus B$ such that $B' + u_{0} - v \in \mathcal{B}$ and $u_{0} \not= u_{*}$. By connectedness of $\mathcal{B}$ and $|B\setminus B'| > 2$, there exist $u_{1} \in B \setminus B'$ and $v_{1} \in B' \setminus B$ such that $B''=B' + u_{1} - v_{1} \in \mathcal{B}$. If $u_{1} \not= u_{*}$, we are done with $(u_{0},v) = (u_{1},v_{1})$. Otherwise, again by connectedness, there exist $u_{2} \in B \setminus B''$ and $v_{2} \in B'' \setminus B$ such that $B''' = B'' + u_{2} - v_{2} \in \mathcal{B}$. Since $|B''' \setminus B'| = 2$ with $B''' = B' + \{ u_{1}, u_{2} \} - \{ v_{1}, v_{2} \}$, we obtain $ B' + u_{2} - v_{1} \in \mathcal{B}$ or $ B' + u_{2} - v_{2} \in \mathcal{B}$ from (\ref{valmatexc1}). Hence we can take $(u_{0},v) =(u_{2},v_{1})$ or $(u_{0},v) =(u_{2},v_{2})$; note that $u_{2}$ is distinct from $u_{*}$. Next we choose the element $v_{0}$. By the choice of $u_{0}$, we have $\omega_{p}(B',v,u_{0}) > -\infty$ for some $v \in B'\setminus B$. 
By letting $v_{0}$ be an element $v \in B'\setminus B$ that maximizes $\omega_{p}(B',v,u_{0})$, we obtain (\ref{wpmax-1}). Thus Claim 2 is established under the connectedness assumption. \finbox \end{remark} \end{document}
\begin{document} \title{The Lorenz Renormalization Conjecture} \begin{abstract} The renormalization paradigm for low-dimensional dynamical systems is that of hyperbolic horseshoe dynamics. Does this paradigm survive a transition to more physically relevant systems in higher dimensions? This article addresses this question in the context of Lorenz dynamics which originates in homoclinic bifurcations of flows in three dimensions and higher. A conjecture classifying the dynamics of the Lorenz renormalization operator is stated and supported with numerical evidence. \end{abstract} \section{Introduction} Renormalization in low-dimensional dynamical systems is characterized by hyperbolic horseshoe dynamics with contraction within topological families and expansion otherwise. There are an abundance of low-dimensional systems which adhere to this paradigm, such as unimodal maps \cite{AL11}, critical circle maps \cite{Y03} and circle maps with breaks \cite{KT13}; as well as partial results for dissipative H\'enon-like maps \cite{CLM05}, area-preserving maps \cites{EKW84,GJM16} and higher-dimensional analogs of unimodal maps \cite{CEK81}. This research springs from the question: in what way does the renormalization paradigm need to be modified as its scope is expanded to include more physically relevant systems coming from flows and maps in higher dimensions? We expect renormalization phenomena like universality to survive due to the fact that they have been measured in real physical systems \cites{ML79,L81}, as first predicted to be possible by \ocite{CT78}. Surprisingly, it was shown in \ocite{MW16} that even in the one-dimensional setting of Lorenz maps, instability of renormalization is not only associated with changes in topology; the dynamics of the renormalization operator inside topological classes is not necessarily a contraction. This also has a fundamental impact on the question of rigidity as discussed in \sref{sec:rigidity}. The purpose of this article is to state a conjecture which classifies the dynamics of the Lorenz renormalization operator and to support this conjecture with numerical experiments. We hope that it will act as a focus for what should aim to be proven for these systems. More importantly, we wish to provide an indication of what kind of renormalization phenomena to expect as the field transitions towards physically relevant systems. The article is organized into two sections. In this introduction we go over the necessary definitions and make several remarks along the way before stating the Lorenz Renormalization Conjecture in \sref{sec:renorm-conj}. Having accomplished that, we go on to describe the numerical experiments performed to support the conjecture and include the results of these experiments. The source code, together with instructions on how to reproduce the results, are freely available online \cite{W18}. \subsection{Lorenz maps} \begin{definition} \label{def:lorenz} Let $I = [l, r]$ be a closed interval. A \defn{Lorenz map} $f$ on $I$ is a monotone increasing function which is continuous except at a \defn{critical point}, $c \in (l,r)$, where it has a jump discontinuity, and $f(I\setminus\{c\}) \subset I$ (see figure~\ref{fig:monotone-8-2}). 
The branches\footnote{ Even though $f$ is undefined at $c$, its branches continuously extend to $c$ since $f$ is bounded.} $f_0:[l,c] \to I$ and $f_1:[c,r] \to I$ of $f$ are assumed to satisfy: \begin{enumerate*}[label=(\roman*)] \item $f_0(c) = r$ and $f_1(c) = l$, \item $f_k(x) = \phi_k(\abs{c - x}^\alpha)$, for some \defn{critical exponent} $\alpha > 0$, and $\Cset^2$--diffeomorphisms $\phi_k$, $k=0,1$. \end{enumerate*} The \defn{set of Lorenz maps} on $[0,1]$ is denoted $\Lset$. \end{definition} \begin{convention} Unless the interval $I$ in the above definition is mentioned, it is implicitly assumed to be the unit interval $[0,1]$. \end{convention} \begin{remark} It bears pointing out that the critical point $c$ is \emph{not} fixed, but depends on the map $f$. Later on we will see that the critical point moves under renormalization. This is an essential feature of Lorenz maps which has very strong consequences on the dynamics and results in new renormalization phenomena not present in unimodal and circle dynamics \cite{MW16}. \end{remark} \begin{remark} The second condition on the branches ensures that the behavior of $f$ near the critical point is like that of the power map $x^\alpha$ near $0$. This condition and the assumption $\alpha > 1$ leads to a well-defined renormalization theory. \end{remark} \begin{convention} The critical exponent $\alpha \in \reals$ is fixed and $\alpha > 1$. \end{convention} \begin{remark} \label{rem:flow} Lorenz maps were introduced by \ocite{GW79} in order to describe the dynamics of three-dimensional flows geometrically similar to the well-known Lorenz system \cite{L63}. The flows they consider have a saddle with a one-dimensional unstable manifold which exhibits recurrent behavior. Their construction is to take a transversal section to the stable manifold and assume that the associated first-return map has an invariant foliation whose leaves are exponentially contracted. Taking a quotient over the leaves results in a one-dimensional map as described by definition~\ref{def:lorenz}. In the above construction the critical exponent $\alpha$ naturally comes out as the absolute value of the ratio between two eigenvalues of the linearized flow at the singularity. In particular, it is important for Lorenz theory to be able to handle any real critical exponent $\alpha > 0$ (as opposed to unimodal theory where it may be possible to get away with saying something like ``the critical exponent is generically two''). \ocite{GW79} considered $\alpha \in (0,1)$; the first to investigate $\alpha > 1$ were \ocite{ACT81}. \end{remark} \begin{remark} In more generality, Lorenz maps can be thought of as the underlying dynamical model for a large class of higher dimensional flows undergoing a homoclinic bifurcation. Hence there are very strong reasons why Lorenz dynamics needs to be further explored. We can only guess that this theory is still so largely underdeveloped, as compared to unimodal and circle dynamics, because of the fact that the holomorphic tools developed in these other theories are not suitable for adaptation to discontinuities and arbitrary real critical exponents. New ideas and tools are desperately needed! \end{remark} \begin{remark} There is a genuine problem relating to smoothness that needs mentioning. Even if the invariant foliation mentioned in remark~\ref{rem:flow} is smooth, the holonomy map need not be \cites{M97,HPS77}. Hence, the associated Lorenz map need not have $\Cset^2$ branches, regardless of how smooth the initial flow is. 
Without $\Cset^2$--smoothness the renormalization apparatus breaks down \cite{CMdMT09}. In transferring results about maps to flows this problem needs to be addressed. \end{remark} \subsection{Renormalization} \begin{definition} \label{def:rescale} Let $A_I:[0,1] \to I$ denote the increasing affine map taking $[0,1]$ onto~$I$. The \defn{rescaling} to $[0,1]$ of $g: U \to V$ (synonymously, $g$ \defn{rescaled} to $[0,1]$) is the map $G:[0,1]\to[0,1]$ defined by $G = A^{-1}_V \circ g \circ A_U$. In this situation we also conversely say that $g$ is a rescaling of $G$. \end{definition} \begin{definition} \label{def:renorm} A Lorenz map $f$ is \defn{renormalizable} iff there exist $n_0,n_1 \geq 2$ such that $I = [f^{n_1 - 1}(0), f^{n_0 - 1}(1)]$ is contained in $(0,1)$ and contains $c$ in its interior, and such that the first-return map to $I$ is again a Lorenz map (on~$I$); the first-return map rescaled to $[0,1]$ is called a \defn{renormalization} of~$f$ and the symbolic coding of its branches defines the \defn{type} (or \defn{combinatorics}), $\word = (\word_0,\word_1)$, of the renormalization.\footnote{ Explicitly, let $I_k = I \cap [k,c)$ and define $\word_k$ to be the finite word on symbols $\{0,1\}$ such that $f^j(I_k) \subset [\word_k(j),c)$ for $j=0,\dotsc,\abs{\word_k}-1$ and $k=0,1$.} In this case we also say that $f$ is \defn{$\word$--renormalizable} and call the rescaled first-return map a \defn{$\word$--renormalization}. \end{definition} \begin{definition} The type $\word = (\word_0,\word_1)$ is said to be of \defn{monotone combinatorics} if $\word_0 = 011\dotsm1$ and $\word_1 = 100\dotsm0$; more succinctly, it is also called \defn{$(a,b)$--type}, where $a = \abs{\word_0} - 1$ and $b = \abs{\word_1} - 1$. \end{definition} \begin{remark} A Lorenz map may have more than one renormalization, but each will have a distinct type; in particular, if $f$ is both $\word$--renormalizable and $\word'$--renormalizable (with $\word \neq \word'$), then $\word'_0$ and $\word'_1$ are finite words on symbols $\{\word_0,\word_1\}$ with at least one of each symbol, or vice versa. Defining $\abs{\word} = \abs{\word_0} + \abs{\word_1}$ we have that either $\abs{\word} < \abs{\word'}$, or $\abs{\word'} < \abs{\word}$ \cite{MdM01}. \end{remark} \begin{definition} \label{def:Rop} Define the \defn{renormalization operator}, $\Rop$, by sending a renormalizable $f$ to the $\word$--renormalization of $f$ for which $\abs{\word}$ is minimal. Maps for which $\Rop^j f$ is renormalizable for every $j \geq 0$ are called \defn{infinitely renormalizable}; in the special case where $\Rop^j f$ is $\word$--renormalizable and $\word$ does not depend on $j$, $f$ is called \defn{infinitely $\word$--renormalizable} (this is also known by the name \defn{stationary combinatorics}). The orbit $\{f, \Rop f, \Rop^2 f,\dotsc\}$ is called the \defn{successive renormalizations} of~$f$. \end{definition} \begin{conjecture} The closure of the post-critical set, $\cantor_f$, of an infinitely $\word$--renormalizable map $f$ is a minimal Cantor attractor. \end{conjecture} \begin{remark} For Lorenz maps, $\cantor_f$ is the union of the $\omega$--limit sets of the critical values, $f_0(c)$ and $f_1(c)$. This conjecture is a theorem for a large class of monotone combinatorics \cites{MW14,MW16}. \end{remark} \subsection{Rigidity} \label{sec:rigidity} \begin{conjecture} \label{conj:Tset} The set $\Tset_\word$ of infinitely $\word$--renormalizable Lorenz maps coincides with the topological conjugacy class of any $f \in \Tset_\word$. 
Furthermore, $\Tset_\word \subset \Lset$ is a manifold of codimension two. \end{conjecture} \begin{remark} The first statement would follow if it were shown that there are no wandering intervals for $f \in \Tset_\word$. This is known for a large class of monotone combinatorics \cites{MW14,MW16} but the general problem of when Lorenz maps do not support wandering intervals is still wide open. The codimension of $\Tset_\word$ must be two since topologically full families of Lorenz maps are two-dimensional \cite{MdM01}. \end{remark} \begin{definition} The (classical) notion of \defn{rigidity} is when two topologically conjugate maps are automatically smoothly conjugate on their attractors. \end{definition} \begin{remark} Smooth maps look affine on small scales, so in the presence of rigidity two maps have attractors which on a large scale may look very different but when zoomed in on a particular spot they start to look the same. In this sense rigidity is a strong form of \defn{metric universality}; we will not say more about the latter here and instead focus on the former. \end{remark} \begin{remark} Two crucial ingredients in proving classical rigidity is first to prove that successive renormalizations converge and then to control the rate of convergence. Typically, these ingredients come from the fact that there is a hyperbolic renormalization fixed point which attracts both maps. It is worth pointing out that the study of rigidity in dynamics was initiated by \ocite{H79}, answering a conjecture by \ocite{A61}, but the close connection between rigidity and renormalization was only later realized. \end{remark} \begin{definition} The \defn{rigidity class} of $f \in \Tset_\word$ is defined as the set of $g \in \Tset_\word$ such that $f$ and $g$ are smoothly conjugate on their attractors. \end{definition} \begin{remark} With this terminology we may characterize classical rigidity as the statement that a topological class coincides with a rigidity class. From \ocite{MW16} we know that $T_\word$ may, depending on $\word$, consist of more than one rigidity class. Hence, the classical concept of rigidity is too restrictive, see also \ocite{MP17}. Instead, the correct notion should be to describe the arrangement of a topological class into rigidity classes \cite{MPW17}. Even in the classical cases of critical circle maps and unimodal maps there is already a natural foliation into codimension--$1$ rigidity classes determined by a fixed value for the critical exponent. This is however a trivial observation compared to the above mentioned articles which concern far more subtle phenomena. \end{remark} \subsection{Main conjecture} \label{sec:renorm-conj} \begin{definition} The successive renormalizations of $f$ are \defn{attracted to a degenerate flipping $2$--cycle} iff $\Rop^{2k} f$ and $\Rop^{2k+1} f$ converge to smooth maps on~$[0,1]$, and the critical points have limits $c(\Rop^{2k} f) \to 0$ and $c(\Rop^{2k+1} f) \to 1$ (or vice versa). \end{definition} \begin{remark} Here ``degenerate'' refers to the limits not being Lorenz maps and ``flipping'' refers to the fact that the critical points $c(\Rop^k f)$ flip between being close to zero and being close to one. Informally, the limiting cycle can be thought of as two Lorenz maps with critical point $0$ and $1$, respectively. \end{remark} \begin{lrc} Let $\Tset_\word$ be the set of infinitely $\word$--renormalizable Lorenz maps. 
For each $\word$ (such that $\Tset_\word \neq \emptyset$) exactly one of the following statements holds, and conversely, to each statement there are $\word$ for which it is realized: \begin{enumerate}[label=(\Alph*)] \item \label{T-rigid} $\Tset_\word$ is a rigidity class and the stable manifold of a hyperbolic renormalization fixed point. \item \label{T-foliated} $\Tset_\word$ is foliated by codimension--$1$ rigidity classes, one of which is the stable manifold of a hyperbolic renormalization fixed point. The successive renormalizations of any $f \in \Tset_\word$ not in this stable manifold are attracted to a degenerate flipping $2$--cycle. \item \label{T-stratified} There exists a nonempty, open and connected set $\Tset_\word^\star \subsetneq \Tset_\word$ which is a rigidity class as well as the stable manifold of a hyperbolic renormalization fixed point; its complement, $\Tset_\word \setminus \Tset_\word^\star$, consists of two connected components which are foliated by rigidity classes of codimension one. The boundary of $\Tset_\word^\star$ in $\Tset_\word$ is a rigidity class as well as the stable manifold of a hyperbolic renormalization periodic point of (strict) period two. The successive renormalizations of any $f \in \Tset_\word \setminus \Tset_\word^\star$ not in this stable manifold are attracted to a degenerate flipping $2$--cycle. \end{enumerate} \end{lrc} \begin{remark} The Lorenz Renormalization Conjecture can be generalized from stationary to periodic combinatorics in the obvious way. For unbounded combinatorics it is not clear what the right conjecture should be as it is possible to force successive renormalizations to not be relatively compact by choosing larger and larger return times for one branch. This leads to Lorenz maps whose attractor does not have a physical measure \cite{MW18}. \end{remark} \begin{remark} A very surprising feature of Lorenz maps is that the dimension of the unstable manifold of a renormalization fixed point depends on the combinatorics; in cases \ref{T-rigid} and~\ref{T-stratified} the dimension is two and in case~\ref{T-foliated} it is three. Two of the unstable directions are always related to moving the two critical values;\footnote{ Just as the one unstable direction for unimodal renormalization is related to moving the one critical value.} a third unstable direction is gained when the movement of the critical point under renormalization becomes unstable (see figure~\ref{fig:eigenvalues}). In the confounding case~\ref{T-stratified} there is a mix of both: the fixed point has two unstable directions, whereas the period--$2$ point has three unstable directions. This situation occurs e.g.\ for monotone $(8,2)$--type (see figure~\ref{fig:monotone-8-2}). \end{remark} \begin{remark} Evidence for case~\ref{T-rigid} is supported by \ocite{MW14}. More recent is \ocite{MW16} where the unstable behavior of the renormalization operator within topological classes was discovered; it supports case~\ref{T-foliated}. Case~\ref{T-stratified} is so far only supported by this article. Numerically no other cases seem to occur, see \sref{sec:results} for examples of each case. 
\end{remark} \begin{remark} Fixed points, $f$, of monotone $(a,a)$--type are symmetric\footnote{ That is, the critical point is $c(f)=0.5$ and $1 - f(x) = f(1-x)$.} and they are in one-to-one correspondence with unimodal renormalization fixed points; it is an exercise to verify that the unimodal map $g(x) = f(\min\{x,1-x\})$, with $g(0.5) = 1$, is a fixed point of the unimodal renormalization operator. In particular, the monotone $(1,1)$--type Lorenz renormalization fixed point corresponds to the well-known fixed point of the unimodal period-doubling operator. It seems reasonable to expect all of these ``unimodal fixed points'' to be dynamically similar, but curiously they are not; conjecturally, for $a > \max\{2\alpha - 1, 2\}$ they belong to case~\ref{T-foliated}, else they belong to case~\ref{T-rigid}. For example, when $\alpha = 2$ this ``bifurcation'' occurs for $a=4$; see \sref{sec:results}. \end{remark} \begin{remark} Compare the Lorenz Renormalization Conjecture with the classical systems of unimodal maps, critical circle maps, etc. In these systems only case~\ref{T-rigid} can occur and the limit set of renormalization, $\mathcal A$, is a \defn{horseshoe}; that is, $\mathcal A$ is hyperbolic and the restriction $\Rop|\mathcal A$ is conjugate to a full shift on infinitely many symbols. Furthermore, orbits of the renormalization operator (where defined) are exponentially contracted to $\mathcal A$ \cite{AL11}. As a counterpoint, the limit set of Lorenz renormalization cannot be a horseshoe due to case~\ref{T-stratified}; instead, it seems to strictly contain a horseshoe which, because of case~\ref{T-foliated}, does not attract all orbits of renormalization. \end{remark} \begin{remark} Consider how the Lorenz Renormalization Conjecture influences \defn{parameter universality} phenomena. Classically, a topologically full family (of dimension one) transversally intersects a stable manifold (of codimension one) of a hyperbolic renormalization fixed point; this causes iterated images of the family under renormalization to accumulate on an unstable manifold and the bifurcation patterns of the family asymptotically look like those of the unstable manifold. Here, the iterated images of a topologically full family (which has dimension two) under renormalization need not accumulate on an unstable manifold; it depends on which rigidity class the family hits (a stable manifold may have codimension three inside~$\Lset$). However, a three-dimensional family will generically hit all rigidity classes and hence asymptotically contain all possible bifurcation patterns. Universality persists but in a more intricate fashion and there is now a distinction between topologically full families (of dimension two) and geometrically full families (of dimension three). \end{remark} \begin{figure} \caption{The fixed point $f_\star$ and the period--$2$ orbit $\{f_\flat,f_\sharp\}$ for monotone $(8,2)$--type.} \label{fig:monotone-8-2} \end{figure} \section{Numerics} In this section numerical experiments which support the Lorenz Renormalization Conjecture are described. The purpose of these experiments is to locate approximate renormalization fixed points and to estimate the relative sizes of the eigenvalues of the derivative of~$\Rop$ at these fixed points. Approximate periodic points of $\Rop$ can also be located with this method by considering the combinatorics of twice renormalizable maps. The purpose is \emph{not} to provide accurate estimates. 
This method will not rule out the existence of other periodic points of renormalization; it is only meant to give evidence in favor of the existence of the three cases of the Lorenz Renormalization Conjecture. From our observations there seem to be no other cases. \subsection{Representation of Lorenz maps} \begin{definition} \label{def:Lrep} Let $\diff$ denote the set of orientation-preserving diffeomorphisms on~$[0,1]$ and define the family \begin{equation*} F: (0,1) \times [0,1) \times (0,1] \times \diff \times \diff \to \Lset \end{equation*} as follows: given $(c,v,\phi)$, where $v = (v_0, v_1)$ and $\phi = (\phi_0,\phi_1)$, define $F(c,v,\phi)$ to be the Lorenz map $f: [0,1]\setminus\{c\} \to [0,1]$ whose branches $f_0:[0,c]\to[v_0,1]$ and $f_1:[c,1]\to[0,v_1]$ are the rescalings of $\phi_0(1 - (1-x)^\alpha)$ and $\phi_1(x^\alpha)$, respectively (see definition~\ref{def:rescale}). The parameters $v = (v_0,v_1)$ are called \defn{boundary values}. \end{definition} \begin{remark} \label{rem:injective} It is clear that $F$ is injective; furthermore, its image is renormalization invariant by lemma~\ref{lem:R}. \end{remark} \begin{definition} \label{def:Ltrunc} Let $\Dtrunc \subset \diff$ be a finite-dimensional subset of diffeomorphisms together with a projection $\Dproj: \diff \to \Dtrunc$. Let \begin{equation*} \Ltrunc = (0,1) \times [0,1) \times (0,1] \times \Dtrunc \times \Dtrunc \end{equation*} denote the \defn{set of truncated Lorenz maps}. \end{definition} \begin{remark} \label{rem:diff-approx} For simplicity of implementation, we choose $\Dtrunc$ to be a set of piecewise linear homeomorphisms. Of course, this is not a subset of diffeomorphisms but for the purpose of the numerics it empirically does not matter. To address the issue of smoothness, cubic interpolation could be used instead of linear interpolation, but then care has to be taken that the interpolation is monotone. Another idea is to linearly interpolate functions on $[0,1]$ and take the inverse of the nonlinearity operator; this would ensure monotonicity as well as $\Cset^2$--smoothness. A third idea is to use finite pure internal structures, which ensures monotonicity and $\Cset^\infty$--smoothness \cite{MW14}. We choose not to pursue these paths here, as the implementation would become more involved and it would not give qualitatively different results. \end{remark} \subsection{Truncated renormalization} \begin{lemma} \label{lem:R} Let $f = F(c,v,\phi)$ as in definition~\ref{def:Lrep}. If $f$ is $\word$--renormalizable, then $\Rop f = F(c',v',\phi')$ for some $(c',v',\phi')$. Explicitly, let $n_k = \abs{\word_k}$, $p_0 = f^{n_1 - 1}(0)$, $p_1 = f^{n_0 - 1}(1)$, $\tilde\phi_0(x) = v_0 + (1 - v_0)\phi_0(x)$, and $\tilde\phi_1(x) = v_1\phi_1(x)$; then \begin{equation} \label{renorm-params} c' = \frac{c - p_0}{p_1 - p_0}, \quad v_0' = \frac{f^{n_0}(p_0) - p_0}{p_1 - p_0}, \quad v_1' = \frac{f^{n_1}(p_1) - p_0}{p_1 - p_0}, \end{equation} and $\phi'_0$, $\phi'_1$ are the respective rescalings of \begin{align*} f^{n_0 - 1} \circ \tilde\phi_0: [\tilde\phi_0^{-1}\circ f(p_0),1] \to [f^{n_0}(p_0), p_1], \\ f^{n_1 - 1} \circ \tilde\phi_1: [0, \tilde\phi_1^{-1}\circ f(p_1)] \to [p_0, f^{n_1}(p_1)]. \end{align*} \end{lemma} \begin{proof} Denote the first-return map associated with the renormalization by $g: I\setminus\{c\} \to I$, where $I = [p_0,p_1]$.
Then $c'$ is the relative position of $c$ in~$I$, $v_0'$ is the relative length of $g([p_0,c))$ in~$I$, and $v_1'$ is the relative length of $g((c,p_1])$ in~$I$; written out this is \eqref{renorm-params}. The statement for $\phi'_0$, $\phi'_1$ is just saying that they are the branches of $g$ without the initial folding $x^\alpha$ that comes from $f|_I$. Since $g$ is a first-return map to $I$, the $f$--images of $I$ do not meet the critical point before they return; this means that the $\phi'_k$ are diffeomorphisms. \end{proof} \begin{definition} \label{def:Rtrunc} Let $F$ and $(\Dtrunc,\Dproj)$ be as in definitions \ref{def:Lrep} and \ref{def:Ltrunc}, respectively, and let $P(c,v,\phi) = (c,v,\Dproj(\phi_0),\Dproj(\phi_1))$. For every renormalizable $F(c,v,\phi)$, define the \defn{truncated renormalization operator}, $\Rtrunc$, by \begin{equation*} \Rtrunc(c,v,\phi) = P \circ F^{-1} \circ \Rop \circ F(c,v,\phi). \end{equation*} This is well-defined by remark~\ref{rem:injective}. \end{definition} \begin{remark} For a class of monotone combinatorics with $\abs{\word}$ large, the renormalization operator is close to having finite-dimensional image, in the sense that the diffeomorphisms $\phi'_k$ in lemma~\ref{lem:R} are close to being linear \cite{MW16}. In other words, $\Rtrunc$ can automatically be a good approximation of $\Rop$, depending on the combinatorics. \end{remark} \begin{remark} \label{rem:renorm3d} Taking the above remark to its extreme, it even makes sense to consider the trivial set $\Dtrunc = \{ \id \}$ of diffeomorphisms, and to look at the corresponding truncated renormalization operator; it is explicitly defined by \eqref{renorm-params} with $\phi = (\id,\id)$. This is the operator we used to estimate the eigenvalues in figure~\ref{fig:eigenvalues}. Empirically, it exhibits all the dynamics of the Lorenz Renormalization Conjecture and seems to be a remarkably good approximation of the full renormalization operator as far as qualitative behavior is concerned. This should not come as a great surprise, as one method of proving existence of fixed points for $\Rop$ involves homotoping to this three-dimensional truncation and proving it has a fixed point \cites{MW14,MW16}. \end{remark} \begin{definition} \label{def:Rmod} For every renormalizable $F(c,v,\phi)$, define the \defn{modified renormalization operator}, $\Rmod: (c,v,\phi) \mapsto (c',v')$, in the same way as the truncated renormalization operator, except changing \eqref{renorm-params} to \begin{equation*} \begin{aligned} c' &= p_0 - c + (p_1 - p_0) c, \\ v_0' &= p_0 - f^{n_0}(p_0) + (p_1 - p_0) v_0, \\ v_1' &= p_0 - f^{n_1}(p_1) + (p_1 - p_0) v_1. \end{aligned} \end{equation*} Note that the image of $\Rmod$ is contained in $\reals^3$. \end{definition} \begin{remark} The idea of the above operator is to improve the numerical behavior of $\Rtrunc$ by not dividing by the length of the return interval in~\eqref{renorm-params}. From the same equation it can be seen that the set of zeros of~$\Rmod$ coincides with the set of $(c,v,\phi)$ for which $(c,v)$ are fixed by $\Rtrunc$. We found that the Newton method on~$\Rmod$ has better convergence properties than the Newton method on $\Rtrunc - \id$. Given $\word$, we use $\Rmod$ to determine what the right value for~$c$ should be for a truncated renormalization fixed point (see the fixed point algorithm in the next section).
\end{remark} \subsection{Locating fixed points} \label{sec:locating} The perhaps simplest idea for locating fixed points of the truncated renormalization operator is to use a Newton iteration. This is feasible for short combinatorics, but for longer combinatorics it is practically impossible to find starting guesses for which it converges. The method we employ can be thought of as acting on the two-dimensional families $v \mapsto F(c,v,\phi)$ (see definition~\ref{def:Lrep}). It consists of three separate algorithms: one which determines a $v$ such that $F(c,v,\phi)$ is renormalizable, followed either by an algorithm which takes $F(c,v,\phi)$ and produces a new $c$, or one which takes $F(c,v,\phi)$ and produces a new $\phi$. Combined, these methods empirically behave like a contraction toward a family which contains a renormalization fixed point and for which the first algorithm is a contraction toward this fixed point. \begin{definition}[Renormalization fixed point algorithm] Input: the combinatorics $\word$. \begin{enumerate}[label=(\arabic*)] \item Pick an initial guess for $c$ and $\phi$. \item \label{fixedpt-algo-thurston} Apply the modified Thurston algorithm to $v \mapsto F(c, v, \phi)$ to get new boundary values $v'$ (see \sref{sec:thurston} and remark~\ref{rem:mod-thurston}). \item \label{fixedpt-algo-newton} Take a Newton step with the operator $\Rmod$ on $F(c, v', \phi)$ to get a new critical point $c'$. \item Apply the modified Thurston algorithm to $v \mapsto F(c', v, \phi)$ to get new boundary values $v''$. \item \label{fixedpt-algo-renorm} Apply $\Rtrunc$ to $F(c', v'', \phi)$ to get new diffeomorphisms $\phi'$. \item \label{fixedpt-algo-last} Stop if $(c,v,\phi) = (c',v'',\phi')$, else set $c = c'$, $\phi = \phi'$ and go back to step~\ref{fixedpt-algo-thurston}. \end{enumerate} Output: the Lorenz map $F(c,v,\phi)$ (supposedly a renormalization fixed point). \end{definition} \begin{remark} The above algorithm empirically seems to converge for the initial guesses $\phi = (\id,\id)$ and a large set of $c$. Theoretically, there is no guarantee for the output to be a renormalization fixed point, but practically we observe that it is (as long as the algorithm converges). \end{remark} \subsection{The Thurston algorithm} \label{sec:thurston} The Thurston algorithm is a fixed point method that realizes any periodic combinatorics in a full family of maps. It originates in \ocite{DH93} and is also known as the Spider Algorithm in the complex setting \cite{HS94}. In real dynamics it is usually employed to prove the full family theorem \cites{MdM01,dMvS93}. We use it to locate renormalizable maps within the two-dimensional families $v \mapsto F(c,v,\phi)$ (see definition~\ref{def:Lrep}). \begin{definition}[The Thurston Algorithm] Input: a critical point $c$, diffeomorphisms $\phi = (\phi_0,\phi_1)$, and combinatorics $\word = (\word_0,\word_1)$. \begin{enumerate}[label=(\arabic*)] \item Pick an initial guess of \defn{shadow orbits}\footnote{ The name comes from the fact that in the end $x_k$ will be actual orbits of the critical values $0$ and~$1$ under some map $f$ in the family; i.e.\ $x_k(j) = f^j(k)$.} \begin{equation*} \{x_k(0) = k,x_k(1),\dots,x_k(m - 1) = c\},\quad m = \abs{\word_0} + \abs{\word_1},\; k=0,1. \end{equation*} Let $\altword_k$ be the concatenation of $\word_k$ followed by $\word_{1-k}$, for $k=0,1$. \item \label{thurston-setv} Set $v = (x_0(1),x_1(1))$, and let $f = F(c,v,\phi)$ with branches $f_0$ and~$f_1$. 
\item \label{thurston-pullback} Pull back $x_k$ with $f$ according to the combinatorics $\altword_k$: \begin{equation*} y_k(j - 1) = f^{-1}_{\altword_k(j)}(x_k(j)),\quad j = 1, \dotsc, m - 1,\; k = 0,1. \end{equation*} \item \label{thurston-setlast} Set $y_k(m - 1) = c$, $k=0,1$. \item Stop if $y_k = x_k$, else set $x_k$ to $y_k$, $k=0,1$, and go back to~\ref{thurston-setv}. \end{enumerate} Output: the map $f$ which is a realization of the combinatorics $\word$ in the family $v \mapsto F(c,v,\phi)$. \end{definition} \begin{remark} As long as the initial guess is chosen consistently (i.e.\ if the shadow orbits are ordered according to $\word$) this algorithm is guaranteed to stop; in this case, the realization $f$ is renormalizable and the boundary values of $\Rtrunc f$ equal the critical point of $\Rtrunc f$. In practice the algorithm converges if the initial guess consists of uniformly spaced points $x_0(0) < \dots < x_0(m - 1)$ and $x_1(0) > \dots > x_1(m - 1)$ even though these are not ordered according to the combinatorics $\word$. \end{remark} \begin{remark} \label{rem:mod-thurston} We modify the above algorithm so that the realization $f$ fixes its boundary values under renormalization; i.e.\ $\Rtrunc f(k) = f(k)$, for $k=0,1$. This is convenient as we are interested in renormalization fixed points. The modification is to replace step~\ref{thurston-setlast} with: \begin{enumerate}[label=($4'$)] \item Let $p_0 = x_0(\abs{\word_1} - 1)$ and $p_1 = x_1(\abs{\word_0} - 1)$ and set \begin{equation*} y_k(m - 1) = p_0 + (p_1 - p_0) v_k,\quad k=0,1. \end{equation*} \end{enumerate} Note that $[p_0, p_1]$ is the return interval of $f$ if $y_k = x_k$, so what this step does is to set the relative boundary values of the first-return map. Replacing $v_k$ with parameters $t_k$ varying in $[0,1]$ it is possible to find the whole domain of $\word$--renormalizability in the family. \end{remark} \begin{remark} There is a relationship between the modified Thurston algorithm from the previous remark and the renormalization operator---if the modified Thurston algorithm is applied to a family which contains a renormalization fixed point then the output of the algorithm will be the renormalization fixed point. So the renormalization fixed point is also the fixed point of a contractive ``Thurston operator.'' \end{remark} \subsection{Implementation} The source code for an implementation of the fixed point algorithm of \sref{sec:locating} is freely available online \cite{W18}. It compiles to three executables which were used to produce the results of \sref{sec:results}; see the accompanying README for instructions on how to reproduce the results. The Eigen library \cite{GJ10} is used for linear equation solvers and eigenvalue estimation; we also use its bindings to the multiple precision library MPFR \cites{FHLPZ07,H08} as well as its automatic differentiation routines. Standard double precision arithmetic is only sufficient for short combinatorics, which is why the implementation needs multiple precision. Automatic differentiation is used to evaluate the derivative of $\Rtrunc$. Note that this is not the same thing as numerical differentiation (taking finite differences); instead it uses the chain-rule to exactly (up to numerical precision) evaluate derivatives. \subsection{Results} \label{sec:results} The experiments in this section were performed using a truncation of $\Rtrunc$ in dimension three up to dimension $1000$. 
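To make the lowest-dimensional case concrete, the following is a minimal Python sketch of one step of the three-dimensional truncation, i.e.\ \eqref{renorm-params} evaluated with $\phi = (\id,\id)$ as in remark~\ref{rem:renorm3d}, together with a crude finite-difference estimate of the eigenvalues of its derivative. It only illustrates the formulas above and is not the implementation \cite{W18}, which uses automatic differentiation and multiple precision; the map is assumed to be $\word$--renormalizable, and all function names are ours.
\begin{verbatim}
import numpy as np

def lorenz_map(c, v0, v1, alpha=2.0):
    # The Lorenz map F(c, v, (id, id)) of definition def:Lrep: branch
    # f0 : [0,c] -> [v0,1] is the rescaling of 1 - (1-x)^alpha and
    # branch f1 : [c,1] -> [0,v1] is the rescaling of x^alpha.
    def f(x):
        if x < c:
            return v0 + (1.0 - v0) * (1.0 - (1.0 - x / c) ** alpha)
        return v1 * ((x - c) / (1.0 - c)) ** alpha
    return f

def renorm3d(p, n0, n1, alpha=2.0):
    # One step of the three-dimensional truncated renormalization,
    # i.e. eq. (renorm-params) with phi = (id, id).  Here p = (c, v0, v1)
    # and n0 = |w_0|, n1 = |w_1| are the word lengths of the type; the
    # map is assumed renormalizable, so iterates never hit the critical point.
    c, v0, v1 = p
    f = lorenz_map(c, v0, v1, alpha)
    def iterate(x, n):
        for _ in range(n):
            x = f(x)
        return x
    p0 = iterate(0.0, n1 - 1)              # p0 = f^(n1-1)(0)
    p1 = iterate(1.0, n0 - 1)              # p1 = f^(n0-1)(1)
    length = p1 - p0                       # length of the return interval
    return np.array([(c - p0) / length,
                     (iterate(p0, n0) - p0) / length,    # v0'
                     (iterate(p1, n1) - p0) / length])   # v1'

def jacobian_eigenvalues(p, n0, n1, alpha=2.0, h=1e-7):
    # Eigenvalues of a central finite-difference Jacobian of renorm3d at p.
    # (Finite differences are only a rough stand-in for the automatic
    # differentiation used by the actual implementation.)
    p = np.asarray(p, dtype=float)
    J = np.empty((3, 3))
    for j in range(3):
        dp = np.zeros(3)
        dp[j] = h
        J[:, j] = (renorm3d(p + dp, n0, n1, alpha)
                   - renorm3d(p - dp, n0, n1, alpha)) / (2.0 * h)
    return np.linalg.eigvals(J)
\end{verbatim}
Here $n_0$ and $n_1$ are supplied by hand; for instance, for $(2,1)$--type they are $3$ and $2$ (cf.\ the footnote below). At an approximate fixed point, the eigenvalue of smallest magnitude returned by the last routine plays the role of $\lambda_c$ in the classification below.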
Higher dimensions were needed only when evaluating the renormalization of period--$2$ points, such as in figure~\ref{fig:monotone-8-2}, otherwise the three-dimensional truncation gave qualitatively accurate results. Results are only stated for monotone combinatorics; some non-monotone combinatorics were tested as well but it is harder to present these in a clear manner so they are not included. The programs also work with arbitrary $\alpha$ but experiments investigating the $\alpha$--dependence have been left out to keep this section focused. The following table shows which of case \ref{T-rigid}, \ref{T-foliated} or \ref{T-stratified} of the Lorenz Renormalization Conjecture the first few monotone $(a,b)$--types fall under for $\alpha=2$: \begin{center} \footnotesize \begin{tabular}{cccccccccc|c} $\mathbf1$ & $\mathbf2$ & $\mathbf3$ & $\mathbf4$ & $\mathbf5$ & $\mathbf6$ & $\mathbf7$ & $\mathbf8$ & $\mathbf9$ & $\cdots$ & $(a,b)$ \\ \hline A & A & A & A & A & A & A & A & A & $\cdots$ & $\mathbf1$ \\ & A & A & A & A & A & C & C & C & $\cdots$ & $\mathbf2$ \\ & & A & B & B & B & B & B & B & $\cdots$ & $\mathbf3$ \\ & & & B & B & B & B & B & B & $\cdots$ & $\mathbf4$ \\ & & & & B & B & B & B & B & $\cdots$ & $\mathbf5$ \\ & & & & & B & B & B & B & $\cdots$ & $\mathbf6$ \\ & & & & & & B & B & B & $\cdots$ & $\mathbf7$ \\ & & & & & & & B & B & $\cdots$ & $\mathbf8$ \\ & & & & & & & & B & $\cdots$ & $\mathbf9$ \end{tabular} \end{center} For example, the above table shows that $(a,a)$--type has a two-dimensional unstable manifold for $a=1,2,3$, and a three-dimensional unstable manifold for $a\geq4$; $(a,2)$--types with $a\geq7$ has both a fixed point and a period--$2$ point. Note that the complete table is symmetric about the diagonal. \begin{remark} It is known that $a$ and $b$ sufficiently large implies case~\ref{T-foliated} \cite{MW16}. It is not clear exactly when case~\ref{T-stratified} occurs; from the above table only $(a,1)$--type and $(a,2)$--type seem viable, but a test with increasing $a$ did not reveal any $(a,1)$--types of case~\ref{T-stratified}. Note that we are only discussing stationary combinatorics and $\alpha=2$ here. \end{remark} In creating the above table we performed roughly the following steps: \begin{enumerate}[label=(\arabic*)] \item Locate a fixed point for the three-dimensional truncated renormalization operator (see remark~\ref{rem:renorm3d}), using $c=0.5$ as an initial guess for the critical point; if it doesn't converge, try other values for $c$ until it does. The derivative of the three-dimensional truncation of $\Rtrunc$ at the fixed point has three eigenvalues. Denote the eigenvalue with the smallest magnitude by~$\lambda_c$; this is the eigenvalue associated with moving the critical point (the other two eigenvalues are associated with changing the boundary values). If $\lambda_c \in (0,1)$ then we must be in case~\ref{T-rigid}; if $\lambda_c \in (-1,0]$ we go to the next step; if $\abs{\lambda_c} > 1$ we must be in case~\ref{T-foliated}. The behavior of $\lambda_c$ is illustrated in figure~\ref{fig:eigenvalues}. 
\item Try to locate a period--$2$ orbit of $\Rtrunc$ by looking for a fixed point of twice $(a,b)$--renormalizable type.\footnote{ For example, once $(2,1)$--renormalizable type is given by $(011,10)$ and twice $(2,1)$--renormalizable is given by $(0111010,10011)$.} We observe in this situation that one of three things happen: \begin{enumerate}[label=(\roman*)] \item the algorithm diverges by $c\uparrow 1$ (most common case), \item the algorithm converges to the fixed point found in the previous step (only seems to happen if $c$ is picked close to the $c$ of the fixed point), \item the algorithm converges and $c$ is different from that of the fixed point. \end{enumerate} In the first two situations we are in case~\ref{T-rigid} and in the last situation we are in case~\ref{T-stratified}. In the first two situations this step is repeated with different guesses for $c$ to make sure the last situation was not missed due to a bad initial guess. The graphs of the fixed point and period--$2$ orbit for $(8,2)$--type can be found in figure~\ref{fig:monotone-8-2}. \item Increase the dimension of the truncation of $\Rtrunc$ to see if it affects the above classification; in all cases we tried the eigenvalues changed slightly in value but not enough to affect the classification. \end{enumerate} \begin{figure} \caption{Dependence of the eigenvalue associated with movement of the critical point on monotone type $(a,b)$ for $\alpha=2$; estimated using the three-dimensional truncation of $\Rtrunc$.} \label{fig:eigenvalues} \end{figure} \section*{Notation} \begin{center} \begin{tabular}{ p{2cm} p{7cm} p{1cm} } $f$, $f_0$, $f_1$ & Lorenz map $f$ with branches $f_0$, $f_1$ & \pageref{def:lorenz} \\ $c$, $c(f)$ & the critical point of $f$ & \pageref{def:lorenz} \\ $\alpha$ & critical exponent & \pageref{def:lorenz} \\ $\Lset$ & set of Lorenz maps & \pageref{def:lorenz} \\ $\word = (\word_0, \word_1)$ & type of renormalization & \pageref{def:renorm} \\ $\Rop$ & renormalization operator & \pageref{def:Rop} \\ $\Tset_\word$ & topological class & \pageref{conj:Tset} \\ $v = (v_0,v_1)$ & boundary values, $v_k = f(k)$ & \pageref{def:Lrep} \\ $\phi = (\phi_0,\phi_1)$ & diffeomorphisms & \pageref{def:Lrep} \\ $F$ & family of Lorenz maps $F(c,v,\phi)$ & \pageref{def:Lrep} \\ $\Dtrunc$, $\Dproj$ & finite-dimensional diffeomorphism, projection & \pageref{def:Ltrunc} \\ $\Ltrunc$ & set of truncated Lorenz maps & \pageref{def:Ltrunc} \\ $\Rtrunc$, $\Rmod$ & truncated renormalization operators & \pageref{def:Rtrunc}, \pageref{def:Rmod} \end{tabular} \end{center} \begin{bibdiv} \begin{biblist} \bib{ACT81}{article}{ author={Arneodo, A.}, author={Coullet, P.}, author={Tresser, C.}, title={A possible new mechanism for the onset of turbulence}, journal={Phys. Lett. A}, volume={81}, date={1981}, number={4}, pages={197--201}, } \bib{A61}{article}{ author={Arnol{\cprime}d, V. I.}, title={Small denominators. I. Mapping the circle onto itself}, journal={Izv. Akad. Nauk SSSR Ser. Mat.}, volume={25}, date={1961}, pages={21--86}, } \bib{AL11}{article}{ author={Avila, A.}, author={Lyubich, M.}, title={The full renormalization horseshoe for unimodal maps of higher degree: exponential contraction along hybrid classes}, journal={Publ. Math. Inst. Hautes \'Etudes Sci.}, number={114}, date={2011}, pages={171--223}, } \bib{CLM05}{article}{ author={De Carvalho, A.}, author={Lyubich, M.}, author={Martens, M.}, title={Renormalization in the H\'enon family. I. Universality but non-rigidity}, journal={J. Stat. 
Phys.}, volume={121}, date={2005}, number={5-6}, pages={611--669}, } \bib{CMdMT09}{article}{ author={Chandramouli, V. V. M. S.}, author={Martens, M.}, author={de Melo, W.}, author={Tresser, C. P.}, title={Chaotic period doubling}, journal={Ergod. Theory Dyn. Syst.}, volume={29}, date={2009}, number={2}, pages={381--418}, } \bib{CEK81}{article}{ author={Collet, P.}, author={Eckmann, J.-P.}, author={Koch, H.}, title={Period doubling bifurcations for families of maps on ${\bf R}^{n}$}, journal={J. Statist. Phys.}, volume={25}, date={1981}, number={1}, pages={1--14}, } \bib{CT78}{article}{ author={Coullet, P.}, author={Tresser, C.}, title={It\'erations d'endomorphismes et groupe de renormalisation}, journal={C. R. Acad. Sci. Paris S\'er. A-B}, volume={287}, date={1978}, number={7}, pages={A577--A580}, } \bib{DH93}{article}{ AUTHOR = {Douady, A.}, AUTHOR = {Hubbard, J. H.}, TITLE = {A proof of Thurston's topological characterization of rational functions}, JOURNAL = {Acta Math.}, VOLUME = {171}, YEAR = {1993}, PAGES = {263--297}, } \bib{EKW84}{article}{ author={Eckmann, J.-P.}, author={Koch, H.}, author={Wittwer, P.}, title={A computer-assisted proof of universality for area-preserving maps}, journal={Mem. Amer. Math. Soc.}, volume={47}, date={1984}, number={289}, pages={vi+122}, } \bib{FHLPZ07}{article}{ author = {Fousse, L.}, author = {Hanrot, G.}, author = {Lef\`{e}vre, V.}, author = {P{\'e}lissier, P.}, author = {Zimmermann, P.}, title = {MPFR: A Multiple-precision Binary Floating-point Library with Correct Rounding}, journal = {ACM Trans. Math. Softw.}, volume = {33}, number = {2}, year = {2007}, } \bib{GJM16}{article}{ author={Gaidashev, D.}, author={Johnson, T.}, author={Martens, M.}, title={Rigidity for infinitely renormalizable area-preserving maps}, journal={Duke Math. J.}, volume={165}, date={2016}, number={1}, pages={129--159}, } \bib{GW79}{article}{ author={Guckenheimer, J.}, author={Williams, R. F.}, title={Structural stability of Lorenz attractors}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, number={50}, date={1979}, pages={59--72}, } \bib{GJ10}{misc}{ author = {Guennebaud, G.}, author = {Jacob, B.}, title = {Eigen v3}, year = {2010}, note = {available at \url{http://eigen.tuxfamily.org}}, } \bib{H79}{article}{ author={Herman, M.}, title={Sur la conjugaison diff\'erentiable des diff\'eomorphismes du cercle \`a des rotations}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, volume={49}, date={1979}, pages={5--233}, } \bib{HPS77}{book}{ author={Hirsch, M. W.}, author={Pugh, C. C.}, author={Shub, M.}, title={Invariant manifolds}, series={Lecture Notes in Mathematics, Vol. 583}, publisher={Springer-Verlag}, place={Berlin-New York}, date={1977}, pages={ii+149}, } \bib{H08}{misc}{ author = {Holoborodko, P.}, title = {MPFR C++}, year = {2008}, note = {available at \url{http://www.holoborodko.com/pavel/mpfr/}}, } \bib{HS94}{book}{ author={Hubbard, J. H.}, author={Schleicher, D.}, title={The spider algorithm}, series={Proc. Sympos. Appl. Math.}, volume={49}, publisher={Amer. Math. Soc.}, place={Providence, RI}, date={1994}, pages={155--180}, } \bib{KT13}{article}{ author={Khanin, K.}, author={Teplinsky, A.}, title={Renormalization horseshoe and rigidity for circle diffeomorphisms with breaks}, journal={Comm. Math. Phys.}, volume={320}, date={2013}, number={2}, pages={347--377}, } \bib{L81}{article}{ author={Linsay, P. S.}, title={Period doubling and chaotic behavior in a driven anharmonic oscillator}, journal={Phys. Rev. 
Lett.}, year={1981}, volume={47}, number={19}, pages={1349--1352}, } \bib{L63}{article}{ author={Lorenz, E. N.}, title={Deterministic nonperiodic flow}, journal={J. Atmospheric Sci.}, year={1963}, volume={20}, pages={130--141}, } \bib{MdM01}{article}{ author={Martens, M.}, author={de Melo, W.}, title={Universal models for Lorenz maps}, journal={Ergod. Theory Dyn. Syst.}, year={2001}, volume={21}, number={3}, pages={833--860}, } \bib{MP17}{article}{ author={Martens, M.}, author={Palmisano, L.}, title={Rigidity foliations}, eprint={arXiv:1704.06328}, date={2017}, } \bib{MPW17}{article}{ author = {Martens, M.}, author = {Palmisano, L.}, author = {Winckler, B.}, title = {The rigidity conjecture}, journal = {Indagationes Mathematicae}, year = {2017}, doi = {10.1016/j.indag.2017.08.001} } \bib{MW14}{article}{ author={Martens, M}, author={Winckler, B}, title={On the hyperbolicity of Lorenz renormalization}, journal={Comm. Math. Phys.}, volume={325}, date={2014}, number={1}, pages={185--257}, } \bib{MW16}{article}{ author={Martens, M.}, author={Winckler, B.}, title={Instability of Renormalization}, eprint={arXiv:1609.04473}, date={2017}, } \bib{MW18}{article}{ author={Martens, M.}, author={Winckler, B.}, title={Physical measures for infinitely renormalizable Lorenz maps}, journal={Ergod. Theory Dyn. Syst.}, volume={38}, date={2018}, number={2}, pages={717--738}, } \bib{ML79}{article}{ author={Maurer, J.}, author={Libchaber, A.}, title={Rayleigh--B\'enard experiment in liquid helium; frequency locking and the onset of turbulence}, journal={J. de Phys. Lett.}, year={1979}, volume={40}, number={16}, pages={419--423}, } \bib{dMvS93}{book}{ author={de Melo, W.}, author={van Strien, S.}, title={One-dimensional dynamics}, volume={25}, publisher={Springer-Verlag}, place={Berlin}, date={1993}, } \bib{M97}{article}{ author = {Milnor, J.}, title = {Fubini Foiled: Katok's Paradoxical Example in Measure Theory}, journal = {Math. Intelligencer}, year = {1997}, volume = {19}, number = {2}, pages = {30--32}, } \bib{W18}{misc}{ author = {Winckler, B.}, title = {The Lorenz Renormalization Conjecture -- supplementary material}, year = {2018}, note = {available at \url{https://github.com/b4winckler/src-lorenz-renorm-conj}}, } \bib{Y03}{article}{ author={Yampolsky, M.}, title={Renormalization horseshoe for critical circle maps}, journal={Comm. Math. Phys.}, volume={240}, date={2003}, number={1-2}, pages={75--96}, } \end{biblist} \end{bibdiv} \end{document}
math
51,139
\begin{document} \thispagestyle{empty} \title{Squarability of rectangle arrangements} \begin{abstract} We study when an arrangement of axis-aligned rectangles can be transformed into an arrangement of axis-aligned squares in $\mathbb{R}^2$ while preserving its structure. We found a counterexample to the conjecture of J. Klawitter, M. N\"{o}llenburg and T. Ueckerdt whether all arrangements without crossing and side-piercing can be squared. Our counterexample also works in a more general case when we only need to preserve the intersection graph and we forbid side-piercing between squares. We also show counterexamples for transforming box arrangements into combinatorially equivalent hypercube arrangements. Finally, we introduce a linear program deciding whether an arrangement of rectangles can be squared in a more restrictive version where the order of all sides is preserved. \end{abstract} \section{Introduction} In this paper, we are concerned with the following problem. Given an arrangement of axis-aligned rectangles in $\mathbb{R}^2$, is it possible to find an arrangement of axis-aligned squares with corresponding properties? J. Klawitter, M. N\"{o}llenburg and T. Ueckerdt \cite{main} asked which geometric rectangle arrangements can be transformed into combinatorially equivalent square arrangements. While showing some necessary and sufficient conditions for that, the question whether there exists an unsquarable rectangle arrangement without crossings and side-piercings (see Figure \ref{intersection_types}) remained open. We show a counterexample for that -- an arrangement of rectangles which is not combinatorially equivalent to any square arrangement. Moreover, our counterexample works even in a more general case when we only need to preserve the intersection graph of arrangements and we forbid side-piercing between squares. In Section \ref{higherDimensions} we generalize the problem to higher dimensions -- considering hypercubes instead of squares and boxes instead of rectangles. We show that allowing crossings or side-piercings in any dimension leads to arrangements of boxes for which no corresponding arrangement of hypercubes exists. Besides constructing counterexamples we also present an algorithm for deciding whether a given arrangement is squarable when the order of all sides has to be preserved (which implies combinatorial equivalence). \subsection{Preliminaries} Let $\mathcal{R}$ denote a given set of axis-aligned rectangles in $\mathbb{R}^2$ and $\s$ be a mapping from $\mathcal{R}$ to axis-aligned squares in $\mathbb{R}^2$ satisfying certain restrictions. If such $\s$ exists, we say that $\mathcal{R}$ is \emph{squarable} and $\s$ is a \emph{squaring} of $\mathcal{R}$. Thus $\s(\mathcal{R})$ is a set of squares obtained from $\mathcal{R}$ in a way specific to the particular variant and $\s(R)$ is the square representing the rectangle $R\in\mathcal{R}$. In each variant we explain the restrictions placed on the input set of rectangles $\mathcal{R}$ and on the output set of squares $\s(\mathcal{R})$. \begin{figure} \caption{Intersection types. Respectively: corner intersection, side-piercing, cross intersection and containment.} \label{intersection_types} \end{figure} There are four intersection types: corner intersection, side-piercing, cross intersection and containment (see Figure \ref{intersection_types}). Note that we do not include empty intersection (formed by disjoint rectangles) as an intersection type. 
Also, we only consider sets of rectangles where no two rectangle sides are collinear. In all the discussed variants, we assume that the input set $\mathcal{R}$ contains no two rectangles with side-piercing or cross intersection. Allowing these intersection types easily leads to instances of arrangements of rectangles that cannot be squared -- any two rectangles with the cross intersection clearly cannot be squared as well as the arrangement of four rectangles in Figure~\ref{piercing} for side-piercing. \begin{figure} \caption{An arrangement that cannot be squared due to side-piercing intersections.} \label{piercing} \end{figure} Without loss of generality, we assume all the rectangles have positive coordinates. If it is not the case we just translate the whole arrangement. For a rectangle $R$ we denote: \begin{itemize} \item $t(R)$ to be the $y$-coordinate of the top side of $R$, \item $b(R)$ to be the $y$-coordinate of the bottom side of $R$, \item $r(R)$ to be the $x$-coordinate of the right side of $R$, \item $l(R)$ to be the $x$-coordinate of the left side of $R$, \item $h(R)$ to be the height of $R$: $h(R)=t(R)-b(R)$, \item $w(R)$ to be the width of $R$: $w(R)=r(R)-l(R)$. \end{itemize} \subsection{Variants of the squarability problem} Let $\mathcal{R}$ be an arrangement of rectangles and $\s$ be a squaring of $\mathcal{R}$. We say that $\mathcal{R}$ and $\s(\mathcal{R})$ are \emph{combinatorially equivalent} if for any $R_1,R_2\in\mathcal{R}$, the intersection type of $\s(R_1)$ and $\s(R_2)$ is the same as the intersection type of $R_1$ and $R_2$ and these intersections happen exactly on the same sides (and corners). For example, if $R_1$ and $R_2$ have corner intersection that is in the upper left corner on $R_1$ and the lower right corner of $R_2$, the same must hold for $\s(R_1)$ and $\s(R_2)$. Note that the above definition of combinatorial equivalence is strictly weaker than the one given in \cite{main}. This definition is, however, convenient to us as the basic requirement. Since our counterexample works in this less restrictive case, it is also a counterexample when the referenced definition is used. The following are variants of the squarability problem. They vary in the strength of the assumptions we put on the mapping $\s$. \begin{description} \item[Preserve order of all sides.] The output $\s(\mathcal{R})$ has to be combinatorially equivalent to $\mathcal{R}$ and, moreover, the respective order of sides on both axes has to be preserved. On a chosen axis, we can construct the sequence of sides of rectangles $\mathcal{R}$ from left to right as they appear, i.e., every rectangle will appear exactly twice. Then the same sequence of sides has to be realized in $\s(\mathcal{R})$. \item[Combinatorial equivalence.] The output $\s(\mathcal{R})$ has to be combinatorially equivalent. \item[Keep intersections, forbid side-piercing.] First, we require that the intersection graphs of $\mathcal{R}$ and $\s(\mathcal{R})$ are isomorphic, i.e., it holds that $R_1\cap R_2\neq\emptyset$ if and only if $\s(R_1)\cap\s(R_2)\neq\emptyset$ for all $R_1,R_2\in\mathcal{R}$. Additionally, the squares in the output set $\s(\mathcal{R})$ must only have corner intersections or containment. \item[Keep intersection graph.] We only require that the intersection graphs of $\mathcal{R}$ and $\s(\mathcal{R})$ are isomorphic. \end{description} Note that if $\s$ satisfies ``Preserve order of all sides'', then it satisfies ``Combinatorial equivalence''. 
In the same sense, ``Combinatorial equivalence'' implies ``Keep intersections, forbid side-piercing'' (by the assumption that $\mathcal{R}$ contains no side-piercing), which implies ``Keep intersection graph''. \section{Counterexamples} In this section we will discuss examples of arrangements of rectangles that cannot be squared in terms of the mapping $\s$. In each subsection we consider squarability with respect to one of the variants. We will start with the most restrictive case and proceed to more general variants. \subsection{Preserving order of all sides} If we want the resulting arrangement of squares to preserve the order of all sides, there is an easy example of four rectangles that cannot be squared.\\ \begin{figure} \caption{An arrangement not squarable in the most restrictive case.} \label{fig:firstcounter} \end{figure} \begin{theorem}\label{4rect} The arrangement of rectangles in Figure \ref{fig:firstcounter} cannot be squared while preserving order of all sides. \end{theorem} \begin{proof} After squaring the arrangement we would get $w(A)>w(B)=h(B)>h(C)=w(C)>w(D)=h(D)>h(A)=w(A)$; thus, the arrangement is unsquarable. \end{proof} This is an easy observation but it is important, because this arrangement is exactly the one we will find in later cases to prove unsquarability of other arrangements. \subsection{Combinatorial equivalence} In the second most restrictive definition of the mapping $\s$ we want the resulting arrangement of squares not only to have the same types of intersections but also to have them in the same positions. This means that if there is a rectangle $A$ and a rectangle $B$ intersecting $A$ in the top right corner, then $\s(B)$ will intersect $\s(A)$ again in the top right corner. \begin{figure} \caption{An arrangement not squarable when $\s$ keeps the combinatorial equivalence.} \label{fig:secondcounter} \end{figure} \begin{theorem} The arrangement of rectangles in the left picture of Figure \ref{fig:secondcounter} cannot be squared. \end{theorem} \begin{proof} To prove that, we want to show that the four bold rectangles form the pattern from Theorem \ref{4rect}. To do that, we need to prove that the cyclic condition on the lengths of their sides holds. It suffices to show the dependency only for one pair of neighbouring rectangles since the arrangement is symmetric. In the right picture of Figure \ref{fig:secondcounter}, the situation for $A$ and $B$ is shown, with only the relevant rectangles drawn. Suppose the rectangles are orientated as in the picture (orientation is fixed for the whole arrangement). To prove $w(A)>w(B)$ in all possible mappings $\s$, it is sufficient to show $l(A)<l(B)$ and $r(A)>r(B)$. We observe that when two rectangles $C$ and $D$ intersect a~common rectangle $E$ on its top (or bottom) side, $C$ being the one intersecting it in the left corner, and $C,D$ do not intersect each other, then it must hold that $r(C)<l(D)$. When two rectangles $F,G$ intersect each other then $l(F)<r(G)$. These two observations used on the red sides of the rectangles in Figure \ref{fig:secondcounter} together give us $r(A)>r(B)$. To prove $l(A)<l(B)$ we use the observations for the blue sides. \end{proof} \subsection{Keep intersections, forbid side-piercing} So far we have been mainly building tools and considering easy examples. For $\s$ which only keeps intersections without allowing side-piercing in $\s(\mathcal{R})$ we still need one more tool.
\begin{figure} \caption{$\Sigma$-gadget and its usage.} \label{fig:structure} \end{figure} We refer to the arrangement depicted in the left picture of Figure \ref{fig:structure} as a \emph{$\Sigma$-gadget}. It is an arrangement of rectangles that can be squared even in the most restrictive case and we use it to force some useful properties. \begin{lemma} All squarings of the $\Sigma$-gadget that keep intersections but forbid side-piercings are combinatorially equivalent, up to rotation and reflection. \end{lemma} \begin{proof} First look at the rectangles $K, L, M, N$ in the middle. There is only one way to square them up to rotation and reflection. Then we want to square rectangles $A,B,C$ and $D$. Notice that $A$ can be contained neither in $K$ nor in $L$ because it intersects $P$. This, and the fact that side-piercing is forbidden, gives us three possibilities for how to place $A$ relative to $K$ and $L$. It can be in the position shown in Figure \ref{fig:structure}, in a position where it contains the intersection of the squares $K$ and $L$, or in the corner opposite to the first case. In Figure~\ref{fig:3options} we see all the important cases. \begin{figure} \caption{Three possible ways of placing rectangle $A$.} \label{fig:3options} \end{figure} The first case is the one we want. In the second case, the position of $A$ forces $P$ (and $Q$) to intersect the bottom left corner of $A$ because $P$ ($Q$) needs to intersect $D$ ($B$) without intersecting $K$ ($L$). This means $P$ and $Q$ both intersect the bottom left corner of $A$ and so they intersect each other, a contradiction. In the last case, $A$ would intersect $M$ and $N$, a contradiction. Therefore, there is only one way to square $A$ and by symmetry the same is true for $B, C$ and $D$. Now the rectangles $P, Q, R$ and $S$ can also be squared in only one way, completing the proof. \end{proof} First we explain how we use the $\Sigma$-gadget in an arrangement. If we want another rectangle (or another $\Sigma$-gadget) to intersect our $\Sigma$-gadget in a corner, it must intersect both the surrounding rectangle and one of $A,B,C$ or $D$, depending on which corner of the $\Sigma$-gadget it intersects. Besides these two it does not intersect anything else. Now that we know that the $\Sigma$-gadget can be squared in exactly one way and how to use it in an arrangement, let us explore some of its useful properties. As is illustrated in the right picture of Figure \ref{fig:structure}, the most useful property comes into play when the $\Sigma$-gadget is intersected by rectangles in opposite corners; let us call them $E$ and $F$. Usually this only gives us one of the following conditions: \begin{itemize} \item $r(E)<l(F)$ (blue colored sides in the picture). \item $t(E)<b(F)$ (red colored sides in the picture). \end{itemize} The $\Sigma$-gadget provides both conditions at the same time, which is very useful when forcing a situation like the one in Theorem \ref{4rect}. At the same time, if the $\Sigma$-gadget is intersected in two corners, we can always say whether the corners are \emph{opposite} or \emph{adjacent}. For the purposes of arrangements in which the $\Sigma$-gadget is used, when we talk about the height, width, left side and so on, we always mean the height, width, left side, ... of the outer rectangle.
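Before turning to the final counterexample, note that the intersection taxonomy from the preliminaries is easy to test programmatically, which is convenient when experimenting with candidate squarings by hand. The following is a minimal Python sketch (rectangles are given as $(l,r,b,t)$ tuples in general position; the names are ours):
\begin{verbatim}
def axis_relation(a_lo, a_hi, b_lo, b_hi):
    # Relation of interval A = [a_lo, a_hi] to B = [b_lo, b_hi] on one axis,
    # assuming general position (no shared endpoints).
    if a_hi < b_lo or b_hi < a_lo:
        return "disjoint"
    if a_lo < b_lo and b_hi < a_hi:
        return "contains"
    if b_lo < a_lo and a_hi < b_hi:
        return "contained"
    return "partial"   # the intervals overlap but neither contains the other

def intersection_type(R1, R2):
    # Classify the intersection of two axis-aligned rectangles, each given
    # as (l, r, b, t), into the types of the preliminaries.
    rx = axis_relation(R1[0], R1[1], R2[0], R2[1])
    ry = axis_relation(R1[2], R1[3], R2[2], R2[3])
    if "disjoint" in (rx, ry):
        return "empty"
    if rx == "partial" and ry == "partial":
        return "corner"
    if rx == "partial" or ry == "partial":
        return "side-piercing"
    return "containment" if rx == ry else "cross"
\end{verbatim}
A candidate squaring then satisfies ``Keep intersections, forbid side-piercing'' precisely when a pair of squares is non-disjoint exactly when the corresponding rectangles intersect, and every intersecting pair of squares is classified as corner or containment.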
\begin{figure} \caption{An arrangement using the $\Sigma$-gadget, not squarable even in the least restrictive case without side-piercing.} \label{fig:finalcounter} \end{figure} Having such a strong tool as the $\Sigma$-gadget, it is now easy to create an arrangement of rectangles that cannot be squared. \begin{theorem} The arrangement from Figure \ref{fig:finalcounter} with the $\Sigma$-gadget instead of each rectangle cannot be squared. \end{theorem} \begin{proof} We show that rectangles $A$, $B$, $C$ and $D$ form the same arrangement as we saw in Theorem \ref{4rect}. Rectangles 1 and 2 lie on the same side of $B$. Rectangles 3 and 4 lie in the opposite corners of 1 and 2 respectively with respect to $B$. Because rectangles 1 and 2 are $\Sigma$-gadgets, this implies $l(B)>r(3)$ and $r(3)>l(A)$ since rectangles $A$ and 3 intersect each other. We showed $l(B)>l(A)$, and similarly using rectangles 2 and 4 we can show $r(B)<r(A)$. Together this gives us $w(B)<w(A)$. Rotating the argument around the arrangement we show that if the arrangement gets squared it holds that $w(B)<w(A)=h(A)<h(D)=w(D)<w(C)=h(C)<h(B)=w(B)$, which cannot be true. \end{proof} One could think this cannot be all: after all, in previous cases we needed to show that there is only one way to draw the arrangement of squares, and we always ended up with a square which we could not add. Note that we did just that by showing the $\Sigma$-gadget can be squared in only one way. \section{Higher dimensions} \label{higherDimensions} In this section we will make some observations about arrangements of boxes in higher dimensions. We use the same notation as before, that is, $\mathcal{R}$ denotes a set of axis-aligned boxes in $\mathbb{R}^d$ and $S$ its mapping to a set of axis-aligned hypercubes in $\mathbb{R}^d$. We will often work with projections of $\mathbb{R}^d$ to a subset of coordinates. For a set $I \subseteq \{1, \ldots, d\}$ let $\mu_I: \mathbb{R}^d \to \mathbb{R}^{|I|}$ be a projection that ``forgets'' all coordinates not indexed by $I$. Furthermore, for a singleton-indexed projection we shorten its notation $\mu_{\{c\}} = \mu_c$. The result of a projection $\mu_I$ applied to $\mathcal{R}$ is an arrangement of axis-aligned boxes or hypercubes in $\mathbb{R}^{|I|}$. \begin{figure} \caption{An arrangement of three boxes in $\mathbb{R}^3$.} \label{fig:fig3d} \end{figure} The notion of combinatorial equivalence extends naturally to higher dimensions. We can observe that with each extra dimension we get new intersection types. For example, consider the following arrangement of only three boxes in $\mathbb{R}^3$ from Figure \ref{fig:fig3d}. Each pair of these boxes intersects in such a way that one pierces the edge of the other. We claim that there cannot be a combinatorially equivalent arrangement of hypercubes. Assume that we have such an arrangement of hypercubes $A'$, $B'$ and $C'$. Then the projection $\mu_1$ forces $A'$ to be bigger than $B'$; similarly, $\mu_2$ does so for $B'$ and $C'$. Finally, $\mu_3$ forces $C'$ to be bigger than $A'$, which is a contradiction. \subsection{Boxicity and cubicity} At the beginning of the paper we restricted ourselves to arrangements without side-piercings and cross intersections. It is fairly easy to see how they lead to counterexamples in the more restricted settings. However, it is not so clear whether this restriction is needed in the least restrictive setting, i.e. preserving just the intersection graph. We will construct arrangements with these intersections which cannot be represented by an intersection graph of axis-aligned hypercubes up to a given dimension.
Let $G$ be a simple undirected graph. The \emph{boxicity} of $G$ is the smallest dimension $d$ such that $G$ can be represented as an intersection graph of axis-aligned boxes in $\mathbb{R}^d$. Similar notions are the \emph{cubicity} of $G$, where we consider a representation as an intersection graph of axis-aligned hypercubes, and the \emph{unit cubicity} of $G$, where all the hypercubes have to be unit. The notion of boxicity and unit cubicity (usually referred simply as cubicity) was introduced in 1969 by Roberts \cite{boxicity} and has since been actively studied, e.g. in~\cite{chandran2009upper}. The results we prove in this section were shown previously for unit cubicity in \cite{boxicity}. But our definition of cubicity is more general. Furthermore, let $R(k, d)$ denote the smallest integer such that every coloring of the complete graph on $R(k,d)$ vertices with $d$ colors contains a monochromatic clique of size $k$. This is indeed one of the Ramsey numbers and it is well known that such a value exists. As before, we want to construct such a graph that if there was an intersection-pattern equivalent arrangement of hypercubes it would force a cyclical inequality of hypercube sizes. However we do not have any tool yet for showing such inequalities in the most general setting. \begin{lemma} \label{lemma:Ramsey} Let $G$ be a graph and $v$ be a vertex which has at least $R(k+2,d)$ neighbours that are pairwise non-adjacent. Suppose $G$ can be represented as an intersection graph of an arrangement $\mathcal{R}$ of axis-aligned hypercubes in $\mathbb{R}^d$ and $f: V(G) \to \mathcal{R}$ is the corresponding mapping. Then there is a neighbour $w$ of $v$ such that the hypercube $f(w)$ is more than $k$ times smaller than the hypercube $f(v)$. \end{lemma} \begin{proof} Unsurprisingly, we will prove our claim using a coloring of the complete graph on $R(k+2, d)$ vertices. Each vertex gets labelled by one of the $R(k+2, d)$ neighbours of $v$. Observe that if two axis-aligned hypercubes $R_1$ and $R_2$ in $\mathbb{R}^d$ are disjoint then there is an integer $c$ such that $\mu_c(R_1)$ and $\mu_c(R_2)$ are disjoint. We will color an edge with any $c$ such that the corresponding hypercubes are disjoint under $\mu_c$. The number of vertices guarantees us a monochromatic clique of size $k+2$. That means there are $k+2$ neighbours of $f(v)$ that are pairwise disjoint under $\mu_c$ for some $c$. We have $k+2$ pairwise disjoint intervals and all of them need to intersect the interval $\mu_c(f(v))$. From this follows that the smallest interval $\mu_c(f(w))$ is more than $k$ times smaller than the interval $\mu_c(f(v))$. And since we are dealing with axis-aligned hypercubes, the hypercube $f(w)$ is more than $k$ times smaller than the hypercube $f(v)$. \end{proof} \begin{theorem} For every $d$ there is a graph $G$ with boxicity $2$ and cubicity larger than $d$. \end{theorem} \begin{proof} Consider a complete bipartite graph $G$ with each partition of size $R(3,d)$. The boxicity of $G$ is 2 since one partition of $G$ can be represented as a set of vertical rectangles and the other as a set of horizontal rectangles (see Figure~\ref{fig:boxicity2}). Now suppose for a contradiction that the cubicity of $G$ is at most $d$ and fix any intersection representation with hypercubes in $\mathbb{R}^{d'}$, $d' \le d$. Let $v$ be the vertex of $G$ such that the corresponding hypercube is the smallest one. 
Since $v$ has exactly $R(3, d)$ pairwise disjoint neighbours, by Lemma \ref{lemma:Ramsey} there must be a neighbour of $v$ such that its corresponding hypercube is strictly smaller, which is a contradiction. \end{proof} \begin{figure} \caption{An arrangement of rectangles whose intersection graph is a complete bipartite graph with each partition of size $R(3,2) = 6$.} \label{fig:boxicity2} \end{figure} \section{Deciding squarability via LP} \subsection{The problem} In this section we present a linear program deciding whether a given arrangement of $n$ rectangles $\mathcal{R}=\{R_1,\ldots,R_n\}$ in $\mathbb{R}^2$ can be squared while preserving order of all sides. Without loss of generality we can assume that all the endpoints of the intervals $[l(R_i),r(R_i)]$ and $[b(R_i),t(R_i)]$ have distinct values for all $i\in\{1,\ldots,n\}$. Otherwise we could perturb the endpoints slightly without changing the intersections between rectangles. By ordering the endpoints of the intervals of projected rectangles into an increasing sequence, we obtain the sequence $a'_1< a'_2< \cdots< a'_{2n}$, where $a'_j=l(R_i)$ or $r(R_i)$ for some $i\in\{1,\ldots,n\}$ (see Figure \ref{Fig1}). Replacing $l(R_i)$ and $r(R_i)$ by $i$ then yields the sequence $a_1,a_2,\ldots,a_{2n}$ of numbers from $\{1,\ldots,n\}$; we call this sequence the $x$-sequence of $\mathcal{R}$. Clearly, each $i\in\{1,\ldots,n\}$ appears there exactly twice; thus for every $i\in\{1,\ldots,n\}$ we can define $a(i)=(j_1,j_2)$ such that $j_1<j_2$ and $a_{j_1}=a_{j_2}=i$. The $x$-sequence describes the respective ordering of the rectangles' $x$-coordinates. A $y$-sequence and the corresponding function $b$ are defined analogously. \begin{figure} \caption{The $x$-sequence is $1,2,3,1,2,3$ and the $y$-sequence is $2,3,1,2,1,3$.} \label{Fig1} \end{figure} The decision problem can be reformulated in the following way: Given a family of rectangles $\mathcal{R}=\{R_1,\ldots,R_n\}$, does there exist a family of squares $\mathcal{S}=\{S_1,\ldots,S_n\}$ such that the $x$-sequence of $\mathcal{S}$ is identical to that of $\mathcal{R}$ and the $y$-sequence of $\mathcal{S}$ is identical to that of $\mathcal{R}$? \subsection{Linear program} Let us present a linear program solving the problem for an input set $\mathcal{R}=\{R_1,\ldots,R_n\}$. Let $a_1,a_2,\ldots,a_{2n}$ be the $x$-sequence and $b_1,b_2,\ldots,b_{2n}$ the $y$-sequence of $\mathcal{R}$. We have variables \[x_1,\ldots,x_{2n-1},y_1,\ldots,y_{2n-1}\geq 1,\] where the value of $x_j$ represents the distance of the corresponding interval endpoints of rectangles $R_{a_{j}}$ and $R_{a_{j+1}}$ and the value of $y_j$ represents the distance of the corresponding endpoints of $R_{b_j}$ and $R_{b_{j+1}}$ (see Figure \ref{Fig2}). Let $(\mathbf{x},\mathbf{y})=(x_1,\ldots,x_{2n-1},y_1,\ldots,y_{2n-1})$ be any feasible solution to the following set of equalities. For every $i=1,\ldots,n$ we have an equality \[\sum_{k=j_1}^{j_2-1}x_k=\sum_{k=j_1'}^{j_2'-1}y_k,\:\text{where }a(i)=(j_1,j_2),\,b(i)=(j_1',j_2').\] \begin{figure} \caption{The meaning of the variables $x_1,\ldots,x_{2n-1}$ and $y_1,\ldots,y_{2n-1}$.} \label{Fig2} \end{figure} From the solution $(\mathbf{x},\mathbf{y})$ we construct the corresponding set of squares $\mathcal{S}=\{S_1,\ldots,S_n\}$ as follows.
Let $a(i)=(j_1,j_2)$ and $b(i)=(j_1',j_2')$; we set the coordinates of $S_i$ such that \[l(S_i)=\sum_{k=1}^{j_1-1}x_k,\quad r(S_i)=\sum_{k=1}^{j_2-1}x_k,\] \[b(S_i)=\sum_{k=1}^{j_1'-1}y_k,\quad t(S_i)=\sum_{k=1}^{j_2'-1}y_k.\] As $x_i,y_i\geq 1$ for all $i\in\{1,\ldots,2n-1\}$, it is clear that the $x$-sequence and $y$-sequence of $\mathcal{R}$ are preserved in $\mathcal{S}$. The claim that $\mathcal{S}$ consists of squares follows immediately from the constraints of the linear program. Thus we obtain that if the linear program finds a feasible solution, we can construct an appropriate set of squares. Conversely, let $\mathcal{S}$ be a set of squares that has the same $x$-sequence and $y$-sequence as $\mathcal{R}$. We can construct the variables $x_1,\ldots,x_{2n-1}$ and $y_1,\ldots,y_{2n-1}$ as the corresponding distances. It remains to sufficiently ``blow up'' this solution so that all of the variables are at least $1$. This is easily accomplished by multiplying the variables by the inverse of their minimum. We obtain a feasible solution to the linear program, as desired. \section{Acknowledgments} The authors would like to thank Pavel Valtr, Jan Kratochv\'{i}l and Stephen Kobourov for supervising the seminar where this paper was created. Our gratitude also goes to the anonymous referees for their helpful comments. \small \end{document}
math
25,446
\begin{document} \title{EPR-Bohr and Quantum Trajectories: \\ Entanglement and Nonlocality} \author{Edward R. Floyd \\ Jamaica Village Road, Coronado, CA 92118-3208, USA \\ [email protected]} \date{\today} \maketitle \begin{abstract} {Quantum trajectories are used to investigate the EPR-Bohr debate in a modern sense by examining entanglement and nonlocality. We synthesize a single ``entanglement molecule" from the two scattered particles of the EPR experiment. We explicitly investigate the behavior of the entanglement molecule rather than the behaviors of the two scattered particles to gain insight into the EPR-Bohr debate. We develop the entanglement molecule's wave function in polar form and its reduced action, both of which manifest entanglement. We next apply Jacobi's theorem to the reduced action to generate the equation of quantum motion for the entanglement molecule to produce its quantum trajectory. The resultant quantum trajectory manifests entanglement and has retrograde segments interspersed between segments of forward motion. This alternating of forward and retrograde segments generates nonlocality and, within the entanglement molecule, action at a distance. Dissection of the equation of quantum motion for the entanglement molecule, while rendering the classical behavior of the two scattered particles, also reveals an emergent ``entanglon" that maintains the entanglement between the scattered particles. The characteristics of the entanglon and its relationship to nonlocality are examined.} \end{abstract} \footnotesize \noindent PACS Nos. 3.65Ta, 3.65Ca, 3.65Ud \noindent Keywords: EPR, entanglement, nonlocality, determinism, quantum trajectories, action at a distance \normalsize \section{Introduction} ``Can quantum mechanical description of physical reality be considered complete?" was the title that Einstein, Podolsky and Rosen (EPR) [\ref{bib:epr}] and Bohr [\ref{bib:bohr}] used in their 1935 debate regarding reality and completeness of quantum mechanics. The issues circa 1935 were ``physical reality" and ``completeness of the Schr\"odinger wave function, $\psi$." Subsequently, Bell [\ref{bib:bell}] and the Aspect experiments [\ref{bib:aspect}] have shown quantum mechanics to be nonlocal. The modern issues of the EPR-Bohr debate are entanglement and nonlocality [\ref{bib:fine},\ref{bib:hw}]. Many in the physics community remain skeptical about the theoretical foundation for nonlocality in quantum mechanics not withstanding the findings of experiments more accurate than the original Aspect experiments with regard to detection and locality loopholes [\ref{bib:hw}--\ref{bib:zeilinger}]. Herein, we investigate EPR phenomena with quantum trajectories with a goal of answering the locality loophole issue. Quantum trajectories are shown to render insight on nonlocality in quantum mechanics. In the course of this investigation, analysis of the quantum trajectories revealed an additional quantum entity, introduced as an ``entanglon", that can superluminally maintain entanglement between the two EPR particles. The quantum trajectory representation of quantum mechanics is a nonlocal, phenomenological theory that is deterministic. Herein, ``deterministic" means in the spirit of EPR that if without disturbing a system one can predict with certainty the value of a physical quantity (the quantum trajectory), then there exists an element of physical reality that corresponds to such a physical quantity (the quantum trajectory) [\ref{bib:epr}]. 
Quantum trajectories with their nonlocal character are adduced as a natural representation for investigating EPR phenomena and for rendering insight into how entanglement induces nonlocality. The quantum Hamilton-Jacobi equation underlies the quantum trajectory representation of quantum mechanics [\ref{bib:prd34},\ref{bib:vigsym3}]. The underlying Hamilton-Jacobi formulation couches the quantum trajectory representation of quantum mechanics in a configuration-space, time domain rather than in the Hilbert space of wave mechanics. Faraggi and Matone, using a quantum equivalence principle that connects all physical systems by a coordinate transformation, have independently derived the quantum stationary Hamilton-Jacobi equation (QSHJE) without using any axiomatic interpretations of $\psi$ [\ref{bib:fm},\ref{bib:fm2}]. With Bertoldi, they have extended their work to higher dimensions and to relativistic quantum mechanics [\ref{bib:bfm}]. The quantum trajectory representation of quantum mechanics contains more information than the Schr\"odinger wave function, $\psi$ [\ref{bib:prd34}--\ref{bib:fm2},\ref{bib:rc}--\ref{bib:fpl9}]. All of the foregoing has posited the quantum trajectory representation as the superior method for examining fundamental issues of quantum mechanics. The quantum trajectory representation has been used to investigate the foundations of quantum mechanics free of axiomatic interpretations of Copenhagen philosophy [\ref{bib:prd34}--\ref{bib:fp37a}]. With regard to the circa 1935 issue of completeness of $\psi$, the quantum trajectory representation has already shown the existence of microstates in $\psi$, which provides a counterexample showing that $\psi$ is not an exhaustive description of quantum phenomena [\ref{bib:prd34},\ref{bib:fm2},\ref{bib:rc}--\ref{bib:fpl9}]. This investigation studies the example considered by both EPR and Bohr in their 1935 papers, where two identical particles without spin are entangled and scattered from each other in opposite directions by some interaction [\ref{bib:epr},\ref{bib:bohr}]. We investigate this example in a quantum Hamilton-Jacobi representation and develop the quantum trajectory. Rather than examining the individual quantum trajectories of the two entangled particles, we synthesize an ``EPR-molecule'' from the two entangled particles and subsequently examine the EPR-molecule's quantum trajectory to gain insight into how entanglement induces nonlocality. Synthesizing an EPR-molecule renders a reduced action in a Euclidean space rather than in the configuration space described by the two entangled particles. Synthesizing an EPR-molecule is reminiscent of synthesizing a dispherical particle for an idealized quantum Young's diffraction experiment (a simplified double-slit experiment), where it was shown that the subsequent quantum trajectory for the dispherical particle transited both slits simultaneously [\ref{bib:fp37b}]. Quantum trajectories for multi-chromatic particles have also explained wave packet spreading [\ref{bib:fp37a}]. The terminology ``EPR-molecule'' is reserved for the example considered by both EPR and Bohr in their 1935 papers where they examine identical particles recoiling in opposite directions from each other after an entangling and scattering interaction [\ref{bib:epr},\ref{bib:bohr}]. This investigation examines this situation in the limit that the recoiling particles become identical. For situations where the recoiling particles are not identical, the terminology ``epr-molecule'' is used.
For general situations of entanglement, the terminology ``entanglement molecule" is used, which is a generalization of D\"{u}r's 2001 usage to describe entanglement among qubits [\ref{bib:dur}]. Herein, the concept of a self-entangled quantum particle [\ref{bib:fp37b},\ref{bib:fp37a}] is extended to synthesize an entanglement molecule from two entangled particles. Much of the formulation for describing EPR phenomena is common to that for self-entangled phenomena, but the application differs physically. Herein, we apply quantum trajectories to investigate the quantum motion of entanglement molecules. For non-identical entangled particles, the consequent epr-molecule may spread and manifest nonlocality consistent with the Aspect experiments [\ref{bib:aspect}]. Our investigation of EPR in a quantum trajectory representation first synthesizes the epr-molecule. We next extract the generator of the quantum motion (quantum reduced action or Hamilton's characteristic function) for the epr-molecule from its wave function. Jacobi's theorem then renders the quantum trajectory for the epr-molecule. The resultant quantum trajectory has retrograde segments interspersed between its segments of forward motion. This alternation of forward and retrograde segments generates the nonlocality associated with entanglement. Dissection of the equation of quantum motion for the quantum trajectory reveals the classical motion for the two recoiling particles plus motion for an emergent additional entity that contains the entanglement information. This entity is designated as the ``entanglon". The entanglon is to the entanglement molecule what the chemical bond is to a standard molecule. The motion for the EPR-molecule is determined from the motion for the epr-molecule in the limit that the recoiling particles become identical. In Section 2, we develop the formulation for applying quantum trajectories to the EPR gedanken experiment. This includes synthesizing the entanglement molecule, developing its generator of quantum motion, and subsequently developing its quantum trajectory. In Section 3, we examine a particular example of an epr-molecule. We generate its theoretical equation of quantum motion. In Section 4, we examine the corresponding example for the EPR-molecule by taking the EPR-limit of our results for the epr-example. In Section 5, we exhibit the emergence of the ``entanglon" in the quantum trajectory representation of quantum mechanics. In Section 6, we present conclusions. Our conclusions include a discussion of how our quantum trajectory representation differs from the Copenhagen interpretation and from Bohmian mechanics. \section{Formulation} We adopt the physical setup of the original gedanken experiment considered by EPR [\ref{bib:epr}] and Bohr [\ref{bib:bohr}] for investigation. However, we shall examine the EPR experiment in a quantum Hamilton-Jacobi representation rather than in a Schr\"{o}dinger wave function ($\psi$) representation to gain insight into the relationship between entanglement and nonlocality. Let us consider two particles, $P_1$ and $P_2$, with spatial wave functions $\psi_1(x)$ and $\psi_2(x)$, that interact through an instantaneous impulse (kick) at time $t=0$, rather than through an interaction over the duration $0 \le t \le T$ as per EPR [\ref{bib:epr}], and then become entangled for $t>0$. The positions $(x_1,x_2)$ of the two particles coincide at the time of the impulse interaction, $t=0$, with $x_1=x_2=0$. 
The masses of $P_1$ and $P_2$ are respectively given by $m$ and $\alpha^2m$ where $0 < \alpha \le 1$. The factor $\alpha$ in $\psi_2$ is inserted arbitrarily as a convenient means by which we may approach EPR in the limit $\alpha \to 1$. More about this later. For mathematical simplicity, let us conjure up some interaction and an inertial reference system, reminiscent of EPR [\ref{bib:epr}] and Bohr [\ref{bib:bohr}], that induces the two particles to recoil from each other in opposite directions after the impulse with their spatial wave functions given by \begin{equation} \psi_1(x)=\exp(ikx), \ \ \ \ \psi_2(x)=\alpha \exp(-ikx+i\beta); \ \ \ \ t > 0 \label{eq:recoil} \end{equation} \noindent where $k=[2(1+\alpha^2)mE]^{1/2}/\hbar$, $E$ is the energy of the epr-molecule and $-\pi<\beta \le \pi$. The term $\beta$ represents a phase shift between the two particles. In our chosen reference system, $(\psi_1,\psi_2)$ form a set of independent solutions of the Schr\"{o}dinger equation with energy $E$ [\ref{bib:prd34},\ref{bib:fm2}], which facilitates the application of the quantum trajectory representation. The wave functions $\psi_1$ and $\psi_2$ have not been normalized absolutely but do have relative normalization with regard to each other, as manifested by the factor $\alpha$. By Eq.\ (\ref{eq:recoil}), $\psi_1$ and $\psi_2$ are not the wave functions for identical particles unless $\alpha = 1$. EPR [\ref{bib:epr}] and Bohr [\ref{bib:bohr}] considered identical particles. For completeness, the particular combination of impulse interaction at $t=0$ and particle velocities $\dot{x}_1$ and $\dot{x}_2$ for $-1 \ll t<0$ necessary to render the particular results of Eq.\ (\ref{eq:recoil}) is generally not unique. While the particles $P_1$ and $P_2$ have causal positions $x_1$ and $x_2$ respectively, their wave functions $\psi_1$ and $\psi_2$ in the Copenhagen interpretation render the Born probability amplitude over $x$. In the quantum trajectory representation, the set $(\psi_1,\psi_2)$ of independent solutions to the Schr\"{o}dinger equation in one dimension is related to the reduced action through the invariance of the Schwarzian derivative of the reduced action with regard to $x$ under a M\"{o}bius transform $(a\psi_1-b\psi_2)/(c\psi_1-d\psi_2),\ ad-bc \ne 0$ [\ref{bib:fm2}]. Our criterion for the choice of inertial reference system, for which $\psi_1$ and $\psi_2$ have the wave numbers $k$ and $-k$, generates the relationship $x_1=-x_2/\alpha^2$ for the positions of the two particles. This is an extension of Fine's conservation of relative position [\ref{bib:fine}]. For $\alpha=\pm 1$, conservation of relative position holds, and the inertial reference system is the center-of-mass inertial system. Conservation of relative position will induce loss of parameter independence and outcome independence [\ref{bib:hw}] in the EPR experiment. EPR and Bohr assumed that, for times sufficiently long after the interaction at $t=0$, $x_1+x_2 \gg 1$ sufficiently to ensure ``separability" of the particles $P_1$ and $P_2$. But we herein assume that the two particles remain entangled no matter how far apart they become, as first confirmed by the Aspect experiments [\ref{bib:aspect}]. 
For entanglement in the wave function representation of quantum mechanics, we may synthesize an epr-molecule as a simple polar wave function, $\psi_{epr}$, from the entangled pair (bipolar wave function) $\psi_1$ and $\psi_2$ by [\ref{bib:fp37a},\ref{bib:holland}] \begin{eqnarray} \psi_{epr}(x) & = & \overbrace{\exp(ikx) + \alpha \exp(-ikx + i\beta )}^{\mbox {\normalsize bipolar wave function}} \nonumber \\ & = & \underbrace{[1+\alpha^2+2 \alpha \cos(2kx+\beta)]^{1/2} \exp \left[ i \arctan \left( \frac{\sin(kx) - \alpha \sin(kx+\beta)}{\cos(kx) + \alpha \cos(kx+\beta)}\right) \right]}_{\mbox {\normalsize polar wave function is still an eigenfunction for $E=\hbar^2k^2/[2m(1+\alpha^2)]$.}} \label{eq:eprpsi} \end{eqnarray} \noindent where we have dropped the subscript upon the particle position $x$ by the extension of conservation of relative position. The above construction is just the superposition principle at work. It converts a bipolar {\itshape Ansatz} to a polar {\itshape Ansatz} [\ref{bib:vigsym3},\ref{bib:fm2},\ref{bib:prd26},\ref{bib:prd25}--\ref{bib:wyatt}]. In the EPR limit, $\lim_{\alpha \to 1} \psi_{epr} \to \psi_{EPR} = 2 \cos(kx)$ or $2i \sin(kx)$ respectively for $\beta = 0$ or $\pi$, as expected. From Eq.\ (\ref{eq:eprpsi}), $\psi_{epr}$ has the same form as a dichromatic wave function $\psi_{dichromatic}$ investigated in Reference \ref{bib:fp37a}. But $\psi_{epr}$ and $\psi_{dichromatic}$ represent different physics, as the two spectral components of $\psi_{dichromatic}$ induce self-entanglement within a dichromatic particle. The wave function for the epr-molecule, $\psi_{epr}$, as exhibited by Eq.\ (\ref{eq:eprpsi}), does not uniquely specify the epr-components. For example, the entanglement of a running wave function, $(1-\alpha)\exp(ikx)$, and a standing wave function, $2 \alpha \exp(-i\beta/2) \cos(kx+\beta/2)$, would also render the very same $\psi_{epr}$ given by Eq.\ (\ref{eq:eprpsi}). By the superposition principle, $\psi_{epr}$ remains valid for any combination of particles as long as the collective sum of their spectral components is consistent with the right side of the upper line of Eq.\ (\ref{eq:eprpsi}). In the wave function representation, $\psi_{epr}$, as represented by Eq.\ (\ref{eq:eprpsi}), is inherently nonlocal for it is not factorable, that is [\ref{bib:ch}] $\psi_{epr} \ne K \psi_1 \psi_2$ where $K$ is a constant. Any measurement upon the $\psi_{epr}$ for the epr-molecule concurrently measures $\psi_1$ and $\psi_2$. Likewise, in the quantum trajectory representation of quantum mechanics, entanglement implies that the reduced action (Hamilton's characteristic function) for the epr-molecule, $W_{epr}$, is inseparable by particles, that is $W_{epr} \ne W_{\mbox{\scriptsize particle 1}} + W_{\mbox{\scriptsize particle 2}}$. The $\psi_{epr}$ is not the wave function representing the EPR landscape. The actual wave function for the EPR-molecule, $\psi_{EPR}$, for identical particles is given by \[ \psi_{EPR} = \lim_{\alpha \to 1} (\psi_{epr}). \] \noindent In general, we shall investigate EPR phenomena, where $\alpha=1$, by \[ \lim_{\alpha \to 1} {\Big(\mbox{epr-phenomenon}\Big) \to \mbox{EPR-phenomenon}}. \] \noindent This avoids directly working with standing waves to establish quantum trajectories and permits us to study the behavior of quantum trajectories and other phenomena in the limit that the complex running wave function, $\psi_{epr}$, becomes a real standing wave function, $\psi_{EPR}$, as $\alpha \to 1$. 
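As an illustrative aside (ours, not part of the original derivation), the following short Python sketch checks numerically that the bipolar and polar forms in Eq.\ (\ref{eq:eprpsi}) agree pointwise. The test values of $\alpha$, $\beta$ and $k$ are arbitrary, and the sketch adopts the phase convention $\psi_2=\alpha\exp[-i(kx+\beta)]$, for which the printed polar form is an exact identity; under the sign convention of Eq.\ (\ref{eq:recoil}) the same check holds with $\beta\mapsto-\beta$. The two-argument arctangent selects the appropriate branch of the arctangent in Eq.\ (\ref{eq:eprpsi}).
\begin{verbatim}
import numpy as np

alpha, beta, k = 0.5, 0.7, np.pi/2            # arbitrary test values (ours)
x = np.linspace(-10.0, 10.0, 2001)

# Bipolar form, with the phase convention psi_2 = alpha*exp(-i(kx+beta)).
bipolar = np.exp(1j*k*x) + alpha*np.exp(-1j*(k*x + beta))

# Polar form of Eq. (eprpsi); arctan2 picks the correct branch of the arctangent.
amplitude = np.sqrt(1 + alpha**2 + 2*alpha*np.cos(2*k*x + beta))
phase = np.arctan2(np.sin(k*x) - alpha*np.sin(k*x + beta),
                   np.cos(k*x) + alpha*np.cos(k*x + beta))

assert np.allclose(bipolar, amplitude*np.exp(1j*phase))
print("maximum deviation:", np.abs(bipolar - amplitude*np.exp(1j*phase)).max())
\end{verbatim}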
A generator of the motion for the epr-molecule is its reduced action, $W_{epr}$. The reduced action may be extracted from the un-normalized $\psi_{epr}$ since microstates do not exist for $\psi_{epr}$ [\ref{bib:fpl9}]. The reduced action is given by [\ref{bib:fp37a},\ref{bib:holland}] \begin{equation} W_{epr} = \hbar \arctan \left( \frac{\sin(kx) - \alpha \sin(kx+\beta)}{\cos(kx) + \alpha \cos(kx+\beta)}\right). \label{eq:eprw} \end{equation} \noindent Whereas we extracted the reduced action, $W_{epr}$, from the Schr\"odinger wave function herein for convenience, Faraggi and Matone have shown that in general the reduced action may be derived from their quantum equivalence principle independently of the Schr\"odinger formulation of quantum mechanics [\ref{bib:fm}]. The reduced action, $W_{epr}$, is also the solution of the QSHJE for $E=\hbar^2k^2/[2m(1+\alpha^2)]$ [\ref{bib:fp37a}]. Equation (\ref{eq:eprw}) posits a deterministic $W_{epr}$ in Euclidean space, in contrast to $\psi_{epr}$ with its probability amplitude posited in Hilbert space. The absolute value of $W_{epr}$ increases monotonically with $x$, as the arctangent function in $W_{epr}$ jumps to the next Riemann sheet whenever the underlying tangent function becomes singular. The conjugate momentum for the epr-molecule is given by \begin{equation} \partial W_{epr}/\partial x = \frac{\hbar k}{[1+\alpha^2+2 \alpha \cos(2kx+\beta)]}. \label{eq:cm} \end{equation} \noindent The conjugate momentum manifests entanglement through the cosine term in the denominator on the right side of Eq.\ (\ref{eq:cm}). We note from Eqs.\ (\ref{eq:cm}) and (\ref{eq:eom}) that the conjugate momentum is not the mechanical momentum, i.e., \begin{equation} \partial W_{epr}/\partial x \ne m\dot{x}. \label{eq:mm} \end{equation} The equation of quantum motion for the epr-molecule is generated from $W_{epr}$ by Jacobi's theorem as \begin{equation} \underbrace{t_{epr}-\tau=\frac{\partial W_{epr}}{\partial E}}_{\mbox{Jacobi's theorem}}=\frac{mx(1-\alpha^2)}{\hbar k[1+\alpha ^2 +2\alpha \cos(2kx + \beta)]} \label{eq:eom} \end{equation} \noindent where $t$ is time and $\tau$ specifies the epoch. The quantum trajectory for the epr-molecule is in Euclidean space and renders the determinism proposed by EPR [\ref{bib:epr}]: the position of the epr-molecule as a function of time can be predicted with certainty without disturbing the system. In the foregoing, we note that our use of ``certainty" is appropriate for three reasons. First, in the Copenhagen interpretation, the Heisenberg uncertainty principle uses an insufficient subset of the necessary and sufficient set of initial values that specifies unique quantum motion [\ref{bib:fm2},\ref{bib:prd29},\ref{bib:ijmpa15}]. Second, the quantum trajectories exist in Euclidean space here while the Schr\"{o}dinger wave function representation is formulated in Hilbert space [\ref{bib:rc}]. And third, the quantum Hamilton-Jacobi representation contains more information than the Schr\"odinger wave function representation and renders a unique, deterministic quantum trajectory [\ref{bib:fm2},\ref{bib:prd29},\ref{bib:fpl9},\ref{bib:ijmpa15}]. Realism follows from determinism, for the epr-molecule maintains a precise, theoretical quantum trajectory independent of its being measured. Nevertheless, nothing herein implies that a measurement on an epr-molecule does not physically disturb the epr-molecule, in compliance with Bohr's complementarity principle. 
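As a further numerical aside (ours), the following sketch evaluates $W_{epr}$ by continuing the phase of the bipolar wave function across Riemann sheets (via \texttt{numpy.unwrap}), illustrating the branch jumps just described and confirming that $|W_{epr}|$ grows monotonically with $x$ for $\alpha<1$. The parameter values and the same phase convention as in the previous sketch are assumptions of ours.
\begin{verbatim}
import numpy as np

hbar, alpha, beta, k = 1.0, 0.5, 0.0, np.pi/2       # example values (ours)
x = np.linspace(0.0, 20.0, 20001)

psi = np.exp(1j*k*x) + alpha*np.exp(-1j*(k*x + beta))
W = hbar*np.unwrap(np.angle(psi))    # phase continued across Riemann sheets

# For alpha < 1 the continued W_epr grows monotonically with x, as stated in the text.
assert np.all(np.diff(np.abs(W)) >= 0)
print("W_epr grows from %.3f to %.3f over 0 <= x <= 20" % (W[0], W[-1]))
\end{verbatim}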
The use of Jacobi's theorem to develop an equation of quantum motion, Eq.\ (\ref{eq:eom}), is consistent with Peres's quantum clocks where $t-\tau=\hbar (\partial \varphi /\partial E)$ and $\varphi$ is the phase of the complex wave function of the particle under consideration [\ref{bib:peres}]. Equation (\ref{eq:eom}) is a generalization of this, for it applies to situations where the wave function is real [\ref{bib:fm2},\ref{bib:prd26},\ref{bib:fpl9}]. We also note that the development of quantum trajectories differs from that of Bohmian mechanics [\ref{bib:bohm}]. Bohmian mechanics assumes that the conjugate momentum, $\partial W_{epr}/\partial x$, is the mechanical momentum, in contradiction to Eq.\ (\ref{eq:mm}), and subsequently integrates it to render an equation of quantum motion that differs from Eq.\ (\ref{eq:eom}). Recently, Ghose has shown for some entangled multiparticle systems that choosing the particle distribution in Bohmian mechanics consistent with the ``quantum equilibrium hypothesis" cannot be assured: a Bohmian interpretation becomes problematic for such systems [\ref{bib:ghose}]. Ghose did investigate, in a Bohmian representation, the entanglement of Eq.\ (\ref{eq:eprpsi}) that is studied herein. In closing this section, we note that measurements on $\psi_{epr}$, which concurrently measure $\psi_1$ and $\psi_2$, support the position of Bohr in the EPR-Bohr debate [\ref{bib:bohr}]. On the other hand, the very existence of quantum trajectories for the epr-molecule supports the position of EPR with regard to reality [\ref{bib:epr}]. As previously discussed, the quantum Hamilton-Jacobi representation contains more information than $\psi$, which challenges the completeness of $\psi$ and also supports the position of EPR. \section{Example} Let us consider the particular example of the quantum trajectories of an epr-molecule specified by $\hbar=1, \ m=1, \ k=\pi/2, \ \alpha = 0.5$, \ $\tau=0$, and $\beta = 0,\pi$. The resultant quantum trajectories, which are governed by Eq.\ (\ref{eq:eom}), are exhibited on Fig.\ 1, where the solid line renders the quantum trajectory for $\beta=0$ and the dashed line that for $\beta=\pi$. These quantum trajectories are launched from the origin, $(t,x)=(0,0)$. Near $x = 1,2,3,\cdots$, the quantum trajectory for the epr-molecule with $\beta = 0$ on Fig.\ 1 has turning points with regard to time, $t$, where the quantum trajectory changes between forward and retrograde motion [\ref{bib:fp37a}]. The turning points cause the quantum trajectory to alternate between forward and retrograde motion, implying nonlocality and action at a distance, as the quantum trajectory at various instants of time has separate, multiple locations. Furthermore, the good behavior of the quantum trajectories (at least continuous first-order derivatives) implies superluminality at the turning points for the epr-molecule, where $\dot{x} \to \pm \infty$ at the extrema in $t$. This superluminality is another manifestation of nonlocality. We note that these superluminalities at the turning points are integrable, as exhibited on Fig.\ 1. The quantum trajectory for the epr-molecule, as exhibited by Fig.\ 1, is restricted to the approximate wedge given by \[ \frac{mx}{3\hbar k} \le t \le \frac{3mx}{\hbar k}. \] \noindent The upper boundary of the wedge, $t_u=3mx/(\hbar k)$, manifests maximum destructive interference between $\psi_1$ and $\psi_2$ while the lower boundary, $t_{\ell}=mx/(3 \hbar k)$, manifests maximum reinforcement. 
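This example can be reproduced directly from Eq.\ (\ref{eq:eom}). The following sketch (ours) tabulates $t_{epr}(x)$ for the stated parameters, locates the turning points as sign changes of $dt/dx$, and confirms that the trajectory remains inside the wedge quoted above; the grid and numerical tolerances are arbitrary choices of ours.
\begin{verbatim}
import numpy as np

hbar = m = 1.0
k, alpha, beta = np.pi/2, 0.5, 0.0          # parameters of the example (tau = 0)

def t_epr(x):
    """Equation of quantum motion, Eq. (eom), for the epr-molecule."""
    return m*x*(1 - alpha**2) / (hbar*k*(1 + alpha**2 + 2*alpha*np.cos(2*k*x + beta)))

x = np.linspace(1e-6, 6.0, 60001)
t = t_epr(x)

# Turning points: sign changes of dt/dx, where forward motion switches to retrograde.
dt = np.gradient(t, x)
turning = x[1:][np.sign(dt[1:]) != np.sign(dt[:-1])]
print("turning points near:", np.round(turning, 3))   # they lie near x = 1, 2, 3, ...

# The trajectory stays inside the wedge m*x/(3*hbar*k) <= t <= 3*m*x/(hbar*k).
assert np.all(t >= m*x/(3*hbar*k) - 1e-12)
assert np.all(t <= 3*m*x/(hbar*k) + 1e-12)
\end{verbatim}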
This may be generalized with regard to $\alpha$ by \begin{equation} \frac{(1-\alpha)mx }{(1+\alpha)\hbar k} \le t \le \frac{(1+\alpha)mx}{(1-\alpha)\hbar k}. \label{eq:wedge} \end{equation} \noindent This wedge may be densely filled by varying the phase shift $\beta$ over its range $(-\pi,\pi]$, as exhibited by Fig.\ 2. For $\alpha \ll 1$, latent early time reversals may be suppressed [\ref{bib:fp37a}]. As the quantum trajectory for the epr-molecule progresses along the wedge away from its launch point at the wedge's apex at the origin $(t,x)=(0,0)$, the durations of time spent on individual forward and retrograde segments increase. The dichromatic particle offers a precedent for understanding this motion of alternating forward and retrograde segments of progressively increasing duration as a manifestation of wave-packet spreading [\ref{bib:fp37a}]. Here, the analogous behavior for the epr-molecule manifests an increasing spatial displacement between its two component particles, $P_1$ and $P_2$. There is another way to interpret the quantum trajectories exhibited in Figs.\ 1 and 2, in which the concept of retrograde motion is replaced by invoking creation and annihilation operations at the turning points [\ref{bib:fp37a}]. At the local temporal minima, where there is maximum reinforcement between $\psi_1$ and $\psi_2$ (which synthesize $\psi_{epr}$), pairs of quantum trajectories for the epr-molecule are spontaneously created. Within each pair, one quantum trajectory propagates in the $+x$ direction; the other, in the $-x$ direction. Note that these creation operations do not imply that $\psi_{epr}$ has been spectrally analyzed into $\psi_1$ and $\psi_2$ to propagate separately on the two different branches: rather, the creation operations manifest spontaneous nonlocality where $\psi_{epr}$ propagates along both branches. Each branch of the pair terminates at a local temporal maximum, where it is annihilated along with a branch from another pair of quantum trajectories, as exhibited on Fig.\ 1. These annihilated quantum trajectories were created at different local temporal minima and propagate in opposite directions with regard to $x$. The local temporal maxima represent points of maximum destructive interference between $\psi_1$ and $\psi_2$ within $\psi_{epr}$. \section{Quantum trajectory for EPR-molecule} The wave function for the EPR-molecule is a standing wave function. As such, its corresponding quantum trajectory is ill defined. We shall resolve its quantum trajectory by a limiting process. We still assume the conditions $\hbar =1,\ m=1, \ k=\pi/2,$ and $\beta=0$. For $\beta=0$, the epr-reduced action simplifies to \[ W_{epr}=\hbar \left[ \arctan \left(\frac{1-\alpha}{1+\alpha} \tan(kx) \right) \right]. \] The EPR wave function given by the upper line of Eq.\ (\ref{eq:eprpsi}) with $\alpha = 1$ trivially represents a standing wave function, $2 \cos(kx)$. Likewise, the limiting process, $\alpha \to (1-)$, when applied to the second line of Eq.\ (\ref{eq:eprpsi}), also renders \begin{equation} \lim _{\alpha \to (1-)} \big(\psi _{epr}\big) = 2\cos(kx) = \psi_{EPR}. \label{eq:psiEPR} \end{equation} \noindent Our limiting process for EPR has $\alpha$ approach 1 from below, $\alpha \to (1-)$. Concurrently, the instantaneous inertial reference frame, which is dependent upon $\alpha$, is continuously constrained throughout the limiting process to maintain the wave numbers $k$ and $-k$ for $\psi_1$ and $\psi_2$ respectively. 
In the limit $\alpha \to (1-)$, both edges of the wedge exhibited in Figs.\ 1 and 2 become orthogonal [\ref{bib:fp37a}]. The wedge spans the entire quadrant $t,x \ge 0$ of the $t,x$-plane. Had we chosen to take the limit of $\alpha$ approaching 1 from above, then Eq.\ (\ref{eq:psiEPR}) would still be valid but the wedge would have spanned the quadrant $t \ge 0,x \le 0$ of the $t,x$-plane. The equation of quantum motion for the EPR-molecule, which by Jacobi's theorem is $t_{EPR}=\partial W_{EPR}/\partial E$, is rendered by taking the limit $\alpha \to (1-)$ of the epr equation of quantum motion, Eq.\ (\ref{eq:eom}). For a launch point (initial position) of $(t,x)=(0,0)$, quantum motion for the EPR-molecule in the limit $\alpha \to (1-)$ is given by [\ref{bib:fpl9}] \begin{equation} \lim _{\alpha \to (1-)} t_{epr}= t_{EPR} = \sum _{n=1}^{\infty}\delta [x-(2n-1)\pi /(2k)] = \sum _{n=1}^{\infty} \delta [x-(2n-1)], \ \ x>0, \ \tau_{EPR}=0 \label{eq:eom-} \end{equation} \noindent consistent with the equation of quantum motion, Eq.\ (\ref{eq:eom}). For $x<0$ and the launch point still at $(t,x)=(0,0)$, we investigate the case $1 \le \alpha \le \infty$ using the limiting process $\alpha \to 1$ from above. This renders \begin{equation} \lim _{\alpha \to (1+)} t_{\downarrow epr} = t_{\downarrow EPR} = -\sum _{n=1}^{\infty} \delta [x-(2n-1)\pi /(2k)] = -\sum _{n=1}^{\infty} \delta [x-(2n-1)], \ \ x<0, \ \tau_{\downarrow EPR}=0 \label{eq:eom+} \end{equation} \noindent where the prefix $\downarrow$ in the subscripts denotes the limiting process $\alpha \to (1+)$ to generate quantum trajectories into the domain $x<0$. The prefix $\uparrow\hspace{-4pt}\cup\hspace{-5pt}\downarrow$ denotes the union of the limiting processes $\alpha \to (1\mp)$. For a launch point at $x=0$, the $\uparrow\hspace{-4pt}\cup\hspace{-5pt}\downarrow$EPR-molecule has positive infinite velocity for $x>0$ and $x \ne 1,3,5,\cdots$ by Eq.\ (\ref{eq:eom-}) and negative infinite velocity for $x<0$ and $x \ne -1,-3,-5,\cdots$ by Eq.\ (\ref{eq:eom+}) in this nonrelativistic representation. These infinite magnitudes of velocity at $x \ne \pm 1,\pm 3,\pm 5,\cdots$ imply action at \emph{infinite} distances within the EPR-molecule in this nonrelativistic examination. Also, at the trigger points of the $\delta$-function of Eqs.\ (\ref{eq:eom-}) and (\ref{eq:eom+}), $x= \pm 1,\pm 3,\pm 5,\cdots$, the EPR-molecule has nil velocity consistent with $\psi_{EPR}=2 \cos(kx)$. Thus, the limiting process, $\alpha \to 1$, renders the expected standing wave function for $\psi_{EPR}$ given by Eq.\ (\ref{eq:psiEPR}) while the limiting process also renders a consistent equation of quantum motion for $t_{\uparrow\hspace{-1pt}\cup\hspace{-1pt}\downarrow}$ given by Eqs.\ (\ref{eq:eom-}) and (\ref{eq:eom+}). The alternative interpretation using creation and annihilation operations, which has already been discussed in Section 3, raises the question of whether these operations imply high-energy processes. They do not. This is shown by applying Faraggi and Matone's effective quantum mass, $m_{Q_{EPR}}=m(1-\partial Q_{EPR}/\partial E)$ where $Q$ is Bohm's quantum potential [\ref{bib:fm}], to this investigation. For the EPR-molecule, $m_{Q_{EPR}}$ becomes [\ref{bib:fp37a}] \begin{eqnarray} \lim_{\alpha \to 1} m_{Q_{\pm EPR}} & = & 0, \ \ x \ne \pm 1,\pm 3,\pm 5, \cdots \nonumber \\ & = & \infty, \ \ x = \pm 1,\pm 3,\pm 5, \cdots. 
\label{eq:EPReom} \end{eqnarray} \noindent Note that $m_{Q_{\pm EPR}}$ here becomes infinite where the velocity of the EPR-molecule is nil and becomes nil where the velocity is infinite. This is consistent with the conjugate momentum remaining finite [\ref{bib:fm}]. Herein, neither do creation operations imply endoergic processes nor do annihilation operations imply exoergic processes. \section{The ``entanglon"} \noindent Let us now demonstrate the emergence of the entanglon for an epr-molecule from the equation of quantum motion in the quantum trajectory representation of quantum mechanics. We shall dissect the equation of quantum motion for the synthetic epr-molecule, Eq.\ (\ref{eq:eom}), to resolve the contributions to $t_{epr}$ by particles $P_1$ and $P_2$ individually. These two individual contributions are insufficient by themselves to render $t_{epr}$, for there remains a contribution due to the entanglement between the two particles. Equation (\ref{eq:eom}) may be dissected as \begin{eqnarray} t_{epr} & = & \ \frac{mx(1-\alpha^2)}{\hbar k[1+\alpha ^2 +2\alpha \cos(2kx + \beta)]} \nonumber \\ & = & \ \underbrace{\frac{mx}{\hbar k} \frac{1}{1+\alpha^2}}_{\mbox{\small particle 1}} \ - \ \underbrace{\frac{mx}{\hbar k} \frac{2\alpha \frac{1-\alpha^2}{1+\alpha^2} \cos(2kx+\beta)}{1+\alpha^2+2\alpha \cos(2kx+\beta)}}_{\mbox{\small entanglon}} \ - \ \underbrace{\frac{mx}{\hbar k} \frac{\alpha^2}{1+\alpha^2}}_{\mbox{\small particle 2}} \label{eq:aeom} \end{eqnarray} \noindent where the epoch has been set as $\tau=0$. The contributions to $t_{epr}$ from particles 1 and 2 are weighted. In the EPR limit, $\alpha \to 1$, the contributions of particles 1 and 2 cancel each other. The remaining contribution that emerges in Eq.\ (\ref{eq:aeom}) has been allocated to an entity now identified as the ``entanglon". Its contribution to $t_{epr}$ in Eq.\ (\ref{eq:aeom}) is identified as $t_{epr_e}$. Then, in the EPR limit, $\alpha \to (1-)$, $t_{EPR_e}$ is given by \begin{eqnarray} t_{EPR_e} & = & \lim_{\alpha \to (1-)} \left( \frac{mx}{\hbar k} \frac{2\alpha \frac{1-\alpha^2}{1+\alpha^2} \cos(2kx+\beta)}{1+\alpha^2+2\alpha \cos(2kx+\beta)} \right) \nonumber \\ & = & \left\{ \begin{array}{ll} 0,\ \ \ x \ne 1,3,5,\cdots \\ \lim_{\alpha \to (1-)}\left( \frac{mx}{\hbar k} \frac{2}{1-\alpha}\right) \to \infty,\ \ \ x=1,3,5,\cdots. \end{array} \right. \label{eq:teEPR} \end{eqnarray} \noindent Hence, $t_{EPR}$ exhibits multiple $\delta$-function behavior at $x = n\pi /(2k), n=1,3,5,\cdots .$ As the contributions to $t_{EPR}$ from particles 1 and 2 mutually cancel each other in the EPR limit $\alpha \to (1-)$, as shown by Eq.\ (\ref{eq:aeom}), we have that $t_{EPR}=t_{EPR_e}$. The $\delta$-function behavior of $t_{EPR_e}$ for the entanglon as given by Eq.\ (\ref{eq:teEPR}) is consistent with the motion of the standing wave exhibited by Eq.\ (\ref{eq:EPReom}) at $x=1,3,5,\cdots$ for $\beta=0$. Thus, the entanglon induces retrograde motion, which manifests nonlocality. The entanglon in the EPR limit implies action (entanglement) at infinite distances within the EPR-molecule as $t_{EPR_e} \to 0$ by Eq.\ (\ref{eq:teEPR}) for $x \ne 1,3,5,\cdots$. The entanglon is not an ``external" force carrier between particles such as the photon, graviton, etc., for the latent motions of the individual particles $P_1$ and $P_2$ of the epr-molecule remain linear with constant velocity as shown in Eq.\ (\ref{eq:eom}). Nor does the entanglon change either wave function, $\psi_1$ or $\psi_2$. 
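As a consistency check (ours, not part of the original argument), the following SymPy sketch verifies symbolically that the three labelled terms of Eq.\ (\ref{eq:aeom}), combined with the signs shown there, recombine to the closed form of $t_{epr}$ in Eq.\ (\ref{eq:eom}), and that the two particle contributions cancel in the EPR limit so that only the entanglon term survives.
\begin{verbatim}
import sympy as sp

x, alpha, beta, k, m, hbar = sp.symbols('x alpha beta k m hbar', positive=True)

denom = 1 + alpha**2 + 2*alpha*sp.cos(2*k*x + beta)
t_epr = m*x*(1 - alpha**2) / (hbar*k*denom)

particle1 = (m*x/(hbar*k)) * 1/(1 + alpha**2)
particle2 = (m*x/(hbar*k)) * alpha**2/(1 + alpha**2)
entanglon = (m*x/(hbar*k)) * (2*alpha*(1 - alpha**2)/(1 + alpha**2)
                              * sp.cos(2*k*x + beta)) / denom

# The difference is a rational function of cos(2kx+beta), so cancellation suffices.
assert sp.simplify(sp.cancel(particle1 - entanglon - particle2 - t_epr)) == 0

# In the EPR limit alpha -> 1 the two particle terms cancel each other.
assert sp.simplify((particle1 - particle2).subs(alpha, 1)) == 0
\end{verbatim}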
Nevertheless, the entanglon does maintain the correlation between $\psi_1$ and $\psi_2$, which it may do superluminally. In so doing, the entanglon renders an ``internal" force within the epr-molecule that influences the quantum trajectory of the epr-molecule while maintaining a coherent epr-molecule. The entanglon also has characteristics in common with the gluon. Neither exists in isolation. When coherence within the epr-molecule is lost, then the entanglon no longer exists. Another characteristic that entanglons have in common with gluons regards strength with range: gluons become stronger with range. Also, as range increases, the entanglon, as well as the epr-molecule, spontaneously develops an additional pair of segments that alternate with regard to retrograde and forward motion in the quantum trajectory. These segments imply the existence of multipaths, which are inherently nonlocal, for the entanglon. The number of multipaths increases with range, which mitigates any loss of coherence between the entangled particles with range. Thus, the notion that widely separated entangled particles should become independent of each other due to Einstein locality is refuted in this nonrelativistic investigation. For completeness, the forward and retrograde segments of the entanglon are reminiscent of Cramer's transactional interpretation of quantum mechanics [\ref{bib:cramer}]. The transactional interpretation postulates that a quantum interaction be a standing wave synthesized from a retarded (forward-in-time) wave and an advanced (retrograde) wave. The concept of the entanglon also supports a hierarchy of entanglement, critical for an undivided universe, that has been postulated in Bohmian mechanics [\ref{bib:bh}]. \section{Conclusions} We conclude that entangled particles may be synthesized into entanglement molecules. The quantum trajectory representation of quantum mechanics does describe causal behavior of the entanglement molecule without invoking the Born probability postulate for $\psi$. The particular quantum trajectory for an entanglement molecule may be specified by a single constant of the motion, $E$. The quantum trajectory representation, including Faraggi and Matone's quantum equivalence principle and their effective quantum mass, does resolve some of the mysteries of EPR. The quantum trajectory representation renders the emergence of the entanglon, which maintains coherence between widely separated, entangled entities. Quantum trajectories in a nonrelativistic theory have shown for the EPR gedanken experiment that entanglement may be maintained superluminally. In the case of the EPR limit, entanglement is maintained instantaneously. Also, quantum trajectories in the EPR limit imply action at infinite distances in this nonrelativistic investigation. Hence, the locality loophole cannot be closed. This opus is consistent with Copenhagen through the description of the wave function for the epr-molecule, $\psi_{epr}$, as exhibited by Eq.\ (\ref{eq:eprpsi}), but differs thereafter. The anticipated Copenhagen response would stipulate that a measurement upon $\psi_{epr}$ would render a probabilistic outcome for the epr-molecule. By axiomatic precept, Copenhagen has denied the very existence of the deterministic quantum trajectories that were used herein. As noted in the Introduction, the quantum trajectory representation of quantum mechanics has already shown that $\psi$ is not a complete description of quantum phenomena [\ref{bib:fm2},\ref{bib:rc}--\ref{bib:fpl9}]. 
This opus is also consistent with Bohmian mechanics [\ref{bib:holland}] through the description of the epr-reduced action, $W_{epr}$, as exhibited by Eq.\ (\ref{eq:eprw}), but differs thereafter. Both representations are based upon the same quantum Hamilton-Jacobi equation and develop the same generator of quantum motion. While $W_{epr}$ is a common generator of quantum motion, the quantum trajectory representation and Bohmian mechanics nevertheless have different equations of quantum motion. The quantum trajectory representation develops its equation of quantum motion from Jacobi's theorem, Eq.\ (\ref{eq:eom}). On the other hand, the Bohmian equation of quantum motion is the integration of the conjugate momentum, Eq.\ (\ref{eq:cm}) [\ref{bib:holland}]. For completeness, should a measuring process on the entanglement molecule use a matched filter designed to measure some property of $\psi_1$, for example, then the measuring process will detect that property of $\psi_1$. To detect the entanglement molecule, the measuring filter must be matched to the entanglement molecule as a whole. \noindent {\bf Acknowledgements} I heartily thank Marco Matone and Alon E.\ Faraggi for their interesting discussions and encouragement. \noindent {\bf References} \begin{enumerate}\itemsep -.06in \item \label{bib:epr} A.\ Einstein, B.\ Podolsky and N.\ Rosen, Phys.\ Rev.\ {\bfseries 47}, 777 (1935). \item \label{bib:bohr} N.\ Bohr, Phys.\ Rev.\ {\bfseries 48}, 696 (1935). \item \label{bib:bell} J.\ Bell, {\it Speakable and Unspeakable in Quantum Mechanics} (Cambridge University Press, Cambridge, 1987). \item \label{bib:aspect} A.\ Aspect, P.\ Grangier and G.\ Roger, Phys.\ Rev.\ Lett.\ {\bfseries 47}, 460 (1981); {\bfseries 49}, 91 (1982); A.\ Aspect, J.\ Dalibard and G.\ Roger, Phys.\ Rev.\ Lett.\ {\bfseries 49}, 1804 (1982). \item \label{bib:fine} A.\ Fine, $<$http://plato.stanford.edu/archives/sum2004/entries/qt-epr/$>$. \item \label{bib:hw} D.\ Home and A.\ Whitaker, {\it Einstein's Struggles with Quantum Theory} (Springer, New York, 2007). \item \label{bib:kwiat} P.\ G.\ Kwiat, K.\ Mattle, H.\ Weinfurter, A.\ Zeilinger, A.\ V.\ Sergienko, and Y.\ H.\ Shih, Phys.\ Rev.\ Lett.\ {\bfseries 75}, 4337 (1995). \item \label{bib:torgerson} J.\ R.\ Torgerson, D.\ Branning, C.\ H.\ Monken, and L.\ Mandel, Phys.\ Lett.\ {\bfseries A 204}, 323 (1995). \item \label{bib:digiuseppe} G.\ Di Giuseppe, F.\ De Martini, and D.\ Boschi, Phys.\ Rev.\ {\bfseries A 56}, 176 (1997). \item \label{bib:hardy} D.\ Boschi, S.\ Branca, F.\ De Martini, and L.\ Hardy, Phys.\ Rev.\ Lett.\ {\bfseries 79}, 2755 (1997). \item \label{bib:weihs} G.\ Weihs, T.\ Jennewein, C.\ Simon, H.\ Weinfurter, and A.\ Zeilinger, Phys.\ Rev.\ Lett.\ {\bfseries 81}, 5039 (1998). \item \label{bib:bouwmeester} D.\ Bouwmeester, J.-W.\ Pan, M.\ Daniell, H.\ Weinfurter, and A.\ Zeilinger, Phys.\ Rev.\ Lett.\ {\bfseries 82}, 1345 (1999). \item \label{bib:tittel} W.\ Tittel, J.\ Brendel, N.\ Gisin, and H.\ Zbinden, Phys.\ Rev.\ {\bfseries A 59}, 4150 (1999). \item \label{bib:rowe} M.\ A.\ Rowe, D.\ Kielpinski, V.\ Meyer, C.\ A.\ Sackett, W.\ M.\ Itano, C.\ Monroe, and D.\ J.\ Wineland, Nature {\bfseries 409}, 791 (2001). \item \label{bib:zeilinger} S.\ Gr\"oblacher, T.\ Paterek, R.\ Kaltenbaek, \v{C}.\ Brukner, M.\ \.{Z}ukowski, M.\ Aspelmeyer, and A.\ Zeilinger, Nature {\bfseries 446}, 871 (2007). \item \label{bib:prd34} E.\ R.\ Floyd, Phys.\ Rev.\ {\bf D 34}, 3246 (1986). 
\item \label{bib:vigsym3} E.\ R.\ Floyd, in {\it Gravitation and Cosmology: From the Hubble Radius to the Planck Scale; Proceedings of a Symposium in Honour of the 80th Birthday of Jean-Pierre Vigier}, ed.\ by R.\ L.\ Amoroso et al.\ (Kluwer Academic, Dordrecht, 2002) p.\ 721, extended version quant-ph/0009070. \item \label{bib:fm} A.\ E.\ Faraggi and M.\ Matone, Phys.\ Rev.\ Lett.\ {\bf 78}, 163 (1997), hep-th/9606063; Phys.\ Lett.\ {\bf B 437}, 369 (1997), hep-th/9711028; {\bf B 445}, 77 (1999), hep-th/9809125; {\bf B 445}, 357 (1999), hep-th/9809126; {\bf B 450}, 34 (1999), hep-th/9705108; {\bf A 249}, 180 (1998), hep-th/9801033. \item \label{bib:fm2} A.\ E.\ Faraggi and M.\ Matone, Int.\ J.\ Mod.\ Phys.\ {\bf A 15}, 1869 (2000), hep-th/9809127. \item \label{bib:bfm} G.\ Bertoldi, A.\ E.\ Faraggi and M.\ Matone, Class.\ Quant.\ Grav.\ {\bfseries 17}, 3965 (2000), hep-th/9909201. \item \label{bib:rc} R.\ Carroll, Can.\ J.\ Phys.\ {\bf 77}, 319 (1999), quant-ph/9904081; {\it Quantum Theory, Deformation and Integrability} (Elsevier, Amsterdam, 2000) pp.\ 50--56; {\it Uncertainty, Trajectories, and Duality}, quant-ph/0309023. \item \label{bib:prd26} E.\ R.\ Floyd, Phys.\ Rev.\ {\bf D 26}, 1339 (1982). \item \label{bib:prd29} E.\ R.\ Floyd, Phys.\ Rev.\ {\bf D 29}, 1842 (1984). \item \label{bib:fpl9} E.\ R.\ Floyd, Found.\ Phys.\ Lett.\ {\bf 9}, 489 (1996), quant-ph/9707051. \item \label{bib:fp37b} E.\ R.\ Floyd, Found.\ Phys.\ {\bfseries 37}, 1403 (2007), quant-ph/0605121. \item \label{bib:fp37a} E.\ R.\ Floyd, Found.\ Phys.\ {\bfseries 37}, 1386 (2007), quant-ph/0605120. \item \label{bib:dur} W.\ D\"{u}r, Phys.\ Rev.\ {\bfseries A 63}, 020303(R) (2001). \item \label{bib:holland} P.\ R.\ Holland, {\it The Quantum Theory of Motion} (Cambridge University Press, Cambridge, 1993) pp.\ 86--87, 141--146. \item \label{bib:prd25} E.\ R.\ Floyd, Phys.\ Rev.\ {\bf D 25}, 1547 (1982). \item \label{bib:poirier} B.\ Poirier, J.\ Chem.\ Phys.\ {\bf 121}, 4501 (2004). \item \label{bib:wyatt} R.\ E.\ Wyatt, {\it Quantum Dynamics with Trajectories: Introduction to Quantum Hydrodynamics} (Springer, New York, 2005). \item \label{bib:ch} J.\ F.\ Clauser and M.\ A.\ Horne, Phys.\ Rev.\ {\bfseries D 10}, 526 (1974). \item \label{bib:ijmpa15} E.\ R.\ Floyd, Int.\ J.\ Mod.\ Phys.\ {\bfseries A 15}, 1363 (2000), quant-ph/9907092. \item \label{bib:peres} A.\ Peres, Am.\ J.\ Phys.\ {\bfseries 48}, 552 (1980). \item \label{bib:bohm} D.\ Bohm, Phys.\ Rev.\ {\bfseries 85}, 166 (1952). \item \label{bib:ghose} P.\ Ghose, Adv.\ Sci.\ Lett.\ {\bfseries 2}, 97 (2009), arXiv:0905.2037v1. \item \label{bib:cramer} J.\ G.\ Cramer, Rev.\ Mod.\ Phys.\ {\bfseries 58}, 647 (1986). \item \label{bib:bh} D.\ Bohm and B.\ J.\ Hiley, {\itshape The Undivided Universe} (Routledge, London, 1993). \end{enumerate} \noindent {\bf Figure Captions} \noindent Fig.\ 1. Motion, $x(t)$, of the epr-molecule for $\tau=0$, $A=1$, $B=0.5$, $k=\pi/2$ and $\beta = 0$ as a solid line and for $\beta = \pi$ as a dashed line. \noindent Fig.\ 2. Motion of the epr-molecule, $x(t)$, for $\tau=0$, $A=1$, $B=0.5$, $k=\pi/2$ and $\beta = 0,\pi/4,\pi/2, \cdots,7\pi/4$ for a set of trajectories. All trajectories are displayed as solid lines. \end{document}
\begin{document} \begin{abstract} We generalize the theory of critical groups from graphs to simplicial complexes. Specifically, given a simplicial complex, we define a family of abelian groups in terms of combinatorial Laplacian operators, generalizing the construction of the critical group of a graph. We show how to realize these critical groups explicitly as cokernels of reduced Laplacians, and prove that they are finite, with orders given by weighted enumerators of simplicial spanning trees. We describe how the critical groups of a complex represent flow along its faces, and sketch another potential interpretation as analogues of Chow groups. \end{abstract} \title{Critical Groups of Simplicial Complexes} \section{Introduction} Let $G$ be a finite, simple, undirected, connected graph. The \emph{critical group} of $G$ is a finite abelian group $K(G)$ whose cardinality is the number of spanning trees of $G$. The critical group is an interesting graph invariant in its own right, and it also arises naturally in the theory of a discrete dynamical system with many essentially equivalent formulations --- the \emph{chip-firing game}, \emph{dollar game}, \emph{abelian sandpile model}, etc.--- that has been discovered independently in contexts including statistical physics, arithmetic geometry, and combinatorics. There is an extensive literature on these models and their behavior: see, e.g., \cite{Biggs,BLS,Dhar,GodRoy,LP}. In all guises, the model describes a certain type of discrete flow along the edges of $G$. The elements of the critical group correspond to states in the flow model that are stable, but for which a small perturbation causes an instability. The purpose of this paper is to extend the theory of the critical group from graphs to simplicial complexes. For a finite simplicial complex $\Delta$ of dimension~$d$, we define its higher critical groups as $$K_i(\Delta) := \ker \partial^{\phantom{*}}_i / \im(\partial^{\phantom{*}}_{i+1}\partial^*_{i+1})$$ for $0\leq i\leq d-1$; here $\partial^{\phantom{*}}_j$ means the simplicial boundary map mapping $j$-chains to $(j-1)$-chains. The map $\partial^{\phantom{*}}_{i+1}\partial^*_{i+1}$ is called an \emph{(updown) combinatorial Laplacian operator}. For $i=0$, our definition coincides with the standard definition of the critical group of the 1-skeleton of $\Delta$. Our main result (Theorem~\ref{main-thm}) states that, under certain mild assumptions on the complex $\Delta$, the group $K_i(\Delta)$ is in fact isomorphic to the cokernel of a reduced version of the Laplacian. It follows from a simplicial analogue of the matrix-tree theorem \cite{DKM,DKM2} that the orders $|K_i(\Delta)|$ of the higher critical groups are given by a torsion-weighted enumeration of higher-dimensional spanning trees (Corollary~\ref{count-corollary}) and in terms of the eigenvalues of the Laplacian operators (Corollary~\ref{alt-product}). In the case of a simplicial sphere, we prove (Theorem~\ref{spheres}) that the top-dimensional critical group is cyclic, with order equal to the number of facets, generalizing the corresponding statement \cite{Lor1,Lor2,Merris} for cycle graphs. In the case that $\Delta$ is a skeleton of an $n$-vertex simplex, the critical groups are direct sums of copies of $\mathbb{Z}/n\mathbb{Z}$; as we discuss in Remark~\ref{Molly}, this follows from an observation of Maxwell \cite{Maxwell} together with our main result. 
We also give a model of discrete flow (Section~\ref{sec:model}) on the codimension-one faces along facets of the complex whose behavior is captured by the group structure. Finally, we outline (Section~\ref{sec:intersection}) an alternative interpretation of the higher critical groups as discrete analogues of the Chow groups of an algebraic variety. The authors thank Andy Berget, Hailong Dao, Craig Huneke, Manoj Kummini, Gregg Musiker, Igor Pak, Vic Reiner, and Ken Smith for numerous helpful discussions. \section{Critical Groups of Graphs} \label{sec:graphs} \subsection{The chip-firing game} We summarize the chip-firing game on a graph, omitting the proofs. For more details, see, e.g., Biggs~\cite{Biggs}. Let $G=(V,E)$ be a finite, simple\footnote{The chip-firing game and our ensuing results can easily be extended to allow parallel edges; we assume that $G$ is simple for the sake of ease of exposition.}, connected, undirected graph, with $V=[n] \cup q =\{1,2,\dots,n,q\}$ and $E=\{e_1,\dots,e_m\}$. The special vertex $q$ is called the \emph{bank} (or ``root'' or ``government''). Let $d_i$ be the degree of vertex~$i$, i.e. the number of adjacent vertices. The chip-firing game is a discrete dynamical system whose state is described by a \emph{configuration} vector~$\mathbf{c}=(c_1,\dots,c_n)\in\mathbb{N}^n$. Each $c_i$ is a nonnegative integer that we think of as the number of ``chips'' belonging to vertex~$i$. (Note that the number $c_q$ of chips belonging to the bank~$q$ is not part of the data of a configuration.) Each non-root vertex is generous (it likes to donate chips to its neighbors), egalitarian (it likes all its neighbors equally), and prudent (it does not want to go into debt). Specifically, a vertex~$v_i$ is called \emph{ready} in a configuration $\mathbf{c}$ if $c_i\geq d_i$. If a vertex is ready, it can \emph{fire} by giving one chip to each of its neighbors. Unlike the other vertices, the bank is a miser. As long as other vertices are firing, the bank does not fire, but just collects chips. As more and more chips accumulate at the bank, the game eventually reaches a configuration in which no non-bank vertex can fire. Such a configuration is called \emph{stable}. At this point, the bank finally fires, giving one chip to each of its neighbors. Unlike the other vertices, the bank is allowed to go into debt: that is, we do not require that $c_q\geq d_q$ for the bank to be able to fire. Denote by $\mathbf{c}(x_1,\dots,x_r)$ the configuration obtained from $\mathbf{c}$ by firing the vertices $x_1,\dots,x_r$ in order. This sequence (which may contain repetitions) is called a \emph{firing sequence} for $\mathbf{c}$ if every firing is permissible: that is, for each $i$, either $x_i\neq q$ is ready to fire in the configuration $\mathbf{c}(x_1,\dots,x_{i-1})$, or else $x_i=q$ and $\mathbf{c}(x_1,\dots,x_{i-1})$ is stable. A configuration $\mathbf{c}$ is called \emph{recurrent} if there is a nontrivial firing sequence $X$ such that $\mathbf{c}(X)=\mathbf{c}$. A configuration is called \emph{critical} if it is both stable and recurrent. For every starting configuration $\mathbf{c}$, there is a uniquely determined critical configuration $[\mathbf{c}]$ that can be reached from~$\mathbf{c}$ by some firing sequence \cite[Thm.~3.8]{Biggs}. The \emph{critical group} $K(G)$ is defined as the set of these critical configurations, with group law given by $[\mathbf{c}]+[\mathbf{c}']=[\mathbf{c}+\mathbf{c}']$, where the right-hand addition is componentwise addition of vectors. 
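As a concrete illustration (our own sketch, not drawn from the cited literature), the following Python fragment implements the firing rule just described on a small example graph: ready non-bank vertices fire until the configuration is stable, after which the bank fires. The helper name \texttt{stabilize} and the example graph are ours; the stable configuration reached does not depend on the order in which ready vertices fire (the ``abelian'' property recalled below).
\begin{verbatim}
# Minimal sketch of the chip-firing rule described above (our own illustration).
def stabilize(chips, adj, bank):
    """Fire ready non-bank vertices until the configuration is stable."""
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if v != bank and chips[v] >= len(nbrs):   # v is ready
                chips[v] -= len(nbrs)                 # v fires
                for w in nbrs:
                    chips[w] += 1
                changed = True
    return chips

# Example: the 4-cycle 1-2-3-q, where the bank q closes the cycle.
adj = {1: [2, 'q'], 2: [1, 3], 3: [2, 'q'], 'q': [1, 3]}
chips = {1: 3, 2: 0, 3: 1, 'q': 0}

chips = stabilize(chips, adj, bank='q')
print({v: c for v, c in chips.items() if v != 'q'})   # a stable configuration

# Once stable, the bank fires (it is allowed to go into debt) and we stabilize again.
chips['q'] -= len(adj['q'])
for w in adj['q']:
    chips[w] += 1
chips = stabilize(chips, adj, bank='q')
\end{verbatim}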
The \emph{abelian sandpile model} was first introduced in \cite{Dhar} as an illustration of ``self-organized criticality''; an excellent recent exposition is \cite{LP}. Here, grains of sand (analogous to chips) are piled at each vertex, and an additional grain of sand is added to a (typically randomly chosen) pile. If the pile reaches some predetermined size (for instance, the degree of that vertex), then it \emph{topples} by giving one grain of sand to each of its neighbors, which can then topple in turn, and so on. This sequence of topplings is called an \emph{avalanche} and the associated operator on states of the system is called an \emph{avalanche operator}. (One can show that the avalanche operator does not depend on the order in which vertices topple; this is the reason for the use of the term ``abelian''.) The sandpile model itself is the random walk on the stable configurations, and the critical group is the group generated by the avalanche operators. The critical group can also be viewed as a discrete analogue of the Picard group of an algebraic curve. This point of view goes back at least as far as the work of Lorenzini~\cite{Lor1,Lor2} and was developed, using the language of divisors, by Bacher, de~la~Harpe, and Nagnibeda \cite{BHN} (who noted that their ``setting has a straightforward generalization to higher dimensional objects''). It appears in diverse combinatorial contexts including elliptic curves over finite fields (Musiker~\cite{Musiker}), linear systems on tropical curves (Haase, Musiker and Yu~\cite{HMY}), and Riemann-Roch theory for graphs (Baker and Norine~\cite{BN}). \subsection{The algebraic viewpoint} The critical group can be defined algebraically in terms of the Laplacian matrix. \begin{definition} Let $G$ be a finite, simple, connected, undirected graph with vertices $\{1,\dots,n,q\}$. The \emph{Laplacian matrix} of $G$ is the symmetric matrix $L$ (or, equivalently, linear self-adjoint operator) whose rows and columns are indexed by the vertices of $G$, with entries \begin{displaymath} \ell_{ij} = \begin{cases} d_i &\text{ if } i=j,\\ -1 &\text{ if } ij\in E,\\ 0 &\text{ otherwise.} \end{cases} \end{displaymath} \end{definition} Firing vertex~$i$ in the chip-firing game is equivalent to subtracting the $i^{th}$ column of the Laplacian (ignoring the entry indexed by~$q$) from the configuration vector~$\mathbf{c}$. Equivalently, if $\mathbf{c}'=\mathbf{c}(x_1,\dots,x_r)$, then the configurations $\mathbf{c}$ and $\mathbf{c}'$ represent the same element of the cokernel of the Laplacian (that is, the quotient of $\mathbb{Z}^{n+1}$ by the column space of~$L$). It is immediate from the definition of $L$ that $L(\mathbf{1})=\mathbf{0}$, where $\mathbf{1}$ and $\mathbf{0}$ denote the all-ones and all-zeros vectors in $\mathbb{N}^{n+1}$. Moreover, it is not difficult to show that $\rank L=|V|-1=n$. In terms of homological algebra, we have a chain complex \begin{equation} \label{graph-chain} \mathbb{Z}^{n+1} \xrightarrow{L} \mathbb{Z}^{n+1} \xrightarrow{S} \mathbb{Z} \to 0 \end{equation} where $S(\mathbf{c}) = \mathbf{c}\cdot\mathbf{1} = c_q+c_1+\cdots+c_n$. The equation $L(\mathbf{1})=\mathbf{0}$, together with the symmetry of $L$, says that $\ker(S)\supseteq\im(L)$. Moreover, $\rank L=n=\rank\ker S$, so the abelian group $\ker(S) / \im(L)$ is finite. \begin{definition} The \emph{critical group} of a graph $G$ is $K(G) = \ker(S)/\im(L)$. \end{definition} This definition of the critical group is equivalent to that in terms of the chip-firing game \cite[Thm.~4.2]{Biggs}. 
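To make the algebraic description concrete, the following sketch (ours, not taken from \cite{Biggs}) computes $K(G)$ for the complete graph $K_4$ from the Smith normal form of the Laplacian with the row and column of the bank deleted; that this reduced matrix presents $K(G)$ is the $i=0$ case of Theorem~\ref{main-thm} below, and its determinant counts spanning trees, as recalled in the next paragraph. The computation assumes a SymPy version that exposes \texttt{smith\_normal\_form}.
\begin{verbatim}
import sympy as sp
from sympy.matrices.normalforms import smith_normal_form

n = 4
L = n * sp.eye(n) - sp.ones(n, n)     # Laplacian of the complete graph K_n
L_reduced = L[0:n - 1, 0:n - 1]       # delete the row and column of the bank q

print(L_reduced.det())                              # 16 = number of spanning trees of K_4
print(smith_normal_form(L_reduced, domain=sp.ZZ))   # expect diag(1, 4, 4)
# Invariant factors (1, 4, 4) give K(K_4) = Z/4 (+) Z/4, a group of order 16.
\end{verbatim}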
The order of the critical group is the determinant of the \emph{reduced Laplacian} formed by removing the row and column indexed by~$q$ \cite[Thm.~6.2]{Biggs}. By the matrix-tree theorem, this is the number of spanning trees. As we will see, the algebraic description provides a natural framework for generalizing the critical group. \section{The Critical Groups of a Simplicial Complex} We assume familiarity with the basic algebraic topology of simplicial complexes; see, e.g., Hatcher~\cite{Hatcher}. Let $\Delta$ be a $d$-dimensional simplicial complex. For $-1\leq i\leq d$, let $C_i(\Delta;\mathbb{Z})$ be the $i^{th}$ simplicial chain group of $\Delta$. We denote the simplicial boundary and coboundary maps respectively by \begin{align*} \partial^{\phantom{*}}_{\Delta,i} &\;:\; C_i(\Delta;\mathbb{Z}) \to C_{i-1}(\Delta;\mathbb{Z}),\\ \partial^*_{\Delta,i} &\;:\; C_{i-1}(\Delta;\mathbb{Z}) \to C_i(\Delta;\mathbb{Z}), \end{align*} where we have identified cochains with chains via the natural inner product. We will abbreviate the subscripts in the notation for boundaries and coboundaries whenever no ambiguity can arise. Let $-1\leq i\leq d$. The \emph{$i$-dimensional combinatorial Laplacian}\footnote{\label{L-notation-note} In other settings, our Laplacian might be referred to as the ``up-down'' Laplacian, $L^{\rm ud}$. The $i^{th}$ \emph{down-up Laplacian} is $L^{\rm du}_i=\partial^*_i \partial^{\phantom{*}}_i$, and the $i^{th}$ \emph{total Laplacian} is $L^{\rm tot}_i = L_i+L^{\rm du}_i$. We adopt the notation we do since, except for one application (Remark~\ref{Molly} below), we only need the up-down Laplacian. } of $\Delta$ is the operator \begin{displaymath} L_{\Delta,i} = \partial^{\phantom{*}}_{i+1}\partial^*_{i+1}\colon C_i(\Delta;\mathbb{Z})\to C_i(\Delta;\mathbb{Z}). \end{displaymath} Combinatorial Laplacian operators seem to have first appeared in the work of Eckmann~\cite{Eckmann} on finite dimensional Hodge theory. As the name suggests, they are discrete versions of the Laplacian operators on differential forms on a Riemannian manifold. In fact, Dodziuk and Patodi~\cite{DP} showed that for suitably nice triangulations of a manifold, the eigenvalues of the discrete Laplacian converge in an appropriate sense to those of the usual continuous Laplacian. For one-dimensional complexes, i.e., graphs, the combinatorial Laplacian is just the usual Laplacian matrix $L=D-A$, where $D$ is the diagonal matrix of vertex degrees and $A$ is the (symmetric) adjacency matrix. In analogy to the chain complex of \eqref{graph-chain}, we have the chain complex \begin{displaymath} C_i(\Delta;\mathbb{Z}) \xrightarrow{L} C_i(\Delta;\mathbb{Z}) \xrightarrow{\partial^{\phantom{*}}_i} C_{i-1}(\Delta;\mathbb{Z}), \end{displaymath} where $L = L_{\Delta,i}$. (This is a chain complex because $\partial^{\phantom{*}}_iL=\partial^{\phantom{*}}_i\partial^{\phantom{*}}_{i+1}\partial^*_{i+1}=0$.) We are now ready to make our main definition. \begin{definition} The $i$-dimensional critical group of $\Delta$ is \begin{displaymath} K_i(\Delta) := \ker \partial^{\phantom{*}}_i / \im L = \ker \partial^{\phantom{*}}_i / \im(\partial^{\phantom{*}}_{i+1}\partial^*_{i+1}). \end{displaymath} \end{definition} Note that $K_0(\Delta)$ is precisely the critical group of the 1-skeleton of $\Delta$. \subsection{Simplicial spanning trees} Our results about critical groups rely on the theory of simplicial and cellular spanning trees developed in~\cite{DKM}, based on earlier work of Bolker~\cite{Bolker} and Kalai~\cite{Kalai}. 
Here we briefly review the definitions and basic properties, including the higher-dimensional analogues of Kirchhoff's matrix-tree theorem. For simplicity, we present the theory for simplicial complexes, the case of primary interest in combinatorics. Nevertheless, the definitions of spanning trees, their enumeration using a generalized matrix-tree theorem, and the definition and main result about critical groups are all valid in the more general setting of regular CW-complexes~\cite{DKM2}. In order to define simplicial spanning trees, we first fix some notation concerning simplicial complexes and algebraic topology. The symbol~$\Delta_i$ will denote the set of cells of dimension~$i$. The \emph{$i$-dimensional skeleton}~$\Delta_{(i)}$ of a simplicial complex~$\Delta$ is the subcomplex consisting of all cells of dimension~$\leq i$. A complex is \emph{pure} if all maximal cells have the same dimension. The $i^{th}$ reduced homology group of~$\Delta$ with coefficients in a ring $R$ is denoted $\tilde H_i(\Delta;R)$. The \emph{Betti numbers} of $\Delta$ are $\beta_i(\Delta)=\dim_\mathbb{Q}\tilde H_i(\Delta;\mathbb{Q})$. The \emph{$f$-vector} is $f(\Delta)=(f_{-1}(\Delta),f_0(\Delta),\dots)$, where $f_i(\Delta)$ is the number of faces of dimension~$i$. \begin{definition} \label{SST} Let $\Delta$ be a pure $d$-dimensional simplicial complex, and let $\Upsilon\subseteq\Delta$ be a subcomplex such that $\Upsilon_{(d-1)}=\Delta_{(d-1)}$. We say that $\Upsilon$ is a \emph{(simplicial) spanning tree} of $\Delta$ if the following three conditions hold: \begin{enumerate} \item $\tilde H_d(\Upsilon;\mathbb{Z}) = 0$; \item $ \tilde H_{d-1}(\Upsilon;\mathbb{Q})=0$ (equivalently, $|\tilde H_{d-1}(\Upsilon;\mathbb{Z})| < \infty$); \item $f_d(\Upsilon) = f_d(\Delta)-\beta_d(\Delta)+\beta_{d-1}(\Delta)$. \end{enumerate} More generally, an \emph{$i$-dimensional spanning tree} of $\Delta$ is a spanning tree of the $i$-dimensional skeleton of $\Delta$. \end{definition} In the case $d=1$ (that is, $\Delta$ is a graph), we recover the usual definition of a spanning tree: the three conditions above say respectively that~$\Upsilon$ is acyclic, connected, and has one more vertex than edge. Meanwhile, the 0-dimensional spanning trees of~$\Delta$ are its vertices (more precisely, the subcomplexes of~$\Delta$ with a single vertex), which are precisely the connected, acyclic subcomplexes of~$\Delta_{(0)}$. Just as in the graphical case, any two of the conditions of Definition~\ref{SST} imply the third~\cite[Prop~3.5]{DKM}. In order for~$\Delta$ to have a $d$-dimensional spanning tree, it is necessary and sufficient that $\tilde H_i(\Delta;\mathbb{Q})=0$ for all $i<d$; such a complex is called \emph{acyclic in positive codimension}, or APC. Note that a graph is APC if and only if it is connected. \begin{example}\label{bipyr-trees} Consider the \emph{equatorial bipyramid}: the two-dimensional simplicial complex $B$ with vertices $[5]$ and facets $123, 124, 125, 134, 135, 234, 235$. A geometric realization of~$B$ is shown in Figure~\ref{bipyramid-figure}. A 2-SST of $B$ can be constructed by removing two facets $F, F'$, provided that $F \cap F'$ contains neither of the vertices $4, 5$. A simple count shows that there are 15 such pairs $F,F'$, so $B$ has 15 two-dimensional spanning trees. 
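This count can be double-checked computationally. The following sketch (ours, anticipating the reduced Laplacians of the next subsection) uses the simplicial matrix-tree theorem of \cite{DKM}: since $B$ is connected and all of its spanning trees are torsion-free, the number of two-dimensional spanning trees equals the determinant of $L_1=\partial^{\phantom{*}}_{2}\partial^*_{2}$ restricted to the five edges lying outside a fixed spanning tree of the $1$-skeleton.
\begin{verbatim}
import sympy as sp

edges  = [(1,2),(1,3),(1,4),(1,5),(2,3),(2,4),(2,5),(3,4),(3,5)]
facets = [(1,2,3),(1,2,4),(1,2,5),(1,3,4),(1,3,5),(2,3,4),(2,3,5)]

# Simplicial boundary map from 2-chains to 1-chains: d(abc) = bc - ac + ab.
d2 = sp.zeros(len(edges), len(facets))
for j, (a, b, c) in enumerate(facets):
    for sign, e in [(1, (b, c)), (-1, (a, c)), (1, (a, b))]:
        d2[edges.index(e), j] = sign

L1 = d2 * d2.T                             # up-down Laplacian in dimension 1
tree = [(1,2),(1,3),(1,4),(1,5)]           # a spanning tree of the 1-skeleton
theta = [i for i, e in enumerate(edges) if e not in tree]
print(L1.extract(theta, theta).det())      # 15, matching the count above
\end{verbatim}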
\end{example} \begin{figure} \caption{The equatorial bipyramid $B$.} \label{bipyramid-figure} \end{figure} A phenomenon arising only in dimension $d>1$ is that spanning trees may have torsion: that is, $\tilde H_{d-1}(\Upsilon;\mathbb{Z})$ can be finite but nontrivial. For example, the 2-dimensional skeleton of a 6-vertex simplex has (several) spanning trees $\Upsilon$ that are homeomorphic to the real projective plane, and in particular have $\tilde H_1(\Upsilon;\mathbb{Z})\cong\mathbb{Z}/2\mathbb{Z}$. This cannot happen in dimension~1 (i.e., for graphs), in which every spanning tree is a contractible topological space. This torsion directly affects tree enumeration in higher dimension; see Section~\ref{order-section}. \subsection{The main theorem} Our main result gives an explicit form for the critical group $K_i(\Delta)$ in terms of a reduced Laplacian matrix. This reduced form is both more convenient for computing examples, and gives a direct connection with the simplicial and cellular generalizations of the matrix-tree theorem \cite{DKM,DKM2}. For a general reference on the homological algebra we will need, see, e.g., Lang \cite{Lang}. Let~$\Delta$ be a pure, $d$-dimensional, APC simplicial complex, and fix $i<d$. Let~$\Upsilon$ be an $i$-dimensional spanning tree of~$\Delta_{(i)}$, and let $\Theta=\Delta_i \backslash \Upsilon$ (the set of $i$-dimensional faces of~$\Delta$ \emph{not} in~$\Upsilon$). Let $\tilde L$ denote the reduced Laplacian obtained from~$L$ by removing the rows and columns corresponding to~$\Upsilon$ (equivalently, by restricting~$L$ to the rows and columns corresponding to~$\Theta$). \begin{theorem} \label{main-thm} Suppose that $\tilde H_{i-1}(\Upsilon;\mathbb{Z})=0$. Then \begin{displaymath} K_i(\Delta) \cong \mathbb{Z}^\Theta / \im \tilde L. \end{displaymath} \end{theorem} \begin{proof} We will construct a commutative diagram \begin{equation} \label{snake} \xymatrix{ 0 \ar[r] & \im L \ar[r]\ar[d]_f & \ker\partial^{\phantom{*}}_{\Delta,i}\ar[r]\ar[d]_g & K_i(\Delta) \ar[d]_h\ar[r] & 0\\ 0 \ar[r] & \im\tilde{L} \ar[r] & \mathbb{Z}^\Theta\ar[r] & \mathbb{Z}^\Theta / \im \tilde L\ar[r] & 0 } \end{equation} where the rows are short exact sequences with the natural inclusions and quotient maps. The map~$f$ is defined by $f(L\theta)=\tilde{L}\theta$ for all $\theta\in\Theta$ (which we will show is an isomorphism in Claim \ref{claim-three}), and the map~$g$ is defined by $g(\hat\theta)=\theta$ (which we will show is an isomorphism in Claim \ref{claim-four}). In Claim~\ref{claim-five}, we will show that $f(\gamma)=g(\gamma)$ for all $\gamma\in\im L$, so the left-hand square commutes. Having proven these facts, the map~$h$ is well-defined by a diagram-chase, and it is an isomorphism by the snake lemma. We organize the proof into a series of claims. \begin{claim}\label{claim-one} $\im\partial^{\phantom{*}}_{\Delta,i}=\im\partial^{\phantom{*}}_{\Upsilon,i}$ as $\mathbb{Z}$-modules. 
\end{claim} Indeed, we have $\im\partial^{\phantom{*}}_{\Upsilon,i}\subseteq\im\partial^{\phantom{*}}_{\Delta,i}\subseteq\ker\partial^{\phantom{*}}_{\Delta,i-1}=\ker\partial^{\phantom{*}}_{\Upsilon,i-1}$ (the last equality because $\Upsilon_{(i-1)}=\Delta_{(i-1)}$), so there is a short exact sequence of $\mathbb{Z}$-modules $$ 0 \to \frac{\im\partial^{\phantom{*}}_{\Delta,i}}{\im\partial^{\phantom{*}}_{\Upsilon,i}} \to \frac{\ker\partial^{\phantom{*}}_{\Upsilon,i-1}}{\im\partial^{\phantom{*}}_{\Upsilon,i}} \to \frac{\ker\partial^{\phantom{*}}_{\Delta,i-1}}{\im\partial^{\phantom{*}}_{\Delta,i}} \to 0. $$ Since $\Upsilon$ is a torsion-free spanning tree, the middle term $\tilde H_{i-1}(\Upsilon;\mathbb{Z})$ is zero. Therefore, the first term is zero as well, proving Claim~\ref{claim-one}.
\begin{claim} \label{subclaim} $\coker\partial^{\phantom{*}}_{\Upsilon,i}$ is a free $\mathbb{Z}$-module. \end{claim}
We will use some of the basic theory of projective modules \cite[pp.~137--139]{Lang}. The image of $\partial^{\phantom{*}}_{\Upsilon,i-1}$ is a submodule of $C_{i-2}(\Upsilon;\mathbb{Z})$, so it is free, hence projective. Therefore, the short exact sequence $$0 \to \ker\partial^{\phantom{*}}_{\Upsilon,i-1} \to C_{i-1}(\Upsilon;\mathbb{Z}) \xrightarrow{\partial^{\phantom{*}}_{\Upsilon,i-1}} \im \partial^{\phantom{*}}_{\Upsilon,i-1} \to 0$$ is split: that is, $C_{i-1}(\Upsilon;\mathbb{Z}) = \ker\partial^{\phantom{*}}_{\Upsilon,i-1} \oplus F$, where $F$ is a free $\mathbb{Z}$-module. On the other hand, $\im\partial^{\phantom{*}}_{\Upsilon,i}\subseteq\ker\partial^{\phantom{*}}_{\Upsilon,i-1}$, so $$ \coker \partial^{\phantom{*}}_{\Upsilon,i} = \frac{C_{i-1}(\Upsilon;\mathbb{Z})}{\im \partial^{\phantom{*}}_{\Upsilon,i}} = \frac{\ker\partial^{\phantom{*}}_{\Upsilon,i-1}\oplus F}{\im \partial^{\phantom{*}}_{\Upsilon,i}} = \frac{\ker\partial^{\phantom{*}}_{\Upsilon,i-1}}{\im \partial^{\phantom{*}}_{\Upsilon,i}} \oplus F = \tilde H_{i-1}(\Upsilon;\mathbb{Z}) \oplus F$$ and $\tilde H_{i-1}(\Upsilon;\mathbb{Z})=0$ by hypothesis, proving Claim~\ref{subclaim}.
\begin{claim}\label{claim-two} The coboundary map $\partial^*_{\Upsilon,i}\colon C_{i-1}(\Upsilon;\mathbb{Z}) \to C_i(\Upsilon;\mathbb{Z})$ is surjective. \end{claim}
By the basic theory of finitely generated abelian groups, we may write \begin{equation} \label{not-smith} \partial^{\phantom{*}}_{\Upsilon,i} = P \left[\begin{array}{c|c} D&0\\ \hline 0&0\end{array}\right]Q \end{equation} where $P\in GL_{f_{i-1}}(\mathbb{Z})$, $Q\in GL_{f_i}(\mathbb{Z})$, and $D$ is a diagonal matrix whose entries are the cyclic summands of the torsion submodule of $\coker\partial^{\phantom{*}}_{\Upsilon,i}$. In fact, these entries are all~1 by Claim~\ref{subclaim}. Moreover, the columns of $\partial^{\phantom{*}}_{\Upsilon,i}$ are linearly independent over~$\mathbb{Q}$ and over~$\mathbb{Z}$ because $\Upsilon$ is a simplicial tree, so in fact there are no zero columns in \eqref{not-smith}. Therefore $$\partial^{\phantom{*}}_{\Upsilon,i} = P \left[\begin{array}{c} I\\ \hline 0\end{array}\right] Q = P \left[\begin{array}{c} Q\\ \hline 0\end{array}\right]$$ and transposing yields $$\partial^*_{\Upsilon,i} = \left[\begin{array}{c|c} Q^T & 0\end{array}\right] P^T$$ and so \begin{align*} \im\partial^*_{\Upsilon,i}&=\im\left[\begin{array}{c|c} Q^T & 0\end{array}\right] && \text{(because $P$, hence $P^T$, is invertible over $\mathbb{Z}$)}\\ &=C_i(\Upsilon;\mathbb{Z}) && \text{(because $Q$, hence $Q^T$, is invertible over $\mathbb{Z}$)}. \end{align*} We have proved Claim~\ref{claim-two}.
\begin{claim}\label{claim-three} $L(C_i(\Delta;\mathbb{Z})) = L(C_i(\Theta;\mathbb{Z}))$. \end{claim}
Choose an arbitrary chain $\gamma\in C_i(\Upsilon;\mathbb{Z})$. By Claim~\ref{claim-two}, there is a chain $\eta\in C_{i-1}(\Upsilon;\mathbb{Z})$ such that $\partial^*_{\Upsilon,i}(\eta) = \gamma$. On the other hand, $\partial^*_{\Delta,i}(\eta) = \partial^*_{\Upsilon,i}(\eta) - \theta$ for some chain~$\theta\in C_i(\Theta;\mathbb{Z})$. Hence \begin{displaymath} L(\gamma)-L(\theta) ~=~ L(\gamma-\theta) ~=~ L(\partial^*_{\Upsilon,i}(\eta)-\theta) ~=~ L\partial^*_{\Delta,i}(\eta) ~=~ \partial^{\phantom{*}}_{i+1} \partial^*_{i+1} \partial^*_i(\eta) ~=~ 0 \end{displaymath} and so $L(\gamma) = L(\theta)$. In particular, $L(\gamma)\in L(C_i(\Theta;\mathbb{Z}))$, which proves Claim~\ref{claim-three}.
Observe that $$L ~=~ \left[\begin{array}{c|c} L(C_i(\Upsilon;\mathbb{Z})) & L(C_i(\Theta;\mathbb{Z})) \end{array}\right] ~=~ \left[\begin{array}{c|c} L(C_i(\Upsilon;\mathbb{Z})) & \begin{array}{c} *\pad\\\hline\pad\tilde{L} \end{array} \end{array}\right]. $$ Thus Claim~\ref{claim-three} says that the $\Theta$-columns span the full column space of~$L$. Since $L$ is a symmetric matrix, this statement remains true if we replace ``column'' with ``row''. In particular, there is an isomorphism $f\colon L(C_i(\Theta;\mathbb{Z}))\to\tilde{L}(C_i(\Theta;\mathbb{Z}))$ given by deleting the $\Upsilon$-rows; that is, $f(L\theta)=\tilde{L}\theta$.
By Claim~\ref{claim-one}, for each face $\theta\in\Theta$, we can write $$\partial^{\phantom{*}}_{\Delta,i}(\theta)=\sum_{\sigma\in\Upsilon_i}c^{\phantom{*}}_{\sigma\theta}\partial^{\phantom{*}}_{\Delta,i}(\sigma)$$ with $c^{\phantom{*}}_{\sigma\theta}\in\mathbb{Z}$. Therefore, the chain $$\hat \theta=\theta-\sum_{\sigma\in\Upsilon_i}c^{\phantom{*}}_{\sigma\theta}\sigma$$ lies in $X:=\ker\partial^{\phantom{*}}_{\Delta,i}$.
\begin{claim}\label{claim-four} The set $\{\hat\theta\colon\theta\in\Theta\}$ is a $\mathbb{Z}$-module basis for $X$. \end{claim}
Indeed, for any $\gamma=\sum_{\sigma\in\Delta_i} a_\sigma\sigma\in X$, let $$ \gamma' ~=~ \sum_{\sigma\in\Delta_i} a_\sigma\sigma ~-~ \sum_{\sigma\in\Theta} a_\sigma\hat\sigma ~=~ \sum_{\sigma\in\Upsilon_i} a_\sigma\sigma ~+~ \sum_{\sigma\in\Theta} a_\sigma(\sigma-\hat\sigma). $$ By the previous observation, we have $\gamma'\in X\cap C_i(\Upsilon;\mathbb{Z})=\tilde H_i(\Upsilon;\mathbb{Z})$. On the other hand, $\tilde H_i(\Upsilon;\mathbb{Z})=0$ (because $\Upsilon$ is an $i$-dimensional simplicial tree), so in fact $\gamma'=0$. Therefore, $\gamma = \sum_{\sigma\in\Theta} a_\sigma \hat\sigma$, proving Claim~\ref{claim-four}.
\begin{claim}\label{claim-five} Suppose that the action of the reduced Laplacian $\tilde{L}$ on $C_i(\Theta;\mathbb{Z})$ is given by $$\tilde{L} \theta = \sum_{\sigma\in\Theta} \ell_{\sigma\theta} \sigma$$ for $\theta\in\Theta$. Then $L\theta=\sum_{\sigma\in\Theta}\ell_{\sigma\theta}\hat\sigma$.
\end{claim} Indeed, the chain $L\theta-\sum_{\sigma\in\Theta}\ell_{\sigma\theta}\hat\sigma$ belongs both to~$X$ and to~$C_i(\Upsilon;\mathbb{Z})$, so it must be zero (as in the proof of Claim~\ref{claim-four}, because $\Upsilon$ is an $i$-dimensional simplicial tree), establishing Claim~\ref{claim-five} and completing the proof. \end{proof}
\begin{remark} Claim~\ref{claim-one} holds for any subcomplex $\Upsilon\subseteq\Delta$ \emph{containing} a torsion-free spanning tree or, more interestingly, if $\Upsilon$ is ``torsion-minimal'', i.e., if $\tilde H_{i-1}(\Upsilon;\mathbb{Z}) = \tilde H_{i-1}(\Delta;\mathbb{Z})$. The remainder of the proof requires $\Upsilon$ to be torsion-free, but it should be possible to extend these methods to the case that it is torsion-minimal. Not every APC complex need have a torsion-minimal spanning tree. For instance, for~$k\in\mathbb{N}$, let $M_k=M(\mathbb{Z}/k\mathbb{Z},2)$ be the Moore space \cite[p.~143]{Hatcher} obtained by attaching a 2-cell to the circle $S^1$ by a map of degree~$k$; let $X$ be the complex obtained by identifying the 1-skeletons of~$M_2$ and~$M_3$ (thus, $X$ is a CW-complex with two 2-cells, one 1-cell, and one 0-cell); and let~$\Delta$ be a simplicial triangulation of~$X$ (in particular, note that $\Delta$ is a regular CW-complex with the same homology as $X$). Then $\tilde H_2(\Delta;\mathbb{Z})\cong\mathbb{Z}$ and $\tilde H_i(\Delta;\mathbb{Z})=0$ for $i<2$. On the other hand, the simplicial spanning trees of~$\Delta$ are precisely the subcomplexes~$\Upsilon$ obtained by deleting a single facet~$\sigma$ (just as though $\Delta$ were a simplicial sphere, which it certainly is not---see Remark~\ref{sphere-count} and subsequently), and each such $\Upsilon$ has $\tilde H_1(\Upsilon;\mathbb{Z})\cong\mathbb{Z}/3\mathbb{Z}$ or $\mathbb{Z}/2\mathbb{Z}$, according as $\sigma$ is a face of $M_2$ or $M_3$. (We thank Vic Reiner for providing this example.) \end{remark}
\begin{example}\label{bipyr-thm} We return to the bipyramid $B$ from Example \ref{bipyr-trees} to illustrate Theorem \ref{main-thm}. We must first pick a 1-dimensional spanning tree $\Upsilon$; we take $\Upsilon$ to be the spanning tree with edges $12, 13, 14, 15$. (In general, we must also make sure $\Upsilon$ is torsion-free, but this is always true for 1-dimensional trees.) Let $L = L_{B,1}\colon C_1(B;\mathbb{Z}) \rightarrow C_1(B;\mathbb{Z})$ be the full Laplacian; note that $L$ is a $9\times 9$ matrix whose rows and columns are indexed by the edges of $B$. The reduced Laplacian $\tilde L$ is formed by removing the rows and columns indexed by the edges of $\Upsilon$: \begin{displaymath} \tilde L ~=~ \bordermatrix{ & 23 & 24 & 25 & 34 & 35 \cr 23 & 3 & -1 & -1 & 1 & 1 \cr 24 & -1 &2 & 0 & -1 & 0 \cr 25 & -1 & 0 & 2 & 0 & -1 \cr 34 & 1 & -1 & 0 & 2 & 0 \cr 35 & 1 & 0 & -1 & 0 & 2 }. \end{displaymath} The critical group $K_1(B)$ is the cokernel of this matrix, i.e., $K_1(B)\cong\mathbb{Z}^5/\im \tilde L$. Since $\tilde L$ has full rank, it follows that $K_1(B)$ is finite; its order is $\det(\tilde L) = 15$. \end{example}
\section{The Order of the Critical Group}\label{order-section} The matrix-tree theorem implies that the order of the critical group of a graph equals the number of spanning trees. In this section, we explain how this equality carries over to the higher-dimensional setting. As before, let $\Delta$ be a pure $d$-dimensional simplicial complex.
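Before turning to tree enumeration, we note that the reduced Laplacian displayed in Example~\ref{bipyr-thm} makes Theorem~\ref{main-thm} easy to check by machine: the Smith normal form of $\tilde L$ exhibits the invariant factors of $K_1(B)$, and their product equals $\det\tilde L = 15$. The following is a hypothetical sketch (Python with sympy), not part of the original text.
\begin{verbatim}
# Hypothetical check of the example above: K_1(B) is the cokernel of the reduced Laplacian.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# rows/columns indexed by the edges 23, 24, 25, 34, 35
# (the complement of the spanning tree with edges 12, 13, 14, 15)
L_red = Matrix([
    [ 3, -1, -1,  1,  1],
    [-1,  2,  0, -1,  0],
    [-1,  0,  2,  0, -1],
    [ 1, -1,  0,  2,  0],
    [ 1,  0, -1,  0,  2]])

print(L_red.det())                          # 15 = |K_1(B)|
print(smith_normal_form(L_red, domain=ZZ))  # diagonal entries = invariant factors of K_1(B)
\end{verbatim}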
Let ${\mathcal T}_i(\Delta)$ denote the set of all $i$-dimensional spanning trees of $\Delta$ (that is, spanning trees of the $i$-dimensional skeleton~$\Delta_{(i)}$). Define \begin{align*} \tau_i &= \sum_{\Upsilon\in{\mathcal T}_i(\Delta)} |\tilde H_{i-1}(\Upsilon;\mathbb{Z})|^2,\\ \pi_i &= \text{product of all nonzero eigenvalues of $L_{\Delta, i-1}$}. \end{align*} The following formulas relate the tree enumerators $\tau_i$ to the linear-algebraic invariants $\pi_i$.
\begin{theorem}[The simplicial matrix-tree theorem] \cite[Thm.~1.3]{DKM} \label{thm:SMTT} For all $i\leq d$, we have \begin{displaymath} \pi_i = \frac{\tau_i \tau_{i-1}}{|\tilde H_{i-2}(\Delta;\mathbb{Z})|^2}. \end{displaymath} Moreover, if $\Upsilon$ is any spanning tree of $\Delta_{(i-1)}$, then \begin{displaymath} \tau_i = \frac{|\tilde H_{i-2}(\Delta;\mathbb{Z})|^2}{|\tilde H_{i-2}(\Upsilon;\mathbb{Z})|^2} \det \tilde{L}, \end{displaymath} where $\tilde{L}$ is the reduced Laplacian formed by removing the rows and columns corresponding to $\Upsilon$. \end{theorem}
Recall that when $d=1$, the number $\tau_1(\Delta)$ is simply the number of spanning trees of the graph $\Delta$, and $\tau_0(\Delta)$ is the number of vertices (i.e., 0-dimensional spanning trees). Therefore, the formulas above specialize to the classical matrix-tree theorem.
\begin{corollary} \label{count-corollary} Let $i<d$. Suppose that $\tilde H_{i-1}(\Delta;\mathbb{Z})=0$ and that $\Delta$ has an $i$-dimensional spanning tree $\Upsilon$ such that $\tilde H_{i-1}(\Upsilon;\mathbb{Z})=0$. Then the order of the $i$-dimensional critical group is the torsion-weighted number of $(i+1)$-dimensional spanning trees, i.e., \begin{displaymath} |K_i(\Delta)|=\tau_{i+1}. \end{displaymath} \end{corollary}
\begin{example}\label{bipyr-count} Returning again to the bipyramid $B$, recall that 15 is both the number of its spanning trees (Example \ref{bipyr-trees}) and the order of its 1-dimensional critical group (Example \ref{bipyr-thm}), in each case because $\det \tilde L = 15$. \end{example}
Another formula for the orders of the critical groups of~$\Delta$ is as follows. For $0\leq j\leq d$, denote by $\pi_j$ the product of the nonzero eigenvalues of the Laplacian $L_{j-1}^{ud}=\partial^{\phantom{*}}_j\partial^*_j$. Then Corollary~\ref{count-corollary}, together with \cite[Corollary~2.10]{DKM2}, implies the following formula for $|K_i(\Delta)|$ as an alternating product:
\begin{corollary} \label{alt-product} Under the conditions of Corollary~\ref{count-corollary}, for every $i<d$, we have \begin{displaymath} |K_i(\Delta)| = \prod_{j=0}^{i+1} \pi_j^{(-1)^{i+1-j}}. \end{displaymath} \end{corollary}
(As a check, for the bipyramid $B$ one has $\pi_0=5$ and $\pi_1=5\cdot 75=375$, since the $1$-skeleton of $B$ has $\tau_1=75$ spanning trees, while Theorem~\ref{thm:SMTT} gives $\pi_2=\tau_2\tau_1=15\cdot 75=1125$; hence $\pi_2\pi_1^{-1}\pi_0=15=|K_1(B)|$, in agreement with Example~\ref{bipyr-count}.)
The condition that $\Delta$ and $\Upsilon$ be torsion-free is not too restrictive, in the sense that many simplicial complexes of interest in combinatorics (for instance, all shellable complexes) are torsion-free and have torsion-free spanning trees.
\begin{remark} \label{sphere-count} When every spanning tree of $\Delta$ is torsion-free, the order of the critical group is exactly the number of spanning trees. This is a strong condition on $\Delta$, but it does hold for some complexes --- notably for simplicial spheres, whose spanning trees are exactly the (contractible) subcomplexes obtained by deleting a single facet. Thus, giving an explicit bijection between spanning trees and elements of the critical group amounts to putting an abelian group structure on the set of facets of a simplicial sphere.
\end{remark} Determining the structure of the critical group is not easy, even for very special classes of graphs; see, e.g., \cite{Vic1,Vic2}. One of the first such results is due to Lorenzini \cite{Lor1,Lor2} and Merris \cite[Example~1(1.4)]{Merris}, who independently noted that the critical group of the cycle graph on $n$ vertices is $\mathbb{Z}/n\mathbb{Z}$, the cyclic group on $n$ elements. Simplicial spheres are the natural generalizations of cycle graphs from a tree-enumeration point of view. In fact, the theorem of Lorenzini and Merris carries over to simplicial spheres, as we now show.
\begin{theorem} \label{spheres} Let $\Sigma$ be a $d$-dimensional simplicial sphere with $n$ facets. Then $K_{d-1}(\Sigma) \cong \mathbb{Z}/n\mathbb{Z}$. \end{theorem}
\begin{proof} Let $K=K_{d-1}(\Sigma)$. Remark \ref{sphere-count} implies that $|K|=n$, so it is sufficient to show that it is cyclic. In what follows, we use the standard terms ``facets'' and ``ridges'' for faces of~$\Sigma$ of dimensions~$d$ and~$d-1$, respectively. By definition, $K$ is generated by $(d-1)$-dimensional cycles, that is, elements of $\ker\partial^{\phantom{*}}_{d-1}$. Since $\tilde H_{d-1}(\Sigma;\mathbb{Z})=0$, all such cycles are in fact $(d-1)$-dimensional boundaries of $d$-dimensional chains. Therefore, $K$ is generated by the boundaries of facets, modulo the image of $L=\partial^{\phantom{*}}_d\partial^*_d$. We now show that for any two facets $\sigma,\sigma'\in\Sigma$, we have $\partial^{\phantom{*}}_d\sigma\equiv\pm\partial^{\phantom{*}}_d\sigma'$ modulo $\im L$. This will imply that~$K$ can be generated by a single element as a $\mathbb{Z}$-module. Since $\Sigma$ is a sphere, it is in particular a pseudomanifold, so every ridge is in the boundary of at most two facets \cite[p.~24]{Stanley}. Consequently, if two facets $\sigma,\sigma'$ share a ridge $\rho$, then no other facet contains $\rho$, and we have $$0 \equiv \partial^{\phantom{*}}\partial^*(\rho) = \partial^{\phantom{*}}(\pm \sigma \pm \sigma') = \pm(\partial^{\phantom{*}} \sigma \pm \partial^{\phantom{*}} \sigma')$$ (where $\equiv$ means ``equal modulo $\im L$''). Hence $\partial^{\phantom{*}}(\sigma)$ and $\partial^{\phantom{*}}(\sigma')$ represent the same or opposite elements of $K$. Furthermore, the definition of pseudomanifold guarantees that for any two facets $\sigma,\sigma'$, there is a sequence of facets $\sigma = \sigma_0, \sigma_1, \ldots, \sigma_k = \sigma'$ such that each $\sigma_j$ and $\sigma_{j+1}$ share a common ridge. Therefore, by transitivity, the boundary of any single facet generates~$K$, as desired. \end{proof}
The condition that $\Sigma$ be a simplicial sphere can be relaxed: in fact, the proof goes through for any $d$-dimensional pseudomanifold $\Sigma$ such that $\tilde H_{d-1}(\Sigma; \mathbb{Z}) = 0$. On the other hand, if $\Sigma$ is APC in addition to being a pseudomanifold (for example, certain lens spaces---see \cite[p.~144]{Hatcher}), then it has the rational homology type of either a sphere or a ball (because $\tilde H_d(\Sigma;\mathbb{Q})$ is either $\mathbb{Q}$ or 0; see \cite[p.~24]{Stanley}).
\begin{remark} \label{Molly} Let $\Delta$ be the simplex on vertex set~$[n]$, and let $k\leq n$. Kalai \cite{Kalai} proved that $\tau_k(\Delta)=n^{\binom{n-2}{k}}$ for every $n$ and $k$, generalizing Cayley's formula $n^{n-2}$ for the number of labeled trees on $n$ vertices.
Maxwell~\cite{Maxwell} studied the skew-symmetric matrix $$A = \left[\begin{array}{c} \pad\tilde\partial^{\phantom{*}}_{\Delta,k}\\ \hline \pad-\tilde\partial^{*}_{\Delta,k+1} \end{array}\right]$$ where~$\tilde\partial^{\phantom{*}}_{\Delta,k}$ denotes the reduced boundary map obtained from the usual simplicial boundary $\partial^{\phantom{*}}_{\Delta,k}$ by deleting the rows corresponding to $(k-1)$-faces containing vertex~1, and~$\tilde\partial^{*}_{\Delta,k+1}$ is obtained from~$\partial^*_{\Delta,k+1}$ by deleting the rows corresponding to $k$-faces \emph{not} containing vertex~1. (Note that Maxwell and Kalai use the symbol $I^k_r(X)$ for what we call $\partial^{\phantom{*}}_{\Delta,k}$.) In particular, Maxwell \cite[Prop.~5.4]{Maxwell} proved that $$\coker A \cong (\mathbb{Z}/n\mathbb{Z})^{\binom{n-2}{k}}.$$ The matrix~$A$ is not itself a Laplacian, but is closely related to the Laplacians of $\Delta$. Indeed, Maxwell's result, together with ours, implies that all critical groups of~$\Delta$ are direct sums of cyclic groups of order~$n$, for the following reasons. We have $$ AA^T = -A^2 = \left[\begin{array}{c} \pad\tilde\partial^{\phantom{*}}_{\Delta,k}\\ \hline -\pad\tilde\partial^{*}_{\Delta,k+1} \end{array}\right] \left[\begin{array}{c|c} \tilde\partial^{*}_{\Delta,k} & -\tilde\partial^{\phantom{*}}_{\Delta,k+1} \end{array}\right] = \left[\begin{array}{c|c} \pad\tilde{L}^{\textrm{ud}}_{k-1} & 0\\ \hline 0 & \pad\tilde{L}^{\textrm{du}}_{k+1} \end{array}\right] $$ where ``ud'' and ``du'' stand for ``up-down'' and ``down-up'' respectively (see footnote~\ref{L-notation-note}). Therefore \begin{align*} \coker(AA^T) &\cong \coker(\tilde{L}^{\textrm{ud}}_{k-1}) \oplus \coker(\tilde{L}^{\textrm{du}}_{k+1})\\ &\cong \coker(\tilde{L}^{\textrm{ud}}_{k-1}) \oplus \coker(\tilde{L}^{\textrm{ud}}_{k})\\ &\cong K_{k-1}(\Delta)\oplus K_k(\Delta), \end{align*} where the second step follows from the general fact that~$MM^T$ and~$M^TM$ have the same multisets of nonzero eigenvalues for any matrix~$M$, and the third step follows from Theorem~\ref{main-thm}. On the other hand, we have $\coker(A A^T) = \coker(-A^2) = \coker(A^2) \cong(\coker A) \oplus (\coker A)$. It follows from Maxwell's result that the $k$th critical group of the $n$-vertex simplex is a direct sum of $\binom{n-2}{k}$ copies of~$\mathbb{Z}/n\mathbb{Z}$, as desired. \end{remark}
\section{The Critical Group as a Model of Discrete Flow} \label{sec:model} In this section, we describe an interpretation of the critical group in terms of flow, analogous to the chip-firing game. By definition of $K_i(\Delta)$, its elements may be represented as integer vectors $\mathbf{c}=(c_F)_{F\in\Delta_i}$, modulo an equivalence relation given by the Laplacian. These configurations are the analogues of the configurations of chips in the graph case ($i=0$). When $i=1$, it is natural to interpret $c_F$ as a flow along the edge~$F$, in the direction given by some predetermined orientation; a negative value on an edge corresponds to flow in the opposite direction. More generally, if $F$ is an $i$-dimensional face, then we can interpret $c_F$ as a generalized \emph{$i$-flow}, again with the understanding that a negative value on a face means an $i$-flow in the opposite orientation. For instance, a 2-flow on a triangle represents circulation around the triangle, and a negative 2-flow reverses the sense of circulation, from clockwise to counterclockwise or vice versa.
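These definitions are easy to experiment with. The sketch below is a hypothetical illustration (Python with sympy), not part of the original text: it builds the boundary matrices of the bipyramid $B$ and checks that every column of $L_1=\partial^{\phantom{*}}_2\partial^*_2$ (the elementary moves generating the equivalence relation above) has zero boundary, i.e.\ is itself a conservative $1$-flow in the sense discussed next.
\begin{verbatim}
# Hypothetical sketch: the columns of L_1 for the bipyramid B are conservative 1-flows,
# so the moves generating the equivalence relation preserve conservativity.
from itertools import combinations
from sympy import Matrix

facets = [(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (2, 3, 4), (2, 3, 5)]
edges = sorted({tuple(sorted(e)) for f in facets for e in combinations(f, 2)})

d1 = Matrix.zeros(5, len(edges))            # boundary of [a,b] = [b] - [a]
for j, (a, b) in enumerate(edges):
    d1[b - 1, j], d1[a - 1, j] = 1, -1

d2 = Matrix.zeros(len(edges), len(facets))  # boundary of [a,b,c] = [b,c] - [a,c] + [a,b]
for j, (a, b, c) in enumerate(facets):
    for sign, e in ((1, (b, c)), (-1, (a, c)), (1, (a, b))):
        d2[edges.index(e), j] = sign

L1 = d2 * d2.T                              # up-down Laplacian acting on 1-chains
print((d1 * L1).is_zero_matrix)             # True: every column of L1 lies in ker d1
print(L1[:, edges.index((2, 3))].T)         # the move associated with firing edge 23
\end{verbatim}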
When $i=1$, the condition $\mathbf{c}\in\ker \partial^{\phantom{*}}_i$ means that flow neither accumulates nor depletes at any vertex; intuitively, matter is conserved. In general, we call an $i$-flow \emph{conservative} if it lies in $\ker \partial^{\phantom{*}}_i$. For instance, when $i=2$, the $\partial^{\phantom{*}}_i$ map converts 2-flow around a single triangle into 1-flow along the three edges of its boundary in the natural way; for a 2-flow on $\Delta$ to be conservative, the sum of the resulting 1-flows on each edge must cancel out, leaving no net flow along any edge. In general, the sum of (the boundaries of) all the $i$-dimensional flows surrounding an $(i-1)$-dimensional face must cancel out along that face.
That the group $K_i(\Delta)$ is a quotient by the image of the Laplacian means that two configurations are equivalent if they differ by an integer linear combination of Laplacians applied to $i$-dimensional faces. This is analogous to the chip-firing game, where configurations are equivalent when it is possible to get from one to the other by a series of chip-firings, each of which corresponds to adding a column vector of the Laplacian. When $i=1$, it is easy to see that firing an edge~$e$ (adding the image of its Laplacian to a configuration) corresponds to diverting one unit of flow around each triangle containing~$e$ (see Example \ref{bipyr-flow}). More generally, to fire an $i$-face $F$ means to divert one unit of $i$-flow from $F$ around each $(i+1)$-face containing~$F$.
By Theorem \ref{main-thm}, we may compute the critical group as $\mathbb{Z}^\Theta$ modulo the image of the reduced Laplacian. In principle, passing to the reduced Laplacian means ignoring the $i$-flow along each facet of an $i$-dimensional spanning tree $\Upsilon$. In the graph case ($i=0$), this spanning tree is simply the bank vertex. The higher-dimensional generalization of this statement is that the equivalence class of a configuration $\mathbf{c}$ is determined by the subvector $(c_F)_{F\in\Delta\backslash\Upsilon}$.
A remaining open problem is to identify the higher-dimensional ``critical configurations'', i.e., a set of stable and recurrent configurations that form a set of coset representatives for the critical group. Recall that in the chip-firing game, when a vertex $v$ fires, every vertex other than $v$ either gains a chip or stays unchanged. Therefore, we can define stability simply by the condition $\mathbf{c}_v<\deg(v)$ for every non-bank vertex $v$. On the other hand, when a higher-dimensional face fires, the flow along nearby faces can actually decrease. Therefore, it is not as easy to define stability. For instance, one could try to define stability by the condition that no face can fire without forcing some face (either itself or one of its neighbors) into debt. However, with this definition, there are some examples (such as the 2-skeleton of the tetrahedron) for which some of the cosets of the Laplacian admit more than one critical configuration. Therefore, it is not clear how to choose a canonical set of coset representatives analogous to the critical configurations of the graphic chip-firing game.
\begin{example}\label{bipyr-flow} We return once again to the bipyramid $B$, and its 1-dimensional spanning tree $\Upsilon$ with edges $12, 13, 14, 15$. If we pick 1-flows on $\Theta = \Delta_1 \backslash \Upsilon$ as shown in Figure \ref{flow-exA}, it is easy to compute that we need 1-flows on $\Upsilon$ as shown in Figure \ref{flow-exB} to make the overall flow 1-conservative.
Since Theorem \ref{main-thm} implies we can always pick flows on $\Upsilon$ to make the overall flow 1-conservative, we only show flows on $\Theta$ in subsequent diagrams. If we fire edge 23, we get the configuration shown in Figure \ref{flow-exC}. One unit of flow on edge 23 has been diverted across face 234 to edges 24 and 34, and another unit of flow has been diverted across face 235 to edges 25 and 35. Note that the absolute value of flow on edge 25 has actually decreased, because of its orientation relative to edge 23. If we subsequently fire edge 24, we get the configuration shown in Figure \ref{flow-exD}. One unit of flow on edge 24 has been diverted across face 234 to edges 23 and 34, and another unit of flow has been diverted across face 124 to edges 12 and 14 (and out of the diagram of $\Theta$).
\begin{figure} \caption{Conservative 1-flows and firings} \label{flow-exA} \label{flow-exB} \label{flow-exC} \label{flow-exD} \label{flow-ex} \end{figure} \end{example}
\section{Critical Groups as Chow Groups} \label{sec:intersection} An area for further research is to interpret the higher-dimensional critical groups of a simplicial complex~$\Delta$ as simplicial analogues of the Chow groups of an algebraic variety. (For the algebraic geometry background, see, e.g., \cite[Appendix~A]{Hartshorne} or \cite{Fulton}.) We regard~$\Delta$ as the discrete analogue of a $d$-dimensional variety, so that divisors correspond to formal sums of codimension-1 faces. Even more generally, algebraic cycles of dimension~$i$ correspond to simplicial $i$-chains. The critical group $K_i(\Delta)$, which consists of closed $i$-chains (that is, conservative flows, in the language of Section~\ref{sec:model}) modulo the image of the Laplacian, is thus analogous to the Chow group of algebraic cycles modulo rational equivalence. This point of view has proved fruitful in the case of graphs \cite{BHN,BN,HMY,Lor1,Lor2}. In order to develop this analogy fully, the next step is to endow $\bigoplus_{i\geq 0}K_i(\Delta)$ with a ring structure analogous to that of the Chow ring. The goal is to define a ``critical ring'' whose multiplication encodes a simplicial version of intersection theory on~$\Delta$. \end{document}
\begin{document} \begin{center} \Large {\bf The use of spatial information in entropy measures}\par \normalsize{Linda Altieri, Daniela Cocchi, Giulia Roli}\\ \small{Department of Statistics, University of Bologna, via Belle Arti, 41, 40126, Bologna, Italy.} \par \end{center} \begin{quotation} \noindent {\it Abstract:} The concept of entropy, firstly introduced in information theory, rapidly became popular in many applied sciences via Shannon's formula to measure the degree of heterogeneity among observations. A rather recent research field aims at accounting for space in entropy measures, as a generalization when the spatial location of occurrences ought to be accounted for. The main limit of these developments is that all indices are computed conditional on a chosen distance. This work follows and extends the route for including spatial components in entropy measures. Starting from the probabilistic properties of Shannon's entropy for categorical variables, it investigates the characteristics of the quantities known as residual entropy and mutual information, when space is included as a second dimension. This way, the proposal of entropy measures based on univariate distributions is extended to the consideration of bivariate distributions, in a setting where the probabilistic meaning of all components is well defined. As a direct consequence, a spatial entropy measure satisfying the additivity property is obtained, as global residual entropy is a sum of partial entropies based on different distance classes. Moreover, the quantity known as mutual information measures the information brought by the inclusion of space, and also has the property of additivity. A thorough comparative study illustrates the superiority of the proposed indices. \end{quotation} \begin{quotation} \noindent {\it Keywords:} Shannon's entropy, residual entropy, mutual information, additivity property, lattice data, spatial entropy, categorical variables \end{quotation} \section{Introduction} When a set of units can be assigned to a finite number of categories of a study variable, a popular way of assessing heterogeneity is to compute entropy. The concept of entropy has been firstly introduced in information theory to evaluate the degree of heterogeneity in signal processing. The seminal work by Shannon (\citeyear{shan}) provided the basics to define entropy, and Shannon's formula of entropy rapidly became popular in many applied sciences, e.g. ecology and geography \citep{patiltaillie, hoeting, frosini, cobbold}. The reasons for the success of this measure are two-fold. On one hand, entropy is a measure of diversity that only explicitly considers the number of categories of the study variable and their probabilities of occurrence; thus, it can be employed in a wide range of applications, even when qualitative variables are involved. On the other hand, entropy summarizes and captures several aspects that are differently denoted according to the specific target: heterogeneity, information, surprise, diversity, uncertainty, contagion are all concepts strongly related to entropy. Information theory also investigates the relationship across two variables under a probabilistic framework to form bivariate entropy measures that, despite their interesting properties (\citeauthor{renyi}, \citeyear{renyi}, \citeauthor{coverthomas}, \citeyear{coverthomas}), are as yet not deeply explored. 
A relatively recent research field aims at accounting for space in entropy measures, as a natural generalization when the spatial location of the occurrences is available and relevant to the purpose of the analysis. Spatial data are georeferenced over spatial units which may be points or areas; this work deals with areal data, but most measures are applicable to point data as well. Spatial entropy must not be confused with spatial correlation, a measure that also identifies the type of spatial association, either positive or negative. Indeed, entropy measures heterogeneity, irrespective of the type of association across the outcomes of the study variable. Several solutions to account for space in entropy measures have been proposed in geography, ecology and landscape studies, from the papers by Batty (\citeyear{batty74}, \citeyear{batty76}) to recent works \citep{batty10, cobbold, batty14, leibovici14}. The underlying idea is to include spatial information into the computation of entropy for capturing the role of space in determining the occurrences of interest. Since entropy is an expectation, care has to be devoted to defining the entity to which a probability is assigned, since the sum of all probabilities has to be 1. In most entropy measures, this entity is the $i$-th category of a variable $X$, say $x_i$, with $i=1, \dots, I$, and the probability of this occurrence is denoted by $p(x_i)$. The statistical properties of Shannon's entropy are usually assessed under this definition. When considering Batty's spatial entropy, the categories of the study variable are defined according to the $g=1, \dots, G$ zones that partition a territory, and the probability $p_g$ represents the probability of occurrence of the phenomenon of interest over zone $g$. Other approaches to spatial entropy (\citeauthor{oneill}, \citeyear{oneill}, \citeauthor{leibovici09}, \citeyear{leibovici09}) are based on a transformation of the study variable, aimed at including space, with its probability distribution being employed in Shannon's formula.
The present work proposes spatial entropy measures which exploit the full probabilistic framework provided in the information theory field and are based on both univariate and bivariate distributions. Our approach leads to coherently defined entropy measures that are able to discern and quantify the role of space in determining the outcomes of the variable of interest, which needs to be suitably defined. Indeed, the entropy of such a variable can be decomposed into spatial mutual information, i.e. the entropy due to space, and spatial residual entropy, i.e. the remaining information brought by the variable itself once space has been considered. Furthermore, the present proposal solves the problem of preserving additivity in constructing entropy measures, a problem first tackled by \citeauthor{theil} (\citeyear{theil}), allowing for partial and global syntheses. The topic of additivity has been thoroughly addressed in a number of papers \citep{anselin, karlstrom, entrcontinuo}, but without exploiting the properties of entropy computed on bivariate distributions.
The remainder of the paper is organized as follows. In Section \ref{sec:prop} some properties of classical entropy are highlighted; Section \ref{sec:reviewspace} reviews popular spatial entropy measures. In Section \ref{sec:nostra} an innovative way to deal with space in entropy is proposed, which is thoroughly assessed in Section \ref{sec:sim}. Section \ref{sec:disc} discusses main results and concludes the paper.
\section{Statistical properties of Shannon's entropy} \label{sec:prop} Information theory provides a complete probabilistic framework to properly define entropy measures in the field of signal processing. Its original aim is to quantify the amount of information related to a message when it is transmitted by a channel that can be noisy \citep{coverthomas, stone}. The message is treated as a discrete random variable, $X$, which can assume a set of different values, $x_i$, $i=1,\dots,I$ where $I$ is the number of possible outcomes. The term information is associated to the concepts of surprise and uncertainty: the greater the surprise (and, thus, the uncertainty) in observing a value $X=x_i$, the greater the information it contains. The amount of surprise about an outcome value $x_i$ increases as its probability decreases. In this spirit, \citeauthor{shan} (\citeyear{shan}) introduced the random variable $I(p_X)$ called information function, where $p_X=(p(x_1),\dots,p(x_{I}))'$ is the univariate probability mass function (pmf) of $X$. The information function takes values $I(p(x_i))=\log(1/p(x_i))$. It measures the amount of information contained in an outcome which has probability $p(x_i)$, without any focus on the value of the outcome itself. In information theory, the logarithm has base $2$ to quantify information in bits, but this point is irrelevant since entropy properties are invariant with respect to the choice of the base. The entropy, also known as Shannon's entropy \citep{shan}, of a variable $X$ with $I$ possible outcomes is then defined as the expected value of the information function \begin{equation} H(X)=E[I(p_X)]=\sum_{i=1}^{I} p(x_i)\log\left(\frac{1}{p(x_i)}\right). \label{eq:shann} \end{equation} Being an expected value, it measures the average amount of information brought by the realizations of $X$ as generated by the pmf $p_X$. When entropy is high, no precise information is available about the next realization, therefore the amount of information it brings is high. On the other hand, if one is fairly sure about the next observation, its occurrence does not carry much information, and the entropy is low. The probabilistic properties of entropy are often left apart in the applied literature and entropy is commonly seen as a heterogeneity index, which can be computed without the value of the study variable for the different categories. Entropy $H(X)$ ranges in $[0,\log(I)]$, i.e. it is nonnegative and its maximum depends on the number of categories of $X$. The maximum value of entropy is achieved when $X$ is uniformly distributed, while the minimum is only reached in the extreme case of certainty about the variable outcome. In order to let entropy vary between $0$ and $1$, a suitable positive constant $B$, equal to the inverse of the entropy maximum value, is often introduced to obtain the normalized version of the index: \begin{equation}H_{norm}(X)=B\cdot H(X)=\frac{H(X)}{\log(I)}.\label{eq:relH} \end{equation} When two pmfs for $X$ are competing, say $p_X$ and $q_X$, being $q_X$ the reference distribution, a measure of distance between the two is defined in terms of entropy. This quantity is called Kullback-Leibler distance, or relative entropy: \begin{equation}D_{KL}(p_X || q_X)= E\left[I\left(\frac{q_X}{p_X}\right)\right]=\sum_{i=1}^{I} p(x_i)\log\left(\frac{p(x_i)}{q(x_i)}\right) \label{eq:kullback} \end{equation} where the weights of the expectation come from the pmf $p_X$, i.e. the distribution in the denominator of the information function. 
Being a distance measure, any Kullback-Leibler distance is non-negative. \begin{remark} When $q_X$ is the uniform distribution $U_X$, expression (\ref{eq:kullback}) is the difference between the maximum value of $H(X)$ and $H(X)$ itself \begin{equation}D_{KL}(p_X || U_X)= \log(I)-H(X). \label{eq:kullback_unif} \end{equation} \end{remark} When a noisy channel is considered in information theory, a crucial point is represented by the importance of discerning the amount of information related to $X$ from the noise. In such cases, a further message, the original non-noised one, is introduced as a second discrete random variable $Y$ with $j=1, \ldots, J$ potential outcomes $y_j$. A pmf $p_Y$ is associated to the variable $Y$, and the marginal entropy $H(Y)$ can be similarly computed. This suggests to adopt a bivariate perspective; various kinds of expectations with different properties can be thus derived with reference to a joint pmf $p_{XY}$ \citep{coverthomas, stone}. A crucial quantity is represented by the expectation known as mutual information of $X$ and $Y$, defined as \begin{equation} MI(X,Y)=E\left[I\left(\frac{p_Xp_Y}{p_{XY}}\right)\right]=\sum_{i=1}^{I}\sum_{j=1}^{J}p(x_i,y_j)\log{\left(\frac{p(x_i,y_j)}{p(x_i)p(y_j)}\right)}. \label{eq:mutual} \end{equation} Expression (\ref{eq:mutual}) is a Kullback-Leibler distance $D_{KL}(p_{XY}||p_Xp_Y)$, where the reference joint pmf is the independence distribution of $X$ and $Y$, $p_Xp_Y$; the terms $I\left(p(x_i)p(y_j)/p(x_i,y_j)\right)$ of the information function are farther from $0$ as the association $(i,j)$ moves away from independence. When the two variables are independent, i.e. $p_{XY}=p_Xp_Y$, the mutual information is null, since, for all $i$ and $j$, $\log{\left(p(x_i,y_j)/p(x_i)p(y_j)\right)}=0$. Mutual information measures the association of the two messages, i.e. the amount of information of $X$ due to $Y$ (or vice-versa, as the measure is symmetric), thus removing the noise effect. Expression (\ref{eq:mutual}) can be also seen as a measure with the same structure as (\ref{eq:shann}): \begin{equation} MI(X,Y)=\sum_{i=1}^{I}p(x_i)\sum_{j=1}^{J}p(y_j|x_i)\log{\left(\frac{p(y_j|x_i)}{p(y_j)}\right)}, \label{eq:mutual_sommai} \end{equation} where, for each $i$, the information function in (\ref{eq:shann}) is replaced by a Kullback-Leibler distance $D_{KL}(p_{Y|x_i}||p_Y)$. This distance assesses how much, on average, each value of the conditional distribution $p_{Y|x_i}$ differs from the marginal $p_Y$, i.e. from independence. \begin{remark} Mutual information is both a Kullback-Leibler distance on a joint pmf and a weighted sum of Kullback-Leibler distances on univariate pmfs; being a symmetric measure, the decomposition holds in both directions, so that it is also a weighted sum of $D_{KL}(p_{X|y_j}||p_X)$. \end{remark} A further important measure of entropy, known as conditional entropy, involves the joint pmf $p_{XY}$ as (\ref{eq:mutual}) and is defined as \begin{equation} \begin{split} H(X)_Y&=E\left[H(X|y_j)\right]=\sum_{j=1}^{J}p(y_j)H(X|y_j)\\ &=E\left[E\left(I\left(p_{X|y_j}\right)\right)\right]=\sum_{j=1}^{J}p(y_j)\sum_{i=1}^{I}p(x_i|y_j)\log{\left(\frac{1}{p(x_i|y_j)}\right)}\\ &=\sum_{i=1}^{I}\sum_{j=1}^{J}p(x_i,y_j)\log{\left(\frac{1}{p(x_i|y_j)}\right)}. \end{split} \label{eq:residYX} \end{equation} In information theory, this quantity is also called residual or noise entropy, as it expresses the residual amount of information brought by $X$ once the influence of the non-noised $Y$ has been removed. 
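These definitions are straightforward to compute in practice. The following toy example (invented probabilities, assuming Python with numpy; not part of the original text) evaluates Shannon's entropy (\ref{eq:shann}), its normalized version (\ref{eq:relH}), the Kullback-Leibler distance from the uniform pmf (\ref{eq:kullback})--(\ref{eq:kullback_unif}) and the mutual information (\ref{eq:mutual}) for a small bivariate pmf.
\begin{verbatim}
# Toy example with invented probabilities: entropy, KL distance and mutual information.
import numpy as np

p_xy = np.array([[0.30, 0.10],     # joint pmf of X (rows, I = 3) and Y (columns, J = 2)
                 [0.05, 0.25],
                 [0.15, 0.15]])
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

H = lambda p: -np.sum(p * np.log(p))                   # Shannon's entropy (eq:shann)
H_X = H(p_x)
H_X_norm = H_X / np.log(len(p_x))                      # normalized entropy (eq:relH)
KL_unif = np.sum(p_x * np.log(p_x * len(p_x)))         # D_KL(p_X || U_X) (eq:kullback)
MI = np.sum(p_xy * np.log(p_xy / np.outer(p_x, p_y)))  # mutual information (eq:mutual)

print(H_X, H_X_norm)
print(KL_unif, np.log(len(p_x)) - H_X)                 # equal, as in (eq:kullback_unif)
print(MI)
\end{verbatim}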
The components $H(X|y_j)=E\left[I\left(p_{X|y_j}\right)\right]$ of (\ref{eq:residYX}) are entropies. For this reason, $H(X)_Y$ enjoys the additive property, i.e. (\ref{eq:residYX}) is an example of the law of iterated expectations, being the expectation of a conditional expectation, while marginal entropy (\ref{eq:shann}) is not. Residual entropy (\ref{eq:residYX}), like (\ref{eq:mutual}), maintains the same structure as (\ref{eq:shann}): \begin{equation} H(X)_Y=\sum_{i=1}^{I}p(x_i)\sum_{j=1}^{J}p(y_j|x_i)\log{\left(\frac{1}{p(x_i|y_j)}\right)} \label{eq:rrresid} \end{equation} where, analogously to what observed in (\ref{eq:mutual_sommai}), the information function in (\ref{eq:shann}) is replaced by a more complex synthesis. If $Y$ partially explains $X$, the entropy of $X$ should be lower when $Y$ is taken into account. Indeed, it has been shown \citep{coverthomas} that \begin{equation} MI(X,Y)=H(X)-H(X)_Y=H(Y)-H(Y)_X, \label{mutrul} \end{equation} that is, marginal entropy is the sum of mutual information and residual entropy. Since the concept of mutual information is symmetric, both equalities in (\ref{mutrul}) hold, where residual entropy $H(Y)_X$ can be defined analogously to (\ref{eq:residYX}). When independence occurs, $H(X)_Y=H(X)$, since knowing $Y$ does not reduce the uncertainty related to (i.e. the amount of information carried by) a realization of $X$. On the contrary, if there were a perfect relation between $X$ and $Y$, then $H(X)=MI(X,Y)$ and the residual entropy would be zero. In non-extreme situations, any additive term in (\ref{eq:mutual}) can be explored to check what simultaneous realizations of $X$ and $Y$ are farther away from independence; the same elementwise investigation can be performed for any of the $J$ random components $H(X|y_j)$ in (\ref{eq:residYX}). Another important quantity is the joint entropy $H(X,Y)$, which is the equivalent of (\ref{eq:shann}) when a joint pmf $p_{XY}$ is considered: \begin{equation} H(X,Y)=E\left[I\left(p_{XY}\right)\right]=\sum_{i=1}^{I}\sum_{j=1}^{J}p(x_i,y_j)\log{\left(\frac{1}{p(x_i,y_j)}\right)} \label{eq:joint} \end{equation} and expresses the total amount of information given by a simultaneous realization of $X$ and $Y$. The term 'joint' does not take the usual statistical meaning because it is not a measure of association, rather a total measure of the entropy of $X$ and $Y$ together. Therefore, $H(X,Y)$ is also called total entropy in the information theory language. The following symmetric property holds \citep{coverthomas}: \begin{equation} H(X,Y)=H(X)+H(Y)-MI(X,Y)\label{eq:rule3} \end{equation} with $H(X,Y)=H(X)+H(Y)$ in the case of independence. \begin{remark} An interesting special case of independence occurs when $p_{XY}$ is uniform, i.e. $U_{XY}$. In this situation, not only $MI(X,Y)=0$, $H(X)=H(X)_Y$ and $H(Y)=H(Y)_X$ but, in addition, the marginal entropies $H(X)$ and $H(Y)$ reach their theoretical maxima, $\log(I)$ and $\log(J)$, with the consequence that the total entropy is $H(X,Y)=\log(I)+\log(J)$. Indeed, this case describes the situation with the maximum uncertainty about the possible outcomes, i.e. with the highest marginal, residual and total entropy.\end{remark} A further relationship between entropy indices, similar to (\ref{mutrul}), is \begin{equation} H(X,Y)=H(Y)_X+H(X)=H(X)_Y+H(Y)\label{eq:rule5}. \end{equation} This equation states that the total entropy can be computed by summing the residual entropy and the marginal entropy of the variable assumed as conditioning \citep{coverthomas}. 
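Continuing the toy joint pmf above (again an invented illustration, not part of the original text), the decompositions (\ref{mutrul}), (\ref{eq:rule3}) and (\ref{eq:rule5}) can be verified numerically.
\begin{verbatim}
# Numerical check of the identities linking marginal, residual, mutual and total entropy.
import numpy as np

p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.15, 0.15]])
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

H = lambda p: -np.sum(p * np.log(p))
H_X, H_Y, H_XY = H(p_x), H(p_y), H(p_xy)                # marginal and total entropy (eq:joint)
MI = np.sum(p_xy * np.log(p_xy / np.outer(p_x, p_y)))   # mutual information (eq:mutual)
H_X_res = -np.sum(p_xy * np.log(p_xy / p_y))            # residual entropy H(X)_Y (eq:residYX)
H_Y_res = -np.sum(p_xy * np.log(p_xy / p_x[:, None]))   # residual entropy H(Y)_X

print(np.isclose(MI, H_X - H_X_res), np.isclose(MI, H_Y - H_Y_res))      # (mutrul)
print(np.isclose(H_XY, H_X + H_Y - MI))                                  # (eq:rule3)
print(np.isclose(H_XY, H_X_res + H_Y), np.isclose(H_XY, H_Y_res + H_X))  # (eq:rule5)
\end{verbatim}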
Relationships (\ref{mutrul}) and (\ref{eq:rule5}) involve the three fundamental entropy measures; all are expectations of different random variables, i.e. the information functions, weighted by different probability distributions, either univariate or bivariate. When the study variables are continuous, entropy measures cannot be simply generalized by Shannon's definition. Switching from a probability mass function (pmf) to a probability density function (pdf) unfortunately generates a measure of entropy which tends to infinity. A commonly adopted solution \citep{renyi} only considers the finite part of the entropy measure, called differential entropy, that constitutes the basis for defining Batty's spatial entropy (see Section \ref{sec:spatent}). \section{Univariate approaches to the use of spatial information in entropy measures} \label{sec:reviewspace} Several fields of application of entropy indices, such as geography, ecology and landscape studies, usually deal with spatial data, i.e. data collected over an area, from now on called observation window, where the spatial location of the occurrences is relevant to the analysis. A major drawback of using classical entropy indices in such studies is that they only employ the probability of occurrence of a category without considering the spatial distribution of such occurrence. Hence, datasets with identical pmf but very different spatial configurations share the same marginal entropy, say $H(X)$: the same $H(X)$ occurs in the two cases of strong spatial association and complete random pattern, in spite of the opposite spatial configurations, since the only element entering \ref{eq:shann} is the pmf of $X$. For this reason, a concern when computing entropy measures is the introduction of some spatial information into the formulae for capturing the distribution over space, making use, sometimes implicitly, of the concept of neighbourhood. The notion of neighbourhood is a basic concept of spatial statistics, linked to the idea that occurrences at certain locations may be influenced, in a positive or negative sense, by what happens at surrounding locations, i.e. their neighbours. The spatial extent of the influence, i.e. the choice of the neighbourhood system, is usually fixed exogenously, prior to the analysis. The system can be represented by a graph \citep{bondy}, where each location is a vertex and neighbouring locations are connected by edges. The simplest way of representing a neighbourhood system is via an adjacency matrix, i.e. a square matrix whose elements indicate whether pairs of vertices are adjacent or not in the graph. For a simple graph representing $G$ spatial units, the adjacency matrix $A=\{a_{gg'}\}_{g,g'=1,\dots,G}$ is a square $G\times G$ matrix such that $a_{gg'}=1$ when there is an edge from vertex $g$ to vertex $g'$, and $a_{gg'}=0$ otherwise; in other words, $a_{gg'}=1$ if $g' \in \mathcal{N}(g)$, the neighbourhood of area $g$. Its diagonal elements are all zero by default. Often, a row-standardized version of $A$ is used, i.e. all $G$ rows are constrained to sum to 1. Note that the spatial units may be points, defined via coordinate pairs, or areas, identified via a representative coordinate pair, such as the area centroid. Coordinates are used to measure distances and define what spatial units are neighbours. Areal units are seen as blocks, where a single value of the study variable is observed and the neighbourhood system is built. 
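As a concrete illustration of these notions (hypothetical code, assuming Python with numpy; not part of the original text), the adjacency matrix $A$ of a small $3\times 3$ regular lattice under border-sharing contiguity, together with its row-standardized version, can be built as follows.
\begin{verbatim}
# Hypothetical sketch: rook-contiguity adjacency matrix for a 3 x 3 lattice of areal units.
import numpy as np

nrow = ncol = 3
G = nrow * ncol
cells = [(r, c) for r in range(nrow) for c in range(ncol)]   # unit g <-> grid coordinates

A = np.zeros((G, G), dtype=int)
for g, (r, c) in enumerate(cells):
    for g2, (r2, c2) in enumerate(cells):
        if abs(r - r2) + abs(c - c2) == 1:                   # share a border
            A[g, g2] = 1                                     # diagonal stays 0 by default

W = A / A.sum(axis=1, keepdims=True)                         # row-standardized version of A
print(A.sum(axis=1))   # number of neighbours per unit: corners 2, edge cells 3, centre 4
print(W.sum(axis=1))   # each row of W sums to 1
\end{verbatim}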
The idea of neighbourhood underlies a number of proposals in research fields that actively contribute to the definition of spatial measures. For instance, \citeauthor{illian} (\citeyear{illian}) propose a generalized and flexible measure of spatial biodiversity based on graphs. \citeauthor{cobbold} (\citeyear{cobbold}) present an approach to the idea of neighbourhood which measures the logical similarity between pairs of species, and is therefore more extended than the similarity between spatial units. Under this perspective, they consider a new family of diversity measures, including some entropy-based indices as special cases, in order to account for both abundances of species and differences between them. Over the past thirty years, several works developed the idea of spatial entropy measures based on an idea of neighbourhood; they can be ascribed to two main univariate approaches. The first one (\citeauthor{batty76} \citeyear{batty74, batty76, batty10}, \citeauthor{karlstrom} \citeyear{karlstrom}), presented in Section \ref{sec:spatent}, starts from the theory of spatial statistics but pays the price of discarding the categories of $X$. In particular, \citeauthor{karlstrom} (\citeyear{karlstrom}) aim at building an additive measure following the idea of Local Indices of Spatial Association (LISA) proposed by \cite{anselin}. The second approach computes entropy measures based on a transformation of the study variable $X$, accounting for the distance at which specific associations of $X$ outcomes occur. The resulting measures are not additive (therefore not decomposable) and are only able to consider one distance range at a time. They are outlined in Section \ref{sec:Z}. All the above proposals, except for Batty's work, refer to the concept of neighbourhood by setting an adjacency matrix. \subsection{Towards an additive spatial entropy} \label{sec:spatent} \subsubsection{Batty's spatial entropy} One appreciable attempt to include spatial information into Shannon's entropy starts from a reformulation of (\ref{eq:shann}). The categorical variable $X$ is recoded into $I$ dummy variables, each identifying the occurrence of a specific category of $X$. The probability of success of each dummy variable, i.e. 'occurrence of the $i$-th category of $X$', is labelled as $p_i$, and, for each $i$, $1-p_i=\sum_{i' \ne i} p_{i'}$. This means that each non-occurrence of the $i$-th category of $X$ implies the occurrence of another category. As a consequence, $\sum_i p_i=1$, since the collection of occurrences constitutes a partition of the certain event. Due to the way the $I$ 'occurrence/non-occurrence' variables are defined, $p_i=p(x_i)$. Therefore, Shannon's entropy of the variable $X$ may be expressed as $H(X)=\sum_{i=1}^{I} p_i \log(1/p_i)$. This approach is taken by Batty (\citeyear{batty74, batty76}) to define a spatial entropy which extends Theil's work (\citeyear{theil}). In a spatial context, a phenomenon of interest $F$ occurs over an observation window of size $T$ partitioned into $G$ areas of size $T_g$, with $\sum_{g=1}^G T_g=T$. This defines $G$ dummy variables identifying the occurrence of $F$ over a generic area $g$, $g=1, \dots, G$. Given that $F$ occurs somewhere over the window, its occurrence in zone $g$ takes place with probability $p_g$, where again $1-p_g=\sum_{g' \ne g} p_{g'}$ and $\sum_g p_g=1$. Since the collection of $p_g$ meets the criteria for being a pmf, it is possible to define the phenomenon pmf over the window $p_F=\left(p_1, \dots, p_g, \dots, p_G\right)'$. 
When $p_g$ is divided by the area size $T_g$, the phenomenon intensity is obtained: $\lambda_g=p_g/T_g$, assumed constant within each area $g$. Shannon's entropy of $F$ may be written as \begin{equation} H(F)=E\left[I\left(p_F\right)\right]=\sum_{g=1}^G p_g \log \left(\frac{1}{p_g}\right)=\sum_{g=1}^G \lambda_gT_g \log \left(\frac{1}{\lambda_g}\right)+\sum_{g=1}^G \lambda_gT_g \log \left(\frac{1}{T_g}\right). \end{equation} \cite{batty76} shows that the first term on the right hand side of the formula converges to the continuous version of Shannon's entropy \citep{renyi}, namely the differential entropy, as the area size $T_g$ tends to zero. The second term is discarded and the differential entropy is rewritten in terms of $p_g$, giving Batty's spatial entropy \begin{equation} H_B(F)=\sum_{g=1}^G p_g \log \left(\frac{T_g}{p_g}\right). \label{eq:spaten} \end{equation} It expresses the average surprise (or amount of information) brought by the occurrence of $F$ in an area $g$, and aims at computing a spatial version of Shannon's entropy. Shannon's entropy is high when the $I$ categories of $X$ are equally represented over a (non spatial) data collection, while Batty's entropy is high when the phenomenon of interest $F$ is equally intense over the $G$ areas partitioning the observation window ($\lambda_g=\lambda$ for all $g$). Batty's entropy includes a multiplicative component $T_g$ related to space in the information function that accounts for unequal space partition. Batty's entropy $H_B(F)$ reaches a minimum value equal to $\log(T_{g^*})$ when $p_{g^*}=1$ and $p_g=0$ for all $g\ne g^*$, with $g^*$ denoting the area with the smallest size. The maximum value of Batty's entropy is $\log(T)$, reached when the intensity of $F$ is the same over all areas, i.e. $\lambda_g=1/T$ for all $g$. This maximum value does not depend on the area partition, nor on the nature of the phenomenon of interest $F$ (discrete or continuous), but only on the size of the observation window. When $T_g=1$ for each $g$, $H_B(F)$ is a Shannon's entropy of $F$ equivalent to (\ref{eq:shann}), and the index ranges accordingly in $[0,\log(G)]$. \subsubsection{A LISA version of Batty's spatial entropy} \label{sec:kc} A challenging attempt to introduce additive properties and to include neighbourhood in Batty's entropy index (\ref{eq:spaten}) is due to \citeauthor{karlstrom} (\citeyear{karlstrom}), following the LISA theory. Local Indices of Spatial Association (LISA, see \citeauthor{anselin}, \citeyear{anselin}, for an extensive introduction and \citeauthor{cliff}, \citeyear{cliff},\ for the popular Moran's $I$ example of a LISA measure) are descriptive measures of a spatial dataset that satisfy the following conditions. \begin{cond}\label{cond1} For every spatial unit $g$ within the observation window, a LISA index measures the degree of spatial clustering/aggregation around that location; it can be viewed as a local index because it is a function of the study variable at unit $g$ and at neighbour units $g'$. In the context of Section \ref{sec:spatent}, the local index may be defined as $L_g=f(\lambda_g, \lambda_{g'\in \mathcal{N}(g)})$, where $\mathcal{N}(g)$ is the neighbourhood of $g$. \end{cond} \begin{cond}\label{cond2} The sum of the indices at all spatial units $g$ is proportional to the overall index in the observation window: $\alpha\sum_g L_g=L$ where $L$ is a global version of the index. This is the desirable additivity property of local spatial measures. 
\end{cond} \citeauthor{karlstrom}'s entropy index $H_{KC}(F)$ starts by weighting the probability of occurrence of the phenomenon of interest $F$ in a given spatial unit $g$, $p_g$, with its neighbouring values: \begin{equation} \widetilde{p}_g=\sum_{g'=1}^G a_{gg'}p_{g'} \label{eq:karl_ptilde} \end{equation} where $a_{gg'}$ is the element of the row-standardised $G\times G$ adjacency matrix $A$, which selects the neighbouring areas and the associated probabilities $p_{g'}$. Then, an information function is defined, fixing $T_g=1$, as $I(\widetilde{p}_g)=\log\left(1/\widetilde{p}_g\right)$. When all weights are equal, i.e. $a_{gg'}=1/|\mathcal{N}(g)|$ for $g' \in \mathcal{N}(g)$ where $|\mathcal{N}(g)|$ is the cardinality of $\mathcal{N}(g)$, then an average of the $p_{g'}$ is obtained: $\sum_{g'=1}^G a_{gg'}p_{g'}=\sum_{g'\in \mathcal{N}(g)}p_{g'}/|\mathcal{N}(g)|$. In this proposal, the elements on the diagonal of the adjacency matrix $A$ are non-zero, i.e. each area neighbours itself and enters the computation of $I(\widetilde{p}_g)$. Thus, \citeauthor{karlstrom}'s entropy index is \begin{equation} H_{KC}(F)=E\left[I\left(\widetilde{p}_g\right)\right]=\sum_{g=1}^G p_g\log\left(\frac{1}{\widetilde{p}_g}\right). \label{eq:karl} \end{equation} It can be shown that the maximum of $H_{KC}(F)$ does not depend on the choice of the neighbourhood and is $\log(G)$. As the neighbourhood reduces until vanishing, i.e. as $A$ becomes the identity matrix, $H_{KC}(F)$ coincides with Batty's spatial entropy (\ref{eq:spaten}) in the case of all $T_g=1$. The local measure that satisfies LISA Condition \ref{cond1} is $L_g=p_gI(\widetilde{p}_g)$. The sum of local measures $L_g$ forms the global index (\ref{eq:karl}), preserving the LISA property of additivity, Condition \ref{cond2}, with $\alpha=1$ as proportionality constant. The main disadvantage of the local components is that they are not expectations, therefore they are not entropy measures. \subsection{Spatial entropies based on a transformation of the study variable} \label{sec:Z} A second way to build a spatial entropy measure consists in defining a new categorical variable $Z$, where each realization identifies groups of occurrences of $X$ over space, namely co-occurrences. The definition of $Z$ underlies a choice for $m$, the degree of co-occurrences of $X$ (couples, triples and so on) and an assumption on whether to preserve the order of realizations over space. Preserving the order means, for example, that a co-occurrence $(x_i,x_j)$ of degree $m=2$ is different from $(x_j, x_i)$. Then, for a generic degree $m$ and $I$ categories of $X$, the new variable $Z$ has $R^o_m=I^m$ categories; should the order not be preserved, $R^{no}_m=\binom{I+m-1}{m}$. Once $Z$ is defined, its pmf is $p_{Z}=(p(z_1), \dots, p(z_{R_m}))'$, where $p(z_r)$ is the probability of observing the $r$-th co-occurrence of $X$ over different spatial units, and $R_m$ may be alternatively $R^o_m$ or $R^{no}_m$. The pmf $p_Z$ can be used to compute Shannon's entropy (\ref{eq:shann}) of $Z$, $H(Z)$, which differs from Shannon's entropy of $X$ as regards the number of categories. When the order is not preserved, this measure does not depend on the spatial configuration of co-occurrences. Therefore, in this case, $Z$ maintains the information of $X$ and the corresponding entropies are strictly related (see Section \ref{sec:sim_resshan} for details). 
Conversely, when the order is preserved, the entropy of $Z$ depends not only on the pmf of $X$, but also on the spatial order of its realizations. All contributions that introduce space in entropy measures based on $Z$ make use of a definition of neighbourhood which, even in the simplest case of sharing a border, needs the construction of an adjacency matrix $A$, which, for a generic degree $m$, generalizes to a hypercube in the $m$-dimensional space. The definition of $A$ implies that the univariate distributions used in the entropies are conditional, i.e. $p_{Z|A}=(p(z_1|A), \dots, p(z_{R_m}|A))'$. Realizations of $Z|A$ form a subset of realizations of $Z$ that only includes co-occurrences identified by non-zero elements of $A$, i.e. conditioning on a fixed neighbourhood. In most works on regular lattice data (see, for instance, \citeauthor{oneill}, \citeyear{oneill}, \citeauthor{contagion}, \citeyear{contagion}, \citeauthor{riitters}, \citeyear{riitters}), co-occurrences are defined as ordered couples of contiguous realizations of $X$, where the term ``contiguous'' in this case means ``sharing a border''. Thus, $m=2$ and a contiguity matrix is built, here denoted by $O$; consequently, the variable of interest is $Z|O$ with $R^{o}_2=I^2$ categories. \citet{oneill} propose one of the early spatial entropy indices, computing Shannon's entropy (\ref{eq:shann}) for the variable $Z|O$ \begin{equation} H(Z|O)=E\left[I\left(p_{Z|O}\right)\right]=\sum_{r=1}^{R^o_2} p(z_{r}|O)\log\left(\frac{1}{p(z_{r}|O)}\right). \label{eq:oneill} \end{equation} The entropy ranges from 0 to $\log(R^o_2)$; the index maximum is reached when the pmf $p_{Z|O}$ is uniform. Other measures based on the construction of $Z|O$ start from the concept of contagion, the conceptual opposite of entropy. The Relative Contagion index $RC$ \citep{contagion} was proposed as \begin{equation} RC(Z|O)=1-H_{norm}(Z|O)=1-\frac{1}{\log(R^o_2)}\sum_{r=1}^{R^o_2} p(z_{r}|O)\log\left(\frac{1}{p(z_{r}|O)}\right) \label{eq:rc} \end{equation} where the idea of a normalized measure comes from (\ref{eq:relH}): the second term is the normalized entropy of $Z|O$, obtained via the multiplication of (\ref{eq:oneill}) by the appropriate $B=1/\log(R^o_2)$. Its complement to 1 is then computed in order to measure relative contagion: the higher the contagion between categories of $Z|O$, the lower the entropy. \cite{riitters} derive RC indices with or without preserving the order of elements in the couple. In the latter case, the number of categories for $m=2$ is $R^{no}_2=(I^2+I)/2$, a special case of the binomial coefficient $R^{no}_m=\binom{I+m-1}{m}$, and the normalization constant changes accordingly into $B=1/\left(\log(I^2+I)-\log(2)\right)$. A negative characteristic of the RC index, as noted by \cite{parresol}, is that, like all normalized measures, it is not able to distinguish among contexts with different numbers of categories. While a dataset with $I=2$ has a lower entropy than a dataset with $I=100$, a normalized index does not account for that. For this reason, \citeauthor{parresol} suggest going back to an unnormalized version of (\ref{eq:rc}): \begin{equation} C(Z)=-H(Z|O)=\sum_{r=1}^{R^o_2} p(z_{r}|O)\log(p(z_{r}|O)) \label{eq:gamma} \end{equation} thus ranging from $-\log(R^o_2)$ to $0$. The above measures are inspired by very different conceptualizations but are computed as linear transformations of the common starting quantity (\ref{eq:oneill}). 
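As an illustration of (\ref{eq:oneill}) and (\ref{eq:rc}), a minimal sketch follows (in Python, assuming the lattice is stored as a NumPy array of category labels; function names are illustrative). Ordered couples of contiguous cells are collected by pairing each cell with its right and lower neighbours, and the Relative Contagion index is then obtained as one minus the normalized entropy.
\begin{verbatim}
import numpy as np
from collections import Counter

def oneill_entropy(grid):
    """O'Neill's entropy H(Z|O): Shannon's entropy of ordered couples of
    contiguous cells (each cell paired with its right and lower neighbour)."""
    counts = Counter()
    n_rows, n_cols = grid.shape
    for i in range(n_rows):
        for j in range(n_cols):
            if j + 1 < n_cols:
                counts[(grid[i, j], grid[i, j + 1])] += 1
            if i + 1 < n_rows:
                counts[(grid[i, j], grid[i + 1, j])] += 1
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    return float(-(probs * np.log(probs)).sum())

def relative_contagion(grid, n_x_categories):
    """Relative Contagion index: 1 minus the normalized O'Neill entropy,
    with normalization constant B = 1 / log(I^2)."""
    return 1.0 - oneill_entropy(grid) / np.log(n_x_categories ** 2)
\end{verbatim}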
\cite{leibovici09} and \cite{leibovici14} propose a richer measure of entropy by extending $H(Z|O)$ in two ways. Firstly, $Z$ can now represent not only couples, but also triples and further degrees $m$ of co-occurrences. The authors only develop the case of ordered co-occurrences, so that the number of categories of $Z$ is $R^o_m=I^m$. Secondly, space is now allowed to be continuous, so that areal as well as point data might be considered and associations may not coincide with contiguity; therefore, the concept of distance between occurrences replaces the concept of contiguity between lattice cells. A distance $d$ is fixed, then co-occurrences are defined for each $m$ and $d$ as $m$-th degree simultaneous realizations of $X$ at any distance $d^{*} \le d$, i.e. distances are considered according to a cumulative perspective; in this way, an adjacency hypercube $L_d$ is built and the variable of interest is $Z|L_d$. Then, Leibovici's spatial entropy is \begin{equation} H(Z|L_d)=E\left[I\left(p_{Z|L_d}\right)\right]=\sum_{r=1}^{R^o_m} p(z_r|L_d)\log\left(\frac{1}{p(z_r|L_d)}\right). \label{eq:leib} \end{equation} The probability $p(z_r|L_d)$ is again the discrete element of a univariate pmf $p_{Z|L_d}$, i.e. computed for a distribution conditional on $L_d$. Entropy (\ref{eq:leib}) can be normalized using $B=1/\log(R^o_m)$. In the case of lattice data, O'Neill's entropy (\ref{eq:oneill}) is obtained as a special case when $m=2$ and $d$ equals the cell's width. \subsection{Overall comments} The categorical variable $X$ of Section 2 is not used in entropies (\ref{eq:spaten}) and (\ref{eq:karl}), as the information on the different categories is lost. The phenomenon of interest $F$ may coincide with a specific category of $X$, say $F=X_i^*$, and $H_{KC}(X_i^*)$ may be computed to assess the spatial configuration of the realizations of $X_i^*$. Thus, for a categorical $X$, $I$ different $H_{KC}(X_i^*)$ can be computed, but there is no way to synthesize them into a single spatial entropy measure for $X$. A similar approach, exploiting neighbourhood in terms of spatial proximity, is also used in several sociological works to define an entropy-based segregation index among population groups that includes the spatial information of locations (see, e.g., \citeauthor{reardon}, \citeyear{reardon}). With respect to the measures proposed in Section \ref{sec:spatent}, the advantage of an approach based on the construction of $Z$ is that it maintains the information about all categories of $X$. It has, however, two main limitations: firstly, these entropies are not decomposable, so there are no partial terms to investigate; secondly, they are based on univariate distributions, so the interesting properties related to bivariate distributions cannot be exploited. Two substantial differences emerge between the use of an appropriate adjacency matrix to build the realizations of $Z|A$ and \citeauthor{karlstrom}'s approach of Section \ref{sec:kc}. First of all, in \citeauthor{karlstrom}'s approach $p_g$ does not depend on $A$, which is only needed in the further step to include the neighbouring values, i.e. to derive $\widetilde{p}_g$; on the contrary, in the approach proposed in Section \ref{sec:Z}, $A$ is needed from the beginning to switch from $X$ to $Z|A$ and to define the proper pmf $p_{Z|A}$ itself. Secondly, since $p_g$ takes values over a location $g$, the other probabilities $p_{g'}$, $g'\ne g$, are used to compute $\widetilde{p}_g$ in the neighbourhood of each $g$. 
Conversely, in the approach based on the construction of $Z|A$, probabilities $p(z_r|A)$ are not referred to a specific location. \section{Additive spatial entropy measures exploiting bivariate properties} \label{sec:nostra} All the previously listed indices are challenging attempts to include space into entropy measures; nevertheless, some open questions remain. An important limitation is that each index is computed for only one adjacency matrix, i.e. by fixing the neighbourhood of interest in advance. This is linked to the fact that all entropies of Section \ref{sec:reviewspace} are based on univariate distributions and cannot take advantage of the bivariate properties presented in Section \ref{sec:prop}. The use of bivariate distributions would allow the property of additivity to be exploited for a global index by using a rigorous probabilistic approach. Moreover, there is a need to build spatial entropy measures exploiting the relationship of the study variable with space. Under this perspective, proper spatial entropy measures are expected to: \begin{enumerate} \item[a)] maintain the information about the categories of $X$, e.g. by exploiting the transformed variable $Z$ as in Section \ref{sec:Z}, \item[b)] consider different distance ranges simultaneously, by including an additional study variable representing space to enjoy the properties of bivariate entropy measures, \item[c)] quantify the overall role of space, \item[d)] be additive and decomposable, i.e. satisfy partial and global properties as in Section \ref{sec:kc}. \end{enumerate} All the above points are accomplished using residual entropy (\ref{eq:residYX}) and its relationship with Shannon's entropy (\ref{eq:shann}); residual entropy is the proper quantity able to summarize partial entropy measures, conditional on a specific value of the second variable, into a global one. Partial entropies, which are themselves conditional expectations, are weighted by their probabilities, helping to appreciate the relevance of uncertainty and to move from exploratory analysis to statistical inference. The same properties a) to d) are enjoyed by the quantity known as mutual information (\ref{eq:mutual}), which receives a novel interpretation when space is taken into account. The realizations of $X$ are assumed to occur over a discretized area, say a grid (though the following is applicable to non-regular lattices and point data as well), and distances between areas are represented by Euclidean distances between centroids. Co-occurrences of $X$ are used to build $Z$, and the degree of such co-occurrences (couples, triples, or further degrees, i.e. $m=1,\ldots ,M$) is fixed exogenously, driven by the researcher's experience. Different structures have different merits; discussing them is beyond the purpose of this work, and the conclusions are independent of this choice. The categories of the transformed variable $Z$ derive from unordered associations: ordering occurrences does not appear sensible, since spatial neighbourhoods do not generally have a direction. Moreover, neglecting the order of occurrences ensures a one-to-one correspondence between $H(X)$ and $H(Z)$. Conversely, if order is preserved, different $H(Z)$ can be obtained for different spatial configurations of the same series of realizations of $X$, as discussed in Section \ref{sec:reviewspace}. 
This encourages the choice of unordered occurrences as the most appropriate: in the case of $m=2$, a spatial measure should consider the couple $z_{r}=(x_i,x_j)$ equal to the couple $(x_j,x_i)$, and analogously for higher degrees of co-occurrences; this is what will be done in the remainder of this Section. As mentioned above, a novelty of the proposed measures lies in the introduction of a second discrete variable $W$, which represents space by classifying the distances at which co-occurrences take place. These exogenous classes $w_k$, with $k=1,\dots,K$, cover all possible distances within the observation window and have a pmf $p_W=(p(w_1), \dots, p(w_K))'$; $p(w_k)$ is the probability associated with the $k$-th distance range. Once the degree $m$ of co-occurrences is fixed, each distance category $w_k$ implies the choice of a different adjacency matrix $A_k$ (a hypercube in the $m$-dimensional space for $m>2$) for the associations of $X$ that define $Z|A_k$. Therefore, $p_{Z|A_k}$ may equivalently be written as the $R^{no}_m \times 1$ vector $p_{Z|w_k}$, and the set of $K$ conditional distributions can be collected in an $R^{no}_m \times K$ matrix \begin{equation} p_{Z|W}=\begin{bmatrix} p_{Z|w_1} & p_{Z|w_2} & \cdots & p_{Z|w_K} \end{bmatrix}. \end{equation} Consequently, the discrete joint pmf $p_{ZW}$ can be represented by an $R^{no}_m \times K$ matrix: \begin{equation} p_{ZW}=p_{Z|W}\,\mathrm{diag}\left(p_W\right). \end{equation} This decomposition is relevant because it stresses the logical relationship between $Z$ and $W$: $W$ influences $Z$ and not vice versa. It confirms that the marginal pmf of $W$ and the set of distributions of $Z$ conditional on the values of $W$ are the proper quantities to obtain entropy measures exploiting properties of bivariate distributions. \subsection{An innovative view warranting additivity: spatial residual entropy} Under the setting defined at the beginning of this Section, entropy has to be computed on the $Z|w_k$; the number of categories is $R^{no}_m$, and the strength of spatial influence is determined by the $K$ values of $W$. These elements permit writing the entropy measure called spatial global residual entropy $H(Z)_W$, which reinterprets (\ref{eq:residYX}) as follows \begin{equation} \begin{split} H(Z)_W=E[H(Z|w_k)]=E[E\left(I\left(p_{Z|w_k}\right)\right)]&=\sum_{k=1}^{K}p(w_k)\sum_{r=1}^{R^{no}_m}p(z_r|w_k)\log{\left(\frac{1}{p(z_r|w_k)}\right)}\\ &=\sum_{k=1}^{K}\sum_{r=1}^{R^{no}_m}p(z_r,w_k)\log{\left(\frac{1}{p(z_r|w_k)}\right)}. \end{split} \label{eq:residZW} \end{equation} The components of (\ref{eq:residZW}) \begin{equation} H(Z|w_k)=E[I\left(p_{Z|w_k}\right)]=\sum_{r=1}^{R^{no}_m}p(z_r|w_k)\log{\left(\frac{1}{p(z_r|w_k)}\right)} \label{eq:residZW_loc} \end{equation} have a crucial meaning and, from now on, are named spatial partial residual entropies, where ``partial'' corresponds to a specific distance class $w_k$. They are computed starting from the conditional pmf $p_{Z|w_k}$. When these measures are multiplied by the probability $p(w_k)$, they allow spatial global residual entropy (\ref{eq:residZW}) to enjoy the additive property, as (\ref{eq:residZW}) can be written as \begin{equation} H(Z)_W=\sum_{k=1}^{K}p(w_k)H(Z|w_k). 
\label{eq:nostra_add} \end{equation} The additive relationship (\ref{eq:nostra_add}) holds, as the spatial global residual entropy (\ref{eq:residZW}) is obtained by weighting the spatial partial residual entropies with the probabilities of the conditioning variable $W$, and is relevant: the spatial global residual entropy tells how much information is still brought by $Z$ after removing the effect of the spatial configuration $W$. Partial entropies show how distances contribute to the entropy of $Z$. The main innovation of the proposed spatial residual entropy perspective is that it allows entropy measures illustrated in Section \ref{sec:Z} to be generalized through the formulation of each different spatial partial residual entropy (\ref{eq:residZW_loc}). Indeed, fixing a degree $m=2$ and a distance class $w_k$ implies a definition of an adjacency matrix $A_k$ which can be the contiguity $O$ as in (\ref{eq:oneill}) when $w_k=[0,1]$, or a $L_d$ as in (\ref{eq:leib}) based on a distance range $w_k=[0,d]$. \subsection{Deepening the concept of mutual information} An immediate consequence of relying on residual entropy is the possibility to isolate the mutual information of $Z$ and $W$, from now on named spatial mutual information, according to (\ref{mutrul}) by subtracting the spatial global residual entropy $H(Z)_W$ from $H(Z)$, Shannon's entropy of $Z$: \begin{equation} MI(Z, W)=H(Z)-H(Z)_W. \label{eq:residZW_mut} \end{equation} Shannon's entropy of $Z$ is computed by using the univariate marginal $p_Z$, that does not depend on any adjacency matrix. Spatial mutual information is defined, similarly to (\ref{eq:mutual}), as \begin{equation} MI(Z,W)=E\left[I\left(\frac{p_Zp_W}{p_{ZW}}\right)\right] =\sum_{r=1}^{R^{no}_m}\sum_{k=1}^{K}p(z_r,w_k)\log{\left(\frac{p(z_r,w_k)}{p(z_r)p(w_k)}\right)}. \label{eq:spatial_mutual} \end{equation} It is a Kullback-Leibler distance $D_{KL}(p_{ZW}||p_Zp_W)$ and the component of $H(Z)$ due to the spatial configuration $W$. Spatial mutual information may be additively decomposed the same way as spatial global residual entropy (\ref{eq:residZW}), so that the contribution of space can be quantified at every distance range $w_k$: \begin{equation} MI(Z,W)=\sum_{k=1}^{K} p(w_k)\sum_{r=1}^{R^{no}_m}p(z_r|w_k)\log{\left(\frac{p(z_r|w_k)}{p(z_r)}\right)}, \label{eq:partial_mutual} \end{equation} where the $k$-th partial term, analogously to (\ref{eq:residZW_loc}), is now named spatial partial information \begin{equation} PI(Z,w_k)=E\left[I \left(\frac{p_Z}{p_{Z|w_k}}\right) \right]=\sum_{r=1}^{R^{no}_m}p(z_r|w_k)\log{\left(\frac{p(z_r|w_k)}{p(z_r)}\right)}. \label{eq:partialterm_mut} \end{equation} Each partial term is a Kullback-Leibler distance $D_{KL}(p_{Z|w_k}||p_Z)$ that quantifies the contribution to the departure from independence of each conditional distribution $p_{Z|w_k}$. In the special case of independence between $Z$ and a distance class $w_k$, $p(z_r|w_k)=p(z_r)$ and the contribution of the corresponding partial term to the spatial mutual information is null. The additive relationship is respected the same way as for the spatial residual entropy, once the $PI$s are weighted by the probabilities $p(w_k)$: \begin{equation} MI(Z,W)=\sum_{k=1}^{K}p(w_k)PI(Z,w_k). \label{eq:mut_sumPI} \end{equation} Again, (\ref{eq:partial_mutual}) is expressed in terms of $w_k$, due to the logical order between $w_k$ and $z_r$. 
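The decomposition just described can be translated directly into code. The following sketch (in Python, with illustrative names; it assumes the co-occurrence counts have already been tabulated by category and distance class, and that every distance class is observed) computes $H(Z)$, the spatial global residual entropy (\ref{eq:residZW}), the spatial mutual information (\ref{eq:spatial_mutual}) and their partial terms (\ref{eq:residZW_loc}) and (\ref{eq:partialterm_mut}); by construction, $H(Z)=MI(Z,W)+H(Z)_W$ holds up to rounding.
\begin{verbatim}
import numpy as np

def shannon(p):
    """Shannon's entropy of a discrete pmf (0 log 0 = 0)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) of discrete pmfs."""
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / q[mask])).sum())

def spatial_entropy_decomposition(counts):
    """counts[r, k]: number of co-occurrences of category z_r in class w_k."""
    counts = np.asarray(counts, dtype=float)
    p_zw = counts / counts.sum()          # joint pmf p(z_r, w_k)
    p_w = p_zw.sum(axis=0)                # marginal pmf of W
    p_z = p_zw.sum(axis=1)                # marginal pmf of Z
    p_z_given_w = p_zw / p_w              # column k holds p(z_r | w_k)

    partial_residual = np.array([shannon(p_z_given_w[:, k])
                                 for k in range(len(p_w))])    # H(Z|w_k)
    partial_info = np.array([kl(p_z_given_w[:, k], p_z)
                             for k in range(len(p_w))])        # PI(Z, w_k)
    h_z = shannon(p_z)                                         # H(Z)
    h_z_res = float((p_w * partial_residual).sum())            # H(Z)_W
    mi = float((p_w * partial_info).sum())                     # MI(Z, W)
    return h_z, h_z_res, mi, partial_residual, partial_info, p_w
\end{verbatim}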
\subsection{Advances in interpreting spatial entropy measures} \label{sec:nostra_advances} Expression (\ref{eq:residZW_mut}) now takes on a substantial new meaning: the entropy of $Z$, $H(Z)$, may be decomposed into spatial mutual information, quantifying the role of space, and spatial global residual entropy, quantifying the remaining information brought by $Z$: \begin{equation} H(Z)=MI(Z,W)+H(Z)_{W}. \end{equation} The more $Z$ depends on $W$, i.e. the more the realizations of $X$ are (positively or negatively) spatially associated, the higher the spatial mutual information. Conversely, when the spatial association among the realizations of $X$ is weak, the entropy of $Z$ is mainly due to the spatial global residual entropy. For the sake of interpretation and diffusion of the results, a ratio can be built that allows the role of space to be quantified in proportional terms. The quantity \begin{equation} MI_{prop}(Z,W)=\frac{MI(Z,W)}{H(Z)}=1-\frac{H(Z)_W}{H(Z)} \label{eq:mutprop} \end{equation} ranges in $[0,1]$ and quantifies the contribution of space to the entropy of $Z$ as a proportion of the marginal entropy. If, e.g., $MI_{prop}(Z,W)=0.6$, it can be concluded that 60\% of the entropy of $Z$ is due to the specific spatial configuration. Similarly, $H(Z)_W/H(Z)$ gives the proportion of the entropy of $Z$ due to sources of heterogeneity other than space. This highlights that both $MI(Z,W)$ and $H(Z)_W$ can potentially vary over the whole range of $H(Z)$, whose value constitutes an upper limit. Moreover, the terms in $H(Z)$ can be further decomposed, exploiting (\ref{eq:nostra_add}) and (\ref{eq:mut_sumPI}), as \begin{equation} H(Z)=\sum_{k=1}^{K}p(w_k)\left[PI(Z,w_k)+H(Z|w_k)\right], \end{equation} where the contribution of each term in explaining the relationship between $Z$ and $W$ is isolated. This way, Shannon's entropy $H(Z)$ can be written in additive form by exploiting the bivariate properties of entropy. \section{A comparative study of spatial entropy measures} \label{sec:sim} Section \ref{sec:nostra} has already presented the theoretical properties of the proposed measures, which establish their advantages as spatial entropy indices. Unlike the traditional measures, which all rely on a univariate approach and are sometimes based on a single adjacency matrix (hypercube), spatial residual entropy (\ref{eq:residZW}) and spatial mutual information (\ref{eq:spatial_mutual}) consider different matrices (hypercubes) covering all possible distances and exploit the bivariate properties of entropy to summarize what is known. It has been shown that all measures in Section \ref{sec:Z} can be derived as special cases of one spatial partial residual entropy (\ref{eq:residZW_loc}). The spatial entropy indices discussed in Sections \ref{sec:reviewspace} and \ref{sec:nostra} need to be further investigated in order to identify their main properties and the different contexts of application. The behaviour of the proposed entropy indices compared to the other measures is assessed in what follows, and their flexibility and informativeness are investigated. Therefore, in this comparative analysis, several datasets are generated under different scenarios to compute Batty's entropy (\ref{eq:spaten}) and \citeauthor{karlstrom}'s entropy (\ref{eq:karl}). Then, O'Neill's entropy (\ref{eq:oneill}) and its generalization Leibovici's (or co-occurrence) entropy (\ref{eq:leib}) are also assessed. 
Finally, the spatial global residual entropy (\ref{eq:residZW}) and the spatial mutual information (\ref{eq:spatial_mutual}) are computed, as well as their partial components. The thorough comparison of this wide set of entropy measures shows that spatial residual entropy outperforms the traditional measures as regards completeness, since it succeeds in synthesizing many relevant features. It is also the main tool for computing spatial mutual information, which points out the overall role of space. For simplicity of presentation, discrete space and a regular grid are considered. It should be pointed out that space can be discretized as desired, as long as a distance measure between areas is suitably defined. Additionally, space is allowed to be continuous and areas may be replaced by points; in this case, $W$ would represent the distance between points themselves, and all entropy measures would be defined accordingly. When dealing with the transformed variable $Z$, $m=2$ is assumed and the number of categories is simply denoted by $R^{o}$ or $R^{no}$ according to order preservation. Sections \ref{sec:simdesign} and \ref{sec:cumput} illustrate the design of the study: firstly, the data generation procedure is introduced; then, the estimation of the necessary probability distributions is presented. Results are shown for the non-spatial Shannon's entropy in Section \ref{sec:sim_resshan}; afterwards, results for all entropy measures on data with two categories are summarized in Section \ref{sec:sim_resX2}. The main results for extensions to data with more than two categories are in Section \ref{sec:sim_resX5X20}. \subsection{Data generation} \label{sec:simdesign} $N=2500$ realizations of a categorical variable $X$ are considered, obtained by randomly setting the pmf $p_X$ and then generating values from a multinomial distribution $Mn(N,p_X)$. In accordance with the choice of using a regular grid, the realizations are arranged in $50\times 50$ pixels over a square window. Without loss of generality, each pixel is assumed to be a $1\times 1$ square, therefore the observation window is $50 \times 50$ units. Three options for the number of categories $I$ are covered: 2 categories ($X2$), 5 categories ($X5$) and 20 categories ($X20$). Categories are coded with integers from 1 to $I$ and, when needed, represented by different grey intensities going from black to white. The simulated sequence of 2500 values is organized according to different spatial configurations, as they are expected to produce different entropy values. For $X2$, four different scenarios are built: \begin{enumerate} \item compact - a spatially strongly clustered distribution \item repulsive - a spatially regular distribution, tending to a chessboard configuration \item multicluster - a spatial configuration presenting 25 small clusters of about the same size \item random - a pattern with no spatial correlation whatsoever. \end{enumerate} As regards $X5$ and $X20$, two scenarios are considered, which represent the two extreme entropy situations: the compact and the random ones. Indeed, when many unordered categories are present over a window, a repulsive pattern is uninteresting, as it would be very hard to distinguish it from a random one. For a similar reason, a multicluster configuration is not built. Hence, eight simulated scenarios are investigated, each replicated $1000$ times. 
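As an illustration of this generation step, the sketch below (in Python; the arrangement rules are crude stand-ins, not the exact scheme used to build the scenarios) draws $N$ values from $Mn(N,p_X)$ and arranges them on the square grid either at random or in a roughly compact fashion.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def generate_grid(p_x, side=50, arrangement="random"):
    """Draw side*side realizations of X from Mn(N, p_X) and arrange them
    on a square grid. 'random' scatters the values; 'compact' is a crude
    stand-in for a clustered pattern, obtained by sorting the values."""
    n = side * side
    counts = rng.multinomial(n, p_x)
    values = np.repeat(np.arange(1, len(p_x) + 1), counts)
    if arrangement == "random":
        rng.shuffle(values)
    elif arrangement == "compact":
        values = np.sort(values)
    return values.reshape(side, side)

# e.g. a binary scenario (X2) with a uniform pmf
grid_random = generate_grid([0.5, 0.5], arrangement="random")
grid_compact = generate_grid([0.5, 0.5], arrangement="compact")
\end{verbatim}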
A dataset generated under the hypothesis of a uniform distribution $U_X$ among the categories is also built as a special case for each of the eight scenarios, as the $1001$-st simulation, with $p(x_i)=1/I$ for every $i$; it is displayed in Figure \ref{fig:1}. For the multicluster configuration, the 25 cluster centroids are, in this special case, also forced to be uniformly distributed over the square window. \begin{figure} \caption{Data generated for all simulation scenarios under a uniform distribution of $X$.} \label{fig:1} \end{figure} \subsection{Computation of entropy components} \label{sec:cumput} Generated data are used to compute quantities for the entropy measures to be assessed and, in particular, probabilities are estimated as proportions of observed data, as follows. When restricting to a regular grid, i.e. the ultimate partition of the observation window, each unit contains one (and only one) realization of $X$ and the pixel size is 1. Let the generic pixel be labelled by $u$, $u=1, \dots, N$, and let $x_u$ denote the value of $X$ in pixel $u$. For Shannon's entropy (\ref{eq:shann}) the probabilities $p(x_i)$ for each category are estimated by the proportion of pixels where $x_i$ is observed: \begin{equation} \widehat{p}(x_i)= \frac{\sum_{u=1}^N\mathbf{1}(x_u=x_i)}{N}. \end{equation} Batty's and \citeauthor{karlstrom}'s entropies cannot be computed directly on the pixel grid, since only one realization of $X$ occurs over each pixel. The studied phenomenon is here defined as the occurrence of 1-valued pixels (black pixels in figures) over a fixed area, i.e. $F=X_1$; indices are computed on data generated for $X2$. The window is partitioned into $G=100$ fixed areas of different sizes, where the size of each area, $T_g$, $g=1, \dots, 100$, is the number of contained pixels. The superimposition of the areas over the data matrices is shown in Figure \ref{fig:batty_unif_part} for the datasets generated under a uniform distribution of $X2$ for the different spatial configurations shown in Figure \ref{fig:1}. The probabilities $p_g$ are estimated in each of the $1000$ simulations as the proportion of 1-valued pixels over the areas: \begin{equation} \widehat{p}_g=\frac{c_g}{C} \end{equation} where $c_g$ is the number of 1-valued pixels in area $g$ and $C$ is the total number of 1-valued pixels, so that $\sum_g \widehat{p}_g=1$. \begin{figure} \caption{Four scenarios under the uniform distribution for $X2$, with partition in 100 areas.} \label{fig:batty_unif_part} \end{figure} For computing \citeauthor{karlstrom}'s entropies, 4 neighbourhood distances are considered between the areas' centroids to quantify $I(\widetilde{p}_g)$: $d_0=0$ (no neighbours other than the area itself), $d_1=2$, $d_2=5$ and $d_3=10$. The definition of the adjacency matrix $A$ based on the fixed distance $d$ and the estimated quantities $\widehat{p}_g$ allow the estimates of $\widetilde{p}_g$ to be computed as averages of the neighbouring estimated probabilities \begin{equation} \widehat{\widetilde{p}}_g=\sum_{g'=1}^G a_{gg'}\widehat{p}_{g'}=\frac{\sum_{g' \in \mathcal{N}(g)}\widehat{p}_{g'}}{|\mathcal{N}(g)|}. \end{equation} For the measures based on the transformation $Z$ of the study variable $X$, the variable $Z$ has a different number $R$ of categories according to the number of categories of $X$ and whether the order is preserved; options for $I$ and $R$ values and the corresponding entropy maxima (i.e. 
the entropy obtained under a uniform distribution for $X$ and $Z$ respectively) are reported in Table \ref{tab:Zcat}. When the order is not preserved, the entropy range is smaller. \begin{table} \caption{\label{tab:Zcat}Categories and entropy maxima for $X$ and $Z$.} \centering \fbox{ \begin{tabular}{c|c|c||c|c|c} \multicolumn{3}{c||}{No of categories} & \multicolumn{3}{c|} {Entropy maxima} \\ \hline $X$ & \multicolumn{2}{|c||}{$Z$} & $X$ & \multicolumn{2}{|c|}{$Z$} \\ \hline $I$ & $R^o$ & $R^{no}$ & $\log(I)$ & $\log(R^o)$ & $\log(R^{no})$ \\ \cline{1-6} 2 & 4 & 3 & 0.69 & 1.38 & 1.10 \\ 5 & 25 & 15 & 1.61 & 3.22 & 2.71 \\ 20 & 400 & 210 & 2.99 & 5.99 & 5.35 \\ \end{tabular}} \end{table} Co-occurrences are built according to the specific adjacency matrix employed in each case, as described in Section \ref{sec:Z}. The cell centroids are used to measure distance between pixels, consequently the distance between contiguous pixels is $1$ and the distance to farther cells along the cardinal directions belongs to the set of integers $\mathbb{Z}^+$. The adjacency matrix $A_k$ is $N \times N$ and $|\mathcal{N}(u)_k|=\sum_{u'=1}^N a_{uu',k}$ is the number of $Z$ observations built using the neighbourhood $\mathcal{N}(u)_k$ of pixel $u$. The rule of moving rightward and downward is adopted along the observation window in order to identify adjacent couples. A general method for estimating $p_{Z|w_k}$ is proposed, which can be applied, after choosing a suitable adjacency matrix, to all measures in Section \ref{sec:Z}. Let $Q_k$ denote the number of observed couples over the dataset for each category $w_k$, corresponding to the sum of all unit values over the matrix $A_k$: $Q_k=\sum_{u=1}^N |\mathcal{N}(u)_k|=\sum_{u=1}^N \sum_{u'=1}^N a_{uu',k}$. All observed $Z|w_k$ over the dataset are arranged in the rows of a $Q_k \times 2$ matrix $Z^{obs}_k=[Z^{(1)}_k, Z^{(2)}_k]$. The first column of $Z^{obs}_k$ is obtained by taking each pixel value and replicating it as many times as the cardinality of its neighbourhood: \begin{equation} Z^{(1)}_k=\begin{bmatrix} x_1 \cdot 1_{|\mathcal{N}(1)_k|}\\ x_2 \cdot 1_{|\mathcal{N}(2)_k|}\\\vdots \\ x_N \cdot 1_{|\mathcal{N}(N)_k|} \end{bmatrix} \label{eq:Zobs_1} \end{equation} where each $1_{|\mathcal{N}(u)_k|}$ is a $|\mathcal{N}(u)_k|$-dimensional vector of 1s. The second column $Z^{(2)}_k$ is built selecting, for each pixel, the neighbouring values via $A_k$. Let us define the $N \times N$ selection matrix $\widetilde{A}_k$, substituting zeros in $A_k$ with missing values. Let us also define $vec(X)$ as a $N \times 1$ vector stacking all realizations of $X$. An element-wise product is run between $vec(X)$ and the $u$-th row of $\widetilde{A}_k$, denoted by $vec(X) \cdot \widetilde{A}_{u., k}$. Thus, a $|\mathcal{N}(u)_k|$-dimensional vector is returned, containing the values of the pixels neighbouring $u$: \begin{equation} Z^{(2)}_k=\begin{bmatrix} vec(X) \cdot \widetilde{A}_{1., k}\\ vec(X) \cdot \widetilde{A}_{2., k}\\ \vdots \\ vec(X) \cdot \widetilde{A}_{N., k} \end{bmatrix}. \label{eq:Zobs_2} \end{equation} The $Q_k$ realizations of $Z^{obs}_k$ present at most $R^{no}$ categories, indexed by $r=1, \dots, R^{no}$. Their relative frequencies are used to compute $\widehat{p}(z_r|w_k)$. The marginal $p_Z$ may be estimated by marginalizing out $W$, or by building a special adjacency matrix $A$ that takes value 1 everywhere except for the main diagonal (such a matrix is indeed the sum of all $A_k$, $k=1, \dots, K$). 
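The estimation step just described can be sketched as follows (in Python, with illustrative names; unordered couples, $m=2$, and a plain double loop over cell pairs are used for clarity, so the sketch is only meant for moderate grid sizes). Each couple is assigned to the distance class containing the Euclidean distance between the two cell centroids, which yields the estimates $\widehat{p}(w_k)=Q_k/Q$ and $\widehat{p}(z_r|w_k)$.
\begin{verbatim}
import numpy as np
from collections import Counter

def conditional_pmfs(grid, breaks):
    """Estimate p(w_k) and p(z_r | w_k) for unordered couples (m = 2).

    grid   : 2-D array of category labels
    breaks : upper bounds of the distance classes, e.g. [1, 2, 5, 10, 20, 30, 71];
             they must cover the maximum distance in the window.
    """
    n_rows, n_cols = grid.shape
    coords = np.array([(i, j) for i in range(n_rows) for j in range(n_cols)])
    values = grid.ravel()
    n = len(values)
    q_k = Counter()                      # Q_k: number of couples per class
    z_counts = [Counter() for _ in breaks]
    for u in range(n):                   # O(n^2) loop over unordered pairs
        for v in range(u + 1, n):
            d = np.hypot(*(coords[v] - coords[u]))
            k = int(np.searchsorted(breaks, d))
            z = tuple(sorted((values[u], values[v])))
            z_counts[k][z] += 1
            q_k[k] += 1
    q = sum(q_k.values())
    p_w = {k: q_k[k] / q for k in q_k}
    p_z_given_w = {k: {z: c / q_k[k] for z, c in z_counts[k].items()}
                   for k in q_k}
    return p_w, p_z_given_w
\end{verbatim}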
The estimated pmf $\widehat{p}_Z$ is used to compute Shannon's entropy $H(Z)$. For the computation of O'Neill's spatial entropy (\ref{eq:oneill}), the above method employs the adjacency matrix $O$; for Leibovici's entropy (\ref{eq:leib}), the adjacency matrix $L_d$ is used and results are shown for $d=2$. When computing the spatial residual entropy (\ref{eq:residZW}), the variable $W$ is built with fixed categories $w_k$: $w_1=[0,1]$, $w_2=]1,2]$, $w_3=]2,5]$, $w_4=]5,10]$, $w_5=]10,20]$, $w_6=]20,30]$ and $w_7=]30, 50\sqrt{2}]$ (where $50\sqrt{2}$ is the diagonal length, i.e. the maximum distance over a square of side 50), covering all possible distances for couples over the dataset. For each distance $w_k$, a specific adjacency matrix $A_k$ is built. Therefore, $K$ different $A_k$ and $Z^{obs}_k$ are built using (\ref{eq:Zobs_1}) and (\ref{eq:Zobs_2}), and $K$ different conditional distributions $\widehat{p}_{Z|w_k}$ are obtained. Finally, an estimate for $p_W$ is needed: for each $k$, $\widehat{p}(w_k)=Q_k/Q$ represents the proportion of couples within distance range $w_k$ with respect to the total number of couples $Q=\sum_k Q_k$. A summary of the characteristics of entropy measures based on $Z$ is shown in Table \ref{tab:Zcat2}, highlighting the specific adjacency matrix for each index. \begin{table} \caption{\label{tab:Zcat2}Peculiarities of entropy measures based on $Z$.} \centering \fbox{ \begin{tabular}{c|c|c|c} Entropy & No cat. $Z$ & Adjacency matrix & No obs. couples \\ \hline $H(Z|O)$ & $R^o$& $O$ (contiguity) & $Q_O=4900$ \\ $H(Z|L_d)$ & $R^o$ & $L_2$ (up to distance $2$) & $Q_{L_2}=14502$ \\ $H(Z|w_k)$ and $PI(Z,w_k)$ & $R^{no}$& $A_1$ (at distance $w_1$) & $Q_1=4900$ \\ & &$\vdots$ & $\vdots$\\ & & $A_7$ (at distance $w_7$) & $Q_7=1191196$ \\ \end{tabular}} \end{table} All the above mentioned indices are computed for each scenario over the 1000 generated datasets, plus the special case of uniform distribution among the $X$ categories. In the presentation of the results, boxplots are employed to summarize the distribution of a specific index; stars highlight results achieved under the uniform distribution of $X$, while the dashed lines mark the indices' maxima. \subsection{Results for Shannon's entropy} \label{sec:sim_resshan} Figure \ref{fig:sh}, left panel, shows Shannon's entropy (\ref{eq:shann}) for $X2$, $X5$ and $X20$. \begin{figure} \caption{Left panel: Shannon's entropy, 1000 simulations; each dashed line corresponds to the index maximum. Right panel: difference between normalized Shannon's entropy of $X$ and $Z$. Each star identifies the entropy value computed on a uniformly distributed $X$. } \label{fig:sh} \end{figure} Since Shannon's entropy is non-spatial, it only depends on the generated outcomes and not on their spatial distribution, therefore there are no distinct measures for the different spatial configurations proposed: entropy only depends on $X$ categories' proportions and not on space. Entropy is higher as the number of categories increases: the greater diversity (i.e. number of categories), the higher entropy. Moreover, the empirical probability intervals do not overlap, therefore the index is effective in distinguishing among contexts with different numbers of categories. Should the normalized version of the index be computed, interpretation would be easier, but it would be impossible to distinguish among $X2$, $X5$ and $X20$. In all cases, the unnormalized entropy maximum is $\log(I)$ and is reached when the distribution of $X$ is uniform. 
Shannon's entropy can be analogously computed on $Z$ without order preservation, so that, being non-spatial, it does not depend on any adjacency matrix. It is computed via the $\widehat{p}_Z$ of Section \ref{sec:cumput} and compared to Shannon's entropy of $X$ in Figure \ref{fig:sh}, right panel. The Figure highlights that $Z$ brings the same information as $X$, since the two normalized entropy measures tend to have the same behaviour and the same range of values; the difference is very small and becomes negligible as $I$ increases. Expression $H(Z)$ is used in Section \ref{sec:sim_resX2} for computing the mutual information. \subsection{Results for binary data} \label{sec:sim_resX2} This Section thoroughly illustrates the performance of the measures of Section \ref{sec:reviewspace} when applied to binary data. According to the data generation description in Section \ref{sec:simdesign}, the variable $X$ assumes values $x_1=1$ (black pixels) or $x_2=2$ (white pixels) and values are drawn from a binomial distribution as a special case of the multinomial distribution. The variable $Z$ with order preservation has $R^o=4$ categories $z_1=(1,1)$, $z_2=(2,2)$, $z_3=(1,2)$ and $z_4=(2,1)$; when order is not preserved, $R^{no}=3$ as the last two categories are indistinguishable. Section \ref{sec:simres_batty} relates to the measures of Section \ref{sec:spatent}, and Section \ref{sec:simres_Z} to those of Section \ref{sec:Z}; in addition, Section \ref{sec:simres_resid} refers to the proposals of Section \ref{sec:nostra}. \subsubsection{Batty's and \citeauthor{karlstrom}'s entropy} \label{sec:simres_batty} Results for Batty's entropy (\ref{eq:spaten}) are shown in Figure \ref{fig:batty_X2}, left panel, for the four spatial configurations described in Section \ref{sec:simdesign}. The dashed line corresponds to the index maximum $\log(T)=7.82$, which may only be reached with a repulsive or random spatial configuration. Batty's entropy measure, which does not make use of an adjacency matrix, is only really able to detect a departure from a random configuration when clustering occurs: the entropy distributions corresponding to compact and multicluster datasets are centred on lower values than the distribution corresponding to random datasets. Conversely, the majority of the repulsive datasets generates entropy values that cannot be distinguished from those coming from random datasets. \begin{figure} \caption{Left panel: Batty's entropy, 1000 simulations. Right panel: \citeauthor{karlstrom}'s entropy for the four neighbourhood options, 1000 simulations.} \label{fig:batty_X2} \end{figure} \citeauthor{karlstrom}'s entropy (\ref{eq:karl}) modifies Batty's entropy by including information about the neighbouring areas via an adjacency matrix and neglecting the sizes $T_g$. The index maximum is $\log(G)=4.61$. When $d_0=0$ (only 1 neighbour for each area, i.e. the area itself), a special case of Batty's entropy without the $T_g$ terms is obtained. Figure \ref{fig:batty_X2}, right panel, shows \citeauthor{karlstrom}'s entropies for the 4 neighbourhood options: \textit{KC-0} denotes the entropy measure computed using $d_0=0$, \textit{KC-1} using $d_1=2$, \textit{KC-2} using $d_2=5$ and \textit{KC-3} using $d_3=10$. As stated in Section \ref{sec:kc}, as the neighbourhood gets smaller, \citeauthor{karlstrom}'s entropy measures tend to the version of Batty's entropy without the $T_g$ terms (i.e. \textit{KC-0}). 
Results are very similar to Batty's ones, both for the 1000 generated datasets and for the special case of the uniform distribution of $X$, though the inclusion of the neighbourhood produces a monotone increase in all entropy values. The multicluster pattern is the most influenced by extending the neighbourhood: as the neighbourhood becomes larger, its entropy distribution gets closer and closer to the result of a random configuration. In this latter case, indeed, neighbourhood plays an important role since, under the random spatial configuration, areas present different spatial behaviours. Conversely, when areas tend to be similar, the inclusion of the neighbourhood does not substantially modify the conclusions. \subsubsection{Entropy measures based on $Z$} \label{sec:simres_Z} Results for O'Neill's spatial entropy (\ref{eq:oneill}) and Leibovici's spatial entropy (\ref{eq:leib}) at distance $d=2$ are displayed in Figure \ref{fig:oneillX2}; O'Neill's entropy is a special case of Leibovici's entropy with $d=1$. \begin{figure} \caption{O'Neill's entropy (left) and Leibovici's entropy with $d=2$ (right), 1000 simulations. The dashed line corresponds to the index maximum; each star identifies the entropy value computed on a uniformly distributed $X$. } \label{fig:oneillX2} \end{figure} The main limit of the measures shown in Figure \ref{fig:oneillX2} is that for the repulsive patterns they produce on average much higher entropy values than for the compact ones. As stated in Section \ref{sec:reviewspace}, a proper spatial entropy measure accounts for the presence of a spatial pattern, but does not distinguish a negative correlation from a positive one. Therefore, results for the compact and repulsive pattern should be more similar than they appear here. This difference is mainly due to order preservation in building couples, which is also the reason why in the special case of uniform distribution for $X$ (star of the second boxplot in Figure \ref{fig:oneillX2}, left panel), the entropy value for the repulsive pattern is low but cannot reach the minimum value 0. For Leibovici's entropy, values for the repulsive patterns are even higher than in O'Neill's entropy, and nearly identical to those of the random patterns. This happens because different types of couples (second-order neighbours along the two cardinal directions, plus diagonals) are counted, increasing heterogeneity among couples. As for the compact and the multicluster data, the two indices behave the same way and return the same amount of information. This states that the choice of $d$ barely influences the entropy values, as long as $d$ is smaller than the cluster size. The more a pattern is compact, the stronger the expectation about the next $Z$ outcome, therefore, the compact configuration witnesses a low degree of surprise, whereas in the multicluster patterns the degree of surprise is higher. For the first three spatial configurations, the index maximum (dashed lines in Figure \ref{fig:oneillX2}) cannot be reached, because the spatial pattern does not allow the occurrence of the uniform distribution of $Z$. It may only be reached by a random pattern with a uniform distribution of $X$, where $H(Z|O)$ and $H(Z|L_2)$ are very similar to $H(Z)$. The random patterns are not influenced by space, therefore all measures lead to the same results irrespective of the distance. If the sign of O'Neill's entropy is changed, Parresol and Edwards' index (\ref{eq:gamma}) is obtained and contagion is measured instead of entropy. 
When (\ref{eq:gamma}) is also normalized, it becomes the Relative Contagion index (\ref{eq:rc}); all the above indices share the same basic idea, therefore the comments to Figure \ref{fig:oneillX2} also hold for them. \subsubsection{Spatial residual entropy and spatial mutual information} \label{sec:simres_resid} Spatial partial residual entropies (\ref{eq:residZW_loc}) constitute the generalization of the entropy measures based on $Z$ shown above, without order preservation. Figure \ref{fig:respart_X2} summarizes results for the partial terms at distances $w_1$ to $w_6$ (results for $w_7$, not reported here, are very similar to those for $w_6$). In the binary case, the panels referring to short distances, where spatial association occurs, are the most relevant. When further distances are taken into account and the ranges covered by the $W$ categories increase (as in the case of the 7 categories defined in Section \ref{sec:simdesign}), differences between the spatial configurations decrease. In the first two panels of Figure \ref{fig:respart_X2} (i.e. at distances $w_1$ and $w_2$) interpreting the role of space is easier than with the entropy measures proposed in Section \ref{sec:spatent}. Indeed, Batty's and \citeauthor{karlstrom}'s entropies only detect that space has a role in clustered patterns, while spatial residual entropy highlights a natural order across the four spatial configurations: the lower the spatial association, the higher the entropy. The partial residual entropy at distance $w_1$ in Figure \ref{fig:respart_X2} (where co-occurrences are couples of contiguous pixels) is O'Neill's entropy (\ref{eq:oneill}), reported in the left panel of Figure \ref{fig:oneillX2}, computed without order preservation. Should the two partial residual entropies at distances $w_1$ and $w_2$ be summed, an unordered version of Leibovici's entropy, already shown in Figure \ref{fig:oneillX2}, right panel, would be obtained. Since in the compact pattern most couples are formed by identical elements, order preservation is irrelevant for this configuration and results are very close to those reached by the previous entropy measures based on $Z$. A substantial improvement is that the difference between entropies in compact and repulsive patterns is smaller than in the case of O'Neill's and Leibovici's measures, while there is an evident difference between the compact and repulsive patterns (the two strongly spatially associated ones) and the multicluster and random patterns (the two less spatially associated ones). This is a nice feature, since entropy measures should detect spatial association, irrespective of its type. Moreover, the entropy value for the uniform dataset with a repulsive pattern (star in the second boxplot of the first panel in Figure \ref{fig:respart_X2}) actually reaches the lower limit 0, since, when order is not preserved, all couples are of the same type in the perfect chessboard scheme. The entropy under a uniform distribution of $X$ at distance $w_2$ (star of the second boxplot of the second panel in Figure \ref{fig:respart_X2}) is larger than at distance $w_1$, as a greater number of couples is considered and couples formed by identical elements are also present. Entropy values tend to be high for the multicluster dataset, closer to the index maximum than in the case of O'Neill's and Leibovici's entropies. 
The entropy values for the random configuration are the highest, but they do not reach the index maximum because, since order is not preserved, a uniform distribution for $Z$ cannot be attained. Partial entropies make it clear that the similarity between random and repulsive patterns in Leibovici's entropy (Figure \ref{fig:oneillX2}, right panel) is mainly due to what happens at distance $w_2$. This can be shown because the partial terms (\ref{eq:residZW_loc}) consider different distance levels separately, while Leibovici's entropy counts all couples within a fixed distance without distinction. As distances $w_3$ to $w_6$ are considered (third to sixth panel in Figure \ref{fig:respart_X2}), entropy values for the compact configuration increase slowly, while for the repulsive pattern they become more and more similar to those of the random pattern, as all $Z$ categories tend to be equally present. The multicluster configuration reaches the highest entropy values at distance categories $w_3$ and $w_4$. Entropy values for the random pattern remain similarly distributed across distances, as expected, since no spatial association should be detected irrespective of the considered distances. \begin{figure} \caption{Spatial partial residual entropies, 1000 simulations. Each dashed line corresponds to the index maximum; each star identifies the entropy value computed on a uniformly distributed $X$. } \label{fig:respart_X2} \end{figure} One focus of this work is on the contribution of the partial terms, rather than on the spatial global residual entropy (\ref{eq:residZW}), which, in accordance with the property of additivity, is a weighted sum of all partial terms (\ref{eq:residZW_loc}) for $w_k$, $k=1, \dots, 7$. For this reason, a graphical representation of the spatial global residual entropy is not shown here. Spatial global residual entropy (\ref{eq:residZW}) contributes to quantifying the role of space: it allows the spatial mutual information (\ref{eq:residZW_mut}) to be computed by subtracting (\ref{eq:residZW}) from $H(Z)$. The proportional version (\ref{eq:mutprop}) of the spatial mutual information is displayed in Figure \ref{fig:mutrel_X2}. \begin{figure} \caption{Proportional spatial mutual information, 1000 simulations. The stars identify the entropy value computed on a uniformly distributed $X$.} \label{fig:mutrel_X2} \end{figure} Proportional spatial mutual information illustrates how the role of space decreases across the four considered spatial configurations: a globally appreciable influence of space is detected for the first two spatial patterns (compact and repulsive), the mutual information for the multicluster dataset is very small, and no mutual information is detected over the random patterns, where no spatial structure is present and space does not help in explaining the data behaviour. This measure also has the advantage of being easily interpretable: for instance, for the compact pattern, it says that nearly 10\% of the entropy is due to space (median of the first boxplot in Figure \ref{fig:mutrel_X2}). More detailed results are obtained by disaggregating the role of space at different distance categories: spatial partial information terms (\ref{eq:partialterm_mut}) are shown in Figure \ref{fig:partmut_X2}. The spatial partial information decreases steadily for the compact patterns as distance increases, and behaves analogously, with smaller values, for the multicluster one. 
For the repulsive pattern, the spatial partial information takes high values for the first two distance ranges and drops from distance $w_3$ on. It is null at any distance range for the random patterns. \begin{figure} \caption{Spatial partial information, 1000 simulations. Each star identifies the entropy value computed on a uniformly distributed $X$.} \label{fig:partmut_X2} \end{figure} \subsection{Extension to data with more than two categories} \label{sec:sim_resX5X20} For $X5$ and $X20$, only the compact and random scenarios are investigated, since, as stated in Section \ref{sec:simdesign}, when many unordered categories are present over a window, a repulsive or a multicluster pattern cannot be distinguished from a random one. When switching from binary data to data with more than two categories, all entropy values increase, since their maxima depend on the number of categories. Unnormalized indices are to be preferred, as they account for diversity: the greater the number of categories, the higher the surprise/information about an outcome. Irrespective of the chosen measure, all entropy values under a uniform distribution of $X$ (stars) are higher than the rest of the distribution (boxplots). This happens because, under the hypothesis of uniform distribution, all categories have the same importance. Conversely, when a random sequence of values from a multinomial distribution is generated, it does not always cover the whole range of potential categories. For instance, with $X20$, in several replicates fewer than 20 categories are actually produced. All entropy measures computed for $X5$ and $X20$ are based on the transformed variable $Z$; entropy maxima and numbers of categories may be retrieved from Table \ref{tab:Zcat}. O'Neill's entropy (\ref{eq:oneill}) and Leibovici's entropy (\ref{eq:leib}) are shown in Figure \ref{fig:oneillX5X20} and have similar behaviours: when the number of categories for $X$ increases, the distributions for the compact and random spatial configurations get farther apart. \begin{figure} \caption{O'Neill's entropy (left) and Leibovici's entropy (right), 1000 simulations. Each dashed line corresponds to the index maximum; each star identifies the entropy value computed on a uniformly distributed $X$. } \label{fig:oneillX5X20} \end{figure} The partial terms (\ref{eq:residZW_loc}) of spatial residual entropy are shown in Figure \ref{fig:respart_X5X20}. As the distance category $w_k$ increases, the two spatial configurations lead to more similar entropy values; this is also due to the increasing range of the distance classes. On the other hand, as the number of categories increases (i.e. as the index maximum increases), the two distributions diverge, which is another desirable feature of the proposed measures. \begin{figure} \caption{Spatial partial residual entropies, 1000 simulations. Each dashed line corresponds to the index maximum; each star identifies the entropy value computed on a uniformly distributed $X$. } \label{fig:respart_X5X20} \end{figure} Proportional spatial mutual information is appreciable over both compact datasets (Figure \ref{fig:mutrel_X5X20}): including space as a variable returns information, meaning that the surprise of observing a certain outcome is reduced. For compact patterns, the distribution variability decreases when switching from $X2$ to $X5$ and $X20$ (Figures \ref{fig:mutrel_X2} and \ref{fig:mutrel_X5X20}); nevertheless, the distributions are centred around similar values, and therefore the role of space is constant across different numbers of categories. 
This is a key advantage of proportional spatial mutual information: the detected role of space is measured taking the number of categories into account. No mutual information is detected over the random patterns, where spatial structures are undetectable, and space does not help in explaining the data heterogeneity. \begin{figure} \caption{Proportional spatial mutual information, 1000 simulations. The stars identify the entropy value computed on a uniformly distributed $X$. } \label{fig:mutrel_X5X20} \end{figure} The same happens for all spatial partial information terms (Figure \ref{fig:partmut_X5X20}) referring to random patterns, irrespective of the number of categories. Rather, a monotone decrease occurs in the values obtained for compact patterns as distance increases. As the number of categories increases, spatial mutual information becomes less variable across generated data (lower interquartile ranges): the measure is very informative in distinguishing among different spatial configurations as regards the role of space. \begin{figure} \caption{Spatial partial information, 1000 simulations. Each star identifies the entropy value computed on a uniformly distributed $X$.} \label{fig:partmut_X5X20} \end{figure} \section{Discussion and conclusions} \label{sec:disc} When accounting for the role of space in determining the heterogeneity of the outcomes of the study variable, any type of spatial association, positive or negative, decreases entropy according to its strength. The sign of association is assessed by spatial correlation indices, which should not be confused with spatial entropy. The main innovation and merit of the two measures proposed in this paper are that they allow the entropy of a categorical variable to be decomposed into a term accounting for the role of space and a noise term summarizing the residual information. Results from the comparative study of Section \ref{sec:sim} show that the entropy measures proposed in this work, i.e. spatial residual entropy and spatial mutual information, are substantially different from the most popular indices. Their characteristics are summarized in Table \ref{tab:specchio}. Spatial residual entropy and spatial mutual information are the only measures that share all the listed desirable features. \begin{table} \caption{\label{tab:specchio}Characteristics of the entropy measures.} \hskip-0cm\fbox{ \begin{tabular}{l|c|c|c|c|c|c} & Variable & Unord. & Info on $X$ cat. & Spatial & Additive & Bivariate\\ \hline Shannon & $X$ & & $\blacksquare$ & & & \\ Batty & $F$ & & & $\blacksquare$ & & \\ \citeauthor{karlstrom} & $F$ & & & $\blacksquare$ & $\blacksquare$ & \\ O'Neill & $Z$ & & $\blacksquare$ & $\blacksquare$ & & \\ Leibovici & $Z$ & & $\blacksquare$ & $\blacksquare$ & & \\ Spatial residual entropy & \multirow{2}{*}{$Z,W$} & \multirow{2}{*}{$\blacksquare$} & \multirow{2}{*}{$\blacksquare$} & \multirow{2}{*}{$\blacksquare$} & \multirow{2}{*}{$\blacksquare$} & \multirow{2}{*}{$\blacksquare$}\\ and mutual information &{}&{}&{}&{}&{}&{}\\ \end{tabular}} \end{table} First of all, the two proposed measures do not preserve the pair order, which is reasonable in spatial analysis. Indeed, spatial phenomena are not usually assumed to have a direction: the primary interest lies in understanding the spatial heterogeneity of data over a specific area, considering neighbourhood in any direction. Neglecting the order allows the presence of spatial patterns to be better distinguished from spatial randomness. 
Unnormalized entropy indices ought to be preferred, in agreement with \cite{parresol}, in order to distinguish between situations with different numbers of categories of $X$ and, consequently, of $Z$. Most spatial association measures, on the contrary, need normalization, since they ought not to depend on the number of data categories. Entropy is not primarily conceived to measure spatial association; rather, it measures the surprise concerning an outcome. Therefore, given a fixed degree of spatial association, the surprise has to be higher for a dataset with more categories. Normalized entropy measures may be preferred only in special cases, to achieve easily interpretable results. In addition, spatial residual entropy and spatial mutual information improve on Batty's (\ref{eq:spaten}) and \citeauthor{karlstrom}'s (\ref{eq:karl}) entropies. They also enjoy \citeauthor{karlstrom}'s property of additivity and consider space, but, since they do not lose information about the variable categories, the two measures answer a wider set of questions. Unlike \citeauthor{karlstrom}'s measure, which relies on a univariate approach based on a single adjacency matrix, the proposed measures consider different matrices to cover the whole range of possible distances and exploit the bivariate properties to unify the partial results. Moreover, spatial residual entropy and spatial mutual information constitute a substantial theoretical improvement with regard to O'Neill's entropy, Parresol and Edwards' index and the Relative Contagion index, since the latter only consider contiguous pairs (at distance 1), while (\ref{eq:residZW}) and (\ref{eq:spatial_mutual}) give a global view of what happens over a dataset, as they also consider distances greater than 1. Leibovici's entropy (\ref{eq:leib}) is a more general measure which extends to further distances. Compared to it, spatial residual entropy and spatial mutual information have additional advantages. First of all, they consider unordered couples, with the aforementioned consequences. Secondly, Leibovici's measure does not allow any deeper inspection, whereas (\ref{eq:residZW}) and (\ref{eq:spatial_mutual}) can investigate what happens at different distance ranges. This enriches the interpretation of results, as knowledge is available about which distances are most important for the spatial association of the studied phenomenon. In the study presented here, the most interesting distances are the small ones. Real-life situations are challenging, as very different spatial configurations can arise; in this regard, (\ref{eq:residZW}) and (\ref{eq:spatial_mutual}) are very flexible in identifying the most informative distances for properly interpreting the phenomenon under study. Indeed, spatial residual entropy is able to detect this aspect by discerning the contribution to entropy of different distances through its partial versions (\ref{eq:residZW_loc}), which can be summarized to form the global measure (\ref{eq:residZW}) or further decomposed as desired. Thus, the categories $w_k$ must be suitably chosen according to the context, as the less interesting distances should be aggregated while the most interesting ones ought to be analyzed in detail. Furthermore, in spatial global residual entropy the definition of equal-sized distance classes is not required, since the weights suitably combine the spatial partial residual entropies to properly form the global version. 
When the spatial global residual entropy (\ref{eq:residZW}) is computed, the probabilities of couples occurring at different distances, $p(z_r|w_k)$, are weighted by $p(w_k)$, so that the relative weight of all distances is respected. Lastly, spatial mutual information is a further tool to exploit for quantifying the overall information brought by the inclusion of space. It is different from zero when it is possible to recognize a (positive or negative) spatial pattern. It can be decomposed in the same way as spatial global residual entropy to investigate the role of space at each distance range. In addition, its ability to detect the role of space is not influenced by the number of categories of $X$, as shown in Section \ref{sec:sim_resX5X20}. This work provides a complete toolbox for analyzing spatial data where distance is believed to play a role in determining the heterogeneity of the outcomes. The first step of this analysis consists in computing Shannon's entropy of $Z$, which is kept as a reference value. Secondly, spatial mutual information is computed and its proportional version identifies the overall role of space. According to this result, distance classes are then suitably defined in order to investigate the partial terms. In particular, partial information terms help to understand whether space plays a relevant role at each distance class, while spatial partial residual entropies focus on the heterogeneity of the study variable due to other sources. The comparison of partial terms across distances is also helpful to grasp the spatial behaviour of the study variable. The proposed entropy measures may be extended to spatially continuous data presenting a finite number of categories, such as marked spatial point patterns. \noindent\textbf{Acknowledgements}\\ This work was developed under the PRIN2015 project 'Environmental processes and human activities: capturing their interactions via statistical methods (EPHASTAT)' funded by MIUR (Italian Ministry of Education, University and Scientific Research). \\ The research work of author Linda Altieri was partially funded by the FIRB 2012 grant [grant number RBFR12URQJ] 'Statistical modelling of environmental phenomena: pollution, meteorology, health and their interactions' for research projects, awarded by the Italian Ministry of Education, Universities and Research. \nocite{*} \end{document}
\begin{document} \begin{frontmatter} \title{Synchronizing Random Almost-Group Automata\thanks{This work is supported by the French National Agency through ANR-10-LABX-58, Russian Foundation for Basic Research, grant no.\ 16-01-00795, and the Competitiveness Enhancement Program of Ural Federal University. A major part of the research was conducted during the scientific collaboration under the Metchnikov program arranged by the French Embassy in Russia.}} \author{Mikhail V. Berlinkov\inst{1}, Cyril Nicaud\inst{2}} \institute{Institute of Natural Sciences and Mathematics, \\ Ural Federal University, Ekaterinburg, Russia, 620062\\ \email{[email protected]} \and LIGM, Universit\'{e} Paris-Est and CNRS, 5 bd Descartes, Champs-sur-Marne, 77454 Marne-la-Vall\'{e}e Cedex 2, France\\ \email{[email protected]} } \maketitle \begin{abstract} In this paper we address the question of synchronizing random automata in the critical setting of almost-group automata. Group automata are automata where all letters act as permutations on the set of states, and they are not synchronizing (unless they have one state). In almost-group automata, all letters act as permutations except one, which acts as a permutation on only $n-1$ states. We prove that this small change is enough for automata to become synchronizing with high probability. More precisely, we establish that the probability that a strongly connected almost-group automaton is not synchronizing is $\frac{2^{k-1}-1}{n^{2(k-1)}}(1+o(1))$, for a $k$-letter alphabet. \end{abstract} \end{frontmatter} \section{Introduction} \label{sec:intro} A deterministic automaton is called \emph{synchronizing} when there exists a word that brings every state to the same state. Such a word is called a \emph{reset} or \emph{synchronizing} word. Synchronizing automata serve as natural models of error-resistant systems, because a reset word allows one to bring the system back into a known state, thus reestablishing control over it. For instance, prefix code decoders can be represented by automata; if the automaton corresponding to a decoder is synchronizing, then decoding a reset word, after an error has occurred in the process, recovers the correct decoding. There has been a lot of research on synchronizing automata since the pioneering works of \v{C}ern\'{y}~\cite{Ce64}. Two questions that attract major interest here are whether an automaton is synchronizing and, if so, how long its shortest reset words are. These questions are also studied from different perspectives, algorithmic as well as structural, and in a variety of settings, e.g.\ for particular classes of automata, in random settings, \emph{etc.} The reader is referred to the survey of Volkov~\cite{Vo08} for a brief introduction to the theory of synchronizing automata. One of the most studied directions of research in this field is the long-standing conjecture of \v{C}ern\'{y}, which states that if an automaton is synchronizing, then it admits a reset word of length at most $(n-1)^2$, where $n$ is the number of states of the automaton. This bound is best possible, as shown by \v{C}ern\'{y}. However, despite many efforts, only cubic upper bounds have been obtained so far~\cite{Pin1983,Szykula2017}. It is the probabilistic setting that interests us in this article. 
During the attempts to tackle the conjecture of \v{C}ern\'{y}, many experiments were carried out, suggesting that random automata are synchronizing with high probability, and that their shortest reset words are quite small in expectation. This was proved quite recently in a series of articles: \begin{itemize} \item Skvortsov and Zaks~\cite{Zaks10} obtained some results for large alphabets (where the number of letters grows with $n$); \item Berlinkov~\cite{Berl2013RandomAut} proved that the probability that a random automaton is not synchronizing is in $\mathcal{O}(n^{-k/2})$, where $k$ is the number of letters, for any $k\geq 2$ (this bound is tight for $k=2$); \item Nicaud~\cite{FastSyn} proved that with high probability a random automaton admits a reset word of length $\mathcal{O}(n\log^3 n)$, for $k\geq 2$ (but with less precise error terms than in~\cite{Berl2013RandomAut}). \end{itemize} All these results hold for the \emph{uniform distribution} on the set of deterministic and complete automata with $n$ states on an alphabet of size $k$, where all automata have the same probability; it is, indeed, the first probability distribution to study. The reader is referred to the survey~\cite{RandomAutSurvey} for more information about random deterministic automata. In this article we study a distribution on a restricted set of deterministic automata, the \emph{almost-group automata}, which will be defined later in this introduction. In order to motivate our choice, we first need to outline the main features of the uniform distribution on deterministic automata and how they were used in the proofs of the articles cited above. In a deterministic and complete automaton, one can consider each letter as a map from the set of states $Q$ to itself, which is called its \emph{action}. The action of a given letter in a uniform random automaton is a uniform random mapping from $Q$ to $Q$. Properties of uniform random mappings have long been studied and most of their typical\footnote{In all the informal statements of this article, \emph{typical} means \emph{with high probability} as the size of the object (cardinality of the set, number of states of the automaton, ...) tends to infinity.} statistics are well known. The \emph{functional graph} proved to be a useful tool to describe a mapping; it is the directed graph with vertex set $Q$, built from a mapping $f:Q\rightarrow Q$ by adding an edge $i\rightarrow j$ whenever $j=f(i)$. Such a graph can be decomposed as a set of cycles of trees. The vertices that lie on a cycle are exactly the elements $q\in Q$ such that $f^\ell(q)=q$ for some positive $\ell$; they are called \emph{cyclic vertices}. The expected number of cyclic vertices in a uniform random mapping on a set of size $n$ is in $\Theta(\sqrt{n})$. This is used in~\cite{FastSyn} and~\cite{Berl2013RandomAut} to obtain the synchronization of most automata. The intuitive idea is that, in a uniform random automaton, the set of states already shrinks to a much smaller set after reading $a^n$; this gives enough leverage, combined with the action of the other letters, to fully synchronize a typical automaton. In a nutshell, uniform random automata are made of uniform random mappings, and each uniform random mapping is already likely to synchronize most of the states, due to its inherent typical properties. At this point, it seems natural to look for ``harder'' random instances with regard to synchronization, a question commonly asked when the authors presented their previous works. 
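To make the $\Theta(\sqrt{n})$ behaviour of cyclic vertices concrete, here is a minimal simulation sketch (in Python; the sample sizes and the helper function are our own illustration, not part of the cited proofs) that draws uniform random mappings and counts their cyclic vertices.
\begin{verbatim}
# Sketch: average number of cyclic vertices of a uniform random mapping
# of {0,...,n-1}, compared with sqrt(n).  A vertex q is cyclic when
# f^l(q) = q for some l >= 1.
import random
from math import sqrt

def cyclic_vertices(f):
    cyclic = set()
    for q in range(len(f)):
        path, x = set(), q
        while x not in path:      # follow the path from q until it loops
            path.add(x)
            x = f[x]
        cycle, y = {x}, f[x]      # x lies on the cycle reached from q
        while y != x:
            cycle.add(y)
            y = f[y]
        cyclic |= cycle
    return cyclic

for n in (100, 400, 1600):
    trials = 200
    avg = sum(len(cyclic_vertices([random.randrange(n) for _ in range(n)]))
              for _ in range(trials)) / trials
    print(n, round(avg, 1), round(sqrt(n), 1))
\end{verbatim}
The observed averages grow like a constant times $\sqrt{n}$, which is precisely the shrinking effect exploited in~\cite{FastSyn} and~\cite{Berl2013RandomAut}.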
In this article, to prevent easy synchronization from the separate action of the letters, we propose to study what we call \emph{almost-group automata}, where the action of each letter is a permutation, except for one of them which has only one non-cyclic vertex. An example of such an automaton is depicted in Fig.~\ref{fig:intro}. \begin{figure} \caption{An almost-group automaton with $7$ states. The action of $b$ is a permutation. The action of $a$ is not, as $1$ has no preimage by $a$; but if state $1$ is removed, $a$ acts as a permutation on the remaining states.} \label{fig:intro} \end{figure} Since a group automaton with more than one state cannot be synchronizing, almost-group automata can be seen as the automata with the maximum number of cyclic states (considering all their letters) that can be synchronizing. The question we investigate in this article is the following. \noindent\textbf{Question: }For the uniform distribution, what is the probability that a strongly connected almost-group automaton is synchronizing? For this question, we consider automata with $n$ states on a $k$-letter alphabet, with $k\geq 2$, and try to answer asymptotically as $n$ tends to infinity. We prove that such an automaton is synchronizing with probability tending to $1$. We also provide a precise asymptotic estimation of the probability that it is not synchronizing. In other words, one can state our result as follows: group automata are always non-synchronizing when they have at least two states, but if one allows just one letter to act non-bijectively on just one state, then the automaton is synchronizing with high probability. This suggests that, from a probabilistic point of view, it is very difficult to achieve non-synchronization. This article starts by recalling some basic definitions and notations in Section~\ref{sec:basicdefs}. Then some interesting properties of this set of automata regarding synchronization are described in Section~\ref{sec:almostgroup}. Finally, we rely on these properties and some elementary counting techniques to establish our result in Section~\ref{sec:counting}. \section{Basic Definitions and Notations} \label{sec:basicdefs} \noindent\textbf{Automata and synchronization.} Throughout the article, we consider automata on a fixed $k$-letter alphabet $A=\{a_0,\ldots,a_{k-1}\}$. Since we are only interested in synchronizing properties, we only focus on the transition structure of automata: we specify neither initial nor final states, and will never actually consider recognized languages in the sequel. From now on a \emph{deterministic and complete automaton} (DFA) $\mathcal{A}$ on the alphabet $A$ is just a pair $(Q,\cdot)$, where $Q$ is a non-empty finite set of \emph{states} and $\cdot$, the \emph{transition mapping}, is a mapping from $Q\times A$ to $Q$, where the image of $(q,a)\in Q\times A$ is denoted $q\cdot a$. It is inductively extended to a mapping from $Q\times A^*$ to $Q$ by setting $q\cdot \varepsilon = q$ and $q\cdot ua=(q\cdot u)\cdot a$, for any word $u\in A^*$ and any letter $a\in A$, where $\varepsilon$ denotes the empty word. Let $\mathcal{A}=(Q,\cdot)$ be a DFA. A word $u\in A^*$ is a \emph{synchronizing word} or a \emph{reset word} if for every $q,q'\in Q$, $q\cdot u=q'\cdot u$. An automaton is \emph{synchronizing} if it admits a synchronizing word. A subset of states $S\subseteq Q$ is \emph{synchronized} by a word $u\in A^*$ if $|S\cdot u|=1$. 
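As an aside, whether a given DFA is synchronizing can be tested effectively via the classical pairwise criterion: an automaton is synchronizing if and only if every pair of states is mapped to a single state by some word. The sketch below (in Python; the encoding of the automaton and the function names are our own) checks this with a backward breadth-first search on unordered pairs.
\begin{verbatim}
# Sketch: test synchronization via the pair criterion.  The DFA is given
# as a dict delta with delta[letter][q] = q . letter, states 0..n-1.
from collections import deque
from itertools import combinations

def is_synchronizing(delta, n):
    letters = list(delta)
    # Backward BFS on unordered pairs, starting from the pairs that are
    # merged by a single letter.
    pre = {a: [[] for _ in range(n)] for a in letters}
    for a in letters:
        for q in range(n):
            pre[a][delta[a][q]].append(q)
    merged = {pq for pq in combinations(range(n), 2)
              if any(delta[a][pq[0]] == delta[a][pq[1]] for a in letters)}
    queue = deque(merged)
    while queue:
        p, q = queue.popleft()
        for a in letters:                  # pull the pair back by each letter
            for p2 in pre[a][p]:
                for q2 in pre[a][q]:
                    c = (min(p2, q2), max(p2, q2))
                    if p2 != q2 and c not in merged:
                        merged.add(c); queue.append(c)
    # Synchronizing iff every pair of states can be merged by some word.
    return len(merged) == n * (n - 1) // 2

# Example: the 4-state Cerny automaton, a well-known synchronizing DFA.
delta = {"a": [1, 2, 3, 0], "b": [1, 1, 2, 3]}
print(is_synchronizing(delta, 4))   # True
\end{verbatim}
The search runs over the quadratically many pairs of states, which is amply sufficient for the small experiments sketched later on.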
Observe that if an automaton contains two or more terminal strongly connected components\footnote{A strongly connected component $S$ is terminal when $S\cdot u\subseteq S$ for every $u\in A^*$.}, then it is not synchronizing. Moreover, if it has only one terminal strongly connected component $S$, then it is synchronizing if and only if $S$ is synchronized by some word $u$. For this reason, most works on synchronization focus on strongly connected automata, and this paper is no exception. \noindent\textbf{Almost-group automata.} Let $\mathcal{S}_n$ be the set of all permutations of $E_n=\{0,\ldots,n-1\}$. A \emph{cyclic point} of a mapping $f$ is an element $x$ such that $f^\ell(x)=x$ for some positive $\ell$. An \emph{almost-permutation} of $E_n$ is a mapping from $E_n$ to itself with exactly $n-1$ cyclic points; its unique non-cyclic point is called its \emph{dangling point} (or \emph{dangling state} later on, when we use this notion for automata). Equivalently, an almost-permutation is a mapping that acts as a permutation on a subset of size $n-1$ of $E_n$ and that is not a permutation. Let $\mathcal{S}'_n$ denote the set of almost-permutations on $E_n$. An \emph{almost-group automaton} is a DFA such that one letter acts as an almost-permutation and all the others as permutations. An example of such an automaton is given in Fig.~\ref{fig:intro}. For counting reasons, we need to normalize the automata, and define $\mathcal{G}_{n,k}$ as the set of all almost-group automata on the alphabet $\{a_0,\ldots,a_{k-1}\}$ whose state set is $E_n$ and such that $a_0$ is the almost-permutation letter. \noindent\textbf{Probabilities.} In this article, we equip non-empty finite sets with the uniform distribution, where all elements have the same probability. The sets under consideration are often sequences of sets, such as $\mathcal{S}_n$; by abuse of notation, we say that a property \emph{holds with high probability} for $\mathcal{S}_n$ when the probability that it holds, which is defined for every $n$, tends to $1$ as $n$ tends to infinity. \section{Synchronization of Almost-Group Automata} \label{sec:almostgroup} In this section we introduce the main tools that we use to describe the structure of synchronizing and of non-synchronizing almost-group automata. The notion of a \emph{stable pair}, introduced by Kari~\cite{KariStable02}, has proved to be fruitful, most notably in the hands of Trahtman, who managed to use it for solving the famous \emph{Road Coloring Problem}~\cite{TRRCP08}. We make use of this definition in our proof as well, along with some ideas coming from~\cite{TRRCP08}. A pair of states $\{p,q\}$ is called \emph{stable} if for every word $u$ there is a word $v$ such that $p\cdot uv=q\cdot uv$. The \emph{stability} relation, given by the set of stable pairs joined with the diagonal set $\{(p,p) \mid p \in Q\}$, is invariant under the actions of the letters and complete whenever $\mathcal{A}$ is synchronizing. The definition on pairs is sound as stability is a symmetric binary relation. It is also transitive, hence an equivalence relation on $Q$, and in fact a congruence, i.e.\ invariant under the actions of the letters. Notice also that an automaton is synchronizing if and only if its stability relation is complete, that is, all pairs are stable. Because of that, if an automaton is not synchronizing and admits a stable pair, then one can consider a non-trivial factorization of the automaton by the stability relation. 
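The stability relation itself can be computed directly from the definitions: a pair $\{p,q\}$ is stable exactly when no pair reachable from it in the pair automaton is a pair that no word can merge (a \emph{deadlock}, in the terminology recalled in the next section). The following brute-force sketch (in Python; the representation and the function names are our own, and it is only meant for small automata) returns the set of stable pairs, from which the stable classes and the factor automaton discussed above can be read off.
\begin{verbatim}
# Sketch: compute the stability relation of a DFA (states 0..n-1,
# delta[letter][q] = q . letter).  Brute force over pairs, small n only.
from itertools import combinations

def reachable_pairs(delta, p, q):
    """All unordered pairs {p.u, q.u} (u any word) that are not merged."""
    letters, seen, stack = list(delta), {(p, q)}, [(p, q)]
    while stack:
        x, y = stack.pop()
        for a in letters:
            nx, ny = delta[a][x], delta[a][y]
            if nx != ny:
                c = (min(nx, ny), max(nx, ny))
                if c not in seen:
                    seen.add(c); stack.append(c)
    return seen

def stability_relation(delta, n):
    letters = list(delta)
    pairs = list(combinations(range(n), 2))
    def mergeable(c):   # some reachable pair is merged by a single letter
        return any(delta[a][x] == delta[a][y]
                   for x, y in reachable_pairs(delta, *c) for a in letters)
    deadlocks = {c for c in pairs if not mergeable(c)}
    # {p,q} is stable iff no deadlock is reachable from it.
    return {c for c in pairs if not (reachable_pairs(delta, *c) & deadlocks)}
\end{verbatim}
On a synchronizing automaton every pair is returned as stable; on a non-synchronizing automaton that admits a stable pair, the returned relation induces the non-trivial factorization mentioned above.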
So, we aim at characterizing stable pairs in a strongly-connected non-synchronizing almost-group automaton, in order to show that such a factorization has only a slim chance to appear when switching to probabilities. For this purpose, we need the definition of a \emph{deadlock}, which is a pair that cannot be merged into one state by any word (in a sense, the opposite of a stable pair). A subset $S \subseteq Q$ is called an $F$-clique of $\mathcal{A}$ if it is a set of maximum size such that each pair of states from $S$ is a deadlock. It follows from the definition that all $F$-cliques have the same size and that the image of an $F$-clique by a letter or a word is also an $F$-clique. Let us reformulate~\cite[Lemma~2]{TRRCP08} for our purposes and present a proof for the sake of completeness. \begin{lemma} \label{lem:f-clique-diff} If $S$ and $T$ are two $F$-cliques such that $S \setminus T = \{p\}$ and $T \setminus S= \{q\}$, for some states $p$ and $q$, then $\{p,q\}$ is a stable pair. \end{lemma} \begin{proof} By contradiction, suppose there is a word $u$ such that $\{p\cdot u,q\cdot u\}$ is a deadlock. Then all pairs of $(S \cup T)\cdot u$ are deadlocks, so $(S \cup T)\cdot u$ is a set of pairwise deadlocks. Since $p\cdot u \neq q\cdot u$, we have $|(S\cup T)\cdot u| = |S \cup T| = |S| + 1 > |S|$, contradicting the maximality of $F$-cliques.\qed \end{proof} \begin{lemma} \label{lem:stable_pair} Each strongly-connected almost-group automaton $\mathcal{A} \in \mathcal{G}_{n,k}$ with at least two states admits a stable pair that contains the dangling state and is synchronized by $a_0$. \end{lemma} \begin{proof} If $\mathcal{A}$ is synchronizing, then we are done because all pairs are stable. In the opposite case, there must be an $F$-clique $F_1$ of size at least two. Let $p_0$ be the dangling state (which is not permuted by $a_0$) and let $d$ be the product of all cycle lengths of $a_0$. Since $\mathcal{A}$ is strongly-connected, there is a word $u$ such that $p_0 \in F_1\cdot u$. By the property of $F$-cliques, $F_2 = F_1\cdot u$ and $F_3 = F_2\cdot a_0^{d}$ are $F$-cliques too. Notice that $p_0$ is the only state which does not belong to the cycles of $a_0$, and all the cycle states remain intact under the action of $a_0^d$, by construction of $d$. Hence $F_2 \setminus F_3 = \{p_0\}$ and $F_3 \setminus F_2 = \{p_0\cdot a_0^d\}$. Therefore, by Lemma~\ref{lem:f-clique-diff}, $\{p_0,p_0\cdot a_0^d\}$ is a stable pair. This concludes the proof since $p_0\cdot a_0 = p_0\cdot a_0^{d+1}$.\qed \end{proof} To characterize elements of $\mathcal{G}_{n,k}$ that are not synchronizing, we build their \emph{factor automata}, which are defined as follows. Let $\mathcal{A}$ be a DFA with stability relation $\rho$. Let $\mathcal{C}=\{C_1,\ldots, C_\ell\}$ denote its classes for $\rho$. The \emph{factor automaton} of $\mathcal{A}$, denoted by $\mathcal{A} / \rho$, is the automaton with set of states $\mathcal{C}$ and transition function defined by $C_i\cdot a = C_j$ in $\mathcal{A} / \rho$ if and only if $C_i\cdot a \subseteq C_j$ in $\mathcal{A}$, or equivalently, if and only if there exists $q\in C_i$ such that $q\cdot a\in C_j$ in $\mathcal{A}$. \begin{lemma} \label{lem:factor_automaton} If $\mathcal{A}\in\mathcal{G}_{n,k}$ is strongly-connected, then its factor automaton $\mathcal{A} / \rho$ is a strongly-connected permutation automaton. \end{lemma} \begin{proof} Strong-connectivity follows directly from the definition. 
If one of the letters were not a permutation on the factor automaton, then there would be a stable class $S$ of $\mathcal{A}$ with no incoming transition by this letter in $\mathcal{A}/\rho$; it would follow that no state of $S$ has an incoming transition by this letter in $\mathcal{A}$ either. However, this may happen only for the letter $a_0$ and only for the (unique) dangling state $p_0$. By Lemma~\ref{lem:stable_pair}, the dangling state $p_0$ belongs to a stable pair, hence there is another state in $S$: this contradicts the fact that $p_0$ is the only state with no incoming transition by $a_0$.\qed \end{proof} \begin{lemma} \label{lem:size_of_components} Let $\mathcal{A}\in\mathcal{G}_{n,k}$ be strongly-connected and let $D$ be the stable class of $\mathcal{A}$ that contains the dangling state $p_0$. Then the set of stable classes can be divided into two disjoint subsets $\mathcal{B}$ and $\mathcal{S}$ (the latter possibly empty) such that \begin{itemize} \item[$\bullet$] $D \in \mathcal{B}$ and $|B|=|D|$ for every $B \in \mathcal{B}$; \item[$\bullet$] $|S|=|D|-1$ for every $S \in \mathcal{S}$; \item[$\bullet$] the $a_0$-cycle of $\mathcal{A} / \rho$ that contains $D$ only contains elements of $\mathcal{S}$ besides $D$; \item[$\bullet$] every other cycle in $\mathcal{A} / \rho$ lies entirely in either $\mathcal{B}$ or $\mathcal{S}$. \end{itemize} \end{lemma} \begin{proof} Since stable pairs are mapped to stable pairs, the image of a stable class by any letter must be included in a stable class. Recall that by Lemma~\ref{lem:factor_automaton} all letters in $\mathcal{A} / \rho$ act as permutations on the stable classes. Our proof consists in examining the different cycles of the group automaton $\mathcal{A}/\rho$. Let us consider any cycle of a letter $a$ in $\mathcal{A} / \rho$, made of the stable classes $C_0, C_1, \dots, C_{r-1}$ with $C_j\cdot a \subseteq C_{j+1 \pmod{r}}$, for any $j\in\{0,\ldots, r-1\}$. If $a\neq a_0$, then the letter $a$ acts as a permutation in $\mathcal{A}$, and for each $j$ we have $|C_j| \leq |C_{j+1 \pmod{r}}|$, since $a$ does not merge pairs of states. Therefore, \[ |C_0| \leq |C_1| \leq \dots \leq |C_{r-1}| \leq |C_0|. \] As a direct consequence, all the $C_j$ have the same cardinality. If $a = a_0$, then observe that the same argument can be used when one removes the dangling state $p_0$ and its outgoing transition by $a_0$: the action of $a_0$ on $Q\setminus\{p_0\}$ becomes a well-defined permutation. Hence, if this cycle does not degenerate to a simple loop consisting only of $D$, then all the other elements of the cycle are stable classes of size $|D|-1$. And this is the only place where changes of size may happen in $\mathcal{A}/\rho$. The lemma follows from the strong-connectivity of $\mathcal{A} / \rho$. \qed \end{proof} Notice that an almost-group automaton is non-synchronizing if and only if it has at least two stable classes. The following theorem is a consequence of this fact and of Lemma~\ref{lem:size_of_components}. \begin{theorem} \label{thm:non-synch-criterion} A strongly-connected almost-group automaton $\mathcal{A}$ is non-synchro\-nizing if and only if the partition described in Lemma~\ref{lem:size_of_components} satisfies $|\mathcal{B} \cup \mathcal{S}|>1$. \end{theorem} \section{Counting Non-synchronizing Almost-Group Automata} \label{sec:counting} In this section, we use counting arguments to establish our main result: a precise estimation of the asymptotic number of strongly connected almost-group automata that are not synchronizing. 
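Before turning to the formal counting, the basic enumeration it relies on, namely Lemma~\ref{lem:ag automata} below, which states that there are $(n-1)\,n!$ almost-permutations of $E_n$, can be checked by brute force for small $n$; the following sketch (in Python, with helper names of our own) enumerates all $n^n$ mappings and counts those with exactly $n-1$ cyclic points.
\begin{verbatim}
# Sketch: brute-force check that E_n has (n-1) * n! almost-permutations,
# i.e. mappings with exactly n-1 cyclic points (small n only: n^n mappings).
from itertools import product
from math import factorial

def is_cyclic(f, q):
    # q is cyclic iff f^l(q) = q for some l in {1,...,n}
    x = f[q]
    for _ in range(len(f)):
        if x == q:
            return True
        x = f[x]
    return False

for n in range(2, 6):
    count = sum(1 for f in product(range(n), repeat=n)
                if sum(is_cyclic(f, q) for q in range(n)) == n - 1)
    print(n, count, (n - 1) * factorial(n))
\end{verbatim}
For $n=2,\dots,5$ the two printed numbers coincide, as expected.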
Recall that our working alphabet is $A=\{a_0,\ldots,a_{k-1}\}$, that $E_n=\{0,\ldots,n-1\}$ and that $\mathcal{G}_{n,k}$ is the set of almost-group automata on $A$ with set of states $E_n$. Our first counting lemma is immediate. \begin{lemma}\label{lem:ag automata} For any $n\geq 1$, there are exactly $(n-1) n!$ almost-permutations of $E_n$. The number of elements of $\mathcal{G}_{n,k}$ is therefore equal to $(n-1)n!^k$. \end{lemma} \begin{proof} An almost-permutation of $E_n$ is characterized by its element with no preimage $x_0$, the way it permutes $E_n\setminus\{x_0\}$ and the image of $x_0$ in $E_n\setminus\{x_0\}$. Since there are $n$ choices for $x_0$, $(n-1)!$ ways to permute the other elements and $n-1$ choices for the image of $x_0$, the result follows.\qed \end{proof} \subsection{Strong-Connectivity} Our computations below focus on strong-connectivity. We shall need an estimation of the number of strongly connected group automata and almost-group automata. These results are given in Lemmas~\ref{lem:sc_group_automata} and \ref{lem:sc_automata}. The proofs of these lemmas are essentially folklore, so we moved them to the Appendix (Section~\ref{sec:appendix}) to fit into the space limit. \begin{restatable}{lemma}{primelemma} \label{lem:sc_group_automata} There are at most $n(n-1)!^k(1+o(1))$ group automata with set of states $E_n$ that are not strongly-connected. Hence, there are $n!^k(1+o(n^{1-k}))$ strongly-connected group automata. \end{restatable} \begin{restatable}{lemma}{secprimelemma} \label{lem:sc_automata} The number of not strongly-connected almost-group automata is at most $2(n-1)n(n-1)!^{k}(1+o(1))$. Hence, almost-group automata are strongly connected with high probability: there are $(n-1)n!^k(1+o(n^{1-k}))$ strongly connected elements in $\mathcal{G}_{n,k}$. \end{restatable} \subsection{Non-synchronizing Almost-Group Automata: a Lower Bound} In this section we give a lower bound on the number of strongly connected elements of $\mathcal{G}_{n,k}$ that are not synchronizing. In order to do so, we build a sufficiently large family of automata of this kind. The construction of this family is intuitively driven by the structure given in Lemma~\ref{lem:size_of_components}, but the formal details of the construction can be carried out without mentioning this structure. For $n\geq 3$, let $\mathcal{F}_{n,k}$ be the subset of $\mathcal{G}_{n,k}$ made of the almost-group automata on $A$ with set of states $E_n$ such that: \begin{enumerate} \item there exists a state $p$, distinct from the dangling state $p_0$, such that for every letter $a\neq a_0$, either $p\cdot a = p_0$ and $p_0\cdot a=p$, or $p\cdot a = p$ and $p_0\cdot a=p_0$; \item for at least one letter $a\neq a_0$, we have $p\cdot a = p_0$ and $p_0\cdot a=p$; \item there exists a state $q\in Q'=E_n\setminus\{p,p_0\}$ such that the action of $a_0$ on $E_n\setminus\{p_0\}$ is a permutation with $q$ being the image of $p$; \item the image of the dangling state by $a_0$ is $p_0\cdot a_0 = q$; \item let $q'$ be the preimage of $p$ by $a_0$; if one removes the states $p$ and $p_0$ and sets $q'\cdot a_0=q$, then the resulting automaton is a strongly connected group automaton. \end{enumerate} The structure of such an automaton is depicted in Fig.~\ref{fig:Fnk}, and a concrete instance is built in the sketch below. Clearly from the definition, an element of $\mathcal{F}_{n,k}$ is a strongly connected almost-group automaton with dangling state $p_0$. 
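For illustration, the sketch below (in Python) builds one element of $\mathcal{F}_{n,k}$ for $k=2$, with the concrete choices $p_0=0$, $p=1$, $q=2$ and an $a_0$-cycle through the remaining states (these specific choices, as well as the function names, are ours), and verifies with the standard pairwise test that the resulting automaton is not synchronizing, in agreement with Lemma~\ref{lem:Fnk not synchronizing} below.
\begin{verbatim}
# Sketch: build an element of F_{n,k} for k = 2 and check, via the pairwise
# criterion, that it is not synchronizing.  States are 0..n-1, with the
# illustrative choices p0 = 0 (dangling), p = 1, q = 2.
from collections import deque
from itertools import combinations

def build_F(n):                       # requires n >= 3
    a0 = [0] * n
    a0[0] = 2                         # p0 . a0 = q            (condition 4)
    for x in range(1, n):             # a0 cycles 1 -> 2 -> ... -> n-1 -> 1,
        a0[x] = x + 1 if x < n - 1 else 1        # so p . a0 = q (condition 3)
    a1 = list(range(n))
    a1[0], a1[1] = 1, 0               # a1 swaps p0 and p     (conditions 1-2)
    return {"a0": a0, "a1": a1}       # removing p0, p and redirecting q'
                                      # leaves the cycle 2 -> ... -> n-1 -> 2,
                                      # a strongly connected group automaton
                                      # (condition 5)

def is_synchronizing(delta, n):       # backward BFS on unordered pairs
    letters = list(delta)
    pre = {a: [[] for _ in range(n)] for a in letters}
    for a in letters:
        for s in range(n):
            pre[a][delta[a][s]].append(s)
    merged = {pq for pq in combinations(range(n), 2)
              if any(delta[a][pq[0]] == delta[a][pq[1]] for a in letters)}
    queue = deque(merged)
    while queue:
        x, y = queue.popleft()
        for a in letters:
            for x2 in pre[a][x]:
                for y2 in pre[a][y]:
                    c = (min(x2, y2), max(x2, y2))
                    if x2 != y2 and c not in merged:
                        merged.add(c); queue.append(c)
    return len(merged) == n * (n - 1) // 2

for n in (4, 7, 12):
    print(n, is_synchronizing(build_F(n), n))   # False in each case
\end{verbatim}
Applying the same test to uniformly sampled strongly connected elements of $\mathcal{G}_{n,k}$ almost always yields a synchronizing automaton, which is the phenomenon quantified in Theorem~\ref{thm:main}.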
\begin{figure} \caption{The shape of an element of $\mathcal{F}_{n,k}$.} \label{fig:Fnk} \end{figure} \begin{lemma}\label{lem:Fnk not synchronizing} For every $n\geq 3$, every automaton of $\mathcal{F}_{n,k}$ is not synchronizing. \end{lemma} \begin{proof} First observe that $\{p_0,p\}$ is the only pair that can be synchronized by reading a single letter, and this letter has to be $a_0$. The preimage of $\{p_0,p\}$ by a letter $a \neq a_0$ is $\{p_0,p\}$ itself, and its preimage by $a_0$ is the singleton $\{q'\}$. Hence no other pair can be mapped onto $\{p_0,p\}$, and thus no pair other than $\{p_0,p\}$ can be synchronized by any word. \qed \end{proof} \begin{lemma} \label{lem:lower_bound} There are $(2^{k-1}-1)n(n-1)(n-2)(n-2)!^{k}(1+o(n^{1-k}))$ elements in $\mathcal{F}_{n,k}$. Thus there are at least that many strongly connected non-synchronizing almost-group automata. \end{lemma} \begin{proof} From the definition of $\mathcal{F}_{n,k}$, we observe that there are $n(n-1)(n-2)$ ways to choose $p_0$, $p$ and $q$. Once this is done, we choose any strongly connected group automaton $\mathcal{A}'$ with $n-2$ states on $E_n\setminus\{p_0,p\}$; there are $(n-2)!^k(1+o(n^{1-k}))$ ways to do so according to Lemma~\ref{lem:sc_group_automata}. We then change the transition from the preimage $q'$ of $q$ by $a_0$ by setting $q'\cdot a_0 = p$. We set $p\cdot a_0=p_0\cdot a_0 =q$. Finally we choose the actions of the letters $a\in A\setminus\{a_0\}$ on $\{p_0,p\}$ in one of the $2^{k-1}-1$ possible ways, as at least one of them is not the identity. This concludes the proof, since all the elements of $\mathcal{F}_{n,k}$ are built exactly once this way.\qed \end{proof} Observe that, using the notations of Lemma~\ref{lem:size_of_components}, an element of $\mathcal{F}_{n,k}$ consists of exactly one stable class $\{p_0,p\}$ in $\mathcal{B}$ and $n-2$ stable classes of size $1$ in $\mathcal{S}$. \subsection{Non-synchronizing Almost-Group Automata: an Upper Bound} In this section, we upper bound the number of non-synchronizing strongly-connected elements of $\mathcal{G}_{n,k}$ using the characterization of Lemma~\ref{lem:size_of_components}. In the sequel, we freely use the notations of this lemma (the sets $D$, $\mathcal{B}$, $\mathcal{S}$, \ldots). Let $b\geq 1$, $s\geq 0$ and $\ell\geq 1$ be three integers such that $(\ell+1)b + \ell s = n$. Let $\mathcal{G}_{n,k}(b,s,\ell)$ denote the subset of $\mathcal{G}_{n,k}$ made of the automata such that $|\mathcal{B}|=b$, $|\mathcal{S}|=s$ and $|D|=\ell+1$. \begin{lemma} \label{lem:main_bound} The number of non-synchronizing strongly-connected elements of $\mathcal{G}_{n,k}(b,s,\ell)$ is at most \[ \begin{cases} n! (n-2)!^{k-1}(n-2)(2^{k-1}-1) & \text{if }b=1,\,s=n-2,\text{ and}\ \ell=1,\\ n! \max(1,s) \ell\big(b!s!(\ell+1)!^{b}\ell!^{s}\big)^{k-1}&\text{otherwise}. \end{cases} \] \end{lemma} \begin{proof} Our proof consists in counting the number of ways to build, step by step, an element of $\mathcal{G}_{n,k}(b,s,\ell)$. Firstly, by elementary computations, one can easily verify that the number of ways to split $E_n$ into $b$ subsets of size $\ell+1$ and $s$ subsets of size $\ell$ is exactly \begin{equation}\label{eq:count_partitioning} \frac{n!}{(\ell+1)!^{b}\ell!^{s}b!s!}. \end{equation} Secondly, let us count the number of ways to define the transitions at the level of the factor automaton, i.e. between stable classes, as follows: \begin{itemize} \item Choose a permutation on $\mathcal{B}$ in $b!$ ways and on $\mathcal{S}$ in $s!$ ways for each of the $k-1$ letters $a \neq a_0$. 
\item Choose which stable class of $\mathcal{B}$ is the class $D$, i.e. the one containing the dangling state $p_0$, amongst the $b$ possibilities. \item Choose a permutation for $a_0$ on the $b-1$ classes of $\mathcal{B} \setminus\{D\}$ in $(b-1)!$ ways. \item If $s\neq0$, choose one of the $s!$ permutations of $\mathcal{S}$ for the action of $a_0$ on these classes, then alter the action of $a_0$ in the following way: choose the image $D'$ of $D$ by $a_0$ in $\mathcal{S}$ in $s$ ways, then insert $D$ in the $a_0$-cycle: if $D''$ is the former preimage of $D'$, then now $D\cdot a_0 = D'$ and $D''\cdot a_0 = D$ in $\mathcal{A}/\rho$. \item If $s=0$, then set $D\cdot a_0 = D$ in $\mathcal{A}/\rho$. \end{itemize} In total, the number of ways to define the transitions of the factor automaton $\mathcal{A} / \rho$, once the stable classes are chosen, is \begin{equation}\label{eq:factor_count} (b!s!)^{k-1}b(b-1)!\max(1,s)s! = b!^ks!^{k}\max(1,s). \end{equation} Now, we need to define the transitions at the state level, i.e. the injective maps between consecutive stable classes, for all letters. For all letters but $a_0$, there are $b$ injective transitions between stable classes of size $\ell+1$ and $s$ injective transitions between stable classes of size $\ell$, that is, there are at most $(\ell+1)!^b \ell!^s$ ways to define them for each of the $k-1$ letters. This is an upper bound, as some choices may result in an automaton that is, for instance, not strongly connected. We refine this bound for the case $\ell=1$, $b=1$, $s=n-2$: one of the letters $a\neq a_0$ must swap the two states of the single $2$-element class of $\mathcal{B}$ for strong connectivity, so we count just one choice instead of $2$ (that is, instead of $(\ell+1)!$) to define this letter on this component; in other words, we consider only $2^{k-1}-1$ ways to define all the permutations on $\mathcal{B}$ in this case, instead of the $((\ell+1)!^b)^{k-1}$ upper bound of the general case (this refinement is used to match our lower bound). For the action of $a_0$, we additionally choose the dangling state $p_0 \in D$ in $\ell+1$ ways and its image in $D\cdot a_0$ in $\ell$ ways: there are $\ell$ choices in the case where $D\cdot a_0=D$, since $p_0\cdot a_0\neq p_0$, and also when $D\cdot a_0\neq D$, since $D\cdot a_0\in\mathcal{S}$ in this case, according to Lemma~\ref{lem:size_of_components}. Then, it remains to define the injective transitions between the blocks of $\mathcal{B} \setminus\{D\}$ in $(\ell+1)!^{b-1}$ ways, and the $s+1$ injective transitions between the blocks of $\mathcal{S} \cup \{D\setminus\{p_0\}\}$ in $\ell!^{s+1}$ ways. Thus, the number of ways to define the state-level transitions between stable classes is at most $((\ell+1)!^b \ell!^s)^{k-1}\ell(\ell+1)(\ell+1)!^{b-1}\ell!^{s+1} = \ell(\ell+1)!^{bk}\ell!^{sk}$ in the general case, and $2(2^{k-1}-1)$ in the case $\ell=1$, $b=1$, $s=n-2$. Putting together \eqref{eq:count_partitioning}, \eqref{eq:factor_count} and this last counting result yields the lemma.\qed \end{proof} \begin{lemma} \label{lem:upper_bound} The number of non-synchronizing strongly-connected almost-group automata in $\mathcal{G}_{n,k}$ is at most $n(2^{k-1}-1)n!(n-2)!^{k-1}(1+o(1/n))$. 
\end{lemma} \begin{proof} By Lemma~\ref{lem:main_bound} and Theorem~\ref{thm:non-synch-criterion}, the number of non-synchronizing strongly-connected almost-group automata in $\mathcal{G}_{n,k}$ is at most \begin{equation} \label{eq:sum_bs} n!\sum_{\ell=1}^{\lfloor n/2 \rfloor}\sum_{\{b,s \mid b(\ell+1) + s\ell = n\} } N_{\ell,b,s}, \end{equation} where $b \geq 1$, $s \geq 0$, and $b+s\geq 2$, and where $N_{\ell,b,s}$ is defined by \begin{equation} N_{\ell,b,s} = \begin{cases} \max(1,s) \ell (b!s!(\ell+1)!^{b}\ell!^{s})^{k-1}, & \text{for } (\ell,b,s) \neq (1,1,n-2), \\ (n-2)!^{k-1}(n-2)(2^{k-1}-1), & \text{for } (\ell,b,s) = (1,1,n-2). \end{cases} \end{equation} To finish the proof, it is sufficient to prove that the sum in~(\ref{eq:sum_bs}) is asymptotically equivalent to the term $N_{1,1,n-2}$, since $n!N_{1,1,n-2}$ is asymptotically equivalent to the expression stated in Lemma~\ref{lem:upper_bound}. To prove this, let us consider the following ratio, for $(\ell,b,s) \neq (1,1,n-2)$: \begin{equation} \label{leq:fraction1} \frac{N_{1,1,n-2}}{N_{\ell,b,s}} = \frac{n-2}{\max(1,s) \ell}\frac{(n-2)!^{k-1}(2^{k-1}-1)}{ (b!s!(\ell+1)!^{b}\ell!^{s})^{k-1}} \geq \left(\frac{(n-2)!}{b!s!(\ell+1)!^{b}\ell!^{s}}\right)^{k-1}, \end{equation} where we used that $n-2 = s\ell + b(\ell+1)-2 \geq s \ell$, as $b$ and $\ell$ are positive; thus $n-2\geq \max(1,s)\ell$ if $s>0$, and this also holds if $s=0$ since $b+s\geq2$. Observe that, for positive $b$ and $m$, we have \begin{align*} \frac{(bm)!}{m!^b} &= \left(\frac{1\cdot 2\cdots m}{1\cdot 2\cdots m}\right) \left(\frac{(m+1)(m+2)\cdots 2m}{1\cdot 2\cdots m}\right)\cdots \left(\frac{((b-1)m+1)\cdots bm}{1\cdot2\cdots m}\right)\\ & \geq 1^m\cdot 2^m\cdots b^m = b!^m. \end{align*} Hence, for $m=\ell+1$, we have $\frac{(b(\ell+1))!}{(\ell+1)!^b}\geq b!^{\ell+1}$. Similarly, one can get that \begin{equation} {\frac{n!}{(b(\ell+1))!}}\frac{1}{\ell!^s} \geq \left(\frac{(b+s)!}{b!}\right)^{\ell}. \end{equation} Let $M_{\ell,b,s}=\frac{(n-2)!}{b!s!(\ell+1)!^{b}\ell!^{s}}$ be the expression in brackets in (\ref{leq:fraction1}). This quantity can be bounded from below as follows: \begin{align} \label{eq:fraction2} M_{\ell,b,s} &= \frac{1}{n(n-1)b!s!}\frac{(b(\ell+1))!}{(\ell+1)!^{b}} \frac{n!}{(b(\ell+1))!\ell!^s} \\ &\geq \frac{b!^{\ell+1}}{n(n-1)b!s!} \left(\frac{(b+s)!}{b!}\right)^{\ell} \geq \frac{(b+s)!^\ell}{n^2 s!}. \end{align} Recall that we want to prove that $M_{\ell,b,s}$ is sufficiently large, so that $N_{1,1,n-2}$ really dominates $N_{\ell,b,s}$. Notice that there are at most quadratically many (in $n$) triples $(\ell, b, s)$ satisfying $b(\ell+1) + s\ell = n$, as for any values $1 \leq b, \ell < n$ there is at most one suitable value of $s$. Therefore, a cubic lower bound on $M_{\ell,b,s}$ is enough in general. We distinguish two cases: \noindent$\triangleright$ If $\ell\geq2$, then $M_{\ell,b,s} \geq {n^{-2}(b+s)!^{\ell-1}}$. If $b+s \geq \ln{n}$, this expression is greater than $\Theta(n^3)$ by Stirling's formula. Otherwise, because $b(\ell+1)+s\ell=n$, we have $\ell \geq \frac{n}{\ln{n}}-1$ and, as $b+s\geq 2$, the same $\Theta(n^3)$ lower bound holds. \noindent$\triangleright$ If $\ell=1$, then $s = n-2b$ and $M_{\ell,b,s} \geq \frac{(n-b)!}{n^2 (n-2b)!}$. This expression increases as $b$ increases; for $b=3$ it is greater than $\Theta(n)$ (and there is only one such term) and for $b>3$ it is greater than $\Theta(n^3)$. If $b=1$, then $s=n-2$ and this is the term $N_{1,1,n-2}$. 
The only remaining case is when $b=2$, $\ell=1$, and $s=n-4$. For this case, by (\ref{leq:fraction1}), we get \begin{equation} \frac{N_{1,1,n-2}}{N_{\ell,b,s}} \geq \left(\frac{(n-2)!}{b!s!(\ell+1)!^{b}\ell!^{s}}\right)^{k-1} = \left(\frac{(n-2)!}{8(n-4)!}\right)^{k-1} = \Theta(n^{2(k-1)}). \end{equation} Thus, we proved that the sum (\ref{eq:sum_bs}) is indeed asymptotically equal to the term $N_{1,1,n-2}$ multiplied by $n!$.\qed \end{proof} \subsection{Main Result and Conclusions} Now, we are ready to prove our main result on the asymptotic number of strongly connected elements of $\mathcal{G}_{n,k}$ that are not synchronizing. \begin{theorem} \label{thm:main} The probability that a random strongly connected almost-group automaton with $n$ states and $k\geq 2$ letters is not synchronizing is equal to \begin{equation} ({2^{k-1}-1}){n^{-2(k-1)}}\left(1+o(1)\right). \end{equation} In particular, random strongly connected almost-group automata are synchronizing with high probability as $n$ tends to infinity. \end{theorem} \begin{proof} Lemmas~\ref{lem:lower_bound} and \ref{lem:upper_bound} give lower and upper bounds on the number of strongly-connected non-synchronizing almost-group automata, which are both equal to $(2^{k-1}-1)n^3(n-2)!^{k}(1+o(1/n))$. We conclude the proof using the estimation of the number of strongly-connected almost-group automata given in Lemma~\ref{lem:sc_automata}.\qed \end{proof} Thus we have obtained a precise asymptotic estimate of the probability that a strongly connected almost-group automaton is synchronizing, for any alphabet size. As in~\cite{Berl2013RandomAut}, it would be natural to design an algorithm which verifies whether a given random strongly-connected almost-group automaton is synchronizing in optimal average time. Another, much more challenging, problem concerns the estimation of the expected length of a shortest reset word for random automata in this setting. We are grateful to the anonymous referees, whose comments helped to improve the presentation of the results. \section*{Appendix} \label{sec:appendix} \primelemma* \begin{proof} If a group automaton is not strongly-connected, then its set of states can be divided into two non-empty parts $Q_1$ and $Q_2$ such that there is no transition between them (if there were a transition from, say, $Q_1$ to $Q_2$ labeled by $a$, then following this $a$-cycle one would find a transition from $Q_2$ to $Q_1$ at some point). In other words, every non-strongly-connected group automaton can be built by (a) choosing a non-trivial partition of $E_n$ into $Q_1\dot{\cup} \,Q_2$ such that $|Q_1|\leq |Q_2|$ and (b) choosing a group automaton on the set of states $Q_1$ and another one on $Q_2$. Observe that this construction is not a bijection: if the automaton is made of, for example, three parts, there are several ways to choose $Q_1$ and $Q_2$. Hence, by counting the number of such decompositions, we are over-counting the number of group automata that are not strongly connected, which is fine as we are looking for an upper bound. The number of group automata that are not strongly connected is therefore at most, using $r$ to denote the cardinality of $Q_1$, \begin{equation} Z_{n,k} = \sum_{r=1}^{\lfloor \frac{n}{2} \rfloor}{n \choose r}(r!(n-r)!)^k, \end{equation} since there are $\binom{n}{r}$ ways to choose the elements that are in $Q_1$, and then $r!^k$ group automata on $Q_1$ and $(n-r)!^k$ on $Q_2$. Observe that \begin{align} Z_{n,k} &= n! 
\sum_{r=1}^{\lfloor \frac{n}{2} \rfloor}\left( r!(n-r)!\right)^{k-1} \\ &= n!\left((n-1)!^{k-1} + 2^{k-1}(n-2)!^{k-1} + \sum_{r=3}^{\lfloor \frac{n}{2} \rfloor}\left( r!(n-r)!\right)^{k-1} \right) \\ &= n(n-1)!^k\left(1 + \frac{2^{k-1}}{(n-1)^{k-1}}+n^{k-1}\sum_{r=3}^{\lfloor \frac{n}{2} \rfloor}\binom{n}{r}^{1-k}\right). \end{align} This concludes the proof since $\binom{n}{r}^{1-k}\leq \binom{n}{3}^{1-k}$ in the range of the sum and, therefore, $\sum_{r=3}^{\lfloor \frac{n}{2} \rfloor}\binom{n}{r}^{1-k}\leq \frac{n}{2}\binom{n}{3}^{1-k} = \mathcal{O}(n^{4-3k})$.\qed \end{proof} Using the same kind of techniques as in Lemma~\ref{lem:sc_group_automata}, we can obtain an upper bound on the number of almost-group automata that are not strongly connected. \secprimelemma* \begin{proof} If an automaton of $\mathcal{G}_{n,k}$ is not strongly-connected, then its set of states $E_n$ can be divided into two non-empty parts $Q_1$ and $Q_2$ such that there is no transition from $Q_2$ to $Q_1$. To continue the proof, we distinguish two cases, according to whether there is a transition from $Q_1$ to $Q_2$ or not. Recall that $p_0$ is the dangling state, with no incoming transition labelled by $a_0$. \noindent $\triangleright$ Suppose there is a transition $q_1\xrightarrow[]{a}q_2$ from $q_1$ to $q_2$ where $q_1\in Q_1, q_2\in Q_2$. We first prove by contradiction that $a=a_0$ and $q_1=p_0$: if it were not the case, then the transition $q_1\xrightarrow[]{a}q_2$ would belong to a cycle labelled by $a$, which is not possible as such a cycle would contain a transition from $Q_2$ to $Q_1$. Hence $p_0\xrightarrow[]{a_0}q_2$ is the only transition from $Q_1$ to $Q_2$ in this case. We obtain an upper bound on the number of automata in this case by (1) choosing the $r$ states of $Q_1$, (2) choosing the actions of all letters but $a_0$ on both $Q_1$ and $Q_2$, (3) choosing the dangling state $p_0$ in $Q_1$ and its image by $a_0$ in $Q_2$, and (4) choosing the action of $a_0$ on $Q_1\setminus\{p_0\}$ and on $Q_2$. As in Lemma~\ref{lem:sc_group_automata}, this yields an upper bound, not an exact count, since some automata are counted several times this way. Therefore, the upper bound we obtain in this case, for fixed $r\in\{1,\ldots,n-1\}$, is: \begin{equation}\label{eq:case1} \binom{n}{r}r!^{k-1}(n-r)!^{k-1} r(n-r) (r-1)!(n-r)! = \binom{n}{r} r!^k(n-r)!^k(n-r). \end{equation} \noindent $\triangleright$ If there is no transition from $Q_1$ to $Q_2$, the automaton really splits into two (or more) components, even as an undirected graph. The dangling state is either in $Q_1$ or in $Q_2$; by symmetry, we can assume it is in $Q_1$. The restriction of the automaton to $Q_1$ is an almost-group automaton, and its restriction to $Q_2$ is a group automaton. We therefore have the following upper bound for such automata, for fixed $r=|Q_1|$, using Lemma~\ref{lem:ag automata}: \begin{equation}\label{eq:case2} \binom{n}{r}|\mathcal{G}_{r,k}|(n-r)!^k = \binom{n}{r}(r-1)r!^k(n-r)!^k. \end{equation} To conclude the proof, we just have to sum everything up for $1 \leq r \leq n-1$: \begin{equation} \sum_{r=1}^{n-1}{n \choose r}r!^k(n-r)!^{k}(r-1 + n-r) = (n-1)\sum_{r=1}^{n-1} \binom{n}{r}r!^k(n-r)!^{k}. \end{equation} This concludes the proof, as the sum is equal to twice the value of $Z_{n,k}$ from the proof of Lemma~\ref{lem:sc_group_automata}, possibly up to a negligible extra central term.\qed \end{proof} \end{document}
\begin{document} \begin{center} {\bf \large Necessary and Sufficient Conditions \\ for Convergence of First-Rare-Event Times for \\ Perturbed Semi-Markov Processes} \end{center} \begin{center} {\large Dmitrii Silvestrov\footnote{Department of Mathematics, Stockholm University, SE-106 81 Stockholm, Sweden. \\ Email address: [email protected]}} \end{center} Abstract: Necessary and sufficient conditions for convergence in distribution of first-rare-event times and convergence in Skorokhod J-topology of first-rare-event-time processes for perturbed semi-Markov processes with finite phase space are obtained. \\ Keywords: Semi-Markov process, First-rare-event time, First-rare-event-time process, Convergence in distribution, Convergence in Skorokhod J-topology, Necessary and sufficient conditions. \\ 2010 Mathematics Subject Classification: Primary: 60J10, 60J22, 60J27, 60K15; Secondary: 65C40. \\ {\bf 1. Introduction} \\ Random functionals similar to first-rare-event times are known under different names, such as first hitting times, first passage times, absorption times, in theoretical studies, and as lifetimes, first failure times, extinction times, etc., in applications. Limit theorems for such functionals for Markov-type processes have been studied by many researchers. The case of Markov chains and semi-Markov processes with finite phase spaces is the most deeply investigated. We refer here to the works by Simon and Ando (1961), Kingman (1963), Darroch and Seneta (1965, 1967), Keilson (1966, 1979), Korolyuk (1969), Korolyuk and Turbin (1970, 1976), Silvestrov (1970, 1971, 1974, 1980, 2014), Anisimov (1971a, 1971b, 1988, 2008), Turbin (1971), Masol and Silvestrov (1972), Zakusilo (1972a, 1972b), Kovalenko (1973), Latouch and Louchard (1978), Shurenkov (1980a, 1980b), Gut and Holst (1984), Brown and Shao (1987), Alimov and Shurenkov (1990a, 1990b), Hasin and Haviv (1992), Asmussen (1994, 2003), Ele\u \i ko and Shurenkov (1995), Kalashnikov (1997), Kijima (1997), Stewart (1998, 2001), Gyllenberg and Silvestrov (1994, 1999, 2000, 2008), Silvestrov and Drozdenko (2005, 2006a, 2006b), Asmussen and Albrecher (2010), Yin and Zhang (2005, 2013), Drozdenko (2007a, 2007b, 2009), Benois, Landim and Mourragui (2013). The case of Markov chains and semi-Markov processes with countable or arbitrary phase spaces was treated in works by Gusak and Korolyuk (1971), Silvestrov (1974, 1980, 1981, 1995, 2000), Korolyuk and Turbin (1978), Kaplan (1979, 1980), Kovalenko and Kuznetsov (1981), Aldous (1982), Korolyuk~D. and Silvestrov (1983, 1984), Kartashov (1987, 1991, 1996, 2013), Anisimov (1988, 2008), Silvestrov and Velikii (1988), Silvestrov and Abadov (1991, 1993), Motsa and Silvestrov (1996), Korolyuk and Swishchuk (1992), Korolyuk~V.V. and Korolyuk~V.S. (1999), Koroliuk and Limnios (2005), Kupsa and Lacroix (2005), Glynn (2011), and Serlet (2013). We also refer to the books by Silvestrov (2004) and Gyllenberg and Silvestrov (2008) and the papers by Kovalenko (1994) and Silvestrov D. and Silvestrov S. (2015), where one can find comprehensive bibliographies of works in the area. The main feature of most previous results is that they give sufficient conditions of convergence for such functionals. 
As a rule, those conditions involve assumptions which imply convergence in distribution of sums of i.i.d.\ random variables, distributed as the sojourn times of the semi-Markov process (for every state), to some infinitely divisible laws, plus some ergodicity condition for the imbedded Markov chain, plus a condition of vanishing probability of occurrence of the rare event during one transition step of the semi-Markov process. In the context of necessary and sufficient conditions of convergence in distribution for first-rare-event-time type functionals, we would like to point out the paper by Kovalenko (1965) and the books by Gnedenko and Korolev (1996) and Bening and Korolev (2002), where one can find some related results for geometric sums of random variables, and the papers by Korolyuk and Silvestrov (1983) and Silvestrov and Velikii (1988), where one can find some related results for first-rare-event-time type functionals defined on Markov chains with arbitrary phase space. The results of the present paper relate to the model of perturbed semi-Markov processes with a finite phase space. Instead of conditions based on ``individual'' distributions of sojourn times, we use more general and weaker conditions imposed on the distributions of sojourn times averaged by the stationary distributions of the corresponding imbedded Markov chains. Moreover, we show that these conditions are not only sufficient but also necessary for convergence in distribution of first-rare-event times and convergence in Skorokhod J-topology of first-rare-event-time processes. These results give some kind of a ``final solution'' for limit theorems for first-rare-event times and first-rare-event-time processes for perturbed semi-Markov processes with a finite phase space. The paper generalizes and improves the results concerning necessary and sufficient conditions of weak convergence of first-rare-event times for semi-Markov processes obtained in the papers by Silvestrov and Drozdenko (2005, 2006a, 2006b) and Drozdenko (2007a, 2007b, 2009). First, weaker model ergodicity conditions are imposed on the corresponding embedded Markov chains. Second, the above results about weak convergence of first-rare-event times are extended, in Theorem 1, to the form of corresponding functional limit theorems for first-rare-event-time processes, with necessary and sufficient conditions of convergence. Third, new proofs, based on general limit theorems for randomly stopped stochastic processes, developed and extensively presented in Silvestrov (2004), are given, instead of more traditional proofs based on cyclic representations of first-rare-event times in the form of geometric-type random sums. This actually made it possible to obtain more advanced results in the form of functional limit theorems. Fourth, necessary and sufficient conditions of convergence for step-sum reward processes defined on Markov chains are also obtained in the paper. In the context of the present paper, these results, formulated in Theorem 2, play an intermediate role. At the same time, they have their own theoretical and applied value. Finally, we would like to mention the results formulated in Lemmas 1--9, which also give some useful supplementary information about the asymptotic properties of first-rare-event times and step-sum reward processes. We would like to conclude the introduction with the remark that the present paper is a slightly improved version of the research report by Silvestrov (2016). \\ {\bf 2. 
First-rare-event times for perturbed semi-Markov processes} \\ Let $(\eta_{\varepsilon, n}, \kappa_{\varepsilon, n}, \zeta_{\varepsilon, n}), \ n = 0, 1, \ldots$ be, for every $\varepsilon \in (0, \varepsilon_0]$, a Markov renewal process, i.e., a homogeneous Markov chain with a phase space $\mathbb{Z} = \{1, 2, \ldots, m\} \times [0, \infty) \times \{ 0, 1 \}$, an initial distribution $\bar{q}_\varepsilon = \langle q_{\varepsilon, i} = \mathsf{P} \{\eta_{\varepsilon, 0} = i, \kappa_{\varepsilon, 0} = 0, \zeta_{\varepsilon, 0} = 0 \} = \mathsf{P} \{\eta_{\varepsilon, 0} = i \}, i \in {\mathbb X} \rangle$ and transition probabilities, \begin{equation}\label{sad} \begin{aligned} & \mathsf{P} \{ \eta_{\varepsilon, n+1} = j, \kappa_{\varepsilon, n+1} \leq t, \zeta_{\varepsilon, n+1} = \jmath / \eta_{\varepsilon, n} = i, \kappa_{\varepsilon, n} = s, \zeta_{\varepsilon, n} = \imath \} \\ & \quad = \mathsf{P} \{ \eta_{\varepsilon, n+1} = j, \kappa_{\varepsilon, n+1} \leq t, \zeta_{\varepsilon, n+1} = \jmath / \eta_{\varepsilon, n} = i \} \\ & \quad = Q_{\varepsilon, ij}(t, \jmath), \ i, j \in \mathbb{X}, \ s, t \geq 0, \ \imath, \jmath = 0, 1. \end{aligned} \end{equation} As is known, the first component $\eta_{\varepsilon, n}$ of the above Markov renewal process is also a homogeneous Markov chain, with the phase space $\mathbb{X} = \{1, 2, \ldots, m\}$, the initial distribution $\bar{q}_\varepsilon = \langle q_{\varepsilon, i} = \mathsf{P} \{\eta_{\varepsilon, 0} = i \}, i \in {\mathbb X} \rangle$ and the transition probabilities, \begin{equation}\label{proba} p_{\varepsilon, ij} = Q_{\varepsilon, ij}(+ \infty, 0) + Q_{\varepsilon, ij}(+ \infty, 1), \, i , j \in \mathbb{X}. \end{equation} Also, the random sequence $(\eta_{\varepsilon, n}, \zeta_{\varepsilon, n}), \ n = 0, 1, \ldots$ is a Markov renewal process with the phase space $\mathbb{X} \times \{0, 1 \}$, the initial distribution $\bar{q}_\varepsilon = \langle q_{\varepsilon, i} = \mathsf{P} \{\eta_{\varepsilon, 0} = i, \zeta_{\varepsilon, 0} = 0 \} = \mathsf{P} \{\eta_{\varepsilon, 0} = i \}, i \in {\mathbb X} \rangle$ and the transition probabilities, \begin{equation}\label{probak} p_{\varepsilon, ij, \jmath} = Q_{\varepsilon, ij}(+ \infty, \jmath), \, i , j \in \mathbb{X}, \jmath = 0, 1. \end{equation} The random variables $\kappa_{\varepsilon, n}, n = 1, 2, \ldots$ can be interpreted as sojourn times, and the random variables $\tau_{\varepsilon, n} = \kappa_{\varepsilon, 1} + \cdots + \kappa_{\varepsilon, n}, n = 1, 2, \ldots, \tau_{\varepsilon, 0} = 0$, as moments of jumps for a semi-Markov process $\eta_\varepsilon(t), t \geq 0$, defined by the following relation, \begin{equation}\label{semi} \eta_\varepsilon(t) = \eta_{\varepsilon, n} \quad \mbox{for} \quad \tau_{\varepsilon,n} \leq t <\tau_{\varepsilon, n+1}, \ n = 0, 1, \ldots \end{equation} As far as the random variables $\zeta_{\varepsilon, n}, n = 1, 2, \ldots$ are concerned, they are interpreted as so-called ``flag variables'' and are used to record the events $\{\zeta_{\varepsilon, n} = 1 \}$, which we interpret as ``rare'' events. Let us introduce the random variables, \begin{equation}\label{semika} \xi_\varepsilon = \sum_{n = 1}^{\nu_\varepsilon} \kappa_{\varepsilon, n}, \ {\rm where} \ \nu_\varepsilon = \min(n \geq 1: \zeta_{\varepsilon, n} =1). 
\end{equation} The random variable $\nu_\varepsilon$ counts the number of transitions of the imbedded Markov chain $\eta_{\varepsilon,n}$ up to the first occurrence of the ``rare'' event, while the random variable $\xi_\varepsilon$ can be interpreted as the time of the first occurrence of the ``rare'' event for the semi-Markov process $\eta_\varepsilon(t)$, that is, as the first-rare-event time. We also consider the first-rare-event-time process, \begin{equation}\label{semikana} \xi_\varepsilon(t) = \sum_{n = 1}^{[t\nu_\varepsilon]} \kappa_{\varepsilon, n}, \ t \geq 0. \end{equation} The objective of this paper is to describe the class $\mathcal F$ of all possible {\cd} processes $\xi_0(t), t \geq 0$, which can appear in the corresponding functional limit theorem given in the form of the asymptotic relation, $\xi_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \xi_0(t), t \geq 0$ as $\varepsilon \to 0$, and to give necessary and sufficient conditions for the validity of the above asymptotic relation with a specific (in the sense of its finite-dimensional distributions) limiting stochastic process $\xi_0(t), t \geq 0$ from the class $\mathcal F$. Here and henceforth, we use the symbol $\stackrel{d}{\longrightarrow}$ to indicate convergence in distribution for random variables (weak convergence of distribution functions) or stochastic processes (weak convergence of finite-dimensional distributions), the symbol $\stackrel{\mathsf{P}}{\longrightarrow}$ to indicate convergence of random variables in probability, and the symbol $\stackrel{\mathsf{J}}{\longrightarrow}$ to indicate convergence in Skorokhod J-topology for real-valued {\cd} stochastic processes defined on the time interval $[0, \infty)$. We refer to the books by Gikhman and Skorokhod (1971), Billingsley (1968, 1999) and Silvestrov (2004) for details concerning the above forms of functional convergence. The problems formulated above are solved under three general model assumptions. Let us introduce the probabilities of occurrence of the rare event during one transition step of the semi-Markov process $\eta_\varepsilon(t)$, $$ p_{\varepsilon, i} = \mathsf{P}_i \{ \zeta_{\varepsilon, 1} = 1 \}, \ i \in \mathbb{X}. $$ Here and henceforth, $\mathsf{P}_i$ and $\mathsf{E}_i$ denote, respectively, conditional probability and expectation calculated under the condition that $\eta_{\varepsilon, 0} = i$. The first model assumption {\bf A}, imposed on the probabilities $p_{\varepsilon, i}$, specifies the interpretation of the event $\{\zeta_{\varepsilon, n} = 1\}$ as ``rare'' and guarantees the possibility for such an event to occur: \begin{itemize} \item[\bf A: ] $0 < \max_{1 \leq i \leq m} p_{\varepsilon, i} \to 0$ as $\varepsilon \to 0$. \end{itemize} Let us introduce the random variables, \begin{equation}\label{statikani} \mu_{\varepsilon, i}(n) = \sum_{k =1}^n I(\eta_{\varepsilon, k-1} = i), \ n = 0, 1, \ldots, \ i \in \mathbb{X}. \end{equation} If the Markov chain $\eta_{\varepsilon, n}$ is ergodic, i.e., $\mathbb{X}$ is a single class of communicating states for this Markov chain, then its stationary distribution is given by the following ergodic relation, \begin{equation}\label{statika} \frac{\mu_{\varepsilon, i}(n)}{n} \stackrel{\mathsf{P}}{\longrightarrow} \pi_{\varepsilon, i} \ {\rm as} \ n \to \infty, \ {\rm for} \ i \in \mathbb{X}. 
\end{equation} The ergodic relation (\ref{statika}) holds for any initial distribution $\bar{q}_{\varepsilon}$, and the stationary distribution $\pi_{\varepsilon, i}, i \in {\mathbb X}$ does not depend on the initial distribution. Also, all stationary probabilities are positive, i.e., $\pi_{\varepsilon, i} > 0, i \in \mathbb{X}$. As is known, the stationary probabilities $\pi_{\varepsilon, i}, i \in \mathbb{X}$ are the unique solution of the system of linear equations, \begin{equation}\label{statikabas} \pi_{\varepsilon, i} = \sum_{j \in \mathbb{X}} \pi_{\varepsilon, j} p_{\varepsilon, ji}, \ i \in \mathbb{X}, \ \ \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} = 1. \end{equation} The second model assumption is a condition of asymptotically uniform ergodicity for the embedded Markov chains $\eta_{\varepsilon, n}$: \begin{itemize} \item[$\mathbf{B}$:] There exists a ring chain of states $i_0, i_1, \ldots, i_N = i_0$ which contains all states from the phase space $\mathbb{X}$ and such that $\varliminf_{\varepsilon \to 0}p_{\varepsilon, i_{k-1} i_k} > 0$, for $k = 1, \ldots, N$. \end{itemize} As follows from Lemma 1 given below, condition $\mathbf{B}$ guarantees that there exists $\varepsilon'_0 \in (0, \varepsilon_0]$ such that the Markov chain $\eta_{\varepsilon, n}$ is ergodic for every $\varepsilon \in (0, \varepsilon'_0]$. However, condition $\mathbf{B}$ does not require convergence of the transition probabilities and, consequently, does not imply convergence of the stationary probabilities for the Markov chains $\eta_{\varepsilon, n}$ as $\varepsilon \to 0$. In the case where the transition probabilities $p_{\varepsilon, ij} = p_{0, ij}, i, j \in \mathbb{X}$ do not depend on the parameter $\varepsilon$, condition $\bf B$ reduces to the standard assumption that the Markov chain $\eta_{0, n}$ with the matrix of transition probabilities $\| p_{0,ij} \|$ is ergodic. Lemma 1 formulated below gives more detailed information about condition {\bf B}. Finally, the following condition guarantees that the last summand $\kappa_{\varepsilon, \nu_\varepsilon}$ in the random sum $\xi_\varepsilon$ is asymptotically negligible: \begin{itemize} \item[\bf C: ] $\mathsf{P}_i \{ \kappa_{\varepsilon, 1} > \delta / \zeta_{\varepsilon, 1} = 1 \} \to 0$ as $\varepsilon \to 0$, for $\delta > 0, i \in \mathbb{X}$. \end{itemize} Let us define a probability which is the result of averaging of the probabilities of occurrence of a rare event in one transition step by the stationary distribution of the embedded Markov chain $\eta_{\varepsilon, n}$, \begin{equation}\label{defra} p_\varepsilon = \sum\limits_{i=1}^m \pi_{\varepsilon, i} p_{\varepsilon, i} \ \ {\rm and} \ \ v_\varepsilon = p_\varepsilon^{-1}. \end{equation} Let us introduce the distribution functions of the sojourn times $\kappa_{\varepsilon,1}$ for the semi-Markov processes $\eta_\varepsilon(t)$, $$ G_{\varepsilon, i}(t) = \mathsf{P}_i \{\kappa_{\varepsilon, 1} \leq t \}, \ t \geq 0, \ i \in \mathbb{X}. $$ Let $\theta_{\varepsilon, n}, n = 1, 2, \ldots$ be i.i.d. random variables with the distribution function $G_\varepsilon(t)$, which is the result of averaging of the distribution functions of the sojourn times by the stationary distribution of the embedded Markov chain $\eta_{\varepsilon, n}$, $$ G_\varepsilon(t)=\sum_{i=1}^m \pi_{\varepsilon, i} G_{\varepsilon, i}(t), \ t \geq 0.
$$ Now, we can formulate the necessary and sufficient condition for convergence in distribution of first-rare-event times: \begin{itemize} \item[$\mathbf{D}$:] $\theta_{\varepsilon} = \sum_{n = 1}^{[v_\varepsilon]} \theta_{\varepsilon, n} \stackrel{d}{\longrightarrow} \theta_0$ as $\varepsilon \to 0$, where $\theta_0$ is a non-negative random variable with distribution not concentrated in zero. \end{itemize} As is well known, {\bf (d$_1$)} the limiting random variable $\theta_0$ penetrating condition $\mathbf{D}$ should be infinitely divisible and, thus, its Laplace transform has the form, $\mathsf{E} e^{-s \theta_0} = e^{- A(s)}$, where $A(s) = gs + \int_0^\infty (1 - e^{-sv})G(dv), s \geq 0$, $g$ is a non-negative constant and $G(dv)$ is a measure on the interval $(0, \infty)$ such that $\int_{(0, \infty)} \frac{v}{1+ v}G(dv) < \infty$; {\bf (d$_2$)} $g + \int_{(0, \infty)} \frac{v}{1+ v}G(dv) > 0$ (this is equivalent to the assumption that $\mathsf{P} \{\theta_0 = 0 \} < 1$). Let us also consider the homogeneous step-sum process with independent increments (the summands are i.i.d. random variables), \begin{equation}\label{bgre} \theta_{\varepsilon}(t) = \sum_{n = 1}^{[tv_\varepsilon]} \theta_{\varepsilon, n}, t \geq 0. \end{equation} As is known (see, for example, Skorokhod (1964, 1986)), condition $\mathbf{D}$ is necessary and sufficient for holding of the asymptotic relation, \begin{equation}\label{bgref} \theta_{\varepsilon}(t) = \sum_{n = 1}^{[tv_\varepsilon]} \theta_{\varepsilon, n}, t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \theta_0(t), t \geq 0 \ {\rm as} \ \varepsilon \to 0, \end{equation} where $\theta_0(t), t \geq 0$ is a nonnegative L\'{e}vy process (a {\cd} homogeneous process with independent increments) with the Laplace transforms $\mathsf{E} e^{-s \theta_0(t)} = e^{- tA(s)}, s, t \geq 0$. Let us define the Laplace transforms, $$ \varphi_{\varepsilon, i}(s) = \mathsf{E}_i e^{- s \kappa_{\varepsilon, 1}}, i \in \mathbb{X}, \ \varphi_\varepsilon(s) = \mathsf{E} e^{- s \theta_{\varepsilon, 1}} = \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} \varphi_{\varepsilon, i}(s), \ s \geq 0. $$ Condition $\mathbf{D}$ can be reformulated (see, for example, Feller (1966, 1971)) in the following equivalent form, in terms of the above Laplace transforms: \begin{itemize} \item[$\mathbf{D}_1$:] $v_\varepsilon (1 - \varphi_\varepsilon(s)) \to A(s)$ as $\varepsilon \to 0$, for $s > 0$, where the limiting function $A(s) > 0$, for $s > 0$, and $A(s) \to 0$ as $s \to 0$. \end{itemize} In this case, {\bf (d$_3$)} $A(s)$ is the cumulant of a non-negative random variable with distribution not concentrated in zero. Moreover, {\bf (d$_4$)} $A(s)$ should be the cumulant of an infinitely divisible distribution of the form given in the above conditions {\bf (d$_1$)} and {\bf (d$_2$)}.
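The following simple example, included here only as an illustration and not used in the sequel, shows how condition $\mathbf{D}_1$ can be checked directly. Suppose, for instance, that the sojourn-time distributions are exponential with state-dependent means proportional to $p_\varepsilon$, i.e., $G_{\varepsilon, i}(t) = 1 - e^{-t/(b_i p_\varepsilon)}, t \geq 0$, for some constants $b_i > 0, i \in \mathbb{X}$, and that the stationary probabilities converge, $\pi_{\varepsilon, i} \to \pi_{0, i}$ as $\varepsilon \to 0$, for $i \in \mathbb{X}$. Then $\varphi_{\varepsilon, i}(s) = (1 + s b_i p_\varepsilon)^{-1}$ and, since $v_\varepsilon = p_\varepsilon^{-1}$ and $p_\varepsilon \to 0$ under condition $\mathbf{A}$,
\begin{equation*}
v_\varepsilon (1 - \varphi_\varepsilon(s)) = \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} \frac{s b_i}{1 + s b_i p_\varepsilon} \to A(s) = g s \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0, \ {\rm where} \ g = \sum_{i \in \mathbb{X}} \pi_{0, i} b_i.
\end{equation*}
Thus, condition $\mathbf{D}_1$ holds with the pure drift cumulant $A(s) = gs$, i.e., with the measure $G(dv) = 0$, and, by Theorem 1 given below, the limiting first-rare-event time $\xi_0$ is in this case exponentially distributed with mean $g$, since $\mathsf{E} e^{-s \xi_0} = (1 + gs)^{-1}$.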
The following condition, which is a variant of the so-called central criterion of convergence (see, for example, Lo\`eve (1977)), is equivalent to condition $\mathbf{D}$, with the Laplace transform of the limiting random variable $\theta_0$ given in the above conditions {\bf (d$_1$)} and {\bf (d$_2$)}: \begin{itemize} \item[$\mathbf{D}_2$:] {\bf (a)} $v_\varepsilon (1-G_\varepsilon(u)) \to G(u)$ as $\varepsilon \to 0$ for all $u > 0$ which are points of continuity of the limiting function $G(u)$, which is a nonnegative, non-increasing, and right-continuous function defined on the interval $(0, \infty)$, with the limiting value $G(+\infty) = 0$; here the function $G(u)$ is connected with the measure $G(dv)$ by the relation $G((u', u'']) = G(u') - G(u'')$, $0 < u' \leq u'' <\infty$; {\bf (b)} $v_\varepsilon \int_{(0, u]} v G_\varepsilon(dv) \to g + \int_{(0, u]} v G(dv)$ as $\varepsilon \to 0$ for some $u > 0$ which is a point of continuity of $G(u)$. \end{itemize} It is useful to note that {\bf (d$_5$)} the asymptotic relation penetrating condition $\mathbf{D}_2$ {\bf (b)} holds, under condition $\mathbf{D}_2$ {\bf (a)}, for any $u > 0$ which is a point of continuity of the function $G(u)$. In what follows, we also always assume that asymptotic relations for random variables and processes, defined on trajectories of the Markov renewal processes $(\eta_{\varepsilon, n}, \kappa_{\varepsilon, n}, \zeta_{\varepsilon, n})$, hold for any initial distributions $\bar{q}_\varepsilon$, if such distributions are not specified. The main result of the paper is the following theorem. {\bf Theorem 1.} {\it Let conditions {\bf A}, {\bf B} and {\bf C} hold. Then, {\bf (i)} condition {\bf D} is necessary and sufficient for holding {\rm (}for some or any initial distributions $\bar{q}_\varepsilon$, respectively, in statements of necessity and sufficiency{\rm )} of the asymptotic relation $\xi_\varepsilon = \xi_\varepsilon(1) \stackrel{d}{\longrightarrow} \xi_0$ as $\varepsilon \to 0$, where $\xi_0$ is a non-negative random variable with distribution not concentrated in zero. In this case, {\bf (ii)} the limiting random variable $\xi_0$ has the Laplace transform $\mathsf{E} e^{-s \xi_0} = \frac{1}{1+A(s)}$, where $A(s)$ is the cumulant of the infinitely divisible distribution defined in condition {\bf D}. Moreover, {\bf (iii)} the stochastic processes $\xi_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \xi_0(t) = \theta_0(t \nu_0), t \geq 0$ as $\varepsilon \to 0$, where {\bf (a)} $\nu_0$ is a random variable which has the exponential distribution with parameter $1$, {\bf (b)} $\theta_0(t), t \geq 0$ is a nonnegative L\'{e}vy process with the Laplace transforms $\mathsf{E} e^{-s \theta_0(t)} = e^{- tA(s)}, s, t \geq 0$, {\bf (c)} the random variable $\nu_0$ and the process $\theta_0(t), t \geq 0$ are independent.} {\bf Remark 1}. According to Theorem 1, the class $\mathcal F$ of all possible nonnegative, nondecreasing, \cd, stochastically continuous processes $\xi_0(t), t \geq 0$ with distributions of the random variables $\xi_0(t), t > 0$ not concentrated in zero, and such that the asymptotic relation, $\xi_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \xi_0(t), t \geq 0$ as $\varepsilon \to 0$, holds, coincides with the class of limiting processes described in proposition {\bf (iii)}.
Condition {\bf D} is necessary and sufficient condition for holding not only the asymptotic relation given in propositions {\bf (i)} -- {\bf (ii)} but also for the much stronger asymptotic relation given in proposition {\bf (iii)}. {\bf Remark 2}. The statement ``for some or any initial distributions $\bar{q}_\varepsilon$, respectively, in statements of necessity and sufficiency'' used in the formulation of Theorem1 should be understood in the sense that the asymptotic relation penetrating proposition {\bf (i)} should hold for at least one family of initial distributions $\bar{q}_\varepsilon, \varepsilon \in (0, \varepsilon_0]$, in the statement of necessity, and for any family of initial distributions $\bar{q}_\varepsilon, \varepsilon \in (0, \varepsilon_0]$, in the statement of sufficiency. \\ {\bf 3. Asymptotics of step-sum reward processes}. \\ Let us consider, for every $\varepsilon \in (0, \varepsilon_0]$, the step-sum stochastic process, \begin{equation}\label{edibas} \kappa_\varepsilon(t) = \sum_{n = 1}^{[tv_\varepsilon]} \kappa_{\varepsilon, n}, t \geq 0. \varepsilonnd{equation} The random variables $\kappa_\varepsilon(t) $ can be interpreted as rewards accumulated on trajectories of the Markov chain $\varepsilonta_{\varepsilon, n}$. Respectively, random variables $\xi_\varepsilon$ can be interpreted as rewards accumulated on trajectories of the Markov chain $\varepsilonta_{\varepsilon, n}$ till the first occurrence of the ``rare'' event. Asymptotics of the step-sum reward processes $\kappa_\varepsilon(t), t \geq 0$ have its own value. At the same, the corresponding result formulated below in Theorem 2 plays the key role in the proof of Theorem 1. It is useful to note that the flag variables $\zeta_{\varepsilon, n}$ are not involved in the definition of the processes $\kappa_\varepsilon(t)$. This let us replace function $v_\varepsilon = p_\varepsilon^{-1}$ by an arbitrary function $0 < v_\varepsilon \to \infty$ as $\varepsilon \to 0$ in condition {\bf D}, Theorem 2 and Lemmas 2 -- 6 formulated below. {\bf Theorem 2.} {\varepsilonm Let condition {\bf B} holds. Then, {\bf (i)} condition {\bf D} is necessary and sufficient condition for holding {\rm (}for some or any initial distributions $\bar{q}_\varepsilon$, respectively, in statements of necessity and sufficiency{\rm )} of the asymptotic relation, $\kappa_\varepsilon(1) \stackrel{d}{\longrightarrow} \theta_0$ as $\varepsilon \to 0$, where $\theta_0$ is a non-negative random variable with distribution not concentrated in zero. In this case, {\bf (ii)} the random variable $\theta_0$ has the infinitely divisible distribution with the Laplace transform $\mathsf{E} e^{-s \theta_0} = e^{- A(s)}, s \geq 0$ with the cumulant $A(s)$ defined in condition {\bf D}. Moreover, {\bf (iii)} stochastic processes $\kappa_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \theta_0(t), t \geq 0$ as $\varepsilon \to 0$, where $\theta_0(t), t \geq 0$ is a nonnegative L\'{e}vy process with the Laplace transforms $\mathsf{E} e^{-s \theta_0(t)} = e^{- tA(s)}, s, t \geq 0$.} {\bf Remark 3}. 
According Theorem 2, class $\mathcal G$ of all possible nonnegative, nondecreasing, {\cd}, stochastically continuous processes $\theta_0(t), t \geq 0$ with distributions of random variables $\theta_0(t), t > 0$ not concentrated in zero, and such that the asymptotic relation, $\kappa_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \theta_0(t), t \geq 0$ as $\varepsilon \to 0$, holds, coincides with the class of limiting processes described in proposition {\bf (iii)}. Condition {\bf D} is necessary and sufficient condition for holding the asymptotic relation given in propositions {\bf (i)} -- {\bf (ii)} as well as for the much stronger asymptotic relation given in proposition {\bf (iii)}. We use several useful lemmas in the proof of Theorems 1 and 2. Let $\tilde{\varepsilonta}_{\varepsilon, n}$ be, for every $\varepsilon \in (0, \varepsilon_0]$ a Markov chain with the phase space $\mathbb{X}$ and a matrix of transition probabilities $\| \tilde{p}_{\varepsilon, ij} \|$. We shall use the following condition: \begin{itemize} \item[$\mathbf{E}$:] $p_{\varepsilon, ij} - \tilde{p}_{\varepsilon, ij} \to 0$ as $\varepsilon \to 0$, for $i, j \in \mathbb{X}$. \varepsilonnd{itemize} If transition probabilities $\tilde{p}_{\varepsilon, ij} \varepsilonquiv p_{0, ij}, i,j \in \mathbb{X}$ do not depend on $\varepsilon$, then condition $\mathbf{E}$ reduces to the following condition: \begin{itemize} \item[$\mathbf{F}$:] $p_{\varepsilon, ij}\to p_{0, ij}$ as $\varepsilon \to 0$, for $i, j \in \mathbb{X}$. \varepsilonnd{itemize} {\bf Lemma 1.} {\varepsilonm Let condition $\mathbf{B}$ holds for the Markov chains $\varepsilonta_{\varepsilon, n}$. Then, {\bf (i)} There exists $\varepsilon'_0 \in (0, \varepsilon_0]$ such that the Markov chain $\varepsilonta_{\varepsilon, n}$ is ergodic, for every $\varepsilon \in (0, \varepsilon'_0]$ and $0 < \varliminf_{\varepsilon \to 0} \pi_{\varepsilon, i} \leq \varlimsup_{\varepsilon \to 0} \pi_{\varepsilon, i} < 1$, for $i \in \mathbb{X}$. {\bf (ii)} If, together with $\mathbf{B}$, condition $\mathbf{E}$ holds, then, there exists $\varepsilon''_0 \in (0, \varepsilon'_0]$ such that Markov chain $\tilde{\varepsilonta}_{\varepsilon, n}$ is ergodic, for every $\varepsilon \in (0, \varepsilon''_0]$, and its stationary distribution $\tilde{\pi}_{\varepsilon, i}, i \in \mathbb{X}$ satisfy the asymptotic relation, $\pi_{\varepsilon, i} - \tilde{\pi}_{\varepsilon, i} \to 0$ as $\varepsilon \to 0$, for $i \in \mathbb{X}$. {\bf (iii)} If condition $\mathbf{F}$ holds, then matrix $\| p_{0, ij} \|$ is stochastic, condition $\mathbf{B}$ is equivalent to the assumption that a Markov chain $\varepsilonta_{0, n}$, with the matrix of transition probabilities $\| p_{0, ij} \|$, is ergodic and the following asymptotic relation holds, $\pi_{\varepsilon, i} \to \pi_{0, i}$ as $\varepsilon \to 0$, for $i \in \mathbb{X}$, where $\pi_{0, i}, i \in \mathbb{X}$ is the stationary distribution of the Markov chain $\varepsilonta_{0, n}$. } {\bf Proof}. Let us first prove proposition {\bf (iii)}. Condition $\mathbf{F}$ obviously implies that matrix $\| p_{0, ij} \|$ is stochastic. Conditions $\mathbf{B}$ and $\mathbf{F}$ imply that $\lim_{\varepsilon \to 0}p_{\varepsilon, i_{k-1} i_k} = p_{0, i_{k-1} i_k} > 0, k = 1, \ldots, N$, for the ring chain penetrating condition $\mathbf{B}$. Thus, the Markov chain $\varepsilonta_{0, n}$ with the matrix of transition probabilities $\| p_{0, ij} \|$ is ergodic. 
Vice versa, the assumption that a Markov chain $\eta_{0, n}$ with the matrix of transition probabilities $\| p_{0, ij} \|$ is ergodic implies that there exists a ring chain of states $i_0, \ldots, i_N = i_0$ which contains all states from the phase space $\mathbb{X}$ and such that $ p_{0, i_{k-1} i_k} > 0, k = 1, \ldots, N$. In this case, condition $\mathbf{F}$ implies that $\lim_{\varepsilon \to 0} p_{\varepsilon, i_{k-1} i_k} = p_{0, i_{k-1} i_k} > 0, k = 1, \ldots, N$, and, thus, condition {\bf B} holds. Let us assume that the convergence relation for stationary distributions penetrating proposition {\bf (iii)} does not hold. In this case, there exist $\delta > 0$ and a sequence $0 < \varepsilon_n \to 0$ as $n \to \infty$ such that $\varliminf_{n \to \infty} |\pi_{\varepsilon_n, i'} - \pi_{0, i'}| \geq \delta$, for some $i' \in \mathbb{X}$. Since the sequences $\pi_{\varepsilon_n, i}, n = 1, 2, \ldots, i \in \mathbb{X}$ are bounded, there exists a subsequence $0 < \varepsilon_{n_k} \to 0$ as $k \to \infty$ such that $\pi_{\varepsilon_{n_k}, i} \to \pi'_{0, i}$ as $k \to \infty$, for $i \in \mathbb{X}$. This relation, condition $\mathbf{F}$ and relation (\ref{statikabas}) imply that the numbers $\pi'_{0, i}, i \in \mathbb{X}$ satisfy the system of linear equations given in (\ref{statikabas}). This is impossible, since the inequality $|\pi'_{0, i'} - \pi_{0, i'}| \geq \delta$ should hold, while the stationary distribution $\pi_{0, i}, i \in \mathbb{X}$ is the unique solution of system (\ref{statikabas}). Let us now prove proposition {\bf (i)}. Condition $\mathbf{B}$ obviously implies that there exists $\varepsilon'_0 \in (0, \varepsilon_0]$ such that $p_{\varepsilon, i_{k-1} i_k} > 0, k = 1, \ldots, N$, for the ring chain penetrating condition $\mathbf{B}$, for $\varepsilon \in (0, \varepsilon'_0]$. Thus, the Markov chain $\eta_{\varepsilon, n}$ is ergodic, for every $\varepsilon \in (0, \varepsilon'_0]$. Let us now assume that $\varliminf_{\varepsilon \to 0} \pi_{\varepsilon, i'} = 0$, for some $i' \in \mathbb{X}$. In this case, there exists a sequence $0 < \varepsilon_n \to 0$ as $n \to \infty$ such that $\pi_{\varepsilon_n, i'} \to 0$ as $n \to \infty$. Since the sequences $p_{\varepsilon_n, ij}, n = 1, 2, \ldots, i, j \in \mathbb{X}$ are bounded, there exists a subsequence $0 < \varepsilon_{n_k} \to 0$ as $k \to \infty$ such that $p_{\varepsilon_{n_k}, ij} \to p_{0, ij}$ as $k \to \infty$, for $i, j \in \mathbb{X}$. By proposition {\bf (iii)}, the matrix $\| p_{0, ij} \|$ is stochastic, the Markov chain $\eta_{0, n}$ with the matrix of transition probabilities $\| p_{0, ij} \|$ is ergodic and its stationary distribution $\pi_{0, i}, i \in \mathbb{X}$ satisfies the asymptotic relation, $\pi_{\varepsilon_{n_k}, i} \to \pi_{0, i}$ as $k \to \infty$, for $i \in \mathbb{X}$. This is impossible, since the equality $\pi_{0, i'} = 0$ should hold, while all stationary probabilities $\pi_{0, i}, i \in \mathbb{X}$ are positive. Thus, $\varliminf_{\varepsilon \to 0} \pi_{\varepsilon, i} > 0$, for $i \in \mathbb{X}$. This implies that, also, $\varlimsup_{\varepsilon \to 0} \pi_{\varepsilon, i} < 1$, for $i \in \mathbb{X}$, since $\sum_{i \in \mathbb{X}}\pi_{\varepsilon, i} =1$, for $\varepsilon \in (0, \varepsilon'_0]$. Finally, let us now prove proposition {\bf (ii)}.
Conditions $\mathbf{B}$ and $\mathbf{E}$ obviously imply that $\varliminf_{\varepsilon \to 0}\tilde{p}_{\varepsilon, i_{k-1} i_k} = \varliminf_{\varepsilon \to 0} p_{\varepsilon, i_{k-1} i_k} > 0, k = 1, \ldots, N$, for the ring chain penetrating condition $\mathbf{B}$. Thus, condition $\mathbf{B}$ holds also for the Markov chains $\tilde{\eta}_{\varepsilon, n}$, and there exists $\varepsilon''_0 \in (0, \varepsilon'_0]$ such that the Markov chain $\tilde{\eta}_{\varepsilon, n}$ is ergodic, for every $\varepsilon \in (0, \varepsilon''_0]$. Let us assume that the convergence relation for stationary distributions penetrating proposition {\bf (ii)} does not hold. In this case, there exist $\delta > 0$ and a sequence $0 < \varepsilon_n \to 0$ as $n \to \infty$ such that $\varliminf_{n \to \infty} |\pi_{\varepsilon_n, i'} - \tilde{\pi}_{\varepsilon_n, i'}| \geq \delta$, for some $i' \in \mathbb{X}$. Since the sequences $p_{\varepsilon_n, ij}, n = 1, 2, \ldots, i, j \in \mathbb{X}$ are bounded, there exists a subsequence $0 < \varepsilon_{n_k} \to 0$ as $k \to \infty$ such that $p_{\varepsilon_{n_k}, ij} \to p_{0, ij}$ as $k \to \infty$, for $i, j \in \mathbb{X}$. These relations and condition $\mathbf{E}$ imply that, also, $\tilde{p}_{\varepsilon_{n_k}, ij} \to p_{0, ij}$ as $k \to \infty$, for $i, j \in \mathbb{X}$. By proposition {\bf (iii)}, the matrix $\| p_{0, ij} \|$ is stochastic, the Markov chain $\eta_{0, n}$ with the matrix of transition probabilities $\| p_{0, ij} \|$ is ergodic and its stationary distribution $\pi_{0, i}, i \in \mathbb{X}$ satisfies the asymptotic relations, $\pi_{\varepsilon_{n_k}, i} \to \pi_{0, i}$ as $k \to \infty$, for $i \in \mathbb{X}$, and $\tilde{\pi}_{\varepsilon_{n_k}, i} \to \pi_{0, i}$ as $k \to \infty$, for $i \in \mathbb{X}$. This is impossible, since the relation $\varliminf_{k \to \infty} |\pi_{\varepsilon_{n_k}, i'} - \tilde{\pi}_{\varepsilon_{n_k}, i'}| \geq \delta$ should hold. $\Box$ Due to Lemma 1, the asymptotic relation penetrating condition $\mathbf{D}_1$ can, under conditions $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{E}$, be rewritten in the equivalent form, where the stationary probabilities $\pi_{\varepsilon, i}, i \in \mathbb{X}$ are replaced by the stationary probabilities $\tilde{\pi}_{\varepsilon, i}, i \in \mathbb{X}$, \begin{align}\label{bokerwa} v_\varepsilon (1 - \varphi_\varepsilon(s)) & = \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} v_\varepsilon(1 -\varphi_{\varepsilon, i}(s)) \nonumber \\ & \sim \sum_{i \in \mathbb{X}} \tilde{\pi}_{\varepsilon, i} v_\varepsilon(1 -\varphi_{\varepsilon, i}(s)) \to A(s) \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0. \end{align} Here and henceforth, the relation $a(\varepsilon) \sim b(\varepsilon)$ as $\varepsilon \to 0$ means that $a(\varepsilon)/b(\varepsilon) \to 1$ as $\varepsilon \to 0$. Proposition {\bf (iii)} of Lemma 1 implies that, in the case where the transition probabilities $p_{\varepsilon, ij} = p_{0,ij}, i, j \in \mathbb{X}$ do not depend on the parameter $\varepsilon$, or $p_{\varepsilon, ij} \to p_{0, ij}$ as $\varepsilon \to 0$, for $i, j \in \mathbb{X}$, condition $\bf B$ reduces to the standard assumption that the Markov chain $\eta_{0, n}$, with the matrix of transition probabilities $\| p_{0,ij}\|$, is ergodic.
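The following small numerical sketch, which is only an illustration and is not used in what follows, shows the effect described in proposition {\bf (iii)} of Lemma 1: for a family of transition matrices satisfying condition $\mathbf{F}$, the stationary distributions, computed from the system (\ref{statikabas}), approach the stationary distribution of the limiting matrix as $\varepsilon \to 0$. The matrix $P_0$ and the perturbation used below are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

def stationary(P):
    # stationary distribution of an ergodic stochastic matrix P,
    # obtained from the linear system pi P = pi, sum_i pi_i = 1
    m = P.shape[0]
    A = np.vstack([P.T - np.eye(m), np.ones(m)])
    b = np.zeros(m + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# an (arbitrary) ergodic limiting matrix P_0 and a perturbation with zero
# row sums, so that P_eps = P_0 + 0.1 * eps * D is stochastic and
# p_{eps,ij} -> p_{0,ij} as eps -> 0 (condition F, hence conditions B and E)
P0 = np.array([[0.2, 0.5, 0.3],
               [0.4, 0.1, 0.5],
               [0.3, 0.3, 0.4]])
D = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0],
              [-1.0, 0.0, 1.0]])

for eps in [0.1, 0.01, 0.001]:
    P_eps = P0 + 0.1 * eps * D
    diff = np.max(np.abs(stationary(P_eps) - stationary(P0)))
    print(eps, diff)   # diff vanishes as eps -> 0
\end{verbatim}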
These simpler variants of asymptotic ergodicity condition, based on condition $\mathbf{F}$ and the assumption of ergodicity of the Markov chain $\varepsilonta_{0, n}$ combined with averaging of characteristic in condition {\bf D} by its stationary distribution $\pi_{0, i}, i \in \mathbb{X}$, have been used in the mentioned above works by Silvestrov and Drozdenko (2006a) and Drozdenko (2007a) for proving analogues of propositions {\bf (i)} and {\bf (ii)} of Theorem 1. In this case, the averaging of characteristics in the necessary and sufficient condition {\bf D}, in fact, relates mainly to distributions of sojourn times. Condition {\bf B}, used in the present paper, balances in a natural way averaging of characteristics in condition {\bf D} between distributions of sojourn times and stationary distributions of the corresponding embedded Markov chains. Let us introduce random variables, which are sequential moments of hitting state $i \in \mathbb{X}$ by the Markov chain $\varepsilonta_{\varepsilon, n}$, \begin{equation}\label{rewopi} \tau_{\varepsilon, i, n} = \left\{ \begin{array}{ll} \min(k \geq 0, \varepsilonta_{\varepsilon, k} = i) & \text{for} \ n = 1, \\ \min(k > \tau_{\varepsilon, i, n - 1}, \varepsilonta_{\varepsilon, k} = i) & {\rm for} \ n \geq 2. \varepsilonnd{array} \right. \varepsilonnd{equation} Let also define random variables, \begin{equation}\label{rewopilo} \kappa_{\varepsilon, i, n} = \kappa_{\varepsilon, \tau_{\varepsilon, i, n} +1}, n = 1, 2, \ldots, i \in \mathbb{X}. \varepsilonnd{equation} The following simple lemma describe useful properties of the above family of random variables. {\bf Lemma 2}. {\varepsilonm Let condition {\bf B} holds. Then, for every $\varepsilon \in (0, \varepsilon'_0]$, {\bf (i)} the random variables $\kappa_{\varepsilon, i, n}, n = 1, 2, \ldots, i \in \mathbb{X}$ are independent; {\bf (ii)} $\mathsf{P} \{\kappa_{\varepsilon, i, n} \leq t \} = G_{\varepsilon, i}(t), t \geq 0$, for $n = 1, 2, \ldots, i \in \mathbb{X}$; {\bf (iii)} the following representation takes place for process $\kappa_\varepsilon(t)$, \begin{equation}\label{edibastabe} \kappa_\varepsilon(t) = \sum_{n = 1}^{[tv_\varepsilon]} \kappa_{\varepsilon, n} = \sum_{i \in \mathbb{X}} \sum_{n = 1}^{\mu_{\varepsilon, i}([tv_\varepsilon])} \kappa_{\varepsilon, i, n}, t \geq 0. \varepsilonnd{equation}} It should be noted that the families of random variables $\langle \mu_{\varepsilon, i}(n), n = 0, 1, \ldots, i \in \mathbb{X} \rangle$ and $\langle \kappa_{\varepsilon, i, n}, n = 1, 2, \ldots, i \in \mathbb{X} \rangle$ are not independent. In what follows, we, for simplicity, indicate convergence of {\cd} processes in uniform U-topology to continuous processes as convergence in J-topology, since, in this case, convergence J-topology is equivalent to convergence in uniform U-topology. {\bf Lemma 3}. {\varepsilonm Let condition {\bf B} hold. Then,} \begin{equation}\label{edibasta} \mu^*_{\varepsilon, i}(t) = \frac{\mu_{\varepsilon, i}([tv_\varepsilon])}{\pi_{\varepsilon, i} v_\varepsilon}, t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \mu_{0, i}(t) = t, t \geq 0 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ i \in \mathbb{X}. \varepsilonnd{equation} {\bf Proof}. Let $\alpha_{\varepsilon, j} = \min(n > 0: \varepsilonta_{\varepsilon, n} = j)$ be the moment of first hitting to the state $j \in \mathbb{X}$ for the Markov chain $\varepsilonta_{\varepsilon, n}$. 
Condition {\bf B} implies that there exist $p \in (0, 1)$ and $\varepsilon_p \in (0, \varepsilon_0]$ such that $\prod_{k = 1}^N p_{\varepsilon, i_{k-1} i_k} > p$, for $\varepsilon \in (0, \varepsilon_p]$. The following inequalities are obvious, $\mathsf{P}_i \{ \alpha_{\varepsilon, j} > kN \} \leq (1 - p)^k, k \geq 1, i, j \in \mathbb{X}$, for $\varepsilon \in (0, \varepsilon_p]$. These inequalities imply that there exists $K_p \in (0, \infty)$ such that $\max_{i, j \in \mathbb{X}} \mathsf{E}_i \alpha_{\varepsilon, j}^2 \leq K_p < \infty, i, j \in \mathbb{X}$, for $\varepsilon \in (0, \varepsilon_p]$. Also, as well known, $\mathsf{E}_i \alpha_{\varepsilon, i} = \pi_{\varepsilon, i}^{-1}, i \in \mathbb{X}$, for $\varepsilon \in (0, \varepsilon_p]$. Let $\alpha_{\varepsilon, i, n} = \min(k > \alpha_{\varepsilon, i, n - 1} : \varepsilonta_{\varepsilon, k} = i), n = 1, 2, \ldots$ be sequential moments of hitting to state $i \in \mathbb{X}$ for the Markov chain $\varepsilonta_{\varepsilon, n}$ and $\beta_{\varepsilon, i, n} = \alpha_{\varepsilon, i, n} - \alpha_{\varepsilon, i, n-1}, n = 1, 2, \ldots$, where $\alpha_{\varepsilon, i, 0} = 0$. The random variables $\beta_{\varepsilon, i, n}, n \geq 1$ are independent and identically distributed for $n \geq 2$. The above relations for moments of random variables $\alpha_{\varepsilon, i} $ imply that $\alpha_{\varepsilon, i, 1}/v_\varepsilon \stackrel{\mathsf{P}}{\longrightarrow} 0$ as $\varepsilon \to 0$, for $i \in \mathbb{X}$. Also, $\mathsf{P}_i \{ v_\varepsilon^{-1} | \alpha_{\varepsilon, i, [tv_\varepsilon]} - \pi_{\varepsilon, i}^{- 1}[tv_\varepsilon]| > \delta \} \leq t K_p/ \delta^2 v_\varepsilon, \delta > 0, t \geq 0, i \in \mathbb{X}$, for $\varepsilon \in (0, \varepsilon_p]$. These relations obviously implies that random variables $\alpha_{\varepsilon, i, [tv_\varepsilon]}/ \pi_{\varepsilon, i}^{-1}v_\varepsilon \stackrel{\mathsf{P}}{\longrightarrow} t$ as $\varepsilon \to 0$, for $t \geq 0$. The dual identities $\mathsf{P} \{\mu_{\varepsilon, i}(r) \geq k \} = \mathsf{P} \{ \alpha_{\varepsilon, i, k} \leq r \}, r, k = 0, 1, \ldots$ let one, in standard way, convert the latter asymptotic relation to the equivalent relation $\mu^*_{\varepsilon, i}(t) = \mu_{\varepsilon, i, [tv_\varepsilon]}/ \pi_{\varepsilon, i}v_\varepsilon \stackrel{\mathsf{P}}{\longrightarrow} t$ as $\varepsilon \to 0$, for $t \geq 0$. Since the processes $\mu^*_{\varepsilon, i}(t), t \geq 0$ are nondecreasing and the corresponding limiting function is continuous, the latter asymptotic relation is (see, for example, Lemma 3.2.2 from Silvestrov (2004)) equivalent to the asymptotic relation (\ref{edibasta}) given in Lemma 3. $\Box$ Let now introduce step-sum processes with independent increments, \begin{equation}\label{edibastaha} \tilde{\kappa}_\varepsilon(t) = \sum_{i \in \mathbb{X}} \sum_{n = 1}^{[t\pi_{\varepsilon, i} v_\varepsilon]} \kappa_{\varepsilon, i, n}, t \geq 0. \varepsilonnd{equation} Lemmas 2 and 3 let us presume that processes $\tilde{\kappa}_\varepsilon(t)$ can be good approximations for processes $\kappa_\varepsilon(t)$. {\bf Lemma 4}. {\varepsilonm Let condition {\bf B} hold. Then, {\bf (i)} condition {\bf D} holds if and only if the following relation holds, $\tilde{\kappa}_\varepsilon(1) \stackrel{d}{\longrightarrow} \theta_0$ as $\varepsilon \to 0$, where $\theta_0$ is a non-negative random variable with distribution not concentrated in zero. 
In this case, {\bf (ii)} the random variable $\theta_0$ has the infinitely divisible distribution with the Laplace transform $\mathsf{E} e^{-s \theta_0} = e^{- A(s)}, s \geq 0$ with the cumulant $A(s)$ defined in condition {\bf D}. Moreover, {\bf (iii)} stochastic processes $\tilde{\kappa}_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \theta_0(t), t \geq 0$ as $\varepsilon \to 0$, where $\theta_0(t), t \geq 0$ is a nonnegative L\'{e}vy process with the Laplace transforms $\mathsf{E} e^{-s \theta_0(t)} = e^{- tA(s)}, s, t \geq 0$.} {\bf Proof of Theorem 2}. The proof of Lemma 4 is an integral part of the proof of Theorem 2. Let us, first, prove that condition {\bf D} implies holding of the asymptotic relations penetrating Lemma 4 and Theorem 2. Let $\hat{\varepsilonta}_{\varepsilon, n}, n = 1, 2, \ldots$ be, for every $\varepsilon \in (0, \varepsilon'_0]$, a sequence of random variables such that: (a) it is independent of the Markov chain $(\varepsilonta_{\varepsilon, n}, \kappa_{\varepsilon, n}), n = 0, 1, \ldots$ and (b) it is a sequence of i.i.d. random variables taking value $i$ with probability $\pi_{\varepsilon, i}$, for $i \in \mathbb{X}$. Note that, in this case, the sequence of random variables $\hat{\varepsilonta}_{\varepsilon, n}, n = 1, 2, \ldots$ is also independent of the families of random variables $\langle \mu_{\varepsilon, i}(n), n = 0, 1, \ldots, i \in \mathbb{X} \rangle$ and $\langle \kappa_{\varepsilon, i, n}, n = 1, 2, \ldots, i \in \mathbb{X} \rangle$. Let us define random variables, \begin{equation}\label{edibasa} \hat{\mu}_{\varepsilon, i}(n) = \sum_{k = 1}^{n} I(\hat{\varepsilonta}_{\varepsilon, n} = i), n = 0, 1, \ldots, i \in \mathbb{X}. \varepsilonnd{equation} and stochastic processes \begin{equation}\label{edibahar} \hat{\kappa}_\varepsilon(t) = \sum_{i \in \mathbb{X}} \sum_{n = 1}^{\hat{\mu}_{\varepsilon, i}([t v_\varepsilon])} \kappa_{\varepsilon, i, n}, t \geq 0. \varepsilonnd{equation} Let us also consider the sequence of random variables $\theta_{\varepsilon, n} = \kappa_{\varepsilon, \hat{\varepsilonta}_{\varepsilon, n}, n}, n = 1, 2, \ldots$. This is the sequence of i.i.d. random variables that follows from the above definition of the sequence of random variables $\hat{\varepsilonta}_{\varepsilon, n}, n = 1, 2, \ldots$ and the family of random variables $\kappa_{\varepsilon, i, n}, n = 1, 2, \ldots, i \in \mathbb{X}$. Also, \begin{equation}\label{edibaheha} \mathsf{P} \{\theta_{\varepsilon, 1} \leq t \} = \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} G_{\varepsilon, i}(t) = G_\varepsilon(t), t \geq 0. \varepsilonnd{equation} Let us also define the homogeneous step-sum processes with independent increments using for them, due to relation (\ref{edibaheha}) the same notation as for processes introduced in relation (\ref{bgre}), \begin{equation}\label{edibaha} \theta_\varepsilon(t) = \sum_{n = 1}^{[t v_\varepsilon]} \theta_{\varepsilon, n}, t \geq 0. \varepsilonnd{equation} As well known (see, for example, Skorokhod (1964, 1986)), condition {\bf D} is equivalent to the following relation, \begin{equation}\label{edibaas} \theta_\varepsilon(t), t \geq 0 \stackrel{d}{\longrightarrow} \theta_0(t), t \geq 0 \ {\rm as} \ \varepsilon \to 0. 
\varepsilonnd{equation} By the definition of the sequence of random variables $\langle \hat{\varepsilonta}_{\varepsilon, n}, n = 1, 2, \ldots \rangle$ and the family of random variables $\langle \kappa_{\varepsilon, i, n}, n = 1, 2, \ldots, i \in \mathbb{X} \rangle$, in particular, due to independence of the above sequence and family, the following relation holds, \begin{equation}\label{edibahawa} \hat{\kappa}_\varepsilon(t), t \geq 0 \stackrel{d}{=} \theta_\varepsilon(t), t \geq 0. \varepsilonnd{equation} Relation (\ref{edibahawa}) implies that $\hat{\kappa}_\varepsilon(t), t \geq 0$ also is a homogeneous step-sum process with independent increments and that condition {\bf D} is equivalent to the following relation, \begin{equation}\label{edibaastae} \hat{\kappa}_\varepsilon(t), t \geq 0 \stackrel{d}{\longrightarrow} \theta_0(t), t \geq 0 \ {\rm as} \ \varepsilon \to 0. \varepsilonnd{equation} Random variables $I(\hat{\varepsilonta}_{\varepsilon, n} = i), n = 1, 2, \ldots$ are, for every $i \in \mathbb{X}$, i.i.d. random variables taking values $1$ and $0$ with probabilities, respectively, $\pi_{\varepsilon, i}$ and $1 - \pi_{\varepsilon, i}$. According proposition {\bf (i)} of Lemma 1, $0 < \varliminf_{\varepsilon \to 0} \pi_{\varepsilon, i} \leq \varlimsup_{\varepsilon \to 0} \pi_{\varepsilon, i} < 1$, for every $i \in \mathbb{X}$. Taking into account the above remarks, this is easy to prove using the corresponding results from Skorokhod (1964, 1986), that the following relation holds, \begin{equation}\label{edibastakamop} \hat{\mu}^*_{\varepsilon, i}(t) = \frac{\hat{\mu}_{\varepsilon, i}([tv_\varepsilon])}{\pi_{\varepsilon, i} v_\varepsilon}, t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \mu_{0, i}(t) = t, t \geq 0 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ i \in \mathbb{X}. \varepsilonnd{equation} Let us choose some $0 < u < 1$. By the definition, processes $\tilde{\kappa}_{\varepsilon}(t)$, $\hat{\kappa}_{\varepsilon}(t)$, and $\hat{\mu}^*_{\varepsilon, i}(t), i \in \mathbb{X}$ are non-negative and non-decreasing. Taking this into account, we get, for $x \geq 0$, \begin{align}\label{bibermof} \mathsf{P} \{\tilde{\kappa}_{\varepsilon}(u) > x \} & \leq \mathsf{P}\{ \tilde{\kappa}_{\varepsilon}(u) > x, \hat{\mu}^*_{\varepsilon, i}(1) > u, i \in \mathbb{X} \} \nonumber \\ & \quad + \sum_{i \in \mathbb{X}}\mathsf{P}\{ \tilde{\kappa}_{\varepsilon}(u) > x, \hat{\mu}^*_{\varepsilon, i}(1) \leq u \} \nonumber \\ & \leq \mathsf{P} \{ \hat{\kappa}_{\varepsilon}(1) > x \} + \sum_{i \in \mathbb{X}} \mathsf{P} \{ \hat{\mu}^*_{\varepsilon, i}(1) \leq u \}. \varepsilonnd{align} Relations (\ref{edibaastae}), (\ref{edibastakamop}) and inequality (\ref{bibermof}) imply that distributions of random variables $\tilde{\kappa}_{\varepsilon}(u)$ are relatively compact as $\varepsilon \to 0$, \begin{align}\label{botyr} & \lim_{x \to \infty} \varlimsup_{\varepsilon \to 0} \mathsf{P} \{ \tilde{\kappa}_{\varepsilon}(u) > x \} \leq \lim_{x \to \infty} \varlimsup_{\varepsilon \to 0} ( \mathsf{P} \{\hat{\kappa}_{\varepsilon}(1) > x \} \nonumber \\ & \quad \quad + \sum_{i \in \mathbb{X}} \mathsf{P} \{ \hat{\mu}^*_{\varepsilon, i}(1) \leq u \}) = \lim_{x \to \infty} \mathsf{P} \{ \theta_0(1) > x \} = 0. \varepsilonnd{align} Let also introduce homogeneous step-sum processes with independent increments, for $i \in \mathbb{X}$, \begin{equation}\label{botyra} \tilde{\kappa}_{\varepsilon, i}(t) = \sum_{n = 1}^{[t\pi_{\varepsilon, i} v_\varepsilon]} \kappa_{\varepsilon, i, n}, t \geq 0. 
\varepsilonnd{equation} Note that, for every $\varepsilon \in (0, \varepsilon'_0]$, processes $\langle \tilde{\kappa}_{\varepsilon, i}(t), t \geq 0 \rangle, i \in \mathbb{X}$ are independent. Since, $\tilde{\kappa}_{\varepsilon, i}(u) \leq \tilde{\kappa}_{\varepsilon}(u)$, for $i \in \mathbb{X}$, relation (\ref{botyr}) imply that distributions of random variables $\tilde{\kappa}_{\varepsilon, i}(1)$ are also relatively compact as $\varepsilon \to 0$, for every $i \in \mathbb{X}$, \begin{equation}\label{botyras} \lim_{x \to \infty} \varlimsup_{\varepsilon \to 0} \mathsf{P} \{ \tilde{\kappa}_{\varepsilon, i}(u) > x \} \leq \lim_{x \to \infty} \varlimsup_{\varepsilon \to 0} \mathsf{P} \{ \tilde{\kappa}_{\varepsilon}(u) > x \} = 0. \varepsilonnd{equation} This implies that any sequence $0 < \varepsilon_n \to 0$ as $n \to \infty$ contains a subsequence $0 < \varepsilon_{n_k} \to 0$ as $k \to \infty$ such that random variables, \begin{equation}\label{botyrasa} \tilde{\kappa}_{\varepsilon_{n_k}, i}(u) \stackrel{d}{\longrightarrow} \theta_{0, i, u} \ {\rm as} \ k \to \infty, \ {\rm for} \ i \in \mathbb{X}, \varepsilonnd{equation} where $\theta_{0, i, u}, i \in \mathbb{X}$ are proper nonnegative random variables, with distributions possibly dependent of the choice of subsequence $\varepsilon_{n_k}$. Moreover, by the central criterion of convergence (see, for example, Lo\`eve (1977)), random variables $\theta_{0, i, u}, i \in \mathbb{X}$ have infinitely divisible distributions. Let $\mathsf{E} e^{- s \theta_{0, i, u}} = e^{- uA_i(s)}, s \geq 0, i \in \mathbb{X}$ be their Laplace transforms. As well known (see, for example, Skorokhod (1964, 1986)), relation (\ref{botyrasa}) implies that stochastic processes, \begin{equation}\label{botyrame} \tilde{\kappa}_{\varepsilon_{n_k}, i}(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \theta_{0, i}(t), t \geq 0 \ {\rm as} \ k \to \infty, \ {\rm for} \ i \in \mathbb{X}, \varepsilonnd{equation} where $\theta_{0, i}(t), t \geq 0, i \in \mathbb{X}$ are nonnegative L\'{e}vy processes with Laplace transforms $\mathsf{E} e^{- s \theta_{0, i}(t)} = e^{- t A_i(s)}, s, t \geq 0, i \in \mathbb{X}$, possibly dependent of the choice of subsequence $\varepsilon_{n_k}$. Moreover, since processes $\tilde{\kappa}_{\varepsilon, i}(t), t \geq 0, i \in \mathbb{X}$ are independent, J-conver\-gence of vector processes $(\tilde{\kappa}_{\varepsilon_{n_k}, 1}(t), \ldots, \tilde{\kappa}_{\varepsilon_{n_k}, m}(t)), t \geq 0$ also takes place, \begin{align}\label{botyrabero} & (\tilde{\kappa}_{\varepsilon_{n_k}, 1}(t), \ldots, \tilde{\kappa}_{\varepsilon_{n_k}, m}(t)), t \geq 0 \nonumber \\ & \quad \quad \quad \stackrel{\mathsf{J}}{\longrightarrow} (\theta_{0, 1}(t), \ldots, \theta_{0, m}(t)), t \geq 0 \ {\rm as} \ k \to \infty, \varepsilonnd{align} where $\theta_{0, i}(t), t \geq 0, i \in \mathbb{X}$ are independent nonnegative L\'{e}vy processes with Laplace transforms $\mathsf{E} e^{- s \theta_{0, i}(t)} = e^{- t A_i(s)}, s, t \geq 0, i \in \mathbb{X}$, possibly dependent of the choice of subsequence $\varepsilon_{n_k}$. 
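To make the above construction more concrete, the following small simulation sketch, which is only an illustrative aside and is not a part of the proof, checks numerically the representation (\ref{edibastabe}) and compares $\kappa_\varepsilon(1)$ with the approximating sum $\tilde{\kappa}_\varepsilon(1)$ built from the per-state summands $\kappa_{\varepsilon, i, n}$. The chain, the number of steps and the exponential sojourn-time distributions used below are arbitrary illustrative choices corresponding to the drift case $A(s) = gs$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.2, 0.5, 0.3],      # arbitrary ergodic transition matrix
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
m, v = 3, 10_000                    # v plays the role of v_eps -> infinity
b = np.array([0.5, 1.0, 2.0])       # E_i kappa_{eps,1} = b_i / v

# one trajectory of (eta_n, kappa_n), n = 1, ..., v; kappa_n is the
# sojourn time attached to the state eta_{n-1}
eta = np.empty(v + 1, dtype=int); eta[0] = 0
for n in range(v):
    eta[n + 1] = rng.choice(m, p=P[eta[n]])
kappa = rng.exponential(b[eta[:-1]] / v)

kappa_eps = kappa.sum()                          # kappa_eps(1), direct sum

# per-state splitting: kappa_{eps,i,n} are the sojourn times attached to
# the successive visits to state i; the representation is an identity
split = [kappa[eta[:-1] == i] for i in range(m)]
assert np.isclose(kappa_eps, sum(s.sum() for s in split))

# tilde-kappa_eps(1) uses [pi_{eps,i} v] summands for each state i
pi = np.linalg.matrix_power(P, 200)[0]           # numerical stationary distribution
kappa_tilde = sum(s[:int(pi[i] * v)].sum() for i, s in enumerate(split))
print(kappa_eps, kappa_tilde, float(pi @ b))     # all close to sum_i pi_i b_i
\end{verbatim}
In this drift-type example all three printed values are close to $\sum_{i \in \mathbb{X}} \pi_{i} b_{i}$, in agreement with Lemmas 2 and 3, which is exactly the sense in which the processes $\tilde{\kappa}_\varepsilon(t)$ approximate the processes $\kappa_\varepsilon(t)$.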
Note (see, for example, Theorem 3.8.1, in Silvestrov (2004)) that J-compactness of the vector processes $(\tilde{\kappa}_{\varepsilon_{n_k}, 1}(t), \ldots, \tilde{\kappa}_{\varepsilon_{n_k}, m}(t))$ follows from J-compactness of their components $\tilde{\kappa}_{\varepsilon_{n_k}, i}(t), i \in \mathbb{X},$ since the corresponding limiting processes $\theta_{0, i}(t), i \in \mathbb{X}$ are stochastically continuous and independent and, thus, they have not with probability $1$ joint points of discontinuity. Relation (\ref{botyrabero}) obviously implies the following relation, \begin{equation}\label{botyraberbert} \tilde{\kappa}_{\varepsilon_{n_k}}(t) = \sum_{i \in \mathbb{X}} \tilde{\kappa}_{\varepsilon_{n_k}, i}(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \theta'_{0}(t) = \sum_{i \in \mathbb{X}} \theta_{0, i}(t), t \geq 0 \ {\rm as} \ k \to \infty, \varepsilonnd{equation} where $\theta_{0, i}(t), t \geq 0, i \in \mathbb{X}$ are independent nonnegative L\'{e}vy processes described in relation (\ref{botyrabero}). Since, the limiting processes in (\ref{edibasta}) and (\ref{edibastakamop}) are non-random functions, relations (\ref{edibasta}), (\ref{edibastakamop}) and (\ref{botyraberbert}) imply (see, for example, Subsection 1.2.4 in Silvestrov (2004)), by Slutsky theorem, that, \begin{align}\label{botyraberik} & (\mu^*_{\varepsilon_{n_k}, 1}(t), \ldots, \mu^*_{\varepsilon_{n_k}, m}(t), \tilde{\kappa}_{\varepsilon_{n_k}, 1}(t), \ldots, \tilde{\kappa}_{\varepsilon_{n_k}, m }(t)), t \geq 0 \nonumber \\ & \quad \quad \stackrel{d}{\longrightarrow} (\mu_{0, 1}(t), \ldots, \mu_{0, m}(t), \theta_{0, 1}(t), \ldots, \theta_{0, m}(t)), t \geq 0 \ {\rm as} \ k \to \infty, \varepsilonnd{align} and \begin{align}\label{botyraberit} & (\hat{\mu}^*_{\varepsilon_{n_k}, 1}(t), \ldots, \hat{\mu}^*_{\varepsilon_{n_k}, m}(t), \tilde{\kappa}_{\varepsilon_{n_k}, 1}(t), \ldots, \tilde{\kappa}_{\varepsilon_{n_k}, m}(t) ), t \geq 0 \nonumber \\ & \quad \quad \stackrel{d}{\longrightarrow} (\mu_{0, 1}(t), \ldots, \mu_{0, m}(t), \theta_{0, 1}(t), \ldots, \theta_{0, m}(t)), t \geq 0 \ {\rm as} \ k \to \infty, \varepsilonnd{align} where $\mu_{0, i}(t) = t, t \geq 0, i \in \mathbb{X}$ and $\theta_{0, i}(t), t \geq 0, i \in \mathbb{X}$ are independent nonnegative L\'{e}vy processes defined in relation (\ref{botyrabero}). 
We can now apply Theorem 3.8.2, from Silvestrov (2004), which give conditions of J-convergence for vector compositions of {\cd} stochastic processes, and get the following asymptotic relations, \begin{align}\label{botyrabernabaw} & (\tilde{\kappa}_{\varepsilon_{n_k}, 1}(\mu^*_{\varepsilon_{n_k}, 1}(t)), \ldots, \tilde{\kappa}_{\varepsilon_{n_k}, m}(\mu^*_{\varepsilon_{n_k}, m}(t))), t \geq 0 \nonumber \\ & \quad \quad \stackrel{\mathsf{J}}{\longrightarrow} (\theta_{0, 1}(\mu_{0, 1}(t) ), \ldots, \theta_{0, m}(\mu_{0, m}(t) )) \nonumber \\ & \quad \quad \quad = (\theta_{0, 1}(t), \ldots, \theta_{0, m}(t)), t \geq 0 \ {\rm as} \ k \to \infty, \varepsilonnd{align} and \begin{align}\label{botyrabernavav} & (\tilde{\kappa}_{\varepsilon_{n_k}, 1}(\hat{\mu}^*_{\varepsilon_{n_k}, 1}(t)), \ldots, \tilde{\kappa}_{\varepsilon_{n_k}, m}(\hat{\mu}^*_{\varepsilon_{n_k}, m}(t))), t \geq 0 \nonumber \\ & \quad \quad \stackrel{\mathsf{J}}{\longrightarrow} (\theta_{0, 1}(\mu_{0, 1}(t) ), \ldots, \theta_{0, m}(\mu_{0, m}(t) )) \nonumber \\ & \quad \quad \quad = (\theta_{0, 1}(t), \ldots, \theta_{0, m}(t)), t \geq 0 \ {\rm as} \ k \to \infty, \varepsilonnd{align} where $\theta_{0, i}(t), t \geq 0, i \in \mathbb{X}$ are independent nonnegative L\'{e}vy processes defined in relation (\ref{botyrabero}). Relations (\ref{botyrabernabaw}) and (\ref{botyrabernavav}) obviously imply J-convergence for sum of components of the processes in these relations, i.e. that, respectively, the following relations hold, \begin{align}\label{botyrabernaop} & \kappa_{\varepsilon_{n_k}}(t) = \sum_{i \in \mathbb{X}} \tilde{\kappa}_{\varepsilon_{n_k}, i}(\mu^*_{\varepsilon_{n_k}, i}(t)), t \geq 0 \nonumber \\ & \quad \quad \stackrel{\mathsf{J}}{\longrightarrow} \theta'_{0}(t) = \sum_{i \in \mathbb{X}}\theta_{0, i}(t), t \geq 0 \ {\rm as} \ k \to \infty, \varepsilonnd{align} and \begin{align}\label{botyrabernaopc} & \hat{\kappa}_{\varepsilon_{n_k}}(t) = \sum_{i \in \mathbb{X}} \tilde{\kappa}_{\varepsilon_{n_k}, i}(\hat{\mu}^*_{\varepsilon_{n_k}, i}(t)), t \geq 0 \nonumber \\ & \quad \quad \stackrel{\mathsf{J}}{\longrightarrow} \theta'_{0}(t) = \sum_{i \in \mathbb{X}}\theta_{0, i}(t), t \geq 0 \ {\rm as} \ k \to \infty, \varepsilonnd{align} where $\theta_{0, i}(t), t \geq 0, i \in \mathbb{X}$ are independent nonnegative L\'{e}vy processes defined in relation (\ref{botyrabero}). Relation (\ref{edibaastae}) implies that \begin{equation}\label{botyraberna} \theta'_{0}(t), t \geq 0 \stackrel{d}{=} \theta_{0}(t), t \geq 0, \varepsilonnd{equation} Thus, the limiting process $\theta'_{0}(t) = \sum_{i \in \mathbb{X}}\theta_{0, i}(t), t \geq 0$ has the same finite dimensional distributions for all subsequences $\varepsilon_{n_k}$ described above. Moreover, the cumulant $A(s)$ of the limiting L\'{e}vy process $\theta_{0}(t)$ is connected with cumulants $A_i(s), i \in \mathbb{X}$ of L\'{e}vy processes $\theta_{0, i}(t)$ by relation, $A(s) = \sum_{i \in \mathbb{X}} A_i(s)$, $s \geq 0$. 
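Indeed, since the processes $\theta_{0, i}(t), t \geq 0, i \in \mathbb{X}$ are independent, relation (\ref{botyraberna}) yields, for $s, t \geq 0$,
\begin{equation*}
e^{- t A(s)} = \mathsf{E} e^{-s \theta_{0}(t)} = \mathsf{E} e^{-s \theta'_{0}(t)} = \prod_{i \in \mathbb{X}} \mathsf{E} e^{-s \theta_{0, i}(t)} = e^{- t \sum_{i \in \mathbb{X}} A_i(s)},
\end{equation*}
which gives the above relation $A(s) = \sum_{i \in \mathbb{X}} A_i(s)$, $s \geq 0$.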
Therefore, relations (\ref{botyraberbert}), (\ref{botyrabernaop}) and (\ref{botyrabernaopc}) imply that, respectively, the following relations hold, \begin{equation}\label{botyrabernakolb} \tilde{\kappa}_{\varepsilon}(t) = \sum_{i \in \mathbb{X}} \tilde{\kappa}_{\varepsilon, i}(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \theta_{0}(t), t \geq 0 \ {\rm as} \ \varepsilon \to 0, \varepsilonnd{equation} and \begin{align}\label{botyrabernanop} \kappa_{\varepsilon}(t) = \sum_{i \in \mathbb{X}} \tilde{\kappa}_{\varepsilon, i}(\mu^*_{\varepsilon, i}(t)), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \theta_{0}(t), t \geq 0 \ {\rm as} \ \varepsilon \to 0, \varepsilonnd{align} as well as, \begin{align}\label{botyrabernanok} \hat{\kappa}_{\varepsilon}(t) = \sum_{i \in \mathbb{X}} \tilde{\kappa}_{\varepsilon, i}(\hat{\mu}^*_{\varepsilon, i}(t)), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \theta_{0}(t), t \geq 0 \ {\rm as} \ \varepsilon \to 0. \varepsilonnd{align} It is useful to note that relation (\ref{botyrabernanok}) for homogeneous step-sum processes $\hat{\kappa}_{\varepsilon}(t)$ follows directly from relation (\ref{edibaastae}). It was obtained in the way described above just in order to prove that the limiting process in relations (\ref{botyraberbert}), (\ref{botyrabernaop}) and (\ref{botyrabernaopc}) is the same and does not depend on the choice of subsequences $\varepsilon_{n_k}$ described above. This made it possible to write down relations (\ref{botyrabernakolb}) and (\ref{botyrabernanop}). Let us now prove that the asymptotic relation given in proposition {\bf (i)} of Theorem 2 or in proposition {\bf (i)} of Lemma 4 implies condition {\bf D} to hold. In both cases, the first step is to prove that distributions of random variables $\tilde{\kappa}_{\varepsilon}(u)$ are relatively compact as $\varepsilon \to 0$, for some $u > 0$. Let us choose some $0 < u < 1$. By the definition, the processes $\kappa_{\varepsilon}(t)$, $\tilde{\kappa}_{\varepsilon}(t)$, and $\mu^*_{\varepsilon, i}(t), i \in \mathbb{X}$ are nonnegative and nondecreasing. Taking this into account, we get, for any $x \geq 0$, \begin{align}\label{bibermofm} \mathsf{P} \{\tilde{\kappa}_{\varepsilon}(u) > x \} & \leq \mathsf{P}\{ \tilde{\kappa}_{\varepsilon}(u) > x, \mu^*_{\varepsilon, i}(1) > u, i \in \mathbb{X} \} \nonumber \\ & \quad + \sum_{i \in \mathbb{X}}\mathsf{P}\{ \tilde{\kappa}_{\varepsilon}(u) > x, \mu^*_{\varepsilon, i}(1) \leq u \} \nonumber \\ & \leq \mathsf{P} \{\kappa_{\varepsilon}(1) > x \} + \sum_{i \in \mathbb{X}} \mathsf{P} \{ \mu^*_{\varepsilon, i}(1) \leq u \}. \varepsilonnd{align} The asymptotic relation given in proposition {\bf (i)} of Theorem 2, relation (\ref{edibasta}) and inequality (\ref{bibermofm}) imply that, \begin{align}\label{botyrvitytbn} \lim_{x \to \infty} \varlimsup_{\varepsilon \to 0} \mathsf{P} \{ \tilde{\kappa}_{\varepsilon}(u) > x \} & \leq \lim_{x \to \infty} \varlimsup_{\varepsilon \to 0} ( \mathsf{P} \{\kappa_{\varepsilon}(1) > x \} \nonumber \\ & \quad + \sum_{i \in \mathbb{X}} \mathsf{P} \{ \mu^*_{\varepsilon, i}(1) \leq u \}) = \lim_{x \to \infty} \mathsf{P} \{ \theta_0 > x \} = 0. \varepsilonnd{align} Note that, in this nessessity case, the asymptotic relation given in proposition {\bf (i)} of Theorem 2 is required to hold only for at least one family initial distributions $\bar{q}_\varepsilon, \varepsilon \in (0, \varepsilon_0]$. 
The asymptotic relation given in proposition {\bf (i)} of Lemma 4 implies that, \begin{align}\label{botyrvitytb} \lim_{x \to \infty} \varlimsup_{\varepsilon \to 0} \mathsf{P} \{ \tilde{\kappa}_{\varepsilon}(u) > x \} & \leq \lim_{x \to \infty} \varlimsup_{\varepsilon \to 0} \mathsf{P} \{ \tilde{\kappa}_{\varepsilon}(1) > x \} \nonumber \\ & = \lim_{x \to \infty} \mathsf{P} \{ \theta_0 > x \} = 0. \varepsilonnd{align} Relation (\ref{botyrvitytbn}), as well as relation (\ref{botyrvitytb}), implies that distributions of random variables $\tilde{\kappa}_{\varepsilon}(u)$ are relatively compact as $\varepsilon \to 0$. Now, we can repeat the part of the above prove related to relations (\ref{botyra}) -- (\ref{botyrabernaopc}). Relation (\ref{botyrabernaop}) and the asymptotic relation given in proposition {\bf (i)} of Theorem 2, as well as relation (\ref{botyraberbert}) and the asymptotic relation given in proposition {\bf (i)} of Lemma 4, implies that the random variables $\theta'(1)$ and $\theta_0$, which appears in the above asymptotic relations, have the same distribution, \begin{equation}\label{botyrvitva} \theta'(1) \stackrel{d}{=} \theta_0. \varepsilonnd{equation} Moreover, cumulant $A(s)$ of the limiting L\'{e}vy process $\theta'_{0}(t)$ coincides with the cumulant of the random variable $\theta_0$, which, therefore, has infinitely divisible distribution. Moreover, relation (\ref{botyrabernaopc}) implies that cumulant $A(s)$ is connected with cumulants $A_i(s), i \in \mathbb{X}$ of L\'{e}vy processes $\theta'_{0, i}(t)$ by relation $A(s) = \sum_{i \in \mathbb{X}} A_i(s), s \geq 0$. Thus, the limiting process $\theta'_{0}(t), t \geq 0 = \sum_{i \in \mathbb{X}}\theta_{0, i}(t), t \geq 0$ has the same finite dimensional distributions for all subsequences $\varepsilon_{n_k}$ described above. This let us again to write down relations (\ref{botyrabernakolb}) -- (\ref{botyrabernanok}). Relation (\ref{botyrabernanok}) proves, in this case, that condition {\bf D} holds. Relation (\ref{botyrabernakolb}) proves proposition {\bf (iii)} of Lemma 4. Relation (\ref{botyrabernanop}) proves proposition {\bf (iii)} of Theorem 2. $\Box$ Let us consider the particular case of the model with random variables $\kappa_{\varepsilon, n} = f_{\varepsilon, \varepsilonta_{\varepsilon, n-1}}, n = 1, 2, \ldots, i \in \mathbb{X}$, where $f_{\varepsilon, i} \geq 0, i \in \mathbb{X}$ are nonrandom nonnegative numbers. In this case, stochastic process, \begin{equation}\label{bopser} \kappa_\varepsilon(t) = \sum_{n = 1}^{[tv_\varepsilon]} f_{\varepsilon, \varepsilonta_{\varepsilon, n-1}}, t \geq 0. \varepsilonnd{equation} Also, the Laplace transforms, $$ \varphi_{\varepsilon, i}(s) = \mathsf{E}_i e^{- s f_{\varepsilon, \varepsilonta_{\varepsilon, 0}}} = e^{- s f_{\varepsilon, i}}, s \geq 0, \ {\rm for} \ i \in \mathbb{X}. $$ and, $$ \varphi_{\varepsilon}(s) = \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} e^{- s f_{\varepsilon, i}}, s \geq 0. $$ Condition $\mathbf{D}_1$ takes, in this case, the form of the following relation, \begin{align}\label{njiou} v_\varepsilon (1 - \varphi_\varepsilon(s)) & = \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i}v_\varepsilon (1 - e^{- s f_{\varepsilon}(i)}) \nonumber \\ & \to A(s) \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0, \varepsilonnd{align} where the limiting function $A(s) > 0$, for $s > 0$ and $A(s) \to 0$ as $s \to 0$. 
This condition obviously implies that $1 - \varphi_{\varepsilon, i}(s) \to 0$ as $\varepsilon \to 0$, for $s > 0, i \in \mathbb{X}$, which is equivalent to the relation $f_{\varepsilon, i} \to 0$ as $\varepsilon \to 0$, for $i \in \mathbb{X}$. In this case, $1 - \varphi_{\varepsilon, i}(s) = s f_{\varepsilon, i} + o(sf_{\varepsilon, i})$ as $\varepsilon \to 0$, for every $s > 0, i \in \mathbb{X}$. These relations let us reformulate condition $\mathbf{D}_1$ in terms of the functions, $$ f_\varepsilon = v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} f_{\varepsilon, i}. $$ Condition $\mathbf{D}_1$ is equivalent to the following condition: \begin{itemize} \item[$\mathbf{G}$:] $f_\varepsilon \to f_0 \in (0, \infty) $ as $\varepsilon \to 0$. \end{itemize} Moreover, in this case the cumulant $A(s) = f_0 s, s \geq 0$. Theorem 2 takes in this case the following form. {\bf Lemma 5.} {\em Let condition {\bf B} hold. Then, {\bf (i)} condition {\bf G} is a necessary and sufficient condition for holding {\rm (}for some or any initial distributions $\bar{q}_\varepsilon$, respectively, in statements of necessity and sufficiency{\rm )} of the asymptotic relation, $\kappa_\varepsilon(1) \stackrel{d}{\longrightarrow} \theta_0$ as $\varepsilon \to 0$, where $\theta_0$ is a non-negative random variable with distribution not concentrated in zero. In this case, {\bf (ii)} the random variable $\theta_0 \stackrel{d}{=} f_0$, i.e., it is a constant. Moreover, {\bf (iii)} the stochastic processes $\kappa_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} f_0 t, t \geq 0$ as $\varepsilon \to 0$.} Let us assume that the function $f_\varepsilon$ satisfies the following natural assumption: \begin{itemize} \item[$\mathbf{H}$:] There exists $\varepsilon''_0 \in (0, \varepsilon'_0]$ such that $f_\varepsilon > 0$ for $\varepsilon \in (0, \varepsilon''_0]$. \end{itemize} In this case, we can describe the asymptotic behavior of the reward step-sum processes $\kappa_\varepsilon(t)$ under a condition weaker than $\mathbf{G}$, which admits extremal behavior of the functions $f_\varepsilon$: \begin{itemize} \item[$\mathbf{I}$:] $f_\varepsilon \to f_0 \in [0, \infty] $ as $\varepsilon \to 0$. \end{itemize} The following lemma generalizes and supplements Lemma 5. {\bf Lemma 6.} {\em Let conditions {\bf B} and $\mathbf{H}$ hold. Then, {\bf (i)} $f_\varepsilon^{-1} \kappa_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} g_0(t) = t, t \geq 0$ as $\varepsilon \to 0$. {\bf (ii)} Condition {\bf I} is a necessary and sufficient condition for holding {\rm (}for some or any initial distributions $\bar{q}_\varepsilon$, respectively, in statements of necessity and sufficiency{\rm )} of the asymptotic relation, $\kappa_\varepsilon(1) \stackrel{d}{\longrightarrow} \theta_0$ as $\varepsilon \to 0$, where $\theta_0$ is a non-negative proper or improper random variable. In this case, {\bf (iii)} the random variable $\theta_0 \stackrel{d}{=} f_0$, i.e., it is a constant, and {\bf (iv)} $\kappa_\varepsilon(t) \stackrel{\mathsf{P}}{\longrightarrow} f_0 t$ as $\varepsilon \to 0$, for every $t > 0$. Moreover, {\bf (v)} if $f_0 \in [0, \infty)$ then, $\kappa_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} f_0 t, t \geq 0$ as $\varepsilon \to 0${\rm ;} and {\bf (vi)} if $f_0 = \infty$ then, $\min(T, \kappa_\varepsilon(t)), t > 0 \stackrel{\mathsf{J}}{\longrightarrow} h_T(t) = T, t > 0$ as $\varepsilon \to 0$, for every $T > 0$. } {\bf Proof}.
We can use the following representation, \begin{equation}\label{baesd} \kappa_\varepsilon(t) = \sum_{i \in \mathbb{X}} \mu^*_{\varepsilon, i}(t) v_\varepsilon \pi_{\varepsilon, i} f_{\varepsilon, i}, t \geq 0. \varepsilonnd{equation} For any sequence $0 < \varepsilon_n \to 0$ as $n \to \infty$, there exists a subsequence $0 < \varepsilon_{n_k} \to 0$ as $k \to \infty$ such that \begin{equation}\label{baesda} \frac{v_{n_k} \pi_{\varepsilon_{n_k}, i} f_{\varepsilon_{n_k}, i}}{f_{\varepsilon_{n_k}}} \to g_i \in [0, 1] \ {\rm as} \ k \to \infty, \ {\rm for} \ i \in \mathbb{X}. \varepsilonnd{equation} Constants $g_i, i \in \mathbb{X}$ can depend on the choice of subsequence $\varepsilon_{n_k}$, but, obviously satisfy the following relation, \begin{equation}\label{baevesda} \sum_{i \in \mathbb{X}} g_i = 1. \varepsilonnd{equation} Since the limiting processes in relations (\ref{edibasta}) given in Lemma 3 are nonrandom functions, relations (\ref{edibasta}) and (\ref{baesda}) obviously imply that \begin{equation}\label{baesdobane} f_{\varepsilon_{n_k}}^{-1} \kappa_{\varepsilon_{n_k}}(t), t \geq 0 \stackrel{d}{\longrightarrow} \sum_{i \in \mathbb{X}} t g_i = t, t \geq 0 \ {\rm as} \ k \to \infty. \varepsilonnd{equation} Moreover, since the processes on the left hand side of the above relation are nondecreasing and the limiting function is continuous, the following relation (see, for example, Lemma 3.2.2 from Silvestrov (2004)) holds, \begin{equation}\label{baesdo} f_{\varepsilon_{n_k}}^{-1} \kappa_{\varepsilon_{n_k}}(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \sum_{i \in \mathbb{X}} t g_i = t, t \geq 0 \ {\rm as} \ k \to \infty. \varepsilonnd{equation} Since the limiting process is the same for all subsequences $\varepsilon_{n_k}$ described above, relation (\ref{baesdo}) implies that the following relation holds, \begin{equation}\label{baesdog} f_{\varepsilon}^{-1} \kappa_{\varepsilon}(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} g_{0}(t) = t, t \geq 0 \ {\rm as} \ \varepsilon \to 0. \varepsilonnd{equation} Relation (\ref{baesdog}) implies that random variables $f_{\varepsilon}^{-1} \kappa_{\varepsilon}(t) \stackrel{d}{\longrightarrow} t$ as $\varepsilon \to 0$, for every $t \geq 0$. This implies that the random variables $\kappa_{\varepsilon}(1) = f_\varepsilon \cdot (f_{\varepsilon}^{-1} \kappa_{\varepsilon}(1))$ can converge in distribution if and only if $f_\varepsilon \to f_0 \in [0, \infty]$ as $\varepsilon \to 0$. Moreover, in this case, the limiting (possibly improper) random variable is constant $f_0$, and $\kappa_{\varepsilon}(t) \stackrel{\mathsf{P}}{\longrightarrow} f_0 t$ as $\varepsilon \to 0$, for every $t \geq 0$. If $f_0 \in [0, \infty)$, then the asymptotic relation penetrating propositions {\bf (v)} can be obtained by application of Theorem 3.2.1 from Silvestrov (2004) to processes $\kappa_{\varepsilon}(t) = g_\varepsilon(f_{\varepsilon}^{-1} \kappa_{\varepsilon}(t)), t \geq 0$, which are compositions processes $f_{\varepsilon}^{-1} \kappa_{\varepsilon}(t), t \geq 0$ and functions $g_\varepsilon(t) = f_\varepsilon t, t \geq 0$. If $f_0 = \infty$ then the asymptotic relation penetrating proposition {\bf (vi)} can be obtained by application of Theorem 3.2.1 from Silvestrov (2004). to processes $\min(T, \kappa_{\varepsilon}(t)) = h_{\varepsilon, T}(f_{\varepsilon}^{-1} \kappa_{\varepsilon}(t)), t > 0$, which are compositions processes $f_{\varepsilon}^{-1} \kappa_{\varepsilon}(t), t > 0$ and functions $h_{\varepsilon, T}(t) = \min(T, f_\varepsilon t), t > 0$. 
Let us now assume that the asymptotic relation given in proposition {\bf (ii)} holds but condition {\bf I} does not hold. The relation $f_\varepsilon \not\to f_0 \in [0, \infty]$ as $\varepsilon \to 0$ holds if and only if there exist at least two subsequences $0 < \varepsilon'_n, \varepsilon''_n \to 0$ as $n \to \infty$ such that (a) $f_{\varepsilon'_n} \to f'_0 \in [0, \infty]$ as $n \to \infty$, (b) $f_{\varepsilon''_n} \to f''_0 \in [0, \infty]$ as $n \to \infty$ and (c) $f'_0 \neq f''_0$. In this case, $\kappa_{\varepsilon'_n}(1) \stackrel{\mathsf{P}}{\longrightarrow} f'_0 $ as $n \to \infty$ and $\kappa_{\varepsilon''_n}(1) \stackrel{\mathsf{P}}{\longrightarrow} f''_0$ as $n \to \infty$ and, thus, the random variables $\kappa_{\varepsilon}(1)$ do not converge in distribution. $\Box$ \\ {\bf 4. Asymptotics of first-rare-event times for Markov chains}. \\ The following lemma describes the asymptotics of the first-rare-event times $\nu_\varepsilon$ for the Markov chains $\eta_{\varepsilon, n}$. Note that, in this section, we always use the function $v_\varepsilon = p_\varepsilon^{-1}$. {\bf Lemma 7}. {\em Let conditions {\bf A} and {\bf B} hold. Then, the random variables $\nu^*_\varepsilon = p_\varepsilon \nu_\varepsilon \stackrel{d}{\longrightarrow} \nu_0$ as $\varepsilon \to 0$, where $\nu_0$ is a random variable exponentially distributed with parameter $1$.} {\bf Proof}. Let us define the probabilities, for $\varepsilon \in (0, \varepsilon_0]$, $$ P_{\varepsilon, i j} = \mathsf{P}_i \{ \eta_{\varepsilon, 1} = j, \zeta_{\varepsilon, 1} = 0 \}, \ \tilde{p}_{\varepsilon, i j} = \frac{P_{\varepsilon, i j}}{\sum_{r \in \mathbb{X}} P_{\varepsilon, i r}} = \frac{P_{\varepsilon, i j}}{1 - p_{\varepsilon, i}}, \ i, j \in \mathbb{X}. $$ Let also $\tilde{\eta}_{\varepsilon, n}, n = 0, 1, \ldots$, be a homogeneous Markov chain with the phase space $\mathbb{X}$, the initial distribution $\bar{q}_\varepsilon = \langle q_{\varepsilon, i}, i \in \mathbb{X} \rangle$ and the matrix of transition probabilities $\| \tilde{p}_{\varepsilon, i j} \|$. The following relation takes place, for $t \geq 0$, \begin{align}\label{burte} \mathsf{P} \{ \nu^*_\varepsilon > t \} & = \sum_{i \in \mathbb{X}} q_{\varepsilon, i} \sum_{i = i_0, i_1, \ldots, i_{[tv_\varepsilon]} \in \mathbb{X}} \prod_{k = 1}^{[tv_\varepsilon]} P_{\varepsilon, i_{k-1} i_k} \nonumber \\ & = \sum_{i \in \mathbb{X}} q_{\varepsilon, i} \sum_{i = i_0, i_1, \ldots, i_{[tv_\varepsilon]} \in \mathbb{X}} \prod_{k = 1}^{[tv_\varepsilon]} \tilde{p}_{\varepsilon, i_{k-1} i_k} (1 - p_{\varepsilon, i_{k-1}}) \nonumber \\ & = \mathsf{E} \exp\{ - \sum_{k = 1}^{[tv_\varepsilon]} - \ln (1 - p_{\varepsilon, \tilde{\eta}_{\varepsilon, k -1}}) \}.
\end{align} Conditions {\bf A} and {\bf B} imply that condition {\bf B} holds for the transition probabilities of the Markov chains $\tilde{\eta}_{\varepsilon, n}$, since the following relation holds, for $i, j \in \mathbb{X}$, \begin{align}\label{burteb} | p_{\varepsilon, ij} - \tilde{p}_{\varepsilon, i j} | & = \frac{|p_{\varepsilon, ij}(1 - p_{\varepsilon, i}) - P_{\varepsilon, i j}|}{1 - p_{\varepsilon, i}} \nonumber \\ & = \frac{| \mathsf{P}_i \{ \eta_{\varepsilon, 1} = j, \zeta_{\varepsilon, 1} = 1 \} - p_{\varepsilon, ij}p_{\varepsilon, i}|}{1 - p_{\varepsilon, i}} \nonumber \\ & \leq \frac{2p_{\varepsilon, i}}{1 - p_{\varepsilon, i}} \to 0 \ {\rm as} \ \varepsilon \to 0. \end{align} Thus, by Lemma 1, there exists $\varepsilon''_0 \in (0, \varepsilon'_0]$ such that the Markov chain $\tilde{\eta}_{\varepsilon, n}$ is ergodic, for every $\varepsilon \in (0, \varepsilon''_0]$, and its stationary probabilities $\tilde{\pi}_{\varepsilon, i}, i \in \mathbb{X}$, satisfy the following relation, \begin{equation}\label{nopi} \tilde{\pi}_{\varepsilon, i} - \pi_{\varepsilon, i} \to 0 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ i \in \mathbb{X}. \end{equation} We can apply Lemma 5, which is a particular case of Theorem 2, to the nonnegative step-sum process, \begin{equation}\label{olpi} \kappa^*_{\varepsilon}(t) = \sum_{n = 1}^{[tv_\varepsilon]} - \ln(1 - p_{\varepsilon, \tilde{\eta}_{\varepsilon, n-1}}), t \geq 0. \end{equation} To do this, we should check that condition {\bf G} holds for the functions $f_{\varepsilon, i} = - \ln(1 - p_{\varepsilon, i}), i \in \mathbb{X}$. Indeed, using conditions {\bf A} and {\bf B}, Lemma 1 and relation (\ref{nopi}), we get, \begin{align}\label{llnu} f_\varepsilon & = - v_\varepsilon \sum_{i \in \mathbb{X}} \tilde{\pi}_{\varepsilon, i} \ln (1 - p_{\varepsilon, i}) \sim v_\varepsilon \sum_{i \in \mathbb{X}} \tilde{\pi}_{\varepsilon, i} \, p_{\varepsilon, i} \nonumber \\ & \sim v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} \, p_{\varepsilon, i} = v_\varepsilon p_\varepsilon = 1 \ {\rm as} \ \varepsilon \to 0. \end{align} This relation is a variant of condition {\bf G}, with $f_0 = 1$. In this case, the corresponding limiting constant is $\theta_0 = 1$, and the process $\theta_0(t) = t, t \geq 0$, is a non-random linear function. By applying the sufficiency statement of Lemma 5 to the step-sum process $\kappa^*_{\varepsilon}(t)$, we get the following relation, \begin{equation}\label{firsaba} \kappa^*_{\varepsilon}(t), t \geq 0 \stackrel{d}{\longrightarrow} \theta_0(t) = t, t \geq 0 \ {\rm as} \ \varepsilon \to 0. \end{equation} The expression on the right hand side of relation (\ref{burte}) is just the Laplace transform of the nonnegative random variable $\kappa^*_{\varepsilon}(t)$ at the point $1$. Thus, relation (\ref{firsaba}) implies, by the continuity theorem for Laplace transforms, that the following relation holds, for every $t \geq 0$, \begin{equation}\label{burtelo} \mathsf{P} \{ \nu^*_\varepsilon > t \} = \mathsf{E} e^{- \kappa^*_{\varepsilon}(t)} \to e^{- t} \ {\rm as} \ \varepsilon \to 0. \end{equation} The proof is complete. $\Box$
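Lemma 7 can be illustrated by the simplest one-state case, which is included here only as an illustration. Suppose that $\mathbb{X} = \{1\}$ and $p_{\varepsilon, 1} = p_\varepsilon \to 0$ as $\varepsilon \to 0$, so that the indicators $\zeta_{\varepsilon, n}, n = 1, 2, \ldots$, are independent and $\nu_\varepsilon$ is geometrically distributed with parameter $p_\varepsilon$. Then, for every $t \geq 0$,
$$
\mathsf{P} \{ p_\varepsilon \nu_\varepsilon > t \} = (1 - p_\varepsilon)^{[t p_\varepsilon^{-1}]} \to e^{-t} \ {\rm as} \ \varepsilon \to 0,
$$
which is exactly the assertion of Lemma 7 in this degenerate case.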
Let, as above, $f_{\varepsilon, i}, i \in \mathbb{X}$, be nonrandom nonnegative numbers and $f_\varepsilon = v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} f_{\varepsilon, i}$. Let us introduce the stochastic processes, \begin{equation}\label{olpil} \nu_{\varepsilon}(t) = \sum_{n = 1}^{[t\nu_\varepsilon]} f_{\varepsilon, \eta_{\varepsilon, n-1}}, t \geq 0. \end{equation} The following lemma generalizes Lemma 7 and is used in what follows. {\bf Lemma 8.} {\em Let conditions {\bf A}, {\bf B} and {\bf H} hold. Then, {\bf (i)} $f_\varepsilon^{-1} \nu_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} t \nu_0, t \geq 0$ as $\varepsilon \to 0$, where $\nu_0$ is a random variable exponentially distributed with parameter $1$. {\bf (ii)} Condition {\bf I} is a necessary and sufficient condition for the holding {\rm (}for some or any initial distributions $\bar{q}_\varepsilon$, respectively, in the statements of necessity and sufficiency{\rm )} of the asymptotic relation $\nu_\varepsilon(1) \stackrel{d}{\longrightarrow} \nu$ as $\varepsilon \to 0$, where $\nu$ is a non-negative proper or improper random variable. In this case, {\bf (iii)} the random variable $\nu \stackrel{d}{=} f_0 \nu_0$. Moreover, {\bf (iv)} if $f_0 \in [0, \infty)$, then $\nu_\varepsilon(t), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} f_0 \nu_0 t, t \geq 0$ as $\varepsilon \to 0$, and, {\bf (v)} if $f_0 = \infty$, then $\min(T, \nu_\varepsilon(t)), t > 0 \stackrel{\mathsf{J}}{\longrightarrow} h_T(t) = T, t > 0$ as $\varepsilon \to 0$, for every $T > 0$ and, thus, {\bf (vi)} $\nu_\varepsilon(t) \stackrel{\mathsf{P}}{\longrightarrow} \infty$ as $\varepsilon \to 0$, for $t > 0$.} {\bf Proof}. The following representation takes place, \begin{equation}\label{olpilhako} \nu_{\varepsilon}(t) = \kappa_\varepsilon(t\nu^*_\varepsilon), t \geq 0, \end{equation} where $\kappa_\varepsilon(t)$ are the processes defined in relation (\ref{bopser}). The relations given in proposition {\bf (i)} of Lemma 6 and in Lemma 7 imply, by the Slutsky theorem, the following relation, \begin{equation}\label{olpilhamr} (t \nu^*_\varepsilon, f_\varepsilon^{-1} \kappa_\varepsilon(t)), t \geq 0 \stackrel{d}{\longrightarrow} (t \nu_0, t), t \geq 0 \ {\rm as } \ \varepsilon \to 0. \end{equation} The components of the processes on the left hand side of relation (\ref{olpilhamr}) are non-decreasing processes and the limiting process on the right hand side of relation (\ref{olpilhamr}) is continuous. This lets us apply Theorem 3.2.1 from Silvestrov (2004) to the processes $f_\varepsilon^{-1}\nu_{\varepsilon}(t) = f_\varepsilon^{-1}\kappa_\varepsilon(t\nu^*_\varepsilon), t \geq 0$, and to get the asymptotic relation given in proposition {\bf (i)} of Lemma 8. The relation given in proposition {\bf (i)} of Lemma 8 implies that the random variables $f_\varepsilon^{-1} \nu_\varepsilon(1) \stackrel{d}{\longrightarrow} \nu_0$ as $\varepsilon \to 0$. This implies that the random variables $\nu_{\varepsilon}(1) = f_\varepsilon \cdot (f_{\varepsilon}^{-1} \nu_{\varepsilon}(1))$ can converge in distribution if and only if $f_\varepsilon \to f_0 \in [0, \infty]$ as $\varepsilon \to 0$. Moreover, in this case, the limiting (possibly improper) random variable $\nu \stackrel{d}{=} f_0 \nu_0$. If $f_0 \in [0, \infty)$, then the relations given in proposition {\bf (iv)} of Lemma 6 and in Lemma 7 imply, by the Slutsky theorem, the following relation, \begin{equation}\label{olpilhamrnok} (t \nu^*_\varepsilon, \kappa_\varepsilon(t)), t \geq 0 \stackrel{d}{\longrightarrow} (t \nu_0, f_0 t), t \geq 0 \ {\rm as } \ \varepsilon \to 0.
\end{equation} The components of the processes on the left hand side of relation (\ref{olpilhamrnok}) are non-decreasing processes and the limiting process on the right hand side of relation (\ref{olpilhamrnok}) is continuous. This lets us apply Theorem 3.2.1 from Silvestrov (2004) to the processes $\nu_{\varepsilon}(t) = \kappa_\varepsilon(t\nu^*_\varepsilon), t \geq 0$, and to get the asymptotic relation given in proposition {\bf (iv)} of Lemma 8. If $f_0 = \infty$, then the relations given in proposition {\bf (vi)} of Lemma 6 and in Lemma 7 imply, by the Slutsky theorem, the following relation, \begin{equation}\label{olpilhamrno} (t \nu^*_\varepsilon, \min(T, \kappa_\varepsilon(t))), t > 0 \stackrel{d}{\longrightarrow} (t \nu_0, T), t > 0 \ {\rm as } \ \varepsilon \to 0, \ {\rm for} \ T > 0. \end{equation} The components of the processes on the left hand side of relation (\ref{olpilhamrno}) are non-decreasing processes and the limiting process on the right hand side of relation (\ref{olpilhamrno}) is continuous. Also, the limiting random variable $t \nu_0 > 0$ with probability $1$, for every $t > 0$. This lets us apply Theorem 3.2.1 (and the remarks made in Subsection 3.2.6) from Silvestrov (2004) to the processes $ \min(T, \nu_{\varepsilon}(t)) = \min(T, \kappa_\varepsilon(t\nu^*_\varepsilon)), t > 0$, and to get the asymptotic relation given in proposition {\bf (v)} of Lemma 8. Proposition {\bf (vi)} of this lemma is a direct corollary of proposition {\bf (v)}. Let us now assume that the asymptotic relation given in proposition {\bf (ii)} holds but condition {\bf I} does not hold. The relation $f_\varepsilon \not\to f_0 \in [0, \infty]$ as $\varepsilon \to 0$ holds if and only if there exist at least two subsequences $0 < \varepsilon'_n, \varepsilon''_n \to 0$ as $n \to \infty$ such that (a) $f_{\varepsilon'_n} \to f'_0 \in [0, \infty]$ as $n \to \infty$, (b) $f_{\varepsilon''_n} \to f''_0 \in [0, \infty]$ as $n \to \infty$ and (c) $f'_0 \neq f''_0$. In this case, $\nu_{\varepsilon'_n}(1) \stackrel{d}{\longrightarrow} f'_0 \nu_0 $ as $n \to \infty$ and $\nu_{\varepsilon''_n}(1) \stackrel{d}{\longrightarrow} f''_0\nu_0$ as $n \to \infty$ and, thus, since the random variables $f'_0 \nu_0$ and $f''_0 \nu_0$ have different distributions, the random variables $\nu_{\varepsilon}(1)$ do not converge in distribution. $\Box$ \\ {\bf 5. Asymptotics of first-rare-event times for semi-Markov \\ \makebox[10mm]{} processes}. \\ {\bf Proof of Theorem 1}. Now we are prepared to complete the proof of this theorem. Let us first concentrate on propositions {\bf (i)} and {\bf (ii)} of this theorem. Let us introduce the Laplace transforms, $$ \varphi_{\varepsilon, i j}( \imath, s) = \mathsf{E}_i I(\eta_{\varepsilon, 1} = j, \zeta_{\varepsilon, 1} = \imath ) e^{- s \kappa_{\varepsilon, 1}}, s \geq 0, \ {\rm for} \ i, j \in \mathbb{X}, \ \imath = 0, 1, $$ and $$ \varphi_{\varepsilon, i}(\imath, s) = \mathsf{E}_i I(\zeta_{\varepsilon, 1} = \imath ) e^{- s \kappa_{\varepsilon, 1}}, s \geq 0, \ {\rm for} \ i \in \mathbb{X}, \ \imath = 0, 1. $$ Let us also introduce the conditional Laplace transforms, $$ \phi_{\varepsilon, i j}( \imath, s) = \mathsf{E}_i \{ I(\eta_{\varepsilon, 1} = j) e^{- s \kappa_{\varepsilon, 1}} / \zeta_{\varepsilon, 1} = \imath \}, s \geq 0, \ {\rm for} \ i, j \in \mathbb{X}, \ \imath = 0, 1, $$ and $$ \phi_{\varepsilon, i}(\imath, s) = \mathsf{E}_i \{ e^{- s \kappa_{\varepsilon, 1}} / \zeta_{\varepsilon, 1} = \imath \}, s \geq 0, \ {\rm for} \ i \in \mathbb{X}, \ \imath = 0, 1.
$$ Now, let us define the probabilities, for $s \geq 0$, $$ p_{\varepsilon, s, i j} = \frac{\varphi_{\varepsilon, i j}(0, s)}{\sum_{r \in \mathbb{X}} \varphi_{\varepsilon, i r}(0, s) } = \frac{\varphi_{\varepsilon, i j}(0, s)}{\varphi_{\varepsilon, i}(0, s)}, \ i, j \in \mathbb{X}. $$ Let $(\eta_{\varepsilon, s, n}, \zeta_{\varepsilon, s, n}), n = 0, 1, \ldots$, be, for every $s \geq 0$, a Markov renewal process with the phase space $\mathbb{X} \times \{0, 1 \}$, an initial distribution $\bar{q}_\varepsilon = \langle q_{\varepsilon, i} = \mathsf{P} \{\eta_{\varepsilon, s, 0} = i, \zeta_{\varepsilon, s, 0} = 0 \} = \mathsf{P} \{\eta_{\varepsilon, s, 0} = i \}, i \in {\mathbb X} \rangle$ and transition probabilities, \begin{equation}\label{sadop} \begin{aligned} & \mathsf{P} \{ \eta_{\varepsilon, s, n+1} = j, \zeta_{\varepsilon, s, n+1} = \jmath / \eta_{\varepsilon, s, n} = i, \zeta_{\varepsilon, s, n} = \imath \} \\ & \quad \quad = \mathsf{P} \{ \eta_{\varepsilon, s, n+1} = j, \zeta_{\varepsilon, s, n+1} = \jmath / \eta_{\varepsilon, s, n} = i \} \\ & \quad \quad = p_{\varepsilon, s, i j}(p_{\varepsilon, i} \, \jmath + (1 - p_{\varepsilon, i})(1 - \jmath)) , \ i, j \in \mathbb{X}, \ \imath, \jmath = 0, 1. \end{aligned} \end{equation} Note that the first component of the Markov renewal process, $\eta_{\varepsilon, s, n}, n = 0, 1, \ldots$, is a homogeneous Markov chain with the phase space $\mathbb{X}$, an initial distribution $\bar{q}_\varepsilon = \langle q_{\varepsilon, i}, i \in \mathbb{X} \rangle$ and the matrix of transition probabilities $\| p_{\varepsilon, s, i j} \|$. Let us also introduce the random variables, \begin{equation}\label{sadopna} \nu_{\varepsilon, s} = \min(n \geq 1: \zeta_{\varepsilon, s, n} = 1). \end{equation} Let us prove that either condition {\bf D}, or conditions {\bf A}, {\bf B} and the asymptotic relation given in proposition {\bf (i)} of Theorem 1, imply that, for every $s \geq 0$, condition {\bf B} holds for the transition probabilities of the Markov chain $\eta_{\varepsilon, s, n}$. Condition {\bf D} obviously implies that, for $i \in \mathbb{X}$, \begin{equation}\label{gorta} \varphi_{\varepsilon, i}(s) \to 1 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s \geq 0. \end{equation} Let us show that conditions {\bf A}, {\bf B} and the asymptotic relation given in proposition {\bf (i)} of Theorem 1 also imply that relation (\ref{gorta}) holds. Let us use the representation, \begin{equation}\label{gortana} \xi_\varepsilon = \sum_{i \in \mathbb{X}} \sum_{n = 1}^{\mu_{\varepsilon, i}(\nu_\varepsilon)} \kappa_{\varepsilon, i, n} = \sum_{i \in \mathbb{X}} \sum_{n = 1}^{[\mu^*_{\varepsilon, i}(\nu^*_\varepsilon)\pi_{\varepsilon, i} v_\varepsilon]} \kappa_{\varepsilon, i, n}. \end{equation} Let us now assume that relation (\ref{gorta}) does not hold. This means that there exists $i \in \mathbb{X}$ such that, for some $\delta, p > 0$ and $\varepsilon_{\delta, p} \in (0, \varepsilon'_0]$, the probability $\mathsf{P} \{ \kappa_{\varepsilon, i, 1} \geq \delta \} \geq p$, for $\varepsilon \in (0, \varepsilon_{\delta, p}]$.
This obviously implies that the random variables $\tilde{\kappa}_{\varepsilon, i}(t) = \sum_{n = 1}^{[t \pi_{\varepsilon, i}v_\varepsilon]} \kappa_{\varepsilon, i, n} \stackrel{\mathsf{P}}{\longrightarrow} \infty$ as $\varepsilon \to 0$, for $t > 0$, and, thus, the stochastic processes $\min (T, \tilde{\kappa}_{\varepsilon, i}(t)), t > 0 \stackrel{d}{\longrightarrow} h_T(t) = T, t > 0$ as $\varepsilon \to 0$. Since the processes $\tilde{\kappa}_{\varepsilon, i}(t), t > 0$, are non-decreasing and the limiting function $h_T(t) = T, t > 0$, is continuous, the latter relation implies (see, for example, Theorem 3.2.1 from Silvestrov (2004)) that $\min (T, \tilde{\kappa}_{\varepsilon, i}(t)), t > 0 \stackrel{\mathsf{J}}{\longrightarrow} h_T(t) = T, t > 0$ as $\varepsilon \to 0$. Also, by Lemma 8, applied to the model with the functions $f_{\varepsilon, j} = I( j = i) (\pi_{\varepsilon, i} v_\varepsilon)^{-1}, j \in \mathbb{X}$, the following relation takes place: $\mu^*_{\varepsilon, i}(\nu^*_\varepsilon) \stackrel{d}{\longrightarrow} \nu_0$ as $\varepsilon \to 0$, where $\nu_0$ is a random variable exponentially distributed with parameter $1$. The latter two relations imply, by the Slutsky theorem, that $(\mu^*_{\varepsilon, i}(\nu^*_\varepsilon), \min (T, \tilde{\kappa}_{\varepsilon, i}(t))), t > 0 \stackrel{d}{\longrightarrow} (\nu_0, h_T(t)), t > 0$ as $\varepsilon \to 0$. Now we can apply Theorem 2.2.1 from Silvestrov (2004), which yields the following relation: $\min (T, \tilde{\kappa}_{\varepsilon, i}(\mu^*_{\varepsilon, i}(\nu^*_\varepsilon))) \stackrel{d}{\longrightarrow} T$ as $\varepsilon \to 0$, for any $T > 0$. This is possible only if $\tilde{\kappa}_{\varepsilon, i}(\mu^*_{\varepsilon, i}(\nu^*_\varepsilon)) \stackrel{\mathsf{P}}{\longrightarrow} \infty$ as $\varepsilon \to 0$. Thus, the random variables $\tilde{\kappa}_{\varepsilon, i}(\mu^*_{\varepsilon, i}(\nu^*_\varepsilon)) = \sum_{n = 1}^{\mu_{\varepsilon, i}(\nu_\varepsilon)} \kappa_{\varepsilon, i, n} \leq \xi_\varepsilon \stackrel{\mathsf{P}}{\longrightarrow} \infty$ as $\varepsilon \to 0$. This relation contradicts the asymptotic relation given in proposition {\bf (i)} of \mbox{Theorem 1.} Relation (\ref{gorta}) and condition {\bf A} imply the following relation, \begin{equation}\label{edibasbasno} \varphi_{\varepsilon, i}(0, s) = \mathsf{E}_i I(\zeta_{\varepsilon, 1} = 0 ) e^{- s \kappa_{\varepsilon, 1}} \to 1 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s \geq 0, \ i \in \mathbb{X}, \end{equation} which implies that, for $s \geq 0$, \begin{align}\label{edibastame} | p_{\varepsilon, ij} - p_{\varepsilon, s, i j} | & = \frac{|p_{\varepsilon, ij}\varphi_{\varepsilon, i}(0, s) - \varphi_{\varepsilon, i j}(0, s)|}{\varphi_{\varepsilon, i}(0, s)} \nonumber \\ & \leq \frac{| p_{\varepsilon, ij}\varphi_{\varepsilon, i}(0, s) - p_{\varepsilon, ij}| + |p_{\varepsilon, ij} - \varphi_{\varepsilon, i j}(0, s)|}{\varphi_{\varepsilon, i}(0, s)} \nonumber \\ & \leq \frac{p_{\varepsilon, ij}| \varphi_{\varepsilon, i}(0, s) - 1| +\mathsf{E}_i I(\eta_{\varepsilon, 1} = j) |1 - I(\zeta_{\varepsilon, 1} = 0) e^{- s \kappa_{\varepsilon, 1}}|}{\varphi_{\varepsilon, i}(0, s)} \nonumber \\ & \leq \frac{2(1 - \varphi_{\varepsilon, i}(0, s))}{\varphi_{\varepsilon, i}(0, s)} \to 0 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ i, j \in \mathbb{X}.
\end{align} Thus, for every $s \geq 0$, there exists $\varepsilon'_{0, s} \in (0, \varepsilon_0]$ such that the Markov chain $\eta_{\varepsilon, s, n}$ is ergodic, for every $\varepsilon \in (0, \varepsilon'_{0, s}]$, and its stationary probabilities $\pi_{\varepsilon, s, i}, i \in \mathbb{X}$, satisfy the following relation, \begin{equation}\label{nopibol} \pi_{\varepsilon, s, i} - \pi_{\varepsilon, i} \to 0 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ i \in \mathbb{X}. \end{equation} Let us assume that the Markov chains $\eta_{\varepsilon, n}$ and $\eta_{\varepsilon, s, n}$ have the same initial distribution $\bar{q}_\varepsilon$. The following representation takes place for the Laplace transform of the random variables $\xi_\varepsilon$, for $s \geq 0$, \begin{align}\label{voptrewopiba} \mathsf{E} e^{- s \xi_\varepsilon} & = \sum_{i \in \mathbb{X}} q_{\varepsilon, i} \sum_{n = 1}^{\infty} \sum_{i = i_0, i_1, \ldots, i_{n} \in \mathbb{X}} \prod_{k = 1}^{n-1} \varphi_{\varepsilon, i_{k-1} i_k}(0, s) \varphi_{\varepsilon, i_{n-1} i_n}(1, s) \nonumber \\ & =\sum_{i \in \mathbb{X}} q_{\varepsilon, i} \sum_{n = 1}^{\infty} \sum_{i = i_0, i_1, \ldots, i_{n-1} \in \mathbb{X}} \prod_{k = 1}^{n-1} \varphi_{\varepsilon, i_{k-1} i_k}(0, s) \sum_{i_n \in \mathbb{X}} \varphi_{\varepsilon, i_{n-1} i_n}(1, s) \nonumber \\ & = \sum_{i \in \mathbb{X}} q_{\varepsilon, i} \sum_{n = 1}^{\infty} \sum_{i = i_0, i_1, \ldots, i_{n-1} \in \mathbb{X}} \prod_{k = 1}^{n-1} p_{\varepsilon, s, i_{k-1} i_k} \nonumber \\ & \quad \times (1 - p_{\varepsilon, i_{k-1}}) \phi_{\varepsilon, i_{k-1}}(0, s) p_{\varepsilon, i_{n-1}} \sum_{i_n \in \mathbb{X}} \frac{\varphi_{\varepsilon, i_{n-1} i_n}(1, s)}{p_{\varepsilon, i_{n-1}}} \nonumber \\ & = \sum_{i \in \mathbb{X}} q_{\varepsilon, i} \sum_{n = 1}^{\infty} \sum_{i = i_0, i_1, \ldots, i_{n-1} \in \mathbb{X}} \prod_{k = 1}^{n-1} p_{\varepsilon, s, i_{k-1} i_k} \nonumber \\ & \quad \times (1 - p_{\varepsilon, i_{k-1}}) \phi_{\varepsilon, i_{k-1}}(0, s) p_{\varepsilon, i_{n-1}} \phi_{\varepsilon, i_{n-1}}(1, s) \nonumber \\ & = \mathsf{E} \exp\{ - \sum_{k = 1}^{\nu_{\varepsilon, s}} - \ln \phi_{\varepsilon, \eta_{\varepsilon, s, k-1}}(0, s) \nonumber \\ & \quad - \ln \phi_{\varepsilon, \eta_{\varepsilon, s, \nu_{\varepsilon, s} -1}}(0, s) + \ln \phi_{\varepsilon, \eta_{\varepsilon, s, \nu_{\varepsilon, s} - 1}}(1, s) \}. \end{align} Relation (\ref{edibasbasno}) and condition {\bf A} imply that the following relation holds, \begin{equation}\label{edibasbasbi} \phi_{\varepsilon, i}(0, s) = \frac{\varphi_{\varepsilon, i}(0, s)}{1 - p_{\varepsilon, i}} \to 1 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s \geq 0, i \in \mathbb{X}. \end{equation} Also, condition {\bf C} is equivalent to the following relation, \begin{align}\label{edibbi} \phi_{\varepsilon, i}(1, s) & = \mathsf{E}_i \{ e^{- s \kappa_{\varepsilon, 1}} / \zeta_{\varepsilon, 1} = 1 \} \to 1 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s \geq 0, i \in \mathbb{X}. \end{align} The above two relations obviously imply that, \begin{equation}\label{edibbifer} | \ln \phi_{\varepsilon, \eta_{\varepsilon, s, \nu_{\varepsilon, s} -1}}(0, s) | + | \ln \phi_{\varepsilon, \eta_{\varepsilon, s, \nu_{\varepsilon, s} - 1}}(1, s) | \stackrel{\mathsf{P}}{\longrightarrow} 0 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s \geq 0.
\end{equation} Representation (\ref{voptrewopiba}) and relation (\ref{edibbifer}) imply the following relation, \begin{equation}\label{edibbiferno} \mathsf{E} e^{- s \xi_\varepsilon} \sim \mathsf{E} e^{ - \tilde{\nu}_{\varepsilon, s}} \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0, \end{equation} where \begin{equation}\label{olpivil} \tilde{\nu}_{\varepsilon, s} = \sum_{k = 1}^{\nu_{\varepsilon, s}} - \ln \phi_{\varepsilon, \eta_{\varepsilon, s, k-1}}(0, s). \end{equation} Relations (\ref{nopibol}), (\ref{edibasbasbi}) and proposition {\bf (i)} of Lemma 1 imply that, \begin{align}\label{edibbimase} A_{\varepsilon}(s) & = - v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, s, i} \ln \phi_{\varepsilon, i}(0, s) \nonumber \\ & \sim v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, s, i} (1- \phi_{\varepsilon, i}(0, s)) \nonumber \\ &\sim v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} (1 - \phi_{\varepsilon, i}(0, s)) \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0. \end{align} Let us assume that condition {\bf D} holds in addition to conditions {\bf A} -- {\bf C}. Condition {\bf D} is equivalent to condition $\mathbf{D}_1$ and, thus, due to relations (\ref{edibasbasbi}) and (\ref{edibbi}), condition {\bf A} and proposition {\bf (i)} of Lemma 1, to the following relation, \begin{align}\label{edibbimaseboi} v_\varepsilon(1 - \varphi_\varepsilon(s)) & = v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} (1 - \varphi_{\varepsilon, i}(s)) \nonumber \\ & = v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} (1 - (1- p_{\varepsilon, i}) \phi_{\varepsilon, i}(0, s) - p_{\varepsilon, i}\phi_{\varepsilon, i}(1, s)) \nonumber \\ & = v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} ( (1- p_{\varepsilon, i}) (1 - \phi_{\varepsilon, i}(0, s)) + p_{\varepsilon, i}(1 - \phi_{\varepsilon, i}(1, s))) \nonumber \\ & \sim v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} (1- p_{\varepsilon, i}) (1 - \phi_{\varepsilon, i}(0, s)) \nonumber \\ & \sim v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} (1 - \phi_{\varepsilon, i}(0, s)) \to A(s) \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0, \end{align} where $A(s) > 0$, for $s > 0$, and $A(s) \to 0$ as $s \to 0$. Relations (\ref{edibbimase}) and (\ref{edibbimaseboi}) imply that, in this case, \begin{align}\label{edibbioi} A_{\varepsilon}(s) = - v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, s, i} \ln \phi_{\varepsilon, i}(0, s) \to A(s) \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0. \end{align} Now, we can, for every $s > 0$, apply the sufficiency statement of proposition {\bf (iv)} of Lemma 8 to the random variables $\tilde{\nu}_{\varepsilon, s}$. This yields the following relation, \begin{equation}\label{oivil} \tilde{\nu}_{\varepsilon, s} \stackrel{d}{\longrightarrow} A(s)\nu_0 \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0, \end{equation} where $\nu_0$ is an exponentially distributed random variable with parameter $1$. This relation implies, by the continuity theorem for Laplace transforms, the following relation, \begin{equation}\label{edibbifernobomi} \mathsf{E} e^{- s \xi_\varepsilon} \sim \mathsf{E} e^{- \tilde{\nu}_{\varepsilon, s}} \to \mathsf{E} e^{- A(s)\nu_0} = \frac{1}{1 + A(s)} \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0. \end{equation} Relation (\ref{edibbifernobomi}) proves the sufficiency statements of propositions {\bf (i)} and {\bf (ii)} of Theorem 1.
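The role of the cumulant $A(s)$ can be illustrated by the following simple example, which is given only for illustration and uses arbitrarily chosen parameters. Suppose that, under the measure $\mathsf{P}_i$, the increment $\kappa_{\varepsilon, 1}$ is exponentially distributed with mean $a_i v_\varepsilon^{-1}$, where $a_i > 0, i \in \mathbb{X}$, are constants, that $v_\varepsilon \to \infty$ as $\varepsilon \to 0$, and that the stationary probabilities converge, $\pi_{\varepsilon, i} \to \pi_{0, i}$ as $\varepsilon \to 0$, for $i \in \mathbb{X}$. Then $\varphi_{\varepsilon, i}(s) = (1 + s a_i v_\varepsilon^{-1})^{-1}$ and, for $s > 0$,
$$
v_\varepsilon(1 - \varphi_\varepsilon(s)) = v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} \frac{s a_i v_\varepsilon^{-1}}{1 + s a_i v_\varepsilon^{-1}} = \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} \frac{s a_i}{1 + s a_i v_\varepsilon^{-1}} \to A(s) = s \sum_{i \in \mathbb{X}} \pi_{0, i} a_i = s a \ {\rm as} \ \varepsilon \to 0,
$$
so that condition {\bf D} holds with the linear cumulant $A(s) = s a$. If conditions {\bf A} -- {\bf C} also hold, relation (\ref{edibbifernobomi}) gives $\mathsf{E} e^{-s\xi_\varepsilon} \to (1 + s a)^{-1}$ as $\varepsilon \to 0$, for $s > 0$, i.e., $\xi_\varepsilon$ converges in distribution to an exponentially distributed random variable with mean $a$.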
Let us now assume that conditions {\bf A} -- {\bf C} hold together with the asymptotic relation given in proposition {\bf (i)} of Theorem 1. This asymptotic relation, expressed in terms of Laplace transforms, takes the form of the following relation (which should be assumed to hold for some initial distributions $\bar{q}_\varepsilon$), \begin{equation}\label{ediifbo} \mathsf{E} e^{- s \xi_\varepsilon} \to e^{ - A_0(s)} \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0, \end{equation} where $A_0(s) > 0$ for $s > 0$ and $A_0(s) \to 0$ as $s \to 0$. Let us assume that conditions {\bf A} -- {\bf C} hold but condition {\bf D} does not hold. The latter assumption means, due to relation (\ref{edibbimase}), that either (a) $A_{\varepsilon}(s) \to A(s) \in (0, \infty)$ as $\varepsilon \to 0$, for every $s>0$, but $A(s) \not\to 0$ as $s \to 0$, or (b) $A_{\varepsilon}(s^*)$ does not converge to a limit $A(s^*) \in (0, \infty)$ as $\varepsilon \to 0$, for some $s^* > 0$. The latter relation holds if and only if there exist at least two subsequences $0 < \varepsilon'_n, \varepsilon''_n \to 0$ as $n \to \infty$ such that (b$_1$) $A_{\varepsilon'_n}(s^*) \to A'(s^*) \in [0, \infty]$ as $n \to \infty$, (b$_2$) $A_{\varepsilon''_n}(s^*) \to A''(s^*) \in [0, \infty]$ as $n \to \infty$ and (b$_3$) $A''(s^*) < A'(s^*)$. In case (a), we can repeat the part of the above proof presented in relations (\ref{edibbimase}) -- (\ref{edibbifernobomi}) and, taking into account relation (\ref{ediifbo}), get the relation, $\mathsf{E} e^{- s \xi_\varepsilon} \sim \mathsf{E} e^{ - \tilde{\nu}_{\varepsilon, s}} \to \frac{1}{1 + A(s)} = e^{ - A_0(s)}$ as $\varepsilon \to 0$, for $s > 0$. Since $A_0(s) \to 0$ as $s \to 0$, this relation implies that $A(s) \to 0$ as $s \to 0$, i.e., case (a) is impossible. In case (b), the sub-case $A'(s^*) = \infty$ is impossible. Indeed, as was shown in the proof of Lemma 8 applied to the random variables $\tilde{\nu}_{\varepsilon, s^*}$, in this case $\tilde{\nu}_{\varepsilon'_n, s^*} \stackrel{\mathsf{P}}{\longrightarrow} \infty$ as $n \to \infty$, and, thus, $\mathsf{E} e^{- s^* \xi_{\varepsilon'_n}} \sim \mathsf{E} e^{- \tilde{\nu}_{\varepsilon'_n, s^*}} \to 0$ as $n \to \infty$. This relation contradicts relation (\ref{ediifbo}). The sub-case $A''(s^*) = 0$ is also impossible. Indeed, as was shown in the proof of Lemma 8 applied to the random variables $\tilde{\nu}_{\varepsilon, s^*}$, in this case $\tilde{\nu}_{\varepsilon''_n, s^*} \stackrel{\mathsf{P}}{\longrightarrow} 0$ as $n \to \infty$, and, thus, $\mathsf{E} e^{- s^* \xi_{\varepsilon''_n}} \sim \mathsf{E} e^{- \tilde{\nu}_{\varepsilon''_n, s^*}} \to 1$ as $n \to \infty$. This relation also contradicts relation (\ref{ediifbo}), since $e^{-A_0(s^*)} \in (0, 1)$. Finally, the remaining sub-case $0 < A''(s^*) < A'(s^*) < \infty$ is also impossible. Indeed, the sufficiency statement of Lemma 8 applied to the random variables $\tilde{\nu}_{\varepsilon, s^*}$ yields, in this case, the two relations $\tilde{\nu}_{\varepsilon'_n, s^*} \stackrel{d}{\longrightarrow} A'(s^*) \nu_0$ as $n \to \infty$ and $\tilde{\nu}_{\varepsilon''_n, s^*} \stackrel{d}{\longrightarrow} A''(s^*) \nu_0$ as $n \to \infty$, where $\nu_0$ is an exponentially distributed random variable with parameter $1$. These relations imply that $\mathsf{E} e^{- s^* \xi_{\varepsilon'_n}} \sim \mathsf{E} e^{- \tilde{\nu}_{\varepsilon'_n, s^*}} \to \frac{1}{1 +A'(s^*)}$ as $n \to \infty$ and $\mathsf{E} e^{- s^* \xi_{\varepsilon''_n}} \sim \mathsf{E} e^{- \tilde{\nu}_{\varepsilon''_n, s^*}} \to \frac{1}{1 +A''(s^*)}$ as $n \to \infty$.
These relations contradict relation (\ref{ediifbo}), since $\frac{1}{1 +A'(s^*)} \neq \frac{1}{1 +A''(s^*)}$. Therefore, condition {\bf D} should hold. This completes the proof of propositions {\bf (i)} and {\bf (ii)} of Theorem 1. The following lemma brings together the asymptotic relations given in Theorem 2 and Lemma 7. The proposition of this lemma gives the last intermediate result required for completing the proof of proposition {\bf (iii)} in Theorem 1. {\bf Lemma 9.} {\em Let conditions {\bf A}, {\bf B}, {\bf C} and {\bf D} hold. Then, the following asymptotic relation holds: $(\nu^*_\varepsilon, \kappa_\varepsilon(t)), t \geq 0 \stackrel{d}{\longrightarrow} (\nu_0, \theta_0(t)), t \geq 0$ as $\varepsilon \to 0$, where {\bf (a)} $\nu_0$ is a random variable, which has the exponential distribution with parameter $1$, {\bf (b)} $\theta_0(t), t \geq 0$, is a nonnegative L\'{e}vy process with the Laplace transforms $\mathsf{E} e^{-s \theta_0(t)} = e^{- tA(s)}, s, t \geq 0$, where the cumulant $A(s)$ is defined in condition {\bf D}, and {\bf (c)} the random variable $\nu_0$ and the process $\theta_0(t), t \geq 0$, are independent.} {\bf Proof}. The following representation takes place, for $s, t \geq 0$, \begin{align}\label{burtebo} \mathsf{E} I( \nu^*_\varepsilon > t) e^{- s \kappa_\varepsilon(t)} & = \sum_{i \in \mathbb{X}} q_{\varepsilon, i} \sum_{i = i_0, i_1, \ldots, i_{[tv_\varepsilon]} \in \mathbb{X}} \prod_{k = 1}^{[tv_\varepsilon]} \varphi_{\varepsilon, i_{k-1} i_k}(0, s) \nonumber \\ & = \sum_{i \in \mathbb{X}} q_{\varepsilon, i} \sum_{i = i_0, i_1, \ldots, i_{[tv_\varepsilon]} \in \mathbb{X}} \prod_{k = 1}^{[tv_\varepsilon]} p_{\varepsilon, s, i_{k-1} i_k} \nonumber \\ & \quad \times (1 - p_{\varepsilon, i_{k-1}}) \phi_{\varepsilon, i_{k-1}}(0, s) \nonumber \\ & = \mathsf{E} \exp\{ - \sum_{k = 1}^{[tv_\varepsilon]} (- \ln (1 - p_{\varepsilon, \eta_{\varepsilon, s, k -1}}) \nonumber \\ & \quad - \ln \phi_{\varepsilon, \eta_{\varepsilon, s, k -1}}(0, s) ) \}. \end{align} Using conditions {\bf A} and {\bf B}, Lemma 1 and relation (\ref{nopibol}), we get, for $s \geq 0$, the following analogue of relation (\ref{llnu}), \begin{align}\label{llnumop} f_{\varepsilon, s} & = - v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, s, i} \ln (1 - p_{\varepsilon, i}) \sim v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, s, i} \, p_{\varepsilon, i} \nonumber \\ & \sim v_\varepsilon \sum_{i \in \mathbb{X}} \pi_{\varepsilon, i} \, p_{\varepsilon, i} = v_\varepsilon p_\varepsilon = 1 \ {\rm as} \ \varepsilon \to 0. \end{align} Relations (\ref{edibbioi}) and (\ref{llnumop}) imply that Lemma 5 can, for every $s > 0$, be applied to the processes, \begin{equation}\label{edibbifernobo} \kappa_{\varepsilon, s}(t) = \sum_{k = 1}^{[tv_\varepsilon]} (- \ln (1 - p_{\varepsilon, \eta_{\varepsilon, s, k -1}}) - \ln \phi_{\varepsilon, \eta_{\varepsilon, s, k -1}}(0, s) ), t \geq 0. \end{equation} This yields that the following relation holds, for every $s > 0$, \begin{equation}\label{edibbobov} \kappa_{\varepsilon, s}(t), t \geq 0 \stackrel{d}{\longrightarrow} t + A(s)t, t \geq 0 \ {\rm as} \ \varepsilon \to 0.
\end{equation} Let us denote, for $i, j \in \mathbb{X}, n = 0, 1, \ldots, s \geq 0$, $$ \Psi_{\varepsilon, ij}(n, s) = \mathsf{E}_i I( \nu_\varepsilon > n, \eta_{\varepsilon, n} = j) e^{- s \sum_{k = 1}^{n} \kappa_{\varepsilon, k} }, $$ and $$ \Psi_{\varepsilon, i}(n, s) = \mathsf{E}_i I( \nu_\varepsilon > n) e^{- s \sum_{k = 1}^{n} \kappa_{\varepsilon, k} } = \sum_{j \in \mathbb{X}} \Psi_{\varepsilon, ij}(n, s). $$ Relation (\ref{edibbobov}) implies, by the continuity theorem for Laplace transforms, the following relation, for $t \geq 0$, \begin{align}\label{burtebonops} \mathsf{E} I( \nu^*_\varepsilon > t) e^{- s \kappa_\varepsilon(t)} & = \sum_{i \in \mathbb{X}} q_{\varepsilon, i} \Psi_{\varepsilon, i}([t v_\varepsilon], s) \nonumber \\ & = \mathsf{E} e^{- \kappa_{\varepsilon, s}(t)} \to e^{-t -A(s)t} \nonumber \\ & = e^{-t} e^{- A(s)t} \ {\rm as} \ \varepsilon \to 0, \ {\rm for} \ s > 0. \end{align} Let us also denote, for $i, j \in \mathbb{X}, n = 0, 1, \ldots, s \geq 0$, $$ \psi_{\varepsilon, ij}(n, s) = \mathsf{E}_i I(\eta_{\varepsilon, n} = j) e^{-s \sum_{k = 1}^n \kappa_{\varepsilon, k}}, $$ and $$ \psi_{\varepsilon, i}(n, s) = \mathsf{E}_i e^{-s \sum_{k = 1}^n \kappa_{\varepsilon, k}} = \sum_{j \in \mathbb{X}} \psi_{\varepsilon, ij}(n, s). $$ Relation (\ref{burtebonops}), applied to the initial distributions concentrated at points $i \in \mathbb{X}$, easily implies that, for $s > 0$ and $0 \leq t'' \leq t' < \infty$, \begin{align}\label{easy} \Psi_{\varepsilon, i}([t' v_\varepsilon] - [t'' v_\varepsilon], s) & \sim \Psi_{\varepsilon, i}([(t' - t'') v_\varepsilon], s) \nonumber \\ & \to e^{- (t' - t'')} e^{- A(s)(t' - t'')} \ {\rm as} \ \varepsilon \to 0. \end{align} Also, proposition {\bf (iii)} of Theorem 2 easily implies that, for $s > 0$ and $0 \leq t'' \leq t' < \infty$, \begin{align}\label{easyd} \psi_{\varepsilon, i}([t' v_\varepsilon] - [t'' v_\varepsilon], s) & \sim \psi_{\varepsilon, i}([(t' - t'') v_\varepsilon], s) \nonumber \\ & \to e^{- A(s)(t' - t'')} \ {\rm as} \ \varepsilon \to 0. \end{align} Relations (\ref{easy}) and (\ref{easyd}) imply that, for $s >0$ and $0 \leq t'' \leq t' < \infty$, \begin{align}\label{edibbobovt} & \sum_{j \in \mathbb{X}} \Psi_{\varepsilon, i j}([t' v_\varepsilon] - [t'' v_\varepsilon], s) \nonumber \\ & \quad \quad \quad = \Psi_{\varepsilon, i}([t' v_\varepsilon] - [t'' v_\varepsilon], s) \to e^{- (t' - t'')} e^{- A(s)(t' - t'')} \ {\rm as} \ \varepsilon \to 0, \end{align} and \begin{align}\label{edibbobovger} & \sum_{j \in \mathbb{X}} \psi_{\varepsilon, i j}([t' v_\varepsilon] - [t'' v_\varepsilon], s) \nonumber \\ & \quad \quad \quad = \psi_{\varepsilon, i}([t' v_\varepsilon] - [t'' v_\varepsilon], s) \to e^{- A(s)(t' - t'')} \ {\rm as} \ \varepsilon \to 0.
\end{align} Now, we shall use the following representation for the joint distributions of the random variable $\nu^*_\varepsilon$ and the increments of the stochastic process $\kappa_\varepsilon(t)$, for $0 = t_0 \leq t_1 < \cdots < t_k = t \leq t_{k +1} \leq \cdots \leq t_n < \infty, 1 \leq k < n < \infty$, and $s_1, \ldots, s_n \geq 0$, \begin{align}\label{burtnopsva} & \mathsf{E} I( \nu^*_\varepsilon > t_k)\exp \{ - \sum_{r= 1}^n s_r (\kappa_\varepsilon(t_r) - \kappa_\varepsilon(t_{r-1})) \} \nonumber \\ & \quad \quad \quad = \sum_{i_0, \ldots, i_{n} \in \mathbb{X}} q_{\varepsilon, i_0} \prod_{r = 1}^k \Psi_{\varepsilon, i_{r-1} i_r}([t_r v_\varepsilon] - [t_{r-1} v_\varepsilon], s_r) \nonumber \\ & \quad \quad \quad \quad \times \prod_{r = k+1}^n \psi_{\varepsilon, i_{r-1} i_r}([t_r v_\varepsilon] - [t_{r-1} v_\varepsilon], s_r) \nonumber \\ & \quad \quad \quad = \sum_{i_0 \in \mathbb{X}} q_{\varepsilon, i_0} \sum_{i_1 \in \mathbb{X}} \Psi_{\varepsilon, i_{0} i_1}([t_1 v_\varepsilon] - [t_{0} v_\varepsilon], s_1) \nonumber \\ & \quad \quad \quad \quad \cdots \times \sum_{i_{k} \in \mathbb{X}} \Psi_{\varepsilon, i_{k-1} i_k}([t_k v_\varepsilon] - [t_{k-1} v_\varepsilon], s_k) \nonumber \\ & \quad \quad \quad \quad \times \sum_{i_{k +1} \in \mathbb{X}} \psi_{\varepsilon, i_{k} i_{k+1}}([t_{k+1} v_\varepsilon] - [t_{k} v_\varepsilon], s_{k+1}) \nonumber \\ & \quad \quad \quad \quad \cdots \times \sum_{i_{n} \in \mathbb{X}} \psi_{\varepsilon, i_{n-1} i_n}([t_{n} v_\varepsilon] - [t_{n-1} v_\varepsilon], s_{n}). \end{align} Using relations (\ref{edibbobovt}), (\ref{edibbobovger}) and representation (\ref{burtnopsva}), we get, recurrently, for $0 = t_0 \leq t_1 < \cdots < t_k = t \leq t_{k +1} \leq \cdots \leq t_n < \infty, 1 \leq k < n < \infty$, and $s_1, \ldots, s_n > 0$, \begin{align}\label{burtnopges} & \mathsf{E} I( \nu^*_\varepsilon > t_k)\exp \{ - \sum_{r= 1}^n s_r (\kappa_\varepsilon(t_r) - \kappa_\varepsilon(t_{r-1})) \} \nonumber \\ & \quad \quad \sim \mathsf{E} I( \nu^*_\varepsilon > t_k)\exp \{ - \sum_{r= 1}^{n-1} s_r (\kappa_\varepsilon(t_r) - \kappa_\varepsilon(t_{r-1})) \} e^{- A(s_n)(t_{n} - t_{n-1})} \nonumber \\ & \quad \quad \cdots \sim \mathsf{E} I( \nu^*_\varepsilon > t_k)\exp \{ - \sum_{r= 1}^{k} s_r (\kappa_\varepsilon(t_r) - \kappa_\varepsilon(t_{r-1})) \} \nonumber \\ & \quad \quad \quad \quad \times \exp\{ \sum_{r = k+1}^n - A(s_r)(t_{r} - t_{r-1}) \} \nonumber \\ & \quad \quad \sim \mathsf{E} I( \nu^*_\varepsilon > t_{k-1})\exp \{ - \sum_{r= 1}^{k-1} s_r (\kappa_\varepsilon(t_r) - \kappa_\varepsilon(t_{r-1})) \} \nonumber \\ & \quad \quad \quad \quad \times \exp\{- (t_k - t_{k-1}) \} \exp \{ \sum_{r = k}^n - A(s_r)(t_{r} - t_{r-1}) \} \nonumber \\ & \quad \quad \cdots \sim \exp\{- \sum_{r = 1}^k (t_r - t_{r-1})\} \exp \{ \sum_{r = 1}^n - A(s_r)(t_{r} - t_{r-1}) \} \nonumber \\ & \quad \quad = \exp\{- t \} \exp \{ \sum_{r = 1}^n - A(s_r)(t_{r} - t_{r-1}) \} \ {\rm as} \ \varepsilon \to 0. \end{align} This relation is an equivalent form of the asymptotic relation given in Lemma 9. $\Box$
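Before completing the proof, it is worth noting, as a consistency check rather than a step of the argument, how Lemma 9 agrees with propositions {\bf (i)} and {\bf (ii)} of Theorem 1. Since the random variable $\nu_0$ and the process $\theta_0(t), t \geq 0$, are independent, the Laplace transform of the random variable $\theta_0(\nu_0)$ can be computed directly, for $s > 0$,
$$
\mathsf{E} e^{- s \theta_0(\nu_0)} = \int_0^\infty e^{-u} \, \mathsf{E} e^{- s \theta_0(u)} \, du = \int_0^\infty e^{-u} e^{- u A(s)} \, du = \frac{1}{1 + A(s)},
$$
which coincides with the limit of the Laplace transforms $\mathsf{E} e^{-s \xi_\varepsilon}$ obtained in relation (\ref{edibbifernobomi}).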
Now, we can complete the proof of Theorem 1. The asymptotic relation given in Lemma 9 can, obviously, be rewritten in the following equivalent form, \begin{equation}\label{final} (t\nu^*_\varepsilon, \kappa_\varepsilon(t)), t \geq 0 \stackrel{d}{\longrightarrow} (t\nu_0, \theta_0(t)), t \geq 0 \ {\rm as} \ \varepsilon \to 0, \end{equation} where the random variable $\nu_0$ and the stochastic process $\theta_0(t), t \geq 0$, are described in Lemma 9. The asymptotic relation given in proposition {\bf (iii)} of Theorem 2 and relation (\ref{final}) let us apply Theorem 3.4.1 from Silvestrov (2004) to the compositions of the stochastic processes $\kappa_\varepsilon(t), t \geq 0$, and $t\nu^*_\varepsilon, t \geq 0$, which yields the following relation, \begin{equation}\label{finala} \xi_\varepsilon(t) = \kappa_\varepsilon(t\nu^*_\varepsilon), t \geq 0 \stackrel{\mathsf{J}}{\longrightarrow} \theta_0(t\nu_0), t \geq 0 \ {\rm as} \ \varepsilon \to 0. \end{equation} The proof of Theorem 1 is complete. $\Box$ \begin{thebibliography}{99} {\footnotesize \bibitem{Ald3} Aldous, D.J. (1982). Markov chains with almost exponential hitting times. Stoch. Process. Appl., {\bf 13}, 305--310. \bibitem{Ali2} Alimov, D., Shurenkov, V.M. (1990a). Markov renewal theorems in triangular \mbox{array} model. Ukr. Mat. Zh., {\bf 42}, 1443--1448 (English translation in Ukr. Math. J., {\bf 42}, 1283--1288). \bibitem{Ali3} Alimov, D., Shurenkov, V.M. (1990b). Asymptotic behavior of terminating Markov processes that are close to ergodic. Ukr. Mat. Zh., {\bf 42}, 1701--1703 (English translation in Ukr. Math. J., {\bf 42}, 1535--1538). \bibitem{Ani3} Anisimov, V.V. (1971a). Limit theorems for sums of random variables on a Markov chain, connected with the exit from a set that forms a single class in the limit. Teor. Veroyatn. Mat. Stat., {\bf 4}, 3--17 (English translation in Theory Probab. Math. Statist., {\bf 4}, 1--13). \bibitem{Ani4} Anisimov, V.V. (1971b). Limit theorems for sums of random variables in array of \mbox{sequences} defined on a subset of states of a Markov chain up to the exit time. Teor. Veroyatn. Mat. Stat., {\bf 4}, 18--26 (English translation in Theory Probab. Math. Statist., {\bf 4}, 15--22). \bibitem{Ani11} Anisimov, V.V. (1988). Random Processes with Discrete Components. Vysshaya Shkola and Izdatel'stvo Kievskogo Universiteta, Kiev, 183 pp. \bibitem{Ani12} Anisimov, V.V. (2008). Switching Processes in Queueing Models. Applied Stochastic Methods Series. ISTE, London and Wiley, Hoboken, NJ, 345 pp. \bibitem{Asm1} Asmussen, S. (1994). Busy period analysis, rare events and transient behavior in fluid flow models. J. Appl. Math. Stoch. Anal., {\bf 7}, no. 3, 269--299. \bibitem{Asm2} Asmussen, S. (2003). Applied probability and queues. Second edition. Applications of Mathematics, {\bf 51}, Stochastic Modelling and Applied Probability. Springer, New York, xii+438 pp. \bibitem{AsH} Asmussen, S., Albrecher, H. (2010). Ruin probabilities. Second edition. Advanced Series on Statistical Science \& Applied Probability, {\bf 14}, World Scientific, Hackensack, NJ, xviii+602 pp. \bibitem{Ben8} Bening, V.E., Korolev, V.Yu. (2002). Generalized Poisson Models and their Applications in Insurance and Finance. Modern Probability and Statistics, VSP, Utrecht, 432 pp. \bibitem{BLM} Benois, O., Landim, C., Mourragui, M. (2013). Hitting times of rare events in Markov chains. J. Stat. Phys., {\bf 153}, no. 6, 967--990. \bibitem{Bil4} Billingsley, P. (1968, 1999). Convergence of Probability Measures. Wiley Series in Proba\-bility and Statistics, Wiley, New York, x+277 pp.
\bibitem{Brown} Brown, M., Shao, Y. (1987). Identifying coefficients in spectral representation for first passage-time distributions. Prob. Eng. Inf. Sci., {\bf 1}, 69--74. \bibitem{Dar1} Darroch, J., Seneta, E. (1965). On quasi-stationary distributions in absorbing discrete-time finite Markov chains. J. Appl. Probab., {\bf 2}, 88--100. \bibitem{Dar2} Darroch, J., Seneta, E. (1967). On quasi-stationary distributions in absorbing continuous-time finite Markov chains. J. Appl. Probab., {\bf 4}, 192--196. \bibitem{Dr1} Drozdenko, M. (2007a). Weak convergence of first-rare-event times for semi-Markov processes. I. Theory Stoch. Process., {\bf 13(29)}, no. 4, 29--63. \bibitem{Dr2} Drozdenko, M. (2007b). Weak Convergence of First-Rare-Event Times for Semi-Markov Processes. Doctoral dissertation {\bf 49}, M\"{a}lardalen University, V\"{a}ster{\aa}s. \bibitem{Dr3} Drozdenko, M. (2009). Weak convergence of first-rare-event times for semi-Markov processes. II. Theory Stoch. Process., {\bf 15(31)}, no. 2, 99--118. \bibitem{Ele5} Ele\u \i ko, Ya.I., Shurenkov, V.M. (1995). Transient phenomena in a class of matrix-valued stochastic evolutions. Teor. \v{I}movirn. Mat. Stat., {\bf 52}, 72--76 (English translation in \mbox{Theory} Probab. Math. Statist., {\bf 52}, 75--79). \bibitem{Feller} Feller, W. (1966, 1971). An Introduction to Probability Theory and Its Applica\-tions, Vol. II. Wiley Series in Probability and Statistics, Wiley, New York, 669 pp. \bibitem{GS2} Gikhman, I.I., Skorokhod, A.V. (1971). Theory of Random Processes. 1. Probabi\-lity Theory and Mathematical Statistics, Nauka, Moscow, 664 pp. (English edition: The Theory of Stochastic Processes. 1. Fundamental Principles of Mathematical Sciences, {\bf 210}, Springer, New York (1974) and Berlin (1980), viii+574 pp.). \bibitem{Gly} Glynn, P. (2011). On exponential limit laws for hitting times of rare sets for Harris chains and processes. In: Glynn, P., Mikosch, T. and Rolski, T. (Eds). New Frontiers in Applied Probability: a Festschrift for S{\o}ren Asmussen, J. Appl. Probab. Spec. Vol. {\bf 48A}, 319--326. \bibitem{GnKo1} Gnedenko, B.V., Korolev, V.Yu. (1996). Random Summation. Limit Theorems and Applications. CRC Press, Boca Raton, FL, 288 pp. \bibitem{GuHo} Gut, A., Holst, L. (1984). On the waiting time in a generalized roulette game. Statist. Probab. Lett., {\bf 2}, no. 4, 229--239. \bibitem{GySi1} Gyllenberg, M., Silvestrov, D.S. (1994). Quasi-stationary distributions of a stochastic metapopulation model. J. Math. Biol., {\bf 33}, 35--70. \bibitem{GySl2} Gyllenberg, M., Silvestrov, D.S. (1999). Quasi-stationary phenomena for semi-Markov processes. In: Janssen, J., Limnios, N. (Eds). Semi-Markov Models and Applications. Kluwer, Dordrecht, 33--60. \bibitem{GySl3} Gyllenberg, M., Silvestrov, D.S. (2000). Nonlinearly perturbed regenerative processes and pseudo-stationary phenomena for stochastic systems. Stoch. Process. Appl., {\bf 86}, 1--27. \bibitem{GySi4} Gyllenberg, M., Silvestrov, D.S. (2008). Quasi-Stationary Phenomena in Nonlinearly Perturbed Stochastic Systems. De Gruyter Expositions in Mathematics, {\bf 44}, Walter de Gruyter, Berlin, ix+579 pp. \bibitem{Hassin} Hassin, R., Haviv, M. (1992). Mean passage times and nearly uncoupled Markov chains. SIAM J. Discrete Math., {\bf 5}, 386--397. \bibitem{Kap1} Kaplan, E.I. (1979). Limit theorems for exit times of random sequences with mixing. Teor. Veroyatn. Mat. Stat., {\bf 21}, 53--59 (English translation in Theory Probab. Math. Statist., {\bf 21}, 59--65). \bibitem{Kap3} Kaplan, E.I. (1980).
Limit Theorems for Sum of Switching Random Variables with an Arbitrary Phase Space of Switching Component. Candidate of Science dissertation, Kiev State University. \bibitem{Kal13} Kalashnikov, V.V. (1997). Geometric Sums: Bounds for Rare Events with Applications. Mathematics and its Applications, {\bf 413}, Kluwer, Dordrecht, xviii+265 pp. \bibitem{Kar1} Kartashov, N.V. (1987). Estimates for the geometric asymptotics of Markov times on homogeneous chains. Teor. Veroyatn. Mat. Stat., {\bf 37}, 66--77 (English translation in Theory Probab. Math. Statist., {\bf 37}, 75--88). \bibitem{Kar12} Kartashov, N.V. (1991). Inequalities in R\'{e}nyi's theorem. Teor. \v{I}movirn. Mat. Stat., {\bf 45}, 27--33 (English translation in Theory Probab. Math. Statist., {\bf 45}, 23--28). \bibitem{Kar13} Kartashov, N.V. (1996). Strong Stable Markov Chains, VSP, Utrecht and TBiMC, Kiev, 138 pp. \bibitem{Kar14} Kartashov, M.V. (2013). Quantitative and qualitative limits for exponential asymptotics of hitting times for birth-and-death chains in a scheme of series. Teor.\ \v{I}movirn.\ Mat.\ Stat., {\bf 89}, 40--50 (English translation in Theory Probab. Math. Statist., \mbox{{\bf 89}, 45--56).} \bibitem{Kie1} Keilson, J. (1966). A limit theorem for passage times in ergodic regenerative processes. Ann. Math. Statist., {\bf 37}, 866--870. \bibitem{Kie2} Keilson, J. (1979). Markov Chain Models -- Rarity and Exponentiality. Applied Mathematical Sciences, {\bf 28}, Springer, New York, xiii+184 pp. \bibitem{Kij} Kijima, M. (1997). Markov Processes for Stochastic Modelling. Stochastic Modeling Series. Chapman \& Hall, London, x+341 pp. \bibitem{Kin} Kingman, J.F. (1963). The exponential decay of Markovian transition probabilities. Proc. London Math. Soc., {\bf 13}, 337--358. \bibitem{KoSi1} Korolyuk, D.V., Silvestrov, D.S. (1983). Entry times into asymptotically receding \mbox{domains} for ergodic Markov chains. Teor. Veroyatn. Primen., {\bf 28}, 410--420 (English translation in Theory Probab. Appl., {\bf 28}, 432--442). \bibitem{KoSi2} Korolyuk, D.V., Silvestrov, D.S. (1984). Entry times into asymptotically receding \mbox{regions} for processes with semi-Markov switchings. Teor. Veroyatn. Primen., {\bf 29}, 539--544 \mbox{(English} translation in Theory Probab. Appl., {\bf 29}, 558--563). \bibitem{Koro1} Korolyuk, V.S. (1969). On asymptotical estimate for time of a semi-Markov process \mbox{being} in the set of states. Ukr. Mat. Zh., {\bf 21}, 842--845. \bibitem{KoKo2} Korolyuk, V.S., Korolyuk, V.V. (1999). Stochastic Models of Systems. Mathematics and its Applications, {\bf 469}, Kluwer, Dordrecht, xii+185 pp. \bibitem{KoLi0} Koroliuk, V.S., Limnios, N. (2005). Stochastic Systems in Merging Phase Space. World Scientific, Singapore, xv+331 pp. \bibitem{KSw} Korolyuk, V., Swishchuk, A. (1992). Semi-Markov Random Evolutions. Naukova Dumka, Kiev, 254 pp. (English revised edition: Semi-Markov Random Evolutions. Mathematics and its Applications, {\bf 308}, Kluwer, Dordrecht, 1995, x+310 pp.). \bibitem{KoTu1} Korolyuk, V.S., Turbin, A.F. (1970). On the asymptotic behaviour of the occupation time of a semi-Markov process in a reducible subset of states. Teor. Veroyatn. Mat. Stat., {\bf 2}, 133--143 (English translation in Theory Probab. Math. Statist., {\bf 2}, 133--143). \bibitem{KoTu2} Korolyuk, V.S., Turbin, A.F. (1976). Semi-Markov Processes and its Applications. Naukova Dumka, Kiev, 184 pp. \bibitem{KoTu3} Korolyuk, V.S., Turbin, A.F. (1978). Mathematical Foundations of the State Lumping of Large Systems.
Naukova Dumka, Kiev, 218 pp. (English edition: Mathematics and its Applications, {\bf 264}, Kluwer, Dordrecht, 1993, x+278 pp.) \bibitem{Kov2} Kovalenko, I.N. (1973). An algorithm of asymptotic analysis of a sojourn time of Markov chain in a set of states. Dokl. Acad. Nauk Ukr. SSR, Ser. A, no. 6, 422--426. \bibitem{Kov1} Kovalenko, I.N. (1965). On the class of limit distributions for thinning flows of homogeneous events. Litov. Mat. Sbornik, {\bf 5}, 569--573. \bibitem{Kov6} Kovalenko, I.N. (1994). Rare events in queuing theory -- a survey. Queueing Systems Theory Appl., {\bf 16}, no. 1-2, 1--49. \bibitem{KoKu1} Kovalenko, I.N., Kuznetsov, M.Ju. (1981). Renewal process and rare events limit theorems for essentially multidimensional queueing processes. Math. Operat. Statist., Ser. Statist., {\bf 12}, no. 2, 211--224. \bibitem{Kup} Kupsa, M., Lacroix, Y. (2005). Asymptotics for hitting times. Ann. Probab., {\bf 33}, no. 2, 610--619. \bibitem{Lat} Latouche, G., Louchard, G. (1978). Return times in nearly decomposable stochastic processes. J. Appl. Probab., {\bf 15}, 251--267. \bibitem{Loev} Lo\`eve, M. (1977). Probability Theory. I. Fourth edition. Graduate Texts in Mathematics, {\bf 45}, Springer, New York, xvii+425 pp. \bibitem{MaSi1} Masol, V.I., Silvestrov, D.S. (1972). Record values of the occupation time of a semi-Markov process. Visnik Kiev. Univ., Ser. Mat. Meh., {\bf 14}, 81--89. \bibitem{MoSi2} Motsa, A.I., Silvestrov, D.S. (1996). Asymptotics of extremal statistics and \mbox{functionals} \mbox{of additive} type for Markov chains. In: Klesov, O., Korolyuk, V., Kulldorff, G., \mbox{Silvestrov, D.} (Eds). Proceedings of the First Ukrainian--Scandinavian Conference on Stochastic Dynamical Systems, Uzhgorod, 1995. Theory Stoch. Proces., {\bf 2(18)}, no. 1-2, \mbox{217--224}. \bibitem{Ser} Serlet, L. (2013). Hitting times for the perturbed reflecting random walk. Stoch. Process. Appl., {\bf 123}, no. 1, 110--130. \bibitem{Shu1} Shurenkov, V.M. (1980a). Transition phenomena of the renewal theory in \mbox{asymptotical\,} problems of theory of random processes 1. Mat. Sbornik, {\bf 112}, 115--132 (English transla\-tion in Math. USSR: Sbornik, {\bf 40}, no. 1, 107--123 (1981)). \bibitem{Shu2} Shurenkov, V.M. (1980b). Transition phenomena of the renewal theory in \mbox{asymptotical\,} problems of theory of random processes 2. Mat. Sbornik, {\bf 112}, 226--241 (English transla\-tion in Math. USSR: Sbornik, {\bf 40}, no. 2, 211--225 (1981)). \bibitem{Sil5} Silvestrov, D.S. (1970). Limit theorems for semi-Markov processes and their applica\-tions. 1, 2. Teor. Veroyatn. Mat. Stat., {\bf 3}, 155--172, 173--194 (English translation in \mbox{Theory} Probab. Math. Statist., {\bf 3}, 159--176, 177--198). \bibitem{Sil7} Silvestrov, D.S. (1971). Limit theorems for semi-Markov summation schemes. 1. Teor. Veroyatn. Mat. Stat., {\bf 4}, 153--170 (English translation in Theory Probab. Math. Statist., {\bf 4}, 141--157). \bibitem{Sil1} Silvestrov, D.S. (1974). Limit Theorems for Composite Random Functions. Vysshaya Shkola and Izdatel'stvo Kievskogo Universiteta, Kiev, 318 pp. \bibitem{Sil25} Silvestrov, D.S. (1980). Semi-Markov Processes with a Discrete State Space. Library for an Engineer in Reliability, Sovetskoe Radio, Moscow, 272 pp. \bibitem{Sil26} Silvestrov, D.S. (1981). Theorems of large deviations type for entry times of a sequence with mixing. Teor. Veroyatn. Mat. Stat., {\bf 24}, 129--135 (English translation in Theory Probab. Math. Statist., {\bf 24}, 145--151). \bibitem{Sil31} Silvestrov, D.S.
(1995). Exponential asymptotics for perturbed renewal equations. Teor. \v{I}movirn. Mat. Stat., {\bf 52}, 143--153 (English translation in Theory Probab. Math. Statist., {\bf 52}, 153--162). \bibitem{Sil34} Silvestrov, D.S. (2000). Nonlinearly perturbed Markov chains and large deviations for lifetime functionals. In: Limnios, N., Nikulin, M. (Eds). Recent Advances in Reliability Theory: Methodology, Practice and Inference. Birkh\"{a}user, Boston, \mbox{135--144.} \bibitem{Sil2} Silvestrov, D.S. (2004). Limit Theorems for Randomly Stopped Stochastic Processes. Probability and Its Applications, Springer, London, xvi+398 pp. \bibitem{Sil8} Silvestrov, D.S. (2014). Improved asymptotics for ruin probabilities. In: Silvestrov, D., Martin-L\"{o}f, A. (Eds). Modern Problems in Insurance Mathematics, Chapter {\bf 5}, EAA series, Springer, Cham, 93--110. \bibitem{Sil9} Silvestrov, D. (2016). Necessary and sufficient conditions for convergence of first-rare-event times for perturbed semi-Markov processes. Research Report 2016-4, Department of Mathematics, Stockholm University, 39 pp. \bibitem{SiA1} Silvestrov, D.S., Abadov, Z.A. (1991). Uniform asymptotic expansions for exponential moments of sums of random variables defined on a Markov chain and distributions of entry times. 1. Teor. Veroyatn. Mat. Stat., {\bf 45}, 108--127 (English translation in Theory Probab. Math. Statist., {\bf 45}, 105--120). \bibitem{SiA2} Silvestrov, D.S., Abadov, Z.A. (1993). Uniform representations of exponential moments of sums of random variables defined on a Markov chain, and of distributions of passage times. 2. Teor. Veroyatn. Mat. Stat., {\bf 48}, 175--183 (English translation in Theory Probab. Math. Statist., {\bf 48}, 125--130). \bibitem{SD3} Silvestrov, D.S., Drozdenko, M.O. (2005). Necessary and sufficient conditions for the weak convergence of the first-rare-event times for semi-Markov processes. Dopov. Nac. Akad. Nauk Ukr., Mat. Prirodozn. Tekh. Nauki, no. 11, 25--28. \bibitem{SD1} Silvestrov, D.S., Drozdenko, M.O. (2006a). Necessary and sufficient conditions for weak convergence of first-rare-event times for semi-Markov processes. I. Theory Stoch. Process., {\bf 12(28)}, no. 3-4, 151--186. \bibitem{SD2} Silvestrov, D.S., Drozdenko, M.O. (2006b). Necessary and sufficient conditions for weak convergence of first-rare-event times for semi-Markov processes. II. Theory Stoch. Process., {\bf 12(28)}, no. 3-4, 187--202. \bibitem{SS1} Silvestrov, D., Silvestrov, S. (2015). Asymptotic expansions for stationary distributions of perturbed semi-Markov processes. Research Report 2015-9, Department of Mathematics, Stockholm University, 75 pp. \bibitem{SiVe} Silvestrov, D.S., Velikii, Yu.A. (1988). Necessary and sufficient conditions for convergence of attainment times. In: Zolotarev, V.M., Kalashnikov, V.V. (Eds). Stability Problems for Stochastic Models. Trudy Seminara, VNIISI, Moscow, 129--137 (English translation in J. Soviet. Math., {\bf 57}, (1991), 3317--3324). \bibitem{SiAn1} Simon, H.A., Ando, A. (1961). Aggregation of variables in dynamic systems. \mbox{Econometrica,} {\bf 29}, 111--138. \bibitem{Sko5} Skorokhod, A.V. (1964). Random Processes with Independent Increments. Probability Theory and Mathematical Statistics, Nauka, Moscow, 278 pp. (English edition: Nat. Lending Library for Sci. and Tech., Boston \mbox{Spa, 1971)}. \bibitem{Sko6} Skorokhod, A.V. (1986). Random Processes with Independent Increments. Second edition, Probability Theory and Mathematical Statistics, Nauka, Moscow, 320 pp.
(English edition: Mathematics and its Applications, {\bf 47}, Kluwer, Dordrecht, 1991, xii+279 pp.). \bibitem{St9} Stewart, G.W. (1998). Matrix Algorithms. Vol. I. Basic Decompositions. SIAM, Philadelphia, PA, xx+458 pp. \bibitem{St10} Stewart, G.W. (2001). Matrix Algorithms. Vol. II. Eigensystems. SIAM, Philadelphia, PA, xx+469 pp. \bibitem{Turb} Turbin, A.F. (1971). On asymptotic behavior of time of a semi-Markov process being in a reducible set of states. Linear case. Teor. Veroyatn. Mat. Stat., {\bf 4}, 179--194 (English translation in Theory Probab. Math. Statist., {\bf 4}, 167--182). \bibitem{YZ2} Yin, G.G., Zhang, Q. (2005). Discrete-time Markov chains. Two-time-scale methods and applications. Stochastic Modelling and Applied Probability, Springer, New York, xix+348 pp. \bibitem{YZ3} Yin, G.G., Zhang, Q. (2013). Continuous-time Markov chains and applications. A two-time-scale approach. Second edition. Stochastic Modelling and Applied Probability, {\bf 37}, Springer, New York, xxii+427 pp. \bibitem{Zak1} Zakusilo, O.K. (1972a). Thinning semi-Markov processes. Teor. Veroyatn. Mat. Stat., {\bf 6}, 54--59 (English translation in Theory Probab. Math. Statist., {\bf 6}, 53--58). \bibitem{Zak2} Zakusilo, O.K. (1972b). Necessary conditions for convergence of semi-Markov processes that thin. Teor. Veroyatn. Mat. Stat., {\bf 7}, 65--69 (English translation in Theory Probab. Math. Statist., {\bf 7}, 63--66). } \end{thebibliography} \end{document}
math
132,920
\begin{document} \author{Santeri Miihkinen} \address{Santeri Miihkinen:\ Department of Mathematics and Statistics, University of Helsinki, Box 68, 00014 Helsinki, Finland} \email{[email protected]} \title[\resizebox{4.5in}{!}{Strict singularity of a Volterra-type integral operator on $H^p$}]{Strict singularity of a Volterra-type integral operator on $H^p$} \begin{abstract} We prove that a Volterra-type integral operator $$T_gf(z) = \int_0^z f(\zeta)g'(\zeta)d\zeta, \quad z \in \mathbb{D},$$ defined on Hardy spaces $H^p, \, 1 \le p < \infty,$ fixes an isomorphic copy of $\ell^p,$ if the operator $T_g$ is not compact. In particular, this shows that the strict singularity of the operator $T_g$ coincides with the compactness of the operator $T_g$ on spaces $H^p.$ As a consequence, we obtain a new proof for the equivalence of the compactness and the weak compactness of the operator $T_g$ on $H^1$. \end{abstract} \maketitle \section{Introduction} Let $g$ be a fixed analytic function in the open unit disc $\mathbb{D}$ of the complex plane $\mathbb{C}$. We consider a linear integral operator $T_g$ defined formally for analytic functions $f$ in $\mathbb{D}$ by $$T_gf(z) = \int_0^z f(\zeta)g'(\zeta)d\zeta, \quad z \in \mathbb{D}.$$ Ch.\ Pommerenke was the first author to consider the boundedness of the operator $T_g$ on Hardy space $H^2$ and he characterized it in \cite{Pommerenke} in a connection to exponentials of $BMOA$ functions. A systematic study of the operator $T_g$ was initiated by A.\ Aleman and A.\ G.\ Siskakis in \cite{AS1}, where they stated the boundedness and compactness characterization of $T_g$ on Hardy spaces $H^p, \, 1 \le p < \infty.$ Namely, they observed that $T_g$ is bounded (compact) if and only if $g \in BMOA$ ($g \in VMOA$). The same boundedness characterization of the operator $T_g$ on $H^p, \, 0 < p < 1,$ spaces was obtained by Aleman and J.\ Cima in \cite{AC}. Many properties of the operator $T_g$ have been studied by several authors later on and they are well known in most spaces of analytic functions, see also surveys \cite{A2} and \cite{Siskakis}. However, one operator theoretically interesting property, the strict singularity, has not been considered in the case of $T_g.$ A bounded operator $S\colon X \to Y$ between Banach spaces is strictly singular if its restriction to any infinite-dimensional closed subspace is not an isomorphism onto its image. This notion generalizes the concept of compact operators and it was introduced by T.\ Kato in \cite{Kato}. Canonical examples of strictly singular non-compact operators are inclusion mappings $i_{p,q} \colon \ell^p \hookrightarrow \ell^q,$ where $1 \le p < q < \infty.$ There also exist strictly singular non-compact operators on $H^p$ spaces for $1 \le p < \infty, \, p \ne 2$. The aim of this note is to show that a non-compact operator $T_g$ defined on Hardy spaces $H^p, \, 1 \le p < \infty,$ fixes an isomorphic copy of $\ell^p.$ In particular, this implies that the operator $T_g$ is strictly singular on $H^p$ if and only if it is compact. Moreover, this gives a new proof for the equivalence of compactness and weak compactness of $T_g$ on Hardy space $H^1,$ see \cite{LMN}. Our main result is the following theorem. 
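A standard example to keep in mind, recalled here only for orientation, is $g(z) = \log\frac{1}{1-z}$, for which $T_gf(z) = \int_0^z f(\zeta)(1-\zeta)^{-1}\,d\zeta$ is closely related to the classical Ces\`aro operator. It is classical that $g \in BMOA$, and $g \notin VMOA$: for $a \in (0,1)$ a direct computation gives $$g \circ \sigma_a(z) - g(a) = \log\frac{1-az}{1+z},$$ which converges to the nonconstant function $\log\frac{1-z}{1+z}$ in $H^2$ as $a \to 1^-$, so that $\limsup_{|a| \to 1}\|g \circ \sigma_a - g(a)\|_2 > 0$ (the relevant definitions are recalled in Section 2). The theorem below therefore applies in particular to this operator.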
\begin{T_gstrictsing} \label{T_gstrictsing} Let $g \in BMOA \setminus VMOA$ and $1 \le p < \infty.$ Then the operator $$T_g \colon H^p \to H^p$$ fixes an isomorphic copy of $\ell^p$ inside $H^p.$ In particular, the operator $T_g$ is not strictly singular, i.e.\ the class of strictly singular operators $T_g$ coincides with the class of compact operators $T_g$. \end{T_gstrictsing} We should point out that there is an interesting extrapolation result by Hern{\'a}ndez, Semenov, and Tradacete\ in \cite[Theorem 3.3]{Hernandez}. It states that if an operator $S$ is bounded on $L^p$ and $L^q$ for some $1 < p < q < \infty$ and strictly singular on $L^r$ for some $p < r < q,$ then it is compact on $L^s$ for all $p < s < q.$ If the corresponding statement for $L^p$ spaces of complex-valued functions is true, then the equivalence of strict singularity and compactness of $T_g$ on $H^p$ for $1 < p < \infty$ follows immediately by using the Riesz projection: Recall that strictly singular operators form a two-sided (closed) ideal in the space $\mathcal{L}(L^p)$ of bounded operators on $L^p=L^p(\mathbb{T}),$ where $\mathbb{T} = \partial\mathbb{D}$. Therefore the strict singularity of $T_g\colon H^p \to H^p$ implies that $T_gR\colon L^p \to L^p$ is strictly singular, where $R\colon L^p \to H^p$ is the Riesz projection and we have identified $T_g\colon H^p \to H^p$ with $T_g\colon H^p \to L^p$. Since the condition $g \in BMOA$ characterizes the boundedness of $T_g$ on every $H^q, \, 0 < q < \infty,$ space and the Riesz projection is bounded on the scale $1 < q < \infty$, we get that $T_gR$ is bounded on every $L^q, \, 1 < q < \infty,$ space. Now assuming that the complex version of the extrapolation result is valid, it follows that $T_gR$ is compact on $L^p$ and consequently the restriction $T_gR|_{H^p} = T_g$ is compact on $H^p.$ However, Theorem \ref{T_gstrictsing} states more: a non-compact operator $T_g$ on $H^p$ fixes an isomorphic copy of $\ell^p$ and this is also true in the case $p = 1.$ Theorem \ref{T_gstrictsing} also gives a new proof for the equivalence of the compactness and the weak compactness of the operator $T_g$ on $H^1:$ If $g \in BMOA \setminus \nolinebreak VMOA,$ i.e.\ the operator $T_g$ is not compact, then by Theorem \ref{T_gstrictsing} the operator $T_g$ fixes an infinite-dimensional subspace $M$, an isomorphic copy of $\ell^1.$ The class of compact operators on $\ell^1$ coincides with the class of weakly compact operators on $\ell^1$. As an isomorphism, the restriction $T_g|_M$ is not compact and hence it is not weakly compact. Therefore the operator $T_g$ is not weakly compact. \section{Preliminaries} In this section, we briefly remind the reader of some common spaces of analytic functions that appear later and state a theorem of Aleman and Cima which we need in the proof of our main result, Theorem \ref{T_gstrictsing}. Let $H(\mathbb{D})$ be the algebra of analytic functions in $\mathbb{D}$. We define Hardy spaces $$H^p = \left\{f \in H(\mathbb{D}): \|f\|_p = \left( \sup_{0 \le r < 1}\frac{1}{2\pi}\int_0^{2\pi} |f(re^{it})|^p dt\right)^{1/p} < \infty \right\}.$$ The space $BMOA$ consists of functions $f \in H(\mathbb{D})$ with $$\|f\|_* = \sup_{a \in \mathbb{D}}\|f \circ \sigma_a - f(a)\|_2 < \infty,$$ where $\sigma_a(z) = (a-z)/(1-\bar{a}z)$ is the Möbius automorphism of $\mathbb{D}$ that interchanges the origin and the point $a \in \mathbb{D}$.
Its closed subspace $VMOA$ consists of those $f \in H(\mathbb{D})$ with $$\limsup_{|a| \to 1}\|f \circ \sigma_a - f(a)\|_2 = 0.$$ See e.g.\ \cite{Girela} for more information on spaces $BMOA$ and $VMOA$. The Bloch space $\mathcal{B}$ is the Banach space of functions $f \in H(\mathbb{D})$ s.t.\ $$\sup_{z \in \mathbb{D}}(1-|z|^2)|f'(z)| < \infty.$$ We use notation $A \lesssim B$ to indicate that $A \le cB$ for some positive constant $c$ whose value may change from one occurence into another and which may depend on $p$. If $A \lesssim B$ and $B \lesssim A,$ we say that the quantities $A$ and $B$ are equivalent and write $A \simeq B.$ Every $BMOA$ function $f$ satisfies a reverse ``Hölder's inequality'', which implies that for each $0 < p < \infty$ it holds that $$\|f\|_* \simeq \sup_{a \in \mathbb{D}}\|f \circ \sigma_a - f(a)\|_p < \infty,$$ where the proportionality constants depend on $p.$ Similarly, a function $f$ is in $VMOA$ if and only if $$\limsup_{|a| \to 1}\|f \circ \sigma_a - f(a)\|_p = 0.$$ The proof of Theorem \ref{T_gstrictsing} utilizes a result of Aleman and Cima \cite[Theorem 3]{Aleman_Cima}. We state it here for convenience. \begin{aleman_cima} \label{aleman_cima} Let $p > 0$ and $g \in H^p.$ For $a \in \mathbb{D},$ let $\sigma_a(z) = (a - z)/(1 - \bar{a}z)$ and $f_a(z) = (1 - |a|^2)^{1/p}/(1 - \bar{a}z)^{2/p}.$ Then for $0 < t < p/2,$ there exists a constant $A_{p,t} > 0$ (depending only on $p$ and $t$) such that $$\|g\circ \sigma_a - g(a)\|_t^t \le A_{p,t}\|T_g f_a\|_p^t.$$ \end{aleman_cima} \section{Main result} Our goal is to show that a non-compact operator $T_g\colon H^p \to H^p, \, 1 \le p < \infty, \, g \in BMOA\setminus VMOA$, fixes an isomorphic copy of $\ell^p$ yielding that the compactness and strict singularity are equivalent for $T_g$ on $H^p$. This is done by constructing bounded operators $V\colon \ell^p \to H^p$ and $U\colon \ell^p \to H^p,$ where $V(\ell^p) = M$ is a closure of a linear span of suitably chosen test functions $f_{a_k} \in H^p$ and the operator $U$ is an isomorphism onto its image $U(\ell^p) = T_g(M)$. Then it is straightforward to show that the restriction $T_g|_M\colon M \to T_g(M)$ is bounded from below by a positive constant and consequently an isomorphism, see Figure \ref{fig}. \begin{figure} \caption{\textbf{Operators $U, V$ and $T_g$} \label{fig} \end{figure} The strategy for choosing the suitable test functions in Proposition \ref{bddmap} and Theorem \ref{isomorphism} is similar to the one used by Laitila, Nieminen and Tylli in \cite{Tylli}, where they utilized these test functions to show that a non-compact composition operator $C_\varphi\colon H^p \to H^p,$ where $\varphi\colon \mathbb{D} \to \mathbb{D}$ is analytic, fixes an isomorphic copy of $\ell^p$. Before proving our main result (Theorem \ref{T_gstrictsing}), we need some preparations. We prove first a localization lemma for the standard test functions in $H^p, \, 1 \le p < \infty$, defined by $$f_a(z) = \left[\frac{1-|a|^2}{(1-\bar{a}z)^2}\right]^{1/p}, \quad z \in \mathbb{D},$$ for each $a \in \mathbb{D}$. 
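For later reference, note that each $f_a$ is a unit vector in $H^p$: on the boundary one has $$|f_a(e^{it})|^p = \frac{1-|a|^2}{|1-\bar{a}e^{it}|^2},$$ which is the Poisson kernel at $a$, so that $\|f_a\|_p^p = \frac{1}{2\pi}\int_0^{2\pi}\frac{1-|a|^2}{|1-\bar{a}e^{it}|^2}\,dt = 1.$ This normalization is used in the proof of Proposition \ref{bddmap} below.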
\begin{masslemma} \label{masslemma} Let $1 \le p < \infty, \, \varepsilon > 0$ and $(a_k) \subset \mathbb{D}$ be a sequence s.t.\ $(|a_k|)$ is increasing and $a_k \to \omega \in \mathbb{T}.$ Define $$A_\varepsilon = \{e^{i\theta}: |\theta - \textup{arg}(\omega)| < \varepsilon\}.$$ Then \begin{align*} &\textrm{(i) $\lim_{k \to \infty}\int_{\mathbb{T}\setminus A_\varepsilon}|f_{a_k}|^p dm = 0.$}& \\ &\textrm{(ii) If $k$ is fixed, then $\lim_{\varepsilon \to 0}\int_{A_\varepsilon}|f_{a_k}|^p dm = 0.$}& \end{align*} \end{masslemma} \begin{proof} \textbf{(i)} Fix $\varepsilon > 0.$ It holds that $$|1 - \bar{a_k}\zeta| \gtrsim |1 - \bar{\omega} \zeta| \ge |\omega - \zeta| \gtrsim \varepsilon$$ for $\zeta \in \mathbb{T} \setminus A_\varepsilon$ and large enough $k$. Thus $$|f_{a_k}(\zeta)|^p = \frac{1 - |a_k|^2}{|1-\bar{a_k}\zeta|^2} \le \frac{1 - |a_k|^2}{|\omega-\zeta|^2} \lesssim \frac{1-|a_k|^2}{\varepsilon^2}$$ when $\zeta \in \mathbb{T} \setminus A_\varepsilon,$ and it follows that $$\lim_{k \to \infty}\int_{\mathbb{T}\setminus A_\varepsilon}|f_{a_k}|^p dm = 0.$$ \textbf{(ii)} Fix $k.$ It follows from the absolute continuity of a measure $B \mapsto \int_B |f_{a_k}|^p dm$ that $\int_{A_\varepsilon} |f_{a_k}|^p dm \to 0$ as $\varepsilon \to 0.$ \end{proof} Next, utilizing test functions $f_{a_k}, a_k \in \mathbb{D},$ for which $|a_k| \to 1$ sufficiently fast, we construct a bounded operator $V\colon \ell^p \to H^p$. \begin{bddmap} \label{bddmap} Let $1 \le p < \infty$ and $(a_n) \subset \mathbb{D}$ be a sequence s.t.\ $(|a_n|)$ is increasing and $a_n \to \omega \in \mathbb{T}.$ Then there exists a subsequence $(b_n) \subset (a_n)$ so that the mapping $$S \colon \ell^p \to H^p, \, S(\alpha) = \sum_{n = 1}^\infty \alpha_n f_n,$$ where $\alpha = (\alpha_n) \in \ell^p$ and $f_n = f_{b_n},$ is bounded. In particular, every mapping $$V \colon \ell^p \to H^p, \, V(\alpha) = \sum_{n = 1}^\infty \alpha_n f_{c_n},$$ where $ (c_n) \subset (b_n),$ is bounded. \end{bddmap} \begin{proof} For each $\varepsilon > 0,$ we define a set $A_{\varepsilon} = \{e^{i\theta}: |\theta - \textup{arg}(\omega)| < \varepsilon\}.$ Using the fact that $\|f_a \|_p = 1$ for all $a \in \mathbb{D}$ and Lemma \ref{masslemma}, we choose positive numbers $\varepsilon_n$ with $\varepsilon_1 > \varepsilon_2 > \ldots > 0$ and numbers $b_n \in (a_n)$ s.t.\ the following conditions hold \begin{eqnarray*} &\textup{(i)}& \left(\int_{A_n} |f_j|^p dm \right)^{1/p} < 4^{-n}, \quad j = 1,\ldots, n - 1; \\ &\textup{(ii)}& \left(\int_{\mathbb{T} \setminus A_n} |f_n|^p dm \right)^{1/p} < 4^{-n}; \\ \bigg(&\textup{(iii)}& \left(\int_{A_n} |f_n|^p dm \right)^{1/p} \le \|f_n \|_p = 1\bigg) \end{eqnarray*} for every $n \in \mathbb{N},$ where $A_n = A_{\varepsilon_n}.$ Using conditions (i)-(iii), we show the upperbound $\|S\alpha\|_p \le C \|\alpha\|_{\ell^p}$ for all $\alpha = (\alpha_j) \in \ell^p,$ where $C > 0$ may depend on $p$. 
\begin{eqnarray*} \|S\alpha\|_p^p &=& \int_{\mathbb{T}}\left|\sum_{j=1}^\infty \alpha_j f_j \right|^p dm = \sum_{n = 1}^\infty \int_{A_n \setminus A_{n+1}}\left|\sum_{j=1}^\infty \alpha_j f_j \right|^p dm \\ &\le& \sum_{n = 1}^\infty \left( \sum_{j = 1}^\infty |\alpha_j| \left( \int_{A_n \setminus A_{n+1}}|f_j|^p dm \right)^{1/p} \right)^p \\ &\le& \sum_{n = 1}^\infty \left( |\alpha_n|\left( \int_{A_n \setminus A_{n+1}}|f_n|^p dm \right)^{1/p}+\sum_{j \ne n} |\alpha_j| \left( \int_{A_n \setminus A_{n+1}}|f_j|^p dm \right)^{1/p} \right)^p, \end{eqnarray*} where \begin{equation} \label{eq: est1} \left( \int_{A_n \setminus A_{n+1}}|f_j|^p dm \right)^{1/p} \le \left( \int_{A_n }| f_j|^p dm \right)^{1/p} < 4^{-n} \end{equation} for $j < n$ by condition (i) and \begin{equation} \label{eq: est2} \left( \int_{A_n \setminus A_{n+1}}|f_j|^p dm \right)^{1/p} \le \left( \int_{\mathbb{T} \setminus A_j }|f_j|^p dm \right)^{1/p} < 4^{-j} \end{equation} for $j > n$ by condition (ii). Thus by estimates \eqref{eq: est1} and \eqref{eq: est2}, it always holds that \begin{equation} \label{eq: est3} \left(\int_{A_n \setminus A_{n+1}}|f_j|^p dm \right)^{1/p} < 2^{-n-j} \end{equation} for $j \ne n.$ By using estimate \eqref{eq: est3} we get \begin{eqnarray*} \|S(\alpha)\|_p^p &\le& \sum_{n = 1}^\infty \left( |\alpha_n|\left( \int_{A_n \setminus A_{n+1}}|f_n|^p dm \right)^{1/p}+\sum_{j \ne n} |\alpha_j| 2^{-n-j} \right)^p \\ &\le& \sum_{n = 1}^\infty (|\alpha_n| + \|\alpha\|_{\ell^p} 2^{-n})^p \\ &\le& 2^p \left(\sum_{n = 1}^\infty |\alpha_n|^p + \|\alpha\|_{\ell^p}^p \sum_{n = 1}^\infty 2^{-np}\right) = 2^{p+1}\|\alpha\|_{\ell^p}^p, \end{eqnarray*} where we also used condition (iii) in the second inequality. Let $(c_k)$ be a subsequence of $(b_n).$ Then $(c_k) = (b_{n_k})$ for some sequence $0 < n_1 < n_2 < \ldots.$ By considering an isometry $$J\colon \ell^p \to \ell^p, \, (\alpha_k) \mapsto (\beta_j),$$ where $$\beta_j = \begin{cases} \alpha_k, &\mbox{ if } j = n_k \mbox{ for some $k$} \\ 0, &\mbox{ otherwise}, \end{cases}$$ we see that the operator $V = SJ$ is bounded. \end{proof} For a non-compact bounded operator $U$ on a Banach space of analytic functions, there exists a weakly (or weak-star in non-reflexive space case) convergent sequence $(g_n)$ so that the sequence $(Ug_n)$ of image points does not converge to zero in norm. The next result states that for a non-compact operator $T_g$ on $H^p$ we can find a sequence $(f_k)$ of test functions converging weakly to zero (or in the weak-star topology for $p=1$) so that the sequence $(T_gf_k)$ converges to a positive constant in norm. The proof is based on Theorem \ref{aleman_cima} of Aleman and Cima. 
\begin{normlimit} \label{normlimit} Let $g \in BMOA \setminus VMOA$ and $1 \le p < \infty.$ Then there exists a constant $c > 0$ s.t.\ $$\limsup_{|a| \to 1}\|T_g f_a\|_p = c.$$ In particular, there exists a sequence $(a_k) \subset \mathbb{D}$ s.t.\ $$0 < |a_1| < |a_2| < \ldots < 1$$ and $a_k \to \omega \in \mathbb{T}$ so that $$\lim_{k \to \infty}\|T_g f_k\|_p = c.$$ \end{normlimit} \begin{proof} It follows from Theorem \ref{aleman_cima} that for all $t \in (0,p/2)$ there exists a constant $C = C(p,t) > 0$ s.t.\ \begin{equation} \label{eq: ineq1} \| T_g f_a\|_p^t \ge C \|g \circ \sigma_a - g(a)\|_t^t \end{equation} for all $a \in \mathbb{D},$ where $\sigma_a(z) = (a-z)/(1-\bar{a}z).$ For each $0 < q < \infty,$ it holds that $$\textup{dist}(g, VMOA) \simeq \limsup_{|a| \to 1}\|g \circ \sigma_a - g(a) \|_q,$$ where the constants of comparison depend on $q$, see, e.g.\ Lemma 3 in \cite{LMN}. Thus by choosing $t = p/4$ in \eqref{eq: ineq1} and using Lemma 3 in \cite{LMN} we get $$\limsup_{|a| \to 1}\|T_g f_a\|_p \ge C'' \limsup_{|a| \to 1}\|g \circ \sigma_a - g(a)\|_{p/4} \simeq \textup{dist}(g, VMOA) > 0,$$ since $g \in BMOA \setminus VMOA.$ Thus there exists a constant $c > 0$ s.t.\ $$\limsup_{|a| \to 1}\|T_g f_a\|_p = c.$$ In particular, by the compactness of $\overline{\mathbb{D}}$ there exists a sequence $(a_k) \subset \mathbb{D}$ s.t.\ $0 < |a_1| < |a_2| < \ldots < 1$ and $a_k \to \omega \in \mathbb{T}$ so that $$\lim_{k \to \infty}\|T_g f_k\|_p = c.$$ \end{proof} The next lemma is a generalization of Lemma 5 in \cite{LMN} for $1 \le p < \infty$. \begin{localization} \label{localization} Let $a \in \mathbb{D}, \, 1 \le p < \infty, \, g \in BMOA$ and $$f_a(z) = \frac{(1-|a|)^{1/p}}{(1-\bar{a}z)^{2/p}}, \quad z \in \mathbb{D}.$$ Define $$I(a) = \left\{e^{i\theta}: |\theta - \textup{arg}(a)| < (1 - |a|)^{\frac{1}{2(2+p)}}\right\}.$$ Then $$\lim_{|a| \to 1}\int_{\mathbb{T} \setminus I(a)} |T_gf_a|^p dm = 0.$$ \end{localization} \begin{proof} By rotation invariance, we may assume that $a \in (0,1).$ Also, $g(0) = 0.$ It holds that $|1 - a s e^{i\theta}| \ge C |\theta|$ for all $0 \le s < 1$ and $|\theta| \le \pi,$ where $C > 0$ is an absolute constant. Thus for all $0 \le s < 1$ and $(1 - a)^{\frac{1}{2(2+p)}} \le |\theta| \le \pi$ we have $$|f_a(se^{i\theta})|^p \lesssim \frac{1-a}{|1 - a s e^{i\theta}|^2} \lesssim \frac{1-a}{|\theta|^2} \le (1-a)^{1 - \frac{1}{2+p}}$$ and $$|f_a'(a s e^{i\theta})|^p \lesssim \frac{1 - a}{|1 - a s e^{i\theta}|^{2+p}} \lesssim \frac{1-a}{|\theta|^{2+p}} \le (1-a)^{1/2}.$$ For $\zeta \in \mathbb{T} \setminus I(a)$, we obtain \begin{eqnarray*} |T_g f_a (\zeta)|^p &=& \left|\int_0^1 f_a(s \zeta)g'(s \zeta) \zeta ds \right|^p \\ &\le& 2^p \left( |f_a(\zeta)g(\zeta)|^p + \left(\int_0^1 |f_a'(s\zeta)g(s\zeta)|ds\right)^p \right) \\ &\lesssim& (1-a)^{1-\frac{1}{2+p}}|g(\zeta)|^p + (1 - a)^{1/2} \left(\int_0^1 |g(s\zeta)|ds\right)^p. \end{eqnarray*} Since $g \in BMOA \subset \mathcal{B},$ it holds that $|g(z)| \lesssim \log\left( \frac{1}{1 - |z|}\right)$ and consequently $\int_0^1 |g(s\zeta)|ds \lesssim \|g \|_{*},$ where $C > 0$ is an absolute constant and $\|g \|_{*} = \sup_{a \in \mathbb{D}}\|g \circ \sigma_a - g(a) \|_2.$ Therefore $$\int_{\mathbb{T} \setminus I(a)} |T_gf_a|^p dm \lesssim (1 - a)^{1 - \frac{1}{2+p}} \|g\|_p^p + (1 - a)^{1/2} \|g\|_{*}^p \to 0$$ as $a \to 1,$ where $\|g\|_p \le \sup_{a \in \mathbb{D}}\|g \circ \sigma_a - g(a) \|_p \simeq \|g \|_{*}$. 
\end{proof} Using Lemma \ref{localization}, we prove the following localization result for the images $T_gf_a, \, a \in \mathbb{D},$ of the test functions $f_a$ (cf. Lemma \ref{masslemma}). \begin{localization2} \label{localization2} Let $(a_k) \subset \mathbb{D}$ be s.t.\ $0 < |a_1| < |a_2| < \ldots < 1$ and $a_k \to \omega \in \mathbb{T}.$ Define $$A_\varepsilon = \{e^{i\theta}: |\theta - \textup{arg}(\omega)| < \varepsilon\}$$ for each $\varepsilon > 0$ and $f_k = f_{a_k}$. Then \begin{align*} &\textrm{(i) $\lim_{k \to \infty}\int_{\mathbb{T}\setminus A_\varepsilon}|T_g f_k|^p dm = 0$ for every $\varepsilon > 0.$}& \\ &\textrm{(ii) If $k$ is fixed, then $\lim_{\varepsilon \to 0}\int_{A_\varepsilon}|T_g f_k|^p dm = 0.$}& \end{align*} \end{localization2} \begin{proof} \textbf{(i)} Let $\varepsilon > 0$. Since $a_k \to \omega,$ we have $|\textup{arg}(a_k) - \textup{arg}(\omega)| < \frac{\varepsilon}{2}$ and $(1 - |a_k|)^{\frac{1}{2(2+p)}} < \frac{\varepsilon}{2}$ for $k$ large enough. Consequently we have $$I(a_k) = \left\{e^{i\theta}: |\theta - \textup{arg}(a_k)| < (1 - |a_k|)^{\frac{1}{2(2+p)}}\right\}\subset A_\varepsilon$$ for $k$ large enough. Thus by Lemma \ref{localization} $$\int_{\mathbb{T}\setminus A_\varepsilon}|T_g f_k|^p dm \le \int_{\mathbb{T} \setminus I(a_k)} |T_g f_k|^p dm \to 0$$ as $k \to \infty.$ \textbf{(ii)} If $k$ is fixed, then it follows from the absolute continuity of a measure $B \mapsto \int_B |T_g f_k|^p dm$ that $\int_{A_\varepsilon} |T_g f_k|^p dm \to 0$ as $\varepsilon \to 0.$ \end{proof} As a final step before the proof of Theorem \ref{T_gstrictsing}, we construct an isomorphism $U\colon \ell^p \to H^p$ using a non-compact $T_g$ and test functions. \begin{isomorphism} \label{isomorphism} Let $g \in BMOA \setminus VMOA, \, 1 \le p < \infty$ and $(a_n) \subset \mathbb{D}$ be the sequence from Proposition \ref{normlimit}. Then there exists a subsequence $(b_n) \subset (a_n)$ s.t.\ the mapping $$U\colon \ell^p \to H^p,\, U(\alpha)=\sum_{n = 1}^\infty \alpha_n T_g f_n,$$ where $\alpha = (\alpha_n) \in \ell^p$ and $f_n = f_{b_n}$, is an isomorphism onto its image. \end{isomorphism} \begin{proof} We need to show that $\|U(\alpha)\|_p \simeq \|\alpha\|_{\ell^p}$ for all $\alpha=(\alpha_n) \in \ell^p.$ By Proposition \ref{bddmap} there exists a subsequence $(c_n) \subset (a_n)$ inducing a bounded operator $$S\colon \ell^p \to H^p, \, S(\alpha) = \sum_{n = 1}^\infty \alpha_n f_{c_n}$$ and for any subsequence $(b_n)$ of $(c_n)$ the operator $$V\colon \ell^p \to H^p, \, V(\alpha) = \sum_{n = 1}^\infty \alpha_n f_{b_n}$$ is bounded. Therefore the upperbound $\lesssim$ follows from Proposition \ref{bddmap} and the boundedness of the operator $T_g$: \begin{eqnarray} \label{eq: bddness} \left\|T_g\left(\sum_{n = 1}^\infty \alpha_n f_{b_n}\right)\right\|_p &\le& \|T_g\|_{H^p \to H^p} \left\|\sum_{n = 1}^\infty \alpha_n f_{b_n} \right\|_p = \|T_g\|_{H^p \to H^p} \|V(\alpha)\| \nonumber \\ &\lesssim& \|T_g\|_{H^p \to H^p} \|\alpha\|_{\ell^p}, \end{eqnarray} where $(b_n)$ is any subsequence of $(c_n).$ Before proving the lowerbound $\gtrsim,$ we make some preparations. Since $(c_n) \subset (a_n),$ it holds that $c_n \to \omega \in \mathbb{T}$ and there exists a constant $c > 0$ s.t.\ $\lim_{n \to \infty}\|T_g f_{c_n}\|_p = c$ by Proposition \ref{normlimit}. 
For each $\varepsilon > 0,$ we define a set $A_\varepsilon = \{e^{i\theta}: |\theta- \textup{arg}(\omega)| < \varepsilon\}.$ Also, we define sequences $(\varepsilon_n)$ and $(b_n) \subset (c_n)$ inductively using Proposition \ref{normlimit} and Lemma \ref{localization2} in the following way: We choose positive numbers $\varepsilon_n$ and $b_n \in (c_n)$ with $$\varepsilon_1 > \varepsilon_2 > \ldots > 0$$ s.t.\ the following conditions hold \begin{eqnarray*} &\textup{(i)}& \left(\int_{A_n} |T_g f_j|^p dm \right)^{1/p} < 4^{-n} \delta c, \quad j = 1,\ldots, n - 1; \\ &\textup{(ii)}& \left(\int_{\mathbb{T} \setminus A_n} |T_g f_n|^p dm \right)^{1/p} < 4^{-n} \delta c; \\ &\textup{(iii)}& \frac{c}{2} \le \left(\int_{A_n} |T_g f_n|^p dm \right)^{1/p} \le 2 c \end{eqnarray*} for every $n \in \mathbb{N},$ where $A_n = A_{\varepsilon_n}, \, f_n = f_{b_n}$ and $\delta > 0$ is a constant whose value is determined later. Now we are ready to prove the lower estimate $\|U\alpha\|_p \ge C \|\alpha\|_{\ell^p},$ where the constant $C > 0$ may depend on $p.$ \begin{eqnarray*} &&\|U\alpha\|_p^p = \int_{\mathbb{T}}\left|\sum_{j=1}^\infty \alpha_j T_g f_j \right|^p dm = \sum_{n = 1}^\infty \int_{A_n \setminus A_{n+1}}\left|\sum_{j=1}^\infty \alpha_j T_g f_j \right|^p dm \\ &\ge& \sum_{n = 1}^\infty \left(|\alpha_n| \left(\int_{A_n \setminus A_{n+1}}| T_g f_n|^p dm\right)^{1/p} -\sum_{j \ne n}|\alpha_j|\left(\int_{A_n \setminus A_{n+1}}| T_g f_j|^p dm\right)^{1/p} \right)^p, \end{eqnarray*} where $$\left( \int_{A_n \setminus A_{n+1}}|T_g f_j|^p dm \right)^{1/p} \le \left( \int_{A_n }|T_g f_j|^p dm \right)^{1/p} < 4^{-n}\delta c$$ for $j < n$ by condition (i) and $$\left( \int_{A_n \setminus A_{n+1}}|T_g f_j|^p dm \right)^{1/p} \le \left( \int_{\mathbb{T} \setminus A_j }|T_g f_j|^p dm \right)^{1/p} < 4^{-j}\delta c$$ for $j > n$ by condition (ii). Thus it always holds that $$\left(\int_{A_n \setminus A_{n+1}}|T_g f_j|^p dm \right)^{1/p} < 2^{-n-j}\delta c$$ for $j \ne n.$ Consequently, we can estimate \begin{eqnarray*} \|U\alpha\|_p^p &\ge& \sum_{n = 1}^\infty \left(|\alpha_n| \left(\int_{A_n \setminus A_{n+1}}| T_g f_n|^p dm\right)^{1/p} - \sum_{j = 1}^\infty |\alpha_j| 2^{-n-j}\delta c \right)^p \\ &\ge& \sum_{n = 1}^\infty \left(|\alpha_n| \left(\frac{c}{2}- 4^{-n-1}\delta c \right) - \|\alpha\|_{\ell^p} 2^{-n} \delta c \right)^p \\ &\ge& \sum_{n = 1}^\infty \left( \frac{c}{2}|\alpha_n| - \|\alpha\|_{\ell^p}(4^{-n-1} + 2^{-n})\delta c \right)^p \\ &\ge& \sum_{n = 1}^\infty \left( \frac{c}{2}|\alpha_n| - 2^{-n+1} \delta c \|\alpha\|_{\ell^p} \right)^p \\ &\ge& \sum_{n = 1}^\infty \left( 2^{-p} \left(\frac{c}{2}\right)^p|\alpha_n|^p - 2^{(-n+1)p}\delta^p c^p \|\alpha\|_{\ell^p}^p \right) \\ &=& 2^{-2p} c^p \|\alpha\|_{\ell^p}^p - 2^p \delta^p c^p \left(\sum_{n = 1}^\infty 2^{-np}\right) \|\alpha\|_{\ell^p}^p \\ &\ge& (2^{-2p}-2\delta^p)c^p \|\alpha\|_{\ell^p}^p = 2^{-2p-1} c^p \|\alpha\|_{\ell^p}^p, \end{eqnarray*} when we choose $\delta > 0$ s.t.\ $2^{-2p}-2\delta^p = 2^{-2p-1}$, i.e.\ $\delta = 2^{-2-2/p}.$ Thus the mapping $U$ is bounded from below and by \eqref{eq: bddness} it is also bounded. Therefore we have established that $$\|U(\alpha)\|_p \simeq \|\alpha\|_{\ell^p}$$ for all $\alpha \in \ell^p$ and consequently the mapping $U$ is an isomorphism onto its image. \end{proof} Now we are ready to prove our main result. 
\begin{proof}[Proof of Theorem \ref{T_gstrictsing}] By Theorem \ref{isomorphism} and Proposition \ref{bddmap}, we can choose a sequence $(b_n) \subset \mathbb{D}$ that induces an isomorphism $$U\colon \ell^p \to H^p,\, U(\alpha)=\sum_{n = 1}^\infty \alpha_n T_g f_n$$ onto its image and a bounded operator $$V \colon \ell^p \to H^p, \, V(\alpha) = \sum_{n = 1}^\infty \alpha_n f_n,$$ where $f_n = f_{b_n}$ and $\alpha = (\alpha_n) \in \ell^p.$ Define $M = \overline{\textup{span}\{f_n\}},$ where the closure is taken in $H^p.$ It is enough to show that the restriction $$T_g|_M\colon M \to T_g(M)$$ is bounded from below and $M$ is isomorphic to $\ell^p$. Let $f \in M.$ Then $f = \sum_{n = 1}^\infty \alpha_n f_n$ for some $\alpha = (\alpha_n) \in \ell^p$ and it follows from the fact that $U$ is bounded from below and the boundedness of $V$ that \begin{eqnarray*} \|T_gf\|_p &=& \left\|\sum_{n = 1}^\infty \alpha_n T_gf_n \right\|_p = \|U(\alpha)\|_p \gtrsim \|\alpha\|_{\ell^p} \gtrsim \|V(\alpha)\|_p \\ &=& \|\sum_{n = 1}^\infty \alpha_n f_n\|_p = \|f\|_p. \end{eqnarray*} Since the operator $T_g|_M$ is also bounded, it is an isomorphism. Moreover, it holds that $\ell^p$ is isomorphic to $ U(\ell^p) = T_g(M)$, which is isomorphic to $M.$ Consequently the operator $T_g$ fixes an isomorphic copy of $\ell^p,$ namely the closed subspace $M$. Hence the operator $T_g$ is not strictly singular. \end{proof} \section{Some comments} It follows from an idea of Le{\u\i}bov \cite{leibov} that there exist isomorphic copies of the space $c_0$ of null sequences inside $VMOA$. Therefore the strict singularity of $T_g$ on $BMOA$ or on $VMOA$ is equivalent to the compactness of $T_g$ on the same space. The sketch of the proof is the following: First, we give a reformulation of Le{\u\i}bov's result, which is taken from \cite{LNST}. \begin{lemma}[{\cite[Proposition 6]{LNST}}]\label{lemma_leibov} Let $(f_n)$ be a sequence in $VMOA$ such that $\|f_n\|_* \simeq 1$ and $\|f_n\|_2\to 0$ as $n\to \infty$. Then there is a subsequence $(f_{n_j})$ which is equivalent to the natural basis of $c_0$; that is, the map $\iota\colon (\lambda_j) \to \sum_j \lambda_j f_{n_j}$ is an isomorphism from $c_0$ into $VMOA$. \end{lemma} For each arc $I \subset \mathbb{T},$ we write $|I|$ to denote the length of $I$ and define Carleson windows $$S(I) = \{re^{it}: 1 - |I| \le r < 1, t \in I\}$$ and their corresponding base points $u = (1-|I|)e^{i\theta},$ where $\theta$ is the mid-point of $I$. We also consider the ``logarithmic $BMOA$'' space $$LMOA = \left\{ g \in H(\mathbb{D}): \sup_{a \in \mathbb{D}}\lambda(a)\|g \circ \sigma_a - g(a)\|_2 < \infty\right\},$$ where $\lambda(a) = \log\left(\frac{2}{1-|a|}\right).$ The condition $g \in LMOA$ characterizes the boundedness of $T_g$ on $BMOA$ and simultaneously on $VMOA$, see \cite{SZ}.
We consider test functions $f_n(z) = \log(1-\bar{u_n}z),$ where $u_n \in \mathbb{D}$ is the base point of the Carleson window $S(I_n)$ and $(I_n)$ is a sequence of arcs of $\mathbb{T}$ s.t.\ $|I_n| \to 0.$ Define $h_n = f_{n+1}-f_n.$ By the proof of Theorem $2$ in \cite{LMN}, it holds that $\|h_n\|_* \simeq 1$ and $\|h_n\|_2 \to 0,$ as $n \to \infty.$ By Lemma \ref{lemma_leibov}, we can pick a subsequence $(h_{n_k}) \subset (h_n)$ which is equivalent to the standard basis $\{e_k\}$ of $c_0.$ If $T_g$ is non-compact on $VMOA,$ by passing to a subsequence if necessary, we can assume that $\|T_gh_{n_k}\|_* > c > 0$ for some constant $c$ for all $k.$ Since $g \in LMOA \subset BMOA,$ the operator $T_g$ is bounded on $H^2$ and consequently $\|T_gh_{n_k}\|_2 \to 0,$ as $k \to \infty.$ Now we apply Lemma \ref{lemma_leibov} again to obtain (by passing to a subsequence, if needed) that $\{T_gh_{n_k}\}$ is equivalent to the natural basis of $c_0.$ Hence $T_g|_M,$ where $M = \overline{\textup{span}\{h_{n_k}\}},$ is an isomorphism onto its image and $T_g$ is not strictly singular on $VMOA$ (or on $BMOA$). \textbf{Remark.} In Bergman spaces $A^p, \, 1 \le p < \infty$, which are isomorphic to $\ell^p$, see e.g.\ \cite[Chapter 2.A, Theorem 11]{W}, the strict singularity of the operator $T_g$ coincides with the compactness, since all strictly singular operators on $\ell^p$ are compact. \end{document}
math
29,891
\begin{document} \twocolumn[ \icmltitle{Solving inverse problems with deep neural networks driven by sparse signal decomposition in a physics-based dictionary} \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Gaetan Rensonnet}{icteam} \icmlauthor{Louise Adam}{icteam} \icmlauthor{Benoit Macq}{icteam} \end{icmlauthorlist} \icmlaffiliation{icteam}{ICTEAM Institute, Universit\'{e} catholique de Louvain, Louvain-la-Neuve, Belgium} \icmlcorrespondingauthor{Gaetan Rensonnet}{[email protected]} \icmlkeywords{Machine Learning, inverse models, interpretability, deep learning, fingerprinting, dictionary, non-negative linear least squares, multi-layer perceptron, brain, white matter, diffusion MRI, expert knowledge} \vskip 0.3in ] \printAffiliationsAndNotice{} \begin{abstract} Deep neural networks (DNN) have an impressive ability to invert very complex models, i.e. to learn the generative parameters from a model's output. Once trained, the forward pass of a DNN is often much faster than traditional, optimization-based methods used to solve inverse problems. This is however done at the cost of lower interpretability, a fundamental limitation in most medical applications. We propose an approach for solving general inverse problems which combines the efficiency of DNN and the interpretability of traditional analytical methods. The measurements are first projected onto a dense dictionary of model-based responses. The resulting sparse representation is then fed to a DNN with an architecture driven by the problem's physics for fast parameter learning. Our method can handle generative forward models that are costly to evaluate and exhibits similar performance in accuracy and computation time as a fully-learned DNN, while maintaining high interpretability and being easier to train. Concrete results are shown on an example of model-based brain parameter estimation from magnetic resonance imaging (MRI). \end{abstract} \section{Introduction} \label{sec:introduction} Many engineering problems in medicine can be cast as inverse problems in which a vector of latent biophysical characteristics $\gv{\omega}$ (e.g., biomarkers) are estimated from a vector of $M$ noisy measurements or signals $\v{y}\in \mathbb{R}^M$, each measurement being performed with experimentally-controlled acquisition parameters $\v{p}_i$ ($i=1,\dots,M$) defining the experimental protocol $\mathcal{P} \coloneqq \left\{\v{p}_i\right\}_{i=1}^{M}$. We consider the case in which a generative model of the signal $S\left(\gv{\omega};\v{p}\right)$ is available but hard to invert and costly to evaluate. This occurs for instance when $S$ is very complex or non-differentiable, which includes most cases of $S$ being a numerical simulation rather than a closed-form model. Without loss of generality, the signal $S$ is assumed to arise from $K$ independent contributions weighted by weights $\nu_1,\dots,\nu_K$ \begin{equation} S\left(\gv{\omega};\mathcal{P}\right) = \sum\limits_{k=1}^K \nu_k S_k\left(\gv{\omega}_k;\mathcal{P}\right), \label{eq:linearity_S} \end{equation} where the $S_k$ are available and with $\gv{\omega}=\left[\gv{\omega}_1^T \dots \gv{\omega}_K^T \right]^T$. When no linear separation is possible or known then $K=1$. 
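As a schematic illustration of this multi-contribution structure (a minimal sketch in Python, not part of the accompanying code base; all names below are placeholders), Eq.~\eqref{eq:linearity_S} amounts to a generic forward model in which each contribution $S_k$ is an arbitrary callable:
\begin{verbatim}
import numpy as np

def forward_model(weights, omegas, protocol, contributions):
    """Evaluate S(omega; P) = sum_k nu_k * S_k(omega_k; P)  (Eq. 1).

    weights       : iterable of K non-negative weights nu_k
    omegas        : iterable of K parameter vectors omega_k
    protocol      : acquisition parameters P shared by all contributions
    contributions : iterable of K callables S_k(omega_k, protocol) -> array of length M
    With K = 1 this reduces to the non-separable case.
    """
    return sum(nu * S_k(om, protocol)
               for nu, om, S_k in zip(weights, omegas, contributions))
\end{verbatim}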
Because deep neural networks (DNNs) have shown a tremendous ability to learn complex input-output mappings \cite{hornik1990universal,fakoor2013using, guo2016deep,lucas2018using,zhao2019object}, it is tempting to resort to a DNN to learn $\gv{\omega}$ directly from the data $\v{y}$ or from synthetic samples $S\left(\gv{\omega};\mathcal{P}\right)$. This is arguably the least interpretable approach and is referred to as the fully-learned, black-box approach. Little insight into the prediction process is available, which may deter medical professionals from adopting it. In addition, changes to the input such as the size $M$ (e.g., missing or corrupt measurements) or a modified experimental protocol $\mathcal{P}$ (e.g., hardware update) likely require a whole new network to be trained. At the other end of the spectrum, the traditional, physics-based approach in that case is to perform dictionary fingerprinting. As described in Section~\ref{sec:fingerprinting}, this roughly consists in pre-simulating many ($\Ntot{}$) possible signals $S$ and finding the combination of $K\ll \Ntot{}$ responses that best explains the measured data $\v{y}$. This method is explainable and interpretable but it is essentially a brute-force discrete search. It is thus computationally intensive at inference time (which may be problematic for real-time applications) and does not scale well with problem size. We propose an intermediate method, referred to as a hybrid approach, which aims to capture the best of both worlds. The measured signal $\v{y}$ is projected onto a basis of $\Ntot{}$ fingerprints, pre-simulated using the biophysical model $S$. The naturally sparse representation (in theory, $K\ll \Ntot{}$ weights are non zero) is then fed to a neural network with an architecture driven by the physics of the process for final prediction of the latent biophysical parameters $\gv{\omega}$. We describe the theory for general applications and provide an illustrative example of the estimation of brain tissue properties from diffusion-weighted magnetic resonance imaging (DW-MRI) data, where the inverse problem is learned on simulated data. We compare the three methods (dictionary fingerprinting, hybrid and fully-learned) in terms of efficiency and accuracy on the estimated brain properties. The explainability of the hybrid and fully-learned approaches are assessed visually by projecting intermediate activations in a 2D space. \section{Related work} Despite their success at solving many (underdetermined) inverse problems~\cite{lucas2018using,bai2020deep} and despite efforts to explain model predictions in healthcare~\cite{elshawi2020interpretability,stiglic2020interpretability}, DNNs still offer little accountability and are mathematically prone to major instabilities~\cite{gottschling2020troublesome}. There has thus been a growing trend toward incorporating engineering and physical knowledge into deep learning frameworks~\cite{lucas2018using}. One way to do so is to unfold well-known, often iterative algorithms in a DNN architecture~\cite{ye2017tissue,ye2019deep}, or to produce parameter updates based on signal updates from the forward model $S$~\cite{ma2020deep}. However those approaches require many evaluations of the forward model $S$, which is not always possible or affordable. The latent variables $\gv{\omega}$ can be difficult to access. This is the case in our example problem where microstructural properties of the brain cannot be measured \textit{in vivo}. 
In such cases, DNNs can be trained on synthetic data $S\left(\gv{\omega};\mathcal{P}\right)$ and then applied to experimental measurements $\v{y}$, which is the approach taken with our illustrative example. Another option is via a self-supervised framework wherein the inverse mapping $S^{-1}$ is learned such that $S\left(S^{-1}\left(\v{y}\right)\right) \approx \v{y}$~\cite{senouf2019self}. However, the latter again requires many evaluations of $S$. Our physics-based fingerprinting approach is based on the framework by~\cite{ma2013magnetic} for magnetic resonance fingerprinting (MRF) extended to the multi-contribution case~\cite{rensonnet2018assessing,rensonnet2019towards}. As dictionary sizes increased and inference via fingerprinting became slower, DNN methods have been proposed to accelerate the process~\cite{oksuz2019magnetic,golbabaee2019geometry}. However, these do not consider the case of multiple linear signal contributions as in Eq.~\eqref{eq:linearity_S}. In the field of DW-MRI (our illustrative example), training of DNNs using high-quality data $\v{y}$, i.e. data acquired with a rich protocol $\mathcal{P}$, has been suggested in several works~\cite{golkov2016q,ye2017tissue,schwab2018joint,fang2019deep,ye2020improved}. The main limitation is that such enriched data may not always be available. It is worth noting that \cite{ye2020improved} also proposed Lasso bootstrap to quantify prediction uncertainty, as a way to strengthen interpretability. \section{Methods} \label{sec:methods} \subsection{Running example: estimation of white matter microstructure from DW-MRI} \label{sec:example} In DW-MRI, $M$ volumes of the brain are acquired by applying magnetic gradients with different characteristics $\v{p}$ (intensity, duration, profile). In each voxel of the brain white matter, a measurement vector $\v{y}\in \mathbb{R}^M$ is obtained by compiling the $M$ DW-MRI values at this given voxel location. As illustrated in Figure~\ref{fig:microstructure}, the white matter is mainly composed of long, thin fibers known as \emph{axons} which tend to run parallel to other axons, forming bundles called \emph{fascicles} or \emph{populations}. In a voxel at current clinical resolution, 2 to 3 populations of axons may intersect \cite{jeurissen2013investigating}. In this paper a value of $K=2$ populations is assumed for simplicity. We use a simple model of a population of parallel axons (Figure~\ref{fig:microstructure}) parameterized by a main orientation $\v{u}$, an axon radius index $r$ (of the order of $\SI{1}{\micro\meter}$) and an axon fiber density $f$ (in $[0, 0.9]$), which are properties involved in a number of neurological and psychiatric disorders \cite{chalmers2005contributors, mito2018fibre, andica2020neurocognitive}. Assuming the signal contribution of each population $k$ to be independent~\cite{rensonnet2018assessing}, the generative signal model $S$ in a voxel is \begin{equation} S\left(\gv{\omega};\mathcal{P}\right) = \phi_{\textrm{SNR}}\left(\sum\limits_{k=1}^K \nu_k S_{\textrm{MC}}\underbrace{(\v{u}_k, r_k, f_k}_{\gv{\omega}_k};\mathcal{P})\right), \label{eq:signal_model_dwmri} \end{equation} where the signal of each population is modeled by an accurate but computationally-intensive Monte Carlo simulation $S_{\textrm{MC}}$ \cite{hall2009convergence,rensonnet2015hybrid}. The weights $\nu_k$ can be interpreted as the fraction of voxel volume occupied by each population. 
The model is stochastic with $\phi$ modeling corruption by Rician noise \cite{gudbjartsson1995rician} at a given signal-to-noise ratio (SNR). In most clinical settings, SNR is high enough for Rician noise to be well approximated by Gaussian noise. A least-squares data fidelity term $\left\|\v{y} - S\left(\gv{\omega};\mathcal{P}\right) \right\|_2^2 $ is thus popular in brain microstructure estimation as it corresponds to a maximum likelihood solution. The acquisition protocol $\mathcal{P}$ is the MGH-USC Adult Diffusion protocol of the Human Connectome Project (HCP), which comprises $M=552$ measurements \cite{setsompop2013pushing}. \begin{figure} \caption{\textbf{Running example : estimation of white matter microstructure.} \label{fig:microstructure} \end{figure} \subsection{Physics-based dictionary fingerprinting} \label{sec:fingerprinting} Fingerprinting is a general approach essentially consisting in a sparse look-up in a vast precomputed dictionary containing all possible physical scenarios. A least squares data fidelity term is assumed here~\cite{bai2020deep} although the framework could be extended to other objective functions. For each contribution $k$ in Eq.~\eqref{eq:linearity_S}, a sub-dictionary $\v{C}^k \in\mathbb{R}^{M\times N_k}$ is presimulated once and for all by calling the known $S_k$, to form the total dictionary $\mathcal{D}\coloneqq \left[\v{C}^1 \dots \v{C}^K\right] \in \mathbb{R}^{M\times \Ntot{}}$, where $\Ntot{}\coloneqq \sum_{k=1}^K N_k$. Each $\v{C}^k$ thus contains $N_k$ atoms or \emph{fingerprints} $\v{A}^k_{j_k}\coloneqq S_k\left(\gv{\omega}_{kj_k};\mathcal{P}\right) \in \mathbb{R}^M $, with $j_k=1,\dots,N_k$, corresponding to a sampling of $N_k$ points in the space of biophysical parameters given a known experimental protocol $\mathcal{P}$. The dictionary look-up in this multi-contribution setting can be mathematically stated as \begin{equation} \begin{array}{lll} \hat{\v{w}}= & \argmin\limits_{\v{w}\geq 0} & \left\|\v{y}-\begin{bmatrix}\v{C}^1 \dots \v{C}^K\end{bmatrix}\cdot \begin{bmatrix}\v{w}_1\\ \vdots \\ \v{w}_K\end{bmatrix} \right\|_2^2\\ & & \\ & \text{subject to} & \left| \mathbf{w}_k \right|_{0}=1, \quad k=1,\dots,K, \\ \end{array} \label{eq:sparse_optimization} \end{equation} where the sparsity constraints $\left|\cdot\right|_0 $ on the sub-vectors $\v{w}_k$ guarantee that only one fingerprint $\v{A}^k_{j_k}$ per sub-dictionary $\v{C}^k$ contributes to the reconstructed signal. No additional tunable regularization of the solution is needed as all the constraints (e.g., lower and upper bounds on latent parameters $\gv{\omega}$) are included in the dictionary $\mathcal{D}$. Equation~\eqref{eq:sparse_optimization} is solved exactly by exhaustive search, i.e. by selecting the optimal solution out of $\prod_{k=1}^K N_k$ independent non-negative linear least squares (NNLS) sub-problems of $K$ variables each \begin{equation} \begin{split} (\hat{j}_1,\dots, \hat{j}_K)=& \\ \argmin\limits_{1\leq j_k \leq N_k}\quad & \min\limits_{\v{w}\geq 0} \left\| \v{y}-\begin{bmatrix}\v{A}^1_{j_1} \dots \v{A}^K_{j_K}\end{bmatrix}\cdot \begin{bmatrix}w_1\\ \vdots \\ w_K \end{bmatrix} \right\|_2^2. \end{split} \label{eq:combinatorial_optimization} \end{equation} Each sub-problem is convex and is solved exactly by an efficient in-house implementation\footnote{\texttt{solve\_exhaustive\_posweights} function of the Microstructure Fingerprinting library available at \url{https://github.com/rensonnetg/microstructure_fingerprinting}. 
Written in Python 3 and optimized with Numba (\url{http://numba.pydata.org/}).} of the active-set algorithm~\citep[][chap. 23, p. 161]{lawson1995solving}, which empirically runs in $\mathcal{O}\left(K\right)$ time. The optimal biophysical parameters $\hat{\v{\omega}}_k$ are finally obtained as those of the optimal fingerprint $\hat{j}_k$ in each $\v{C}^k$ and the weights $\hat{\nu}_k$ are estimated from the optimal $\hat{w}_k$. The main limitation of this approach is its computational runtime complexity $\mathcal{O}\left(N_1\dots N_K K\right)$ or $\mathcal{O}\left(N^K K\right)$ if $N_k=N $ $\forall k$. Additionally, the size $N_k$ of each sub-dictionary $\v{C}^k$ also increases rapidly as the number of biophysical parameters, i.e. as the number of entries in $\gv{\omega}_k$ increases. In our running example, $K=2$ sub-dictionaries of size $N=782$ each are pre-computed using Monte Carlo simulations, corresponding to biologically-informed values of axon radius $r$ and density $f$. The orientations $\v{u}_1, \v{u}_2$ are pre-estimated using an external routine \cite{tournier2007robust} and directly included in the dictionary. \subsection{Fully-learned neural network} \label{sec:full_nn} As depicted in Figure~\ref{fig:architectures}, we focus on a multi-layer perceptron (MLP) architecture with rectified linear unit (ReLU) non-linearities (not shown in Figure~\ref{fig:architectures}), which has the benefit of a fast forward pass for inference. In our brain microstructure example, training is performed on simulated data obtained by Eq.~\eqref{eq:signal_model_dwmri} with tissue parameters $\gv{\omega}$ drawn uniformly from biophysically-realistic ranges. A mean squared error (MSE) loss is used to match Eq.~\eqref{eq:sparse_optimization} and \eqref{eq:combinatorial_optimization}. Dropout \cite{srivastava2014dropout} is included in every layer during training and stochastic gradient descent with adaptive gradient (Adagrad) \cite{duchi2011adaptive} is used to estimate parameters. \begin{figure*} \caption{\textbf{Interpretable hybrid vs fully-learned approach.} \label{fig:architectures} \end{figure*} \subsection{Hybrid method} \label{sec:hybrid} The proposed method is a combination of the fingerprinting and the end-to-end DNN approaches presented above. \paragraph{First stage: NNLS.} The same sub-dictionaries $\v{C}^k$ as in Section~\ref{sec:fingerprinting} are computed. Instead of solving Eq.~\eqref{eq:sparse_optimization} with 1-sparsity contraints on the sub-vectors $\v{w}_k$, a single NNLS problem is solved \begin{equation*} \begin{array}{lll} \hat{\v{w}}= & \argmin\limits_{\v{w}\geq 0} & \left\|\v{y}-\mathcal{D}\cdot \v{w} \right\|_2^2,\\ \end{array} \end{equation*} where the optimization completely ignores the structure of the dictionary $\mathcal{D}=\begin{bmatrix}\v{C}^1 \dots \v{C}^K\end{bmatrix} \in \mathbb{R}^{M\times \Ntot{}} $. Unlike in Eq.~\eqref{eq:combinatorial_optimization}, only one reasonably large NNLS problem with $\sum_{k=1}^KN_k$ variables is solved rather than many small problems with $K$ variables. The runtime complexity of this algorithm is $\mathcal{O}\left(\Ntot{}\right)$ in practice and typically yields sparse solutions~\citep{slawski2011sparse}. In fact, if the model $S$ used to generate the dictionary were perfect for the measurements $\v{y}$ we would have $\left|\v{w}\right|_0=K \ll \Ntot{}$. There is however no guarantee that the true latent fingerprints in $\mathcal{D}$ will be among those attributed non-zero weights by the NNLS optimization. 
However, we expect those selected fingerprints with non-zero weights to have underlying properties $\hat{\gv{\omega}}$ close to and informative of the true latent biophysical properties. The second stage of the method can be seen as finding the right combination of these pre-selected features for an accurate final prediction. In our running example, as in the fingerprinting approach, the orientations $\v{u}_1, \v{u}_2$ are pre-estimated using an external routine \cite{tournier2007robust} and directly included in the dictionary. \paragraph{Second stage: DNN.} The output $\hat{\v{w}}$ of the first-stage NNLS estimation is given to the neural network depicted in Figure~\ref{fig:architectures}. Its architecture exploits the multi-contribution nature of the problem: each sub-vector $\hat{\v{w}}_{k}$ of $\hat{\v{w}}$ is first processed by a ``split'' independent multi-layer perceptron (MLP) containing $N_k$ input units (blue in Figure~\ref{fig:architectures}). Splitting the input has the advantage of reducing the number of model parameters while accelerating the learning of compartment-specific features by preventing coadaptation of the model weights~\cite{hinton2012improving}. A joint MLP (green in Figure~\ref{fig:architectures}) performs the final prediction of biophysical parameters. The output of all fully-connected layers is passed through a ReLU activation. Being a feed-forward network, inference is very fast once trained. The overall computational complexity of the hybrid method is therefore dominated by the first NNLS stage. \section{Experimental results} \subsection{Efficiency} Table~\ref{tab:performance} specifies the values obtained during the fine-tuning of the meta parameters of the DNNs used in the fully-learned and hybrid approaches. As predicted by theory, the hybrid method is an order of magnitude faster than the physics-based dictionary fingerprinting for inference. However its NNLS first stage makes it slower than the end-to-end DNN solution. Training times were similar but the fully-learned approach had the advantage of bypassing the precomputation of the dictionary. \begin{table*}[t] \caption{Meta parameters and efficiency of the three methods.} \label{tab:performance} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lccc} \toprule & Fingerprinting & fully-learned & Hybrid \\ \midrule minibatch size & $\times$ & $\num{5000}$ & $\num{5000}$ \\ dropout rate & $\times$ & $\num{0.05}$ & $\num{0.1}$ \\ learning rate & $\times$ & $\num{5e-4}$ & $\num{1.5e-3}$\\ training samples & $\times$ & $\num{4e5}$ & $\num{4e5}$ \\ hidden units & $\times$ & $\num{3600}$ & $\num{1700}$ \\ parameters & $\times$ & $\num{3e6}$ & $\num{4.6e5}$ \\ precomputation time & $\approx \SI{2}{\day}$ & $\times$ & $\approx \SI{2}{\day}$ \\ inference time/voxel & $\SI{1.25}{\second}$ & $\SI{1.02e-4}{\second}$ & $\SI{1.45e-1}{\second} $ \\ inference complexity & $\BigO{N_1\dots N_K K}$ & $\BigO{1}$ & $\BigO{N_1+\dots +N_K}$ \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} \subsection{Accuracy} $\num{15000}$ samples $\left(\gv{\omega},S\left(\gv{\omega};\mathcal{P}\right)\right)$ (never seen by our DNNs during training) were simulated using Eq.\eqref{eq:signal_model_dwmri} of our DW-MRI example, with biophysical parameters in realistic ranges and SNR levels 25, 50 and 100. In order to test the robustness of the approaches to parameter uncertainty, two scenarios were tested for the fingerprinting and hybrid methods. 
First, the population orientations $\v{u}_1,\v{u}_2$ were estimated using \citep{tournier2007robust} and therefore subject to errors. Second, the reference groundtruth orientations were directly included in the dictionary $\mathcal{D}$. This did not apply to the fully-learned model as it learned all parameters from the measurements $\v{y}$ directly. Figure~\ref{fig:accuracy} shows that the hybrid approach (green lines) was more robust to uncertainty on $\v{u}$ than the physics-based fingerprinting method (blue lines), as the mean absolute errors (MAEs) only slightly increased when $\v{u}$ was misestimated (continuous lines). The fully-learned model exhibited the best overall performance. The poorer performance on the estimation of the radius index $r$ (middle row) is a well-known pitfall of DW-MRI as the signal only has limited sensitivity to $r$~\cite{clayden2015microstructural,sepehrband2016towards}. As the reference latent value of $\nu$ increased (x axis of Figure~\ref{fig:accuracy}), estimation of all tissue properties generally improved for all models. \begin{figure} \caption{\textbf{The proposed hybrid method exhibits high accuracy and robustness.} \label{fig:accuracy} \end{figure} \subsection{Explainability} A total of $\num{11730}$ test samples were simulated using Eq.~\eqref{eq:signal_model_dwmri} in realistic ranges for $\nu,r,f$ and crossing angle between $\v{u}_1$ and $\v{u}_2$ at SNR 50. To ease visualization of the results, $r_1=r_2$ and $f_1=f_2$ was enforced. The activations, defined as the vector of output values of a fully-connected layer \emph{after} the ReLU activation, were inspected at different locations of the DNNs used in the hybrid and the fully-learned methods. The projection of these multi-dimensional vectors into a 2-dimensional space was performed using t-SNE embedding~\cite{van2008visualizing} for each of the $\num{11730}$ test samples. The idea of this low-dimensional projection was to conserve the inter-sample distances and topology of the high-dimensional space. As shown in Figure~\ref{fig:activations}, the DNN in the second stage of the hybrid method seemed to learn the $r$ and $f$ properties after just the first layer, while it took the fully-learned model three layers to display a similar sample topology. At the end of the split MLP the $\nu$ parameter (marker shapes in Figure~\ref{fig:activations}) did not seem to have been learned but the different values of $\nu$ were then well separated at the end of the merged final MLP (see architecture in Figure~\ref{fig:architectures}). \begin{figure*} \caption{\textbf{The neural network of the hybrid approach is faster to learn useful signal representations.} \label{fig:activations} \end{figure*} \section{Discussion and conclusion} An approach was proposed combining the efficiency of deep neural networks with the interpretability of traditional optimization for general inverse problems in which evaluation of the forward generative model $S$ is possible but expensive. The potential of the method was exemplified on a problem of white matter microstructure estimation from DW-MRI. The DNN used in the hybrid method receives as input the measurements $\v{y}$ expressed as a linear combination of a few representative fingerprints taken from a physically-realistic basis. Exploiting the multi-contribution nature of the physical process, this input is separated by signal contribution and fed to independent MLP branches (see Figure~\ref{fig:architectures}). 
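For concreteness, the first of these two steps -- and the exhaustive dictionary search of Eq.~\eqref{eq:combinatorial_optimization} that it replaces -- can be contrasted in a few lines of Python. This is an illustrative sketch on toy data, not the Microstructure Fingerprinting implementation; \texttt{scipy.optimize.nnls} stands in for the in-house active-set solver of Section~\ref{sec:fingerprinting}, and all array sizes are placeholders.
\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import nnls

def exhaustive_fingerprinting(y, sub_dicts):
    # Eq. (4): pick exactly one atom per sub-dictionary C^k, weights >= 0.
    # Cost grows as O(N_1 * ... * N_K): one small NNLS problem per combination.
    best_res, best_idx, best_w = np.inf, None, None
    for idx in itertools.product(*(range(C.shape[1]) for C in sub_dicts)):
        A = np.column_stack([C[:, j] for C, j in zip(sub_dicts, idx)])  # M x K
        w, res = nnls(A, y)
        if res < best_res:
            best_res, best_idx, best_w = res, idx, w
    return best_idx, best_w

def hybrid_first_stage(y, sub_dicts):
    # Single NNLS over the concatenated dictionary D, ignoring its structure;
    # the solution is typically sparse and is fed to the second-stage DNN.
    D = np.hstack(sub_dicts)  # M x (N_1 + ... + N_K)
    w, _ = nnls(D, y)
    return w

# Toy usage with random "fingerprints" (real atoms come from Monte Carlo simulations).
rng = np.random.default_rng(0)
M, N1, N2 = 64, 30, 30
C1, C2 = rng.random((M, N1)), rng.random((M, N2))
y = 0.6 * C1[:, 3] + 0.4 * C2[:, 17] + 0.01 * rng.standard_normal(M)
idx, w = exhaustive_fingerprinting(y, [C1, C2])
w_sparse = hybrid_first_stage(y, [C1, C2])
\end{verbatim}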
The overall effect is an expedited learning process, as suggested in Figure~\ref{fig:activations} where latent parameters $\v{\omega}_k$ are learned directly, ``for free'', for each axon population $k$. Consequently, we could consider further reducing the number and/or size of layers in the split MLP part of the network used in our illustrative example. Figure~\ref{fig:activations} also confirms that the weights $\nu$ of the independent signal contributions are only learned in the final, joint MLP of the DNN of the hybrid method. This is because a relative signal weight can only be defined with respect to the \emph{other} weights. The split branches of the network treat individual signal contributions independently, unaware of the other contributions, and thus cannot learn the relative weights $\nu$. This suggests that they more specifically focus on the properties $\v{\omega}_k$ of each contribution. A limitation of the proposed hybrid approach is the need to simulate the dictionary $\mathcal{D}$ which can be costly as the number of latent biophysical parameters increases. With high-dimensional inputs such as medical images, it may also be necessary to consider better adapted convolutional network architectures in the second stage of the hybrid method. General decompositions such as PCA analysis could then be considered~\cite{harkonen2020ganspace}. Our exemplary experiments were performed on simulated data and further investigation is required to demonstrate that the method generalizes to experimental data. Further work should also test whether the hybrid approach enables transfer learning from a protocol $\mathcal{P}$ to a new protocol $\mathcal{P}'$. In our DW-MRI example, this could happen with a scanner update or protocol shortened to accomodate pediatric imaging, for instance. While a new dictionary $\mathcal{D}'$ would be required for the fingerprinting and the hybrid method, the DNN trained in the hybrid approach should in theory still perform well without retraining. This is because fingerprints in $\mathcal{D}'$ would be linked to the same latent biophysical parameters $\gv{\omega}_{kj_k}$ (the number of rows $M'$ of $\mathcal{D}'$ would change, but not the number of columns $\Ntot{}$). Preliminary results (not shown in this paper) were encouraging. The DNN of the fully-learned approach would need to be retrained completely however. Future work will also more closely inspect the inference process in the network via techniques such as LIME~\cite{ribeiro2016should}, guided back-propagation~\cite{selvaraju2017grad}, shapley values and derivatives~\cite{shapley201617,sundararajan2020many} or layer-wise relevance propagation~\cite{bohle2019layer}. This would complement our t-SNE inspection and further reinforce the confidence in the prediction of our approach. We hope that these preliminary findings may contribute to incorporating more domain knowledge in deep learning models and ultimately encourage the more widespread adoption of machine learning solutions in the medical field. \FloatBarrier \end{document}
\begin{document} \title{Submicrometer position control of single trapped neutral atoms } \author{I.~Dotsenko} \author{W.~Alt} \author{M.~Khudaverdyan} \author{S.~Kuhr} \author{D.~Meschede} \author{Y.~Miroshnychenko} \author{D.~Schrader} \author{A.~Rauschenbeutel} \email{[email protected]} \affiliation{Institut f\"ur Angewandte Physik, Universit\"at Bonn, Wegelerstra\ss e 8, D-53115 Bonn, Germany} \date{\today} \pacs{32.80.Lg, 32.80.Pj, 39.25.+k, 42.30.-d} \begin{abstract} We optically detect the positions of single neutral cesium atoms stored in a standing wave dipole trap with a sub-wavelength resolution of 143~nm rms. The distance between two simultaneously trapped atoms is measured with an even higher precision of 36~nm rms. We resolve the discreteness of the interatomic distances due to the 532~nm spatial period of the standing wave potential and infer the exact number of trapping potential wells separating the atoms. Finally, combining an initial position detection with a controlled transport, we place single atoms at a predetermined position along the trap axis to within 300~nm rms. \end{abstract} \maketitle Precision position measurement and localization of atoms is of great interest for numerous applications and has been achieved in and on solids using e.g.~scanning tunnelling microscopy \cite{Binnig99}, atomic force microscopy \cite{Giessibl03}, or electron energy-loss spectroscopy imaging \cite{Suenaga00}. However, if the application requires long coherence times, as is the case in quantum information processing \cite{DiVincenzo95 and Ekert96} or for frequency standards, the atoms should be well isolated from their environment. This situation is realized for ions in ion traps, freely moving neutral atoms, or neutral atoms trapped in optical dipole traps. For the case of ions, positions \cite{Walther01, Blatt02} and distances \cite{Wineland03} have been optically measured and controlled with a sub-optical wavelength precision. Similar precision has been reached in an all-optical position measurement of freely moving atoms \cite{Thomas95}. Dipole traps, operated as optical tweezers, have been used to precisely control the position of individual neutral atoms \cite{Kuhr01,Bergamini04}. To our knowledge, however, a sub-micrometer position or distance measurement has so far not been achieved in this case. Such a control of the relative and absolute position of single trapped neutral atoms, however, is an important prerequisite for cavity quantum electrodynamics as well as cold collision experiments, aiming at the realization of quantum logic operations with neutral atoms. Here, we report on the measurement and control of the position of single neutral atoms stored in a standing wave optical dipole trap (DT). The positions of the atoms are inferred from their fluorescence using high resolution imaging optics in combination with an intensified CCD camera (ICCD). The absolute position of individual atoms along the DT is measured with a precision of 143~nm rms. The relative position of the atoms, i.e.~their separation, is determined more accurately by averaging over many measurements, yielding a relative position uncertainty of 36~nm. Due to this high resolution, we can resolve the discreteness of the distribution of interatomic distances in the standing wave potential even though our DT is formed by a Nd:YAG laser with potential wells separated by only 532~nm. This allows us to determine the exact number of potential wells between simultaneously trapped atoms. 
Finally, using our ``optical conveyor belt'' technique \cite{Kuhr01,Schrader01} we transport individual atoms to a predetermined position along the DT axis with an accuracy of 300~nm, thereby demonstrating a high degree of control of the absolute atom position. \begin{figure} \caption{Determination of the position of a single trapped atom. (a) An atom stored in the standing wave dipole trap is illuminated with an optical molasses (schematic drawing). (b) ICCD image of one atom stored in the DT with an exposure time of 1~s. The observed fluorescence spot corresponds to about 200 detected photons. (c) To determine the position of the atom along the DT axis, the pixel counts are binned in the vertical direction. The solid line corresponds to the line spread function of our imaging optics and reveals the absolute position $x_\mathrm{atom} \label{fig:PositionDetermination} \end{figure} We will only give the essential details of our experimental set-up. A more exhaustive description can e.g.~be found in \cite{Schrader01, Miroshnychenko03}. Our standing wave dipole trap is formed by the interference pattern of two counter-propagating Nd:YAG laser beams ($\lambda =1.064\, \mu$m) with a waist of $2w_0=38\,\mu$m and a total optical power of $2$~W. They produce a trapping potential with a maximal depth of $U_\mathrm{max}/k_\mathrm{B}=0.8$~mK for cesium atoms. Mutually detuning the frequency of the two laser beams moves the trapping potential along the DT axis and thereby transports the atoms \cite{Kuhr01, Schrader01, Miroshnychenko03}. The laser frequency is changed by acousto-optic modulators (AOMs), placed in each beam and driven by a digital dual-frequency synthesizer. The DT is loaded with cold atoms from a high-gradient magneto-optical trap (MOT). We deduce the exact number of atoms in the MOT from their discrete fluorescence levels detected by an avalanche photodiode. The transfer efficiency between the traps is better than 99~\%. In order to obtain fluorescence images of the atoms in the DT, we illuminate them with a near-resonant three-dimensional optical molasses, see Fig.~\ref{fig:PositionDetermination}(a). The molasses counteracts heating through photon scattering, resulting in an atom temperature of about 70~$\mu$K. The storage time in the trap is about $25$~s, limited by background gas collisions. The fluorescence light is collected by a diffraction limited microscope objective \cite{Alt02} and imaged onto the ICCD \cite{Miroshnychenko03}. One detected photon (quantum efficiency approx.~10~\% @ 852~nm) generates on average 350~counts on the CCD chip, and one $13\ \mu\mathrm{m}\times 13\ \mu\mathrm{m}$ CCD pixel corresponds to $0.933\,(\pm0.004)\,\mu$m in the object plane. Figure~\ref{fig:PositionDetermination}(b) shows an ICCD image of a single atom stored in the DT with an exposure time of 1~s. This exposure time is much longer than the timescale of the thermal position fluctuations of the atom inside the trap. Therefore, the vertical width of the fluorescence spot, i.e.~perpendicular to the DT axis, is essentially defined by the spread of the Gaussian thermal wave packet of the atom in the radial direction of the trap. In the axial direction of the DT, the wave packet has a much smaller $1/\sqrt{e}$-halfwidth of only $\Delta x_\mathrm{therm}=35$--50~nm, depending on the depth of the DT. 
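As an aside for the reader, in the approximately harmonic axial wells of the standing wave this thermal width is set by equipartition,
\[
\Delta x_\mathrm{therm}=\sqrt{\frac{k_\mathrm{B}T}{m_\mathrm{Cs}\,\Omega_\mathrm{ax}^{2}}}\,,
\]
where $\Omega_\mathrm{ax}$ denotes the axial oscillation frequency, which increases with the trap depth. At our temperature of about 70~$\mu$K, the quoted range of 35--50~nm corresponds to axial frequencies of a few hundred kHz; the precise value is not needed for the analysis below.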
In addition to these thermal fluctuations, the axial position of the standing wave itself is fluctuating by $\sigma_\mathrm{fluct}(1\, \mathrm{s})= 42\,(\pm 13)$~nm during the 1~s exposure time due to drifts and acoustic vibrations of the optical setup (see below). The horizontal $1/\sqrt{e}$-halfwidth of the detected fluorescence peak, $w_\mathrm{ax}=1.3(\pm$0.15)~$\mu$m, is much larger and is caused by diffraction within the imaging optics and a slight blurring in the intensification process of the ICCD. Compared to the point spread function of our imaging system, $\Delta x_\mathrm{therm}$ and $\sigma_\mathrm{fluct}$ have thus a negligible effect on $w_\mathrm{ax}$. Note, however, that all the atom positions and the distances between atoms given in the following refer to the {\em center} of the Gaussian thermal wave packets of the atoms. The ICCD image is characterized by its intensity distribution $I(\tilde{x}_i,\tilde{y}_j)$, where $\tilde{x}_i$ and $\tilde{y}_j$ denote the horizontal and vertical position of pixel $\{i,j\}$, respectively. In order to determine the horizontal position $\tilde{x}_\mathrm{atom}$ of the fluorescence peak from the ICCD image, we bin $I(\tilde{x}_i,\tilde{y}_j)$ in the vertical direction. Neglecting noise for the moment, this yields $I(\tilde{x}_i)=\sum_{j} I(\tilde{x}_i,\tilde{y}_j)\propto L(\tilde{x}_i-\tilde{x}_\mathrm{atom})$, where $L(\tilde{x})$ is the line spread function (LSF) of our imaging optics. Without distortions, the object coordinate $x_\mathrm{atom}$ and the image coordinate $\tilde{x}_\mathrm{atom}$ are connected by the relation $x_\mathrm{atom}=(\tilde{x}_\mathrm{atom}- \widetilde{\mathcal{O}}_x)/M$, where $\widetilde{\mathcal{O}}_x$ is the image coordinate of the origin and $M$ is the magnification of our imaging optics. In general, $\widetilde{\mathcal{O}}_x$ and $M$ have to be calibrated from independent measurements. In the present case, however, no physical point in space is singled out as an origin and we arbitrarily set $\widetilde{\mathcal{O}}_x\equiv 0$. Our LSF is position-independent and is well described by a sum of two Gaussians with a ratio of 4.4:1 in heights and 1:3.2 in widths, with a slight horizontal offset with respect to each other, see Fig.~\ref{fig:PositionDetermination}(c). We define $\tilde{x}_\mathrm{atom}$ as the position of the maximum of this LSF. In our experiment, it is determined by fitting a simple Gaussian to the fluorescence peak. This procedure has been chosen because it can be carried out in a fast automated way, yielding information about the atom position during the running experimental sequence. Assuming pure shot noise, this allows to determine $x_\mathrm{atom}$ with a statistical error of \begin{equation}\label{eq:deltaX} \Delta x_\mathrm{stat}= 1.44\, w_\mathrm{ax}/\sqrt{N_\mathrm{ph}}, \end{equation} where $N_\mathrm{ph}$ is the number of detected photons and the numerical factor has been determined by a numerical simulation taking into account the experimental LSF and the bin size. In the experiment the value of $N_\mathrm{ph}$ depends on the illumination parameters. Here, $N_\mathrm{ph}=200(\pm 30)$ photons per second per atom, so that $\Delta x_\mathrm{stat}= 130\, (\pm20)$~nm. Our simulation also yields a constant position offset of 42~nm of the fitted center of the Gaussian with respect to the maximum of the LSF, due to the slight asymmetry of our LSF. This offset only leads to a global shift of $\widetilde{\mathcal{O}}_x$ and is irrelevant for our analysis. 
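As an explicit consistency check, inserting the values quoted above into Eq.~(\ref{eq:deltaX}) gives
\[
\Delta x_\mathrm{stat}=\frac{1.44\times 1.3~\mu\mathrm{m}}{\sqrt{200}}\approx 0.13~\mu\mathrm{m},
\]
reproducing the quoted value of $130\,(\pm 20)$~nm; the $N_\mathrm{ph}^{-1/2}$ scaling also shows that even doubling the photon number would only reduce this statistical error to roughly 90~nm.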
In addition to the statistical error, two further sources influence the precision of the position detection: the background noise of our ICCD image and the position fluctuations of the DT. The background in Fig.~\ref{fig:PositionDetermination} originates in equal proportions from stray light and the read-out process of the ICCD, yielding a total offset of $2300(\pm300)$ counts per bin for 1~s exposure time. The noise of $300$ counts per bin introduces an additional uncertainty of $\Delta x_\mathrm{backgr} = 15$~nm to the fitted peak center. The atom position is subject to position fluctuations of the DT, $\sigma_\mathrm{fluct}$. Since $\sigma_\mathrm{fluct}$ cannot be extracted from the ICCD image, we determine it in an independent measurement. For this purpose, we mutually detune the two trap beams and overlap them on a fast photodiode. From the phase of the resulting beat note we infer the phase variations $\phi(t)$ of the standing wave with a 300~kHz bandwidth. The standard deviation of $\phi(t)$, $\sigma_\phi(\tau)$, is directly related to the position fluctuations of the DT during the time interval $\tau$ by $\sigma_\mathrm{fluct}(\tau) = \lambda/2 \cdot\sigma_{\phi}(\tau)/2\pi$. We have found $\sigma_\mathrm{fluct}(1\,\mathrm{s}) =42\,(\pm 13)$~nm. Thus, using the approximation of Gaussian-distributed position fluctuations, which we have checked to be valid to better than 1~\% in our case, the position uncertainty immediately after the 1~s exposure time is given by \begin{equation}\label{eq:deltaX_a} \Delta x_\mathrm{atom}^2(1\,\mathrm{s})= \Delta x_\mathrm{stat}^2+ \Delta x_\mathrm{backgr}^2 + \sigma_\mathrm{fluct}^2(1\,\mathrm{s})\ , \end{equation} yielding $\Delta x_\mathrm{atom}(1\,\mathrm{s})= 140\,(\pm 20)$~nm. Finally, the read-out and the data analysis of the image take an additional 0.5~s during which $x_\mathrm{atom}$ is further subject to position fluctuations of the DT. This increases the variance of the position measurement by $2\sigma_\mathrm{fluct}^2(0.5\,\mathrm{s})$. Thus, we can determine the absolute position of the trapped atom with a precision of $\Delta x_\mathrm{atom}(1.5\,\mathrm{s})= 143\,(\pm 20)$~nm within 1.5~s (1~s exposure time plus 0.5~s read-out and data analysis). Our analysis shows that this precision cannot be significantly increased by extending the exposure time because the benefit of the higher photon statistics for longer times is counteracted by the increase in $\sigma_\mathrm{fluct}(\tau)$. \begin{figure} \caption{Determination of the distance between atoms. (a) Two atoms in the standing wave dipole trap have a separation of $n \lambda/2$ with $n$ integer. (b) After loading the atoms into the DT, we successively take many camera pictures of the same pair of atoms. (c) From each picture (exposure time 1~s) we determine the positions of the atoms and their separation $d$. Averaging over many measurements of $d$ reduces the statistical error and allows us to infer $n$.} \label{fig:AtomSeparationScheme} \end{figure} While for some applications the absolute position of the atoms must be known to the highest possible precision, other experiments, like e.g.~controlled cold collisions \cite{Mandel03}, require a precise knowledge of the separation $d$ between atoms. In the following we will show that in our case this separation can be more precisely determined than the absolute positions of the individual atoms. The reason is that DT fluctuations equally influence all simultaneously trapped atoms and therefore do not affect the separation between them. 
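Evaluating Eq.~(\ref{eq:deltaX_a}) with the numbers given above yields, as a consistency check,
\[
\Delta x_\mathrm{atom}(1\,\mathrm{s})=\sqrt{(130)^2+(15)^2+(42)^2}~\mathrm{nm}\approx 137~\mathrm{nm},
\]
in agreement with the quoted $140\,(\pm 20)$~nm; the statistical term clearly dominates this error budget. The common-mode fluctuation term $\sigma_\mathrm{fluct}$, by contrast, cancels in the separation of two simultaneously trapped atoms.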
Thus, this distance can be averaged over many measurements. Given the precision of the peak detection, the uncertainty of the separation $d$ between two atoms determined from {\em one} picture should be $\Delta d^2=2(\Delta x_\mathrm{stat}^2+\Delta x_\mathrm{backgr}^2)$. Averaging the results from $N_\mathrm{pic}$ images should then reduce the statistical error of the mean value $\bar{d}$ to $\Delta \bar{d}=\Delta d/\sqrt{N_\mathrm{pic}}$. Since the data processing in this case is carried out at a later stage, we use the experimentally established LSF [see Fig.~1(c)] for fitting the fluorescence peaks. For the case of partially overlapping fluorescence spots $(d \lesssim 10~\mu\mathrm{m})$, this method yields more precise results for the two atom positions than fitting a simple Gaussian. For $d\lesssim 4~\mu\mathrm{m}$ the increasing overlap reduces the precision of the position determination. We have therefore restricted our investigations to the case where the atoms are separated by more than 4~$\mu$m. To realize this scheme experimentally, we first load the DT with two atoms, see Fig.~\ref{fig:AtomSeparationScheme}(a). We then typically take $N_\mathrm{pic} =10$ successive camera pictures of the same pair before one of the two atoms leaves the trap. In these experiments, we detect $N_{\mathrm{ph}}=270(\pm 30)$ photons per second per atom. From each picture we determine the distance $d$ between the atoms, see Fig.~\ref{fig:AtomSeparationScheme}~(b) and (c), and then calculate its mean value $\bar{d}$ and its standard deviation $\Delta d$ for each pair. Our measured value of $\Delta d =135\,(\pm30)$~nm is in reasonable agreement with the expected value of $\sqrt{2}\Delta x_\mathrm{stat}=160(\pm 25)$~nm, inferred from Eq.~(\ref{eq:deltaX}), thereby confirming its validity. By averaging the distance over about $N_\mathrm{pic}=10$ images per atom pair, the uncertainty in $\bar{d}$ should therefore be reduced to $\Delta \bar{d}\approx 40$~nm. Now, the separation of trapped atoms equals $d= n\lambda/2$ with $n$ integer, see Fig.~\ref{fig:AtomSeparationScheme}(a). If $d$ can be measured with a precision $\Delta d\ll\lambda/2$, its distribution should therefore reveal the standing wave structure of the DT. Indeed, the $\lambda/2$ period is strikingly apparent in Fig.~\ref{fig:DiskreteDistances}, which shows the cumulative distribution of mean distances $\bar{d}$ between atoms. \begin{figure} \caption{Cumulative distribution of separations between atoms in the dipole trap measured with the scheme presented in Fig.~\ref{fig:AtomSeparationScheme}.} \label{fig:DiskreteDistances} \end{figure} The resolution of our distance measurements can be directly inferred from the finite width of the steps observed in Fig.~\ref{fig:DiskreteDistances}, yielding $\Delta \bar{d}=36\,(\pm 12)$~nm. This result proves that we can determine the exact number of potential wells between two optically resolved atoms, a situation that so far seemingly required much longer (e.g.~CO$_2$) trapping laser wavelengths \cite{Scheunemann00}. Using our scheme to precisely measure the position of an atom, we now demonstrate active control of its absolute position along the DT axis. This is realized by transporting the atom to a predetermined position $x_\mathrm{target}$ by means of our optical conveyor belt. Initially, we determine the position of the atom and its distance $L$ from $x_\mathrm{target}$ from an ICCD image by fitting a simple Gaussian.
To move the atom to $x_\mathrm{target}$, it is uniformly accelerated along the first half of $L$ and uniformly decelerated along the second half with an acceleration of $a=\pm 1000$~m/s$^2$ \cite{Schrader01}. To confirm the successful transport to $x_\mathrm{target}$, we take a second image of the atom and measure its final position. We repeat the same experiment about 400 times with a single atom each time. \begin{figure} \caption{Absolute position control of single trapped atoms. The histogram shows the accumulative data of about 400 experiments carried out with one single atom at a time. Transfer of the atoms from the MOT to the DT yields the broad distribution on the right (standard deviation $5.0\ \mu$m). We transport the atoms to the target position at $x_\mathrm{target} \label{fig:PositionFeedback} \end{figure} Because the atoms are randomly loaded from the MOT into the DT, the distribution of their initial positions, see Fig.~\ref{fig:PositionFeedback}, has a standard deviation of $5.0\,(\pm0.3)\,\mu$m, corresponding to the MOT radius. After the transport, the width of the distribution of the final positions is drastically reduced to $\sigma_{\mathrm{control}}=300\,(\pm15)$~nm. This width is limited by the errors in determining the final and initial position of the atom, by the transportation error $\sigma_{\mathrm{transp}}$, resulting from the discretization error of our digital dual-frequency synthesizer which drives the optical conveyor belt, and by the DT position drifts, $\sigma_{\mathrm{drift}}$, during the typical time of 1.5~s between the two successive exposure intervals. From the above DT phase measurement we find $\sigma_{\mathrm{drift}}=140 \,(\pm 20)$~nm. Assuming that \begin{equation}\label{eq:transport} \sigma_{\mathrm{control}}= \sqrt{2\Delta x_\mathrm{stat}^2+ \sigma_{\mathrm{drift}}^2+ \sigma_{\mathrm{transp}}^2}, \end{equation} we calculate that $\sigma_\mathrm{transp}=190\,(\pm 25)$~nm, comparable to the statistical error. In addition to statistical errors, the accuracy of the position control is subject to systematic errors. The predominant systematic error stems from the calibration of our length scale. In the present case, a relative calibration error of $0.4$~\% results in a $120$~nm shift of the final positions with respect to the target position after a transport over $L\approx30\,\mu$m. However, this error could be reduced by improving the accuracy of the calibration. Summarizing, we have realized a detection scheme for the absolute and relative position of individual atoms stored in our standing wave dipole trap, yielding sub-micrometer resolution. We have shown that this scheme allows us to measure the exact number of potential wells separating simultaneously trapped atoms in our 532~nm-period standing wave potential. We have furthermore used our position detection scheme to transport an atom to a predetermined position with a sub-optical wavelength accuracy. These results represent an important step towards experiments in which the relative or absolute position of single atoms has to be controlled to a high degree. For example, we aim to use this technique for a deterministic coupling of atoms to the mode of a high-Q optical resonator in order to realize quantum logic operations~\cite{Walther01, Blatt02, Schrader04, You03}. Furthermore, knowing the exact number of potential wells separating the atoms, we can now attempt to control this parameter by placing atoms into specific potential wells of our standing wave using additional optical tweezers. 
Finally, the demonstrated high degree of control allows us to envision the implementation of controlled cold collisions between optically resolved individual atoms by means of spin-dependent transport \cite{Mandel03}. We acknowledge valuable discussions with V.~I.~Balykin. This work was supported by the Deutsche For\-schungs\-ge\-mein\-schaft and the EC (IST/FET/QIPC project ``QGATES''). I.~D.~acknowledges funding from INTAS. D.~S.~acknowledges funding by the Deutsche Telekom Stiftung. \end{document}
\begin{document} \begin{abstract} We discuss the rainbow Ramsey theorems at limit cardinals and successors of singular cardinals, addressing some questions in \cite{MR2354904} and \cite{MR2902230}. In particular, we show that for inaccessible $\kappa$, $\kappa\to^{poly}(\kappa)^2_{2-bdd}$ does not characterize weak compactness and that for singular $\kappa$, $\mathrm{GCH}$ implies $\kappa^+\not\to^{poly} (\eta)^2_{<\kappa-bdd}$ for any $\eta\geq cf(\kappa)^+$ and $\square_\kappa$ implies $\kappa^+\to^{poly} (\nu)^2_{<\kappa-bdd}$ for any $\nu<cf(\kappa)^+$. We also provide a simplified construction of a model for $\omega_2\not\to^{poly} (\omega_1)^2_{2-bdd}$ originally constructed in \cite{MR2902230} and show the witnessing coloring is indestructible under strongly proper forcings but destructible under some c.c.c forcing. Finally, we conclude with some remarks and questions on possible generalizations to rainbow partition relations for triples. \end{abstract} \maketitle \let\thefootnote\relax\footnotetext{2010 \emph{Mathematics Subject Classification}. Primary: 03E02, 03E35, 03E55. I thank Uri Abraham, James Cummings, Assaf Rinot and Ernest Schimmerling for helpful discussions, comments and corrections on earlier drafts. I am grateful to the anonymous referee who provided extensive comments and corrections which greatly improved the exposition. The work was done when I was a graduate student at Carnegie Mellon University supported in part by the US taxpayers. Part of the revision was done when I was a postdoctoral fellow at Bar-Ilan University, supported by the Foreign Postdoctoral Fellowship Program of the Israel Academy of Sciences and Humanities and by the Israel Science Foundation (grant agreement 2066/18). } \section{Introduction} Fix ordinals $\lambda,i, \kappa$ and $n\in \omega$. \begin{definition} We use $\lambda\to (\kappa)^n_i$ to abbreviate: for any $f: [\lambda]^n \to i$, there exists $A\subset \lambda$ of order type $\kappa$ such that $f\restriction [A]^n$ is a constant function. Such $A$ is called a \emph{monochromatic} subset of $\lambda$ (with respect to $f$). \end{definition} \begin{definition} We use $\lambda\to^{poly} (\kappa)^n_{i-bdd}$ to abbreviate: for any $f: [\lambda]^n \to \lambda$ that is \emph{$i$-bounded}, namely for any $\alpha\in \lambda$, $|f^{-1}\{\alpha\}|\leq i$, there exists $A\subset \lambda$ of order type $\kappa$ such that $f\restriction [A]^n$ is injective. Such $A$ is called a \emph{rainbow} subset of $\lambda$ (with respect to $f$). \end{definition} \begin{remark} $\to^{poly}$ is sometimes denoted as $\to^*$. We adopt $\to^{poly}$ to avoid possible confusion, as rainbow subsets are sometimes called ``polychromatic'' subsets. \end{remark} $\lambda\to (\kappa)^n_i$ implies $\lambda\to^{poly} (\kappa)^n_{i-bdd}$ as, given an $i$-bounded coloring, it is possible to cook up a dual $i$-coloring for which any monochromatic subset will be a rainbow subset for the original coloring. This is \emph{Galvin's trick}. This explains why rainbow Ramsey theory is also called sub-Ramsey theory in finite combinatorics. In many cases, the rainbow analogue is a strict weakening. For example: \begin{enumerate} \item[1] In finite combinatorics, the sub-Ramsey number $sr(K_n, k)$, which is the least $m$ such that $m\to^{poly}(n)_{k-bdd}^2$, is bounded by a polynomial in $n$ and $k$ (Alspach, Gerson, Hahn and Hell \cite{MR867747}). This is in contrast with the Ramsey number, which grows exponentially.
\item[2] In reverse mathematics, over $RCA_0$, $\omega\to^{poly}(\omega)^2_{2-bdd}$ does not imply $\omega\to(\omega)^2_2$ (Csima and Mileti \cite{MR2583822}). \item[3] In combinatorics on countably infinite structures, the Rado graph is Rainbow Ramsey but not Ramsey (Dobrinen, Laflamme, and Sauer \cite{MR3518438}). \item[4] In combinatorics on the ultrafilters on $\omega$, Martin's Axiom implies there exists a Rainbow Ramsey ultrafilter that is not a Ramsey ultrafilter (Palumbo \cite{MR3135506}). \item[5] In uncountable combinatorics, ZFC proves $\omega_1\not\to (\omega_1)_2^2$ but $\omega_1\to^{poly} (\omega_1)_{2-bdd}^2$ is consistent with ZFC (Todorcevic \cite{MR716846}). \end{enumerate} Results in this note serve as further evidence that rainbow Ramsey theory is a strict weakening of Ramsey theory. We focus on the area of uncountable combinatorics. The organization of the paper is: \begin{enumerate} \item In Section \ref{inaccessible}, we discuss rainbow Ramsey theorems at limit cardinals. In particular, we show $\kappa\to^{poly} (\kappa)^2_{2-bdd}$ for an inaccessible cardinal $\kappa$ does not imply $\kappa$ is weakly compact, answering a question in \cite{MR2354904}; \item In Section \ref{singular}, we discuss the rainbow Ramsey theorems at the successor of singular cardinals. Answering a question in \cite{MR2902230}, we show $\mathrm{GCH}+\square_\kappa$ implies $\kappa^+\not\to^{poly} (\eta)^2_{<\kappa-bdd}$ for any $\eta\geq cf(\kappa)^+$ and $\kappa^+\to^{poly} (\nu)^2_{<\kappa-bdd}$ for any $\nu<cf(\kappa)^+$ . \item In Section \ref{indestructible}, we use the method of Neeman developed in \cite{MR3201836} to simplify the construction of a model by Abraham and Cummings \cite{MR2902230} in which $\omega_2\not\to^{poly} (\omega_1)_{2-bdd}^2$. Furthermore, we show in this model, the witnessing coloring is indestructible under strongly proper forcings but destructible under c.c.c forcings. In other words, the coloring witnessing $\omega_2\not\to^{poly} (\omega_1)_{2-bdd}^2$ remains the witness to the same negative partition relation in any strongly proper forcing extension but there exists a c.c.c forcing extension that adds a rainbow subset of size $\omega_1$ for that coloring. As a result, $\omega_2\not\to^{poly} (\omega_1)_{2-bdd}^2$ is compatible with the continuum being arbitrarily large. \item In Section \ref{generalization}, we briefly discuss possibilities and restrictions of generalizations to partition relations for triples. \end{enumerate} \section{Rainbow Ramsey at limit cardinals}\label{inaccessible} In \cite{MR2354904}, Abraham, Cummings and Smyth studied the rainbow Ramsey theory at small uncountable cardinals and successors of regular cardinals. They asked what can be said about the rainbow Ramsey theory at inaccessible cardinals. A test question they asked was for any inaccessible cardinal $\kappa$, whether $\kappa\to^{poly}(\kappa)^2_2$ characterize weak compactness. We answer this in the negative. Fix a regular uncountable cardinal $\kappa$. \begin{definition} We say $f: [\kappa]^n \to \kappa$ is a normal coloring if whenever $\bar{a}, \bar{b}\in [\kappa]^n$ are such that $f(\bar{a})=f(\bar{b})$, then $\max \bar{a}=\max \bar{b}$. 
\end{definition} \begin{definition} A normal function $f:[\kappa]^2\to \kappa$ is regressively bounded (reg-bdd) if there exists $\lambda<\kappa$ such that $\kappa \cap cof(\geq \lambda)$ is stationary in $\kappa$ and for all $\alpha\in \kappa\cap cof(\geq \lambda)$ and all $i<\kappa$, $\{\beta\in \alpha: f(\beta, \alpha)=i\}$ is bounded in $\alpha$. We use $\kappa \to^{poly} (\kappa)^2_{reg-bdd}$ to denote the statement: for any normal regressively bounded $f: [\kappa]^2\to \kappa$, there exists a subset $A\in [\kappa]^\kappa$ such that $A$ is a rainbow subset for $f$. \end{definition} \begin{remark} Notice for any weakly inaccessible cardinal $\kappa$ and any cardinal $\lambda<\kappa$, $\kappa \to^{poly} (\kappa)^2_{reg-bdd}$ implies $\kappa \to^{poly} (\kappa)^2_{\lambda-bdd}$. To see this, given $f: [\kappa]^2\to \kappa$ that is $\lambda$-bounded, we may recursively find a subset $B\in [\kappa]^\kappa$ such that $f\restriction [B]^2$ is normal. Hence without loss of generality we may assume $f$ is normal. Then it is easy to see that $f$ is regressively bounded, witnessed by $\lambda^+$. \end{remark} Even though we cannot employ Galvin's trick of dual colorings, since there may not be any $\lambda<\kappa$ that bounds the sizes of the color classes, we do have that if $\kappa$ is weakly compact, then $\kappa\to^{poly} (\kappa)_{reg-bdd}^2$. It turns out that weak compactness is not necessary. It suffices when $\kappa$ is a ``generic large cardinal'' (for more on this topic, see \cite{MR2768692}). Recall that a \emph{$\overrightarrow{C}$-sequence} on $\kappa$ is $\langle C_\alpha: \alpha\in \lim \kappa\rangle$ such that each $C_\alpha\subset \alpha$ is a club subset of $\alpha$. \begin{lemma}\label{vanilla} Let $\kappa$ be a regular cardinal. Consider the following statements: \begin{enumerate} \item Every $\overrightarrow{C}$-sequence on $\kappa$ is \emph{trivial stationarily often}, namely, there exists a club $D\subset \kappa$ such that for any $\alpha<\kappa$, there exist stationarily many $\beta<\kappa$ such that $D\cap \alpha \subset C_\beta$; \item $\kappa\to^{poly} (\kappa)^2_{reg-bdd}$; \item every $\overrightarrow{C}$-sequence on $\kappa$ is \emph{trivial}, in the sense that there exists a club $D\subset \kappa$ such that for any $\alpha<\kappa$ there exists $\beta<\kappa$ such that $D\cap \alpha\subset C_\beta$. \end{enumerate} Then (1) implies (2) and (2) implies (3). \end{lemma} \begin{proof} We first prove (1) implies (2). Given a regressively bounded $f$ witnessed by $\lambda<\kappa$, we may assume $f(\cdot, \delta): \delta\to \delta$. It is not hard to see that (1) implies that $\kappa$ is a limit cardinal. Hence we may also assume $\lambda\geq \aleph_1$. For each $\alpha\in cof(\geq \lambda)\cap \kappa$, we let $C_\alpha\subset \alpha$ be a club of order type $cf(\alpha)$ such that for any $\gamma_0<\gamma_1\in C_\alpha$, $f(\gamma_0, \alpha)\neq f(\gamma_1, \alpha)$. We can achieve this by first picking a club subset $C'_\alpha\subset \alpha$ of order type $cf(\alpha)$ and then letting $C_\alpha\subset C'_\alpha$ be the set of closure points of the following function $g: C'_\alpha \to C'_\alpha$, where $g(\beta)$ is the least $\beta'$ such that for all $\beta''\geq \beta'$, $f(\beta, \alpha)\neq f(\beta'', \alpha)$. Note $g$ is well-defined by the regressive bounding condition on $f$, and taking closure points guarantees that $f(\cdot, \alpha)\restriction C_\alpha$ is injective. If $\alpha\in cof(<\lambda)\cap \kappa$, then just let $C_\alpha$ be any club of type $cf(\alpha)$.
Let $D\subset \kappa$ be a club given by the conclusion of (1). We may then build a continuous sequence $\langle \alpha_i\in D: i<\kappa\rangle$ such that for any $i<\kappa$, $D\cap \alpha_i\subset C_{\alpha_{i+1}}$. Notice that on a tail of this sequence, $\alpha_{i+1}\in cof(\geq \lambda)\cap \kappa$. We may without loss of generality assume that $\alpha_{i+1}\in cof(\geq \lambda)\cap \kappa$ for all $i<\kappa$. Let $Y\subset \kappa$ be a set of size $\kappa$ such that for any $i<j\in Y$, $\alpha_{i+1} < \alpha_j$. We claim that $X=_{def} \{\alpha_{i+1}: i\in Y\}$ is a rainbow subset for $f$. Fix $i<j<k \in Y$. Since $\alpha_{i+1}<\alpha_{j+1}\in D\cap \alpha_{k} \subset C_{\alpha_{k+1}}$, we know that $f(\alpha_{i+1}, \alpha_{k+1})\neq f(\alpha_{j+1},\alpha_{k+1})$. We now prove (2) implies (3). Let a $\overrightarrow{C}$-sequence $\langle C_\alpha: \alpha<\kappa\rangle$ be given. Define $f(\alpha,\beta)=(\min (C_\beta - (\alpha+1)),\beta)$. It can be easily checked that $f$ is regressively bounded. Let $X\subset \kappa$ be a rainbow subset of size $\kappa$ and let $C=\lim X=\{\delta \in \lim \kappa: \sup X\cap \delta=\delta\}$. We claim that $C$ is what we are looking for. It clearly suffices to show that for any $\alpha\in C$ and any $\beta\in X- (\alpha+1)$, we have $\alpha\in C_\beta$. Since $\alpha$ is a limit point of $X$, and $f(\cdot, \beta)\restriction X\cap \alpha$ is injective, it must be the case that $\alpha\in \lim C_\beta$, which implies that $\alpha\in C_\beta$. \end{proof} \begin{remark} Lambie-Hanson and Rinot \cite{CLHRinot} introduced and studied a cardinal invariant on $\overrightarrow{C}$-sequences. The statement (3) in Lemma \ref{vanilla} is what they call $\chi(\kappa)\leq 1$. Furthermore, they show $\chi(\kappa)\leq 1$ implies that $\kappa$ is greatly Mahlo. \end{remark} \begin{definition} Suppose $\kappa$ is regular and ${}^{<\kappa}\kappa=\kappa$. We say \emph{$\kappa$ is generically weakly compact via $\kappa$-c.c forcings} if for any transitive $M$ with $|M|=\kappa$, $\kappa\in M$, ${}^{<\kappa}M\subset M$, there exists a $\kappa$-c.c forcing $P$ such that for any generic $G\subset P$ over $V$, in $V[G]$, there exists an elementary embedding $j: M\to N$ where $N$ is transitive and $\mathrm{crit}(j)=\kappa$. \end{definition} \begin{thm}\label{saturatedproof} If a regular cardinal $\kappa$ is generically weakly compact via $\kappa$-c.c forcings, then $\kappa\to^{poly} (\kappa)_{reg-bdd}^2$. \end{thm} \begin{proof} We will show that every $\overrightarrow{C}$-sequence on $\kappa$ is trivial stationarily often; the theorem then follows from Lemma \ref{vanilla}. Given a $\overrightarrow{C}$-sequence on $\kappa$, $\bar{C}=\langle C_\alpha: \alpha<\kappa\rangle$, let $X\prec H(\theta)$ containing $\bar{C}$, where $\theta$ is a large enough regular cardinal, be such that $|X|=\kappa$, $\kappa\subset X$ and ${}^{<\kappa}X\subset X$. Let $\pi: X\to M$ be the transitive collapse. Note that $\bar{C}$ is fixed by $\pi$. By the assumption on $\kappa$, there is some generic extension $V[G]$ by a $\kappa$-c.c forcing such that there exists an embedding $j: M\to N$ with $N$ being transitive and $\mathrm{crit}(j)=\kappa$. Let $D'=j(\bar{C})(\kappa)$. Then $D'\subset \kappa$ is a club set in $V[G]$. Since the forcing is $\kappa$-c.c, there exists a club $D\subset \kappa$ with $D\in V$ such that $D\subset D'$. We claim that $D$ trivializes $\bar{C}$ stationarily often. For any $\alpha<\kappa$, $D\cap \alpha\in M$ since $M$ contains $\kappa$ and is closed under $<\kappa$-sequences in $V$.
Let $S=\{\beta<\kappa: D\cap \alpha\subset C_\beta\}$. We will show that $S$ is stationary. Notice that $S\in M$ and $M\models S$ is a stationary subset of $\kappa$. To see this, suppose for the sake of contradiction there is a club $D^*\in M$ disjoint from $S$. Then $\kappa \in j(D^*)$ since $j(D^*)$ is a club in $j(\kappa)$ by elementarity and $j(D^*)\cap \kappa=D^*$ which is unbounded in $\kappa$. This implies $j(D)\cap \kappa=D\not \subset j(\bar{C})(\kappa)=D'$, which is a contradiction. Since $\pi^{-1}(S)=S$, we know that $X\models S$ is a stationary subset of $\kappa$, hence by elementarity, $S\subset \kappa$ is stationary. \end{proof} \begin{remark} To get a model of a cardinal $\kappa$ which is generically weakly compact via $\kappa$-c.c forcings but not weakly compact, we can proceed by the following: first we prepare the ground model such that the weakly compact cardinal $\kappa$ is indestructible under $\mathrm{Add}(\kappa,1)$, and then use a theorem of Kunen (\cite{MR495118}) that $\mathrm{Add}(\kappa,1)$ is forcing equivalent to $P*\dot{T}$ where $P$ adds a homogeneous $\kappa$-Suslin tree $\dot{T}$. The final model will be $V^P$. \end{remark} \begin{remark} The Kunen model also shows that the existence of a $\kappa$-Suslin tree is consistent with $\kappa\to^{poly} (\kappa)^2_{reg-bdd}$. The existence of a $\kappa$-Suslin tree is sometimes strong enough to refute some weak consequences of $\kappa\to (\kappa)^2_2$. For example Todorcevic proved in \cite{Todorcevic1989-TODTSA-5} that for any regular uncountable cardinal $\kappa$, the existence of $\kappa$-Suslin tree implies $\kappa\not\to [\kappa]^2_\kappa$, namely there exists a coloring $f: [\kappa]^2\to \kappa$ such that any $X\in [\kappa]^\kappa$, $f'' [X]^2 =\kappa$. \end{remark} \begin{cor} It is consistent relative to a weakly compact cardinal that for some inaccessible cardinal $\kappa$ that is not weakly compact, $\kappa\to^{poly} (\kappa)_{\lambda-bdd}^2$ for any $\lambda<\kappa$. \end{cor} \begin{cor} If $\kappa$ is real-valued measurable, then $\kappa\to^{poly} (\kappa)_{\lambda-bdd}^2$ for any $\lambda<\kappa$. \end{cor} \begin{cor} If $\kappa$ is weakly compact, then $\kappa\to^{poly} (\kappa)^2_{reg-bdd}$ is indestructible under any forcing satisfying $\lambda$-c.c. for some $\lambda<\kappa$. \end{cor} The trick of using some large enough ordinal to ``guide'' the construction can also be used analogously to prove the following, which provides more contrast with its dual Ramsey statement: \begin{lemma}\label{singularstronglimit} For any singular strong limit $\kappa$, $\kappa\to^{poly} (\kappa)_{\lambda-bdd}^2$ for any $\lambda<\kappa$. \end{lemma} \begin{remark} Given a $\lambda$-bounded coloring $f$ on $[\kappa]^2$, we claim that there is $B\in [\kappa]^{\kappa}$ such that $f\restriction [B]^2$ is normal. Fix a continuous sequence of strictly increasing regular cardinals $\langle \kappa_i: i<cf(\kappa)\rangle$ with $\kappa_0>\max \{cf(\kappa), \lambda\}$ converging to $\kappa$. We find $\langle A_i: i<cf(\kappa)\rangle$ such that \begin{itemize} \item for any $i<cf(\kappa)$, $A_i\subset \kappa_i$ and $|A_i|=\kappa_i$ \item for any $i<j<cf(\kappa)$, $A_i\subsetneq A_j$ \item for any limit $\delta<cf(\kappa)$, $A_\delta=\bigcup_{i<\delta} A_i$ \item for any $i<cf(\kappa)$, $f\restriction [A_i]^2$ is normal \end{itemize} The construction clearly gives $B=\bigcup_{i<cf(\kappa)} A_i$ such that $f\restriction [B]^2$ is normal. The construction at limit stages is clear. 
At stage $i+1$, we inductively find a subset $C\subset \kappa_{i+1}-\kappa_i$ of size $\kappa_{i+1}$ such that $f\restriction [A_i\cup C]^2$ is normal. Suppose we have built $C'\subset \kappa_{i+1}-\kappa_i$ of size $\leq \kappa_i$, we demonstrate how to add one more element. As $|C'\cup A_i|\leq \kappa_i, \lambda<\kappa_i$ and $\kappa_{i+1}$ is regular, there exists $\gamma>\max C'+1$ such that there do not exist $a\in [C'\cup A_i]^2, \beta\in C'\cup A_i$ with $f(a)=f(\beta, \gamma)$. It is easy to see that $f\restriction [A_i\cup C'\cup \{\gamma\}]^2$ is normal. \end{remark} \begin{proof}[Proof of Lemma \ref{singularstronglimit}] Fix a $\lambda$-bounded coloring $f: [\kappa]^2\to \kappa$. By the remark above, we may assume $f$ is normal. Let $\eta=cf(\kappa)$. Fix an increasing sequence of regular cardinals $\langle \kappa_i: i<\eta\rangle$ such that \begin{enumerate} \item $\kappa_0>\max\{\lambda, \eta\}$; \item $\langle \kappa_i: i<\eta\rangle$ converges to $\kappa$; \item $\kappa_{i+1}^{\kappa_{i}}=\kappa_{i+1}$ for all $i<\eta$. \end{enumerate} Let $\theta$ be a large enough regular cardinal and fix an $\in$-increasing chain $\langle N_i\prec H(\theta): i<\eta \rangle$ such that $|N_i|=\kappa_{i+1}$, $\kappa_{i+1}\subset N_i$, $\sup (N_i\cap\kappa_{i+2})=_{def} \delta_i\in \kappa_{i+2}\cap cof(\kappa_{i+1})$, ${}^{\kappa_i} N_i\subset N_i$. We arrange that $\lambda,f, \langle \kappa_i : i<\eta\rangle \in N_0$. We will recursively build $\langle A_i: i<\eta\rangle$ such that $A_i\subset N_i\cap \kappa_{i+2}$ and $|A_i|=\kappa_{i}^+$ satisfying: \emph{for all $j\geq i$, $A_i\cup \{\delta_j\}$ is a rainbow subset of $f$.} Recursively, suppose $A_k\subset N_k\cap \kappa_{k+2}$ for $k<i$ have been built. Let $A^*=\bigcup_{k<i} A_k \subset \kappa_{i+1}\subset N_i$. Notice that $|A^*|\leq \kappa_{i}$. We will enlarge $A^*$ with $\kappa_{i}^+$ many elements in $\delta_i-\kappa_{i+1}$. More precisely, we will find $C=\{\alpha_k\in \delta_i -\kappa_{i+1} : k<\kappa_{i}^+\}$ such that $A^*\cup C \cup \{\delta_j\}$ is a rainbow subset of $f$ for all $j\geq i$. We finish by setting $A_i=A^*\cup C$. Suppose we have built $C_\nu=\{\alpha_k: k<\nu\}$ for some $\nu<\kappa_{i}^+$ satisfying the requirement. Since ${}^{\kappa_i} N_i\subset N_i$, we have $A^*\cup C_\nu \in N_i$. Let $A(A^*\cup C_\nu)=_{def}\{\gamma<\kappa_{i+2}: A^*\cup C_\nu \cup \{\gamma\} \text{ is a rainbow subset for }f\}$. Since $A(A^*\cup C_\nu)\in N_i$ and $\delta_i\in A(A^*\cup C_\nu)$, we know that $A(A^*\cup C_\nu)$ is a stationary subset of $\kappa_{i+2}$. Let $B_j=_{def} \{\rho\in A(A^*\cup C_\nu): \exists \alpha\in A^*\cup C_\nu \ f(\alpha, \delta_j)=f(\rho, \delta_j)\}$ for each $j\geq i$. As $|A^*\cup C_\nu| \leq \kappa_{i}$ and the coloring is $\lambda$-bounded, we know that $|B_j|\leq \kappa_i$ for any $j\geq i$. Pick any $\gamma \in A(A^*\cup C_\nu)- \bigcup_{i\leq j<\eta} B_j$ with $\gamma > \max A^*\cup C_\nu$. We claim that this $\gamma$ is as desired, namely $A^*\cup C_\nu\cup \{\gamma\}\cup \{\delta_j\}$ is a rainbow subset for all $j\geq i$. Indeed, fix some $j\geq i$. By the fact that $\gamma,\delta_j\in A(A^*\cup C_\nu)$, the only bad possibility is that for some $\alpha\in A^*\cup C_\nu$, $f(\alpha,\delta_j)=f(\gamma,\delta_j)$. But this is ruled out by the fact that $\gamma\not\in B_j$. 
\end{proof} \begin{remark}\label{Indestructible} We can strengthen the conclusion of Lemma \ref{singularstronglimit} to that $\kappa\to^{poly} (\kappa)_{\lambda-bdd}^2$ for any $\lambda<\kappa$ and it remains true in any forcing extension satisfying $<\gamma$-covering property (see Definition \ref{covering}) for some cardinal $\gamma<\kappa$. The proof is similar to that of Theorem \ref{CountableCase}. Hence it is also possible for a singular cardinal which is not a strong limit to satisfy the conclusion of Lemma \ref{singularstronglimit}. \end{remark} \begin{remark}\label{FreeSetTheorem} The following strengthening is also true: if $\kappa$ is a strong limit singular cardinal, then $\kappa\to^{poly} (\kappa)^n_{\lambda-bdd}$ for any $\lambda<\kappa$ and $n\in \omega$. This is an immediate consequence of the following theorem (Theorem 45.4 in \cite{MR795592}): given a strong limit singular cardinal $\kappa$ and some $\lambda<\kappa$, we have that for any $f: [\kappa]^n \to [\kappa]^{\lambda}$, there exists $H\subset \kappa$ of cardinality $\kappa$ such that for any $x\in [H]^2$, $f(x)\cap (H-x) =\emptyset$ (such $H$ is called \emph{$f$-free}). To see the implication, given a $g: [\kappa]^2\to \kappa$ which is $\lambda$-bounded, consider $f: [\kappa]^2\to [\kappa]^\lambda$ defined as $f(x)=\bigcup \{y: g(y)=g(x)\}-x$. We leave it to the reader to verify that any $f$-free set is a rainbow subset for $g$. However, the proof of Theorem 45.4 in \cite{MR795592} heavily uses the Erd\H{o}s-Rado theorem, it is thus not clear if the proof can be generalized to give the forcing indestructibility result as in Remark \ref{Indestructible}. We decide to keep the proof of a weaker result that entails generalizations. \end{remark} \begin{question} If an inaccessible $\kappa$ carries a non-trivial $\kappa$-complete $\kappa$-saturated normal ideal, is it true that $\kappa\to^{poly}(\kappa)^n_{\lambda-bdd}$ for all $n\in \omega$ and all $\lambda<\kappa$? \end{question} \section{The extent of Rainbow Ramsey theorems at successors of singular cardinals}\label{singular} In \cite{MR2354904} and \cite{MR2902230}, it is shown that if $\mathrm{GCH}$ holds, then $\kappa^+\to^{poly} (\eta)^2_{<\kappa-bdd}$ for any regular cardinal $\kappa$ and ordinal $\eta<\kappa^+$ and moreover the partition relations continue to hold in any $\kappa$-c.c. forcing extension. The authors ask what we can say when $\kappa$ is singular. We will address this question by showing $\mathrm{GCH}$ implies $\kappa^+\to^{poly} (\eta)^2_{<\kappa-bdd}$ for all $\eta<cf(\kappa)^+$ and $\square_\kappa$ implies $\kappa^+\not\to^{poly} (\eta)^2_{<\kappa-bdd}$ for all $\eta\geq cf(\kappa)^+$. For the latter, as we will see below, a weaker hypothesis suffices. \begin{observation}\label{cofinalityObservation} If $\kappa$ is singular of cofinality $\lambda<\kappa$, then $\kappa^+\not \to^{poly} (\lambda^+ +1)^2_{<\kappa-bdd}$. \end{observation} \begin{proof} For each $\beta\in \kappa^+$, fix disjoint $\{A_{\beta, n}: n\in \lambda\}$ such that each set has size $<\kappa$ and $\bigcup_{n\in \lambda} A_{\beta, n} =\beta$. Define a coloring by mapping $\{\alpha,\beta\}\in [\kappa^+]^2\mapsto (n,\beta)$ if $n$ is the unique element in $\lambda$ that $\alpha\in A_{\beta,n}$. This coloring is easily seen to be $<\kappa$-bounded. For any subset $A$ of order type $\lambda^++1$, let $\delta$ be the top element. Now by pigeon hole, there exists $n\in \lambda$, such that $|A\cap A_{\delta,n}|\geq \lambda^+$. 
For any $\alpha<\beta\in A\cap A_{\delta,n}$, $f(\alpha,\delta)=(n,\delta)=f(\beta,\delta)$. Thus $A$ is not a rainbow subset. \end{proof} The following connects the rainbow partition relations with sets in Shelah's approachability ideal. Fix a singular cardinal $\kappa$ with cofinality $\lambda<\kappa$ for Definitions \ref{sing1} and \ref{sing2}. \begin{definition}\label{sing1} A set $S\subset \kappa^+$ is in $I[\kappa^+; \kappa]$ iff there is a sequence $\bar{a}=\langle a_\alpha \in [\kappa^+]^{<\kappa} : \alpha<\lambda\rangle$ and a closed unbounded $C\subset \kappa^+$ such that for any $\delta\in C\cap S$ is singular and weakly approachable with respect to the sequence $\bar{a}$, namely there is an unbounded $A\subset \delta$ of order type $cf(\delta)$ such that any $\alpha<\delta$ there exists $\beta<\delta$ with $A\cap \alpha\subset a_\beta$. \end{definition} Notice that $I[\kappa^+; \kappa]$ contains $I[\kappa^+]$, which is Shelah's approachability ideal. For more details on these matters, see \cite{MR2768694}. \begin{definition}[Definition 3.24, 3.25 \cite{MR2768694}]\label{sing2} $d: [\kappa^+]^2\to cf(\kappa)$ is \begin{enumerate} \item \emph{normal} if $$ i<cf(\kappa) \rightarrow \sup_{\alpha<\kappa^+} |\{\beta<\alpha: d(\beta,\alpha)<i\}|<\kappa,$$ \item \emph{transitive} if for any $\alpha<\gamma<\beta<\kappa^+$, $d(\alpha,\beta)\leq \max \{d(\alpha,\gamma), d(\gamma,\beta)\}$, \item \emph{approachable} on $S\subset \lim \kappa^+$ if for any $\delta\in S$, there is a cofinal $A\subset \delta$ such that for any $\alpha\in A$, $\sup \{d(\beta,\alpha): \beta\in A\cap \alpha\}<cf(\kappa)$. \end{enumerate} \end{definition} It is a consequence of Theorem 3.28 in \cite{MR2768694} that $\kappa^+\cap cof(\lambda^+)\in I[\kappa^+; \kappa]$ implies the existence of a normal $d$ that is approachable on $E\cap \kappa^+\cap cof(\lambda^+)$ for some club $E\subset \kappa^+$. \begin{lemma} For a singular cardinal $\kappa$ with cofinality $\lambda<\kappa$, we have that $\kappa^+\cap cof(\lambda^+)\in I[\kappa^+; \kappa]$ implies $\kappa^+\not\to^{poly} (\lambda^+)^2_{<\kappa-bdd}$. \end{lemma} \begin{proof} Fix a normal $d$ that is approachable at $E\cap \kappa^+\cap cof(\lambda^+)$ for some club $E\subset \kappa^+$. Define $f: [ E]^2\to \kappa^+$ such that $f(\alpha,\beta)=(d(\alpha,\beta),\beta)$. The normality of $d$ implies $f$ is $<\kappa$-bounded. Given $A\in [E]^{\lambda^+}$, let $\gamma=\sup A$. Then $d$ is approachable at $\gamma$. Fix some unbounded $B\subset \gamma$ of order type $\lambda^+$ witnessing the approachability of $d$. We may assume there exists $\eta_0<\lambda$ such that $\sup d''[B]^2 \leq \eta_0$. To see why we can do this, note that by the approachability assumption on $d$, we know for each $\alpha\in B\cap \gamma$, $\eta_\alpha'=\sup \{d(\beta, \alpha): \beta\in B\cap \alpha\}<\lambda$. Find $B'\in [B]^{\lambda^+}$ and $\eta_0<\lambda$ such that for any $\alpha\in B'$, $\eta_\alpha'=\eta_0$. It is clear that $\sup d''[B']^2 \leq \eta_0$. Without loss of generality, we may assume $B'=B$. Pick the following increasing sequences $\langle a_i\in A: i<\lambda^+\rangle$ and $\langle b_i \in B : i<\lambda^+\rangle$ satisfying that for all $i<\lambda^+$, $b_i<a_i<b_{i+1}$. By the Pigeon Hole principle, we can find $D\in [\lambda^+]^{\lambda^+}$ and some $\eta_1\in \lambda$ such that for all $i\in D$, $d(b_i, a_i), d(a_i, b_{i+1})\leq \eta_1$. 
Then for any $i<j\in D$, by the transitivity of $d$, we have $d(a_i,a_j)\leq \max \{d(a_i, b_{i+1}), d(b_{i+1}, b_j), d(b_j, a_j)\}\leq \max\{\eta_0, \eta_1\}=_{def} \eta^*$ (here we use the convention that $d(t,t)=0$). Let $A'=\{a_i: i\in D\}$. Pick $\delta\in A'$ such that $A'\cap \delta$ has size $\lambda$. We know $\sup d(\cdot, \delta)'' A'\cap \delta \leq \eta^*<\lambda$, which clearly implies there exist $\alpha_0<\alpha_1\in A'\cap \delta$ such that $d(\alpha_0, \delta)=d(\alpha_1,\delta)$. In particular, $A$ is not rainbow for $f$. \end{proof} \begin{remark} $\square_\kappa$ implies $I[\kappa^+]$ is trivial. Hence $\square_\kappa$ implies $\kappa^+\not\to^{poly}(cf(\kappa)^+)^2_{<\kappa-bdd}$. \end{remark} In light of the preceding theorems, the following theorem is the best possible in a sense. \begin{definition}\label{covering} A forcing poset $\mathbb{P}$ satisfies $<\kappa$-covering property if for any $\mathbb{P}$-name of subset of ordinals $\dot{B}$ such that $\Vdash_{\mathbb{P}} |\dot{B}|<\kappa$, there exists $B\in V$ such that $|B|<\kappa$ and $\Vdash_{\mathbb{P}} \dot{B}\subset B$. \end{definition} Notice that if $\kappa$ is singular, then $\kappa$ and $\kappa^+$ are preserved as cardinals in any forcing extension satisfying $<\kappa$-covering property. \begin{theorem}\label{CountableCase} Fix a singular cardinal $\kappa$ with $\lambda=\mathrm{cf}(\kappa)<\kappa$. Suppose $\kappa^{<\lambda}=\kappa$. Then for any $\alpha<\lambda^+$, \begin{equation} \kappa^+\to^{poly} (\alpha)^2_{<\kappa-bdd}. \end{equation} Moreover, these partition relations continue to hold in any forcing extension by $\mathbb{P}$ satisfying the $<\kappa$-covering property. \end{theorem} \begin{proof} We may assume $|\alpha|=\lambda$. Fix a $\mathbb{P}$-name for a $<\kappa$-bounded coloring $\dot{f}$ on $[\kappa^+]^2$. We may assume it is normal. Fix some large enough regular cardinal $\chi$. Build a sequence $\langle M_i \prec (H(\chi),\in, \dot{f}, \kappa, \mathbb{P}): i<\alpha\rangle$ such that \begin{enumerate} \item $\kappa+1\subset M_i$, $|M_i|=\kappa$, $\kappa_i =_{def} M_i\cap \kappa^+ \in \kappa^+$, \item $|\kappa_{i+1}-\kappa_i|=\kappa$, \item ${}^{<\lambda} M_{i}\subset M_{i+1}$. \end{enumerate} The construction is possible since $\kappa^{<\lambda}=\kappa$. Fix a bijection $g: \lambda\to \alpha$. We will inductively define a rainbow subset $\{ a_i: i<\lambda\}$ such that $a_i\in \kappa_{g(i)+1}-\kappa_{g(i)}$. It is clear that this set as defined will have order type $\alpha$. During the construction, we maintain the following \emph{construction invariant}: \emph{for any $i<\lambda$ and $l=g(i)$, whenever $a_j,a_k<\kappa_{l+1}$, we have $\Vdash_\mathbb{P} \dot{f}(a_j, \kappa_{l+1})\neq \dot{f}(a_k,\kappa_{l+1})$}. Suppose for some $\beta<\lambda$ we have defined $A=\{ a_i: i<\beta\}$. Let $l=g(\beta)$ and $B=\kappa_{l+1}-\kappa_{l}$. Our goal is to find an element in $B$ such that after we augment $A$ with this element, not only does the set remains a rainbow subset, but also the construction invariant is satisfied. Let $C=\{\delta<\kappa^+: \forall i,j<\beta \ a_i,a_j\in A\cap \kappa_{l+1}\rightarrow \Vdash_{\mathbb{P}} \dot{f}(a_i,\delta)\neq \dot{f}(a_j,\delta)\}$ and $B'=B\cap C$. \begin{claim} $|B'|=\kappa$. \end{claim} \begin{proof}[Proof of the claim] Let $A'=A\cap M_{l+1}=A\cap \kappa_{l+1}\subset M_l$. As ${}^{<\lambda}M_l\subset M_{l+1}$ we have $A'\in M_{l+1}$. Hence $C\in M_{l+1}$ and that $\kappa_{l+1}\in C$ by the construction invariant. 
$C$ is thus a stationary subset of $\kappa^+$. In particular, $M_{l+1}\models $ there exists an injection from $\kappa$ to $C$. As $\kappa+1\subset M_{l+1}$, $B\cap C=B'$ has size $\kappa$. \end{proof} We want to pick an element from $B'$ and add it to the set, however, we need to make sure the set is rainbow and satisfy the construction invariant. For any cardinal $\delta$, let $A\restriction \delta$ be $A\cap (<\delta)$. For the purpose of presentation, work in $V[G]$ for some $G\subset \mathbb{P}$ generic over $V$. Let $B_{-1}=\{\delta\in B': \exists a\in A\restriction \kappa_{l+1} \ f(\delta,\kappa_{l+1})=f(a,\kappa_{l+1})\}$. For each $i<\beta$ with $g(i)>l$, let $B_i=\{\delta\in B': \exists \alpha\in A\restriction \kappa_{g(i)+1} \ f(\alpha,\kappa_{g(i)+1})=f(\delta,\kappa_{g(i)+1})\}$ and $B'_i=\{\delta\in B': \exists \alpha\in A\restriction a_i \ f(\alpha, a_i)=f(\delta,a_i)\}$. We verify that these sets as defined all have size $<\kappa$. Suppose for the sake of contradiction that $B_{-1}$ has size $\kappa$, then since $|A|<\kappa$ and $|B'|=\kappa$, there exists $a\in A$ such that $\{\delta\in B': f(a,\kappa_{l+1})=f(\delta,\kappa_{l+1})\}$ has size $\kappa$. This contradicts with the assumption that $f$ is $<\kappa$-bounded. Suppose for the sake of contradiction that for some $i$ with $i<\beta$ and $g(i)>l$ we have $|B_i|=\kappa$, similar to the above, we can find $a\in A$ such that $\{\delta\in B': f(a, \kappa_{g(i)+1})=f(\delta,\kappa_{g(i)+1})\}$ has size $\kappa$, contradicting with $<\kappa$-boundedness. Similarly $|B_i'|<\kappa$. Back in $V$, pick $\mathbb{P}$-names for the sets above: $\dot{B}_{-1}$, $\dot{B}_i, \dot{B}'_i$ for all $i<\beta$ such that $g(i)>l$. By the $<\kappa$-covering property of $\mathbb{P}$, we can find $B_{-1}^*, B_i^*, (B_i')^*$ of size $<\kappa$ in $V$ such that $\Vdash_{\mathbb{P}} \dot{B}_{-1}\subset B_{-1}^*, \dot{B}_i\subset B_i^*, \dot{B}'_i\subset (B'_i)^*$ for all $i<\beta$ with $g(i)>l$. Since $\beta<\lambda=cf(\kappa)$, we know $|B_{-1}^*\cup \bigcup_{i<\beta, g(i)>l} B_i^*\cup (B_i')^*|<\kappa$. Pick $a_\beta\in B'-B_{-1}^*-\bigcup_{i<\beta, g(i)>l} B_i^*\cup (B_i')^*$. Then it follows that $A\cup \{a_\beta\}$ is forced by $\mathbb{P}$ to be a rainbow subset and to satisfy the construction invariant. \end{proof} An immediate consequence of the proof of Theorem \ref{CountableCase} is: \begin{cor} For any cardinal $\kappa$ and any $\alpha<\omega_1$, \begin{equation} \kappa^+\to^{poly} (\alpha)^2_{<\kappa-bdd}. \end{equation} \end{cor} \begin{question} Is $\kappa^+\to^{poly} (\omega_1)^2_{<\kappa-bdd}$ consistent for some singular $\kappa$ of countable cofinality? \end{question} \section{A coloring that is strongly proper indestructible but c.c.c destructible}\label{indestructible} It is proved in \cite{MR2354904} that if $CH$ holds, then $\omega_2\to^{poly} (\eta)^2_{<\omega_1-bdd}$ for any $\eta<\omega_2$. In \cite{MR2902230}, a model where $2^\omega=\omega_2$ and $\omega_2\not\to^{poly} (\omega_1)_{2-bdd}^2$ is constructed. A question regarding the possibility of getting $\omega_2\not\to^{poly} (\omega_1)_{2-bdd}^2$ along with continuum larger than $\omega_2$ was raised. A positive answer was given in \cite{MR3437648} using the method of forcing with symmetric systems of submodels as side conditions. 
In this section we give a simplified construction of the model presented in \cite{MR2902230} using the framework developed by Neeman \cite{MR3201836} and show that the witness to $\omega_2\not\to^{poly} (\omega_1)_{2-bdd}^2$ in that model is indestructible under strongly proper forcings. This provides an alternative answer to the original question. \begin{definition}[Special case of Definition 2.2 and 2.4 in \cite{MR3201836}] Let $K=(H(\omega_2), <^*)$ where $<^*$ is some well-ordering of $H(\omega_2)$. Define \emph{small nodes} and \emph{transitive nodes} respectively as $$\mathcal{S}=_{def}\{M \in [K]^\omega: M\prec K\}$$ and $$\mathcal{T}=_{def}\{W\in [K]^{\omega_1} : W\prec K\text{ and internally approachable of length }\omega_1\}.$$ Here $W\prec K$ is \emph{internally approachable of length $\omega_1$} if there exists a continuous $\subseteq$-increasing sequence $\langle W_i \prec K: i<\omega_1\rangle$ of countable models such that $W=\bigcup_{i<\omega_1} W_i$ and for all $i<\omega_1$, $\langle W_j: j\leq i\rangle\in W_{i+1}$. Both $\mathcal{S}$ and $\mathcal{T}$ are stationary (in $[K]^{\omega}$ and $[K]^{\omega_1}$, respectively). Let $\mathbb{P}=\mathbb{P}_{\omega,\omega_1,\mathcal{S}, \mathcal{T}}$ be the standard sequence poset consisting of models of two types. More precisely, $\mathbb{P}$ consists of finite increasing $\in$-chains of elements in $\mathcal{S}\cup \mathcal{T}$ closed under intersection. For example, a typical element will look like $\{s_0, s_1, \cdots, s_{k-1}\}\subset \mathcal{S}\cup \mathcal{T}$, where for any $i<k-1$, $s_i\in s_{i+1}$ and for any $i,j<k$, there is some $l<k$ satisfying $s_i\cap s_j=s_l$. Notice that we can either think of a condition as a finite sequence or as a finite set, since the elements in a condition can be naturally ordered by their von Neumann ranks. Thus, given a condition $s$ and $M, M'\in s$, we say $M$ \emph{precedes/is before (succeeds/is after)} $M'$ when the rank of $M$ is smaller (greater) than the rank of $M'$. \end{definition} \begin{remark}\label{elaborate} In order to consolidate the reader's understanding of the notion, we point out the following: \begin{enumerate} \item If $M_0\in \mathcal{S}, M_1\in \mathcal{S}\cup \mathcal{T}$ and $M_0\in M_1$, then $M_0\subset M_1$. However, if $W\in \mathcal{T}$ and $M\in \mathcal{S}$ satisfy $W\in M$, it cannot be the case that $W\subset M$. Hence, in general the membership relation $\in$ restricted to a condition in $\mathbb{P}$ is not transitive. \item If $W\in \mathcal{T}$ and $M\in \mathcal{S}$ satisfy $W\in M$, then $W\cap M\in \mathcal{S}\cap W$. Let $\langle W_i: i<\omega_1\rangle$ be the $<^*$-least sequence witnessing that $W$ is internally approachable of length $\omega_1$. Let $M\cap \omega_1=\delta$. We claim that $M\cap W=W_\delta$. On the one hand, we have $\langle W_i: i<\omega_1\rangle\in M$, which implies $\bigcup_{i<\delta}W_i= W_\delta\subset M$. On the other hand, suppose $x\in M\cap W$; then if $i_x\in \omega_1$ is the least such that $x\in W_{i_x}$, we know $i_x\in M$ by elementarity. Hence $i_x\in \delta$, which implies $x\in W_\delta$. \item If $s\in \mathbb{P}$ and $W\in s\cap \mathcal{T}$, then for any $M\in s$ preceding $W$ we have $M\in W$. If $M_0, M_1\in s\cap \mathcal{S}$ are such that $M_0$ precedes $M_1$ and there is no transitive node in $s$ between them, then $M_0\in M_1$. \end{enumerate} \end{remark} It is not necessary for a reader to be familiar with \cite{MR3201836} in order to understand the following proofs, since we will list all the lemmas needed. \begin{claim}[Claim 2.17, 2.18]\label{residuegap} Fix $s\in \mathbb{P}$ and $Q\in s$.
Define $res_Q(s)=s\cap Q$. Then \begin{enumerate} \item $res_Q(s)\in \mathbb{P}$. \item If $Q$ is a transitive node, then $res_Q(s)$ consists of all nodes of $s$ that occur before $Q$. If $Q$ is a small node, then $res_Q(s)$ consists of all nodes in $s$ that occur before $Q$ and do not belong to any interval $[Q\cap W, W)\cap s$ for any transitive node $W\in s$. Those intervals are called \emph{residue gaps} of $s$ in $Q$. \end{enumerate} \end{claim} \begin{remark} Do not confuse $res_Q(s)$ with the set of nodes in $s$ preceding $Q$. The point of (1) is that the part of the information about $s$ that is captured by $Q$ is itself a legitimate condition. The fact that $res_Q(s)$ is closed under intersection is immediate since $s$ is. It takes a little work to show it forms an $\in$-increasing chain. The second part of the claim describes what $res_Q(s)$ looks like in a very concrete way. For any $W\in \mathcal{T}\cap s$, $[Q\cap W, W)\cap s \cap Q$ must be empty. It takes more work to show that any $M\in s-Q$ occurring before $Q$ lies in one of these residue gaps. \end{remark} \begin{lemma}[Corollary 2.31 in \cite{MR3201836}]\label{strongproperness} Let $s\in \mathbb{P}$ and $Q\in s$, and let $t\in \mathbb{P}\cap Q$ be such that $t\leq res_Q(s)$. Then \begin{enumerate} \item $s$ and $t$ are directly compatible, namely the closure of $s\cup t$ under intersection is a common lower bound. Moreover, if $Q$ is a transitive node, then $s\cup t$ is already closed under intersection, hence is the lower bound for $s$ and $t$. \item If $r$ is the closure of $s\cup t$, then $res_Q(r)=t$. \end{enumerate} \end{lemma} \begin{remark} An immediate corollary is that if $Q\in \mathcal{S}$ and $s\in Q\cap \mathbb{P}$, then the closure of $s\cup \{Q\}$ under intersection is the greatest lower bound of $s$ among conditions containing $Q$. The basic idea of the proof is the following: first verify that $s\cup t$ consists of $\in$-increasing nodes, and then show that this property remains even after we add nodes of the form $M_0\cap M_1$ where $M_0\in s, M_1\in t$. \end{remark} For each $\beta<\omega_2$, let $f_\beta$ be the $<^*$-least injection from $\beta$ to $\omega_1$. Define the main forcing $\mathbb{Q}$ to consist of $p=(c_p, s_p)$ such that: \begin{enumerate} \item $c_p$ is a finite partial function from $[\omega_2]^2$ to $\omega_1$ satisfying the \emph{bounding requirement}, namely there do not exist $\alpha_0<\alpha_1<\alpha_2<\beta$ such that $(\alpha_i,\beta)\in dom(c_p)$ for all $i<3$ and $c_p(\alpha_0,\beta)=c_p(\alpha_1,\beta)=c_p(\alpha_2,\beta)$; \item for any $(\alpha,\beta)\in dom(c_p)$, $c_p(\alpha,\beta)\geq f_\beta(\alpha)$; \item $s_p\in \mathbb{P}$; \item if $(\alpha,\beta)\in dom(c_p)$ and $M\in s_p$ contains $(\alpha,\beta)$, then $c_p(\alpha,\beta)\in M$. \end{enumerate} $q\leq_{\mathbb{Q}} p$ iff $c_q\restriction dom(c_p)=c_p$ and $s_q \supset s_p$. \begin{claim}\label{enlarge} For any $\alpha<\beta<\omega_2$ and $p\in \mathbb{Q}$, there exists $p'\leq p$ such that $(\alpha,\beta)\in dom(c_{p'})$. \end{claim} \begin{proof} We may assume $(\alpha,\beta)\not \in dom(c_p)$. Let $\delta\leq\omega_1$ be the least ordinal of the form $M\cap \omega_1$ for some $M\in s_p\cap \mathcal{S}$ containing $(\alpha,\beta)$, if such a node exists, and let $\delta=\omega_1$ otherwise. In either case, $\delta$ is a limit ordinal and $f_\beta(\alpha)\in \delta$. Pick $\gamma\in\delta\backslash (f_\beta(\alpha)+1)$ which is not in $range(c_p)$. 
It is clear that $(c_p\cup (\{\alpha,\beta\},\gamma), s_p)$ is a desired extension. \end{proof} \begin{definition} Let $\lambda$ be a fixed regular cardinal, $P$ be a poset. Let $\mathcal{M}=(H(\lambda), \in , \cdots)$ be some countable extension of $(H(\lambda), \in )$. We say $P$ is strongly proper for $B$ where $B\subset \{M: M\prec \mathcal{M}\}$ if for any $M\in B$ and any $r\in M\cap P$, there exists $r'\leq r$ such that $r'$ is \emph{strongly $(M,P)$-generic}, namely for any $r''\leq r'$, there exists some $r^* \in M\cap P$ such that any $t\leq r^*$ with $t\in M$ is compatible with $r''$. We call such $r^*$ a \emph{reduct} of $r''$ on $M$ and for the rest of the section we will use $r''\restriction M$ to represent one such reduct of $r''$ on $M$. $P$ is strongly proper if for all sufficiently large regular $\theta$, $P$ is strongly proper for a club subset of $\{M\in [H(\theta)]^\omega: M\prec H(\theta)\}$. \end{definition} \begin{claim}\label{StrongPropernessTransitive} For any $p=(c_p, s_p)$ with a transitive node $W\in s_p$, if $t\leq (c_p\cap W, res_W(s_p))$ and $t\in W$, then $t$ and $p$ are compatible. In particular, $\mathbb{Q}$ is strongly proper for $\mathcal{T}$. \end{claim} \begin{proof} Implicitly in the statement of the claim, $(c_p\cap W, res_W(s_p))$ can be easily checked to be a condition. It is left to check that $r=(c_t\cup c_p, s_p\cup s_t)$ is a condition since it is clear that it extends $t$ and $p$. First note that $s_p\cup s_t \in \mathbb{P}$ and $\leq_\mathbb{P}$-extends $s_p$ and $s_t$ by Lemma \ref{strongproperness}. It is also clear that $c_t\cup c_p$ is a function that satisfies the bounding requirement. We are left with checking condition (4) as in the definition of $\mathbb{Q}$. Given $(\alpha,\beta)\in dom(c_r)$ and $M\in s_r$, if $(\alpha,\beta)\in M$, we need to show $c_r(\alpha,\beta)\in M$. Since $t$ and $p$ are conditions, the following cases are what we need to check: \begin{itemize} \item $(\alpha,\beta)\in dom(c_p)-dom(c_t)$ and $M\in s_t-s_p$, \item $(\alpha,\beta)\in dom(c_t)-dom(c_p)$ and $M\in s_p-s_t$. \end{itemize} If $(\alpha,\beta)\in dom(c_p)-dom(c_t)$, $(\alpha,\beta)\not \in W$ so $(\alpha,\beta)\not \in M$ for any $M\in s_t$ as $t\in W$ and $W$ is transitive. If $(\alpha,\beta)\in dom(c_t)-dom(c_p)$ and $M\in s_p-s_t$ containing $(\alpha,\beta)$, then $M\cap W\in s_p\cap W\subset s_t$. As $t$ is a condition, we have $c_t(\alpha,\beta)\in M\cap W\subset M$. To see $\mathbb{Q}$ is strongly proper for $\mathcal{T}$, it suffices to notice that for any $W\in \mathcal{T}$ and $t=(c_t,s_t)\in W\cap \mathbb{Q}$, there exists $t'=(c_t, s_t')\leq t$ such that $W\in s_t'$ by Lemma \ref{strongproperness}. \end{proof} \begin{claim}\label{StrongPropernessCountable} For any countable $M^*\prec H(\lambda)$ for some large enough regular $\lambda$ containing $\mathbb{Q}, K$, if $r\in \mathbb{Q}$ satisfies that $M^*\cap K\in s_r$, then $r$ is strongly $(M^*, \mathbb{Q})$-generic. In particular, $\mathbb{Q}$ is strongly proper. \end{claim} \begin{proof} Let $M=M^*\cap K$. We need to show for any $r'\leq r$, there exists $r'\restriction M\in M\cap \mathbb{Q}$ weaker than $r'$, such that any extension of $r'\restriction M$ in $M$ is compatible with $r'$. Let $r'\restriction M$ be $(c_{r'} \cap M, res_M(s_{r'}))$. It is easy to see that $r'\restriction M$ is a condition weaker than $r'$. Let $t\in \mathbb{Q}\cap M$ be such that such that $t\leq r'\restriction M$. 
As $s_t\leq res_M(s_{r'})$ and $s_t\in M$, we know by Lemma \ref{strongproperness} there exists $s^*\leq s_t, s_{r'}$ such that $res_M(s^*)=s_t$. Furthermore, we may assume $s^*$ is the closure of $s_{r'}\cup s_t$ under intersection. Let $h=_{def}(c_t\cup c_{r'}, s^*)$. We will check that $h$ is a condition. First we check that $c_t\cup c_{r'}$ is a function that satisfies the bounding requirement. To see it is a function, let $(\alpha,\beta)\in dom(c_t)\cap dom(c_{r'})$, then $(\alpha,\beta)\in M$. Since $c_t\supset c_{r'}\restriction M$, we know $c_t(\alpha,\beta)=c_{r'}(\alpha,\beta)$. To see $c_t\cup c_{r'}$ is 2-bounded, suppose for the sake of contradiction, $\alpha_0<\alpha_1<\alpha_2<\beta$ are such that $c_h(\alpha_0,\beta)=c_h(\alpha_1,\beta)=c_h(\alpha_2,\beta)=\gamma\in \omega_1$. Note that there exists some $i<3$ such that $(\alpha_i, \beta)\in M$ since otherwise $(\alpha_k,\beta)\in dom(c_{r'})$ for all $k<3$, which contradicts with the fact that $r'$ is a condition. Also notice that $c_t(\alpha_i,\beta)=\gamma\in M$. By the requirement of a condition we know $f_\beta (\alpha_j)\leq \gamma$ for all $j<3$. But as $\gamma\in M$, $\gamma\subset M$, we know $\alpha_j\in M$ for all $j<3$. This means these three tuples are all in the domain of $c_t$. This is a contradiction to the fact that $t$ is a condition. Finally we check condition (4) in the definition of $\mathbb{Q}$. Given $(\alpha,\beta)\in dom(h)$ and $N\in s_h$, if $(\alpha,\beta)\in N$, then we need to verify $c_h(\alpha,\beta)\in N$. Recall that each element of $s_h$ is of the form $M_0\cap M_1$, $M_0$ or $M_1$ where $M_0\in s_{r'}, M_1\in s_t$. Hence, since $r'$ and $t$ are conditions, the cases we need to verify are: \begin{itemize} \item $(\alpha,\beta)\in dom(c_{r'})-dom(c_{t})$ and $N\in s_{t}-s_{r'}$ and \item $(\alpha,\beta)\in dom(c_t)-dom(c_{r'})$ and $N\in s_{r'}-s_t$. \end{itemize} For $(\alpha,\beta)\in dom(c_{r'})-dom(c_t)$ and $N\in s_t\cap \mathcal{S}-s_{r'}$, we know $(\alpha,\beta)\not\in M$ since otherwise, it would have been in $dom(c_{r'}\restriction M)\subset dom(c_t)$. But $s_t\in M$ since $t\in M$, which implies $N\in M$. Hence it is impossible to have $(\alpha,\beta)\in N$. For $(\alpha,\beta)\in dom(c_t)-dom(c_{r'})$ and $N\in s_{r'}\cap \mathcal{S}-s_t$ such that $(\alpha,\beta)\in N$, since $(\alpha,\beta)\in M$ and $s_{r'}$ is closed under intersection, we may assume $N\subset M$. If $N=M$, then we are done since $c_h(\alpha,\beta)=c_{t}(\alpha,\beta)\in M$. If $N\in M$, then we are done since $t\leq r'\restriction M$. So assume $N\not \in M$. By Claim \ref{residuegap}, $N$ occurs in a residue gap, namely there exists $W\in M\cap s_{r'}$ such that $N\in [W\cap M, W)=_{def} \{M'\in s_{r'}: rank(W\cap M)\leq rank(M') < rank(W)\}$. We will show $c_h(\alpha,\beta)\in N$ by inducting on the rank of $N$. As $(\alpha,\beta)\in N\subset W$, $(\alpha,\beta)\in M\cap W$. Also $c_h(\alpha,\beta)\in M\cap W$. If there is no transitive node between $W\cap M$ and $N$, then we are done since $W\cap M\subset N$ (recall that $s_{r'}$ is linearly ordered by $\in$ and Remark \ref{elaborate}). Otherwise, there exists $W'\in [W\cap M, N)\cap \mathcal{T}$. Let $N'= W'\cap N$. Then $rank(N')<rank(N)$. Since $(\alpha,\beta)\in N'$, by the induction hypothesis, we know that $c_h(\alpha,\beta)\in N' \subset N$. To see $\mathbb{Q}$ is strongly proper, for any condition $p$, for sufficiently large regular cardinal $\lambda$, we can find $M^*\prec H(\lambda)$ containing $p, K, \mathbb{Q}$. 
Then $p'=(c_p, cl(s_p\cup \{M^*\cap K\}))$ is a strongly $(M^*, \mathbb{Q})$-generic extension of $p$ by Lemma \ref{strongproperness}, where $cl(s_p\cup \{M^*\cap K\})$ denotes the closure of $s_p\cup \{M^*\cap K\}$ by intersection. \end{proof} By Claim \ref{StrongPropernessCountable} and Claim \ref{StrongPropernessTransitive}, $\omega_1$ and $\omega_2$ are preserved in the forcing extension by $\mathbb{Q}$. \begin{lemma}[Lemma 4.3 of \cite{MR2902230}]\label{borrow} For $\alpha_0<\alpha_1<\beta<\omega_2$ and $p\in \mathbb{Q}$, if $(\alpha_i,\beta)\not \in dom(c_p)$ for any $i<2$ and \begin{equation} \forall M\in s_p \ (\alpha_0,\beta)\in M \Leftrightarrow (\alpha_1,\beta)\in M \end{equation} Then there exists an extension $p'=(c_{p'}, s_p)$ with the same side condition such that $(\alpha_0,\beta), (\alpha_1,\beta)\in dom(c_{p'})$ and $c_{p'}(\alpha_0,\beta)=c_{p'}(\alpha_1,\beta)$. Furthermore, we can ensure that $dom(c_{p'})=dom(c_p)\cup \{(\alpha_0,\beta),(\alpha_1,\beta)\}$. \end{lemma} Building on the idea of Lemma 4.6 in \cite{MR2902230}, we prove a strengthened version in the following. \begin{lemma}\label{strengthen} In $V^{\mathbb{Q}}$, for any strongly proper forcing $\dot{P}$, $\Vdash_{\dot{P}}$ $c$ witnesses $\omega_2^V\not\to^{poly} (\omega_1)^2_{2-bdd}$. \end{lemma} \begin{remark}\label{confuse} More accurately, it is the coloring $c': [\omega_2]^2\to \omega_2$ such that $c'(\alpha,\beta)=(c(\alpha,\beta), \beta)$ that witnesses $\omega_2\not\to^{poly} (\omega_1)^2_{2-bdd}$. As it is clear from the context, we will continue to refer to $c$ as the witness in the following. \end{remark} \begin{proof}[Proof of Lemma \ref{strengthen}] Suppose otherwise for the sake of contradiction. Let $r\in \mathbb{Q}$, $\mathbb{Q}$-name $\dot{p}, \dot{P}$, $\mathbb{Q}*\dot{P}$-name $\dot{X}$, $\gamma\in \omega^V_2+1$ such that \begin{enumerate} \item $r\Vdash_{\mathbb{Q}} \dot{P}$ is a strongly proper forcing and $\dot{p}\in \dot{P}$ and \item $r\Vdash_{\mathbb{Q}} \dot{p}\Vdash_{\dot{P}} \sup \dot{X}=\gamma, \dot{X}$ is a rainbow subset for $c$ of order type $\omega_1$. \end{enumerate} Note that we include the possibility that $\gamma=\omega_2^V$ since it may be collapsed by $\mathbb{Q}*\dot{P}$. In either case, $cf(\gamma)>\omega$. Let $G\subset \mathbb{Q}$ containing $r$ be generic over $V$. Fix some sufficiently large regular cardinal $\lambda$ and let $C=(\dot{C})^G\subset ([H(\lambda)]^\omega)^{V[G]}$ be a club that witnesses the strong properness of $P$ in $V[G]$. \begin{claim} For any stationary subset $T\subset [H(\lambda)]^\omega$ in $V$, $T[G]=_{def} \{M[G]: M\in T\}$ is a stationary subset of $([H(\lambda)]^\omega)^{V[G]}$. \end{claim} \begin{proof}[Proof of the claim] In $V[G]$, let $f: H(\lambda)^{<\omega} \to H(\lambda)$. In $V$, let $\lambda^*$ be much larger regular cardinal than $\lambda$ and $M'\prec H(\lambda^*)$ containing $\dot{f}, H(\lambda)$ be such that $M=M'\cap H(\lambda)\in T$. Then $M[G]$ is closed under $f$, since for any $\bar{a}\in M[G]\cap [H(\lambda)^{V[G]}]^{<\omega} $, $f(a)\in M'[G]\cap (H(\lambda))^{V[G]} =M'[G]\cap H(\lambda)[G]=(M'\cap H(\lambda))[G]$. The last equality holds since for any $\dot{\tau}\in M', \dot{\sigma}\in H(\lambda)$ such that $(\dot{\tau})^G=(\dot{\sigma})^G$, by the fact that $M'[G]\prec H(\lambda^*)[G]$, $M'[G]\models $ there exists $\dot{\sigma}\in H(\lambda)^V$, $\dot{\tau}^G=\dot{\sigma}^G$. It is easy to see this is sufficient since $M'[G]\cap H(\lambda)^V=M'\cap H(\lambda)^V$. 
\end{proof} Find a countable $N'\in V$ such that $N'\prec H(\lambda)^V$ contains $r,\mathbb{Q}, \dot{p}, \dot{P}, \dot{X},\gamma$, and moreover $N=_{def} N' \cap K \in \mathcal{S}$ and $N'[G]\in C$. Let $\gamma'=\sup (N\cap \gamma)$. Extend $r$ to $t$ such that $N\in s_t$ by Lemma \ref{strongproperness}. Consequently, $t$ is strongly $(N', \mathbb{Q})$-generic. Find $t'\leq_{\mathbb{Q}} t$, $\beta\in [\gamma', \gamma)$ and $\mathbb{Q}$-names $\dot{p}'$, $\dot{q}$ such that $\dot{q} \in N'$ and $t'\Vdash_{\mathbb{Q}} \dot{p}'$ is strongly $(N'[\dot{G}], \dot{P})$-generic and $\dot{p}'\leq_{\dot{P}}\dot{p}$, $\dot{p}'\restriction N'[\dot{G}]=\dot{q}$ and $\dot{p}'\Vdash_{\dot{P}} \beta\in \dot{X}$. Let $m=|t'|<\omega$. Now consider $D=\{a\leq_{\mathbb{Q}} t'\restriction N': \exists \dot{b} \ a\Vdash_{\mathbb{Q}} \dot{b}\leq_{\dot{P}} \dot{q}, \exists \alpha_0<\cdots <\alpha_{2^m} \ \dot{b}\Vdash_{\dot{P}} \forall i\leq 2^m \ \alpha_i\in \dot{X}\}$. This set is dense below $t'\restriction N'$ and is in $N'$. Pick $a\in D\cap N'$ and $\dot{b}, \alpha_0,\cdots, \alpha_{2^m} \in N'$ as its witnesses. By the pigeonhole principle, there exist $i\neq j\leq 2^m$ such that for any $M \in \mathcal{S}\cap s_{t'}$, $(\alpha_i,\beta)\in M$ iff $(\alpha_j,\beta)\in M$. Applying Lemma \ref{borrow}, there exists $t''\leq t'$ such that $c_{t''}(\alpha_i,\beta)=c_{t''}(\alpha_j,\beta)$ with $s_{t''}=s_{t'}$ and $dom(c_{t''})=dom(c_{t'})\cup \{(\alpha_i,\beta), (\alpha_j,\beta)\}$. As $a\leq_{\mathbb{Q}} t'\restriction N'=t''\restriction N'$, $a$ and $t''$ are compatible. Find a common lower bound $t'''\leq_{\mathbb{Q}} a, t''$. Then $t'''\Vdash_{\mathbb{Q}}\dot{b}\leq_{\dot{P}} \dot{q}=\dot{p}'\restriction N'[\dot{G}]$ and $\dot{b}\in N'[\dot{G}]$. Hence $t'''$ forces that $\dot{b}$ and $\dot{p}'$ are compatible. Let $\dot{w}$ be a common lower bound. Then $(t''', \dot{w})$ forces $c(\alpha_i,\beta)=c(\alpha_j,\beta)$ as well as $\alpha_i, \alpha_j, \beta\in \dot{X}$. This is a contradiction since $(t''',\dot{w})\leq_{\mathbb{Q}*\dot{P}} (r,\dot{p})$ and $(r,\dot{p}) \Vdash _{\mathbb{Q}*\dot{P}} \dot{X}$ is a rainbow subset for $c$. \end{proof} An immediate consequence is that $\omega_2\not\to^{poly} (\omega_1)^2_{2-bdd}$ is consistent with the continuum being arbitrarily large, as Cohen forcing is strongly proper. This provides an alternative answer to a question in \cite{MR2902230}, which was originally answered in \cite{MR3437648} using a different method. However, in this model there exists a c.c.c forcing that forces a rainbow subset into $c\restriction [\omega_1]^2$. In $V^{\mathbb{Q}}$, let $R$ be the poset $\{a\in [\omega_1]^{<\omega}: a \text{ is a rainbow subset for }c\}$ ordered by reverse inclusion. By Remark \ref{confuse}, $a\in [\omega_1]^{<\omega}$ is a rainbow subset for $c$ if there are no $\alpha_0<\alpha_1<\beta$ in $a$ such that $c(\alpha_0,\beta)=c(\alpha_1,\beta)$. It is easy to see that in $V^{\mathbb{Q}}$, $R$ adds an unbounded subset of $\omega_1^V$. \begin{lemma} In $V^{\mathbb{Q}}$, $R$ is c.c.c. \end{lemma} \begin{proof} Otherwise, let $\langle \dot{\tau}_i: i<\omega_1\rangle$ be names forming a head-tail-tail system with root $r\in [\omega_1]^{<\omega}$, forced by some condition $p$ to be an uncountable antichain. Let $N'\prec H(\lambda)$ contain the relevant objects for some sufficiently large regular cardinal $\lambda$. Let $\delta=N'\cap \omega_1$. Let $q\leq p$ be a strongly $(N',\mathbb{Q})$-generic condition that determines some $\dot{\tau}_j=h$ such that $\min (h-r)\geq \delta$. Let $q'=q\restriction N'$. 
Find $t\leq q'$ in $N'$ such that $t$ decides some $\dot{\tau}_i = h' \in N'$ with $\min (h'-r)\geq \max_{(\alpha,\beta)\in dom(c_q)\cap N'} \max \{\alpha,\beta\} +1$. Now we extend $q$ to $q^*$ with $s_{q^*}=s_{q}$ and $dom(c_{q^*})\supseteq h'\times h$ such that $c_{q^*} [(h'-r)\times (h-r)] \cap (\delta\cup range(c_q))=\emptyset$, $c_{q^*}\restriction (h'-r)\times (h-r)$ is injective and $q^*\restriction N' = q'$. To see that we can do this, enumerate $(h'-r)\times (h-r)$ as $\{(\alpha_i,\beta_i): i<k\}$. We inductively add $(\alpha_i,\beta_i)$ to $c_q$ by Claim \ref{enlarge} while maintaining the other requirements. More precisely, suppose we have added $(\alpha_j,\beta_j)$ to the domain of $c_q$ for $j<i$. Let $M\in s_q$ be of the minimum rank such that $(\alpha_i,\beta_i)\in M$. Then $M\cap \omega_1>\max\{\delta, f_{\beta_i}(\alpha_i)\}$. Hence we only need to avoid finitely many elements in $M\cap \omega_1-(\max\{\delta, f_{\beta_i}(\alpha_i)\}+1)$, which is clearly possible. $q^*$ is compatible with $t$ since $t\leq q'=q^*\restriction N'$ and $q^*\leq q$, which is strongly $(N',\mathbb{Q})$-generic. But a common extension of $q^*$ and $t$ forces that $\dot{\tau}_i \cup \dot{\tau}_j$ is rainbow. We have reached the desired contradiction. \end{proof} \begin{remark} Similarly, if we force with finite rainbow subsets of $\omega_2$, then we will add a rainbow subset of size $\aleph_2$ for $c$ in a c.c.c forcing extension. \end{remark} \section{Some remarks and questions on partition relations of triples}\label{generalization} Recall that Todorcevic in \cite{MR716846} showed that it is consistent that $\omega_1\to^{poly} (\omega_1)_{<\omega-bdd}^2$. In fact, he showed a stronger conclusion, namely that for any $<\omega$-bounded coloring on $[\omega_1]^2$, it is always possible to partition $\omega_1$ into countably many rainbow subsets. The plain generalization of this result to the 3-dimensional case fails miserably. \begin{remark} $\omega_1\not \to^{poly} (4)^3_{<\omega-bdd}$. \end{remark} \begin{proof} Fix $a: [\omega_1]^2\to \omega$ such that for each $\alpha<\omega_1$, $a(\cdot, \alpha)$ is an injection from $\alpha$ to $\omega$. Define $f: [\omega_1]^3\to \omega$ by letting $f(\{\alpha,\beta,\gamma\}_{<})$ be $\max \{a(\alpha,\gamma), a(\beta,\gamma)\}\in \omega$. Now define $g: [\omega_1]^3\to \omega\times\omega_1$ to be $g(\{\alpha,\beta,\gamma\})=(f(\{\alpha,\beta,\gamma\}),\gamma)$. Note that $g$ is $<\omega$-bounded, since for each $\gamma<\omega_1$ and $n\in\omega$, there are only finitely many $\alpha<\gamma$ such that $a(\alpha,\gamma)\leq n$. For any $A=\{\alpha_0<\alpha_1<\alpha_2<\alpha_3\}\subset \omega_1$ of size 4, pick $i<3$ such that for any $j<3$ with $j\neq i$, $a(\alpha_j,\alpha_3)<a(\alpha_i,\alpha_3)=:n$. Say $i=0$ for the sake of demonstration. Then $\{\alpha_0, \alpha_1,\alpha_3\}$ and $\{\alpha_0, \alpha_2,\alpha_3\}$ get the same color $(n,\alpha_3)$ under $g$. \end{proof} \begin{remark} There are various limitations on Ramsey theorems for higher dimensions. For example, $2^\omega \not\to (\omega+2)^3_2$. Hence we need other methods to prove higher dimensional rainbow Ramsey theorems. \end{remark} Given a 2-bounded normal coloring $f$ on $[\delta]^3$, let us try to classify what types of obstacles there are for getting a rainbow subset. 
\begin{enumerate} \item[Type 1] for some $\alpha,\beta,\alpha',\beta'<\gamma$ such that $\{\alpha,\beta\}\cap \{\alpha',\beta'\}=\emptyset$ and $f(\alpha,\beta,\gamma)=f(\alpha',\beta',\gamma)$ \item[Type 2] for some $\alpha<\beta<\gamma<\delta$, $f(\alpha,\gamma, \delta)=f(\alpha,\beta, \delta)$ \item[Type 3] for some $\alpha<\beta<\gamma<\delta$, $f(\alpha,\beta,\delta)=f(\beta,\gamma,\delta)$ \item[Type 4] for some $\alpha<\beta<\gamma<\delta$, $f(\alpha,\gamma,\delta)=f(\beta,\gamma,\delta)$. \end{enumerate} \begin{remark} By repeatedly applying the Ramsey theorem on $\omega$ to eliminate bad tuples of the 4 types above, one can show $\omega_1\to^{poly} (\omega+k)^3_{l-bdd}$ for any $k,l\in \omega$. This is already in contrast with the dual statements in Ramsey theory. \end{remark} \begin{question} Can we prove in ZFC that $\omega_1\to^{poly} (\alpha)^3_{2-bdd}$ for any $\alpha<\omega_1$? \end{question} \begin{question} Is $\omega_1\to^{poly} (\omega_1)^3_{2-bdd}$ consistent? Is it a consequence of $\mathrm{PFA}$? \end{question} \Addresses \end{document}
\begin{document} \title{On a new type of Inequality related to the Uniform Sublevel Set Problem} \begin{abstract} Recently, Steinerberger \cite{steinerberger2019sublevel} proved a uniform inequality for the Laplacian serving as a counterpoint to the standard uniform sublevel set inequality which is known to fail for the Laplacian. In this note, we give an elementary proof of this result which highlights a step allowing for adaptations to other situations, for instance, we show that the inequality also holds for the heat operator. We formulate some naturally arising questions. \end{abstract} \begin{footnotesize} \textbf{Key words.} Oscillatory integrals, sublevel set estimates, uniform inequality, Laplacian, heat operator \end{footnotesize} \section{Introduction} A central problem throughout analysis is to understand how oscillatory integrals $$I(\lambda)=\int e^{i\lambda u(x)}\,dx$$ decay for large values of a real ``frequency parameter" $\lambda$, where $u$ is a real-valued ``phase" function. In general, considerations such as the domain of integration or other functions multiplying the oscillatory factor inside the integral are important, but for the sake of this discussion we will not go into these specifics. Typically this decay will be expressed as $$|I(\lambda)|\leq C\lambda^{-\delta}$$ for some $\delta>0$. Here we have in mind the idea that as $\lambda$ increases, the small differences in $u(x)$ from moving in $x$ become large differences in $\lambda u(x)$, which in turn corresponds to rapid oscillation in $e^{i\lambda u(x)}$. Thus in the integral, we expect $I(\lambda)$ to decay for large $\lambda$ provided $u$ does not stay near any particular value, and the more quickly $u$ ``moves around", the greater the cancellation we expect to occur, and hence the greater we can take $\delta$ to be. Thus one of the most natural conditions to impose is that $Du$ be bounded below by some positive constant, for some differential operator $D$. Crucial to many applications and key to the discussion in this paper is the idea of uniformity of the constant $C$ within a large class of phases. A natural example appears when studying the Fourier transform of some density on a $k$-dimensional surface $S$ in $\mathbb{R}^n$. After performing a change of variables, we will have integrals containing an oscillatory factor $e^{-i\phi(x)\cdot\xi}$, where $\phi$ is a function on a piece of $\mathbb{R}^k$ parametrising a piece of $S$, and $\xi$ is the Fourier variable. Writing $\xi=|\xi|\omega$ for $\omega\in S^{n-1}$, we consider $|\xi|$ to be our frequency parameter and $-\phi(x)\cdot\omega$ is a class of phase functions indexed by $\omega$. If we are to obtain estimates on the Fourier transform of the form $C|\xi|^{-\delta}$, we need to make sure the constant $C$ does not blow-up as we vary over $\omega$. In line with the intuition expressed above, the decay of this Fourier transform is well-known to relate to the curvature of the surface, see Stein \cite{stein1993harmonic} for a discussion of the fundamental results on oscillatory integrals and their relation to the Fourier transform of surface measures. A related problem is the sublevel set problem: given a real-valued function $u$ and a constant $c$ what conditions should we impose so that estimates of the form $|\{x\in\Omega:|u(x)-c|\leq\alpha\}|\leq C\alpha^\delta$ hold for appropriate $\Omega$. 
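As a concrete toy illustration of the sublevel set problem just described (purely illustrative and not part of the results discussed in this note; the phase $u(x)=x^k$, the sampled shifts $c$ and the grid sizes are arbitrary choices), the following short Python sketch estimates $|\{x\in[0,1]:|u(x)-c|\leq\alpha\}|$ on a grid and compares it with $\alpha^{1/k}$. For this phase the $k$-th derivative equals $k!\geq 1$, and the ratio remains bounded uniformly in the shift $c$, which is the kind of uniform sublevel set estimate discussed below.
\begin{verbatim}
import numpy as np

# Toy illustration of the sublevel set problem for the phase u(x) = x**k on [0,1].
# The k-th derivative of u is k! >= 1, and the measure of {|u - c| <= alpha}
# should be O(alpha**(1/k)) uniformly in the shift c.
k = 3
x = np.linspace(0.0, 1.0, 2_000_001)          # fine grid on [0,1]
u = x**k

for alpha in (1e-2, 1e-3, 1e-4):
    worst = 0.0
    for c in np.linspace(0.0, 1.0, 41):       # sample a range of shifts c
        measure = np.mean(np.abs(u - c) <= alpha)   # grid estimate of the measure
        worst = max(worst, measure / alpha**(1.0 / k))
    print("alpha = %.0e, sup over sampled c of measure/alpha^(1/k) = %.3f"
          % (alpha, worst))
\end{verbatim}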
It is typical to seek estimates independent of $c$ so that the problem is invariant under shifting $u$ by a constant, and we can assume without loss of generality that $c=0$. That this should be related is apparent from the intuition expressed above, that oscillatory integrals should observe greater cancellation if $u$ does not spend too much time near a given value. And just as in the oscillatory integral case, we are often not only interested in the best possible $\delta$, but also in the uniformity of the constant $C$ in a class of functions. As mentioned above, typically the class of functions for which we seek uniform bounds is those having $Du$ bounded below by some positive constant, where $D$ is a differential operator. For differential operators where $u$ itself does not appear explicitly, in particular for linear differential operators, this condition is invariant under translation of $u$ by a constant, so uniform estimates are necessarily independent of $c$. We now recall some discussion from the paper of Carbery-Christ-Wright \cite{carbery1999multidimensional}. Oscillatory integral estimates of the form above are known to imply the corresponsing sublevel set estimates. In the case of monomial derivatives, that is, the differential operators $D^\beta=\partial^{\beta_1}_1\dots\partial^{\beta_n}_n$ for $\beta=(\beta_1,\dots,\beta_n)\in\mathbb{N}_0^n$, it is known that we can take $\delta=(|\beta_1|+\dots+|\beta_n|)^{-1}$ in the sublevel set problem when $D^\beta u$ is bounded below by a positive constant, and that this is the optimal $\delta$ provided no extra conditions are imposed. The same $\delta$ works in the oscillatory integral problem, provided some slightly more restrictive conditions are also imposed, we shall not discuss these here. However, in all but dimension $1$, this $\delta$ has not been shown to hold with a uniform constant. The main results of the Carbery-Christ-Wright paper are the following, and an analogue for oscillatory integrals with the slightly more restrictive conditions imposed. \begin{CCW} Denote the unit cube in $\mathbb{R}^n$ by $Q_n=[0,1]^n$. There exists $C, \delta>0$ such that for any smooth $u$ in a neighbourhood of $Q_n$ having $D^\beta u\geq 1$ on $Q_n$, we have the sublevel set estimates $$|\{x\in Q_n:|u(x)|\leq\varepsilon\}|\leq C\varepsilon^\delta.$$ Note that $C$ and $\delta$ do not depend on $u$. \end{CCW} They also observe that their arguments make sense when $Q_n$ is replaced with different convex sets. Note that this result says nothing about the optimality of $\delta$. It remains open in higher dimensions as to what is the best $\delta$ for which such bounds hold with a uniform constant. In higher dimensions, we have access to many interesting differential operators, one natural example being the Laplacian. For the Laplacian, one can obtain estimates with $\delta=1/2$ and a non-uniform constant depending on the derivatives of the function up to third order, but in the paper of Carbery-Christ-Wright, it is shown that no uniform estimate can hold for any positive $\delta$. \begin{CCW2} For each $\varepsilon\in(0,1/2)$ there exists a smooth $u$ with $\Delta u \equiv 1$ on $[0,1]^2$ but also satisfying the estimate $|\{x\in[0,1]^2:|u(x)|\leq\varepsilon\}|\geq 1-\varepsilon$. \end{CCW2} This result extends to higher dimensions by considering this family of counterexamples, and extending them as functions in higher dimensions by asking that they remain constant in the additional variables. 
Thus no uniform sublevel set estimate holds for any operator consisting of the Laplacian in $2$ or more of the variables plus additional terms - in particular, we observe failure for the heat and wave operators in $2$ or more spatial dimensions. The above result for the Laplacian is complimented by the result of Steinerberger \cite{steinerberger2019sublevel} which we shall discuss in this paper. It states: \begin{stei} There exists a constant $c_n>0$ depending only on the dimension so that if $u:B\rightarrow\mathbb{R}$ satisfies $\Delta u\geq 1$ in $B$, where $B$ is a unit Euclidean ball in $\mathbb{R}^n$, then $$\|u\|_{L^{\infty}(B)}\cdot|\{x\in B:|u(x)|\geq c_n\}|\geq c_n.$$ \end{stei} This result is in essence saying that if a sublevel set for some small $\varepsilon$ is large, so that its complement is small, then $\|u\|_{L^{\infty}(B)}$ must be large. Applying this to the above family of examples, we see that $u$ must be very large somewhere on the complement of that sublevel set. The intuition for this can be seen from a basic fact which shall be central to our proof - considering the averages of a function $u$ over balls as a function of the radius, we find that the derivative can be quantified exactly in terms of the Laplacian of $u$, indicating that functions with large Laplacian should have ``large" variations. The interesting aspect of this theorem is the quantification of this fact in a uniform way. It is worth remarking that just as in the Carbery-Christ-Wright Theorem, the assumptions and hence the conclusions of the statement are invariant under replacing $u$ by $u-c$ for any real number $c$. Steinerberger posed the basic question of whether we can replace $\|u\|_{L^{\infty}(B)}$ with some power of an $L^p$ norm. An affirmative answer to this question is given in the following theorem. \begin{thm}\label{LapThm} Given an open, bounded $\Omega\subseteq\mathbb{R}^n$, there exists a constant $c>0$ depending only on $n$ and $\Omega$ so that if $u:\Omega\rightarrow\mathbb{R}$ satisfies $\Delta u\geq 1$ on $\Omega$, then $$\|u\|_{L^p(\Omega)}\cdot|\{x\in \Omega:|u(x)|\geq c\}|^{1/p'}\geq c$$ for each $1\leq p\leq \infty$, and $p'$ the conjugate exponent. Note that $c$ does not depend on $p$. \end{thm} This result is less interesting in the case $p=1$, since then $p'=\infty$ and this is just giving a lower bound on the $L^1$ norm, which follows more directly from a basic property of subharmonic functions detailed in our proof. We shall also show how the proof can be extended to work for the heat operator, for which uniform sublevel estimates (including in the case of one spatial variable, as we will examine later) fail. \begin{thm}\label{HeatThm} Let $\Omega\subseteq\mathbb{R}^{n+1}$ be open and bounded. We shall denote points in $\mathbb{R}^{n+1}$ by $(x,t)\in\mathbb{R}^n\times\mathbb{R}$ and use $\Delta_x$ for the Laplacian in the first $n$ components. The usual heat operator will be denoted $H=\Delta_x-\partial_t$. Then for those $u$ satisfying $Hu\geq 1$ on $\Omega$ we have the estimate $$\|u\|_{L^p(\Omega)}\cdot|\{x\in \Omega:|u(x)|\geq c\}|^{1/p'}\geq c$$ for each $1\leq p\leq \infty$, and $p'$ the conjugate exponent, where $c>0$ depends only on $n$ and $\Omega$ and not on $p$ or $u$. 
\end{thm} To avoid having to repeatedly explain the type of inequality to which we refer, we shall describe any inequality having the form $$\|u\|_{L^p(\Omega)}\cdot|\{x\in \Omega:|u(x)|\geq c\}|^{1/p'}\geq c$$ over some class of functions $u$, with $c>0$ independent of $u$, as a Uniformly Balancing Sublevel Inequality. This name is chosen to stress that the $L^p$ norm is balancing the potentially small complement of the sublevel set, and does so in a uniform way. These inequalities are, as one would expect, consequences of uniform sublevel set estimates, since we know that each fixed sublevel set is uniformly small, and hence its complement is uniformly large, which also gives a uniform largeness of the $L^p$ norm. We discuss this observation in the following section. As mentioned above, a central aspect of the proof of Theorem \ref{LapThm} is a derivative formula for a parameterised family of averages that is expressed in terms of the relevant differential operator - the Laplacian. This will also be the case for Theorem \ref{HeatThm}, however, the analogous formula has an issue that we will need to resolve, which will require us to use some slightly non-standard modifications of the averages considered. The paper will be laid out as follows. Section \ref{disc} follows on from the discussion of the introduction, providing a more in-depth discussion of how these inequalities relate to the uniform sublevel set problem and some other properties that do not fit neatly into this introduction. Section \ref{proofs} contains the proofs of the main results, as well as some other results not due to the author, but reformulated in a context which stresses the ways it will be helpful for us, and perhaps that it may be helpful elsewhere. Section \ref{ques} provides a little further discussion of the inequalities that must be postponed until after the proofs, followed by some comments on some naturally-arising questions. In particular, for which differential operators does a uniformly balancing sublevel inequality hold? \section{Properties of the inequalities}\label{disc} Foremost, we confirm that uniformly balancing sublevel inequalities follow from uniform sublevel set estimates. This observation creates a strict hierarchy of problems - uniform oscillatory integral estimates imply uniform sublevel set estimates which imply uniformly balancing sublevel inequalities. Thus further results on these inequalities could provide some insight into the uniform sublevel set problem. \begin{prop}\label{hier} Suppose that $\Omega$ is open and bounded and we have the sublevel set estimates $|\{x\in\Omega:|u(x)|\leq\varepsilon\}|\leq C\varepsilon^\delta$ for certain functions $u$ on $\Omega$. Then we have $$\|u\|_{L^p(\Omega)}\cdot|\{x\in \Omega:|u(x)|\geq c\}|^{1/p'}\geq c$$ for each $1\leq p\leq \infty$, where $p'$ is the conjugate exponent, and $c$ depends only on $C$, $\Omega$ and $\delta$. \end{prop} \begin{proof} By Chebyshev's inequality, we have for each $1\leq p<\infty$ and $\varepsilon>0$ $$\varepsilon|\{x\in\Omega:|u(x)|\geq\varepsilon\}|^{1/p}\leq\|u\|_{L^p(\Omega)}$$ and thus we obtain $$\|u\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|u(x)|\geq \varepsilon\}|^{1/p'}\geq\varepsilon|\{x\in\Omega:|u(x)|\geq \varepsilon\}|.$$ However, the set on the right hand side is the complement of the sublevel set, and so its measure can be bounded below by $|\Omega|-C\varepsilon^\delta$. 
Thus we can choose $\varepsilon$ depending only on $C, \Omega$ and $\delta$ so that this is in turn bounded below by $|\Omega|/2$, obtaining $$\|u\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|u(x)|\geq \varepsilon\}|^{1/p'}\geq\varepsilon|\Omega|/2.$$ Now if $|\Omega|/2\geq 1$, we can bound this below by $\varepsilon$ and take $c=\varepsilon$, else note that replacing $\varepsilon$ by $\varepsilon|\Omega|/2$ on the left-hand side only makes it bigger, in which case we can take $c=\varepsilon|\Omega|/2$. In either case $c$ depended only on $C$, $\Omega$ and $\delta$. We can take the limit $p\rightarrow\infty$ or give a slightly modified direct argument to obtain the $p=\infty$ result. \end{proof} Primarily, we are interested in the problem of whether a uniformly balancing sublevel inequality holds in the set of functions with $Du\geq A>0$, where $D$ is a differential operator, typically linear (for non-linear operators we may consider $Du\leq A<0$ separately, since we cannot simply replace $u$ with $-u$ in this case). At present, we do not know any linear differential operators such that this fails, but we do have a non-linear counterexample - we shall consider $Du=-\det \text{Hess }u$. Under additional assumptions, such as assuming $u$ to be convex, there are uniform sublevel set estimates associated to the determinant of the Hessian, see Carbery \cite{carbery2010uniform}. We use here an example from the paper of Gressman \cite{gressman2011uniform}, which considers some uniform sublevel set estimates and gives remarks on situations when the uniformity fails. We see that the example given there also gives failure of a uniformly balancing sublevel inequality, in a rather striking way. Concretely, we consider the operator $Du=(\partial^2_{xy}u)^2-(\partial^2_{xx}u)(\partial^2_{yy}u)$ on $[0,1]^2$, and the family of functions $u_N(x,y)=N^{-1}e^x\sin(Ny)$. Clearly we have that $Du_N=e^{2x}\geq 1$, but given any $c$, we can always take $N$ large enough that $\{x\in[0,1]^2:|u_N(x)|\geq c\}$ is empty, so no uniformly balancing sublevel inequality holds. We next provide some remarks on the formulation of the uniformly balancing sublevel inequalities. With regards to the power of $1/p'$, we remark that this may not be the largest power possible for a given class of functions and a choice of $p$, for instance in our main theorems. Certainly, in the proof of Proposition \ref{hier} there is no optimal power and it is clear how we could obtain any larger power. It may also be the case that the best power we can obtain in some situations may be less than $1/p'$. Such variant inequalities would still capture the essential feature of an $L^p$ norm being used to ``balance" a product with a superlevel set measure that may be small. There are, however, certain natural aspects to this choice, namely that we are then considering the product of an $L^p$ norm with an $L^{p'}$ norm, which naturally arises in H\"older's inequality, and consequently in the duality of the $L^p$ spaces. Though not a necessity for it, this natural choice leads to us being able to bound the product below by a constant independent of $p$ in Proposition \ref{hier}, in some discussion below regarding the effect of change of variables, and in the main theorems. This is by no means evidence to suggest that $1/p'$ is the correct power to use, but it is certainly a convenient one. We might also be interested in considering the superlevel set at a height different from the constant we use as a lower bound. 
Thus more generally, we might consider inequalities of the form $$\|u\|_{L^p(\Omega)}\cdot|\{x\in \Omega:|u(x)|\geq c_1\}|^{1/p'}\geq c_2$$ with $c_1,c_2>0$. Such a consideration would only really be of importance if we were concerned with optimality, since as observed in the proof of Proposition \ref{hier}, replacing $c_1$ and $c_2$ by smaller values still yields a true inequality, so we can certainly replace $c_1$ and $c_2$ with their minimum to assume they are equal. We will not concern ourselves with optimality in the main proofs below, but stress our methods do not merely show existence of a suitable constant, but also allow us to find them constructively - even though there is no reason to expect they are anything close to optimal. One other convenient property of uniformity is scaling - for instance, if in the Carbery-Christ-Wright Theorem we instead consider $Du\geq A>0$, we can apply results in the class where $Du\geq 1$ by considering $u/A$ in place of $u$, and it is clear how the constant appearing in the sublevel set bounds must scale. Uniformly balancing sublevel inequalities have the same property - indeed, if such an inequality is known to apply to $u$ in a given class of functions, then whenever $v=Au$, we can apply the inequality to $v/A$ and rearrange the result to obtain $$\|u\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|u(x)|\geq cA\}|^{1/p'}\geq cA.$$ Thus for linear differential operators we can scale the inequality $Du\geq A$ to apply results in the class $Du\geq 1$, and more generally if the operator has some homogeneity so that $D(\lambda u)=\lambda^\alpha Du$, we can make similar statements - for instance, this applies to the determinant of the Hessian. Furthermore, uniformity allows us to take limits in the inequality to obtain results for ``rough" functions. For instance, by smoothing out a function that only satisfies a differential inequality in a weak/distributional sense, we can apply the inequality first to smooth approximations and then use a standard limiting argument to conclude that it holds for more general functions. We shall not formulate this precisely, but simply note that our results can thus be extended to greater generality if desired. There are other useful properties shared with uniform sublevel set estimates. The next example we give shows that we can lift these inequalities to higher dimensions, for instance, Theorem \ref{LapThm} also implies that a uniformly balancing sublevel inequality holds for the Laplacian in the first two variables considered as a differential operator on $\mathbb{R}^3$. Concretely, we can say the following. \begin{prop} Let $\Omega_1\subseteq\mathbb{R}^{n_1}$ be open and bounded and suppose that within some set of functions $S$ on $\Omega_1$, we have the uniformly balancing sublevel inequality $$\|u\|_{L^p(\Omega_1)}\cdot|\{x\in\Omega_1:|u(x)|\geq c\}|^{1/p'}\geq c.$$ Let $\Omega_2$ be a bounded open set in $\mathbb{R}^{n_2}$ and suppose $\Omega\subseteq\mathbb{R}^{n_1+n_2}$ contains $\Omega_1\times\Omega_2$. Then for functions $v$ on $\Omega$ such that their restrictions to $\Omega_1\times\{y\}$ lies in $S$ for each $y\in\Omega_2$, we have $$\|v\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|v(x)|\geq c\}|^{1/p'}\geq c|\Omega_2|.$$ \end{prop} \begin{proof} We write $(x,y)$ for an element of $\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$. 
Let $v$ be as in the statement, then \begin{equation}\label{ineq1} c\leq\|v(\cdot,y)\|_{L^p(\Omega_1)}\cdot|\{x\in\Omega_1:|v(x,y)|\geq c\}|^{1/p'} \end{equation} It is clear that \begin{align*} \|v\|_{L^p(\Omega_1\times\Omega_2)}&=\|(\|v(\cdot,y)\|_{L^p(\Omega_1)})\|_{L^p(\Omega_2)}\\ |\{(x,y)\in\Omega_1\times\Omega_2\colon|v(x,y)|\geq c\}|^{1/p'}&=\||\{x\in\Omega_1:|v(x,y)|\geq c\}|^{1/p'}\|_{L^{p'}(\Omega_2)} \end{align*} hence an application of H\"older's inequality to inequality (\ref{ineq1}) yields the result for $\Omega_1\times\Omega_2$, which in turn gives the result on $\Omega$. \end{proof} We note the last step of the proof holds more generally - if a uniformly balancing sublevel inequality holds on some set, then it holds on any larger set. Moreover, we can also consider the effect of diffeomorphisms on uniformly balancing sublevel inequalities. Concretely, suppose we have a uniformly balancing sublevel inequality for functions $u$ on $\Omega$ and a diffeomorphism $\phi:\Omega\rightarrow\Omega'$. By the change of variables formula we have $$\int_\Omega|u(x)|^p\,dx=\int_{\Omega'}|u(\phi^{-1}(x'))|^p|\det J\phi^{-1}(x')|\,dx'$$ where $J\phi^{-1}$ is the Jacobian of $\phi^{-1}$. Let $M$ be the supremum of $|\det J\phi^{-1}(x')|$. Then we have $$\|u\|_{L^p(\Omega)}\leq \|u\circ\phi^{-1}\|_{L^p(\Omega')}M^{1/p}.$$ As the other term in the uniformly balancing sublevel inequality is an $L^{p'}$ norm, the same reasoning gives \begin{align*} c&\leq\|u\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|u(x)|\geq c\}|^{1/p'}\\ &\leq M\|u\circ\phi^{-1}\|_{L^p(\Omega')}\cdot|\{x'\in\Omega':|u\circ\phi^{-1}(x')|\geq c\}|^{1/p'}. \end{align*} Hence uniformly balancing sublevel inequalities for a class of $u$ on $\Omega$ yield uniformly balancing sublevel inequalities for the class of $u\circ\phi^{-1}$ on $\Omega'$. Note that going between these classes of functions presents no loss when the Jacobian determinant is constant, such as in the case of invertible linear transformations. An important consequence of this is the following: Suppose we know a uniformly balancing sublevel inequality holds in the class where $Du(x)\geq 1$, for $D$ a differential operator. For clarity say that $D$ is written in terms of ``$x$ coordinates". Then if we express $D$ in terms of $x'$ coordinates, with $\phi$ being the diffeomorphism giving the change of coordinates, we know that in the class where $D(u\circ\phi^{-1})(x')\geq 1$, a uniformly balancing sublevel inequality holds. In short, the class of differential operators for which uniformly balancing sublevel inequalities hold is invariant under change of coordinates. As an example, let us consider which constant coefficient linear differential operators satisfy uniformly balancing sublevel inequalities. By using invertible linear transformations, one can reduce the cases to study to certain canonical forms. For example, the theory of quadratic forms tells us that to understand the homogeneous second order examples in $\mathbb{R}^n$, we need only understand those having associated polynomials of the form $$\sum_{i=1}^{m_1}x_i^2-\sum_{j=m_1+1}^{m_2}x_j^2$$ for $m_2\leq n$. In $\mathbb{R}^2$, this allows us to give a complete picture - in fact, a uniformly balancing sublevel inequality holds in all cases. The quadratic form $x_1^2+x_2^2$ corresponds to the Laplacian, hence follows from Theorem \ref{LapThm}. All the others satisfy the Carbery-Christ Wright Theorem, so in fact a uniform sublevel estimate holds. 
This is obvious in all cases but $x_1^2-x_2^2$, but this is $(x_1-x_2)(x_1+x_2)$, so setting $x=x_1-x_2$ and $y=x_1+x_2$, we obtain $xy$ via change of coordinates, which is of the correct form. With regards to the failure of uniform sublevel set estimates, we note that the counterexample for the Laplacian by Carbery-Christ-Wright \cite{carbery1999multidimensional} relies on Mergelyan's Theorem. One can extend this construction to other differential operators without any difficulty provided an analogue of Mergelyan's Theorem holds for that operator. This is a huge request, and although it may be possible, it is much more reasonable to work with Runge's Theorem, for which many generalisations have been considered - in particular, for the heat operator. Both Runge's Theorem and Mergelyan's Theorem can be found in Rudin's book \cite{rudin1966real}, and the analogue of Runge's Theorem for the heat operator was given by Jones \cite{jones1975approximation}. Runge's Theorem and Mergelyan's Theorem concern holomorphic functions in the plane, but by considering the real and imaginary parts separately can be considered as a result concerning harmonic functions in the plane. We shall state the necessary consequences of these results during the proof of the forthcoming proposition. We shall establish, as claimed in the introduction, that even with one spatial dimension we have no uniform sublevel estimate for the heat operator. At the same time, we shall re-establish the Carbery-Christ-Wright counterexample, using an alternative proof communicated by James Wright. We stress that this statement is by no means as general as we could make it, in light of other situations where a Runge-type theorem holds - see, for instance, Kalmes \cite{kalmes2019power}. \begin{prop} Consider the operators $\Delta=\partial_{xx}^2+\partial_{tt}^2$ and $H=\partial_{xx}^2-\partial_t$ on $[0,1]^2$. For each $\varepsilon>0$, there exists a smooth $u$ with $\Delta u \equiv 1$ on $[0,1]^2$ but also satisfying the estimate $|\{x\in[0,1]^2:|u(x)|\leq\varepsilon\}|\geq 1-\varepsilon$. Furthermore, for each $\varepsilon>0$, there exists a smooth $u$ with $Hu \equiv 1$ on $[0,1]^2$ but also $|\{x\in[0,1]^2:|u(x)|\leq\varepsilon\}|\geq 1-\varepsilon$. \end{prop} \begin{proof} We will prove both statements simultaneously, indicating the differences as they appear. The Runge theorem for the Laplacian says that for a harmonic function on an open set $U$ containing a compact set $K$ to be uniformly approximated on $K$ by polynomials harmonic on all of $\mathbb{R}^2$, it is enough that the complement of $K$ is connected. The Runge theorem for the heat operator says that a temperature (a solution to the heat equation) on an open set $U$ containing a compact set $K$ may be uniformly approximated on $K$ by temperatures on all of $\mathbb{R}^{n+1}$ provided that the $t$-slices of the complement of $U$ have no compact component, that is, the sets $\{x\in\mathbb{R}^n:(x,t)\in U^{c}\}$ have no compact component. For us, then, it is clearly sufficient that the $t$-slices of $U$ be intervals. In each case, our compact set will be a collection of thin rectangles that cover most of the unit square. 
To be precise, consider some $0<\delta<1/2$ and let $K$ be the union of the disjoint rectangles $$K:=\bigcup_{i=1}^{\left\lfloor\frac{4-\delta^2}{4\delta+\delta^2}\right\rfloor} \left[\frac{\delta}{4},1-\frac{\delta}{4}\right]\times\left[i\left(\delta+\frac{\delta^2}{4}\right)-\delta,i\left(\delta+\frac{\delta^2}{4}\right)\right].$$ One sees that $\lfloor(4-\delta^2)/(4\delta+\delta^2)\rfloor(\delta+\delta^2/4)\leq 1-(\delta^2/4)$, so that $K\subseteq[0,1]^2$, and that the rectangles are separated by $\delta^2/4$, so for instance the $\delta^2/16$ neighbourhood of $K$ (say in the supremum norm) is also a disjoint collection of rectangles. This neighbourhood will be our open set $U$. Consider $v(x,t)=t^2/2$ for the $\Delta$ case, $v(x,t)=-t$ for the $H$ case. Then $\Delta v$, respectively $Hv$, is identically $1$. Now on each of the thin rectangles of $U$, pick the $t$ coordinate of some point in the rectangle - call it $c$ - and define $w_1(x,t)$ on that component to be $c^2/2$ (respectively $-c$). Since $w_1$ is locally constant, it is harmonic (respectively, a temperature) on $U$. It is easily seen that, since the side length along the $t$ axis of each such rectangle is $\delta+(\delta^2/8)$, $w_1$ uniformly approximates $v$ on $U$ to within $\delta+(\delta^2/8)$. By Runge's theorem for harmonic functions, it is clear that we can approximate $w_1$ uniformly to within $\delta$ on $K$ by a function $w_2$ harmonic on all of $\mathbb{R}^2$. Thus $w_2$ approximates $v$ on $K$ to within $2\delta+(\delta^2/8)$. Hence $u:=v-w_2$ satisfies $\Delta u \equiv 1$ and $|u(x)|\leq 2\delta+(\delta^2/8)$ on $K$. The analogous statement for the $H$ case is true, since the $t$-slices of $U$ are intervals, and so we can apply the Runge theorem for temperatures. It remains to note that $K$ has large measure. Indeed, the measure of $K$ is \begin{align*} \delta\left(1-\frac{\delta}{2}\right)\left\lfloor\frac{4-\delta^2}{4\delta+\delta^2}\right\rfloor&\geq\delta\left(1-\frac{\delta}{2}\right)\left(\frac{4-\delta^2}{4\delta+\delta^2}-1\right)\\ &=\frac{2\delta^3-12\delta+8}{8+2\delta}>\frac{8-14\delta}{8+2\delta}=1-\frac{16\delta}{8+2\delta}\\ &>1-2\delta. \end{align*} Taking $2\delta+(\delta^2/8)\leq\varepsilon$ completes the proof. \end{proof} \section{Proofs of the main results}\label{proofs} Our proofs will rely on certain results on growth rates of appropriate families of averages, both of which are consequences of some elementary calculations, but they are not always explicitly stated in the literature. For the sake of completeness we produce them here in a way that emphasises their applicability. In the following proofs $u$ will denote a smooth function on a bounded open set $\Omega$. \subsection{Proof of Theorem \ref{LapThm}} The derivative formula for the Laplacian is entirely routine and well-known. \begin{prop} Consider, for $\overline{B_R(x)}\subseteq\Omega$, the function $\phi:[0,R]\rightarrow\mathbb{R}$ given by $\phi(0)=u(x)$ and by the average $$\phi(r)=\frac{1}{|B_r|}\int_{B_r(x)}u(y)\,dy$$ for $0<r\leq R$. Clearly $\phi$ is continuous on $[0,R]$, and on the open interval $(0,R)$ we have \begin{equation}\label{deriv1} \phi'(r)=\frac{1}{|B_r|}\int_{B_r(x)}\frac{r^2-|x-y|^2}{2r}\Delta u(y)\,dy. \end{equation} \end{prop} Here $B_r(x)$ denotes the Euclidean ball of radius $r$ centred at $x$, and $|B_r|$ is the Lebesgue measure of the ball. 
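Before giving the proof, we record a quick numerical sanity check of formula (\ref{deriv1}); it plays no role in the argument, and the quadratic test function, base point and quadrature parameters below are arbitrary choices made only for illustration. The sketch compares a centred finite difference of the ball averages with the right-hand side of (\ref{deriv1}), both computed by midpoint quadrature in polar coordinates; for the test function used here both quantities should be close to $2r$.
\begin{verbatim}
import numpy as np

# Numerical sanity check of the derivative formula for ball averages
# (illustration only; test function, base point and resolution are arbitrary).
def u(x, y):                     # smooth test function with Laplacian 2 + 6 = 8
    return x**2 + 3*y**2

lap_u = 8.0                      # its (constant) Laplacian
x0, y0 = 0.3, -0.7               # centre of the balls

def ball_average(r, ns=400, nt=400):
    # midpoint quadrature in polar coordinates of (1/|B_r|) * integral of u over B_r
    s = (np.arange(ns) + 0.5) * r / ns
    t = (np.arange(nt) + 0.5) * 2*np.pi / nt
    S, T = np.meshgrid(s, t, indexing="ij")
    vals = u(x0 + S*np.cos(T), y0 + S*np.sin(T))
    return (vals * S).sum() * (r/ns) * (2*np.pi/nt) / (np.pi * r**2)

def rhs(r, ns=400, nt=400):
    # right-hand side of the derivative formula, using Delta u = lap_u
    s = (np.arange(ns) + 0.5) * r / ns
    t = (np.arange(nt) + 0.5) * 2*np.pi / nt
    S, T = np.meshgrid(s, t, indexing="ij")
    kernel = (r**2 - S**2) / (2*r) * lap_u
    return (kernel * S).sum() * (r/ns) * (2*np.pi/nt) / (np.pi * r**2)

r, h = 0.5, 1e-4
lhs = (ball_average(r + h) - ball_average(r - h)) / (2*h)   # centred difference
print(lhs, rhs(r))               # both should be close to 2*r = 1.0
\end{verbatim}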
\begin{proof} Using the change of variables $y=x-r\tilde{y}$ we have $$\phi(r)=\frac{1}{|B_1|}\int_{B_1(0)}u(x-r\tilde{y})\,d\tilde{y}.$$ Differentiating under the integral and changing variables again we have \begin{align*} \phi'(r)&=\frac{1}{|B_1|}\int_{ B_1(0)}\nabla u(x-r\tilde{y})\cdot (-\tilde{y})\,d\tilde{y}\\ &=\frac{1}{|B_r|}\int_{ B_r(x)}\nabla u(y)\cdot \frac{y-x}{r}\,dy. \end{align*} Now using polar coordinates, with $\sigma_s$ being the induced surface measure on the sphere of radius $s$, we have \begin{align*} \phi'(r)&=\frac{1}{|B_r|}\int_0^r\int_{\partial B_s(x)}\nabla u(y)\cdot \frac{y-x}{r}\,d\sigma_s(y)\,ds\\ &=\frac{1}{|B_r|}\int_0^r\frac{s}{r}\int_{\partial B_s(x)}\nabla u(y)\cdot \frac{y-x}{s}\,d\sigma_s(y)\,ds\\ &=\frac{1}{|B_r|}\int_0^r\frac{s}{r}\int_{B_s(x)}\Delta u(y)\,dy\,ds \end{align*} where in the last step we used Gauss' Divergence Theorem. We again apply polar coordinates to the inner integral and apply Fubini's Theorem. \begin{align*} \phi'(r)&=\frac{1}{|B_r|}\int_0^r\frac{s}{r}\int_0^s\int_{\partial B_t(x)}\Delta u(y)\,d\sigma_t(y)\,dt\,ds\\ &=\frac{1}{|B_r|}\int_0^r\int_t^r\frac{s}{r}\int_{\partial B_t(x)}\Delta u(y)\,d\sigma_t(y)\,ds\,dt\\ &=\frac{1}{|B_r|}\int_0^r\int_{\partial B_t(x)}\frac{r^2-t^2}{2r}\Delta u(y)\,d\sigma_t(y)\,dt. \end{align*} Noting that $t=|x-y|$ on $\partial B_t(x)$ completes the calculation. \end{proof} \begin{proof}[Proof of Theorem \ref{LapThm}] We shall proceed by contradiction. Suppose that, given a positive $c$, we can find $u$ with $\Delta u\geq 1$ so that \begin{equation}\label{ineq2} \|u\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|u(x)|\geq c\}|^{1/p'}\leq c \end{equation} holds. For $c$ sufficiently small (we shall determine precisely how during the proof), we will obtain a contradiction. Let $\Omega_\delta$ denote the set of those $x\in\Omega$ that are at least some fixed distance $\delta$ from the boundary. For each such $x$ we shall work with the averages of $u$ over the balls $B_{\delta/2}(x)$. The key observation is that inequality (\ref{ineq2}) provides control on the ``bad part" of each of the averages. In particular, denoting $\{x:|u(x)|\leq c\}$ by $U_c$, we have \begin{align*} \frac{1}{|B_{\delta/2}|}\int_{B_{\delta/2}(x)}u(y)dy&=\frac{1}{|B_{\delta/2}|}\left(\int_{B_{\delta/2}(x)\cap U_c}u(y)dy+\int_{B_{\delta/2}(x)\cap (U_c)^c}u(y)dy\right)\\ &\leq (1+|B_{\delta/2}|^{-1})c \end{align*} where we have estimated the second integral by H\"older's inequality and the assumed inequality. Consider now the derivative formula (\ref{deriv1}). We shall denote by $|\partial B_s|=|\partial B_1|s^{n-1}$ the total surface measure of the sphere of radius $s$, and using the assumption $\Delta u\geq 1$, we have that the derivative is bounded below by \begin{align*} \frac{1}{|B_r|}\int_0^r|\partial B_1|s^{n-1}\frac{r^2-s^2}{2r}\,ds&=\frac{|\partial B_1|}{|B_r|}\left(\frac{r^{n+2}}{2rn}-\frac{r^{n+2}}{2r(n+2)}\right)\\ &=\frac{|\partial B_1|}{|B_1|}\left(\frac{1}{2n}-\frac{1}{2(n+2)}\right)r=:C_nr. \end{align*} Now whenever $x\in\Omega_\delta$, we have by the fundamental theorem of calculus that $u(x)=\phi(0)=\phi(\delta/2)-\int_0^{\delta/2}\phi'(r)\,dr$, which along with $\Delta u\geq 1$ gives \begin{align*} u(x)&=\frac{1}{|B_{\delta/2}|}\int_{B_{\delta/2}(x)}u(y)\,dy-\int_0^{\delta/2}\frac{1}{|B_r|}\int_{B_r(x)}\frac{r^2-|x-y|^2}{2r}\Delta u(y)\,dy\,dr\\ &\leq (1+|B_{\delta/2}|^{-1})c-\int_0^{\delta/2}C_nr\,dr\leq (1+|B_{\delta/2}|^{-1})c-(C_n/8)\delta^2. 
\end{align*} It follows that whenever $$c\leq (1+|B_{\delta/2}|^{-1})^{-1}\frac{C_n\delta^2}{16}$$ we have that $u(x)\leq -(C_n/16)\delta^2$ for each $x\in\Omega_\delta$. Note in particular that $c\leq(C_n/16)\delta^2$. Now we have that $$\|u\|_{L^p(\Omega)}\geq\|u\|_{L^p(\Omega_\delta)}\geq (C_n/16)\delta^2|\Omega_\delta|^{1/p}$$ and therefore \begin{align*} &\quad\,\,\|u\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|u(x)|\geq c\}|^{1/p'}\\ &\geq\|u\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|u(x)|\geq(C_n/16)\delta^2\}|^{1/p'}\\ &\geq\|u\|_{L^p(\Omega)}\cdot|\Omega_\delta|^{1/p'}\\ &\geq(C_n/16)\delta^2|\Omega_\delta|^{1/p}\cdot|\Omega_\delta|^{1/p'}\\ &=(C_n/16)\delta^2|\Omega_\delta|. \end{align*} Thus if we knew further that $c<(C_n/16)\delta^2|\Omega_\delta|$, we would arrive at a contradiction. So if we choose $\delta$ so that $\Omega_\delta$ has positive measure, the desired inequality holds when $c<\min\{(C_n/16)\delta^2|\Omega_\delta|,(1+|B_{\delta/2}|^{-1})^{-1}(C_n/16)\delta^2\}$. It is clear that this choice is independent of $p$. \end{proof} \subsection{Proof of Theorem \ref{HeatThm}} For the heat operator case, we shall first define some notation. Let $\Phi$ be the standard heat kernel $$\Phi(x,t):=\frac{1}{(4\pi t)^{n/2}}e^{-|x|^2/4t}.$$ The heatball centred at $(x,t)$ of radius $r>0$ is the compact set $$E(x,t;r):=\{(x,t)\}\cup\{(y,s)\in\mathbb{R}^{n+1}:s\leq t,\,\Phi(x-y,t-s)\geq 1/r^n\}.$$ Note that except at $(x,t)$, $\Phi(x-y,t-s)=1/r^n$ on the boundary. For convenience we shall also use $E(r)$ to denote $\{(x,t):\Phi(x,t)\geq 1/r^n\}$. Notice that $|E(x,t;r)|=|E(r)|=r^{n+2}|E(1)|$, a fact that follows from the parabolic scaling $(y,s)\mapsto (ry,r^2s)$ taking $E(1)$ to $E(r)$. Whenever the heatball $E(x,t;R)$ is contained in $\Omega$, we shall consider the functions $\phi:[0,R]\rightarrow \mathbb{R}$ given by $\phi(0)=u(x,t)$ and $$\phi(r)=\frac{1}{4r^n}\int_{E(x,t;r)}u(y,s)\frac{|x-y|^2}{(t-s)^2}\,dy\,ds$$ for $0<r\leq R$. These averages were considered in a paper of Watson \cite{watson1973theory}, in which a theory of subtemperatures, analogous to the theory of subharmonic functions, was developed. In the paper of Watson \cite{watson1973theory}, it is seen that $$\frac{1}{4r^n}\int_{E(x,t;r)}\frac{|x-y|^2}{(t-s)^2}\,dy\,ds=1,$$ which is where the normalisation in the above definition comes from. However, as the precise normalisation will not be important for us, and in order to give a self-contained treatment in this paper, the reader may wish instead to think of the factor of $4r^n$ as being written as $$V(r):=\int_{E(r)}\frac{|y|^2}{s^2}\,dy\,ds=r^nV(1).$$ The equality $V(r)=r^nV(1)$ follows from parabolic scaling. That this quantity is finite can be seen quite easily in higher dimensions, and the techniques we will use are suggestive of methods used to account for the unboundedness of the kernel of the averages, $|x-y|^2/(V(r)(t-s)^2)$. In fact, we will only need this result in higher dimensions. By rearranging the formula defining the level set in $E(r)$, we can give the equivalent expression $\{(y,s):0<s\leq r^2/4\pi,|y|\leq(2ns\log(r^2/4\pi s))^{1/2}\}$. The integral becomes \begin{align*} V(r)&=\int_0^{r^2/4\pi}\int_{|y|\leq(2ns\log(r^2/4\pi s))^{1/2}}\frac{|y|^2}{s^2}\,dy\,ds\\ &=\int_0^{r^2/4\pi}\int_{|y|\leq 1}\frac{|y|^2}{s^2}(2ns\log(r^2/4\pi s))^{(n+2)/2}\,dy\,ds\\ &=\int_{|y|\leq 1}|y|^2\,dy\,\int_0^{r^2/4\pi}\left(2ns^{1-\frac{4}{n+2}}\log(r^2/4\pi s)\right)^{(n+2)/2}\,ds. \end{align*} The $y$ integral is clearly finite and easy to compute. 
For $n\geq 3$, the integrand in the $s$ integral is bounded, extending continuously to $s=0$ - indeed, it is clear that $s^{1-\frac{4}{n+2}}\log(r^2/4\pi s)$ tends to $0$ as $s\rightarrow 0$. Now, we have already observed that the kernel for these averages is unbounded, which means that the first step in the proof of Theorem \ref{LapThm} fails here. Nevertheless, we will later consider a modified kernel which is bounded for fixed $r$, and hence the associated measure can be controlled by a multiple of the Lebesgue measure, so that the argument works. The approach will be to add in some extra spatial variables, apply the derivative formula in higher dimensions, but integrate out the extra variables appearing in the kernel to get a new, bounded kernel in lower dimensions, in a manner not unlike the above calculation. First, however, we shall give a proof of the derivative formula for heatballs. This proof is due to Evans \cite{evans10}, but it is not explicitly given there, instead being absorbed into the proof of the corresponding mean value formula for the heat equation. \begin{prop} The function $\phi$ defined above is continuous on $[0,R]$, and in the interval $(0,R)$ its derivative is given by \begin{equation}\label{deriv2} \phi'(r)=\frac{n}{r^{n+1}}\int_{E(x,t;r)}Hu(y,s)\log(r^n\Phi(x-y,t-s))\,dy\,ds. \end{equation} \end{prop} \begin{proof} Using the translation and rescaling $y=x-r\tilde{y},s=t-r^2\tilde{s}$, we have $$\phi(r)=\frac{1}{4}\int_{E(1)}u(x-r\tilde{y},t-r^2\tilde{s})\frac{|\tilde{y}|^2}{\tilde{s}^2}\,d\tilde{y}\,d\tilde{s}.$$ Continuity of $\phi$ is obvious from the smoothness of $u$ in a neighbourhood of $E(x,t;r)$, and noting that the normalisation $V(1)=4$ in these averages is the correct one. We can differentiate under the integral to obtain $$\phi'(r)=-\frac{1}{4}\int_{E(1)}\left(2rsu_t+\sum_{i=1}^nu_{x_i}y_i\right)\frac{|y|^2}{s^2}\,dy\,ds$$ suppressing the argument $(x-ry,t-r^2s)$ of $u_{x_i}$ and $u_t$. Scaling back we have $$\phi'(r)=-\frac{1}{4r^{n+1}}\int_{E(r)}\left(2su_t+\sum_{i=1}^nu_{x_i}y_i\right)\frac{|y|^2}{s^2}\,dy\,ds$$ with the argument $(x-y,t-s)$ suppressed. For convenience let us denote $\psi(y,s)=\log(r^n\Phi(y,s))=n\log(r)-(n/2)\log(4\pi s)-|y|^2/4s$. 
We have $\psi_{x_i}(y,s)=-y_i/2s$ and thus $$\int_{E(r)}2su_t\frac{|y|^2}{s^2}\,dy\,ds=-4\int_{E(r)}u_t\sum_{i=1}^n\psi_{x_i}(y,s)y_i\,dy\,ds.$$ Noting that $\psi(y,s)=0$ on the boundary of $E(r)$, we can use integration by parts $\psi_{x_i}$ and $y_iu(x-y,t-s)$ to get that this equals $$4\int_{E(r)}\psi\sum_{i=1}^n(u_t-y_iu_{tx_i})\,dy\,ds=4\int_{E(r)}\psi\left(nu_t-\sum_{i=1}^ny_iu_{tx_i}\right)\,dy\,ds.$$ Focusing on the second term, we have \begin{align*} &-4\int_{E(r)}\sum_{i=1}^n\psi(y,s)y_i\frac{\partial}{\partial s}(-u_{x_i}(x-y,t-s))\,dy\,ds\\ =\,&-4\int_{E(r)}\sum_{i=1}^n\psi_t(y,s)u_{x_i}(x-y,t-s)y_i\,dy\,ds\\ =\,&-4\int_{E(r)}\sum_{i=1}^n\left(\frac{|y|^2}{4s^2}-\frac{n}{2s}\right)u_{x_i}(x-y,t-s)y_i\,dy\,ds \end{align*} and hence $$\int_{E(r)}2su_t\frac{|y|^2}{s^2}\,dy\,ds=4\int_{E(r)}\left[nu_t\psi-\sum_{i=1}^n\left(\frac{|y|^2}{4s^2}-\frac{n}{2s}\right)u_{x_i}y_i\right]dy\,ds.$$ All together this gives \begin{align*} \phi'(r)&=\frac{-1}{4r^{n+1}}\int_{E(r)}\left[4nu_t\psi-4\sum_{i=1}^n\left(\frac{|y|^2}{4s^2}-\frac{n}{2s}\right)u_{x_i}y_i+\sum_{i=1}^nu_{x_i}y_i\frac{|y|^2}{s^2}\right]dy\,ds\\ &=\frac{-1}{4r^{n+1}}\int_{E(r)}\left[4nu_t\psi-4\sum_{i=1}^n\frac{-n}{2s}u_{x_i}y_i\right]dy\,ds\\ &=\frac{n}{r^{n+1}}\int_{E(r)}\left[\left(\sum_{i=1}^n\frac{-y_i}{2s}u_{x_i}(x-y,t-s)\right)-u_t(x-y,t-s)\psi\right]dy\,ds. \end{align*} Since $\psi_{x_i}(y,s)=-y_i/2s$ and $\psi$ is $0$ on the boundary of $E(r)$, an integration by parts in $y_i$ for each term of the sum yields \begin{align*} \phi'(r)&=\frac{n}{r^{n+1}}\int_{E(r)}\Delta u(x-y,t-s)-u_t(x-y,t-s)\psi\,dy\,ds\\ &=\frac{n}{r^{n+1}}\int_{E(r)}Hu(x-y,t-s)\psi(y,s)\,dy\,ds\\ &=\frac{n}{r^{n+1}}\int_{E(x,t;r)}Hu(y,s)\log(r^n\Phi(x-y,t-s))\,dy\,ds \end{align*} as desired. \end{proof} The last ingredient needed to carry out the proof is the averages over the so-called ``modified heatballs". This idea was used by Kuptsov \cite{kuptsov1981mean}, our treatment follows a paper by Watson \cite{watson2002elementary} giving a review of the main ideas and some further results. Conveniently, we won't need anything more than the absolute basics, which we summarise here. Starting with a function $u$ on an open subset $\Omega$ of $\mathbb{R}^{n+1}$, we examine the above averages $\phi$ for $u$ being considered as a function on $\mathbb{R}^m\times\Omega$, by considering a function $\tilde{u}$ that is independent of the first $m$ variables - that is, for $\xi\in\mathbb{R}^m$, $(x,t)\in\Omega$, set $\tilde{u}(\xi,x,t)=u(x,t)$ and consider the averages $\phi$ for $\tilde{u}$, which clearly take the form $$\phi(r)=\frac{1}{4r^{m+n}}\int_{E(\xi,x,t;r)}\frac{|x-y|^2+|\xi-\eta|^2}{(t-s)^2}u(y,s)\,d\eta\,dy\,ds.$$ Since $u$ does not dependent on $\eta$, we can carry out the integration in $\eta$. As above, we rearrange the superlevel set formula defining the heatball to observe that $E(\xi,x,t;r)$ is the set of $(y,\eta,s)$ satisfying $$0\leq t-s\leq r^2/4\pi,|x-y|^2+|\xi-\eta|^2\leq 2(m+n)(t-s)\log(r^2/4\pi(t-s)).$$ The $(n,m)$-modified heatball is the projection of $E(\xi,x,t;r)$ onto the last $n+1$ coordinates, denoted $E_m(x,t;r)$, and by carrying out the integration in $\eta$ we obtain a new kernel on $E_m(x,t;r)$ that will be bounded for large enough $m$, as we shall demonstrate. 
For fixed $y$, $s$, we must integrate over $|\xi-\eta|\leq A=A(x-y,t-s)$, given by $$A(x-y,t-s):=\left(2(t-s)(m+n)\log\left(\frac{r^2}{4\pi(t-s)}\right)-|x-y|^2\right)^{1/2}.$$ We compute the integral in polar coordinates \begin{align*} \int_{|\xi-\eta|\leq A}\frac{|\xi-\eta|^2+|x-y|^2}{(t-s)^2}\,d\eta&=|\partial B_1|\int_0^A\frac{r^2+|x-y|^2}{(t-s)^2}r^{m-1}\,dr\\ &=\frac{|\partial B_1|}{(t-s)^2}\left(\frac{A^{m+2}}{m+2}+|x-y|^2\frac{A^m}{m}\right)\\ &=\frac{|B_1|}{(t-s)^2}A^m\left(\frac{mA^2}{m+2}+|x-y|^2\right) \end{align*} where we used the basic formula $|\partial B_1|=m|B_1|$. Substituting the value of $A^2$ in the brackets gives $$\frac{2|B_1|}{m+2}A^m\left(\frac{m(m+n)}{t-s}\log\left(\frac{r^2}{4\pi(t-s)}\right)+\frac{|x-y|^2}{(t-s)^2}\right).$$ Dividing this by the normalisation $1/4r^{m+n}$ gives the kernel $K_r(x-y,t-s)$ (a non-negative density) of the average over the $(n,m)$ modified heatball $E_m(x,t;r)$. That is to say, $$\phi(r)=\int_{E_m(x,t;r)}K_r(x-y,t-s)u(y,s)\,dy\,ds.$$ For $m\geq 3$, we can show that this kernel is bounded with bounds depending only on $n,m$ and $r$. Indeed, we can immediately replace $x-y$ with $y$ and $t-s$ with $s$ via a change of variables, and consider bounding $K_r(y,s)$ on the set $\{(y,s):\Phi(0,y,s)\geq 1/r^{m+n}\}$ (this is the reflection in the final coordinate of $E_m(0,0;r)$), where here $\Phi$ is the heat kernel in $m+n$ spatial dimensions. We just need to check that the kernel is bounded near $(0,0)$. Observing that $A^m$ is bounded by $(2s(m+n)\log(r^2/4\pi s))^{m/2}$, we see that since in (the reflection of) the modified heatball we have $|y|^2\leq 2s(m+n)\log(r^2/4\pi s)$, the kernel can be bounded by a constant depending on $n,m$ and $r$ multiplied by $$s^{(m-2)/2}\log\left(\frac{r^2}{4\pi s}\right)^{(m+2)/2}.$$ Similarly to before, we can easily see that as $s$ goes to $0$, provided $m\geq 3$, this quantity goes to $0$, hence the kernel extends continuously to $(0,0)$ and is thus bounded above on the modified heatball. \begin{proof}[Proof of Theorem \ref{HeatThm}] In view of the above, the proof proceeds almost identically to Theorem \ref{LapThm}, with some small differences that we can easily address. Suppose that given a positive $c$, we can always find $u$ with $Hu\geq 1$ so that \begin{equation}\label{ineq3} \|u\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|u(x)|\geq c\}|^{1/p'}\leq c. \end{equation} Fix $m\geq 3$. Let $R$ be fixed so that the set $\Omega_R$, consisting of those points $(x,t)$ for which the modified heatball $E_m(x,t;R)$ is contained in $\Omega$, has positive Lebesgue measure. Let $M_R$ be the maximum value attained by the kernel $K_R$. Then for each $(x,t)\in\Omega_R$, by splitting the integral on the right over the sets where $|u|\leq c$ and $|u|\geq c$, applying H\"older's inequality and the inequality (\ref{ineq3}) to the latter, we obtain \begin{align*} \int_{E_m(x,t;R)}K_R(x-y,t-s)u(y,s)\,dy\,ds&\leq M_R\int_{E_m(x,t;R)}|u(y,s)|\,dy\,ds\\ &\leq M_R(|E_m(x,t;R)|+1)c=:C_Rc. \end{align*} Note that the extended function $\tilde{u}(\xi,x,t)=u(x,t)$ considered in the proceeding discussion has $H\tilde{u}(\xi,x,t)=Hu(x,t)\geq 1$. 
By the fundamental theorem of calculus for $\phi$ and the derivative formula (\ref{deriv2}) applied on the $(m+n)$-dimensional heatball, we have \begin{align*} u(x,t)&=\tilde{u}(\xi,x,t)=\phi(0)=\phi(R)-\int_0^R\phi'(r)\,dr\\ &=\int_{E_m(x,t;R)}K_R(x-y,t-s)u(y,s)\,dy\,ds\\ &-\int_0^R\frac{1}{4r^{m+n+1}}\int_{E(\xi,x,t;r)}v_{\xi,x,t,r}(\eta,y,s)\,d\eta\,dy\,ds\,dr \end{align*} where $v_{\xi,x,t,r}$ is shorthand for $H\tilde{u}(\eta,y,s)\log(r^{m+n}\Phi(\xi-\eta,x-y,t-s))$. Note that $\log(r^{m+n}\Phi(\xi-\eta,x-y,t-s))$ is positive on the heatball $E(\xi,x,t;r)$, and is at least $m+n$ on the heatball $E(\xi,x,t;r/e)$. Hence we have the bound $$\int_{E(\xi,x,t;r)}v_{\xi,x,t,r}(\eta,y,s)\,d\eta\,dy\,ds\geq (m+n)|E(\xi,x,t;r/e)|.$$ The right hand side is equal to $(m+n)|E(\xi,x,t;1)|(r/e)^{m+n+2}=C_{m,n}r^{m+n+2}$. Using this bound with the bound $\phi(R)\leq C_Rc$, it follows that for $(x,t)\in\Omega_R$, $$u(x,t)\leq C_Rc-(C_{m,n}/8)R^2.$$ The remainder of the proof features no major differences to the proof of Theorem \ref{LapThm}. If we take $c\leq C_R^{-1}(C_{m,n}/16)R^2$, then $u(x,t)\leq -(C_{m,n}/16)R^2$. If also $c\leq (C_{m,n}/16)R^2$, then $$\|u\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|u(x)|\geq c\}|^{1/p'}\geq (C_{m,n}/16)R^2|\Omega_R|$$ and the contradiction follows if we suppose further that $c$ is also less than $(C_{m,n}/16)R^2|\Omega_R|$. \end{proof} \section{Remarks and questions}\label{ques} In light of earlier discussion, it seems of interest to determine the generality in which uniformly balancing sublevel inequalities hold. We pose the following question. \textbf{Question.} Given a differential operator $D$ on an open set $\Omega$, we are interested in whether there exist constants $c,\alpha>0$ and $1\leq p\leq \infty$ such that an inequality of the form $$\|u\|_{L^p(\Omega)}\cdot|\{x\in\Omega:|u(x)|\geq c\}|^\alpha\geq c$$ holds whenever $Du\geq A>0$ in $\Omega$. Can we classify which differential operators possess such inequalities? In particular, could it be that a uniformly balancing sublevel inequality holds for all linear differential operators? In order to motivate the search for a linear counterexample, let us reflect on the above proofs and examine what generality they are likely to extend to. The key element of the proofs was the relation of a rate of change for a parameterised family of averages to the differential operator in question, either by means of a derivative or just a nicely-quantified difference. Such parameterised families of averages occur in the study of mean value formulae for PDE, where often one seeks to establish that such a family of averages is constant if and only if a function satisfies a certain PDE or family of PDEs. In fact, if we allow for averages against measures that are not necessarily positive, a vast number of linear PDE solutions can be characterised this way. Pokrovskii \cite{pokrovskii1998mean} establishes a method for constructing such measures, generalising the ideas of Zalcman \cite{zalcman1973mean}. We need our averages to be with respect to a measure which can at least be locally bounded by some power of the Lebesgue measure, as is constructed by Pokrovskii \cite{pokrovskii1998mean}, if we are to find straightforward extensions to the above proofs. As we saw in the proofs above, the contrapositive of the statement allowed us to bound averages against the Lebesgue measure over a fixed set in a uniform way, which is why we need to compare to the Lebesgue measure, typically by means of a bounded density as for the heat operator. 
The main difficulty seems to be in producing a formula expressing the growth of averages in terms of the differential operator, integrated against a positive measure. This property perhaps holds only for certain naturally arising monotone families of averages, which seems to be related to those differential operators having good theories of subsolutions and supersolutions where we have access to maximum principles, for instance. Indeed, our arguments were based on the idea that for such operators, inequalities of the form $Du\geq 1$ represent a stronger property than that of a usual subsolution, where by comparison with a constant solution we would expect a quantifiably large deviation. In view of this, it seems reasonable to expect that for sufficiently nice elliptic and parabolic operators, where we indeed have a developed theory for subsolutions and supersolutions, modifications of the above arguments will prove fruitful. This suggests another interesting avenue to pursue. Consider the wave operator $\partial^2_t-\Delta_x$. In one spatial dimension, the wave operator can be expressed as $\partial^2_t-\partial^2_x=(\partial_t-\partial_x)(\partial_t+\partial_x)$, which by the simple change of co-ordinates $x'=t-x, y'=t+x$ is the same as $\partial_{x'}\partial_{y'}$, and so this in fact satisfies a uniform sublevel set estimate. However, in more than one spatial dimension, simply by considering functions that are constant in $t$, we see that no uniform sublevel set estimate holds due to the failure for the Laplacian. However, the wave operator is neither elliptic nor parabolic, which suggests that the methods of this paper may not be helpful in establishing a uniformly balancing sublevel inequality. Nevertheless, this is not to suggest that a uniformly balancing sublevel inequality fails to hold for the wave operator, but that the wave operator seems to be worthy of further inspection. \textit{Acknowledgements.} The author is supported by a UK EPSRC scholarship at the Maxwell Institute Graduate School. The author would like to thank Prof. James Wright for many helpful discussions, along with various suggestions and improvements; in particular the inclusion of the discussion on failure of uniform sublevel set estimates via Runge-type theorems at the end of section \ref{disc}. John Green,\\ Maxwell Institute of Mathematical Sciences and the School of Mathematics,\\ University of Edinburgh,\\ JCMB, The King’s Buildings,\\ Peter Guthrie Tait Road,\\ Edinburgh, EH9 3FD,\\ Scotland\\ Email: \texttt{[email protected]} \end{document}
math
52,318
\beginin{equation}gin{document} \beginin{equation}gin{abstract} We consider ionic electrodiffusion in fluids, described by the Nernst-Planck-Navier-Stokes system. We prove that the system has global smooth solutions for arbitrary smooth data: aribitrary positive Dirichlet boundary conditions for the ionic concentrations, arbitrary Dirichlet boundary conditions for the potential, arbitrary positive initial concentrations, and arbitrary regular divergence-free initial velocities. The result holds for any positive diffusivities of ions, in bounded domains with smooth boundary in three space dimensions, in the case of two ionic species, coupled to Stokes equations for the fluid. The result also holds in the case of Navier-Stokes coupling, if the velocity is regular. The global smoothness of solutions is also true for arbitrarily many ionic species, if all their diffusivities are the same. \end{abstract} \keywords{electroconvection, ionic electrodiffusion, Poisson-Boltzmann, Nernst-Planck, Navier-Stokes} \noindent\thanks{\em{ MSC Classification: 35Q30, 35Q35, 35Q92.}} \maketitle \section{Introduction} The Nernst-Planck-Navier-Stokes system describes the evolution of ions in a Newtonian fluid {\widetilde{c}}ite{rubibook}. Several species of ions, with different valences $z_i$ diffuse with diffusivities $D_i>0$, and are carried by an incompressible fluid with constant density and with velocity $u$, and by an electrical field generated by the local charge $\rho$ and by voltage applied at the boundaries. The system of equations is \beginin{equation} \partial_t c_i + u{\widetilde{c}}dot\nabla c_i = D_i {\mbox{div}\,} (\nabla c_i + z_ic_i\nabla \Phi), \label{cieq} \end{equation} $i=1,2, \dots m$, coupled to the Poisson equation \beginin{equation} -\epsilon\Delta\Phi = \sum_{i=1}^m z_i c_i = \rho, \label{poi} \end{equation} and to the Navier-Stokes equations \beginin{equation} \partial_t u + u{\widetilde{c}}dot\nabla u -\nu\Delta u + \nabla p = -K \rho\nabla\Phi, \quad \nabla{\widetilde{c}}dot u = 0, \label{nse} \end{equation} or to the Stokes equation, \beginin{equation} \partial_t u -\nu\Delta u + \nabla p = -K \rho\nabla\Phi, \quad \nabla{\widetilde{c}}dot u = 0. \label{se} \end{equation} The function $c_i(x,t)$ represents the local concentration of the $i$-th species, and $\Phi$ is an electrical potential created by the charge density $\rho$. The positive constant $\epsilon$ is proportional to the square of the Debye length. The kinematic viscosity of the fluid is $\nu>0$ and $K>0$ is a coupling constant with units of energy per unit mass. This constant is proportional to the product of the Boltzmann's constant $K_B$ and the absolute temperature $T_K$. The potential $\Phi$ has been nondimensionalized so that $\frac{k_BT_K}{e}\Phi$ is the physical electrical potential, where $e$ is elementary charge. The charge density $\rho$ has been nondimensionalized so that $e\rho$ is the physical electrical charge density. The boundary conditions for $c_i$ are inhomogeneous Dirichlet, \beginin{equation} {c_i(x,t)}_{\left | \right. \partial\Omega} = \gamma_i(x) \label{gammai} \end{equation} The boundary conditions for $\Phi$ are inhomogeneous Dirichlet \beginin{equation} \Phi(x,t)_{\left | \right. \partial\Omega} = W(x) \label{phibc} \end{equation} and the boundary conditions for the Navier-Stokes or Stokes equations are homogeneous Dirichlet, \beginin{equation} u_{\left | \right. \partial\Omega} = 0. \label{ubc} \end{equation} The bounded connected domain $\Omega$ need not be simply connected. 
The functions $\gamma_i$ defined on the boundary of the domain are given, smooth positive time independent functions. We denote by $\Gamma_i$ positive, smooth, time independent extensions of these functions in the interior \beginin{equation} \gamma_i = {\Gamma_i}_{\left | \right. \partial\Omega} \label{Gammai} \end{equation} The function $W$ is also a given and time independent smooth function. The NPNS system is a well posed semilinear parabolic system. The question we are discussing is whether it has global smooth solutions, that is, whether given arbitrary smooth initial data and arbitrary smooth boundary conditions, do smooth solutions exist for all time, or do some solutions blow up. As it is well known, semilinear parabolic scalar equations can blow up in finite time. The simplest such example is a semilinear heat equation ({\widetilde{c}}ite{gigak}). Semilinear systems in which there is a single concentration carried by the gradient of a potential can also blow up. Well-known examples are chemotaxis equations, such as the Keller-Segal equation ({\widetilde{c}}ite{perthame}). The system we are discussing involves the Navier-Stokes equation, where the question we discuss here is a major open problem, but even in the absence of fluid or in the case of Stokes flow coupled to Nernst-Planck equations, the problem of global existence of smooth solutions discussed here remains open in its full generality. Boundary conditions play an essential role in the behavior the solutions of the NPNS system. No flux (blocking) boundary conditions for the concentrations model situations in which the boundaries are impermeable to the ions. Dirichlet (selective) boundary conditions for the concentrations discussed in this paper model situations in which boundaries maintain a certain concentration of ions. Global existence and stability of solutions of the Nernst-Planck equations, uncoupled to fluids has been obtained in {\widetilde{c}}ite{biler}, {\widetilde{c}}ite{choi}, {\widetilde{c}}ite{gajewski} for blocking boundary conditions in two dimensions, or in three dimensions for small data, or in a weak sense. The system coupled to fluid equations was studied in {{\widetilde{c}}ite{schmuck}} where global existence of weak solutions is shown in two and three dimensions for homogeneous Neumann boundary conditions on the potential, a situation without boundary current. In {{\widetilde{c}}ite{ryham}}, homogeneous Dirichlet boundary conditions on the potential are considered, and global existence of weak solutions is shown in two dimensions for large initial data and in three dimensions for small initial data (small perturbations and small initial charge). In {\widetilde{c}}ite {bothe} the problem of global regularity in two dimensions was considered for Robin boundary conditions for the potential. In {\widetilde{c}}ite{ci} global existence of smooth solutions for blocking boundary conditions and for uniform selective (special, stable Dirichlet) boundary conditions were obtained in two space dimensions. In {\widetilde{c}}ite{np3d} blocking and uniform selective boundary conditions were used to prove nonlinear stability of Boltzmann states in three space dimensions. While blocking and uniformly selective boundary conditions lead to stable configurations, instabilities may occur for general selective boundary conditions. 
These instabilities have been studied in simplified models mathematically and numerically ({\widetilde{c}}ite{rubizaltz}, {\widetilde{c}}ite{zaltzrubi}) and observed in physical experiments {\widetilde{c}}ite{rubinstein}. In this paper we consider the system with large data, with general selective boundary conditions, in situations in which instabilities may occur. We prove global regularity of solutions for two cases: if there are only two species (cations and anions, $m=2$) or if there are many species, but they all have the same diffusivities ($D_1= \dots = D_m$). The difficulty in three dimensions, even when there is no fluid, is in bounding the nonlinear growth of the concentrations. This paper is organized as follows. In Section \ref{prel} we prove a necessary and sufficient condition for global regularity of solutions. In the case of the Nernst-Planck system coupled to Stokes equations this condition (Theorem \ref{bkm}) states that, if (and only if) \beginin{equation} \sup_{0\le \tau <T}\int_0^\tau\left(\int_{\Omega}|\rho(x,t)|^2dx\right)^2dt = B(T) <\infty \label{regcond} \end{equation} is finite, then the solution is smooth on $[0,T]$. In the case of coupling to the Navier Stokes equation, the condition (\ref{regcond}) is supplemented by a well-known condition for regularity for the Navier-Stokes equations. In Section \ref{en} we introduce functionals of $(c_i, \Phi)$ which are used to cancel the contribution of electrical forces in the Navier-Stokes or Stokes energy balance, at the price of certain quadratic error terms. The first functional is a sum of relative entropies and a potential norm. The second one, which is just the potential part of the first one, is weaker and has a weaker dissipation, but it introduces better error terms. Section \ref{glob} is devoted to quadratic bounds, which imply global regularity by the criterion established in Theorem \ref{bkm}. It is only here that the restriction to $m=2$ (Theorem \ref{twospecies}) or to $D_1 = \dots = D_m$ (Theorem \ref{equaldiffs}) is used. In these two special circumstances we show that there is a cubic dissipation term proportional to $\|\rho\|_{L^3}^3$ in the evolution of the quadratic norms that we are employing to control the concentrations. This cubic term is essential, because although it has a small prefactor, it can be used to absorb all the quadratic errors ocurring in the second energy. \section{Preliminaries}\label{prel} We denote by $C$ absolute constants. We denote $C_{\Gamma}$ any constant that depends only on the parameters and boundary data of the problem, i.e. on $\nu, K, \epsilon, z_i, D_i,$, on the domain $\Omega$ itself, on norms of $W$ and on norms of $\Gamma_i$. These constants may change form line to line, and they are explicitly computable. They do not depend on solutions, or initial data. We do not keep track of them to ease notation and focus on the ideas of the proofs. We consider a bounded domain $\Omega\subset{\mathbb R}^3$ with smooth boundary. We denote space $L^p(\Omega) = L^p$ and norms simply by $\|{\widetilde{c}}dot\|_{L^p}$. We denote by $H = L^2(\Omega)^3{\widetilde{c}}ap \{u\left |\right. \; {\mbox{div}\,} u = 0\}$ the space of square integrable, divergence-free velocities with norm $\|{\widetilde{c}}dot\|_H$ and by $V = H_0^1(\Omega)^3{\widetilde{c}}ap \{ u\left | \right.\; {\mbox{div}\,} u =0\}$ the space of divergence free vectors fields with components in $H_0^1(\Omega)$, with norm $\|{\widetilde{c}}dot\|_V$. 
We denote by $\mathbb P$ the Leray projector $\mathbb P : L^2(\Omega)^3 \to H$, and by $A$ the Stokes operator \beginin{equation} A = -\mathbb P\Delta, \quad \quad A: \mathcal{D} (A)\to H \label{Aop} \end{equation} where \beginin{equation} \mathcal D(A) = H^2(\Omega)^3{\widetilde{c}}ap V. \label{mathcalda} \end{equation} \beginin{equation}g{defi} \label{strongsol}We say that $(c_i, \Phi, u)$ is a strong solution of the system (\ref{cieq}), (\ref{poi}), (\ref{nse}) or (\ref{cieq}), (\ref{poi}), (\ref{se}) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) on the time interval $[0,T]$ if $u\in L^{\infty}(0,T; V){\widetilde{c}}ap L^2(0,T; \mathcal D(A))$ and $c_i\in L^{\infty}(0,T; H^1(\Omega)){\widetilde{c}}ap L^2(0,T; H^2(\Omega))$ solve the equations in distribution sense and the boundary conditions in trace sense. \end{defi} It is well-known that strong solutions of Navier-Stokes equations are as smooth as the data permit ({\widetilde{c}}ite{cf}). The same is true for the Nernst-Planck-Navier-Stokes equations. \beginin{equation}g{thm}\label{locex} Let $c_i(0)-\Gamma_i\in H_0^1(\Omega)$, and $u(0)\in V$. There exists $T_0$ depending on $\|u_0\|_V$ and $\|c_i(0)\|_{H^1(\Omega)}$, the boundary conditions $\gamma_i, W$, and the parameters of the problem ($\nu, K, D_i, \epsilon, z_i)$, so that the system (\ref{cieq}), (\ref{poi}), (\ref{nse}) (or the system (\ref{cieq}), (\ref{poi}), (\ref{se})) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) has a unique strong solution $(c_i, \Phi, u)$ on the interval $[0,T_0]$. \end{thm} \noindent\beginin{equation}g{proof} We write \beginin{equation} c_i = q_i + \Gamma_i. \label{qi} \end{equation} and note that the equations (\ref{cieq}) can be written as \beginin{equation} \partial_t q_i + u{\widetilde{c}}dot\nabla q_i = D_i {\mbox{div}\,} (\nabla q_i + z_iq_i\nabla \Phi) + F_i \label{Fiqieq} \end{equation} where \beginin{equation} F_i = -u{\widetilde{c}}dot\nabla \Gamma_i + D_i{\mbox{div}\,}(\nabla \Gamma_i + z_i \Gamma_i\nabla\Phi). \label{Fi} \end{equation} The boundary conditions for $q_i$ are homogeneous Dirichlet, \beginin{equation} {q_i}_{\left | \right. \partial\Omega} = 0. \label{qibc} \end{equation} We sketch only the apriori bounds for the proof. The actual construction of solutions can be done via Galerkin approximations. Taking the scalar product of (\ref{Fiqieq}) with $-\Delta q_i$, we estimate the terms \beginin{equation} \ba \left|z_iD_i\int_{\Omega}(\nabla q_i{\widetilde{c}}dot\nabla\Phi + q_i\Delta\Phi)\Delta q_i dx\right| \le C_{\Gamma}\left (\|\nabla\Phi\|_{L^6}\|\nabla q_i\|_{L^3} + \|q_i\|_{L^4}\|\rho\|_{L^4}\right)\|\Delta q_i\|_{L^2} \\ \le C_{\Gamma}\left((\|\rho\|_{L^2} +1)\|\nabla q_i\|_{L^2}^{\frac{1}{2}}\|\Delta q_i\|_{L^2}^{\frac{3}{2}} + \|q_i\|_{L^2}^{\frac{1}{4}} \|\rho\|_{L^2}^{\frac{1}{4}}\|\rho\|_{L^6}^{\frac{3}{4}}\|\nabla q_i\|_{L^2}^{\frac{3}{4}}\|\Delta q_i\|_{L^2}\right) \end{array} \label{nonlqh1} \end{equation} where we used \beginin{equation} \|\nabla\Phi\|_{L^6}\le C_{\Gamma}(\|\rho\|_{L^2}+1) \label{naphi6} \end{equation} (the inhomogeneous boundary conditions are accounted for in the added term $C_{\Gamma}$), embedding $H^1\subset L^6$ and interpolation. 
The advective term is estimated \beginin{equation} \left| \int_{\Omega} (u{\widetilde{c}}dot\nabla q_i)\Delta q_idx\right|\le C_\Gamma \|u\|_V\|\nabla q_i\|_{L^2}^\frac{1}{2}\|\Delta q_i\|_{L^2}^\frac{3}{2} \label{conv} \end{equation} The forcing term is estimated \beginin{equation} \left| \int_{\Omega}F_i \Delta q_idx\right |\le C_{\Gamma}(\|u\|_H + \|\rho\|_{L^2} +1)\|\Delta q_i\|_{L^2} \label{forcedel} \end{equation} where we used \beginin{equation} \|\nabla\Phi\|_{L^p}\le C_{\Gamma}(\|\rho\|_{L^2}+1) \label{naphirholp} \end{equation} valid for $p\le 6$. Now we have \beginin{equation} \|\rho\|_{L^p}\le C_{\Gamma}\sum_{i=1}^m(\|q_i\|_{L^p} + 1) \label{normsrhoq} \end{equation} and therefore, using also the Poincar\'{e} inequality we obtain \beginin{equation} \frac{d}{dt}\sum_{i=1}^m\|\nabla q_i\|^2_{L^2} + \sum_{i=1}^m \|\Delta q_i\|_{L^2}^2 \le C_{\Gamma} \left[ \left(\sum_{i=1}^m\|\nabla q_i\|^2_{L^2} + 1 \right)^3 + \|u\|_V^6\right]. \label{normqs} \end{equation} We take the scalar product of (\ref{nse}) with $Au$ we obtain, using well known estimates for the NSE {\widetilde{c}}ite{cf}, \beginin{equation} \frac{d}{dt} \|u\|_{V}^2 + \nu\|Au\|_{H}^2 \le C_{\Gamma}(\|u\|_V^6 + \|\rho \nabla\Phi\|_{L^2}^2). \label{nsevineq} \end{equation} Adding to (\ref{normqs}) we obtain short time control of the norms required by the definition of strong solutions. \end{proof} \beginin{equation}g{lemma} \label{sufcondB}Let $(c_i, \Phi, u)$ be a strong solution of the system (\ref{cieq}), (\ref{poi}), (\ref{nse}) (or (\ref{cieq}), (\ref{poi}), (\ref{se})) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) on the interval $[0,T]$. Let $2\le p <\infty$ be an even integer, take $c_i(0)\in L^p$ and consider the quantity \beginin{equation} \int_0^T \|\rho(t)\|_{L^2}^4 dt = B(T)<\infty, \label{condB} \end{equation} which is finite because $(c_i, \Phi, u)$ is a strong solution. Then $c_i\in L^{\infty}(0,T; L^p)$ and \beginin{equation} \sum_{i=1}^m \|c_i(t)\|_{L^p}\le C_{\Gamma}\left(\sum_{i=1}^m \|c_i(0)\|_{L^p}+\int_0^T\|u\|_{L^p}dt +1\right)e^{C_{\Gamma}\left(T +B(T)\right)}. \label{lftylpb} \end{equation} holds. \end{lemma} \beginin{equation}g{proof} We multiply the equation (\ref{Fiqieq}) by $q_i^{p-1}$ and integrate. We estimate the term \beginin{equation} \ba \left|z_iD_i\int_{\Omega}{\mbox{div}\,}(q_i\nabla\Phi)q_i^{p-1}dx\right| \le C_{\Gamma}\|\nabla\Phi\|_{L^6}\|q_i^{\frac{p}{2}}\|_{L^3}\|\nabla q_i^{\frac{p}{2}}\|_{L^2}\\ \le C_{\Gamma}(\|\rho\|_{L^2} +1)\|q_i^{\frac{p}{2}}\|_{L^2}^{\frac{1}{2}}\|\nabla q_i^{\frac{p}{2}}\|_{L^2}^{\frac{3}{2}} \end{array} \label{nonlqp} \end{equation} where we used one integration by parts, allowed by the vanishing of $q_i$ at the boundary and interpolation. We estimate the forcing term \beginin{equation} \left| \int_{\Omega}F_iq_i^{p-1}dx\right | \le C_{\Gamma}( \|u\|_{L^p} + \|\nabla\Phi\|_{L^p} + \|\rho\|_{L^p})\|q_i\|_{L^p}^{p-1}. 
\label{fiqilp} \end{equation} Using the dissipative term \beginin{equation} D_i\int_{\Omega}\Delta q_i q_i^{p-1} dx = -D_i\frac{4(p-1)}{p^2}\int_{\Omega} |\nabla q_i^{\frac{p}{2}}|^2 dx, \label{dispqlp} \end{equation} and then discarding it, we obtain \beginin{equation} \frac{d}{dt}\|q_i\|_{L^p}\le C_{\Gamma}(\|\rho\|_{L^2}^4 +1)\|q_i\|_{L^p} + C_{\Gamma}( \|u\|_{L^p} + \|\nabla\Phi\|_{L^p} + \|\rho\|_{L^p}), \label{oneci} \end{equation} and summing in $i$ we obtain \beginin{equation} \ba \frac{d}{dt}\sum_{i=1}^m\|q_i\|_{L^p} \le C_{\Gamma}(\|\rho\|_{L^2}^4 +1) \sum_{i=1}^m\|q_i\|_{L^p} + C_{\Gamma} m (\|u\|_{L^p} + \|\nabla\Phi\|_{L^p} +\|\rho\|_{L^p})\\ \le C_{\Gamma}(\|\rho\|_{L^2}^4 +1)\sum_{i=1}^m\|q_i\|_{L^p} + C_{\Gamma} m (\|u\|_{L^p} +1) \end{array} \label{l2lp} \end{equation} where we used the estimate \beginin{equation} \|\nabla\Phi\|_{L^p}\le C_{\Gamma}(\|\rho\|_{L^p} +1) \le C_{\Gamma}(\sum_{i=1}^m\|q_i\|_{L^p} +1), \label{naphilp} \end{equation} from (\ref{normsrhoq}). Thus (\ref{lftylpb}) follows from (\ref{l2lp}), concluding the proof. \end{proof} \beginin{equation}g{rem}Because the initial data are in $H^1\subset L^6$ we have that (\ref{lftylpb}) holds with $p\in [2,6]$. \end{rem} \beginin{equation}g{prop}\label{sufcondR} Let $(c_i, \Phi, u)$ be a strong solution of the system (\ref{cieq}), (\ref{poi}), (\ref{nse}) (or (\ref{cieq}), (\ref{poi}), (\ref{se})) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) on the interval $[0, T]$. Consider the quantity $B(T)$ of (\ref{condB}). Then \beginin{equation} \sup_{t\in [0,T]}\sum_{i=1}^m \|c_i(t)\|_{H^1}^2 \le C_{\Gamma}\left(\sum_{i=1}^m \|c_i(0)\|_{H^1}^2 +\int_0^T\|u\|_H^2dt+1\right)e^{C_{\Gamma}(T+R(T)+U(T))} \label{h1b} \end{equation} holds with \beginin{equation} R(T) = \int_0^T\|\rho\|_{L^4}^2dt \le C\left(\sum_{i=1}^m \|c_i(0)\|_{L^4} +\int_0^T\|u\|_{L^4}dt+1\right)^2e^{C_{\Gamma}\left(T +B(T)\right)}, \label{RT} \end{equation} and \beginin{equation} U(T)=\int_0^T\|u\|_{V}^4dt \label{UT} \end{equation} \end{prop} \beginin{equation}gin{proof} In view of (\ref{lftylpb}) with $p=4$ and the fact that $W^{1,p}\subset L^{\infty}$ for $p>3$, we have that \beginin{equation} \|\nabla\Phi\|_{L^{\infty}} \le C_{\Gamma}(\|\rho\|_{L^4} +1) \le C_{\Gamma}\left(\sum_{i=1}^m \|c_i(0)\|_{L^4}+\int_0^T\|u\|_{L^4}dt +1\right)e^{C_{\Gamma}\left(T +B(T)\right)} \label{naftyphi} \end{equation} This is a quantitative bound in terms of the initial data and the constant $B(T)$. Using it together with $\|\rho(t)\|_{L^4}$ in the estimate of the evolution of $\|\nabla q_i\|_{L^2}$ we obtain \beginin{equation} \ba \left|z_iD_i\int_{\Omega}(\nabla q_i{\widetilde{c}}dot\nabla\Phi + q_i\Delta\Phi)\Delta q_i dx\right| \le C_{\Gamma}\left (\|\nabla\Phi\|_{L^{\infty}}\|\nabla q_i\|_{L^2} + \|q_i\|_{L^4}\|\rho\|_{L^4}\right)\|\Delta q_i\|_{L^2}\\ \leC_{\Gamma}(\|\rho\|_{L^4}+1)\|\nabla q_i\|_{L^2}\|\Delta q_i\|_{L^2} \end{array} \label{globnaq} \end{equation} instead of (\ref{nonlqh1}), and consequently together with (\ref{conv}), we obtain \beginin{equation} \frac{d}{dt}\sum_{i=1}^m\|\nabla q_i\|^2_{L^2} + \sum_{i=1}^m \|\Delta q_i\|_{L^2}^2 \le C_{\Gamma} (\|\rho\|_{L^4}^2+\|u\|_V^4+1)\left(\sum_{i=1}^m\|\nabla q_i\|^2_{L^2}\right) + C_{\Gamma}(\|u\|_H^2+1) \label{normqnows} \end{equation} instead of (\ref{normqs}). Using (\ref{normqnows}) we obtain (\ref{h1b}). 
\end{proof} \beginin{equation}g{thm}\label{bkm} Let $T_1>0$ and let $(c_i, \Phi, u)$ be a strong solution of the Nernst-Planck-Stokes system (\ref{cieq}), (\ref{poi}), (\ref{se}) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) on all intervals $[0,T]$ with $T<T_1$. Assume that \beginin{equation} \sup_{T<T_1} \int_0^T\|\rho(t)\|_{L^2}^4dt<\infty. \label{unifcondB} \end{equation} Then there exists $T_2>T_1$ such that $(c_i,\Phi, u)$ can be uniquely continued as a strong solution on $[0,T_2]$. Let $T_1>0$ and let $(c_i, \Phi, u)$ be a strong solution of the Nernst-Planck-Navier-Stokes system (\ref{cieq}), (\ref{poi}), (\ref{nse}) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) on all intervals $[0,T]$ with $T<T_1$. Assume that \beginin{equation} \sup_{T<T_1}[\int_0^T\|\rho(t)\|_{L^2}^4dt + \int_0^T\|u\|_V^4dt]<\infty. \label{unifcondBu} \end{equation} Then there exists $T_2>T_1$ such that $(c_i,\Phi, u)$ can be uniquely continued as a strong solution on $[0,T_2]$. \end{thm} The converse is obviously also true. Thus (\ref{unifcondB}) (respectively, (\ref{unifcondBu})) is a necessary and sufficient condition for regularity of the Nernst-Planck-Stokes system (respectively, of the Nernst-Planck-Navier-Stokes system). \beginin{equation}g{proof} The proof follows directly from Theorem \ref{locex}, Lemma \ref{sufcondB} and Proposition \ref{sufcondR}. We remark that in the case of (\ref{cieq}), (\ref{poi}), (\ref{se}), $U(T)$ is controlled by $B(T)$. \end{proof} \beginin{equation}g{prop}\label{positive} Let $(c_i, \Phi, u)$ be a strong solution of the system (\ref{cieq}), (\ref{poi}), (\ref{nse}) (or (\ref{cieq}), (\ref{poi}), (\ref{se})) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) on the interval $[0, T]$. If $c_i(x,0)\ge 0$, $i=1, \dots, m$ then $c_i(x,t)\ge 0$ a.e for $t\in [0,T]$. \end{prop} \beginin{equation}g{proof} In order to show this we take a convex function $F:{\mathbb R}\to {\mathbb R}$ that is nonnegative, twice continuously differentiable, identically zero on the positive semiaxis, and strictly positive on the negative axis. We also assume \beginin{equation} F''(y)y^2 \le CF(y) \label{fass} \end{equation} with $C>0$ a fixed constant. Examples of such functions are \beginin{equation} F(y) = \left\{ \ba y^{2m} \quad \quad {\mbox{for}}\quad y<0,\\ 0 \quad \quad \quad {\mbox{for}}\quad y\ge 0 \end{array} \right. \label{Fm} \end{equation} with $m>1$. (In fact $m=1$ works as well, although we have only $F\in W^{2,\infty}({\mathbb R})$ in that case.) We multiply the equation (\ref{cieq}) by $F'(c_i)$ and integrate by parts using the fact that $F'(\gamma_i) =0$. We obtain \beginin{equation} \frac{d}{dt}\int_{\Omega}F(c_i)dx = -D_i\int_{\Omega} F''(c_i)\left[ |\nabla c_i|^2 + z_ic_i \nabla\Phi{\widetilde{c}}dot\nabla c_i\right]dx. \label{intfeq} \end{equation} Using a Schwartz inequality and the convexity of $F$, $F''\ge 0$, we have \beginin{equation} \frac{d}{dt}\int_{\Omega}F(c_i(x,t))dx \le \frac{CD_i}{2}z_i^2\|\nabla\Phi\|_{L^{\infty}(\Omega)}^2\int_{\Omega}F(c_i(x,t))dx. \label{intfineq} \end{equation} If $c_i(x,0)\ge 0$ then $F(c_i(x,0))=0$ and (\ref{intfineq}) above shows that $F(c_i(x,t))$ has vanishing integral. As $F$ is nonnegative, it follows that $F(c_i(x,t))= 0$ almost everywhere in $x$ and because $F$ does not vanish for negative values it follows that $c_i(x,t)$ is almost everywhere nonnegative. \end{proof} From now on we consider only solutions with $c_i\ge 0$. 
\section{Energies}\label{en} The Navier-Stokes and Stokes energy balance is \beginin{equation} \frac{1}{2K}\frac{d}{dt}\int_{\Omega}|u|^2dx + \frac{\nu}{K}\int_{\Omega}|\nabla u|^2dx = -\int_{\Omega}\rho(u{\widetilde{c}}dot\nabla \Phi)dx. \label{ens} \end{equation} We consider functionals of $(c_i, \Phi)$ which can be used to cancel the right hand side of the Navier-Stokes energy balance. We denote $(-\Delta_D)^{-1} $ the inverse of the Laplacian with homogeneous Dirichlet boundary condition. We decompose \beginin{equation} \Phi = \Phi_0 + \Phi_W \label{phis} \end{equation} where $\Phi_W$ is harmonic and obeys the inhomogeneous boundary conditions, \beginin{equation} \Delta \Phi_W = 0, \quad {\Phi_W}_{\left | \right. \partial\Omega} = W, \label{phiw} \end{equation} and \beginin{equation} -\epsilon\Delta\Phi_0 = \rho, \quad {\Phi_0}_{\left | \right. \partial\Omega} = 0. \label{phizero} \end{equation} so that \beginin{equation} \Phi_0 = \frac{1}{\epsilon}(-\Delta_D)^{-1}\rho. \label{hpizerorho} \end{equation} We introduce \beginin{equation} D= \min\{D_1, D_2,\dots D_m\}. \label{D} \end{equation} Let \beginin{equation} {\mathcal E}_1 = \int_{\Omega}\left\{\sum_{i=1}^m\Gamma_i\left (\frac{c_i}{\Gamma_i}\log\left(\frac{c_i}{\Gamma_i}\right) - \left(\frac{c_i}{\Gamma_i}\right) +1\right) + \frac{1}{2\epsilon}\rho(-\Delta_D)^{-1}\rho\right\}dx. \label{eone} \end{equation} \beginin{equation}g{prop}\label{enoneineq} Let $(c_i, \Phi, u)$ be a strong solution of the system (\ref{cieq}), (\ref{poi}), (\ref{nse}) (or (\ref{cieq}), (\ref{poi}), (\ref{se})) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) on the interval $[0, T]$. Then \beginin{equation} \frac{d}{dt}\mathcal E_1 + \mathcal D_1 \le \int_{\Omega}\rho (u{\widetilde{c}}dot\nabla \Phi) dx +C_{\Gamma}\left(\sum_{i=1}^m \|c_i-\Gamma_i\|_{L^2} +1\right)(\|u\|_H+1) \label{dteoneineq} \end{equation} holds on $[0,T]$, with \beginin{equation} \mathcal D_1 = \frac{D}{2}\int_{\Omega}\left[ \sum_{i=1}^m\left(c_i^{-1}|\nabla c_i|^2 + z_i^2c_i|\nabla\Phi|^2\right) + \frac{1}{\epsilon}\rho^2 \right]dx. \label{done} \end{equation} \end{prop} The term $\int_{\Omega}\rho (u{\widetilde{c}}dot\nabla\Phi) dx $ in the right hand side of (\ref{dteoneineq}) can be used to cancel the contribution of the electrical forces in the Navier-Stokes energy balance. \beginin{equation}g{proof} We note that \beginin{equation} \frac{1}{2\epsilon}\int_{\Omega}\rho(-\Delta_D)^{-1}\rho dx = \frac{1}{2}\int_{\Omega}\rho \Phi_0 dx. 
\label{poten} \end{equation} In order to compute the time evolution of $\mathcal E_1$ we multiply the equations (\ref{cieq}) by the factors $\log\left(\frac{c_i}{\Gamma_i}\right) + z_i \Phi_0$ and, noting that these factors vanish at the boundary, we integrate by parts: \beginin{equation} \ba \int_{\Omega}((\partial_t + u{\widetilde{c}}dot\nabla)c_i)\left(\log\left(\frac{c_i}{\Gamma_i}\right) + z_i \Phi_0\right)dx \\= -D_i\int_{\Omega}c_i\nabla\left(\log{c_i} + z_i\Phi\right){\widetilde{c}}dot \nabla\left(\log\left(\frac{c_i}{\Gamma_i}\right) + z_i \Phi_0\right)dx \\ = -D_i\int_{\Omega}c_i\left|\nabla\left(\log{c_i} + z_i\Phi\right)\right|^2dx + D_i\int_{\Omega}c_i\nabla\left(\log{c_i} + z_i\Phi\right){\widetilde{c}}dot\nabla\left(\log \Gamma_i + z_i\Phi_W\right) dx \end{array} \label{eoneone} \end{equation} We have thus \beginin{equation} \ba \int_{\Omega}((\partial_t + u{\widetilde{c}}dot\nabla)c_i)\left(\log\left(\frac{c_i}{\Gamma_i}\right) + z_i \Phi_0\right)dx \\\le -\frac{1}{2}D_i\int_{\Omega}c_i\left|\nabla\left(\log{c_i} + z_i\Phi\right)\right|^2dx + \frac{1}{2} D_i\int_{\Omega}c_i|\nabla(\log \Gamma_i + z_i \Phi_W)|^2 dx \end{array} \label{eoneoneineone} \end{equation} In view of the fact that \beginin{equation} ((\partial_t + u{\widetilde{c}}dot\nabla)c_i) \log\left(\frac{c_i}{\Gamma_i}\right) = (\partial_t + u{\widetilde{c}}dot\nabla)\left(c_i \log\left(\frac{c_i}{\Gamma_i}\right) -c_i\right) + c_i u{\widetilde{c}}dot\nabla\log\Gamma_i, \label{fact} \end{equation} summing in $i$, on the left hand side we have \beginin{equation} \ba \sum_{i=1}^m\int_{\Omega}((\partial_t + u{\widetilde{c}}dot\nabla)c_i)\left(\log\left(\frac{c_i}{\Gamma_i}\right) + z_i \Phi_0\right)dx \\ =\frac{d}{dt}\int_{\Omega}\sum_{i=1}^mc_i(\log\left(\frac{c_i}{\Gamma_i}\right) -1) dx + \int_{\Omega}(\partial_t\sum_{i=1}^m(z_ic_i))\Phi_0 dx\\ +\int_{\Omega}(u{\widetilde{c}}dot\nabla (\sum_{i=1}^m z_ic_i))\Phi_0 dx + \int_{\Omega}\sum_{i=1}^2 c_i u{\widetilde{c}}dot\nabla \log\Gamma_i dx\\ =\frac{d}{dt}\mathcal E_1 + \int_{\Omega}(u{\widetilde{c}}dot\nabla\rho) \Phi_0dx + \sum_{i=1}^m \int_{\Omega}c_i u{\widetilde{c}}dot\nabla\log\Gamma_i dx. \end{array} \label{leftone} \end{equation} In the last equality we used \beginin{equation} \frac{d}{dt}\frac{1}{2\epsilon}\int_{\Omega}\rho(-\Delta_D)^{-1}\rho dx = \int_{\Omega}(\partial_t\rho)\Phi_0 dx \label{dtpoten} \end{equation} because $(-\Delta_D)^{-1}$ is selfadjoint. 
Combining (\ref{eoneoneineone}) and (\ref{leftone}) we obtain \beginin{equation} \ba \frac{d}{dt}\mathcal E_1 \le -\frac{1}{2}\sum_{i=1}^mD_i\int_{\Omega}c_i\left|\nabla\left(\log{c_i} + z_i\Phi\right)\right|^2dx + \frac{1}{2}\sum_{i=1}^m D_i\int_{\Omega}c_i|\nabla(\log \Gamma_i + z_i\Phi_W)|^2 dx \\ - \sum_{i=1}^m\int_{\Omega}c_i u{\widetilde{c}}dot\nabla\log\Gamma_i dx - \int_{\Omega}(u{\widetilde{c}}dot\nabla\rho)\Phi_0dx\\ = -\frac{1}{2}\sum_{i=1}^mD_i\int_{\Omega}c_i\left|\nabla\left(\log{c_i} + z_i\Phi\right)\right|^2dx + \frac{1}{2}\sum_{i=1}^m D_i\int_{\Omega}c_i|\nabla(\log \Gamma_i + z_i \Phi_W)|^2 dx \\ - \sum_{i=1}^m\int_{\Omega}c_i u{\widetilde{c}}dot\nabla\log\Gamma_i dx + \int_{\Omega}\rho u{\widetilde{c}}dot\nabla (\Phi -\Phi_W)dx \end{array} \label{enoneone} \end{equation} We note that \beginin{equation} \ba \frac{1}{2}\sum_{i=1}^mD_i\int_{\Omega}c_i\left|\nabla\left(\log{c_i} + z_i\Phi\right)\right|^2dx\ge \frac{D}{2}\sum_{i=1}^m\int_{\Omega}c_i\left|\nabla\left(\log{c_i} + z_i\Phi\right)\right|^2dx\\ =\frac{D}{2}\sum_{i=1}^m\int_{\Omega}(c_i^{-1}|\nabla c_i|^2 + z_i^2c_i|\nabla\Phi|^2)dx + D\int_{\Omega}\sum_{i=1}^m z_i\nabla c_i{\widetilde{c}}dot \nabla \Phi dx \\ =\frac{D}{2}\sum_{i=1}^m\int_{\Omega}(c_i^{-1}|\nabla c_i|^2 + z_i^2c_i|\nabla\Phi|^2)dx + D\int_{\Omega}\sum_{i=1}^m z_i\nabla (c_i-\Gamma_i){\widetilde{c}}dot \nabla \Phi dx \\ + D\int_{\Omega}\sum_{i=1}^m z_i\Gamma_i{\widetilde{c}}dot \nabla \Phi dx \\ =\frac{D}{2}\sum_{i=1}^m\int_{\Omega}(c_i^{-1}|\nabla c_i|^2 + z_i^2c_i|\nabla\Phi|^2)dx + D\int_{\Omega}\sum_{i=1}^m z_i(c_i-\Gamma_i)( -\Delta\Phi) dx \\ + D\int_{\Omega}\sum_{i=1}^m z_i\Gamma_i{\widetilde{c}}dot \nabla \Phi dx \\ =\frac{D}{2}\sum_{i=1}^m\int_{\Omega}(c_i^{-1}|\nabla c_i|^2 + z_i^2c_i|\nabla\Phi|^2)dx + D\frac{1}{\epsilon}\int_{\Omega}\sum_{i=1}^m z_i(c_i-\Gamma_i)\rho dx \\ + D\int_{\Omega}\sum_{i=1}^m z_i\Gamma_i{\widetilde{c}}dot \nabla \Phi dx \\ =\frac{D}{2}\sum_{i=1}^m\int_{\Omega}(c_i^{-1}|\nabla c_i|^2 + z_i^2c_i|\nabla\Phi|^2)dx + D\frac{1}{\epsilon}\int_{\Omega}(\rho -\sum_{i=1}^m z_i\Gamma_i)\rho dx \\ + D\int_{\Omega}\sum_{i=1}^m z_i\Gamma_i{\widetilde{c}}dot \nabla \Phi dx \\ \ge \frac{D}{2}\sum_{i=1}^m\int_{\Omega}(c_i^{-1}|\nabla c_i|^2 + z_i^2c_i|\nabla\Phi|^2)dx + \frac{D}{2\epsilon}\int_{\Omega}\rho^2 dx\\ -\frac{D}{2\epsilon}\int_{\Omega}\left |\sum_{i=1}^mz_i\Gamma_i\right|^2dx + D\int_{\Omega}\sum_{i=1}^m z_i\Gamma_i{\widetilde{c}}dot \nabla \Phi dx. \end{array} \label{dissiplow} \end{equation} From (\ref{enoneone}) and (\ref{dissiplow}) we obtain \beginin{equation} \frac{d}{dt}\mathcal E_1 + \mathcal D_1 \le \mathcal Q_1 + \int_{\Omega}\rho u{\widetilde{c}}dot\nabla\Phi dx \label{eonebalance} \end{equation} with $\mathcal D_1$ given in (\ref{done}) and \beginin{equation} \ba \mathcal Q_1 = \frac{1}{2}\sum_{i=1}^m D_i\int_{\Omega}c_i|\nabla(\log \Gamma_i + z_i\Phi_W)|^2 dx\\ - \sum_{i=1}^m\int_{\Omega}c_i u{\widetilde{c}}dot\nabla\log\Gamma_i dx - \int_{\Omega}\rho u{\widetilde{c}}dot\nabla \Phi_W dx\\ +\frac{D}{2\epsilon}\int_{\Omega}\left | \sum_{i=1}^mz_i\Gamma_i\right|^2dx - D\int_{\Omega}\sum_{i=1}^m z_i\Gamma_i{\widetilde{c}}dot \nabla \Phi dx. \end{array} \label{qone} \end{equation} Note that $\mathcal Q_1$ is at most quadratic in terms of the unknowns $u, c_i$, in view of the fact that both $\rho$ and $\Phi$ are affine in $c_i$. The inequality (\ref{dteoneineq}) follows by bounding $Q_1$. 
\end{proof} A useful energy is the potential part in $\mathcal E_1$: \beginin{equation} \mathcal P = \frac{1}{2\epsilon} \int_{\Omega}\rho (-\Delta_D)^{-1}\rho dx. \label{P} \end{equation} \beginin{equation}g{prop}\label{potentialene} Let $(c_i, \Phi, u)$ be a strong solution of the system (\ref{cieq}), (\ref{poi}), (\ref{nse}) (or (\ref{cieq}), (\ref{poi}), (\ref{se})) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) on the interval $[0, T]$. Then \beginin{equation} \frac{d}{dt}\mathcal P + \mathcal D_2 \le \int_{\Omega}\rho (u{\widetilde{c}}dot\nabla\Phi) dx + C_{\Gamma}\left(\sum_{i=1}^m\|c_i-\Gamma_i\|_{L^2}+1\right)(\|\rho\|_{L^2}+1) + C_{\Gamma}\|\rho\|_{L^2}\|u\|_H \label{pineq} \end{equation} holds on $[0,T]$, with \beginin{equation} \mathcal D_2 = \frac{1}{2}\sum_{i=1}^m z_i^2D_i\int_{\Omega}c_i \left| \nabla\Phi\right|^2dx. \label{d2} \end{equation} \end{prop} We note that the term $\int_{\Omega}\rho(u{\widetilde{c}}dot\nabla \Phi)dx$ in the right hand side of (\ref{pineq}) can be used to cancel the contribution of electrical forces in the Navier-Stokes energy balance. \beginin{equation}g{proof} In order to compute the time evolution of (\ref{P}) we take the equations (\ref{cieq}), multiply by the factors $z_i\Phi_0$ and integrate by parts in view of the fact that $\Phi_0$ vanishes on the boundary. We obtain \beginin{equation} \ba \int_{\Omega}((\partial_t + u{\widetilde{c}}dot\nabla)c_i)z_i\Phi_0dx = -D_iz_i\int_{\Omega}(\nabla c_i+ z_i c_i\nabla\Phi){\widetilde{c}}dot\nabla\Phi_0dx\\ = -D_iz_i\int_{\Omega}\nabla c_i{\widetilde{c}}dot\nabla\Phi_0 -z_i^2D_i \int_{\Omega}c_i\nabla\Phi{\widetilde{c}}dot\nabla\Phi_0 dx\\ = -z_iD_i\int_{\Omega}\nabla(c_i-\Gamma_i){\widetilde{c}}dot\nabla \Phi_0 dx -D_iz_i\int_{\Omega}\nabla\Gamma_i{\widetilde{c}}dot\nabla\Phi_0\\ - z_i^2D_i\int_{\Omega}c_i \left| \nabla\Phi\right|^2dx + z_i^2D_ i\int_{\Omega}c_i \nabla\Phi {\widetilde{c}}dot\nabla\Phi_Wdx\\ =-D_iz_i\epsilon^{-1}\int_{\Omega}(c_i-\Gamma_i)\rho dx - z_i^2D_i\int_{\Omega}c_i \left| \nabla\Phi\right|^2dx\\ + z_i^2D_i\int_{\Omega}c_i \nabla\Phi {\widetilde{c}}dot\nabla\Phi_Wdx -D _iz_i\int_{\Omega}\nabla\Gamma_i{\widetilde{c}}dot\nabla\Phi_0dx \end{array} \label{potrightone} \end{equation} In the last equality we used the fact that $c_i-\Gamma_i$ vanishes on the boundary and the fact that $-\epsilon\Delta \Phi_0 = \rho$. Summing in $i$, on the left hand side we have \beginin{equation} \ba \sum_{i=1}^m\int_{\Omega}((\partial_t + u{\widetilde{c}}dot\nabla)c_i)z_i\Phi_0dx = \int_{\Omega}(\partial_t\rho)\Phi_0dx + \int_{\Omega}(u{\widetilde{c}}dot\nabla\rho)\Phi_0dx\\ =\frac{d}{dt}\mathcal P +\int_{\Omega}(u{\widetilde{c}}dot\nabla\rho)\Phi_0 dx \end{array} \label{leftpone} \end{equation} Putting together (\ref{potrightone}) and (\ref{leftpone}) \beginin{equation} \ba \frac{d}{dt}\mathcal P + \sum_{i=1}^m z_i^2D_i\int_{\Omega}c_i \left| \nabla\Phi\right|^2dx\\ = -\sum_{i=1}^mD_iz_i\epsilon^{-1}\int_{\Omega}(c_i-\Gamma_i)\rho dx\\ +\sum_{i=1}^m z_i^2D_i\int_{\Omega}c_i \nabla\Phi {\widetilde{c}}dot\nabla\Phi_Wdx - \sum_{i=1}^mD_iz_i\int_{\Omega}\nabla\Gamma_i{\widetilde{c}}dot\nabla\Phi_0dx\\ -\int_{\Omega}\rho(u{\widetilde{c}}dot\nabla \Phi_W) dx + \int_{\Omega}\rho u{\widetilde{c}}dot\nabla\Phi dx. 
\end{array} \label{pbalanceone} \end{equation} After a Schwartz inequality we obtain \beginin{equation} \frac{d}{dt}\mathcal P + \mathcal D_2 \le \mathcal Q_{2} + \int_{\Omega}\rho u{\widetilde{c}}dot\nabla\Phi dx \label{pbalance} \end{equation} where $\mathcal D_2$ is given in (\ref{d2}) and \beginin{equation} \ba \mathcal Q_{2} = -\sum_{i=1}^mD_iz_i\epsilon^{-1}\int_{\Omega}(c_i-\Gamma_i)\rho dx\\ +\frac{1}{2}\sum_{i=1}^m z_i^2D_i\int_{\Omega}c_i \left| \nabla\Phi_W\right |^2dx - \sum_{i=1}^mD_iz_i\int_{\Omega}\nabla\Gamma_i{\widetilde{c}}dot\nabla\Phi_0dx -\int_{\Omega}\rho(u{\widetilde{c}}dot\nabla \Phi_W) dx \end{array} \label{q2} \end{equation} Unlike the term $\mathcal Q_1$ of (\ref{qone}), $\mathcal Q_2$ has no $(u,c)$ quadratic terms, the only quadratic terms are of the type $(c,\rho)$ or $(u,\rho)$ (the $(u,\Phi_0)$ term is is of $(u,\rho)$ type in this accounting). Estimating $Q_2$ we obtain (\ref{pineq}). \end{proof} \section{Quadratic bounds}\label{glob} We estimate the sum of $L^2$ norms of $c_i.$ We take the scalar product of the equations (\ref{Fiqieq}) with $\frac{1}{D_i} q_i$ and add. We obtain first \beginin{equation} \frac{d}{dt}\sum_{i=1}^m \frac{1}{2D_i}\int_{\Omega} q_i^2 dx + \sum_{i=1}^m \int_{\Omega}|\nabla q_i|^2dx = -\frac{1}{2}\sum_{i=1}^m z_i\int_{\Omega}\nabla\Phi{\widetilde{c}}dot\nabla(q_i^2)dx + \sum_{i=1}^m \frac{1}{D_i}\int_{\Omega}F_iq_idx. \label{enstep1} \end{equation} The integartion by parts is justified because of (\ref{qibc}). We integrate by parts one more time using the same boundary conditions and (\ref{poi}) \beginin{equation} \frac{d}{dt}\sum_{i=1}^m \frac{1}{2D_i}\int_{\Omega} q_i^2 dx + \sum_{i=1}^m \int_{\Omega}|\nabla q_i|^2dx = -\frac{1}{2\epsilon}\int_{\Omega}\rho\sum_{i=1}^m z_i q_i^2 dx + \sum_{i=1}^m \frac{1}{D_i}\int_{\Omega}F_iq_idx. \label{enstep2} \end{equation} \beginin{equation}g{thm}\label{twospecies} Consider $m=2$, $z_1=1$, $z_2=-1$. Let $T>0$ be arbitrary. Let $c_i({\widetilde{c}}dot, 0)>0$, $c_i({\widetilde{c}}dot, 0)\in H^1$, ${c_i}_{|\partial\Omega} = \gamma_i$ and $u_0\in V$ be given. Then the system (\ref{cieq}), (\ref{poi}), (\ref{se})) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) has global strong solutions on $[0,T]$. The system (\ref{cieq}), (\ref{poi}), (\ref{nse}) has global strong solutions if \beginin{equation} \int_0^T\|u\|_V^4dt <\infty. \label{nsesuf} \end{equation} Moreover \beginin{equation} \sup_{0\le t \le T}\sum_{i=1}^2 \|c_i(t)\|_{H^1}^2 + \int_0^T\sum_{i=1}^2\|c_i(t)\|^2_{H^2}dt \le C_{\Gamma}\left [\sum_{i=1}^2 \|c_i(0)\|_{H^1}^2+\int_0^T\|u\|_H^2dt +1\right]e^{C_{\Gamma} (T+R(T)+U(T))} \label{strongbm2} \end{equation} holds for all $T$, where $R(T)$ is given by (\ref{RT}), $U(T)$ is given by (\ref{UT}), and with $C_{\Gamma}$ depending only on the boundary conditions $\gamma_i$ and $W$, domain $\Omega$, and parameters $\nu, D_i, \epsilon, K$. \end{thm} \beginin{equation}g{proof} We note that, when $m=2$ and $z_1 =1$, $z_2=-1$, then \beginin{equation} \sum_{i=1}^2 z_i q_i^2 = (\rho -\Gamma_1+\Gamma_2 )(c_1+ c_2 -\Gamma_1 -\Gamma_2). \label{rhosum} \end{equation} Thus \beginin{equation} \ba \rho\sum_{i=1}^2 z_i q_i^2 = \rho^2(c_1+c_2) - (\Gamma_1-\Gamma_2)\rho(q_1+q_2) - \rho^2 (\Gamma_1 + \Gamma_2)\\ \ge |\rho|^3 -(\Gamma_1 - \Gamma_2)\rho(q_1+q_2) -\rho^2(\Gamma_1 +\Gamma_2) \end{array} \label{lowcube} \end{equation} because \beginin{equation} c_1+ c_2 \ge |\rho|. 
\label{sumdiff} \end{equation} Now we use H\"{o}lder and Young inequalities to bound in (\ref{enstep2}) \beginin{equation} \ba \frac{1}{2\epsilon}\int_{\Omega}\rho\sum_{i=1}^2 z_i q_i^2 dx \ge \frac{1}{2\epsilon}\int_{\Omega}|\rho|^3dx\\ - \frac{1}{2\epsilon}(\|q_1\|_{L^2(\Omega)}+\|q_2\|_{L^2(\Omega)})\|\Gamma_2-\Gamma_1\|_{L^6(\Omega)}\|\rho\|_{L^3(\Omega)} \\ -\frac{1}{2\epsilon}[\|\Gamma_1\|_{L^3(\Omega)} + \|\Gamma_2\|_{L^3(\Omega)}]\|\rho\|_{L^3(\Omega)}^2\\ \ge \frac{1}{4\epsilon}\|\rho\|_{L^3}^3 - \frac{1}{4 L^2}\|q_1\|_{L^2}^2 -\frac{1}{4 L^2}\|q_2\|_{L^2}^2 -C_{\Gamma} \\ \end{array} \label{inter} \end{equation} with $L$ the constant in the Poincar\'{e} inequality \beginin{equation} \|\nabla q\|_{L^2(\Omega)}^2 \ge L^{-2}\|q\|^2_{L^2(\Omega)}. \label{poin} \end{equation} From (\ref{enstep2}), (\ref{inter}) and (\ref{poin}) we obtain \beginin{equation} \ba \frac{d}{dt}\sum_{i=1}^2 \frac{1}{2D_i}\int_{\Omega} q_i^2 dx + \frac{3}{4}\sum_{i=1}^2 \int_{\Omega}|\nabla q_i|^2dx + \frac{1}{4\epsilon}\|\rho\|_{L^3}^3\\ \le C_{\Gamma} + \sum_{i=1}^2\frac{1}{D_i}\int_{\Omega}F_iq_idx \end{array} \label{intern} \end{equation} We have \beginin{equation} \left|\int_{\Omega}F_iq_i dx\right| \le C_{\Gamma}(\|\rho\|_{L^2} +\|u\|_H +1)\|q_i\|_{L^2} \label{fqbound} \end{equation} and therefore we have that \beginin{equation} \mathcal E_3 = \sum_{i=1}^2 \frac{1}{D_i}\int_{\Omega} q_i^2 dx \label{e3} \end{equation} obeys \beginin{equation} \frac{d}{dt}\mathcal E_3 + \mathcal D_3 \le C_{\Gamma} + \widetilde{C_{\Gamma}}\|u\|_H^2 \label{e3ineq} \end{equation} with \beginin{equation} \mathcal D_3 = \sum_{i=1}^2\frac{1}{2} \int_{\Omega}|\nabla q_i|^2dx + \frac{1}{4\epsilon}\|\rho\|_{L^3}^3. \label{d3} \end{equation} We singled out the coefficient $\widetilde{C_{\Gamma}}$ of $\|u\|_H^2$ because we use it next. We take a constant \beginin{equation} \delta = \frac{\nu}{2KL^2\widetilde{C_{\Gamma}}} \label{deltaLC} \end{equation} such that the dissipation in the Navier-Stokes energy balance exceeds twice the contribution from $\|u\|_H^2$ in the right hand side of (\ref{e3ineq}) when the latter is multiplied by $\delta$, \beginin{equation} \frac{\nu}{K}\int_{\Omega}|\nabla u|^2dx \ge 2\delta\widetilde{C_{\Gamma}}\|u\|_H^2. \label{dissbeats} \end{equation} We consider \beginin{equation} \mathcal F = \frac{1}{2K}\|u\|_H^2 + \mathcal P + \delta \mathcal E_3 \label{mathcalf} \end{equation} and, using (\ref{ens}), (\ref{pineq}) and (\ref{e3ineq}) multiplied by $\delta$ we obtain \beginin{equation} \ba \frac{d}{dt}\mathcal F + \frac{\nu}{2K}\int_{\Omega}|\nabla u|^2dx + \frac{\delta}{2}\sum_{i=1}^2\|\nabla q_i\|^2_{L^2} + \frac{\delta}{4\epsilon}\|\rho\|_{L^3}^3\\ \le C_{\Gamma}[\|\rho\|_{L^2}(\sum_{i=1}^2\|q_i\|_{L^2} + 1) + \|\rho\|_{L^2}\|u\|_H + \sum_{i=1}^2\|q_i\|_{L^2} + 1]. \end{array} \label{mathcalfineq} \end{equation} The positive cubic term in $\rho$ on the left hand side together with the rest of positive quadratic dissipative terms on the left hand side can be used to absorb all the quadratic terms on the right hand side, because they all involve at least one $\rho$, and the linear terms are also absorbed using Poincar\'{e} inequalities for both $q_i$ and for $u$. This results in \beginin{equation} \frac{d}{dt}\mathcal F + c_{\Gamma} \mathcal F \le C_{\Gamma} \label{mathcalfinal} \end{equation} with $c_{\Gamma}>0$. 
It follows that \beginin{equation} \mathcal F(t) \le \mathcal F(0) e^{-c_{\Gamma}t} + C_{\Gamma} \label{mathcalFbound} \end{equation} This implies in particular that \beginin{equation} \|\rho(t)\|_{L^2}^2\le C_{\Gamma}\mathcal F(0)e^{-c_{\Gamma}t} + C_{\Gamma} \label{rhol2b} \end{equation} and, integrating in time, (\ref{condB}) holds \beginin{equation} B(T)\le C_{\Gamma}(\mathcal F(0)+ T). \label{BTB} \end{equation} Moreover, the dissipation is time integrable, \beginin{equation} \int_0^T\left\{ \frac{\nu}{2K}\int_{\Omega}|\nabla u|^2dx + \frac{\delta}{2}\sum_{i=1}^2\|\nabla q_i\|^2_{L^2} + \frac{\delta}{4\epsilon}\|\rho\|_{L^3}^3\right\}dt\le C_{\Gamma}(\mathcal F(0) +T). \label{dissip} \end{equation} It follows from (\ref{h1b}) that (\ref{strongbm2}) holds. \end{proof} \beginin{equation}g{thm}\label{equaldiffs} Consider $z_i=\pm 1$, $i=1, \dots,m$, and assume $D_1 = D_2 =\dots = D_m =D>0$. Let $T>0$ be arbitrary. Let $c_i({\widetilde{c}}dot, 0)>0$, $c_i({\widetilde{c}}dot, 0)\in H^1$, ${c_i}_{|\partial\Omega} = \gamma_i$ and $u_0\in V$ be given. Then the system (\ref{cieq}), (\ref{poi}), (\ref{se})) with boundary conditions (\ref{gammai}), (\ref{phibc}), (\ref{ubc}) has global strong solutions on $[0,T]$. The system (\ref{cieq}), (\ref{poi}), (\ref{nse}) has global stromg solutions if \beginin{equation} \int_0^T\|u\|_V^4dt <\infty. \label{nsesuff} \end{equation} Moreover \beginin{equation} \sup_{0\le t \le T}\sum_{i=1}^2 \|c_i(t)\|_{H^1}^2 + \int_0^T\sum_{i=1}^2\|c_i(t)\|^2_{H^2}dt \le C_{\Gamma}\left [\sum_{i=1}^2 \|c_i(0)\|_{H^1}^2 +\int_0^T\|u\|_H^2dt+1\right]e^{C_{\Gamma} (T+R(T)+U(T))} \label{strongbeqdiff} \end{equation} holds for all $T$, where $R(T)$ is given by (\ref{RT}), $U(T)$ is given by (\ref{UT}), and with $C_{\Gamma}$ depending only on the boundary conditions $\gamma_i$ and $W$, domain $\Omega$, and parameters $\nu, D_i, \epsilon, K$. \end{thm} \beginin{equation}g{proof} We consider the auxiliary variables \beginin{equation} S = \sum_{i=1}^M q_i \label{S} \end{equation} and \beginin{equation} Z = \sum_{i=1}^mz_iq_i. \label{Z} \end{equation} Summing in (\ref{Fiqieq}) we have \beginin{equation} \left\{ \ba \left(\partial_t +u{\widetilde{c}}dot\nabla\right) S = D\left(\Delta S +{\mbox{div}\,}(Z\nabla\Phi)\right) + F_S\\ \left(\partial_t + u{\widetilde{c}}dot\nabla\right)Z = D(\left(\Delta Z + {\mbox{div}\,}(S\nabla\Phi)\right) + F_Z, \end{array} \right. \label{SZeq} \end{equation} with \beginin{equation} F_S = \sum_{i=1}^m F_i, \label{FS} \end{equation} \beginin{equation} F_Z = \sum_{i=1}^mz_iF_i, \label{FZ} \end{equation} and $F_i$ given in (\ref{Fi}) and with $D$ in (\ref{D}). Multiplying by $S$ and $Z$ and integrating by parts (twice in the nonlinear term, once in linear terms) we obtain \beginin{equation} \frac{1}{2}\frac{d}{dt}\int_{\Omega}(S^2 + Z^2)dx + D\int_{\Omega}(|\nabla S|^2 + |\nabla Z|^2)dx = -\frac{D}{\epsilon}\int_{\Omega} SZ\rho dx + \int_{\Omega}\left(SF_S + ZF_Z\right)dx \label{enSZ} \end{equation} Now we use \beginin{equation} Z = \rho - \Gamma_Z \label{Zrho} \end{equation} and \beginin{equation} S = \sum_{i=1}^M c_i - \Gamma_S \label{Ssum} \end{equation} with \beginin{equation} \Gamma_S = \sum_{i=1}^M \Gamma_i, \label{gammas} \end{equation} and \beginin{equation} \Gamma_Z = \sum_{i=1}^m z_i\Gamma_i, \label{gammaz} \end{equation} together with \beginin{equation} \sum_{i=1}^m c_i \ge |\rho|, \label{Srhon} \end{equation} to deduce \beginin{equation} SZ\rho \ge |\rho|^3 - \Gamma_S \rho^2 - S\rho \Gamma_Z. 
\label{szrho}
\end{equation}
Let us note the relationships
\begin{equation}
\left\{
\begin{array}{l}
F_S = - u\cdot\nabla \Gamma_S + D(\Delta \Gamma_S + {\mbox{div}\,}(\Gamma_Z\nabla\Phi)),\\
F_Z = - u\cdot\nabla \Gamma_Z + D(\Delta \Gamma_Z + {\mbox{div}\,}(\Gamma_S\nabla\Phi)).
\end{array}
\right.
\label{FGamma}
\end{equation}
We deduce that
\begin{equation}
\begin{array}{l}
\left| \int_{\Omega}(SF_S +ZF_Z)dx\right | \le\frac{1}{2}\int_{\Omega}(|\nabla S|^2 + |\nabla Z|^2)dx \\
+ \int_{\Omega}\left[|u|^2\left(\Gamma_S^2 + \Gamma_Z^2\right) + D^2|\nabla \Gamma_S|^2 + D^2|\nabla\Gamma_Z|^2\right ]dx + D^2\int_{\Omega}\left(|\Gamma_S|^2 + |\Gamma_Z|^2\right)|\nabla\Phi|^2dx.
\end{array}
\label{sfzf}
\end{equation}
Using (\ref{enSZ}), (\ref{szrho}), (\ref{sfzf}) and (\ref{naphilp}) we obtain
\begin{equation}
\begin{array}{l}
\frac{d}{dt}\int_{\Omega}(S^2 + Z^2)dx + \frac{D}{2}\int_{\Omega}(|\nabla S|^2 + |\nabla Z|^2)dx +\frac{D}{4\epsilon}\int_{\Omega} |\rho|^3 dx \\
\le C_{\Gamma} + \widetilde{C_{\Gamma}}\|u\|_H^2.
\end{array}
\label{ensszb}
\end{equation}
We take $\delta$ defined in (\ref{deltaLC}) with the current $\widetilde{C_{\Gamma}}$ and consider the functional
\begin{equation}
\mathcal G = \frac{1}{2K}\|u\|_H^2 + \mathcal P + \delta\int_{\Omega}(S^2 + Z^2)dx
\label{mathcalG}
\end{equation}
and obtain from (\ref{ens}), (\ref{pineq}) and (\ref{ensszb})
\begin{equation}
\begin{array}{l}
\frac{d}{dt}\mathcal G + \frac{\nu}{2K}\int_{\Omega}|\nabla u|^2dx + \frac{\delta}{2}(\|\nabla S\|^2_{L^2} + \|\nabla Z\|_{L^2}^2)+ \frac{\delta}{4\epsilon}\|\rho\|_{L^3}^3\\
\le C_{\Gamma}[\|\rho\|_{L^2}(\sum_{i=1}^m\|c_i-\Gamma_i\|_{L^2} + 1) + \|\rho\|_{L^2}\|u\|_H + \sum_{i=1}^m\|c_i-\Gamma_i\|_{L^2} + 1].
\end{array}
\label{gineq}
\end{equation}
Now we note that
\begin{equation}
0\le c_i \le \sum_{i=1}^m c_i = S + \Gamma_S
\label{cs}
\end{equation}
implies that
\begin{equation}
\|c_i-\Gamma_i\|_{L^2} \le \|S\|_{L^2} + C_{\Gamma}
\label{qiS}
\end{equation}
and the Poincar\'{e} inequality for $S$ implies
\begin{equation}
\|\nabla S\|_{L^2}^2 \ge \frac{1}{L^2}\|c_i-\Gamma_i\|^2_{L^2} -C_{\Gamma}.
\label{pois}
\end{equation}
Therefore, using the Poincar\'{e}, H\"{o}lder and Young inequalities, we obtain
\begin{equation}
\frac{d}{dt}\mathcal G + c_{\Gamma}\mathcal G \le C_{\Gamma}
\label{ginedecay}
\end{equation}
with $c_{\Gamma}>0$. The rest of the proof follows as in the proof of Theorem \ref{twospecies}.
\end{proof}
{\bf{Acknowledgment.}} The work of PC was partially supported by NSF grant DMS-171398.
\end{document}
\begin{document} \pagestyle{plain}
\newtheoremstyle{mystyle} {\topsep} {\topsep} {\it} {} {\bf} {.} {.5em} {}
\theoremstyle{mystyle}
\newtheorem{assumptionex}{Assumption}
\newenvironment{assumption} {\pushQED{\qed}\renewcommand{\qedsymbol}{}\assumptionex} {\popQED\endassumptionex}
\newtheorem{assumptionexp}{Assumption}
\newenvironment{assumptionp} {\pushQED{\qed}\renewcommand{\qedsymbol}{}\assumptionexp} {\popQED\endassumptionexp}
\renewcommand{\theassumptionexp}{\arabic{assumptionexp}$'$}
\newtheorem{assumptionexpp}{Assumption}
\newenvironment{assumptionpp} {\pushQED{\qed}\renewcommand{\qedsymbol}{}\assumptionexpp} {\popQED\endassumptionexpp}
\renewcommand{\theassumptionexpp}{\arabic{assumptionexpp}$''$}
\newtheorem{assumptionexppp}{Assumption}
\newenvironment{assumptionppp} {\pushQED{\qed}\renewcommand{\qedsymbol}{}\assumptionexppp} {\popQED\endassumptionexppp}
\renewcommand{\theassumptionexppp}{\arabic{assumptionexppp}$'''$}
\title{\bf Randomized and Balanced Allocation of Units \\ into Treatment Groups Using the \\ Finite Selection Model for R}
\author{Ambarish Chattopadhyay\thanks{Department of Statistics, Harvard University, 1 Oxford Street Cambridge, MA 02138; email: \url{[email protected]}.}
\and Carl N. Morris\thanks{Department of Statistics, Harvard University, 1 Oxford Street Cambridge, MA 02138; email: \url{[email protected]}.}
\and Jos\'{e} R. Zubizarreta\thanks{Departments of Health Care Policy, Biostatistics, and Statistics, Harvard University, 180 Longwood Avenue, Office 307-D, Boston, MA 02115; email: \url{[email protected]}.} }
\date{}
\maketitle
\begin{abstract} \noindent The original Finite Selection Model (FSM) was developed in the 1970s to enhance the design of the RAND Health Insurance Experiment (HIE; \citealt{newhouse1993free}). At the time of its development by Carl Morris \citep{morris1979finite}, there were fundamental computational limitations to making the method widely available for practitioners. Today, as randomized experiments increasingly become more common, there is a need for implementing experimental designs that are randomized, balanced, robust, and easily applicable to several treatment groups. To help address this problem, we revisit the original FSM under the potential outcome framework for causal inference and provide its first readily available software implementation. In this paper, we provide an introduction to the FSM and a step-by-step guide for its use in R.
\end{abstract} \begin{center} \noindent Keywords: {causal inference, covariate balance, experimental design, randomization, R} \end{center} \doublespacing \singlespacing \pagebreak \tableofcontents \pagebreak \doublespacing
\section{Introduction} \label{sec_introduction}
\subsection{Origin of the FSM} \label{sec_origin}
The original Finite Selection Model (FSM) was proposed and developed at the RAND Corporation in the 1970s to enhance the experimental design for the now famous Health Insurance Experiment (HIE), the second of several large-scale national public policy experiments of that era. Among others, the experimental findings from the HIE helped to understand the consequences of health care financing on people's health and supported the reorganization of private health insurance in the US (see \citealt{newhouse1993free}). The FSM procedure was conceived and developed by Carl Morris \citep{morris1979finite}. The FSM generalizes a process familiar to youngsters world-wide who have played schoolyard games, when two teams are assembled from a group of willing participants by having team captains take turns, at each opportunity choosing the best remaining player for his or her team. It is intuitive that if the two captains are equally well-informed (or even equally poorly informed) about the abilities of the available individuals, the resulting teams are likely to be well ``matched'', or well ``balanced'' in the language of experimental design. If the captains are also equally well-informed strategically about winning the contest, they will assemble teams that are likely to compete at a higher level. In the FSM acronym, ``Finite'' recognizes the finiteness of the list of available units (or subjects) being chosen, as opposed to design tools that make choices from a possibly infinite list of units. ``Selection'' emphasizes that treatment ``captains'' (or ``choosers'') select units for treatments, taking turns in a fair and random order. In the FSM, randomness is an integral part of the treatment allocation process, providing a basis for inference. More specifically, the order in which treatments select units is determined by a randomized ``selection order matrix.'' At each turn, the choosing treatment adds the single remaining unit that maximally improves the combined quality of its current group of units. In the HIE, the FSM involved having 13 treatments (insurance plans) choose families in a balanced and controlled random order. This was done separately for each of the HIE's six national experimental sites (four cities and two counties in the USA). In each site, a group of eligible families had been identified, each willing to participate by accepting whichever treatment it was offered. Selections were based on each family's known vector of 23 covariates, chosen for their value in predicting health utilization regressions. A primary advantage of using the FSM as part of an experimental design lies in its ability to improve covariate balance for a large list of baseline covariates (23 in the HIE) across a large number of treatment groups (13 in the HIE). Another advantage of the FSM is that it does not require dichotomizing continuous variables. For an overview of the FSM and how it was used in the HIE, see \citealt{morris1993the}.
\subsection{The FSM package} \label{sec_package_intro}
At the time of the conceptual development of the original FSM, there were fundamental computational limitations to making the method widely available for practitioners. However, today computation is no longer a binding constraint, and as randomized experiments are becoming ever more common, there is a need for implementing experimental designs that are randomized, balanced, robust, and easily applicable to multiple treatment groups. To help address this problem, we revisit the original FSM under the potential outcome framework for causal inference and provide its first readily available software implementation. In this paper, we provide an introduction to the FSM and illustrate how to use it in the new \texttt{FSM} package for R. In Section \ref{sec_theory}, we delineate the framework and notation, explain the motivating idea behind the FSM, and describe its main components. In this section we also discuss how to perform diagnostics and conduct inference after allocation with the FSM. In Section \ref{sec_stepbystep}, we illustrate how to use the \texttt{FSM} package with step-by-step code examples. Finally, in Section \ref{sec_discussion}, we comment on further software developments for the FSM. A complete R script to replicate the main analyses in this paper and a detailed description of the main functions in the \texttt{FSM} package can be found in the Appendix. We ask the reader to refer to the package documentation for detailed descriptions of other, more specialized functions in the package.
\section{Designing an experiment using the FSM} \label{sec_theory}
\subsection{Framework} \label{sec_framework}
Consider a sample of $N$ units indexed by $i = 1, 2, ...,N$. Our goal is to randomly assign the units into $g$ treatment groups, $T_1$, $T_2$, ..., $T_g$, of sizes $n_1, n_2,..., n_g$, respectively, where $n_1 + n_2 + ... + n_g = N$. Let $\bm{X}_i$ be the $k \times 1$ vector of baseline characteristics or covariates of unit $i \in \{1,2,...,N\}$. Write $\bm{Z} = (Z_1, Z_2, ..., Z_N)^\top$ as the $N \times 1$ vector of treatment group labels, with $Z_i = j$ if unit $i$ is assigned to treatment group $j \in \{1,2,...,g\}$. We base our discussion on the potential outcome framework for causal inference \citep{neyman1923application, rubin1974estimating, imbens2015causal}. Under the Stable Unit Treatment Value Assumption (SUTVA; \citealt{rubin1980randomization}), we write the potential outcome of unit $i$ under treatment level $j$ as $Y_i(j)$. The unit level causal effect of treatment $j'$ relative to treatment $j''$ for unit $i$ is given by $Y_i(j')- Y_i(j'')$. Throughout this paper, we use as a running example the National Supported Work (NSW) experimental data set by Lalonde (\citeyear{lalonde1986evaluating}; see also \citealt{dehejia1999causal}), or simply, the Lalonde data set. The experiment evaluates the impact of the NSW program on earnings.
The complete study consists of $185$ treated individuals or units (enrolled in the training program) and $260$ control units (not enrolled in the training program).\footnote{This data set can be downloaded from \url{https://users.nber.org/~rdehejia/nswdata.html}} There are eight baseline covariates, namely, \texttt{Age} (age measured in years), \texttt{Education} (years of education), \texttt{Black} (indicator for Black race), \texttt{Hispanic} (indicator for Hispanic race), \texttt{Married} (indicator for married status), \texttt{Nodegree} (indicator for high school dropout), \texttt{Re74} (earnings in 1974), and \texttt{Re75} (earnings in 1975). For ease of exposition, in the remainder of this section we consider a reduced version of the data set consisting of a randomly chosen subset of $N=12$ units and $k=1$ covariate, \texttt{Age}, as described in Table \ref{data_lalonde_short}.
\begin{table}[!ht] \centering \scalebox{0.85}{ \begin{tabular}{cc} \toprule Index & Age\\ \hline 1 & 19\\ 2 & 20\\ 3 & 20\\ 4 & 20\\ 5 & 23\\ 6 & 24\\ 7 & 24\\ 8 & 25\\ 9 & 25\\ 10 & 28\\ 11 & 31\\ 12 & 41\\ \hline Mean & 25\\ \bottomrule \end{tabular} } \caption{\footnotesize A sample of 12 units selected from the Lalonde data set. In this abridged version of the data set, we consider a single covariate, \texttt{Age}.} \label{data_lalonde_short} \end{table}
\subsection{Motivating the FSM: wisdom from schoolyard games} \label{sec_games}
The FSM can be motivated by the familiar situation of building schoolyard teams from a given pool of players. In a game with two teams, the team captains often select the players for their respective teams by taking turns. After every stage of selection, the number of players available for selection is reduced by one. Typically, when their turn comes, a choosing captain selects the best player still available to improve their team. This process of selection continues until all players are chosen. In the FSM, the treatment groups play the role of the team captains and the units play the role of the players. The FSM provides a method of assignment where the treatment groups choose the units, instead of the units randomly choosing a treatment group. In the schoolyard game example, suppose that both captains are equally knowledgeable of what constitutes a strong team and of the players' abilities. Then, if the captains take turns in a fair order, the resulting two teams are expected to be of similar strength. Moreover, if the captains are well-informed about the players' abilities, the two individual teams are likely to be reasonably strong. Interestingly, even if the captains are misinformed about players' abilities, they are equally misinformed and hence the two teams are still expected to be equally strong. Intuitively, this suggests that with a proper specification of the order and the criteria of selection, the FSM can yield groups that are well balanced, along with a high degree of efficiency and robustness. Finally, this idea of selection of units by treatments immediately applies to the case with more than two treatment groups.
\subsection{Components of the FSM} \label{sec_components}
The FSM has two main components, the selection order matrix (SOM) and the selection function, which we describe as follows.
\subsubsection{Selection order matrix (SOM)} \label{sec_som}
The selection order matrix (SOM) is a matrix specifying the order in which the treatment groups select the units.
The matrix usually has two columns, the first column indicating the stage of selection and the second column indicating the label of the treatment group that gets to choose at that stage. In the case of a stratified design, the SOM may include a third column of stratum labels. A good SOM should be fair so that the order of choices made by one treatment group does not unjustly deprive its opponents of important units. In addition, an SOM should be random, since randomization helps protect against imbalances due to unobserved covariates and at the same time provides a basis for inference. Given the values of $N$, $g$, and the group sizes $n_1,...,n_g$, one can use different methods to generate an SOM (see the Appendix for a list of options available in the \texttt{FSM} package). For two treatment groups, we recommend using the Sequentially Controlled Markovian Random Sampling (SCOMARS) algorithm \citep{morris1979finite, morris1983sequentially} to generate an SOM. In this setting, SCOMARS provides a vector of conditional probabilities whose $j$th element is equal to the probability that one of the treatment groups, say, $T_2$, gets to choose at the $j$th stage, conditional on the number of choices made by $T_2$ up to the $(j-1)$th stage ($j \in \{1,2,...,N\}$). SCOMARS ensures that, if $T_2$ has gotten more (respectively, less) than its fair share of choices up to a certain stage, it has a low (respectively, high) probability of making a choice in the next stage. The \texttt{FSM} package has a built-in function for SCOMARS (see Section \ref{sec_choosesom} and the Appendix for details). In our example, consider the problem of allocating the 12 units in Table \ref{data_lalonde_short} into two groups of equal size. One instance of an SOM generated using SCOMARS for this assignment problem is given in Table \ref{table_som1}.
\begin{table}[!ht] \centering \scalebox{0.85}{ \begin{tabular}{ccc} \toprule Stage of selection & Prob($T_2$ selects) & Treatment \\ \hline 1 & 0.5 & 2 \\ 2 & 0.0 & 1 \\ \hdashline 3 & 0.5 & 1 \\ 4 & 1.0 & 2 \\ \hdashline 5 & 0.5 & 1 \\ 6 & 1.0 & 2 \\ \hdashline 7 & 0.5 & 1 \\ 8 & 1.0 & 2 \\ \hdashline 9 & 0.5 & 1 \\ 10 & 1.0 & 2 \\ \hdashline 11 & 0.5 & 2 \\ 12 & 0.0 & 1 \\ \bottomrule \end{tabular} } \caption{\footnotesize An SOM generated using SCOMARS with $N = 12$ units and $g= 2$ groups of equal sizes.} \label{table_som1} \end{table}
In this example, the vector of conditional probabilities is given in column 2 of Table \ref{table_som1}. $T_2$, which has a 0.5 probability of being the chooser in the first stage of selection, ends up being the chooser. Given that $T_2$ gets to select in the first stage, SCOMARS forces $T_1$ to select in the second stage. The same process continues for the following pairs of stages, independently of the previous pairs. Therefore, in the case of two treatment groups with equal sizes, SCOMARS is equivalent to randomly permuting the treatment labels $(1,2)$ independently across successive pairs of stages, yielding a random SOM. Since no treatment group is preferred over the other, an SOM generated in this way is fair. Although SCOMARS is designed only for two groups, for multiple treatment groups of equal sizes we can generate a fair and random SOM using a similar permutation-based algorithm. More formally, for assigning $N = cg$ units to $g$ treatment groups (where $c$ is some positive integer), an SOM can be created by successively generating $c$ independent random permutations of the $g$-tuple $(1,2,...,g)$.
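To make this permutation-based construction concrete, the following minimal sketch in base \texttt{R} (illustrative only; it is not the package's \texttt{som} function, which is described in Section \ref{sec_choosesom} and the Appendix) builds a fair and random SOM for $g$ equal-sized groups by stacking $c = N/g$ independent random permutations of $(1,2,...,g)$.
\singlespacing
\begin{verbatim}
# Sketch only: a fair, random SOM for g equal-sized groups, obtained by
# stacking c = N/g independent random permutations of 1:g.
som_sketch <- function(N, g) {
  stopifnot(N %% g == 0)
  treat <- as.vector(replicate(N / g, sample(1:g)))
  data.frame(Stage = 1:N, Treat = treat)
}
set.seed(1)
som_sketch(N = 12, g = 2)   # two equal groups of 6 units, as in our example
\end{verbatim}
\doublespacing
For two equal-sized groups this matches the behaviour of SCOMARS described above; in practice the \texttt{som} function should be preferred, since it also handles unequal group sizes and reports the SCOMARS conditional selection probabilities.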
In the case of multiple groups with unequal sizes, one can use several strategies to generate an SOM. One such possibility is to first combine the groups appropriately into two groups and then apply SCOMARS repeatedly to split the combined groups into the desired number of groups. For example, with $n_1 = 2$, $n_2 = 4$, $n_3 = 6$, we can first apply SCOMARS to split the 12 units into the groups $(T_1, T_2)$ and $T_3$ of sizes $n^*_1 = 2+4 = 6$ and $n_3 = 6$ respectively. Subsequently, we can use SCOMARS again to split the combined group $(T_1, T_2)$ into two groups of sizes $n_1 = 2$ and $n_2 = 4$.
\subsubsection{Selection function} \label{sec_selfunc}
The second component of the FSM is called the selection function, which provides the common criterion by which the treatment groups select units. In the schoolyard games example, a team's selection function determines who is the best remaining player at every stage of selection, conditional on the choices it already has made. In principle, one can use any criterion as the selection function (see the Appendix for a list of some available options in the \texttt{FSM} package). A robust choice is the \textit{D-optimal} selection function. This selection function implicitly considers a linear model $Y_i(j) = \bm{\beta}^\top_j (1,\bm{X}^\top_i)^\top + \epsilon_{ij}$ of the potential outcome under treatment level $j \in \{1,2,...,g\}$, where $\mathbb{E}[\epsilon_{ij}|\bm{X}_i] = 0$.\footnote{ One can also consider a linear model without the intercept term.} A D-optimal design minimizes the generalized variance of the ordinary least squares (OLS)\footnote{Or weighted least squares.} estimator of $\bm{\beta}_j$ based on the fitted model of $Y_i(j)$ in $T_j$, which is equivalent to maximizing $\text{det}(\underline{\bm{X}}^\top_j \underline{\bm{X}}_j)$, where $\underline{\bm{X}}_j$ is the design matrix based on all the units selected for $T_j$, $j \in \{1,2,...,g\}$. The D-optimal selection function targets this aspect of a D-optimal design, but in a sequential manner. To formalize, without loss of generality, suppose $T_1$ has the turn to choose at stage $r$ ($r\in \{1,2,...,N\}$). Let $\underline{\bm{X}}_{r-1,1}$ be the corresponding design matrix in $T_1$ based on the units $T_1$ has selected up to stage $r-1$. Using the D-optimal selection function, $T_1$ chooses the unit from the remaining pool of $N-(r-1)$ units that maximizes the determinant of $\underline{\bm{X}}_{r,1}^\top \underline{\bm{X}}_{r,1}$, where $\underline{\bm{X}}_{r,1}$ denotes the resulting design matrix for $T_1$ after stage $r$. Ties are resolved by randomly selecting one of the optimal units. In the special case of a single covariate, denoting $a$ as the covariate of a candidate unit and $\bar{X}_{r-1,1}$ as the mean of the covariate in $T_1$ up to the $(r-1)$th stage ($r \geq 2$), selection using the D-optimal selection function boils down to choosing the unit that maximizes $|a - \bar{X}_{r-1,1}|$ among all the units available for selection. When $r=1$, the treatment group attempts to choose the unit that maximizes $|a - \bar{X}_{\text{full}}|$, where $\bar{X}_{\text{full}}$ is the mean of the covariate in the full sample. Assuming a linear model of each potential outcome on \texttt{Age}, we illustrate this selection process using the data in Table \ref{data_lalonde_short} and the SOM in Table \ref{table_som1}.
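Before walking through the resulting assignment, the following minimal sketch implements the single-covariate rule just described (it is illustrative only and not the package's \texttt{fsm} function; the treatment sequence is the one shown in Table \ref{table_som1}, and ties are broken at random, so individual selections may differ across runs).
\singlespacing
\begin{verbatim}
# Sketch only: greedy single-covariate D-optimal selection. At each stage the
# choosing group takes the remaining unit farthest from its current covariate
# mean (or from the full-sample mean if it has not selected any unit yet).
dopt_single <- function(x, som_treat, g = 2) {
  remaining <- seq_along(x); chosen <- vector("list", g); picks <- integer(0)
  for (j in som_treat) {
    ref  <- if (length(chosen[[j]]) == 0) mean(x) else mean(x[chosen[[j]]])
    dist <- abs(x[remaining] - ref)
    cand <- remaining[dist == max(dist)]
    pick <- if (length(cand) > 1) sample(cand, 1) else cand  # random tie-break
    chosen[[j]] <- c(chosen[[j]], pick)
    picks <- c(picks, pick)
    remaining <- setdiff(remaining, pick)
  }
  data.frame(Stage = seq_along(x), Treat = som_treat,
             Index = picks, Age = x[picks])
}
age <- c(19, 20, 20, 20, 23, 24, 24, 25, 25, 28, 31, 41)
set.seed(3)
dopt_single(age, som_treat = c(2, 1, 1, 2, 1, 2, 1, 2, 1, 2, 2, 1))
\end{verbatim}
\doublespacing
Up to the random tie-breaks, this follows the same selections discussed next; the \texttt{fsm} function handles several covariates, the intercept term, and the other selection functions listed in the Appendix.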
In Table \ref{assign1}, we augment the SOM in Table \ref{table_som1} with the columns of indices of the chosen units and their corresponding ages.\footnote{We remove the column of selection probabilities.}
\begin{table}[!ht] \centering \scalebox{0.85}{ \begin{tabular}{cccc} \toprule Stage of selection & Treatment & Index & Age \\ \hline 1 & 2 & 12 & 41 \\ 2 & 1 & 11 & 31 \\ \hdashline 3 & 1 & 1 & 19 \\ 4 & 2 & 4 & 20 \\ \hdashline 5 & 1 & 3 & 20 \\ 6 & 2 & 2 & 20 \\ \hdashline 7 & 1 & 10 & 28 \\ 8 & 2 & 5 & 23 \\ \hdashline 9 & 1 & 9 & 25 \\ 10 & 2 & 6 & 24 \\ \hdashline 11 & 2 & 7 & 24 \\ 12 & 1 & 8 & 25 \\ \bottomrule \end{tabular} } \caption{\footnotesize Augmented SOM with columns corresponding to the indices of the selected units and their respective ages.} \label{assign1} \end{table}
The mean of \texttt{Age} in the full sample is 25 years. Since unit 12 is 41 years old, the farthest from the full sample mean in magnitude, it gets selected by $T_2$ in the first stage. In the second stage, unit 11 is 31 years old, the farthest in magnitude from 25 among the pool of 11 remaining units. Consequently, $T_1$ selects unit 11. In the third stage, $T_1$ tries to choose the unit whose age is farthest from $31$ and ends up choosing unit 1 with age 19. Similarly, $T_2$ tries to choose the unit whose age is farthest from $41$ and ends up choosing unit 4 with age 20, resolving a tie among units 2, 3, and 4 randomly. This process continues until all the units are selected. The combination of an SOM and a choice of selection function allows us to generate a randomized assignment of units into the treatment groups. In this example, at the end of the selection process $T_1$ consists of 6 units with ages 31, 19, 20, 28, 25, 25, and $T_2$ consists of 6 units with ages 41, 20, 20, 23, 24, 24 (ages within each group are listed by the selection order).
\subsection{Design assessment} \label{sec_designassess1}
In the design assessment stage, we evaluate the statistical properties of the FSM. We consider measures of covariate balance and relative efficiency of the design. In causal inference, covariate balance is an intuitive objective to pursue in order to isolate the effect of the treatment from the confounding effects of imbalances in baseline covariates. More formally, covariate balance controls bias, but it also impacts variance. In most randomized designs, the act of randomization ensures that treatment groups are balanced \textit{on average}, i.e., over repeated realizations of the assignment mechanism. However, any given realization of the assignment mechanism can produce substantial imbalances. Such imbalances, in turn, can directly impact the variance of the average treatment effect estimator. For instance, for $g=2$, consider fitting a linear outcome model with constant treatment effect $\tau$ (of treatment 1 versus treatment 2) and uncorrelated errors with constant variance $\sigma^2$. The \textit{Optimal Covariance Design (OPCODE)} theorem \citep{morris1993the} states that the model-based variance of the OLS estimator of $\tau$ is $\frac{\sigma^2}{ N s^2_{T_1} (1-R^2)}$, where $s^2_{T_1} = \frac{n_1}{N} (1 - \frac{n_1}{N})$ and $R^2$ is the square of the multiple correlation coefficient of the indicator of $T_1$ with the covariates; see also \cite{greevy2004optimal}. The design that minimizes this variance (i.e., the OPCODE solution) satisfies $R^2 = 0$ (if feasible), which is equivalent to exact mean-balance of the covariates between the two treatment groups.
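As a quick numerical check of this variance expression, the sketch below (simulated data; all object names are hypothetical, and the snippet is not part of the \texttt{FSM} package) verifies that the model-based OLS variance of $\hat{\tau}$ equals $\hat{\sigma}^2/\{N s^2_{T_1}(1-R^2)\}$, i.e., the OPCODE expression with $\sigma^2$ replaced by its OLS estimate.
\singlespacing
\begin{verbatim}
# Sketch only: check the OPCODE variance identity on simulated data.
set.seed(2)
N <- 200; n1 <- 100
X <- rnorm(N)
D <- as.numeric(seq_len(N) %in% sample(N, n1))   # indicator of T_1
Y <- 1 + 3 * D + 1.5 * X + rnorm(N, sd = 2)
fit <- lm(Y ~ D + X)
ols_var <- vcov(fit)["D", "D"]                   # model-based Var of tau-hat
R2      <- summary(lm(D ~ X))$r.squared          # R^2 of the T_1 indicator on X
s2_T1   <- (n1 / N) * (1 - n1 / N)
opcode_var <- summary(fit)$sigma^2 / (N * s2_T1 * (1 - R2))
all.equal(unname(ols_var), opcode_var)           # TRUE
\end{verbatim}
\doublespacing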
However, under a misspecified outcome model, the previous OPCODE solution, even if feasible, may produce estimators with high variances. Hence covariate balance is a measure directly tied to the precision of the OLS regression-based effect estimator. Thus, as part of the design assessment, balance checks help to evaluate the efficiency of a design under correct specification of the model as well as the robustness of a design against model misspecifications. In addition, given a collection of designs, one can compare the extent of information provided by each design using a measure of efficiency relative to a base design. Building on ideas from sample surveys \citep{kish1965survey}, the notion of relative efficiency can be translated into a notion of effective sample size (ESS). In the spirit of model- and randomization-based inference (described later in Section \ref{sec_inf1}), we propose notions of the model- and randomization-based ESS for a general collection of designs. Here the ESS of a design $D$ in a collection of designs is the number $n_{\text{eff}}$ obtained by dividing the full-sample size by the ratio of the variance of the effect estimator under design $D$ to that under the design with the highest precision in the collection. By definition, for all designs in the collection, $n_{\text{eff}} \leq N$, and equality holds for the design having the highest precision in the collection. We recommend computing the ESS as it provides a more palpable measure of the total amount of information provided by a design, as compared to other standard measures of relative efficiency. As an example, we now evaluate the covariate balance and the ESS of the assignment obtained in Table \ref{assign1}. Table \ref{balance_small} includes the first two moments of \texttt{Age} for each treatment group and the corresponding means in the full sample. Table \ref{balance_small} indicates that the two treatment groups are reasonably balanced with respect to the means of \texttt{Age} and $\texttt{Age}^2$.
\begin{table}[!ht] \scalebox{0.85}{ \begin{tabular}{p{3cm}cc} \toprule \multirow{2}{5cm}{Groups} & \multicolumn{2}{c}{Covariate transformations}\\ \cline{2-3} & Age & $\text{Age}^2$\\ \toprule Treatment 1 & 24.67 & 626.00\\ Treatment 2 & 25.33 & 693.67\\ \hline Full sample & 25.00 & 659.83\\ \bottomrule \end{tabular} } \caption{\footnotesize Means of \texttt{Age} and $\texttt{Age}^2$ for the assignment in Table \ref{assign1}, plus the corresponding values in the full sample.} \label{balance_small} \end{table}
To provide some perspective, we compare the overall degree of balance between the FSM and a completely randomized design (CRD). As a measure of imbalance, we compute the target absolute standardized mean difference (TASMD, \citealt{chattopadhyay2020balancing}) of \texttt{Age} and $\text{\texttt{Age}}^2$ relative to their corresponding full-sample averages. In this context, the TASMD of a variable $U$ in $T_j$ relative to the full sample is given by $\frac{|\bar{U}_j - \bar{U}|}{s_U}$. Here $\bar{U}_j$ and $\bar{U}$ are the means of $U$ in $T_j$ and the full sample, respectively, and $s_U$ is the standard deviation of $U$ in the full sample. A smaller value of the TASMD indicates that $\bar{U}_j$ is closer to $\bar{U}$, implying better balance on $U$ in $T_j$ relative to the full sample.
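For concreteness, the TASMD just defined can be computed in one line; the sketch below (illustrative only, not the package's \texttt{tasmd\_rand} function, and using the full-sample standard deviation as $s_U$) evaluates it for the assignment obtained in Table \ref{assign1}.
\singlespacing
\begin{verbatim}
# Sketch only: TASMD of a variable u in one group, relative to the full sample.
tasmd <- function(u, treat, group) abs(mean(u[treat == group]) - mean(u)) / sd(u)

age   <- c(19, 20, 20, 20, 23, 24, 24, 25, 25, 28, 31, 41)
treat <- c(1, 2, 1, 2, 2, 2, 2, 1, 1, 1, 1, 2)   # group labels by unit index
tasmd(age,   treat, group = 1)                   # TASMD of Age in T_1
tasmd(age^2, treat, group = 1)                   # TASMD of Age^2 in T_1
\end{verbatim}
\doublespacing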
In general, to measure imbalance on $U$ in $T_j$ relative to a target profile $U^*$ of interest, one can replace $\bar{U}$ by $U^*$ and $s_U$ by a suitable measure of standard deviation of $U$ (see \citealt{chattopadhyay2020balancing}). For unweighted samples with two treatment groups, the TASMD of a covariate in a treatment group (relative to the full sample) is a scalar multiple of the more commonly used absolute standardized mean difference (ASMD) of that covariate. In fact, for two groups of equal sizes, the ASMD of a covariate can be shown to be no less than twice its TASMD in each of the two groups, relative to the full sample. Despite their equivalence in this context, for diagnostics we prefer using the TASMD to the ASMD for two main reasons. First, by construction, the TASMD not only quantifies the similarity (or lack thereof) in the vector of covariate-means of the treatment groups with respect to each other, but with respect to any target profile of interest. For instance, the target profile can be the vector of covariate-means of an arbitrary target population, or the vector of covariates of a target individual. Thus, the TASMD allows judging the representativeness of each treatment group relative to the target, and in turn, assessing the transportability of the results of an experiment to the target population or individual of interest. Second, for weighted samples, the TASMD is essential for balance diagnostics as it directly relates to the magnitude of the bias of the H\'{a}jek estimator of treatment effect \citep{chattopadhyay2020balancing}. Section \ref{sec_designassess2} displays a comparison of the TASMD for a realized assignment under the FSM to that under CRD. In this section, we focus on comparing the randomization distributions of the TASMD under CRD and the FSM. In Table \ref{crdversusfsm}, we show the mean and standard deviation of the TASMDs of \texttt{Age} and $\text{\texttt{Age}}^2$ in $T_1$, across 1000 randomizations of CRD and the FSM. Since the group sizes are equal, by symmetry, similar results hold for $T_2$.
\begin{table}[!ht] \scalebox{0.85}{ \begin{tabular}{p{3cm}cc} \toprule \multirow{2}{5cm}{Covariate \\ transformations} & \multicolumn{2}{c}{Design}\\ \cline{2-3} & CRD & FSM\\ \toprule Age & 0.24 (0.15) & 0.07 (0.01)\\ $\text{Age}^2$ & 0.25 (0.13) & 0.11 (0.01)\\ \bottomrule \end{tabular} } \caption{\footnotesize Mean and standard deviation (in parentheses) of the TASMDs of \texttt{Age} and $\texttt{Age}^2$ across 1000 randomizations of CRD and the FSM.} \label{crdversusfsm} \end{table}
Table \ref{crdversusfsm} exhibits better balance on the first two moments of \texttt{Age} under the FSM as compared to CRD, since the corresponding means of the TASMDs are uniformly smaller under the FSM, with substantially smaller standard deviations. We note that for this example, the FSM implicitly assumes potential outcome models that are linear in \texttt{Age}, thus leading to reasonable balance on the mean of \texttt{Age} by design. However, despite not directly accounting for $\text{\texttt{Age}}^2$, the FSM exhibits substantially better balance on the mean of $\text{\texttt{Age}}^2$ than CRD. For symmetrically distributed covariates, the D-optimal selection function tends to balance all even functions of the covariates. By balancing transformations of the covariates that are not included in the assumed model, the FSM achieves reasonable robustness against model misspecification.
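The CRD column of Table \ref{crdversusfsm} can be approximated directly; the sketch below (illustrative only; the exact values depend on the seed and on the standard-deviation convention used in the TASMD) draws 1000 complete randomizations and summarizes the TASMD of \texttt{Age} in $T_1$. Repeating the exercise with assignments produced by the \texttt{fsm} function gives the FSM column.
\singlespacing
\begin{verbatim}
# Sketch only: randomization distribution of the TASMD of Age in T_1 under CRD.
set.seed(5)
age <- c(19, 20, 20, 20, 23, 24, 24, 25, 25, 28, 31, 41)
tasmd_crd <- replicate(1000, {
  t1 <- sample(12, 6)                            # complete randomization, n_1 = 6
  abs(mean(age[t1]) - mean(age)) / sd(age)
})
round(c(mean = mean(tasmd_crd), sd = sd(tasmd_crd)), 2)
\end{verbatim}
\doublespacing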
We now assess the efficiency of the given realization of the FSM relative to CRD by comparing the effective sample sizes (ESS). More specifically, given a simulated set of potential outcomes $\{ Y_i(1),Y_i(2): i = 1,2,...,N \}$, we generate $M$ independent assignments of CRD and compute the model-based ESS of the realized assignment of the FSM and the $m$th assignment of CRD ($m=1,2,...,M$). This yields a distribution of the model-based ESS for the realized assignment of the FSM relative to the randomization distribution of the corresponding model-based ESS of CRD. In this example we simulate the set of potential outcomes using two data generating processes, namely (a) $Y_i(1) = 30 - \bm{X}_i + \epsilon_i$, $Y_i(2) = Y_i(1)$ and (b) $Y_i(1) = -35.98 - \bm{X}_i + 0.1 \bm{X}^2_i + \epsilon_i$, $Y_i(2) = Y_i(1)$, where the $\epsilon_i$ are independent Normal random variables with mean $0$ and variance $16$. Model (a) is linear in \texttt{Age}, whereas model (b) is quadratic in \texttt{Age}. The coefficients in the two models are chosen such that the marginal means of $Y_i(1)$ are approximately the same under both models. The resulting boxplots of the distributions of the ESS for the FSM relative to $M=1000$ independent realizations of CRD are shown in Figure \ref{fig:ess boxplot}.
\begin{figure}[!ht] \centering \caption{\footnotesize Boxplots of the distribution of the model-based ESS for the given realization of the FSM with respect to 1000 independent realizations of CRD, under model (a), linear in \texttt{Age} (left panel\label{fig:linear age}), and model (b), quadratic in \texttt{Age} (right panel\label{fig:quadratic age}).} \label{fig:ess boxplot} \end{figure}
Under both scenarios, the given realization of the FSM outperforms CRD in terms of an ESS distribution that is substantially more concentrated towards the full-sample size of 12. In particular, Figure \ref{fig:linear age} shows that the ESS of the FSM is equal to 12 for every realization of CRD when the potential outcomes are generated under a linear model in \texttt{Age}. This is not surprising since the assumed linear potential outcome model coupled with the D-optimal selection function ensures that the model-based variance estimates are reasonably small (see Section \ref{sec_selfunc}). However, even when the true model is quadratic in \texttt{Age}, the FSM almost always achieves the highest possible ESS (Figure \ref{fig:quadratic age}), suggesting once again that the FSM provides robustness against model misspecification. We see that in both cases, the FSM effectively provides 5 or 6 times the information provided by a \textit{typical} CRD.
\subsection{Inference after the FSM} \label{sec_inf1}
In randomized experiments, there are two common modes of inference: randomization- and model-based inference. Although randomization-based inference was Fisher's original proposal in the 1920s \citep{fisher1925statistical, fisher1935design, fisher1992statistical}, model-based inference has been the default mode of inference in the experimental design literature throughout much of the 20th century \citep{montgomery2001design}. However, over the last few decades, randomization-based inference has gained increasing attention under the potential outcome framework for causal inference (e.g., \citealt{rosenbaum2002observational, samii2012equivalencies, imbens2015causal}).
Randomization-based inference explicitly takes into account the assignment mechanism that allocated the units into the treatment groups, often without resorting to large sample approximations. Here the sets of potential outcomes and covariates are viewed as fixed and the only source of randomness is the random assignment of treatments (see Chapter 2 of \citealt{rosenbaum2002observational} and Chapters 5--7 of \citealt{imbens2015causal} for overviews). In model-based inference, the sample is assumed to be randomly drawn from a large super-population, rendering the set of potential outcomes and covariates random. Here, randomness stems from both the random sampling of units and the random assignment of treatments. The methodology and implementation of the FSM does not explicitly depend on which mode of inference we follow. However, the mode of inference may impact the confidence intervals obtained in the outcome analysis stage. The \texttt{FSM} package includes functions to perform both randomization- and model-based inference for average treatment effects. In this paper, we focus on model-based inference. In model-based inference, a common causal estimand of interest is the population average treatment effect of treatment $j'$ relative to $j''$, defined as $\text{PATE}_{j',j''} = \mathbb{E}[Y_i(j') - Y_i(j'')]$. Typically, inference for the $\text{PATE}_{j',j''}$ is conducted conditional on the covariates and the treatment assignment by fitting a suitable regression model of the outcome on the treatment indicator and the covariates. For instance, for $g=2$, consider a linear potential outcome model of the form $Y_i(j) = \bm{\beta}^\top_{j}\bm{B}(\bm{X}_i) + \epsilon_{ij}$ in treatment group $j \in \{1,2\}$, where $\bm{B}(\bm{X}_i) = \{ B_1(\bm{X}_i),...,B_b(\bm{X}_i) \}^\top$ is a vector of $b$ basis functions. This amounts to fitting a linear regression model of the observed outcome $Y^{obs}_i = \mathbbm{1}(Z_i=1)Y_i(1) + \mathbbm{1}(Z_i=2)Y_i(2)$ on $\bm{B}(\bm{X}_i)$, $\mathbbm{1}(Z_i=1)$, and a full set of first-order interactions between $\bm{B}(\bm{X}_i)$ and $\mathbbm{1}(Z_i=1)$. The model-based estimator of $\text{PATE}_{1,2}$ is given by $\widehat{\text{PATE}}_{1,2} = \hat{\mathbb{E}}[Y_i(1)] - \hat{\mathbb{E}}[Y_i(2)] = \hat{\bm{\beta}}^\top_{1}\overline{\bm{B}(\bm{X})} - \hat{\bm{\beta}}^\top_{2}\overline{\bm{B}(\bm{X})}$, where $\overline{\bm{B}(\bm{X})} = \frac{1}{N}\sum_{i=1}^{N}\bm{B}(\bm{X}_i)$ and $\hat{\bm{\beta}}_j$ is the OLS estimator of $\bm{\beta}_j$. The corresponding model-based variance of the estimator of $\text{PATE}_{1,2}$ is $Var(\widehat{\text{PATE}}_{1,2}) = \overline{\bm{B}(\bm{X})}^\top \{Var(\hat{\bm{\beta}}_{1}) + Var(\hat{\bm{\beta}}_{2}) \}\overline{\bm{B}(\bm{X})} $. If the posited potential outcome models are true, then this conditional variance expression is valid irrespective of the underlying assignment mechanism. Finally, given the point estimate and its variance, we can obtain a Wald-type 95\% confidence interval for $\text{PATE}_{j',j''}$ by a Normal approximation. We illustrate this model-based approach and provide the associated \texttt{R} code in Section \ref{sec_inf2}.
\section[A step-by-step guide to the FSM in R]{A step-by-step guide to the FSM in \texttt{R}} \label{sec_stepbystep}
In this section we illustrate how to use the FSM in \texttt{R}.\footnote{All computations are done under \texttt{R} version 4.0.3.} For this we use the entire Lalonde data set.
In Section \ref{sec_example}, we give a brief description of the data and the assignment problem. Subsequently, in Section \ref{sec_buildup} we demonstrate how to run the FSM by choosing a selection order matrix (SOM) and a selection function. Finally, in Sections \ref{sec_designassess2} and \ref{sec_inf2}, we show how to assess the assignment and conduct model-based inference after allocation with the FSM.
\subsection[Loading the FSM package]{Loading the \texttt{FSM} package} \label{sec_example}
In this section, we load the \texttt{FSM} package and consider the complete Lalonde data set with $N = 445$ units, which we are to assign to $g=2$ groups of essentially equal size, $n_1 = 222$ and $n_2 = 223$. The data set is included in the \texttt{FSM} package. In the following code, we display the first 6 rows of the data set.
\singlespacing
\begin{verbatim}
R> # Load the package.
R> library(FSM)
R> # Display the Lalonde dataset.
R> head(Lalonde)
\end{verbatim}
\begin{verbatim}
  Index Age Education Black Hispanic Married Nodegree Re74 Re75
1     1  17         7     1        0       0        1    0    0
2     2  17        10     1        0       0        1    0    0
3     3  17        10     1        0       0        1    0    0
4     4  17         8     1        0       0        1    0    0
5     5  17         8     1        0       0        1    0    0
6     6  17         9     1        0       0        1    0    0
\end{verbatim}
\doublespacing
In addition to the original eight covariates, we include two indicator variables \texttt{E74} and \texttt{E75} indicating whether \texttt{Re74} and \texttt{Re75} are positive, respectively.
\singlespacing
\begin{verbatim}
R> # Include indicators for Re74 and Re75.
R> df_sample = data.frame(Lalonde, E74 = ifelse(Lalonde$Re74, 1, 0),
+                         E75 = ifelse(Lalonde$Re75, 1, 0))
\end{verbatim}
\doublespacing
Here \texttt{df\_sample} is the data frame corresponding to the Lalonde data set with a full set of $k=10$ covariates. The full-sample averages of these ten covariates are shown below.
\singlespacing
\begin{verbatim}
R> round(colMeans(df_sample), 2)
\end{verbatim}
\begin{verbatim}
  Index     Age Education   Black Hispanic Married Nodegree
 223.00   25.37     10.20    0.83     0.09    0.17     0.78
   Re74    Re75       E74     E75
2102.27 1377.14      0.27    0.35
\end{verbatim}
\doublespacing
\subsection{Running the FSM} \label{sec_buildup}
\subsubsection{Choosing an SOM} \label{sec_choosesom}
For the given assignment problem, we use the \texttt{som} function to create a Selection Order Matrix. Here, randomness in the SOM is governed by the SCOMARS algorithm with marginal probabilities of $T_2$ selecting at each stage equal to $\frac{n_2}{N} \approx 0.5$. The following \texttt{R} code is used to generate a particular instance of the SOM.
\singlespacing
\begin{verbatim}
R> set.seed(7)
R> # Specify the full sample size.
R> N = nrow(df_sample)
R> # Specify size of T_1.
R> n1 = 222
R> # Specify size of T_2.
R> n2 = 223
R> # Generate a Selection Order Matrix.
R> som_obs = som(n_treat = 2, treat_sizes = c(n1, n2), method = 'SCOMARS',
+               marginal_treat = rep((n2/N), N))
\end{verbatim}
\doublespacing
Here \texttt{n\_treat} indicates the number of treatment groups $g$ and \texttt{treat\_sizes} indicates the vector of group sizes, where the $j$th element corresponds to $n_j$ ($j \in \{1,2,...,g\}$). Since the SCOMARS algorithm is used to generate the SOM, we set \texttt{method = `SCOMARS'}. Note that SCOMARS is only applicable when $g=2$. For $g>2$, one can use other methods which may leverage the symmetry of the problem (see Section \ref{sec_som} and the Appendix for more details). For applying SCOMARS one needs to specify the marginal probability that $T_2$ gets to choose at the $j$th stage for all $j \in \{1,2,...,N\}$.
\texttt{marginal\_treat} incorporates this vector of marginal probabilities. We display the first 10 rows of the realization of the SOM below.
\singlespacing
\begin{verbatim}
R> # Display first 10 rows of the SOM.
R> som_obs[1:10,]
\end{verbatim}
\begin{verbatim}
    Probs Treat
1  0.5011     1
2  1.0000     2
3  0.5023     2
4  0.0089     1
5  0.5034     2
6  0.0133     1
7  0.5045     1
8  1.0000     2
9  0.5057     2
10 0.0220     1
\end{verbatim}
\doublespacing
\subsubsection{Choosing a selection function and an assignment} \label{sec_chooseself}
Given an SOM, the final assignment of the units to treatment groups is obtained using the \texttt{fsm} function of the package. The \texttt{fsm} function uses the covariate data, an SOM, and a selection function to yield the final assignment of each unit to one of the treatment groups. The following code can be used to generate one such assignment.
\singlespacing
\begin{verbatim}
R> # Generate a treatment assignment given som_obs.
R> f = fsm(data_frame = df_sample, SOM = som_obs, s_function = 'Dopt',
+         eps = 0.0001, units_print = FALSE)
\end{verbatim}
\doublespacing
Unlike the \texttt{som} function, the \texttt{fsm} function requires access to the full data frame \texttt{df\_sample}, since it uses the covariate information from each row of the data frame to execute the selection of units. In particular, the columns of the data frame should include vectors of the transformations of the covariate vector (e.g., the original covariates, two-way interactions of the covariates, etc.) that we wish to include in a linear model of the potential outcomes. For this example, we consider a linear model of each potential outcome on the $k=10$ covariates. Also, at every stage of selection, the choosing treatment group uses the D-optimal selection function to select the optimal unit, which is encoded by \texttt{s\_function = `Dopt'}. See the Appendix for details on the other arguments of the \texttt{fsm} function. The output of the \texttt{fsm} function (stored in \texttt{f}) contains several objects as a list. For instance, we can generate a data frame that augments the selection order matrix with the index and the corresponding covariate vector of the selected unit at each stage using \texttt{f\$som\_appended}. The code is shown below.
\singlespacing
\begin{verbatim}
R> # Generate augmented SOM.
R> som_obs_augmented = f$som_appended
R> # Display first 10 rows and first 10 columns of the augmented SOM.
R> round(som_obs_augmented[1:10,1:10])
\end{verbatim}
\begin{verbatim}
   Treat Index Age Education Black Hispanic Married Nodegree  Re74  Re75
1      1   387  33        11     1        0       1        1 14661 25142
2      2   147  21         8     1        0       0        1 39571  6608
3      2   176  22        10     0        0       1        1 25721 23032
4      1   144  21         7     1        0       0        1 33800     0
5      2   445  55         3     1        0       0        1     0     0
6      1   327  28        11     0        1       1        1  3473     0
7      1   187  23         8     0        0       1        1     0  1713
8      2   301  27        13     0        0       1        0  9382   854
9      2   443  50        10     0        1       0        1     0     0
10     1   442  48         4     1        0       0        1     0     0
\end{verbatim}
\doublespacing
Alternatively, a data frame with an additional column of the treatment indicator can be obtained using \texttt{f\$data\_frame\_allocated}. In the following code, we extract the vector of treatment labels (ordered by the unit indices) from this augmented data frame.
\singlespacing
\begin{verbatim}
R> # Augment df_sample with the treatment label.
R> df_sample_aug = f$data_frame_allocated
R> # Create a vector of observed treatment labels.
R> Z_fsm_obs = df_sample_aug$Treat
\end{verbatim}
\doublespacing
\subsection{Assessing the design} \label{sec_designassess2}
To assess covariate balance under the given realization of the FSM, we draw Love plots \citep{love2004graphical} of the TASMDs of the covariates using the \texttt{love\_plot} function of the package. For comparison, we also consider a random realization of CRD, drawn using the \texttt{crd} function of the package.
\singlespacing
\begin{verbatim}
R> # Generate an assignment under CRD.
R> Z_crd = crd(df_sample, n_treat = 2, treat_sizes = c(n1, n2),
+             control = FALSE)$Treat
R> # Generate Love plot of the TASMDs in T_1.
R> love_plot(data_frame = df_sample, index_col = TRUE,
+           alloc1 = Z_fsm_obs, alloc2 = Z_crd, imbalance = 'TASMD',
+           treat_lab = 1, legend_text = c('FSM','CRD'), xupper = 0.15)
\end{verbatim}
\doublespacing
In the \texttt{love\_plot} function, \texttt{data\_frame} includes a data frame containing the transformations of the covariate vector whose TASMDs we want to compute (along with an optional column for the index of the units). \texttt{alloc1} and \texttt{alloc2} are inputs for the two vectors of treatment assignments which, in this case, are the realized assignment vectors under the FSM and CRD, respectively. \texttt{imbalance} indicates the measure of imbalance used, which can be either \texttt{`TASMD'} or \texttt{`ASMD'}, the latter corresponding to the absolute standardized mean differences (ASMD). The argument \texttt{treat\_lab = 1} indicates that the TASMD is computed in $T_1$. Since the two group sizes are roughly equal, the corresponding plot of the TASMDs in $T_2$ is similar. See the package documentation for more details on the other arguments. To check balance on other transformations of the covariate vector, we also generate Love plots of the TASMDs of the squares of \texttt{Age}, \texttt{Education}, \texttt{Re74} and \texttt{Re75}, and their pairwise products. See the Appendix for the associated code. The resulting two plots are shown in Figure \ref{fig:loveplots}. It is evident that the given realization of the FSM yields better overall mean-balance on the observed covariates and their transformations in $T_1$ (relative to the full sample) than that of CRD, as the corresponding TASMDs under the realized FSM are almost uniformly smaller than those under the realized CRD. Thus, Figure \ref{fig:loveplots} reiterates the balancing property of the FSM under both correct and incorrect specifications of the underlying potential outcome models, as discussed in Section \ref{sec_designassess1}.
\begin{figure}[!ht] \centering \caption{\footnotesize Love plots of the TASMDs of the original covariates (left\label{fig:sub1}) and of the squares and pairwise products of \texttt{Age}, \texttt{Education}, \texttt{Re74}, and \texttt{Re75} (right\label{fig:sub2}).} \label{fig:loveplots} \end{figure}
Indeed, Figure \ref{fig:loveplots} compares covariate balance between specific realizations of CRD and the FSM and hence is not fully reflective of their comparative performances across repeated randomizations. To address this, for each design we obtain the distribution of the TASMDs in $T_1$ across all the 10 covariates and across 1000 independent realizations of the design. To calculate the TASMDs for each realization of CRD and the FSM, we use the \texttt{tasmd\_rand} function of the package. A typical use of the \texttt{tasmd\_rand} function for a given assignment vector under CRD (denoted by \texttt{Z\_crd}) and that under the FSM (denoted by \texttt{Z\_fsm}) is shown below.
\singlespacing
\begin{verbatim}
R> tasmd_rand(data_frame = df_sample, index_col = TRUE,
+             alloc1 = Z_crd, alloc2 = Z_fsm, treat_lab = 1,
+             legend = c('CRD', 'FSM'))
\end{verbatim}
\doublespacing
The arguments of \texttt{tasmd\_rand} are the same as those in the \texttt{love\_plot} function, barring a few exceptions (please see the Appendix and the package documentation for details). Using the output from this function, in Figure \ref{fig:tasmd_density} we provide density plots of the distributions of the TASMDs under CRD and the FSM across all the covariates and 1000 realizations of the respective assignment mechanisms.
\begin{figure} \caption{\footnotesize Distribution of the TASMDs across covariates and 1000 independent realizations of CRD and the FSM.} \label{fig:tasmd_density} \end{figure}
Figure \ref{fig:tasmd_density} shows that the randomization distributions of the TASMDs under the FSM are substantially more concentrated towards zero than those under CRD. In fact, the mean and standard deviation of the TASMDs under the FSM are less than $19\%$ and $17\%$ of those under CRD, respectively. Moreover, for several realizations of the CRD the resulting TASMDs are larger than 0.15 (which translates to ASMDs larger than 0.3), indicating assignments that are highly imbalanced on one or more covariates. Such extreme assignments are ruled out by design under the FSM. Finally, we generate the distribution of the model-based ESS of the realized assignment under the FSM relative to CRD. Here we simulate the potential outcomes from the model $Y(1) = 100 - \texttt{Age} + 6 \times \texttt{Education} - 20 \times \texttt{Black} + 20 \times \texttt{Hispanic} + 0.003 \times \texttt{Re75} + \epsilon$, $Y(2) = Y(1)$, where $\epsilon$ is a Normal random variable with mean 0 and variance 16.
\singlespacing
\begin{verbatim}
R> set.seed(9)
R> # Generate the potential outcomes Y_1 and Y_2.
R> Y_1 = 100 - df_sample$Age + 6 * df_sample$Education - 20 * df_sample$Black +
+       20 * df_sample$Hispanic + 0.003 * df_sample$Re75 + rnorm(N, 0, 4)
R> Y_1 = round(Y_1, 2)
R> # Set the unit level causal effect tau as zero.
R> tau = 0
R> Y_2 = Y_1 + tau
R> # Create a matrix of potential outcomes.
R> Y_appended = cbind(Y_1, Y_2)
\end{verbatim}
\doublespacing
Using the \texttt{ess\_model} function, we compute the vector of the ESS for the realized FSM relative to 1000 realizations of CRD generated earlier. A typical use of the \texttt{ess\_model} function for a given assignment vector under CRD and that under the FSM is shown below.
\singlespacing
\begin{verbatim}
R> ess_model(X_cov = df_sample[,-1], assign_matrix = cbind(Z_crd, Z_fsm),
+            Y_mat = Y_appended, contrast = c(1,-1))
\end{verbatim}
\doublespacing
The \texttt{ess\_model} function requires the matrix of covariates or transformations thereof (denoted by \texttt{X\_cov}) that will be used as explanatory variables in the linear outcome models within each treatment group. Also, it requires the matrices of treatment assignments (denoted by \texttt{assign\_matrix}) and the potential outcomes (denoted by \texttt{Y\_mat}). Finally, the coefficients of the treatment contrast of interest are included as a vector (denoted by \texttt{contrast}). Here we focus on estimating $\text{PATE}_{1,2} = \mathbb{E}[Y_i(1) - Y_i(2)]$, and thus we set \texttt{contrast = c(1,-1)}. See the Appendix for the exact use of this function for this example. The ESS values for CRD and the FSM are stored in \texttt{Neff\_crd} and \texttt{Neff\_fsm} respectively.
We now compare the ESS of the two designs using side-by-side boxplots. The resulting plot is shown in Figure \ref{fig:ess boxplot2}.
\singlespacing
\begin{verbatim}
R> # Create boxplots of the ESS.
R> boxplot(Neff_crd, Neff_fsm, axes = F, names = c('CRD', 'FSM'))
R> # See the Appendix for the code specifying the axis marks.
\end{verbatim}
\doublespacing
\begin{figure} \caption{\footnotesize Boxplots of the distribution of model-based ESS for the given realization of the FSM with respect to 1000 independent realizations of CRD.} \label{fig:ess boxplot2} \end{figure}
Similar to the example in Section \ref{sec_designassess1}, the distribution of the ESS for the realized FSM is substantially more concentrated towards the full-sample size of 445 than that under CRD. In particular, \texttt{Neff\_fsm} equals 445 for at least 75 percent of the iterations, as indicated by the first quartile. The improvement in the median of \texttt{Neff\_fsm} over the median of \texttt{Neff\_crd} is roughly $5\%$, which, although less striking than the improvement noted in Section \ref{sec_designassess1}, is still meaningful in terms of allocation of resources in practice. The model-based ESS conditions on the assignment and only takes into account the variability from random sampling of the units. If we take into account the variability due to the random assignment mechanism (which is considered in the randomization-based ESS), the FSM leads to considerable improvement of the ESS over CRD. See the Appendix for the corresponding code for computing the randomization-based ESS.
\subsection{Making inference with the FSM} \label{sec_inf2}
In this section we show how to conduct model-based inference with the FSM. We refer the reader to the package documentation for details on functions to conduct randomization-based inference. Here we use the set of potential outcomes \texttt{Y\_1} and \texttt{Y\_2} generated in the previous section. For model-based inference, we fit separate linear models of the observed outcome (denoted by \texttt{Y\_obs}) on all the covariates (with an intercept term) in the two treatment groups. The covariates are demeaned to ensure that the estimated treatment effect is the same as the difference between the estimated coefficients of the intercept terms from the two models. The resulting matrix of covariates is denoted by \texttt{X\_cov\_demean}.
\singlespacing
\begin{verbatim}
R> # Fit a linear model in T_1.
R> fit_t1 = lm(Y_obs[Z_fsm_obs == 1] ~ X_cov_demean[Z_fsm_obs == 1,])
R> # Fit a linear model in T_2.
R> fit_t2 = lm(Y_obs[Z_fsm_obs == 2] ~ X_cov_demean[Z_fsm_obs == 2,])
R> # Compute point estimate of the ATE.
R> T0 = fit_t1$coefficients[1] - fit_t2$coefficients[1]
R> as.numeric(T0)
\end{verbatim}
\begin{verbatim}
[1] -0.04956801
\end{verbatim}
\doublespacing
Thus, the point estimate of the average treatment effect of treatment 1 versus treatment 2 is approximately -0.05. The following code is used to calculate the model-based standard error of this estimator.
\singlespacing
\begin{verbatim}
R> # Compute the variance of the estimator.
R> V0 = (coef(summary(fit_t1))[1, 'Std. Error'])^2 +
+      (coef(summary(fit_t2))[1, 'Std. Error'])^2
R> # Display the standard error of the estimator.
R> sqrt(V0)
\end{verbatim}
\begin{verbatim}
[1] 0.364697
\end{verbatim}
\doublespacing
The corresponding asymptotic 95\% confidence interval is computed using the following.
\singlespacing \begin{verbatim} R> # Compute the 95% confidence interval. R> CI = c(T0 - 1.96 * sqrt(V0), T0 + 1.96 * sqrt(V0)) R> names(CI) = c('Lower', 'Upper') R> CI \end{verbatim} \begin{verbatim} Lower Upper -0.7643742 0.6652382 \end{verbatim} \doublespacing Therefore, the 95\% confidence interval for $\text{PATE}_{1,2}$ is approximately (-0.76, 0.67). We note that in this case, the interval contains the true average treatment effect of treatment 1 versus treatment 2, which equals zero. \section{Discussion} \label{sec_discussion} In this paper, we have provided a practical introduction to the FSM. Using the NSW experiment as an empirical example, we have illustrated the use of the \texttt{FSM} package for \texttt{R}. In the example we have also compared the performance of the FSM to that of CRD, and shown that the FSM tends to achieve better balance on multiple covariates (or transformations thereof) and higher effective sample sizes. The example pertains to the assignment problem with two treatment groups; however, the methodology directly applies to more groups. A number of methodological developments on the FSM are currently underway. Here we discuss two major areas of development. First, the selection function component of the FSM discussed in this paper is based on D-optimality. However, other selection functions can also be used. For example, in the HIE a selection function based on the A-optimality criterion was used. A-optimal selection functions require a priori specification of \textit{policy weights} that allow researchers to emphasize some main effects of the model as more important than others. We recommend using the D-optimality criterion as it satisfies several desirable statistical properties, while circumventing the specification of policy weights. However, there is room for further development of the selection function to handle more complex scenarios. For instance, the D-optimal selection function can be modified or replaced for generalized linear models of the potential outcomes with non-linear link functions. Also, more work needs to be done on the selection function to ensure balance on a high (possibly infinite) dimensional class of basis functions of the covariates. Second, the FSM can be extended to more complex experimental designs, such as sequential, stratified, and cluster randomized experiments. In particular, a modified version of the FSM which we call the \textit{batched FSM} is being developed to assign units into treatment groups when the units arrive sequentially in batches. Also, there is work in progress on devising a principled algorithm that generates a fair and random SOM in stratified experiments. \section*{Acknowledgments} This work was supported through a grant from the Alfred P. Sloan Foundation (G-2020-13946). \pagebreak \onehalfspacing \begin{appendix} \section{The \texttt{som} function} \label{app:technical} \subsection{\texttt{R} function} \texttt{som(data\_frame = NULL, n\_treat, treat\_sizes, include\_discard = FALSE, method = `SCOMARS', control = FALSE, marginal\_treat = NULL)} \subsection{Arguments} \begin{itemize} \item \texttt{data\_frame}: An (optional) data frame corresponding to the full sample of units. Required if \texttt{include\_discard = TRUE}. \item \texttt{n\_treat}: Number of treatment groups. \item \texttt{treat\_sizes}: A vector of treatment group sizes. If \texttt{control = TRUE}, the first element of \texttt{treat\_sizes} should be the control group size. \item \texttt{include\_discard}: \texttt{TRUE} if a discard group is considered.
\item \texttt{method}: Specifies the selection strategy used among \texttt{`global percentage'}, \texttt{`randomized chunk'}, and \texttt{`SCOMARS'}. \texttt{`SCOMARS'} is applicable only if \texttt{n\_treat = 2}. For multiple groups ($g>2$) of sizes $cm_1,cm_2,...,c m_g$, where $c$ is an integer and $m_1, m_2,..., m_g$ are coprime integers, \texttt{method = `randomized chunk'} creates an SOM by randomly permuting the chunk $(\underbrace{1,1,...,1}_{m_1},\underbrace{2,2,...,2}_{m_2},...,\underbrace{g,g,...,g}_{m_g})$ $c$ times independently. For multiple groups ($g>2$), \texttt{method = `global percentage'} determines the choosing treatment group as the group that has made the lowest proportion of choices (relative to its total size) up to the previous stage. Ties are resolved randomly. \item \texttt{control}: if \texttt{TRUE}, treatments are labelled as 0,1,...,g-1 (0 representing the control group). If \texttt{FALSE}, they are labelled as 1,2,...,g. \item \texttt{marginal\_treat}: A vector of marginal probabilities, the jth element being the probability that the treatment group (treatment group 2 in case \texttt{control = FALSE}) gets to choose at the jth stage, given the total number of choices made by that treatment group up to the (j-1)th stage. Only applicable when \texttt{method = `SCOMARS'}. \end{itemize} \subsection{Value} A data frame containing the selection order of treatments, i.e. the labels of the treatment groups at each stage of selection. If \texttt{method = `SCOMARS'}, the data frame contains an additional column of the conditional selection probabilities. \section{The \texttt{fsm} function} \label{sec_fsmfunc} \subsection{\texttt{R} function} \texttt{fsm(data\_frame, SOM, s\_function = `Dopt', Q\_initial = NULL, eps = 0.001, ties = `random', intercept = TRUE, standardize = TRUE, units\_print = TRUE, index\_col = TRUE, Pol\_mat = NULL, w\_pol = NULL)} \subsection{Arguments} \begin{itemize} \item \texttt{data\_frame}: A data frame containing a column of unit indices (optional) and covariates (or transformations thereof). \item \texttt{SOM}: A selection order matrix. \item \texttt{s\_function}: Specifies a selection function, a string among \texttt{`constant'}, \texttt{`Dopt'}, \texttt{`Aopt'}, \texttt{`max pc'}, \texttt{`min pc'}, \texttt{`Dopt pc'}, \texttt{`max average'}, \texttt{`min average'}, \texttt{`Dopt average'}. The \texttt{`constant'} selection function puts a constant value on every unselected unit. \texttt{`Dopt'} uses the D-optimality criterion based on the full set of covariates to select units. \texttt{`Aopt'} uses the A-optimality criterion. \texttt{`max pc'} (respectively, \texttt{`min pc'}) selects the unit that has the maximum (respectively, minimum) value of the first principal component. \texttt{`Dopt pc'} uses the D-optimality criterion on the first principal component. \texttt{`max average'} (respectively, \texttt{`min average'}) selects the unit that has the maximum (respectively, minimum) value of the simple average of the covariates. \texttt{`Dopt average'} uses the D-optimality criterion on the simple average of the covariates. \item \texttt{Q\_initial}: An (optional) non-singular matrix (called the `initial matrix') that is added to the $(X^T X)$ matrix of the choosing treatment group at any stage at which the $(X^T X)$ matrix of that treatment group is non-invertible. If \texttt{NULL}, the $(X^T X)$ matrix for the full set of observations is used as the non-singular matrix. Applicable if \texttt{s\_function = `Dopt'} or \texttt{`Aopt'}.
\item \texttt{eps}: Proportionality constant for \texttt{Q\_initial}, the default value is 0.001. \item \texttt{ties}: Specifies how to deal with ties in the values of the selection function. If \texttt{ties = `random'}, a unit is selected randomly from the set of candidate units. If \texttt{ties = `smallest'}, the unit that appears earlier in the data frame, i.e. the unit with the smallest index gets selected. \item \texttt{intercept}: if \texttt{TRUE}, the design matrix (within each treatment arm) includes a column of intercepts. \item \texttt{standardize}: if \texttt{TRUE}, the columns of the $\underline{\bm{X}}$ matrix other than the column for the intercept (if any), are standardized. \item \texttt{units\_print}: if \texttt{TRUE}, the function automatically prints the candidate units at each step of the build-up process. \item \texttt{index\_col}: if \texttt{TRUE}, \texttt{data\_frame} contains a column of unit indices. \item \texttt{Pol\_mat}: Applicable only when \texttt{s\_function = `Aopt'}. \item \texttt{w\_pol}: A vector of policy weights. Applicable only when \texttt{s\_function = `Aopt'}. \end{itemize} \subsection{ Value} The function returns a list containing the following items. \begin{itemize} \item \texttt{data\_frame\_allocated}: The original data frame augmented with the column of the treatment indicator. \item \texttt{som\_appended}: The SOM with augmented columns for the indices and covariate values for units selected. \item \texttt{som\_split}: \texttt{som\_appended}, split by the levels of the treatment. \item \texttt{crit\_print}: The value of the objective function, at each stage of build up process. At each stage, the unit that maximizes the objective function is selected. \end{itemize} \section{Replication code for Section 3} \singlespacing \begin{verbatim} R> # Load the required packages. R> library(ggplot2) R> library(ggthemes) R> # Load the package. R> library(FSM) R> # Display the Lalonde dataset. R> head(Lalonde) R> # Include indicators for Re74 and Re75. R> df_sample = data.frame(Lalonde, E74 = ifelse(Lalonde$Re74, 1, 0), + E75 = ifelse(Lalonde$Re75, 1, 0)) R> round(colMeans(df_sample), 2) R> set.seed(7) R> # Specify the full sample size. R> N = nrow(df_sample) R> # Specify size of T_1. R> n1 = 222 R> # Specify size of T_2. R> n2 = 223 R> # Generate a Selection Order Matrix. R> som_obs = som(data_frame = NULL, n_treat = 2, treat_sizes = c(n1, n2), + method = 'SCOMARS', marginal_treat = rep((n2/N), N)) R> # Display first 10 rows of the SOM. R> som_obs[1:10,] R> # Generate a treatment assignment given som_obs. R> f = fsm(data_frame = df_sample, SOM = som_obs, s_function = 'Dopt', + eps = 0.0001, units_print = FALSE) R> # Generate augmented SOM. R> som_obs_augmented = f$som_appended R> # Display first 10 rows and first 10 columns of the augmented SOM. R> round(som_obs_augmented[1:10,1:10]) R> # Augment df_sample with the treatment label. R> df_sample_aug = f$data_frame_allocated R> # Create a vector of observed treatment labels. R> Z_fsm_obs = df_sample_aug$Treat R> # Generate an assignment under CRD. R> Z_crd = crd(df_sample, n_treat = 2, treat_sizes = c(n1, n2), control = FALSE)$Treat R> # Generate Love plot of the TASMDs in T_1. 
R> love_plot(data_frame = df_sample, index_col = TRUE, + alloc1 = Z_fsm_obs, alloc2 = Z_crd, imbalance = 'TASMD', treat_lab = 1, + legend_text = c('FSM','CRD'), xupper = 0.15) R> # Check balance on squares and products of continuous covariates R> df_sample_sq_int = make_sq_inter(data_frame = df_sample[,c(2,3,8,9)], + is_square = TRUE, is_inter = TRUE, keep_marginal = FALSE) R> love_plot(data_frame = df_sample_sq_int, index_col = FALSE, + alloc1 = Z_fsm_obs, alloc2 = Z_crd, imbalance = "TASMD", treat_lab = 1, + legend_text = c('FSM','CRD'), xupper = 0.15) R> # Generate 1000 indep realizations of CRD & the FSM R> set.seed(7) R> # Set number of iterations R> n_iter = 1000 R> # Initialize assignment vectors for all the iterations R> Z_crd_iter = matrix(rep(0,n_iter*N),nrow = n_iter) R> Z_fsm_iter = matrix(rep(0,n_iter*N),nrow = n_iter) R> for(i in 1:n_iter) R> { R> # Generate assignment under CRD R> fc = crd(df_sample, n_treat = 2, treat_sizes = c(n1,n2), control = FALSE) R> Z_crd_iter[i,] = fc$Treat R> # Generate assignment under the FSM R> som_iter = som(data_frame = NULL, n_treat = 2, treat_sizes = c(n1,n2), + method = 'SCOMARS', marginal_treat = rep((n2/N),N)) R> f = fsm(data_frame = df_sample, SOM = som_iter, s_function = 'Dopt', + eps = 0.0001, units_print = FALSE) R> Z_fsm_iter[i,] = f$data_frame_allocated$Treat R> } R> # Compute the distribution of TASMD across different randomizations and + across all the covariates R> # Compute the number of original covariates R> k = ncol(df_sample)-1 R> # Initialize vectors of TASMDs for both designs R> TASMD_crd_iter = matrix(rep(0, n_iter*k), nrow = n_iter) R> TASMD_fsm_iter = matrix(rep(0, n_iter*k), nrow = n_iter) R> for(r in 1:n_iter) R> { R> TASMD_iter = tasmd_rand(data_frame = df_sample, index_col = TRUE, + alloc1 = Z_crd_iter[r,], alloc2 = Z_fsm_iter[r,], + treat_lab = 1, legend = c('CRD','FSM'), roundoff = 3) R> TASMD_crd_iter[r,] = TASMD_iter[,1] R> TASMD_fsm_iter[r,] = TASMD_iter[,2] R> } R> # Concatenate the TASMDs across all the covariates R> TASMD_crd_concat = as.vector(TASMD_crd_iter) R> TASMD_fsm_concat = as.vector(TASMD_fsm_iter) R> # Plot estimated densities of the TASMDs under CRD and the FSM R> TASMD_df = data.frame(TASMD = c(TASMD_crd_concat, TASMD_fsm_concat), + Design = rep(c('CRD', 'FSM'), each = length(TASMD_crd_concat))) R> p = ggplot(TASMD_df, aes(x = TASMD, fill = Design)) + + geom_density(alpha = 0.5, adjust = 2.5) R> p + labs(y = 'Density') + theme_bw() + + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())+ + scale_fill_grey(start = 0.9, end = .2) + theme(legend.position = c(0.9, 0.9)) R> set.seed(9) R> # Generate the potential outcomes Y_1 and Y_2. R> Y_1 = 100 - df_sample$Age + 6 * df_sample$Education - 20 * df_sample$Black + + 20 * df_sample$Hispanic + 0.003 * df_sample$Re75 + rnorm(N, 0, 4) R> Y_1 = round(Y_1, 2) R> # Set the unit level causal effect tau as zero. R> tau = 0 R> Y_2 = Y_1 + tau R> # Create a matrix of potential outcomes. 
R> Y_appended = cbind(Y_1, Y_2) R> # Create matrix of observed covariates R> X_cov = df_sample[,-1] R> # Initialize model-based ESS for CRD and the FSM R> Neff_crd = rep(0,n_iter) R> Neff_fsm = rep(0,n_iter) R> for(r in 1:n_iter) R> { R> Z_crd_obs = Z_crd_iter[r,] R> # Create matrix of assignments R> Z_big = cbind(Z_crd_obs, Z_fsm_obs) R> # Calculate the ESS R> ess = ess_model(X_cov = X_cov, assign_matrix = Z_big, + Y_mat = Y_appended, contrast = c(1, -1)) R> Neff_crd[r] = ess[1] R> Neff_fsm[r] = ess[2] R> } R> # Draw boxplots of the ESS R> boxplot(Neff_crd, Neff_fsm, axes = F, names = c('CRD', 'FSM')) R> # Set axes marks R> axis(1, at = c(1:2), labels = c('CRD', 'FSM')) R> axis(2, cex.axis = 0.8, at = c(seq(390,450,10),445)) R> axis(4, cex.axis = 0.8, at = c(seq(390,450,10),445)) R> rug(x =jitter(Neff_crd), side = 2) R> rug(x =jitter(Neff_fsm), side = 4) R> box() R> # Compute randomization-based ESS. R> # Construct 3-dim array of the assignments. R> # 1st coordinate represents iterations. R> # 2nd coordinate represents units. R> # 3rd coordinate represents designs. R> Z_array = array(0, dim = c(n_iter, N, 2)) R> Z_array[,, 1] = Z_crd_iter R> Z_array[,, 2] = Z_fsm_iter R> # Calculate ESS of CRD vs the FSM R> ess_rand(assign_array = Z_array, Y_mat = Y_appended, contrast = c(1, -1)) R> # Create a vector of observed outcome R> Y_obs = Y_1 R> Y_obs[Z_fsm_obs == 2] = Y_2[Z_fsm_obs == 2] R> # De-mean columns of X_cov R> X_cov_demean = apply(X_cov, 2, function(y) y - mean(y)) R> # Fit a linear model in T_1. R> fit_t1 = lm(Y_obs[Z_fsm_obs == 1] ~ X_cov_demean[Z_fsm_obs == 1,]) R> # Fit a linear model in T_2. R> fit_t2 = lm(Y_obs[Z_fsm_obs == 2] ~ X_cov_demean[Z_fsm_obs == 2,]) R> # Compute point estimate of the ATE. R> T0 = fit_t1$coefficients[1] - fit_t2$coefficients[1] R> as.numeric(T0) R> # Compute the variance of the estimator. R> V0 = (coef(summary(fit_t1))[1, 'Std. Error'])^2 + + (coef(summary(fit_t2))[1, 'Std. Error'])^2 R> # Display the standard error of the estimator. R> sqrt(V0) R> # Compute the 95 R> CI = c(T0 - 1.96 * sqrt(V0), T0 + 1.96 * sqrt(V0)) R> names(CI) = c('Lower', 'Upper') R> CI \end{verbatim} \doublespacing \end{appendix} \end{document}
\begin{document} \begin{abstract} In this paper we prove two main results about obstructions to graph planarity. One is that, if $G$ is a $3$-connected graph with a $K_5$-minor and $T$ is a triangle of $G$, then $G$ has a $K_5$-minor $H$ such that $E(T)\subseteq E(H)$. The other is that if $G$ is a $3$-connected simple non-planar graph not isomorphic to $K_5$ and $e,f\in E(G)$, then $G$ has a minor $H$ such that $e,f\in E(H)$ and, up to isomorphisms, $H$ is one of the four non-isomorphic simple graphs obtained from $K_{3,3}$ by the addition of $\,0$, $1$ or $2$ edges. We generalize this second result to the class of the regular matroids. \\ \end{abstract} \begin{keyword} graph minors \sep graph planarity \sep regular matroid \sep Kuratowski Theorem \sep Wagner Theorem \end{keyword} \title{On $K_5$ and $K_{3,3}$ minors} \newcommand{\kttedges}{\drawline(10,0)(30,0)(10,20)(30,20)(10,40)(30,40)(10,0)(30,20) \drawline(10,20)(30,40)\drawline(10,40)(30,0)} \section{Introduction} We use the terminology set by Oxley~\cite{Oxley}. Our graphs are allowed to have loops and multiple edges. When there is no ambiguity we denote by $uv$ the edge linking the vertices $u$ and $v$. We use the notation $si(G)$ for a simplification of $G$ (a graph obtained from $G$ by removing all loops and all, but one, edges in each parallel class). Usually we choose the edge-set of $si(G)$ satisfying our purposes with no mention. It is a consequence of Whitney's 2-Isomorphism Theorem (Theorem 5.3.1 of \cite{Oxley}) that, for each $3$-connected graphic matroid $M$, there is, up to isomorphisms, a unique graph of which $M$ is the cycle matroid. We also use this result without mention, as well as the Kuratowski and Wagner Theorems about graph planarity. When talking about a {\bf triangle} in a graph we may be referring both to the subgraph corresponding to the triangle and to its edge-set.
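For the reader's convenience, we recall explicitly the two planarity criteria invoked above (this reminder is an addition; the statements themselves are standard, see e.g. \cite{Oxley}): by the Kuratowski and Wagner Theorems, for a graph $G$,
$$G \text{ is planar} \;\Longleftrightarrow\; G \text{ contains no subdivision of } K_5 \text{ or } K_{3,3} \;\Longleftrightarrow\; G \text{ has no minor isomorphic to } K_5 \text{ or } K_{3,3}.$$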
We say that a set of vertices in a graph is {\bf stable} if it has no pair of vertices linked by an edge. Let $U$ and $V$ be different maximal stable sets of vertices in $K_{3,3}$. We define $K_{3,3}^{i,j}$ to be the simple graph obtained from $K_{3,3}$ by adding $i$ edges linking pairs of vertices of $U$ and $j$ edges linking pairs of vertices of $V$. By default, we label the vertices of $K_{3,3}^{i,j}$ as in Figure 1. \begin{figure} \caption{Labels for extensions of $K_{3,3}$.} \end{figure} A family $\mathcal{F}$ of matroids (graphs, resp.) is said to be {\bf $k$-rounded} in a minor-closed class of matroids (graphs, resp.) $\mathcal{N}$ if each member of $\mathcal{F}$ is $(k+1)$-connected and, for each $(k+1)$-connected matroid (graph, resp.) $M$ of $\mathcal{N}$ with an $\mathcal{F}$-minor and each $k$-subset $X\subseteq E(M)$, $M$ has an $\mathcal{F}$-minor with $X$ in its ground set (edge set, resp.). When $\mathcal{N}$ is omitted we consider it to be the class of all matroids (graphs, resp.). By Whitney's 2-isomorphism Theorem, the concepts of $k$-roundedness for graphs and matroids agree for $k\ge 2$. This definition is a slight generalization of the one made by Seymour~\cite{Seymour1985}. For more information about $k$-roundedness we refer the reader to Section 12.3 of \cite{Oxley}. The second main result stated in the abstract is Corollary \ref{sec-res}, which follows from the next theorem that we establish here: \begin{theorem}\label{k33-classes} The following families of graphs are $2$-rounded: \begin{enumerate} \item[(a)] $\{K_{3,3},K_{3,3}^{0,1},K_{3,3}^{0,2},K_{3,3}^{1,1}\}$ and \item[(b)] $\{K_{3,3},K_{3,3}^{0,1},K_{3,3}^{0,2},K_{3,3}^{1,1},K_5\}$. \end{enumerate} Moreover, the following families of matroids are $2$-rounded in the class of the regular matroids: \begin{enumerate} \item[(c)] $\{M(K_{3,3}),M(K_{3,3}^{0,1}),M(K_{3,3}^{0,2}),M(K_{3,3}^{1,1})\}$ and \item[(d)] $\{M(K_{3,3}),M(K_{3,3}^{0,1}),M(K_{3,3}^{0,2}),M(K_{3,3}^{1,1}),M(K_5)\}$. \end{enumerate} \end{theorem} Seymour~\cite[(7.5)]{Seymour1980} proved that each $3$-connected simple non-planar graph not isomorphic to $K_5$ has a $K_{3,3}$-minor. So, as a consequence of Theorem \ref{k33-classes} we have: \begin{corollary}\label{sec-res} If $G$ is a $3$-connected simple non-planar graph and $e,f\in E(G)$, then either $G\cong K_5$ or $G$ has a minor $H$ isomorphic to $K_{3,3},K_{3,3}^{0,1},K_{3,3}^{0,2}$ or $K_{3,3}^{1,1}$ such that $e,f\in E(H)$. \end{corollary} The next corollary follows from Theorem \ref{k33-classes}, combined with Bixby's Theorem about the decomposition of connected matroids into 2-sums (\cite[Theorem 8.3.1]{Oxley}). To derive the next corollary, instead of Theorem \ref{k33-classes}, we may also use a result of Seymour~\cite{Seymour1985}, which states that $\{U_{2,4},M(K_{3,3}),M(K_{3,3}^{0,1}),M(K_5)\}$ is $1$-rounded. \begin{corollary} If $G$ is a non-planar $2$-connected graph and $e\in E(G)$, then $G$ has a minor $H$ isomorphic to $K_5$, $K_{3,3}$ or $K_{3,3}^{0,1}$ such that $e\in E(H)$. \end{corollary} The first result stated in the abstract is Corollary \ref{k5}, which follows from the following theorem: \begin{theorem}\label{kttuu} If $G$ is a $3$-connected simple graph with a $K_{3,3}^{1,1}$-minor and $T$ is a triangle of $G$, then $G$ has a $K_{3,3}^{1,1}$-minor with $E(T)$ as the edge-set of a triangle.
\end{theorem} \begin{corollary}\label{k5} If $G$ is a $3$-connected simple graph with a $K_{5}$-minor and $T$ is a triangle of $G$, then $G$ has a $K_5$-minor with $E(T)$ as the edge-set of a triangle. \end{corollary} Other results about obtaining minors preserving a triangle were proved by Asano, Nishizeki and Seymour~\cite{Asano}. Truemper~\cite{Truemper} proved that if $G$ has a $K_{3,3}$-minor, and $e$, $f$ and $g$ are the edges of $G$ adjacent to a degree-$3$ vertex, then $G$ has a $K_{3,3}$-minor using $e$, $f$ and $g$. We define a class $\mathcal{F}$ of $3$-connected matroids to be {\bf $(3,k,l)$-rounded} in $\mathcal{N}$ provided the following property holds: if $M$ is a $3$-connected matroid in $\mathcal{N}$ with an $\mathcal{F}$-minor, $X\subseteq E(M)$, $|X|=k$ and $r(X)\le l$, then $M$ has an $\mathcal{F}$-minor $N$ such that $X\subseteq E(N)$ and $N|X=M|X$. Another formulation of Theorem \ref{kttuu} and Corollary \ref{k5} is that $\{M(K_{3,3}^{1,1})\}$ and $\{M(K_5)\}$ are $(3,3,2)$-rounded in the class of graphic matroids. Costalonga~\cite{Costalonga-Vertically} (in the last comments of the introduction) proved: \begin{prop}\label{costalonga-prop} Let $2\le l\le k\le 3$. Let $\mathcal{F}$ be a finite family of matroids and $\mathcal{N}$ a class of matroids closed under minors. Then, there is a $(3,k,l)$-rounded family of matroids $\mathcal{F}'$ such that each $M\in \mathcal{F}'$ has an $\mathcal{F}$-minor $N$ satisfying $r(M)-r(N)\le k+\lfloor\frac{k-1}{2}\rfloor$. \end{prop} In \cite{Costalonga-Vertically} there are more results of this nature. Although a minimal $(3,3,3)$-rounded family of graphs containing $\{K_5, K_{3,3}\}$ exists, and even has a size that allows a computer approach, it has proven to be complicated. Such a family must at least include the graphs $K_{3,3}^{i,j}$, for $i+j\le 3$, $K_5$ and the following two graphs in Figure 2, obtained, respectively, from $K_{3,3}$ and $K_{5}$ by the same kind of vertex expansion, which must occur in families of this kind. \begin{figure} \caption{Members of a $(3,3,3)$-rounded family containing $\{K_5, K_{3,3}\}$.} \end{figure} \section{Proofs for the Theorems} The proof of Theorem \ref{k33-classes} is based on the following theorem: \begin{theorem}\label{criterion}(Seymour~\cite{Seymour1985}, see also \cite[Theorem 12.3.9]{Oxley}) Let $\mathcal{N}$ be a class of matroids closed under minors, and let $\mathcal{F}$ be a family of $3$-connected matroids. If, for each matroid $M$ and each $e\in E(M)$ such that $M/e\in\mathcal{F}$ or $M\backslash e \in \mathcal{F}$, and for each $f\in E(M)-e$, there is an $\mathcal{F}$-minor using $e$ and $f$, then $\mathcal{F}$ is $2$-rounded in $\mathcal{N}$. \end{theorem} Seymour proved Theorem \ref{criterion} when $\mathcal{N}$ is the class of all matroids, but the same proof holds for this more general version. By Whitney's 2-isomorphism Theorem, the analogue of Theorem \ref{criterion} for graphs also holds. \\ \begin{proofof}\emph{Proof of Theorem \ref{k33-classes}: } For items (a) and (b) we will consider $\mathcal{N}$ as the class of graphic matroids and for items (c) and (d) we will consider $\mathcal{N}$ as the class of regular matroids. In each item we will verify the criterion given by Theorem \ref{criterion}. First we prove item (a). We begin by looking at the $3$-connected simple graphs $G$ such that $G\backslash e \in \mathcal{F}_a:=\{K_{3,3},K_{3,3}^{0,1},K_{3,3}^{0,2},K_{3,3}^{1,1}\}$. We may assume that $G\notin \mathcal{F}_a$.
So, up to isomorphisms, $G=K_{3,3}^{0,3}$ or $G=K_{3,3}^{1,k}$ for some $k\in \{1,2,3\}$. Thus $e\notin E(K_{3,3})$. Define $H:=G[E(K_{3,3})\cup \{e,f\}]$. If $f\in E(K_{3,3})$, then $H\cong K_{3,3}^{0,1}$; otherwise $H\cong K_{3,3}^{0,2}$ or $H\cong K_{3,3}^{1,1}$. Thus $H$ is an $\mathcal{F}_a$-minor of $G$ and we may suppose that $G/e\in \mathcal{F}_a$. We have that $G$ is $3$-connected and simple; in particular, $G$ has no degree-$2$ vertices, hence $G$ must be obtained from $G/e$ by the expansion of a vertex with degree at least $4$. This implies that $G/e\ncong K_{3,3}$. Thus, we may assume that $G/e$ is one of the graphs $K_{3,3}^{0,1}$, $K_{3,3}^{1,1}$ or $K_{3,3}^{0,2}$. We denote $e:=w_1w_2$. If $G/e= K_{3,3}^{0,1}$, then $G$ is obtained from $G/e$ by the expansion of a degree-$4$ vertex. In this case we may assume without loss of generality that $G$ is the graph $G_1$, defined in Figure 3. Note that, in this case, $G_1/u_3w_2\cong K_{3,3}$ and that $G_1/u_3v_1\cong K_{3,3}^{0,1}$ (with $\{u_1,u_2,w_2\}$ stable). So, one of $G_1/u_3w_2$ or $G_1/u_3v_1$ is an $\mathcal{F}_a$-minor we are looking for. So we may assume that $G/e\ncong K_{3,3}^{0,1}$. If $G/e\cong K_{3,3}^{1,1}$, then $G\cong G_1+ u_2u_3$ and the result follows as in the preceding case. Hence we may assume that $G/e\cong K_{3,3}^{0,2}$. If $G$ is obtained from $G/e$ by the expansion of a degree-$4$ vertex, then $G\cong G_2\cong G_1+v_1w_1$. In this case we may proceed as in the first case again. Thus, if $G/e=K_{3,3}^{0,2}$, we can assume that $G$ is obtained from $G/e$ by the expansion of the degree-$5$ vertex. If $\{v_1w_1,v_3w_2\}$ or $\{v_1w_2,v_3w_1\}$ is contained in $E(G)$, then $G$ is again isomorphic to $G_2$ and we are reduced to the first case again. Without loss of generality, say that $v_1w_2,v_2w_2\in E(G)$. Then $G$ is one of the graphs $G_3$ or $G_4$ in Figure 3. If $G=G_3$, then one of $G_3/v_1w_2$ or $G_3/w_2v_3$, both isomorphic to $K_{3,3}^{0,2}$, is the $\mathcal{F}_a$-minor we are looking for. If $G=G_4$, then one of $si(G_4/u_3w_2)$ ($\cong K_{3,3}^{0,1}$) or $si(G_4/u_3v_1)$ ($\cong K_{3,3}^{0,1}$, with $\{u_1,u_2,w_2\}$ stable) is such an $\mathcal{F}_a$-minor. This proves item (a). Now we prove item (b). We just have to examine the $3$-connected simple single-element extensions and coextensions of $K_5$, since the other verifications were made in the proof of item (a). The unique graph $G$ with an edge $e$ such that $G/e\cong K_5$ or $G\backslash e \cong K_5$ is $K_{3,3}^{1,1}$ (up to isomorphisms). So, we have item (b). Now we prove item (c). By the proof of item (a), it only remains to examine the $3$-connected extensions and coextensions of the matroids in $\mathcal{F}_c:=\{M(K_{3,3}),$ $M(K_{3,3}^{0,1}),$ $M(K_{3,3}^{0,2}),$ $M(K_{3,3}^{1,1})\}$ which are not graphic. By \cite[Theorem 13.1.2 and Proposition 12.2.8]{Oxley}, each $3$-connected regular matroid is graphic, cographic, isomorphic to $R_{10}$ or has an $R_{12}$-minor. But no cographic matroid has a minor in $\mathcal{F}_c$. Moreover, by cardinality, $R_{10}$ also has no $\mathcal{F}_c$-minor. So, the unique non-graphic matroids $M$ such that $M\backslash e$ or $M/e$ is possibly in $\mathcal{F}_c$ are those with $R_{12}$-minors. More specifically, by cardinality and rank, the unique non-graphic matroid that possibly has a single-element deletion or contraction in $\mathcal{F}_c$ is $R_{12}$, up to isomorphisms.
Usually $R_{12}$ is defined as the matroid represented over $GF(2)$ by the following matrix: $$B:=\bordermatrix{ &1&2&3&4&5&6&7&8&9&10&11&12\cr z_1&1&0&0&0&0&0&1&1&1&0&0&0\cr z_2&0&1&0&0&0&0&1&1&0&1&0&0\cr z_3&0&0&1&0&0&0&1&0&0&0&1&0\cr z_4&0&0&0&1&0&0&0&1&0&0&0&1\cr z_5&0&0&0&0&1&0&0&0&1&0&1&1\cr z_6&0&0&0&0&0&1&0&0&0&1&1&1}$$ Now, we build a representation of $si(R_{12}/1)$ as follows. First, we eliminate the first row and the first column of $B$ and eliminate column $9$, which becomes equal to column $5$; after that, we add row $z_5$ to row $z_6$ and, finally, we add an extra row $z_7$ equal to the sum of the other rows. So we get the matrix $A$, defined next: \begin{wrapfigure}[6]{r}[-30pt]{3cm}\caption{} \begin{picture}(50,90) \put(15,5){\circle*{4}}\put(0,5){$z_6$}\put(15,45){\circle*{4}}\put(0,45){$z_4$} \put(15,85){\circle*{4}}\put(0,85){$z_3$}\put(70,5){\circle*{4}}\put(75,5){$z_5$} \put(70,45){\circle*{4}}\put(75,45){$z_7$}\put(70,85){\circle*{4}}\put(75,85){$z_2$} \drawline(70,85)(70,45)\put(70,65){2}\drawline(15,85)(70,45)\put(33,72){3} \drawline(15,45)(70,45)\put(27,44){4}\drawline(15,5)(70,5)\put(41,-3){5} \drawline(15,5)(70,45)\put(33,11){6}\drawline(15,85)(70,85)\put(41,85){7} \drawline(15,45)(70,85)\put(18,51){8}\drawline(15,5)(70,85)\put(13,17){10} \drawline(15,85)(70,5)\put(12,68){11}\drawline(15,45)(70,5)\put(13,31){12} \end{picture} \end{wrapfigure} $$A:=\bordermatrix{ &2&3&4&5&6&7&8&10&11&12\cr z_2&1&0&0&0&0&1&1&1&0&0\cr z_3&0&1&0&0&0&1&0&0&1&0\cr z_4&0&0&1&0&0&0&1&0&0&1\cr z_5&0&0&0&1&0&0&0&0&1&1\cr z_6&0&0&0&1&1&0&0&1&0&0\cr z_7&1&1&1&0&1&0&0&0&0&0 }$$ \\ Note that $ R_{12}/1\backslash 9\cong si(R_{12}/1)\cong R_{12}/1\backslash 5$ is the cycle matroid of the graph in Figure 4. Now, observe that inverting the order of the rows in matrix $B$ gives us an automorphism $\phi$ of $R_{12}$ such that $\phi(1)=6$. Moreover, $R_{12}$ is self-dual, where an isomorphism between $R_{12}$ and $R_{12}^*$ takes $1$ into $7$. So $\{1,6,7\}$ is contained in an orbit of the automorphism group of $R_{12}$. Thus $si(R_{12}/1)\cong si(R_{12}/6)\cong si(R_{12}/7)\cong M(K_{3,3}^{0,2})$ and the ground set of one of these matroids can be chosen containing $\{e,f\}$. Therefore, for each pair of elements of $R_{12}$, there is an $\mathcal{F}_c$-minor containing both. This proves item (c). To prove item (d) we observe that if $M/e\cong M(K_5)$ or $M\backslash e \cong M(K_5)$, then $|E(M)|=11$, so $M$ is not isomorphic to $R_{10}$, nor does it have an $R_{12}$-minor. Moreover, $M$ is not cographic in this case. So, all the matroids we have to deal with are graphic, and the proof of item (d) is reduced to item (b). \end{proofof} \begin{lemma}\label{equivalent-minors} Let $G$ be a $3$-connected simple graph not isomorphic to $K_5$. Then $G$ has a $K_5$-minor if and only if $G$ has a $K_{3,3}^{1,1}$-minor. \end{lemma} \begin{proof} If $G$ has a $K_{3,3}^{1,1}$-minor, then $G$ has a $K_5$-minor, because $K_5\cong K_{3,3}^{1,1}/u_1v_1$. On the other hand, suppose that $G$ has a $K_5$-minor. By the Splitter Theorem (Theorem 12.1.2 of \cite{Oxley}), $G$ has a $3$-connected simple minor $H$ with an edge $e$ such that $H/e\cong K_5$ or $H\backslash e\cong K_5$. But no simple graph $H$ has an edge $e$ such that $H\backslash e\cong K_5$. So $H/e\cong K_5$. Now, it is easy to verify that $H\cong K_{3,3}^{1,1}$ and conclude the lemma. \end{proof} The next result is Corollary 1.8 of \cite{Costalonga2}. \begin{corollary}\label{costalonga} Let $G$ be a simple $3$-connected graph with a simple $3$-connected minor $H$ such that $|V(G)|-|V(H)|\geq3$.
Then there is a $3$-subset $\{x,y,z\}$ of $E(G)$, which is not the edge-set of a triangle of $G$, such that $G/x$, $G/y$, $G/z$ and $G/x,y$ are all $3$-connected graphs having $H$-minors. \end{corollary} \begin{proofof}\emph{Proof of Theorem \ref{kttuu}: } Suppose that $G$ and $T$ form a counter-example to the theorem minimizing $|V(G)|$. If $|V(G)|\ge 8$, then, by Corollary \ref{costalonga} applied to $G$ and $K_5$, $G$ has an edge $e$ such that $e\notin cl_{M(G)}(T)$ and $G/e$ is $3$-connected and has a $K_5$-minor. Thus $si(G/e)$ is a $3$-connected simple graph having $T$ as a triangle. By Lemma \ref{equivalent-minors}, $si(G/e)$ has a $K_{3,3}^{1,1}$-minor, contradicting the minimality of $G$. Thus $|V(G)|\le 7$. If $|V(G)|=6$, then $G\cong K_{3,3}^{i,j}$ for some $1\le i\le j \le 3$. In this case, the theorem can be verified directly. Thus $|V(G)|=7$. So, there are $e\in E(G)$ and $X\subseteq E(G)$ such that $G\backslash X/e \cong K_{3,3}^{1,1}$. If $e\notin T$, then $si(G/e)$ contradicts the minimality of $G$, so $e\in T$. We now split the proof into two cases. The first case is when $e$ is adjacent to a degree-$2$ vertex $v$ of $G\backslash X$. Let $f$ be the other edge adjacent to $v$ in $G\backslash X$. So $e,f\in T$, since otherwise $si(G/f)$ would contradict the minimality of $G$. Up to isomorphisms, $G\backslash X$ can be obtained from $K_{3,3}^{1,1}\cong G\backslash X/e$ by adding the vertex $v$ in the middle of some edge $e'$. By symmetry, we may assume that $e'\in\{u_1v_2,v_2v_3,u_1v_1\}$. So, there are, up to isomorphisms, three possibilities for $G\backslash (X-T)$, those in Figure 5. Since $G$ is simple, $G$ has a third edge $g$ adjacent to $v$. For any of the graphs in Figure 5, one verifies that $si(G\backslash(X-T)/g)$ contradicts the minimality of $G$. So the proof is done in the first case. In the second case, $e$ is an edge of $G\backslash X$ whose end-vertices have degree at least $3$. We may suppose that the end-vertices $w_1$ and $w_2$ of $e$ collapse into $v_2$ when contracting $e$ in $G\backslash X$. Let $S$ be the set of edges incident to $v_2$ in $G\backslash X/e$. We may also assume that $w_2$ is adjacent to $v_3$ in $G\backslash X$. With these assumptions, $G\backslash (X\cup S)$ is the graph $G_4$ of Figure 6. Note also that $G\backslash X$ is obtained from $G_4$ by adding $3$ edges, each incident to a different vertex in $\{u_1,u_2,u_3\}$, two of them incident to $w_1$ and one incident to $w_2$. Since switching $u_2$ and $u_3$ in $G_4$ induces an automorphism, we may suppose that $u_2w_1\in E(G\backslash X)$. Then, without loss of generality, $G\backslash X$ is one of the graphs $G_5$ or $G_6$ in Figure 6. In the case that $G\backslash X=G_5$, in the first row of Figure 7 we draw $G\backslash(X-T)$ for each possibility for $T$. The bold edges are those of $T$. In each graph of the first row, the double edge $g$ has the property that the graph $si(G\backslash(X-T)/g)$, drawn in the second row in the respective column, contradicts the minimality of $G$. The vertex obtained in the contraction is labelled by $z$. In the third and fourth rows of Figure 7, we have the same for the case in which $G\backslash X=G_6$.
This proves the theorem. \end{proofof} \newcommand{\kttright}{ \put(15,0){\circle*{4}} \put(15,20){\circle*{4}} \put(15,40){\circle*{4}} \put(40,0) {\circle*{4}} \put(40,20){\circle*{4}} \put(40,40){\circle*{4}} \drawline(15,0)(40,0) \drawline(15,0)(40,20) \drawline(15,0)(40,40) \drawline(15,20)(40,0) \drawline(15,20)(40,20) \drawline(15,20)(40,40) \drawline(15,40)(40,0) \drawline(15,40)(40,20) \drawline(15,40)(40,40) } \newcommand{\gfivebase}{\put(15,10){\circle*{4}}\put(0,8){$u_3$} \put(15,30){\circle*{4}}\put(0,28){$u_2$} \put(15,50){\circle*{4}}\put(0,48){$u_1$} \put(45,0){\circle*{4}}\put(46,-2){$v_3$} \put(45,20){\circle*{4}}\put(46,18){$w_2$} \put(45,40){\circle*{4}}\put(46,38){$w_1$} \put(45,60){\circle*{4}}\put(46,58){$v_1$} } \newcommand{\boldtriangle}[3]{{\linethickness{1.2pt} \qbezier#1#2#2 \qbezier#2#3#3 \qbezier#3#1#1}} \begin{proofof}\emph{Proof of Theorem \ref{k5}: } Suppose that $G$ is a $3$-connected simple graph with a $K_5$-minor and $T$ is a triangle of $G$. We may suppose that $G\ncong K_5$. By Lemma \ref{equivalent-minors}, $G$ has a $3$-connected simple minor $H\cong K_{3,3}^{1,1}$. By Theorem \ref{kttuu}, we may choose $H$ having the edges of $T$ as the edges of a triangle. Let $e\in E(H)$ be the edge such that $H/e\cong K_5$. Note that $e$ is in no triangle of $H$. So $H/e$ is the $K_5$-minor we are looking for. \end{proofof} \end{document}
\begin{document} \begin{center} \Large \textbf{Compressed sensing in the quaternion algebra}\\ \mathbbm end{center} \begin{center} \begin{tabular}{cc} \textbf{Agnieszka Bade\'nska\textsuperscript{1}} & \textbf{{\L}ukasz B{\l}aszczyk\textsuperscript{1,2}} \\ [email protected] & [email protected] \\ \textsuperscript{1} Faculty of Mathematics & \textsuperscript{2} Institute of Radioelectronics \\ and Information Science & and Multimedia Technology \\ Warsaw University of Technology & Warsaw University of Technology \\ ul. Koszykowa 75 & ul. Nowowiejska 15/19 \\ 00-662 Warszawa, Poland & 00-665 Warszawa, Poland \mathbbm end{tabular} \mathbbm end{center} \noindent \textbf{Keywords}: compressed sensing, quaternion, restricted isometry property, sparse signals. \normalsize \begin{abstract} The~article concerns compressed sensing methods in the~quaternion algebra. We prove that it is possible to uniquely reconstruct -- by $\mathbbm ell_1$-norm minimization -- a~sparse quaternion signal from a~limited number of its linear measurements, provided the~quaternion measurement matrix satisfies so-called restricted isometry property with a~sufficiently small constant. We also provide error estimates for the~reconstruction of a~non-sparse quaternion signal in the~noisy and noiseless cases. \mathbbm end{abstract} \section{Introduction} E.~Cand\'es et al. showed that -- in the~real or complex setting -- if a~measurement matrix satisfies so-called \textit{restricted isometry property} (RIP) with a~sufficiently small constant, then every sparse signal can be uniquely reconstructed from a~limited number of its linear measurements as a~solution of a~convex program of $\mathbbm ell_1$-norm minimization (see~e.g.~\cite{cRIP,crt,ct} and \cite{introCS} for more references). Sparsity of the~signal is a~natural assumption -- most of well known signals have a~sparse representation in an~appropriate basis (e.g. wavelet representation of an~image). Moreover, if the~original signal was not sparse, the~same minimization procedure provides a~good sparse approximation of the~signal and the~procedure is stable in the~sense that the~error is bounded above by the~$\mathbbm ell_1$-norm of the~difference between the~original signal and its best sparse approximation. For a~certain time the~attention of the~researchers in the~theory of \textit{compressed sensing} has mostly been focused on real and complex signals. Over the~last decade there have been published results of numerical experiments suggesting that the~compressed sensing methods can be successfully applied also in the~quaternion algebra~\cite{barthelemy2015,hawes2014,l1minqs}, however, until recently there were no theoretical results that could explain the~success of these experiments. The~aim of our research is to develop theoretical background of the~compressed sensing theory in the~quaternion algebra. Our~first step towards this goal was proving that one can uniquely reconstruct a~sparse quaternion signal -- by $\mathbbm ell_1$-norm minimization -- provided the~real measurement matrix satisfies the~RIP (for quaternion vectors) with a~sufficiently small constant (\cite[Corrolary~5.1]{bb}). This result can be directly applied since any~real matrix satisfying the~RIP for real vectors, satisfies the~RIP for quaternion vectors with the~same constant ({\cite[Lemma~3.2]{bb}}). We also want to point out a~very interesting recent result of N.~Gomes, S.~Hartmann and U.~K\"ahler concerning the~quaternion Fourier matrices -- arising in colour representation of images. 
They showed that with high probability such matrices allow a~sparse reconstruction by means of the~$\mathbbm ell_1$-minimization {\cite[Theorem~3.2]{ghk}}. Their proof, however, is straightforward and does not use the~notion of~RIP. The generalization of compressed sensing to the~quaternion algebra would be significant due to their wide applications. Apart from the~classical applications (in quantum mechanics and for the~description of 3D solid body rotations), quaternions have also been used in 3D and 4D signal processing \cite{snopek2015}, in particular to represent colour images (e.g.~in the RGB or CMYK models). Due to the~extension of classical tools (like the~Fourier transform~\cite{ell2014}) to the~quaternion algebra it is possible to investigate colour images without need of treating each component separately~\cite{dubey2014,ell2014}. That is why quaternions have found numerous applications in image filtering, image enhancement, pattern recognition, edge detection and watermarking \cite{es,G1,khalil2012,P1,rzadkowski2015,T1,W1}. There has also been proposed a~dual-tree quaternion wavelet transform in a~multiscale analysis of geometric image features~\cite{ccb}. For this purpose an~alternative representation of quaternions is used -- through its magnitude (norm) and three phase angles: two of them encode phase shifts while the~third contains image texture information \cite{bulow2001}. In view of numerous articles presenting results of numerical experiments of quaternion signal processing and their possible applications, there is a~natural need of further thorough theoretical investigations in this field. In this article we extend the~fundamental result of the~compressed sensing theory to the~quaternion case, namely we show that if a~quaternion measurement matrix satisfies the~RIP with a~sufficiently small constant, then it is possible to reconstruct sparse quaternion signals from a~small number of their measurements via $\mathbbm ell_1$-norm minimization (Corollary~\mathbf ref{l1mincor}). We also estimate the~error of reconstruction of a~non-sparse signal from exact and noisy data (Theorem~\mathbf ref{l1minthm}). Note that these results not only generalize the~previous ones {\cite[Theorem~4.1, Corrolary~5.1]{bb}} but also improve them by decreasing the~error estimation's constants. This enhancement was possible due to using algebraic properties of quaternion Hermitian matrices (Lemma~\mathbf ref{herm-norm}) to derive characterization of the~restricted isometry constants (Lemma~\mathbf ref{RIPequiv}) analogous to the~real and complex case. Consequently, one can carefully follow steps of the~classical Cand\'es' proof~\cite{cRIP} with caution to the~non-commutativity of quaternion multiplication. It is known that e.g. real Gaussian and Bernoulli random matrices, also partial Discrete Fourier Transform matrices satisfy the~RIP (with overwhelming probability)~\cite{introCS}, however, until recently there were no known examples of quaternion matrices satisfying this condition. It has been believed that quaternion Gaussian random matrices satisfy RIP and, therefore, they have been widely used in numerical experiments \cite{barthelemy2015,hawes2014,l1minqs} but there was a~lack of theoretical justification of this conviction. In the~subsequent article~\cite{bbRIP} we prove that this hypothesis is true, i.e. quaternion Gaussian matrices satisfy the~RIP, and we provide estimates on matrix sizes that guarantee the~RIP with overwhelming probability. 
This result, together with the~main results of this article (Theorem~\mathbf ref{l1minthm}, Corollary~\mathbf ref{l1mincor}), constitute the~theoretical foundation of the~classical compressed sensing methods in the~quaternion algebra. The~article is organized as follows. First, we recall basic notation and facts concerning the~quaternion algebra with particular emphasis put on the~properties of Hermitian form and Hermitian matrices. The~third section is devoted to the~RIP and characterization of the~restricted isometry constants in terms of Hermitian matrix norm. The~fourth and fifth sections contain proofs of the~main results of the~article. In the~sixth section we present results of numerical experiments illustrating our results -- we may see, in particular, that the~rate of perfect reconstructions in the~quaternion case is higher than in the~real case experiment with the~same parameters. We conclude with a~short r\'esum\'e of the~obtained results and our further research perspectives. \section{Algebra of quaternions} Denote by~$\mathbb H$ the~algebra of quaternions $$ q=a+b\mathbf i+c\mathbf j+d\mathbf k, \quad \textrm{where}\quad a,b,c,d\mathbf in\mathbb R, $$ endowed with the~standard norm $$ |q|=\sqrt{q\conj{q}}=\sqrt{a^2+b^2+c^2+d^2}, $$ where $\conj{q}=a-b\mathbf i-c\mathbf j-d\mathbf k$ is the~conjugate of~$q$. Recall that multiplication is associative but in general not commutative in the~quaternion algebra and is defined by the~following rules $$ \mathbf i^2=\mathbf j^2=\mathbf k^2=\mathbf i\mathbf j\mathbf k=-1 $$ and $$ \mathbf i\mathbf j=-\mathbf j\mathbf i=\mathbf k, \quad \mathbf j\mathbf k=-\mathbf k\mathbf j=\mathbf i, \quad \mathbf k\mathbf i=-\mathbf i\mathbf k=\mathbf j. $$ Multiplication is distributive with respect to addition and has a~neutral element $1\mathbf in\mathbb H$, hence $\mathbb H$ forms a~ring, which is usually called a~noncommutative field. We also have the~property that $$ \conj{q\cdot w}=\conj{w}\cdot\conj{q} \quad\textrm{for any}\quad q,w\mathbf in\mathbb H. $$ In what follows we will interpret signals as vectors with quaternion coordinates, i.e. elements of $\mathbb H^n$. Algebraically $\mathbb H^n$ is a~module over the~ring $\mathbb H$, usually called the~quaternion vector space. We will also consider matrices with quaternion entries with usual multiplication rules. For any matrix $\mathbf \Phi\mathbf in\mathbb H^{m\times n}$ with quaternion entries by $\mathbbm herm{\mathbf \Phi}$ we denote the~adjoint matrix, i.e. $\mathbbm herm{\mathbf \Phi}=\conj{\mathbf \Phi}^T$, where $T$ is the~transpose. The~same notation applies to quaternion vectors $\mathbbm x\mathbf in\mathbb H^n$ which can be interpreted as one-column matrices $\mathbbm x\mathbf in\mathbb H^{n\times1}$. Obviously $\mathbbm herm{\left(\mathbbm herm{\mathbf \Phi}\mathbf right)}=\mathbf \Phi$. A~matrix $\mathbf \Phi\mathbf in\mathbb H^{m\times n}$ defines a~$\mathbb H$-linear transformation $\mathbf \Phi:\mathbb H^n\to\mathbb H^m$ (in terms of the~right quaternion vector space, i.e. considering the~right scalar multiplication) which acts by the~standard matrix-vector multiplication: $$ \mathbf \Phi(\mathbbm x+\mathbbm y)=\mathbf \Phi\mathbbm x+\mathbf \Phi\mathbbm y \quad \textrm{and} \quad \mathbf \Phi(\mathbbm x q)=(\mathbf \Phi\mathbbm x)q \quad \textrm{for any}\quad \mathbbm x,\mathbbm y\mathbf in\mathbb H^n,\; q\mathbf in\mathbb H. 
$$ We also have that $$ \mathbbm herm{(\mathbf \Phi q)}=\conj{q}\mathbbm herm{\mathbf \Phi}, \quad \mathbbm herm{(q\mathbf \Phi)}=\mathbbm herm{\mathbf \Phi}\conj{q}, \quad \mathbbm herm{(\mathbf \Phi\mathbbm x)}=\mathbbm herm{\mathbbm x}\mathbbm herm{\mathbf \Phi}, \quad \mathbbm herm{(\mathbf \Phi\mathbf{\Psi})}=\mathbbm herm{\mathbf{\Psi}}\mathbbm herm{\mathbf \Phi} $$ for all $\mathbf \Phi\mathbf in\mathbb H^{m\times n}$, $q\mathbf in\mathbb H$, $\mathbbm x\mathbf in\mathbb H^n$, $\mathbf{\Psi}\mathbf in\mathbb H^{n\times p}$. For any $n\mathbf in\mathbb N$ we introduce the~following Hermitian form $\mathbf ip{\cdot,\cdot}\colon\mathbb H^n\times\mathbb H^n\to\mathbb H$ with quaternion values: $$ \mathbf ip{\mathbbm x,\mathbbm y}=\mathbbm herm{\mathbbm y}\mathbbm x=\sum_{i=1}^{n}\conj{y_i}x_i, \quad\textrm{where}\quad \mathbbm x=(x_1,\ldots,x_n)^T,\; \mathbbm y=(y_1,\ldots,y_n)^T\mathbf in\mathbb H^n $$ and $T$ is the~transpose. Denote also $$ \norm{\mathbbm x}_2=\sqrt{\mathbf ip{\mathbbm x,\mathbbm x}}=\sqrt{\sum_{i=1}^{n}|x_i|^2}, \quad \textrm{for any }\mathbbm x=(x_1,\ldots,x_n)^T\mathbf in\mathbb H^n. $$ It is straightforward calculation to verify that $\mathbf ip{\cdot,\cdot}$ satisfies the~following properties of an~inner product for all $\mathbbm x,\mathbbm y,\mathbbm z\mathbf in\mathbb H^n$ and $q\mathbf in\mathbb H$. \begin{itemize} \mathbf item $\displaystyle \conj{\mathbf ip{\mathbbm x,\mathbbm y}}=\mathbf ip{\mathbbm y,\mathbbm x}$. \mathbf item $\displaystyle \mathbf ip{\mathbbm x q,\mathbbm y}=\mathbf ip{\mathbbm x,\mathbbm y}q$. \mathbf item $\displaystyle \mathbf ip{\mathbbm x+\mathbbm y,\mathbbm z}=\mathbf ip{\mathbbm x,\mathbbm z}+\mathbf ip{\mathbbm y,\mathbbm z}$. \mathbf item $\displaystyle \mathbf ip{\mathbbm x,\mathbbm x}=\norm{\mathbbm x}_2^2\geq0$ and $\displaystyle \norm{\mathbbm x}_2^2=0 \quad \mathbf iff \quad \mathbbm x=\mathbf 0$. \mathbbm end{itemize} Hence $\norm{\cdot}_2$ satisfies the axioms of a~norm in~$\mathbb H^n$. By carefully following the~classical steps of the~proof we also get the~Cauchy-Schwarz inequality (cf.\cite[Lemma 2.2]{bb}). $$ |\mathbf ip{\mathbbm x,\mathbbm y}|\leq\norm{\mathbbm x}_2\cdot\norm{\mathbbm y}_2 $$ for any $\mathbbm x,\mathbbm y\mathbf in\mathbb H^n$. Notice that for $\mathbf \Phi\mathbf in\mathbb H^{m\times n}$ the~matrix $\mathbbm herm{\mathbf \Phi}$ defines the~adjoint $\mathbb H$-linear transformation since $$ \mathbf ip{\mathbbm x,\mathbbm herm{\mathbf \Phi}\mathbbm y}=\mathbbm herm{\left(\mathbbm herm{\mathbf \Phi}\mathbbm y\mathbf right)}\mathbbm x=\mathbbm herm{\mathbbm y}\mathbf \Phi\mathbbm x=\mathbf ip{\mathbf \Phi\mathbbm x,\mathbbm y} \quad \textrm{for} \quad \mathbbm x\mathbf in\mathbb H^n,\mathbbm y\mathbf in\mathbb H^m. $$ Recall also that a~linear transformation (matrix) $\mathbf{\Psi}\mathbf in\mathbb H^{n\times n}$ is called \textit{Hermitian} if $\mathbbm herm{\mathbf{\Psi}}=\mathbf{\Psi}$. Obviously $\mathbbm herm{\mathbf \Phi}\mathbf \Phi$ is Hermitian for any $\mathbf \Phi\mathbf in\mathbb H^{m\times n}$. In the~next section we will use the~following property of Hermitian matrices. \begin{lemma}\label{herm-norm} Suppose $\mathbf{\Psi}\mathbf in\mathbb H^{n\times n}$ is Hermitian. 
Then $$ \norm{\mathbf{\Psi}}_{2\to2}=\max_{\mathbbm x\mathbf in\mathbb H^n,\norm{\mathbbm x}_2=1}\left|\mathbf ip{\mathbf{\Psi}\mathbbm x,\mathbbm x}\mathbf right|=\max_{\mathbbm x\mathbf in\mathbb H^n\setminus\{\mathbf 0\}}\frac{\left|\mathbf ip{\mathbf{\Psi}\mathbbm x,\mathbbm x}\mathbf right|}{\norm{\mathbbm x}_2^2}, $$ where $\norm{\cdot}_{2\to2}$ is the~standard operator norm in the~right quaternion vector space $\mathbb H^n$ endowed with the~norm $\norm{\cdot}_2$, i.e. $$ \norm{\mathbf{\Psi}}_{2\to2}=\max_{\mathbbm x\mathbf in\mathbb H^n\setminus\{\mathbf 0\}}\frac{\norm{\mathbf{\Psi}\mathbbm x}_2}{\norm{\mathbbm x}_2}= \max_{\mathbbm x\mathbf in\mathbb H^n,\norm{\mathbbm x}_2=1}\norm{\mathbf{\Psi}\mathbbm x}_2. $$ \mathbbm end{lemma} \begin{proof} Recall that a~Hermitian matrix has real (right) eigenvalues~\cite{qalgebra}. Moreover, there exists an~orthonormal (in terms of the~$\mathbb H$-linear form $\mathbf ip{\cdot,\cdot}$) base of $\mathbb H^n$ consisting of eigenvectors $\mathbbm x_i$ corresponding to eigenvalues $\lambda_i\mathbf in\mathbb R$, $i=1,\ldots,n$ (cf.{\cite[Theorem 5.3.6. (c)]{qalgebra}}), i.e. $$ \mathbf{\Psi}\mathbbm x_i=\mathbbm x_i\lambda_i\quad \textrm{and}\quad \mathbf ip{\mathbbm x_i,\mathbbm x_j}=\mathbbm herm{\mathbbm x_j}\mathbbm x_i=\delta_{i,j}\quad \textrm{for}\quad i,j=1,\ldots,n. $$ Denote $\lambda_{\max}=\max_i|\lambda_i|$. Then $\norm{\mathbf{\Psi}}_{2\to2}=\lambda_{\max}$. Indeed, for any $\mathbbm x=\sum\limits_{i=1}^n\mathbbm x_i\alpha_i\mathbf in\mathbb H^n$ with $\norm{\mathbbm x}_2^2=\sum\limits_{i=1}^n|\alpha_i|^2=1$, since $\mathbbm x_i$ are orthonormal, we have $$ \norm{\mathbf{\Psi}\mathbbm x}_2^2=\norm{\sum_{i=1}^n\mathbf{\Psi}\mathbbm x_i\alpha_i}_2^2=\norm{\sum_{i=1}^n\mathbbm x_i\lambda_i\alpha_i}_2^2=\sum_{i=1}^n\left|\lambda_i\alpha_i\mathbf right|^2 \leq\lambda_{\max}\underbrace{\sum_{i=1}^n\left|\alpha_i\mathbf right|^2}_{=1} $$ and for the~appropriate eigenvector $\mathbbm x_i$ for which $|\lambda_i|=\lambda_{\max}$, $$ \norm{\mathbf{\Psi}\mathbbm x_i}_2=\norm{\mathbbm x_i\lambda_i}_2=\lambda_{\max}\norm{\mathbbm x_i}_2=\lambda_{\max}. $$ On the~other hand, since $\lambda_i$ are real, $$\aligned \mathbf ip{\mathbf{\Psi}\mathbbm x,\mathbbm x} &= \mathbbm herm{\mathbbm x}\mathbf{\Psi}\mathbbm x=\mathbbm herm{\left(\sum_{i=1}^n\mathbbm x_i\alpha_i\mathbf right)}\left(\sum_{j=1}^n\mathbf{\Psi}\mathbbm x_j\alpha_j\mathbf right)= \left(\sum_{i=1}^n\conj{\alpha_i}\mathbbm herm{\mathbbm x_i}\mathbf right)\left(\sum_{j=1}^n\mathbbm x_j\lambda_j\alpha_j\mathbf right)\\ & = \sum_{i=1}^n\conj{\alpha_i}\lambda_i\alpha_i \stackrel{\lambda_i\mathbf in\mathbb R}{=} \sum_{i=1}^n\lambda_i|\alpha_i|^2. \mathbbm endaligned $$ Hence $$ \left|\mathbf ip{\mathbf{\Psi}\mathbbm x,\mathbbm x}\mathbf right|\leq \lambda_{\max}\sum\limits_{i=1}^n|\alpha_i|^2=\lambda_{\max} $$ and -- again -- for the~appropriate eigenvector the~last two quantities are equal. The~result follows. \mathbbm end{proof} In what follows we will consider $\norm{\cdot}_p$ norms for quaternion vectors $\mathbbm x\mathbf in\mathbb H^n$ defined in the~standard way: $$ \norm{\mathbbm x}_{p}=\left(\sum_{i=1}^{n}|x_i|^p\mathbf right)^{1/p}, \quad \textrm{for}\quad p\mathbf in[1,\mathbf infty) $$ and $$ \norm{\mathbbm x}_{\mathbf infty}=\max_{1\leq i\leq n}|x_i|, $$ where $\mathbbm x=(x_1,\ldots,x_n)^T$. We will also apply the~usual notation for the~cardinality of the~support of~$\mathbbm x$, i.e. 
$$ \norm{\mathbbm x}_0=\#\mathrm {supp}(\mathbbm x), \quad\textrm{where}\quad \mathrm {supp}(\mathbbm x)=\{i\mathbf in\{1,\ldots,n\}\colon x_i\neq0\}. $$ \section{Restricted Isometry Property} Recall that we call a~vector (signal) $\mathbbm x\mathbf in\mathbb H^n$ $s$-sparse if it has at most $s$ nonzero coordinates, i.e. $$ \norm{\mathbbm x}_0\leq s. $$ As it was mentioned in the~introduction, one of the~conditions which guarantees exact reconstruction of a~sparse real signal from a~few number of its linear measurements is that the~measurement matrix satisfies so-called restricted isometry property (RIP) with a~sufficiently small constant. The~notion of restricted isometry constants was introduced by Cand\`es and Tao in~\cite{ct}. Here we generalize it to quaternion signals. \begin{definition}\label{RIP} Let $s\mathbf in\mathbb N$ and $\mathbf \Phi\mathbf in\mathbb H^{m\times n}$. We say that $\mathbf \Phi$ satisfies the $s$-restricted isometry property (for quaternion vectors) with a~constant $\delta_s\geq0$ if \begin{equation}\label{qRIP} \left(1-\delta_s\mathbf right)\norm{\mathbbm x}_2^2\leq\norm{\mathbf \Phi\mathbbm x}_2^2\leq\left(1+\delta_s\mathbf right)\norm{\mathbbm x}_2^2 \mathbbm end{equation} for all $s$-sparse quaternion vectors $\mathbbm x\mathbf in\mathbb H^n$. The~smallest number $\delta_s\geq0$ with this property is called the $s$-restricted isometry constant. \mathbbm end{definition} Note that we can define $s$-restricted isometry constants for any matrix $\mathbf \Phi\mathbf in\mathbb H^{m\times n}$ and any number $s\mathbf in\{1,\ldots,n\}$. It has been proved that if a~real matrix $\mathbf \Phi\mathbf in\mathbb R^{m\times n}$ satisfies the~inequality~\mathbbm eqref{qRIP} for real $s$-sparse vectors~$\mathbbm x\mathbf in\mathbb R^n$, then it also satisfies it -- with the~same constant~$\delta_s$ -- for $s$-sparse quaternion vectors~$\mathbbm x\mathbf in\mathbb H^n$ \cite[Lemma~3.2]{bb}. The~following lemma extends an~analogous result, known for real and complex matrices~\cite{introCS}, to the~quaternion case. As is it accustomed, for a~matrix $\mathbf \Phi\mathbf in\mathbb H^{m\times n}$ and a~set of indices $S\subset\{1,\ldots,n\}$ with $\# S=s$ by $\mathbf \Phi_S\mathbf in\mathbb H^{m\times s}$ we denote the~matrix consisting of columns of~$\mathbf \Phi$ with indices in the~set~$S$. \begin{lemma}\label{RIPequiv} The $s$-restricted isometry constant $\delta_s$ of a~matrix $\mathbf \Phi\mathbf in\mathbb H^{m\times n}$ equivalently equals $$ \delta_s=\max_{S\subset\{1,\ldots,n\},\#S\leq s}\norm{\mathbbm herm{\mathbf \Phi_S}\mathbf \Phi_S-\mathbf{Id}}_{2\to2}. $$ \mathbbm end{lemma} \begin{proof} We proceed as in~{\cite[Chapter 6]{introCS}}. Fix any $s\mathbf in\{1,\ldots,n\}$ and $S\subset\{1,\ldots,n\}$ with $\#S\leq s$. Notice that the~condition \mathbbm eqref{qRIP} can be equivalently rewritten as $$ \left|\norm{\mathbf \Phi_S\mathbbm x}_2^2-\norm{\mathbbm x}_2^2\mathbf right|\leq\delta_s\norm{\mathbbm x}_2^2 \quad \textrm{for all} \quad \mathbbm x\mathbf in\mathbb H^s, $$ where $\delta_s$ is the $s$-restricted isometry constant of~$\mathbf \Phi$. 
The left-hand side equals $$ \left|\norm{\mathbf\Phi_S\mathbbm x}_2^2-\norm{\mathbbm x}_2^2\right| = \left|\langle\mathbf\Phi_S\mathbbm x,\mathbf\Phi_S\mathbbm x\rangle-\langle\mathbbm x,\mathbbm x\rangle\right|=\left|\langle\left(\mathbf\Phi_S^*\mathbf\Phi_S-\mathbf{Id}\right)\mathbbm x,\mathbbm x\rangle\right|$$ and by Lemma~\ref{herm-norm}, since the~matrix $\mathbf\Phi_S^*\mathbf\Phi_S-\mathbf{Id}$ is Hermitian, we get that $$ \max_{\mathbbm x\in\mathbb H^s\setminus\{\mathbf 0\}}\frac{\left|\langle\left(\mathbf\Phi_S^*\mathbf\Phi_S-\mathbf{Id}\right)\mathbbm x,\mathbbm x\rangle\right|}{\norm{\mathbbm x}_2^2} = \norm{\mathbf\Phi_S^*\mathbf\Phi_S-\mathbf{Id}}_{2\to2}. $$ The arbitrary choice of $s$ and $S$ finishes the proof. \end{proof} The~next result is an~important tool in the~proof of Theorem~\ref{l1minthm}. Having the above equivalence and the~Cauchy-Schwarz inequality, we are able to obtain for quaternion vectors the~same estimate as in the~real and complex case (cf. {\cite[Lemma~2.1]{cRIP}} and {\cite[Proposition~6.3]{introCS}}). \begin{lemma}\label{RIPip} Let $\delta_s$ be the~$s$-restricted isometry constant of a~matrix $\mathbf\Phi\in\mathbb H^{m\times n}$ for $s\in\{1,\ldots,n\}$. For any pair of vectors $\mathbbm x,\mathbbm y\in\mathbb H^n$ with disjoint supports and such that $\norm{\mathbbm x}_0\leq s_1$ and $\norm{\mathbbm y}_0\leq s_2$, where $s_1+s_2\leq n$, we have that $$ \left|\langle\mathbf\Phi\mathbbm x,\mathbf\Phi\mathbbm y\rangle\right|\leq \delta_{s_1+s_2}\norm{\mathbbm x}_2\norm{\mathbbm y}_2. $$ \end{lemma} \begin{proof} In this proof we will use the~following notation: for any~vector $\mathbbm x\in\mathbb H^n$ and a~set of indices $S\subset\{1,\ldots,n\}$ with $\# S=s$, by $\mathbbm x_{|S}\in\mathbb H^s$ we denote the~vector of $\mathbbm x$-coordinates with indices in~$S$. Take any vectors $\mathbbm x,\mathbbm y\in\mathbb H^n$ satisfying the~assumptions of the~lemma and denote $S=\mathrm{supp}(\mathbbm x)\cup\mathrm{supp}(\mathbbm y)$. Obviously $\#S\leq s_1+s_2$. Since $\mathbbm x$ and $\mathbbm y$ have disjoint supports, they are orthogonal, i.e. $\langle\mathbbm x,\mathbbm y\rangle=\langle\mathbbm x_{|S},\mathbbm y_{|S}\rangle=0$. Using the~Cauchy-Schwarz inequality and Lemma~\ref{RIPequiv} we get that $$\aligned \left|\langle\mathbf\Phi\mathbbm x,\mathbf\Phi\mathbbm y\rangle\right| &= \left|\langle\mathbf\Phi_S\mathbbm x_{|S},\mathbf\Phi_S\mathbbm y_{|S}\rangle-\langle\mathbbm x_{|S},\mathbbm y_{|S}\rangle\right| = \left|\langle\left(\mathbf\Phi_S^*\mathbf\Phi_S-\mathbf{Id}\right)\mathbbm x_{|S},\mathbbm y_{|S}\rangle\right| \\ &\leq \norm{\mathbf\Phi_S^*\mathbf\Phi_S-\mathbf{Id}}_{2\to2}\norm{\mathbbm x_{|S}}_2\norm{\mathbbm y_{|S}}_2\leq \delta_{s_1+s_2}\norm{\mathbbm x_{|S}}_2\norm{\mathbbm y_{|S}}_2, \endaligned $$ which finishes the~proof since $\norm{\mathbbm x_{|S}}_2=\norm{\mathbbm x}_2$ and $\norm{\mathbbm y_{|S}}_2=\norm{\mathbbm y}_2$. \end{proof} \section{Stable reconstruction from noisy data} As we mentioned in the~introduction, our aim is to reconstruct a~quaternion signal from a~limited number of its linear measurements with quaternion coefficients. We will also assume the~presence of white noise with bounded $\ell_2$ quaternion norm. 
The~observations are, therefore, given by $$ \mathbbm y=\mathbf\Phi\mathbbm x+\mathbbm e, \quad\textrm{where}\quad \mathbbm x\in\mathbb H^n, \; \mathbf\Phi\in\mathbb H^{m\times{n}}, \; \mathbbm y\in\mathbb H^m \;\textrm{and}\; \mathbbm e\in\mathbb H^m \textrm{ with } \norm{\mathbbm e}_2\leq\eta $$ for some $m\leq n$ and $\eta\geq0$. We will use the~following notation: for any $\mathbbm h\in\mathbb H^n$ and a~set of indices $T\subset\{1,\ldots,n\}$, the~vector $\mathbbm h_T\in\mathbb H^n$ is supported on~$T$ with entries $$ \left(\mathbbm h_T\right)_i=\left\{\begin{array}{cl} h_i & \textrm{if } i\in T, \\ 0 & \textrm{otherwise}, \end{array}\right. \quad \textrm{where} \quad \mathbbm h=(h_1,\ldots,h_n)^T. $$ The~complement of $T\subset\{1,\ldots,n\}$ will be denoted by $T^c=\{1,\ldots,n\}\setminus{T}$ and the~symbol~$\mathbbm x_s$ will be used for the~best $s$-sparse approximation of the~vector~$\mathbbm x$. The~following result is a~generalization of {\cite[Theorem~1.3]{cRIP}} and \cite[Theorem 4.1]{bb} to the~full quaternion case. It also improves the~constants in the~error estimate of~\cite[Theorem~4.1]{bb}. \begin{theorem}\label{l1minthm} Suppose that $\mathbf\Phi\in\mathbb H^{m\times n}$ satisfies the~$2s$-restricted isometry property with a~constant $\delta_{2s}<\sqrt{2}-1$ and let $\eta\geq0$. Then, for any $\mathbbm x\in\mathbb H^n$ and $\mathbbm y=\mathbf\Phi\mathbbm x+\mathbbm e$ with $\norm{\mathbbm e}_2\leq\eta$, the~solution $\mathbbm x^{\#}$ of the~problem \begin{equation}\label{l1minproblem} \argmin\limits_{\mathbbm z\in\mathbb H^n} \norm{\mathbbm z}_1 \quad\text{subject to}\quad\norm{\mathbf\Phi\mathbbm z-\mathbbm y}_2\leq\eta \end{equation} satisfies \begin{equation} \label{main-ineq} \norm{\mathbbm x^{\#}-\mathbbm x}_2 \leq \frac{C_0}{\sqrt{s}}\norm{\mathbbm x-\mathbbm x_s}_1 + C_1\eta \end{equation} with constants $$ C_0 = 2\cdot \frac{1+\left(\sqrt{2}-1\right)\delta_{2s}}{1-\left(\sqrt{2}+1\right)\delta_{2s}} ,\quad C_1 = \frac{4\sqrt{1+\delta_{2s}}}{1-\left(\sqrt{2}+1\right)\delta_{2s}}, $$ where $\mathbbm x_s$ denotes the~best $s$-sparse approximation of $\mathbbm x$. \end{theorem} \begin{proof} Denote $$ \mathbbm h=\mathbbm x^{\#}-\mathbbm x $$ and decompose $\mathbbm h$ into a~sum of vectors $\mathbbm h_{T_0},\mathbbm h_{T_1},\mathbbm h_{T_2},\ldots$ in the~following way: let $T_0$ be the~set of indices of the $s$ coordinates of $\mathbbm x$ with the~biggest quaternion norms (hence $\mathbbm x_s=\mathbbm x_{T_0}$); $T_1$ is the~set of indices of the $s$ coordinates of $\mathbbm h_{T_0^c}$ with the~biggest norms, $T_2$ is the~set of indices of the $s$ coordinates of $\mathbbm h_{\left(T_0\cup T_1\right)^c}$ with the~biggest norms, etc. Then obviously all the $\mathbbm h_{T_j}$ are $s$-sparse and have disjoint supports. In what follows we will separately estimate the norms $\norm{\mathbbm h_{T_0\cup T_1}}_2$ and~$\norm{\mathbbm h_{\left(T_0\cup T_1\right)^c}}_2$. Notice that for $j\geq2$ we have that $$ \norm{\mathbbm h_{T_j}}_2^2=\sum_{i\in T_j}|h_i|^2\leq \sum_{i\in T_j}\norm{\mathbbm h_{T_j}}_{\infty}^2\leq s\norm{\mathbbm h_{T_j}}_{\infty}^2, $$ where the $h_i$ are the~coordinates of~$\mathbbm h$ and $\norm{\mathbbm h_{T_j}}_{\infty}=\max\limits_{i\in{T_j}}|h_i|$ (the~last $T_j$ may have fewer than~$s$ nonzero coordinates). 
Moreover, since all the non-zero coordinates of~$\mathbbm h_{T_{j-1}}$ have norms not smaller than the non-zero coordinates of $\mathbbm h_{T_j}$, $$ \norm{\mathbbm h_{T_j}}_{\infty}\leq\frac{1}{s}\sum_{i\in{T_{j-1}}}|h_i|=\frac{1}{s}\norm{\mathbbm h_{T_{j-1}}}_1. $$ Hence, for $j\geq2$ we get that $$ \norm{\mathbbm h_{T_j}}_2\leq \sqrt{s}\norm{\mathbbm h_{T_j}}_{\infty}\leq \frac{1}{\sqrt{s}}\norm{\mathbbm h_{T_{j-1}}}_1, $$ which implies \begin{equation}\label{sumnormhTj} \sum_{j\geq2}\norm{\mathbbm h_{T_j}}_2\leq \frac{1}{\sqrt{s}}\sum_{j\geq1}\norm{\mathbbm h_{T_j}}_1\leq \frac{1}{\sqrt{s}}\norm{\mathbbm h_{T_0^c}}_1. \end{equation} Finally, \begin{equation}\label{normhT01c} \norm{\mathbbm h_{\left(T_0\cup T_1\right)^c}}_2=\norm{\sum_{j\geq2}\mathbbm h_{T_j}}_2\leq \sum_{j\geq2}\norm{\mathbbm h_{T_j}}_2\leq \frac{1}{\sqrt{s}}\norm{\mathbbm h_{T_0^c}}_1. \end{equation} Observe that $\norm{\mathbbm h_{T_0^c}}_1$ cannot be too large. Indeed, since $\norm{\mathbbm x^{\#}}_1=\norm{\mathbbm x+\mathbbm h}_1$ is minimal, $$ \norm{\mathbbm x}_1\geq\norm{\mathbbm x+\mathbbm h}_1= \norm{\mathbbm x_{T_0}+\mathbbm h_{T_0}}_1+ \norm{\mathbbm x_{T_0^c}+\mathbbm h_{T_0^c}}_1 \geq \norm{\mathbbm x_{T_0}}_1-\norm{\mathbbm h_{T_0}}_1-\norm{\mathbbm x_{T_0^c}}_1+\norm{\mathbbm h_{T_0^c}}_1, $$ hence $$ \norm{\mathbbm x_{T_0^c}}_1=\norm{\mathbbm x}_1-\norm{\mathbbm x_{T_0}}_1\geq -\norm{\mathbbm h_{T_0}}_1-\norm{\mathbbm x_{T_0^c}}_1+\norm{\mathbbm h_{T_0^c}}_1 $$ and therefore \begin{equation}\label{normhT0c} \norm{\mathbbm h_{T_0^c}}_1\leq \norm{\mathbbm h_{T_0}}_1+2\norm{\mathbbm x_{T_0^c}}_1. \end{equation} Now, the Cauchy-Schwarz inequality immediately implies that $\norm{\mathbbm h_{T_0}}_1\leq\sqrt{s}\norm{\mathbbm h_{T_0}}_2$. From this, (\ref{normhT01c}) and (\ref{normhT0c}) we conclude that \begin{equation}\label{ineq1} \norm{\mathbbm h_{\left(T_0\cup T_1\right)^c}}_2\leq \frac{1}{\sqrt{s}}\norm{\mathbbm h_{T_0^c}}_1 \leq \frac{1}{\sqrt{s}}\norm{\mathbbm h_{T_0}}_1+\frac{2}{\sqrt{s}}\norm{\mathbbm x_{T_0^c}}_1\leq \norm{\mathbbm h_{T_0}}_2+2\epsilon, \end{equation} where $\epsilon=\frac{1}{\sqrt{s}}\norm{\mathbbm x_{T_0^c}}_1=\frac{1}{\sqrt{s}}\norm{\mathbbm x-\mathbbm x_s}_1$. This is the~first ingredient of the~final estimate. Now we are going to estimate the~remaining component, i.e. $\norm{\mathbbm h_{T_0\cup T_1}}_2$. Using the $\mathbb H$-linearity of $\langle\cdot,\cdot\rangle$, we get that $$\aligned \norm{\mathbf\Phi\mathbbm h_{T_0\cup T_1}}_2^2&=\langle\mathbf\Phi\mathbbm h_{T_0\cup T_1},\mathbf\Phi\mathbbm h_{T_0\cup T_1}\rangle=\langle\mathbf\Phi\mathbbm h_{T_0\cup T_1},\mathbf\Phi\mathbbm h\rangle-\sum_{j\geq2}\langle\mathbf\Phi\mathbbm h_{T_0\cup T_1},\mathbf\Phi\mathbbm h_{T_j}\rangle\\ &=\langle\mathbf\Phi\mathbbm h_{T_0\cup T_1},\mathbf\Phi\mathbbm h\rangle-\sum_{j\geq2}\langle\mathbf\Phi\mathbbm h_{T_0},\mathbf\Phi\mathbbm h_{T_j}\rangle-\sum_{j\geq2}\langle\mathbf\Phi\mathbbm h_{T_1},\mathbf\Phi\mathbbm h_{T_j}\rangle. 
\endaligned$$ The estimate of the~norm of the~first term follows from the~Cauchy-Schwarz inequality in the~quaternion case, the RIP, and the~following simple observation: $$ \norm{\mathbf\Phi\left(\mathbbm x^{\#}-\mathbbm x\right)}_2\leq \norm{\mathbf\Phi\mathbbm x^{\#}-\mathbbm y}_2+\norm{\mathbf\Phi\mathbbm x-\mathbbm y}_2\leq 2\eta, $$ which follows from the~fact that $\mathbbm x^{\#}$ is the~minimizer of~(\ref{l1minproblem}) and $\mathbbm x$ is feasible. We therefore get that \begin{equation}\label{FihT01Fih} \left|\langle\mathbf\Phi\mathbbm h_{T_0\cup T_1},\mathbf\Phi\mathbbm h\rangle\right|\leq\norm{\mathbf\Phi\mathbbm h_{T_0\cup T_1}}_2\cdot\norm{\mathbf\Phi\mathbbm h}_2 \leq \sqrt{1+\delta_{2s}}\norm{\mathbbm h_{T_0\cup T_1}}_2\cdot2\eta. \end{equation} For the~remaining terms recall that the $\mathbbm h_{T_j}$, $j\geq0$, are $s$-sparse with pairwise disjoint supports, and apply Lemma~\ref{RIPip}: $$ \left|\langle\mathbf\Phi\mathbbm h_{T_i},\mathbf\Phi\mathbbm h_{T_j}\rangle\right|\leq\delta_{2s}\cdot\norm{\mathbbm h_{T_i}}_2\cdot\norm{\mathbbm h_{T_j}}_2 \quad\text{for}\quad i=0,1 \quad\text{and}\quad j\geq2. $$ Since $T_0$ and $T_1$ are disjoint, $\norm{\mathbbm h_{T_0\cup T_1}}_2^2=\norm{\mathbbm h_{T_0}}_2^2+\norm{\mathbbm h_{T_1}}_2^2$ and therefore $\norm{\mathbbm h_{T_0}}_2+\norm{\mathbbm h_{T_1}}_2\leq \sqrt2\norm{\mathbbm h_{T_0\cup T_1}}_2$. Hence, using the~RIP, (\ref{FihT01Fih}) and (\ref{sumnormhTj}), $$\aligned \left(1-\delta_{2s}\right)\norm{\mathbbm h_{T_0\cup T_1}}_2^2 &\leq \norm{\mathbf\Phi\mathbbm h_{T_0\cup T_1}}_2^2 \\ &\leq \sqrt{1+\delta_{2s}}\norm{\mathbbm h_{T_0\cup T_1}}_2\cdot2\eta+ \delta_{2s}\cdot\left(\norm{\mathbbm h_{T_0}}_2+\norm{\mathbbm h_{T_1}}_2\right)\sum_{j\geq2}\norm{\mathbbm h_{T_j}}_2\\ &\leq \left(2\sqrt{1+\delta_{2s}}\cdot\eta+\frac{\sqrt{2}\,\delta_{2s}}{\sqrt{s}}\norm{\mathbbm h_{T_0^c}}_1\right)\norm{\mathbbm h_{T_0\cup T_1}}_2, \endaligned$$ which implies that \begin{equation}\label{normhT01} \norm{\mathbbm h_{T_0\cup T_1}}_2\leq \frac{2\sqrt{1+\delta_{2s}}}{1-\delta_{2s}}\cdot\eta+\frac{\sqrt{2}\,\delta_{2s}}{1-\delta_{2s}}\cdot\frac{\norm{\mathbbm h_{T_0^c}}_1}{\sqrt{s}}. \end{equation} This, together with (\ref{normhT0c}), gives the~following estimate $$ \norm{\mathbbm h_{T_0\cup T_1}}_2\leq \alpha\cdot\eta+\beta\cdot\frac{\norm{\mathbbm h_{T_0}}_1}{\sqrt{s}}+2\beta\cdot\frac{\norm{\mathbbm x_{T_0^c}}_1}{\sqrt{s}}, $$ where $$ \alpha=\frac{2\sqrt{1+\delta_{2s}}}{1-\delta_{2s}}\qquad \textrm{and}\qquad \beta=\frac{\sqrt{2}\,\delta_{2s}}{1-\delta_{2s}}. $$ Since $\norm{\mathbbm h_{T_0}}_1\leq\sqrt{s}\norm{\mathbbm h_{T_0}}_2\leq\sqrt{s}\norm{\mathbbm h_{T_0\cup T_1}}_2$, we obtain $$ \norm{\mathbbm h_{T_0\cup T_1}}_2\leq \alpha\cdot\eta+\beta\cdot\norm{\mathbbm h_{T_0\cup T_1}}_2+2\beta\cdot\epsilon,$$ where, recall, $\epsilon=\frac{1}{\sqrt{s}}\norm{\mathbbm x_{T_0^c}}_1$. Hence \begin{equation}\label{ineq2} \norm{\mathbbm h_{T_0\cup T_1}}_2\leq\frac{1}{1-\beta}\left(\alpha\cdot\eta+2\beta\cdot\epsilon\right), \end{equation} as long as $\beta<1$, which is equivalent to $\delta_{2s}<\sqrt{2}-1$. 
Finally, (\ref{ineq1}) and (\ref{ineq2}) imply the main result: $$\aligned \norm{\mathbbm h}_2&\leq\norm{\mathbbm h_{T_0\cup T_1}}_2+\norm{\mathbbm h_{(T_0\cup T_1)^c}}_2\leq \norm{\mathbbm h_{T_0\cup T_1}}_2+\norm{\mathbbm h_{T_0}}_2+2\epsilon \\ &\leq 2\norm{\mathbbm h_{T_0\cup T_1}}_2+2\epsilon\leq \frac{2\alpha}{1-\beta}\cdot\eta+\left(\frac{4\beta}{1-\beta}+2\right)\cdot\epsilon \endaligned $$ and the constants in the~statement of the~theorem equal $$ C_0=\frac{4\beta}{1-\beta}+2=2\,\frac{1+\beta}{1-\beta}=2\,\frac{1+\left(\sqrt{2}-1\right)\delta_{2s}}{1-\left(\sqrt{2}+1\right)\delta_{2s}} \quad \textrm{and}\quad C_1=\frac{2\alpha}{1-\beta}=\frac{4\sqrt{1+\delta_{2s}}}{1-\left(\sqrt{2}+1\right)\delta_{2s}}.$$ \end{proof} \section{Stable reconstruction from exact data} In this section we will assume that our observations are exact, i.e. $$ \mathbbm y=\mathbf\Phi\mathbbm x, \quad\textrm{where}\quad \mathbbm x\in\mathbb H^n, \; \mathbf\Phi\in\mathbb H^{m\times{n}}, \; \mathbbm y\in\mathbb H^m.$$ The~following result is a~natural corollary of~Theorem~\ref{l1minthm} for $\eta=0$. \begin{corollary}\label{l1mincor} Let $\mathbf\Phi\in\mathbb H^{m\times n}$ satisfy the~$2s$-restricted isometry property with a~constant $\delta_{2s}<\sqrt{2}-1$. Then for any $\mathbbm x\in\mathbb H^n$ and $\mathbbm y=\mathbf\Phi\mathbbm x\in\mathbb H^m$, the~solution $\mathbbm x^{\#}$ of the~problem \begin{equation}\label{l1minexact} \argmin\limits_{\mathbbm z\in\mathbb H^n} \norm{\mathbbm z}_1 \quad\text{subject to}\quad \mathbf\Phi\mathbbm z=\mathbbm y \end{equation} satisfies \begin{equation} \label{exact-ineq1} \norm{\mathbbm x^{\#}-\mathbbm x}_1 \leq C_0\norm{\mathbbm x-\mathbbm x_s}_1 \end{equation} and \begin{equation} \label{exact-ineq2} \norm{\mathbbm x^{\#}-\mathbbm x}_2 \leq \frac{C_0}{\sqrt{s}}\norm{\mathbbm x-\mathbbm x_s}_1 \end{equation} with the constant $C_0$ as in Theorem~\ref{l1minthm}. In particular, if $\mathbbm x$ is $s$-sparse and there is no noise, then the~reconstruction by $\ell_1$-norm minimization is exact. \end{corollary} \begin{proof} Inequality (\ref{exact-ineq2}) follows directly from Theorem~\ref{l1minthm} for $\eta=0$. The~result for sparse signals is obvious since in this case $\mathbbm x=\mathbbm x_s$. We only need to prove~(\ref{exact-ineq1}). We will use the same notation as in the~proof of Theorem~\ref{l1minthm}. Recall that $$ \norm{\mathbbm h_{T_0}}_1\leq\sqrt{s}\norm{\mathbbm h_{T_0}}_2\leq\sqrt{s}\norm{\mathbbm h_{T_0\cup T_1}}_2, $$ which together with (\ref{normhT01}) for $\eta=0$ implies $$ \norm{\mathbbm h_{T_0}}_1\leq \frac{\sqrt{2}\,\delta_{2s}}{1-\delta_{2s}}\cdot\norm{\mathbbm h_{T_0^c}}_1. $$ Using this and (\ref{normhT0c}), denoting again $\beta=\frac{\sqrt{2}\,\delta_{2s}}{1-\delta_{2s}}$, we get that $$ \norm{\mathbbm h_{T_0^c}}_1\leq \beta\norm{\mathbbm h_{T_0^c}}_1+2\norm{\mathbbm x_{T_0^c}}_1, \qquad \textrm{hence}\qquad \norm{\mathbbm h_{T_0^c}}_1\leq \frac{2}{1-\beta}\norm{\mathbbm x_{T_0^c}}_1. 
$$ Finally, we obtain the~following estimate on the~$\ell_1$ norm of the~vector $\mathbbm h=\mathbbm x^{\#}-\mathbbm x$: $$ \norm{\mathbbm h}_1=\norm{\mathbbm h_{T_0}}_1+\norm{\mathbbm h_{T_0^c}}_1\leq (1+\beta)\norm{\mathbbm h_{T_0^c}}_1\leq \underbrace{2\,\frac{1+\beta}{1-\beta}}_{=C_0}\norm{\mathbbm x_{T_0^c}}_1, $$ which finishes the proof. \end{proof} We conjecture that the~requirement $\delta_{2s}<\sqrt{2}-1$ is not optimal -- there are known refinements of this condition for real signals (see e.g. {\cite[Chapter~6]{introCS}} for references). On the~other hand, the~authors of~\cite{cz} constructed examples of $s$-sparse real signals which cannot be uniquely reconstructed via $\ell_1$-norm minimization when $\delta_s>\frac13$. This gives an~obvious upper bound on~$\delta_{s}$ also in the~general quaternion case. \section{Numerical experiment} In~\cite{bb} we presented results of numerical experiments on the reconstruction of sparse quaternion vectors~$\mathbbm x$ (by $\ell_1$-norm minimization) from their linear measurements $\mathbbm y=\mathbf\Phi\mathbbm x$ in the~case of a real-valued measurement matrix~$\mathbf\Phi$. Those experiments were inspired by the~articles~\cite{barthelemy2015,hawes2014,l1minqs} and involved expressing the $\ell_1$ quaternion norm minimization problem in terms of second-order cone programming (SOCP). In view of the~main results of this paper (Theorem~\ref{l1minthm}, Corollary~\ref{l1mincor}), and having in mind that quaternion Gaussian random matrices satisfy (with overwhelming probability) the restricted isometry property~\cite{bbRIP}, we performed similar experiments for the~case of a~quaternion matrix~$\mathbf\Phi\in\mathbb H^{m\times n}$ -- as in~\cite{l1minqs}. By a~quaternion Gaussian random matrix we mean a~matrix $\mathbf\Phi=(\mathbf\phi_{ij})\in\mathbb H^{m\times n}$ whose entries $\mathbf\phi_{ij}$ are independent quaternion Gaussian random variables with distribution denoted by $\mathcal{N}_{\mathbb H}\left(0,\sigma^2\right)$, i.e. $$ \mathbf\phi_{ij} = \mathbf\phi_{\mathbf{r},ij} + \mathbf\phi_{\mathbf i,ij}\mathbf i + \mathbf\phi_{\mathbf j,ij}\mathbf j + \mathbf\phi_{\mathbf k,ij}\mathbf k, $$ where $$ \mathbf\phi_{e,ij}\sim\mathcal{N}\left(0,\frac{\sigma^2}{4}\right), \; e\in\{\mathbf{r},\mathbf i,\mathbf j,\mathbf k\}, \quad\text{and the}\quad \mathbf\phi_{e,ij} \text{ are independent.}$$ In particular, each $\mathbf\phi_{ij}$ has independent components, which are real Gaussian random variables. In what follows we consider only the case of noiseless measurements, i.e. we solve problem~\eqref{l1minexact}. Recall from~\cite{l1minqs} that problem~\eqref{l1minexact} is equivalent to \begin{align} \label{eq:4_4_02} \argmin\limits_{t\in\mathbb R_+} t\quad\text{subject to}\quad \mathbbm y=\mathbf\Phi\mathbbm z,\,\norm{\mathbbm z}_1\leq t. 
\end{align} We decompose the vectors $\mathbbm y\in\mathbb H^m$ and $\mathbbm z\in\mathbb H^n$ into real vectors representing their~real parts and the components of their~imaginary parts, \begin{align*} \mathbbm y = \mathbbm y_{\mathbf{r}}+\mathbbm y_{\mathbf i}\mathbf i + \mathbbm y_{\mathbf j}\mathbf j + \mathbbm y_{\mathbf k}\mathbf k,\qquad \mathbbm z = \mathbbm z_{\mathbf{r}}+\mathbbm z_{\mathbf i}\mathbf i + \mathbbm z_{\mathbf j}\mathbf j + \mathbbm z_{\mathbf k}\mathbf k, \end{align*} where $\mathbbm y_{\mathbf{r}},\mathbbm y_{\mathbf i},\mathbbm y_{\mathbf j},\mathbbm y_{\mathbf k}\in\mathbb R^m$ and $\mathbbm z_{\mathbf{r}},\mathbbm z_{\mathbf i},\mathbbm z_{\mathbf j},\mathbbm z_{\mathbf k}\in\mathbb R^n$. Denote \begin{align*} \mathbbm z_{\mathbf{r}} = (z_{\mathbf{r},1},\ldots,z_{\mathbf{r},n})^T, \,\,\, \mathbbm z_{\mathbf i} = (z_{\mathbf i,1},\ldots,z_{\mathbf i,n})^T, \,\,\, \mathbbm z_{\mathbf j} = (z_{\mathbf j,1},\ldots,z_{\mathbf j,n})^T, \,\,\, \mathbbm z_{\mathbf k} = (z_{\mathbf k,1},\ldots,z_{\mathbf k,n})^T \end{align*} and let $\mathbf\phi_k\in\mathbb H^m$, $k\in\{1,\ldots,n\}$, be the~$k$-th column of the~matrix $\mathbf\Phi$. As previously, decompose $$ \mathbf\phi_k = \mathbf\phi_{\mathbf{r},k}+\mathbf\phi_{\mathbf i,k}\mathbf i + \mathbf\phi_{\mathbf j,k}\mathbf j + \mathbf\phi_{\mathbf k,k}\mathbf k, $$ where $\mathbf\phi_{\mathbf{r},k},\mathbf\phi_{\mathbf i,k},\mathbf\phi_{\mathbf j,k},\mathbf\phi_{\mathbf k,k}\in\mathbb R^m$. Note that the~second constraint in~\eqref{eq:4_4_02} can be written in the~form $$ \norm{(z_{\mathbf{r},k},z_{\mathbf i,k},z_{\mathbf j,k},z_{\mathbf k,k})^T}_2\leq t_k\quad\text{for }k\in\{1,\ldots,n\}, $$ where the~$t_k$ are nonnegative real numbers such that $\sum\limits_{k=1}^n t_k = t$. 
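Before assembling the full real-valued program, it may help to see the quaternion bookkeeping in code. The following sketch (Python/NumPy; an illustrative convention of ours, not the paper's code) stores a quaternion array as a tuple of four real arrays $(\mathbf{r},\mathbf i,\mathbf j,\mathbf k)$, samples a quaternion Gaussian matrix $\mathcal{N}_{\mathbb H}(0,\sigma^2)$, applies it to a vector via the quaternion product, and evaluates the quaternion $\norm{\cdot}_p$ norms used above.
\begin{verbatim}
import numpy as np

def quat_gaussian(m, n, sigma2=1.0, rng=None):
    """m-by-n quaternion Gaussian matrix N_H(0, sigma2): each of the four
    real components is i.i.d. N(0, sigma2/4)."""
    rng = np.random.default_rng(rng)
    std = np.sqrt(sigma2 / 4.0)
    return tuple(rng.normal(0.0, std, size=(m, n)) for _ in range(4))

def quat_matvec(Phi, x):
    """Quaternion matrix-vector product Phi x, both given as (r, i, j, k)."""
    Pr, Pi, Pj, Pk = Phi
    xr, xi, xj, xk = x
    return (Pr @ xr - Pi @ xi - Pj @ xj - Pk @ xk,
            Pr @ xi + Pi @ xr + Pj @ xk - Pk @ xj,
            Pr @ xj - Pi @ xk + Pj @ xr + Pk @ xi,
            Pr @ xk + Pi @ xj - Pj @ xi + Pk @ xr)

def quat_abs(x):
    """Coordinate-wise quaternion modulus |x_i|."""
    return np.sqrt(sum(c**2 for c in x))

def quat_norm(x, p=2):
    """Quaternion l_p norm of a vector stored as (r, i, j, k)."""
    a = quat_abs(x)
    return np.max(a) if p == np.inf else np.sum(a**p) ** (1.0 / p)
\end{verbatim}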
Having that, we can rewrite~\eqref{eq:4_4_02} in the~real-valued setup in the~following way: \begin{align}\label{eq:4_4_03} \argmin\limits_{\tilde{\mathbbm z}\in\mathbb R^{5n}} \mathbbm c^T\tilde{\mathbbm z} \quad\text{subject to}\quad &\tilde{\mathbbm y}=\tilde{\mathbf\Phi}\tilde{\mathbbm z} \\ &\text{and}\quad \norm{(z_{\mathbf{r},k},z_{\mathbf i,k},z_{\mathbf j,k},z_{\mathbf k,k})^T}_2\leq t_k\quad\text{for }k\in\{1,\ldots,n\}, \nonumber \end{align} where \begin{align} \tilde{\mathbbm z} &= (t_1,z_{\mathbf{r},1},z_{\mathbf i,1},z_{\mathbf j,1},z_{\mathbf k,1},\ldots,t_n,z_{\mathbf{r},n},z_{\mathbf i,n},z_{\mathbf j,n},z_{\mathbf k,n})^T\in\mathbb R^{5n}, \label{eq:4_4_04} \\ \mathbbm c &= (1,0,0,0,0,\ldots,1,0,0,0,0)^T\in\mathbb R^{5n}, \label{eq:4_4_05} \\ \tilde{\mathbbm y} &= (\mathbbm y_{\mathbf{r}}^T,\mathbbm y_{\mathbf i}^T,\mathbbm y_{\mathbf j}^T,\mathbbm y_{\mathbf k}^T)^T\in\mathbb R^{4m}, \label{eq:4_4_06} \\ \tilde{\mathbf\Phi} &= \left(\begin{array}{ccccccccccc} \mathbf 0 & \mathbf\phi_{\mathbf{r},1} & -\mathbf\phi_{\mathbf i,1} & -\mathbf\phi_{\mathbf j,1} & -\mathbf\phi_{\mathbf k,1} & \ldots & \mathbf 0 & \mathbf\phi_{\mathbf{r},n} & -\mathbf\phi_{\mathbf i,n} & -\mathbf\phi_{\mathbf j,n} & -\mathbf\phi_{\mathbf k,n} \\ \mathbf 0 & \mathbf\phi_{\mathbf i,1} & \mathbf\phi_{\mathbf{r},1} & -\mathbf\phi_{\mathbf k,1} & \mathbf\phi_{\mathbf j,1} & \ldots & \mathbf 0 & \mathbf\phi_{\mathbf i,n} & \mathbf\phi_{\mathbf{r},n} & -\mathbf\phi_{\mathbf k,n} & \mathbf\phi_{\mathbf j,n} \\ \mathbf 0 & \mathbf\phi_{\mathbf j,1} & \mathbf\phi_{\mathbf k,1} & \mathbf\phi_{\mathbf{r},1} & -\mathbf\phi_{\mathbf i,1} & \ldots & \mathbf 0 & \mathbf\phi_{\mathbf j,n} & \mathbf\phi_{\mathbf k,n} & \mathbf\phi_{\mathbf{r},n} & -\mathbf\phi_{\mathbf i,n} \\ \mathbf 0 & \mathbf\phi_{\mathbf k,1} & -\mathbf\phi_{\mathbf j,1} & \mathbf\phi_{\mathbf i,1} & \mathbf\phi_{\mathbf{r},1} & \ldots & \mathbf 0 & \mathbf\phi_{\mathbf k,n} & -\mathbf\phi_{\mathbf j,n} & \mathbf\phi_{\mathbf i,n} & \mathbf\phi_{\mathbf{r},n} \end{array}\right) \label{eq:4_4_07} \end{align} and~$\tilde{\mathbf\Phi}\in\mathbb R^{4m\times 5n}$. This is a~standard form of an~SOCP, which can be solved using the~SeDuMi toolbox for MATLAB~\cite{SeDuMi}. The~solution \begin{equation} \tilde{\mathbbm x}^{\#} = \left(t_1,x_{\mathbf{r},1}^{\#},x_{\mathbf i,1}^{\#},x_{\mathbf j,1}^{\#},x_{\mathbf k,1}^{\#},\ldots,t_n,x_{\mathbf{r},n}^{\#},x_{\mathbf i,n}^{\#},x_{\mathbf j,n}^{\#},x_{\mathbf k,n}^{\#}\right)^T\in\mathbb R^{5n} \end{equation} of the problem~\eqref{eq:4_4_03} can easily be converted into \begin{equation} \label{eq:4_4_08} \mathbbm x^{\#} = \left(x_{\mathbf{r},1}^{\#}+x_{\mathbf i,1}^{\#}\mathbf i+x_{\mathbf j,1}^{\#}\mathbf j+x_{\mathbf k,1}^{\#}\mathbf k, \; \ldots, \;x_{\mathbf{r},n}^{\#}+x_{\mathbf i,n}^{\#}\mathbf i+x_{\mathbf j,n}^{\#}\mathbf j+x_{\mathbf k,n}^{\#}\mathbf k\right)^T\in\mathbb H^n, \end{equation} which is the~solution of our original problem~\eqref{l1minexact}. The~experiments were carried out in MATLAB R2016a on a~standard PC machine, with an Intel(R) Core(TM) i7-4790 CPU (3.60GHz), 16GB RAM and Microsoft Windows 10~Pro. 
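For reference, the assembly of the data in Eqs.~\eqref{eq:4_4_04}--\eqref{eq:4_4_07} is mechanical and can be written compactly as in the sketch below (Python/NumPy, using the same four-real-array convention as in the earlier sketch; this is only an illustration under our assumptions -- the actual experiments used the MATLAB/SeDuMi pipeline described above). The returned triple $(\mathbbm c,\tilde{\mathbbm y},\tilde{\mathbf\Phi})$ can then be passed to any SOCP solver together with the per-coordinate cone constraints.
\begin{verbatim}
import numpy as np

def build_socp_data(Phi, y):
    """Build (c, y_tilde, Phi_tilde) of eqs. (4_4_04)-(4_4_07) from a
    quaternion matrix Phi = (Pr, Pi, Pj, Pk), each m-by-n real, and
    measurements y = (yr, yi, yj, yk), each of length m."""
    Pr, Pi, Pj, Pk = Phi
    m, n = Pr.shape
    y_tilde = np.concatenate(y)                  # length 4m
    c = np.tile([1.0, 0.0, 0.0, 0.0, 0.0], n)    # length 5n
    Phi_tilde = np.zeros((4 * m, 5 * n))
    for k in range(n):
        # columns [t_k, z_{r,k}, z_{i,k}, z_{j,k}, z_{k,k}]; t_k column is zero
        block = np.block([
            [Pr[:, [k]], -Pi[:, [k]], -Pj[:, [k]], -Pk[:, [k]]],
            [Pi[:, [k]],  Pr[:, [k]], -Pk[:, [k]],  Pj[:, [k]]],
            [Pj[:, [k]],  Pk[:, [k]],  Pr[:, [k]], -Pi[:, [k]]],
            [Pk[:, [k]], -Pj[:, [k]],  Pi[:, [k]],  Pr[:, [k]]],
        ])                                       # shape (4m, 4)
        Phi_tilde[:, 5 * k + 1: 5 * k + 5] = block
    return c, y_tilde, Phi_tilde
\end{verbatim}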
The~algorithm consisted of the~following steps: \begin{enumerate} \item Fix the constants $n=256$ (the length of~$\mathbbm x$) and~$m$ (the number of measurements, i.e. the length of~$\mathbbm y$) and generate the~measurement matrix $\mathbf\Phi\in\mathbb H^{m\times n}$ with i.i.d. entries drawn from the quaternion normal distribution $\mathcal{N}_{\mathbb H}\left(0,\frac{1}{m}\right)$; \item Choose the~sparsity $s\leq \frac{m}{2}$ and draw the~support set $S\subseteq\{1,\ldots,n\}$ with $\# S=s$ uniformly at random. Generate a~vector $\mathbbm x\in\mathbb H^n$ with $\mathrm{supp}\,\mathbbm x=S$ whose nonzero entries are i.i.d. with the quaternion normal distribution $\mathcal{N}_{\mathbb H}(0,1)$; \item \label{alg:3} Compute $\mathbbm y=\mathbf\Phi\mathbbm x\in\mathbb H^m$; \item \label{alg:4} Construct the vectors $\tilde{\mathbbm y},\mathbbm c$ and the matrix $\tilde{\mathbf\Phi}$ as in~\eqref{eq:4_4_04}--\eqref{eq:4_4_07}; \item \label{alg:5} Call the~SeDuMi toolbox to solve the~problem~\eqref{eq:4_4_03} and obtain the~solution $\tilde{\mathbbm x}^{\#}$; \item \label{alg:6} Compute the solution $\mathbbm x^{\#}$ using~\eqref{eq:4_4_08} and the~reconstruction errors (in the~$\ell_1$- and $\ell_2$-norm sense), i.e. $\norm{\mathbbm x^{\#}-\mathbbm x}_1$ and~$\norm{\mathbbm x^{\#}-\mathbbm x}_2$. \end{enumerate} The~experiment was carried out for $m=2,\ldots,64$ and $s=1,\ldots,\frac{m}{2}$. The range of $s$ is not accidental -- it is known in general that the~minimal number~$m$ of measurements needed for the~reconstruction of an~$s$-sparse vector is~$2s$~\cite[Theorem 2.13]{introCS}. For each pair $(m,s)$ we performed 1000 experiments, saving the~errors of each reconstruction and the~number of perfect reconstructions (a~reconstruction is said to be perfect if $\norm{\mathbbm x^{\#}-\mathbbm x}_2\leq 10^{-7}$). For comparison we also repeated this experiment for the~case of $\mathbf\Phi\in\mathbb R^{m\times n}$ and $\mathbbm x\in\mathbb R^{n}$. The~percentage of perfect reconstructions in each case is presented in Fig.~\ref{fig:6_1} and Fig.~\ref{fig:6_2}~(a). \begin{figure}[h] \centerline{ \epsfig{file=quat_procent.eps,width=0.49\textwidth} \epsfig{file=real_procent.eps,width=0.49\textwidth} } \begin{minipage}{0.49\textwidth} \centerline{(a) $\mathbf\Phi\in\mathbb H^{m\times n}$} \end{minipage} \begin{minipage}{0.49\textwidth} \centerline{(b) $\mathbf\Phi\in\mathbb R^{m\times n}$} \end{minipage} \caption{Results of the recovery experiment for $n=256$ and different $m$ and $s$. Image intensity stands for the~percentage of perfect reconstructions.} \label{fig:6_1} \end{figure} \begin{figure}[h] \centerline{ \epsfig{file=porownanie.eps,width=0.49\textwidth} \epsfig{file=eksperyment2-2.eps,width=0.49\textwidth} } \begin{minipage}{0.49\textwidth} \centerline{(a)} \end{minipage} \begin{minipage}{0.49\textwidth} \centerline{(b)} \end{minipage} \caption{(a) Comparison of the~recovery experiment results for $n=256$, $m=32$ and different values of~$s$. 
(b) Lower estimate of the~constant $C_0$ in~Corollary~\ref{l1mincor} obtained from the~inequality~\eqref{exact-ineq1} for $n=256$ and $m=32$.} \label{fig:6_2} \end{figure} Fig.~\ref{fig:6_1}~(a) presents the dependence of the~perfect recovery percentage on the~number of measurements~$m$ and the sparsity~$s$ in the~quaternion case. We see that the simulations confirm our theoretical~considerations. Fig.~\ref{fig:6_1}~(b) shows the~same results for the~real case, i.e. $\mathbf\Phi\in\mathbb R^{m\times n}$ and $\mathbbm x\in\mathbb R^n$. Note that in the~first experiment, for $m=32$ and $s\leq 9$ the~recovery rate is greater than 95\%; the~same holds for $m=64$ and $s\leq 20$. It is also worth noticing that the~results for corresponding pairs $(m,s)$ are much better in the~quaternion setup than in the~real-valued case (see~Fig.~\ref{fig:6_2}(a)). We explain this phenomenon in~{\cite[Lemma~3.1]{bbRIP}}: namely, we show that for a~fixed vector~$\mathbbm x\in\mathbb H^n$ and the~ensemble of quaternion Gaussian random matrices $\mathbf\Phi\in\mathbb H^{m\times n}$, the~ratio random variable $\frac{\norm{\mathbf\Phi\mathbbm x}_2^2}{\norm{\mathbbm x}_2^2}$ has distribution $\Gamma(2m,2m)$, i.e. its variance equals $\frac{1}{2m}$, which is four times smaller than in the~case of a~real vector and real Gaussian matrices of the~same size. In other words, a~quaternion Gaussian random matrix statistically has a smaller restricted isometry constant than its real counterpart. We also performed another experiment, illustrating the~approximate reconstruction of non-sparse quaternion vectors from exact data -- as stated in Corollary~\ref{l1mincor}. We fixed the constants $n=256$ and $m=32$, generated the measurement matrix $\mathbf\Phi\in\mathbb H^{m\times n}$ with i.i.d. quaternion Gaussian entries, and generated $1000$ vectors $\mathbbm x\in\mathbb H^n$ with standard Gaussian random quaternion entries ($\sigma^2=1$), without assuming their sparsity. The~above-described algorithm (steps \ref{alg:3}--\ref{alg:6}) was applied to approximately reconstruct the~vectors. We used the~reconstruction errors $\norm{\mathbbm x^{\#}-\mathbbm x}_1$ to obtain a~lower bound on the~constant $C_0$ as a~function of~$s$, for $s=1,\ldots,64$, using inequality~\eqref{exact-ineq1}, i.e. $$ C_0\geq \frac{\norm{\mathbbm x^{\#}-\mathbbm x}_1}{\norm{\mathbbm x-\mathbbm x_s}_1}, $$ where $\mathbbm x_s$ denotes the best $s$-sparse approximation of $\mathbbm x$. The~results of this experiment are shown in Fig.~\ref{fig:6_2}~(b) in the~form of a~scatter plot -- each point represents a~lower estimate of~$C_0$ for one vector $\mathbbm x$ and one sparsity $s$. We see in particular that -- as expected -- the~dependence on $s$ is monotone. \section{Conclusions} The results of this article, together with the aforementioned~\cite{bbRIP}, form a~theoretical background for classical compressed sensing methods in the~quaternion algebra. We extended the~fundamental result of this theory to the~full quaternion case, namely we proved that if a~quaternion measurement matrix satisfies the~RIP with a~sufficiently small constant, then it is possible to reconstruct sparse quaternion signals from a~small number of their measurements via $\ell_1$-norm minimization. We also estimated the~error of the~approximate reconstruction of a~non-sparse quaternion signal from exact and noisy data. 
This improves our previous result for real measurement matrices and sparse quaternion vectors~\cite{bb} and explains the~success of various numerical experiments in the~quaternion setup~\cite{barthelemy2015,hawes2014,l1minqs}. There are several possible directions of further research in this field -- both theoretical and applied. Among others:\\ -- further refinements of the~main results in the~quaternion algebra, or their extensions to different algebraic structures,\\ -- the search for quaternion matrices other than Gaussian satisfying the RIP,\\ -- the adjustment of reconstruction algorithms to the quaternion setting,\\ -- applications of the~theory in practice.\\ In view of the numerous articles concerning quaternion signal processing published in the~last decade, we expect that this new branch of compressed sensing will attract the attention of even more researchers and develop considerably. \noindent \textbf{Acknowledgments} The~research was supported in part by WUT grant No. 504/01861/1120. The~work conducted by the~second author was supported by a scholarship from Fundacja Wspierania Rozwoju Radiokomunikacji i Technik Multimedialnych. \begin{thebibliography}{99} \bibitem{bb} A. Bade\'nska, \L. B\l aszczyk, Compressed sensing for real measurements of quaternion signals, preprint (2015), arXiv:1605.07985. \bibitem{bbRIP} A. Bade\'nska, \L. B\l aszczyk, Quaternion Gaussian matrices satisfy the RIP, preprint (2017), arXiv:1704.08894. \bibitem{barthelemy2014} Q. Barth\'elemy, A. Larue, J. Mars, Sparse Approximations for Quaternionic Signals, \emph{Advances in Applied Clifford Algebras} 24 (2014), no. 2, 383--402. \bibitem{barthelemy2015} Q. Barth\'elemy, A. Larue, J. Mars, Color Sparse Representations for Image Processing: Review, Models, and Prospects, \emph{IEEE Trans. Image Process.} (2015), 1--12. \bibitem{bulow2001} T. B\"ulow, G. Sommer, Hypercomplex Signals -- A Novel Extension of the Analytic Signal to the Multidimensional Case, \emph{IEEE Trans. Signal Processing} 49 (2001), no. 11, 2844--2852. \bibitem{cz} T. Cai, A. Zhang, Sharp RIP bound for sparse signal and low-rank matrix recovery, \emph{Appl. Comput. Harmon. Anal.} 35 (2013), no. 1, 74--93. \bibitem{cRIP} E. J. Cand\`es, The restricted isometry property and its implications for compressed sensing, \emph{C. R. Acad. Sci. Paris}, Ser. I 346 (2008), 589--592. \bibitem{crt} E. J. Cand\`es, J. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements, \emph{Comm. Pure Appl. Math.} 59 (2006), no. 8, 1207--1223. \bibitem{ct} E. J. Cand\`es, T. Tao, Decoding by linear programming, \emph{IEEE Trans. Inform. Theory} 51 (2005), no. 12, 4203--4215. \bibitem{ccb} W. L. Chan, H. Choi, R. G. Baraniuk, Coherent multiscale image processing using dual-tree quaternion wavelets, \emph{IEEE Trans. Image Process.} 17 (2008), no. 7, 1069--1082. \bibitem{dubey2014} V. R. Dubey, Quaternion Fourier Transform for Colour Images, \emph{International Journal of Computer Science and Information Technologies} 5 (2014), no. 3, 4411--4416. \bibitem{es} T. A. Ell, S. J. Sangwine, Hypercomplex Fourier Transforms of Color Images, \emph{IEEE Trans. Image Process.} 16 (2007), no. 1, 22--35. \bibitem{ell2014} T. A. Ell, N. Le Bihan, S. J. Sangwine, \emph{Quaternion Fourier Transforms for Signal and Image Processing}, Wiley-ISTE (2014). \bibitem{introCS} S. Foucart, H. 
Rauhut, \emph{A mathematical introduction to compressive sensing}, Applied and Numerical Harmonic Analysis, Birkh\"auser/Springer, New York (2013). \bibitem{ghk} N. Gomes, S. Hartmann, U. K\"ahler, Compressed Sensing for Quaternionic Signals, \emph{Complex Anal. Oper. Theory} 11 (2017), 417--455. \bibitem{G1} C. Gao, J. Zhou, F. Lang, Q. Pu, C. Liu, Novel Approach to Edge Detection of Color Image Based on Quaternion Fractional Directional Differentiation, \emph{Advances in Automation and Robotics} 1 (2012), 163--170. \bibitem{hawes2014} M. B. Hawes, W. Liu, A Quaternion-Valued Reweighted Minimisation Approach to Sparse Vector Sensor Array Design, \emph{Proceedings of the 19th International Conference on Digital Signal Processing} (2014), 426--430. \bibitem{khalil2012} M. I. Khalil, Applying Quaternion Fourier Transforms for Enhancing Color Images, \emph{I.J. Image, Graphics and Signal Processing} 2 (2012), 9--15. \bibitem{P1} S.-C. Pei, J.-H. Chang, J.-J. Ding, Color pattern recognition by quaternion correlation, \emph{IEEE Int. Conf. Image Process.}, Thessaloniki, Greece, October 7--10 (2010), 894--897. \bibitem{qalgebra} L. Rodman, \emph{Topics in Quaternion Linear Algebra}, Princeton University Press, Princeton (2014). \bibitem{rzadkowski2015} W. Rzadkowski, K. Snopek, A New Quaternion Color Image Watermarking Algorithm, \emph{The 8th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications} (2015), 245--250. \bibitem{snopek2015} K. M. Snopek, Quaternions and octonions in signal processing -- fundamentals and some new results, \emph{Telecommunication Review + Telecommunication News, Tele-Radio-Electronic, Information Technology} 6 (2015), 618--622. \bibitem{SeDuMi} J. F. Sturm, Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, \emph{Optimization Methods and Software} 11 (1999), no. 1--4, 625--653. \bibitem{T1} C. C. Took, D. P. Mandic, The Quaternion LMS Algorithm for Adaptive Filtering of Hypercomplex Processes, \emph{IEEE Trans. Signal Processing} 57 (2009), no. 4, 1316--1327. \bibitem{vc} N. N. Vakhania, G. Z. Chelidze, Quaternion Gaussian random variables, (Russian) \emph{Teor. Veroyatn. Primen.} 54 (2009), no. 2, 337--344; translation in \emph{Theory Probab. Appl.} 54 (2010), no. 2, 363--369. \bibitem{W1} B. Witten, J. Shragge, Quaternion-based Signal Processing, \emph{Stanford Exploration Project}, New Orleans Annual Meeting (2006), 2862--2866. \bibitem{l1minqs} J. Wu, X. Zhang, X. Wang, L. Senhadji, H. Shu, L1-norm minimization for quaternion signals, \emph{Journal of Southeast University} 1 (2013), 33--37. \end{thebibliography} \end{document}
\begin{document} \title{A Steering Paradox for Einstein-Podolsky-Rosen Argument and its Extended Inequality} \author{Tianfeng Feng} \affiliation{State Key Laboratory of Optoelectronic Materials and Technologies and School of Physics, Sun Yat-sen University, Guangzhou, People's Republic of China} \author{Changliang Ren} \email{[email protected]} \affiliation{Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Key Laboratory for Matter Microstructure and Function of Hunan Province, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China} \author{Qin Feng} \affiliation{State Key Laboratory of Optoelectronic Materials and Technologies and School of Physics, Sun Yat-sen University, Guangzhou, People's Republic of China} \author{Maolin Luo} \affiliation{State Key Laboratory of Optoelectronic Materials and Technologies and School of Physics, Sun Yat-sen University, Guangzhou, People's Republic of China} \author{Xiaogang Qiang} \affiliation{National Innovation Institute of Defense Technology, AMS, Beijing, China} \author{Jing-Ling Chen} \email{[email protected]} \affiliation{Theoretical Physics Division, Chern Institute of Mathematics, Nankai University, Tianjin 300071, People's Republic of China} \author{Xiaoqi Zhou} \email{[email protected]} \affiliation{State Key Laboratory of Optoelectronic Materials and Technologies and School of Physics, Sun Yat-sen University, Guangzhou, People's Republic of China} \date{\today} \begin{abstract} The Einstein-Podolsky-Rosen (EPR) paradox is one of the milestones in quantum foundations, arising from the lack of local realistic description of quantum mechanics. The EPR paradox has stimulated an important concept of ``quantum nonlocality", which manifests itself by three different types: quantum entanglement, quantum steering, and Bell nonlocality. Although Bell nonlocality is more often used to show the ``quantum nonlocality'', the original EPR paradox is essentially a steering paradox. In this work, we formulate the original EPR steering paradox into a contradiction equality, thus making it amenable to an experimental verification. We perform an experimental test of the steering paradox in a two-qubit scenario. Furthermore, by starting from the steering paradox, we generate a generalized linear steering inequality and transform this inequality into a mathematically equivalent form, which is more friendly for experimental implementation, i.e., one may only measure the observables in $x$-, $y$-, or $z$-axis of the Bloch sphere, rather than other arbitrary directions. We also perform experiments to demonstrate this scheme. Within the experimental errors, the experimental results coincide with the theoretical predictions. Our results deepen the understanding of quantum foundations and provide an efficient way to detect the steerability of quantum states. \end{abstract} \pacs{03.65.Ud, 03.67.Mn, 42.50.Xa} \maketitle \section{1. Introduction} Quantum paradox has provided an intuitive way to reveal the essential difference between quantum mechanics and classical theory. 
In 1935, by considering a continuous-variable entangled state $\Psi(x_1, x_2)=\int_{-\infty}^{+\infty} e^{i p(x_1-x_2+x_0)/\hbar}dp$, Einstein, Podolsky and Rosen (EPR) proposed a thought experiment to highlight a famous paradox~\cite{EPR}: either the quantum wave-function does not provide a complete description of physical reality, or measuring one particle from a quantum entangled pair instantaneously affects the second particle, regardless of how far apart the two entangled particles are. The EPR paradox revealed a sharp conflict between local realism and quantum mechanics, thus triggering the investigation of the nonlocal properties of quantum entangled states. Soon after the publication of the EPR paper, Schr{\" o}dinger made an immediate response by introducing the term ``steering" to describe the ``spooky action at a distance" mentioned in the EPR argument~\cite{Schrodinger35}. According to Schr{\" o}dinger, ``steering" reflects a nonlocal phenomenon which, in a bipartite scenario, describes the ability of one party, say Alice, to prepare the other party's (say Bob's) particle in different quantum states by simply measuring her own particle using different settings. However, the notion of steering did not gain much attention and development until 2007, when Wiseman \emph{et al.} gave it a rigorous definition using concepts from quantum information~\cite{wiseman07}. Undoubtedly, the EPR paradox is a milestone in quantum foundations, for it opened the door to ``quantum nonlocality". In 1964, Bell made a distinct response to the EPR paradox by showing that quantum entangled states may violate Bell inequalities, which hold for any local-hidden-variable model \cite{Bell}. This indicates that local-hidden-variable models cannot reproduce all quantum predictions, and the violation of a Bell inequality by entangled states directly implies a kind of nonlocal property -- Bell nonlocality. Since then, Bell nonlocality has undergone rapid and fruitful development in two directions~\cite{BrunnerRMP}: (i) On one hand, more and more Bell inequalities have been introduced to detect Bell nonlocality in different physical systems, for example, the Clauser-Horne-Shimony-Holt (CHSH) inequality for two qubits~\cite{CHSH}, the Mermin-Ardehali-Belinskii-Klyshko (MABK) inequality for multipartite qubits~\cite{MABK}, and the Collins-Gisin-Linden-Massar-Popescu inequality for two qudits~\cite{CGLMP}. (ii) On the other hand, some novel quantum paradoxes, or all-versus-nothing (AVN) proofs, have been proposed to reveal Bell nonlocality without inequalities. Typical examples are the Greenberger-Horne-Zeilinger (GHZ) paradox~\cite{GHZ} and the Hardy paradox~\cite{Hardy}. Experimental verifications of Bell nonlocality have also been carried out; for instance, Aspect \emph{et al.} made the first observation of Bell nonlocality with the CHSH inequality~\cite{Aspect}, Pan \emph{et al.} tested the three-qubit GHZ paradox in a photonic experiment~\cite{Pan}, and very recently Luo \emph{et al.} tested the generalized Hardy paradox for multi-qubit systems~\cite{Luo}. Despite being developed from the EPR paradox, Bell nonlocality does not directly correspond to the EPR paradox. As pointed out in Ref.~\cite{wiseman07}, inspired by the EPR argument, one can derive three different types of ``quantum nonlocality": quantum entanglement, quantum steering, and Bell nonlocality. 
The original EPR paradox is actually a special case of quantum steering \cite{Reid}. Although quantum steering has been experimentally demonstrated in various quantum systems~\cite {saunders10,sun14,armstrong15,Bennet,Schneeloch,Fadel,Dolde,Dabrowski,Cavalcanti, Zeng}, all of these experiments illustrate the EPR paradox only indirectly, as most of them are based on statistical inequalities. Here, a direct illustration of a quantum paradox means that we can find a contradiction equality for this paradox and demonstrate it (Ref. \cite{sun14} gives an AVN proof, but not a contradiction equality). For example: (i) The GHZ paradox \cite{GHZ} can be formulated as a contradiction equality ``$+1=-1$", where ``$+1$" represents the prediction of the local-hidden-variable model, while ``$-1$" is the quantum prediction. Thus, if one observes the value ``$-1$" in an experiment using suitable quantum technologies, then the GHZ paradox is demonstrated. (ii) The formulation of the Hardy paradox \cite{Hardy} is given as follows: under certain Hardy-type constraints on probabilities, $P_1=P_2=\cdots=P_N=0$, any local-hidden-variable model predicts a zero probability (i.e., $P_{\rm suc}=0$), while the quantum prediction is $P_{\rm suc}>0$, where $P_{\rm suc}$ is the success probability of a specific event. Upon successfully measuring the desired non-zero success probability under the required Hardy constraints, one verifies the Hardy paradox. A natural question arises: can the EPR paradox, which excludes any local-hidden-state (LHS) model, be illustrated in a direct way, just like the GHZ or Hardy paradox? The purpose of this paper is two-fold: (i) Based on our previous result on the steering paradox ``$2=1$'' \cite{chen16}, we present a generalized steering paradox ``$k=1$''. We have also performed an experiment illustrating the original EPR paradox by demonstrating the steering paradox ``$2=1$'' in a two-qubit scenario. (ii) A steering paradox can correspond to an inequality (e.g., the two-qubit Hardy paradox may correspond to the well-known CHSH inequality)~\cite{Mermin94,chen18}, and from the steering paradox ``$k=1$'' we generate a generalized linear steering inequality (GLSI), which naturally includes the usual linear steering inequality as a special case \cite{wiseman07,saunders10}. Moreover, the GLSI can be transformed into a mathematically equivalent form, which is more friendly for experimental implementation, i.e., one needs to measure observables only along the $x$-, $y$-, or $z$-axis of the Bloch sphere, rather than along arbitrary directions. We also experimentally test quantum violations of the GLSI, showing that it is more powerful than the usual one in detecting the steerability of quantum states. \section{2. The EPR paradox as a steering paradox ``$k=1$"} Following~Ref.\cite{chen16}, let us consider an arbitrary two-qubit pure entangled state $\rho_{AB}=|\Psi(\alpha,\varphi)\rangle\langle\Psi(\alpha,\varphi)|$ shared by Alice and Bob. Using the Schmidt decomposition, i.e., in the $\hat{z}$-direction representation, the wave-function $|\Psi(\alpha,\varphi)\rangle$ may be written as \begin{eqnarray}\label{decom-z} |\Psi(\alpha,\varphi)\rangle=\cos\alpha|00\rangle+e^{i\varphi}\sin\alpha|11\rangle, \end{eqnarray} with $\alpha\in (0, \pi/2), \varphi \in[0, 2\pi]$. For the same state Eq. 
(\ref{decom-z}), in the general $\hat{n}$-direction decomposition one may recast it to \begin{eqnarray}\label{decom-n} |\Psi(\alpha,\varphi)\rangle &=& |+\hat{n}\rangle |\chi_{+\hat{n}}\rangle+ |-\hat{n}\rangle |\chi_{-\hat{n}}\rangle, \end{eqnarray} where $|\pm \hat{n}\rangle$ are the eigenstates of the operator $\hat{P}_a^{\hat{n}}=[\openone+(-1)^a \vec{\sigma}\cdot \hat{n}]/2$ denoting Alice's projective measurement on her qubit along the $\hat{n}$-direction with measurement outcomes $a$ ($a=0, 1$), $\openone$ is the identity matrix , $\vec{\sigma}=(\sigma_x, \sigma_y, \sigma_z)$ is the vector of Pauli matrices, and $|\chi_{\pm\hat{n}}\rangle=\langle \pm \hat{n}|\Psi(\alpha,\varphi)\rangle$ are the collapsed pure states (unnormalized) for Bob's qubit. By performing a projective measurement on her qubit along the $\hat{n}$-direction, Alice, by wavefunction collapse, steers Bob's qubit to the pure states ${\rho}^{\hat{n}}_a=\tilde{\rho}^{\hat{n}}_a/{\rm tr}(\tilde{\rho}^{\hat{n}}_a)$ with the probability ${\rm tr}(\tilde{\rho}^{\hat{n}}_a)$, here $\tilde{\rho}^{\hat{n}}_a={\rm tr}_A[(\hat{P}_a^{\hat{n}}\otimes \openone)\;\rho_{AB}]$ are the so-called Bob's unnormalized conditional states and ${\rho}^{\hat{n}}_a$ are the normalized ones~\cite{wiseman07}. In a two-setting steering protocol $\{\hat{z}, \hat{x}\}$, if Bob's four unnormalized conditional states can be simulated by an ensemble $\{\wp_\xi \rho_\xi\}$ of the LHS model, then these may be described as \cite{chen16} \begin{subequations} \label{E0-p4} \begin{eqnarray} \tilde{\rho}^{\hat{z}}_0&=& \cos^2\alpha|0\rangle\langle 0|= \wp_1 \rho_1,\label{Erhoz0q-4}\\ \tilde{\rho}^{\hat{z}}_1&=& \sin^2\alpha|1\rangle\langle 1|=\wp_2 \rho_2,\label{Erhoz1q-4}\\ \tilde{\rho}^{\hat{x}}_0&=& (1/2)|\chi_+\rangle\langle\chi_+|=\wp_3 \rho_3,\label{Erhox0q-4}\\ \tilde{\rho}^{\hat{x}}_1&=& (1/2)|\chi_-\rangle\langle\chi_-|=\wp_4 \rho_4,\label{Erhox1q-4} \end{eqnarray} \end{subequations} where $|\chi_\pm\rangle =\cos\alpha|0\rangle\pm e^{i\varphi}\sin\alpha|1\rangle$ are normalized pure states, the $\rho_i$ are hidden states, and the $\wp_i$ represent the corresponding probabilities in the ensemble. They satisfy the constraint $\sum_\xi \wp_\xi \rho_\xi=\rho_B={\rm tr}_A [\rho_{AB}]$, where $\rho_B$ is the reduced density matrix of Bob. On the other hand, since $\tilde{\rho}^{\hat{n}}_0+\tilde{\rho}^{\hat{n}}_1=\rho_B$ and ${\rm tr}\rho_B=1$, if we sum up terms in Eq. (\ref{E0-p4}) and take the trace, we arrive at the contradiction ``$2=1$'', which represents the EPR paradox in the two-setting steering protocol. Here we show that a more general steering paradox ``$k=1$'' can be similarly obtained if one considers a $k$-setting steering scenario $\{\hat{n}_1, \hat{n}_2, \cdots, \hat{n}_k \}$, in which Alice performs $k$ projective measurements on her qubit along $\hat{n}_j$-directions (with $j=1, 2,\cdots, k$). For each projective measurement $ \hat {P}^{\hat{n}_j}_a$, Bob obtains the corresponding unnormalized pure states $\tilde{\rho}^{\hat{n}_j}_a$. 
Suppose these states can be simulated by the LHS model, then one may obtain the following set of $2k$ equations: \begin{subequations} \label{E0-pk} \begin{eqnarray} \tilde{\rho}^{\hat{n}_j}_0&=&\sum_\xi \wp(0|\hat{n}_j,\xi) \wp_{\xi} \rho_{\xi},\label{Erhonj0q-2}\\ \tilde{\rho}^{\hat{n}_j}_1&=&\sum_\xi \wp(1|\hat{n}_j,\xi) \wp_{\xi} \rho_{\xi}, \;\;\;(j=1, 2,\cdots, k).\label{Erhoj1q-2} \end{eqnarray} \end{subequations} Since the $\tilde{\rho}^{\hat{n}_j}_a$'s are proportional to pure states, the sum of the right-hand side of Eq. (\ref{E0-pk}) actually contains only one $\rho_{\xi}$, as we have seen for Eq. (\ref{E0-p4}). Furthermore, due to the relations $\tilde{\rho}^{\hat{n}_j}_0+ \tilde{\rho}^{\hat{n}_j}_1=\rho_B$ and $\sum_{\xi=1}^{2k} \wp_{\xi} \rho_{\xi}=\rho_B$, and by taking trace of Eq. (\ref{E0-pk}), one immediately has the steering paradox ``$k=1$''. Experimentally, we shall test the EPR paradox for a two-qubit system in the simplest case of $k=2$. To this aim, we need to perform measurements leading to four quantum probabilities. The first one is $P^{\rm QM}_1={\rm tr} [ \tilde{\rho}^{\hat{z}}_0\;|0\rangle\langle 0|]=\cos^2\alpha$, which is obtained from Bob by performing the projective measurement $|0\rangle\langle 0|$ on his unnormalized conditional state as in Eq. (\ref{Erhoz0q-4}). Similarly, from Eq. (\ref{Erhoz1q-4})-Eq. (\ref{Erhox1q-4}), one has $P^{\rm QM}_2={\rm tr} [ \tilde{\rho}^{\hat{z}}_1\;|1\rangle\langle 1|]=\sin^2\alpha$, $P^{\rm QM}_3={\rm tr} [ \tilde{\rho}^{\hat{x}}_0\;|\chi_+\rangle\langle\chi_+|]=1/2$, and $P^{\rm QM}_4={\rm tr} [ \tilde{\rho}^{\hat{x}}_1\;|\chi_-\rangle\langle\chi_-|]=1/2$. Consequently, the total quantum prediction is $P^{\rm QM}_{\rm total}=\sum_{i=1}^{4} P^{\rm QM}_i=2$, which contradicts the LHS-model prediction ``1''. If, within the experimental measurement errors, one obtains a value $P^{\rm QM}_{\rm total}\approx 2$, then the steering paradox ``$2=1$'' is demonstrated. \section{Generalized linear steering inequality} Just like Bell's inequalities may be derived from the GHZ and Hardy paradoxes~\cite{Mermin94,chen18}, this is also the case for the EPR paradox. In turn, from the steering paradox ``$k=1$'', one may derive a $k$-setting generalized linear steering inequality as follows: In the steering scenario $\{\hat{n}_1, \hat{n}_2, \cdots, \hat{n}_k \}$, Alice performs $k$ projective measurements along $\hat{n}_j$-directions. Upon preparing the two-qubit system in the pure state $|\Psi(\theta,\phi)\rangle$ (note that this is not $|\Psi(\alpha,\varphi)\rangle$ and here $|\Psi(\theta,\phi)\rangle$ is used to derive the inequality), for each measurement $\hat{P}_a^{\hat{n}_j}$, Bob has the corresponding normalized pure states as ${\rho}^{\hat{n}_j}_a(\theta,\phi)=\tilde{\rho}^{\hat{n}_j}_a/{\rm tr}(\tilde{\rho}^{\hat{n}_j}_a)$, where $\tilde{\rho}^{\hat{n}_j}_a={\rm tr}_A[(\hat{P}_a^{\hat{n}_j}\otimes \openone)\;|\Psi(\theta,\phi)\rangle\langle \Psi(\theta,\phi)|]$ with ($a=0, 1$). 
Then the $k$-setting GLSI is given by (see Appendix A) \begin{eqnarray}\label{k-setting inequality m} \mathcal{S}_k(\theta,\phi)&=& \sum_{j=1}^k \biggl(\sum_{a=0}^1 P(A_{n_j}=a)\;\langle {\rho}^{\hat{n}_j}_a(\theta,\phi) \rangle \biggr)\\&& \leq C_{\rm LHS}, \end{eqnarray} which is a $(\theta,\phi)$-dependent inequality, where $C_{\rm LHS}$ is the classical bound determined by the maximal eigenvalue of $\mathcal{S}_k(\theta,\phi)$ for the given values of $\theta$ and $\phi$, $P(A_{n_j}=a)$ is the probability of Alice's $j$-th measurement yielding the outcome $a$, and ${\rho}^{\hat{n}_j}_a(\theta,\phi)=|\chi^j_\pm(\theta,\phi)\rangle\langle \chi^j_\pm(\theta,\phi)|$ corresponds to Bob's projective measurements. This inequality can be used to detect the steerability of two-qubit pure or mixed states. The GLSI has two remarkable advantages over the usual LSI~\cite{saunders10}: (i) By its very form in Eq. (\ref{k-setting inequality m}), the GLSI naturally includes the usual LSI as a special case, and thus can detect more quantum states. In particular, the GLSI can detect the steerability of all pure entangled states Eq. (\ref{decom-z}) in the whole region $\alpha\in (0, \pi/2)$, at variance with the usual LSI, which fails to detect EPR steering for some regions of $\alpha$ close to 0 \cite{X}. (ii) The use of the GLSI reduces the number of experimental measurements and improves the experimental accuracy. This may be seen as follows: with the usual $k$-setting LSI, Bob needs to perform $k$ measurements along $k$ different directions, which vary with the input state $\rho_{AB}$. This is experimentally challenging, since it may be hard to suitably tune the setup for all the $k$ directions. However, with the GLSI one may solve this issue using the Bloch realization $|\chi^j_\pm\rangle\langle \chi^j_\pm|=(\openone+\vec{\sigma}\cdot\hat{m}^j_\pm)/2$, which transforms the GLSI into an equivalent form where Bob only needs to perform measurements along the $\hat{x}$, $\hat{y}$ and $\hat{z}$ directions of the Bloch sphere, which are independent of the input states~(see Appendix A). To be more specific, we give an example of the 3-setting GLSI from Eq. (\ref{k-setting inequality m}), where Alice's three measuring directions are $\{\hat{x}, \hat{y}, \hat{z}\}$. Then we immediately have \begin{eqnarray}\label{3-setting inequality} \mathcal{S}_3&=&P(A_x=0)\; \langle |\chi_+\rangle\langle \chi_+| \rangle+P(A_x=1)\;\langle |\chi_-\rangle\langle \chi_-| \rangle \nonumber\\ &&+ P(A_y=0)\; \langle |\chi'_+\rangle\langle \chi'_+| \rangle+P(A_y=1)\;\langle |\chi'_-\rangle\langle \chi'_-| \rangle\nonumber\\ &&+P(A_z=0)\;\langle |0\rangle\langle 0| \rangle+ P(A_z=1)\;\langle |1\rangle\langle 1| \rangle\nonumber\\ &\leq& C_{\rm LHS}, \end{eqnarray} with $|\chi_\pm\rangle=\cos\theta|0\rangle \pm e^{i\phi}\sin\theta|1\rangle$, $|\chi'_\pm\rangle=\cos\theta|0\rangle \mp i e^{i\phi}\sin\theta|1\rangle$, $C_{\rm LHS}={\rm Max}\{ \frac{3+\mathcal{C}_+}{2}, \;\frac{3+\mathcal{C}_-}{2}\}$, and $\mathcal{C}_\pm=\sqrt{4\pm4\cos{2\theta}+\cos{4\theta}}$. 
The equivalent 3-setting steering inequality is given by (see Appendix B) \begin{eqnarray}\label{si-3-1 m} \mathcal{S}'_3(\theta,\phi)&=&\sin2\theta\cos\phi\langle A_x \sigma_x\rangle-\sin2\theta\cos\phi \langle A_y \sigma_y\rangle\nonumber\\ & &+\sin2\theta\sin\phi\langle A_x \sigma_y\rangle+\sin2\theta\sin\phi \langle A_y \sigma_x\rangle\nonumber\\ & &+ \langle A_z \sigma_z\rangle+2\cos2\theta\langle\sigma_z\rangle\leq C'_{\rm LHS}, \end{eqnarray} with $C'_{\rm LHS}={\rm Max}\{\mathcal{C}_+, \mathcal{C}_-\}$. Obviously, by taking $\theta=\pi/4, \phi=0$, the inequality Eq. (\ref{si-3-1 m}) reduces to the usual 3-setting LSI in the form~\cite{saunders10}: \begin{eqnarray}\label{si-3-2} \mathcal{S}'_3(\pi/4,0)&=& \langle A_x \sigma_x\rangle- \langle A_y \sigma_y\rangle + \langle A_z \sigma_z\rangle\leq \sqrt{3}. \label{3setting} \end{eqnarray} In the experiment to test the inequalities, Alice prepares two qubits and sends one of them to Bob, who trusts his own measurements but not Alice's. Bob asks Alice to measure at random $\sigma_x$, $\sigma_y$ or $\sigma_z$ on her qubit, or simply not to perform any measurement; then Bob measures $\sigma_x$, $\sigma_y$ or $\sigma_z$ on his qubit according to Alice's measurement. Finally, Bob evaluates the average values $\langle\sigma_x\otimes\sigma_x\rangle$, $\langle\sigma_y\otimes\sigma_y\rangle$, $\langle\sigma_x\otimes\sigma_y\rangle$, $\langle\sigma_y\otimes\sigma_x\rangle$, $\langle\sigma_z\otimes\sigma_z\rangle$, and $\langle\openone\otimes\sigma_z\rangle$ and is therefore capable of checking whether the steering inequality Eq. (\ref{si-3-1 m}) is violated or not. In particular, for the case of the pure states of Eq. (\ref{decom-z}), if Alice is honest in the preparation and measurement of the states, the inequalities are violated for all values of $\alpha$ and $\varphi$ (except at $\alpha=0, \pi/2$), thereby confirming Alice's ability to steer Bob. \begin{figure} \caption{{\bf Experimental setup.}} \label{setup} \end{figure} \begin{figure} \caption{{\bf Experimental results for pure states.}} \label{data1} \end{figure} \begin{figure*} \caption{{\bf Experimental results for mixed states.}} \label{data} \end{figure*} \section{Experimental results} The experimental setup is shown in Fig.~\ref{setup}: degenerate polarization-entangled photon pairs are created by spontaneous parametric down-conversion~\cite{Source} in a type-II barium borate~(BBO) crystal pumped by a 404 nm laser. The initial two-photon state is the singlet state $|\psi\rangle=(|HV\rangle-|VH\rangle)/\sqrt{2}$. By setting HWP1 at $0^{o}$ (corresponding to a phase gate) one may switch the phase of the entangled state from ``minus'' to ``plus''. HWP2, HWP3 and beam displacers~(BD) are used to construct an asymmetric loss interferometer to adjust the amplitude and flip the qubit, where HWP3 is fixed at $45^{o}$ and HWP2 is set at $\arcsin(\sqrt{\frac{1}{\sin^2\alpha}-1})\cdot\frac{90}{\pi}\in[0,45^{o}]$. Therefore, we may prepare the desired two-qubit state $ |\Psi(\alpha)\rangle= \cos\alpha |HH\rangle + \sin\alpha |VV\rangle$ (see Appendix C). Defining $|H\rangle=|0\rangle$ and $|V\rangle=|1\rangle$, the entangled state becomes $ |\Psi(\alpha)\rangle= \cos\alpha |00\rangle + \sin\alpha |11\rangle$. Compared with Eq. (\ref{decom-z}), in our experiment the phase $\varphi$ of the entangled state is set to 0. As illustrated in Fig.~\ref{setup}, after preparation of the entangled state, we send the first qubit to Alice and the second to Bob.
Then Alice and Bob measure their own photons with polarization analyzers, each consisting of a quarter-wave plate (QWP), an HWP and a PBS, and test EPR steering. First, we test the EPR paradox via the steering paradox ``$2=1$'' by choosing $\alpha$ from $\frac{\pi}{36}$ to $\frac{\pi}{4}$ with an interval of $\frac{\pi}{36}$, obtaining nine different two-qubit entangled states. In the two-setting steering scenario, Alice performs measurements on her photon along the $\hat{x}$- and $\hat{z}$-directions of the Bloch sphere. The eigenvectors of $\sigma_{x}$ are $|\pm\rangle=(|0\rangle\pm|1\rangle)/\sqrt{2}$, which are the states onto which Alice's photon may collapse with a certain probability. The corresponding normalized conditional states for Bob are $|\chi_{\pm}\rangle=\cos\alpha|0\rangle\pm\sin\alpha|1\rangle$. Similarly, Bob's normalized conditional states are $|0\rangle$ and $|1\rangle$ when Alice measures along the $\hat{z}$-direction. As shown in Fig.~\ref{data1}(a), the experimental values $S=P^{\rm QM}_{\rm total}$ for the nine different entangled pure states clearly exceed the classical prediction. The average value is $S\approx1.9899$, which far exceeds the classical bound ``1'' predicted by LHS models. Thus, the steering paradox has been successfully demonstrated. Second, we experimentally address the violations of the GLSI using the above pure states $ |\Psi(\alpha)\rangle$. We evaluate $\mathcal{S}'_3$ by using the 3-setting steering inequality Eq. (\ref{si-3-1 m}). For simplicity, in our experiment the phase $\phi$ is set to 0, and therefore, following Eq. (\ref{si-3-1 m}), we only need to measure the following four expectation values: $\langle\sigma_{x}\otimes \sigma_{x}\rangle$, $\langle\sigma_{y}\otimes \sigma_{y}\rangle$, $\langle\sigma_{z}\otimes \sigma_{z}\rangle$, and $\langle\openone\otimes \sigma_{z}\rangle$. Moreover, in order to experimentally observe the violation of the GLSI for any $\alpha\in(0, \pi/4]$, we have to maximize the difference between $\mathcal{S}'_3$ and the classical bound $C'_{\rm LHS}$ at each fixed value of $\alpha$. This is done by numerically finding the optimal value of $\theta$. Remarkably, for $\alpha \in(0, (\arcsin\frac{\sqrt{3}-1}{2})/2\approx \frac{\pi}{17}]$, one observes a significant violation of the inequality, which does not occur for the usual 3-setting LSI Eq. (\ref{si-3-2}). On the other hand, when $\alpha$ is close to $\pi/4$, the violations of the GLSI Eq. (\ref{si-3-1 m}) and the LSI Eq. (\ref{si-3-2}) are of the same order. The experimental results are shown in Fig.~\ref{data1}(b) and (c), and are almost indistinguishable from the theoretical predictions. Finally, we have experimentally tested the inequality Eq. (\ref{si-3-1 m}) with two types of mixed states (see Appendix C). The first one is a generalized Werner state $\rho_1$ \cite{Werner} and the second one is an asymmetric mixed state $\rho_2$ \cite{chen13}, given by \begin{subequations} \label{mixed} \begin{eqnarray} \rho_{1}&=&V|\Psi(\alpha)\rangle\langle\Psi(\alpha)|+\frac{1-V}{4}\openone\otimes \openone, \label{mixed-1}\\ \rho_{2}&=&V|\Psi(\alpha)\rangle\langle\Psi(\alpha)|+(1-V)|\Phi(\alpha)\rangle\langle\Phi(\alpha)|,\label{mixed-2} \end{eqnarray} \end{subequations} with $|\Phi(\alpha)\rangle=\sin\alpha|01\rangle+\cos\alpha|10\rangle$, $\alpha\in[0,\frac{\pi}{4}]$, and $V\in[0,1]$.
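Returning to the pure-state test, the numerical optimization of $\theta$ mentioned above is straightforward to reproduce: one scans $\theta$, evaluates the quantum value of $\mathcal{S}'_3$ (with $\phi=0$) for $|\Psi(\alpha)\rangle$, and compares it with the $\theta$-dependent bound $C'_{\rm LHS}$. A minimal sketch, assuming numpy; the expectation values used are the standard ones for $\cos\alpha|00\rangle+\sin\alpha|11\rangle$ and are stated in the comments:
\begin{verbatim}
import numpy as np

def S3_quantum(theta, alpha):
    # quantum value of S'_3 (phi = 0) for |Psi(alpha)>:
    # <sx sx> = sin 2a, <sy sy> = -sin 2a, <sz sz> = 1, <1 sz> = cos 2a
    return 1 + 2*np.sin(2*theta)*np.sin(2*alpha) \
             + 2*np.cos(2*theta)*np.cos(2*alpha)

def C_LHS(theta):
    return max(np.sqrt(4 + s*4*np.cos(2*theta) + np.cos(4*theta))
               for s in (+1, -1))

thetas = np.linspace(1e-3, np.pi/2 - 1e-3, 2000)
for alpha in (np.pi/36, np.pi/17, np.pi/8, np.pi/4):
    margin = max(S3_quantum(t, alpha) - C_LHS(t) for t in thetas)
    print(f"alpha = {alpha:.4f}: max_theta (S'_3 - C'_LHS) = {margin:.4f}")
# every margin is positive, i.e. the GLSI is violated for all these alpha,
# including alpha <= pi/17 where the usual 3-setting LSI is not violated
\end{verbatim}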
As is apparent from Fig.~\ref{data}(a) and Fig.~\ref{data}(b), the experimental results confirm that the GLSI has an advantage over the LSI in detecting the steerability of more quantum states (see Appendix B for more theoretical details). \section{Conclusions} In summary, we have advanced the study of the EPR paradox in two respects: (i) We have presented a generalized steering paradox ``$k=1$'' and performed an experiment illustrating the original EPR paradox by demonstrating the steering paradox ``$2=1$'' in a two-qubit scenario. (ii) Based on the steering paradox ``$k=1$'', we have derived a $k$-setting generalized linear steering inequality, which detects the steerability of quantum states to a larger extent than previous inequalities. We have also rewritten this inequality in a mathematically equivalent form which is more suitable for experimental implementation, since it allows us to measure only along the $x$-, $y$- or $z$-axis of the Bloch sphere, rather than along arbitrary directions, thus greatly simplifying the experimental setup and improving precision. This finding is valuable for the open problem of how to optimize the measurement settings for steering verification in experiments~\cite{RMP}. Our results deepen the understanding of quantum foundations and provide an efficient way to detect the steerability of quantum states. Recently, quantum steering has been applied to the one-sided device-independent quantum key distribution protocol, in which shared keys are secured by measuring a quantum steering inequality \cite{Branciard}. Our generalized linear steering inequality can also be applied to this scenario to implement one-sided DIQKD. In addition, our results may find applications in quantum random number generation \cite{Skrzypczyk,Guo} and quantum sub-channel discrimination \cite{Piani,Sun}. \section{Appendix A. Generalized linear steering inequality obtained from the general steering paradox ``$k=1$''.} From the steering paradox ``$k=1$'', one can naturally derive a $k$-setting generalized linear steering inequality (GLSI), which includes the usual LSI~\cite{saunders10} as a special case. The derivation proceeds as follows: In the steering scenario $\{\hat{n}_1, \hat{n}_2, \cdots, \hat{n}_k \}$, Alice performs $k$ projective measurements $\hat{P}_a^{\hat{n}_j}$. The pure state $ |\Psi(\theta,\phi)\rangle$ of Eq. (\ref{decom-z}) admits an equivalent, more general decomposition along the $\hat{n}$-direction, Eq. (\ref{decom-n}). Explicitly, Alice's projective measurements can be written as \begin{subequations} \label{proj} \begin{eqnarray} \hat {P}^{\hat{n}_j}_0&=&\frac{\openone+ {\hat{n}_j}\cdot {\vec \sigma}}{2}=|+ \hat{n}_j\rangle \langle +\hat{n}_j|,\\ \hat {P}^{\hat{n}_j}_1&=&\frac{\openone- {\hat{n}_j}\cdot {\vec \sigma}}{2}=|- \hat{n}_j\rangle \langle -\hat{n}_j|.
\end{eqnarray} \end{subequations} Based on the two-qubit pure state $|\Psi (\theta,\phi)\rangle$, for the $j$-th projective measurement $\hat{P}_a^{\hat{n}_j}$ of Alice, Bob has the unnormalized conditional states \begin{subequations} \begin{eqnarray} \tilde{\rho}^{\hat{n}_j}_0={\rm tr}_A[(\hat{P}_0^{\hat{n}_j}\otimes \openone)\;|\Psi\rangle\langle \Psi|]=|\chi_{+\hat{n}_j}\rangle\langle \chi_{+\hat{n}_j}|,\\ \tilde{\rho}^{\hat{n}_j}_1={\rm tr}_A[(\hat{P}_1^{\hat{n}_j}\otimes \openone)\;|\Psi\rangle\langle \Psi|]=|\chi_{-\hat{n}_j}\rangle\langle \chi_{-\hat{n}_j}|, \end{eqnarray} \end{subequations} from which one obtains the normalized conditional states \begin{subequations} \begin{eqnarray} {\rho}^{\hat{n}_j}_0&=&\frac{\tilde{\rho}^{\hat{n}_j}_0}{{\rm tr}(\tilde{\rho}^{\hat{n}_j}_0)}=|\chi_{+}^j\rangle\langle \chi_{+}^j|,\\ {\rho}^{\hat{n}_j}_1&=&\frac{\tilde{\rho}^{\hat{n}_j}_1}{{\rm tr}(\tilde{\rho}^{\hat{n}_j}_1)}=|\chi_{-}^j\rangle\langle \chi_{-}^j|, \end{eqnarray} \end{subequations} where \begin{eqnarray} |\chi_{+}^j\rangle=\frac{|\chi_{+\hat{n}_j}\rangle}{\sqrt{{\rm tr}[|\chi_{+\hat{n}_j}\rangle\langle \chi_{+\hat{n}_j}|]}},\\ \;\;\; |\chi_{-}^j\rangle=\frac{|\chi_{-\hat{n}_j}\rangle}{\sqrt{{\rm tr}[|\chi_{-\hat{n}_j}\rangle\langle \chi_{-\hat{n}_j}|]}}, \end{eqnarray} are normalized pure states. For the pure state $|\Psi\rangle$, the following probability relation always holds: \begin{eqnarray}\label{prob} {\rm tr}[(\hat{P}_0^{\hat{n}_j}\otimes |\chi_{+}^j\rangle\langle \chi_{+}^j| )\;|\Psi\rangle\langle \Psi|]+\\ {\rm tr}[(\hat{P}_1^{\hat{n}_j}\otimes |\chi_{-}^j\rangle\langle \chi_{-}^j| )\;|\Psi\rangle\langle \Psi|]=\\ {\rm tr}[|\chi_{+\hat{n}_j}\rangle\langle \chi_{+\hat{n}_j}|]+{\rm tr}[|\chi_{-\hat{n}_j}\rangle\langle \chi_{-\hat{n}_j}|] \equiv 1. \end{eqnarray} The quantity on the left-hand side of Eq. (\ref{prob}) can be used to construct the steering inequality: on Alice's side we replace the quantum measurement operators $\hat{P}_a^{\hat{n}_j}$ by the corresponding classical probabilities $P(A_{n_j}=a)$. We then immediately have the $k$-setting GLSI \begin{eqnarray}\label{k-setting inequality} \mathcal{S}_k&=& \sum_{j=1}^k \biggr(\sum_{a=0}^1 P(A_{n_j}=a)\;\langle {\rho}^{\hat{n}_j}_a \rangle \biggr)\leq C_{\rm LHS}, \end{eqnarray} where $P(A_{n_j}=a)$ is the classical probability of Alice's $j$-th measurement having outcome $a$, \begin{eqnarray}\label{projpm} {\rho}^{\hat{n}_j}_0=|\chi^j_+\rangle\langle \chi^j_+|,\;\;\; {\rho}^{\hat{n}_j}_1=|\chi^j_-\rangle\langle \chi^j_-|, \end{eqnarray} denote the projective measurements on Bob's side, and $C_{\rm LHS}$ is the classical bound, determined by the maximal eigenvalue of the steering parameter $\mathcal{S}_k$. By definition, it is easy to verify directly that for the pure state (\ref{decom-z}) the quantum prediction of $\mathcal{S}_k$ is equal to $k$. \emph{Remark 1.---}In Ref. \cite{saunders10}, the usual $k$-setting linear steering inequality is given as \begin{eqnarray}\label{kLSI} \mathcal{S}^{\rm LSI}_k&=& \sum_{j=1}^k A_j \langle \hat{m}_j \cdot \vec{\sigma} \rangle \leq C^{\rm LSI}_{\rm LHS}, \end{eqnarray} where $C^{\rm LSI}_{\rm LHS}$ denotes the classical bound for the LSI. Note that for the LSI Eq. (\ref{kLSI}), quantum mechanically Alice performs $k$ measurements (corresponding to $\hat{A}_j=\hat{n}_j\cdot \vec{\sigma}$), and Bob also performs $k$ measurements (corresponding to $\hat{B}_j=\hat{m}_j\cdot \vec{\sigma}$).
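The identity in Eq. (\ref{prob}), and hence the statement that the quantum prediction of $\mathcal{S}_k$ equals $k$ for any pure state, is easy to confirm numerically for randomly drawn settings. A minimal sketch, assuming numpy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def proj(v):
    v = np.asarray(v, dtype=complex).reshape(-1, 1)
    return v @ v.conj().T

# random pure state |Psi(theta,phi)> = cos t |00> + e^{i phi} sin t |11>
t, ph = rng.uniform(0, np.pi/2), rng.uniform(0, 2*np.pi)
k0, k1 = np.array([1, 0]), np.array([0, 1])
Psi = np.cos(t)*np.kron(k0, k0) + np.exp(1j*ph)*np.sin(t)*np.kron(k1, k1)
rho = proj(Psi)
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1])

def term(P_alice):
    # tr[(P_a (x) rho_a) rho] with rho_a Bob's normalized conditional state
    cond = (np.kron(P_alice, np.eye(2)) @ rho).reshape(2, 2, 2, 2)
    cond = cond.trace(axis1=0, axis2=2)
    rho_a = cond / np.trace(cond)
    return np.trace(np.kron(P_alice, rho_a) @ rho).real

total = 0.0
for _ in range(5):                                  # five random settings, k = 5
    n = rng.normal(size=3); n /= np.linalg.norm(n)  # random direction n_j
    P0 = (np.eye(2) + n[0]*sx + n[1]*sy + n[2]*sz) / 2
    total += term(P0) + term(np.eye(2) - P0)
print(total)   # -> 5.0: each setting contributes exactly 1, so S_k^QM = k
\end{verbatim}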
In the following we show that the LSI is a special case of the GLSI given in Eq. (\ref{k-setting inequality}). Let us rewrite the projective measurements Eq. (\ref{projpm}) in the Bloch representation as \begin{eqnarray}\label{projpm2} {\rho}^{\hat{n}_j}_0&=&|\chi^j_+\rangle\langle \chi^j_+|=\frac{1}{2}(\openone + \hat{m}^j_+ \cdot \vec{\sigma}),\nonumber\\ {\rho}^{\hat{n}_j}_1&=&|\chi^j_-\rangle\langle \chi^j_-|=\frac{1}{2}(\openone + \hat{m}^j_- \cdot \vec{\sigma}), \end{eqnarray} and let us denote \begin{eqnarray}\label{classp} P(A_j=0)=\frac{1+A_j}{2}, \;\;\; P(A_j=1)=\frac{1-A_j}{2}; \end{eqnarray} then, substituting Eqs. (\ref{projpm2}) and (\ref{classp}) into the inequality Eq. (\ref{k-setting inequality}), we have \begin{eqnarray}\label{k-setting 2} \mathcal{S}_k&=& \sum_{j=1}^k \biggr(\frac{1+A_j}{2} \;\langle \frac{1}{2}(\openone + \hat{m}^j_+ \cdot \vec{\sigma}) \rangle + \\ &&\frac{1-A_j}{2} \;\langle \frac{1}{2}(\openone + \hat{m}^j_- \cdot \vec{\sigma}) \rangle\biggr)\leq C_{\rm LHS}. \end{eqnarray} Note that for the GLSI Eq. (\ref{k-setting 2}), quantum mechanically Alice performs $k$ measurements (corresponding to $\hat{A}_j=\hat{n}_j\cdot \vec{\sigma}$), while Bob performs $2k$ measurements (corresponding to $\hat{B}_j^+=\hat{m}^j_+\cdot \vec{\sigma}$ and $\hat{B}_j^-=\hat{m}^j_-\cdot \vec{\sigma}$, if $\hat{m}^j_+ \neq \pm\hat{m}^j_-$). Now let \begin{eqnarray}\label{nj} \hat{n}_j=(\sin\tau \cos{\gamma}, \sin\tau \sin{\gamma}, \cos\tau), \end{eqnarray} then \begin{eqnarray}\label{pmnj} |+\hat{n}_j\rangle=\left( \begin{array}{l} \cos\frac{\tau}{2} \\ \sin\frac{\tau}{2} e^{i \gamma} \\ \end{array} \right),\;\;\; |-\hat{n}_j\rangle=\left( \begin{array}{l} \sin\frac{\tau}{2} \\ -\cos\frac{\tau}{2} e^{i \gamma} \\ \end{array} \right), \end{eqnarray} which are the eigenstates of $\hat{n}_j\cdot \vec{\sigma}$. From Eq. (\ref{decom-n}), one explicitly has \begin{eqnarray}\label{chinew} |\chi_{+\hat{n}_j}\rangle &=& \langle +\hat{n}_j|\Psi(\theta,\phi)\rangle \\ & =&\cos\frac{\tau}{2}\cos\theta|0\rangle+e^{i(\phi-\gamma)} \sin\frac{\tau}{2}\sin\theta|1\rangle, \nonumber\\ |\chi_{-\hat{n}_j}\rangle &=& \langle -\hat{n}_j|\Psi(\theta,\phi)\rangle \\ & =&\sin\frac{\tau}{2}\cos\theta|0\rangle-e^{i(\phi-\gamma)} \cos\frac{\tau}{2}\sin\theta|1\rangle, \end{eqnarray} which yields \begin{eqnarray}\label{chinew1} \langle \chi_{-\hat{n}_j} |\chi_{+\hat{n}_j}\rangle &=& \cos\frac{\tau}{2} \sin\frac{\tau}{2} (\cos^2\theta-\sin^2\theta). \end{eqnarray} Namely, if $\theta=\pi/4$, the states $|\chi_{+\hat{n}_j}\rangle$ and $|\chi_{-\hat{n}_j}\rangle$ are orthogonal; from Eq. (\ref{projpm2}) one then immediately sees that the two Bloch vectors are antiparallel, i.e., \begin{eqnarray}\label{anti} \hat{m}^j_+ =- \hat{m}^j_- \equiv \hat{m}^j. \end{eqnarray} Substituting the relation Eq. (\ref{anti}) into the inequality Eq. (\ref{k-setting 2}), we have \begin{equation}\label{k-setting 3a} \mathcal{S}_k= \frac{1}{2} \sum_{j=1}^k \biggr(1 +\; A_j \langle \hat{m}^j \cdot \vec{\sigma} \rangle \biggr) \nonumber \leq C_{\rm LHS}, \end{equation} i.e., \begin{eqnarray}\label{kLSI-2} \mathcal{S}^{\rm LSI}_k&=& \sum_{j=1}^k A_j \langle \hat{m}_j \cdot \vec{\sigma} \rangle \leq 2\; C_{\rm LHS}-k, \end{eqnarray} which proves that the usual LSI is a special case of the GLSI.
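The orthogonality condition behind Eq. (\ref{anti}) can be checked directly from Eq. (\ref{chinew}): at $\theta=\pi/4$ the two conditional Bloch vectors are antiparallel for every direction $\hat{n}_j$, while away from $\theta=\pi/4$ they are not, so Bob genuinely needs $2k$ directions in general. A minimal numerical sketch, assuming numpy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1])

def chi(sign, tau, gamma, theta, phi):
    # <+n|Psi> and <-n|Psi> as in Eq. (chinew)
    if sign > 0:
        return np.array([np.cos(tau/2)*np.cos(theta),
                         np.exp(1j*(phi-gamma))*np.sin(tau/2)*np.sin(theta)])
    return np.array([np.sin(tau/2)*np.cos(theta),
                     -np.exp(1j*(phi-gamma))*np.cos(tau/2)*np.sin(theta)])

def bloch(v):
    v = v / np.linalg.norm(v)           # normalized conditional state
    return np.real([v.conj() @ s @ v for s in (sx, sy, sz)])

tau = rng.uniform(0, np.pi)
gamma, phi = rng.uniform(0, 2*np.pi), rng.uniform(0, 2*np.pi)
for theta in (np.pi/4, 0.6):
    s = bloch(chi(+1, tau, gamma, theta, phi)) \
        + bloch(chi(-1, tau, gamma, theta, phi))
    print(theta, np.round(s, 6))
# the sum of the two Bloch vectors vanishes at theta = pi/4 (Eq. (anti))
# but not at theta = 0.6
\end{verbatim}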
In short, for an arbitrary two-qubit pure entangled state $|\Psi(\theta,\phi)\rangle$, one has a steering paradox ``$k=1$''~\cite{chen16}, based on which one can derive a generalized linear steering inequality as shown in Eq. (\ref{k-setting inequality}). If one further fixes the parameter $\theta$ as $\theta=\pi/4$ (in which case $|\Psi(\theta=\pi/4,\phi)\rangle$ is a maximally entangled state), then the GLSI reduces to the usual LSI. \emph{Remark 2.---}We may rewrite the GLSI Eq. (\ref{k-setting 2}) in a mathematically equivalent form that is more convenient for experimental implementation. Let us denote \begin{eqnarray}\label{mm} \hat{m}^j_+=(m^j_{+x}, m^j_{+y},m^j_{+z}), \;\;\; \hat{m}^j_-=(m^j_{-x}, m^j_{-y},m^j_{-z}); \end{eqnarray} then from the inequality Eq. (\ref{k-setting 2}) one has \begin{widetext} \begin{eqnarray}\label{k-setting 3} \mathcal{S}_k&=& \sum_{j=1}^k \biggr(\frac{1+A_j}{2} \;\langle \frac{1}{2}(\openone + \hat{m}^j_+ \cdot \vec{\sigma}) \rangle + \frac{1-A_j}{2} \;\langle \frac{1}{2}(\openone + \hat{m}^j_- \cdot \vec{\sigma}) \rangle\biggr)\nonumber\\ &=& \sum_{j=1}^k \biggr(\frac{1}{2}+\frac{1+A_j}{4} \;(m^j_{+x}\langle \sigma_x \rangle+m^j_{+y}\langle \sigma_y \rangle+m^j_{+z}\langle \sigma_z \rangle) + \frac{1-A_j}{4} \;\;(m^j_{-x}\langle \sigma_x \rangle+m^j_{-y}\langle \sigma_y \rangle+m^j_{-z}\langle \sigma_z \rangle)\biggr)\nonumber\\ &\leq& C_{\rm LHS}. \end{eqnarray} \end{widetext} The remarkable point of the inequality Eq. (\ref{k-setting 3}) is that Bob always measures his particle along the three directions $x, y, z$, which not only greatly reduces the number of measurements but also removes the need to tune the measurement directions for different input states. \section{Appendix B. EPR steering by using the 3-setting generalized LSI.} In this experimental work, we shall demonstrate EPR steering for the two-qubit generalized Werner state by using the generalized linear steering inequality. We focus on the 3-setting GLSI. In the steering scenario $\{\hat{x}, \hat{y}, \hat{z} \}$, Alice performs projective measurements on her qubit along the $\hat{x}$-, $\hat{y}$- and $\hat{z}$-directions. From Eq. (\ref{k-setting inequality}) one immediately has \begin{eqnarray}\label{3-setting inequality} \mathcal{S}_3&=&P(A_x=0)\; \langle |\chi_+\rangle\langle \chi_+| \rangle+P(A_x=1)\;\langle |\chi_-\rangle\langle \chi_-| \rangle \nonumber\\ &&+ P(A_y=0)\; \langle |\chi'_+\rangle\langle \chi'_+| \rangle+P(A_y=1)\;\langle |\chi'_-\rangle\langle \chi'_-| \rangle\nonumber\\ &&+P(A_z=0)\;\langle |0\rangle\langle 0| \rangle+ P(A_z=1)\;\langle |1\rangle\langle 1| \rangle \nonumber\\ &\leq& C_{\rm LHS}, \end{eqnarray} with \begin{eqnarray} |\chi_\pm\rangle & = & \cos\theta|0\rangle \pm e^{i\phi}\sin\theta|1\rangle,\nonumber\\ |\chi'_\pm\rangle& = &\cos\theta|0\rangle \mp i e^{i\phi}\sin\theta|1\rangle.
\end{eqnarray} One then has \begin{eqnarray}\label{allchi} |\chi_+\rangle \langle \chi_+|& = & \frac{1}{2}(\openone+\hat{m}_+\cdot \vec{\sigma}),\;\;\; |\chi_-\rangle \langle \chi_-|=\frac{1}{2}(\openone+\hat{m}_-\cdot \vec{\sigma}),\nonumber\\ |\chi'_+\rangle \langle \chi'_+|& = & \frac{1}{2}(\openone+\hat{m}'_+\cdot \vec{\sigma}), \;\;\; |\chi'_-\rangle \langle \chi'_-| = \frac{1}{2}(\openone+\hat{m}'_-\cdot \vec{\sigma}),\nonumber\\ |0\rangle\langle 0| & = & \frac{1}{2}(\openone + \sigma_z), \;\;\; |1\rangle\langle 1| = \frac{1}{2}(\openone - \sigma_z), \end{eqnarray} with \begin{eqnarray}\label{allm} \hat{m}_+ &=&(\sin 2\theta \cos\phi,\sin 2\theta \sin\phi,\cos 2\theta ),\nonumber\\ \hat{m}_- &=&(-\sin 2\theta \cos\phi,-\sin 2\theta \sin\phi,\cos 2\theta ),\nonumber\\ \hat{m}'_+ &=&(\sin 2\theta \sin\phi,-\sin 2\theta \cos\phi,\cos 2\theta ),\nonumber\\ \hat{m}'_- &=&(-\sin 2\theta \sin\phi,\sin 2\theta \cos\phi,\cos 2\theta ). \end{eqnarray} We now compute the classical bound $C_{\rm LHS}$. Because $P(A_i=0)+P(A_i=1)=1$ ($i=x, y, z$), i.e., $P(A_i=0)$ and $P(A_i=1)$ are mutually exclusive, if $P(A_i=0)=1$ then one must have $P(A_i=1)=0$. For the inequality Eq. (\ref{3-setting inequality}) there are in total eight deterministic combinations: (i) $P(A_z=0)=P(A_x=0)=P(A_y=0)=1$; then the left-hand side of the inequality Eq. (\ref{3-setting inequality}) is \begin{eqnarray} \biggr\langle |\chi_+\rangle\langle \chi_+|+ |\chi'_+\rangle\langle \chi'_+| + |0\rangle\langle 0| \biggr\rangle, \end{eqnarray} and the two eigenvalues of this matrix are \begin{eqnarray} \frac{3+\sqrt{4+4\cos{2\theta}+\cos{4\theta}}}{2},\\ \;\;\; \frac{3-\sqrt{4+4\cos{2\theta}+\cos{4\theta}}}{2}. \end{eqnarray} The same eigenvalues are obtained for $P(A_z=0)=P(A_x=0)=P(A_y=1)=1$, $P(A_z=0)=P(A_x=1)=P(A_y=0)=1$, and $P(A_z=0)=P(A_x=1)=P(A_y=1)=1$. (ii) $P(A_z=1)=P(A_x=0)=P(A_y=0)=1$; then the left-hand side of the inequality Eq. (\ref{3-setting inequality}) is \begin{eqnarray} \biggr\langle |\chi_+\rangle\langle \chi_+|+ |\chi'_+\rangle\langle \chi'_+| + |1\rangle\langle 1| \biggr\rangle, \end{eqnarray} and the two eigenvalues of this matrix are \begin{eqnarray} \frac{3+\sqrt{4-4\cos{2\theta}+\cos{4\theta}}}{2},\\ \;\;\; \frac{3-\sqrt{4-4\cos{2\theta}+\cos{4\theta}}}{2}. \end{eqnarray} The same holds for $P(A_z=1)=P(A_x=0)=P(A_y=1)=1$, $P(A_z=1)=P(A_x=1)=P(A_y=0)=1$, and $P(A_z=1)=P(A_x=1)=P(A_y=1)=1$. Thus, in summary, the classical bound is given by (here $\theta\in (0, \pi/2)$) \begin{eqnarray}\label{bound2} C_{\rm LHS}={\rm Max}\{ \frac{3+\mathcal{C}_+}{2}, \;\frac{3+\mathcal{C}_-}{2}\}, \end{eqnarray} with \begin{eqnarray}\label{bound3} \mathcal{C}_\pm=\sqrt{4\pm4\cos{2\theta}+\cos{4\theta}}. \end{eqnarray} For $\theta\in(0,\pi/2)$ the classical bound satisfies $C_{\rm LHS}< 3$; however, for any two-qubit pure entangled state $|\Psi(\theta,\phi)\rangle$ one has $\mathcal{S}^{\rm QM}_3=3$, and thus any two-qubit pure entangled state violates the steering inequality Eq. (\ref{3-setting inequality}). Let \begin{equation}\label{pxyz} P(A_i=0)=\frac{1+A_i}{2}, \;\;\; P(A_i=1)=\frac{1-A_i}{2}, \end{equation} where $i=x, y, z$. Substituting Eqs. (\ref{allchi}), (\ref{allm}) and (\ref{pxyz}) into the inequality Eq.
(\ref{3-setting inequality}) and simplifying, one obtains the equivalent 3-setting steering inequality \begin{eqnarray}\label{si-3-1} \mathcal{S}'_3&=&\sin2\theta\cos\phi\langle A_x \sigma_x\rangle-\sin2\theta\cos\phi \langle A_y \sigma_y\rangle\nonumber\\ & &+\sin2\theta\sin\phi\langle A_x \sigma_y\rangle+\sin2\theta\sin\phi \langle A_y \sigma_x\rangle\nonumber\\ & &+ \langle A_z \sigma_z\rangle+2\cos2\theta\langle\sigma_z\rangle\leq C'_{\rm LHS}, \end{eqnarray} with $C'_{\rm LHS}={\rm Max}\{\mathcal{C}_+, \mathcal{C}_-\}$. Obviously, by taking $\theta=\pi/4, \phi=0$, the inequality Eq. (\ref{si-3-1}) reduces to (an equivalent form of) the usual 3-setting LSI of~\cite{saunders10}. \begin{figure} \caption{\label{figwerner-1}Comparison of the usual 3-setting LSI and the 3-setting GLSI for the generalized Werner state $\rho_1$ (see text).} \end{figure} \begin{figure} \caption{\label{figwerner-2}Detection region of the 3-setting GLSI for the generalized Werner state $\rho_1$ near $\alpha\approx\pi/17$ (see text).} \end{figure} \begin{figure} \caption{\label{AVN-1}Comparison of the usual 3-setting LSI and the 3-setting GLSI for the asymmetric mixed state $\rho_2$ (see text).} \end{figure} \begin{figure} \caption{\label{AVN-2}Detection region of the 3-setting GLSI for the asymmetric mixed state $\rho_2$ (see text).} \end{figure} \emph{Example 1.---} Let us consider the two-qubit pure state $|\Psi(\alpha)\rangle=\cos\alpha|00\rangle+\sin\alpha|11\rangle$; its quantum value for the usual 3-setting steering inequality Eq. (\ref{si-3-2}) is $1+2\sin{2\alpha}$. Hence the usual linear steering inequality can be violated only when $\alpha>\frac{\arcsin{\frac{\sqrt{3}-1}{2}}}{2}\approx 0.1873$. However, the pure state violates the GLSI Eq. (\ref{3-setting inequality}) in the whole region $\alpha\in (0, \pi/2)$; thus the generalized LSI is stronger than the usual LSI in detecting the EPR steerability of pure entangled states. \emph{Example 2.---}Let us consider the two-qubit generalized Werner state \begin{eqnarray}\label{wer} \rho_1=\rho_{AB}(\alpha, V)= V |\Psi(\alpha)\rangle \langle \Psi(\alpha)|+\frac{1-V}{4} \openone\otimes\openone, \end{eqnarray} with $\alpha\in[0,\frac{\pi}{4}], V\in[0,1]$. For the state in Eq. (\ref{wer}), we compare the performance of the usual 3-setting LSI (blue line) and the 3-setting GLSI (red line) in Fig.~\ref{figwerner-1}. For the usual LSI, the threshold value of the visibility is given by $V_{\mathrm{Min}}=\frac{\sqrt{3}}{1+2\sin{2\alpha}}$, below which the usual LSI cannot be violated. In particular, no states violate the usual 3-setting LSI in the range $\alpha\in[0,\frac{\arcsin{\frac{\sqrt{3}-1}{2}}}{2}]$. However, the 3-setting GLSI can detect steerable states in a wider region of $\alpha$ and $V$, which can be determined numerically. See Fig.~\ref{figwerner-1} and Fig.~\ref{figwerner-2}: for example, when $\alpha=\frac{\arcsin{\frac{\sqrt{3}-1}{2}}}{2} (\approx\frac{\pi}{17})$, no states violate the usual 3-setting LSI, but the GLSI can still detect steerable states in the region $V\in [V_{\mathrm{Min}}\approx0.914, 1]$. In the range $\alpha\in[\frac{\arcsin{\frac{\sqrt{3}-1}{2}}}{2}, \alpha_{B}]$, with $\alpha_{B}\approx 0.3508\approx\frac{\pi}{9}$, there exist states that violate the usual LSI, but the corresponding threshold $V_{\mathrm{Min}}$ is larger than that of the 3-setting generalized LSI. In the range $\alpha\in[\alpha_{B},\frac{\pi}{4}]$, both inequalities perform almost equivalently in the steering test. In particular, when $\alpha=\frac{\pi}{4}$, both achieve $V_{\mathrm{Min}}=\frac{\sqrt{3}}{3}$.
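The thresholds quoted in this example can be reproduced by a direct scan over $\theta$: for the Werner state the correlators of $|\Psi(\alpha)\rangle$ are simply scaled by $V$, so the smallest detectable visibility is the minimum over $\theta$ of $C'_{\rm LHS}/\mathcal{S}'_3|_{V=1}$. A minimal numerical sketch, assuming numpy:
\begin{verbatim}
import numpy as np

def C_LHS(theta):
    return max(np.sqrt(4 + s*4*np.cos(2*theta) + np.cos(4*theta))
               for s in (+1, -1))

def S3_at_V1(theta, alpha):
    # quantum value of S'_3 (phi = 0) for rho_1 with V = 1
    return 1 + 2*np.sin(2*theta)*np.sin(2*alpha) \
             + 2*np.cos(2*theta)*np.cos(2*alpha)

thetas = np.linspace(1e-3, np.pi/2 - 1e-3, 4000)

def V_min_GLSI(alpha):
    best = np.inf
    for t in thetas:
        q = S3_at_V1(t, alpha)
        if q > 0:
            best = min(best, C_LHS(t) / q)
    return best

for alpha in (np.arcsin((np.sqrt(3)-1)/2)/2, np.pi/9, np.pi/4):
    V_lsi = np.sqrt(3) / (1 + 2*np.sin(2*alpha))
    print(f"alpha={alpha:.4f}: GLSI V_min={V_min_GLSI(alpha):.3f}, "
          f"LSI V_min={V_lsi:.3f}")
# -> GLSI ~0.914 vs LSI 1.000 (no violation) at alpha ~ pi/17,
#    nearly equal thresholds near alpha_B ~ pi/9, and both sqrt(3)/3 at pi/4
\end{verbatim}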
\emph{Example 3.---}Let us consider the following asymmetric two-qubit mixed state~\cite{chen13} \begin{equation}\label{avnstate} \rho_{2}=V|\Psi(\alpha)\rangle\langle\Psi(\alpha)|+(1-V)|\Phi(\alpha)\rangle\langle\Phi(\alpha)|, \end{equation} where $|\Psi(\alpha)\rangle= \cos\alpha |HH\rangle + \sin\alpha |VV\rangle$ and $|\Phi(\alpha)\rangle=\sin\alpha|HV\rangle+\cos\alpha|VH\rangle$. Obviously, $\rho_{2}$ is entangled in the region $\alpha\in(0,\pi/2),\;V\in[0,1/2)\cup(1/2,1]$. For the state in Eq. (\ref{avnstate}), we compare the performance of the usual 3-setting LSI (blue line) and the 3-setting GLSI (red line) in Fig.~\ref{AVN-1}. Because the state $\rho_2$ is unchanged under the combined operations $V \rightleftharpoons (1-V)$ and flipping of Alice's states (i.e., $|H\rangle \rightleftharpoons|V\rangle$), the performance in the region $V\in(1/2,1]$ is the same as that in the region $V\in[0,1/2)$. Without loss of generality, we choose the region $\alpha\in(0,\pi/2),\;V\in[0,1/2)$; for the usual 3-setting LSI the upper bound is $V_{\mathrm{Max}}=\frac{1-\sqrt{3}+2\sin{2\alpha}}{2(1+\sin{2\alpha})}$. It follows that no states violate the usual 3-setting LSI in the range $\alpha\in[0,\frac{\arcsin{\frac{\sqrt{3}-1}{2}}}{2}\approx 0.1873]$. However, the 3-setting GLSI can detect steerable states in a wider region of $\alpha$ and $V$, which can be determined numerically. See Fig.~\ref{AVN-1} and Fig.~\ref{AVN-2}: for example, when $\alpha=\frac{\arcsin{\frac{\sqrt{3}-1}{2}}}{2} (\approx\frac{\pi}{17})$, no states violate the usual LSI, but the GLSI can still detect steerable states in the region $V\in [0, V_{\mathrm{Max}}\approx 0.0889]$. In the range $\alpha\in[\frac{\arcsin{\frac{\sqrt{3}-1}{2}}}{2},\alpha_{B}]$, with $\alpha_{B}\approx0.4597\approx\frac{5\pi}{34}$, there exist states that violate the usual 3-setting LSI, but the corresponding upper bound $V_{\mathrm{Max}}$ is lower than that of the 3-setting GLSI. In the range $\alpha\in[\alpha_{B},\frac{\pi}{4}]$, both inequalities perform almost equivalently in the steering test. When $\alpha=\frac{\pi}{4}$, both achieve $V_{\mathrm{Max}}=\frac{3-\sqrt{3}}{4}$. \section{Appendix C. Experimental details.} A 404 nm laser is sent into a nonlinear BBO crystal to generate the maximally entangled state $|\phi\rangle=\frac{1}{\sqrt{2}}(|H\rangle_{A} |V\rangle_{B}-|V\rangle_{A} |H\rangle_{B})$ with an average fidelity above $99\%$. By setting HWP1 at $0^{o}$, the photon of Bob passes through BD1, which splits the photon into two paths, the upper~($u$) path and the lower~($l$) path, according to its polarization, vertical (V) or horizontal (H). If HWP2 is rotated by an angle $\frac{\beta}{2}$ and HWP3 is fixed at $45^{o}$, the two-photon entangled state becomes \begin{equation} |\Psi\rangle=\frac{\sin\beta}{\sqrt{2}}|H\rangle_{A} |H\rangle_{Bu}-\frac{\cos\beta}{\sqrt{2}}|H\rangle_{A} |V\rangle_{Bu}+\frac{1}{\sqrt{2}}|V\rangle_{A} |V\rangle_{Bl}, \end{equation} where $u$ and $l$ denote the upper and lower paths of Bob's photon, respectively. After Bob's photon passes through BD2, the V-polarized component of the entangled state is lost in the upper path.
One can verify that the matrix describing the action of the asymmetric loss interferometer is \begin{equation} \begin{pmatrix} \sin\beta & 0 \\ 0 & 1 \end{pmatrix}. \end{equation} Therefore, the final (unnormalized) state is given by \begin{figure*} \caption{\label{methods}Rotation angles of the HWPs in the beam-displacer interferometer used for the preparation of the quantum states (see text).} \end{figure*} \begin{equation} |\Psi\rangle=\frac{\sin\beta}{\sqrt{2}}|H\rangle_{A} |H\rangle_{B}+\frac{1}{\sqrt{2}}|V\rangle_{A} |V\rangle_{B}. \end{equation} After normalization, the two-qubit entangled state becomes \begin{equation} |\Psi\rangle=\frac{\sin\beta}{\sqrt{(\sin\beta)^{2}+1}}|H\rangle_{A} |H\rangle_{B}+\frac{1}{\sqrt{(\sin\beta)^{2}+1}}|V\rangle_{A} |V\rangle_{B}. \end{equation} Compared with the form in the main text, by rotating HWP2 by $\beta=\arcsin(\sqrt{\frac{1}{(\sin\alpha)^2}-1})\cdot\frac{90}{\pi}\in[0,45^{o}]$, we can generate \begin{equation} |\Psi\rangle=\cos\alpha|H\rangle_{A} |H\rangle_{B}+\sin\alpha|V\rangle_{A} |V\rangle_{B}. \end{equation} In our experiments, the verification of the mixed states is achieved by probabilistically mixing the corresponding pure states. Specifically, we measured the corresponding observables on different pure states and post-processed the data (weighting the data from these pure states by the corresponding probabilities and mixing them together) to obtain the experimental data for the different mixed states. We now show how to construct the two types of mixed states, the generalized Werner state and an asymmetric mixed state~\cite{chen13}, \begin{eqnarray} \rho_{1}=V|\Psi\rangle\langle\Psi|+(1-V)\frac{\openone\otimes\openone}{4},\\ \rho_{2}=V|\Psi\rangle\langle\Psi|+(1-V)|\Phi\rangle\langle\Phi|, \end{eqnarray} where $|\Phi\rangle=\sin\alpha|H\rangle_{A} |V\rangle_{B}+\cos\alpha|V\rangle_{A} |H\rangle_{B}$, and $V\in[0,1]$ is the visibility (the probability of $|\Psi\rangle\langle\Psi|$). The preparation of the maximally mixed state $\frac{\openone\otimes\openone}{4}$ is simulated by mixing the four states $|HH\rangle$, $|HV\rangle$, $|VH\rangle$, and $|VV\rangle$ with equal probability. The generalized Werner state is simulated by mixing $|\Psi\rangle\langle\Psi|$ and the maximally mixed state with probabilities $V$ and $1-V$, respectively. The asymmetric mixed state is simulated by mixing $|\Psi\rangle$ and $|\Phi\rangle$ with probabilities $V$ and $1-V$, respectively. The specific rotation angles of the HWPs in the beam-displacer interferometer for preparing these quantum states are shown in the table of Fig.~\ref{methods}. \section{Funding} This work was supported by the National Key R$\&$D Program of China (2017YFA0305200, 2016YFA0301300), the Key R$\&$D Program of Guangdong Province (2018B030329001, 2018B030325001), the National Natural Science Foundation of China (61974168, 12075245, 11875167, 12075001), and the Xiaoxiang Scholars Programme of Hunan Normal University. \section{Disclosures} The authors declare that there are no competing interests. T. Feng, C. R and Q. F contributed equally to this work. \end{document}
\begin{document} \begin{frontmatter} \title{Optimal and Better Transport Plans} \author[UNI]{Mathias Beiglb\"ock\thanksref{S9612}}, \author[DMG]{Martin Goldstern}, \author[DMG]{Gabriel Maresch \thanksref{Y328}}, \author[UNI]{Walter Schachermayer\thanksref{multi}} \thanks[S9612]{Supported by the Austrian Science Fund (FWF) under grant S9612.} \thanks[Y328]{Supported by the Austrian Science Fund (FWF) under grant Y328 and P18308.} \thanks[multi]{Supported by the Austrian Science Fund (FWF) under grant P19456, from the Vienna Science and Technology Fund (WWTF) under grant MA13 and from the Christian Doppler Research Association (CDG)} \address[DMG]{Institut f\"ur Diskrete Mathematik und Geometrie, Technische Universit\"at Wien\endgraf Wiedner Hauptstra\ss e 8-10/104,1040 Wien, Austria} \address[UNI]{Fakult\"at f\"ur Mathematik, Universit\"at Wien\endgraf Nordbergstra\ss e 15, 1090 Wien, Austria} \begin{abstract} We consider the Monge-Kantorovich transport problem in a purely measure theoretic setting, i.e.\ without imposing continuity assumptions on the cost function. It is known that transport plans which are concentrated on $c$-monotone sets are optimal, provided the cost function $c$ is either lower semi-continuous and finite, or continuous and may possibly attain the value $\infty$. We show that this is true in a more general setting, in particular for merely Borel measurable cost functions provided that $\{c=\infty\}$ is the union of a closed set and a negligible set. In a previous paper Schachermayer and Teichmann considered strongly $c$-monotone transport plans and proved that every strongly $c$-monotone transport plan is optimal. We establish that transport plans are strongly $c$-monotone if and only if they satisfy a ``better'' notion of optimality called robust optimality. \end{abstract} \begin{keyword} Monge-Kantorovich problem, c-cyclically monotone, strongly c-monotone, measurable cost function \end{keyword} \end{frontmatter} \section{Introduction}\label{Sintro} We consider the \emph{Monge-Kantorovich transport problem} $(\mu,\nu,c)$ for Borel probability measures $\mu,\nu$ on Polish spaces $X,Y$ and a Borel measurable cost function $c:X\times Y\to [0,\infty]$. As standard references on the theory of mass transport we mention \cite{AmPr03,RaRu98,Vill03,Vill05}. By $\Pi(\mu,\nu)$ we denote the set of all probability measures on $X\times Y$ with $X$-marginal $\mu$ and $Y$-marginal $\nu$. For a Borel measurable \emph{cost function} $c:X\times Y\to[0,\infty]$ the transport costs of a given transport plan $\pi\in\Pi(\mu,\nu)$ are defined by \begin{equation}\label{eqncost} I_c[\pi]:=\int_{X\times Y} c(x,y) d\pi. \end{equation} $\pi$ is called a \emph{finite} transport plan if $I_c[\pi]<\infty$. A nice interpretation of the Monge-Kantorovich transport problem is given by C{\'e}dric Villani in Chapter 3 of the impressive monograph \cite{Vill05}: \medskip\begin{quotation}\begin{small} ``Consider a large number of bakeries, producing breads, that should be transported each morning to caf{\'e}s where consumers will eat them. The amount of bread that can be produced at each bakery, and the amount that will be consumed at each caf{\'e} are known in advance, and can be modeled as probability measures (there is a ``density of production'' and a ``density of consumption'') on a certain space, which in our case would be Paris (equipped with the natural metric such that the distance between two points is the length of the shortest path joining them).
The problem is to find in practice where each unit of bread should go, in such a way as to minimize the total transport cost."\end{small}\end{quotation}\medskip We are interested in \emph{optimal} transport plans, i.e.\ minimizers of the functional $I_c[\cdot]$ and their characterization via the notion of $c$-monotonicity. \begin{defn}\label{Dcmon} A Borel set $\Gamma\subseteq X\times Y$ is called \emph{$c$-monotone} if \begin{equation}\label{eqncmon} \sum_{i=1}^n c(x_i,y_i)\leq \sum_{i=1}^n c(x_{i},y_{i+1}) \end{equation} for all pairs $(x_1,y_1),\dotsc, (x_n,y_n)\in \Gamma$ using the convention $y_{n+1}:=y_1$. A transport plan $\pi$ is called $c$-monotone if there exists a $c$-monotone $\Gamma$ with $\pi(\Gamma)=1$. \end{defn} In the literature (e.g.\ \cite{AmPr03,GaMc96,KnSm92,Prat07,ScTe08}) the following characterization was established under various continuity assumptions on the cost function. Our main result states that those assumptions are not required. \begin{thmT}\label{CmonEquivOptimal}Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and let $c: X\times Y \to [0,\infty]$ be a Borel measurable cost function. \begin{enumerate} \item \label{opt->cmon} Every finite optimal transport plan is $c$-monotone. \item \label{cmon->opt} Every finite $c$-monotone transport plan is optimal if there exist a closed set $F$ and a $\mu\otimes\nu$-null set $N$ such that $\{(x,y): c(x,y)=\infty\} = F\cup N$. \end{enumerate} \end{thmT} Thus in the case of a cost function which does not attain the value $\infty$ the equivalence of optimality and $c$-monotonicity is valid without any restrictions beyond the obvious measurability conditions inherent in the formulation of the problem. The subsequent construction due to Ambrosio and Pratelli in \cite[Example 3.5]{AmPr03} shows that if $c$ is allowed to attain $\infty$ the implication ``$c$-monotone $\Rightarrow$ optimal'' does not hold without some additional assumption as in Theorem \ref{CmonEquivOptimal}.b. \begin{exmp}[Ambrosio and Pratelli]\label{ExAmPra} Let $X=Y= [0,1]$, equipped with Lebesgue measure $\lambda=\mu=\nu$. Pick $\alpha\in [0,1)$ irrational. Set $$\Gamma_0=\{(x,x):x\in X\},\quad\Gamma_1=\{(x,x\oplus\alpha):x\in X\},$$ where $\oplus$ is addition modulo $1$. Let $c: X\times Y \to [0,\infty]$ be such that $c=a\in[0,\infty)$ on $\Gamma_0$, $c=b\in[0,\infty)$ on $ \Gamma_1$ and $c=\infty$ otherwise. It is then easy to check that $\Gamma_0$ and $\Gamma_1$ are $c$-monotone sets. Using the maps $f_0, f_1: X \to X\times Y$, $f_0(x)=(x,x), f_1(x) = (x,x\oplus \alpha)$ one defines the transport plans $\pi_0=f_0{}_{\#}\lambda$, $\pi_1=f_1{}_{\#} \lambda$ supported by $\Gamma_0$ respectively $\Gamma_1$. Then $\pi_0$ and $\pi_1$ are finite $c$-monotone transport plans, but as $I_c[\pi_0]=a, I_c[\pi_1]=b$ it depends on the choice of $a$ and $b$ which transport plan is optimal. Note that in contrast to the assumption in Theorem \ref{CmonEquivOptimal}.b the set $\{(x,y)\in X\times Y:c=\infty\}$ is open. \end{exmp} We remark that rather trivial (folkloristic) examples show that no optimal transport plan need exist if the cost function does not satisfy suitable continuity assumptions. \begin{exmp} Consider the task to transport points on the real line (equipped with the Lebesgue measure) from the interval $[0,1)$ to $[1,2)$ where the cost of moving one point to another is the squared distance between these points ($X=[0,1), Y= [1,2)$, $c(x,y)=(x-y)^2$, $\mu=\nu=\lambda$).
The simplest way to achieve this transport is to shift every point by $1$. This results in transport costs of $1$ and one easily checks that all other transport plans are more expensive. If we now alter the cost function to be $2$ whenever two points have distance $1$, i.e.\ if we set $$\tilde c(x,y)= \begin{cases} 2& \mbox{if } y=x+1 \\ c(x,y) & \mbox{otherwise} \end{cases},$$ it becomes impossible to find a transport plan $\pi\in\Pi(\mu,\nu)$ with total transport costs $I_{\tilde c} [\pi]=1$, but it is still possible to achieve transport costs arbitrarily close to $1$. (For instance, shift $[0,1-\varepsilon)$ to $[1+\varepsilon,2)$ and $[1-\varepsilon,1)$ to $[1,1+\varepsilon)$ for small $\varepsilon>0$.) \end{exmp} \subsection{History of the problem} The notion of $c$-monotonicity originates in convex analysis. The well known Rockafellar Theorem (see for instance \cite[Theorem 3]{Rock66} or \cite[Theorem 2.27]{Vill03}) and its generalization, R\"uschendorf's Theorem (see \cite[Lemma 2.1]{R96}), characterize $c$-monotonicity in $\mathbb{R}^n$ in terms of integrability. The definitions of $c$-concave functions and super-differentials can be found for instance in \cite[Section 2.4]{Vill03}. \begin{thmcite}{Rockafellar}\label{Trockafellar} A non-empty set $\Gamma\subseteq \mathbb{R}^n\times \mathbb{R}^n$ is cyclically monotone (that is, $c$-monotone with respect to the squared euclidean distance) if and only if there exists a l.s.c.\ concave function $\varphi: \mathbb{R}^n \to \mathbb{R}$ such that $\Gamma$ is contained in the super-differential $\partial(\varphi)$. \end{thmcite} \begin{thmcite}{R\"uschendorf}\label{Trueschendorf} Let $X,Y$ be abstract spaces and $c: X\times Y\to [0,\infty]$ arbitrary. Let $\Gamma\subseteq X\times Y$ be $c$-monotone. Then there exists a $c$-concave function $\varphi: X \to [-\infty,\infty)$ such that $\Gamma$ is contained in the $c$-super-differential $\partial^c(\varphi)$. \end{thmcite} Important results of Gangbo and McCann \cite{GaMc96} and Brenier \cite[Theorem 2.12]{Vill03} use these potentials to establish uniqueness of the solutions of the Monge-Kantorovich transport problem in $\mathbb{R}^n$ for different types of cost functions subject to certain regularity conditions. \medskip \emph{Optimality implies $c$-monotonicity: } This is evident in the discrete case if $X$ and $Y$ are finite sets. For suppose that $\pi$ is a transport plan for which $c$-monotonicity is violated on pairs $(x_1,y_1),\dotsc,(x_n,y_n)$ where all points $x_1,\dotsc, x_n$ and $y_1,\dotsc, y_n$ carry positive mass. Then we can reduce costs by sending the mass $\alpha>0$, for $\alpha$ sufficiently small, from $x_i$ to $y_{i+1}$ instead of $y_i$, that is, we replace the original transport plan $\pi$ with \begin{equation}\pi^\beta=\pi+\alpha \sum_{i=1}^n\delta_{(x_i,y_{i+1})} -\alpha \sum_{i=1}^n\delta_{(x_i,y_i)}.\end{equation} (Here we are using the convention $y_{n+1}=y_1.$) Gangbo and McCann (\cite[Theorem 2.3]{GaMc96}) show how continuity assumptions on the cost function can be exploited to extend this to an abstract setting. Hence one obtains: \begin{nil}\label{Popt->cmon} Let $X$ and $Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$. Let $c: X\times Y\to [0,\infty]$ be a l.s.c.\ cost function. Then every finite optimal transport plan is $c$-monotone. \end{nil} Using measure theoretic tools, as developed in the beautiful paper by Kellerer \cite{Kell84}, we are able to extend this to Borel measurable cost functions (Theorem \ref{CmonEquivOptimal}.a.)
without any additional regularity assumption. \medskip \emph{$c$-monotonicity implies optimality:} In the case of finite spaces $X,Y$ this again is nothing more than an easy exercise (\cite[Exercise 2.21]{Vill03}). The problem gets harder in the infinite setting. It was first proved in \cite{GaMc96} that for $X,Y$ compact subsets of $\mathbb{R}^n$ and $c$ a continuous cost function, $c$-monotonicity implies optimality. In a more general setting this was shown in \cite[Theorem 3.2]{AmPr03} for l.s.c.\ cost functions which additionally satisfy the moment conditions \begin{align*} \mu\;\Bigg(\Bigg\{ x: \int_Y c(x,y) d\nu < \infty\Bigg\}\Bigg) &> 0,\\ \nu\;\Bigg(\Bigg\{ y: \int_X c(x,y) d\mu < \infty\Bigg\}\Bigg) &> 0. \end{align*} Further research in this direction was initiated by the following problem posed by Villani in \cite[Problem 2.25]{Vill03}: \begin{nil} For $X=Y=\mathbb{R}^n$ and $c(x,y)=\|x-y\|^2$, the squared euclidean distance, does $c$-monotonicity of a transport plan imply its optimality?\end{nil} A positive answer to this question was given independently by Pratelli in \cite{Prat07} and by Schachermayer and Teichmann in \cite{ScTe08}. Pratelli proves the result for countable spaces and shows that it extends to the Polish case by means of approximation if the cost function $c:X\times Y\to[0,\infty]$ is continuous. The paper \cite{ScTe08} pursues a different approach: The notion of \emph{strong $c$-monotonicity} is introduced. From this property optimality follows fairly easily and the main part of the paper is concerned with the fact that strong $c$-monotonicity follows from the usual notion of $c$-monotonicity in the Polish setting if $c$ is assumed to be l.s.c.\ and finitely valued. Part (b) of Theorem \ref{CmonEquivOptimal} unifies these statements: Pratelli's result follows from the fact that for continuous $c: X\times Y \to [0,\infty]$ the set $\{ c = \infty \}=c^{-1}[\{\infty\}]$ is closed; the Schachermayer-Teichmann result follows since for finite $c$ the set $\{ c = \infty \}$ is empty. Similar to \cite{ScTe08} our proofs are based on the concept of \emph{strong $c$-monotonicity}. In Section \ref{StrongNotions} we present \emph{robust optimality} which is a variant of optimality that we shall show to be equivalent to strong $c$-monotonicity. As not every optimal transport plan is also robustly optimal, this accounts for the somewhat provocative concept of ``better than optimal'' transport plans alluded to in the title of this paper. Correspondingly the notion of strong $c$-monotonicity is in fact stronger than ordinary $c$-monotonicity (at least if $c$ is allowed to assume the value $\infty$). \subsection{Strong Notions}\label{StrongNotions} It turns out that optimality of a transport plan is intimately connected with the notion of \emph{strong $c$-monotonicity} introduced in \cite{ScTe08}. \begin{defn}\label{Dstrong} A Borel set $\Gamma\subseteq X\times Y$ is \emph{strongly $c$-monotone} if there exist Borel measurable functions $\varphi:X\to [-\infty,\infty)$ and $\psi:Y\to[-\infty,\infty)$ such that $\varphi(x)+\psi(y)\leq c(x,y)$ for all $(x,y)\in X\times Y$ and $ \varphi(x)+ \psi(y)=c(x,y)$ for all $(x,y)\in\Gamma$. A transport plan $\pi\in \Pi(\mu,\nu)$ is \emph{strongly $c$-monotone} if $\pi$ is concentrated on a strongly $c$-monotone Borel set $\Gamma$.
\end{defn} Strong $c$-monotonicity implies $c$-monotonicity since \begin{align}\label{eqnstrong} \sum_{i=1}^n c(x_{i+1},y_{i})\ge\sum_{i=1}^n \varphi(x_{i+1})+\psi(y_{i})= \sum_{i=1}^n \varphi(x_i)+\psi(y_i)=\sum_{i=1}^n c(x_i,y_i) \end{align} whenever $(x_1,y_1),\dotsc, (x_n,y_n)\in \Gamma$. If there are \emph{integrable} functions $\varphi$ and $\psi$ witnessing that $\pi$ is strongly $c$-monotone, then for every $\tilde \pi\in\Pi(\mu,\nu)$ we can estimate: \begin{align*} I_c[\pi] =& \int_{\Gamma} c(x,y)d\pi=\int_{\Gamma} [\varphi(x)+\psi(y)]d\pi=\\ =&\int_{X} \varphi(x)d\mu +\int_{Y} \psi(y)d\nu=\int_{X\times Y} [\varphi(x)+\psi(y)]d\tilde\pi \le I_c[\tilde\pi]. \end{align*} Thus in this case strong $c$-monotonicity implies optimality. However there is no reason why the Borel measurable functions $\varphi,\psi$ appearing in Definition \ref{Dstrong} should be integrable. In \cite[Proposition 2.1]{ScTe08} it is shown that for l.s.c.\ cost functions, there is a way of truncating which allows one to also handle non-integrable functions $\varphi$ and $\psi$. The proof extends to merely Borel measurable functions; hence we have: \begin{prop}\label{StrongCmonImpliesOptimal} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and let $c: X\times Y\to [0,\infty]$ be Borel measurable. Then every finite transport plan which is strongly $c$-monotone is optimal. \end{prop} No new ideas are required to extend \cite[Proposition 2.1]{ScTe08} to the present setting but since Proposition \ref{StrongCmonImpliesOptimal} is a crucial ingredient of several proofs in this paper we provide an outline of the argument in Section 3. As it will turn out, strongly $c$-monotone transport plans even satisfy a ``better'' notion of optimality, called \emph{robust optimality}. \begin{defn}\label{Drobust} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and let $c:X\times Y\to [0,\infty]$ be a Borel measurable cost function. A transport plan $\pi\in \Pi(\mu,\nu)$ is \emph{robustly optimal} if, for any Polish space $Z$ and any finite Borel measure $\lambda\ge 0$ on $Z$, there exists a Borel measurable extension $\tilde c: (X\cup Z)\times (Y\cup Z)\to [0,\infty]$ satisfying \[\tilde{c}(a,b)=\left\{\begin{array}{ccl} c(a,b)&\mbox{ for }& a\in X, b\in Y\\ 0 &\mbox{ for }& a,b \in Z\\ <\infty&\mbox{ otherwise }\end{array} \right.\] such that the measure $\tilde\pi:=\pi+\left(\mbox{id}_Z\!\times\!\mbox{id}_Z\right){}_{\#}\lambda$ is optimal on $(X\cup Z)\times (Y\cup Z)$. Note that $\tilde \pi$ is not a probability measure, but has total mass $1+\lambda(Z)\in [1,\infty)$. \end{defn} Note that since we allow the possibility $\lambda(Z)=0$, every robustly optimal transport plan is in particular optimal in the usual sense. Robust optimality has a colorful ``economic'' interpretation: a tycoon wants to enter the Parisian croissant consortium. She builds a storage of size $\lambda(Z)$ where she buys up croissants and sends them to the caf\'{e}s. Her hope is that by offering low transport costs, the previously optimal transport plan $\pi$ will not be optimal anymore, so that the traditional relations between bakeries and caf\'{e}s will collapse. Of course, the authorities of Paris will try to defend their structure by imposing (possibly very high, but still finite) tolls for all transports to and from the tycoon's storage, thus resulting in finite costs $\tilde c(a,b)$ for $(a,b)\in (X\times Z)\cup (Z\times Y)$.
In the case of robustly optimal $\pi$ they can successfully defend themselves against the intruder. Every robustly optimal transport plan $\pi$ is optimal in the usual sense and hence also $c$-monotone. The crucial feature is that robust optimality implies strong $c$-monotonicity. In fact, the two properties are equivalent. \begin{thmT}\label{StrongCmonEquivRobustlyOptimal} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and $c:X\times Y\to [0,\infty]$ a Borel measurable cost function. For a finite transport plan $\pi$ the following assertions are equivalent: \begin{enumerate} \item\label{T2strong} $\pi$ is strongly $c$-monotone. \item\label{T2robust} $\pi$ is robustly optimal. \end{enumerate} \end{thmT} Example \ref{ZeroOneExample} below shows that robust optimality resp.\ strong $c$-monotonicity is in fact a stronger property than usual optimality. \subsection{Putting things together} Finally we want to point out that in the situation where $c$ is finite all previously mentioned notions of monotonicity and optimality coincide. We can even pass to a slightly more general setting than finite cost functions and obtain the following result. \begin{thmT}\label{AllEquiv} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and let $c: X\times Y\to [0,\infty]$ be Borel measurable and $\mu\otimes\nu$-a.e.\ finite. For a finite transport plan $\pi$ the following assertions are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item \label{T3optimal} $\pi$ is optimal. \item \label{T3cmon} $\pi$ is $c$-monotone. \item \label{T3robust} $\pi$ is robustly optimal. \item \label{T3strong} $\pi$ is strongly $c$-monotone. \end{enumerate} \end{thmT} The equivalence of (1), (2) and (4) was established in \cite{ScTe08} under the additional assumption that $c$ is l.s.c.\ and finitely valued. We sum up the situation under fully general assumptions. The upper line (1 and 2) relates to the optimality of a transport plan $\pi$. The lower line (3 and 4) contains the two equivalent strong concepts and implies the upper line but - without additional assumptions - not vice versa. \begin{center}\begin{figure} \caption{Implications between properties of transport plans} \label{diagram} \end{figure} \end{center} Note that the implications symbolized by dotted lines in Figure 1 are not true without additional assumptions ($(2) \not\Rightarrow (1)$: Example \ref{ExAmPra}, $(1) \not\Rightarrow (3)$ resp.\ $(4)$: Example \ref{ZeroOneExample}). The paper is organized as follows: In Section 2 we prove that every optimal transport plan $\pi$ is $c$-monotone (Theorem \ref{CmonEquivOptimal}.a). In Section 3 we introduce an auxiliary property (\emph{connectedness}) of the support of a transport plan and show that it allows one to pass from $c$-monotonicity to strong $c$-monotonicity. Moreover we establish that strong $c$-monotonicity implies optimality (Proposition \ref{StrongCmonImpliesOptimal}). Section 4 is concerned with the proof of Theorem \ref{CmonEquivOptimal}.b. Finally we complete the proofs of Theorems \ref{StrongCmonEquivRobustlyOptimal} and \ref{AllEquiv} in Section 5. We observe that in all the above discussion we only referred to the Borel structure of the Polish spaces $X,Y$, and never referred to the topological structure. Hence the above results (with the exception of Theorem \ref{CmonEquivOptimal}.b.) hold true for standard Borel measure spaces.
In fact it seems likely that our results can be transferred to the setting of perfect measure spaces. (See \cite{Rama02} for a general overview resp.\ \cite{RaRu98} for a treatment of problems of mass transport in this framework.) However we do not pursue this direction. \noindent {\bf Acknowledgement.} The authors are indebted to the extremely careful referee who noticed many inaccuracies resp.\ mistakes and whose insightful suggestions led to a more accessible presentation of several results in this paper. \section{Improving Transports}\label{SmilestoneA} Assume that some transport plan $\pi\in\Pi(\mu,\nu)$ is given. From a purely heuristic point of view there are either few tuples $\left((x_1,y_1),\dotsc, (x_n,y_n)\right)$ along which $c$-monotonicity is violated, or there are many such tuples, in which case $\pi$ can be enhanced by rerouting the transport along these tuples. As the notion of $c$-monotonicity refers to $n$-tuples it turns out that it is necessary to consider finitely many measure spaces to properly formulate what is meant by ``few'' resp.\ ``many''. Let $X_1,\dotsc,X_n$ be Polish spaces equipped with finite Borel measures $\mu_1,\dotsc,\mu_n$. By $\Pi(\mu_1,\dotsc,\mu_n)\subseteq \mathcal{M}(X_1\times\dotsb\times X_n)$ we denote the set of all Borel measures on $X_1\times\dotsb\times X_n$ such that the $i$-th marginal measure coincides with the Borel measure $\mu_i$ for $i=1,\dotsc, n$. By $p_{X_i}: X_1\times\dotsb\times X_n\to X_i$ we denote the projection onto the $i$-th component. $B\subseteq X_1\times\dotsb\times X_n$ is called an \emph{L-shaped null set} if there exist null sets $N_1\subseteq X_1,\dotsc,N_n\subseteq X_n$ such that $B\subseteq \bigcup_{i=1}^n p_{X_i}^{-1} [N_i]$. The Borel sets of $X_1\times\dotsb\times X_n$ satisfy a nice dichotomy. They are either L-shaped null sets or they carry a positive measure whose marginals are absolutely continuous with respect to $\mu_1,\dotsc,\mu_n$: \begin{prop}\label{PKmeasure} Let $X_1,\ldots,X_n,n\geq 2$ be Polish spaces equipped with Borel probability measures $\mu_1,\dotsc,\mu_n$. Then for any Borel set $B\subseteq X_1\times\dotsb\times X_n$ let \begin{align} P(B):=&\sup\left\{ \pi(B): \pi\in\Pi(\mu_1,\dotsc,\mu_n)\right\}\\ L(B):=&\inf\left\{\sum_{i=1}^n \mu_i(B_i): B_i \subseteq X_i \mbox{ and } B\subseteq \bigcup_{i=1}^n p_{X_i}^{-1}[B_i]\right\}. \end{align} Then $P(B)\geq1/n\,L(B)$. In particular $B$ satisfies one of the following alternatives: \begin{enumerate} \item $B$ is an L-shaped null set. \item There exists $\pi\in \Pi(\mu_1,\dotsc,\mu_n)$ such that $\pi(B)>0$. \end{enumerate} \end{prop} The main ingredient in the proof of Proposition \ref{PKmeasure} is the following duality theorem due to Kellerer (see \cite[Lemma 1.8(a), Corollary 2.18]{Kell84}). \begin{thmcite}{Kellerer}\label{KellererDuality} Let $X_1,\ldots,X_n,n\geq 2$ be Polish spaces equipped with Borel probability measures $\mu_1,\dotsc, \mu_n$ and assume that $c: X=X_1\times\dotsb\times X_n\to\mathbb{R}$ is Borel measurable and that $\overline c := \sup_X c, \underline c:=\inf_X c$ are finite. Set \begin{align*} I(c)=&\inf\left\{ \int_X c\ d\pi: \pi\in\Pi(\mu_1,\dotsc,\mu_n)\right\},\\ S(c)=&\sup\left\{\sum_{i=1}^n\int_{X_i} \varphi_i \ d\mu_i: c(x_1,\dotsc,x_n)\geq \sum_{i=1}^n \varphi_i(x_i), \frac {1}{n} \overline c -(\overline c-\underline c)\leq \varphi_i\leq \frac {1} {n} \overline c \right\}. \end{align*} Then $I(c)=S(c)$.
\end{thmcite} \begin{pf*}{PROOF of Proposition \ref{PKmeasure}.} Observe that $-I(-\mathbbm{1}_B)=P(B)$ and that \begin{equation}\label{reversedDefinition} -S(-\mathbbm{1}_B)= \inf\!\left\{ \sum_{i=1}^n \int_{X_i}\chi_i d\mu_i:\mathbbm{1}_B(x_1,\dotsc, x_n)\le\sum_{i=1}^n \chi_i (x_i), 0\le \chi_i\le 1\right\}\!.\end{equation} By Kellerer's Theorem $-S(-\mathbbm{1}_B)=-I(-\mathbbm{1}_B)$. Thus it remains to show that $-S(-\mathbbm{1}_B)\geq 1/n\, L(B)$. Fix functions $\chi_1,\ldots,\chi_n$ as in (\ref{reversedDefinition}). Then for each $(x_1,\ldots,x_n)\in B$ one has $1=\mathbbm{1}_B(x_1,\ldots, x_n)\le\sum_{i=1}^n \chi_i (x_i)$ and hence there exists some $i$ such that $\chi_i(x_i)\geq 1/n$. Thus $B\subseteq\bigcup_{i=1}^n p_{X_i}^{-1}[\{\chi_i\geq 1/n\}]$. It follows that \begin{eqnarray*} -S(-\mathbbm{1}_B)&\geq& \inf\left\{ \sum_{i=1}^n \int_{X_i}\chi_i d\mu_i: B\subseteq \bigcup_{i=1}^n p_{X_i}^{-1}[\{\chi_i\geq 1/n\}], 0\le \chi_i\le 1\right\}\\ &\geq& \inf\left\{ \sum_{i=1}^n \frac1n \mu_i(\{\chi_i\geq 1/ n\}) : B\subseteq \bigcup_{i=1}^n p_{X_i}^{-1}[\{\chi_i\geq 1/n\}] \right\}\geq \frac1n L(B) \end{eqnarray*} From this we deduce that either $L(B) =0$ or that there exists $\pi\in \Pi(\mu_1,\dotsc,\mu_n)$ such that $\pi(B)>0$. The last assertion of Proposition \ref{PKmeasure} now follows from the following Lemma due to Rich\'ard Balka and M\'arton Elekes (private communication). \qed\end{pf*} \begin{lem}\label{Lelekes} Suppose that $L(B)=0$ for a Borel set $B\subseteq X_1\times\dotsb\times X_n$. Then $B$ is an L-shaped null set. \end{lem} \begin{pf} Fix $\varepsilon>0$ and Borel sets $B_1^{(k)},\dotsc, B_n^{(k)}$ with $\mu_i(B_i^{(k)})\le \varepsilon\, 2^{-k}$ such that for each $k$ \[B\subseteq p_{X_1}^{-1}[B_1^{(k)}]\cup\dotsc\cup p_{X_n}^{-1}[B_n^{(k)}].\] Let $B_i:=\bigcup_{k=1}^{\infty} B_i^{(k)}$ for $i=2,\dotsc,n$ such that \[B\subseteq p_{X_1}^{-1}[B_1^{(k)}]\cup p_{X_2}^{-1}[B_2]\cup\dotsc\cup p_{X_n}^{-1}[B_n]\] for each $k\in \mathbb{N}$. Thus with $B_1:=\bigcap_{k=1}^{\infty} B_1^{(k)}$, \[B\subseteq p_{X_1}^{-1}[B_1]\cup p_{X_2}^{-1}[B_2]\cup\dotsc\cup p_{X_n}^{-1}[B_n].\] Hence we can assume from now on that $\mu_1(B_1)=0$ and that $\mu_i(B_i)$ is arbitrarily small for $i=2,\dotsc, n$. Iterating this argument in the obvious way we get the statement. \qed\end{pf} \begin{rem} In the case $n=2$ it was shown in \cite[Proposition 3.3]{Kell84} that $L(B)=P(B)$ for every Borel set $B\subseteq X_1\times X_2$. However, for $n>2$, equality does not hold true, cf.\ \cite[Example 3.4]{Kell84}. \end{rem} \begin{defn}\label{Dbneps} Let $X,Y$ be Polish spaces. For a Borel measurable cost function $c: X\times Y \to [0,\infty], n\in\mathbb{N}$ and $\varepsilon >0$ we set \begin{equation}\label{eqnbneps} B_{n,\varepsilon}:=\Bigg\{(x_i,y_i)_{i=1}^n\in (X\times Y)^n : \sum_{i=1}^n c(x_i,y_i) \ge \sum_{i=1}^n c(x_i,y_{i+1})+\varepsilon\Bigg\}. \end{equation} \end{defn} The definition of the sets $B_{n,\varepsilon}$ is implicitly given in \cite[Theorem 2.3]{GaMc96}. The idea behind it is that $(x_i,y_i)_{i=1}^n\in B_{n,\varepsilon}$ tells us that transport costs can be reduced if ``$x_i$ is transported to $y_{i+1}$ instead of $y_{i}$'' (recall the conventions $x_{n+1}=x_1$ resp.\ $y_{n+1}=y_1$). In what follows we make this statement precise and give a coordinate free formulation.
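Before doing so, it may help to see the rerouting mechanism behind $B_{n,\varepsilon}$ in the simplest finite case, mirroring the plan $\pi^\beta$ of Section \ref{Sintro}: if $c$-monotonicity fails on a pair, shifting a small mass $\alpha$ along the cycle strictly lowers the cost while preserving the marginals. A minimal numerical sketch (not part of the argument; it assumes numpy):
\begin{verbatim}
import numpy as np

# two bakeries, two cafes; c-monotonicity fails on ((x1,y1),(x2,y2))
c  = np.array([[1.0, 0.0],
               [0.0, 1.0]])        # c(x1,y1)+c(x2,y2) > c(x1,y2)+c(x2,y1)
pi = np.array([[0.5, 0.0],
               [0.0, 0.5]])        # plan concentrated on the diagonal

alpha = 0.5                        # mass rerouted along the cycle
pi_beta = pi.copy()
pi_beta[[0, 1], [0, 1]] -= alpha   # remove mass from (x_i, y_i)
pi_beta[[0, 1], [1, 0]] += alpha   # add it at (x_i, y_{i+1})

assert np.allclose(pi_beta.sum(0), pi.sum(0))   # same marginals
assert np.allclose(pi_beta.sum(1), pi.sum(1))
print((c*pi).sum(), (c*pi_beta).sum())  # 1.0 -> 0.0: strictly cheaper
\end{verbatim}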
Denote by $\sigma, \tau: (X\times Y)^n \to (X\times Y)^n$ the shifts defined via \begin{align} \sigma: (x_i,y_i)_{i=1}^n &\mapsto (x_{i+1},y_{i+1})_{i=1}^n \\ \tau: (x_i,y_i)_{i=1}^n &\mapsto \hspace{2ex}(x_{i},y_{i+1})_{i=1}^n. \end{align} Observe that $\sigma^n=\tau^n=\mbox{Id}_{(X\times Y)^n}$ and that $\sigma$ and $\tau$ commute. Also note that the set $B_{n,\varepsilon}$ from \eqref{eqnbneps} is $\sigma$-invariant (i.e.\ $\sigma(B_{n,\varepsilon})=B_{n,\varepsilon}$), but in general not $\tau$-invariant. Denote by $p_i: (X\times Y)^n \to X\times Y$ the projection on the $i$-th component of the product. The projections $p_X: X\times Y \to X, (x,y)\mapsto x$ and $p_Y: X\times Y \to Y, (x,y)\mapsto y$ are defined as usual and there will be no danger of confusion. \begin{lem}\label{twoAlternatives} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$. Let $\pi$ be a transport plan. Then one of the following alternatives holds: \begin{enumerate} \item $\pi$ is $c$-monotone, \item there exist $n\in\mathbb{N}$, $\varepsilon>0$ and a measure $\kappa \in \Pi(\pi,\ldots,\pi)$ such that $\kappa(B_{n,\varepsilon})>0$. Moreover $\kappa$ can be taken to be both $\sigma$ and $\tau$ invariant. \end{enumerate} \end{lem} \begin{pf} Suppose that $B_{n,\varepsilon}$ is an L-shaped null set for all $n\in \mathbb{N}$ and every $\varepsilon>0$. Then there are Borel sets $S_{n,\varepsilon}^1,\ldots, S_{n,\varepsilon}^n\subseteq X\times Y$ of full $\pi$-measure such that \[\left(S_{n,\varepsilon}^1\times\ldots\times S_{n,\varepsilon}^n\right) \cap B_{n,\varepsilon} = \emptyset\] and $\pi $ is concentrated on the $c$-monotone set \[S=\bigcap_{k=1}^{\infty}\bigcap_{n=1}^\infty\bigcap_{i=1}^n S_{n,1/k}^i.\] If there exist $n\in\mathbb{N}$ and $\varepsilon>0$ such that $B_{n,\varepsilon}$ is not an L-shaped null set, we apply Proposition \ref{PKmeasure} to conclude the existence of a measure $\kappa\in\Pi(\pi,\dotsc,\pi)$ with $\kappa(B_{n,\varepsilon})>0$. To achieve the desired invariance, simply replace $\kappa$ by \begin{equation}\label{eqninv} \frac{1}{n^2}\sum_{i,j=1}^n (\sigma^i\circ\tau^j){}_{\#}\kappa\qed \end{equation} \end{pf} We are now in a position to prove \textbf{Theorem \ref{CmonEquivOptimal}.a}, i.e.\ \begin{nil} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$ and let $c: X\times Y \to [0,\infty]$ be a Borel measurable cost function. If $\pi$ is a finite optimal transport plan, then $\pi$ is $c$-monotone.\end{nil} \begin{pf} Suppose by contradiction that $\pi$ is optimal, $I_c[\pi]<\infty$ but $\pi$ is not $c$-monotone. Then by Lemma \ref{twoAlternatives} there exist $n\in \mathbb{N}$, $\varepsilon>0$ and an invariant measure $\kappa\in\Pi(\pi,\ldots,\pi)$ which gives mass $\alpha>0$ to the Borel set $B_{n,\varepsilon}\subseteq (X\times Y)^n$. Consider now the restriction of $\kappa$ to $B_{n,\varepsilon}$ defined via $ \hat\kappa(A):= \kappa(A\cap B_{n,\varepsilon})$ for Borel sets $A\subseteq (X\times Y)^n$. $\hat\kappa$ is $\sigma$-invariant since both the measure $\kappa$ and the Borel set $B_{n,\varepsilon}$ are $\sigma$-invariant. Denote the marginal of $\hat\kappa$ in the first coordinate $(X\times Y)$ of $(X\times Y)^n$ by $\hat\pi$. Due to $\sigma$-invariance we have \[p_i{}_{\#}\hat\kappa =p_i{}_{\#}(\sigma{}_{\#}\hat\kappa) =(p_i\circ \sigma){}_{\#}\hat\kappa=p_{i+1}{}_{\#}\hat\kappa,\] i.e.\ all marginals coincide and we have $\hat\kappa \in \Pi(\hat\pi,\ldots,\hat\pi)$. 
Furthermore, since $\hat\kappa\le\kappa$, the same is true for the marginals, i.e.\ $\hat\pi\le \pi$. Denote the marginal of $\tau{}_{\#}\hat \kappa$ in the first coordinate $(X\times Y)$ of $(X\times Y)^n$ by $\hat\pi_{\beta}$. As $\sigma$ and $\tau$ commute, $\tau{}_{\#}\hat\kappa$ is $\sigma$-invariant, so the marginals in the other coordinates coincide with $\hat\pi_{\beta}$. An easy calculation shows that $\hat{\pi}$ and $\hat\pi_{\beta}$ have the same marginals in $X$ resp. $Y$: \begin{align*} p_X{}_{\#}\hat\pi_{\beta}&=p_X{}_{\#}(p_i{}_{\#}(\tau{}_{\#}\hat\kappa))=(p_X\circ p_i\circ\tau){}_{\#} \hat\kappa =(p_X\circ p_i){}_{\#} \hat\kappa=p_X{}_{\#}\hat\pi,\\ p_Y{}_{\#}\hat\pi_{\beta}&=p_Y{}_{\#}(p_i{}_{\#}(\tau{}_{\#}\hat\kappa))=(p_Y\circ p_i\circ\tau){}_{\#} \hat\kappa =(p_Y\circ p_{i+1}){}_{\#} \hat\kappa=p_Y{}_{\#}\hat\pi. \end{align*} The equality of the total masses is proved similarly: \begin{align*} \alpha=\hat\pi_{\beta}(X\times Y)&=(p_i\circ\tau){}_{\#} \hat\kappa(X\times Y) =p_i{}_{\#} \hat\kappa(X\times Y)=\hat\pi(X\times Y). \end{align*} Next we compute the transport costs associated to $\hat\pi_{\beta}$: \begin{align*} \int_{X\times Y} c\, d\hat\pi_{\beta} &= \hspace{5ex}\int_{(X\times Y)^n} c\circ p_1\, d(\tau{}_{\#}\hat\kappa) & \text{(marginal property)}\\ &= \frac{1}{n}\sum_{i=1}^n \int_{(X\times Y)^n} c\circ p_i\, d(\tau{}_{\#}\hat\kappa) & \text{($\sigma$-invariance)}\\ &=\frac{1}{n}\sum_{i=1}^n \int_{(X\times Y)^n} (c\circ p_i\circ \tau)\, d\hat\kappa & \text{(push-forward)}\\ &=\frac{1}{n}\sum_{i=1}^n \int_{B_{n,\varepsilon}} (c\circ p_i\circ\tau)\, d\kappa & \text{(definition of $\hat\kappa$)}\\ &\le \frac{1}{n}\int_{B_{n,\varepsilon}}\Big[\sum_{i=1}^n (c\circ p_i)\, - \varepsilon\Big]\;d\kappa& \text{(definition of $B_{n,\varepsilon}$)}\\ &= \hspace{5ex}\int_{X\times Y} c\, d\hat\pi- \varepsilon\frac{\alpha}{n}.&\text{(definition of $\hat\pi$)} \end{align*} To improve the transport plan $\pi$ we define \begin{equation} \pi_{\beta}:=(\pi-\hat\pi)+\hat\pi_{\beta}. \end{equation} Recall that $\pi-\hat\pi$ is a positive measure, so $\pi_{\beta}$ is a positive measure. As $\hat\pi$ and $\hat\pi_{\beta}$ have the same total mass, $\pi_{\beta}$ is a probability measure. Furthermore $\hat\pi$ and $\hat\pi_{\beta}$ have the same marginals, so $\pi_{\beta}$ is indeed a transport plan. We have \begin{equation} I_c[\pi_{\beta}]=I_c[\pi]+ \int_{X\times Y} c\, d(\hat\pi_{\beta}-\hat\pi) \le I_c[\pi]-\varepsilon\frac{\alpha}{n} < I_c[\pi].\qed \end{equation} \end{pf} \section{Connecting $c$-monotonicity and strong $c$-monotonicity}\label{SmilestoneB} The Ambrosio-Pratelli example (Example \ref{ExAmPra}) shows that $c$-monotonicity need not imply strong $c$-monotonicity in general. Subsequently we shall present a condition which ensures that this implication is valid. A $c$-monotone transport plan resists the attempt of enhancement by means of cyclically rerouting. This, however, may be due to the fact that cyclical rerouting is a priori impossible due to infinite transport costs on certain routes. Continuing Villani's interpretation, a situation where rerouting in this consortium of bakeries and caf{\'e}s is possible in a satisfactory way is as follows: Suppose that bakery $x=x_0$ is able to produce one more croissant than it already does and that caf\'{e} $\tilde y$ is short of one croissant. It might not be possible to transport the additional croissant itself to the caf\'e in need, as the costs $c(x, \tilde y)$ may be infinite. 
Nevertheless it might be possible to find another bakery $x_1 $ (which usually supplies caf\'e $y_1$) such that bakery $x$ can transport (with finite costs!) the extra croissant to $y_1$; this leaves us with a now unused item from bakery $x_1$, which can be transported to $\tilde y$ with finite costs. Of course we allow not only one, but finitely many intermediate pairs $(x_1,y_1),\dotsc,(x_n,y_n)$ of bakeries/caf\'{e}s to achieve this relocation of the additional croissant. In the Ambrosio-Pratelli example we can reroute from a point $(x,x\oplus \alpha)\in \Gamma_1$ to a point $(\tilde x,\tilde x\oplus \alpha)\in \Gamma_1$ only if there exists $n\in \mathbb{N}$ such that $x\oplus (n\alpha)=\tilde x$. In particular, irrationality of $\alpha$ implies that if we can redirect with finite costs from $(x,x\oplus \alpha)$ to $(\tilde x,\tilde x\oplus \alpha)$ we never can redirect back from $(\tilde x,\tilde x\oplus \alpha)$ to $(x,x\oplus \alpha)$. \begin{defn}\label{Connectedness} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$, let $c:X\times Y\to [0,\infty]$ be a Borel measurable cost function and $\Gamma\subseteq X\times Y$ a Borel measurable set on which $c$ is finite. We define \begin{enumerate} \item $(x,y)\lesssim(\tilde x,\tilde y)$ if there exist pairs $(x_0,y_0),\dotsc,(x_n,y_n)\in\Gamma$ such that $(x,y)=(x_0,y_0)$ and $(\tilde x,\tilde y)=(x_n,y_n)$ and $c(x_1,y_0),\dotsc,c(x_n,y_{n-1})<\infty$. \item $(x,y)\approx (\tilde x,\tilde y)$ if $(x,y)\lesssim(\tilde x,\tilde y)$ and $(x,y)\gtrsim (\tilde x,\tilde y)$. \end{enumerate} We call $(\Gamma,c)$ \emph{connecting} if $c$ is finite on $\Gamma$ and $(x, y)\approx(\tilde x,\tilde y)$ for all $(x,y), $ $(\tilde x,\tilde y)\in\Gamma.$ \end{defn} These relations were introduced in \cite[Chapter 5, p.75]{Vill05} and appear in a construction due to Stefano Bianchini. When there is any danger of confusion we will write $\lesssim_{c,\Gamma}$ and $\approx_{c,\Gamma}$, indicating the dependence on $\Gamma$ and $c$. Note that $\lesssim$ is a pre-order, i.e.\ a transitive and reflexive relation, and that $\approx$ is an equivalence relation. We will also need the projections $\lesssim_X,\approx_X$ resp.\ $\lesssim_Y, \approx_Y$ of these relations onto the set $p_X[\Gamma]\subseteq X$ resp.\ $p_Y[\Gamma]\subseteq Y$. The projection is defined in the obvious way: $x\lesssim_X \tilde x$ if there exist $y,\tilde y$ such that $(x,y),(\tilde x,\tilde y)\in \Gamma $ and $(x,y)\lesssim (\tilde x,\tilde y)$ holds. The other relations are defined analogously. The projections of $\lesssim$ are again pre-orders and the projections of $\approx$ are again equivalence relations, provided $c$ is finite on $\Gamma$. The equivalence classes of $\approx$ and its projections are compatible in the sense that $[(x,y)]_\approx= \left([x]_{\approx_X}\!\times [y]_{\approx_Y}\right)\cap \Gamma$. The elementary proofs of these facts are left to the reader. The main objective of this section is to prove Proposition \ref{CmonImpliesStrongCmon}, based on several lemmas which will be introduced throughout the section. \begin{prop}\label{CmonImpliesStrongCmon} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and let $c:X\times Y\to [0,\infty]$ be a Borel measurable cost function. Let $\pi$ be a finite transport plan. Assume that there exists a $c$-monotone set $\Gamma \subseteq X\times Y$ with $\pi(\Gamma)=1$ on which $c$ is finite, such that $(\Gamma, c)$ is connecting. Then $\pi$ is strongly $c$-monotone. 
\end{prop} In the proof of Proposition \ref{CmonImpliesStrongCmon} we will establish the existence of the functions $\varphi, \psi$ using the construction given in \cite{R96}, see also \cite[Chapter 2]{Vill03} and \cite[Theorem 3.2]{AmPr03}. As we do not impose any continuity assumptions on the cost function $c$, we cannot prove the Borel measurability of $\varphi$ and $\psi$ by using limiting procedures similar to the methods used in \cite{AmPr03,R96,ScTe08,Vill03}. Instead we will use the following projection theorem, a proof of which can be found in \cite[Theorem III.23]{CaVa77} (for analysts) or in \cite[Section 29.B]{Kech95} (for readers who have some interest in set theory). \begin{prop}\label{univmeasurable} \hspace{-3mm} \footnote{Sets which are images of Borel sets under measurable functions are called \emph{analytic} in descriptive set theory. Lusin first noticed that analytic sets are universally measurable. Details can be found for instance in \cite{Kech95}.} Let $X$ and $Y$ be Polish spaces, $A\subseteq X$ a Borel measurable set and $f: X\to Y$ a Borel measurable map. Then $B:=f(A)$ is \emph{universally measurable}, i.e. $B$ is measurable with respect to the completion of every $\sigma$-finite Borel measure on $Y$.\end{prop} The system of universally measurable sets is a $\sigma$-algebra. If $X$ is a Polish space, we call a function $f:X\to[-\infty,\infty]$ universally measurable if the pre-image of every Borel set is universally measurable. \begin{lem}\label{borelversion} Let $X$ be a Polish space and $\mu$ a finite Borel measure on $X$. If $\varphi: X\to [-\infty,\infty)$ is universally measurable, then there exists a Borel measurable function $\tilde{\varphi}: X\to[-\infty,\infty)$ such that $\tilde{\varphi} \leq\varphi$ everywhere and $\varphi=\tilde{\varphi}$ almost everywhere. \end{lem} \begin{pf} Let $(I_n)_{n=1}^{\infty}$ be an enumeration of the intervals $[a,b)$ with endpoints in $\mathbb{Q}$ and denote the completion of $\mu$ by $\tilde \mu$. Then for each $n\in \mathbb{N}$, $\varphi^{-1}[I_n]$ is $\tilde{\mu}$-measurable and hence the union of a Borel set $B_n$ and a $\tilde{\mu}$-null set $N_n$. Let $N$ be a Borel null set which covers $\bigcup_{n=1}^{\infty} N_n$. Let $\tilde{\varphi}(x)=\varphi(x)-\infty\cdot \mathbf{1}_N(x)$. Clearly $\tilde{\varphi}(x) \le \varphi(x)$ for all $x\in X$ and $\varphi(x)=\tilde{\varphi}(x)$ for $\tilde\mu$-almost all $x\in X$. Furthermore, $\tilde{\varphi}$ is Borel measurable since $(I_n)_{n=1}^{\infty}$ is a generator of the Borel $\sigma$-algebra on $[-\infty,\infty)$ and for each $n\in \mathbb{N}$ we have that $\tilde{\varphi}^{-1} [I_n]=B_n\setminus N$ is a Borel set. \qed\end{pf} The following definition of the functions $\varphi_n, n\in\mathbb{N}$ resp.\ $\varphi$ is reminiscent of the construction in \cite{R96}. \begin{lem}\label{analytic} Let $X,Y$ be Polish spaces, $c: X\times Y \to [0,\infty]$ a Borel measurable cost function and $\Gamma\subseteq X\times Y$ a Borel set. Fix $(x_0,y_0)\in \Gamma$ and assume that $c$ is finite on $\Gamma$. For $n\in\mathbb{N}$, define $\varphi_n: X\times\Gamma^{n}\to (-\infty,\infty] $ by \begin{equation} \varphi_n (x;x_1,y_1,\ldots,x_n,y_n)= [c(x,y_n)-c(x_n,y_n)]+\sum_{i=0}^{n-1} [ c(x_{i+1},y_i)-c(x_i,y_i)]. \end{equation} Then the map $\varphi: X \to [-\infty, \infty]$ defined by \begin{equation}\label{phirueschen} \varphi(x)=\inf \{\varphi_n (x;x_1,y_1,\ldots,x_n,y_n): n\ge 1, (x_i,y_i)_{i=1}^n \in \Gamma^n\} \end{equation} is universally measurable. 
\end{lem} \begin{pf} First note that the Borel $\sigma$-algebra on $[-\infty, \infty]$ is generated by intervals of the form $[-\infty,\alpha)$, thus it is sufficient to determine the pre-images of those sets under $\varphi$. We have \begin{equation*} \varphi(x)< \alpha \leftrightarrow \exists n\in \mathbb{N} \;\exists (x_1,y_1),\ldots,(x_n,y_n) \in \Gamma:\varphi_n(x;x_1,y_1,\ldots,x_n,y_n)<\alpha. \end{equation*} The set $\varphi_n^{-1}[[-\infty,\alpha)]$ is Borel measurable. Hence \[\varphi^{-1}[[-\infty,\alpha)]= \bigcup_{n\in\mathbb{N}} p_X[\varphi_n^{-1}[[-\infty,\alpha)]]\] is the countable union of projections of Borel sets. Since projections of Borel sets are universally measurable by Proposition \ref{univmeasurable}, $\varphi^{-1}[[-\infty,\alpha)]$ belongs also to the $\sigma$-algebra of universally measurable sets. \qed\end{pf} \begin{lem}\label{phi finite} Let $X,Y$ be Polish spaces and $c: X\times Y \to [0,\infty]$ a Borel measurable cost function. Suppose $\Gamma$ is $c$-monotone, $c$ is finite on $\Gamma$ and $(\Gamma,c)$ is connecting. Fix $(x_0,y_0)\in\Gamma$. Then the map $\varphi$ from \eqref{phirueschen} is finite on $p_X[\Gamma]$. Furthermore \begin{equation}\label{ctrafoineq} \varphi(x)\le \varphi(x')+c(x,y)-c(x',y)\quad \forall x\in X,\,(x',y)\in \Gamma. \end{equation} \end{lem} \begin{pf} Fix $x\in p_X[\Gamma]$. Since $x_0\lesssim x$ (recall Definition \ref{Connectedness}), we can find $x_1,y_1,\ldots,x_n,y_n$ such that $\varphi_n (x;x_1,y_1,\ldots,x_n,y_n)<\infty$. Hence $\varphi(x)< \infty$. Proving $\varphi(x)>-\infty$ involves some wrestling with notation but, not very surprisingly, it comes down to applying the fact that $x\lesssim x_0$. Let $a_1=x$ and choose $b_1,a_2,b_2,\ldots, a_m,b_m$ such that $(a_1,b_1),\ldots,(a_m,b_m)\in \Gamma$ and $c(a_2,b_1), \ldots,c(a_m,b_{m-1}),c(x_0,b_m)<\infty$. Assume now that $x_1,y_1,\ldots,x_n,y_n$ are given such that $\varphi_n (x;x_1,y_1,\ldots,x_n,y_n)<\infty$. Put $x_{n+i}=a_i$ and $y_{n+i}=b_i$ for $i\in\{1,\ldots,m\}$. Due to $c$-monotonicity of $\Gamma$ and the finiteness of all involved terms we have: \begin{equation*}\label{c monoton} 0\le [c(x_0,y_{n+m})-c(x_{n+m},y_{n+m})]+\sum_{i=0}^{n+m-1} [c(x_{i+1},y_i)-c(x_i,y_i)], \end{equation*} which, after regrouping yields \begin{multline}\label{eqnhenceandforth} \alpha:=[c(x_0,b_{m})-c(a_{m},b_{m})]+\sum_{i=1}^{m-1}[c(a_{i+1},b_i)-c(a_i,b_i)]\\ \le [c(x,y_{n})-c(x_{n},y_{n})]+\sum_{i=0}^{n-1}[c(x_{i+1},y_i)-c(x_i,y_i)]. \end{multline} Note that the right hand side of \eqref{eqnhenceandforth} is just $\varphi_n(x;x_1,y_1,\ldots,x_n,y_n)$. Thus passing to the infimum we see that $\varphi(x)\ge \alpha>-\infty.$ To prove the remaining inequality, observe that the right hand side of \eqref{ctrafoineq} can be written as $$\inf\{\varphi_n(x;x_1,y_1,\ldots, x_n,y_n):n\ge 1,(x_i,y_i)_{i=1}^n \in \Gamma^n \mbox{ and } (x_n, y_n)=(x',y)\}$$ whereas the left hand side of \eqref{ctrafoineq} is the same, without the restriction $(x_n,y_n)=(x',y)$. \qed\end{pf} \begin{lem}\label{analytic2} Let $X,Y$ be Polish spaces and $c: X\times Y \to [0,\infty]$ a Borel measurable cost function. Let $X_0\subseteq X$ be a non-empty Borel set and let $\varphi: X_0\to \mathbb{R}$ be a Borel measurable function. Then the $c$-transform $\psi: Y \to [-\infty,\infty)$, defined as \begin{equation} \psi(y):=\inf_{x\in X_0} [c(x,y)-\varphi(x)] \end{equation} is universally measurable. 
\end{lem} \begin{pf} As in the proof of Lemma \ref{borelversion} we consider the set $\psi^{-1}[[-\infty,\alpha)]$: \[\psi(y)< \alpha \leftrightarrow \exists x\in X_0: c(x,y)-\varphi(x)<\alpha.\] Note that the set $\{(x,y)\in X_0\times Y: c(x,y)-\varphi(x)<\alpha\}$ is Borel. Thus \[\psi^{-1}[[-\infty,\alpha)]=p_X[\{(x,y)\in X_0\times Y: c(x,y)-\varphi(x)<\alpha\}]\] is the projection of a Borel set, hence universally measurable. \qed\end{pf} We are now able to prove the main result of this section. \begin{pf*}{PROOF of Proposition \ref{CmonImpliesStrongCmon}.} Let $\Gamma\subseteq X\times Y$ be a $c$-monotone Borel set such that $\pi(\Gamma)=1$ and the pair $(\Gamma, c)$ is connecting. Let $\varphi$ be the map from Lemma \ref{analytic}. Using Lemma \ref{borelversion} and Lemma \ref{phi finite}, and eventually passing to a subset of full $\pi$-measure, we may assume that $\varphi$ is Borel measurable, that $X_0:=p_X[\Gamma]$ is a Borel set and that \begin{equation}\label{equalonC} c(x',y)-\varphi(x')\le c(x,y)-\varphi(x)\quad \forall x\in X_0,\,(x',y)\in \Gamma. \end{equation} Note that \eqref{equalonC} follows from \eqref{ctrafoineq} in Lemma \ref{phi finite}. Here we consider $x\in X_0$ in order to ensure that $\varphi(x)$ is finite on $X_0$. Now consider the $c$-transform \begin{equation}\label{eqctrafo} \psi(y):=\inf_{x\in X_0} [c(x,y)-\varphi(x)], \end{equation} which by Lemma \ref{analytic2} is universally measurable. Fix $y \in p_Y[\Gamma]$. Using \eqref{equalonC} we see that the infimum in \eqref{eqctrafo} is attained at a point $x_0\in X_0$ satisfying $(x_0,y)\in \Gamma$. This implies that $\varphi(x)+\psi(y)=c(x,y)$ on $\Gamma$ and $\varphi(x)+\psi(y)\le c(x,y)$ on $p_X[\Gamma]\times p_Y[\Gamma]$. To guarantee this inequality on the whole product $X\times Y$, one has to redefine $\varphi$ and $\psi$ to be $-\infty$ on the complement of $p_X[\Gamma]$ resp.\ $p_Y[\Gamma]$. Applying Lemma \ref{borelversion} once more, we find that there exists a Borel set $N\subseteq Y$ of zero $\nu$-measure, such that $\tilde\psi(y)=\psi(y)-\infty\cdot \mathbf{1}_N(y)$ is Borel measurable. Finally, replace $\Gamma$ by $\Gamma \cap (X\times (Y\setminus N))$ and $\psi$ by $\tilde\psi$. \qed\end{pf*} We conclude this section by proving that every strongly $c$-monotone transport plan is optimal ({\bf Proposition \ref{StrongCmonImpliesOptimal}}). \begin{nil} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and let $c: X\times Y\to [0,\infty]$ be Borel measurable. Then every finite transport plan which is strongly $c$-monotone is optimal. \end{nil} \begin{pf} Let $\pi_0$ be a strongly $c$-monotone transport plan. Then, according to the definition, there exist Borel functions $\varphi(x)$ and $\psi(y)$ taking values in $[-\infty,\infty)$ such that \begin{equation}\label{defstrongc} \varphi(x)+\psi(y)\le c(x,y) \end{equation} everywhere on $X\times Y$ and equality holds $\pi_0$-a.e. We define the truncations $\varphi_n = (n\wedge (\varphi\vee -n)), \psi_n=(n\wedge (\psi\vee -n))$ and let $\xi_n(x,y):=\varphi_n(x)+\psi_n(y)$ resp.\ $\xi(x,y):=\varphi(x)+\psi(y)$. Note that $\varphi_n,\psi_n,\xi_n, \xi$ are Borel measurable. 
By elementary considerations which are left to the reader, we get pointwise monotone convergence $\xi_n\uparrow \xi$ on the set $\{\xi\ge 0\}$ resp.\ $\xi_n\downarrow \xi$ on the set $\{\xi\le 0\}$. Let $\pi_1$ be an arbitrary finite transport plan; to compare $I_c[\pi_0]$ and $I_c[\pi_1]$ we make the following observations: \begin{enumerate} \item By monotone convergence \begin{align} \int_{\{\xi\geq 0\}} \xi_n \, d\pi_i\uparrow & \int_{\{\xi\geq 0\}} \xi \, d\pi_i \leq I_c[\pi_i]< \infty \mbox{ and}\\ \int_{\{\xi< 0\}} \xi_n \, d\pi_i\downarrow & \int_{\{\xi< 0\}} \xi \, d\pi_i \end{align} for $i\in \{0,1\}$, hence $\lim_{n\to \infty} \int \xi_n\, d\pi_i= \int \xi\, d\pi_i.$ \item By the assumption on equal marginals of $\pi_0$ and $\pi_1$ we obtain for $n\geq 0$ \begin{align} \int \xi_n\,d\pi_0 & =\int \varphi_{n}\, d\pi_0+\int \psi_{n}\,d\pi_0\\ & =\int \varphi_{n}\, d\pi_1+\int \psi_{n}\,d\pi_1 =\int\xi_{n}\,d\pi_1. \end{align} \end{enumerate} Thus $I_c[\pi_0]=\int \xi\, d\pi_0= \lim_{n\to \infty} \int \xi_n\, d\pi_0= \lim_{n\to \infty} \int \xi_n\, d\pi_1=\int \xi\, d\pi_1\leq I_c[\pi_1]$; since $\pi_1$ was arbitrary, this implies optimality of $\pi_0$. \qed\end{pf} \section{From $c$-monotonicity to optimality}\label{SmilestoneC} This section is devoted to the proof of Theorem \ref{CmonEquivOptimal}.b. Our argument starts with a finite $c$-monotone transport plan $\pi$ and we aim to show that $\pi$ is at least as good as any other finite transport plan. The idea behind the proof is to partition $X$ and $Y$ into cells $C_i, i\in I$ resp.\ $D_i,i\in I$ in such a way that $\pi$ is strongly $c$-monotone on ``diagonal'' sets of the form $C_i\times D_i$ while regions $C_i\times D_j, i\neq j$ can be ignored, because no finite transport plan will give positive measure to the set $C_i\times D_j$. Thus it will be necessary to apply previously established results to some restricted transport problems on a space $C_i\times D_i$ equipped with some relativized transport plan $\pi\upharpoonright C_i\times D_i$. As in general the cells $C_i,D_i$ are merely Borel sets they may fail to be Polish spaces with respect to the topologies inherited from $X$ resp.\ $Y$. However, for us it is only important that there exist \emph{some} Polish topologies that generate the same Borel sets on $C_i$ resp.\ $D_i$ (see e.g.\ \cite[Theorem 13.1]{Kech95}). At this point it is crucial that our results only need measurability of the cost function and do not ask for any form of continuity (cf.\ the remarks at the end of the introduction). Before we give the proof of Theorem \ref{CmonEquivOptimal}.b we will need some preliminary lemmas. \begin{lem}\label{omegaClasses} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and let $c: X\times Y \to [0,\infty]$ be a Borel measurable cost function. Let $\pi,\pi_0$ be finite transport plans and $\Gamma\subseteq X\times Y$ a Borel set with $\pi(\Gamma)=1$ on which $c$ is finite. Let $I=\{0,\ldots, n\}$ or $I=\mathbb{N}$ and assume that $C_i, i\in I $ are mutually disjoint Borel sets in $X$, $D_i,i\in I$ are mutually disjoint Borel sets in $Y$ such that the equivalence classes of $\approx_{c,\Gamma}$ are of the form $\Gamma \cap (C_i\times D_i)$. Then also $\pi_0(\bigcup_{i\in I } C_i\times D_i)=1$. \end{lem} In the proof we will need the following simple lemma. (For a proof see for instance \cite[Proposition 8.13]{Kall97}.) 
\begin{lem}\label{markoversatz} Let $I=\{0,\ldots,n\}$ or $I=\mathbb{N}$ and let $P=(p_{ij})_{i,j\in I} $ be a matrix with non-negative entries such that $\sum_{j\in I} p_{i_0j}=1$ for each $i_0\in I$. Assume that there exists a vector $(p_i)_{i\in I}$ with strictly positive entries such that $p\cdot P= p$.\footnote{Such a matrix $P$ is often called a \emph{stochastic matrix} while $p$ is a \emph{stochastic vector}.} Then whenever $p_{i_0 i_1}>0$ for $i_0, i_1\in I$, there exists a finite sequence $i_0, i_1, \ldots, i_n=i_0$ such that for all $0\le k < n$ one has $p_{i_k i_{k+1}}>0$. \end{lem} \begin{pf*}{PROOF of Lemma \ref{omegaClasses}.} As $\approx_{\Gamma, c}$ is an equivalence relation and $\pi$ is concentrated on $\Gamma$, the sets $C_i$, $i\in I$ are a partition of $X$ modulo $\mu$-null sets. Likewise the sets $D_i$, $i\in I$ form a partition of $Y$ modulo $\nu$-null sets. In particular the quantities \begin{equation} p_i:=\mu(C_i)=\nu(D_i)=\pi (C_i\times D_i),\quad i\in I \end{equation} add up to $1$. Without loss of generality we may assume that $p_i>0$ for all $i\in I$. We define \begin{equation} p_{ij}:=\frac{\pi_0(C_i\times D_j)}{\mu(C_i)}, \quad i,j\in I. \end{equation} Then $\sum_{j\in I} p_{i_0j}= \frac{\pi_0(C_{i_0}\times Y)}{\mu(C_{i_0})}=1$ for each $i_0\in I$. By the condition on the marginals of $\pi_0$ we have for the $i$-th component of $p\!\cdot\!P$ \[(p\!\cdot\!P)_i = \sum_{j\in I}\mu(C_j) \frac{\pi_0(C_j\times D_i)}{\mu(C_j)}= \pi_0(X\times D_i) = \nu (D_i)=p_i\] i.e.\ $p\!\cdot\!P=p$. Hence $P$ satisfies the assumptions of Lemma \ref{markoversatz}. We claim that $p_{ii}=1$ for all $i\in I$. Suppose not. Pick $i_0\in I$ such that $p_{i_0i_0}<1$. Then there exists some index $i_1\neq i_0$ such that $p_{i_0i_1}>0$. Pick a finite sequence $i_0,i_1,\ldots, i_n=i_0$ according to Lemma \ref{markoversatz}. Fix $k\in \{0,\dotsc, n-1\}$. Then $$\pi_0(C_{i_k}\times D_{i_{k+1}})=p_{i_k i_{k+1}}\,\mu(C_{i_k})>0.$$ Since $\pi_0$ is a finite transport plan, there exist $x_k\in C_{i_k}\cap p_X[\Gamma]$ and $y_{k+1}'\in D_{i_{k+1}}\cap p_Y[\Gamma]$ such that $c(x_k,y_{k+1}')<\infty$. Choose $y_k\in D_{i_k}$ and $x_{k+1}'\in C_{i_{k+1}}$ such that $(x_k,y_k), (x_{k+1}',y_{k+1}')\in \Gamma.$ Then \begin{equation*}(x_0,y_0)\lesssim (x_1',y_1')\approx(x_1,y_1)\lesssim (x_2',y_2')\approx(x_2,y_2)\lesssim\dotsc\lesssim (x_n',y_n')\approx (x_0,y_0). \end{equation*} But this implies that $(x_0,y_0)\approx (x_1,y_1),$ contradicting the assumption that $(C_{i_0}\times D_{i_0})\cap \Gamma, (C_{i_1}\times D_{i_1})\cap \Gamma$ are different equivalence classes of $\approx_{\Gamma,c}$. Hence we have indeed $p_{ii}=1$ for all $i\in I$, thus $ \pi_0(C_i\times D_i)=\mu(C_i)$ which implies $\pi_0(\bigcup_{i\in I} C_i\times D_i)=1$. \qed\end{pf*} \begin{lem}\label{CFiniteImpliesConnected} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and let $c:X\times Y\to [0,\infty]$ be a Borel measurable cost function which is $\mu\otimes\nu$-a.e.\ finite. For every finite transport plan $\pi$ and every Borel set $\Gamma\subseteq X\times Y$ with $\pi(\Gamma)=1$ on which $c$ is finite, there exist Borel sets $O\subseteq X,U\subseteq Y$ such that $\Gamma'=\Gamma\cap (O\times U)$ has full $\pi$-measure and $(\Gamma',c)$ is connecting. \end{lem} \begin{pf} By Fubini's Theorem for $\mu$-almost all $x\in X$ the set $\{y: c(x,y)<\infty\}$ has full $\nu$-measure and for $\nu$-almost all $y\in Y$ the set $\{x: c(x,y)<\infty\}$ has full $\mu$-measure. 
In particular the set of points $(x_0,y_0)$ such that both $\mu\left(\{x: c(x,y_0)<\infty\}\right)=1$ and $\nu\left(\{y: c(x_0,y)<\infty\}\right)=1$ has full $\pi$-measure. Fix such a pair $(x_0,y_0)\in\Gamma$ and let $O=\{x\in X: c(x,y_0)<\infty \}, U=\{y\in Y:c(x_0,y)<\infty\}$. Then $\Gamma'=\Gamma \cap (O\times U)$ has full $\pi$-measure and for every $(x,y)\in\Gamma'$ both quantities $c(x,y_0)$ and $c(x_0,y)$ are finite. Hence $x\approx_X x_0$, for every $x\in p_X[\Gamma']$. Similarly we obtain $y\approx_Y y_0$, for every $y\in p_Y[\Gamma']$. Hence $(\Gamma',c)$ is connecting. \qed\end{pf} Finally we prove the statement of \textbf{Theorem \ref{CmonEquivOptimal}.b}: \begin{nil} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and $c: X\times Y \to [0,\infty]$ a Borel measurable cost function. Every finite $c$-monotone transport plan is optimal if there exist a closed set $F$ and a $\mu\otimes\nu$-null set $N$ such that $\{(x,y): c(x,y)=\infty\} = F\cup N$. \end{nil} \begin{pf} Let $\pi$ be a finite $c$-monotone transport plan and pick a $c$-monotone Borel set $\Gamma\subseteq X\times Y$ with $\pi(\Gamma)=1$ on which $c$ is finite. Let $O_n,U_n,n\in \mathbb{N}$ be open sets such that $\bigcup_{n\in \mathbb{N}} (O_n\times U_n)=(X\times Y)\setminus F$. Fix $n\in \mathbb{N}$ and interpret $\pi\upharpoonright O_n\times U_n$ as a transport plan on the spaces $(O_n,\mu_n)$ and $(U_n,\nu_n)$ where $\mu_n$ and $\nu_n$ are the marginals corresponding to $\pi\upharpoonright O_n\times U_n$. Apply Lemma \ref{CFiniteImpliesConnected} to $\Gamma\cap(O_n\times U_n)$ and the cost function $c\upharpoonright O_n\times U_n$ to find $O_n'\subseteq O_n,U_n'\subseteq U_n$ and $\Gamma_n =\Gamma\cap (O_n' \times U_n')$ with $\pi(\Gamma_n)=\pi(\Gamma\cap (O_n \times U_n))$ such that $(\Gamma_n,c)$ is connecting. Then $\tilde \Gamma=\bigcup_{n\in\mathbb{N}}\Gamma_n $ is a subset of $\Gamma$ of full measure and every equivalence class of $\approx_{\tilde \Gamma,c} $ can be written in the form $((\bigcup_{n\in N} O_n')\times (\bigcup_{n\in N} U_n'))\cap \Gamma$ for some non-empty index set $N\subseteq \mathbb{N}$. Thus there are at most countably many equivalence classes which we can write in the form $(C_i\times D_i) \cap \Gamma, i\in I $ where $I=\{1,\dotsc, n\}$ or $I=\mathbb{N}$. Note that by shrinking the sets $C_i, D_i, i\in I$ we can assume that $C_i\cap C_j=D_i\cap D_j=\emptyset$ for $i\neq j$. Assume now that we are given another finite transport plan $ \pi_0$. Apply Lemma \ref{omegaClasses} to $\pi, \pi_0$ and $\tilde \Gamma$ to conclude that $\pi_0$ is concentrated on $\bigcup_{i\in I} C_i\times D_i$. For $i\in I$ we consider the restricted problem of transporting $\mu\!\upharpoonright\! C_i$ to $\nu\!\upharpoonright\!D_i$. We know that $\pi\! \upharpoonright\! C_i\times D_i $ is optimal for this task by Propositions \ref{StrongCmonImpliesOptimal} and \ref{CmonImpliesStrongCmon}, hence $I_c[\pi]\le I_c[\pi_0]$. \qed\end{pf} \begin{rem} In fact the following somewhat more general (but also more complicated to state) result holds true: \emph{Assume that $\{(x,y): c(x,y)=\infty\} \subseteq F\cup N$ where $F$ is closed and $N$ is a $\mu\otimes\nu$-null set. Then every $c$-monotone transport plan $\pi$ with $\pi(F\cup N)=0$ is optimal.} \end{rem} \section{Completing the picture}\label{Spuzzle} First we give the proof of \textbf{Theorem \ref{StrongCmonEquivRobustlyOptimal}}. 
\begin{nil} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and $c:X\times Y\to [0,\infty]$ a Borel measurable cost function. For a finite transport plan $\pi$ the following assertions are equivalent: \begin{enumerate} \item $\pi$ is robustly optimal. \item $\pi$ is strongly $c$-monotone. \end{enumerate} \end{nil} \begin{pf} $a. \Rightarrow b.$: Let $Z$ and $\lambda\neq 0$ be as in the definition of robust optimality. As $\tilde\pi=(\mbox{Id}_Z\times\mbox{Id}_Z){}_{\#} \lambda + \pi$ is optimal, Theorem \ref{CmonEquivOptimal}.a ensures the existence of a $\tilde c$-monotone Borel set $\tilde \Gamma\subseteq (X\cup Z) \times (Y\cup Z)$ such that $\tilde c$ is finite on $\tilde \Gamma $ and $\tilde \pi $ is concentrated on $\tilde \Gamma$. Note that $(z,z)\in \tilde\Gamma$ for $\lambda$-a.e.\ $z\in Z$. We claim that for $\lambda$-a.e.\ $z\in Z$ and all $(x,y)\in \Gamma=\tilde \Gamma\cap (X\times Y)$ the relation \begin{align} (x,y)\approx_{\tilde \Gamma, \tilde c} (z,z)\end{align} holds true. Indeed, since $\tilde c$ is finite on $ Z\times Y$ we have $\tilde c(z,y)<\infty$ hence $(x,y)\lesssim_{\tilde \Gamma, \tilde c} (z,z)$. Analogously finiteness of $\tilde c$ on $X\times Z$ implies $\tilde c(x,z)<\infty$ such that also $(z,z)\lesssim_{\tilde \Gamma, \tilde c} (x,y)$. By transitivity of $\approx_{\tilde \Gamma, \tilde c}$, $(\tilde \Gamma,\tilde c)$ is connecting. Applying Proposition \ref{CmonImpliesStrongCmon} to the spaces $X\cup Z$ and $Y\cup Z$ we get that $\tilde\pi$ is strongly $\tilde c$-monotone, i.e.\ there exist $\tilde \varphi$ and $\tilde \psi$ such that $\tilde\varphi(a)+\tilde\psi(b)\le \tilde c(a,b)$ for $(a,b)\in (X\cup Z)\times (Y\cup Z)$ and equality holds $\tilde \pi$-almost everywhere. By restricting $\tilde \varphi$ and $\tilde \psi$ to $X$ resp.\ $Y$ we see that $\pi$ is strongly $c$-monotone. \medskip \noindent $b. \Rightarrow a.$: Let $Z$ be a Polish space and let $\lambda$ be a finite Borel measure on $Z$. We extend $c$ to $\tilde c: (X\cup Z)\times (Y\cup Z)\to [0,\infty]$ via \[\tilde{c}(a,b)=\left\{\begin{array}{cl} c(a,b) &\mbox{ for } (a,b)\in X\times Y\\ \max\left(\varphi(a),0\right)&\mbox{ for } (a,b)\in X\times Z\\ \max\left(\psi(b),0\right)&\mbox{ for } (a,b)\in Z\times Y\\ 0 &\mbox{ otherwise.} \end{array} \right.\] Define $\tilde\varphi(a):=\left\{\begin{array}{cll} \varphi(a)&\mbox{ for }& a\in X\\ 0 &\mbox{ for }& a \in Z\end{array} \right.$ and $\tilde\psi(b):=\left\{\begin{array}{cll} \psi(b)&\mbox{ for }& b\in Y\\ 0 &\mbox{ for }&b\in Z.\end{array} \right.$ Then $\tilde\varphi$ resp.\ $\tilde\psi$ are extensions of $\varphi$ resp. $\psi$ to $X\cup Z$ resp. $Y\cup Z$ which satisfy $\tilde\varphi(a)+\tilde\psi(b)\le \tilde c(a,b)$ and equality holds on $\tilde\Gamma=\Gamma\cup\{(z,z):z\in Z\}$. Since $\tilde{\pi}$ is concentrated on $\tilde{\Gamma}$, it follows that $\tilde \pi$ is strongly $\tilde c$-monotone and hence optimal by Proposition \ref{StrongCmonImpliesOptimal}. \qed\end{pf} Next consider \textbf{Theorem \ref{AllEquiv}}. \begin{nil} Let $X,Y$ be Polish spaces equipped with Borel probability measures $\mu,\nu$\ and let $c: X\times Y\to [0,\infty]$ be Borel measurable and $\mu\otimes\nu$-a.e.\ finite. For a finite transport plan $\pi$ the following assertions are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{(\arabic{enumi})} \item $\pi$ is optimal. \item $\pi$ is $c$-monotone. \item $\pi$ is robustly optimal. \item $\pi$ is strongly $c$-monotone. 
\end{enumerate} \end{nil} \begin{pf} By Theorem \ref{StrongCmonEquivRobustlyOptimal}, (3) and (4) are equivalent and they trivially imply (1) and (2) which are equivalent by Theorem \ref{CmonEquivOptimal}. It remains to see that $(2) \Rightarrow (4)$. Let $\pi $ be a finite $c$-monotone transport plan. Pick a $c$-monotone Borel set $\Gamma \subseteq X\times Y$ such that $c$ is finite on $\Gamma $ and $\pi(\Gamma)=1$. By Lemma \ref{CFiniteImpliesConnected} there exists a Borel set $\Gamma'\subseteq \Gamma$ such that $\pi(\Gamma')=1$ and $(\Gamma',c)$ is connecting, hence Proposition \ref{CmonImpliesStrongCmon} applies. \qed\end{pf} Finally the example below shows that the ($\mu\!\otimes\!\nu$-a.e.) finiteness of the cost function is essential to be able to pass from the ``weak properties'' (optimality, $c$-monotonicity) to the ``strong properties'' (robust optimality, strong $c$-monotonicity). \begin{exmp}[Optimality does not imply strong $c$-monotonicity] \label{ZeroOneExample} Let $X=Y=[0,1]$ and equip both spaces with Lebesgue measure $\lambda=\mu=\nu$. Define $c$ to be $\infty $ above the diagonal and $1-\sqrt{x-y}$ for $y\leq x$. The optimal (in this case the only finite) transport plan is the Lebesgue measure $\pi$ on the diagonal $\Delta$. We claim that $\pi$ is not strongly $c$-monotone. Striving for a contradiction we assume that there exist $\varphi$ and $\psi$ witnessing the strong $c$-monotonicity. Let $\Delta_1$ be the full-measure subset of $\Delta$ on which $\varphi+\psi=c$, and write $p_X[\Delta_1]$ for the projection of $\Delta_1$. We claim that \begin{equation}\label{localineq} \forall x,x'\in p_X[\Delta_1]: \mbox{ If } x<x', \mbox{ then } \varphi(x)-\varphi(x') \geq \sqrt {x'-x}, \end{equation} which will yield a contradiction when combined with the fact that $p_X[\Delta_1]$ is dense. Our claim \eqref{localineq} follows directly from \begin{equation} \varphi(x')+\psi(x)\leq c(x',x)= 1-\sqrt{x'-x} \ \mbox{ and }\ \varphi(x)+\psi(x) = c(x,x)=1. \end{equation} Now let $x<x+a $ be elements of $p_X[\Delta_1]$, let $b:= \varphi(x)-\varphi(x+a)$, and let $n\in \mathbb{N}$ be a sufficiently large number, say satisfying $n>2\tfrac{b^2}{a^2}$. Using the fact that $p_X[\Delta_1]$ is dense, we can find real numbers $x=x_0< x_1 < \cdots < x_n = x+a$ in $p_X[\Delta_1]$ satisfying $x_k-x_{k-1} < 2/n$ for $k=1,\ldots, n$. Let $\varepsilon_k:= x_k-x_{k-1}$ for $k=1,\ldots, n$. Then we have $\varepsilon_k < \tfrac 2n < \tfrac {a^2}{b^2}$ for all $k$, hence $ \sqrt{\varepsilon_k} > \tfrac ba \varepsilon_k$. So we get \begin{equation*} b = \varphi(x)-\varphi(x+a) = \sum_{k=1}^n \varphi(x_{k-1})-\varphi(x_{k}) \ge \sum_{k=1}^n \sqrt{\varepsilon_k} > \sum_{k=1}^n \frac{b}{a} \varepsilon_k = \frac{b}{a} \sum_{k=1}^n \varepsilon_k = b, \end{equation*} a contradiction. (By letting $c=0$ below the diagonal the argument could be simplified, but then we would lose lower semi-continuity of $c$.) \end{exmp} \end{document}
\begin{document} \title{Bias Reduced Peaks over Threshold Tail Estimation} \begin{abstract} {\noindent In recent years several attempts have been made to extend tail modelling towards the modal part of the data. Frigessi et al. (2002) introduced dynamic mixtures of two components with a weight function $\pi=\pi (x)$ smoothly connecting the bulk and the tail of the distribution. Recently, Naveau et al. (2016) reviewed this topic, and, continuing on the work by Papastathopoulos and Tawn (2013), proposed a statistical model which is in compliance with extreme value theory and allows for a smooth transition between the modal and tail part. Incorporating second order rates of convergence for distributions of peaks over thresholds (POT), Beirlant et al. (2002, 2009) constructed models that can be viewed as special cases from both approaches discussed above. When fitting such second order models it turns out that the bias of the resulting extreme value estimators is significantly reduced compared to the classical tail fits using only the first order tail component based on the Pareto or generalized Pareto fits to peaks over threshold distributions. In this paper we provide novel bias reduced tail fitting techniques, improving upon the classical generalized Pareto (GP) approximation for POTs using the flexible semiparametric GP modelling introduced in Tencaliec et al. (2018). We also revisit and extend the second-order refined POT approach started in Beirlant et al. (2009) to all max-domains of attraction using flexible semiparametric modelling of the second order component. In this way we relax the classical second order regular variation assumptions. } \end{abstract} \noindent {\bf Keywords:} Peaks over Threshold; Generalized Pareto distribution; Tail estimation; Mixture models. \section{Introduction} \label{Sec1} Extreme value (EV) methodology starts from the assumption that the distribution of the available sample $X_1, X_2,\ldots,X_n$ belongs to the domain of attraction of a generalized extreme value distribution, i.e. there exists sequences $(b_n)_n$ and $(a_n>0)_n$ such that as $n \to \infty$ \begin{equation} {\max (X_1, X_2,\ldots,X_n)-b_n \over a_n} \to _d Y_{\xi}, \label{maxd} \end{equation} where $\mathbb{P} (Y_\xi >y) = \exp (-(1+\xi y)^{-1/\xi})$, for some $\xi \in \mathbb{R}$ with $1+\xi y>0$. The parameter $\xi$ is termed the extreme value index (EVI). It is well-known (see e.g. Beirlant et al., 2004, and de Haan and Ferreira, 2006)) that \eqref{maxd} is equivalent to the existence of a positive function $t \mapsto \sigma_t$, such that \begin{equation} \mathbb{P}\left({X-t \over \sigma_t} >y| X>t \right) = {\bar{F}(t+y\sigma_t) \over \bar{F}(t)} \to_{t \to x_+} \bar{H}^{GP}_{\xi}(y)= (1+\xi y)^{-1/\xi}, \label{POT} \end{equation} where $\bar{F}(x)=\mathbb{P}(X>x)$ and $x_+$ denotes the endpoint of the distribution of $X$. The conditional distribution of $X-t$ given $X>t$ is called the peaks over threshold (POT) distribution, while $\bar{H}_{\xi}^{GP}$ is the survival function of the generalized Pareto distribution (GPD).\\ In case $\xi >0$, the limit in \eqref{maxd} holds if and only if $F$ is of Pareto-type, i.e. \begin{equation} \bar{F}(x) = x^{-1/\xi}\ell (x), \label{Patype} \end{equation} for some slowly varying function $\ell$, i.e. satisfying $\frac{\ell(yt)}{\ell (t)} \to 1 $ as $t \to \infty$, for every $y>1$. 
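As a brief numerical illustration of the POT approximation \eqref{POT} (a minimal Python sketch using NumPy/SciPy; the simulated sample, the threshold choice and the maximum likelihood fit are our own illustrative assumptions, not part of the methodology developed below), one can fit a GPD to the excesses over a high empirical threshold and read off an estimate of $\xi$:
\begin{verbatim}
# Minimal illustration of the POT/GPD approximation: fit a GPD to the
# excesses over a high threshold.  Sample, threshold and estimator choices
# are illustrative only.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
x = rng.pareto(2.0, size=5000) + 1.0      # Pareto-type sample with xi = 0.5

k = 250                                   # number of top order statistics used
t = np.sort(x)[-k - 1]                    # threshold X_{n-k,n}
excesses = x[x > t] - t                   # peaks over threshold X - t | X > t

# ML fit of the GPD with location fixed at 0; the shape parameter plays the
# role of xi and the scale that of sigma_t
xi_hat, _, sigma_hat = genpareto.fit(excesses, floc=0.0)
print(f"estimated xi = {xi_hat:.3f}, sigma_t = {sigma_hat:.3f}")
\end{verbatim}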
Pareto-type distributions satisfy a simpler POT limit result: as $t \to \infty$ \begin{equation} \mathbb{P} \left( {X \over t} >y | X>t \right) \to \bar{H}^P_{\xi}(y) := y^{-1/\xi}, y>1. \label{POTPa} \end{equation} Estimation of $\xi$ and tail quantities such as return periods is then based on fitting a GPD to the observed excesses $X-t$ given $X>t$, respectively a simple Pareto distribution with survival function $y^{-1/\xi}$ to $X/t$ given $X>t$ in case $\xi >0$. The main difficulty in such an EV application is the choice of the threshold $t$. Most often, the threshold $t$ is chosen as one of the top data points $X_{n-k,n}$ for some $k \in \{1,2, \ldots,n \}$ where $X_{1,n} \leq X_{2,n} \leq \ldots \leq X_{n,n}$ denotes the ordered sample. The limit results in \eqref{POT} and \eqref{POTPa} require $t$ to be chosen as large as possible (or, equivalently, $k$ as small as possible) for the bias in the estimation of $\xi$ and other tail parameters to be limited. However, in order to limit the estimation variance, $t$ should be as small as possible, i.e. the number of data points $k$ used in the estimation should be as large as possible. Several adaptive procedures for choosing $t$ or $k$ have been proposed, but mainly in the Pareto-type case with $\xi >0$ under further second-order specifications of \eqref{Patype} or \eqref{POTPa}, see for instance Chapter 3 in Beirlant et al. (2004), or Matthys and Beirlant (2000). \\ In case of a real-valued EVI, the selection of an appropriate threshold is even more difficult and only a few methods are available. Dupuis (1999) suggested a robust model validation mechanism to guide the threshold selection, assigning weights between 0 and 1 to each data point, where a high weight means that the point should be retained since the GPD model fits it well. However, thresholding is required at the level of the weights and hence the method cannot be used in an unsupervised manner. Another approach consists of proposing penultimate limit distributions in \eqref{POT} and \eqref{POTPa}. In case $\xi >0$, under the mathematical theory of second-order slow variation, i.e. assuming that \begin{equation} \frac{\ell(yt)}{\ell (t)} - 1 =\delta_t \left( y^{-\beta}-1 \right), \label{SO} \end{equation} where $\delta_t=\delta (t)= t^{-\beta}\tilde{\ell}(t)$, with $\beta >0$ and $\tilde\ell$ slowly varying at infinity (see section 2.3 in de Haan and Ferreira, 2006), the left hand side of \eqref{POTPa} equals \[ \frac{\bar{F}(yt)}{\bar{F}(t)} = y^{-1/\xi} \frac{\ell(yt)}{\ell (t)} = y^{-1/\xi} \left( 1 + \delta_t (y^{-\beta}-1)\right), \; y>1. \] This then leads to the extension of the Pareto distribution (EPD) to approximate the distribution of $X/t$ given $X>t$ as $t \to \infty$: \begin{equation} \bar{H}_{\xi,\delta}^{EP}(y) := y^{-1/\xi}\left( 1+ \delta_t \left( (y^{-1/\xi})^{\beta\xi}-1\right)\right), \; y>1, \label{EP} \end{equation} with $\delta_t$ satisfying $\delta_t \downarrow 0$ as $t \to \infty$. {\it In cases where the second order model \eqref{SO} holds}, such a mixture model $\bar{H}_{\xi,\delta}^{EP}$ will improve the approximation of $\mathbb{P}\left( {X \over t} >y \,|\, X>t \right)$ for values of $t$ which are smaller than the appropriate $t$-values when modelling the POTs using $\bar{H}_{\xi}^{P}$. So the extension can work when modelling large and moderate extremes. As a byproduct, it may in some instances even work for the full sample. \\ In Beirlant et al. 
(2009), using an external estimator of $\rho=-\beta\xi$, the parameters $(\xi, \delta)$ are estimated by fitting the EPD (slightly adapted, with survival function $ \left\{y (1+ \tilde{\delta}_t -\tilde{\delta}_t y^{-\beta}) \right\}^{-1/\xi}$ and $\tilde\delta _t = \delta_t \xi$) with maximum likelihood to the excesses over a random threshold $X_{n-k,n}$, $k=1,2,\ldots,n$. The result of this procedure is two-fold: \begin{itemize} \item First, the estimates $\hat{\xi}_k^{EP}$ of $\xi$ are more stable as a function of $k$ compared to the original ML estimator derived by Hill (1975) $$ H_{k,n} = {1 \over k} \sum_{j=1}^k \log {X_{n-j+1,n}\over X_{n-k,n}} $$ which is obtained by fitting the Pareto distribution $\bar{H}_{\xi}^P$ to the relative excesses $\{ {X_{n-j+1,n}\over X_{n-k,n}}, j=1,\ldots,k\}$ following \eqref{POTPa}. Indeed, the bias in the simple POT model \eqref{POTPa} is estimated when fitting $\bar{H}_{\xi,\delta}^{EP}$ and it is shown that, under the assumption that the EP model for the excesses $X/t$ is correct and that $\beta$ is estimated consistently, the asymptotic bias of $\hat{\xi}_k^{EP}$ is 0 as long as $k (k/n)^{2\beta\xi} \to \lambda \geq 0$ as $k,n \to \infty$, while the asymptotic bias of $H_{k,n}$ is 0 only when $k (k/n)^{2\beta\xi} \to 0$. \item On the other hand, the asymptotic variance of $\hat{\xi}_k^{EP}$ equals $\left({1-\rho\over \rho}\right)^2 {\xi^2 \over k}$, where ${\xi^2 \over k}$ is the asymptotic variance of $H_{k,n}$. \end{itemize} As an example Figure 1 shows both the Hill estimates $H_{k,n}$ and the bias reduced estimates $\hat{\xi}_k^{EP}$, obtained from maximum likelihood fitting of \eqref{EP} using $\rho=-\xi\beta=-0.25, -0.5$ and $-1$, as a function of $k$ for a dataset of Belgian ultimate car insurance claims from 1995 and 2010 discussed in more detail in Albrecher et al. (2017). Note that the bias reduced estimates help to interpret the original Hill ``horror'' plot. Here, from the bias reduced estimator, a $\xi$ level around 0.5 becomes apparent for $k \geq 200$, and a lower value between 0.3 and 0.4 for smaller values of $k$. In fact, mixtures in the ultimate tail appear quite often in insurance claim data. Moreover the EPD fit appears to extend quite well down to the lower threshold value, i.e. with $k$ up to 600 (but not when using almost all data, $k>600$). In this sense, classical first order extreme value modelling can in some cases be extended using mixture modelling in order to capture the characteristics of the bulk of the data. \begin{figure} \caption{Ultimates of Belgian car insurance claims: bias reduction of the Hill estimator (full line) using $\bar{H}^{EP}_{\xi,\delta}$.} \end{figure} Other bias reduction techniques in the Pareto-type case $\xi >0$ have been proposed among others in Feuerverger and Hall (1999), Gomes et al. (2000), Beirlant et al. (1999, 2002) and Gomes and Martins (2002). In Caeiro and Gomes (2011) methods are proposed to limit the variance of bias-reduced estimators to the level of the variance of the Hill estimator $H_{k,n}$. The price to pay is then to assume a third-order slow variation model specifying \eqref{SO} even further. These methods focus on the distribution of the $\log$-spacings of high order statistics. Other construction methods for asymptotically unbiased estimators of $\xi >0$ were introduced in Peng (1998) and Drees (1996). \\ In this paper we concentrate on bias reduction when using the GPD approximation to the distribution of POTs $X-t|X>t$, on which the literature is quite limited. 
This allows to extend bias reduction to the general case $\xi > -1/2$. We apply the flexible semiparametric GP modelling introduced in Tencaliec et al. (2018) to the POT distributions. We also extend the second-order refined POT approach using $\bar{H}^{EP}_{\xi,\delta}$ from \eqref{EP} to all max-domains of attraction. Here the corresponding basic second order regular variation theory can be found in Theorem 2.3.8 in de Haan and Ferreira (2006) stating that \begin{equation} \lim_{t \to x_+}{ \mathbb{P}(X-t >y\sigma_t|X>t)- (1+\xi y)^{-1/\xi} \over \delta (t)} = (1+\xi y)^{-1-1/\xi}\Psi_{\xi,\tilde{\rho}}((1+\xi y)^{1/\xi}), \label{secondorder} \end{equation} with $\delta(t) \to 0$ as $t \to x_+$ and $\Psi_{\xi,\tilde{\rho}}(x)={1 \over \tilde\rho}\left({x^{\xi +\tilde{\rho}}-1 \over \xi +\tilde\rho}- {x^{\xi}-1 \over \xi}\right)$ which for the cases $\xi=0$ and $\tilde\rho=0$ is understood to be equal to the limit as $\xi \to 0$ and $\tilde\rho \to 0$. We further allow more flexible second-order models than the ones arising from second-order regular variation theory such as in \eqref{secondorder} using non-parametric modelling of the second-order component. These new methods are also applied to the specific case of Pareto-type distributions. \\ In the next section we propose our transformed and extended GPD models, and detail the estimation methods. Some basic asymptotic results are provided in section 3. In the final section we discuss simulation results of the proposed methods and some practical case studies. We then also discuss the evaluation of the overall goodness-of-fit behaviour of the fitted models. \section{Transformed and extended GPD models} Recently, Naveau et al. (2016), generalizing Papastathopoulos and Tawn (2013), proposed to use full models for rainfall intensity data that are able to capture low, moderate and heavy rainfall intensities without a threshold selection procedure. These authors, considering only applications with a positive EVI however, propose to model all data jointly using transformation models with survival function \begin{equation} \bar{F}(x) = 1-\bar{G}_0 \left( H^{GP}_{\xi} ({x \over \sigma})\right)=: G_0\left( \bar{H}^{GP}_{\xi} ({x \over \sigma})\right), \label{naveau} \end{equation} with $\bar{G}_0$ and $G_0$ distribution functions on $[0,1]$ linked by $G_0(u)=1-\bar{G}_0 (1-u)$ ($0<u<1$), and satisfying constraints to preserve the classical tail GPD fit and a power behaviour for small rainfall intensities: \begin{itemize} \item $\lim_{u \downarrow 0} \frac{G_0(u)}{u}=a$, for some $a>0$, \item $\lim_{u \downarrow 0} \frac{\bar{G}_0(u)}{u^\kappa}=c$, for some $c>0$ and $\kappa >0$. \end{itemize} In Naveau et al. (2016) the authors propose parametric examples for $G_0$, such as $G_0(u) = {1+D \over D} u (1-\frac{u^D}{1+D})$, $v \in (0,1)$ with $D>0$. In Tencaliec et al. (2018) a non-parametric approach is taken using Bernstein polynomials of degree $m$ to approximate $G_0$, i.e. using $G_{0}^{(m)} (u) = \sum_{j=0}^m G({j \over m})b_{j,m}(u)$ with beta densities $$b_{j,m}(u)=\left( \begin{array}{c} m \\ j \end{array}\right)u^j (1-u)^{m-j},\; u \in (0,1). $$ \\ In Naveau et al. (2016) and Tencaliec et al. (2018) the primary goal is the search for a model fitting the whole outcome set, while the fit of the proposed model to POT values $X-t |X>t$ for extrapolation purposes in order to estimate extreme quantiles and tail probabilities is imposed using the condition $\lim_{u \downarrow 0} \frac{G_0(u)}{u}=a$. 
However the bias and MSE properties of the estimators of $\xi$ and $\sigma$ are still to be analyzed.\\ To encompass the above mentioned methods from Beirlant et al. (2009), Naveau et al. (2016) and Tencaliec et al. (2018) we propose to approximate $\mathbb{P}\left(X-t >y| X>t \right)$ with a transformation model with right tail function of the type \[\hspace{-3.5cm} ({\cal{T}}): \hspace{0.5cm} \bar{F}^T_t (y) = G_t \left(\bar{H}^{GP}_{\xi}({y \over \sigma})\right) \] where $G_t(u)/u \to 1$ for all $u \in (0,1)$ as $t \to x_+$. Note here that for $u \in (0,1)$ and \\ $Y=_d X-t|X>t$, \begin{equation} G_t (u) = \mathbb{P} \left(\bar{H}^{GP}_{\xi}({Y \over \sigma}) \leq u\right). \label{Gt} \end{equation} \noindent We also consider a submodel of $({\cal{T}})$, approximating the POT distribution with an extended GPD model \[ ({\cal{E}}) : \hspace{0.5cm} \bar{F}^E_t(y)= \bar{H}^{GP}_\xi ({y \over \sigma})\left\{1 +\delta_t B_{\eta} \left( \bar{H}^{GP}_\xi ( {y \over \sigma}) \right) \right\}, \] where \begin{itemize} \item $\delta_t =\delta (t) \to 0$ as $t \to x_+$, \item $B_{\eta} (1)=0$ and $\lim_{u \to 0} u^{1-\epsilon}B_{\eta}(u)=0$ for every $0< \epsilon <1$, \item $B_{\eta}$ is twice continuously differentiable. \end{itemize} Here the parameter $\eta$ represents a second order (nuisance) parameter. For negative $\delta$-values one needs $\delta_t > \{\min_u (1-{d \over du}\, (uB_{\eta}(u)))\}^{-1}$ to obtain a valid distribution. At $t=0$ the function $u \mapsto u(1+\delta_0 B_{\eta}(u))$ then corresponds to $u \mapsto G_0(u)$ in \eqref{naveau}, while ${G_t (u) \over u} \to 1$ as $t \to x_+$ leads to the GPD survival function $\bar{H}^{GP}_\xi (x/\sigma)$ at large thresholds. \\ Note that model ($\cal{E}$) is a direct generalization of the EPD model \eqref{EP} replacing the Pareto distribution $y^{-1/\xi}$ by the GPD $\bar{H}^{GP}_\xi$ and considering a general function $B_\eta (u)$ rather than $ u^{\beta\xi}-1 = u^{-\rho}-1$. \\ \noindent Now several possibilities for bias reduction appear: \begin{enumerate} \item[(1)] {\bf Estimation under the transformed model (${\cal T}$).} Modelling the distribution of $Y=X-t|X>t$ with model ($\cal{T}$) and estimating $G_t$ and $(\xi,\sigma)$ for every $t$, we propose to use the algorithm from Tencaliec et al. (2018) for every $t$ or $k=1,\ldots,n$. This approach is further denoted with ($T\bar{p}$). \\ Here we apply Bernstein approximation and estimation of $G_t$ which is the distribution function of $\bar{H}_{\xi}^{GP}(Y/\sigma)$. The Bernstein approximation of order $m$ of a continuous distribution function $G$ on $[0,1]$ is given by \[ G^{(m)}(u) = \sum_{j=0}^m G\left( {j \over m}\right)\left( \begin{array}{c} m \\ j \end{array}\right)u^j (1-u)^{m-j}, \; u \in [0,1]. \] As in Babu et al. (2002) one then replaces the unknown distribution function $G$ itself with the empirical distribution function $\hat{G}_n$ of the available data in order to obtain a smooth estimator of $G$: \[ \hat{G}_n^{(m)}(u) = \sum_{j=0}^m \hat{G}_n\left( {j \over m}\right)\left( \begin{array}{c} m \\ j \end{array}\right)u^j (1-u)^{m-j}. \] In the present application, data from $G_t$ are only available after imputing a value for $(\xi,\sigma)$. This then leads to the iterative algorithm from Tencaliec et al. (2018), which is applied to every threshold $t$, or every number of top $k$ data. 
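Before detailing that algorithm, we note that its main building block, the Bernstein smoother $\hat{G}^{(m)}$ and its density, can be coded in a few lines. The following Python sketch (illustrative only and not the authors' implementation; the uniform toy data and the degree $m$ are arbitrary choices) evaluates $\hat{G}_n^{(m)}$ and its derivative from the empirical distribution function, as needed in steps (ii.b) and (ii.c) of the algorithm below.
\begin{verbatim}
# Sketch of the Bernstein-smoothed empirical cdf G^(m) and its density, the
# building block of the algorithm (A_T) below; data and degree m are
# illustrative choices.
import numpy as np
from scipy.stats import binom

def bernstein_cdf(u, z, m):
    """hat G^(m)(u) = sum_j hat G(j/m) * C(m,j) u^j (1-u)^(m-j)."""
    z = np.sort(z)
    g_grid = np.searchsorted(z, np.arange(m + 1) / m, side="right") / len(z)
    return float(np.dot(g_grid, binom.pmf(np.arange(m + 1), m, u)))

def bernstein_pdf(u, z, m):
    """Derivative: m * sum_j [hat G((j+1)/m) - hat G(j/m)] * b_{j,m-1}(u)."""
    z = np.sort(z)
    g_grid = np.searchsorted(z, np.arange(m + 1) / m, side="right") / len(z)
    return float(m * np.dot(np.diff(g_grid), binom.pmf(np.arange(m), m - 1, u)))

z = np.random.default_rng(0).uniform(size=200)   # plays the role of the Z_{j,k}
print(bernstein_cdf(0.5, z, m=20), bernstein_pdf(0.5, z, m=20))
\end{verbatim}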
We here detail the algorithm for excesses $Y_{j,k}=X_{n-j+1,n}-X_{n-k,n}$ $(j=1,\ldots,k)$, using the reparametrization $(\xi,\tau)$ with $\tau=\xi/\sigma$: \noindent {\it Algorithm} ($A_{\cal T}$) \begin{enumerate} \item[(i)] Set starting values ($\hat{\xi}_k^{(0)},\hat{\tau}_k^{(0)}$). Here one can use ($\hat{\xi}_k^{ML},\hat{\tau}_k^{ML}$) from using $G_t (u)=u$. \item[(ii)] Iterate for $r=0,1,\ldots$ until the difference in loglikelihood taken in ($\hat{\xi}_k^{(r)},\hat{\tau}_k^{(r)}$) and ($\hat{\xi}_k^{(r+1)},\hat{\tau}_k^{(r+1)}$) is smaller than a prescribed value \begin{enumerate} \item Given ($\hat{\xi}_k^{(r)},\hat{\tau}_k^{(r)}$) construct rv's $ \hat{Z}_{j,k}= \left( 1+ \hat{\tau}_k^{(r)}Y_{j,k}\right)^{-1/\hat{\xi}_k^{(r)}} $ \item Construct Bernstein approximation based on $\hat{Z}_{j,k}$ ($1\leq j \leq k$) \[ \hat{G}_k^{(m)}(u) = \sum_{j=0}^m \hat{G}_k \left( {j \over m}\right)\left( \begin{array}{c} m \\ j \end{array}\right)u^j (1-u)^{m-j} \] with $\hat{G}_k$ the empirical distribution function of $\hat{Z}_{j,k}$ \item Obtain new estimates ($\hat{\xi}_k^{(r+1)},\hat{\tau}_k^{(r+1)}$) with ML: \begin{eqnarray*} (\hat{\xi}_k^{(r+1)},\hat{\tau}_k^{(r+1)})&=& \mbox{argmax} \left\{ \sum_{j=1}^k \log \{ \hat{g}^{(m)}_k ((1+\tau \hat{Z}_{j,k})^{-1/\xi})\} \right.\\ && \hspace{1.5cm} \left. + \sum_{j=1}^k \log \{{\tau \over \xi}(1+\tau \hat{Z}_{j,k})^{-1-1/\xi} \} \right\} \end{eqnarray*} with $\hat{g}^{(m)}_k $ denoting the derivative of $\hat{G}_k^{(m)}$. \end{enumerate} \end{enumerate} \noindent The final estimates of $(\xi,\tau)$ and $G_t$ are denoted here by $(\hat{\xi}_k^T,\hat{\tau}_k^T)$ and $\hat{G}_k^T$. As noted in Tencaliec et al. (2018) the theoretical study of these estimates is difficult. In the simulation study the finite sample characteristics of these estimators $\hat{\xi}_k^T$ are given using $m=k^a$ with a fixed value of $a$ using $\hat{a} = argmin \sum_{k=2}^n (\hat{\xi}_k^T - \bar{\hat{\xi}}^T_n)^2$ in order to stabilize the plots of the estimates of $\xi$ as much as possible. Note that this estimation procedure is computationally demanding. \noindent Estimates of small tail probabilities $\mathbb{P}(X>c)$ can be obtained through \[ \hat{\mathbb{P}}_k^T(X>c) = {k \over n} \hat{G}^T_k \left( \bar{H}_{\hat{\xi}^T_k} ({\hat{\tau}^T_k \over \hat{\xi}^T_k}(c-X_{n-k,n}) )\right). \] Finally, bias reduced estimators of extreme $1-p$ quantiles for small $p$ are obtained by setting the above formulas equal to $p$ and solving for $c$. \item[(2)] {\bf Estimation under the extended model (${\cal E}$).} Modelling the distribution of the exceedances $Y$ with model ($\cal{E}$) leads to maximum likelihood estimators based on the excesses $Y_{j,k}=X_{n-j+1,n}-X_{n-k,n}$ $(j=1,\ldots,k)$: \begin{eqnarray} (\hat{\xi}^E_k,\hat{\tau}^E_k, \hat{\delta}_k)&=& \mbox{argmax} \left\{ \sum_{j=1}^k \log\left( 1+\delta_k b_{\eta}((1+\tau Y_{j,k})^{-1/\xi}) \right) \right. \nonumber \\ && \hspace{1.5cm} \left. + \sum_{j=1}^k \log \{{\tau \over \xi}(1+\tau Y_{j,k})^{-1-1/\xi} \} \right\} \label{MLE} \end{eqnarray} with $b_{\eta}(u) ={d \over du} (uB_{\eta}(u))$ for a given choice of $B_\eta$.\\ \noindent Estimates of small tail probabilities $\mathbb{P}(X>c)$ are then obtained through \[ \hat{\mathbb{P}}_k^E(X>c) = {k \over n} \bar{H}^{GP}_{\hat{\xi}^E_k}\left({\hat{\tau}^E_k \over \hat{\xi}^E_k}(c-X_{n-k,n}) \right) \left( 1+ \hat{\delta}_k^E \hat{B}_{\eta}\left(\bar{H}^{GP}_{\hat{\xi}^E_k} ({\hat{\tau}^E_k \over \hat{\xi}^E_k}(c-X_{n-k,n})\right)\right). \] \noindent As in Naveau et al. 
(2016), respectively Tencaliec et al. (2018), two approaches can be taken towards the bias function $B_{\eta}$: a parametric approach, respectively a non-parametric approach. \begin{itemize} \item[(a)] {\it In the parametric approach}, denoted ($Ep$), the second-order result \eqref{secondorder} leads to the parametric choice $B_{\xi,\tilde{\rho}}(u)= {u^{\xi} \over \tilde\rho}\left({u^{-\xi -\tilde{\rho}}-1 \over \xi +\tilde\rho}- {u^{-\xi}-1 \over \xi}\right)$ in case $\xi+\tilde\rho \neq 0$ and $\xi \neq 0$. \\ Model (${\cal{E}}$) allows for bias reduced estimation of $(\xi,\tau)$ under the assumption that the corresponding second-order model \eqref{secondorder} is correct for the POTs $X-t|X>t$. Note that ($Ep$) generalizes the approach taken in Beirlant et al. (2009) to all max-domains of attraction. When model (${\cal E}$) is used as a model for all observations, i.e. taking $t=0$, this model directly encompasses the models from Frigessi et al. (2002) and Naveau et al. (2016).\\ Here \[b_\eta (u)= u^{-\tilde\rho}\left( {1-\tilde\rho\over \tilde\rho (\xi +\tilde\rho)}\right) + u^\xi \left( {1+\xi \over \xi (\xi +\tilde\rho)}\right) - {1 \over \xi\tilde\rho}, \] in which the classical estimator of $\xi$ (with $\delta_k=0$), or an appropriate value $\xi_0$, is used to substitute $\xi$, next to an appropriate value of $\tilde\rho$. One can also choose a value of ($\xi_0,\tilde\rho$) minimizing the variance in the plot of the resulting estimates of $\xi$ as a function of $k$. \item[(b)] Alternatively, {\it a non-parametric approach} (denoted $E{\bar{p}}$) can be performed using the Bernstein polynomial algorithm from Tencaliec et al. (2018). Indeed, in practice a particular distribution is governed by laws of nature, the environment or business, and does not have to satisfy the second-order regular variation assumptions as in \eqref{secondorder}. Moreover, in the case of a real-valued EVI, the function $B_{\eta}$ can take different mathematical forms depending on whether ($\xi,\tilde\rho$) and $\xi+\tilde\rho$ are close to 0 or not.\\ Here a Bernstein type approximation is obtained for $u\mapsto uB_\eta (u)$ from $\hat{G}_{k_*}^{(m)}(u) -u$ obtained through algorithm ($A_{\cal T}$), and reparametrizing $\delta_k$ by $\delta_k/\delta_{k_*}$ with $k_*$ an appropriate value of the number of top data used. The function $b_\eta (u)$ is then substituted by $-1 + {d \over du}\hat{G}_{k_*}^{(m)}(u)$. \end{itemize} \end{enumerate} \noindent The methods described above can of course be rewritten for the specific case of Pareto-type distributions, where the distribution of the POTs $Y={X \over t}|X>t$ is approximated by transformed Pareto distributions. The models are then rephrased as \[\hspace{-2.9cm} ({\cal{T}^+}): \hspace{0.5cm} \bar{F}^E_t (y) = G_t \left(\bar{H}^{P}_{\xi}(y)\right), \] where for $u \in (0,1)$ \begin{equation*} G_t (u) = \mathbb{P} \left(\bar{H}^{P}_{\xi}(Y) \leq u\right), \end{equation*} and \[ ({\cal{E}^+}) : \hspace{0.5cm} \bar{F}^E_t(y)= \bar{H}^{P}_\xi (y)\left\{1 +\delta_t B_{\eta} \left( \bar{H}^{P}_\xi (y) \right) \right\}. \] The above algorithms, now based on the exceedances $Y_{j,k}= X_{n-j+1,n}/X_{n-k,n}$ ($j=1,\ldots,k$), are then adapted as follows:\\ $\bullet$ In algorithm ($A_{\cal T}$) step (ii.c) is replaced by \[ \hat{\xi}_k^{(r+1)}= \mbox{argmax} \left\{ \sum_{j=1}^k \log \{ \hat{g}^{(m)}_k ( (\hat{Z}_{j,k})^{-1/\xi})\} + \sum_{j=1}^k \log \{{1 \over \xi}(\hat{Z}_{j,k})^{-1-1/\xi} \} \right\}, \] with $ \hat{Z}_{j,k}= Y_{j,k}^{-1/\hat{\xi}_k^{(r)}} $.
The resulting estimates are denoted with $\hat{\xi}^{T+}_{k}$ and $\hat{G}^{T+}_k $. \\ $\bullet$ In approach (${\cal E}$) the likelihood solutions are given by \begin{equation} (\hat{\xi}^{E+}_k, \hat{\delta}^{E+}_k)= \mbox{argmax} \left\{ \sum_{j=1}^k \log\left( 1+\delta_k b_{\eta}(Y_{j,k}^{-1/\xi}) \right)+ \sum_{j=1}^k \log \{{1 \over \xi} (Y_{j,k})^{-1-1/\xi} \} \right\}. \label{E+} \end{equation} Note that the ($Ep^+$) approach using the parametric version $B_\eta (u) = u^{-\rho}-1$ for a particular fixed $\rho <0$ equals the EPD method from Beirlant et al. (2009), while ($E\bar{p}^+$) is new. \noindent Estimators of tail probabilities are then given by \[ \hat{\mathbb{P}}_k^{T+}(X>c) = {k \over n} \hat{G}^{T+}_k \left( \bar{H}_{\hat{\theta}^{T+}_k} \; (c/X_{n-k,n})\right), \] respectively \[ \hat{\mathbb{P}}_k^{E+}(X>c) = {k \over n} \bar{H}^{P}_{\hat{\xi}^{E+}_k}\left( c/X_{n-k,n} \right) \left( 1+ \hat{\delta}_k^{E+} \hat{B}_{\eta}\left(\bar{H}^{P}_{\hat{\xi}^{E+}_k} (c/X_{n-k,n})\right)\right). \] \section{Basic asymptotics under model (${\cal E}$)} We discuss here in detail the asymptotic properties of the maximum likelihood estimators solving \eqref{MLE} and \eqref{E+}. To this end, as in Beirlant et al. (2009) we develop the likelihood equations up to linear terms in $\delta_k$ since $ \delta_k \to 0$ with decreasing value of $k$. Below we set $\bar{H}_{\theta}(y)=(1+\tau y)^{-1/\xi}$ when using extended GPD modelling, while $\bar{H}_{\theta}(y)=y^{-1/\xi}$ when using extended Pareto modelling under $\xi >0$. \noindent {\it Extended Pareto POT modelling}. The likelihood problem \eqref{E+} was already considered in Beirlant et al. (2009) in case of parametric modelling for $B_\eta$. We here propose a more general treatment. The limit statements in the derivation can be obtained using the methods from Beirlant et al. (2009). The likelihood equations following from \eqref{E+} are given by \begin{equation} \left\{ \begin{array}{lcl} {\partial \over \partial \xi} \ell &=& -{k \over \xi }+ {1 \over \xi^2} \sum_{j=1}^k \log Y_{j,k} + {\delta_k \over \xi^2} \sum_{j=1}^k \frac{ b'_\eta (\bar{H}_{\theta}(Y_{j,k}))\bar{H}_{\theta}(Y_{j,k})\log Y_{j,k}}{1+\delta_k b_\eta (\bar{H}_{\theta}(Y_{j,k}))} \\ {\partial \over \partial \delta_k} \ell &=& \sum_{j=1}^k b_\eta (\bar{H}_{\theta}(Y_{j,k}))-\delta_k \sum_{j=1}^k b^2_\eta (\bar{H}_{\theta}(Y_{j,k})). \end{array} \right. \label{systemE+} \end{equation} \noindent {\it Extended Generalized Pareto POT modelling}. The likelihood equations following from \eqref{MLE} up to linear terms in $\delta_k$ are now given by \[ \left\{ \begin{array}{lcl} {\partial \over \partial \xi} \ell &=& -{k \over \xi }+ {1 \over \xi^2} \sum_{j=1}^k \log (1+\tau Y_{j,k}) + {\delta_k \over \xi^2} \sum_{j=1}^k b'_\eta (\bar{H}_{\theta}(Y_{j,k}))\bar{H}_{\theta}(Y_{j,k})\log (1+\tau Y_{j,k}) \\ {\partial \over \partial \tau} \ell &=& {k \over \xi \tau} \left\{ -1+ (1+\xi) {1 \over k}\sum_{j=1}^k {1 \over 1+\tau Y_{j,k}} \right. \\ && \hspace{1cm} \left. -{\delta_k \over k}\sum_{j=1}^k b'_\eta (\bar{H}_{\theta}(Y_{j,k})) (\tau Y_{j,k}) (1+\tau Y_{j,k})^{-1-1/\xi} \right\} \\ {\partial \over \partial \delta_k} \ell &=& \sum_{j=1}^k b_\eta (\bar{H}_{\theta}(Y_{j,k}))-\delta_k \sum_{j=1}^k b^2_\eta (\bar{H}_{\theta}(Y_{j,k})), \end{array} \right. 
\] from which \begin{equation} \left\{ \begin{array}{l} \hat{\delta}_k = \frac{\sum_{j=1}^k b_\eta (\bar{H}_{\hat{\theta}_k}(Y_{j,k}))}{\sum_{j=1}^k b^2_\eta (\bar{H}_{\hat{\theta}_k}(Y_{j,k}))}, \\ {1 \over k}\sum_{j=1}^k \log (1+\hat{\tau}_k Y_{j,k}) = \hat{\xi}_k- {\hat{\delta}_k \over k}\sum_{j=1}^k b'_{\eta}(\bar{H}_{\hat{\theta}_k}(Y_{j,k})) \bar{H}_{\hat{\theta}_k}(Y_{j,k}) \log (1+\hat{\tau}_k Y_{j,k}), \\ {1 \over k}\sum_{j=1}^k {1 \over 1+\hat{\tau}_k Y_{j,k}} = {1 \over 1+ \hat{\xi}_k} + {\hat{\delta}_k \over 1+ \hat{\xi}_k} \left\{ {1\over k}\sum_{j=1}^k b'_{\eta}(\bar{H}_{\hat{\theta}_k}(Y_{j,k}))\bar{H}_{\hat{\theta}_k}(Y_{j,k}) \right.\\ \hspace{6cm} \left. - {1\over k}\sum_{j=1}^k b'_{\eta}(\bar{H}_{\hat{\theta}_k}(Y_{j,k}))\bar{H}_{\hat{\theta}_k}(Y_{j,k}) {1 \over 1+\hat{\tau}_k Y_{j,k}} \right\}. \end{array} \right. \label{systemE} \end{equation} \noindent Under the extended model we now state the asymptotic distribution of the estimators $\hat{\xi}_k^{E+}$ and $\hat{\xi}_k^{E}$. To this end let $Q$ denote the quantile function of $F$, and let $U(x)=Q(1-x^{-1})$ denote the corresponding tail quantile function. Model assumption $({\cal{E}})$ can be rephrased in terms of $U$: \[ ({\tilde{\cal{E}}}):\;\; \frac{\frac{U(vx)-U(v)}{\sigma_{U(v)}} -h_{\xi}(x)}{\delta (U(v))} \to _{v \to \infty} x^{\xi} B_{\eta}(1/x), \] where $h_{\xi}(x)= (x^{\xi}-1)/\xi$ and $\delta (U)$ is regularly varying with index $\tilde\rho<0$. Moreover, in the mathematical derivations one needs the extra condition that, for every $\epsilon,\nu>0$ and $v, vx$ sufficiently large, \[ ({\tilde{\cal{E}}}_2):\;\; \left| \frac{\frac{U(vx)-U(v)}{\sigma_{U(v)}} -h_{\xi}(x)}{\delta (U(v))} - x^{\xi} B_{\eta}(1/x) \right| \leq \epsilon x^{\xi}|B_{\eta}(1/x)| \max\{x^{\nu},x^{-\nu}\}. \] Similarly, $({\cal{E}}^+)$ is rewritten as \[ ({\tilde{\cal{E}}}^+):\;\; \frac{\frac{U(vx)}{U(v)} - x^{\xi}}{\xi \delta(U(v))} \to_{v \to \infty} x^{\xi} B_{\eta}(1/x). \] The analogue of $({\tilde{\cal{E}}}_2)$ in this specific case is given by \[ ({\tilde{\cal{E}}}_2^+):\;\; \left| \frac{\frac{U(vx)}{U(v)} - x^{\xi}}{\xi\delta(U(v))} - x^{\xi} B_{\eta}(1/x) \right| \leq \epsilon x^{\xi}|B_{\eta}(1/x)| \max\{x^{\nu},x^{-\nu}\}, \] with $\delta(U)$ regularly varying with index $\rho <0$.\\ Finally, in the expression of the asymptotic variances we use \[ Eb^2_{\eta} = \int_0^1 b^2_{\eta} (u)du, \;\; EB_{\eta} = \int_0^1 B_{\eta} (u)du, \;\; EC_{\eta} = \int_0^1 u^{\xi}B_{\eta} (u)du. \] The proof of the next theorem is outlined in the Appendix. It allows one to construct confidence intervals for the estimators of $\xi$ obtained under the extended models.\\ {\bf Theorem 1} {\it Let $k=k_n$ be a sequence such that $k,n \to \infty$ and $k/n \to 0$, with $\sqrt{k}\delta (U(n/k)) \to \lambda \in \mathbb{R}$. Moreover assume that in \eqref{MLE} and \eqref{E+}, $B_{\eta}$ is substituted by a consistent estimator as $n\to \infty$. Then \begin{enumerate} \item[i.] when $\xi>0$ with $({\tilde{\cal{E}}}_2^+)$ $$\sqrt{k}\left( \hat{\xi}_k^{E+} -\xi \right) \to_d \mathcal{N}\left(0,\xi^2 \frac{Eb^2_{\eta}}{Eb^2_{\eta}-(EB_{\eta})^2}\right),$$ \item[ii.]
when $\xi > -1/2$ with $({\tilde{\cal{E}}}_2)$ $$\left( \sqrt{k}( \hat{\xi}_k^{E} -\xi), \sqrt{k} ({\hat{\tau}^{E}_k\over \tau}-1) \right)\to_d \mathcal{N}_2({\bf 0},\Sigma)$$ \[ \Sigma ={\xi^2 \over D} \left( \begin{array}{ll} {1 \over (1+\xi)^2(1+2\xi)} - \frac{ (EC_{\eta})^2}{Eb^2_{\eta}} & {1 \over \xi(1+\xi)^3 }-\frac{EB_{\eta}EC_{\eta}}{\xi(1+\xi)Eb^2_{\eta}} \\ {1 \over \xi(1+\xi)^3 }-\frac{EB_{\eta}EC_{\eta}}{\xi(1+\xi)Eb^2_{\eta}} & {1 \over \xi^2(1+\xi)^2}\left(1- \frac{(EB_{\eta})^2}{Eb^2_{\eta}} \right) \end{array} \right) \] where \[ D=\left( {1 \over (1+\xi)^2(1+2\xi)} - \frac{ (EC_{\eta})^2}{Eb^2_{\eta}} \right)\left(1- \frac{(EB_{\eta})^2}{Eb^2_{\eta}}\right) - \left( {1 \over (1+\xi)^2 }- \frac{EB_{\eta}EC_{\eta}}{Eb^2_{\eta}}\right)^2, \] \end{enumerate} } \noindent {\bf Remark 1.} The asymptotic variance of $\hat{\xi}_k^{E+}$ is larger than the asymptotic variance $\xi^2$ of the Hill estimator $H_{k,n}$. Indeed, \begin{eqnarray*} (EB_\eta)^2 &=& \left( \int_0^1 \log (1/u) b_{\eta}(u) du \right)^2 \\ &=& \left( \int_0^1 (\log (1/u) -1) b_{\eta}(u) du \right)^2 \\ &\leq & \left( \int_0^1 (\log (1/u) -1)^2 du \right) \left( \int_0^1 b^2_{\eta}(u) du \right)\\ &=& (Eb^2_\eta), \end{eqnarray*} where the above inequality follows using the Cauchy-Schwarz inequality. \\ Similarly, one can show that \[ (EC_\eta)^2= \xi^{-2} \left(\int_0^1 (u^\xi -{1 \over 1+\xi})b_\eta du \right)^2 \leq {1 \over (1+2\xi)(1+\xi)^2} (Eb^2_\eta). \] The asymptotic variance of $\hat{\xi}_k^{E}$ equals \[ {(1+\xi)^2 \over k} \; \frac{1- (1+\xi)^2(1+2\xi)(EC_\eta)^2/(Eb_\eta^2)} {1- {(1+\xi)^4(1+2\xi)\over \xi^2} (Eb_\eta^2)^{-1} [(EC_\eta)^2 -2 {(EC_\eta)(EB_\eta)\over (1+\xi)^2}+ {(EB_\eta)^2 \over (1+\xi)^{2}(1+2\xi)}] } \] which can be shown to be larger than the asymptotic variance $(1+\xi)^2/k$ of the classical GPD maximum likelihood estimator. In the parametric case with $B_\eta (u) = {u^{\xi} \over \tilde\rho}\left({u^{-\xi -\tilde{\rho}}-1 \over \xi +\tilde\rho}- {u^{-\xi}-1 \over \xi}\right)$, one obtains $EB_\eta = (1+\xi)^{-1}(1-\tilde\rho)^{-1}$, $EC_\eta = (1+\xi)^{-1}(1+2\xi)^{-1} (\xi-\tilde\rho+1)^{-1}$ and $Eb^2_\eta = 2 (1+2\xi)^{-1}(1-2\tilde\rho)^{-1}(\xi-\tilde\rho+1)^{-1}$. It then follows that the asymptotic variance of $\hat{\xi}_k^{E}$ equals ${(1+\xi)^2 \over k} \left(\frac{1-\tilde\rho}{\tilde\rho}\right)^2$.\\ In case $\xi >0$ with $B_\eta (u)= u^{-\rho}-1$, the asymptotic variance of $\hat{\xi}_k^{E+}$ is given by ${\xi^2 \over k} \left(\frac{1-\rho}{\rho}\right)^2$ as already found in Beirlant et al. (2009). Since in model (${\cal{E}}$) the $B_{\eta}$ factor is multiplied by $\delta_t$, the asymptotic distribution of tail estimators based on (${\cal{E}}$) will not depend on the asymptotic distribution of the estimator of $B_{\eta}$. As in Beirlant et al. (2009) when using the EPD model in the Pareto-type setting, one can rely in the parametric approach on consistent estimators of the nuisance parameter $\eta$ using a larger proportion $k_*$ of the data. Alternatively, one can also consider different values of $\eta$ in the parametric approach, and of $(k_*,m)$ in the non-parametric setting, and search for values of this nuisance parameter which stabilizes the plots of the EVI estimates as a function of $k$ using the minimum variance principle for the estimates as a function of $k$. Clearly one loses the asymptotic unbiasedness in Theorem 1 if $B_{\eta}$ is not consistently estimated. 
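As a quick numerical sanity check of these closed-form moments (our own illustration, not part of the original derivation, with $\xi$ and $\tilde\rho$ chosen arbitrarily), one can evaluate the integrals $EB_\eta$, $EC_\eta$ and $Eb^2_\eta$ directly for the parametric choice $B_{\xi,\tilde\rho}$ given above:
\begin{verbatim}
# Numerical check of the Remark 1 moments; illustration only (xi=0.5, rho=-1).
import numpy as np
from scipy.integrate import quad

xi, rho = 0.5, -1.0   # rho stands for tilde-rho of the extended GPD model

def B(u):   # parametric bias function B_{xi,rho}(u)
    return u**xi / rho * ((u**(-xi - rho) - 1) / (xi + rho) - (u**(-xi) - 1) / xi)

def b(u):   # b_eta(u) = d/du ( u B(u) ), written out for this parametric choice
    return (u**(-rho) * (1 - rho) / (rho * (xi + rho))
            + u**xi * (1 + xi) / (xi * (xi + rho)) - 1 / (xi * rho))

EB  = quad(B, 0, 1)[0]                        # int_0^1 B(u) du
EC  = quad(lambda u: u**xi * B(u), 0, 1)[0]   # int_0^1 u^xi B(u) du
Eb2 = quad(lambda u: b(u)**2, 0, 1)[0]        # int_0^1 b^2(u) du

print(EB,  1 / ((1 + xi) * (1 - rho)))                       # both 0.3333...
print(EC,  1 / ((1 + xi) * (1 + 2*xi) * (xi - rho + 1)))     # both 0.1333...
print(Eb2, 2 / ((1 + 2*xi) * (1 - 2*rho) * (xi - rho + 1)))  # both 0.1333...
\end{verbatim}
Under misspecification of $\tilde\rho$ these moments, and hence the correction entering \eqref{MLE}, change accordingly.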
However, as the simulation results show, in many instances the extreme value index estimators are not very sensitive to such a misspecification, especially in the non-parametric approaches $T\bar{p}$, $T\bar{p}^+$, $E\bar{p}$ and $E\bar{p}^+$, and the proposed estimators can still outperform the classical maximum likelihood estimators based on the first order approximations of the POT distributions. \section{Simulations and case studies} Simulation results and practical case studies are available at \url{https://phdshinygao.shinyapps.io/ExtendedModels/} \noindent Under {\it Simulations} one finds simulation results with sample size $n= 200$ for different distributions from each max-domain of attraction. The bias and MSE of the different estimators are plotted as a function of the number of exceedances $k$. Using the notation from the preceding sections, one can apply the technique with $\bar{H}_{\theta}$ equal to the GPD, or to the simple Pareto distribution (the latter only when $\xi>0$). \\ \noindent In case of the transformed models (${\cal T}$) one finds a slider to adapt the degree $m$ of the Bernstein polynomials through $m=k^a$ for different values of $a \in (0,1)$. One can also choose $a$ adaptively per sample so as to minimize the variance of $\hat{\xi}_k$ over $k=2,\ldots,n$ (in order to have stable plots over $k$). \\ \noindent In case of the extended models (${\cal E}$) one finds sliders for the following parameters: \begin{itemize} \item in case of Pareto modelling: $\rho$ in $Ep^+$, and $(k_*,m)$ in $E{\bar p}^+$ estimation; \item in case of GPD modelling: $\tilde\rho$ in $Ep$, and $(k_*,m)$ in $E{\bar p}$ estimation. \end{itemize} Again, one can choose these parameters so as to minimize the variance of $\hat{\xi}_k$ over $k=2,\ldots,n$. The value of $\xi$ in the parametric function $B_{\xi,\tilde\rho}$ in $Ep$ is imputed with the classical GPD-ML estimator at the given value of $k$. \\ \noindent Bias and RMSE plots of the corresponding tail probability estimates of $p=\mathbb{P}(X>c)$ are also given, where $c$ is chosen so that these probabilities equal $p=0.005$ or $p=0.003$. Here the bias, respectively the RMSE, is expressed as the average, respectively the average of squared values, of $\log (p/\hat{p})$. \\ One can also change the vertical scale of the plots and smooth the figures by taking moving averages of a given number of estimates. Finally, the figures can be downloaded as pdf files. \\ \noindent While the above link covers several other distributions and provides sliders for the different parameters $a$, $m$, $\rho$, $\tilde\rho$ and $k_*$, we collect here the resulting figures for the estimation of $\xi$ and of 0.003 tail probabilities, using the minimum variance principle for all parameters, for the following subset of models: \begin{itemize} \item {\it The Burr$\left(\tau,\lambda\right)$ distribution} with $\bar{F}(x)= \left(1+x^{\tau} \right)^{-\lambda}$ for $x>0$ with $\tau=1$ and $\lambda=2$, so that $\xi= {1 \over \tau\lambda}={1 \over 2}$ and $\rho= -{1 \over \lambda}= -{1 \over 2}$. \item {\it The Fr\'echet$\,(2)$ distribution} with $\bar{F}(x)= 1-\exp \left(-x^{-2} \right)$ for $x>0$, so that $\xi = \frac{1}{2}$ and $\rho= \tilde\rho=-1$. \item {\it The standard normal distribution} with $\xi=0$ and $\tilde{\rho}=0$. \item {\it The Exponential distribution} with $\bar{F}(x) = e^{-\lambda x}$ for $x>0$, so that $\xi=0$ and $\tilde{\rho}=0$.
\item {\it The Reversed Burr distribution} with $\bar{F}(x) =\left(1+(1-x)^{-\tau}\right)^{-\lambda}$ for $x< 1$ with $\tau=5$ and $\lambda=1$, so that $\xi = -1/(\tau\lambda)=-\frac{1}{5}$ with $\tilde{\rho}=-1/\lambda=-1$. \item {\it The extreme value Weibull distribution} with $F(x) =e^{-(1-x)^{\alpha}}$ for $x< 1$ with $\alpha =4$, so that $\xi = -\frac{1}{4}$ with $\tilde{\rho}=-1$. \end{itemize} In general the minimum variance principle works well, though in some cases improved results can be obtained by choosing specific values of the parameters $a$, $\rho$, $m$ and $k_*$. This is mainly the case for the Pareto-type models when using $T\bar{p}$ and $E\bar{p}$, such as for the Fr\'echet distribution. Also, for tail probability estimation using $Ep$ in cases with $\xi <0$, particular choices of the corresponding parameters lead to improvements over the minimum variance principle. \\ When using GPD modelling of the exceedances, the $Ep$ approach overall yields the best results, both in the estimation of $\xi$ and of tail probabilities. The improvement over the classical GPD maximum likelihood approach is smaller for $E\bar{p}$, and in situations where the second-order parameter $\tilde\rho$ equals 0, $E\bar{p}$ essentially coincides with the ML estimators. \\ In case of simple Pareto modelling for $\xi >0$ (see Figures 3 and 5), the $Ep^+$ and $E\bar{p}^+$ approaches yield substantial improvements over the Hill estimator, with small bias for both, while the parametric approach $Ep^+$ naturally exhibits the best RMSE. Note that when $\tilde\rho=0$ the conditions of the main theorem are not met, in which case the GPD fit and the bias reductions are known to exhibit a large bias; this is typically the case when $\xi=0$. The same is known to occur with simple Pareto modelling when $\rho=0$.\\ \noindent Under {\it Applications} the app also offers the analysis of some case studies, some of which are discussed here in more detail. We use the ultimates of the Belgian car insurance claims used in Figure 1, in order to illustrate $T\bar{p}^+$, $E\bar{p}^+$ and $Ep^+$, and the winter rainfall data at the Mont-Aigoual station (1976-2015), already used in Tencaliec et al. (2018), to illustrate $T\bar{p}$, $E\bar{p}$ and $Ep$. We then present estimates of $\xi$, $\sigma$ and tail probabilities $\mathbb{P}(X>x_{n,n})$, with $x_{n,n}$ denoting the largest observation, so that the estimated probability is expected to be close to $1/n$. An option is provided to construct confidence intervals for $\xi$ on the basis of Theorem 1. \\ In case $k=n$ the exceedances correspond to the reversely ordered data, i.e. $Y_{j,n}= X_{n-j+1,n}$. The goodness-of-fit for the {\it complete data set} can be analyzed by choosing a specific value $\theta_0=(\xi_0,\sigma_0)$ based on the estimates of ($\xi,\sigma$) obtained as a function of $k$, and by estimating the transformation $G$ using one step (ii.b) of the transformation algorithm ($A_{\mathcal T}$) with starting value ($\xi_0,\sigma_0$) and with a chosen value $m=n^a$, $a \in (0,1)$ (slider). We then construct transformed P-P plots \begin{eqnarray} && \hspace{-1cm} \left( -\log (1-\hat{F}_n (X_{n-j+1,n})); -\log \hat{G}(\bar{H}_{\theta_0}(X_{n-j+1,n})) \right) \nonumber \\ &=& \left( \log {n+1 \over j}; -\log \hat{G}_n^{(m)}(\bar{H}_{\theta_0}(X_{n-j+1,n})) \right), \; j=1,\ldots,n, \label{gof} \end{eqnarray} where $\hat{F}_n$ denotes the empirical distribution function based on $X_{n-j+1,n}$ ($j=1,\ldots,n$).
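To make the construction \eqref{gof} concrete, the following sketch (ours; it is not the code behind the app, and the function and variable names are merely illustrative) carries out one step (ii.b) of algorithm ($A_{\cal T}$) on the full sample and returns the two coordinates of the transformed P-P plot for a chosen $\theta_0=(\xi_0,\sigma_0)$.
\begin{verbatim}
# Sketch of the transformed P-P plot coordinates; illustrative only.
import numpy as np
from scipy.stats import binom   # Bernstein weights are binomial probabilities

def pp_coordinates(x, xi0, sigma0, a=0.99):
    x = np.sort(np.asarray(x))                    # x_{1,n} <= ... <= x_{n,n}
    n = len(x)
    m = int(n**a)
    tau0 = xi0 / sigma0
    z = (1.0 + tau0 * x)**(-1.0 / xi0)            # Hbar_{theta_0}(x), values in (0,1)
    grid = np.arange(m + 1) / m
    G_emp = np.mean(z[:, None] <= grid, axis=0)   # empirical df of the z's at j/m
    def G_bern(u):                                # Bernstein-smoothed df G_n^(m)
        w = binom.pmf(np.arange(m + 1), m, np.atleast_1d(u)[:, None])
        return w.dot(G_emp)
    j = np.arange(1, n + 1)                       # j = 1 corresponds to the maximum
    hbar = (1.0 + tau0 * x[::-1])**(-1.0 / xi0)   # Hbar_{theta_0}(X_{n-j+1,n})
    G_val = np.maximum(G_bern(hbar), 1e-12)       # guard against log(0)
    return np.log((n + 1) / j), -np.log(G_val)
\end{verbatim}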
The closer the plot lies to the diagonal, the better the fit of the model defined by the survival function $\hat{G}^{(m)} (\bar{H}_{\theta_0})$.\\ \noindent In Figure 10, the estimates of $\xi$ and $\mathbb{P}(X>4 \, 564 \,759)$ using the minimum variance principle are given, next to the goodness-of-fit plot as defined in \eqref{gof} with $\xi_0=0.28$ and $m=n^{0.99}$. The estimates of $\xi$ obtained from $Ep^+$ and $E\bar{p}^+$ are clearly the most stable as a function of $k$, indicating an EVI value of 0.4. The tail probability above the largest ultimate observation is also most stable for $Ep^+$ and $E\bar{p}^+$, indicating a value close to $1/n$ (marked by the horizontal line). While the goodness-of-fit plot shows some deviations from the 45 degree line for the fitted transformation model, the overall global fit is useful (the correlation equals 0.97). \\ \noindent In Figure 11, for the winter rainfall data, the results for $E\bar{p}$ with $m=57$, $k_*=43$ indicate two levels for $\xi$ (0.4 and, at small $k$, close to 0), $\sigma$ (10, and 40 for small $k$) and the tail probability $\mathbb{P}(X>162.05)$ (0.005, and 0.002 for small $k$, to be compared with $1/n=0.0018$), which indicates a change in statistical tail behaviour near the top. Method $Ep$ with $\tilde\rho=-1$ yields almost the same result as the classical GPD-ML method, except for the tail probability, where it is quite unstable. Finally, the transformation approach $T\bar{p}$ yields very stable plots at compromise values 0.2 for $\xi$, 10 for $\sigma$ and 0.003 for the tail probability. While the goodness-of-fit plot with $m=n^{0.99}$, $\sigma_0=10$ and $\xi_0=0.21$ has a correlation of 0.997, the transformation approach seems unable to capture the deviating tail component near the top data, with an EVI value close to 0. \section{Conclusion} In this contribution we have constructed bias-reduced estimators of tail parameters, extending the classical POT method based on the generalized Pareto distribution. The bias can be modelled parametrically (for instance based on second-order regular variation theory), or non-parametrically using Bernstein polynomial approximations. A basic asymptotic limit theorem is provided for the estimators of the extreme value parameters, which allows one to compute asymptotic confidence intervals. A shiny app has been constructed with which the characteristics and the effectiveness of the proposed methods are illustrated through simulations and practical case studies. From this it follows that, within the proposed methods, it is always possible to improve upon the classical POT method in both bias and RMSE. \section{Acknowledgments} \label{Sec5} \noindent This work is based on the research supported wholly/in part by the National Research Foundation of South Africa (Grant Number 102628) and the DST-NRF Centre of Excellence in Mathematical and Statistical Sciences (COE-Mass). The Grantholder acknowledges that opinions, findings and conclusions or recommendations expressed in any publication generated by the NRF supported research are those of the author(s), and that the NRF accepts no liability whatsoever in this regard. \section{Appendix} \noindent In this section we provide details concerning the proof of Theorem 1. \\ \noindent {\it Asymptotic distribution of $\hat{\xi}_k^{E+}$}.
\\ From \eqref{systemE+} we obtain up to linear terms in $\delta_k$ that (denoting $\hat{\xi}_k$ for $\hat{\xi}^{E+}_k$) \[ \left\{ \begin{array}{lcl} \hat{\delta}_k &=& \frac{\sum_{j=1}^k b_\eta (Y_{j,k}^{-1/\hat{\xi}_k})}{\sum_{j=1}^k b^2_\eta (Y_{j,k}^{-1/\hat{\xi}_k})} \\ \hat{\xi}_k &=& H_{k,n} + \hat{\delta}_k B^{(1)}_{k}, \end{array} \right. \] with $B^{(1)}_{k}= {1 \over k}\sum_{j=1}^k b'_\eta (Y_{j,k}^{-1/\hat{\xi}_k})Y_{j,k}^{-1/\hat{\xi}_k}\log Y_{j,k}$. As $k,n \to \infty$ and $k/n \to 0$ we have $B^{(1)}_{k} \to_p -\xi \int_0^1 b'_{\eta}(u) u \log u du = -\xi EB_\eta$. \\ Using a Taylor expansion on the numerator of the right hand side of the first equation leads to \[ {1 \over k}\sum_{j=1}^k b_\eta (Y_{j,k}^{-1/\hat{\xi}_k}) = {1 \over k}\sum_{j=1}^k b_\eta (Y_{j,k}^{-1/\xi}) - (\hat{\xi}_k - \xi) \xi^{-1} (EB_\eta) \, (1+o_p(1)), \] so that, with ${1 \over k}\sum_{j=1}^k b^2_\eta (Y_{j,k}^{-1/\hat{\xi}_k}) \to_p E b^2_{\eta}$, up to lower order terms \[ \hat{\delta}_k = {1 \over E b^2_{\eta}} {1 \over k}\sum_{j=1}^k b_\eta (Y_{j,k}^{-1/\xi}) - (\hat{\xi}_k - \xi) \xi^{-1}\frac{EB_\eta }{E b^2_{\eta}} \, (1+o_p(1)). \] Hence, inserting this expansion into $\hat{\xi}_k = H_{k,n} + \hat{\delta}_k B^{(1)}_{k}$, finally leads to \begin{eqnarray*} \sqrt{k}(\hat{\xi}_k - \xi)(1+o_p(1)) &=& \frac{E b^2_{\eta}}{E b^2_{\eta}-(EB_\eta)^2} \sqrt{k}\left(H_{k,n}-\xi \right) - \frac{\xi EB_\eta }{E b^2_{\eta}-(EB_\eta)^2 }\sqrt{k} \left( {1 \over k}\sum_{j=1}^k b_\eta (Y_{j,k}^{-1/\xi})\right)\\ &=& \frac{E b^2_{\eta}}{E b^2_{\eta}-(EB_\eta)^2} \sqrt{k}\left(H_{k,n}-\xi - \xi \delta_k EB_\eta\right) \\ && \;\; - \frac{\xi EB_\eta }{E b^2_{\eta}-(EB_\eta)^2 }\sqrt{k} \left( {1 \over k}\sum_{j=1}^k b_\eta (Y_{j,k}^{-1/\xi}) - \delta_k E b^2_{\eta} \right), \end{eqnarray*} with $\delta_k = \delta (U(n/k))$. We now show that this final expression is a linear combination of two zero centered statistics (up to the required accuracy) which is asymptotically normal with the stated asymptotic variance. To this end let $Z_{n-k,n} \leq Z_{n-k+1,n} \leq \ldots \leq Z_{n,n}$ denote the top $k+1$ order statistics of a sample of size $n$ from the standard Pareto distribution with distribution function $z \mapsto z^{-1}$, $z>1$. Then from $({\tilde{\cal{E}}}_2^+) $ \begin{eqnarray*} H_{k,n} &=& {1 \over k}\sum_{j=1}^k \left(\log U(Z_{n-j+1,n})- \log U(Z_{n-k,n}) \right) \\ &=& {1 \over k}\sum_{j=1}^k \log \left\{ \left({Z_{n-j+1,n}\over Z_{n-k,n}}\right)^\xi \left[ 1+\xi \delta (U(Z_{n-k,n}))B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right) \right. \right. \\ && \hspace{4.6cm}\left.\left. +o_p(1)|\delta (U(Z_{n-k,n}))| |B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right)| \left({Z_{n-j+1,n}\over Z_{n-k,n}}\right)^\epsilon \right] \right\}\\ &=& \xi {1 \over k}\sum_{j=1}^k \log {Z_{n-j+1,n}\over Z_{n-k,n}} + \xi \delta (U(Z_{n-k,n}))B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right)\\ && \hspace{3.5cm} +o_p(1)|\delta (U(Z_{n-k,n}))| |B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right)| \left({Z_{n-j+1,n}\over Z_{n-k,n}}\right)^\epsilon. 
\end{eqnarray*} Now $\log Z_{n-j+1,n}-\log Z_{n-k,n} =_d E_{k-j+1,k}$, the $(k-j+1)$th smallest value from a standard exponential sample $E_1,\ldots,E_k$ of size $k$, so that ${1 \over k}\sum_{j=1}^k \log {Z_{n-j+1,n}\over Z_{n-k,n}}=_d {1 \over k}\sum_{j=1}^k E_j$ and ${1 \over k}\sum_{j=1}^k B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right) =_d {1 \over k}\sum_{j=1}^k B_\eta (e^{-E_j})=_d {1 \over k}\sum_{j=1}^k B_\eta (U_j)$ where $U_1,\ldots,U_k$ is a uniform (0,1) sample. Hence, since $\delta (U(Z_{n-k,n})) / \delta (U(n/k)) \to_p 1$ and ${1 \over k}\sum_{j=1}^k B_\eta (U_j) \to_p EB_{\eta}$, we have that $H_{k,n} -\xi - \xi \delta_k EB_\eta$ is asymptotically equivalent to $ {1 \over k}\sum_{j=1}^k \xi (E_j-1)$ as $\sqrt{k} \delta_k \to \lambda$. \\ Similarly \begin{eqnarray*} {1 \over k}\sum_{j=1}^k b_\eta (Y_{j,k}^{-1/\xi}) &=& {1 \over k}\sum_{j=1}^k b_\eta \left(\left[ {U \left({Z_{n-j+1,n}\over Z_{n-k,n}} Z_{n-k,n} \right)\over U(Z_{n-k,n}) }\right]^{-1/\xi} \right)\\ &=& {1 \over k}\sum_{j=1}^k b_\eta \left( \left({Z_{n-j+1,n}\over Z_{n-k,n}}\right)^{-1} \left[ 1+\xi \delta (U(Z_{n-k,n}))B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right) \right. \right. \\ && \hspace{2.5cm}\left.\left. +o_p(1)|\delta (U(Z_{n-k,n}))| |B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right)| \left({Z_{n-j+1,n}\over Z_{n-k,n}}\right)^\epsilon \right]^{-1/\xi} \right)\\ &=&{1 \over k}\sum_{j=1}^k b_\eta \left( \left({Z_{n-j+1,n}\over Z_{n-k,n}}\right)^{-1} \left[ 1-\delta (U(Z_{n-k,n}))B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right) \right. \right. \\ && \hspace{2.5cm}\left.\left. +o_p(1)|\delta (U(Z_{n-k,n}))| |B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right)| \left({Z_{n-j+1,n}\over Z_{n-k,n}}\right)^\epsilon \right] \right) \\ &=& {1 \over k}\sum_{j=1}^k b_\eta (e^{-E_j})\\ && \hspace{0.3cm} -\delta (U(Z_{n-k,n})){1 \over k}\sum_{j=1}^k b'_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right) B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right) \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right)(1+o_p(1)). \end{eqnarray*} Since $\delta (U(Z_{n-k,n}))/\delta_k \to_p 1$ and ${1 \over k}\sum_{j=1}^k b'_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right) B_\eta \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right) \left( {Z_{n-k,n} \over Z_{n-j+1,n}}\right) \to_p - Eb^2_\eta$ it follows that $ {1 \over k}\sum_{j=1}^k b_\eta (Y_{j,k}^{-1/\xi}) - \delta_k E b^2_{\eta}$ is asymptotically equivalent to ${1 \over k}\sum_{j=1}^k b_\eta (e^{-E_j})=_d {1 \over k}\sum_{j=1}^k b_\eta (U_j)$ as $\sqrt{k} \delta_k \to \lambda$, which is centered at 0 since $E(b_\eta (U))=0$. \noindent {\it Asymptotic distribution of $\hat{\xi}_k^{E}$}. \\ This derivation follows similar lines starting from \eqref{systemE}: \[ \left\{ \begin{array}{l} {1 \over k}\sum_{j=1}^k b'_{\eta}(\bar{H}_{\hat{\theta}_k}(Y_{j,k})) \bar{H}_{\hat{\theta}_k}(Y_{j,k}) \log (1+\hat{\tau}_k Y_{j,k}) \to_p -\xi EB_\eta, \\ {1 \over k}\sum_{j=1}^k b^2_\eta (\bar{H}_{\hat{\theta}_k}(Y_{j,k})) \to_p Eb^2_{\eta}, \\ {1\over k}\sum_{j=1}^k b'_{\eta}(\bar{H}_{\hat{\theta}_k}(Y_{j,k}))\bar{H}_{\hat{\theta}_k}(Y_{j,k}) \to_p b_\eta (1), \\ {1\over k}\sum_{j=1}^k b'_{\eta}(\bar{H}_{\hat{\theta}_k}(Y_{j,k}))\bar{H}_{\hat{\theta}_k}(Y_{j,k}) {1 \over 1+\hat{\tau}_k Y_{j,k}} \to_p \xi(1+\xi) EC_{\eta} + b_\eta (1), \end{array} \right. 
\] as $k,n\to \infty$ and $k/n\to \infty$, so that the system of equations is asymptotically equivalent to \[ \left\{ \begin{array}{l} \hat{\delta}_k = \frac{{1 \over k}\sum_{j=1}^k b_\eta (\bar{H}_{\hat{\theta}_k}(Y_{j,k}))}{Eb^2_{\eta}}, \\ {1 \over k}\sum_{j=1}^k \log (1+\hat{\tau}_k Y_{j,k}) = \hat{\xi}_k + \hat{\xi}_k\hat{\delta}_kEB_{\eta} \\ {1 \over k}\sum_{j=1}^k {1 \over 1+\hat{\tau}_k Y_{j,k}} = {1 \over 1+ \hat{\xi}_k}- \hat{\xi}_k\hat{\delta}_k EC_{\eta}. \end{array} \right. \] Using a Taylor expansion on the numerator of the right hand side of the first equation leads to \[ \hat{\delta}_k Eb^2_{\eta}= {1 \over k } \sum_{j=1}^kb_{\eta}(\bar{H}_{\theta}(Y_{j,k})) - {EB_{\eta} \over \xi}(\hat{\xi}_k -\xi) + (1+\xi) EC_{\eta} \left({\hat{\tau}_k \over \tau}-1 \right). \] Imputing this in the second and third equation in $\xi$ and $\tau$, and expanding these equations linearly around the correct values ($\xi,\tau$), while using, as $k,n \to \infty$ and $k/n \to 0$ \[ {1 \over k}\sum_{j=1}^k {\tau Y_{j,k} \over 1+\tau Y_{j,k}} \to_p {\xi \over 1+\xi} \mbox{ and }{1 \over k}\sum_{j=1}^k {\tau Y_{j,k} \over (1+\tau Y_{j,k})^2} \to_p {\xi \over (1+\xi)(1+2\xi)}, \] leads to the linearized equations \begin{equation} \left\{ \begin{array}{l} \left(\hat{\xi}_k - \xi \right)\left(-1+ \frac{(EB_{\eta})^2}{Eb^2_{\eta}}\right) + \left({\hat{\tau}_k \over \tau}- 1 \right)\left({\xi \over 1+\xi} -\xi(1+\xi) \frac{EB_{\eta}\, EC_{\eta}}{Eb^2_{\eta}}\right) \\ \hspace{3cm} = - \left( {1 \over k}\sum_{j=1}^k \log (1+\tau Y_{j,k})-\xi \right) + {\xi EB_{\eta}\over Eb^2_{\eta}} {1 \over k}\sum_{j=1}^k b_{\eta}(\bar{H}_{\theta}(Y_{j,k})), \\ \\ \left(\hat{\xi}_k - \xi \right)\left({1 \over (1+\xi)^2 }- \frac{EB_{\eta}EC_{\eta}}{Eb^2_{\eta}}\right) + \left({\hat{\tau}_k \over \tau}- 1 \right)\left(-{\xi \over (1+\xi)(1+2\xi)} +\xi(1+\xi) \frac{ (EC_{\eta})^2}{Eb^2_{\eta}}\right) \\ \hspace{3cm} = - \left( {1 \over k}\sum_{j=1}^k {1 \over 1+\tau Y_{j,k}} -{1 \over 1+\xi} \right) - {\xi EC_{\eta}\over Eb^2_{\eta}} {1 \over k}\sum_{j=1}^k b_{\eta}(\bar{H}_{\theta}(Y_{j,k})). \end{array} \right. \label{lineqs} \end{equation} It follows that the right hand sides in \eqref{lineqs} can be rewritten as linear combination of two zero centered statistics from which the asymptotic normality of $\left( \sqrt{k}(\hat{\xi}^{E}_k-\xi ), \sqrt{k}({\hat{\tau}^{E}_k \over \tau}-1) \right)$ can be obtained, as stated in Theorem 1: \begin{equation*} \left\{ \begin{array}{l} \left(\hat{\xi}_k - \xi \right)\left(-1+ \frac{(EB_{\eta})^2}{Eb^2_{\eta}}\right) + \left({\hat{\tau}_k \over \tau}- 1 \right)\left({\xi \over 1+\xi} -\xi(1+\xi) \frac{EB_{\eta}\, EC_{\eta}}{Eb^2_{\eta}}\right) \\ \hspace{1.5cm} = - \left( {1 \over k}\sum_{j=1}^k \log (1+\tau Y_{j,k})-\xi - \xi\delta_k EB_{\eta} \right) + {\xi EB_{\eta}\over Eb^2_{\eta}} \left({1 \over k}\sum_{j=1}^k b_{\eta}(\bar{H}_{\theta}(Y_{j,k}))-\delta_k Eb^2_{\eta} \right), \\ \\ \left(\hat{\xi}_k - \xi \right)\left({1 \over (1+\xi)^2 }- \frac{EB_{\eta}EC_{\eta}}{Eb^2_{\eta}}\right) + \left({\hat{\tau}_k \over \tau}- 1 \right)\left(-{\xi \over (1+\xi)(1+2\xi)} +\xi(1+\xi) \frac{ (EC_{\eta})^2}{Eb^2_{\eta}}\right) \\ \hspace{1.5cm} = - \left( {1 \over k}\sum_{j=1}^k {1 \over 1+\tau Y_{j,k}} -{1 \over 1+\xi} + \xi\delta_k EC_{\eta}\right) - {\xi EC_{\eta}\over Eb^2_{\eta}} \left({1 \over k}\sum_{j=1}^k b_{\eta}(\bar{H}_{\theta}(Y_{j,k}))-\delta_k Eb^2_{\eta}\right). \end{array} \right. \end{equation*} This is done using similar derivations as in the case $\hat{\xi}_k^{E+}$. 
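The linearization above can also be checked by a small Monte Carlo experiment (ours, purely illustrative): simulating the two centred statistics and forming the same linear combination should reproduce the limiting variance of Theorem 1(i). The sketch below uses the simple parametric choice $B_\eta(u)=u^{-\rho}-1$ with $\rho=-1$ and $\xi=1$, for which the limit equals $\xi^2\left(\frac{1-\rho}{\rho}\right)^2=4$.
\begin{verbatim}
# Monte Carlo sketch of the linearized expansion of sqrt(k)(xi_hat^{E+} - xi);
# illustration only, with B_eta(u) = u^(-rho) - 1, so b_eta(u) = (1-rho)u^(-rho) - 1.
import numpy as np

rng = np.random.default_rng(1)
xi, rho, k, reps = 1.0, -1.0, 2000, 5000

EB  = rho / (1.0 - rho)          # int_0^1 B_eta(u) du
Eb2 = rho**2 / (1.0 - 2.0*rho)   # int_0^1 b_eta(u)^2 du  (and int_0^1 b_eta(u) du = 0)

vals = np.empty(reps)
for r in range(reps):
    E = rng.exponential(size=k)                              # E_1, ..., E_k
    U = np.exp(-E)                                           # uniform(0,1) sample
    S1 = np.sqrt(k) * np.mean(xi * (E - 1.0))                # Hill-type centred statistic
    S2 = np.sqrt(k) * np.mean((1.0 - rho) * U**(-rho) - 1.0) # (1/sqrt(k)) sum b_eta(U_j)
    vals[r] = (Eb2 * S1 - xi * EB * S2) / (Eb2 - EB**2)

print(vals.var(), xi**2 * Eb2 / (Eb2 - EB**2))   # both close to 4
\end{verbatim}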
\begin{figure} \caption{Burr distribution with $\xi=0.5$ and $\rho=-0.5$. Estimation of $\xi$ (top) and tail probability (bottom) using minimum variance principle, bias (left), RMSE (right): GPD-ML (full line), $T\bar{p}$, \ldots} \end{figure} \begin{figure} \caption{Burr distribution with $\xi=0.5$ and $\rho=-0.5$. Estimation of $\xi$ (top) and tail probability (bottom) using minimum variance principle, bias (left), RMSE (right): Pareto-ML (full line), $T\bar{p}$, \ldots} \end{figure} \begin{figure} \caption{Fr\'echet distribution with $\xi=0.5$. Estimation of $\xi$ (top) and tail probability (bottom), bias (left), RMSE (right): GPD-ML (full line), $T\bar{p}$, \ldots} \end{figure} \begin{figure} \caption{Fr\'echet distribution with $\xi=0.5$. Estimation of $\xi$ (top) and tail probability (bottom) using minimum variance principle, bias (left), RMSE (right): Pareto-ML (full line), $T\bar{p}$, \ldots} \end{figure} \begin{figure} \caption{Standard normal distribution ($\xi=0$ and $\tilde\rho=0$). Estimation of $\xi$ (top) and tail probability (bottom) using minimum variance principle, bias (left), RMSE (right): GPD-ML (full line), $T\bar{p}$, \ldots} \end{figure} \begin{figure} \caption{The exponential distribution ($\xi=0$ and $\tilde\rho=0$). Estimation of $\xi$ (top) and tail probability (bottom) using minimum variance principle, bias (left), RMSE (right): GPD-ML (full line), $T\bar{p}$, \ldots} \end{figure} \begin{figure} \caption{Reversed Burr distribution ($\xi=-0.2$ and $\tilde\rho=-1$). Estimation of $\xi$ (top) and tail probability (bottom) using minimum variance principle, bias (left), RMSE (right): GPD-ML (full line), $T\bar{p}$, \ldots} \end{figure} \begin{figure} \caption{Extreme value Weibull distribution ($\xi=-0.25$ and $\tilde\rho=-1$). Estimation of $\xi$ (top) and tail probability (bottom) using minimum variance principle, bias (left), RMSE (right): GPD-ML (full line), $T\bar{p}$, \ldots} \end{figure} \begin{figure} \caption{Ultimates of Belgian car insurance claims: estimation of $\xi$ (top left), tail probability at maximum observation (top right): Pareto-ML (full line), $T\bar{p}$, \ldots} \end{figure} \begin{figure} \caption{Winter rain data at Mont-Aigoual: estimation of $\xi$ and $\sigma$ (top) and tail probability (bottom left) using minimum variance principle: GPD-ML (full line), $T\bar{p}$, \ldots} \end{figure} \end{document}
math
57,326
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{definition}[theorem]{Definition} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newtheorem{notation}[theorem]{Notation} \newtheorem{application}[theorem]{Application} \newtheorem{definition-proposition}[theorem]{Definition-Proposition} \title{On calibrated and separating sub-actions} \author{ Eduardo Garibaldi\\ \footnotesize{Departamento de Matem\'atica}\\ \footnotesize{Universidade Estadual de Campinas}\\ \footnotesize{13083-859 Campinas -- SP, Brasil}\\ \footnotesize{\texttt{[email protected]}} \and Artur O. Lopes\thanks{Partially supported by CNPq, PRONEX -- Sistemas Din\^amicos, Instituto do Mil\^enio, and beneficiary of CAPES financial support.}\\ \footnotesize{Instituto de Matem\'atica}\\ \footnotesize{Universidade Federal do Rio Grande do Sul}\\ \footnotesize{91509-900 Porto Alegre -- RS, Brasil}\\ \footnotesize{\texttt{[email protected]}} \and Philippe Thieullen\thanks{Partially supported by ANR BLANC07-3\_187245, Hamilton-Jacobi and Weak KAM Theory.}\\ \footnotesize{Institut de Math\'ematiques}\\ \footnotesize{Universit\'e Bordeaux 1, CNRS, UMR 5251}\\ \footnotesize{F-33405 Talence, France}\\ \footnotesize{\texttt{[email protected]}} } \date{\today} \maketitle \begin{abstract} We consider a one-sided transitive subshift of finite type $ \sigma: \Sigma \to \Sigma $ and a H\"older observable $ A $. In the ergodic optimization model, one is interested in properties of $A$-minimizing probability measures. If $\bar A$ denotes the minimizing ergodic value of $A$, a sub-action $u$ for $A$ is by definition a continuous function such that $A\geq u\circ \sigma-u + \bar A$. We call contact locus of $u$ with respect to $A$ the subset of $\Sigma$ where $A=u\circ\sigma-u + \bar A$. A calibrated sub-action $u$ gives the possibility to construct, for any point $x\in\Sigma$, backward orbits in the contact locus of $u$. In the opposite direction, a separating sub-action gives the smallest contact locus of $A$, that we call $\Omega(A)$, the set of non-wandering points with respect to $A$. We prove that separating sub-actions are generic among H\"older sub-actions. We also prove that, under certain conditions on $\Omega(A)$, any calibrated sub-action is of the form $u(x)=u(x_i)+h_A(x_i,x)$ for some $x_i\in\Omega(A)$, where $h_A(x,y)$ denotes the Peierls barrier of $A$. We present the proofs in the holonomic optimization model, a formalism which allows to take into account a two-sided transitive subshift of finite type $(\hat \Sigma, \hat \sigma)$. {\bf To appear in Bull of the Braz. Math. Soc. Vol 40 (4) (2009)} \end{abstract} \begin{section}{Introduction} In the {\it ergodic optimization model} (see, for instance, \cite{Bousch1, Bousch2, Bremont, CLT, HY, Jenkinson1, LT1}), given a continuous observable $ A: X \to \mathbb{R}$, one is interested in understanding which $T$-invariant Borel probability measure $\mu$ of a compact metric space $X$ minimizes the average $\int_X A\,d\mu$. Such measures are called {\it minimizing probability measures}\footnote{Maximizing probabilities also appear in the literature. Obviously, replacing the observable $ A $ by $ - A $, both vocabularies can be interchanged and the rephrased statements will be immediately verified. The maximizing terminology seems more convenient to study the connections with the thermodynamic formalism (see, for example, \cite{CLT, Leplaideur}).}. 
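As a simple illustration (this elementary example is ours and is not taken from the works cited above), let $X=\{0,1\}^{\mathbb N}$, let $T$ be the full shift and let $A(x)=a_{x_0 x_1}$ depend only on the first two symbols, with $a_{00}=a_{11}=1$ and $a_{01}=a_{10}=0$. Since $A\geq 0$, every $T$-invariant probability measure $\mu$ satisfies $\int_X A\, d\mu\geq 0$, and the measure equidistributed on the periodic orbit $\overline{01}$ gives $\int_X A\, d\mu =0$; it is therefore an $A$-minimizing probability measure, while the two fixed points yield the value $1$.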
Minimizing probability measures admit dual objects: the {\it sub-actions}. A sub-action $ u: X \to \mathbb{R}$ associated to an observable $A$ enables one to replace $A$ by a cohomologous observable whose ergodic minimizing value is actually the absolute minimum. To each sub-action $u$ one associates a compact subset of $X$ called {\it contact locus} which contains the support of any minimizing probability measure. A sub-action therefore gives important information on $T$-invariant Borel probability measures that minimize the average of $A$. It is a relevant problem to investigate the existence of a particular sub-action having the smallest contact locus, that is, the smallest ``trapping region'' of all minimizing probability measures. In section~\ref{simplified_version}, we give a simplified version for the ergodic optimization model of the main results, namely, of the theorems \ref{principal}, \ref{structure} and \ref{discretestructure}. In section~\ref{basic_concepts}, we recall the definition of the {\it holonomic optimization model} and state the main results. We give in section \ref{proof_theorem_principal} the proof of theorem \ref{principal} and in section \ref{proof_theorem_structure} the proof of theorem \ref{discretestructure}. We address the reader to \cite{GL2} for a proof of theorem~\ref{structure}. We will adopt throughout the text the point of view which consists in interpreting ergodic optimization problems as questions of variational dynamics (see, for instance, \cite{CLT, GL2, LT1}), similar to Aubry-Mather techniques for Lagrangian systems. For an expository introduction to the general theory of ergodic optimization, we refer the reader to the article of O. Jenkinson (see \cite{Jenkinson1}). We would still like to point out that one of the main conjectures in the theory of ergodic optimization on compact spaces can be roughly formulated in the following way: \emph{in any hyperbolic dynamics, a generic H\"older (or Lipschitz) observable possesses a unique minimizing probability measure, which is supported by a periodic orbit}. Concerning this problem, partial answers have already been obtained, among them \cite{Bousch2, CLT, HY, LT1, Morris, TAZ}. Working with a transitive expanding dynamical system, J. Br\'emont (see \cite{Bremont}) has recently shown how such a conjecture might follow from a careful study of the contact loci of typical sub-actions with finitely many connected components. Of course, such a result reaffirms the importance of the study of sub-actions as well as of their respective contact loci. In the same dynamical context, we are in particular interested in finding \emph{separating sub-actions}, that is, sub-actions whose contact locus is the smallest one. As mentioned above, these sub-actions give more information on the minimizing measure(s) than does a general sub-action. Our main theorem (namely, theorem~\ref{principal}) states that such sub-actions are actually generic among the set of H\"older sub-actions. An interesting result that we also present here, independent of the previous considerations, is an analysis related to the following situation: it is known that, to each irreducible component of the $A$-non-wandering set, one can associate via the Peierls barrier a calibrated sub-action. We present in theorem~\ref{discretestructure} sufficient conditions (by no means necessary) that ensure that there exists a dominant one among such calibrated sub-actions.
\noindent \textbf{Acknowledgement.} We would like to thank the referee for his careful reading of our manuscript. This improved the exposition of the final version considerably. \end{section} \begin{section}{A simplified version of theorems \ref{principal}, \ref{structure} and \ref{discretestructure}} \label{simplified_version} Let $ (X, T) $ be a transitive expanding dynamical system, that is, a continuous covering several-to-one map $ T: X \to X $ on a compact metric space $ X $ whose inverse branches are uniformly contracting by a factor $ 0 < \lambda < 1 $. We denote by $ \mathcal M_T $ the set of $T$-invariant Borel probability measures. Our objective in this section is to summarize the conclusions of theorems \ref{principal}, \ref{structure} and \ref{discretestructure} in ergodic optimization theory. We first recall basic definitions from \cite{CLT} (see also \cite{Jenkinson1}). Given a continuous observable $ A: X \to \mathbb R $, we call {\it ergodic minimizing value} the quantity \[ \bar A := \min_{\mu \in \mathcal M_T} \int A \; d\mu. \] We call {\it $A$-minimizing probability} a measure $ \mu \in \mathcal M_T $ which realizes the above minimum. We say that a continuous function $ u: X \to \mathbb R $ is a sub-action with respect to the observable $ A $ if the following inequality holds everywhere on $ X $ \begin{equation*} A \geq u\circ T - u + \bar A. \end{equation*} We would like to emphasize that, although the definition of a sub-action can be extended to other regularities (for instance, to the class of bounded measurable functions), we will only consider continuous sub-actions in this paper. \begin{definition} A sub-action $u:X\to\mathbb{R}$ is said to be calibrated if \[ u(x) = \min_{T(y)=x} [ u(y) + A(y) - \bar A ] \;\; \text{ for all } \; x \in X. \] \end{definition} \begin{definition} We call contact locus of a sub-action $u$ the set \[ \mathbb M_A(u) := (A - u \circ T + u )^{-1}(\bar A). \] It is just the subset of $ X $ where $ A = u\circ T - u + \bar A $. \end{definition} A point $ x \in X $ is said to be {\it non-wandering with respect to $A$} if, for every $ \epsilon > 0 $, there exist an integer $ k \ge 1 $ and a point $ y \in X $ such that $$ d(x, y) < \epsilon, \;\;\; d(x, T^k(y)) < \epsilon \; \text{ and } \; \Big| \sum_{j = 0}^{k - 1} (A - \bar A) \circ T^j (y) \Big| < \epsilon. $$ We denote by $ \Omega(A) $ the {\it set of non-wandering points} with respect to the observable $ A \in C^0(X) $. When the observable is H\"older, $\Omega(A)$ is a non-empty compact $T$-invariant set containing the support of all minimizing probability measures. Moreover, \[ \Omega(A) \subset \bigcap \Big\{ \mathbb M_A(u) \,\,\big| \textrm{ $u$ is a continuous sub-action} \Big\}. \] We are interested in finding $u$ so that $\Omega(A)=\mathbb{M}_A(u)$. \begin{definition} A sub-action $ u \in C^0(X) $ is said to be separating (with respect to $A$) if it satisfies $ \mathbb M_A(u) = \Omega(A) $. \end{definition} The main conclusion of theorem \ref{principal} can be stated in the following way. The proof of this particular case will not be given, as it can be adapted from that of the general situation (see section~\ref{proof_theorem_principal}). \begin{theorem}\label{principalbis} Let $ (X, T) $ be a transitive expanding dynamical system on a compact metric space and $ A : X \to \mathbb R $ be a $\theta$-H\"older observable. Then there exists a $\theta$-H\"older separating sub-action for $ A $.
Furthermore, in the $\theta$-H\"older topology, the subset of $\theta$-H\"older separating sub-actions is generic among all $\theta$-H\"older sub-actions. \end{theorem} We will present in theorem~\ref{discretestructurebis} a result of a different nature and of independent interest. The genuinely new part of this statement is item 2. Contrary to a separating sub-action, a calibrated sub-action $u$ possesses a large contact locus, in the sense that $T(\mathbb{M}_A(u))=X$. Calibrated sub-actions are built using a particular sub-action called the {\it Peierls barrier}. For a H\"older observable $A$, the Peierls barrier of $A$, $ h_A: \Omega(A) \times X \to \mathbb{R}$, is a H\"older calibrated sub-action in the second variable defined by \begin{multline*} h_A (x,y) := \lim_{\epsilon \to 0} \; \liminf_{k \to +\infty} \; \inf \Big\{\sum_{j = 0}^{k - 1} (A - \bar A) \circ T^j (z) \,\big|\, \\ z \in X,\,\, d(z,x) < \epsilon \textrm{ and } d(T^k(z),y) < \epsilon \Big\}. \end{multline*} The analogue of theorem~\ref{structure} may be stated in the following form. \begin{theorem}\label{structurebis} Let $ (X, T) $ be a transitive expanding dynamical system on a compact metric space and $ A : X \to \mathbb R $ be a H\"older observable. Then the set of continuous calibrated sub-actions coincides with the set of functions of the form \[ u(y)=\min_{x\in\Omega(A)} [\phi(x)+h_A(x,y)], \quad \forall\ y\in X, \] where $\phi:\Omega(A)\to\mathbb{R}$ is any continuous function satisfying \[ \phi(y)-\phi(x)\leq h_A(x,y), \quad \forall\ x,y\in\Omega(A). \] Moreover, $u$ extends $\phi$ and is thus uniquely characterized by $\phi$. \end{theorem} The condition $x\sim y \Leftrightarrow h_A(x,y)+h_A(y,x)=0$ defines an equivalence relation on $\Omega(A)$. An equivalence class is called an {\it irreducible component}. It is a closed $T$-invariant set\footnote{We prove these statements in the general setting (see definition-proposition~\ref{relacao de equivalencia} and proposition~\ref{proposicao componentes}).}. In the case $\Omega(A)$ is reduced to a finite number of disjoint irreducible components, the set of calibrated sub-actions is parametrized by a finite number of conditions. More precisely, if $\Omega(A)=\sqcup_{i=1}^r C_i$ is equal to a disjoint union of irreducible components and $x_i\in C_i$ are chosen, the {\it sub-action constraint set} is by definition \[ \mathcal{C}_A(x_1,\ldots,x_r) := \{(u_1,\ldots,u_r)\in\mathbb{R}^r \mid u_j-u_i \leq h_A(x_i,x_j),\quad \forall\ i,j \}. \] The result analogous to theorem~\ref{discretestructure} can therefore be stated as follows. \begin{theorem}\label{discretestructurebis} Let $ (X, T) $ be a transitive expanding dynamical system on a compact metric space and $ A : X \to \mathbb R $ be a H\"older observable. Assume that $\Omega(A)=\sqcup_{i=1}^r C_i$ is equal to a disjoint union of irreducible components. \begin{enumerate} \item There is a one-to-one correspondence between the sub-action constraint set and the set of calibrated sub-actions, \[ \left\{ \begin{array}{l} (u_1,\ldots,u_r)\in\mathcal{C}_A(x_1,\dots,x_r) \\ {\displaystyle u(x)=\min_{1\leq i \leq r}[u_i+h_A(x_i,x)]} \end{array} \right. \Longleftrightarrow \left\{ \begin{array}{l} u \textrm{ is a calibrated sub-action} \\ u_i = u(x_i) \end{array} \right.. \] \item Let $i_0\in\{1,\ldots,r\}$ and $u_{i_0}\in\mathbb{R}$ be fixed.
Define $u_i=u_{i_0}+h_A(x_{i_0},x_i)$ for all $i$; then $(u_1,\ldots,u_r)\in\mathcal{C}_A(x_1,\ldots,x_r)$ and the unique calibrated sub-action $u$ satisfying $u(x_i)=u_i$, for all $i$, is of the form \[ u(x) := \min_{1\leq i \leq r}[u_i+h_A(x_i,x)] = u_{i_0}+h_A(x_{i_0},x). \] \end{enumerate} \end{theorem} When the optimizing probability is unique, the calibrated sub-action is unique (up to additive constants) and, in general, the proofs of important results are easier to discuss. One of the main issues of the thermodynamic formalism at temperature zero is to analyze, in the case there are several ergodic maximizing probabilities for $A$, on which of these probabilities the Gibbs states $\mu_{\beta A}$ accumulate when the inverse temperature parameter $ \beta $ goes to infinity. It is not clear whether there is a unique one in the general H\"older case\footnote{Examples of Lipschitz observables on the full shift $ \{0,1\}^{\mathbb N} $ for which the zero temperature limit of the associated Gibbs measures does not exist have been recently announced (see \cite{CH}).}. In the case of a potential $A$ that depends on finitely many coordinates, this question is addressed in \cite{Bremont0, Leplaideur}. Let us denote, in our notation, by $ C_1, C_2, \ldots, C_r $ the different supports of the ergodic components of the set of maximizing probabilities for $A$. Then, one can ask: is there a unique one, let us say, with support in $C_{i_0}$ that will be attained as the only limit of the Gibbs states $\mu_{\beta A}$ when $ \beta \to \infty $? This question is in some way related to the result of item 2 of theorem~\ref{discretestructurebis}. Indeed, the dual question can be asked for the limits $\frac{1}{\beta} \log \phi_\beta$ when $\beta \to \infty $, where $\phi_\beta$ is the normalized eigenfunction of the Ruelle operator associated to the potential $\beta A$. It is well-known that any convergent subsequence will determine a calibrated sub-action, but it is not clear whether there is only one possible limit. Hence, which one among the various calibrated sub-actions would be chosen? This is an important question. All functions of the form $h_A(x_i,\cdot)$, with $ x_i \in C_i$, are calibrated sub-actions, for any $ i = 1,2,\ldots,r $. Item 2 of theorem~\ref{discretestructurebis} gives sufficient conditions to say that a certain $ h_A(x_{i_0},\cdot) $ is preferred in some sense. We believe this fact is related to the important issues described above. \end{section} \begin{section}{Basic Concepts and Main Results} \label{basic_concepts} For simplicity, we will restrict the exposition of the {\it holonomic optimization model} to the symbolic dynamics case. Let $ (\Sigma, \sigma) $ be a one-sided transitive subshift of finite type given by an $ s \times s $ irreducible transition matrix $ \mathbf M $. More precisely \[ \Sigma := \Big\{ \mathbf{x} \in \{1, \ldots, s\}^{\mathbb N} \,\big|\, \mathbf{M}(x_j, x_{j+1}) = 1 \text{ for all } j \geq 0 \Big\} \] \noindent and $ \sigma $ is the left shift acting on $ \Sigma $ by $ \sigma(x_0, x_1, \ldots) = (x_1, x_2, \ldots) $. Fix $\lambda \in (0, 1)$. We choose a particular metric on $\Sigma$ defined by $d(\mathbf x, \bar{\mathbf x}) = \lambda^k$, for any $ \mathbf{x}, \bar{\mathbf{x}} \in \Sigma$, $\mathbf{x} = (x_0, x_1, \ldots) $, $\bar{\mathbf{x}} = (\bar x_0, \bar x_1, \ldots)$ and $ k = \min \{j: x_j \ne \bar x_j \} $. The holonomic model is a generalization of the ergodic optimization framework. It was first introduced by R.
Ma\~n\'e in an attempt to clarify Aubry-Mather theory for continuous time Lagrangian dynamics (see \cite{CI,Mane}). In this model, the set of invariant minimizing probability measures is replaced by a broader class of measures called holonomic measures. In Aubry-Mather theory for discrete time Lagrangian dynamics on the $n$-dimensional torus $\mathbb{T}^n$ (see \cite{Gomes}), a holonomic probability measure $\mu(dx,dv)$ is a probability measure on $\mathbb{T}^n \times \mathbb{R}^n $ satisfying \[ \int_{\mathbb T^n \times \mathbb R^n} f(x + v) \, d\mu(x, v) = \int_{\mathbb T^n \times \mathbb R^n} f(x) \, d\mu(x, v), \quad \forall\ f \in C^0(\mathbb T^n), \] where the sum $ x + v $ is obviously taken modulo $ \mathbb Z^n $. One may exploit an interesting analogy with Aubry-Mather theory in symbolic dynamics. Similarly to the previous example of discrete dynamics, $\Sigma$ will play the role of the ``space of positions'' (analogous to $\mathbb{T}^n$ in the holonomic model) and the set of inverse branches or possible pasts $\Sigma^*$ will play the role of the ``space of immediately anterior velocities'' (analogous to $\mathbb{R}^n$). For a complete exposition and motivation of the holonomic optimization model, see \cite{Garibaldi, GL2}. We call {\it dual subshift of finite type} the space \[ \Sigma^* := \Big\{ \mathbf{y} \in \{1, \ldots, s\}^{\mathbb N_*} \,\big|\, \mathbf{M}(y_{j + 1},y_j) = 1 \text{ for all } j \geq 1 \Big\}. \] We denote by $\mathbf{y}=(\ldots,y_3,y_2,y_1)$ a point of $\Sigma^*$. We call {\it dual shift} the map $ \sigma^*(\ldots,y_3,y_2, y_1) := (\ldots, y_3, y_2) $. The {\it natural extension} of $(\Sigma, \sigma)$ will play the role of the ``phase space'' (analogous to $\mathbb{T}^n\times\mathbb{R}^n$) and will be identified with a subset of $\Sigma^* \times \Sigma$ \begin{multline*} \hat \Sigma := \Big\{(\mathbf y, \mathbf x) = (\ldots, y_2, y_1 | x_0, x_1, \ldots) \in \Sigma^* \times \Sigma \,\big|\, \\ \mathbf{x} = (x_0, x_1, \ldots),\,\, \mathbf{y} = (\ldots, y_2, y_1) \textrm{ and } \mathbf{M}(y_1, x_0) = 1 \Big\}. \end{multline*} Equivalently, one may write $ \hat \Sigma = \bigcup_{\mathbf x \in \Sigma} \; \Sigma_{\mathbf x}^* \times \{\mathbf x\} $, where $$ \Sigma_{\mathbf x}^* := \big\{\mathbf y = (\ldots, y_2, y_1) \in \Sigma^* \,\big|\, \mathbf M(y_1, x_0) = 1 \big\} \quad \forall \; \mathbf x = (x_0, x_1, \ldots) \in \Sigma. $$ The analogue of the ``discrete Euler-Lagrange map'' is obtained by the usual left shift $\hat{\sigma}$ on the natural extension, $$ \hat \sigma (\ldots, y_2, y_1 | x_0, x_1, \ldots) = (\ldots, y_1, x_0 | x_1, x_2, \ldots). $$ Consider then $\tau^*: \hat \Sigma \to \Sigma^*$ given by \[ \tau^*(\mathbf y, \mathbf x) := \tau^*_{\mathbf x}(\mathbf y) := (\ldots,y_2,y_1,x_0). \] Notice that $ \tau^*_{\mathbf x}(\mathbf y) \in (\sigma^*)^{-1}(\mathbf y) $. Similarly, inverse branches of $ \mathbf x \in \Sigma $ with respect to $ \sigma $ are constructed using the map $\tau:\hat{\Sigma}\to\Sigma$, \[ \tau(\mathbf y, \mathbf x) := \tau_{\mathbf{y}}(\mathbf{x})=(y_1,x_0,x_1,\ldots). \] Clearly we have $ \hat\sigma(\mathbf y, \mathbf x) = (\tau^*_{\mathbf{x}}(\mathbf{y}), \sigma(\mathbf{x})) $ and $ \hat{\sigma}^{-1}(\mathbf y, \mathbf x) = (\sigma^*(\mathbf y), \tau_{\mathbf{y}}(\mathbf{x})) $. Note that $ \tau = \pi \circ \hat \sigma^{-1}$, where $ \pi : \hat \Sigma \to \Sigma$ is the canonical projection onto the $\mathbf{x}$-variable.
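For readers who prefer a computational picture, the following sketch (ours, not part of the paper) implements the maps $\sigma$, $\sigma^*$, $\hat\sigma$, $\tau$ and $\tau^*$ on finite truncations of the sequences, represented as tuples; it is only meant to make the bookkeeping of pasts and futures explicit.
\begin{verbatim}
# Illustrative sketch: points of Sigma and Sigma^* are truncated to finite tuples,
# x = (x0, x1, x2, ...) and y = (..., y3, y2, y1); (y, x) lies in the natural
# extension when M(y1, x0) = 1.

def shift(x):              # sigma(x0, x1, ...) = (x1, x2, ...)
    return x[1:]

def dual_shift(y):         # sigma^*(..., y2, y1) = (..., y2)
    return y[:-1]

def tau_star(y, x):        # tau^*_x(y) = (..., y2, y1, x0)
    return y + (x[0],)

def tau(y, x):             # tau_y(x) = (y1, x0, x1, ...)
    return (y[-1],) + x

def sigma_hat(y, x):       # hat sigma(y | x) = (tau^*_x(y), sigma(x))
    return tau_star(y, x), shift(x)

def sigma_hat_inv(y, x):   # hat sigma^{-1}(y | x) = (sigma^*(y), tau_y(x))
    return dual_shift(y), tau(y, x)

# golden-mean example (transition 11 forbidden): y = (..., 0, 1, 0), x = (0, 1, 0, 0, 1)
y, x = (0, 1, 0), (0, 1, 0, 0, 1)
assert sigma_hat_inv(*sigma_hat(y, x)) == (y, x)
\end{verbatim}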
Let $ \hat{\mathcal{M}}$ be the set of probability measures over the Borel sigma-algebra of $ \hat \Sigma $. Instead of considering the set of $\hat{\sigma}$-invariant probability measures\footnote{It is well-known that a H\"older observable defined on the two-sided shift is cohomologous to an observable that depends just on future coordinates. So a minimization over $\hat\sigma$-invariant probabilities may be reduced to a minimization over $\sigma$-invariant probabilities.}, we introduce the set of {\it holonomic probability measures}, \[ \hat{\mathcal{M}}_{\textrm{hol}} := \Big\{\hat \mu \in \hat{\mathcal{M}} \,\big|\, \int_{\hat \Sigma} f(\tau_{\mathbf y}(\mathbf x)) \; d\hat\mu(\mathbf y, \mathbf x) = \int_{\hat \Sigma} f(\mathbf x) \; d\hat\mu(\mathbf y, \mathbf x), \; \; \forall \; f \in C^0(\Sigma) \Big\}. \] It seems important to insist that the holonomic condition only requires the continuous function $ f $ to be defined on the one-sided shift of finite type $ \Sigma $, and not on the natural extension $ \hat \Sigma $, as would be the case for the characterization of $\hat\sigma$-invariance. Observe that $\hat \mu \in \hat{\mathcal{M}}_{\textrm{hol}} $ if, and only if, $ \pi_*(\hat{\mu}) = \pi_*(\hat{\sigma}^{-1}_*(\hat{\mu}))$ if, and only if, $\hat{\sigma}^{-1}_*(\hat\mu)$ projects onto a $\sigma$-invariant Borel probability measure. As in section~\ref{simplified_version}, we denote by $\mathcal{M}_\sigma$ the set of $\sigma$-invariant Borel probability measures. The triple $ (\hat \Sigma, \hat \sigma, \hat{\mathcal{M}}_{\textrm{hol}}) $ is called the holonomic model. Such a formalism includes the ergodic optimization model discussed in section~\ref{simplified_version}, as we will see. Let $ A \in C^\theta(\hat \Sigma) $ be a H\"older observable. We would like to emphasize that $ A $ is continuous on the natural extension $\hat \Sigma$. This is one of the crucial points in the holonomic setting: the possibility of formulating a relevant minimization question for functions defined on the two-sided shift. Then, we call {\it holonomic minimizing value} of $A$ \begin{eqnarray*} \bar{A} & := & \min \Big\{ \int_{\hat \Sigma} A(\mathbf y, \mathbf x) \; d\hat\mu(\mathbf y, \mathbf x) \,\big|\, \hat\mu \in \hat{\mathcal{M}}_{\textrm{hol}} \Big\} \\ & = & \min\Big\{ \int_{\hat\Sigma}A\circ\hat\sigma(\mathbf y, \mathbf x) \, d\hat\mu(\mathbf y, \mathbf x) \,|\, \pi_*(\hat\mu)\in\mathcal{M}_\sigma \Big\}. \end{eqnarray*} If $A\circ\hat\sigma=B\circ\pi$ depends only on the $\mathbf{x}$-variable, then $\bar A=\bar B$ as in section~\ref{simplified_version}. The set of {\it minimizing (holonomic) probability measures} is denoted \[ \hat{\mathcal{M}}_{\textrm{hol}}(A) := \Big\{\hat \mu \in \hat{\mathcal{M}}_{\textrm{hol}} \,\,\big|\, \int_{\hat \Sigma} A(\mathbf y, \mathbf x) \; d\hat\mu(\mathbf y, \mathbf x) = \bar A \Big\}. \] A continuous function $u:\Sigma\to\mathbb{R}$ is called a {\it sub-action} with respect to $A$ if \[ u(\mathbf{x}) - u(\tau_{\mathbf{y}}(\mathbf{x})) \leq A(\mathbf y, \mathbf x) - \bar A, \quad \forall\ (\mathbf y, \mathbf x) \in \hat{\Sigma}, \] or equivalently $ A - \bar A \geq u\circ\pi - u\circ\pi\circ\hat\sigma^{-1}$.
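To illustrate these notions on a concrete case (again an elementary example of ours, not taken from the references), consider the full shift on two symbols and the observable $A(\mathbf y,\mathbf x)=a_{y_1 x_0}$ with $a_{00}=a_{11}=1$, $a_{01}=0$ and $a_{10}=4$. Since $A\circ\hat\sigma$ depends only on $(x_0,x_1)$, the holonomic minimizing value reduces to a minimization over $\sigma$-invariant measures and one finds $\bar A=1$, attained at the two fixed points. The function $u$ depending only on $x_0$, with $u(0)=0$ and $u(1)=-1$, satisfies $u(\mathbf x)-u(\tau_{\mathbf y}(\mathbf x))=u(x_0)-u(y_1)\leq a_{y_1 x_0}-\bar A$ for all four admissible transitions and is therefore a sub-action, whereas no constant function is, since the transition $y_1=0$, $x_0=1$ gives $a_{01}-\bar A=-1<0$.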
We call {\it contact locus} of a sub-action $u$ the set \[ \hat{\mathbb{M}}_A(u) := (A - u\circ\pi + u\circ\pi\circ\hat\sigma^{-1})^{-1}(\bar A) \] where the above inequality becomes an equality, that is, a point $ (\mathbf y, \mathbf x) \in \hat \Sigma $ belongs to $ \hat{\mathbb{M}}_A(u) $ if, and only if, $ u(\mathbf{x}) - u(\tau_{\mathbf{y}}(\mathbf{x})) = A(\mathbf y, \mathbf x) - \bar A $. If $A\circ\hat\sigma = B\circ\pi$ for some $B:\Sigma\to\mathbb{R}$, notice that $\pi\circ\hat\sigma^{-1}(\hat{\mathbb{M}}_A(u))=\mathbb{M}_B(u)$. A calibrated sub-action is a particular sub-action which possesses a large contact locus in the sense that $\pi(\hat{\mathbb{M}}_A(u))=\Sigma$. \begin{definition} A sub-action $u:\Sigma\to\mathbb{R}$ is said to be calibrated for $A$ if \[ u(\mathbf x) = \min_{\mathbf y \in \Sigma_{\mathbf x}^*} \big[ u(\tau_{\mathbf y}(\mathbf x)) + A(\mathbf y, \mathbf x) - \bar A \big], \quad \forall \, \mathbf x\in\Sigma, \] where recall that $\Sigma_{\mathbf x}^* := \{\mathbf y \in \Sigma^* \,|\, (\mathbf y, \mathbf x) \in \hat{\Sigma} \}$. \end{definition} If $\hat B := A\circ\hat\sigma$ and $B(\mathbf{x}) := \min\{\hat B(\mathbf y, \mathbf x) \,|\, \mathbf{y} \in \Sigma_{\mathbf x}^*\}$, then $u$ is a calibrated sub-action for $A$ if, and only if, $u$ is a calibrated sub-action for $B$. Indeed, \begin{align*} u(\mathbf{x}) &= \min_{\sigma(\bar{\mathbf{x}})=\mathbf{x}} \quad \min_{\mathbf y\in\Sigma_{\mathbf x}^*,\, \tau_{\mathbf{y}}(\mathbf{x})=\bar{\mathbf{x}}} \big[ u(\bar{\mathbf{x}}) + \hat{B}(\sigma^*(\mathbf{y}),\bar{\mathbf{x}}) - \bar A \big] \\ &= \min_{\sigma(\bar{\mathbf{x}})=\mathbf{x}} \big[ u(\bar{\mathbf{x}}) + B(\bar{\mathbf{x}}) -\bar A \big] = \min_{\sigma(\bar{\mathbf{x}})=\mathbf{x}} \big[ u(\bar{\mathbf{x}}) + B(\bar{\mathbf{x}}) -\bar B \big]. \end{align*} (The definition of $B$ gives $\bar B\leq \bar A$ and the calibration gives $\bar B\geq \bar A$.) A classification theorem for calibrated sub-actions is presented in \cite{GL2}. A central concept is the set of non-wandering points with respect to $ A $ (previously defined in \cite{CLT, LT1} in the ergodic optimization model). We call {\it path of length $k$} a sequence $(\mathbf{z}^0,\ldots,\mathbf{z}^k)$ of points of $\hat{\Sigma}$ such that \[ \mathbf{z}^{i}=(\mathbf{y}^i,\mathbf{x}^i) \; \textrm{ with } \; \mathbf{x}^{i}=\tau_{\mathbf{y}^{i + 1}}(\mathbf{x}^{i + 1}),\,\, \forall\ i=0, 1,\ldots,k - 1, \] that is, a sequence $(\mathbf{z}^0,\ldots,\mathbf{z}^k)$ where $\mathbf{x}^i=\sigma^i(\mathbf{x}^0)$ for all $i=0,1,\ldots,k$, $\mathbf{x}^0=(x_0,x_1,\ldots,x_{k-1},\mathbf{x}^k)$ and \begin{multline*} \mathbf{z}^0=\big(\mathbf{y}^0|x_0,\ldots,x_{k-1},\mathbf{x}^k\big),\,\, \mathbf{z}^1=\big(\sigma^*(\mathbf y^1), x_0|x_1,\ldots,x_{k-1},\mathbf{x}^k\big), \, \ldots,\\ \mathbf{z}^{k-1}=\big(\sigma^*(\mathbf y^{k - 1}), x_{k-2}|x_{k-1},\mathbf{x}^k\big),\,\, \mathbf{z}^k=\big(\sigma^*(\mathbf y^k), x_{k-1}|\mathbf{x}^k\big). \end{multline*} Note that the point $ \mathbf{y}^{0} $ is free of any restriction except that $\mathbf{M}(y_1^0,x_0)=1$, more precisely, one just asks that $ \mathbf y^0 \in \Sigma_{\mathbf x^0}^* $ while $\mathbf{y}^j\in\Sigma_{\mathbf{x}^j}^*\cap(\sigma^*)^{-1}(\Sigma_{\mathbf{x}^{j-1}}^*)$ for $ j = 1, \ldots, k $. 
Equivalently, one could present a path in the following way \begin{multline*} \mathbf{z}^0=\big(\mathbf{y}^0,\tau_{\mathbf y^1} \circ \tau_{\mathbf y^2} \circ \cdots \circ \tau_{\mathbf y^k}(\mathbf{x}^k)\big),\,\, \mathbf{z}^1=\big(\mathbf{y}^1,\tau_{\mathbf y^2} \circ \cdots \circ \tau_{\mathbf y^k}(\mathbf{x}^k)\big), \, \ldots,\\ \mathbf{z}^{k-1}=\big(\mathbf y^{k - 1}, \tau_{\mathbf y^k}(\mathbf{x}^k)\big),\,\, \mathbf{z}^k=\big(\mathbf y^k,\mathbf{x}^k\big). \end{multline*} Given $ \epsilon > 0 $ and $ \mathbf x, \bar{\mathbf x} \in \Sigma $, we say that a path of length $k$, $(\mathbf{z}^0,\ldots,\mathbf{z}^k)$, begins within $\epsilon $ of $\mathbf{x}$ and ends within $ \epsilon $ of $\bar{\mathbf{x}}$ if $d(\mathbf x^0,\mathbf{x}) < \epsilon$ and $ d(\mathbf{x}^{k},\bar{\mathbf{x}}) < \epsilon $. Denote by $\mathcal{P}_k(\mathbf{x},\bar{\mathbf{x}},\epsilon)$ the set of such paths. Denote by $\mathcal{P}_k(\mathbf{x})$ the set of paths of length $k$ beginning exactly at $\mathbf{x}$. Notice that a path $ (\mathbf z^0, \ldots, \mathbf z^k) $ belongs to $ \mathcal{P}_k(\mathbf{x}) $ if, and only if, $ \pi(\mathbf z^i) = \sigma^i(\mathbf x) $ for all $ i = 0, 1, \ldots, k $. A point $ \mathbf x \in \Sigma $ will be called {\it non-wandering with respect to $A$} if, for every $ \epsilon > 0 $, one can find a path $(\mathbf{z}^0,\ldots,\mathbf{z}^k)$ in $\mathcal{P}_k(\mathbf x, \mathbf x, \epsilon) $, with $ k \ge 1 $, such that \[ \Big| \sum_{i = 1}^{k} (A - \bar A)(\mathbf{z}^i) \Big| < \epsilon. \] We will denote by $ \Omega(A) $ the set of non-wandering points with respect to $ A $. If $A\circ\hat\sigma=B\circ\pi$, notice that $\Omega(A)=\Omega(B)$ as in section \ref{simplified_version}. The first two authors have proved in \cite{GL2} that $\Omega(A)$ is a non-empty compact $\sigma$-invariant set and satisfies \[ \Omega(A) \subset \bigcap \Big\{\pi(\hat{\mathbb{M}}_A(u)) \,\,\big| \textrm{ $u$ is a continuous sub-action} \Big\}. \] \begin{remark} The set $\Omega(A)$ is analogous to the projected Aubry set in continuous time Lagrangian dynamics. One could have introduced the corresponding Aubry set $ \hat{\Omega}(A) \subset \hat \Sigma $ and proved $\pi(\hat{\Omega}(A)) = \Omega(A) $. Unfortunately, even for a H\"older observable $A$, the graph property no longer holds: $\pi : \hat{\Omega}(A) \to \Omega(A)$ need not be bijective. A counter-example can be found in \cite{GL2}. It would be interesting to find the right assumptions on $ A \in C^\theta(\hat \Sigma)$ in order to get this property. \end{remark} In contrast to a calibrated sub-action, a {\it separating sub-action} is a sub-action with the smallest contact locus. More precisely, \begin{definition} A sub-action $ u \in C^0(\Sigma) $ is said to be separating (with respect to $A$) if it satisfies $\pi(\hat{\mathbb{M}}_A(u)) = \Omega(A)$. \end{definition} Our first result is the following. \begin{theorem}\label{principal} If $ A : \hat \Sigma \to \mathbb R $ is a $\theta$-H\"older observable, then there exists a $\theta$-H\"older separating sub-action. Moreover, in the $\theta$-H\"older topology, the subset of $\theta$-H\"older separating sub-actions is generic among all $\theta$-H\"older sub-actions. \end{theorem} According to the analogy with continuous time Lagrangian dynamics, sub-actions correspond to viscosity sub-solutions of the stationary Hamilton-Jacobi equation, calibrated sub-actions correspond to the weak KAM solutions introduced by A. 
Fathi (see \cite{Fathi}) and separating sub-actions correspond to special sub-solutions as described in \cite{FS}. By adapting the proof of theorem 10 in \cite{GL2} and by using definition \ref{ManePeierlsBarrier} of the Peierls barrier $h_A$, we obtain a structure theorem for calibrated sub-actions. Such a characterization corresponds to the one obtained for weak KAM solutions in Lagrangian dynamics (see \cite{Contreras}). The proof of the following theorem will be omitted. \begin{theorem}\label{structure} Let $A$ be a $\theta$-H\"older observable. \begin{enumerate} \item If $u$ is a continuous calibrated sub-action for $A$, then \[ u(\mathbf x) = \min_{ \bar{\mathbf x} \in \Omega(A)} \big[ u(\bar{\mathbf x}) + h_A ( \bar{\mathbf x}, \mathbf x) \big]. \] \item Conversely, for every continuous function $\phi:\Omega(A)\to\mathbb{R}$ satisfying \[ \phi(\mathbf{x}) - \phi(\bar{\mathbf{x}}) \leq h_A(\bar{\mathbf{x}},\mathbf{x}), \quad \forall\ \mathbf{x},\bar{\mathbf{x}}\in\Omega(A), \] \noindent the function $u(\mathbf{x}) := \min_{\bar{\mathbf{x}}\in\Omega(A)} [ \phi(\bar{\mathbf{x}}) + h_A(\bar{\mathbf{x}},\mathbf{x}) ] $ is a continuous calibrated sub-action extending $\phi$ on $\Omega(A)$. \end{enumerate} \end{theorem} In particular, this representation formula for calibrated sub-actions implies immediately that, in order to compare two such functions, we just need to compare their restrictions to $ \Omega(A) $. For instance, if two calibrated sub-actions coincide for every non-wandering point with respect to $ A $, then they are the same. In the case where the set of non-wandering points for $A$ reduces to a finite union of irreducible components $\Omega(A)=C_1\cup\ldots\cup C_r$, the set of calibrated sub-actions admits a simpler characterization. We first show that the condition $\mathbf{x}\sim\bar{\mathbf{x}} \; \Leftrightarrow \; h_A(\mathbf{x},\bar{\mathbf{x}}) + h_A(\bar{\mathbf{x}},\mathbf{x}) = 0 $ defines an equivalence relation. Each one of its equivalence classes is called an irreducible component. Let $\bar{\mathbf{x}}^1 \in C_1, \ldots, \bar{\mathbf{x}}^r \in C_r$ be fixed. We call {\it sub-action constraint set} the set \[ \mathcal{C}_A(\bar{\mathbf{x}}^1,\ldots,\bar{\mathbf{x}}^r) = \{(u_1,\ldots,u_r)\in\mathbb{R}^r \mid u_j-u_i \leq h_A(\bar{\mathbf{x}}^i,\bar{\mathbf{x}}^j),\quad \forall\ i,j \}. \] \noindent Our second result is the following. \begin{theorem}\label{discretestructure} Let $A$ be a H\"older observable. Assume $\Omega(A)$ is a finite union of disjoint irreducible components, namely, $\Omega(A)=\sqcup_{i=1}^r C_i$. Let $\bar{\mathbf{x}}^1 \in C_1, \dots, \bar{\mathbf{x}}^r \in C_r$ be fixed. \begin{enumerate} \item If $u$ is a continuous calibrated sub-action and $u_i := u(\bar{\mathbf{x}}^i)$ for every $i=1,\ldots,r$, then \[ (u_1,\ldots,u_r) \in \mathcal{C}_A(\bar{\mathbf{x}}^1,\ldots,\bar{\mathbf{x}}^r) \quad\textrm{and}\quad u(\mathbf{x}) = \min_{1\leq i \leq r} \big[ u(\bar{\mathbf{x}}^i) + h_A(\bar{\mathbf{x}}^i,\mathbf{x}) \big]. \] \item If $(u_1,\ldots,u_r) \in \mathcal{C}_A(\bar{\mathbf{x}}^1,\ldots,\bar{\mathbf{x}}^r)$ and $ u(\mathbf{x}) := \min_{1\leq i \leq r} \big[ u_i + h_A(\bar{\mathbf{x}}^i,\mathbf{x}) \big]$, then $u$ is a continuous calibrated sub-action satisfying $u(\bar{\mathbf{x}}^i)=u_i$ for all $i=1,\ldots,r$. \item Take $i_0\in\{1,\ldots,r\}$ and $(u_1,\ldots,u_r)$ such that $ u_i := u_{i_0} + h_A(\bar{\mathbf{x}}^{i_0},\bar{\mathbf{x}}^i) $ for all $i=1,\ldots,r$. 
Then $i_0$ is unique, $(u_1,\ldots,u_r) \in \mathcal{C}_A(\bar{\mathbf{x}}^1,\ldots,\bar{\mathbf{x}}^r)$ and the unique calibrated sub-action $u$ satisfying $u(\bar{\mathbf{x}}^i)=u_i$, for all $i=1,\ldots,r$, is of the form $u(\mathbf{x}) = u_{i_0} + h_A(\bar{\mathbf{x}}^{i_0},\mathbf{x})$. \end{enumerate} \end{theorem} The application we present here has a certain similarity to lemma 6 in \cite{AIPS}. We point out that the local character of viscosity solutions (as in definition 1 of \cite{AIPS}) is not present in our setting. \begin{application}\label{application} Let $A$ be a H\"older observable. Consider any continuous sub-action $ v $ and a continuous calibrated sub-action $ u $. \begin{enumerate} \item Then $u-v$ is constant on every irreducible component and \[ \min_\Sigma(u-v)=\min_{\Omega(A)}(u-v). \] \item Assume $\Omega(A)=\sqcup_{i=1}^r C_i$ is a finite union of disjoint irreducible components. If $\min_\Sigma(u-v)$ is realized on a unique component $C_{i_1}$ and the other components $C_i$, $i\not=i_1$, are not local minima of $u-v$, then \[ u(\mathbf{x})=u(\bar{\mathbf{x}}_{i_1})+h_A(\bar{\mathbf{x}}_{i_1},\mathbf{x}), \quad \forall\ \mathbf{x}\in\Sigma, \] where $\bar{\mathbf{x}}_{i_1}$ is any point in $C_{i_1}$. \end{enumerate} \end{application} \end{section} \begin{section}{Proof of theorem~\ref{principal}} \label{proof_theorem_principal} We first recall two notions of action potential between two points: the Ma\~n\'e potential and the Peierls barrier. Given $\epsilon > 0$, $ \mathbf x, \bar{\mathbf x} \in \Sigma $ and $k\geq 1$, we denote \[ S_A^\epsilon(\mathbf x, \bar{\mathbf x}, k) = \inf \Big\{ \sum_{i = 1}^{k} (A - \bar A)(\mathbf{z}^i) \,\big|\, (\mathbf z^0,\ldots, \mathbf z^{k}) \in \mathcal P_k(\mathbf x, \bar{\mathbf x}, \epsilon) \Big\}. \] If $\hat B := A\circ\hat{\sigma}$ and $B := \min\{ \hat B(\mathbf y, \mathbf x) \,|\, \mathbf{y}\in\Sigma_{\mathbf{x}}^* \}$, notice that \[ S_A^\epsilon(\mathbf x, \bar{\mathbf x}, k) = \inf \Big\{ \sum_{i=0}^{k-1}(B-\bar B)\circ\sigma^i(\mathbf{x}^0) \,\big|\, d(\mathbf{x}^0,\mathbf{x})<\epsilon, \,\, d(\sigma^k(\mathbf{x}^0),\bar{\mathbf{x}})<\epsilon \Big\}. \] \begin{definition}\label{ManePeierlsBarrier} \noindent We call Ma\~n\'e potential the function $ \phi_A: \Sigma \times \Sigma \to \mathbb R \cup \{+ \infty \} $ defined by \[ \phi_A(\mathbf x, \bar{\mathbf x}) = \lim_{\epsilon \to 0} \, \inf_{k \geq 1} \, S_A^\epsilon(\mathbf x, \bar{\mathbf x}, k). \] \noindent We call Peierls barrier the function $ h_A: \Sigma \times \Sigma \to \mathbb R \cup \{+ \infty \} $ defined by \[ h_A(\mathbf x, \bar{\mathbf x}) = \lim_{\epsilon \to 0} \, \liminf_{k \to +\infty} \, S_A^\epsilon(\mathbf x, \bar{\mathbf x}, k). \] \end{definition} Clearly, $ \phi_A \le h_A $ and both functions are lower semi-continuous. We summarize the main properties of these action potentials. \begin{proposition}\label{propriedadesbasicas} Let $A$ be a H\"older observable. \begin{enumerate} \item If $u$ is a continuous sub-action, then $u(\bar{\mathbf x})- u(\mathbf x) \leq \phi_A(\mathbf x, \bar{\mathbf x})$. \item For any points $ \mathbf x, \bar{\mathbf x}, \bar{\bar{\mathbf x}} \in \Sigma $, $ \phi_A(\mathbf x, \bar{\bar{\mathbf x}}) \le \phi_A(\mathbf x, \bar{\mathbf x}) + \phi_A(\bar{\mathbf x}, \bar{\bar{\mathbf x}}) $. 
\item Given a point $ \mathbf x \in \Sigma $, if there exists a positive integer $ L $ such that $ 0 < L < \min \{j > 0 : \sigma^j(\mathbf x) = \mathbf x\} \le + \infty $, then \[ \phi_A(\mathbf x, \mathbf x) = \phi_A(\mathbf x, \sigma^L(\mathbf x)) + \phi_A(\sigma^L(\mathbf x), \mathbf x). \] Moreover, if $\phi_A(\mathbf x, \mathbf{x})<+\infty$, then there exists a path of length $L$, $(\bar{\mathbf{z}}^0 = (\bar{\mathbf y}^0, \bar{\mathbf x}^0), \ldots, \bar{\mathbf{z}}^L = (\bar{\mathbf y}^L, \bar{\mathbf x}^L))$, beginning at $\mathbf{x}$ $(\bar{\mathbf{x}}^j=\sigma^j(\mathbf{x})$ for all $j = 0,\ldots, L)$, such that \[ \phi_A(\mathbf x, \sigma^L(\mathbf x)) = \sum_{j = 1}^L (A - \bar A) (\bar{\mathbf{z}}^j). \] \item For any points $ \mathbf x, \bar{\mathbf x}, \bar{\bar{\mathbf x}} \in \Sigma $ and any sequence $ \{\bar{\mathbf x}^l \} $ converging to $ \bar{\mathbf x} $, \[ h_A(\mathbf x, \bar{\bar{\mathbf x}}) \leq \liminf_{l \to +\infty} \phi_A(\mathbf x, \bar{\mathbf x}^l) + h_A(\bar{\mathbf x}, \bar{\bar{\mathbf x}}). \] \item If $\mathbf{x}\in\Sigma$, then $ \mathbf x \in \Omega(A) \, \Leftrightarrow \, \phi_A(\mathbf x, \mathbf x) = 0 \, \Leftrightarrow \, h_A(\mathbf x, \mathbf x) = 0 $. \item If $ \mathbf x \in \Omega(A) $, then $ \phi_A(\mathbf x, \cdot) = h_A(\mathbf x, \cdot) $ and $h_A(\mathbf x, \cdot)$ is a H\"older calibrated sub-action with respect to the second variable. \end{enumerate} \end{proposition} This proposition shows how to construct H\"older calibrated sub-actions without the use of the Lax-Oleinik fixed point method. \begin{remark} In Lagrangian Aubry-Mather theory on a compact manifold $ M $, it is well known that, for any point $ x \in M $, the map $ y \in M \mapsto h(x, y) \in \mathbb R $ defines a weak KAM solution, where $ h : M \times M \to \mathbb R $ denotes the corresponding Peierls barrier. The analogous result for $ h_A(\mathbf x, \cdot) $ is however false in the holonomic optimization model. Using item 3, it is not difficult to build examples where \[ \lim_{L \to + \infty} \phi_A(\mathbf x, \sigma^L(\mathbf x)) = \lim_{L \to + \infty} h_A(\mathbf x, \sigma^L(\mathbf x)) = + \infty, \] which shows that $ h_A(\mathbf x, \cdot) $ is not always a continuous function. \end{remark} \begin{proof}[Proof of proposition~\ref{propriedadesbasicas}] Items 1, 2, 5 and 6 are well known and a proof can be found, for instance, in \cite{CLT, GL2}. So let us prove items 3 and 4. \noindent {\it Item 3.} We already know from item 2 that \[ \phi_A(\mathbf{x},\mathbf{x}) \leq \phi_A(\mathbf{x},\sigma^L(\mathbf{x})) + \phi_A(\sigma^L(\mathbf{x}),\mathbf{x}). \] \noindent Define $ \eta = \min \{d(\sigma^i(\mathbf x), \sigma^j(\mathbf x)) : 0 \le i < j \le L \} $. Fix $ \gamma > 0 $ and take $ \epsilon \in (0, \min\{\lambda, \eta/2\}) $ such that $\textrm{H\"old}(A) L \epsilon^\theta < \gamma$. Consider also $ \rho \in (0, \epsilon) $ such that $ d(\mathbf x, \bar{\mathbf x}) < \rho $ implies $ d(\sigma^j(\mathbf x), \sigma^j(\bar{\mathbf x})) < \epsilon $ for $ 1 \le j \le L $. Then take a path $ (\mathbf{z}^0,\ldots,\mathbf{z}^l) \in \mathcal P_l(\mathbf x, \mathbf x, \rho) $ satisfying \[ \sum_{j = 1}^{l} (A - \bar A)(\mathbf{z}^j) < \inf_{k \geq 1} S_A^\rho(\mathbf x, \mathbf x, k) + \gamma \leq \phi_A(\mathbf{x},\mathbf{x})+\gamma. \] Let $\mathbf{z}^j=(\mathbf{y}^j, \mathbf{x}^j)$ where $\mathbf{x}^j=\sigma^j(\mathbf{x}^0)$ for all $j=0,1,\ldots,l$. We claim that $ l > L $. 
Indeed, $\rho$ has been chosen so that, for each $j \in \{1, 2, \ldots, L\}$, \[ d(\mathbf x^{j}, \mathbf x) = d(\sigma^j(\mathbf x^0), \mathbf x) \ge d(\sigma^j(\mathbf x), \mathbf x) - d(\sigma^j(\mathbf x), \sigma^j(\mathbf x^0)) > \eta - \epsilon > \rho. \] Introduce a new path $(\bar{\mathbf{z}}^0,\ldots,\bar{\mathbf{z}}^L)\in\mathcal{P}_L(\mathbf{x},\sigma^L(\mathbf{x}),\epsilon)$ given by $ \bar{\mathbf{z}}^j=(\mathbf{y}^j,\sigma^j(\mathbf{x})) $, for all $ j=0,\ldots, L $. The definition of $\rho$ guarantees \[ \sum_{j = 1}^{L} (A - \bar A)(\bar{\mathbf{z}}^j) < \sum_{j = 1}^{L} (A - \bar A)(\mathbf{z}^j) + \text{H\"old}_\theta(A) L \epsilon^\theta \leq \sum_{j = 1}^{L} (A - \bar A)(\mathbf{z}^j) + \gamma. \] Notice that $ (\mathbf{z}^L,\ldots,\mathbf{z}^l)\in\mathcal{P}_{l-L}(\sigma^L(\mathbf{x}),\mathbf{x},\epsilon) $. We finally obtain \begin{align*} \inf_{k\geq 1} S_A^\epsilon(\mathbf x,\sigma^L(\mathbf x), k) &+ \inf_{k\geq 1} S_A^\epsilon(\sigma^L(\mathbf x), \mathbf x, k) \\ &\leq \sum_{j = 1}^{L} (A - \bar A)(\bar{\mathbf{z}}^j) + \sum_{j = L+1}^{l} (A - \bar A)(\mathbf{z}^j) \\ &\leq \sum_{j = 1}^{L} (A - \bar A)(\mathbf{z}^j) + \sum_{j = L+1}^{l} (A - \bar A)(\mathbf{z}^j) + \gamma \\ &\leq \inf_{k \geq 1} S_A^\rho(\mathbf x, \mathbf x, k) + 2\gamma \leq \phi_A(\mathbf{x},\mathbf{x})+2\gamma. \end{align*} By letting $\epsilon$ go to $0$ and $\gamma$ go to $0$, we get \[ \phi_A(\mathbf x, \sigma^L(\mathbf x)) + \phi_A(\sigma^L(\mathbf x), \mathbf x) \leq \phi_A(\mathbf x, \mathbf x). \] The first part of item 3 is proved. To prove the second part, note that the previous computation shows that, for any sufficiently small $\epsilon$, there exists a path $(\bar{\mathbf{z}}^0_\epsilon,\ldots,\bar{\mathbf{z}}^L_\epsilon)\in\mathcal{P}_L(\mathbf{x})$ such that \[ \sum_{j = 1}^{L} (A - \bar A)(\bar{\mathbf{z}}^j_\epsilon) + \inf_{k\geq 1} S_A^\epsilon(\sigma^L(\mathbf x), \mathbf x, k) \leq \phi_A(\mathbf{x},\mathbf{x})+2\gamma. \] By taking accumulation points of $\bar{\mathbf{z}}^j_\epsilon$ as $\epsilon\to0$, we obtain, for any $\gamma$, a path $(\bar{\mathbf{z}}^0,\ldots,\bar{\mathbf{z}}^L)$ such that \[ \phi_A(\mathbf x, \sigma^L(\mathbf x)) \leq \sum_{j = 1}^{L} (A - \bar A)(\bar{\mathbf{z}}^j) \leq \phi_A(\mathbf{x},\mathbf{x}) - \phi_A(\sigma^L(\mathbf x), \mathbf x) + 2\gamma. \] \noindent The result follows from item 2 and by taking once more accumulation points of $\bar{\mathbf{z}}^j$ as $\gamma\to0$. \noindent {\it Item 4.} Since $ \phi_A $ is lower semi-continuous, the statement is equivalent to \[ h_A(\mathbf x, \bar{\bar{\mathbf x}}) \leq \phi_A(\mathbf x, \bar{\mathbf x}) + h_A(\bar{\mathbf x}, \bar{\bar{\mathbf x}}), \quad \forall\ \mathbf x, \bar{\mathbf x}, \bar{\bar{\mathbf x}} \in \Sigma. \] Fix $\gamma>0$ and $\epsilon \in (0, \lambda/2) $ such that $\textrm{H\"old}(A)(2\epsilon)^\theta/(1-\lambda^\theta)<\gamma$. There exists a path $(\mathbf{z}^0,\ldots,\mathbf{z}^k) \in \mathcal{P}_k(\mathbf x, \bar{\mathbf x}, \epsilon)$ such that \[ \sum_{j = 1}^{k} (A - \bar A)(\mathbf z^j) < \inf_{n \geq 1} S_A^\epsilon(\mathbf x, \bar{\mathbf x}, n) + \gamma. \] For any $ N \geq 1 $, there exists a path $(\bar{\mathbf z}^0,\ldots,\bar{\mathbf z}^{l}) \in \mathcal P_{l}(\bar{\mathbf x}, \bar{\bar{\mathbf x}}, \epsilon) $ of length $l\geq N$ such that \[ \sum_{j = 1}^{l} (A - \bar A)(\bar{\mathbf z}^j) < \inf_{n \ge N} S_A^\epsilon(\bar{\mathbf x}, \bar{\bar{\mathbf x}}, n) + \gamma. 
\] We define a path $(\bar{\bar{\mathbf{z}}}^0,\ldots,\bar{\bar{\mathbf{z}}}^{k+l})\in\mathcal{P}_{k+l}(\mathbf x, \bar{\bar{\mathbf x}}, 3\epsilon)$ in the following way \begin{gather*} \bar{\bar{\mathbf{z}}}^j=\bar{\mathbf{z}}^{j-k},\,\, \forall\ j=k+1,\ldots,k+l, \quad\quad \bar{\bar{\mathbf{z}}}^j=(\bar{\bar{\mathbf{y}}}^j,\bar{\bar{\mathbf{x}}}^j),\,\, \forall\ j=0,\ldots,k, \\ \bar{\bar{\mathbf{y}}}^j=\mathbf{y}^j,\,\, \forall\ j=0,\ldots,k, \quad\quad \bar{\bar{\mathbf{x}}}^k=\bar{\mathbf{x}}^0,\,\, \bar{\bar{\mathbf{x}}}^{j-1}=\tau_{\mathbf{y}^j}(\bar{\bar{\mathbf{x}}}^j),\,\, \forall\ j=1,\ldots,k. \end{gather*} \noindent We notice that $d(\bar{\bar{\mathbf{x}}}^j,\mathbf{x}^j) \leq \lambda^{k-j}d(\bar{\bar{\mathbf{x}}}^k,\mathbf{x}^k)$, for all $j=0,\ldots,k$. Since \[ d(\bar{\bar{\mathbf{x}}}^k,\mathbf{x}^k)= d(\bar{\mathbf{x}}^0,\mathbf{x}^k) \leq d(\bar{\mathbf{x}}^0,\bar{\mathbf{x}}) +d(\bar{\mathbf{x}},\mathbf{x}^k)< 2\epsilon, \] \noindent we obtain $d(\bar{\bar{\mathbf{x}}}^0,\mathbf{x}) \leq \lambda^k 2\epsilon +\epsilon < 3\epsilon$. Hence, it follows that \begin{align*} \inf_{n \ge N} S_A^{3\epsilon}(\mathbf x, \bar{\bar{\mathbf x}}, n) &\leq \sum_{j = 1}^{k+l} (A - \bar A)(\bar{\bar{\mathbf z}}^j) \\ &\leq \sum_{j = 1}^{l} (A - \bar A)(\bar{\mathbf z}^j) + \sum_{j = 1}^{k} (A - \bar A)(\mathbf z^j) + \frac{(2\epsilon)^\theta}{1 - \lambda^\theta}\text{H\"old}_\theta(A) \\ &\leq \inf_{n \geq 1} S_A^\epsilon(\mathbf x, \bar{\mathbf x}, n) + \inf_{n \ge N} S_A^\epsilon(\bar{\mathbf x}, \bar{\bar{\mathbf x}}, n) + 3\gamma \\ &\leq \phi_A(\mathbf x, \bar{\mathbf x}) + \inf_{n \ge N} S_A^\epsilon(\bar{\mathbf x}, \bar{\bar{\mathbf x}}, n) + 3\gamma. \end{align*} \noindent By taking first $N\to+\infty$, then $\epsilon\to 0$ and $\gamma\to 0$, we get \[ h_A(\mathbf x, \bar{\bar{\mathbf x}}) \leq \phi_A(\mathbf x, \bar{\mathbf x}) + h_A(\bar{\mathbf x}, \bar{\bar{\mathbf x}}). \] \end{proof} Other properties of the Ma\~n\'e potential and the Peierls barrier can be derived from the previous proposition. For instance, item 4 gives us the following inequality \[ h_A(\mathbf x, \bar{\bar{\mathbf x}}) \le h_A(\mathbf x, \bar{\mathbf x}) + h_A(\bar{\mathbf x}, \bar{\bar{\mathbf x}}), \quad \forall\ \mathbf x, \bar{\mathbf x}, \bar{\bar{\mathbf x}} \in \Sigma. \] We now begin the proof of theorem \ref{principal}. It follows immediately from the next lemma. \begin{lemma}\label{secundario} Let $ D \subset \Sigma $ be an open set containing $ \Omega(A) $. Denote by $\mathcal{D}_A$ the subset of H\"older sub-actions $ u $ such that $ \pi(\mathbb M_A(u)) \subset D $. Then, for the H\"older topology, $\mathcal{D}_A$ is an open dense subset of the H\"older sub-actions. \end{lemma} We only need a few lines to show that lemma~\ref{secundario} yields theorem~\ref{principal}. As a matter of fact, if one considers, for each positive integer $ j $, the open set $ D_j = \{ \mathbf x \in \Sigma \,|\, d(\mathbf x, \Omega(A)) < 1/j \} $ and the corresponding open dense subset of H\"older sub-actions $ \mathcal{D}_{A,j} $, then the set of H\"older separating sub-actions contains the countable intersection $\cap_{j>0}\mathcal{D}_{A,j}$. \begin{proof}[Proof of lemma \ref{secundario}] We only discuss the denseness of $ \mathcal{D}_A$. \noindent {\it Part 1.} Let $ v $ be any H\"older sub-action for $ A $. 
We will show that, for every $ \mathbf x \notin D $, there exists a H\"older sub-action $ v_{\mathbf x} $ as close as we want to $ v $ in the H\"older topology with a projected contact locus disjoint from $ \mathbf x $, that is, $ \mathbf x \notin \pi(\mathbb M_A(v_{\mathbf x})) $ or \[ v_{\mathbf x} (\mathbf x) - v_{\mathbf x}(\tau_{\mathbf y}(\mathbf x)) < A(\mathbf y, \mathbf x) - \bar A, \quad \forall\ \mathbf y \in \Sigma_{\mathbf x}^*. \] Let $ \mathbf x \notin D $. We discuss two cases. \noindent {\it Case a.} We assume there exists an integer $k \geq 0$ such that, for every path of length $k$ beginning at $\mathbf{x}$, $(\mathbf{z}^0 = (\mathbf y^0, \mathbf x), \ldots, \mathbf{z}^k =(\mathbf y^k, \sigma^k(\mathbf x))) \in \mathcal{P}_k(\mathbf{x})$, the terminal point $\mathbf{z}^k \not\in \mathbb{M}_A(v)$. If $k=0$, we choose $v_{\mathbf{x}}=v$. Assume now $k\geq 1$. Let \[ B := A - \bar A - v \circ \pi + v \circ \pi \circ \hat \sigma^{-1} \geq 0 \] be the associated normalized observable ($B\geq0$ and $\bar B=0$). We recall that $ \tau_{\mathbf{y}^j}(\sigma^{j}(\mathbf x)) = \sigma^{j-1}(\mathbf x) $, for all $j=1,\ldots,k$. So by hypothesis \begin{equation} B(\mathbf{z}^k)=B(\mathbf y^k, \sigma^k(\mathbf{x})) > 0, \;\; \forall\ \mathbf{y}^k \in \Sigma_{\sigma^{k}(\mathbf x)}^* \textrm{ s.t. } \sigma^{k-1}(\mathbf x)=\tau_{\mathbf{y}^k }(\sigma^{k}(\mathbf x)). \tag{I} \end{equation} Notice first that, if $(\bar{\mathbf{z}}^0,\ldots,\bar{\mathbf{z}}^k)$ is a path of length $k$ and $ \gamma \in (0,1) $ is any constant, as $ B $ is non-negative, one has \begin{align} B(\bar{\mathbf{z}}^0) &= \sum_{j = 0}^{k-1} B(\bar{\mathbf z}^j) - \sum_{j = 1}^{k} B(\bar{\mathbf z}^j) + B(\bar{\mathbf z}^k) \nonumber \\ &\geq \gamma \sum_{j = 0}^{k-1} B(\bar{\mathbf z}^j) - \gamma \sum_{j = 1}^{k} B(\bar{\mathbf z}^j) + \gamma B(\bar{\mathbf z}^k) \geq \gamma \sum_{j = 0}^{k-1} B(\bar{\mathbf z}^j) - \gamma \sum_{j = 1}^{k} B(\bar{\mathbf z}^j). \tag{II} \end{align} Let $w_{k} : \Sigma \to \mathbb R$ be the function given by \[ w_k(\bar{\mathbf x}) := \inf \Big\{ \sum_{j = 1}^{k} B(\bar{\mathbf z}^j) \,\big|\, (\bar{\mathbf z}^0, \ldots,\bar{\mathbf z}^{k}) \in \mathcal P_k(\bar{\mathbf{x}}) \Big\}, \quad \forall\ \bar{\mathbf x} \in \Sigma. \] Because $ \mathcal P_k(\bar{\mathbf{x}}) $ is a closed subspace of the compact space $ \hat \Sigma^{k + 1} $, the above infimum is effectively a minimum. Moreover, since the application $ C(\bar{\mathbf{x}}) := \min \{ B \circ \hat \sigma(\bar{\mathbf y},\bar{\mathbf{x}}) \,|\, \bar{\mathbf{y}}\in\Sigma_{\bar{\mathbf{x}}}^* \}$ is H\"older, $w_k = \sum_{j=0}^{k-1} C\circ\sigma^j$ is also H\"older\footnote{We leave the details to the reader. In particular, one shall note that $w_k = \sum_j C\circ\sigma^j$ means $ \min(a + b) = \min a + \min b $, which indicates the importance of what a path is.}. We first prove that $-\gamma w_k$ is a sub-action. Let $\bar{\mathbf{x}}\in\Sigma$ and $\bar{\mathbf{y}}\in\Sigma_{\bar{\mathbf{x}}}^*$. There exists a path of length $k$, $ (\bar{\mathbf z}^0,\ldots,\bar{\mathbf z}^k)$, beginning at $\bar{\mathbf{x}}$ and realizing the minimum \[ w_k(\bar{\mathbf x}) = \sum_{j = 1}^{k} B(\bar{\mathbf z}^j). \] Notice the only constraint on $\bar{\mathbf{y}}^0$ is $\bar{\mathbf{y}}^0\in\Sigma_{\bar{\mathbf{x}}}^*$, besides $\bar{\mathbf{y}}^0$ does not appear in the previous sum. 
Choose $\bar{\mathbf{y}}^0=\bar{\mathbf{y}}$, set $\bar{\mathbf{x}}^{-1}:=\tau_{\bar{\mathbf{y}}}(\bar{\mathbf{x}})$, pick $\bar{\mathbf{y}}^{-1}\in\Sigma_{\bar{\mathbf{x}}^{-1}}^*$ and let $\bar{\mathbf{z}}^{-1}:=(\bar{\mathbf{y}}^{-1},\bar{\mathbf{x}}^{-1})$. Then $(\bar{\mathbf{z}}^{-1},\bar{\mathbf{z}}^0,\ldots,\bar{\mathbf{z}}^{k-1})$ is a path of length $k$ beginning at $\tau_{\bar{\mathbf{y}}}(\bar{\mathbf{x}})$. So denote $ \bar{\mathbf z} := (\bar{\mathbf y}, \bar{\mathbf x}) $. Thanks to inequality (II) \[ B(\bar{\mathbf{z}})=B(\bar{\mathbf{z}}^0) \geq \gamma \sum_{j = 0}^{k-1} B(\bar{\mathbf z}^j) - \gamma \sum_{j = 1}^{k} B(\bar{\mathbf z}^j) \ge \gamma w_k(\tau_{\bar{\mathbf{y}}}(\bar{\mathbf{x}})) - \gamma w_k(\bar{\mathbf{x}}), \] which shows $-\gamma w_k$ is a sub-action for $ B $. Moreover, given any $\mathbf{y}\in\Sigma_{\mathbf{x}}^*$, the same computation for $ \mathbf{z} := (\mathbf y, \mathbf x) $ instead of $\bar{\mathbf{z}}$ and (I) ensure that \[ B(\mathbf{z}) - \gamma w_k(\tau_{\mathbf{y}}(\mathbf{x})) + \gamma w_k(\mathbf{x}) \geq \gamma B(\mathbf{z}^k) > 0. \] We have proved that $ \mathbf x \notin \pi(\mathbb M_B(-\gamma w_k))=\pi(\mathbb{M}_A(v-\gamma w_k))$. Since $ \gamma $ can be taken as small as we want, we have shown the existence of a H\"older sub-action $v_{\mathbf x} = v - \gamma w_{k} $ close to $v$ in the H\"older topology satisfying $ \mathbf x \notin \pi(\mathbb M_A(v_{\mathbf x})) $. \noindent {\it Case b.} We suppose that, for every integer $ k\geq0 $, one can find a path of length $k$, $(\mathbf{z}^0,\ldots,\mathbf{z}^{k})$, beginning at $\mathbf{x}$, such that $\mathbf{z}^k \in \mathbb M_A(v)$, or equivalently $ B(\mathbf{z}^k) = 0 $ with $B$ as before. In other words, there exists $ \mathbf y^0 \in \Sigma_{\mathbf x}^* $ with $ B(\mathbf y^0, \mathbf x) = 0 $ and, for any $k\geq1$, there exists $\mathbf{y}^k\in\Sigma_{\mathbf{x}^k}^*\cap(\sigma^*)^{-1}(\Sigma_{\mathbf{x}^{k-1}}^*)$ such that $B(\mathbf{y}^k,\mathbf{x}^k)=0$, where $\mathbf{x}^k=\sigma^k(\mathbf{x})$. Define $\bar{\mathbf{z}}^0=(\mathbf{y}^0,\mathbf{x})$ and $\bar{\mathbf{z}}^k=(\mathbf{y}^k,\mathbf{x}^k)$ for all $k\geq1$. Notice that $(\bar{\mathbf{z}}^0,\ldots,\bar{\mathbf{z}}^k)$ is now a path of arbitrary length $ k $, beginning at $\mathbf{x}$, which satisfies $B(\bar{\mathbf{z}}^j)=0$ for $j = 0, \ldots, k $. Let $\bar{\mathbf{x}}\in\Omega(A)=\Omega(B)$ be any limit point of $(\mathbf{x}^k)_k$ chosen once and for all. Let $ w := h_B(\bar{\mathbf x},\cdot)$ be the H\"older sub-action for $ B $ given by the corresponding Peierls barrier. Notice that by the definition of the Peierls barrier (see definition~\ref{ManePeierlsBarrier}) we clearly get $ h_B \ge 0 $, since $B\geq0$ and $\bar B=0$. Furthermore, we remark that $ \phi_B (\mathbf x, \sigma^k(\mathbf x)) = 0 $ for all $k\geq1$ and that \[ w(\mathbf{x}) = h_B(\bar{\mathbf x}, \mathbf x) = \liminf_{k \to +\infty} \phi_B(\mathbf x, \sigma^k(\mathbf x)) + h_B(\bar{\mathbf x}, \mathbf x) \ge h_B(\mathbf x, \mathbf x) > 0. \] Here we have used item 4 of proposition~\ref{propriedadesbasicas} to obtain the first inequality and item 5 of the same proposition to ensure the strict inequality since $ \mathbf x \notin D \supset \Omega(A) = \Omega(B) $. Let $ \gamma \in (0,1) $ be any real number as close to 0 as we want. 
We claim that $ \mathbf x $ satisfies again the first case, namely, there exists $k\geq1$ such that, for any path of length $k$, $(\mathbf{z}^0 = (\mathbf y^0,\mathbf x^0), \ldots, \mathbf{z}^k = (\mathbf y^k, \mathbf x^k))$, beginning at $\mathbf{x}$, one has \[ B(\mathbf{z}^k) - \gamma h_B(\bar{\mathbf{x}},\mathbf{x}^k) +\gamma h_B(\bar{\mathbf{x}},\mathbf{x}^{k-1}) > 0. \] (Notice that $\gamma w$ is again a sub-action for $B$ since $B$ is non-negative.) Indeed, by contradiction, for any integer $ k \ge 0 $, we would have a path of length $ k $, $ (\mathbf{z}^0 = (\mathbf y^0,\mathbf x^0), \ldots, \mathbf{z}^k = (\mathbf y^k, \mathbf x^k)) $, beginning at $ \mathbf x $, such that $ \mathbf z^k \in \mathbb M_B(\gamma w) $, which would yield \[ 0 \leq B(\mathbf{z}^k) = \gamma h_B(\bar{\mathbf{x}},\mathbf{x}^k) - \gamma h_B(\bar{\mathbf{x}},\mathbf{x}^{k-1}), \quad \forall \, k \ge 1. \] On the one hand, from the inequality $ \gamma h_B(\bar{\mathbf{x}},\mathbf{x}^{k-1}) \le \gamma h_B(\bar{\mathbf{x}},\mathbf{x}^k) $, we would obtain $0 < w(\mathbf x) = h_B(\bar{\mathbf{x}},\mathbf{x}) \leq h_B(\bar{\mathbf{x}},\mathbf{x}^k)$ for all $k\geq1$. On the other hand, by taking a subsequence of $\{\mathbf{x}^k\} = \{\sigma^k(\mathbf x)\} $ converging to $ \bar{\mathbf x} $, $h_B(\bar{\mathbf{x}},\mathbf{x}^k)$ would converge to $ h_B(\bar{\mathbf{x}},\bar{\mathbf{x}}) = 0 $, since $ \bar{\mathbf x} \in \Omega(B) $. We have thus obtained a contradiction. Hence, case (a) implies that there exists a sub-action $v_{\mathbf{x}}$, close to $v$ in the H\"older topology, satisfying $ \mathbf x \notin \pi(\mathbb M_A(v_{\mathbf x})) $. \noindent{\it Part 2.} We have just proved that, for any $ \mathbf x \notin D $, there exists a sub-action $ v_{\mathbf x} $ close to $ v $ and a ball $ B(\mathbf x, \epsilon_{\mathbf x}) $ of radius $ \epsilon_{\mathbf x} > 0 $ centered at $ \mathbf x $ such that $$ \forall \; \bar{\mathbf x} \in B(\mathbf x, \epsilon_{\mathbf x}), \quad \bar{\mathbf x} \notin \pi(\mathbb M_A(v_{\mathbf x})). $$ We can extract from the family of these balls $ \{ B(\mathbf x, \epsilon_{\mathbf x}) \}_{\mathbf x} $ a finite family indexed by $ \{ \mathbf x^j \}_{1 \le j \le K} $ which is still a covering of the compact set $ \Sigma \setminus D $. Let $$ u = \frac{1}{K} \sum_{j = 1}^K v_{\mathbf x^j}. $$ Then it is easy to check that $ u $ is a H\"older sub-action for $ A $ satisfying $ \pi(\mathbb M_A(u)) \subset D $, namely, $ u \in \mathcal D_A $. Since each sub-action $ v_{\mathbf x} $ can be taken as close as we want to $ v $ in the H\"older topology, the same is true for $ u $. \end{proof} \end{section} \begin{section}{Proof of theorem \ref{discretestructure}} \label{proof_theorem_structure} It was proved in \cite{GL2} that the projection of the support of a minimizing probability measure $\hat\mu$ is included into the $A$-non-wandering set $\Omega(A)$ when such projection is ergodic. If $\pi_*\hat\mu$ is ergodic, $\pi(\textrm{supp}(\hat\mu))$ may be seen as an irreducible component in the sense that any two points can be joined by an $\epsilon$-closed trajectory. We introduce here a more general notion of irreducibility. \begin{definition-proposition}\label{relacao de equivalencia} Let $A : \hat \Sigma \to \mathbb R $ be a H\"older observable. We say that two points $\mathbf{x},\bar{\mathbf{x}}$ of $\Omega(A)$ are equivalent and write $\mathbf{x}\sim\bar{\mathbf{x}}$ if \[ h_A(\mathbf{x},\bar{\mathbf{x}}) + h_A(\bar{\mathbf{x}},\mathbf{x}) = 0. 
\] Then $\sim$ is an equivalence relation. Its equivalence classes are called irreducible components. \end{definition-proposition} \begin{proof} It is obvious that $\sim$ is reflexive ($h_A(\mathbf{x},\mathbf{x})=0 \Leftrightarrow \mathbf{x}\in\Omega(A)$) and symmetric. Let $u$ be a continuous sub-action and $B:=A-\bar A-u\circ\pi+u\circ\tau$ be the associated normalized observable. Then the definition of the Peierls barrier (see definition~\ref{ManePeierlsBarrier}) implies \[ h_B(\mathbf{x},\bar{\mathbf{x}}) = h_A(\mathbf{x},\bar{\mathbf{x}}) -u(\bar{\mathbf{x}}) + u(\mathbf{x}), \quad \forall\ \mathbf{x},\bar{\mathbf{x}} \in \Sigma. \] Since $ h_B(\mathbf{x},\bar{\mathbf{x}}) \geq 0 $, we see that $\mathbf{x}\sim\bar{\mathbf{x}} \Leftrightarrow h_B(\mathbf{x},\bar{\mathbf{x}})=0$ and $h_B(\bar{\mathbf{x}},\mathbf{x})=0$. To show the transitivity property, it is enough to prove \[ \mathbf{x}\sim\bar{\mathbf{x}} \textrm{ and } \bar{\mathbf{x}}\sim\bar{\bar{\mathbf{x}}} \Longrightarrow h_B(\mathbf{x},\bar{\bar{\mathbf{x}}}) = 0. \] But proposition \ref{propriedadesbasicas} guarantees \[ 0 \leq h_B(\mathbf{x},\bar{\bar{\mathbf{x}}}) \leq h_B(\mathbf{x},\bar{\mathbf{x}}) + h_B(\bar{\mathbf{x}},\bar{\bar{\mathbf{x}}}) = 0. \] The transitivity property is proved. \end{proof} \begin{proposition}\label{proposicao componentes} The irreducible components are closed and $\sigma$-invariant. \end{proposition} \begin{proof} \noindent{\it Part 1.} Let $\mathbf{x}\in\Omega(A)$. Consider a family $\{\bar{\mathbf{x}}_\epsilon\}_\epsilon$ of points of $\Omega(A)$ equivalent to $\mathbf{x}$ and within $\epsilon$ of $\bar{\mathbf{x}}\in\Omega(A)$. Then on the one hand, $h_A(\mathbf{x},\bar{\mathbf{x}}) + h_A(\bar{\mathbf{x}},\mathbf{x}) \geq h_A(\mathbf{x},\mathbf{x})=0$, and on the other hand, \[ h_A(\mathbf{x},\bar{\mathbf{x}}_\epsilon) + h_A(\bar{\mathbf{x}},\mathbf{x}) \leq h_A(\mathbf{x},\bar{\mathbf{x}}_\epsilon) + h_A(\bar{\mathbf{x}},\bar{\mathbf{x}}_\epsilon) + h_A(\bar{\mathbf{x}}_\epsilon,\mathbf{x}) = h_A(\bar{\mathbf{x}},\bar{\mathbf{x}}_\epsilon). \] By continuity of $h_A(\mathbf{x},\cdot)$ and $h_A(\bar{\mathbf{x}},\cdot)$ with respect to the second variable, the previous inequality gives $h_A(\mathbf{x},\bar{\mathbf{x}}) + h_A(\bar{\mathbf{x}},\mathbf{x}) \leq 0$. Therefore $\bar{\mathbf{x}}\sim\mathbf{x}$ and the class containing $\mathbf{x}$ is closed. \noindent{\it Part 2.} Let $\mathbf{x}\in\Omega(A)$. Either $\sigma(\mathbf{x})=\mathbf{x}$, in which case obviously $\sigma(\mathbf{x})\sim\mathbf{x}$, or $\sigma(\mathbf{x})\not=\mathbf{x}$ and item 3 of proposition~\ref{propriedadesbasicas} shows $\phi_A(\mathbf{x},\sigma(\mathbf{x})) + \phi_A(\sigma(\mathbf{x}),\mathbf{x}) = \phi_A(\mathbf{x},\mathbf{x})=0$. Remember that $h_A(\mathbf{y},\cdot)=\phi_A(\mathbf{y},\cdot)$ whenever $\mathbf{y}\in\Omega(A)$; note that $ \mathbf x $ and $ \sigma(\mathbf x) $ belong to the $\sigma$-invariant set $ \Omega(A) $. Then we get $ h_A(\mathbf{x},\sigma(\mathbf{x})) + h_A(\sigma(\mathbf{x}),\mathbf{x}) = h_A(\mathbf{x},\mathbf{x})=0 $ and $ \mathbf x $ and $ \sigma(\mathbf x) $ belong to the same irreducible component. \end{proof} We assume from now on that $\Omega(A)$ is equal to a finite disjoint union of irreducible components, $\Omega(A)=C_1\sqcup\ldots\sqcup C_r$. The following proposition shows that the Peierls barrier normalized by a separating sub-action could play the role of a quantized set of energy levels. 
\begin{proposition} Let $A$ be a H\"older observable and assume that $\Omega(A)=\sqcup_{i=1}^rC_i$ is equal to a finite union of irreducible components. \begin{enumerate} \item If $u$ is a continuous sub-action, then \[ (\mathbf{x}^i,\mathbf{x}^j)\mapsto h_A(\mathbf{x}^i,\mathbf{x}^j)-u(\mathbf{x}^j)+u(\mathbf{x}^i) \textrm{ is constant on } C_i\times C_j. \] \item If $u$ is a continuous separating sub-action, then \[ h_A(\mathbf{x}^i,\mathbf{x}^j) > u(\mathbf{x}^j)-u(\mathbf{x}^i), \quad \forall\ (\mathbf{x}^i,\mathbf{x}^j) \in C_i\times C_j, \;\; \forall\ i \not= j. \] \end{enumerate} \end{proposition} \begin{proof} We first normalize $A$ by taking $B=A-\bar A - u\circ\pi + u\circ\tau$ so that $B\geq 0$ and $\bar B=0$. \noindent{\it Part 1.} Let $(\mathbf{x}^i,\mathbf{x}^j), (\bar{\mathbf{x}}^i,\bar{\mathbf{x}}^j)\in C_i\times C_j$. Then $h_B(\mathbf{x}^i,\bar{\mathbf{x}}^i)=h_B(\mathbf{x}^j,\bar{\mathbf{x}}^j)=0$ and \[ h_B(\bar{\mathbf{x}}^i,\bar{\mathbf{x}}^j) \leq h_B(\bar{\mathbf{x}}^i,\mathbf{x}^i) + h_B(\mathbf{x}^i,\mathbf{x}^j) + h_B(\mathbf{x}^j,\bar{\mathbf{x}}^j) \leq h_B(\mathbf{x}^i,\mathbf{x}^j). \] \noindent Conversely $h_B(\mathbf{x}^i,\mathbf{x}^j) \leq h_B(\bar{\mathbf{x}}^i,\bar{\mathbf{x}}^j)$ and we have proved that $h_B(\cdot,\cdot)$ is constant on $C_i\times C_j$. \noindent{\it Part 2.} Let $\{U_i^\eta\}_{\eta>0}$ be a basis of neighborhoods of $C_i$. Since $\sigma(C_i)\subset C_i$ is disjoint from each $C_j$, $j\not= i$, there exists $\eta>0$ small enough such that $\sigma(U_i^\eta)$ is disjoint from $\cup_{j\not= i}U_j^\eta$. Let $i\not= j$ and $\mathbf{x}\in C_i$, $\bar{\mathbf{x}} \in C_j$. For $\epsilon>0$ sufficiently small, the ball of radius $\epsilon$ centered at $\mathbf{x}$ is contained in $U_i^\eta$. Let $(\mathbf{z}^0 = (\mathbf y^0, \mathbf x^0),\ldots,\mathbf{z}^k = (\mathbf y^k, \mathbf x^k))$ be a path of length $k$ within $\epsilon$ of $\mathbf{x}$ and $\bar{\mathbf{x}}$, more precisely, satisfying $d(\mathbf{x}^0,\mathbf{x})<\epsilon$ and $d(\mathbf{x}^k,\bar{\mathbf{x}})<\epsilon$. Let $p\geq1$ be the first time $\mathbf{x}^p=\sigma^p(\mathbf{x}^0)\not\in U_i^\eta$ (such a $p$ exists, with $p\leq k$, because for $\epsilon$ and $\eta$ small enough the terminal point $\mathbf{x}^k$, which is $\epsilon$-close to $\bar{\mathbf{x}}\in C_j$, does not belong to $U_i^\eta$). Then $\mathbf{x}^{p-1}\in U_i^\eta$ and $\mathbf{x}^p\in \sigma(U_i^\eta) \setminus U_i^\eta$. By the choice of $\eta$, $\mathbf{x}^p\not\in \cup_{j=1}^r U_j^\eta =: \mathcal{U}\supset\Omega(A)$. Since $\Omega(A)=\pi(\mathbb{M}_A(u))$, let $\hat{\mathcal{U}} := \pi^{-1}(\mathcal{U})$, then $\mathbf{z}^p\not\in\hat{\mathcal{U}}$ and \[ \sum_{l=1}^k B(\mathbf{z}^l) \geq B(\mathbf{z}^p) \geq \min_{\hat{\Sigma}\setminus \hat{\mathcal{U}}} B =: m >0. \] \noindent We have proved that $h_B(\mathbf{x},\bar{\mathbf{x}}) \geq m >0$. \end{proof} We are now in a position to prove our second result. \begin{proof}[Proof of theorem \ref{discretestructure}] We fix once and for all $\bar{\mathbf{x}}^i\in C_i$. \noindent{\it Part 1.} We know from theorem \ref{structure} that a continuous calibrated sub-action satisfies $u(\mathbf{x})=\min_{\bar{\mathbf{x}}\in\Omega(A)} [ u(\bar{\mathbf{x}}) + h_A(\bar{\mathbf{x}},\mathbf{x}) ]$. If $\bar{\mathbf{x}}\in C_i$, then $\bar{\mathbf{x}}\sim\bar{\mathbf{x}}^i$ and $h_A(\bar{\mathbf{x}}^i,\bar{\mathbf{x}}) + h_A(\bar{\mathbf{x}},\bar{\mathbf{x}}^i)=0$. 
Then \begin{align*} u(\bar{\mathbf{x}}^i) + h_A(\bar{\mathbf{x}}^i,\mathbf{x}) &\leq u(\bar{\mathbf{x}}^i) + h_A(\bar{\mathbf{x}}^i,\bar{\mathbf{x}}) + h_A(\bar{\mathbf{x}},\mathbf{x}) \\ & = u(\bar{\mathbf{x}}^i) - h_A(\bar{\mathbf{x}},\bar{\mathbf{x}}^i) + h_A(\bar{\mathbf{x}},\mathbf{x}) \leq u(\bar{\mathbf{x}}) + h_A(\bar{\mathbf{x}},\mathbf{x}). \end{align*} We have proved that $u(\mathbf{x}) = \min_{1\leq i\leq r} [ u(\bar{\mathbf{x}}^i) + h_A(\bar{\mathbf{x}}^i,\mathbf{x})]$. The fact that $(u(\bar{\mathbf{x}}^1),\ldots,u(\bar{\mathbf{x}}^r))\in\mathcal{C}_A(\bar{\mathbf{x}}^1,\ldots,\bar{\mathbf{x}}^r)$ comes from items 1 and 6 of proposition~\ref{propriedadesbasicas}. \noindent{\it Part 2.} Let $(u_1,\ldots,u_r)\in\mathcal{C}_A(\bar{\mathbf{x}}^1,\ldots,\bar{\mathbf{x}}^r)$ and define $\phi:\Omega(A)\to\mathbb{R}$ by $\phi(\mathbf{x}) := u_i+h_A(\bar{\mathbf{x}}^i,\mathbf{x})$ for all $\mathbf{x}\in C_i$. We notice that $\phi$ is continuous and we show that $\phi(\bar{\mathbf{x}}) - \phi(\mathbf{x}) \leq h_A(\mathbf{x},\bar{\mathbf{x}})$ for all $\mathbf{x},\bar{\mathbf{x}}\in\Omega(A)$. Indeed, if $\mathbf{x}\in C_i$ and $\bar{\mathbf{x}}\in C_j$, then \begin{align*} \phi(\bar{\mathbf{x}}) - \phi(\mathbf{x}) &= (u_j-u_i) + h_A(\bar{\mathbf{x}}^j,\bar{\mathbf{x}}) - h_A(\bar{\mathbf{x}}^i,\mathbf{x}) \\ &\leq h_A(\bar{\mathbf{x}}^i,\bar{\mathbf{x}}^j) + h_A(\bar{\mathbf{x}}^j,\bar{\mathbf{x}}) - h_A(\bar{\mathbf{x}}^i,\mathbf{x}) \\ &= h_A(\bar{\mathbf{x}}^i,\bar{\mathbf{x}}^j) - h_A(\bar{\mathbf{x}},\bar{\mathbf{x}}^j) - h_A(\bar{\mathbf{x}}^i,\mathbf{x}) \\ &\leq h_A(\bar{\mathbf{x}}^i,\bar{\mathbf{x}}) - h_A(\bar{\mathbf{x}}^i,\mathbf{x}) \leq h_A(\mathbf{x},\bar{\mathbf{x}}). \end{align*} \noindent (The last but one inequality uses item 1 of proposition \ref{propriedadesbasicas} and the fact that $h_A(\bar{\mathbf{x}}^i,\cdot)$ is a sub-action.) By theorem~\ref{structure}, we know that the function $ u(\mathbf{x}) := \min_{\bar{\mathbf{x}}\in\Omega(A)} [ \phi(\bar{\mathbf{x}}) + h_A(\bar{\mathbf{x}},\mathbf{x}) ]$ is a continuous calibrated sub-action which extends $\phi$ on $\Omega(A)$. In particular, $u(\bar{\mathbf{x}}^i)=\phi(\bar{\mathbf{x}}^i)=u_i$ and, thanks to part 1, $u$ coincides with $\min_{1\leq i\leq r} [ u_i + h_A(\bar{\mathbf{x}}^i,\cdot) ]$. \noindent{\it Part 3.} Let $i_0 \in \{1, \ldots, r\} $. If $(u_1,\ldots,u_r)$ satisfies $u_i = u_{i_0} + h_A(\bar{\mathbf{x}}^{i_0},\bar{\mathbf{x}}^i)$, then $i_0$ is unique. Otherwise there would exist $i_1 \not= i_0$ such that $u_i = u_{i_1} + h_A(\bar{\mathbf{x}}^{i_1},\bar{\mathbf{x}}^i)$. Thus \[ u_{i_1} = u_{i_0} + h_A(\bar{\mathbf{x}}^{i_0},\bar{\mathbf{x}}^{i_1}) \quad\textrm{and}\quad u_{i_0} = u_{i_1} + h_A(\bar{\mathbf{x}}^{i_1},\bar{\mathbf{x}}^{i_0}). \] We would obtain $h_A(\bar{\mathbf{x}}^{i_0},\bar{\mathbf{x}}^{i_1}) + h_A(\bar{\mathbf{x}}^{i_1},\bar{\mathbf{x}}^{i_0}) = 0$ contradicting $\bar{\mathbf{x}}^{i_0} \not\sim \bar{\mathbf{x}}^{i_1}$. The fact that $(u_1,\ldots,u_r)\in\mathcal{C}_A(\bar{\mathbf{x}}^1,\ldots,\bar{\mathbf{x}}^r)$ comes from \[ u_j-u_i = h_A(\bar{\mathbf{x}}^{i_0},\bar{\mathbf{x}}^{j}) - h_A(\bar{\mathbf{x}}^{i_0},\bar{\mathbf{x}}^{i}) \leq h_A(\bar{\mathbf{x}}^{i},\bar{\mathbf{x}}^{j}). \] \noindent The end of part 3 follows since $u(\mathbf{x}) := u_{i_0} + h_A(\bar{\mathbf{x}}^{i_0},\mathbf{x})$ already defines a calibrated sub-action satisfying $u(\bar{\mathbf{x}}^i)=u_{i}$ for all $i$. \end{proof} The proof of application \ref{application} is elementary. 
\begin{proof}[Proof of application \ref{application}] Define $B := A-v\circ\pi + v\circ\tau-\bar A$; then the null function is a sub-action for $B$ and $u-v$ is a calibrated sub-action for $B$. Moreover, $h_B(\mathbf{x},\bar{\mathbf{x}}) = h_A(\mathbf{x},\bar{\mathbf{x}}) - v(\bar{\mathbf{x}}) + v(\mathbf{x})$ and $\Omega(A)=\Omega(B)$. It is therefore enough to assume $A$ normalized ($A\geq0$ and $\bar A=0$) and $v=0$. \noindent{\it Part 1.} If $\mathbf{x}\sim\bar{\mathbf{x}}$ are two points of $\Omega(A)$, then $h_A(\mathbf{x},\bar{\mathbf{x}})=0$ and $h_A(\bar{\mathbf{x}},\mathbf{x})=0$. Thanks to items 1 and 6 of proposition \ref{propriedadesbasicas}, we obtain $u(\mathbf{x})=u(\bar{\mathbf{x}})$. If $\mathbf{x}$ is any point of $\Sigma$, by the calibration of $u$, one can construct an inverse path $\{\mathbf{z}^{-i}\}_{i\geq0}$ in $\hat{\Sigma}$, with $\pi(\mathbf{z}^0)=\mathbf{x}$, such that $u(\mathbf{x}^{-i})-u(\mathbf{x}^{-i-1})=A(\mathbf{z}^{-i})$, $\mathbf{x}^{-i}=\pi(\mathbf{z}^{-i})$, for all $i$. Let $\bar{\mathbf{x}}$ be an accumulation point of $\{\mathbf{x}^{-i}\}_{i\geq0}$. Then $\bar{\mathbf{x}}\in\Omega(A)$ and, since $A\geq0$, the sequence $\{u(\mathbf{x}^{-i})\}_{i\geq0}$ is decreasing. In particular, the inequality $u(\mathbf{x})\geq u(\bar{\mathbf{x}})$ establishes $ \min_\Sigma u = \min_{\Omega(A)} u $. \noindent{\it Part 2.} Let $u_i$ be the value of $u$ on $C_i$. Assume we have ordered these values as $u_{i_1}\leq u_{i_2} \leq \ldots \leq u_{i_r}$. Let $\bar{\mathbf{x}}^i\in C_i$ be fixed. It suffices to prove $u(\bar{\mathbf{x}}^{i_k}) = u(\bar{\mathbf{x}}^{i_1}) + h_A(\bar{\mathbf{x}}^{i_1},\bar{\mathbf{x}}^{i_k})$ for all $k=1,\ldots,r$. It is true for $k=1$. Since $C_{i_{k+1}}$ is not a local minimum of $u$, one can find a sequence of points $\{\mathbf{x}_\epsilon\}_{\epsilon>0}$ within $\epsilon$ of $C_{i_{k+1}}$ such that $u(\mathbf{x}_\epsilon) < u(\bar{\mathbf{x}}^{i_{k+1}})$. From part 1 of theorem \ref{discretestructure}, there exists an index $j$ such that $u(\mathbf{x}_\epsilon) = u(\bar{\mathbf{x}}^{j}) + h_A(\bar{\mathbf{x}}^j,\mathbf{x}_\epsilon)$. Since $h_A\geq0$, $u_j=u(\bar{\mathbf{x}}^{j}) \leq u(\mathbf{x}_\epsilon) < u_{i_{k+1}}$. So $j$ has to be one of the indices $i_1,\ldots,i_k$. By induction, $u(\bar{\mathbf{x}}^{j}) = u(\bar{\mathbf{x}}^{i_1}) + h_A(\bar{\mathbf{x}}^{i_1},\bar{\mathbf{x}}^{j})$ and \[ u(\mathbf{x}_\epsilon) = u(\bar{\mathbf{x}}^{i_1}) + h_A(\bar{\mathbf{x}}^{i_1},\bar{\mathbf{x}}^{j}) + h_A(\bar{\mathbf{x}}^j,\mathbf{x}_\epsilon). \] On the one hand, $h_A(\bar{\mathbf{x}}^{i_1},\bar{\mathbf{x}}^{j}) + h_A(\bar{\mathbf{x}}^j,\mathbf{x}_\epsilon) \geq h_A(\bar{\mathbf{x}}^{i_1},\mathbf{x}_\epsilon)$ implies \[ u(\mathbf{x}_\epsilon) \geq u(\bar{\mathbf{x}}^{i_1}) + h_A(\bar{\mathbf{x}}^{i_1},\mathbf{x}_\epsilon). \] On the other hand, as $u$ is a sub-action, we obtain the reverse inequality and finally \[ u(\mathbf{x}_\epsilon) = u(\bar{\mathbf{x}}^{i_1}) + h_A(\bar{\mathbf{x}}^{i_1},\mathbf{x}_\epsilon). \] Letting $\epsilon$ go to 0, $\mathbf{x}_\epsilon$ accumulates on $C_{i_{k+1}}$ and \[ u(\bar{\mathbf{x}}^{i_{k+1}}) = u(\bar{\mathbf{x}}^{i_1}) + h_A(\bar{\mathbf{x}}^{i_1},\bar{\mathbf{x}}^{i_{k+1}}). \] \end{proof} \end{section} \end{document}
\begin{document} \title{The effect of forcing on vacuum radiation} \author{Katherine Brown \and Ashton Lowenstein \and Harsh Mathur } \institute{K.Brown\at Physics Department\\ Hamilton College \\ 198 College Hill Road \\ Clinton, NY 13323\\ U.S.A. Tel.: +315-859-4585\\ \email{[email protected]} \and A. Lowenstein \at Hamilton College, Clinton, NY 13323 \and H.Mathur \at Case Western Reserve University, Cleveland, OH 44106 } \date{Received: date / Accepted: date} \maketitle \begin{abstract} Vacuum radiation has been the subject of theoretical study in both cosmology and condensed matter physics for many decades. Recently there has been impressive progress in experimental realizations as well. Here we study vacuum radiation when a field mode is driven both parametrically and by a classical source. We find that in the Heisenberg picture the field operators of the mode undergo a Bogolyubov transformation combined with a displacement; in the Schr\"{o}dinger picture the oscillator evolves from the vacuum to a squeezed coherent state. Whereas the Bogolyubov transformation is the same as would be obtained if only the parametric drive were applied the displacement is determined by both the parametric drive and the force. If the force is applied well after the parametric drive then the displacement is the same as would be obtained by the action of the force alone and it is essentially independent of $t_f$, the time lag between the application of the force and the parametric drive. If the force is applied well before the parametric drive the displacement is found to oscillate as a function of $t_f$. This behavior can be understood in terms of quantum interference. A rich variety of behavior is observed for intermediate values of $t_f$. The oscillations can turn off smoothly or grow dramatically and decrease depending on strength of the parametric drive and force and the durations for which they are applied. The displacement depends only on the Fourier component of the force at a single resonant frequency when the forcing and the parametric drive are well separated in time. However for a weak parametric drive that is applied at the same time as the force we show that the displacement responds to a broad range of frequencies of the force spectrum. Implications of our findings for experiments are briefly discussed. \keywords{ Casimir effect \and Quantum field theory (low energy) \and Quantum gravity} \end{abstract} \section{Introduction} \label{intro} In a landmark paper, Casimir recognized that quantum vacuum fluctuations can produce measurable effects, for example, an attractive force between two parallel conducting plates in a vacuum \cite{casimir}. Subsequently, Moore \cite{moore} observed that motion of the conducting plates would result in the generation of radiation from the vacuum, an effect dubbed the dynamical Casimir effect. Earlier, Parker \cite{parker} showed that in an expanding Universe, vacuum fluctuations would lead to particle production; according to the modern inflationary paradigm this mechanism is the origin of structure in the Universe \cite{weinberg}. In his seminal work, Hawking \cite{hawking} showed that even an apparently static system such as a non-rotating black hole will produce vacuum radiation due to its horizon. 
More recently, there have been significant advances on the experimental front: the Casimir effect \cite{casimirexpt} and the dynamical Casimir effect \cite{superconduct} have been unambiguously observed, and a laboratory analog of Hawking radiation proposed by Unruh \cite{unruh} has been claimed to have been observed in a Bose-Einstein condensate \cite{bec}. A common element in much analysis of dynamical Casimir radiation is to assume that the system starts in the vacuum state. However in cosmology there is an ambiguity about what constitutes the appropriate initial state and hence there has been an exploration of various alternative vacua as the initial state \cite{bunch}. Still earlier, in context of Hawking radiation from black holes, Wald explored the consequences of starting the system in an excited state rather than the vacuum. He found that the quantum radiation was enhanced in a manner reminiscent of stimulated emission \cite{wald}. Here we wish to study the related but distinct effect of starting the system in the vacuum and examining the quantum radiation that results when the system is simultaneously excited parametrically (as in the dynamical Casimir effect) and also driven directly by a classical source. This analysis may be relevant to the experimental systems that are currently under investigation noted above and discussed further in section \ref{sec:conclusion}. Another key motivation for this work is the observation of gravitational radiation from merging black holes \cite{ligo}. The gravitational radiation in this case is predominantly classical and reliable calculation of its magnitude has only recently been achieved \cite{numerics}. In order to estimate the quantum contribution to radiation in a merger it would be necessary to take into account the strong classical driving to which the gravitational radiation field is subject in these events. Here for simplicity we primarily focus on a simple toy model of a single field mode. In section \ref{sec:fields} we sketch the generalization to coupled modes and a complete field theory. We assume that the oscillator experiences parametric driving (frequency $\omega$ varies in time approaching the natural value $\omega_0$ asymptotically for $t \rightarrow \pm \infty$) and in addition an applied force $F(t)$. When treated individually the effect of each is well-known: in the Heisenberg picture, for a parametrically-driven oscillator the evolution of the ladder operators is a Bogolyubov transformation, and for a forced oscillator the ladder operators undergo simple displacement. Equivalently in the Schr\"{o}dinger picture the vacuum evolves into a squeezed vacuum state under parametric driving and into a coherent state under forcing. Our principal finding is that when the oscillator is driven parametrically as well as by a force then in the Heisenberg picture the ladder operators undergo a Bogolyubov transformation combined with a displacement (see eq (\ref{eq:forcedsol})), or equivalently, in the Schr\"{o}dinger picture the vacuum evolves to a squeezed coherent state. Furthermore, whereas the Bogolyubov transformation is found to be exactly the same as it would be in the absence of forcing, the displacement is markedly influenced by the simultaneous application of forcing and parametric driving. Only if the force is applied well after the parametric drive is the displacement the same as would have been obtained in the absence of the parametric drive. 
If the force is applied well before the parametric drive, the final state is the same as one would obtain if only the parametric drive were applied but the starting state was an appropriately chosen coherent state rather than the vacuum. When the parametric drive is applied at the same time as the force (the case of primary interest here) it has a dramatic effect on the displacement. The behavior of the displacement $\alpha$ as a function of $t_f$, the time lag between the force and the parametric drive, is shown in fig \ref{fig:plotz}. We show generically that for $t_f \rightarrow - \infty$ (force is applied much earlier than the parametric drive) $|\alpha|$ should undergo oscillations due to quantum interference; for $t_f \rightarrow \infty$ (force is applied much later than the parametric drive) the displacement should approach a constant value controlled by the Fourier component of the force at the frequency $\omega_0$. A rich variety of behavior is seen at intermediate $t_f$. We find that in the adiabatic limit (slowly varying $\omega(t)$) the oscillations turn off with a profile that may be described by a complementary error function. In the opposite, abrupt limit the oscillations also turn off smoothly and monotonically (fig \ref{fig:plotz}, lower panel). However, for other values of the parameters, using an exactly soluble model, we find that the oscillations can grow dramatically even when they are negligible for $t_f \rightarrow - \infty$ (fig \ref{fig:plotz}, upper panel). The displacement depends only on the Fourier component of the force at frequency $\omega_0$ in the absence of parametric driving or when the parametric drive and the force are well separated in time; however, when the parametric drive is weak and applied at the same time as the force we are able to show perturbatively that the displacement responds to a broad range of frequencies in the driving force. The remainder of this paper is organized as follows. In section \ref{sec:analysis} we present a general analysis of the model. We show that in the Heisenberg picture the field operators will undergo a Bogolyubov transformation combined with a displacement and derive general formulae for the transformation parameters and the displacement. We interpret these results in the Schr\"{o}dinger picture and extend the general analysis to multiple coupled modes and field theory. In section \ref{sec:examples} we return to a single mode and analyze a soluble model and various approximations that illustrate the rich variety of behavior that is obtained due to the interplay of the forcing and the parametric drive. In section \ref{sec:conclusion} we discuss the implications of our results for experiments and open questions. \section{Forced oscillator: general analysis} \label{sec:analysis} We consider a harmonic oscillator of mass $m$ whose natural frequency $\omega(t)$ varies in time; the time variation of the frequency represents the parametric driving that leads to vacuum radiation. The frequency is assumed to approach the natural value $\omega_0$ asymptotically as $t \rightarrow \pm \infty$. In addition the oscillator experiences a time dependent force $F(t)$, which is also assumed to vanish asymptotically as $t \rightarrow \pm \infty$; the forcing corresponds to a classical source that drives the field. For the applications we envisage the oscillator represents a single mode of a quantum field that is in some approximation decoupled from other degrees of freedom. 
Generally we will assume that the oscillator starts in the vacuum state $|0\rangle$ which is well defined as $t \rightarrow - \infty$ and we wish to determine the final behavior of the system as $t \rightarrow \infty$. Throughout the paper we adopt units where $\hbar =1$. \subsection{Classical Analysis} \label{sec:classical} It is helpful to first solve the classical equation of motion \begin{equation} \frac{d^2 x}{d t^2} + \omega^2 (t) x= \frac{F(t)}{m}. \label{eq:classical} \end{equation} For the case $F = 0$ eq (\ref{eq:classical}) has the form of a Schr\"{o}dinger equation for a particle scattering from a localized potential. Thus we can draw upon our intuition about the Schr\"{o}dinger equation to deduce that there exists a solution with asymptotic behavior \begin{eqnarray} \xi (t) & \rightarrow & e^{-i \omega_0 t} \hspace{3mm} {\rm for} \hspace{3mm} t \rightarrow - \infty \nonumber \\ & \rightarrow & A e^{-i \omega_0 t} + B e^{i \omega_0 t} \hspace{3mm} {\rm for} \hspace{3mm} t \rightarrow \infty \label{eq:jost} \end{eqnarray} where $A$ and $B$ are scattering coefficients. Since eq (\ref{eq:classical}) is real $\xi^\ast (t)$ constitutes a second independent solution. Note that these solutions have been chosen to satisfy Jost boundary conditions which are more suitable for our purpose than the conventional scattering boundary conditions. It is evident from the Schr\"{o}dinger analogy that \begin{equation} |A|^2 - |B|^2 = 1. \label{eq:currentconservation} \end{equation} In section \ref{sec:examples} we solve eq (\ref{eq:classical}) approximately in various circumstances and exactly for a particular choice of $\omega^2 (t)$ and thereby obtain more explicit expressions for the coefficients $A$ and $B$ and for $\xi(t)$. It is easy to verify that the solution that flows from the initial conditions $x (t_0) = x_0$ and $p(t_0) = p_0$ is \begin{eqnarray} \overline{x} (t) & = & \frac{1}{2} \left( x_0 + i \frac{p_0}{m \omega_0} \right) e^{i \omega_0 t_0} \xi (t) \nonumber \\ & + & \frac{1}{2} \left( x_0 - i \frac{p_0}{m \omega_0} \right) e^{- i \omega_0 t_0} \xi^\ast (t). \nonumber \\ \label{eq:xsolution} \end{eqnarray} Here we have assumed that $t_0$ is far in the past before the parametric drive was turned on. Eq (\ref{eq:xsolution}) can be further simplified if we assume that $t$ is sufficiently far in the future that the parametric drive has been turned off. In that case we can use the $t \rightarrow \infty$ asymptotic form of $\xi(t)$ given in eq (\ref{eq:jost}). By differentiating eq (\ref{eq:xsolution}) one can then obtain $\overline{p} (t)$, the momentum for the solution that flows from the initial conditions $(x_0, p_0)$. In order to analyze the effects of the force it is helpful to first consider the response $g(t, \tau)$ to an impulse $F(t)/m = \delta( t - \tau )$ at time $\tau$. With the retarded boundary condition $g(t, \tau) = 0$ for $t < \tau$ the impulse response is given by \begin{equation} g(t, \tau) = \frac{ \theta( t - \tau) }{W} \left[ \xi (\tau) \xi^\ast (t) - \xi^\ast (\tau) \xi(t) \right] \label{eq:green} \end{equation} where $\theta$ is the unit step function. The Wronskian $W = \xi (t) \xi^{\ast \prime} (t) - \xi'(t) \xi^\ast (t)$, where primes denote differentiation with respect to $t$. Since the Wronskian is a constant independent of time we use the $t \rightarrow - \infty$ behavior of $\xi$ to obtain $W = 2 i \omega_0$. 
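As a consistency check, eq (\ref{eq:green}) has the two properties required of the retarded impulse response: it vanishes at $t = \tau^{+}$, and its time derivative jumps there by $\partial_t g |_{t = \tau^{+}} = [ \xi (\tau) \xi^{\ast \prime} (\tau) - \xi^\ast (\tau) \xi' (\tau) ]/W = 1$, which is precisely the unit jump in velocity obtained by integrating $\ddot{g} + \omega^2 (t) g = \delta (t - \tau)$ across $t = \tau$.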
Now by superposition the solution that flows from the initial condition $x(t_0) = x_0$ and $p(t_0) = p_0$ is given by \begin{equation} x(t) = \overline{x} (t) + \frac{1}{m} \int_{t_0}^t d \tau \; g(t, \tau) F(\tau). \label{eq:superpos} \end{equation} Here we assume that $t_0$ is far in the past before either the force or the parametric drive turns on. By differentiating eq (\ref{eq:superpos}) we then obtain the momentum $p(t)$ that flows from the initial conditions $(x_0, p_0)$ under the influence of both the force and the parametric drive. Eqs (\ref{eq:xsolution}) and (\ref{eq:superpos}) represent the solution to eq (\ref{eq:classical}) for a given initial condition. From the classical solution it is now easy to construct a complete solution to the corresponding quantum problem as discussed below. \subsection{Heisenberg picture} \label{sec:heisenberg} We turn now to the quantum problem. The quantum Hamiltonian corresponding to our model is \begin{equation} \hat{H} = \frac{1}{2 m} \hat{p}^2 + \frac{1}{2} m \omega^2(t) \hat{x}^2 - F(t) \hat{x}. \label{eq:Hamiltonian} \end{equation} This leads to the Heisenberg equations of motion \begin{equation} \frac{d}{d t} \hat{x} = \frac{1}{m} \hat{p}, \hspace{3mm} \frac{d}{d t} \hat{p} = - m \omega^2 (t) \hat{x} + F(t). \label{eq:heisenberg} \end{equation} First for simplicity let us assume $F = 0$. Because of their linearity the solution to the quantum Heisenberg equations of motion can be constructed using the classical solution eq (\ref{eq:xsolution}). We obtain \begin{eqnarray} \hat{x} (t) & = & \frac{1}{2} \left( \hat{x}_0 + i \frac{\hat{p}_0}{m \omega_0} \right) e^{i \omega_0 t_0} \xi (t) \nonumber \\ & + & \frac{1}{2} \left( \hat{x}_0 - i \frac{\hat{p}_0}{m \omega_0} \right) e^{- i \omega_0 t_0} \xi^\ast (t) \nonumber \\ \label{eq:xheisenberg} \end{eqnarray} and an analogous relation that expresses $\hat{p}(t)$ in terms of $\hat{x}_0$ and $\hat{p}_0$ from the classical expression for $\overline{p}(t)$. It is easy to verify that these expressions for $\hat{x}(t)$ and $\hat{p}(t)$ satisfy the Heisenberg equations of motion and the appropriate initial conditions $\hat{x}(t_0) = \hat{x}_0$ and $\hat{p} (t_0) = \hat{p}_0$. For the quantum oscillator it is preferable to work with the annihilation operator $\hat{a} = \sqrt{m \omega/2} ( \hat{x} + i \hat{p}/m \omega)$ and the creation operator $\hat{a}^\dagger$. From the evolution of $\hat{x}$ and $\hat{p}$ we find \begin{equation} \hat{a} (t) = u \hat{a}_0 + v \hat{a}^\dagger_0 \hspace{3mm} {\rm and} \hspace{3mm} \hat{a}^\dagger (t) = u^\ast \hat{a}_0^{\dagger} + v^\ast \hat{a}_0. \label{eq:bogolyubov} \end{equation} Thus the evolution of the ladder operators is a Bogolyubov transformation with coefficients \begin{eqnarray} u = A \exp[ - i \omega_0 (t - t_0) ] &\hspace{2mm} {\rm and} \hspace{2mm} & v = B^* \exp [ - i \omega_0 (t + t_0) ] \nonumber \\ \label{eq:bcoefficients} \end{eqnarray} In eqs (\ref{eq:bogolyubov}) and (\ref{eq:bcoefficients}) we have assumed that $t_0$ is well before the parametric drive turns on and $t$ is well after. If we assume that the initial state of the system at $t_0$ is the vacuum $|0\rangle$ defined by $a_0 | 0 \rangle = 0$ then in the Heisenberg picture of quantum mechanics the number of quanta excited at time $t$ is given by \begin{equation} \langle 0 | \hat{a}^\dagger (t) \hat{a} (t) | 0 \rangle = | B |^2. \label{eq:vacuum} \end{equation} This excitation of the system is the basic phenomenon of vacuum radiation. 
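The Bogolyubov coefficients, and hence the number of quanta $|B|^2$, are readily obtained numerically. The following minimal sketch (all parameter values are illustrative) integrates eq (\ref{eq:classical}) with $F = 0$ for the ${\rm sech}^2$ frequency profile analyzed in section \ref{sec:examples}, extracts $A$ and $B$ by matching to the asymptotic form eq (\ref{eq:jost}), and checks eq (\ref{eq:currentconservation}) as well as the closed-form coefficient $B$ of eq (\ref{eq:absoluble}) derived below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters
omega0, Omega, T = 2.0, 1.0, 0.7
t0, t1 = -40.0, 40.0              # early / late times, |t| >> T

def omega2(t):                    # omega^2(t) of eq (sechsquare)
    return omega0**2 + Omega**2 / np.cosh(t / T)**2

# Split xi = xr + i*xm into real first-order ODEs: xi'' + omega^2(t) xi = 0
def rhs(t, y):
    xr, xm, vr, vm = y
    w2 = omega2(t)
    return [vr, vm, -w2 * xr, -w2 * xm]

# Jost boundary condition: xi -> exp(-i omega0 t) as t -> -infinity
y0 = [np.cos(omega0 * t0), -np.sin(omega0 * t0),
      -omega0 * np.sin(omega0 * t0), -omega0 * np.cos(omega0 * t0)]
sol = solve_ivp(rhs, (t0, t1), y0, rtol=1e-10, atol=1e-12)

xi  = sol.y[0, -1] + 1j * sol.y[1, -1]
dxi = sol.y[2, -1] + 1j * sol.y[3, -1]

# Match xi(t1) = A exp(-i omega0 t1) + B exp(+i omega0 t1) and its derivative
A = 0.5 * (xi + 1j * dxi / omega0) * np.exp( 1j * omega0 * t1)
B = 0.5 * (xi - 1j * dxi / omega0) * np.exp(-1j * omega0 * t1)

print("|A|^2 - |B|^2 =", abs(A)**2 - abs(B)**2)      # should be ~1
s = 0.5 * (np.sqrt(1.0 + 4.0 * Omega**2 * T**2) - 1.0)
print("|B| numeric, exact:", abs(B),
      np.sin(np.pi * s) / np.sinh(np.pi * omega0 * T))
print("quanta excited from the vacuum |B|^2 =", abs(B)**2)
\end{verbatim}
The same tabulated trajectory $\xi(t)$ can also be reused to evaluate the displacement integral introduced below for any force profile.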
Suppose, following \cite{wald}, that the initial state of the system already has $n$ quanta. Then \begin{equation} \langle n | \hat{a}^\dagger (t) \hat{a} (t) | n \rangle - \langle n | \hat{a}^\dagger_0 \hat{a}_0 | n \rangle = (2 n + 1) | B |^2. \label{eq:lasing} \end{equation} This enhancement over the $n=0$ result is a special case of the stimulated emission of vacuum radiation described by \cite{wald}. Another interesting initial state to consider is a coherent state $| \alpha \rangle$ defined by $\hat{a}_0 | \alpha \rangle = \alpha | \alpha \rangle$ with $\alpha$ the complex coherent amplitude. In this case we obtain \begin{eqnarray} \langle \alpha | \hat{a}^\dagger (t) \hat{a} (t) | \alpha \rangle - \langle \alpha | \hat{a}^\dagger_0 \hat{a}_0 | \alpha \rangle & = & (2 | \alpha |^2 + 1) | B |^2 \nonumber \\ & + & 2 | A | | B | |\alpha|^2 \cos ( 2 \omega_0 t_0 + 2 \phi ). \nonumber \\ \label{eq:coherent} \end{eqnarray} Here the phase $\phi$ is defined by the relation $\alpha^2 A B = |\alpha|^2 |A| |B| e^{i 2 \phi}$. Once again the first term on the right hand side corresponds to an enhancement compared to the excitation of the vacuum $| 0 \rangle$ which is simply a coherent state with $\alpha = 0$. The second term shows a remarkable oscillatory behavior in time due to quantum interference. Due to the Heisenberg dynamics the number of quanta is uncertain at the final time even if it is definite initially; the interference is between states with differing numbers of quanta present. We will see that this oscillatory behavior occurs under many conditions when the oscillator is forced even if the initial state is the vacuum. Now let us include the force $F$ in our analysis. Once again the solution to the quantum Heisenberg equations of motion can be constructed by adapting the classical solution eq (\ref{eq:superpos}) and its counterpart for $p(t)$. Again it is preferable to work with the creation and annihilation operators $\hat{a}$ and $\hat{a}^\dagger$ rather than $\hat{x}$ and $\hat{p}$. The final result is \begin{equation} \hat{a} (t) = u \hat{a}_0 + v \hat{a}_0^\dagger + \alpha \hspace{3mm} {\rm and} \hspace{3mm} \hat{a}^\dagger (t) = u^\ast \hat{a}_0^\dagger + v^\ast \hat{a}_0 + \alpha^\ast. \label{eq:forcedsol} \end{equation} Thus the evolution of the ladder operators in this case is a Bogolyubov transformation together with a constant displacement $\alpha$. The coefficients $u$ and $v$ are still given by eq (\ref{eq:bcoefficients}) assuming that $t_0$ is before the force or parametric drive are turned on and $t$ is after. The displacement $\alpha$ is given by \begin{equation} \alpha = \frac{ i e^{-i \omega_0 t} }{\sqrt{2 m \omega_0}} \int_{t_0}^t d \tau \; F(\tau)[A \;\xi^\ast (\tau) - B^\ast \xi (\tau) ]. \label{eq:displacement} \end{equation} It is useful to consider various special cases of this result. First suppose that there is no parametric driving. In that case $A = 1$ and $B = 0$ and $\xi(\tau) = \exp ( - i \omega_0 \tau)$ for all time. Hence in this case \begin{equation} \alpha = \frac{i e^{-i \omega_0 t}}{\sqrt{2 m \omega_0}} \tilde{f} (\omega_0) e^{i \omega_0 t_f} \label{eq:freeforced} \end{equation} and the number of quanta at time $t$ is given by $\langle 0 | \hat{a}^\dagger (t) \hat{a} (t) | 0 \rangle = | \alpha |^2$. Here we have taken the force to be $f(t - t_f)$ in the time domain. The force is assumed to be localized about the time $t_f$. 
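As an explicit check of eq (\ref{eq:freeforced}) (and to fix the Fourier convention, which we take to be $\tilde{f}(\omega) = \int ds \, f(s) \, e^{i \omega s}$, the convention implicit in eq (\ref{eq:freeforced})), set $A = 1$, $B = 0$ and $\xi (\tau) = e^{-i \omega_0 \tau}$ in eq (\ref{eq:displacement}): \begin{equation} \alpha = \frac{ i e^{-i \omega_0 t} }{ \sqrt{2 m \omega_0} } \int_{t_0}^{t} d \tau \; f(\tau - t_f) \, e^{i \omega_0 \tau} = \frac{ i e^{-i \omega_0 t} }{ \sqrt{2 m \omega_0} } \tilde{f} (\omega_0) \, e^{i \omega_0 t_f}, \end{equation} where the second equality holds provided $t_0$ and $t$ lie well outside the support of the force.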
For explicit calculations below we will sometimes take the force to be a Gaussian centered at $t_f$ with a sinusoidal modulation. Thus we arrive at the well-known result \cite{loudon},\cite{landaucm} that the excitation of the field oscillator is determined by $ \tilde{f} (\omega_0) $, the Fourier amplitude of the force at the natural frequency of the mode, $\omega_0$. In the Schr\"{o}dinger picture the state of the oscillator evolves from the vacuum $| 0 \rangle$ to the coherent state $|\alpha\rangle$. Next suppose that the force is applied well before the parametric drive. In that case we can use the $t \rightarrow - \infty$ behavior of $\xi (t)$ in order to evaluate $\alpha$. Making use of eqs (\ref{eq:jost}) and (\ref{eq:displacement}), and taking the force to be real so that $\tilde{f}(-\omega_0) = \tilde{f}^\ast (\omega_0)$, we obtain \begin{equation} \alpha \approx \frac{i e^{-i \omega_0 t}}{\sqrt{2 m\omega_0}} \left[ A \tilde{f}(\omega_0) e^{i \omega_0 t_f} - B^\ast \tilde{f}^\ast (\omega_0) e^{- i \omega_0 t_f} \right]. \label{eq:genearly} \end{equation} In this case too the displacement is determined entirely by the Fourier amplitude of the force at the natural frequency $\omega_0$. Note that if we compute $| \alpha |^2$ it will have an interference term that oscillates with frequency $2 \omega_0$ as a function of $t_f$. This oscillation is also manifest if we compute the number of quanta at late times \begin{eqnarray} \langle 0 | \hat{a}^\dagger (t) \hat{a} (t) | 0 \rangle & = & |B|^2 + \frac{ | \tilde{f}(\omega_0) |^2}{2 m \omega_0} \left[ (2 |B|^2 +1) \right] \nonumber \\ & + & \frac{ | \tilde{f} ( \omega_0 ) |^2 }{ 2 m \omega_0 } \left[ 2 |A| |B| \cos ( 2 \omega_0 t_f + 2 \phi) \right]. \nonumber \\ \label{eq:earlyosc} \end{eqnarray} Here we have made use of eqs (\ref{eq:forcedsol}) and (\ref{eq:genearly}) and the phase $\phi$ is defined by $ A B [i \tilde{f}(\omega_0)]^2 = |A| |B| | \tilde{f} (\omega_0) |^2 e^{i 2 \phi}$. The oscillation has a simple interpretation because of the temporal separation in the force and the parametric drive. The force first causes the oscillator to go into a coherent state $| \alpha \rangle$ at time $t_f$ where $\alpha$ is given by eq (\ref{eq:freeforced}) with $t \rightarrow t_f$; the subsequent parametric drive then causes quantum interference between states with different numbers of quanta as in eq (\ref{eq:coherent}). Indeed eq (\ref{eq:earlyosc}) is identical to eq (\ref{eq:coherent}) if we make the replacement $t_0 \rightarrow t_f$ and $\alpha \rightarrow i \tilde{f}(\omega_0)/\sqrt{2 m \omega_0}$. Another circumstance in which we can compute $\alpha$ is if the force is applied well after the parametric drive. In this case we can use the late time asymptotic behavior of $\xi$. Making use of eqs (\ref{eq:jost}) and (\ref{eq:displacement}) together with the relation $|A|^2 - |B|^2 = 1$ we obtain \begin{equation} \alpha = \frac{ i e^{-i \omega_0 t} }{ \sqrt{2 m \omega_0} } e^{i \omega_0 t_f} \tilde{f}( \omega_0 ). \label{eq:genlate} \end{equation} Since in this case too the action of the force and the parametric drive occur separately, it is not surprising that the displacement is the same as would be obtained if only the force were applied. In this case too the displacement is determined entirely by the Fourier amplitude of the force at the natural oscillator frequency $\omega_0$. The analysis simplifies in the cases where the force and the parametric drive are temporally separated or only one drive is present. However the main focus of this paper is on the new effects that arise when the force and parametric drive are both simultaneously present. 
As we will see below, in this circumstance the displacement responds, among other new features, to a broad range of frequency components of the force. In order to explore these features in the next section we analyze a number of soluble models. It is worth noting that if we were interested only in parametric driving it would be sufficient to determine the asymptotics of $\xi(t)$ or more precisely the coefficients $A$ and $B$. However in order to calculate the displacement $\alpha$ due to the forcing it is necessary to determine the entire trajectory $\xi(t)$ exactly or in some approximation. \subsection{Schr\"{o}dinger Picture} \label{sec:schrodinger} Although in principle everything can be worked out in the Heisenberg picture it is instructive to examine the same dynamics in the Schr\"{o}dinger picture. In the Heisenberg picture the state remains fixed and operators evolve according to $a(t) = U^\dagger a_0 U$ where $a_0$ is the initial operator and $U$ is the evolution operator. In the Schr\"{o}dinger picture operators like $a$ remain fixed in time while the state evolves according to $| \Psi(t) \rangle = U | \Psi(0) \rangle$. In order to analyze the dynamics of the states it proves useful to first evolve the operators back in time according to the conjugate dynamics $a_c (t) = U a_0 U^\dagger$. It is not difficult to verify that if $a(t)$ in the Heisenberg picture is given by eq (\ref{eq:forcedsol}) then the time-reversed dynamics is given by \begin{equation} a_c(t) = u^\ast a_0 + v^\ast a_0^\dagger + \alpha^\ast. \label{eq:schrodingeroperators} \end{equation} Now it is easy to verify that if the initial state at time $t_0$ is the vacuum $|0 \rangle$, which is defined by the condition $a_0 | 0 \rangle = 0$, then the state at time $t$, which is formally equal to $U | 0 \rangle$, satisfies the condition \begin{equation} a_c (t) | \Psi (t) \rangle = 0. \label{eq:squeeze} \end{equation} Eqs (\ref{eq:schrodingeroperators}) and (\ref{eq:squeeze}) fully determine the state $ | \Psi (t) \rangle$. This state is a squeezed coherent state in the language of quantum optics \cite{loudonreview}. We see that for pure parametric driving ($\alpha = 0$) the final state obtained is a squeezed vacuum. The effect of forcing is to produce a non-zero displacement $\alpha$ and lead to a final state that is a squeezed coherent state. \subsection{Field theory formulation} \label{sec:fields} We now generalize the preceding results to a field theory with a large (possibly infinite) number of coupled modes. For notational simplicity we assume that there are $n$ coupled oscillators. Without loss of generality we may take these modes to obey the equation of motion \begin{equation} \frac{d^2 x_i}{d t^2} + \omega_i^2 x_i + \sum_{j=1}^{n} \Omega_{ij}^2 (t) x_j = \frac{1}{m} F_i (t). \label{eq:multimode} \end{equation} We assume that the modes are decoupled asymptotically and that the source also turns off asymptotically. In other words we assume $\Omega_{ij}^2 (t) \rightarrow 0$ and $F_i(t) \rightarrow 0$ for $t \rightarrow \pm \infty$. First let us analyze the case in which the field oscillators are only driven parametrically and $F = 0$. Evidently eq (\ref{eq:multimode}) then has $n$ independent solutions $\xi^{\mu}_i (t)$ with the label $\mu = 1, \ldots, n$. 
These solutions satisfy Jost boundary conditions \begin{eqnarray} \xi^\mu_i (t) & = & \sqrt{ \frac{\omega_0}{\omega_i} } \delta_{i \mu} e^{- i \omega_i t} \hspace{3mm} {\rm for} \hspace{3mm} t \rightarrow - \infty \nonumber \\ & = & \sqrt{ \frac{\omega_0}{\omega_i} } A_{i \mu} e^{- i \omega_i t} + \sqrt{ \frac{\omega_0}{\omega_i} } B_{i \mu} e^{+ i \omega_i t} \hspace{3mm} {\rm for} \hspace{3mm} t \rightarrow \infty. \nonumber \\ \label{eq:fieldjost} \end{eqnarray} Here $\omega_0$ might represent the lowest of the frequencies $\omega_i$ (the ``mass gap'') or it might be an arbitrarily chosen scale if the field theory we wish to analyze is gapless in the $n \rightarrow \infty$ limit. In addition there is a second set of $n$ independent solutions obtained by complex conjugation. The matrices $A$ and $B$ may be shown to satisfy \begin{equation} \sum_{i = 1}^n \left( A^\ast_{i \mu} A_{i \nu} - B^\ast_{i \mu} B_{i \nu} \right) = \delta_{\mu \nu}. \label{eq:fieldunitarity} \end{equation} These solutions can be superposed to match any specified initial conditions exactly as in the single mode case. In order to incorporate the effect of the force we need the Green's function $G_{ij} (t, \tau)$ that obeys \begin{equation} \frac{ d^2 }{d t^2} G_{ij} (t, \tau) + \omega_i^2 G_{ij} (t, \tau) + \sum_{k=1}^n \Omega^2_{ik} G_{kj} (t, \tau) = \delta_{ij} \delta( t - \tau ) \label{eq:fieldgreen} \end{equation} together with the boundary condition $G_{ij} (t, \tau) = 0$ for $t < \tau$. As in the single mode case the desired Green's function can be constructed by use of the free solutions $\xi^\mu$. Thus \begin{equation} G_{ij} (t, \tau) = \frac{ \theta( t - \tau ) }{ W} \sum_{\mu = 1}^n \left[ \xi^\mu_j (\tau) \xi^{\mu \ast}_i (t) - \xi^{\mu \ast}_{j} (\tau) \xi^{\mu}_{i} (t) \right]. \label{eq:greenfield} \end{equation} Here the normalization factor $W = 2 i \omega_0$. Now making use of superposition we can write down a solution to eq (\ref{eq:multimode}) that flows from a specified initial condition by a straightforward generalization of the single mode analysis. From this solution we can construct the transformation that connects the Heisenberg field operators at late times to the initial field operators. The result is \begin{equation} a_i (t) = \sum_{\mu =1}^n \left[ U_{i \mu} a_\mu (t_0) + V_{i \mu} a_\mu^\dagger (t_0) \right] + \alpha_i. \label{eq:waldplus} \end{equation} Thus the evolution of the ladder operators is a Bogolyubov transformation together with a displacement that is due to forcing. The Bogolyubov coefficients are given by \begin{equation} U_{i \mu} = A_{i \mu} e^{- i \omega_i t} e^{i \omega_\mu t_0} \hspace{3mm} {\rm and} \hspace{3mm} V_{i \mu} = B^\ast_{i \mu} e^{- i \omega_i t} e^{- i \omega_\mu t_0} \label{eq:uvfield} \end{equation} while the displacement is \begin{equation} \alpha_i = \frac{ i e^{-i \omega_i t} }{ \sqrt{ 2 m \omega_0 } } \sum_{\mu, j} \sqrt{ \frac{ \omega_i }{\omega_\mu} } \int_{t_0}^t d \tau \; \left[ A_{i \mu} \xi^{\mu \ast}_j (\tau) - B^\ast_{i \mu} \xi^\mu_j (\tau) \right] F_j (\tau). \end{equation} Eq (\ref{eq:waldplus}) is the main result of this section. In the absence of forcing $\alpha_i = 0$ and eq (\ref{eq:waldplus}) reduces to the familiar result that parametric driving corresponds to a Bogolyubov transformation (see for example eqs (2.7) and (2.8) of ref \cite{wald}). Eq (\ref{eq:waldplus}) generalizes these results to the case when both forcing and parametric driving are applied. 
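The relation eq (\ref{eq:fieldunitarity}) is easy to verify numerically. The following minimal sketch (two modes coupled by an illustrative Gaussian off-diagonal coupling; all parameter values are arbitrary) integrates eq (\ref{eq:multimode}) with $F_i = 0$ for the two Jost solutions of eq (\ref{eq:fieldjost}), extracts the matrices $A$ and $B$ from the late-time asymptotics, and checks that $A^\dagger A - B^\dagger B$ is the identity.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Two coupled modes; Gaussian, symmetric, off-diagonal coupling (illustrative)
w  = np.array([1.0, 1.7])          # asymptotic frequencies omega_i
w0 = w.min()                       # reference scale omega_0
g, Tc = 0.6, 1.5                   # coupling strength and duration
t0, t1 = -30.0, 30.0

def Omega2(t):                     # coupling matrix Omega^2_{ij}(t)
    c = g * np.exp(-(t / Tc)**2)
    return np.array([[0.0, c], [c, 0.0]])

def rhs(t, y):                     # real/imaginary split of the two modes
    x = y[0:2] + 1j * y[2:4]
    v = y[4:6] + 1j * y[6:8]
    a = -(w**2 * x + Omega2(t) @ x)
    return np.concatenate([v.real, v.imag, a.real, a.imag])

A = np.zeros((2, 2), dtype=complex)
B = np.zeros((2, 2), dtype=complex)
for mu in range(2):
    # Jost condition: xi^mu_i -> sqrt(w0/w_i) delta_{i mu} exp(-i w_i t)
    x0 = np.sqrt(w0 / w) * (np.arange(2) == mu) * np.exp(-1j * w * t0)
    v0 = -1j * w * x0
    y0 = np.concatenate([x0.real, x0.imag, v0.real, v0.imag])
    sol = solve_ivp(rhs, (t0, t1), y0, rtol=1e-10, atol=1e-12)
    x = sol.y[0:2, -1] + 1j * sol.y[2:4, -1]
    v = sol.y[4:6, -1] + 1j * sol.y[6:8, -1]
    # Match to sqrt(w0/w_i)(A e^{-i w_i t1} + B e^{+i w_i t1}) component-wise
    A[:, mu] = np.sqrt(w / w0) * 0.5 * (x + 1j * v / w) * np.exp( 1j * w * t1)
    B[:, mu] = np.sqrt(w / w0) * 0.5 * (x - 1j * v / w) * np.exp(-1j * w * t1)

print(A.conj().T @ A - B.conj().T @ B)   # should be close to the 2x2 identity
\end{verbatim}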
\section{Forced oscillator: soluble models} \label{sec:examples} In order to gain further insight into the states produced by the combination of forcing and parametric driving we now investigate four circumstances wherein the classical dynamics can be solved exactly or in some approximation. \subsection{The Sech potential} \label{sec:exact} We choose the frequency to have the time dependence \begin{equation} \omega^2 (t) = \omega_0^2 + \frac{ \Omega^2 }{ \cosh^2 (t/T) }. \label{eq:sechsquare} \end{equation} For this choice the classical equation of motion is exactly soluble in terms of hypergeometric functions. A closely related model was studied in \cite{bernard} corresponding to the choice $\omega^2(t) = \omega_0^2 + \Omega^2 \tanh^2 (t/T)$ in connection with vacuum radiation in a two-dimensional expanding Universe. For our choice in eq (\ref{eq:sechsquare}) the solution to the analogous Schr\"{o}dinger equation is presented in \cite{landauqm}. Transcribing that result we obtain \begin{eqnarray} \xi (t) & = & \left( \frac{1 - \eta^2}{4} \right)^{- i \omega_0 T/2} \nonumber \\ & \times & {}_2F_1 \left[ - i \omega_0 T - s, - i \omega_0 T +s +1, - i \omega_0 T + 1; \frac{1}{2} (1 + \eta) \right]. \nonumber \\ \label{eq:hypergeometric} \end{eqnarray} Here ${}_2F_1$ is the hypergeometric function, $\eta = \tanh (t /T)$, and the parameter \begin{equation} s = \frac{1}{2} \left( \sqrt{1 + 4 \Omega^2 T^2} - 1 \right). \label{eq:s} \end{equation} This solution has the asymptotic behavior given in eq (\ref{eq:jost}) with the coefficients \begin{eqnarray} A & = & \frac{ \Gamma( 1 - i \omega_0 T ) \Gamma( - i \omega_0 T )}{ \Gamma( - i \omega_0 T - s ) \Gamma( - i \omega_0 T + s + 1 ) }, \nonumber \\ B & = & \frac{i \sin \pi s}{ \sinh( \pi \omega_0 T) }. \nonumber \\ \label{eq:absoluble} \end{eqnarray} It can be verified that $|A|^2 - |B|^2 = 1$ by use of the identity $\Gamma(z) \Gamma(1-z) = \pi/\sin(\pi z)$. Eq. (\ref{eq:vacuum}) indicates that if the system starts in the vacuum, in the absence of forcing the average number of quanta generated by the vacuum process is $|B|^2$. From eq (\ref{eq:absoluble}) we see that in the adiabatic limit $\omega_0 T \gg 1$ the amount of vacuum radiation is exponentially suppressed. Note the oscillatory dependence of $|B|^2$ upon $s$. This is a well-known feature of the ${\rm sech}^2$ potential, which is known to be reflectionless for critical values of its amplitude. In order to calculate the effect of forcing we need to choose a particular form of the forcing function $F(t)$ and use eqs (\ref{eq:displacement}), (\ref{eq:hypergeometric}) and (\ref{eq:absoluble}). The integrals therein have to be evaluated numerically. We will return to a discussion of these results in connection with the approximate solutions below. \subsection{The Born approximation} If the frequency $\omega^2 (t)$ does not deviate significantly from the asymptotic value $\omega_0^2$ we can solve eq (\ref{eq:classical}) via perturbation theory. To this end it is useful to recast the problem as an integral equation analogous to the Lippmann-Schwinger equation. We wish to solve eq (\ref{eq:classical}) with $F = 0$ subject to the Jost boundary conditions eq (\ref{eq:jost}). 
This is equivalent to the integral equation \begin{equation} \xi (t) = e^{- i \omega_0 t} - \int_{-\infty}^\infty d\tau\; g^{(0)} (t, \tau) [ \omega^2 (\tau) - \omega_0^2 ] \xi (\tau) \label{eq:lippmann} \end{equation} where $g^{(0)}$ is the unperturbed Green's function that satisfies \begin{equation} \frac{d^2}{d t^2} g^{(0)}( t, \tau) + \omega_0^2 g^{(0)} (t, \tau) = \delta (t - \tau). \label{eq:freegreen} \end{equation} A key difference from the conventional Lippmann-Schwinger equation is that we impose the boundary condition $g^{(0)} (t, \tau) = 0 $ for $t < \tau$. Using the general result eq (\ref{eq:green}) we obtain $g^{(0)} (t, \tau) = \theta( t - \tau) (1/\omega_0) \sin [ \omega_0 (t - \tau) ]$. Thus far we have rewritten the problem exactly. The first-order Born approximation for $\xi(t)$ is to insert the zeroth-order approximation $\xi(\tau) \approx e^{-i \omega_0 \tau}$ on the right hand side of eq (\ref{eq:lippmann}). We obtain \begin{equation} \xi (t) = e^{-i \omega_0 t} - \int_{-\infty}^{t} d \tau \; \frac{1}{\omega_0} \sin [ \omega_0 (t - \tau) ] [ \omega^2 (\tau) - \omega_0^2 ] e^{- i \omega_0 \tau}. \label{eq:bornapproximation} \end{equation} From eq (\ref{eq:bornapproximation}) we infer that \begin{eqnarray} A & = & 1 - \frac{i}{2 \omega_0} \int_{-\infty}^\infty d \tau \; [ \omega^2 (\tau) - \omega_0^2 ], \nonumber \\ B & = & \frac{i}{2 \omega_0} \int_{-\infty}^\infty d \tau \; [ \omega^2 (\tau) - \omega_0^2 ] e^{- i 2 \omega_0\tau }. \nonumber \\ \label{eq:abborn} \end{eqnarray} Eqs (\ref{eq:bornapproximation}) and (\ref{eq:abborn}) constitute the first-order Born approximation for an arbitrary $\omega^2 (t)$. In order to get more explicit expressions we consider $\omega^2(t)$ of the form in eq (\ref{eq:sechsquare}). Evaluating eq (\ref{eq:abborn}) for this choice yields \begin{equation} A = 1 - \frac{ i \Omega^2 T}{\omega_0} \hspace{3mm} {\rm and} \hspace{3mm} B = \frac{ i \pi \Omega^2 T^2}{ \sinh ( \pi \omega_0 T ) }. \label{eq:abbornsech} \end{equation} To leading order in $\Omega T$ these results match the exact result eq (\ref{eq:absoluble}). Finally making use of eqs (\ref{eq:displacement}) and (\ref{eq:bornapproximation}) we obtain for the displacement \begin{equation} \alpha = \frac{i e^{- i \omega_0 t} }{ \sqrt{2 m \omega_0} } \left[ A \tilde{F} (\omega_0) - B^\ast \tilde{F}^\ast (\omega_0) - \int_{-\infty}^\infty \frac{d \nu}{2 \pi} \tilde{F}^\ast (\nu) R(\nu) \right]. \label{eq:broadband} \end{equation} Here $A$ and $B$ are given by eq (\ref{eq:abbornsech}) and the response function \begin{equation} R(\nu) = \frac{\Omega^2 T^2}{ \omega_0 - \nu } \frac{1}{ \sinh \left[ \pi (\omega_0 + \nu ) T / 2 \right] }. \label{eq:rnu} \end{equation} Eq (\ref{eq:broadband}) reveals a new effect that arises due to the interplay of the forcing and the parametric drive. The first two contributions to $\alpha$ are determined entirely by the resonant Fourier component $\tilde{F} (\omega_0)$ of the force. However the third contribution in eq (\ref{eq:broadband}) depends on the full spectrum of the force weighted by the response function $R(\nu)$. $R(\nu)$ is peaked at $\nu \approx \pm \omega_0$ with a width of order $1/T$. Due to this term a force that is off-resonance can still produce a displacement when accompanied by a parametric drive. \subsection{The Abrupt limit} Consider the circumstance that the time scale $T$ over which the frequency changes is very short compared to $\omega_0^{-1}$. In this case the change in the frequency can be approximated by a delta function. 
For the particular case of eq (\ref{eq:sechsquare}) in the absence of forces we approximate eq (\ref{eq:classical}) by \begin{equation} \frac{d^2 x}{d t^2} + \omega_0^2 x + 2 \Omega^2 T \delta(t) x = 0. \label{eq:abruptclassical} \end{equation} The coefficient of the delta function is chosen to match the weight $\int_{-\infty}^\infty d t [ \omega^2(t) - \omega_0^2 ] = 2 \Omega^2 T$. Evidently in this case the solution with Jost boundary conditions is given by \begin{eqnarray} \xi(t) & = & e^{-i \omega_0 t} \hspace{3mm} {\rm for} \hspace{3mm} t < 0; \nonumber \\ & = & e^{-i \omega_0 t} - \frac{2 \Omega^2 T}{\omega_0} \sin ( \omega_0 t ) \hspace{3mm} {\rm for} \hspace{3mm} t > 0. \label{eq:abruptxi} \end{eqnarray} This corresponds to \begin{equation} A = 1 - i \frac{\Omega^2 T}{\omega_0} \hspace{3mm} {\rm and} \hspace{3mm} B = i \frac{\Omega^2 T}{\omega_0}. \label{eq:ababrupt} \end{equation} The solution (\ref{eq:abruptxi}) is obtained as usual by taking an appropriate superposition of plane waves on either side of the delta function and imposing matching conditions across the origin. It is easy to verify that eq (\ref{eq:ababrupt}) is consistent with the exact solution of eq (\ref{eq:absoluble}) in the limit $T \rightarrow 0$ whilst $\Omega^2 T$ is held constant. We assume that the force $F(t)$ is localized about a time $t_f$ and write it in the form $f(t - t_f)$. Using eqs (\ref{eq:displacement}), (\ref{eq:abruptxi}) and (\ref{eq:ababrupt}) we obtain \begin{equation} \alpha = \frac{ i e^{-i \omega_0 t} }{\sqrt{2 m \omega_0}} \left[ \tilde{f} (\omega_0) e^{i \omega_0 t_f} - \frac{2 \Omega^2 T}{\omega_0} \left\{ {\cal I} + {\rm Im} [ \tilde{f}^\ast (\omega_0) e^{- i \omega_0 t_f} ] \right\} \right]. \label{eq:abruptalpha} \end{equation} Here \begin{equation} {\cal I} = \int_0^\infty dt\; f(t - t_f) \sin (\omega_0 t). \label{eq:cali} \end{equation} Together eqs (\ref{eq:ababrupt}) and (\ref{eq:abruptalpha}) allow us to answer all questions of interest about the excitation of the oscillator by the combined parametric drive and forcing. Our principal result is that due to the combined effects of forcing and the parametric drive, the system goes into not just a squeezed vacuum but rather into a squeezed coherent state with displacement $\alpha$. Hence $|\alpha|^2$ is the key quantity of interest here. It is easy to verify that in the limits $t_f \rightarrow \pm \infty$ when the forcing and the parametric drive are temporally separate we recover the general results of section \ref{sec:heisenberg}. Now however we are in a position to examine what happens when the forcing and the parametric drive are applied simultaneously. To this end we take the force to have the form of a Gaussian of width $T_2$ that is modulated at a frequency $\omega_f$. Thus \begin{equation} F(t) = F_0 \cos [ \omega_f (t - t_f) ] \exp \left[ - \frac{ (t - t_f)^2 }{T_2^2} \right]. \label{eq:force} \end{equation} For this form $\tilde{F} (\omega_0)$ and the integral ${\cal I}$ can be evaluated in closed form; for the sake of brevity, these expressions are omitted. In fig \ref{fig:plotz} (lower panel) we plot the resulting displacement as a function of $t_f$ for $T_2 = 1$ and $\Omega^2 T = 10, \omega_0 = 10 \pi, \omega_f = 10 \pi$. For these parameters we see the expected asymptotic behavior, namely, oscillations for $t_f \rightarrow - \infty$ and the approach to a constant displacement for $t_f \rightarrow \infty$. For intermediate $t_f$ we see that the oscillations turn off smoothly and monotonically. 
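The behavior of $|\alpha|$ in the abrupt limit is easy to reproduce numerically. The following minimal sketch (parameter values loosely follow those quoted above for the lower panel, but are otherwise illustrative) evaluates $\tilde{f}(\omega_0)$ and the integral ${\cal I}$ of eq (\ref{eq:cali}) by simple quadrature for the force eq (\ref{eq:force}), rather than using the closed forms omitted above, and assembles $|\alpha|$ from eq (\ref{eq:abruptalpha}).
\begin{verbatim}
import numpy as np

# Illustrative parameters, loosely following the lower panel of fig plotz
m, omega0, omegaf = 1.0, 10 * np.pi, 10 * np.pi
T2, Omega2T, F0 = 1.0, 10.0, 1.0

def f(s):                      # force profile f(t - t_f) of eq (force)
    return F0 * np.cos(omegaf * s) * np.exp(-s**2 / T2**2)

# \tilde f(omega0) = int ds f(s) exp(i omega0 s), by simple quadrature
s = np.linspace(-8 * T2, 8 * T2, 200001)
f_tilde = np.sum(f(s) * np.exp(1j * omega0 * s)) * (s[1] - s[0])

def alpha_abs(tf):
    # I = int_0^inf dt f(t - t_f) sin(omega0 t), eq (cali)
    t = np.linspace(0.0, max(tf + 8 * T2, 1.0), 200001)
    I = np.sum(f(t - tf) * np.sin(omega0 * t)) * (t[1] - t[0])
    bracket = (f_tilde * np.exp(1j * omega0 * tf)
               - (2 * Omega2T / omega0)
                 * (I + np.imag(np.conj(f_tilde) * np.exp(-1j * omega0 * tf))))
    return abs(bracket) / np.sqrt(2 * m * omega0)   # |alpha|, eq (abruptalpha)

for tf in np.linspace(-3.0, 3.0, 13):
    print(round(tf, 2), alpha_abs(tf))
\end{verbatim}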
This monotonic behavior is also found in the opposite adiabatic limit in section \ref{sec:adiabatic} below. However a variety of different behaviors are seen under other circumstances. For example in fig \ref{fig:plotz} (upper panel) we use the exact solution of section \ref{sec:exact} to plot the magnitude of the displacement as a function of $t_f$ for $T_2 = 1$ and $T = 0.5, \Omega = 1, \omega_0 = 6 \pi$ and $\omega_f = 6 \pi$. For these values of the parameters the asymptotic oscillations for $t_f \rightarrow - \infty$ have negligible amplitude since $B \approx 0$. However large oscillations are observed to turn on for intermediate values of $t_f$ when the force and parametric drive are applied simultaneously. \begin{figure} \caption{Plot of the magnitude of the displacement $\alpha$ as a function of $t_f$, the time lag between the application of the force and the parametric drive. The frequency $\omega(t)$ is assumed to be given by eq (\ref{eq:sechsquare}).} \label{fig:plotz} \end{figure} \subsection{The Adiabatic limit} \label{sec:adiabatic} The adiabatic limit is of particular interest for experimental applications. Our objective is now to solve eq (\ref{eq:classical}) in the limit that $\omega^2(t)$ varies slowly. Formally we wish to take the limit $\omega_0 T \rightarrow \infty$. This is a subtle limit. If we interpret eq (\ref{eq:classical}) with $F = 0$ as a Schr\"{o}dinger equation, the problem we wish to solve is that of reflection above a slowly varying barrier. The conventional WKB approximation fails to capture the effect and yields $B \approx 0$. However by continuation in the complex plane one can obtain the exponentially small value of the scattering coefficient \cite{landauqm}. For analysis of the forcing one needs not only the scattering coefficients but rather the entire solution $\xi(t)$. Considering its vintage, this problem was solved relatively recently \cite{berry} by a sophisticated use of divergent series. Applying these techniques here we wish to solve \begin{equation} \frac{d^2 x}{d t^2} + \left[ \omega_0^2 - \frac{ \Omega^2}{\cosh^2 (t/T)} \right] x = 0. \label{eq:adiabatic} \end{equation} Note that we have changed the sign of the term with coefficient $\Omega^2$. The adiabatic analysis for the case with this sign is a bit simpler and we focus on this case to avoid unneeded complications. We must assume that $\Omega^2 < \omega_0^2$ to ensure that the oscillator remains stable at all times. The exponentially small corrections to the conventional WKB approximation are controlled by the turning points at which $\omega^2 (t) = 0$. There are no turning points for $t$ real but if we allow complex $t$ there are an infinity of turning points along the imaginary axis. Let $\varphi_1$ be the solution to $\cos \varphi = \Omega/\omega_0$ that lies in the range $0 \leq \varphi_1 \leq \pi/2$. Then the two turning points closest to the real axis are $t = \pm i \varphi_1 T$. There is an additional pair of turning points at $t = \pm i \varphi_2 T$ where $\varphi_2$ is the solution to $\cos \varphi = - \Omega/\omega_0$ that lies in the range $\pi/2 \leq \varphi_2 \leq \pi$. Finally these four turning points are repeated periodically along the imaginary axis at $t = \pm i \varphi_{1,2} T + 2 \pi i n T$ where $n$ is an integer. It is useful to rewrite eq (\ref{eq:adiabatic}) along the imaginary axis by making the substitution $t \rightarrow i s$. We obtain \begin{equation} \frac{d^2 x}{d s^2} + \left[ \frac{ \Omega^2}{\cos^2 (s/T) } - \omega_0^2 \right] x = 0. 
\label{eq:imaginary} \end{equation} The conventional WKB solutions to this equation are real exponentials, revealing that in the terminology of \cite{berry} the segment of the imaginary axis that connects the turning points $\pm i \varphi_1 T$ is a Stokes line that intersects the real time axis. It now follows from ref \cite{berry} that the adiabatic solution to eq (\ref{eq:adiabatic}) with our preferred Jost boundary condition is \begin{eqnarray} \xi (t) & \approx & \sqrt{ \frac{\omega_0}{\omega(t)} } \left \{ \exp \left[ - i \int_0^t d \tau \omega(\tau) \right] \right. \nonumber \\ & + & \exp \left. \left[ - \omega_0 T g \left( \frac{\Omega}{\omega_0} \right) \right] U(t) \exp \left[ i \int_0^t d \tau \omega (\tau) \right] \right \}. \nonumber \\ \label{eq:berry} \end{eqnarray} where $g$ is defined below. The upper line on the right hand side of eq (\ref{eq:berry}) corresponds to the conventional WKB solution. The second line corresponds to the exponentially small backscattering. The coefficient $B$ is given by \begin{equation} B \approx \exp \left[ - \omega_0 T g \left( \frac{\Omega}{\omega_0} \right) \right] , \label{eq:reflection} \end{equation} while $U(t)$ is a smooth step function that goes from zero to one across the Stokes line with the universal form \begin{equation} U(t) = {\rm Erf} \left( \frac{t}{T_S} \right) \hspace{2mm} {\rm with} \hspace{2mm} T_S = \frac{ \omega_0 T g ( \Omega/ \omega_0 ) }{ 2 \sqrt{ \omega_0^2 - \Omega^2} }. \label{eq:universal} \end{equation} The function $g$ is determined by the integral of $\omega(t)$ along the Stokes line from the origin to the nearest turning point \begin{equation} \omega_0 T g \left( \frac{ \Omega }{ \omega_0 } \right) = \int_0^{\varphi_1 T} d s \; \sqrt{ \omega_0^2 - \frac{ \Omega^2}{ \cos^2 (s/T) } } = \frac{\pi}{2} ( \omega_0 - \Omega ) T. \label{eq:badiabatic} \end{equation} For illustration consider the displacement produced by a force $F(t) = f(t - t_f)$ that acts at time $t_f$ and has duration $T_2$. Assume for simplicity that $| t_f | \ll T$ so that the force acts at the same time as the parametric drive and that the force is a brief impulse so that $T_2 \ll T_S, T$. In this regime the integrals in eq (\ref{eq:displacement}) can be evaluated asymptotically and yield the result \begin{eqnarray} \alpha & = & \frac{ i e^{- i \omega_0 t} }{ \sqrt{2 m \omega_{{\rm eff}} } } e^{ i \omega_{{\rm eff}} t_f } \tilde{f} (\omega_{{\rm eff}} ) \nonumber \\ & - & \frac{ i e^{- i \omega_0 t} }{ \sqrt{2 m \omega_{{\rm eff}} } } e^{- i \omega_{{\rm eff}} t_f} \tilde{f}^\ast (\omega_{{\rm eff}}) \exp \left[ - \omega_0 T g \left( \frac{\Omega}{\omega_0} \right) \right] {\rm Erfc} \left(\frac{t_f}{T_S} \right). \nonumber \\ \label{eq:adiabaticalpha} \end{eqnarray} Here $\omega_{{\rm eff}} = \sqrt{ \omega_0^2 - \Omega^2}$ and ${\rm Erfc}$ is the complementary error function. Eq (\ref{eq:adiabaticalpha}) reveals that if $|\alpha|$ is plotted as a function of $t_f$ there will be oscillations due to the interference between the two terms. Note that the second term is modulated by a complementary error function and vanishes as $t_f \rightarrow \infty$. The oscillations likewise vanish as $t_f \rightarrow \infty$ with a complementary error function profile. \section{Conclusion} \label{sec:conclusion} In this paper we study the radiation produced when a field is both excited parametrically and driven by a classical source. 
Normally in an experiment to measure the dynamical Casimir effect all sources of radiation besides the parametrically excited vacuum radiation are minimized and the field is not driven classically. However our finding that the parametric drive has a strong effect on the radiation field when both kinds of excitation are applied suggests that it may be possible to extract signatures of the quantum effects of the parametric drive even in experiments where classical sources are present. Cavitation of bubbles in superfluid helium provides a possible realization of this physics. The motion of the bubble wall would parametrically excite phonons in the fluid much as a moving mirror excites photons \cite{moore}. Moreover, unlike a moving mirror, a moving bubble wall is a strong classical source of acoustic radiation \cite{landaufluid}. Cavitating bubbles might even be able to achieve supersonic flow \cite{putterman}, leading to the formation of a sound horizon \cite{unruh}. Although bubbles in superfluid helium are well studied \cite{heliumreview}, these are likely challenging experiments and it would be desirable in future work to develop an optimum experimental design. In ref \cite{superconduct} the dynamical Casimir effect was realized by terminating a transmission line with a SQUID. Changing the flux in the SQUID modifies the boundary condition at the termination, effectively mimicking a moving mirror. The flux is varied periodically, coupling electromagnetic modes in pairs and leading to the formation of two-mode squeezed states \cite{loudonreview}. In this case it is feasible to drive the transmission line with a weak classical voltage source. Designing an unambiguous signature of the quantum effects of the parametric drive on the resulting radiation field is therefore an interesting question for future work. A primary motivation for this work is to develop an estimate of the quantum contribution to gravitational radiation produced by astrophysical phenomena such as black hole mergers that are accessible to LIGO and its planned successor gravitational wave observatories. Mergers are strong classical sources of gravitational radiation and in this case quantum effects, if at all detectable, necessarily have to be observed against this background. \end{document}
\begin{document} \title[Homological dimensions of subalgebras]{Homological dimensions for co-rank one idempotent subalgebras} \author{Colin Ingalls} \author{Charles Paquette} \maketitle \begin{abstract} Let $k$ be an algebraically closed field and $A$ be a (left and right) Noetherian associative $k$-algebra. Assume further that $A$ is either positively graded or semiperfect (this includes the class of finite dimensional $k$-algebras, and $k$-algebras that are finitely generated modules over a Noetherian central Henselian ring). Let $e$ be a primitive idempotent of $A$, which we assume is of degree $0$ if $A$ is positively graded. We consider the idempotent subalgebra $\Gamma = (1-e)A(1-e)$ and $S_e$ the simple right $A$-module $S_e = eA/e{\rm rad}A$, where ${\rm rad}A$ is the Jacobson radical of $A$, or the graded Jacobson radical of $A$ if $A$ is positively graded. In this paper, we relate the homological dimensions of $A$ and $\Gamma$, using the homological properties of $S_e$. First, if $S_e$ has no self-extensions of any degree, then the global dimension of $A$ is finite if and only if that of $\Gamma$ is. On the other hand, if the global dimensions of both $A$ and $\Gamma$ are finite, then $S_e$ cannot have self-extensions of degree greater than one, provided $A/{\rm rad}A$ is finite dimensional. \end{abstract} \section{Introduction} Let $k$ be an algebraically closed field and $A$ an associative $k$-algebra. The left (or right) global dimension of $A$ is a nice homological invariant of the algebra which is defined as the supremum of the projective dimensions of all left (resp. right) $A$-modules. The notion of global dimension, which traces back to Cartan and Eilenberg in \cite{CartanEilenberg}, has been widely studied in the past decades. It was first studied in the context of commutative algebras. When $A$ is commutative Noetherian, Auslander and Buchsbaum have proven the well known fact that $A$ is regular if and only if its global dimension is finite. This result was proven independently by Serre in \cite{Serre}. In \cite{Aus}, Auslander gave many interesting properties of the global dimension in the context of noncommutative algebras. One of his key results shows that when $A$ is both left and right Noetherian, then the left global dimension of $A$ coincides with the right global dimension of $A$. In this paper, an algebra which is both left and right Noetherian is called \emph{Noetherian}, and in this case, the global dimension of $A$ is denoted ${\rm gl.dim}\,A$. In the representation theory of finite dimensional algebras, Happel \cite{Happel} has shown the very important fact that the bounded derived category $D^b({\rm mod}\, A)$ of the category ${\rm mod}\, A$ of finite dimensional right $A$-modules admits a Serre functor if and only if ${\rm gl.dim}\,A$ is finite. In particular, $D^b({\rm mod}\, A)$ admits Auslander-Reiten triangles if and only if ${\rm gl.dim}\,A$ is finite. This also shows that the finiteness of the global dimension is a derived invariant. The global dimension also has applications in noncommutative algebraic geometry. Namely, if $\mathbb{X}$ is a projective variety, then one can consider the bounded derived category $D^b({\rm coh}(\mathbb{X}))$ of the coherent sheaves over $\mathbb{X}$; and this derived category admits a Serre functor if $\mathbb{X}$ is smooth. 
When $D^b({\rm coh}(\mathbb{X}))$ admits a tilting object $T$, then $D^b({\rm coh}(\mathbb{X}))$ is triangle equivalent to $D^b({\rm mod}\, {\rm End}(T))$, where ${\rm End}(T)$ is a finite dimensional $k$-algebra. Hence in this case, if $\mathbb{X}$ is smooth, then ${\rm End}(T)$ has finite global dimension. In order to compute the global dimension of an algebra $A$, it is often easier to reduce the computations to a smaller algebra. One way to reduce the size of an algebra is to find another closely related algebra with fewer simple modules. In this paper, we work in the wide context of associative Noetherian $k$-algebras. Our algebras are not assumed to be commutative. All algebras in this paper are assumed to be (associative) Noetherian $k$-algebras, unless otherwise indicated. In most of the results, the algebras considered are either semiperfect or positively graded. The definitions of a semiperfect algebra and a positively graded algebra are recalled in the corresponding sections. Semiperfect $k$-algebras have finitely many - say $n$ - non-isomorphic simple right $A$-modules, and the identity $1_A$ of $A$ decomposes as a finite sum $1_A = e_1 + \cdots + e_n$ of pairwise orthogonal idempotents, where each such idempotent corresponds to a unique isomorphism class of simple right $A$-modules. Let $e$ be a fixed nonzero idempotent of $A$ with $S_e$ the semi-simple top of $eA$, and let $\Gamma:=(1-e)A(1-e)$ be the corresponding idempotent subalgebra. The algebra $\Gamma$ is again Noetherian and semiperfect and admits fewer simple $\Gamma$-modules, up to isomorphism. A similar behavior arises for graded simple modules over a positively graded algebra. Consequently, we can define $e, S_e$ and $\Gamma$ as follows. A positively graded $k$-algebra $A$ has finitely many - say $n$ - non-isomorphic graded simple right $A$-modules, up to a shift, and the identity $1_A$ of $A$ decomposes as a finite sum $1_A = e_1 + \cdots + e_n$ of pairwise orthogonal idempotents of degree $0$, where each such idempotent corresponds to a unique isomorphism class of graded simple right $A$-modules of degree $0$. Let $e$ be a fixed nonzero idempotent of degree $0$ of $A$ with $S_e$ the semi-simple graded top of $eA$, and let $\Gamma:=(1-e)A(1-e)$ be the corresponding idempotent subalgebra. The algebra $\Gamma$ is again Noetherian and positively graded and admits fewer graded simple $\Gamma$-modules up to isomorphism and shift. Even though the algebras $A, \Gamma$ are closely related, their homological behaviors can be very different. One can easily find examples where $A$ has finite global dimension while $\Gamma$ does not, and conversely. However, the homological properties of the semi-simple module $S_e$ give more information on the relationship between the global dimensions of $A$ and $\Gamma$. When $S_e$ is simple, that is, when $e$ is primitive, we will show that this relationship is much stronger. We will show the following theorem when $A$ is semiperfect or positively graded. \begin{Theor} Suppose that ${\rm Ext}_A^i(S_e, S_e)=0$ for $i > 0$. Then the global dimension of $A$ is finite if and only if the global dimension of $\Gamma$ is finite. 
More precisely, we have bounds $${\rm gl.dim}\,\Gamma \le {\rm max}({\rm id}_A S_e + {\rm pd}_A S_e -1, {\rm gl.dim}\, A)$$ and $${\rm gl.dim}\,A \le 2 {\rm gl.dim}\,\Gamma + 2.$$ \end{Theor} In the statement, id$_A$ stands for the injective dimension and pd$_A$ stands for the projective dimension. Surprisingly, when $A$ is finite dimensional (hence is Noetherian semiperfect with $A/{\rm rad}A$ finite dimensional) and $e$ is primitive, if one assumes that both ${\rm gl.dim}\,A, {\rm gl.dim}\,\Gamma$ are finite, then ${\rm Ext}_A^i(S_e, S_e)=0$ for $i > 0$. We actually get a stronger version as follows. \begin{Theor} Assume that $A$ is finite dimensional, $e$ is primitive and both ${\rm pd}_A S_e$ and ${\rm pd}_\Gamma (eA(1-e))$ are finite. Then ${\rm Ext}_A^i(S_e, S_e)=0$ for $i > 0$. \end{Theor} If $A$ is not finite dimensional, the above theorem is clearly not true. Take for instance the one-point extension $$A = \left( \begin{array}{cc} k[[x]] & k[[x]]/\langle x \rangle\\ 0 & k \\ \end{array} \right),$$ where $k[[x]]$ is the $k$-algebra of formal power series in one variable. Then $A$ has global dimension two and is Noetherian semiperfect. For the idempotent $e=e_{11}$ of $A$, the simple module $S_e$ is not self-orthogonal since ${\rm Ext}^1_A(S_e, S_e) \ne 0$ and $\Gamma = e_{22}Ae_{22} \cong k$ has global dimension zero. If one rather considers the polynomial algebra $k[x]$ instead of $k[[x]]$, then the analogues of $A,\Gamma$ defined above are Noetherian positively graded and for the same reason, we get a counter-example. Observe however that when $A$ is finite dimensional and $e$ is primitive with ${\rm pd}_A S_e < \infty$, then the condition ${\rm Ext}^1_A(S_e,S_e)=0$ is automatically verified. This indeed follows from \cite{ILP}. So in the general case ($A$ is a Noetherian $k$-algebra and is either semiperfect or positively graded), we have the following conjecture. \begin{Conjec} Assume that $e$ is primitive and of degree zero if $A$ is positively graded, and assume that ${\rm Ext}^1_A(S_e,S_e)=0$. If both ${\rm pd}_A S_e$ and ${\rm pd}_\Gamma (eA(1-e))$ are finite, then ${\rm Ext}_A^i(S_e, S_e)=0$ for $i > 0$. \end{Conjec} In this paper, we show that the conjecture holds with the additional assumption that $A/{\rm rad}A$ is finite dimensional (which is verified in the positively graded case). Finally, note that if $e$ is not primitive, then the above conjecture does not hold. For instance, take the algebra $A = kQ/I$, where $Q$ is the quiver $$\xymatrix{1 \ar[r]^\alpha & 2 \ar[r]^\beta & 3 \ar[dl]^\gamma \\ & 4 \ar[ul]^\delta}$$ with $I = \langle \delta\gamma, \alpha\delta \rangle$ and take $e$ to be the sum of the primitive idempotents at the vertices $1,3$. The algebra $A$ has global dimension three while $\Gamma$ is hereditary. Clearly, ${\rm Ext}^1_A(S_e,S_e)=0$. However, ${\rm Ext}^2_A(S_e, S_e) \ne 0$ because of the minimal relation $\delta\gamma$. In general, we do not know whether there exists a right $A$-module $M_e$ that completely controls the relationship between the homological dimensions of $A$ and $\Gamma$, when $e$ is not primitive. \section{Semiperfect Noetherian algebras} We refer the reader to \cite[page 301]{AndFul} for properties of semiperfect algebras. Let $A$ be an associative $k$-algebra where $k$ is algebraically closed. 
We denote by mod$\,A$ the category of finitely generated right $A$-modules. Let ${\rm rad} A$ denote the Jacobson radical of $A$, that is, the intersection of all maximal right (or left) ideals of $A$. Then $A$ is \emph{semiperfect} if $A/{\rm rad}A$ is a semi-simple $k$-algebra, and idempotents lift modulo ${\rm rad} A$. By the well known Wedderburn-Artin theorem, the first condition means $$A/{\rm rad}A \cong M_{m_1}(k_1) \times \cdots \times M_{m_n}(k_n)$$ as $k$-algebras, where for $1 \le i \le n$, $M_{m_i}(k_i)$ is the simple $k$-algebra of all $m_i \times m_i$ matrices over a division $k$-algebra $k_i$. If $A$ (or $A/{\rm rad}A$) is finite dimensional, since $k = \bar k$, we have $k_i = k$ for all $i$. Using the lifting of idempotents property, this yields a decomposition $1_A = e_1 + \cdots + e_n$ of $1_A$ into pairwise orthogonal idempotents. Note that the simple right $A$-modules are the simple right $A/{\rm rad}A$-modules and hence, there are exactly $n$ simple right $A$-modules up to isomorphism. In this section, every $k$-algebra considered is semiperfect, unless otherwise indicated. While semiperfect algebras form a nice class of algebras having well behaved homological properties, finitely generated modules may not have finitely generated projective resolutions. Even worse, for an arbitrary semiperfect algebra, the left global dimension may differ from the right global dimension. To avoid these problems, and since most of our applications fall in this class, we consider only Noetherian algebras, that is, algebras that are both left and right Noetherian. So in this section, all algebras considered are both semiperfect and Noetherian, unless otherwise indicated. Let $e$ be a fixed idempotent of $A$. By considering the \emph{idempotent subalgebra} $\Gamma:=(1-e)A(1-e)$, we are reducing the number of simple modules of the algebra, but keeping the property of $\Gamma$ being semiperfect and Noetherian; see \cite[Cor. 27.7]{AndFul} and \cite[Prop. 2.3]{Sando}. Since we are studying homological properties of algebras, and since the properties of being semiperfect and Noetherian are preserved under Morita-equivalence, for the same reason as above, we may assume that our algebra is basic, which means that $m_1 = \cdots = m_n=1$ and all the $e_j$ are primitive idempotents. To simplify notation, there is no loss of generality in fixing $e = e_1$ when we are given that $e$ is primitive. Observe that $e_1A, \ldots, e_nA$ represent all the indecomposable projective right $A$-modules, up to isomorphism; see \cite{AndFul}. In particular, every indecomposable projective module is cyclic. There is an $A$-module which is of special interest for relating the homological properties of $A$ and $\Gamma$. This module is the semi-simple right $A$-module at $e$, that is, $S_e:=eA/e{\rm rad}A$. A class of examples of semiperfect Noetherian $k$-algebras are the finite dimensional $k$-algebras, which we know are Morita equivalent to $kQ/I$ for some finite quiver $Q$ and some admissible ideal $I$ of $kQ$. Other examples are obtained as follows. Let $Q$ be a finite quiver and denote by $J_Q$ the ideal of $kQ$ generated by all arrows. Let $I$ be an ideal of $kQ$ with $I \subseteq J_Q^2$ and which is generated by homogeneous elements. Let $\Lambda:=kQ/I$ and consider $J$ the ideal of $\Lambda$ generated by all classes of arrows. We can define a topology on $\Lambda$, which is called the \emph{$J$-adic topology}, as follows. 
A subset $U$ of $\Lambda$ is \emph{open} if for every $x$ in $U$, there exists an integer $r$ with $x + J^r \subseteq U$. It can be checked that this is a topology on $\Lambda$ such that the ring operations $$\cdot \,: \Lambda \times \Lambda \to \Lambda \;\;\; \text{and}\;\;\; +: \Lambda \times \Lambda \to \Lambda$$ are continuous, where we use the product topology on $\Lambda \times \Lambda$. This makes $\Lambda$ into a \emph{topological algebra}. The notion of a topological algebra is a classical notion that is widely studied by commutative algebraists. In noncommutative algebra, topological algebras are also studied; however, they do not share all the properties that hold for their commutative counterparts. The reader is invited to see, for instance, the work of Gabriel \cite{Gabriel}, where the notion of pseudo-compact algebras, which is a particular class of topological algebras, is used. We refer the reader to \cite[Chapter 10]{McDonald} for the very basic definitions and properties of topological algebras. We say that $\Lambda$ is \emph{complete} if it is complete as a topological space, that is, every Cauchy sequence $(x_i)_{i \ge 0}$ in $\Lambda$ converges. Recall that a sequence $(x_i)_{i \ge 0}$ in $\Lambda$ is a Cauchy sequence if for every neighborhood of zero $U$, there exists $r \ge 0$ such that for $s_1, s_2 \ge r$, we have $x_{s_1} - x_{s_2} \in U$. Since the powers of $J$ form a basis of neighborhoods of $0$, one may take a power of $J$ for the open set $U$. In our setting, the topology is always Hausdorff, since $\cap_{i\ge 1}J^i = 0$; see \cite[Lemma 10.1]{McDonald}. For $i \ge 0$, let $p_i : \Lambda/J^{i+1} \to \Lambda/J^i$ be the canonical projection. The inverse limit $A:=\varprojlim \Lambda/J^i$ of the inverse system $$\Lambda/J \,\stackrel{p_1}{\longleftarrow}\, \Lambda/J^2 \,\stackrel{p_2}{\longleftarrow}\, \cdots $$ is an algebra and is called the \emph{completion} of $\Lambda$ with respect to the $J$-adic topology. Indeed, $A$ is a topological algebra with a $\hat J$-adic topology, where $\hat J$ is the ideal of $A$ defined as $\hat J : = \varprojlim J/J^i$. The algebra $A$ with the $\hat J$-adic topology is complete. We have an induced canonical map $\Lambda \to A$ whose kernel is $\cap_{i\ge 1}J^i = 0$. Therefore, $\Lambda$ can be viewed as a subalgebra of $A$. \begin{Prop} The completion $A$ of $\Lambda$ is semiperfect. \end{Prop} \begin{proof} By \cite[Thm 27.6]{AndFul}, since $1_A = e_1 + \cdots + e_n$ is a decomposition of $1_A$ as a sum of pairwise orthogonal idempotents in $A$, it is sufficient to prove that for $1 \le i \le n$, $e_iAe_i$ is a local algebra. Fix $i$ with $1 \le i \le n$. Using the definition of $A$ as an inverse limit $A=\varprojlim \Lambda/J^i$, one may think of the algebra $e_iAe_i$ as follows. Consider $T'$ the set of all nonzero classes modulo $I$ of paths in $Q$ from $i$ to $i$. Observe that $T'$ contains a basis of $e_i \Lambda e_i$. So let $T\subseteq T'$ be a basis of $e_i \Lambda e_i$. Observe that we can define the length of an element in $T$ as the length of the corresponding path, and this is well defined since $I$ is homogeneous. Take a total order on the elements of $T$ refining path length, so that $T = \{t_0 = e_i, t_1, t_2, \ldots\}$. An element in $e_iAe_i$ can be thought of as a formal sum $\sum_{j \ge 0} \lambda_j t_j$ where $\lambda_j \in k$. 
Addition is done termwise and multiplication is done as for multiplying power series: $$(\lambda_0 e_i + \lambda_1 t_1 + \lambda_2 t_2 + \cdots) \cdot (\mu_0 e_i + \mu_1 t_1 + \mu_2 t_2 + \cdots) = \sum_{t \in T}\left(\sum_{\stackrel{t_{i_1}, t_{i_2} \in T}{t_{i_1} t_{i_2} = t}}\lambda_{i_1}\mu_{i_2}\right)t.$$ It is then easily verified that an element $\sum_{j \ge 0} \lambda_j t_j$ has a left inverse if and only if $\lambda_0$ is non-zero. This proves, by Proposition 15.15 in \cite{AndFul}, that $e_iAe_i$ is a local algebra, and hence that $A$ is semiperfect. \end{proof} In general, the completion $A$ of an arbitrary topological Noetherian $k$-algebra $\Lambda$ may fail to be Noetherian. However, in our setting, $A$ is always Noetherian when $\Lambda$ is, since $I$ is homogeneous; see \cite[Prop. 2.1]{Connell}. If $A$ is a finitely generated module over a commutative Noetherian Henselian local ring, then $A$ is semiperfect; indeed, this property characterizes Henselian rings, see \cite[Lemma 1.12.5]{ABAEM}. Now, let us go back to the general theory, where $A$ is a fixed semiperfect Noetherian $k$-algebra. Using that $A$ is semiperfect, we get that the finitely generated projective $A$-modules satisfy the Krull-Schmidt decomposition theorem and hence, any indecomposable finitely generated projective $A$-module is isomorphic to some $e_iA$. Combining this observation with the fact that $A$ is Noetherian, we get that if $M \in {\rm mod}A$, then $M$ admits a projective resolution $$\cdots \to P_r \to P_{r-1} \to \cdots \to P_1 \to P_0 \to M \to 0$$ such that for $i \ge 0$, each $P_i$ is a finite direct sum of copies of the modules $e_1A, \ldots, e_nA$. Recall that since $A$ is Noetherian, the global dimension of $A$ is well defined and coincides with the left or right global dimension of $A$. For a right $A$-module $M$, we denote by pd$_A M$ its projective dimension and by id$_A M$ its injective dimension. From \cite{Oso}, one has $${\rm gl.dim}A = {\rm sup} \{{\rm pd}_A M \mid M \in {\rm mod} A\} = {\rm sup} \{{\rm id}_A M \mid M \in {\rm mod} A\},$$ which will be handy in the sequel. Also, if we know that the global dimension of $A$ is finite, then it coincides with the supremum of the projective dimensions of the simple right $A$-modules, see \cite{RobsonMcConnell}. However, we do not always know in advance that the global dimension is finite. In our setting, using the fact that our algebra is semiperfect, we have a stronger result. \begin{Prop} Let $A$ be Noetherian semiperfect. Then $${\rm gl.dim}A = {\rm max}\left\{{\rm pd}_A \left(\frac{e_iA}{e_i{\rm rad}(A)}\right)\, \Big| \; i=1, \ldots, n\right\}.$$ \end{Prop} \begin{proof} Since $A$ is semi-local (that is, $A/{\rm rad}(A)$ is semi-simple), we can use Theorem $2$ in \cite{ChengXu}: the weak global dimension of $A$ is the flat dimension of the right $A$-module $A/{\rm rad}(A)$. Since $A/{\rm rad}(A)$ is finitely generated and $A$ is semiperfect and right Noetherian, the flat dimension of the right $A$-module $A/{\rm rad}(A)$ coincides with its projective dimension. This implies that the weak global dimension of $A$ is the projective dimension of $A/{\rm rad}(A)$. Now, since $A$ is Noetherian, the weak global dimension coincides with the global dimension. \end{proof} Recall that the \emph{radical} ${\rm rad}(M)$ of a right $A$-module $M$ is the intersection of all its maximal submodules. Since $A/{\rm rad}(A)$ is semi-simple, if $M$ is finitely generated, then ${\rm rad}(M)=M{\rm rad}(A)$.
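Let us illustrate the formula for the global dimension just proved with a toy example of our own (not taken from the references, and only meant as a sanity check). Let $A = kQ/\langle \beta\alpha\rangle$, where $Q$ is the quiver $1 \stackrel{\alpha}{\to} 2 \stackrel{\beta}{\to} 3$. This algebra is finite dimensional, hence semiperfect and Noetherian. Computing minimal projective resolutions, one finds that the three simple right $A$-modules have projective dimensions $2$, $1$ and $0$ (which simple receives which value depends on the convention chosen for composing paths), so that $${\rm gl.dim}A = {\rm max}\{2,1,0\} = 2,$$ in accordance with the proposition above.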
Recall also that a projective resolution $$\cdots \stackrel{d_2}{\to} P_1 \stackrel{d_1}{\to} P_0 \to M \to 0$$ of $M \in {\rm mod}(A)$ is \emph{minimal} if for $i \ge 1$, $d_i$ is a radical morphism, that is, the image of $d_i$ is contained in the radical of $P_{i-1}$. Every $M \in {\rm mod}(A)$ admits a minimal projective resolution in ${\rm mod}(A)$. Now, we have an exact functor $$F:= {\rm Hom}_A((1-e)A,-): {\rm mod}A \to {\rm mod}\Gamma$$ between the corresponding categories of finitely generated right modules. Note that we also have a functor $G:=- \otimes_\Gamma (1-e)A : {\rm mod}\Gamma \to {\rm mod}A$ which is left adjoint to $F$. However, this functor is not exact in general. The reader is referred, for instance, to \cite{Psaroudakis} for a better idea of the canonical functors between the algebras $A, \Gamma$ and $A/A(1-e)A$. In this paper, we shall concentrate on the functor $F$. The following proposition collects some of the properties of the functor $F$. \begin{Prop} \label{HomologicalProp} Let $M, P, S \in {\rm mod} A$ with $P$ indecomposable projective and $S$ simple. \begin{enumerate}[$(1)$] \item If $P$ is not isomorphic to a direct summand of $eA$, then $F(P)$ is indecomposable projective. \item If $S$ is not isomorphic to a direct summand of $S_e$, then $F(S)$ is simple. \item The functor $F$ is essentially surjective. \item If ${\rm Ext}^i_A(M,S_e) = 0$ for all $i \ge 0$ and $\cdots \to P_1 \to P_0 \to M \to 0$ is a minimal projective resolution of $M$, then $\cdots \to F(P_1) \to F(P_0) \to F(M) \to 0$ is a minimal projective resolution of $F(M)$. \item $F(S_e)=0$. \end{enumerate} \end{Prop} \begin{proof} Properties $(1),(2),(5)$ are easy and well known. For proving property $(3)$, it suffices to observe that $F$ and $G$ induce quasi-inverse equivalences between add$((1-e)A)$ and add$(\Gamma_\Gamma),$ the additive category generated by $\Gamma$ as a right $\Gamma$-module. Hence, if $N$ is a finitely generated right $\Gamma$-module with a projective presentation $Q_1 \stackrel{f}{\rightarrow} Q_0 \to N \to 0$, then we get a projective presentation $G(Q_1) \stackrel{G(f)}{\rightarrow} G(Q_0) \to {\rm Coker}(G(f)) \to 0$. Then we see that $F({\rm Coker}(G(f))) \cong N$. Property $(4)$ follows from property $(1)$ by observing that if $g: P \to Q$ is a radical morphism with $P,Q \in {\rm add}((1-e)A)$, then $F(g)$ is a radical morphism. \end{proof} Therefore, if $M \in {\rm mod} A$ is such that ${\rm Ext}_A^i(M,S_e) =0$ for all $i \ge 0$, then a minimal projective resolution of $F(M)$ is obtained by applying $F$ to a minimal projective resolution of $M$. Hence, in this case, pd$_A M = {\rm pd}_\Gamma F(M)$. In general, a module $M$ in ${\rm mod} A$ need not satisfy ${\rm Ext}_A^i(M,S_e) =0$ for all $i$, and we need to measure this defect. For $M \in {\rm mod} A$, denote by $d_e(M)$ the maximal integer $i$ for which ${\rm Ext}^i_A(M,S_e)$ is non-zero (if ${\rm Ext}_A^i(M,S_e) =0$ for all $i \ge 0$, we set $d_e(M)=-1$; if ${\rm Ext}_A^i(M,S_e) \ne 0$ for infinitely many $i$, we set $d_e(M) = \infty$). The numbers $d_e(M)$ for $M \in {\rm mod} A$ will be very handy in the sequel. Observe that the supremum of the $d_e(M)$ for $M \in {\rm Mod} A$ gives the injective dimension of $S_e$.
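To get a feeling for the numbers $d_e(M)$, consider again the toy algebra $A = kQ/\langle \beta\alpha\rangle$ with $Q: 1 \stackrel{\alpha}{\to} 2 \stackrel{\beta}{\to} 3$ (our own example, not from the references), and take $e = e_2$, so that $S_e = S_2$ is the simple at the middle vertex. Computing ${\rm Ext}^i_A(-,S_2)$ from the minimal projective resolutions, one finds that the three simple modules have $d_e$-values $1$, $0$ and $-1$ (the assignment again depending on the path-composition convention), while $d_e$ of an indecomposable projective module is either $0$ or $-1$. Since every indecomposable finitely generated $A$-module is simple or projective here, these values exhaust the possible $d_e(M)$ for $M \in {\rm mod}A$, and their supremum, namely $1$, is precisely ${\rm id}_A S_2$.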
In fact, by Baer's criterion, one only needs to take the supremum over the cyclic modules. In particular, we can take the supremum over finitely generated $A$-modules: $${\rm id}_A S_e = {\rm sup} \{d_e(M) \mid M \in {\rm mod} A\}.$$ \section{Noetherian positively graded $k$-algebras} \label{sectionNoethGraded} In this section, $A$ is a Noetherian basic $k$-algebra which is \emph{positively graded} and generated in degrees $0$ and $1$. This means that $$A = A_0 \oplus A_1 \oplus \cdots,$$ as $k$-vector spaces, where $A_0$ is a product of $n$ copies of $k$, and $A_iA_j = A_{i+j}$ for all $i, j \ge 0$. We do not assume that $A$ is semiperfect. Observe that since $A$ is Noetherian, each $A_i$ is finite dimensional. Examples of this include Noetherian $k$-algebras of the form $kQ/I$ where $Q$ is a finite quiver and $I$ is an ideal of $kQ$ generated by homogeneous elements of degree at least two. Observe that $1_A = e_1 + \cdots + e_n$ where the $e_i$ are primitive pairwise orthogonal idempotents of degree $0$. We let ${\rm rad}A$ denote the ideal $A_1 \oplus A_2 \oplus \cdots$ of $A$. This ideal does not necessarily coincide with the Jacobson radical of $A$, but we will see that it plays a similar role in the category of graded modules. For this reason, it is called the \emph{graded Jacobson radical} of $A$, and this is why we use the notation ${\rm rad}A$. Let $M$ be a right $A$-module. One says that $M$ is \emph{graded} if $M$ admits a $k$-vector space decomposition $$M = \bigoplus_{i \in \mathbb{Z}}M_i$$ such that $M_jA_i \subseteq M_{i+j}$. Observe that for an idempotent $f$ in $A_0$, the projective module $fA$ is naturally graded, as $$fA = fA_0 \oplus fA_1 \oplus fA_2 \oplus \cdots.$$ Let gr$A$ be the category of all finitely generated graded right $A$-modules. A \emph{morphism} $f: M \to N$ in gr$A$ is a morphism of $A$-modules such that $f(M_i) \subseteq N_i$ for all $i$. Given $t \in \mathbb{Z}$ and $M \in {\rm gr}A$, we define $M[t]$ to be the module in gr$A$ such that $M[t]_i = M_{i-t}$. The right $A$-module $M[t]$ is called a \emph{shift} of $M$. The following essential facts about gr$A$ can all be found in \cite{MartinezSolberg}, for instance. In gr$A$, any indecomposable projective module is isomorphic to a shift of some $e_iA$. Given $f: L \to M$ a morphism in ${\rm gr}A$, we have that $f$ lies in the radical of the category gr$A$ if and only if $f(L) \subseteq M{\rm rad}A$. The graded submodule $M{\rm rad}A$ of $M$ is called the \emph{graded radical} of $M$. It coincides with the intersection of all maximal graded submodules of $M$. The \emph{graded top} of $M$ is $M/M{\rm rad}A$. Moreover, every finitely generated module in gr$A$ has a projective cover in gr$A$. Using the Noetherian property, any $M \in {\rm gr}A$ admits a minimal finitely generated projective resolution $$\cdots \to P_2 \to P_1 \to P_0 \to M \to 0$$ which is graded and such that each term $P_i$ is a finite direct sum of shifts of modules in $\{e_1A, \ldots, e_nA\}$. For $M,N \in {\rm gr}A$ and $r \ge 0$, by ${\rm Ext}^r_A(M,N)$, we mean the $r$-th extension group of $M$ by $N$ in the category ${\rm mod} A$. We have $${\rm Ext}^r_A(M,N) = \bigoplus_{i \in \mathbb{Z}} {\rm Ext}_{{\rm gr}A}^r(M,N[i]).$$ In particular, ${\rm Hom}_A(M,N)$ coincides with $\bigoplus_{i \in \mathbb{Z}} {\rm Hom}_{{\rm gr}A}(M,N[i])$. It is clear that for $M \in {\rm gr}A$, the projective dimension of $M$ in ${\rm gr}A$ coincides with ${\rm pd}_A M$.
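As a minimal illustration of these grading and shift conventions (again a toy example of our own, not from the references): take $A = kQ = k[x]$ for $Q$ the quiver with one vertex and one loop $x$, graded by path length, and let $M = A/(x^2) \in {\rm gr}A$, so that $M_0 = k$ and $M_1 = k\bar{x}$. The identity of $M$ lies in ${\rm Hom}_{{\rm gr}A}(M,M)$, while multiplication by $x$ sends $M_i$ into $M_{i+1}$ and is therefore a morphism $M \to M[-1]$ in ${\rm gr}A$ with the shift convention above. Accordingly, $${\rm Hom}_A(M,M) = {\rm Hom}_{{\rm gr}A}(M,M) \oplus {\rm Hom}_{{\rm gr}A}(M,M[-1])$$ is two dimensional, in agreement with the decomposition of ${\rm Hom}_A(M,N)$ just described.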
All these facts mean that the homological algebra in the category gr$A$ looks very similar to the homological algebra in ${\rm mod}(B)$ for a Noetherian semiperfect algebra $B$. We fix $e$ an idempotent of degree $0$ of $A$. As before, we set $\Gamma:= (1-e)A(1-e)$ and we set $S_e = eA/e{\rm rad}A$, which is the semi-simple graded right $A$-module of degree $0$ supported at $e$. The algebra $A$ being positively graded implies that $\Gamma = (1-e)A(1-e)$ is positively graded as well. The grading on $\Gamma$ is given by $$\Gamma = (1-e)A_0(1-e) \oplus (1-e)A_1(1-e) \oplus \cdots$$ By \cite[Prop. 2.3]{Sando}, $\Gamma$ is again Noetherian, and we may assume that the $e_iA$ are pairwise non-isomorphic. Observe that this induced grading does not necessarily imply that $\Gamma$ is generated in degrees $0$ and $1$. Indeed, $\Gamma$ is generated in degrees $0$, $1$, $2$ when ${\rm Ext}^1_A(S_e,S_e)=0$. Now, the functor $F={\rm Hom}_A((1-e)A, -): {\rm mod} A \to {\rm mod} \Gamma$ induces a functor $F_{\rm gr} = {\rm Hom}_{A}((1-e)A, -): {\rm gr}A \to {\rm gr}\Gamma$ at the level of the graded categories of right modules. The left adjoint $G = - \otimes_\Gamma (1-e)A$ to $F$ also induces a functor $G_{\rm gr} = - \otimes_\Gamma (1-e)A: {\rm gr}\Gamma \to {\rm gr}A$ such that if $M_\Gamma =\bigoplus_{i \in \mathbb{Z}} M_i$ is a graded finitely generated right $\Gamma$-module, then $M \otimes_\Gamma (1-e)A$ is a graded finitely generated right $A$-module such that $(M \otimes_\Gamma (1-e)A)_t$ is the $k$-vector space generated by the elements $$\{m_i \otimes_\Gamma (1-e)a_j \mid m_i \in M_i, a_j \in A_j, i+j = t\}.$$ In the proof of Proposition \ref{HomologicalProp}, the key fact was that $F,G$ induce quasi-inverse equivalences between add$((1-e)A)$ and add$(\Gamma_\Gamma)$. The same is true for $F_{\rm gr}, G_{\rm gr}$ if we consider instead the graded additive categories add$_{\rm gr}((1-e)A)$ and add$_{\rm gr}(\Gamma_\Gamma)$. For simplicity, when there is no risk of confusion, $F_{\rm gr}$ and $G_{\rm gr}$ will simply be denoted by $F$ and $G$, respectively. Hence, Proposition \ref{HomologicalProp} easily extends to the graded case as follows. \begin{Prop} \label{HomologicalPropGraded} Let $M, P, S \in {\rm gr} A$ with $P$ indecomposable projective and $S$ simple. \begin{enumerate}[$(1)$] \item If $P$ is not isomorphic to a direct summand of a shift of $eA$, then $F(P)$ is a graded indecomposable projective module. \item If $S$ is not isomorphic to a direct summand of a shift of $S_e$, then $F(S)$ is a graded simple module. \item The functor $F$ is essentially surjective. \item If ${\rm Ext}^i_A(M,S_e) = 0$ for all $i \ge 0$ and $\cdots \to P_1 \to P_0 \to M \to 0$ is a minimal graded projective resolution of $M$, then $\cdots \to F(P_1) \to F(P_0) \to F(M) \to 0$ is a minimal graded projective resolution of $F(M)$. \item $F(S_e)=0$.
\end{enumerate} \end{Prop} For finding the global dimension of a positively graded algebra, by \cite{Roy}, we only need to look at the projective dimensions of the graded simple modules, that is, $${\rm gl.dim}A = {\rm max} \left\{{\rm pd}_A\left(\frac{e_iA}{e_i{\rm rad}A}\right) \Big|\; i = 1,2, \ldots, n\right\}.$$ Before going further, we need to study filtered modules. For $i \ge 0$, set $F_i = A_0 \oplus \cdots \oplus A_i$, so that we get a chain $$F_0 \subseteq F_1 \subseteq F_2 \subseteq \cdots$$ of $k$-vector spaces whose union is $A$. We have $F_iF_j \subseteq F_{i+j}$ for all $i,j \ge 0$. Hence, $A$ is a \emph{filtered algebra}. A \emph{filtered right $A$-module} $M$ is just a right $A$-module $M$ with an ascending chain $$M_0 \subseteq M_1 \subseteq M_2 \subseteq \cdots$$ of $k$-vector spaces whose union is $M$ and such that $M_iF_j \subseteq M_{i+j}$ for all $i,j\ge 0$. Given any finitely generated (but not necessarily graded) right $A$-module $N$, let $g_1, \ldots, g_t$ be a fixed finite set of generators of $N$. For $i \ge 0$, set $N_i = \sum_j g_j \cdot F_i$, so that we have a chain $$N_0 \subseteq N_1 \subseteq \cdots$$ of $k$-vector spaces whose union is $N$. Clearly, $N_iF_j \subseteq N_{i+j}$, so that $N$ becomes a filtered $A$-module. Thus, every finitely generated right $A$-module admits a structure of a filtered module. Given $N \in {\rm mod} A$, there are many choices of filtrations on $N$ that give a filtered module structure. The filtration given above, which depends on the chosen set of generators, is called a \emph{standard filtration} of $N$. Given two filtered $A$-modules $M=\cup_{i \ge 0} M_i$ and $N = \cup_{i \ge 0}N_i$, a \emph{filtered morphism} $f: M \to N$ is a morphism of $A$-modules that satisfies $f(M_i) \subseteq N_i$ for all $i \ge 0$. It is called \emph{strict} if for all $i$, $f(M_i) = f(M)\cap N_i$. We get a category fil$(A)$ whose objects are all the finitely generated filtered right $A$-modules and whose morphisms are the filtered morphisms. Let Gr$A$ denote the category of all graded $A$-modules. There is a functor $${\rm gr}: {\rm fil}(A) \to {\rm Gr}A$$ such that for $M = \cup_{i \ge 0}M_i$ a finitely generated filtered $A$-module, gr$M$ is defined so that $({\rm gr}M)_i = M_i/M_{i-1}$ for $i\ge 1$ and $({\rm gr}M)_0 = M_0$. The structure of gr$M$ as a graded $A$-module is the obvious one. Observe that even if $M=\cup_{i\ge 0}M_i$ is finitely generated, gr$M$ may not be finitely generated. However, if $\{M_i\}_{i \ge 0}$ is a standard filtration of $M$, then gr$M$ is finitely generated. The following, which is a particular case of a result due to Roy (see \cite{Roy}), can be found in \cite[page 258]{RobsonMcConnell}. \begin{Prop} \label{PropFiltered}Let $M$ be a finitely generated $A$-module, and choose a standard filtration of it in order to build ${\rm gr}M \in {\rm gr}A$. Let $$(*): \quad Q'_t \stackrel{d_t}{\to} Q'_{t-1} \stackrel{d_{t-1}}{\to} \cdots \stackrel{d_1}{\to} Q'_0 \stackrel{d_0}{\to} {\rm gr}M \to 0$$ be a finitely generated graded free resolution of ${\rm gr}M$. \begin{enumerate}[$(1)$] \item There is an exact sequence $$(**): \quad Q_t \stackrel{f_t}{\to} Q_{t-1} \stackrel{f_{t-1}}{\to} \cdots \stackrel{f_1}{\to} Q_0 \stackrel{f_0}{\to} M \to 0$$ in ${\rm fil}(A)$ where the $Q_i$ are finitely generated free filtered modules and all the maps are strict, so that $(**)$ is sent to $(*)$ by ${\rm gr}$.
\item If ${\rm ker}(d_t)$ is projective in ${\rm gr}A$, then ${\rm ker}(f_t)$ is projective. \end{enumerate} \end{Prop} For a graded $A$-module $M$, one defines $d_e(M)$ in a similar way: $d_e(M)$ is the maximal integer $i$ for which ${\rm Ext}^i_A(M,S_e)$ is non-zero (if ${\rm Ext}_A^i(M,S_e) =0$ for all $i \ge 0$, we set $d_e(M)=-1$; if ${\rm Ext}_A^i(M,S_e) \ne 0$ for infinitely many $i$, we set $d_e(M) = \infty$). We first have to make sure that the supremum of the $d_e(M)$ where $M \in {\rm gr}A$ gives the injective dimension of $S_e$. \begin{Lemma} We have ${\rm sup}\{d_e(M) \mid M \in {\rm gr}A\} = {\rm id}_A S_e$. \end{Lemma} \begin{proof} We may restrict to the case where $e$ is primitive. First, we have ${\rm sup}\{d_e(M) \mid M \in {\rm gr}A\} \le {\rm id}_A S_e$. If the inequality is strict, then, since the injective dimension can be computed on finitely generated modules, there exists a finitely generated $A$-module $M$ (necessarily not graded) such that ${\rm Ext}^r_A(M,S_e) \ne 0$, where $r > {\rm sup}\{d_e(M) \mid M \in {\rm gr}A\}$. Now, consider a standard filtration for $M$ with a finitely generated graded free resolution $$(*): \quad Q'_{r+1} \stackrel{d_{r+1}}{\to} Q'_r \stackrel{d_r}{\to} \cdots \to Q'_0 \to {\rm gr}M \to 0$$ of ${\rm gr}M$. Since ${\rm gr}M$ is finitely generated graded, ${\rm Ext}^r_A({\rm gr}M, S_e)=0$. By Proposition \ref{PropFiltered}, there is an exact sequence $$(**): \quad Q_{r+1} \stackrel{f_{r+1}}{\to} Q_r \stackrel{f_r}{\to} Q_{r-1} \stackrel{f_{r-1}}{\to} \cdots \stackrel{f_1}{\to} Q_0 \stackrel{f_0}{\to} M \to 0$$ in ${\rm fil}(A)$ where the $Q_i$ are finitely generated free filtered modules and all the maps are strict, so that $(**)$ is sent to $(*)$ by ${\rm gr}$. Since ${\rm Ext}^r_A(M,S_e) \ne 0$, there exists a morphism $f: Q_r \to S_e$ such that $ff_{r+1} = 0$ and $f$ does not factor through $f_r$. Since $S_e$ is one dimensional, the morphism $f: Q_r \to S_e$ gives rise to a strict filtered epimorphism, also denoted by $f$. By applying the functor gr, we get ${\rm gr}(f)d_{r+1}=0$, where ${\rm gr}(f): Q'_r \to S_e[t]$ for some $t$. Since ${\rm Ext}^r_A({\rm gr}M, S_e)=0$, there exists $g': Q'_{r-1} \to S_e[t]$ such that $g'd_r = {\rm gr}(f)$. Now from \cite[Chap. 7, Prop. 6.15]{RobsonMcConnell}, there exists a strict filtered morphism $g: Q_{r-1} \to S_e$ such that gr$(g) = g'$. Hence, we get gr$(f-gf_r)={\rm gr}(f)-g'd_{r}=0$. If $f-gf_r$ is non-zero, then it is a strict epimorphism from $Q_r$ to $S_e$, and hence, by \cite[Chap. 7, Cor. 6.14]{RobsonMcConnell}, gr$(f-gf_r)$ is an epimorphism, a contradiction. This shows that $f=gf_r$, which contradicts the fact that $f$ does not factor through $f_r$. \end{proof} The above lemma actually tells us that the injective dimension of $S_e$ in gr$A$ (or Gr$A$, the category of not necessarily finitely generated graded modules) coincides with the injective dimension of $S_e$ in ${\rm mod}(A)$ (or Mod$(A)$). Note that the corresponding result is not true, in general, for an arbitrary object $M$ in gr$A$. \section{Homological dimensions} \label{sectionHomDim} In this section, the algebras considered are Noetherian and they are either semiperfect or positively graded. Sometimes, we restrict to the Artinian case, that is, the finite dimensional case. As a first step to our investigation, we want to relate the homological dimensions of $A$-modules with those of the $\Gamma$-modules. More precisely, we want to relate the global dimensions of $A$ and $\Gamma$ with the homological properties of the semi-simple module $S_e$.
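Before developing the general theory, we record a small motivating example of our own (a toy case, not taken from the references). For $A = kQ/\langle\beta\alpha\rangle$ with $Q: 1 \stackrel{\alpha}{\to} 2 \stackrel{\beta}{\to} 3$ and $e = e_2$, one has ${\rm gl.dim}A = 2$; the idempotent subalgebra is $\Gamma = (1-e)A(1-e) \cong k \times k$, since the only path between the outer vertices is $\beta\alpha = 0$, so that ${\rm gl.dim}\Gamma = 0$; and $S_e = S_2$ is self-orthogonal with ${\rm pd}_A S_2 = {\rm id}_A S_2 = 1$. In particular, both global dimensions are finite and $S_e$ is self-orthogonal, in agreement with the results below; note also that ${\rm gl.dim}A = {\rm id}_A S_e + {\rm gl.dim}\Gamma + 1$ here, so the bound of Proposition \ref{Prop2} below is attained.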
Later in this section, we will assume that $e$ is primitive, but at this stage, $e$ is any idempotent of $A$, which is of degree $0$ if $A$ is positively graded. To unify notation, we denote by $\mathcal{C}(A)$ and $\mathcal{C}(\Gamma)$ the following categories. If $A$ is semiperfect, $\mathcal{C}(A) = {\rm mod} A$ and $\mathcal{C}(\Gamma) = {\rm mod} \Gamma$ and if $A$ is positively graded, $\mathcal{C}(A) = {\rm gr}A$ and $\mathcal{C}(\Gamma) = {\rm gr}\Gamma$. Recall that we have a functor $F: \mathcal{C}(A) \to \mathcal{C}(\Gamma)$ having nice properties, see Propositions \ref{HomologicalProp} and \ref{HomologicalPropGraded}. We start by relating the projective dimension of an object $M$ in $\mathcal{C}(A)$ to that of $F(M)$ in $\mathcal{C}(\Gamma)$. We say that $M \in \mathcal{C}(A)$ is \emph{self-orthogonal} if we have ${\rm Ext}^i_A(M,M)=0$ for all positive integers $i$. The following lemma is an analogue of the well known Horseshoe Lemma and will be very handy in the sequel. \begin{Lemma} \label{ConstructionProjResol} Let $0 \to L \to M \to N \to 0$ be a short exact sequence in $\mathcal{C}(A)$. Let $$\cdots \to P_1 \to P_0 \to L \to 0$$ and $$\cdots \to Q_1 \to Q_0 \to M \to 0$$ be projective resolutions of $L$ and $M$ in $\mathcal{C}(A)$, respectively. Then there exists a projective resolution $$ \cdots \to P_1 \oplus Q_2 \to P_0 \oplus Q_1 \to Q_0\to N \to 0$$ of $N$ in $\mathcal{C}(A)$. \end{Lemma} \begin{proof} Observe that in the derived category of ${\rm mod} A$, one can replace $L$ and $M$ by their respective projective resolutions, which are in $\mathcal{C}(A)$. The given short exact sequence gives rise to a triangle $$L \to M \to N \to L[1].$$ Hence $N$ is quasi-isomorphic to the cone of the morphism $L \to M$. Moreover, this cone is a complex of graded modules if $A$ is positively graded. The result is clear from this: the cone obtained has zero cohomology in all degrees but zero (where it is isomorphic to $N$), since it is quasi-isomorphic to $N$. Hence, the cone is a projective resolution of $N$. \end{proof} The following is essential. \begin{Prop} \label{firstprop} Let $M$ be in $\mathcal{C}(A)$. Then $${\rm pd}_{\Gamma}F(M) \le {\rm max}(d_e(M)+{\rm pd}_\Gamma F(eA),{\rm pd}_A M).$$ Moreover, ${\rm pd}_{\Gamma}F(M) = {\rm pd}_A M$ whenever $d_e(M)+{\rm pd}_\Gamma F(eA) < {\rm pd}_A M-1$ or $d_e(M)=-1$. \end{Prop} \begin{proof} If $d_e(M)=-1$, then ${\rm Ext}^i_A(M,S_e)=0$ for $i \ge 0$ and we have already observed that ${\rm pd}_{\Gamma}F(M) = {\rm pd}_A M$. We may assume that pd$_A M = r < \infty$ and pd$_{\Gamma}F(eA) = s < \infty$. Then $0 \le d_e(M) \le r$. Let $$\mathcal{P}_M: \quad 0 \to P_r\to \cdots \to P_1 \to P_0\to M \to 0$$ be a minimal projective resolution of $M$ in $\mathcal{C}(A)$, and suppose that for $0 \le i \le r$, we have that $P_i \cong T_i\oplus Q_i$ where $Q_i$ has no direct summand isomorphic to (a shift of) a direct summand of $eA$. Then $d_e(M)$ is the maximal $i \ge 0$ such that $T_i \ne 0$. Note that $F(Q_i)$ is a projective object in $\mathcal{C}(\Gamma)$ for all $i$.
Let $$\mathcal{P}_{F(eA)}: \quad 0 \to R_s\to \cdots \to R_1 \to R_0 \to F(eA) \to 0$$ be a minimal projective resolution of $F(eA)$ in $\mathcal{C}(\Gamma)$. By applying Lemma \ref{ConstructionProjResol} repeatedly, we see that we can get a projective resolution of $F(M)$ in $\mathcal{C}(\Gamma)$ of length at most max$(d_e(M)+s,r)$ using the $F(Q_i)$ and summands of the $R_j$ as terms. Suppose now that $d_e(M) + s < r-1$. Let $N$ be the $(d_e(M)+1)$-syzygy of $M$ in $\mathcal{C}(A)$. Then pd$_A N = r-d_e(M)-1 > s$ and the minimal projective resolution of $N$ in $\mathcal{C}(A)$ does not contain (a shift of) a direct summand of $eA$ as a summand. Therefore, applying $F$ to it, we get a minimal projective resolution of $F(N)$ in $\mathcal{C}(\Gamma)$ of length $r-d_e(M)-1$ and hence, pd$_\Gamma F(N) = r-d_e(M)-1$. Consider now the short exact sequence $$0 \to F(N) \to F(Q_{d_e(M)}) \oplus F(T_{d_e(M)}) \to F(L) \to 0,$$ in $\mathcal{C}(\Gamma)$ where $L$ is the $d_e(M)$-syzygy of $M$. By Lemma \ref{ConstructionProjResol}, we get a projective resolution of $F(L)$ in $\mathcal{C}(\Gamma)$ of length $r-d_e(M)$ where the last map is the last map of the minimal projective resolution of $F(N)$, since pd$_\Gamma F(N) > {\rm pd}_\Gamma (F(Q_{d_e(M)}) \oplus F(T_{d_e(M)}))$. Hence, this projective resolution of $F(L)$ is of minimal length and pd$_\Gamma F(L) = r-d_e(M) > s$. By induction, we get pd$_\Gamma F(M)=r$. \end{proof} As already observed, the supremum of $\{d_e(M) \mid M \in \mathcal{C}(A)\}$ is the injective dimension of $S_e$ in ${\rm mod} A$. Therefore, we have the following as a consequence, compare with \cite[Cor. 8.1 (viii)]{Psaroudakis}. \begin{Cor} \label{firstcoro} We have ${\rm gl.dim} \Gamma \le {\rm max}({\rm id}_A S_e + {\rm pd}_{\Gamma}F(eA),{\rm gl.dim} A)$, and we have ${\rm gl.dim} \Gamma = {\rm gl.dim} A$ if ${\rm id}_A S_e+{\rm pd}_{\Gamma}F(eA) < {\rm gl.dim} A -1$. \end{Cor} \begin{proof} Observe first that the functor $F$ is essentially surjective by Propositions \ref{HomologicalProp} and \ref{HomologicalPropGraded}. Moreover, the global dimension of $\Gamma$ can be computed as the supremum of the projective dimensions of the simple objects of $\mathcal{C}(\Gamma)$, and these all lie in the essential image of $F$. Thus, $${\rm sup} \{{\rm pd}_\Gamma F(M) \mid M \in \mathcal{C}(A)\} = {\rm gl.dim} \Gamma.$$ Therefore, by Proposition \ref{firstprop}, we have $${\rm gl.dim} \Gamma \le {\rm max}({\rm sup}\{d_e(M) \mid M \in \mathcal{C}(A)\} + {\rm pd}_{\Gamma}F(eA),{\rm sup}\{{\rm pd}_A(M) \mid M \in \mathcal{C}(A)\}),$$ and this yields the first part of the statement. For the second part, suppose ${\rm id}_A S_e+{\rm pd}_{\Gamma}F(eA) < {\rm gl.dim} A -1$. Let $M$ be an object in $\mathcal{C}(A)$ such that pd$_A M = {\rm gl.dim} A$. We have $$d_e(M)+{\rm pd}_\Gamma F(eA) \le {\rm id}_A S_e +{\rm pd}_\Gamma F(eA) < {\rm gl.dim} A -1 = {\rm pd}_A M-1.$$ Therefore, from Proposition \ref{firstprop}, pd$_A M = {\rm pd}_\Gamma F(M)$. This gives pd$_\Gamma F(M) = {\rm gl.dim} A$. Thus, gl.dim$\Gamma= {\rm gl.dim} A$.
\end{proof} Note that the bound obtained contains a number that depends on $\Gamma$, namely ${\rm pd}_\Gamma F(eA)$. In general, we cannot replace the latter by a number that does not depend on $\Gamma$. Indeed, in general, ${\rm gl.dim} A < \infty$ does not imply ${\rm gl.dim} \Gamma < \infty$. However, if $S_e$ is self-orthogonal, then the first syzygy $\Omega$ of $S_e$ satisfies ${\rm Ext}^i_A(\Omega, S_e) = 0$ for all $i \ge 0$. As observed above, this gives ${\rm pd}_{\Gamma}F(eA) = {\rm pd}_{\Gamma}F(\Omega) = {\rm pd}_{A}\Omega$. Hence, we get the following. \begin{Prop} \label{prop1} Suppose that $S_e$ is self-orthogonal. \begin{enumerate}[$(1)$] \item If $S_e$ is not projective, then ${\rm pd}_{\Gamma}F(eA) = {\rm pd}_{A}S_e - 1$. \item We have ${\rm gl.dim} \Gamma \le {\rm max}({\rm id}_A S_e + {\rm pd}_A S_e -1,{\rm gl.dim} A)$, with equality if ${\rm id}_A S_e + {\rm pd}_A S_e < {\rm gl.dim} A$. \item If ${\rm gl.dim} A < \infty$, then ${\rm gl.dim} \Gamma < \infty$.\end{enumerate} \end{Prop} Part (3) of Proposition \ref{prop1} can also be deduced from results in \cite{APT} or in \cite{FullerSaorin}, in the finite dimensional case. \begin{Remark} In the terminology of \cite{APT}, when $A$ is finite dimensional, $S_e$ is self-orthogonal if and only if the idempotent ideal $A(1-e)A$ is a strong idempotent ideal: every indecomposable summand in the minimal projective resolution of the right $A$-module $A(1-e)A$ is in add$((1-e)A)$. However, the above bound does not seem to appear in that paper. \end{Remark} For $M \in {\rm mod} A$, $\Omega(M)$ denotes the first syzygy of $M$. If $M \in \mathcal{C}(A)$, then $\Omega(M) \in \mathcal{C}(A)$. For finding a bound on gl.dim$A$, we have the following. \begin{Prop} \label{Prop2} Let $M \in \mathcal{C}(A)$, and set $r = {\rm id}_A S_e$. \begin{enumerate}[$(1)$] \item If $r$ is finite, then ${\rm pd}_A M \le r + {\rm pd}_\Gamma F(\Omega^{r+1}(M)) + 1$. \item We have ${\rm gl.dim} A \le r + {\rm gl.dim} \Gamma + 1$.\end{enumerate} \end{Prop} \begin{proof} Statement $(2)$ follows from statement $(1)$. Assume that $r < \infty$. Let $L = \Omega^{r+1}(M)$, that is, $L$ is the $(r+1)$-th syzygy of $M$ in $\mathcal{C}(A)$. Then ${\rm Ext}^i_A(L,S_e)=0$ for all $i$. Applying $F$ to a minimal projective resolution of $L$ yields a minimal projective resolution of $F(L)$. Hence, ${\rm pd}_A L = {\rm pd}_\Gamma F(L)$. Thus, ${\rm pd}_A M \le r+1 + {\rm pd}_A L = r + {\rm pd}_\Gamma F(L) + 1$. \end{proof} Since the left global dimension of $A$ coincides with the right global dimension of $A$, in the above proposition, we can replace the injective dimension of $S_e = eA/e{\rm rad}A$ by the injective dimension of the left $A$-module $Ae/{\rm rad}(A)e$. In case $A$ is finite dimensional, id$_A (Ae/{\rm rad}(A)e) = {\rm pd}_A S_e$. Thus, the following result follows immediately. \begin{Prop} \label{LastProp} Assume that $A$ is finite dimensional. Then $${\rm gl.dim} A \le {\rm min}({\rm id}_A S_e, {\rm pd}_A S_e) + {\rm gl.dim} \Gamma + 1.$$ \end{Prop} The following lemma is essential for describing the kernel of $F$. It will be particularly useful when $e$ is a primitive idempotent.
For $M \in \mathcal{C}(A)$, we denote by add$(M)$ the category of modules which are direct summands of finite direct sums of copies of (shifts of) $M$ in $\mathcal{C}(A)$. \begin{Lemma} \label{SNLC} Suppose that ${\rm Ext}^1_A(S_e,S_e) = 0$. Then, for $M \in \mathcal{C}(A)$, $F(M)=0$ if and only if $M \in {\rm add}(S_e)$. \end{Lemma} \begin{proof} Suppose that $M \in\mathcal{C}(A)$ is indecomposable and $F(M)=0$. Let $I = A(1-e)A$. Then $M \cong M/MI$, that is, $M$ is an $A/I$-module. Observe that $M$ has a projective cover $P \to M$ in $\mathcal{C}(A)$ with $P$ a finite direct sum of copies of (shifts of) $eA$. Hence, we have an epimorphism $P/PI \to M$. Since ${\rm Ext}^1_A(S_e,S_e) = 0$, we have that $P/PI$ is in add$(S_e)$, and so is $M$. \end{proof} In the above lemma, when $A$ is finite dimensional and $e$ is primitive, ${\rm pd}_A S_e < \infty$ is enough to guarantee the condition ${\rm Ext}^1_A(S_e,S_e) = 0$; see \cite{ILP}. However, when $A$ is not finite dimensional, the condition ${\rm pd}_A S_e < \infty$ is not sufficient. Observe also that when $A$ is not finite dimensional, in Proposition \ref{Prop2}, we cannot replace ${\rm id}_A S_e$ by ${\rm pd}_A S_e$ using a duality argument: there may not be a duality between ${\rm mod} A$ and ${\rm mod} A^{\,\rm op}$. However, we still get the following. \begin{Prop} \label{LastProp2} Assume ${\rm Ext}^1_A(S_e, S_e)=0$ and let $M \in \mathcal{C}(A)$. Then \begin{enumerate}[$(1)$] \item ${\rm pd}_A M \le {\rm pd}_A S_e + {\rm pd}_\Gamma F(M) + 1.$ \item ${\rm gl.dim} A \le {\rm pd}_A S_e + {\rm gl.dim} \Gamma + 1.$ \item ${\rm gl.dim} A \le {\rm min}({\rm id}_A S_e, {\rm pd}_A S_e) + {\rm gl.dim} \Gamma + 1.$ \end{enumerate} \end{Prop} \begin{proof} Statement (3) follows from Statement (2) and Proposition \ref{Prop2}. Statement (2) follows from Statement (1). To prove Statement (1), we may assume that ${\rm pd}_\Gamma F(M) = m < \infty$ and ${\rm pd}_A S_e = r < \infty$. Since ${\rm Ext}^1_A(S_e, S_e)=0$, we see that there exists a short exact sequence $$\eta: \;\; 0 \to N \to M \to S \to 0$$ in $\mathcal{C}(A)$ where $S\in {\rm add}(S_e)$ and the top of $N$ does not contain (a shift of) a direct summand of $S_e$ as a direct summand. Let $P_0 \to N \to 0$ be a projective cover in $\mathcal{C}(A)$ with kernel $K$. Then $F(P_0) \to F(N) \to 0$ is also a projective cover in $\mathcal{C}(\Gamma)$. We proceed by induction on $m$. Suppose first that $m=0$. Then we have $F(K)=0$. From Lemma \ref{SNLC}, we get that $K \in {\rm add}(S_e)$. Thus, ${\rm pd}_A K \le r$, and this gives ${\rm pd}_A N \le r+1$. The short exact sequence $\eta$ gives ${\rm pd}_A M \le r+1$ as wanted. Now, assume $m > 0$. We have ${\rm pd}_\Gamma F(K) \le m-1$. Therefore, by induction, ${\rm pd}_A K \le r+m$ and using the same argument as above, we get ${\rm pd}_A M \le r + m + 1$. \end{proof} \begin{Cor} Suppose that $S_e$ is self-orthogonal. Then $${\rm gl.dim} A \le 2{\rm gl.dim} \Gamma + 2$$ and $${\rm gl.dim} \Gamma \le {\rm max}({\rm id}_A S_e + {\rm pd}_A S_e -1,{\rm gl.dim} A).$$ \end{Cor} \begin{proof} The second inequality is just Proposition \ref{prop1}. Let $\Omega$ be the first syzygy of $S_e$. Then ${\rm Ext}^i_A(\Omega, S_e)=0$ for all non-negative integers $i$. This means that ${\rm pd}_A \Omega = {\rm pd}_\Gamma F(\Omega) \le {\rm gl.dim} \Gamma$.
Now from Proposition \ref{LastProp2}, \begin{eqnarray*} {\rm gl.dim} A &\le& {\rm pd}_A S_e + {\rm gl.dim} \Gamma + 1\\ & = & {\rm pd}_A \Omega + {\rm gl.dim} \Gamma + 2\\ & \le & 2{\rm gl.dim} \Gamma + 2. \end{eqnarray*} \qedhere \end{proof} The following theorem, in the case of a finite dimensional $k$-algebra, follows from the fact \cite{Howard} that when $S_e$ is self-orthogonal (that is, ${\rm Ext}_A^i(S_e, S_e)=0$ for all $i > 0$), the singularity categories of $A$ and $\Gamma$ are triangle-equivalent. \begin{Theo} Let $A$ be a Noetherian $k$-algebra which is either semiperfect or positively graded. Suppose that $S_e$ is self-orthogonal. Then ${\rm gl.dim} \Gamma < \infty$ if and only if ${\rm gl.dim} A < \infty$. \end{Theo} One question remains: if both ${\rm gl.dim} \Gamma$ and ${\rm gl.dim} A$ are finite, does this imply that $S_e$ is self-orthogonal? As already observed, the answer is no in general, even if $e$ is primitive. We may have $e$ primitive and ${\rm Ext}_A^1(S_e,S_e)\ne 0$ with both $A, \Gamma$ of finite global dimension. This cannot happen when $A$ is finite dimensional, since ${\rm pd}_A S_e < \infty$ implies ${\rm Ext}_A^1(S_e,S_e)=0$. So, in the finite dimensional case, we have the following conjecture. \begin{Conj} \label{conj3} Let $A$ be a finite dimensional $k$-algebra with $e$ primitive. If both ${\rm pd}_A S_e$ and ${\rm pd}_\Gamma (eA(1-e))$ are finite, then $S_e$ is self-orthogonal. \end{Conj} When $A$ is not finite dimensional, we simply have to add the vanishing condition ${\rm Ext}_A^1(S_e,S_e)=0$, and we get the following conjecture. \begin{Conj} \label{conj3infinite} Let $A$ be a Noetherian $k$-algebra which is either semiperfect or positively graded. Assume that $e$ is primitive (and of degree zero if $A$ is positively graded) with ${\rm Ext}_A^1(S_e,S_e)=0$. If both ${\rm pd}_A S_e$ and ${\rm pd}_\Gamma (eA(1-e))$ are finite, then $S_e$ is self-orthogonal. \end{Conj} \begin{Prop}[Assuming Conj. \ref{conj3infinite}] \label{conj1infinite} Let $A$ be a Noetherian $k$-algebra which is either semiperfect or positively graded. Let $e$ be primitive (and of degree zero if $A$ is positively graded) with ${\rm Ext}_A^1(S_e,S_e)=0$. If the global dimensions of $A$ and $\Gamma$ are finite, then $S_e$ is self-orthogonal. \end{Prop} \begin{Prop}[Assuming Conj. \ref{conj3infinite}] \label{conj2infinite} Let $A$ be a Noetherian $k$-algebra which is either semiperfect or positively graded. Let $e$ be primitive (and of degree zero if $A$ is positively graded). Any two of the following imply the third. \begin{enumerate}[$(1)$] \item ${\rm gl.dim} \Gamma < \infty$, \item ${\rm gl.dim} A < \infty$ and ${\rm Ext}_A^1(S_e,S_e)=0$, \item $S_e$ is self-orthogonal. \end{enumerate} \end{Prop} The rest of the paper is devoted to proving Conjecture \ref{conj3} and, with the additional assumption that $A/{\rm rad}A$ is finite dimensional, Conjecture \ref{conj3infinite}. Note that this additional assumption holds when $A$ is positively graded, but does not necessarily hold when $A$ is semiperfect. \section{Main tools} \label{sectionSmallHomDim} In this section, all algebras considered are Noetherian $k$-algebras and are either semiperfect or positively graded. We provide useful tools for proving Conjecture \ref{conj3infinite}.
In this section, $e$ is always assumed to be primitive, and is of degree zero if $A$ is positively graded. The following two lemmas will be our main tools in the rest of this paper. \begin{Lemma} \label{cruciallemma} Assume ${\rm Ext}_A^1(S_e,S_e)=0$, ${\rm pd}_A S_e < \infty$ and ${\rm pd}_\Gamma F(eA) < \infty$. Then ${\rm pd}_\Gamma F(eA) \ge {\rm pd}_A S_e-1$ and $d_e(S_e) \le {\rm max}(0,{\rm pd}_A S_e - 2)$. \end{Lemma} \begin{proof} We start with an easy observation. Let $M \in \mathcal{C}(A)$. Then there exists a short exact sequence $$\eta_M: \;\; 0 \to M' \to M \to S_M \to 0$$ in $\mathcal{C}(A)$ where $S_M \in {\rm add}(S_e)$ and ${\rm Hom}_A(M', S_e)=0$. Let $P \to M' \to 0$ be a projective cover in $\mathcal{C}(A)$ with kernel $K_M$. Then $F(P) \to F(M') = F(M)$ is a projective cover in $\mathcal{C}(\Gamma)$ and hence, the first syzygy of $F(M)$ is $F(K_M)$. Now, let ${\rm pd}_\Gamma F(eA) = m$ and ${\rm pd}_A S_e = r$, where we may assume $r \ge 2$. If $m=0$, then the second syzygy of $S_e$ lies in add$(S_e)$. Since pd$_A S_e$ is finite, this means that $r \le 1$. Hence, we may assume $m > 0$. Consider a minimal projective presentation $P \to eA \to S_e \to 0$ in $\mathcal{C}(A)$ of $S_e$, where we know that (a shift of) $eA$ is not a direct summand of $P$. Let the second syzygy of $S_e$ be $M_0$, so pd$_A M_0 = r-2$ and $M_0 \ne 0$. If $S_{M_0} \ne 0$, then pd$_A M_0' = r-1$; otherwise, $M_0' = M_0$ and pd$_A M_0' = r-2$. Let $M_1 = K_{M_0}$, so pd$_A M_1 = r-2$ if $S_{M_0} \ne 0$ and pd$_A M_1 = r-3$ if $S_{M_0} = 0$. For $1 \le i \le m$, we let $M_{i} = K_{M_{i-1}}$. We can prove by induction that pd$_A M_i \le r-2$ and pd$_A M_i' \le r-1$ for $0 \le i \le m-1$. At the last step, $F(M_{m-1})$ is projective in $\mathcal{C}(\Gamma)$, hence $M_m=K_{M_{m-1}} \in {\rm add}(S_e)$. But pd$_A K_{M_{m-1}} = {\rm pd}_A M'_{m-1}-1 \le r-2 < r = {\rm pd}_A S_e$. Thus, $K_{M_{m-1}} = 0$ and hence, $M'_{m-1}$ is projective in $\mathcal{C}(A)$. Since $r > 0$, this gives $S_{M_{m-1}}=0$, so $M_{m-1} = M'_{m-1}$. We can prove using another (reverse) induction that pd$_A M_i = m-1 - i$ and $M_i = M'_i$ for $i = m-1, m-2, \ldots, m-r+1$. In particular, $m-r+1 \ge 0$, that is, $m \ge r-1$. This proves the first part of the lemma. If $m-r+1 = 0$, then $M_0$ is such that ${\rm Ext}^i_A(M_0, S_e)=0$ for all $i \ge 0$ and thus, $S_e$ is self-orthogonal, which proves the second part of the lemma in this case. Assume $m-r+1 > 0$. This gives pd$_A M'_{m-r} = r-1$. Now, the exact sequence $$\eta_{M_{m-r}}: \;\; 0 \to M'_{m-r} \to M_{m-r} \to S_{M_{m-r}} \to 0$$ with $S_{M_{m-r}} \in {\rm add}(S_e)$ gives $S_{M_{m-r}} \ne 0$ since otherwise, pd$_A M_{m-r} = r-1$, contrary to what we have proven so far. Observe that ${\rm Ext}^i_A(M'_{m-r}, S_e)=0$ for all $i \ge 0$. Using $\eta_{M_{m-r}}$ together with Lemma \ref{ConstructionProjResol}, we see that we can get a projective resolution $$0 \to Q_r \to Q_{r-1} \to \cdots \to Q_0 \to S_{M_{m-r}} \to 0$$ in $\mathcal{C}(A)$ such that $Q_r, Q_{r-1}$ are the last two terms in a minimal projective resolution of $M'_{m-r}$. Thus, (a shift of) $eA$ is not a direct summand of $Q_r \oplus Q_{r-1}$. Since the latter resolution is of minimal length, this gives ${\rm Ext}^i_A(S_e, S_e)=0$ for $i=r-1, r$, and hence $d_e(S_e) \le r-2$, which proves the second part of the lemma.
\end{proof} \begin{Lemma} \label{technical} Suppose that ${\rm pd}_A S_e < \infty$, ${\rm Ext}^1_A(S_e,S_e)=0$ and ${\rm pd}_\Gamma F(eA) < \infty$. If $S_e$ is not self-orthogonal then ${\rm Ext}^{d_e(S_e)-1}_A(S_e,S_e)\ne 0$. \end{Lemma} \begin{proof} Suppose that ${\rm pd}_\Gamma F(eA)=r$. Let $M_i$ in $\mathcal{C}(A)$ be the $(d_e(S_e)+i)$-syzygy of $S_e$. We know that $M_1$ is nonzero by Lemma \ref{cruciallemma}. We have a short exact sequence $$(*): \quad 0 \to F(M_1) \to Q \oplus R \to F(M_0) \to 0,$$ in $\mathcal{C}(\Gamma)$ where $Q$ is a projective object in $\mathcal{C}(\Gamma)$ and $R$ is nonzero in add$(F(eA))$. Since $S_e$ is not self-orthogonal, $d_e(S_e) > 0$, and since ${\rm Ext}^1_A(S_e,S_e)=0$, we get $d_e(S_e) \ge 2$. Suppose to the contrary that ${\rm Ext}^{d_e(S_e)-1}_A(S_e,S_e)$ vanishes. Set $t = {\rm pd}_A S_e - d_e(S_e)-1 \ge 1$, which is the projective dimension of $F(M_1)$. By Lemma \ref{cruciallemma}, we know that $r \ge {\rm pd}_A S_e -1 \ge t + 2$, since $d_e(S_e) \ge 2$. Hence, $t \le r-2$. Let $$0 \to P_t \to P_{t-1} \to \cdots \to P_1 \to P_0 \to F(M_1) \to 0$$ be a minimal projective resolution of $F(M_1)$ in $\mathcal{C}(\Gamma)$ and $$0 \to Q_r \to Q_{r-1} \to \cdots \to Q_1 \to Q_0 \to R \to 0$$ be a minimal projective resolution of $R$ in $\mathcal{C}(\Gamma)$. By Lemma \ref{ConstructionProjResol}, we get a projective resolution $$(**): \cdots \to Q_{t+2} \to Q_{t+1} \oplus P_t \to Q_t \oplus P_{t-1}\to \cdots \to Q_1\oplus P_0 \to Q\oplus Q_0\to F(M_0) \to 0$$ of $F(M_0)$ in $\mathcal{C}(\Gamma)$, where $Q_{t+1}$ is nonzero. If $t < r-2$, then $(**)$ is clearly of minimal length since the last map is the last map $Q_{r} \to Q_{r-1}$ of the minimal projective resolution of $R$. If $t = r-2$, the last map is $Q_{t+2} \to Q_{t+1} \oplus P_t$ whose image lies in $Q_{t+1}$. Thus, this map is a radical map and hence, $(**)$ is of minimal length. Therefore, in all cases, pd$_\Gamma F(M_0)=r$. Since ${\rm Ext}^{d_e(S_e)-1}_A(S_e,S_e)=0$, we have a short exact sequence $$0 \to F(M_0) \to Q' \to F(M_{-1}) \to 0,$$ in $\mathcal{C}(\Gamma)$ where $Q'$ is projective in $\mathcal{C}(\Gamma)$. Hence, pd$_\Gamma F(M_{-1}) = r+1$. Now, since $r+1 > r$, by using an argument similar to the one in the first part of the proof, we get pd$_\Gamma F(M_{-2}) = r+2$. By induction, ${\rm pd}_\Gamma F(eA) = {\rm pd}_\Gamma F(M_{-d_e(S_e)+1})=r+d_e(S_e)-1$, which gives $d_e(S_e)=1$, a contradiction. \end{proof} \begin{Lemma} \label{NotProjective} Suppose that ${\rm pd}_A S_e < \infty$, ${\rm Ext}_A^1(S_e,S_e)=0$ and $S_e$ is not self-orthogonal. If ${\rm pd}_\Gamma F(eA)$ is finite, then ${\rm pd}_\Gamma F(eA) \ge 3$ and ${\rm pd}_A S_e \ge 4$. \end{Lemma} \begin{proof} Assume that ${\rm pd}_\Gamma F(eA) < \infty$. Since ${\rm Ext}_A^1(S_e,S_e) = 0$ and $S_e$ is not self-orthogonal, we get pd$_A S_e, d_e(S_e) \ge 2$. From Lemma \ref{cruciallemma}, we have pd$_\Gamma F(eA) \ge {\rm pd}_A S_e -1$ and $d_e(S_e) \le {\rm max}(0,{\rm pd}_A S_e - 2)$. The second inequality gives ${\rm pd}_A S_e \ge 4$. Hence, it follows from the first inequality that ${\rm pd}_\Gamma F(eA) \ge 3$.
\end{proof} \section{The conjecture} In this section, $A$ is a Noetherian $k$-algebra which is either semiperfect with $A/{\rm rad}A$ finite dimensional or positively graded. Therefore, $A/{\rm rad}A$ is a product of copies of $k$ in both cases. The idempotent $e$ is always assumed to be primitive, and of degree zero if $A$ is positively graded. We prove Conjecture \ref{conj3infinite} in this setup. For a morphism $f: M \to N$ between finitely generated modules, we denote by $\bar f$ the induced morphism $\bar f: M/{\rm rad}M \to N/{\rm rad}N$ on the tops of $M,N$. The following lemma is quite easy to prove in the finite dimensional setting, but it is not so obvious in our setting. \begin{Lemma} \label{MinResolution1} Assume that $A$ is semiperfect with $A/{\rm rad}A$ finite dimensional. Let $M \in {\rm mod} A$ admitting a projective resolution $$ 0 \to P_r \stackrel{d_r}{\longrightarrow} P_{r-1} \stackrel{d_{r-1}}{\longrightarrow} \cdots \stackrel{d_2}{\longrightarrow} P_1 \stackrel{d_1}{\longrightarrow} P_0 \stackrel{d_0}{\longrightarrow} M \to 0$$ which is minimal in ${\rm mod} A$. Let $f: M \to M$ be a morphism in ${\rm mod} A$ and $\{f_i: P_i \to P_i\}_{0 \le i \le r}$ a lifting of $f$, in ${\rm mod} A$, to the above projective resolution of $M$. If $f$ is a radical morphism, then all of the $\bar f_i$ are nilpotent. \end{Lemma} \begin{proof} Let us consider the projective cover $P_0 \stackrel{d_0}{\to} M \to 0$ of $M$ in ${\rm mod} A$ with the lifting $f_0 : P_0 \to P_0$ of $f$ in ${\rm mod} A$. Since $d_0$ is a projective cover, $\bar d_0$ is an isomorphism. By considering the diagram $$\xymatrix{P_0 \ar[r]^{d_0} \ar[d]^{f_0} & M \ar[r] \ar[d]^f & 0 \\ P_0 \ar[r]^{d_0} & M \ar[r] & 0}$$ modulo the radical, we easily see that if $f$ is an isomorphism, then so is $\bar f$ and hence $\bar f_0$ is an isomorphism. This gives that $f_0$ is an isomorphism. Conversely, if $f_0$ is an isomorphism, then $f$ is surjective. Since $M$ is a Noetherian module, it is well known that surjectivity of $f$ implies injectivity of $f$. Hence, $f_0$ is an isomorphism if and only if $f$ is. By repeating the argument at the level of the kernel of $d_0$, we see that $f$ is an isomorphism if and only if all of the $f_i$ are isomorphisms. Assume that $f$ is radical. Then $f_0$ is also a radical morphism. We claim that $1-af$ is invertible for all $a \in k$. By the above observation, it is sufficient to prove that $1-af_0$ is invertible for all $a \in k$. Since the category $\mathcal{P}(A)$ of the projective objects in ${\rm mod} A$ is Krull-Schmidt, it has a well defined radical $\mathcal{J}(A)$. For $P,Q$ indecomposable in $\mathcal{P}(A)$, $g: P \to Q$ lies in $\mathcal{J}(A)$ if and only if $g$ is a radical map. In general, if $g: P \to Q$ is represented by a matrix, then $g$ lies in $\mathcal{J}(A)$ if and only if each entry is in $\mathcal{J}(A)$. By the properties of the radical of a category, $f_0$ being in $\mathcal{J}(A)$ means that $1 - hf_0$ is invertible for all morphisms $h : P_0 \to P_0$, and in particular, $1 - af_0$ is invertible for all $a \in k$. This proves our first claim. Since for $a \in k$, the morphisms $1-af, 1-af_0$ are invertible, it follows that for all $a \in k$ and all $i$, the morphisms $1-af_i$ are invertible. Suppose that $P_i = \bigoplus_{j=1}^{t_i}Q_{ij}$ where the $Q_{ij}$ are objects in the list $\{e_1A, \ldots, e_nA\}$. Let $[f_i]$ be the matrix of $f_i$ according to this decomposition.
Since $k = \bar k$ and $A/{\rm rad}A$ is finite dimensional, each entry of $[f_i]$ can be written as a scalar times the identity (whenever this makes sense) plus a radical map. Hence, $[f_i] = E_i + F_i$, where $F_i$ is a matrix containing only radical maps and $E_i$ is a matrix containing scalar multiples of identities. The matrix of $1-af_i$ is $I-aE_i-aF_i$. This is invertible for all $a \in k$ and hence, the matrix $I-aE_i$ of $\overline{1 - af_i}$ is invertible for all $a \in k$. This implies that $E_i$ is nilpotent. Hence, $\bar f_i$ is nilpotent. \end{proof} \begin{Lemma} \label{MinResolution2} Assume that $A$ is semiperfect with $A/{\rm rad}A$ finite dimensional. Let $M \in {\rm mod} A$ admitting a projective resolution $$\mathcal{P}: \quad 0 \to P_r \stackrel{d_r}{\longrightarrow} P_{r-1} \stackrel{d_{r-1}}{\longrightarrow} \cdots \stackrel{d_2}{\longrightarrow} P_1 \stackrel{d_1}{\longrightarrow} P_0 \stackrel{d_0}{\longrightarrow} M \to 0$$ which is minimal in ${\rm mod} A$. Let $f: M^s \to M^t$ be a morphism in ${\rm mod} A$ and $\{f_i: P_i^s \to P_i^t\}_{0 \le i \le r}$ a lifting of $f$, in ${\rm mod} A$, to the projective resolutions $\mathcal{P}^s$ and $\mathcal{P}^t$ of $M^s$ and $M^t$, respectively. If $f$ is a radical morphism, then none of the $f_i$ are sections. \end{Lemma} \begin{proof} Assume that $f$ is radical. Then all the components $M \to M^t$ of $f$ are radical. Also, if some $f_i$ is a section, then all components $P_i \to P_i^t$ of $f_i$ are sections. Therefore, we may assume that $s=1$. Write $f = (f^1, f^2, \ldots, f^t)^T: M \to M^t$ and for $0 \le i \le r$, write $f_i = (f_i^1, f_i^2, \ldots, f_i^t)^T: P_i \to P_i^t$. Observe that for a given $1 \le j \le t$, the morphisms $\{f_i^j: P_i \to P_i\}_{0 \le i \le r}$ form a lifting of $f^j$, in ${\rm mod} A$, to the given projective resolution of $M$. Fix $0 \le i \le r$. It follows from Lemma \ref{MinResolution1} that the morphisms $\bar{f_i^1}, \ldots, \bar{f_i^t}$ are nilpotent. Consider the Lie subalgebra $\mathfrak{g}$ of $\mathfrak{g}\mathfrak{l}(P_i/{\rm rad}P_i)$ generated by the $\bar{f_i^1}, \ldots, \bar{f_i^t}$. Since a sum of compositions of the morphisms $f^1, \ldots, f^t$ is again radical, it follows that any element in $\mathfrak{g}$ is a nilpotent endomorphism. By Engel's Theorem, there is a common null vector $v$ for all elements of $\mathfrak{g}$. Therefore, the morphism $\bar f_i = (\bar{f_i^1}, \ldots, \bar{f_i^t})^T$ is not a section and hence, $f_i$ is not a section. \end{proof} The following lemma is an analogue of Lemma \ref{MinResolution2} in the setting of positively graded $k$-algebras. \begin{Lemma} \label{LemmaLiftingGraded} Assume that $A$ is positively graded. Let $L \in {\rm gr} A$ be generated in a single degree. Let $M = \bigoplus_{i=1}^rL[p_i]$ and $N = \bigoplus_{j=1}^sL[q_j]$. Consider a minimal projective resolution $\mathcal{P}_L$ of $L$ in ${\rm gr} A$ and use direct sums of shifts of $\mathcal{P}_L$ to build minimal projective resolutions $$\mathcal{P}_M: \quad \cdots \to P_2 \to P_1 \to P_0 \to M \to 0$$ and $$\mathcal{P}_N: \quad \cdots \to Q_2 \to Q_1 \to Q_0 \to N \to 0$$ of $M$ and $N$ in ${\rm gr} A$, respectively. Let $f: M \to N$ be a graded morphism, and for $i \ge 1$, let $f_i: P_i \to Q_i$ be graded morphisms that form a lifting of $f$ to the above projective resolutions. If $f$ is a radical morphism, then none of the $f_i$ are sections.
\end{Lemma} \begin{proof} Suppose that $f$ is a radical morphism. We may assume that $L$ is generated in degree $0$. Let us fix $m \ge 1$. Let $$\mathcal{P}_L: \quad \cdots \to R_2 \to R_1 \to R_0 \to L \to 0$$ be a minimal projective resolution of $L$ in gr$A$. Decompose $R_m$ so that $R_m = S_1[j_1] \oplus \cdots \oplus S_t[j_t]$ where the $S_i$ are nonzero projective in gr$A$ that are generated in degree $0$ and $j_1 \le j_2 \le \cdots \le j_t$ are non-negative integers. Observe that $$P_m = \bigoplus_{i=1}^rR_m[p_i] = \bigoplus_{i=1}^r\bigoplus_{l=1}^tS_l[j_l+p_i],$$ and $$Q_m = \bigoplus_{i=1}^sR_m[q_i] = \bigoplus_{i=1}^s\bigoplus_{l=1}^tS_l[j_l+q_i].$$ Now, $f_m$ is given by an $st \times rt$ matrix $[f_m]$. If $f_m$ is a section, then every column of $[f_m]$ is a section. Let $q$ be the greatest of the $q_i$ and $p$ be the greatest of the $p_i$. We may assume that in the $s \times r$ matrix of $f$, there is no zero row. Since $f$ is a radical morphism and $L$ is generated in a single degree, this means that $p > q$. Consider the column $c$ of $[f_m]$ corresponding to the summand $S_t[j_t + p]$ of $P_m$. Since $j_t + p > j_l + q_i$ for all $1 \le l \le t$, $1 \le i \le s$, we see that $c$ only contains radical morphisms, and hence cannot be a section, a contradiction. \end{proof} We are now ready to prove our first main theorem of this section. \begin{Theo} \label{MainTheo} Let $A$ be a semiperfect Noetherian $k$-algebra with $A/{\rm rad}A$ finite dimensional and assume that $e$ is primitive with ${\rm Ext}_A^1(S_e, S_e) = 0$. If both ${\rm pd}_A S_e$ and ${\rm pd}_\Gamma F(eA)$ are finite, then $S_e$ is self-orthogonal. \end{Theo} \begin{proof} Assume ${\rm pd}_A S_e = r < \infty$ and ${\rm pd}_\Gamma F(eA) = s < \infty$ and suppose to the contrary that $S_e$ is not self-orthogonal. Write $d := d_e(S_e)$. We have $d, r \ge 2$. The case $r \le 3$ can be excluded using Lemma \ref{NotProjective}. If $r = 4$, then from Lemma \ref{cruciallemma}, we get $d=2$. By Lemma \ref{technical}, we get ${\rm Ext}_A^1(S_e, S_e) \ne 0$, which is impossible. So consider the case where $r \ge 5$. Again, we have $d > 2$: indeed, $d=2$ would yield ${\rm Ext}_A^1(S_e,S_e) \ne 0$ by Lemma \ref{technical}, a contradiction. By Lemma \ref{technical}, both ${\rm Ext}_A^{d-1}(S_e,S_e)$, ${\rm Ext}_A^d(S_e,S_e)$ are nonzero. Let $L_i$ be the $i$-th syzygy of $S_e$. By definition of $d$, the syzygy $L_{d+1}$ is such that ${\rm Ext}^j_A(L_{d+1}, S_e)=0$ for all $j \ge 0$. Therefore, ${\rm pd}_\Gamma F(L_{d+1}) = {\rm pd}_A L_{d+1} = r-d-1 \ge 1$. Since $s \ge r-1$ by Lemma \ref{cruciallemma}, we get $r-d-1 \le s-d < s-2$. Consider the following part of a minimal projective resolution $$0 \to L_{d+1} \to (eA)^p \oplus P_d \to (eA)^q \oplus P_{d-1} \to L_{d-1} \to 0$$ of $S_e$ in mod$A$ where $eA$ is not isomorphic to a direct summand of $P_d \oplus P_{d-1}$ and both $p,q$ are positive. Applying the functor $F$, we get an exact sequence $$0 \to F(L_{d+1}) \to F(eA)^p \oplus F(P_d) \stackrel{f}{\to} F(eA)^q \oplus F(P_{d-1}) \to F(L_{d-1}) \to 0$$ where $F(P_d), F(P_{d-1})$ are projective $\Gamma$-modules and $F(L_{d+1})$ has projective dimension $r-d-1$. Consider now a minimal projective resolution $$0 \to Q_s \to \cdots \to Q_1 \to Q_0 \to F(eA) \to 0$$ of $F(eA)$, where $s \ge r-1 \ge 4$.
We have a commutative diagram $$\xymatrixcolsep{10pt}\xymatrixrowsep{14pt}\xymatrix{& &0 \ar[d] & 0 \ar[d]&& \\ & & Q_s^p \ar[d]^{h_s} \ar[r]^{f_s}& Q_s^q \ar[d]^{g_s}&&& \\ &&Q_{s-1}^p\ar@{.}[d] \ar[r]^{f_{s-1}}&Q_{s-1}^q\ar@{.}[d] &&\\ & &F(P_d) \oplus Q_0^p \ar[d] \ar[r]^{f_0}& F(P_{d-1}) \oplus Q_0^q \ar[d] &&\\0 \ar[r]& F(L_{d+1})\ar[r]& F(P_d) \oplus F(eA)^p \ar[r]^f& F(P_{d-1}) \oplus F(eA)^q \ar[r] & F(L_{d-1}) \ar[r]&0\\}$$ where $\{f_i\}_{0 \le i \le s}$ is a lifting of $f$ to the projective resolutions in the diagram above. By Lemma \ref{ConstructionProjResol}, this yields a projective resolution of length $s+1$ of $F(L_{d-1})$. The tail of this projective resolution is $$0 \rightarrow Q_s^p \stackrel{u}{\rightarrow} Q_s^q \oplus Q_{s-1}^p$$ where $u = (f_s, h_s)^T$. Assume that $u$ is a section, which means that $f_s$ is a section, since $h_s$ is a radical map. Decompose $f$ as a $2 \times 2$ matrix whose component corresponding to $F(eA)^p \to F(eA)^q$ is denoted $g$. Similarly decompose $f_0$ as a $2 \times 2$ matrix whose component corresponding to $Q_0^p \to Q_0^q$ is denoted $h$. We get a commutative diagram $$\xymatrix{0 \ar[r] & Q_s^p \ar[d]^{f_s} \ar[r] & Q_{s-1}^p \ar[d]^{f_{s-1}}\ar[r] & \cdots \ar[r] & Q_1^p \ar[d]^{f_1}\ar[r] & Q_0^p \ar[d]^h\ar[r] & F(eA)^p \ar[r]\ar[d]^g & 0\\ 0 \ar[r] & Q_s^q \ar[r] & Q_{s-1}^q \ar[r] & \cdots \ar[r] & Q_1^q \ar[r] & Q_0^q \ar[r] & F(eA)^q \ar[r] & 0}$$ Observe that $g$ is a radical map, which gives that $f_s$ is not a section, by Lemma \ref{MinResolution2}. This is a contradiction. Therefore, $u$ is not a section, meaning that ${\rm pd}_\Gamma F(L_{d-1})=s+1$. Since for $P$ finitely generated projective in ${\rm mod} A$, we have ${\rm pd}_\Gamma F(P) \le s$, we get ${\rm pd}_\Gamma F(L_{d-1-i})=s+1+i$ for $0 \le i \le d-2$. For $i=d-2$, we get ${\rm pd}_\Gamma F(L_{1}) = {\rm pd}_\Gamma F(eA) =s+d-1 \ne s$, since $d \ne 1$. This is a contradiction. \end{proof} Of course, the above theorem also establishes the following weaker version. \begin{Theo} Let $A$ be a semiperfect Noetherian $k$-algebra with $A/{\rm rad}A$ finite dimensional and assume that $e$ is primitive with ${\rm Ext}_A^1(S_e, S_e) = 0$. If both ${\rm gl.dim} A$ and ${\rm gl.dim}\Gamma$ are finite, then $S_e$ is self-orthogonal. \end{Theo} In case $A$ is positively graded, we get the following. \begin{Theo} \label{MainTheo2} Let $A$ be a positively graded Noetherian $k$-algebra, and assume that $e$ is primitive of degree zero with ${\rm Ext}_A^1(S_e, S_e) = 0$. If ${\rm pd}_A S_e$ and ${\rm pd}_\Gamma F(eA)$ are finite, then $S_e$ is self-orthogonal. \end{Theo} \begin{proof} Same as the proof of Theorem \ref{MainTheo}, working in the categories ${\rm gr}A$ and ${\rm gr}\Gamma$ rather than ${\rm mod} A$ and ${\rm mod} \Gamma$, and using Lemma \ref{LemmaLiftingGraded} rather than Lemma \ref{MinResolution2}. The modules $(eA)^p, (eA)^q$ in the proof of Theorem \ref{MainTheo} have to be replaced by non-trivial direct sums of shifts of $eA$, and $P_d \oplus P_{d-1}$ does not have a shift of $eA$ as a direct summand. \end{proof} Of course, we also have the following weaker version. \begin{Theo} Let $A$ be a positively graded Noetherian $k$-algebra, and assume that $e$ is primitive of degree zero with ${\rm Ext}_A^1(S_e, S_e) = 0$.
If both ${\rm gl.dim}\, A$ and ${\rm gl.dim}\, \Gamma$ are finite, then $S_e$ is self-orthogonal. \end{Theo} \section{Examples} \subsection{Skew group rings with cyclic groups} Let $k$ be an algebraically closed field of characteristic zero and let $m \ge 2$ be an integer. Let $S=k[x_1,\ldots,x_n]$ and $\mu_m = \{z \in k \,\mid\, z^m=1 \}.$ We define a diagonal action of $\mu_m$ on $S$ in superscript notation, $x_i^\zeta = \zeta^{a_i}x_i$ for $\zeta \in \mu_m$, by choosing $a_1,\ldots,a_n \in \mathbb{Z}/m$. Consider the skew group algebra $$A=S \rtimes \mu_m = \bigoplus_{\zeta \in \mu_m} S\zeta$$ with multiplication $\zeta x_i= x_i^\zeta\zeta.$ Let $\chi_i: \mu_m \rightarrow \mu_m$ be the character $\chi_i(\zeta)=\zeta^i$. For $i \in \mathbb{Z}/m$, let $e_i = \frac{1}{m} \sum_{\zeta \in \mu_m} \chi_i(\zeta) \zeta$ be the primitive idempotents of $A$. Note that $\chi_i(\zeta)$ is a coefficient in $k$ and $\zeta$ is an element of the group $\mu_m$ in this expression of $e_i$ in the group algebra $k\mu_m$, which is a subalgebra of $A$. It is well known that $A$ can be presented as the path algebra with relations via the McKay graph and commuting relations~\cite{CMT,BSW}. More specifically, we define $Q_0 =\mathbb{Z}/m$ and we have $n$ arrows that go into and out of each vertex. The arrows that start from $i$ go to vertices $i+a_1,\ldots, i+a_n$. Write $x_i^j$ for the arrow from vertex $j$ to vertex $j+a_i$. The commuting relations are of the form $$ x_i^{j+a_k} x^j_k = x_k^{j+a_i} x^j_i.$$ The explicit isomorphism of $A$ with the path algebra $kQ$ modulo these relations can be seen in Proposition 2.8(3) of \cite{CMT} or in Corollary 4.1 of \cite{BSW}. Let $S_i$ and $P_i$ be the simple and projective modules at vertex $i$, respectively. By attaching the correct weights on the usual Koszul resolution to make it equivariant, we get that the projective resolutions of the simple modules are of the form: $$\rightarrow \bigoplus_{1 \leq x < y \leq n} P_{k+a_x+a_y} \rightarrow \bigoplus_{1 \leq x \leq n} P_{k+a_x} \rightarrow P_k \rightarrow S_k \rightarrow 0.$$ $$ 0 \rightarrow P_{k+\sum a_i} \rightarrow \bigoplus_{1 \leq x \leq n} P_{k-a_x+\sum a_i} \rightarrow \bigoplus_{1 \leq x < y \leq n} P_{k-a_x-a_y+\sum a_i} \rightarrow \cdots $$ Note that the global dimension of $A$ is $n$, and that $A$ is positively graded and Noetherian. Now, Proposition \ref{prop1} yields the following. \begin{Cor} Let $e=e_k$ be a primitive idempotent of $A=S \rtimes \mu_m$ as above. If $\sum_{i \in I} a_i \not\equiv 0 \pmod{m}$ for all non-empty subsets $I \subseteq \{1,\ldots,n\}$, then ${\rm gl.dim} \; \Gamma \leq 2n-1$. \end{Cor} For some particular examples, we could take weights $a_i=1$ for $n<m$, so that we obtain an algebra $\Gamma$ that can be interpreted as a noncommutative resolution of the affine cone of the $m^{th}$ Veronese embedding of $\mathbb{P}^{n-1}$. \subsection{Skew group algebras of dimension two} Let $G$ be a finite group and let $V$ be a two dimensional representation of $G$ via the map $\rho: G \rightarrow {\rm GL}(V)$. Let $\overline{A}=k[V] \rtimes G$. It is shown in Theorem 5.6 of \cite{RVDB} that $\overline{A}$ is Morita equivalent to a basic algebra $A$ with quiver given by the McKay graph. The McKay graph is defined by letting $Q_0=\{1,\ldots,n\}$ be indexed by the set $\{ W_1,\ldots,W_n \}$ of irreducible representations of $G$.
The number of arrows from $i$ to $j$ is given by the dimension of the vector space ${\rm Hom}_G(W_i, W_j \otimes V)$. As studied in \cite{RVDB}, this quiver is a finite translation quiver with translation $\tau(i) = j$ where $W_i \otimes \wedge^2 V \simeq W_j$. We also know that $A$ is Noetherian and Koszul and has global dimension two. The projective resolutions of the simples are given by $$0\rightarrow P_{\tau(i)} \rightarrow \bigoplus_{i \rightarrow j} P_j \rightarrow P_i \rightarrow S_i \rightarrow 0.$$ We make the following observation, which is well known to experts: if there are no loops or $\tau$-loops at the vertex corresponding to $W_i$, then the subalgebra $\Gamma$ obtained by deleting that vertex has global dimension $3$. \begin{Cor} \label{cor8.2}Let $A$ be a skew group algebra of dimension two as above. Let $e$ be the primitive idempotent corresponding to the irreducible representation $W_i$. If ${\rm Hom}_G(W_i,W_i\otimes V)=0$ and $W_i \otimes \wedge^2 V \not \cong W_i$ then ${\rm gl.dim}\, \Gamma = 3$. \end{Cor} \begin{proof} It follows from Proposition~\ref{prop1} that ${\rm gl.dim}\, \Gamma \leq 3$. Let $W_m = \wedge^2V^* \otimes W_i$, so that $m = \tau^{-1}(i) \neq i$. We have the resolutions $$0\rightarrow P_{i} \rightarrow \bigoplus_{m \rightarrow j} P_j \rightarrow P_m \rightarrow S_m \rightarrow 0.$$ $$0\rightarrow P_{\tau(i)} \rightarrow \bigoplus_{i \rightarrow \ell} P_\ell \rightarrow P_i \rightarrow S_i \rightarrow 0.$$ Note that ${\rm Ext}^j_A(S_m, S_i)=0$ for $j = 0, 1$ and ${\rm Ext}^j_A(S_i, S_i)=0$ for $j = 1,2$. Now we apply the functor $F$ and combine the resulting sequences to obtain the projective resolution $$0\rightarrow F(P_{\tau(i)}) \rightarrow \bigoplus_{i \rightarrow \ell} F(P_\ell) \rightarrow \bigoplus_{m \rightarrow j} F(P_j) \rightarrow F(P_m) \rightarrow F(S_m) \rightarrow 0.$$ Note that the maps are all radical maps, so this is a minimal resolution of $F(S_m)$. So we must have that ${\rm gl.dim}\, \Gamma = 3$. \end{proof} For a more specific example, let $r>1$ and let $G$ be the dihedral group $\langle \sigma,\tau \,\mid\, \sigma^r = \tau^2 = 1, \sigma\tau=\tau\sigma^{-1} \rangle$. The representation $V$ is defined by \[ \sigma = \begin{pmatrix} \zeta & 0 \\ 0 & \zeta^{-1} \end{pmatrix} \ \ , \ \ \tau = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \] where $\zeta$ is a primitive $r$-th root of unity. Now we can let $e$ be the primitive idempotent corresponding to any of the one dimensional representations. These are the trivial representation and $\wedge^2 V$ when $r$ is odd, and there are four one dimensional representations when $r$ is even. For more details, see Section 8.2 of \cite{CHI}, where this example is referred to as type $BL$ for $r$ odd and type $B$ for $r$ even; the dihedral group of order $8$, that is, $r=4$, is treated in Example 5.1 of \cite{BSW}. For another specific example, where the group does not contain any pseudo-reflections, we can let $G$ be the subgroup of ${\rm GL}(V)$ generated by $$ \begin{pmatrix} \zeta_{6} & 0 \\ 0 & \zeta_{6} \end{pmatrix}, \begin{pmatrix} 0 & -\zeta_8 \\ -\zeta_8^3 & 0 \end{pmatrix}, \begin{pmatrix} -\zeta_8 & 0 \\ 0 & -\zeta_8^3 \end{pmatrix}$$ where $\zeta_n$ is a primitive $n^{th}$ root of unity. All possible primitive idempotents will satisfy the conditions of Corollary \ref{cor8.2}, so ${\rm gl.dim}\, \Gamma = 3$.
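As an illustrative check of the hypotheses of Corollary \ref{cor8.2} in the dihedral example (we include it only for orientation), take $W_i$ one dimensional. Then $W_i^* \otimes W_i \cong k$, so $$ {\rm Hom}_G(W_i, W_i \otimes V) \cong {\rm Hom}_G(k, V) = V^G = 0, $$ since a $\sigma$-invariant vector $(a,b)$ would satisfy $(\zeta a, \zeta^{-1} b)=(a,b)$, forcing $a=b=0$ when $r>1$. Moreover $\wedge^2 V$ is the determinant character, which sends $\tau$ to $-1$, so $W_i \otimes \wedge^2 V \not\cong W_i$. Hence both conditions of the corollary hold for any one dimensional $W_i$.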
For one last example, we generalize the above corollary to the case of more than one idempotent and we omit the proof, which also follows from Proposition \ref{prop1}. \begin{Cor} Let $A$ be a skew group algebra of dimension two as above. Let $e$ be the idempotent corresponding to a direct summand $W$ of $\bigoplus_{i=1}^n W_i$ and let $\Gamma=(1-e)A(1-e)$. If ${\rm Hom}_G(W,W\otimes V)=0$ and ${\rm Hom}_G(W,W \otimes \wedge^2 V)=0$ then ${\rm gl.dim}\, \Gamma \le 3.$ \end{Cor} It is interesting to compare this with Theorem 2.10 of \cite{IW}, where $e$ corresponds to the set of special Cohen-Macaulay modules over $k[V]^G$. \noindent{\emph{Acknowledgment}.} The authors are supported by NSERC while the second author is also supported in part by AARMS. The first author would like to thank Michael Wemyss for helpful comments. Both authors would like to express their gratitude to an anonymous referee for pointing out Lemma \ref{MinResolution2}, which made Theorem \ref{MainTheo} much stronger. \end{document}
\begin{document} \begin{center} \LARGE Maximal, potential and singular operators\\ in the local "complementary" variable exponent Morrey type spaces \end{center} \centerline{\large {\bf Vagif S. Guliyev }, \,\, {\bf Javanshir J. Hasanov} } \centerline{\it Institute of Mathematics and Mechanics, Baku, Azerbaijan.} \centerline{\it E-mail: [email protected]} \centerline{\it E-mail: [email protected]} \ \centerline{\large \bf Stefan G. Samko} \centerline{\it University of Algarve, Portugal.} \centerline{\it E-mail: [email protected]} \ \begin{abstract} We consider local "complementary" generalized Morrey spaces ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$ in which the $p$-means of a function are controlled over $\Omega\backslash B(x_0,r)$ instead of $B(x_0,r)$, where $\Omega \subset {\mathbb{R}^n}$ is a bounded open set, $p(x)$ is a variable exponent, and no monotonicity type condition is imposed on the function $\omega(r)$ defining the "complementary" Morrey-type norm. In the case where $\omega$ is a power function, we reveal the relation of these spaces to weighted Lebesgue spaces. In the general case we prove the boundedness of the Hardy-Littlewood maximal operator and of Calderon-Zygmund singular operators with standard kernel in such spaces. We also prove a Sobolev type ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega} (\Omega)\rightarrow {\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{q(\cdot),\omega} (\Omega)$-theorem for the potential operators $I^{\alpha(\cdot)}$, also of variable order. In all cases the conditions for the boundedness are given in terms of Zygmund-type integral inequalities on $\omega(r)$, without any monotonicity assumption on $\omega(r)$. \end{abstract} \begin{quote}\small {\it Key Words:} generalized Morrey space; local "complementary" Morrey spaces; maximal operator; fractional maximal operator; Riesz potential; singular integral operators; weighted spaces \end{quote} \begin{quote}\small 2000 \textit{ Mathematics Subject Classification: } Primary 42B20, 42B25, 42B35 \end{quote} \section{Introduction} \setcounter{theorem}{0} \setcounter{equation}{0} In the study of local properties of solutions of partial differential equations, together with weighted Lebesgue spaces, Morrey spaces ${\cal L}^{p, \lambda}({\mathbb{R}^n})$ play an important role, see \cite{Gi}. Introduced by C. Morrey \cite{Morrey} in 1938, they are defined by the norm \begin{equation}\label{11} \left\| f\right\|_{{\cal L}^{p,\lambda }}: = \sup_{x, \; r>0 } r^{-\frac{\lambda}{p}} \|f\|_{L^{p}(B(x,r))} , \end{equation} where $0 \le \lambda < n,$ $1\le p < \infty.$ We refer in particular to \cite{KJF} for the classical Morrey spaces. As is known, during the last two decades there has been an increasing interest in the study of variable exponent spaces and of operators with variable parameters in such spaces; we refer for instance to the survey papers \cite{Din3}, \cite{K1}, \cite{KSAMADE}, \cite{Sam4} and the recent book \cite{160zc} on the progress in this field, see also references therein.
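For orientation, we note the elementary limiting case of the norm \eqref{11}: for $\lambda=0$ the supremum over all balls simply recovers the Lebesgue norm, $$\left\| f\right\|_{{\cal L}^{p,0}}=\sup_{x, \; r>0 }\|f\|_{L^{p}(B(x,r))}=\|f\|_{L^p({\mathbb{R}^n})},$$ so that ${\cal L}^{p,0}({\mathbb{R}^n})=L^p({\mathbb{R}^n})$, while for $\lambda>0$ the norm \eqref{11} measures a genuine local regularity of $f$.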
The spaces defined by the norm \eqref{11} are sometimes called \textit{global} Morrey spaces, in contrast to \textit{local} Morrey spaces defined by the norm \begin{equation}\label{11local} \left\| f\right\|_{{\cal L}^{p,\lambda }_{\{x_0\}}}: = \sup_{r>0 } r^{-\frac{\lambda}{p}} \|f\|_{L^{p}(B(x_0,r))}. \end{equation} Variable exponent Morrey spaces ${\cal L}^{p(\cdot),\lambda(\cdot)}(\Omega)$ were introduced and studied in \cite{AHS} and \cite{MizShim} in the Euclidean setting and in \cite{KokMeskhi-Morrey} in the setting of metric measure spaces, in the case of bounded sets $\Omega$. In \cite{AHS} there was proved the boundedness of the maximal operator in variable exponent Morrey spaces ${\cal L}^{p(\cdot),\lambda(\cdot)}(\Omega)$ and a Sobolev-Adams type ${\cal L}^{p(\cdot),\lambda(\cdot)}\rightarrow {\cal L}^{q(\cdot),\lambda(\cdot)}$-theorem for potential operators of variable order $\alpha(x)$. In the case of constant $\alpha$, there was also proved the ${\cal L}^{p(\cdot),\lambda(\cdot)}\to BMO$-boundedness in the limiting case $p(x) = \frac{n-\lambda(x)}{\alpha}.$ In \cite{MizShim} the maximal operator and potential operators were considered in a somewhat more general space, but under more restrictive conditions on $p(x)$. P. H\"ast\"o in \cite{Hasto} used his "local-to-global" approach to extend the result of \cite{AHS} on the maximal operator to the case of the whole space ${\mathbb{R}^n}$. In \cite{KokMeskhi-Morrey} there was proved the boundedness of the maximal operator and the singular integral operator in variable exponent Morrey spaces ${\cal L}^{p(\cdot),\lambda(\cdot)}$ in the general setting of metric measure spaces. In the case of constant $p$ and $\lambda$, the results on the boundedness of potential operators and classical Calderon-Zygmund singular operators go back to \cite{Adams} and \cite{P}, respectively, while the boundedness of the maximal operator in the Euclidean setting was proved in \cite{ChiFra}; for further results in the case of constant $p$ and $\lambda$ see for instance \cite{BurGul1}--\cite{BurGulSerTar1}. In \cite{GulHasSam} we studied the boundedness of the classical integral operators in the generalized variable exponent Morrey spaces \ $\mathcal{M}^{p(\cdot),\varphi}(\Omega)$ over an open bounded set $\Omega \subset {\mathbb{R}^n}.$ Generalized Morrey spaces of such a kind in the case of constant $p$, with the norm \eqref{11} replaced by \begin{equation}\label{12} \left\| f\right\|_{{\cal L}^{p,\varphi }}: = \sup_{x, \; r>0 }\frac{r^{-\frac{n}{p}}}{\varphi(r)} \|f\|_{L^{p}(B(x,r))} , \end{equation} were studied, under some assumptions on $\varphi$, in \cite{EridGunaNakai}, \cite{KurNishSug}, \cite{Miz}, \cite{Nakai}, \cite{Nakai1}. Results of \cite{GulHasSam} were extended in \cite{GulHasSam1} to the case of the generalized Morrey spaces ${\cal M}^{p(\cdot),\theta(\cdot) ,\omega(\cdot)}(\Omega)$ (where the $L^\infty$-norm in $r$ in the definition of the Morrey space is replaced by the Lebesgue $L^\theta$-norm); we refer to \cite{BurHus1} for such spaces in the case of constant exponents. In \cite{Gul} (see also \cite{GulBook}) there were introduced and studied local "complementary" generalized Morrey spaces
${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p,\omega}(\Omega)$, $\Omega\subseteq{\mathbb{R}^n}$, with constant $p\in [1,\infty)$, defined as the space of all functions $f \in L_{\textrm{loc}}^{p}(\Omega\backslash \{x_0\})$ with the finite norm \begin{equation*}\label{103} \|f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p,\omega}(\Omega)}= \sup_{r>0 } \frac{r^{\frac{n}{p^\prime}}}{\omega(r)} \|f\|_{L^{p}(\Omega\backslash B(x_0,r))}. \end{equation*} For the particular case when $\omega$ is a power function (\cite{Gul}, \cite{GulBook}; see also \cite{GulMus1, GulMus2}), we find it convenient to keep the traditional notation ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)$ for the space defined by the norm \begin{equation}\label{13} \|f\|_{{\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)}= \sup_{r>0 } r^{\frac{\lambda}{p^\prime}} \|f\|_{L^{p}({\mathbb{R}^n}\backslash B(x_0,r))}<\infty, \ \ \ x_0\in \Omega, \ \ \ 0 \le \lambda < n. \end{equation} Obviously, we recover the space ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)$ from ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p,\omega}(\Omega)$ under the choice $\omega(r)=r^{\frac{n-\lambda}{p^\prime}}.$ In contrast to the Morrey space, where one measures the regularity of a function $f$ near the point $x_0$ (in the case of local Morrey spaces) or near all points $x\in\Omega$ (in the case of global Morrey spaces), the norm \eqref{13} is designed to measure a "bad" behaviour of $f$ near the point $x_0$ in terms of the possible growth of $\|f\|_{L^{p}(\Omega\backslash B(x_0,r))}$ as $r\to 0$. Correspondingly, one admits $\varphi(0)=0$ in \eqref{12} and $\omega(0)=\infty$ in \eqref{13}. In this paper we consider local "complementary" generalized Morrey spaces ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$ with variable exponent $p(\cdot)$, see Definition \ref{def0}. However, we start with the case of constant $p$ and in this case reveal an intimate connection of the complementary spaces with weighted Lebesgue spaces. In the case where $\omega(r)$ is a power function, we show that the space ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)$ is embedded between the weighted space $L^p(\Omega, |x-x_0|^{\lambda(p-1)})$ and its weak version, but does not coincide with either of them, which elucidates the nature of these spaces. In the general case, for the spaces ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$ over bounded sets $\Omega \subset {\mathbb{R}^n}$ we consider the following operators: \\ 1) the Hardy-Littlewood maximal operator $$ Mf(x)=\sup\limits_{r>0}\frac{1}{|B(x,r)|} \int\limits_{\widetilde{B}(x,r)}|f(y)|dy; $$ 2) variable order potential type operators $$ I^{\alpha(x)} f(x)=\int\limits_{\Omega} \frac{f(y)\,dy}{|x-y|^{n-\alpha(x)}}\,; $$ 3) the variable order fractional maximal operator $$ M^{\alpha(x)}f(x)=\sup\limits_{r>0}\frac{1}{|B(x,r)|^{1-\frac{\alpha(x)}{n}}} \int\limits_{\widetilde{B}(x,r)}|f(y)|dy, $$ where $0<\inf \alpha(x)\le \sup\alpha(x)<n$; and\\ 4) Calderon-Zygmund type singular operators \begin{equation*}\label{s1} T f(x)=\int\limits_{\Omega}K(x,y) f(y) dy \end{equation*} with a "standard" singular kernel in the sense of R. Coifman and Y. Meyer, see for instance \cite{132}, p. 99.
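For the reader's convenience we recall one common formulation of the "standard kernel" conditions (the precise variant used in \cite{132} may differ in inessential details): the kernel $K$ is assumed to satisfy the size and smoothness estimates $$|K(x,y)|\le \frac{C}{|x-y|^{n}}, \qquad x\ne y,$$ $$|K(x,y)-K(x,z)|+|K(y,x)-K(z,x)|\le C\,\frac{|y-z|^{\sigma}}{|x-y|^{n+\sigma}}, \qquad |x-y|>2|y-z|,$$ for some exponent $\sigma\in(0,1]$.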
We find conditions on the pair of functions $\omega_1(r)$ and $\omega_2(r)$ for the $p(\cdot)\to p(\cdot)$-boundedness of the operators $M$ and $T$ from a variable exponent local "complementary" generalized Morrey space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1}(\Omega)$ into another one ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_2}(\Omega)$, and for the corresponding Sobolev $p(\cdot)\to q(\cdot)$-boundedness for the potential operators $I^{\alpha(\cdot)}$, under the log-condition on $p(\cdot)$. The paper is organized as follows. In Section \ref{relations} we start with the case of the spaces ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)$ with constant $p$ and show their relation to the weighted Lebesgue space $L^p(\Omega, |x-x_0|^{\lambda(p-1)}).$ The main statements are given in Theorems \ref{lem78} and \ref{lem1}. In Section \ref{Preliminaries} we provide necessary preliminaries on variable exponent Lebesgue and Morrey spaces. In Section \ref{generalized} we introduce the local "complementary" generalized Morrey spaces with variable exponents and recall some facts known for generalized Morrey spaces with constant $p$. In Section \ref{sectionmaximal} we deal with the maximal operator, while potential operators are studied in Section \ref{potentials}. In Section \ref{singular} we treat Calderon-Zygmund singular operators. The main results are given in Theorems \ref{M1}, \ref{M1X} and \ref{SIO1}. We emphasize that the results we obtain for generalized Morrey spaces are new even in the case when $p(x)$ is constant, because we do not impose any monotonicity type condition on $\omega(r).$ \noindent N o t a t i o n : \\ ${\mathbb{R}^n}$ is the $n$-dimensional Euclidean space;\\ $\Omega \subset {\mathbb{R}^n}$ is an open set, \ $\ell=$ diam\, $\Omega$;\\ $\chi_E(x)$ is the characteristic function of a set $E\subseteq {\mathbb{R}^n}$;\\ $B(x,r)=\{y \in {\mathbb{R}^n} :|x-y| < r \}$, \ $\widetilde{B}(x,r)=B(x,r)\cap\Omega$;\\ by $c$, $C$, $c_1$, $c_2$, etc., we denote various absolute positive constants, which may have different values even in the same line. \section{Relations of the "complementary" Morrey spaces ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)$ with weighted Lebesgue spaces; the case of constant $p$}\label{relations} \setcounter{theorem}{0} \setcounter{equation}{0} We use the standard notation $L^p(\Omega, \varrho)=\left\{f: \int_\Omega \varrho(y)|f(y)|^p\, dy <\infty\right\}, $ where $\varrho$ is a weight function. For the space ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)$ defined in \eqref{13}, the following statement holds. \begin{theorem}\label{lem78} Let $\Omega$ be a bounded open set, $1\le p <\infty, \ 0\le\lambda\le n$ and $A>\ell$. Then \begin{equation}\label{2a3255}
L^p(\Omega, |y-x_0|^{\lambda(p-1)}) \hookrightarrow {\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega) \hookrightarrow \ \bigcap\limits_{\varepsilon>0} L^p\left(\Omega, \frac{|y-x_0|^{\lambda(p-1)}}{\left(\ln\frac{A}{|y-x_0|}\right)^{1+\varepsilon}}\right), \end{equation} where both embeddings are strict, with the counterexamples $f(x)=\frac{1}{|x-x_0|^{\frac{n}{p}+\frac{\lambda}{p^\prime}}}$ and $g(x)=\frac{\ln\left(\ln \frac{B}{|x-x_0|}\right)}{|x-x_0|^{\frac{n}{p}+\frac{\lambda}{p^\prime}}}, \ B>\ell e^e$: $$f\in {\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega), \ \ \ \textrm{but} \ \ \ \ f \notin L^p(\Omega, |y-x_0|^{\lambda(p-1)}),$$ and $$ g\in \bigcap\limits_{\varepsilon>0} L^p\left(\Omega, \frac{|y-x_0|^{\lambda(p-1)}}{\left(\ln\frac{A}{|y-x_0|}\right)^{1+\varepsilon}}\right), \ \ \ \ \textrm{but} \ \ \ \ g \notin {\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega).$$ \end{theorem} \begin{proof}\\ \textit{1$^0.\ $ The left-hand side embedding}.\\ Denote $\nu=\lambda(p-1).$ For all $0<r<\ell$ we have \begin{equation}\label{1hop} \left(\int_\Omega |y-x_0|^\nu|f(y)|^p\,dy\right)^\frac{1}{p}\ge \left(\int_{\Omega\backslash \widetilde{B}(x_0,r)} |y-x_0|^\nu|f(y)|^p\,dy\right)^\frac{1}{p}\ge r^\frac{\nu}{p} \left(\int_{\Omega\backslash \widetilde{B}(x_0,r)} |f(y)|^p\,dy\right)^\frac{1}{p}. \end{equation} Thus $\|f\|_{L^p(\Omega, |y-x_0|^\nu)}\ge r^\frac{\lambda}{p^\prime} \|f\|_{L^p(\Omega\backslash \widetilde{B}(x_0,r))}$ and then $$\|f\|_{L^p(\Omega, |y-x_0|^\nu)}\ge \|f\|_{{\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega) }.$$ \textit{2$^0.\ $ The right-hand side embedding}.\\ We take $x_0=0$ for simplicity and denote $w_\varepsilon(|y|)=\frac{|y|^{\lambda(p-1)}}{\left(\ln\frac{A}{|y|}\right)^{1+\varepsilon}}.$ We have \begin{equation}\label{trick} \int_{\widetilde{B}(x_0,t)}|f(y)|^p w_\varepsilon(|y|)dy = \int_{\widetilde{B}(x_0,t)}|f(y)|^p \left(\int_0^{|y|} \frac{d}{ds}w_\varepsilon(s)ds\right)dy, \end{equation} with $$w^\prime_\varepsilon(t)=t^{\lambda(p-1)-1}\left[\frac{\lambda(p-1)}{\left(\ln\frac{A}{t}\right)^{1+\varepsilon}}+\frac{(1+\varepsilon)} {\left(\ln\frac{A}{t}\right)^{2+\varepsilon}}\right]\ge 0.$$ Therefore, $$ \int_{\widetilde{B}(x_0,t)}|f(y)|^p w_\varepsilon(|y|)dy= \int_0^{t} w_\varepsilon^\prime(s) \, \left(\int_{\{y \in \Omega : s<|x_0-y|< t\}} |f(y)|^p dy\right)ds\le $$ $$\le \int_0^\ell w^\prime_\varepsilon(s) \|f\|^p_{L^{p}(\Omega\backslash \widetilde{B}(x_0,s))} ds \le \|f\|^p_{{\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)}\int_0^\ell s^{-\lambda(p-1)}w_\varepsilon^\prime(s)\, ds,$$ where the last integral converges when $\varepsilon>0$ since $s^{-\lambda(p-1)}w_\varepsilon^\prime(s)\le \frac{C}{s\left(\ln\frac{A}{s}\right)^{1+\varepsilon}}.$ \textit{3$^0.\ $ The strictness of the embeddings}.\\ Calculations for the function $f$ are obvious.
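For completeness, we record the short computation behind this claim in the main case $1<p<\infty$, $\lambda>0$. Since $|f(y)|^p=|y-x_0|^{-n-\lambda(p-1)}$, we have $$\int_\Omega |y-x_0|^{\lambda(p-1)}|f(y)|^p\,dy=\int_\Omega \frac{dy}{|y-x_0|^{n}}=\infty,$$ while for $0<r<\ell$ $$\|f\|^p_{L^p(\Omega\backslash B(x_0,r))}\le \int_{r<|y-x_0|<\ell}\frac{dy}{|y-x_0|^{n+\lambda(p-1)}}\le C\, r^{-\lambda(p-1)},$$ so that $r^{\frac{\lambda}{p^\prime}}\|f\|_{L^p(\Omega\backslash B(x_0,r))}\le C$ uniformly in $r$, that is, $f\in {\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)$ but $f\notin L^p(\Omega, |y-x_0|^{\lambda(p-1)})$.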
In the case of the function $g$, take $x_0=0$ for simplicity and denote $w_\varepsilon(|x|)=\frac{|x|^{\lambda(p-1)}}{\left(\ln\frac{A}{|x|}\right)^{1+\varepsilon}}.$ We have $$\|g\|^p_{L^p\left(\Omega,w_\varepsilon\right)}=\int_\Omega\frac{\ln^p\left(\ln \frac{B}{|x|}\right)}{|x|^n\left(\ln\frac{A}{|x|}\right)^{1+\varepsilon}}dx \le C\int\limits_0^\ell \frac{\ln^p\left(\ln \frac{B}{t}\right)}{t\left(\ln\frac{A}{t}\right)^{1+\varepsilon}}dt<\infty$$ for every $\varepsilon>0.$ However, for small $r\in \left(0,\frac{\delta}{2}\right)$, where $\delta={\rm dist}(0,\partial\Omega)$, we obtain $$r^\frac{\lambda}{p^\prime}\|g\|_{L^p(\Omega\backslash B(0,r))}= \left(r^{\lambda(p-1)}\int\limits_{x\in\Omega:\ |x|>r} \frac{\ln^p\left(\ln \frac{B}{|x|}\right)\,dx}{|x|^{n+\lambda(p-1)}} \right)^\frac{1}{p}$$ $$\ge \left(r^{\lambda(p-1)}\int\limits_{x\in\Omega:\ r<|x| <\delta} \frac{\ln^p\left(\ln \frac{B}{|x|}\right)\,dx}{|x|^{n+\lambda(p-1)}} \right)^\frac{1}{p}= \left(r^{\lambda(p-1)}|\mathbb{S}^{n-1}|\int\limits_r^\delta \frac{\ln^p\left(\ln \frac{B}{t}\right)\,dt}{t^{1+\lambda(p-1)}} \right)^\frac{1}{p}.$$ But $$\int\limits_r^\delta \frac{\ln^p\left(\ln \frac{B}{t}\right)\,dt}{t^{1+\lambda(p-1)}}\ge \int\limits_r^{2r} \frac{\ln^p\left(\ln \frac{B}{t}\right)\,dt}{t^{1+\lambda(p-1)}} \ge \ln^p\left(\ln \frac{B}{2r}\right) \int\limits_r^{2r} \frac{\,dt}{t^{1+\lambda(p-1)}}= C \ln^p\left(\ln \frac{B}{2r}\right) r^{-\lambda(p-1)}, $$ so that $$r^\frac{\lambda}{p^\prime}\|g\|_{L^p(\Omega\backslash B(0,r))}\ge C \ln \left(\ln \frac{B}{2r}\right) \to \infty \ \ \ \ \textrm{as} \ \ \ \ r\to 0,$$ which completes the proof of the theorem. \end{proof} \begin{remark}\label{lem} Arguments similar to those in \eqref{1hop} show that the left-hand side embedding in \eqref{2a3255} may be extended to the case of the more general spaces ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p,\omega}(\Omega)$: $$L^p(\Omega, \rho(|y-x_0|)) \hookrightarrow {\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p,\omega}(\Omega),$$ where $\rho$ is a positive increasing (or almost increasing) function such that $\inf\limits_{r>0}\frac{\rho(r)\omega^p(r)}{r^{n(p-1)}}>0. $ \end{remark} The next theorem shows that the space ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)$ is also embedded between the weighted space $L^p(\Omega, |y-x_0|^{\lambda(p-1)})$ and its weak version $wL^p(\Omega, |y-x_0|^{\lambda(p-1)})$. The latter is defined by the norm $$\|f\|_{wL^p(\Omega, |y-x_0|^{\lambda(p-1)})}=\sup_{t>0}t \left[\mu\{x\in\Omega : |f(x)|>t\}\right]^\frac{1}{p},$$ where $\mu(E)=\int\limits_E |y-x_0|^{\lambda(p-1)}dy.$ \begin{theorem}\label{lem1} Let $\Omega$ be a bounded domain and $1\le p <\infty, \ 0<\lambda\le n$. Then \begin{equation}\label{2at7} L^p(\Omega, |y-x_0|^{\lambda(p-1)}) \hookrightarrow {\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega) \hookrightarrow wL^p(\Omega, |y-x_0|^{\lambda(p-1)}). \end{equation} \end{theorem} \begin{proof} By Theorem \ref{lem78}, we only have to prove the right-hand side embedding. Let $x_0=0$. We have $$\|f\|_{wL^p(\Omega, |y|^{\lambda(p-1)})}=\sup_{t>0} \left(t^p\int\limits_{x\in\Omega : \ |f(x)|>t}|x|^{\lambda(p-1)}\,dx\right)^\frac{1}{p}.$$ It suffices to estimate this norm only for small $|x|<\delta$, $\delta={\rm dist}(0,\partial\Omega)$, since the embedding
$${\,^{^{\complement}}\! \cal L}_{\{0\}}^{p,\lambda}\left(\Omega\backslash B\left(0,\delta \right)\right) \hookrightarrow L^p\left(\Omega\backslash B\left(0,\delta \right), |y|^{\lambda(p-1)}\right) \hookrightarrow wL^p\left(\Omega\backslash B\left(0,\delta \right), |y|^{\lambda(p-1)}\right)$$ is obvious. We obtain $$\sup_{t>0}t^p \int\limits_{x\in B\left(0,\delta \right) : \ |f(x)|>t}|x|^{\lambda(p-1)}\,dx = \sup_{t>0}t^p \int\limits_{x\in B\left(0,\delta \right) : \ |f(x)|>t}\frac{\left\{|x|^\frac{\lambda}{p^\prime} \|f\|_{L^p(\Omega\backslash B(0,|x|))} \right\}^p}{\int\limits_{\Omega\backslash B(0,|x|)}|f(y)|^p\,dy}\,dx.$$ Hence $$\|f\|_{wL^p(\Omega, |y|^{\lambda(p-1)})}\le \|f\|_{{\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)}\left( \int\limits_{B(0,\delta)}\frac{dx}{|\Omega\backslash B(0,|x|)|}\right) ^\frac{1}{p} \le \left( \frac{|B(0,\delta)|}{|\Omega\backslash B(0,\delta)|}\right) ^\frac{1}{p}\|f\|_{{\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}(\Omega)}, $$ which proves the theorem. \end{proof} \section{Preliminaries on variable exponent Lebesgue and Morrey spaces}\label{Preliminaries} \setcounter{theorem}{0} \setcounter{equation}{0} We refer to the book \cite{160zc} for variable exponent Lebesgue spaces, but give some basic definitions and facts. Let $p(\cdot)$ be a measurable function on $\Omega$ with values in $[1,\infty)$. The open set $\Omega$ is assumed to be bounded throughout the whole paper. We mainly suppose that \begin{equation}\label{h0} 1< p_-\le p(x)\le p_+<\infty, \end{equation} where $ p_-:=\mathop{\rm ess \; inf}\limits_{x\in \Omega}p(x), \ p_+:=\mathop{\rm ess \; sup}\limits_{x\in \Omega}p(x). $ By $L^{p(\cdot)}(\Omega)$ we denote the space of all measurable functions $f(x)$ on $\Omega$ such that $$ I_{p(\cdot)}(f)= \int_{\Omega}|f(x)|^{p(x)} dx < \infty. $$ Equipped with the norm $$\|f\|_{p(\cdot)}=\inf\left\{\eta>0:~I_{p(\cdot)}\left(\frac{f}{\eta}\right)\le 1\right\}, $$ this is a Banach function space. By $p^\prime(x) =\frac{p(x)}{p(x)-1}$, $x\in \Omega,$ we denote the conjugate exponent. By $WL(\Omega)$ (weak Lipschitz) we denote the class of functions defined on $\Omega$ satisfying the log-condition \begin{equation}\label{h8} |p(x)-p(y)|\le\frac{A}{-\ln |x-y|}, \;\; |x-y|\le \frac{1}{2}, \;\; x,y\in \Omega, \end{equation} where $A=A(p) > 0$ does not depend on $x, y$. \begin{theorem}\label{D} (\cite{Din}) Let $\Omega \subset {\mathbb{R}^n}$ be an open bounded set and $p\in WL(\Omega)$ satisfy condition \eqref{h0}. Then the maximal operator $M$ is bounded in $L^{p(\cdot)}(\Omega)$. \end{theorem} The following theorem was proved in \cite{Sam2} under the condition that the maximal operator is bounded in $L^{p(\cdot)}(\Omega)$, and it became an unconditional statement after Theorem \ref{D} was established. \begin{theorem}\label{S1} Let $\Omega \subset {\mathbb{R}^n}$ be bounded, $p,\alpha\in WL(\Omega)$ satisfy assumption \eqref{h0} and the conditions \begin{equation}\label{1} \inf\limits_{x\in \Omega} \alpha(x)>0,\,\, \sup\limits_{x\in \Omega} \alpha(x)p(x)<n.
\end{equation} Then the operator $I^{\alpha(\cdot)}$ is bounded from $L^{p(\cdot)}(\Omega)$ to $L^{q(\cdot)}(\Omega)$ with $ \frac 1{q(x)}=\frac 1{p(x)}-\frac {\alpha(x)} {n}.$ \end{theorem} Singular operators within the framework of the spaces with variable exponents were studied in \cite{Din2}. From Theorem 4.8 and Remark 4.6 of \cite{Din2} and the known results on the boundedness of the maximal operator, we have the following statement, which is formulated below for a bounded $\Omega$, as suited to our goals, but is valid for an arbitrary open set $\Omega$ under the corresponding condition on $p(x)$ at infinity. \begin{theorem}\ (\cite{Din2})\label{SIO} Let $\Omega \subset {\mathbb{R}^n}$ be a bounded open set and $p\in WL(\Omega)$ satisfy condition \eqref{h0}. Then the singular integral operator $T$ is bounded in $L^{p(\cdot)}(\Omega)$. \end{theorem} The estimate provided by the following lemma (see \cite{Sam2}, Corollary to Lemma 3.22) is crucial for our further proofs. \begin{lemma}\label{lemma} Let $\Omega$ be a bounded domain and let $p$ satisfy the condition \eqref{h8} and $1\le p_-\le p(x)\le p_+<\infty$. Let also $\sup_{x\in\Omega} \nu(x)<\infty$ and $\sup_{x\in\Omega}[n+\nu(x)p(x)]<0$. Then \begin{equation}\label{estikmate} \||x-\cdot|^{\nu(x)}\chi_{\Omega\backslash \widetilde{B}(x,r)}(\cdot)\|_{p(\cdot)}\le C r^{\nu(x)+\frac{n}{p(x)}}, \quad \ x\in\Omega, \ 0<r<\ell={\rm diam}\,\Omega, \end{equation} where $C$ does not depend on $x$ and $r$. \end{lemma} Let $ \lambda(x)$ be a measurable function on $\Omega$ with values in $[0,n]$. The variable Morrey space ${\cal L}^{p(\cdot),\lambda(\cdot)}(\Omega)$, introduced in \cite{AHS}, is defined as the set of integrable functions $f$ on $\Omega$ with the finite norm $$ \|f\|_{{\cal L}^{p(\cdot),\lambda(\cdot)}(\Omega)}= \sup_{x\in \Omega, \; t>0} t^{-\frac{\lambda(x)}{p(x)}}\| f\chi_{\widetilde{B}(x,t)}\|_{L^{p(\cdot)}(\Omega)}. $$ The following statements are known. \begin{theorem}\label{maximal} (\cite{AHS}) Let $\Omega$ be bounded, let $p\in WL(\Omega)$ satisfy condition \eqref{h0} and let a measurable function $\lambda$ satisfy the conditions $ 0\le \lambda(x), \ \quad \sup_{x\in \Omega} \lambda(x)<n. $ Then the maximal operator $M$ is bounded in ${\cal L}^{p(\cdot),\lambda(\cdot)} (\Omega)$. \end{theorem} Theorem \ref{maximal} was extended to unbounded domains in \cite{Hasto}. Note that the boundedness of the maximal operator in Morrey spaces with variable $p(x)$ was studied in \cite{KokMeskhi-Morrey} in the more general setting of quasimetric measure spaces. The known statements for the potential operators read as follows. \begin{theorem}\label{Spanne-Ris} (\cite{AHS}; Spanne-type result). Let $\Omega$ be bounded, $p,\alpha \in WL(\Omega)$ and let $p$ satisfy condition \eqref{h0}. Let also $\lambda(x)\ge 0$, let the conditions \eqref{1} be fulfilled and let $\frac 1{q(x)}=\frac 1{p(x)}-\frac {\alpha(x)} {n}.$ Then the operator $I^{\alpha(\cdot)}$ is bounded from ${\cal L}^{p(\cdot),\lambda(\cdot)} (\Omega)$ to ${\cal L}^{q(\cdot),\mu(\cdot)} (\Omega)$, where $ \frac{\mu(x)}{q(x)}=\frac{\lambda(x)}{p(x)}. $ \end{theorem} \begin{theorem}\label{Ris} (\cite{AHS}; Adams-type result). Let $\Omega$ be bounded, $p,\alpha \in WL(\Omega)$ and let $p$ satisfy condition \eqref{h0}.
Let also $\lambda(x)\ge 0$ and let the conditions \begin{equation}\label{2} \inf\limits_{x\in \Omega} \alpha(x)>0,\,\, \sup\limits_{x\in \Omega} [\lambda(x)+\alpha(x)p(x)]<n \end{equation} hold. Then the operator $I^{\alpha(\cdot)}$ is bounded from ${\cal L}^{p(\cdot),\lambda(\cdot)} (\Omega)$ to ${\cal L}^{q(\cdot),\lambda(\cdot)} (\Omega)$, where $ \frac 1{q(x)}=\frac 1{p(x)}-\frac {\alpha(x)} {n-\lambda(x)}. $ \end{theorem} \section{Variable exponent local "complementary" generalized Morrey spaces}\label{generalized} \setcounter{theorem}{0} \setcounter{equation}{0} Everywhere in the sequel the functions $\omega(r),\ \omega_1(r)$ and $\omega_2(r)$ used in the body of the paper are non-negative measurable functions on $(0,\ell)$, $\ell={\rm diam}\,\Omega$. Without loss of generality we may assume that they are bounded outside any small neighbourhood $(0,\delta)$ of the origin. The local generalized Morrey space ${\cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$ and the global generalized Morrey space ${\cal M}^{p(\cdot),\omega}(\Omega)$ with variable exponent are defined (see \cite{GulHasSam}) by the norms $$ \|f\|_{{\cal M}_{\{x_0\}}^{p(\cdot),\omega}} = \sup\limits_{r>0}\frac{r^{-\frac{n}{p(x_0)}}}{\omega(r)} \|f\|_{L^{p(\cdot)}(\widetilde{B}(x_0,r))}, $$ $$ \|f\|_{{\cal M}^{p(\cdot),\omega}} = \sup\limits_{x \in \Omega, r>0}\frac{r^{-\frac{n}{p(x)}}}{\omega(r)} \|f\|_{L^{p(\cdot)}(\widetilde{B}(x,r))}, $$ where $x_0 \in \Omega$ and $1 \le p(x) \le p_+< \infty$ for all $x \in \Omega.$ We find it convenient to introduce the variable exponent version of the local "complementary" space as follows (compare with \eqref{13}). \begin{definition}\label{def0} Let $x_0 \in \Omega$, $1 \le p(x) \le p_+< \infty$. The \textit{variable exponent generalized local "complementary" Morrey space} ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$ is defined by the norm $$ \|f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}} = \sup\limits_{r>0}\frac{r^{\frac{n}{p^\prime(x_0)}}}{\omega(r)} \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))}. $$ \end{definition} Similarly to the notation in \eqref{13}, we introduce the following particular case of the space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega),$ defined by the norm \begin{equation}\label{13suo} \|f\|_{{\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p(\cdot),\lambda}(\Omega)}= \sup_{r>0 } r^{\frac{\lambda}{p^\prime}} \|f\|_{L^{p(\cdot)}(\Omega\backslash B(x_0,r))}<\infty, \ \ \ x_0\in \Omega, \ \ \ 0 \le \lambda < n. \end{equation} Everywhere in the sequel we assume that \begin{equation} \label{BHF} \sup\limits_{0<r<\ell}\frac{r^{\frac{n}{p^\prime(x_0)}}}{\omega(r)}<\infty , \ \ \ \ \ell=\mbox{\,\rm diam\,}\ \Omega, \end{equation} which makes the space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$ non-trivial, since it contains $L^{p(\cdot)}(\Omega)$ in this case. \begin{remark}\label{rem1} Suppose that $$\inf\limits_{\delta<r<\ell}\frac{r^{\frac{n}{p^\prime(x_0)}}}{\omega(r)}>0$$ for every $\delta>0.$ Then
$$\|f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)}\sim \|f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(B(x_0,\delta))}+ \|f\|_{L^{p(\cdot)}(\Omega\backslash B(x_0,\delta))}$$ (with the constants in the above equivalence depending on $\delta$). Since $\delta>0$ is arbitrarily small, the space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$ may be interpreted as the space of functions \textit{whose restrictions onto an arbitrarily small neighbourhood $B(x_0,\delta)$ are in the local "complementary" Morrey space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(B(x_0,\delta))$ with the exponent $p(\cdot)$ close to the constant value $p(x_0)$, and whose restrictions onto the exterior $\Omega\backslash B(x_0,\delta)$ are in the variable exponent Lebesgue space $L^{p(\cdot)}$.} \end{remark} If also $\inf\limits_{0<r<\ell}\frac{r^{\frac{n}{p^\prime(x_0)}}}{\omega(r)}>0$, then ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)=L^{p(\cdot)}(\Omega)$. Therefore, to guarantee that the "complementary" space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$ is strictly larger than $L^{p(\cdot)}(\Omega),$ one should be interested in the cases where \begin{equation} \label{14} \lim\limits_{r\to 0}\frac{r^{\frac{n}{p^\prime(x_0)}}}{\omega(r)}=0. \end{equation} Clearly, the space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$ may contain functions with a non-integrable singularity at the point $x_0$, if no additional assumptions are introduced. To study the operators in ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega),$ we need its embedding into $L^1(\Omega).$ The corollary below shows that the Dini condition on $\omega$ is sufficient for such an embedding. \begin{lemma}\label{integrability} Let $f\in L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,s))$ for every $s\in (0,\ell)$ and let $\gamma\in \mathbb{R}.$ Then \begin{equation} \label{Ga14} \int_{\widetilde{B}(x_0,t)}|y-x_0|^\gamma |f(y)| dy \le C \int_0^{t} s^{\gamma+\frac{n}{p^\prime(x_0)} -1} \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,s))}ds \end{equation} for every $t\in (0,\ell),$ where $C$ does not depend on $f,t$ and $x_0$. \end{lemma} \begin{proof} We use the following trick, where the parameter $\beta>0$ will be chosen later: \begin{equation}\label{trick} \int_{\widetilde{B}(x_0,t)}|y-x_0|^\gamma|f(y)| dy = \beta\int_{\widetilde{B}(x_0,t)}|x_0-y|^{\gamma-\beta}|f(y)| \left(\int_0^{|x_0-y|} s^{\beta-1}ds\right)dy \end{equation} $$ =\beta \int_0^{t} s^{\beta-1} \, \left(\int_{\{y \in \Omega : s<|x_0-y|< t\}}|x_0-y|^{\gamma -\beta} \, |f(y)| dy\right)ds. $$ Hence $$\int_{\widetilde{B}(x_0,t)}|y-x_0|^\gamma|f(y)| dy\le C \int_0^{t} s^{\beta-1} \, \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,s))} \; \||x_0-y|^{\gamma -\beta}\|_{L^{p^\prime(\cdot)}(\Omega\backslash \widetilde{B}(x_0,s))} ds. $$ Now we make use of Lemma \ref{lemma}, which is possible if we choose $\beta >\max\left(0,\frac {n}{p^\prime_-}+\gamma\right)$, and arrive at \eqref{Ga14}. \end{proof} \begin{corollary}\label{cor} The following embeddings hold: \begin{equation}\label{embeddungs} L^{p(\cdot)}(\Omega)\hookrightarrow {\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega) \hookrightarrow L^1(\Omega), \end{equation} where the left-hand side embedding is guaranteed by the condition \eqref{BHF} and the right-hand side one by the condition \begin{equation}\label{convergence} \int\limits_0^\ell \frac{\omega(r)\, dr}{r}<\infty.
\end{equation} \end{corollary} \begin{proof} The statement for the left-hand side embedding is obvious. The right-hand side follows from Lemma \ref{integrability} with $\gamma=0$ and the inequality \begin{equation}\label{qwet} \int_{0}^\ell r^{\frac{n}{p^\prime(x_0)}-1} \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr \le \|f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)} \int_0^\ell \omega(r)\frac{dr}{r}. \end{equation} \end{proof} \begin{remark}\label{rem1a} Note that, similarly to the arguments in the proof of Corollary \ref{cor}, we can see that the condition $ \int\limits_0^\ell \omega(r)r^{\gamma-1}\, dr<\infty, \ \gamma\in \mathbb{R}, $ guarantees the embedding $ {\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega) \hookrightarrow L^1(\Omega,|y-x_0|^\gamma)$ into the weighted space; note that only the values $\gamma>-n/p^\prime(x_0)$ may be of interest for us, because the above condition with $\gamma\le-n/p^\prime(x_0)$ is not compatible with the condition \eqref{BHF} of the non-triviality of the space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega).$ \end{remark} \begin{remark}\label{rem2} We also find it convenient to state a condition for $f\in L^1(\Omega)$ in the following form: \begin{equation}\label{dobavleno} \int_0^{\ell} t^{\frac{n}{p^\prime(x_0)}-1} \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))}dt<\infty \ \ \ \ \Longrightarrow \ \ \ \ f\in L^1(\Omega), \end{equation} for which it suffices to refer to \eqref{Ga14}. \end{remark} In the sequel all the operators under consideration (maximal, singular and potential operators) will be considered on functions $f$ either satisfying the condition of the existence of the integral in \eqref{dobavleno}, or belonging to ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$ with $\omega$ satisfying the condition \eqref{convergence}. Such functions are therefore integrable on $\Omega$ in both cases, and consequently all the studied operators exist on such functions a.e. The statements on the boundedness of the maximal, singular and potential operators in the "complementary" Morrey spaces known for the case of the constant exponent $p$, obtained in \cite{Gul}, read as follows. Note that the theorems below do not assume any monotonicity type conditions on the functions $\omega, \omega_1$ and $\omega_2$. \begin{theorem} (\cite{Gul}, Theorem 1.4.6) \label{nakaiVagif1} Let $1 < p < \infty$, $x_0 \in {\mathbb{R}^n}$ and let $\omega_1(r)$ and $\omega_2(r)$ be positive measurable functions satisfying the condition \begin{equation*} \int_0^r \omega_1(t)\frac{dt}{t} \le c \,\omega_2(r) \end{equation*} with $c>0 $ not depending on $r>0$. Then the operators $M$ and $T$ are bounded from ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p,\omega_1}({\mathbb{R}^n})$ to ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p,\omega_2}({\mathbb{R}^n})$. \end{theorem} \begin{corollary} \label{Vagif1} (\cite{Gul}) Let $1 < p < \infty$, $x_0 \in {\mathbb{R}^n}$ and $0 \le \lambda < n$. Then the operators $M$ and $T$ are bounded in the space ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}({\mathbb{R}^n})$.
\end{corollary} \begin{theorem}\label{Guliev} (\cite{Gul}, Theorem 1.3.9) \label{nakaiVagif2} Let $0<\alpha<n$, $1 < p < \infty$, $\frac{1}{q}=\frac{1}{p}-\frac{\alpha}{n}$, $x_0 \in \mathbb{R}^n$ and let $\omega_1(r)$, $\omega_2(r)$ be positive measurable functions satisfying the condition \begin{equation*} r^\alpha \int_0^r \omega_1(t)\frac{dt}{t} \le c \,\omega_2(r), \end{equation*} with $c>0 $ not depending on $r>0$. Then the operators $M^{\alpha}$ and $I^{\alpha}$ are bounded from ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p,\omega_1}({\mathbb{R}^n})$ to ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{q,\omega_2}({\mathbb{R}^n})$. \end{theorem} \begin{corollary} \label{Vagif2}(\cite{Gul}) Let $0<\alpha<n$, $1<p<\frac{n}{\alpha}$, $x_0 \in \mathbb{R}^n$, $\frac{1}{p}-\frac{1}{q}=\frac{\alpha}{n}$ and $\frac{\lambda}{p^\prime}=\frac{\mu}{q^\prime}$. Then the operators $M^{\alpha}$ and $I^{\alpha}$ are bounded from ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p,\lambda}({\mathbb{R}^n})$ to ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{q,\mu}({\mathbb{R}^n})$. \end{corollary} \begin{remark}\label{rem} The introduction of global "complementary" Morrey-type spaces makes little sense, either in the case of constant exponents or in the case of variable exponents. In the case of constant exponents this was noted in \cite{69acb}, pp. 19-20; in this case the global space defined by the norm $$\sup\limits_{x\in\Omega, r>0}\frac{r^{\frac{n}{p}}}{\omega(r)} \|f\|_{L^{p}(\Omega\backslash \widetilde{B}(x,r))}$$ reduces to $L^p(\Omega)$ under the assumption \eqref{BHF}. The same happens in the case of variable exponents. To make this clear, note for instance that, under the assumption \eqref{14}, if we admit that $\sup\limits_{ r>0}\frac{r^{\frac{n}{p(x)}}}{\omega(r)} \|f\|_{L^{p}(\Omega\backslash \widetilde{B}(x,r))}$ is finite for two different points $x=x_0$ and $x=x_1, \ x_0\ne x_1$, this immediately implies that $f\in L^{p(\cdot)}$ in a neighbourhood of both points $x_0$ and $x_1$. \end{remark} \section{The maximal operator in the spaces ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega} (\Omega)$}\label{sectionmaximal} \setcounter{theorem}{0} \setcounter{equation}{0} The proof of the main result of this section, presented in Theorem \ref{M1}, is based on the estimate given in the following preliminary theorem. \begin{theorem} \label{HG21} Let $\Omega$ be bounded, let $p\in WL(\Omega)$ satisfy the condition \eqref{h0} and let $f \in L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))$ for every $r\in (0,\ell)$. If the integral \begin{equation}\label{converg} \int_{0}^{\ell} r^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr \end{equation} converges, then \begin{equation}\label{GOP} \|Mf\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))} \le Ct ^{-\frac{n}{p^\prime(x_0)}}\int_{0}^{t} r^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr \end{equation} for every $t\in (0,\ell),$ where $C$ does not depend on $f, t$ and $x_0$. \end{theorem} \begin{proof} We represent $f$ as \begin{equation}\label{repr} f=f_1+f_2, \ \quad f_1(y)=f(y)\chi _{\Omega\backslash \widetilde{B}(x_0,t)}(y), \quad f_2(y)=f(y)\chi _{\widetilde{B}(x_0,t)}(y).
\end{equation} \textit{$1^o.$ Estimation of $Mf_1$.} This case is easier, being treated by means of Theorem \ref{D}. Obviously $f_1\in L^{p(\cdot)}(\Omega)$, so that by Theorem \ref{D} \begin{equation} \label{kkk} \|Mf_1\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))} \le \|Mf_1\|_{L^{p(\cdot)}(\Omega)} \le C \|f_1\|_{L^{p(\cdot)}(\Omega)} = C\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))}. \end{equation} By the monotonicity of the norm $\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))}$ with respect to $r$ we have \begin{equation}\label{ADI} \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))}\le Ct^{-\frac{n}{p^\prime(x_0)}} \int_{0}^{t} r^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr \end{equation} and then \begin{equation}\label{Ga12'} \|Mf_1\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))} \le Ct^{-\frac{n}{p^\prime(x_0)}} \int_{0}^{t} r^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr. \end{equation} \textit{$2^o.$ Estimation of $Mf_2$.} This case needs the application of Lemma \ref{integrability}. To estimate $Mf_2(z)$ by means of \eqref{Ga14}, we observe that for $z\in \Omega\backslash \widetilde{B}(x_0,2t)$ we have \begin{align*} Mf_2(z) &= \sup\limits_{r>0}|B(z,r)|^{-1}\int_{\widetilde{B}(z,r)}|f_2(y)|dy \\ & \le \sup\limits_{r \ge t} \int_{ \widetilde{B}(x_0,t)\cap B(z,r)} |y-z|^{-n} \, |f(y)|dy \\ & \le 2^{n} \, |x_0-z|^{-n} \, \int_{ \widetilde{B}(x_0,t)} |f(y)|dy. \end{align*} Then by \eqref{Ga14} \begin{equation}\label{1001} Mf_2(z) \le C |x_0-z|^{-n} \, \int_0^{t} s^{\frac{n}{p^\prime(x_0)}-1} \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,s))}ds. \end{equation} Therefore \begin{align}\label{Ga13'} \|Mf_2\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} &\le C \int_0^{t} s^{\frac{n}{p^\prime(x_0)}-1} \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,s))}ds\,\left\||x_0-z|^{-n}\right\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}\notag \\ &\le C t^{-\frac{n}{p^\prime(x_0)}}\int_0^{t} s^{\frac{n}{p^\prime(x_0)}-1} \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,s))}ds. \end{align} Since $ \|Mf\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} \le \|Mf_1\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} +\|Mf_2\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}, $ from \eqref{Ga12'} and \eqref{Ga13'} we arrive at \eqref{GOP} with $\|Mf\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}$ on the left-hand side, and then \eqref{GOP} obviously holds also for $\|Mf\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))}$. \end{proof} The following theorem for the complementary Morrey spaces is, in a sense, a counterpart to Theorem \ref{maximal} formulated in Section \ref{Preliminaries} for the usual Morrey spaces. \begin{theorem}\label{M1} Let $\Omega \subset {\mathbb{R}^n}$ be an open bounded set, let $p\in WL(\Omega)$ satisfy the assumption \eqref{h0} and let the functions $\omega_1(t)$ and $\omega_2(t)$ satisfy the condition \begin{equation}\label{eq3.6.VZ} \int_0^{t} \omega_1(r)\frac{dr}{r} \le C \,\omega_2(t), \end{equation} where $C$ does not depend on $t$. Then the maximal operator $M$ is bounded from the space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1} (\Omega)$ to the space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_2}(\Omega)$.
\end{theorem} \begin{proof} For $f\in {\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1}(\Omega)$ we have $$ \|Mf\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_2}(\Omega)}=\sup_{t\in (0,\ell)} \frac{t^\frac{n}{p^\prime(x_0)}}{\omega_2(t)} \|Mf\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))}, $$ where Theorem \ref{HG21} is applicable to the norm $\|Mf\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))}$. Indeed, from \eqref{eq3.6.VZ} it follows that the integral $\int_0^t\frac{\omega_1(r)}{r}dr$ converges. This implies that for $f\in {\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1} (\Omega)$ the assumption of the convergence of the integral of type \eqref{converg} is fulfilled, by \eqref{qwet}. Then by Theorem \ref{HG21} we obtain \begin{align*} \|Mf\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_2}(\Omega)} & \le C \sup_{0<t\le \ell}\omega^{-1}_2(t)\int_0^{t} r^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr. \end{align*} Hence $$ \|Mf\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_2}(\Omega)} \le C \|f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1}(\Omega)} \sup_{t\in(0,\ell)}\frac1{\omega_2(t)} \int_0^{t} \omega_1(r)\frac{dr}{r} \le C\|f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1}(\Omega)} $$ by \eqref{eq3.6.VZ}, which completes the proof. \end{proof} \begin{corollary} \label{VHS1} Let $\Omega \subset {\mathbb{R}^n}$ be an open bounded set, $x_0 \in \Omega$, $0 \le \lambda < n$, $\lambda \le \mu \le n$ and let $p\in WL(\Omega)$ satisfy the assumption \eqref{h0}. Then the operator $M$ is bounded from the space ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p(\cdot),\lambda}(\Omega)$ to ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p(\cdot),\mu}(\Omega)$. \end{corollary} \ \section{Riesz potential operator in the spaces ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$}\label{potentials} \setcounter{theorem}{0} \setcounter{equation}{0} \ In this section we extend Theorem \ref{Guliev} to the variable exponent setting. Note that, in the case of constant exponent $p$, Theorems \ref{G4} and \ref{M1X} were proved in \cite{Gul}, Theorems 1.3.2 and 1.3.9 (see also \cite{GulBook}, p. 112, 129). \begin{theorem} \label{G4} Let \eqref{h0} be fulfilled and let $p(\cdot), \alpha(\cdot) \in WL(\Omega)$ satisfy the conditions in \eqref{1}. If $f$ is such that the integral \eqref{converg} converges, then \begin{equation}\label{GOPRiesz} \|I^{\alpha(\cdot)} f\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))} \le Ct^{-\frac{n}{p^\prime(x_0)}}\int_0^{t} s^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,s))} ds \end{equation} for every $f \in L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))$, where \begin{equation}\label{1cxz} \frac{1}{q(x)}=\frac 1{p(x)}-\frac {\alpha(x)} {n} \end{equation} and $C$ does not depend on $f, x_0$ and $ t\in (0,\ell)$.
\end{theorem} \begin{proof} We represent the function $f$ in the form $f=f_1+f_2$ as in \eqref{repr}, so that $$ \|I^{\alpha(\cdot)} f\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} \le \|I^{\alpha(\cdot)} f_1\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}+ \|I^{\alpha(\cdot)} f_2\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}. $$ Since $f_1\in L^{p(\cdot)}(\Omega)$, by Theorem \ref{S1} we have \begin{gather*} \|I^{\alpha(\cdot)} f_1\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} \le \|I^{\alpha(\cdot)} f_1\|_{L^{q(\cdot)}(\Omega)} \le C \|f_1\|_{L^{p(\cdot)}(\Omega)}=C \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))} \end{gather*} and then \begin{equation}\label{Ha1} \|I^{\alpha(\cdot)} f_1\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} \le C t^{-\frac{n}{p^\prime(x_0)}} \int_0^{t} s^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,s))}ds \end{equation} in view of \eqref{ADI}. To estimate $$\|I^{\alpha(\cdot)} f_2\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}= \left\|\int\limits_{\widetilde{B}(x_0,t)}|z-y|^{\alpha(z) -n}f(y) dy\right\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}, $$ we observe that for $z\in \Omega\backslash \widetilde{B}(x_0,2t)$ and $y\in \widetilde{B}(x_0,t)$ we have $\frac{1}{2} |x_0-z| \le |z-y|\le\frac{3}{2} |x_0-z|$, so that $$ \|I^{\alpha(\cdot)} f_2\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} \le C \int\limits_{\widetilde{B}(x_0,t)}|f(y)| dy \, \left\||x_0-z|^{\alpha(z)-n} \right\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}. $$ From the log-condition for $\alpha(\cdot)$ it follows that $$c_1|x_0-z|^{\alpha(x_0)-n}\le |x_0-z|^{\alpha(z)-n} \le c_2 |x_0-z|^{\alpha(x_0)-n}.$$ Therefore, \begin{equation}\label{problem} \|I^{\alpha(\cdot)} f_2\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} \le C \int\limits_{\widetilde{B}(x_0,t)}|f(y)| dy \, \left\||x_0-z|^{\alpha(x_0)-n} \right\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}. \end{equation} The norm in the integral on the right-hand side is estimated by means of Lemma \ref{lemma}, which yields $$\|I^{\alpha(\cdot)} f_2\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}\le C t^{-\frac{n}{p^\prime(x_0)}} \int\limits_{\widetilde{B}(x_0,t)}|f(y)| dy. $$ It remains to make use of \eqref{Ga14} and obtain \begin{equation}\label{Ha2} \|I^{\alpha(\cdot)} f_2\|_{L^{q(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}\le C t^{-\frac{n}{p^\prime(x_0)}}\int_0^{t} s^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,s))}ds. \end{equation} From \eqref{Ha1} and \eqref{Ha2} we arrive at \eqref{GOPRiesz}. \end{proof} \begin{theorem}\label{M1X} Let $\Omega \subset {\mathbb{R}^n}$ be an open bounded set, $x_0 \in \Omega$, let $p(\cdot), \alpha(\cdot) \in WL(\Omega)$ satisfy assumptions \eqref{h0} and \eqref{1}, let $q(x)$ be given by \eqref{1cxz} and let the functions $\omega_1(r)$ and $\omega_2(r)$ fulfill the condition \begin{equation}\label{eq3.6.VZX} t^{\alpha(x_0)} \int_0^{t} \omega_1(r)\frac{dr}{r} \le C \,\omega_2(t), \end{equation} where $C$ does not depend on $t$. Then the operators $M^{\alpha(\cdot)}$ and $I^{\alpha(\cdot)}$ are bounded from ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1}(\Omega)$ to ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{q(\cdot),\omega_2}(\Omega)$.
\end{theorem} \begin{proof} It suffices to prove the boundedness of the operator $I^{\alpha(\cdot)}$, since $M^{\alpha(\cdot)} f(x)\le C I^{\alpha(\cdot)}|f|(x)$. Let $f\in {\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1}(\Omega)$. We have \begin{equation}\label{ggyxx} \|I^{\alpha(\cdot)} f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{q(\cdot),\omega_2}(\Omega)}=\sup_{t>0} \frac{t^{\frac{n}{q^\prime(x_0)}}}{\omega_2(t)} \|\chi_{\Omega\backslash \widetilde{B}(x_0,t)} I^{\alpha(\cdot)} f\|_{L^{q(\cdot)}(\Omega)}. \end{equation} We estimate $\|\chi_{\Omega\backslash \widetilde{B}(x_0,t)}I^{\alpha(\cdot)} f\|_{L^{q(\cdot)}(\Omega)}$ in \eqref{ggyxx} by means of Theorem \ref{G4}. This theorem is applicable since the integral \eqref{converg} with $\omega=\omega_1$ converges by \eqref{qwet}. We obtain \begin{align*} \|I^{\alpha(\cdot)} f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{q(\cdot),\omega_2}(\Omega)} & \le C \sup_{t>0} \frac{t^{-\frac{n}{p^\prime(x_0)}+\frac{n}{q^\prime(x_0)}}}{\omega_2(t)} \int_0^{t} r^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr \\ &\le C\|f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1}(\Omega)} \sup_{t>0} \frac{t^{\alpha(x_0)}}{\omega_2(t)} \int_0^{t}\frac{\omega_1(r)}{r}dr. \end{align*} It remains to make use of the condition \eqref{eq3.6.VZX}. \end{proof} \begin{corollary} \label{VHS2} Let $\Omega \subset {\mathbb{R}^n}$ be an open bounded set and $p(\cdot), \alpha(\cdot) \in WL(\Omega)$ satisfy assumptions \eqref{h0} and \eqref{1}, let $q(x)$ be given by \eqref{1cxz}, $x_0 \in \Omega$ and $\frac{\lambda}{p^\prime(x_0)} \le \frac{\mu}{q^\prime(x_0)}$. Then the operators $M^{\alpha(\cdot)}$ and $I^{\alpha(\cdot)}$ are bounded from ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p(\cdot),\lambda}(\Omega)$ to ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{q(\cdot),\mu}(\Omega)$. \end{corollary} \ \section{Singular integral operators in the spaces ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega}(\Omega)$}\label{singular} \setcounter{theorem}{0} \setcounter{equation}{0} \ Theorems \ref{HG11} and \ref{SIO1} below were proved, in the case of the constant exponent $p$, in \cite{Gul}, Theorems 1.4.2 and 1.4.6 (see also \cite{GulBook}, p. 132, 135). \begin{theorem} \label{HG11} Let $\Omega$ be an open bounded set, $p\in WL(\Omega)$ satisfy condition \eqref{h0} and $f\in L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))$ for every $t\in (0,\ell)$. If the integral $$ \int_0^\ell r^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr $$ converges, then \begin{equation}\label{GOPSI} \|T f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))} \le Ct ^{-\frac{n}{p^\prime(x_0)}}\int_{0}^{2t} r^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr, \end{equation} where $C$ does not depend on $f$, $x_0$ and $t\in (0,\ell)$. \end{theorem} \begin{proof} We split the function $f$ in the form $f=f_1+f_2$ as in \eqref{repr} and have $$ \|Tf\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} \le \|Tf_1\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} +\|Tf_2\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}.
$$ Taking into account that $f_1\in L^{p(\cdot)}(\Omega)$, by Theorem \ref{SIO} we have \begin{equation*} \|Tf_1\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}\le \|Tf_1\|_{L^{p(\cdot)}(\Omega)} \le C \|f_1\|_{L^{p(\cdot)}(\Omega)}=C \|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))}. \end{equation*} Then in view of \eqref{ADI} \begin{equation}\label{Ga10'} \|Tf_1\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,t))}\le Ct^{-\frac{n}{p^\prime(x_0)}} \int_0^{t} r^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr. \end{equation} To estimate $\|Tf_2\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))},$ note that $\frac{1}{2} |x_0-z| \le |z-y|\le\frac{3}{2} |x_0-z|$ for $z\in \Omega\backslash \widetilde{B}(x_0,2t)$ and $y\in \widetilde{B}(x_0,t)$, so that \begin{align*} \|Tf_2\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} & \le C \left\|\int_{\widetilde{B}(x_0,t)}|z-y|^{-n}f(y) dy\right\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))} \\ &\le C \int_{\widetilde{B}(x_0,t)}|f(y)| dy\||x_0-z|^{-n}\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}. \end{align*} Therefore, with the aid of the estimate \eqref{estikmate} and inequality \eqref{Ga14}, we get \begin{equation*}\label{Ga11'} \|Tf_2\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,2t))}\le Ct^{-\frac{n}{p^\prime(x_0)}}\int_0^{t} s^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash\widetilde{B}(x_0,s))}ds, \end{equation*} which together with \eqref{Ga10'} yields \eqref{GOPSI}. \end{proof} \begin{theorem}\label{SIO1} Let $\Omega \subset {\mathbb{R}^n}$ be an open bounded set, $x_0 \in \Omega$, $p\in WL(\Omega)$ satisfy condition \eqref{h0} and $\omega_1(t)$ and $\omega_2(t)$ fulfill condition \eqref{eq3.6.VZ}. Then the singular integral operator $T$ is bounded from the space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1} (\Omega)$ to the space ${\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_2} (\Omega)$. \end{theorem} \begin{proof} Let $f\in {\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1}(\Omega)$. We follow the procedure already used in the proof of Theorems \ref{M1} and \ref{M1X}: in the norm \begin{equation}\label{ggy} \|Tf\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_2}(\Omega)} = \sup_{t>0} \frac{t^{\frac{n}{p^\prime (x_0)}}}{\omega_2(t)} \|Tf\chi_{\Omega\backslash \widetilde{B}(x_0,t)}\|_{L^{p(\cdot)}(\Omega)}, \end{equation} we estimate $\|Tf\chi_{\Omega\backslash \widetilde{B}(x_0,t)}\|_{L^{p(\cdot)}(\Omega)}$ by means of Theorem \ref{HG11} and obtain \begin{align*} \|Tf\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_2}(\Omega)} & \le C\sup_{t>0} \frac1{\omega_2(t)} \int_0^{t} r^{\frac{n}{p^\prime(x_0)}-1}\|f\|_{L^{p(\cdot)}(\Omega\backslash \widetilde{B}(x_0,r))} dr \\ &\le C\|f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1}(\Omega)} \sup_{t>0} \frac1{\omega_2(t)} \int_0^{t}\omega_1(r)\frac{dr}{r} \le C\|f\|_{{\,^{^{\complement}}\! \cal M}_{\{x_0\}}^{p(\cdot),\omega_1}(\Omega)}. \end{align*} \end{proof} \begin{corollary} \label{VHS3} Let $\Omega \subset {\mathbb{R}^n}$ be an open bounded set, $p\in WL(\Omega)$ satisfy the assumption \eqref{h0}, $x_0 \in \Omega$ and $0 \le \lambda < n$, $\lambda \le \mu \le n$.
Then the singular integral operator $T$ is bounded from ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p(\cdot),\lambda}(\Omega)$ to ${\,^{^{\complement}}\! \cal L}_{\{x_0\}}^{p(\cdot),\mu}(\Omega)$. \end{corollary} {\bf Acknowledgements.} The research of V. Guliyev and J. Hasanov was partially supported by the grant of Science Development Foundation under the President of the Republic of Azerbaijan project EIF-2010-1(1)-40/06-1. The research of V. Guliyev and S. Samko was partially supported by the Scientific and Technological Research Council of Turkey (TUBITAK Project No: 110T695). \end{document}
\begin{document} \newcommand{\spacing}[1]{\renewcommand{\baselinestretch}{#1}\large\normalsize} \spacing{1.14} \title[On the classification of five-dimensional nilsolitons]{On the classification of five-dimensional nilsolitons} \author {Hamid Reza Salimi Moghaddam} \address{Department of Pure Mathematics \\ Faculty of Mathematics and Statistics\\ University of Isfahan\\ Isfahan\\ 81746-73441-Iran.\\ Scopus Author ID: 26534920800 \\ ORCID Id:0000-0001-6112-4259\\} \email{[email protected] and [email protected]} \keywords{ Ricci soliton, left invariant Riemannian metric, nilsoliton, five-dimensional nilmanifolds. \\ AMS 2020 Mathematics Subject Classification: 22E60, 53C44, 53C21.} \begin{abstract} In 2002, using a variational method, Lauret classified five-dimensional nilsolitons. In this work, using the algebraic Ricci soliton equation, we obtain the same classification. We show that, among the ten classes of five-dimensional nilmanifolds, seven classes admit a Ricci soliton structure. In each case, the derivation which satisfies the algebraic Ricci soliton equation is computed. \end{abstract} \maketitle \section{\textbf{Introduction}}\label{Introduction} Suppose that $(M,g)$ is a complete Riemannian manifold and $\textsf{ric}_g$ denotes its Ricci tensor. If, for a real number $c$ and a complete vector field $X$, the Riemannian metric $g$ satisfies the equation \begin{equation}\label{Main Ricci soliton equation} \textsf{ric}_g=c g+\textsf{L}_Xg, \end{equation} then $g$ is called a Ricci soliton. In this equation, if $c>0$, $c=0$ or $c<0$, then $g$ is called a shrinking, steady, or expanding Ricci soliton, respectively. \\ Although the above definition generalizes Einstein metrics, the main motivation for considering Ricci solitons is the Ricci flow equation \begin{equation}\label{Ricci flow equation} \frac{\partial}{\partial t}g_t=-2\textsf{ric}_{g(t)}. \end{equation} A Riemannian metric $g$ is a Ricci soliton on $M$ if and only if the following one-parameter family of Riemannian metrics, where $\phi_t$ is a one-parameter group of diffeomorphisms, is a solution of (\ref{Ricci flow equation}) (see \cite{Lauret1} and \cite{Lauret2}), \begin{equation}\label{family of Riemannian metrics} g_t=(-2c t+1)\phi_t^\ast g. \end{equation} A very interesting case happens when we study Ricci solitons on Lie groups. Suppose that $G$ is a Lie group, $\frak{g}$ is its Lie algebra and $g$ is a left invariant Riemannian metric on $G$. If, for a real number $c$ and a derivation $D\in\textsf{Der}(\frak{g})$, the $(1,1)$-Ricci tensor $\textsf{Ric}_g$ of $g$ satisfies the equation \begin{equation}\label{Algebraic Ricci soliton equation} \textsf{Ric}_g=c\textsl{Id}+D, \end{equation} then $g$ is called an algebraic Ricci soliton. Moreover, if $G$ is a nilpotent Lie group then it is called a nilsoliton. It is shown that all algebraic Ricci solitons are Ricci solitons (for more details see \cite{Jablonski1} and \cite{Lauret3}).\\ In the year 2001, Lauret proved that, on nilpotent Lie groups, a left invariant Riemannian metric $g$ is a Ricci soliton if and only if it is an algebraic Ricci soliton. In fact, for left invariant Riemannian metrics on nilpotent Lie groups the equations (\ref{Main Ricci soliton equation}) and (\ref{Algebraic Ricci soliton equation}) are equivalent (see \cite{Lauret3} and \cite{Jablonski2}).\\ Naturally, one can generalize the concept of algebraic Ricci soliton to homogeneous spaces. In 2014, Jablonski showed that any homogeneous Ricci soliton is an algebraic Ricci soliton (see \cite{Jablonski2}).
So the classification of left invariant Ricci solitons on Lie groups reduces to the classification of algebraic Ricci solitons on them. \\ In the year 2002, using a variational method, Lauret classified five-dimensional nilsolitons (for more details see \cite{Lauret4} and \cite{Will}). In this work, we classify five-dimensional nilsolitons using the classification of five-dimensional nilmanifolds given in \cite{Homolya-Kowalski} and \cite{Figula-Nagy}, and the algebraic Ricci soliton equation (\ref{Algebraic Ricci soliton equation}). We see that our results are compatible with the results of \cite{Lauret4}.\\ Suppose that $g$ is a left invariant Riemannian metric on a Lie group $G$ and $\alpha_{ijk}$ are the structural constants of the Lie algebra $\frak{g}$, with respect to an orthonormal basis $\{E_1,\cdots,E_n\}$, defined by the following equations: \begin{equation}\label{structural constants} [E_i,E_j]=\sum_{k=1}^n\alpha_{ijk}E_k. \end{equation} In an earlier paper (see \cite{Salimi}), we proved that $g$ is an algebraic Ricci soliton if and only if there exists a real number $c$ such that, for any $t, p, q=1,\cdots,n$, the structural constants satisfy the following equation: \begin{eqnarray}\label{Main formula} c\alpha_{qpt}+\frac{1}{4} \sum_{i=1}^n\sum_{j=1}^n\sum_{r=1}^n &&\hspace{-0.6cm} 2\alpha_{rjj}\Big{(}\alpha_{iqt}(\alpha_{pri}+\alpha_{ipr}-\alpha_{rip}) -\alpha_{ipt}(\alpha_{qri}+\alpha_{iqr}-\alpha_{riq})\Big{)}\nonumber\\ &&\hspace{-0.6cm} +2(\alpha_{rji}+\alpha_{irj}-\alpha_{jir})(\alpha_{ipt}\alpha_{qjr}-\alpha_{iqt}\alpha_{pjr})\nonumber\\ &&\hspace{-0.6cm} +(\alpha_{jri}+\alpha_{ijr}-\alpha_{rij})\Big{(}\alpha_{ipt}(\alpha_{qjr}+\alpha_{rqj}-\alpha_{jrq}) -\alpha_{iqt}(\alpha_{pjr}+\alpha_{rpj}-\alpha_{jrp})\Big{)}\\ &&\hspace{-0.6cm} -2\alpha_{pqi}\alpha_{rjj}(\alpha_{irt}+\alpha_{tir}-\alpha_{rti})+2\alpha_{pqi}\alpha_{ijr}(\alpha_{rjt}+\alpha_{trj}-\alpha_{jtr})\nonumber\\ &&\hspace{-0.6cm} +\alpha_{pqi}(\alpha_{ijr}+\alpha_{rij}-\alpha_{jri})(\alpha_{jrt}+\alpha_{tjr}-\alpha_{rtj})=0. \nonumber \end{eqnarray} In this case we showed that, for any $i=1,\cdots,n$, the derivation $D$ satisfies the following equation: \begin{eqnarray}\label{Derivation equation} D(E_i) &=& -cE_i+\frac{1}{4} \sum_{l=1}^n\Big{\{} \sum_{j=1}^n\sum_{r=1}^n2\alpha_{rjj}(\alpha_{irl}+\alpha_{lir}-\alpha_{rli})\nonumber \\ &&\hspace{4cm}-2\alpha_{ijr}(\alpha_{ril}+\alpha_{lrj}-\alpha_{jlr})\\ &&\hspace{4cm}-(\alpha_{ijr}+\alpha_{rij}-\alpha_{jri})(\alpha_{jrl}+\alpha_{ljr}-\alpha_{rlj})\Big{\}}E_l.\nonumber \end{eqnarray} In \cite{Salimi}, using the above equations, we classified all algebraic Ricci solitons on three-dimensional Lie groups. \\ Using the equations (\ref{Main formula}) and (\ref{Derivation equation}), in a way which differs from Lauret's variational method, we again classify all five-dimensional nilsolitons. We note that in this method, in contrast with some recent works (such as \cite{Di Cerbo}) which have studied Ricci solitons only on the set of left invariant vector fields, we consider Ricci solitons with respect to all vector fields.\\ A Riemannian nilmanifold is a connected Riemannian manifold $(M,g)$ such that there exists a nilpotent Lie subgroup of its isometry group $I(M)$ which acts transitively on $M$. Wilson, in \cite{Wilson}, proved that if $M$ is a homogeneous Riemannian nilmanifold then there is a unique nilpotent Lie subgroup of $I(M)$ which acts simply transitively on $M$. Also, he showed that this Lie subgroup is a normal subgroup of $I(M)$.
Thus, we can regard a homogeneous Riemannian nilmanifold as a nilpotent Lie group $N$ equipped with a left invariant Riemannian metric $g$. \\ In 2006, Homolya and Kowalski classified, up to isometry, five-dimensional two-step nilpotent Lie groups equipped with left invariant Riemannian metrics (see \cite{Homolya-Kowalski}). In a paper written by Figula and Nagy in 2018, the classification of five-dimensional nilmanifolds up to isometry was completed by the classification of five-dimensional nilpotent Lie algebras of nilpotency classes three and four equipped with inner products. We mention that this classification does not contain the Lie algebras which are direct products of Lie algebras of lower dimensions (see \cite{Figula-Nagy}).\\ In the next section we will classify all left invariant Ricci solitons on all ten classes of five-dimensional nilmanifolds. \section{\textbf{The classification of five-dimensional nilsolitons}}\label{The classification} In \cite{Homolya-Kowalski}, all five-dimensional two-step nilmanifolds have been classified up to isometry by Homolya and Kowalski. Recently, in \cite{Figula-Nagy}, Figula and Nagy have completed the classification of five-dimensional nilmanifolds up to isometry by classifying the five-dimensional nilpotent Lie groups of nilpotency class greater than two. In this section, using formula (\ref{Main formula}) and the above classifications, we classify all algebraic Ricci solitons on simply connected five-dimensional nilmanifolds. \\ Suppose that $N$ is an arbitrary simply connected five-dimensional nilmanifold; then, up to isometry, its Lie algebra is one of the following ten cases. In all cases the set $\{E_1, \cdots, E_5\}$ is an orthonormal basis for the Lie algebra, and in each case we give only the non-vanishing commutators. We use the notation $d(a_1,\cdots,a_n)$ to denote the diagonal matrix with entries $a_1,\cdots,a_n$.\\ Among the cases of nilpotency class greater than two, we will consider in detail the cases 2.6 and 2.7, which are the most difficult ones. \subsection{\textbf{Two-step nilpotent Lie algebra with one-dimensional center.}} Let $N$ be the two-step nilpotent Lie group with one-dimensional center and let $\frak{n}$ denote its Lie algebra. By \cite{Homolya-Kowalski}, the non-zero Lie brackets are as follows: \begin{equation} [E_1,E_2]=sE_5, \ \ \ \ [E_3,E_4]=mE_5, \end{equation} where $s\geq m >0$. So, in this case, the structure constants with respect to the orthonormal basis $\{E_1, \cdots, E_5\}$ are of the following forms: \begin{equation*} \alpha_{125}=-\alpha_{215}=s, \ \ \ \ \alpha_{345}=-\alpha_{435}=m. \end{equation*} Using formula (\ref{Main formula}), we easily see that $\frak{n}$ is an algebraic Ricci soliton if and only if \begin{equation*} \left\{ \begin{array}{ll} cs+\frac{1}{2}sm^2+\frac{3}{2}s^3=0, & \\ cm+\frac{1}{2}ms^2+\frac{3}{2}m^3=0. & \end{array} \right. \end{equation*} A direct computation shows that the above equations hold if and only if $m=s$. So for the constant $c$ we have $c=-2m^2$. Now, the equation (\ref{Derivation equation}) shows that for the matrix representation of $D$ we have $$D=d(\frac{3}{2}m^2,\frac{3}{2}m^2,\frac{3}{2}m^2,\frac{3}{2}m^2,3m^2).$$ \subsection{\textbf{Two-step nilpotent Lie algebra with two-dimensional center.}} The second case is the two-step nilpotent Lie group $N$ with two-dimensional center.
In \cite{Homolya-Kowalski}, it is shown that there are real numbers $m\geq s >0$ such that the non-zero Lie brackets are \begin{equation} [E_1,E_2]=mE_4, \ \ \ \ [E_1,E_3]=sE_5. \end{equation} In this case for the structure constants we have: \begin{equation*} \alpha_{124}=-\alpha_{214}=m, \ \ \ \ \alpha_{135}=-\alpha_{315}=s. \end{equation*} Now, the equation (\ref{Main formula}) shows that $\frak{n}$ satisfies the algebraic Ricci soliton equation (\ref{Algebraic Ricci soliton equation}) if and only if \begin{equation*} \left\{ \begin{array}{ll} cm+\frac{3}{2}m^3+\frac{1}{2}s^2m=0, & \\ cs+\frac{3}{2}s^3+\frac{1}{2}m^2s=0. & \end{array} \right. \end{equation*} It is easy to see that the solutions satisfy $m=s$ and $c=-2m^2$. Now, by equation (\ref{Derivation equation}), for the matrix representation of $D$ in the basis $\{E_1, \cdots, E_5\}$ we have $$D=d(m^2,\frac{3}{2}m^2,\frac{3}{2}m^2,\frac{5}{2}m^2,\frac{5}{2}m^2).$$ \subsection{\textbf{Two-step nilpotent Lie algebra with three-dimensional center.}} The third case is the two-step nilpotent Lie group $N$ with three-dimensional center. This is the last five-dimensional two-step nilpotent Lie group. Based on \cite{Homolya-Kowalski}, the non-zero Lie bracket of this case is \begin{equation} [E_1,E_2]=mE_3, \end{equation} where $m$ is a positive real number. So, the non-zero structure constants are \begin{equation*} \alpha_{123}=-\alpha_{213}=m. \end{equation*} It is easy to show that the equation (\ref{Main formula}) holds if and only if $cm+\frac{3}{2}m^3=0$. The last equation shows that $c=-\frac{3}{2}m^2$. So, by (\ref{Derivation equation}), the representation of $D$ in the basis $\{E_1, \cdots, E_5\}$ is of the form $$D=d(m^2,m^2,2m^2,\frac{3}{2}m^2,\frac{3}{2}m^2).$$ \subsection{\textbf{Four-step nilpotent Lie algebra $\frak{l}_{5,7}$, (case A)}} In \cite{Figula-Nagy}, Figula and Nagy have proven that the five-dimensional nilpotent Lie algebras $\frak{n}$ of nilpotency class greater than two are of the forms $\frak{l}_{5,7}$, $\frak{l}_{5,6}$, $\frak{l}_{5,5}$ and $\frak{l}_{5,9}$. In this subsection and the next one, we study the necessary and sufficient conditions for the Lie algebra $\frak{l}_{5,7}$ to be an algebraic Ricci soliton. By \cite{Figula-Nagy}, the non-vanishing Lie brackets of $\frak{l}_{5,7}$ are \begin{equation} [E_1,E_2]=mE_3+sE_4+uE_5, \ \ \ [E_1,E_3]=vE_4+wE_5, \ \ \ [E_1,E_4]=xE_5, \end{equation} where for the real numbers $m,s,u,v,w,x$ we have two cases: (case A: $m,v,x>0$, $s=0$ and $w\geq 0$) and (case B: $m,v,x>0$ and $s>0$). In this subsection we study case A. In the case of $\frak{l}_{5,7}$ (in both cases A and B) for the structure constants we have: \begin{eqnarray*} &&\alpha_{123}=-\alpha_{213}=m, \ \alpha_{124}=-\alpha_{214}=s, \ \alpha_{125}=-\alpha_{215}=u, \\ &&\alpha_{134}=-\alpha_{314}=v, \ \alpha_{135}=-\alpha_{315}=w, \ \alpha_{145}=-\alpha_{415}=x. \end{eqnarray*} The equation (\ref{Main formula}), together with some computations, shows that $\frak{l}_{5,7}$ (case A) satisfies the algebraic Ricci soliton equation (\ref{Algebraic Ricci soliton equation}) if and only if \begin{equation*} x=m, \ \ c=-2m^2, \ \ u=w=s=0 \ \ \textrm{and} \ \ v=\frac{2}{\sqrt{3}}m.
\end{equation*} Now, using (\ref{Derivation equation}), the matrix representation of $D$ is of the form $$D=d(\frac{1}{3}m^2,\frac{3}{2}m^2,\frac{11}{6}m^2,\frac{13}{6}m^2,\frac{5}{2}m^2).$$ \subsection{\textbf{Four-step nilpotent Lie algebra $\frak{l}_{5,7}$, (case B)}} As mentioned above, in this case (case B), with the same structure constants as in case A, we have $m,v,x>0$ and $s>0$. We see that the equation (\ref{Main formula}) reduces to a system of equations with no solution. So the Lie algebra $\frak{l}_{5,7}$ (case B) does not admit an algebraic Ricci soliton structure. \subsection{\textbf{Four-step nilpotent Lie algebra $\frak{l}_{5,6}$, (case A)}} In this subsection and the next one, we consider the Lie algebra $\frak{l}_{5,6}$. In both cases (cases A and B), by \cite{Figula-Nagy}, the Lie brackets are \begin{equation} [E_1,E_2]=mE_3+sE_4+uE_5, \ \ \ [E_1,E_3]=vE_4+wE_5, \ \ \ [E_1,E_4]=xE_5, \ \ \ [E_2,E_3]=yE_5, \end{equation} where $m,s,u,v,w,x,y$ are real numbers such that $m,v,x,y\neq0$. For the structure constants we have: \begin{eqnarray*} &&\alpha_{123}=-\alpha_{213}=m, \ \alpha_{124}=-\alpha_{214}=s, \ \alpha_{125}=-\alpha_{215}=u, \ \alpha_{134}=-\alpha_{314}=v,\\ &&\alpha_{135}=-\alpha_{315}=w, \ \alpha_{145}=-\alpha_{415}=x, \ \alpha_{235}=-\alpha_{325}=y. \end{eqnarray*} Similar to the Lie algebra $\frak{l}_{5,7}$, we have two cases, A and B. In case A we have $s=0$ and $w\geq 0$, and in case B we have $s>0$. A direct computation, using (\ref{Main formula}), shows that the Lie algebra $\frak{l}_{5,6}$ (in both cases A and B) satisfies the algebraic Ricci soliton equation (\ref{Algebraic Ricci soliton equation}) if and only if \begin{equation*} \left\{ \begin{array}{ll} myu=0, & \\ mvs+mwu+sxu=0, & \\ cm+\frac{3m}{2}(m^2+s^2+u^2)+\frac{x}{2}(xm-sw)=0, & \\ cs+\frac{3s}{2}(m^2+s^2+u^2+v^2)+\frac{1}{2}(-mwx+w^2s+y^2s)+wvu=0, & \\ cu+\frac{3u}{2}(m^2+s^2+u^2+w^2+x^2+y^2)+\frac{v^2u}{2}+vsw=0, & \\ uxv=0, & \\ mvs+mwu-\frac{vxw}{2}=0, & \\ cv+\frac{3v}{2}(s^2+v^2+w^2)+\frac{v}{2}(u^2+y^2)+usw=0, & \\ cw+\frac{3w}{2}(u^2+v^2+w^2+x^2+y^2)+\frac{s}{2}(-mx+sw)+svu=0, & \\ umx=0, & \\ sxu+vxw-\frac{mvs}{2}=0, & \\ cx+\frac{3x}{2}(u^2+w^2+x^2)+\frac{1}{2}(m^2x-msw+xy^2)=0, & \\ mvu=0, & \\ mwu+sxu+vxw=0, & \\ suy+vwy=0, & \\ cy+\frac{3y}{2}(u^2+w^2+y^2)+\frac{y}{2}(s^2+v^2+x^2)=0, & \\ wyx-\frac{msy}{2}=0, & \\ uyx=0. & \end{array} \right. \end{equation*} A direct computation shows that the above equations hold if and only if \begin{equation*} u=w=s=0, \ m=\pm\sqrt{\frac{3}{2}}x, \ y=\pm x, \ v=\pm\sqrt{\frac{3}{2}}x \ \ \textrm{and} \ \ c=-\frac{11}{4}x^2. \end{equation*} The equation (\ref{Derivation equation}) shows that for the matrix representation of $D$ we have $$D=d(\frac{3}{4}x^2,\frac{3}{2}x^2,\frac{9}{4}x^2,3x^2,\frac{15}{4}x^2).$$ \subsection{\textbf{Four-step nilpotent Lie algebra $\frak{l}_{5,6}$, (case B)}} In case B, with the same structure constants as above, we have $s>0$. In this case the system of equations defined by (\ref{Main formula}) has no solution. Therefore, similar to the case of $\frak{l}_{5,7}$ (case B), the Lie algebra $\frak{l}_{5,6}$ (case B) does not admit an algebraic Ricci soliton structure. \subsection{\textbf{Three-step nilpotent Lie algebra $\frak{l}_{5,5}$}} This subsection is devoted to $\frak{l}_{5,5}$, which is a three-step nilpotent Lie algebra with one-dimensional center. For the real numbers $s,u\geq0$ and $m,v,w>0$, the non-vanishing Lie brackets are as follows: \begin{equation} [E_1,E_2]=mE_4+sE_5, \ \ \ [E_1,E_3]=uE_5, \ \ \ [E_1,E_4]=vE_5, \ \ \ [E_2,E_3]=wE_5.
\end{equation} Hence for the structure constants we have: \begin{eqnarray*} &&\alpha_{124}=-\alpha_{214}=m, \ \alpha_{125}=-\alpha_{215}=s, \ \alpha_{135}=-\alpha_{315}=u, \\ &&\alpha_{145}=-\alpha_{415}=v, \ \alpha_{235}=-\alpha_{325}=w. \end{eqnarray*} Substituting the above structure constants into the equation (\ref{Main formula}) leads to a system of equations with the following solution: \begin{equation*} u=s=0, \ v=m, \ w=\frac{\sqrt{2}}{2}m, \ c=-\frac{7}{4}m^2. \end{equation*} So the Lie algebra $\frak{l}_{5,5}$ admits an algebraic Ricci soliton structure if and only if the above equations hold. In this case, by (\ref{Derivation equation}), the matrix representation of $D$ in the basis $\{E_1, \cdots, E_5\}$ is $$D=d(\frac{3}{4}m^2,m^2,\frac{3}{2}m^2,\frac{7}{4}m^2,\frac{5}{2}m^2).$$ \subsection{\textbf{Three-step nilpotent Lie algebra $\frak{l}_{5,9}$, (case A)}} For the non-vanishing Lie brackets of this case we have: \begin{equation} [E_1,E_2]=mE_3+sE_4+uE_5, \ \ \ [E_1,E_3]=vE_4, \ \ \ [E_2,E_3]=wE_5, \end{equation} where $m>0$, $w>v>0$ and $s,u\geq0$. So we have: \begin{eqnarray*} &&\alpha_{123}=-\alpha_{213}=m, \ \alpha_{124}=-\alpha_{214}=s, \ \alpha_{125}=-\alpha_{215}=u, \\ &&\alpha_{134}=-\alpha_{314}=v, \ \alpha_{235}=-\alpha_{325}=w. \end{eqnarray*} It is easy to see that the system of equations defined by (\ref{Main formula}) has no solution. Therefore, case A of the Lie algebra $\frak{l}_{5,9}$ does not admit an algebraic Ricci soliton structure. \subsection{\textbf{Three-step nilpotent Lie algebra $\frak{l}_{5,9}$, (case B)}} In case B of the Lie algebra $\frak{l}_{5,9}$, for the same Lie algebra brackets, we have $m,v>0$, $v=w$, $u=0$ and $s\geq0$. Then the equation (\ref{Main formula}) leads to the following system of equations: \begin{equation*} \left\{ \begin{array}{l} mvs=0,\\ cm+\frac{3}{2}(m^3+s^2m)=0, \\ cs+\frac{3}{2}(s^3+m^2s)+2v^2s=0,\\ cv+\frac{3}{2}s^2v+2v^3=0,\\ cv+\frac{1}{2}s^2v+2v^3=0. \end{array} \right. \end{equation*} The solution of the above system of equations is as follows: \begin{equation*} s=0, \ v=\frac{\sqrt{3}}{2}m, \ c=-\frac{3}{2}m^2. \end{equation*} So case B of the Lie algebra $\frak{l}_{5,9}$ admits an algebraic Ricci soliton structure with the derivation $$D=d(\frac{5}{8}m^2,\frac{5}{8}m^2,\frac{5}{4}m^2,\frac{15}{8}m^2,\frac{15}{8}m^2).$$ Before summarizing, we give below a short numerical check of the last case; the above results, together with their relation to the Lauret classification (Theorem 5.1 of \cite{Lauret4}), are then collected in Table \ref{The summarized result}.
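As an independent consistency check of the computations above (an illustration only, not part of the proof), the following short script verifies numerically that, for case B of $\frak{l}_{5,9}$ with the values just found, the $(1,1)$-Ricci tensor satisfies the algebraic Ricci soliton equation (\ref{Algebraic Ricci soliton equation}) and that $D$ is a derivation. It assumes the standard expression of the Ricci tensor of a left invariant metric on a nilpotent Lie group in an orthonormal basis, $\textsf{ric}_{kl}=-\frac{1}{2}\sum_{i,s}\alpha_{kis}\alpha_{lis}+\frac{1}{4}\sum_{i,j}\alpha_{ijk}\alpha_{ijl}$, and the arbitrary normalization $m=1$.
\begin{verbatim}
import numpy as np

m = 1.0                                  # arbitrary positive value of m
v = np.sqrt(3.0) / 2.0 * m               # the solution v = (sqrt(3)/2) m
c = -1.5 * m**2                          # the constant c = -(3/2) m^2
D = np.diag([5/8, 5/8, 5/4, 15/8, 15/8]) * m**2

# structure constants alpha[i,j,k] with [E_i,E_j] = sum_k alpha[i,j,k] E_k
alpha = np.zeros((5, 5, 5))
alpha[0, 1, 2], alpha[1, 0, 2] = m, -m   # [E_1,E_2] = m E_3
alpha[0, 2, 3], alpha[2, 0, 3] = v, -v   # [E_1,E_3] = v E_4
alpha[1, 2, 4], alpha[2, 1, 4] = v, -v   # [E_2,E_3] = v E_5

# Ricci tensor of a left invariant metric on a nilpotent Lie group:
# ric_{kl} = -1/2 sum_{i,s} a_{kis} a_{lis} + 1/4 sum_{i,j} a_{ijk} a_{ijl}
ric = -0.5 * np.einsum('kis,lis->kl', alpha, alpha) \
      + 0.25 * np.einsum('ijk,ijl->kl', alpha, alpha)

# algebraic Ricci soliton equation Ric = c Id + D
print(np.allclose(ric, c * np.eye(5) + D))          # True

# D is a derivation: D[E_i,E_j] = [D E_i, E_j] + [E_i, D E_j]
lhs = np.einsum('ijk,kl->ijl', alpha, D)
rhs = np.einsum('ik,kjl->ijl', D, alpha) + np.einsum('jk,ikl->ijl', D, alpha)
print(np.allclose(lhs, rhs))                        # True
\end{verbatim}
The same check, with the corresponding structure constants, confirms the other cases that admit a soliton structure.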
{\fontsize{7}{0}{\selectfont \begin{table} \centering\caption{The classification of five-dimensional nilsolitons}\label{The summarized result} \begin{tabular}{|p{0.5cm}|p{3.5cm}|p{1.5cm}|p{1.5cm}|p{5cm}|p{1.5cm}|} \hline case & Lie algebra structure & conditions for parameters of Lie algebra structure \newline & Ricci soliton & the constant $c$ and the derivation $D$ & equivalence class in the Lauret classification \\ \hline case 2.1 & $[E_1,E_2]=sE_5$, \newline $[E_3,E_4]=mE_5$ & $s\geq m>0$ & $+$, if $s=m$ & $c=-2m^2$,\newline $D=d(\frac{3}{2}m^2,\frac{3}{2}m^2,\frac{3}{2}m^2,\frac{3}{2}m^2,3m^2)$ & $\mu_4'$ \\ \hline case 2.2 & $[E_1,E_2]=mE_4$, \newline $[E_1,E_3]=sE_5$ & $m\geq s>0$ & $+$, if $s=m$ & $c=-2m^2$,\newline $D=d(m^2,\frac{3}{2}m^2,\frac{3}{2}m^2,\frac{5}{2}m^2,\frac{5}{2}m^2)$ & $\mu_6'$ \\ \hline case 2.3 & $[E_1,E_2]=mE_3$ & $m>0$ & $+$ & $c=-\frac{3}{2}m^2$,\newline $D=d(m^2,m^2,2m^2,\frac{3}{2}m^2,\frac{3}{2}m^2)$ & $\mu_7'$ \\ \hline case 2.4 & $[E_1,E_2]=mE_3+sE_4+uE_5$,\newline $[E_1,E_3]=vE_4+wE_5$, \newline $[E_1,E_4]=xE_5$ & $m,v,x>0$, $s=0$ and $w\geq0$ & $+$, if $x=m$, $u=w=s=0$ and $v=\frac{2}{\sqrt{3}}m$ & $c=-2m^2$,\newline $D=d(\frac{1}{3}m^2,\frac{3}{2}m^2,\frac{11}{6}m^2,\frac{13}{6}m^2,\frac{5}{2}m^2)$ & $\mu_1'$ \\ \hline case 2.5 & $"$ & $m,v,x>0$, $w\geq0$ and $s>0$ & $-$ & $-$ & $-$ \\ \hline case 2.6 & $[E_1,E_2]=mE_3+sE_4+uE_5$,\newline $[E_1,E_3]=vE_4+wE_5$, \newline $[E_1,E_4]=xE_5$, \newline $[E_2,E_3]=yE_5$ & $m,v,x,y\neq0$,\newline $s=0$, $w\geq 0$ & $+$, if $u=w=s=0$ \newline $m=v=\pm\sqrt{\frac{3}{2}}x$, \newline $y=\pm x$ & $c=-\frac{11}{4}x^2$,\newline $D=d(\frac{3}{4}x^2,\frac{3}{2}x^2,\frac{9}{4}x^2,3x^2,\frac{15}{4}x^2)$ & $\mu_2'$ \\ \hline case 2.7 & $"$ & $s>0$ & $-$ & $-$ & $-$ \\ \hline case 2.8 & $[E_1,E_2]=mE_4+sE_5$,\newline $[E_1,E_3]=uE_5$, \newline $[E_1,E_4]=vE_5$, \newline $[E_2,E_3]=wE_5$ & $s,u\geq0$, $m,v,w>0$ & $+$, if $s=u=0$, $v=m$ and $w=\frac{\sqrt{2}}{2}m$ & $c=-\frac{7}{4}m^2$,\newline $D=d(\frac{3}{4}m^2,m^2,\frac{3}{2}m^2,\frac{7}{4}m^2,\frac{5}{2}m^2)$ & $\mu_3'$ \\ \hline case 2.9 & $[E_1,E_2]=mE_3+sE_4+uE_5$,\newline $[E_1,E_3]=vE_4$, \newline $[E_2,E_3]=wE_5$ & $m>0$, $w>v>0$, $s,u\geq0$ & $-$ & $-$ & $-$ \\ \hline case 2.10 & $"$ & $m,v>0$,\newline $w=v$, \newline $s\geq0$,\newline $u=0$ & $+$, if $s=0$,\newline $v=\frac{\sqrt{3}}{2}m$ & $c=-\frac{3}{2}m^2$,\newline $D=d(\frac{5}{8}m^2,\frac{5}{8}m^2,\frac{5}{4}m^2,\frac{15}{8}m^2,\frac{15}{8}m^2)$ & $\mu_5'$ \\ \hline \end{tabular} \end{table} }} \begin{remark} For the Lie groups of nilpotency class greater than two we have used the classification given in \cite{Figula-Nagy}, and we only consider Lie algebras which are not direct products of Lie algebras of lower dimension. Therefore, class $\mu_8'$ of the Lauret classification does not appear in our classification. \end{remark} \begin{remark} Based on the results given in Table \ref{The summarized result}, there are some three- and four-step nilmanifolds which are not nilsolitons. \end{remark} \end{document}
\begin{document} \title{Revealing Genuine Steering under Sequential Measurement Scenario} \author{Amit Mukherjee} \email{[email protected] } \affiliation{Physics and Applied Mathematics Unit, Indian Statistical Institute, 203, B. T. Road, Kolkata 700108 , India.} \author{Arup Roy} \affiliation{Physics and Applied Mathematics Unit, Indian Statistical Institute, 203, B. T. Road, Kolkata 700108 , India.} \author{Some Sankar Bhattacharya} \affiliation{Physics and Applied Mathematics Unit, Indian Statistical Institute, 203, B. T. Road, Kolkata 700108 , India.} \author{Biswajit Paul} \affiliation{Department of Mathematics, South Malda College, Malda, West Bengal, India} \author{Kaushiki Mukherjee} \affiliation{Department of Mathematics, Government Girls’ General Degree College, Ekbalpore, Kolkata, India.} \author{Debasis Sarkar} \affiliation{Department of Applied Mathematics, University of Calcutta, 92, A.P.C. Road, Kolkata-700009, India.} \begin{abstract} Genuine steering is still not as well understood as genuine entanglement and nonlocality. Here we provide a protocol which can reveal genuine steering under some restricted operations, in contrast to the existing witnesses of genuine multipartite steering. Our method is reminiscent of a `hidden' steering protocol, in the same spirit as hidden nonlocality, which is well understood in the bipartite scenario. We also introduce a genuine steering measure which indicates the enhancement of genuine steering in the final state of our protocol compared to the initial states. \end{abstract} \pacs{03.65.Ud, 03.67.Mn} \maketitle \section{I. INTRODUCTION} Einstein-Podolsky-Rosen steering, the phenomenon that was first discussed by Schr\"odinger and afterwards considered as a notion of quantum nonlocality, has gained significant attention in recent years\cite{schr,wisemanpra07,wisemanprl07}. This quantum phenomenon, which has no classical analogue, is observed if one of two distant observers, sharing an entangled state, can remotely steer the particle of the other distant observer by performing measurements on his/her particle only. The experimental criteria for analyzing the presence of bipartite steering, first investigated in \cite{M. D. Reid09}, were formalized in \cite{wiseman11}, where the authors generalized this concept to arbitrary systems. To date there has been a lot of analysis regarding various features of steering nonlocality, such as methods of detection\cite{Howell2013} and quantification of steering\cite{Cavalcanti2013,Aolita2014}, steering of continuous variable systems\cite{adesso15}, loophole-free demonstration of steering\cite{zeilinger}, applications as a resource of nonlocal correlations in quantum information protocols, the relation of steering with incompatibility of quantum measurements\cite{bru,ghu} and its ability to detect bound entanglement\cite{ghune14}, etc. Apart from its foundational richness, EPR steering has multi-faceted applications in practical tasks such as the semi-device independent scenario \cite{wiseman2013}, where only one party can trust his or her apparatus while the other party's apparatus is not trusted. In that situation the presence of a steerable state provides a better chance to allow secure key distribution\cite{branciard2012}.
Steering has also been found to be useful in some other tasks such as randomness certification\cite{acin15}, entanglement assisted sub-channel discrimination\cite{watrous}, and secure teleportation through continuous-variable steerable states\cite{reid15}.\\ Being a notion of nonlocality, steering fits into a hierarchy according to which it is a form of quantum inseparability intermediate between entanglement and Bell nonlocality. For pure quantum states these three notions are equivalent, whereas in general they are inequivalent for mixed states \cite{brunner15}. However, in the context of comparing steering nonlocality with Bell nonlocality, it is interesting to mention that, analogous to hidden nonlocality\cite{popescu95,gisin96}, the existence of hidden steering has been proved in \cite{brunner15} for the bipartite scenario. Just as nonlocality can be exploited beyond Bell scenarios via the notion of hidden nonlocality\cite{popescu95,gisin96}, hidden steering refers to the revelation of steering nonlocality under suitable sequential measurements. In this context an obvious interest arises regarding the analysis of the same phenomenon in the multipartite scenario.\\ Due to the increase in complexity as one shifts from bipartite to multipartite systems, there have been only limited attempts so far to understand the features of the multipartite steering phenomenon. Analogous to both entanglement and Bell nonlocality, the concept of genuine steering has been established in recent years. In this context it may be mentioned that, unlike Bell nonlocality and entanglement, the notion of genuine steering nonlocality lacks uniqueness, due to the asymmetric nature of steering. Genuine steering was first introduced in \cite{reid}, where the authors provided criteria for detecting genuineness in the steering scenario for both continuous and discrete variable systems. Later, two other notions of genuine steering were introduced in \cite{jeeba}, mainly for the tripartite framework in which the measurements of two parties are fully specified, i.e., one party can steer the remaining two. In this context, the author also designed genuine steering inequalities to detect genuine tripartite steering. Speaking of genuine steering nonlocality, it is then interesting to explore the possibility of exploiting it via some suitable sequential measurement protocol. \\ To be precise, our discussion proceeds in the direction of analyzing hidden genuine tripartite steering nonlocality in the framework introduced in \cite{jeeba}, and we will follow the terms and terminology used in \cite{jeeba}. We will design a protocol involving a sequence of measurements such that, starting from tripartite states which may not be genuinely steerable, the protocol may generate a genuinely steerable state. Interestingly, the initial states which will be used in the protocol have a bi-local model \cite{acin2014}.\\ The paper is organized as follows. In Section II we introduce the notion of steering in both the bipartite and the genuine multipartite scenario. In Section III we present suitable sequential operations to achieve the final state. Section IV contains our main results, followed by a discussion. \section{BACKGROUND} In this section we briefly describe the mathematical tools that will be used in our work. \subsection{Genuine tripartite steering} First we discuss the criteria for detecting genuine steering\cite{jeeba}.
Correlations $P(a,b,c|x,y,z)$ shared between three parties, say Alice, Bob and Charlie, are said to be genuinely steerable\cite{jeeba} from one party, say Charlie, to the remaining two parties, Alice and Bob, if they cannot be explained in the following form: \begin{eqnarray}\label{jeva1} &&P(a,b,c|x,y,z)= \sum_{\lambda}q_\lambda [P(a,b|x,y,\rho_{AB}(\lambda))]P(c|z,\lambda)\nonumber\\ &+&\sum_{\lambda}p_{\lambda}P(a|x,\rho^\lambda_a)P(b|y,\rho^\lambda_b)P(c|z,\lambda), \end{eqnarray} where $P(a,b|x,y,\rho_{AB}(\lambda))$ denotes the nonlocal probability distribution arising from the two-qubit state $\rho_{AB}(\lambda)$, and $P(a|x, \rho^\lambda _{a})$ and $P(b|y, \rho^\lambda_{b})$ are the distributions arising from the qubit states $\rho^\lambda _{a}$ and $\rho^\lambda_{b}$. Here Charlie performs uncharacterized measurements whereas both Alice and Bob have access to qubit measurements. The tripartite correlation is called genuinely unsteerable if it can be explained in the form of Eq.(\ref{jeva1}), where $\rho_{AB}(\lambda)$ is the hidden state on Alice and Bob's side. In \cite{jeeba}, the author designed a detection criterion for tripartite genuine steering (Svetlichny steering), based on the Svetlichny inequality\cite{SI}. The detection criterion is given in the form of a Bell-type inequality: \begin{equation}\label{pst5} \langle CHSH_{AB}z_1+CHSH^{'}_{AB}z_0\rangle_{2\times2\times ?}^{NLHS}\leq 2\sqrt{2}, \end{equation} where $CHSH_{AB}$ and $CHSH^{'}_{AB}$ stand for two inequivalent facets defining the Bell-CHSH polytope for Alice and Bob, and $\{z_0,z_1\}$ are measurements on Charlie's part. Here $NLHS$ stands for nonlocal hidden state, whereas $2\times2\times ?$ indicates that only two parties (Alice and Bob) have access to qubit measurements, while Charlie does not trust his measurement devices, which are hence uncharacterized. Alice and Bob should have orthonormal measurement settings. If correlations arising from measurements on a given quantum state $\rho$ violate this inequality (Eq.(\ref{pst5})), then $\rho$ is guaranteed to be genuinely steerable from Charlie to Alice and Bob. Analogously, genuine steerability of $\rho$ from Bob to Charlie and Alice, and that from Alice to Charlie and Bob, can be guaranteed respectively by violation of the following criteria: \begin{equation}\label{pst5a} \langle CHSH_{BC}x_1+CHSH^{'}_{BC}x_0\rangle_{2\times2\times ?}^{NLHS}\leq 2\sqrt{2}. \end{equation} \begin{equation}\label{pst5b} \langle CHSH_{AC}y_1+CHSH^{'}_{AC}y_0\rangle_{2\times2\times ?}^{NLHS}\leq 2\sqrt{2}. \end{equation} The terms $CHSH_{BC}$, $CHSH^{'}_{BC}$, $CHSH_{AC}$, $CHSH^{'}_{AC}$ have analogous definitions. Hence a state is genuinely steerable from one party to the remaining two parties if it violates at least one of these three criteria (Eqs.(\ref{pst5}),(\ref{pst5a}),(\ref{pst5b})).\\ We now discuss some relevant tools for measuring genuine multipartite entanglement and genuine steering. \subsection{Genuine multipartite concurrence} We now briefly describe $C_{GM}$, a measure of genuine multipartite entanglement. For pure $n$-partite states ($|\psi\rangle$), this measure is defined as \cite{Zma}: $C_{GM}(|\psi\rangle):= \textmd{min}_j\sqrt{2 (1-\Pi_j(|\psi\rangle))}$, where $\Pi_j(|\psi\rangle)$ is the purity of the $j^{th}$ bipartition of $|\psi\rangle$. The expression of $C_{GM}$ for $X$ states is given in \cite{Has}.
For tripartite $X$ states, \begin{equation}\label{4v} C_{GM}=2\,\textmd{max}_i\{0,|\gamma_i|-w_i\} \end{equation} with $w_i=\sum_{j\neq i}\sqrt{a_jb_j}$, where $a_j$, $b_j$ and $\gamma_j$ $(j=1,2,3,4)$ are the elements of the density matrix of a tripartite X state: \[\begin{bmatrix} a_1 & 0 & 0 & 0 & 0 & 0 & 0 & \gamma_1 \\ 0 & a_2 & 0 & 0 & 0 & 0 & \gamma_2 & 0 \\ 0 & 0 & a_3 & 0 & 0 & \gamma_3 & 0 & 0 \\ 0 & 0 & 0 & a_4 & \gamma_4 & 0 & 0 & 0 \\ 0 & 0 & 0 &{\gamma_4}^\ast &b_4 & 0 & 0 & 0 \\ 0 & 0 &{\gamma_3}^\ast & 0 & 0 & b_3 & 0 & 0 \\ 0 &{\gamma_2}^\ast & 0 & 0 & 0 & 0 & b_2 & 0 \\ {\gamma_1}^\ast & 0 &0 & 0 & 0 & 0 & 0 & b_1 \\ \end{bmatrix}\] \subsection{Genuine steering measure} We now define a genuine steering measure which is analogous to the bipartite steering measure first described in \cite{angelo16}. This measure is given by the following quantity: \begin{equation}\label{sm} S_{gen}(\rho)= \max\{0,\frac{S_n(\rho)-1}{S_n^{max}-1}\} \end{equation} where $S^{max}_{n}=\max_\rho S_{n}(\rho)$ and $S_n(\rho)=\max_\eta S_n(\rho,\eta)$, with the maximization taken over all measurement settings $\eta$, and $0\le S_{gen}(\rho)\le 1$.\\ Having briefly described our mathematical tools, we now proceed to our results. To start with, we design the sequential measurement protocol on which our observation of the enhanced revelation of genuine steering is based. \section{Revealing Multipartite Genuine Steering}\label{proto} \begin{figure} \caption{Schematic diagram for the preparation and measurement stages.} \label{sets} \end{figure} The protocol that we propose here is an SLOCC (Stochastic Local Operations and Classical Communication) protocol which consists of two stages: the \textit{Preparation Stage} and the \textit{Measurement Stage}. We name this protocol the \textit{Sequential Measurement Protocol}. A detailed sketch of the protocol is given below:\\ \textit{Sequential Measurement Protocol}: Three spatially separated parties (say $A_i$, $i=1,2,3$) are involved in this protocol, and $n$ tripartite quantum states are distributed among them. None of these states violates the genuine steering inequality\cite{jeeba}. As each party holds one particle from each of the $n$ tripartite states, each of the parties holds $n$ particles. \subsection{Preparation Stage} \begin{itemize} \item In the preparation stage, every party can perform some joint measurement on their respective $n-1$ particles and then broadcast the results to the others. \item At the end of the measurements by all three parties, a tripartite quantum state shared among $A_1$, $A_2$ and $A_3$ is generated. Clearly this final state always depends upon the measurement results obtained by the parties in the previous step. \end{itemize} \subsection{Measurement Stage} \begin{itemize} \item In the measurement stage, all three parties can perform any projective measurement in arbitrary directions. In this stage they are not allowed to communicate among themselves. \item After the measurements they generate a tripartite correlation, for which they can verify whether it violates the genuine steering inequality. \end{itemize} We refer to this protocol of sequential measurements by the three parties sharing $n$ states as a sequential measurement protocol (SMP). Having sketched the protocol, we now give examples of some families of tripartite states which, when used in this network, reveal genuine steering for some members of these families.
Such an observation is supported by an increase in the amount of genuine steering, as quantified by the steering measure $S_{gen}(\rho)$ (Eq.(\ref{sm})).\\ Let the three initial states be given by: \begin{equation}\label{1} \rho_1 = p_1 |\psi_f\rangle\langle \psi_f|+(1- p_1)|001\rangle\langle001| \end{equation} with $|\psi_f\rangle=\cos\theta_1|000\rangle+\sin\theta_1|111\rangle$, $0\leq\theta_1\leq \frac{\pi}{4}$ and $0\leq p_1\leq 1$; \begin{equation}\label{2} \rho_2= p_2 |\psi_m\rangle\langle \psi_m|+(1-p_2)|010\rangle\langle010| \end{equation} with $|\psi_m\rangle=\frac{|000\rangle+|111\rangle}{\sqrt{2}}$ and $0\leq p_2\leq 1$; \begin{equation}\label{3} \rho_3= p_3 |\psi_l\rangle\langle \psi_l|+(1-p_3)|100\rangle\langle100| \end{equation} with $|\psi_l\rangle=\sin\theta_3|000\rangle+\cos\theta_3|111\rangle$, $0\leq\theta_3\leq \frac{\pi}{4}$ and $0\leq p_3\leq 1$. In this context it may be noted that the three initial states have a Svetlichny bi-local model under projective measurements for the following restricted ranges of the state parameters: \begin{itemize} \item For the first state ($\rho_1$): $p_1 \leq \frac{1}{1 + \sin2\theta_1}$; \item For the second state ($\rho_2$): $p_2 \leq \frac{1}{2}$; \item For the third state ($\rho_3$): $p_3 \leq \frac{1}{1 + \sin2\theta_3}$. \end{itemize} Each of the three parties $A_1$, $A_2$ and $A_3$ performs Bell basis measurements on their respective particles. Depending on a particular output of all the measurements (here $|\psi^{\pm}\rangle=\frac{|01\rangle\pm|10\rangle}{\sqrt{2}}$), a resultant state $\rho_4^{\pm}$ is obtained, which after correcting the phase term is given by: \begin{equation}\label{4} \rho_4 = \frac{p_3 |\phi\rangle\langle\phi| + (1 - p_3)\sin^2 \theta_1|100\rangle\langle100|}{\sin^2 \theta_1+ p_3 \cos2\theta_1 \sin^2 \theta_3 } \end{equation} where $|\phi\rangle = \cos\theta_1 \sin\theta_3|000\rangle + \sin\theta_1 \cos\theta_3 |111\rangle$. Clearly $\rho_4$ is independent of $p_1$ and $p_2$. Interestingly, $\rho_4$ can also be generated by other combinations of sequential operations on different arrangements of the particles among the parties $A_i$ $(i=1,2,3)$ and for different outputs of the Bell measurements. For the initial states $\rho_i$ $(i = 1, 2, 3)$, the amounts of genuine entanglement are given by $$ C^{\rho_1}_{GM} = p_1 \sin 2\theta_1 ,$$ $$ C^{\rho_2}_{GM} = p_2 $$ and \begin{equation}\label{6iii} C^{\rho_3}_{GM} = p_3 \sin 2\theta_3 \end{equation} whereas that of $\rho_4$ is given by \begin{equation}\label{6iv} C^{\rho_4}_{GM} = \frac{p_3 \sin2\theta_1\sin2\theta_3 }{2(\sin^2 \theta_1+ p_3 \cos2\theta_1 \sin^2 \theta_3)}. \end{equation} Eq.(\ref{6iii}) indicates that the initial states $\rho_i$ $(i=1,2,3)$ are genuinely entangled for any nonzero values of the state parameters.
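As a sanity check of Eq.(\ref{6iv}) (an illustration only, with an arbitrarily chosen set of parameters), one may build the final state $\rho_4$ of Eq.(\ref{4}) as an $8\times 8$ matrix and evaluate the X-state concurrence formula of Eq.(\ref{4v}) directly:
\begin{verbatim}
import numpy as np

# arbitrary illustrative parameters (any 0 < theta_i < pi/4, 0 < p3 <= 1)
theta1, theta3, p3 = 0.30, 0.20, 0.70

# components of |phi> = cos(theta1)sin(theta3)|000> + sin(theta1)cos(theta3)|111>
phi = np.zeros(8)
phi[0] = np.cos(theta1) * np.sin(theta3)
phi[7] = np.sin(theta1) * np.cos(theta3)

norm = np.sin(theta1)**2 + p3 * np.cos(2*theta1) * np.sin(theta3)**2
proj100 = np.zeros((8, 8)); proj100[4, 4] = 1.0        # |100><100|
rho4 = (p3 * np.outer(phi, phi)
        + (1 - p3) * np.sin(theta1)**2 * proj100) / norm

# X-state layout: diag = (a1,a2,a3,a4,b4,b3,b2,b1), gamma_i on the anti-diagonal
a = np.diag(rho4)[:4]
b = np.diag(rho4)[7:3:-1]
gamma = np.array([rho4[0, 7], rho4[1, 6], rho4[2, 5], rho4[3, 4]])
w = np.array([sum(np.sqrt(a[j] * b[j]) for j in range(4) if j != i)
              for i in range(4)])
C_numeric = 2 * max(0.0, np.max(np.abs(gamma) - w))

# closed form for the genuine concurrence of rho_4 quoted in the text
C_formula = p3 * np.sin(2*theta1) * np.sin(2*theta3) / (2 * norm)
print(C_numeric, C_formula)    # the two values agree
\end{verbatim}
With $\theta_1=\theta_3=0.1$ the same script gives $C^{\rho_4}_{GM}>\frac{1}{2}$ precisely for $p_3\gtrsim0.336$, consistent with the threshold quoted below.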
The maximum values of the genuine steering operators $S_i$ (Eq.(\ref{pst5})) under projective measurements for the states $\rho_i$ $(i=1,2,3)$ are given by \begin{widetext} $$S_1 = \max[2\, p_1\sin2\theta_1, \ \frac{1}{\sqrt{2}}\sqrt{(1-p_1-p_1 \cos2\theta_1)^2+(p_1 \sin2\theta_1)^2}\,],$$ \end{widetext} $$S_2 = \max[2\, p_2, \ \frac{1}{\sqrt{2}}\sqrt{(1-p_2)^2+p_2^2}\,]$$ and \begin{equation}\label{6i} S_3 = \max[2\, p_3\sin2\theta_3, \ \frac{1}{\sqrt{2}}\sqrt{(1-p_3+p_3 \cos2\theta_3)^2+(p_3 \sin2\theta_3)^2}\,] \end{equation} respectively, whereas for the final state $\rho_4$ it is given by \begin{widetext} $$S_4 = \max\Big[\frac{\,p_3 \sin2\theta_1\sin2\theta_3}{\sin^2 \theta_1+ p_3 \cos2\theta_1 \sin^2 \theta_3},$$ \begin{equation}\label{6ii} \frac{\sqrt{2} \sqrt{(1-p_3+p_3 \cos2\theta_3-\cos2\theta_1)^2+(p_3 \sin2\theta_1 \sin2\theta_3)^2}}{2-2(1-p_3)\cos2\theta_1-p_3\cos2(\theta_1-\theta_3)-p_3 \cos2(\theta_1+\theta_3)}\Big]. \end{equation} \end{widetext} It is clear from the maximum values of the genuine steering operators (Eqs.(\ref{6i}), (\ref{6ii})) and the measure of entanglement (Eqs.(\ref{6iii}), (\ref{6iv})) of both the initial states and the final state that none of them violates the genuine steering inequalities (Eqs.(\ref{pst5}),(\ref{pst5a}),(\ref{pst5b})) for $C^{\rho_i}_{GM}\leq \frac{1}{2}$ $(i=1,2,3,4).$ \par Thus, to observe the revelation of genuine steering there should exist some fixed values of the parameters of the three initial Svetlichny bi-local states with $C^{\rho_i}_{GM}\leq\frac{1}{2}$ such that the final state has $C^{\rho_4}_{GM}>\frac{1}{2}$. Interestingly, we get such states from the families of the initial states $\rho_1$ (Eq.(\ref{1})), $\rho_2$ (Eq.(\ref{2})) and $\rho_3$ (Eq.(\ref{3})). \par For example, let $\theta_1=0.1$, $p_1\leq 0.509$, $p_2\leq\frac{1}{2}$, $\theta_3=0.1$ and $p_3\in[0,0.83426]$. Then each of the initial states has a Svetlichny bi-local model (moreover, one can show that these models are $NS_2$ local\cite{acin2014}) and $C^{\rho_i}_{GM}\leq\frac{1}{2}$. Thus they do not violate the genuine steering inequalities (Eqs.(\ref{pst5}),(\ref{pst5a}),(\ref{pst5b})). \par But when used in our protocol (Sec.\ref{proto}), they can generate a state $\rho_4$ (with $C^{\rho_4}_{GM}>\frac{1}{2}$) which exhibits genuine steering by violating the genuine steering inequalities for $p_3 \geq 0.33557$. This guarantees revelation of genuine steering for $p_3 \in [0.33557,0.83426]$. So initially each of these three states is unable to exhibit genuine steering, but after the sequential measurements are taken into account the resulting state violates the genuine steering inequality. Now a pertinent question is whether one can quantify this revelation of genuine steering as observed in our protocol. We deal with this question in the next sub-section. \subsection{Enhancement of the Genuine Steering measure} In this part we show that the prescribed protocol indeed enhances a measure of genuine steering in the resulting state.
The amounts of genuine steering for the three initial states are: $$S_{gen}(\rho_1)= \max\{0,2\, p_1\sin2\theta_1-1\},$$ $$S_{gen}(\rho_2)= \max\{0,2\, p_2-1\},$$ \begin{equation} S_{gen}(\rho_3)= \max\{0,2\, p_3\sin2\theta_3-1\} \end{equation} whereas for the final state the genuine steering quantity takes the form \begin{equation} S_{gen}(\rho_4)= \max\{0,\frac{p_3 \sin2\theta_1\sin2\theta_3}{\sin^2 \theta_1+ p_3 \cos2\theta_1 \sin^2 \theta_3}-1\}. \end{equation} If we take $p_1=p_3$ and $\theta_1=\theta_3$, then for any values of $p_1$ and $\theta_1$ the final state is more genuinely steerable than the initial ones. \section{CONCLUSION} Genuine steering nonlocality, being a weaker notion of genuine nonlocality, is considered to be a resource in various practical tasks. So, apart from its theoretical importance, the revelation of such a resource under a protocol that allows only classical communication and shared randomness is of immense practical importance. Motivated by this, we have designed an SLOCC protocol which demonstrates the revelation of `hidden' genuine steering. Our discussion guarantees, in a restricted sense, that under suitable measurements by the parties involved in the network, our protocol is sufficient to show genuine steering even from some quantum states which have bi-local models. Under our protocol, each of the parties performs Bell basis measurements on two of its particles and a projective measurement on the remaining one. In brief, this protocol enables one to go beyond the scope of existing witnesses of genuine steering and thus demonstrate genuine steering for a larger class of multipartite states. In this context, it will be interesting to consider more general measurement settings for the parties, which may yield better results. \appendix \end{document}
\begin{document} \title{An exact algorithm for the weighted mutually exclusive maximum set cover problem} \author{Songjian Lu and Xinghua Lu} \institute{Department of Biomedical Informatics,\\ University of Pittsburgh, Pittsburgh, PA 15219, USA\\ Email: [email protected], [email protected]} \maketitle \begin{abstract} In this paper, we introduce an exact algorithm with a time complexity of $O^*(1.325^m)^{\dag}$ \let\thefootnote\relax\footnotetext{$^{\dag}${\bf Note:} Following the recent convention, we use a star $*$ to represent that the polynomial part of the time complexity is neglected.} for the {\sc weighted mutually exclusive maximum set cover} problem, where $m$ is the number of subsets in the problem. This is an NP-hard problem motivated by, and abstracted from, the bioinformatics problem of identifying signaling pathways based on gene mutations. Currently, this problem is addressed using heuristic algorithms, which cannot guarantee the quality of the solution. By providing a relatively efficient exact algorithm, our approach will likely increase the capability of finding better solutions in cancer research applications. \end{abstract} \section{Introduction} Cancers are genomic diseases in that genomic perturbations, such as mutations of genes, lead to perturbed cellular signaling pathways, which in turn lead to uncontrolled cell growth. An important cancer research area is to discover perturbed signal transduction pathways in cancers, in order to gain insights into disease mechanisms and guide patient treatment. It has been observed that mutation events among the genes that constitute a signaling pathway tend to occur in a mutually exclusive fashion \cite{sparks1998,TCGA_gbm}. This is because one mutation in such a pathway is usually sufficient to disrupt the signal carried by the pathway, leading to cancer. Contemporary biotechnologies can easily detect which genes have mutated in tumor cells, providing an unprecedented opportunity to study cancer signaling pathways. However, as each tumor usually has up to hundreds of mutations, some dispersed in different pathways driving tumorigenesis while other mutations are not related to cancer, it is a challenge to find mutations across different patients that affect a common cancer signaling pathway. The property of mutual exclusivity of mutations in a common pathway can help us to recognize driver mutations within a common pathway~\cite{Ciriello,Miller,Vandin2012}. The problem of finding mutations within a common pathway across tumors, i.e., finding the members of the pathway, can be cast as follows: finding a set of mutually exclusive mutations that cover a maximum number of tumors. This is an NP-hard problem (abstracted below as the {\sc mutually exclusive maximum set cover} problem), and previous studies~\cite{Ciriello,Miller,Vandin2012} used heuristic algorithms to solve it, which cannot guarantee optimal solutions. Another shortcoming of the previous studies is that they do not consider the weight of the mutations. Since the signal carried by a signaling pathway is often reflected as a phenotype, statistical methods can be used to assign a weight to a type of gene mutation by assessing the strength of association between the mutation event and the appearance of a phenotype of interest. Therefore, it is more biologically interesting to find a set of mutually exclusive mutations that carries as much weight as possible and covers as many tumors as possible---thus a {\sc weighted mutually exclusive maximum set cover} problem.
The {\sc mutually exclusive maximum set cover} problem is the following: given a ground set $X$ of $n$ elements and a collection ${\cal F}$ of $m$ subsets of $X$, find a sub-collection ${\cal F'}$ of ${\cal F}$ with the minimum number of subsets such that 1) no two subsets in ${\cal F'}$ overlap and 2) ${\cal F'}$ covers the maximum number of elements in $X$, i.e. the number of elements in the union of all subsets in ${\cal F'}$ is maximized. If we assign each subset in ${\cal F}$ a weight (a real number) and further require that the weight of ${\cal F'}$, i.e. the sum of the weights of the subsets in ${\cal F'}$, is minimized, then the {\sc mutually exclusive maximum set cover} problem becomes the {\sc weighted mutually exclusive maximum set cover} problem. The research on the {\sc mutually exclusive maximum set cover} and the {\sc weighted mutually exclusive maximum set cover} problems is limited. To the best of our knowledge, only Bj\"orklund et al.~\cite{bjorklund} gave an algorithm, of time $O^*(2^n)$, for the problem of finding $k$ subsets in ${\cal F}$ with maximum weight sum that cover all elements in $X$ (a solution may not exist). The {\sc mutually exclusive maximum set cover} problem is obtained by adding constraints to the {\sc set cover} problem, a well-known NP-hard problem among Karp's 21 NP-complete problems~\cite{Karp1972}. Much research on the {\sc set cover} problem has focused on approximation algorithms; for example, the papers~\cite{alon,feige,Kolliopoulos,lund} gave polynomial time approximation algorithms that find solutions whose sizes are at most $c\log n$ times the size of the optimal solution, where $c$ is a constant. There is also plenty of research on the {\sc hitting set} problem, which is equivalent to the {\sc set cover} problem. In this direction, people mainly designed fixed-parameter tractable (FPT) algorithms that use the solution size $k$ as the parameter for the {\sc hitting set} problem under the constraint that the sizes of all subsets in the problem are bounded by $d$. For example, Niedermeier et al. \cite{niedemedier} gave an $O^*(2.270^k)$ algorithm for the {\sc $3$-hitting set} problem, and Fernau et al.~\cite{fernau_2} gave an $O^*(2.179^k)$ algorithm. Very recently, people have also studied the extended version of the {\sc set cover} problem that asks for a sub-collection ${\cal F'}$ of ${\cal F}$ such that each element in $X$ is covered by at least $t$ subsets in ${\cal F'}$. For example, Hua et al.~\cite{Hua2} designed an algorithm with time complexity $O^*((t+1)^n)$ for the problem; Lu et al.~\cite{Lu2011} further improved the algorithm under the constraint that certain elements in $X$ are included in at most $d$ subsets in ${\cal F}$. These two algorithms can be easily modified to solve the {\sc weighted mutually exclusive maximum set cover} problem. However, in our application $n$, the number of tumor samples, is large (it can be several hundred), so the above two algorithms are not practical. On the other hand, by excluding somatic mutations that are less likely to be related to a pathway in the study, the number of mutations is usually less than the number of tumors. Hence, there is a need to design better algorithms that solve the {\sc weighted mutually exclusive maximum set cover} problem and use $m$, the number of subsets (mutations) in ${\cal F}$, as the parameter. In this paper, first, we will prove that the {\sc weighted mutually exclusive maximum set cover} problem is NP-hard. Then, we will give an algorithm with running time bounded by $O^*(1.325^m)$ for the problem.
The $O^*(1.325^m)$ bound is only a worst-case upper bound; in our tests, the algorithm solved the problem in practice when it was applied to the TCGA data~\cite{TCGA_gbm} to search for driver mutations.

\section{The {\sc weighted mutually exclusive maximum set cover} problem is NP-hard}

The formal definition of the {\sc weighted mutually exclusive maximum set cover} problem is: given a ground set $X$ of $n$ elements, a collection ${\cal F}$ of $m$ subsets of $X$, and a weight function $w: {\cal F} \rightarrow [0, \infty)$, if ${\cal F'} =\{S_1,S_2,\ldots,S_h\} \subset {\cal F}$ is such that $|\cup_{i=1}^hS_i|$ is maximized and $S_i\cap S_j=\emptyset$ for any $i \neq j$, then we say that ${\cal F'}$ is a mutually exclusive maximum set cover of $X$ and that $\sum_{i=1}^hw(S_i)$ is the weight of ${\cal F'}$; the goal of the problem is to find a mutually exclusive maximum set cover of $X$ with the minimum weight. In this section, we prove that the {\sc mutually exclusive maximum set cover} problem, i.e. the special case in which all subsets in ${\cal F}$ have equal weight, is NP-hard, which in turn proves that the {\sc weighted mutually exclusive maximum set cover} problem is NP-hard. We prove the NP-hardness of the {\sc mutually exclusive maximum set cover} problem by reducing another NP-hard problem, the {\sc maximum $3$-set packing} problem, to it. Recall that the {\sc maximum $3$-set packing} problem is: given a collection ${\cal F}$ of subsets, each of size $3$, find an ${\cal S} \subset {\cal F}$ such that the subsets in ${\cal S}$ are pairwise disjoint and $|{\cal S}|$ is maximized.

\begin{theorem}\label{theorem1} The {\sc mutually exclusive maximum set cover} problem is NP-hard. \end{theorem} \begin{proof} Let ${\cal S} = \{S_1,S_2,\ldots,S_m\}$ be an instance of the {\sc maximum $3$-set packing} problem. We create an instance of the {\sc mutually exclusive maximum set cover} problem with $X = \cup_{i=1}^mS_i$ and ${\cal F} = {\cal S}$. It is easy to see that ${\cal P} = \{P_1,P_2,\ldots, P_k\}$ is a solution of the {\sc mutually exclusive maximum set cover} instance if and only if ${\cal P} = \{P_1,P_2,\ldots, P_k\}$ is a solution of the {\sc maximum $3$-set packing} instance. Thus, the {\sc mutually exclusive maximum set cover} problem is NP-hard. \qed \end{proof}

\section{The main algorithm}

In this section, we introduce our main algorithm. The basic idea of our method is branch and bound. The algorithm first finds a subset in ${\cal F}$ and then branches on it. By mutual exclusivity, if two subsets in ${\cal F}$ overlap, then at most one of them can be chosen into the solution. Hence, suppose that the subset $S$ intersects $d$ other subsets in ${\cal F}$; if $S$ is included in the solution, then $S$ and the $d$ subsets intersecting $S$ are removed from the problem, and if $S$ is excluded from the solution, then only $S$ is removed from the problem. We continue this process until the resulting sub-problems can be solved in constant or polynomial time. The execution of the algorithm traverses a search tree, and the running time of the algorithm is proportional to the number of leaves of the search tree. Letting $T(m)$ be the number of leaves of the search tree when the algorithm is called with $m$ subsets in ${\cal F}$, we obtain the recurrence relation $T(m) \leq T(m -(d+1)) + T(m-1)$.
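As a side remark (ours, and not part of the algorithm or of its analysis), the base of the exponential bound implied by a branching recurrence of this form can be checked numerically, using the characteristic-equation fact recalled in the footnote below. A minimal sketch:

\begin{verbatim}
def branching_base(removals, lo=1.0, hi=2.0, iters=100):
    """Positive root of t^m = sum_i t^(m - r_i), i.e. 1 = sum_i t^(-r_i),
    for a branching vector 'removals' = (r_1, ..., r_k), found by bisection."""
    f = lambda t: sum(t ** (-r) for r in removals) - 1.0
    for _ in range(iters):               # f is decreasing on (1, infinity)
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2.0

# T(m) <= T(m-(d+1)) + T(m-1) with d = 1:
print(round(branching_base((2, 1)), 4))     # ~1.618 (golden ratio; bound 1.619)
# branching vectors appearing later in the paper:
print(round(branching_base((4, 2)), 4))     # ~1.272  (bound 1.273)
print(round(branching_base((4, 3)), 4))     # ~1.2207 (bound 1.221)
print(round(branching_base((4, 5, 3)), 4))  # ~1.3247 (bound 1.325)
\end{verbatim}

For $d=1$ the root is the golden ratio $\approx 1.618$, consistent with the $1.619^m$ bound derived next.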
Since the case $d=0$ can be handled in polynomial time (a subset that intersects no other subset is simply included in the solution), we may assume $d\geq 1$ whenever the algorithm branches. Therefore, we obtain $T(m) \leq 1.619^m$, which means the problem can be solved in $O^*(1.619^m)$ time$^{\ddag}$ \let\thefootnote\relax\footnotetext{$^{\ddag}${\bf Note:} Given a recurrence relation $T(k) \leq \sum_{i=0}^{k-1}c_iT(i)$ such that all $c_i$ are nonnegative real numbers, $\sum_{i=0}^{k-1}c_i>0$, and $T(0)$ represents the leaves, we have $T(k) \leq r^k$, where $r$ is the unique positive root of the characteristic equation $t^k - \sum_{i=0}^{k-1}c_it^i=0$ deduced from the recurrence relation~\cite{chen}.}. In this paper, we improve this running time by carefully selecting the subsets in ${\cal F}$ on which to branch. Before presenting our main result, we prove three lemmas.

Given an instance $(X, {\cal F}, w)$ of the {\sc weighted mutually exclusive maximum set cover} problem, we build a graph $G$, called the set intersection graph, in which each subset in ${\cal F}$ is a node of $G$ and an edge is added between two nodes whenever the corresponding subsets intersect. For convenience, in the rest of the paper we use a node of the intersection graph and the corresponding subset in ${\cal F}$ interchangeably. Suppose $C=(V_c, E_c)$ is a connected component of $G$; we denote by $(X,{\cal F},w)_C$ the sub-instance induced by the component $C$, i.e. $(X,{\cal F},w)_C = (\cup_{S\in V_c}S,V_c,w)$. In the algorithm, we say that $Solution_1$ is better than $Solution_2$ if 1) $Solution_1$ covers more elements of $X$ than $Solution_2$ does, or 2) $Solution_1$ and $Solution_2$ cover the same number of elements of $X$ but the weight of $Solution_1$ is less than the weight of $Solution_2$. In the intersection graph, $neighbor(S)$ consists of $S$ and all nodes that are connected to $S$. The first lemma shows that we can find the solution of the problem by finding the solutions of all sub-instances induced by the connected components of the intersection graph $G$.

\begin{lemma}\label{lemma1} Given an instance $(X, {\cal F}, w)$ of the {\sc weighted mutually exclusive maximum set cover} problem, if the intersection graph obtained from the instance consists of several connected components, then the solution of the problem is the union of the solutions of the sub-instances induced by the connected components. \end{lemma} \begin{proof} As the subsets in each sub-instance share no elements with the subsets in the other sub-instances, we can solve each sub-instance independently. It is then clear that the optimal solutions of all sub-instances together form an optimal solution of the original instance. \qed \end{proof}

In the next lemma, we show that if the maximum degree of the intersection graph obtained from the given instance is bounded by $2$, i.e. each subset in the instance overlaps at most $2$ other subsets, then the problem can be solved in polynomial time.

\begin{figure} \caption{Algorithm for the {\sc weighted mutually exclusive maximum set cover} problem.} \label{Algorithm_2} \end{figure}

\begin{lemma}\label{lemma2} Given an instance $(X, {\cal F}, w)$ of the {\sc weighted mutually exclusive maximum set cover} problem, if the degree of its intersection graph is bounded by $2$, then the problem can be solved in $O(m^2)$ time. \end{lemma} \begin{proof} We first prove that if the intersection graph has only one connected component, then the running time of the algorithm {\bf WMEM-Cover-2} is polynomial.
As the degree of the intersection graph is bounded by $2$, the connected component can only be a simple path or a simple ring.

Case 1: Suppose that the intersection graph is a simple path. The algorithm first finds the middle node (subset) $x$ of the path; it then branches on $x$ so that branch one includes the node in the solution (three subsets are removed from the problem) and branch two excludes the node from the solution (one subset is removed from the problem). Hence, if $T(m)$ denotes the number of leaves in the search tree, we have \[T(m) \leq T(m-3) + T(m-1).\] Furthermore, considering that after the branching the resulting intersection graphs are split into two connected components of almost equal sizes, we have \[T(m)\leq (T(\lceil (m-3)/2\rceil) + T(\lfloor(m-3)/2\rfloor)) + (T(\lceil(m-1)/2\rceil) + T(\lfloor(m-1)/2\rfloor)) < 4T(m/2).\] From this recurrence relation, we obtain \[T(m) \leq 4^{\log m} = m^2.\]

Case 2: Suppose that the intersection graph is a simple ring. The algorithm chooses any node and branches on it. As in Case 1, one branch removes three subsets from the problem while the other branch removes one subset from the problem. Hence, we have the recurrence relation \[T(m) \leq T(m-3) + T(m-1).\] Furthermore, after this operation the resulting intersection graphs in both branches are simple paths, so by the analysis of Case 1 we obtain \[T(m) \leq (m-3)^2 + (m-1)^2 < 2m^2.\]

If the intersection graph of the instance has multiple connected components, then by Lemma~\ref{lemma1} we can solve the sub-instances induced by the connected components independently. As each sub-instance induced by a connected component can be solved in polynomial time, the original instance can be solved in polynomial time, and it is easy to see that the running time is bounded by $O(m^2)$. The correctness of the algorithm is straightforward: the algorithm {\bf WMEM-Cover-2} first chooses a node in the intersection graph and then branches on it, where one branch includes the node in the solution while the other branch excludes it. Hence, all possible combinations of mutually exclusive covers are considered and the algorithm returns the best solution, i.e. the solution that covers the maximum number of elements of $X$ and has the minimum weight. \qed \end{proof}

\begin{figure} \caption{The main algorithm for the {\sc weighted mutually exclusive maximum set cover} problem.} \label{Algorithm_3} \end{figure}

\begin{figure} \caption{Different structures in the intersection graph with degree bounded by $3$.} \label{fig_structure} \end{figure}

In the next lemma, we show how to improve the running time of the algorithm when the degrees of the nodes in the intersection graph are bounded by $3$.

\begin{lemma}\label{lemma3} Given an instance $(X, {\cal F}, w)$ of the {\sc weighted mutually exclusive maximum set cover} problem, if the degree of its intersection graph is bounded by $3$, then the problem can be solved in $O^*(1.325^m)$ time. \end{lemma} \begin{proof} We may assume that the intersection graph always has a node whose degree is less than $3$: if at the beginning the degrees of all nodes in the intersection graph are $3$, then after the first branching both resulting subgraphs have at least $3$ nodes whose degrees are at most $2$, and after that, whenever the algorithm branches again, there are always new nodes whose degrees are reduced.
Hence, after the first branching, the intersection graph always keeps at least one node of degree bounded by $2$. The algorithm {\bf WMEM-Cover-3} always first finds a node $x$ of degree $3$ that is connected to a node of minimum degree (less than $3$) in the intersection graph, and then branches on $x$. We analyze the running time of the algorithm {\bf WMEM-Cover-3} by considering the following cases.

{\bf Case 1.} The node $x$ is connected to a simple path $P$ whose other end is not connected to any other node (refer to Figure~\ref{fig_structure}-(A)). In the branch that includes $x$ in the solution, $x$ and the $3$ neighbours of $x$ are removed. In the branch that excludes $x$ from the solution, $x$ is removed; the simple path $P$ then becomes an isolated component and the sub-instance induced by $P$ can be solved in polynomial time, so at least $2$ nodes are removed in this branch. We obtain the recurrence relation \[T(m)\leq T(m-4) + T(m-2),\] which leads to $T(m)\leq 1.273^m$.

{\bf Case 2.} Both ends of the simple path $P$ are connected to $x$ (refer to Figure~\ref{fig_structure}-(B)); in this case, the length of the simple path $P$ is at least $2$. In the branch that includes $x$ in the solution, as in Case 1, at least $4$ nodes are removed, and in the branch that excludes $x$ from the solution, the path $P$ again becomes an isolated component. Hence, we have \[T(m)\leq T(m-4) + T(m-3),\] which leads to $T(m)\leq 1.221^m$.

{\bf Case 3.} One end of the simple path $P$ is connected to $x$ while the other end of $P$ is connected to a node $y\neq x$, where $x$ and $y$ may or may not be connected by an edge (refer to Figure~\ref{fig_structure}-(C)(D)). In the branch that includes $x$ in the solution, as in the cases above, at least $4$ nodes are removed. In the other branch, after $x$ is removed, a node of degree one is created. If no node of degree one lies in a connected component that contains nodes of degree $3$, then the nodes of degree one lie in connected components of degree bounded by $2$; hence we have \[T(m)\leq T(m-4) + T(m-2),\] which leads to $T(m)\leq 1.273^m$. If at least one node of degree one lies in a connected component that contains nodes of degree $3$, then the next branching is as in Case 1. Therefore, even in the worst case, we have the recurrence relation \[T(m)\leq T(m-4) + T(m-1) \leq T(m-4) + (T(m-5) + T(m-3)),\] which leads to $T(m)\leq 1.325^m$.

The analysis above covers all possible situations in which a node of degree at most $2$ is connected to a node of degree $3$. Hence, the time complexity of the algorithm is $O^*(1.325^m)$. As in Lemma~\ref{lemma2}, the correctness of the algorithm {\bf WMEM-Cover-3} is clear. \qed \end{proof}

\begin{figure} \caption{The main algorithm for the {\sc weighted mutually exclusive maximum set cover} problem.} \label{Algorithm_main} \end{figure}

Next, we present our main result.

\begin{theorem}\label{theorem2} The {\sc weighted mutually exclusive maximum set cover} problem can be solved in $O^*(1.325^m)$ time. \end{theorem} \begin{proof} By the lemmas above, the correctness of the algorithm {\bf WMEM-Cover-main} is immediate; we only bound its running time. The algorithm {\bf WMEM-Cover-main} repeatedly finds the node $x$ of maximum degree in the intersection graph and branches on it.
If the degree of $x$ is $d$, then we obtain the recurrence relation \[T(m) \leq T(m-(d+1)) + T(m-1).\] Furthermore, if $d\leq 3$, the algorithm {\bf WMEM-Cover-main} calls the algorithm {\bf WMEM-Cover-3}. Hence $d \geq 4$ whenever {\bf WMEM-Cover-main} branches, which leads to $T(m)\leq 1.325^m$. By Lemma~\ref{lemma3}, if $d\leq 3$ we also have $T(m)\leq 1.325^m$. Therefore, the overall running time of the algorithm {\bf WMEM-Cover-main} is $O^*(1.325^m)$. \qed \end{proof}

\section{Conclusion and future works}

In this paper, we first proved that the {\sc weighted mutually exclusive maximum set cover} problem is NP-hard. We then designed the first non-trivial algorithm for the problem that uses $m$, the number of subsets in ${\cal F}$, as the parameter. In our algorithm, we created an intersection graph that helps us find branching subsets that greatly reduce the time complexity of the algorithm. The running time of our algorithm is $O^*(1.325^m)$, which easily finishes the computation in the application of finding driver mutations when $m$ is less than $100$. By choosing the branching subsets more carefully, we believe that the running time of the algorithm can be further improved. While reducing the running time for the {\sc weighted mutually exclusive maximum set cover} problem, which has important applications in cancer studies, is valuable, another variant of the problem deserves particular attention. Strict mutual exclusivity is an extreme case: some tumors may carry more than one perturbation of a pathway. Hence, we need to relax strict mutual exclusivity and modify the problem to the {\sc weighted small overlapped maximum set cover} problem, which allows each tumor to be covered by a small number (such as 2 or 3) of mutations. This is another important problem, abstracted from cancer studies, that needs to be solved. \end{document}
\begin{document} \title{The global regularity for the 3D continuously stratified inviscid quasi-geostrophic equations} \author{Dongho Chae\\ \ \\ Department of Mathematics\\ Chung-Ang University\\ Seoul 156-756, Republic of Korea\\ email: [email protected]} \date{(to appear in {\em J. Nonlinear Science})} \maketitle \begin{abstract} We prove the global well-posedness of the continuously stratified inviscid quasi-geostrophic equations in $\Bbb R^3$.\\ \ \\ \noindent{\bf AMS Subject Classification Number:} 35Q86, 35Q35, 76B03\\ \noindent{\bf keywords:} stratified quasi-geostrophic equations, global regularity \end{abstract} \section{Introduction} \setcounter{equation}{0} Let us consider the continuously stratified quasi-geostrophic equation for the stream function $\psi=\psi(x,y,z,t)$ on $\Bbb R^3$: \begin{eqnarray}\label{qg} q_t+J(\psi, q)+\beta \psi_x =\nu\Delta q + \mathcal{F} \\ \mbox{with}\quad q:=\psi_{xx}+\psi_{yy} +F^2\psi_{zz}.\nonumber \end{eqnarray} Here, $F=L/L_R$, with $L$ the characteristic horizontal length of the flow and $L_R= \sqrt{gH_0}/f_0$ the Rossby deformation radius, $H_0$ the typical depth of the fluid layer and $f_0$ the rotation rate of the fluid. On the other hand, $\nu$ is the viscosity and $\mathcal{F}$ is the external force, {\em which will be set to zero for simplicity}. In the above we used the notation $J(f,g)=f_xg_y-f_yg_x$. The equation (\ref{qg}) is one of the basic equations in geophysical fluid flows. Regarding its physical meaning, we mention that it can be derived from the Boussinesq equations (see \cite{ped, maj00}). In Section 1.6 of \cite{maj} one can also find a very nice explanation of (\ref{qg}) in relation to other models of geophysical flows. {\em Below we consider the inviscid case $\nu=0$, and set $\beta=1$ for convenience.} In the case $\nu >0$ it is much easier to prove the global regularity. Below we introduce the notations $$ v:=v(x,y,z,t)=(-\psi_y, \psi_x, \psi_z), \quad \bar{v}:=(-\psi_y, \psi_x, 0).$$ Rescaling in the $z$ variable as $z\to F^{-1}z$, we have $q=\Delta \psi$. Then the equation (\ref{qg}) in our case can be written as a Cauchy problem, \begin{equation}\label{qg1} \left\{ \aligned &q_t +(v \cdot\nabla ) q =-v_2, \\ & q=\Delta \psi, \\ & v|_{t=0}=v_0. \endaligned \right. \end{equation} Comparing this system with the vorticity formulation of the 2D Euler equations, \begin{equation}\label{euler} \omega_t +(\bar{v} \cdot\nabla ) \omega =0, \quad \omega=-(\partial_x^2 +\partial_y ^2 ) \psi, \quad v=(-\psi_y, \psi_x),\end{equation} one can observe a similarity with the correspondence $ q \leftrightarrow \omega$. We note, however, that there is an extra term, $v_2$, coupled into the first equation of (\ref{qg1}). Furthermore, and more seriously, the relation between $\psi$ and $q$ is given by a full 3D Poisson equation in the second equation of (\ref{qg1}), while in (\ref{euler}) the corresponding relation is a 2D equation. As far as the author knows, the only mathematical result on the Cauchy problem of (\ref{qg1}) is the local in time well-posedness due to Bennett and Kloeden (\cite{ben}). In the viscous case there is a study of the long time behavior of solutions of (\ref{qg}) by S. Wang (\cite{wan}).
Actually, the authors of \cite{ben} considered the 3D periodic domain, but since their proof uses Kato's particle trajectory method (\cite{kat}), it is straightforward to extend the result to the whole domain $\Bbb R^3$ (see \cite{maj0} for the case of the 3D Euler equations on $\Bbb R^3$). In this paper our aim is to prove the following global regularity of solutions to (\ref{qg1}) for given smooth initial data. \begin{thm} Let $m>7/2$ and let $v_0 \in H^m (\Bbb R^3)$ be given. Then, for any $T>0$ there exists a unique solution $v\in C([0, T); H^m (\Bbb R^3))$ to the equation (\ref{qg1}). \end{thm} \section{Proof of the main theorem} \setcounter{equation}{0} \noindent{\bf Proof of Theorem 1.1 } The local well-posedness of (\ref{qg1}) for smooth $v_0$ is proved in \cite{ben}, and therefore it suffices to prove the global in time {\em a priori} estimate. Namely, we will show that \begin{equation}\label{apriori} \sup_{0<t<T} \|v(t)\|_{H^m}\leq C(\|v_0\|_{H^m} , T)<\infty \quad \forall T>0 \end{equation} for all $m>7/2$. Taking the $L^2$ inner product of (\ref{qg1}) with $\psi$ and integrating by parts, we immediately obtain \begin{equation}\label{vL2} \|v(t)\|_{L^2}=\|v_0\|_{L^2}, \quad t>0 . \end{equation} Similarly, taking the $L^2$ inner product of (\ref{qg1}) with $q=\Delta \psi $ and integrating by parts, we immediately obtain \begin{equation}\label{wL2} \|q(t)\|_{L^2}=\|q_0\|_{L^2}. \end{equation} Multiplying (\ref{qg1}) by $q |q|^4$ and integrating over $\Bbb R^3$, we obtain after integration by parts \begin{eqnarray}\label{L6} \frac{1}{6}\frac{d}{dt} \|q(t)\|_{L^6}^6 & \leq &\int_{\Bbb R^3} |v_2||q|^5 dx\leq \|v_2\|_{L^6}\|q\|_{L^6}^5\nonumber \\ &\leq& C\|\nabla v_2\|_{L^2}\|q\|_{L^6}^5\leq C\|q\|_{L^2}\|q\|_{L^6}^5\nonumber \\ &=& C \|q_0\|_{L^2} \|q\|_{L^6}^5, \end{eqnarray} where we used the Sobolev inequality, $\|f\|_{L^6}\leq C \|\nabla f\|_{L^2}$ in $\Bbb R^3$, and the Calderon-Zygmund estimate (see \cite{ste}), \begin{equation}\label{cz}\|\nabla v\|_{L^p}\leq C_p\|q\|_{L^p}, \quad 1<p<\infty. \end{equation} From (\ref{L6}) we obtain \begin{equation} \|q(t)\|_{L^6} \leq \|q_0\|_{L^6}+ Ct \|q_0\|_{L^2}. \end{equation} Hence, by the Gagliardo-Nirenberg inequality and the Calderon-Zygmund inequality we have \begin{eqnarray} \|v\|_{L^\infty} &\leq& C \|Dv \|_{L^6}^{\frac34} \|v\|_{L^2} ^{\frac14}\leq C \|q \|_{L^6}^{\frac34} \|v\|_{L^2} ^{\frac14} \leq C \|q\|_{L^6} +C\|v\|_{L^2}\nonumber \\ &\leq& C (\|q_0\|_{L^6} +t \|q_0\|_{L^2} ) +C\|v_0\|_{L^2} . \end{eqnarray} We introduce the particle trajectory $\{ X(a,t) \}$ on the plane generated by $\tilde{v}:=(v_1, v_2) $, $$\frac{\partial X(a,t)}{\partial t}= \tilde{v}(X(a,t),t), \quad X(a,0)=a\in \Bbb R^2. $$ We write (\ref{qg1}) in the form $$ \frac{\partial}{\partial t} q(X(a,t),z,t)= -v_2 (X(a,t),z,t), $$ which can be integrated in time as $$ q(X(a,t),z,t) =q_0 (a, z) -\int_0 ^t v_2 (X(a,s),z,s) ds. $$ Thus, we have \begin{equation}\label{wLinf} \|q(t)\|_{L^\infty} \leq \|q_0\|_{L^\infty} +\int_0 ^t \|v(s)\|_{L^\infty} ds \leq C(1+ t^2), \end{equation} where $C=C(\|q_0\|_{L^6}, \|q_0\|_{L^2}, \|v_0\|_{L^2} )$. Combining (\ref{wL2}) and (\ref{wLinf}), using the standard $L^p$ interpolation, one has \begin{equation}\label{wLp} \|q(t)\|_{L^p}\leq C_1(1+t^2),\quad \forall p\in [2, \infty] \end{equation} where $C_1=C_1(\|q_0\|_{L^6}, \|q_0\|_{L^2}, \|v_0\|_{L^2} )$.
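For the reader's convenience we record a standard scaling check of the Gagliardo--Nirenberg exponents used above (this remark is ours and is not part of the original argument). Setting $v_\lambda(x):=v(\lambda x)$ in $\Bbb R^3$, we have $$ \|v_\lambda\|_{L^\infty}=\|v\|_{L^\infty}, \quad \|Dv_\lambda\|_{L^6}=\lambda^{\frac12}\|Dv\|_{L^6}, \quad \|v_\lambda\|_{L^2}=\lambda^{-\frac32}\|v\|_{L^2}, $$ so that an inequality of the form $\|v\|_{L^\infty}\leq C\|Dv\|_{L^6}^{\theta}\|v\|_{L^2}^{1-\theta}$ can only hold if $\frac{\theta}{2}-\frac{3(1-\theta)}{2}=0$, i.e. $\theta=\frac34$, which is precisely the exponent appearing in the estimate of $\|v\|_{L^\infty}$ above.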
Applying $D=(\partial_1, \partial_2, \partial_3) $ to (\ref{qg1}), one has $$ Dq_t +(D\bar{v}\cdot\nabla ) q +(\bar{v}\cdot\nabla )D q =-Dv_2. $$ Let $p\geq 2$. Multiplying this equation by $D q |Dq|^{p-2}$ and integrating it over $\Bbb R^3$, we have, after integration by parts and using the H\"{o}lder inequality and (\ref{cz}), \begin{eqnarray}\label{wlp1} \frac{1}{p}\frac{d}{dt}\|Dq (t)\|_{L^p} ^p &\leq& \| D v\|_{L^\infty} \|Dq\|_{L^p}^p +\|Dv_2\|_{L^p} \|Dq\|_{L^p}^{p-1}\nonumber \\ &\leq& \| D v\|_{L^\infty} \|Dq\|_{L^p}^p +C \|q\|_{L^p} \|Dq\|_{L^p}^{p-1}, \end{eqnarray} from which we obtain, for $p>3$, \begin{eqnarray}\label{log} \frac{d}{dt}\|Dq (t)\|_{L^p} &\leq& \| Dv\|_{L^\infty}\|Dq\|_{L^p} +C\|q\|_{L^p}\nonumber \\ &\leq& C\{ 1+ \|Dv\|_{BMO} \log (e+\|D^2 v \|_{L^p} )\} \|Dq\|_{L^p} +C\|q\|_{L^p}\nonumber \\ &\leq & C\{ 1+ \|q\|_{BMO} \log (e+\|Dq \|_{L^p} )\} \|Dq\|_{L^p} +C\|q\|_{L^p}\nonumber \\ &\leq & C \{ 1+\|q\|_{L^\infty} \log (e+\|Dq \|_{L^p} )\} \|Dq\|_{L^p} +C\|q\|_{L^p}\nonumber \\ &\leq & C (1+\|q\|_{L^\infty} + \|q\|_{L^p})( e+ \|Dq\|_{L^p} )\log (e+\|Dq \|_{L^p} ), \end{eqnarray} where we used the logarithmic Sobolev inequality, \begin{equation}\label{kt} \|f\|_{L^\infty} \leq C\{ 1+ \|f\|_{BMO} \log (e+\|Df \|_{W^{k,p}} )\}, \quad kp >3 \end{equation} proved in \cite{koz}, and the Calderon-Zygmund inequality. By Gronwall's inequality, we obtain from (\ref{log}) that \begin{equation}\label{log1} e+ \|Dq(t)\|_{L^p} \leq (e+ \|Dq_0\|_{L^p})^{\exp \left\{C\int_0 ^t (1+\|q(s)\|_{L^\infty} + \|q(s)\|_{L^p})ds\right\}}. \end{equation} Taking into account (\ref{wLp}) and (\ref{wLinf}), we find from (\ref{log1}) that \begin{equation} \sup_{0<t<T} \|Dq(t)\|_{L^p}\leq C(v_0, T)<\infty \quad \forall T>0, \quad \forall p\in (3, \infty). \end{equation} Combining this with the Gagliardo-Nirenberg inequality and (\ref{cz}), we obtain \begin{eqnarray}\label{gradv} \sup_{0<t<T} \|D v\|_{L^\infty}&\leq& C \sup_{0<t<T} \|D^2 v\|_{L^4} \leq C\sup_{0<t<T} \|Dq\|_{L^4} \nonumber \\ &\leq &C(\|v_0\|_{W^{2,4}}, T)<\infty \quad \forall T>0. \end{eqnarray} For $p\in [2, 3]$, one has from (\ref{wlp1}) that \begin{eqnarray} \frac{d}{dt}\|Dq (t)\|_{L^p} &\leq& \| Dv\|_{L^\infty} \|Dq\|_{L^p} +C\|q\|_{L^p} \nonumber \\ &\leq& (\|Dv \|_{L^\infty} +\|q\|_{L^p}+1)( \|Dq \|_{L^p} +1), \end{eqnarray} from which we obtain \begin{equation} \label{wlp} \|Dq (t)\|_{L^p} +1\leq (\|Dq_0 \|_{L^p} +1)\exp \left\{ C\int_0 ^t (\|Dv (s)\|_{L^\infty} +\|q(s)\|_{L^p}+1)ds\right\}. \end{equation} Hence, the estimates (\ref{wLp}) and (\ref{gradv}) imply that \begin{equation}\label{gradw} \sup_{0<t<T} \|Dq (t)\|_{L^p}\leq C(\|v_0\|_{W^{2,p}}, T) <\infty \quad\forall T>0. \end{equation} Applying $D^2 $ to (\ref{qg1}), we have $$ D^2q_t +(D^2\bar{v}\cdot\nabla ) q +2(D\bar{v}\cdot\nabla )D q +(v\cdot\nabla ) D^2 q =-D^2v_2.
$$ Multiplying this by $D^2 q |D^2q|$ and integrating it over $\Bbb R^3$, we have, after integration by parts and using the H\"{o}lder inequality, \begin{eqnarray}\label{D2L3} \frac{1}{3}\frac{d}{dt}\|D^2 q (t)\|_{L^3} ^3 &\leq&\|D^2 v \cdot Dq\|_{L^3} \|D^2 q\|_{L^3}^{2} +2 \|Dv \|_{L^\infty} \|D^2q\|_{L^3} ^3 \nonumber \\ &&\qquad+ \|D^2 v_2\|_{L^3} \|D^2 q\|_{L^3}^{2}\nonumber \\ &\leq& C(\|D^2 v\|_{BMO} \|Dq\|_{L^3} +\|D^2 v\|_{L^3} \|Dq\|_{BMO}) \|D^2 q\|_{L^3}^{2}\nonumber \\ &&\qquad +2 \|Dv \|_{L^\infty} \|D^2q\|_{L^3} ^3 + \|D^2 v_2\|_{L^3} \|D^2 q\|_{L^3}^{2}\nonumber \\ &\leq&C\|Dq\|_{BMO} \|D q\|_{L^3}\|D^2q\|_{L^3} ^2 \nonumber \\ &&\qquad +2 \|Dv \|_{L^\infty} \|D^2q\|_{L^3} ^3 + \|Dq\|_{L^3} \|D^2 q\|_{L^3}^{2}\nonumber \\ &\leq&C \|D q\|_{L^3}\|D^2q\|_{L^3} ^3 +2 \|Dv \|_{L^\infty} \|D^2q\|_{L^3} ^3\nonumber \\ &&\qquad + \|Dq\|_{L^3} \|D^2 q\|_{L^3}^{2}, \end{eqnarray} where we used the following bilinear estimate, proved in \cite{koz1}, $$ \|f\cdot g\|_{L^p}\leq C_p (\|f\|_{L^p} \|g\|_{BMO} +\|f\|_{BMO}\|g\|_{L^p}), \quad p\in (1, \infty), $$ and also the critical Sobolev inequality in $\Bbb R^3$, $$ \|f\|_{BMO}\leq C\|\nabla f\|_{L^3} . $$ From (\ref{D2L3}) one has \begin{eqnarray} \frac{d}{dt}\|D^2q (t)\|_{L^3} &\leq& C ( \|D q\|_{L^3} + \|Dv \|_{L^\infty} ) \|D^2q\|_{L^3}+\|Dq\|_{L^3} \nonumber \\ &\leq & C ( \|D q\|_{L^3} + \|Dv \|_{L^\infty} +1) (\|D^2q\|_{L^3} +1), \end{eqnarray} from which one has \begin{equation} \|D^2 q(t)\|_{L^3}+1 \leq (\|D^2 q_0\|_{L^3} +1)\exp \left\{ C\int_0 ^t ( \|D q(s)\|_{L^3} + \|Dv (s)\|_{L^\infty} +1)ds \right\}. \end{equation} Combining this with (\ref{gradw}) and (\ref{gradv}), we have \begin{equation}\label{bmo} \sup_{0<t<T}\|Dq(t)\|_{BMO}\leq C\sup_{0<t<T} \|D^2 q(t)\|_{L^3} \leq C(\|v_0\|_{W^{3,3}}, T) <\infty \quad \forall T>0. \end{equation} Let $\alpha=(\alpha_1, \alpha_2,\alpha_3)\in (\Bbb N\cup \{0\})^3$ be a multi-index, and let $m>7/2$. Applying $D^\alpha $ to (\ref{qg1}), multiplying by $D^\alpha q$, summing over $|\alpha|\leq m-1$, and integrating over $\Bbb R^3$, we have \begin{eqnarray}\label{gen} \frac12 \frac{d}{dt} \|q(t)\|_{H^{m-1}} ^2&=& \sum_{|\alpha| \leq m-1} (D^\alpha (\bar{v}\cdot \nabla )q , D^\alpha q )_{L^2} -\sum_{|\alpha|\leq m-1} (D^\alpha v_2, D^\alpha q)_{L^2} \nonumber \\ &=& \sum_{|\alpha| \leq m-1} (D^\alpha (\bar{v}\cdot \nabla )q-(\bar{v}\cdot \nabla ) D^\alpha q , D^\alpha q )_{L^2} -\sum_{|\alpha|\leq m-1} (D^\alpha v_2, D^\alpha q)_{L^2} \nonumber \\ &\leq&\sum_{|\alpha|\leq m-1} \|D^\alpha (\bar{v}\cdot \nabla )q-(\bar{v}\cdot \nabla ) D^\alpha q \|_{L^2}\|q\|_{H^{m-1}} + \|v\|_{H^{m-1}} \|q\|_{H^{m-1}}\nonumber \\ &\leq& C(\|\nabla v\|_{L^\infty}+ \|\nabla q\|_{L^\infty}) (\|q\|_{H^{m-1} } + \|v\|_{H^{m-1}})\|q\|_{H^{m-1}}+ \|v\|_{H^{m-1}} \|q\|_{H^{m-1}}, \end{eqnarray} where we used the following commutator estimate, $$ \sum_{|\alpha|\leq m} \|D^\alpha (fg)-fD^\alpha g\|_{L^2}\leq C_m (\|\nabla f\|_{L^\infty} \|D^{m-1} g\|_{L^2} +\|D^m f \|_{L^2} \|g\|_{L^\infty} ), $$ proved in \cite{kla}. We observe the following norm equivalence: there exists a constant $K$, independent of $v$ and $q$, such that $$ K^{-1} (\|v\|_{L^2}^2+ \|q\|_{H^{m-1}} ^2)\leq \|v\|_{H^m} ^2\leq K(\|v\|_{L^2}^2+ \|q\|_{H^{m-1}}^2 ), $$ which is an immediate consequence of (\ref{cz}).
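(For completeness we sketch why this equivalence holds; the brief justification is ours and is not part of the original argument. On the one hand, by the definition of $v$ we have the pointwise identity $q=\Delta\psi=\partial_x v_2-\partial_y v_1+\partial_z v_3$, so every derivative of $q$ of order at most $m-1$ is a linear combination of derivatives of $v$ of order at most $m$, and hence $\|v\|_{L^2}^2+\|q\|_{H^{m-1}}^2\leq C\|v\|_{H^m}^2$. On the other hand, $\|v\|_{H^m}\leq C(\|v\|_{L^2}+\|\nabla v\|_{H^{m-1}})$, and applying (\ref{cz}) to the velocity associated with the stream function $D^\alpha\psi$, whose potential vorticity is $D^\alpha q$, gives $\|\nabla D^\alpha v\|_{L^2}\leq C\|D^\alpha q\|_{L^2}$ for every $|\alpha|\leq m-1$; summing over $\alpha$ yields the reverse inequality.)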
Since $\|v(t)\|_{L^2}=\|v_0 \|_{L^2}$ as in (\ref{vL2}), one can add $\|v_0\|_{L^2}^2$ to $\|q\|_{H^{m-1}}^2$ in the left-hand side of (\ref{gen}) to obtain \begin{eqnarray} \frac{d}{dt} Y(t)&\leq& C (\|\nabla v\|_{L^\infty} +\|\nabla q\|_{L^\infty}+1 )Y(t)\nonumber \\ &\leq& C(\|\nabla v\|_{BMO} +\|\nabla q\|_{BMO} +1)Y(t) \log Y(t), \end{eqnarray} where we set $$ Y(t):=e+\|v(t)\|_{L^2}^2 +\|q(t)\|_{H^{m-1}}^2, $$ and used the logarithmic Sobolev inequality in the form (\ref{kt}) for $m>7/2$. By Gronwall's inequality we have \begin{eqnarray}\label{last} e+\|v(t)\|_{H^m} ^2 &\leq& e+C(\|v(t)\|_{L^2}^2+ \|q(t)\|_{H^{m-1}} ^2) \nonumber \\ &\leq& e+C(\|v_0\|_{L^2}^2+ \|q_0\|_{H^{m-1}} ^2) ^{\exp\left\{ C\int_0 ^t (\|\nabla v(s)\|_{BMO} +\|\nabla q(s)\|_{BMO} +1) ds\right\}}\nonumber \\ &\leq&e+ (C\|v_0\|_{H^m} ^2 )^{\exp\left\{ C\int_0 ^t (\|\nabla v(s)\|_{L^\infty} +\|\nabla q(s)\|_{BMO} +1) ds\right\}}. \end{eqnarray} The estimates (\ref{gradv}) and (\ref{bmo}), combined with (\ref{last}), provide us with (\ref{apriori}). $\square$ $$ \mbox{\bf Acknowledgements } $$ The author would like to deeply thank Prof. Shouhong Wang for suggesting the problem and for helpful discussions. This research was supported partially by NRF Grants no. 2006-0093854 and no. 2009-0083521. \end{document}
\begin{document} \title{The $k$-metric dimension of a graph} \begin{abstract} As a generalization of the concept of a metric basis, this article introduces the notion of a $k$-metric basis in graphs. Given a connected graph $G=(V,E)$, a set $S\subseteq V$ is said to be a $k$-metric generator for $G$ if the elements of any pair of different vertices of $G$ are distinguished by at least $k$ elements of $S$, {\em i.e.}, for any two different vertices $u,v\in V$, there exist at least $k$ vertices $w_1,w_2,\ldots,w_k\in S$ such that $d_G(u,w_i)\ne d_G(v,w_i)$ for every $i\in \{1,\ldots,k\}$. A $k$-metric generator of minimum cardinality is called a $k$-metric basis and its cardinality the $k$-metric dimension of $G$. A connected graph $G$ is \emph{$k$-metric dimensional} if $k$ is the largest integer such that there exists a $k$-metric basis for $G$. We give a necessary and sufficient condition for a graph to be $k$-metric dimensional and we obtain several results on the $k$-metric dimension. \end{abstract} {\it Keywords:} $k$-metric generator; $k$-metric dimension; $k$-metric dimensional graph; metric dimension; resolving set; locating set; metric basis. {\it AMS Subject Classification Numbers:} 05C05; 05C12; 05C90. \section{Introduction} The problem of uniquely determining the location of an intruder in a network was the principal motivation for introducing the concept of metric dimension in graphs by Slater in \cite{Slater1975,Slater1988}, where the metric generators were called locating sets. The concept of metric dimension of a graph was also introduced independently by Harary and Melter in \cite{Harary1976}, where metric generators were called resolving sets. Nevertheless, the concept of a metric generator, in its primary version, has a weakness related to the possible uniqueness of the vertex identifying a pair of different vertices of the graph. Consider, for instance, some robots which are navigating, moving from node to node of a network. On a graph, however, there is neither the concept of direction nor that of visibility. We assume that robots have communication with a set of landmarks $S$ (a subset of nodes) which provide them with the distances to the landmarks in order to facilitate the navigation. In this sense, one aim is that each robot is uniquely determined by the landmarks. Suppose that at a specific moment there are two robots $x,y$ whose positions are only distinguished by one landmark $s\in S$. If the communication between $x$ and $s$ is unexpectedly blocked, then the robot $x$ will get lost in the sense that it can assume that it has the position of $y$. So, for a more realistic setting it could be desirable to consider a set of landmarks where each pair of nodes is distinguished by at least two landmarks. A natural solution regarding that weakness is the location of one landmark in every node of the graph. But such a solution would have a very high cost. Thus, the choice of a correct set of landmarks is convenient for a satisfactory performance of the navigation system. That is, in order to achieve a reasonable efficiency, it would be convenient to have a set of as few landmarks as possible, always having the guarantee that every object of the network will be properly distinguished. From now on we consider a simple and connected graph $G=(V,E)$. It is said that a vertex $v\in V$ distinguishes two different vertices $x,y\in V$ if $d_G(v,x)\ne d_G(v,y)$, where $d_G(a,b)$ represents the length of a shortest $a-b$ path.
A set $S\subseteq V$ is a \emph{metric generator} for $G$ if any pair of different vertices of $G$ is distinguished by some element of $S$. Such a name for $S$ arises from the concept of a {\em generator} of metric spaces, that is, a set $S$ of points in the space with the property that every point of the space is uniquely determined by its ``distances'' from the elements of $S$. For our specific case, in a simple and connected graph $G=(V,E)$, we consider the metric $d_G:V\times V\rightarrow \mathbb{N}\cup \{0\}$, where $d_G(x,y)$ is defined as mentioned above and $\mathbb{N}$ is the set of positive integers. With this metric, $(V,d_G)$ is clearly a metric space. A metric generator of minimum cardinality is called a \emph{metric basis}, and its cardinality the \emph{metric dimension} of $G$, denoted by $\dim(G)$. Other useful terminology to define the concept of a metric generator in graphs is given next. Given an ordered set $S=\{s_{1}, s_{2}, \ldots, s_{d}\}\subset V(G)$, we refer to the $d$-vector (ordered $d$-tuple) $r(u|S)=$ $(d_{G}(u,s_{1}),$ $ d_{G}(u,s_{2}), \ldots, d_{G}(u,s_{d}))$ as the \emph{metric representation} of $u$ with respect to $S$. In this sense, $S$ is a metric generator for $G$ if and only if for every pair of different vertices $u,v$ of $G$, it follows that $r(u|S)\neq r(v|S)$. In order to avoid the weakness of metric bases described above, from now on we consider an extension of the concept of metric generators in the following way. Given a simple and connected graph $G=(V,E)$, a set $S\subseteq V$ is said to be a \emph{$k$-metric generator} for $G$ if and only if any pair of different vertices of $G$ is distinguished by at least $k$ elements of $S$, {\em i.e.}, for any pair of different vertices $u,v\in V$, there exist at least $k$ vertices $w_1,w_2,\ldots,w_k\in S$ such that \begin{equation}\label{conditionDistinguish} d_G(u,w_i)\ne d_G(v,w_i),\; \mbox{\rm for every}\; i\in \{1,\ldots,k\}. \end{equation} A $k$-metric generator of minimum cardinality in $G$ will be called a \emph{$k$-metric basis} and its cardinality the \emph{$k$-metric dimension} of $G$, which will be denoted by $\dim_{k}(G)$. As an example we take the cycle graph $C_{4}$ with vertex set $V=\{x_{1}, x_{2}, x_{3}, x_{4}\}$ and edge set $E=\{x_{i}x_{j}:j-i\equiv 1\pmod{2}\}$. We claim that $\dim_{2}(C_{4})=4$. Indeed, if we take the pair of vertices $x_{1}, x_{3}$, then they are distinguished only by themselves. So, $x_{1}, x_{3}$ must belong to every $2$-metric generator for $C_4$. Analogously, $x_{2},x_{4}$ also must belong to every $2$-metric generator for $C_4$. Another example is the graph $G$ in Figure \ref{figDim1}, for which $\dim_{2}(G)=4$. To see this, note that $v_{3}$ does not distinguish any pair of different vertices of $V(G)-\{v_{3}\}$ and for each pair $v_{i}, v_{3},$ $ 1\leq i\leq 5, i\not= 3$, there exist two elements of $V(G)-\{v_{3}\}$ that distinguish them. Hence, $v_3$ does not belong to any $2$-metric basis for $G$. To conclude that $V(G)-\{v_{3}\}$ must be a $2$-metric basis for $G$ we proceed as in the case of $C_{4}$. \begin{figure} \caption{A graph $G$ where $V(G)-\{v_{3}\}$ is a $2$-metric basis.} \label{figDim1} \end{figure} Note that every $k$-metric generator $S$ satisfies that $|S|\geq k$ and, if $k>1$, then $S$ is also a $(k-1)$-metric generator. Moreover, $1$-metric generators are the standard metric generators (resolving sets or locating sets as defined in \cite{Harary1976} or \cite{Slater1975}, respectively).
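As an illustration of condition (\ref{conditionDistinguish}), the following short sketch (ours; the function names are not from this paper) checks whether a given vertex subset is a $k$-metric generator, using breadth-first search distances.

\begin{verbatim}
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Shortest-path distances from 'source' in an unweighted graph given as
    an adjacency dictionary {vertex: set_of_neighbours}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_k_metric_generator(adj, S, k):
    """True iff every pair of distinct vertices is distinguished by at least
    k vertices of S, i.e. condition (1) in the text holds."""
    dist = {v: bfs_distances(adj, v) for v in adj}
    for u, v in combinations(adj, 2):
        if sum(1 for w in S if dist[w][u] != dist[w][v]) < k:
            return False
    return True

# the cycle C_4 discussed above: a 2-metric generator must contain all 4 vertices
C4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(is_k_metric_generator(C4, {1, 2, 3, 4}, 2))  # True
print(is_k_metric_generator(C4, {1, 2, 3}, 2))     # False: x_2, x_4 are only
                                                   # distinguished by themselves
\end{verbatim}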
Notice that if $k=1$, then the problem of checking whether a set $S$ is a metric generator reduces to checking condition (\ref{conditionDistinguish}) only for those vertices $u,v\in V- S$, as every vertex in $S$ is distinguished at least by itself. Also, if $k=2$, then condition (\ref{conditionDistinguish}) must be checked only for those pairs having at most one vertex in $S$, since two vertices of $S$ are distinguished at least by themselves. Nevertheless, if $k\ge 3$, then condition (\ref{conditionDistinguish}) must be checked for every pair of different vertices of the graph. The literature on the metric dimension of graphs shows several of its uses; for instance, applications to the navigation of robots in networks are discussed in \cite{Khuller1996} and applications to chemistry in \cite{Johnson1993,Johnson1998}, among others. This invariant was studied further in a number of other papers including \cite{Bailey2011a,Caceres2007,Chappell2008,Chartrand2000,Chartrand2000b,Fehr2006,Haynes2006,Okamoto2010,Saenpholphat2004,Tomescu2008,Yero2011,Yero2010}. Several variations of metric generators, including resolving dominating sets \cite{Brigham2003}, independent resolving sets \cite{Chartrand2000a}, local metric sets \cite{Okamoto2010}, and strong resolving sets \cite{Kuziak2013,Oellermann2007,Sebo2004}, have also been introduced and studied. It is therefore our goal to introduce this extension of metric generators in graphs as a possible future tool for other possibly more general variations of the applications described above. We now introduce some further terminology for the article; the remaining necessary concepts will be introduced the first time they are mentioned. We will use the notation $K_n$, $K_{r,s}$, $C_n$, $N_n$ and $P_n$ for complete graphs, complete bipartite graphs, cycle graphs, empty graphs and path graphs, respectively. If two vertices $u,v$ are adjacent in $G=(V,E)$, then we write $u\sim v$ or we say that $uv\in E(G)$. Given $x\in V(G)$ we define $N_{G}(x)$ to be the \emph{open neighbourhood} of $x$ in $G$. That is, $N_{G}(x)=\{y\in V(G):x\sim y\}$. The \emph{closed neighbourhood}, denoted by $N_{G}[x]$, equals $N_{G}(x)\cup \{x\}$. If there is no ambiguity, we will simply write $N(x)$ or $N[x]$. We also refer to the degree of $v$ as $\delta(v)=|N(v)|$. The minimum and maximum degrees of $G$ are denoted by $\delta(G)$ and $\Delta(G)$, respectively. For a non-empty set $S \subseteq V(G)$, and a vertex $v \in V(G)$, $N_S(v)$ denotes the set of neighbors that $v$ has in $S$, {\it i.e.}, $N_S(v) = S\cap N(v)$. \section{$k$-metric dimensional graphs} It is clear that it is not possible to find a $k$-metric generator in a connected graph $G$ for every integer $k$. That is, given a connected graph $G$, there exists an integer $t$ such that $G$ does not contain any $k$-metric generator for any $k>t$. According to that fact, a connected graph $G$ is said to be a \emph{$k$-metric dimensional graph} if $k$ is the largest integer such that there exists a $k$-metric basis for $G$. Notice that, if $G$ is a $k$-metric dimensional graph, then for every positive integer $k'\le k$, $G$ has at least one $k'$-metric basis. Since for every pair of different vertices $x,y$ of a graph $G$ we have that they are distinguished at least by themselves, it follows that the whole vertex set $V(G)$ is a $2$-metric generator for $G$ and, as a consequence, every graph $G$ is $k$-metric dimensional for some $k\ge 2$.
On the other hand, for any connected graph $G$ of order $n>2$ there exists at least one vertex $v\in V(G)$ such that $\delta(v)\ge 2$. Since $v$ does not distinguish any pair of different neighbours $x,y\in N_G(v)$, there is no $n$-metric dimensional graph of order $n>2$. \begin{remark}\label{remarkKMetric} $\ $Let $G$ be a $k$-metric dimensional graph of order $n$. If $n\ge 3$, then $2\le k\le n-1$. Moreover, $G$ is $n$-metric dimensional if and only if $G\cong K_2$. \end{remark} Next we give a characterization of $k$-metric dimensional graphs. To do so, we need some additional terminology. Given two different vertices $x,y\in V(G)$, we say that the set of \textit{distinctive vertices} of $x,y$ is $${\cal D}_G(x,y)=\{z\in V(G): d_{G}(x,z)\ne d_{G}(y,z)\}$$ and the set of \emph{non-trivial distinctive vertices} of $x,y$ is $${\cal D}_G^*(x,y)={\cal D}_G(x,y)-\{x,y\}.$$ \begin{theorem}\label{theokmetric} $\ $A connected graph $G$ is $k$-metric dimensional if and only if $k=\displaystyle\min_{x,y\in V(G), x\ne y}\vert {\cal D}_G(x,y)\vert .$ \end{theorem} \begin{proof} $\ $(Necessity) If $G$ is a $k$-metric dimensional graph, then for any $k$-metric basis $B$ and any pair of different vertices $x,y\in V(G)$, we have $\vert B\cap {\cal D}_G(x,y)\vert \ge k.$ Thus, $k\le \displaystyle\min_{x,y\in V(G), x\ne y}\vert {\cal D}_G(x,y)\vert .$ Suppose now that $k< \displaystyle\min_{x,y\in V(G), x\ne y}\vert {\cal D}_G(x,y)\vert.$ In such a case, for every $x',y'\in V(G)$ such that $\vert B\cap {\cal D}_G(x',y')\vert=k$, there exists a distinctive vertex $z_{x'y'}$ of $x',y'$ with $z_{x'y'}\in {\cal D}_G(x',y')-B$. Hence, the set $$B\cup \left(\displaystyle\bigcup_{x',y'\in V(G):\vert B\cap {\cal D}_G(x',y')\vert=k}\{z_{x'y'}\}\right)$$ is a $(k+1)$-metric generator for $G$, which is a contradiction. Therefore, $k=\displaystyle\min_{x,y\in V(G), x\ne y}\vert {\cal D}_G(x,y)\vert .$ (Sufficiency) Let $a,b\in V(G)$ be such that $\displaystyle\min_{x,y\in V(G), x\ne y}\vert {\cal D}_G(x,y)\vert =\vert {\cal D}_G(a,b)\vert=k$. Since the set $$\bigcup_{x,y\in V(G)}{\cal D}_G(x,y)$$ is a $k$-metric generator for $G$ and the pair $a,b$ is not distinguished by $k'>k$ vertices of $G$, we conclude that $G$ is a $k$-metric dimensional graph. \end{proof} \subsection{On some families of $k$-metric dimensional graphs for some specific values of $k$} The characterization proved in Theorem \ref{theokmetric} gives a result on general graphs. Thus, we next particularize it to some specific classes of graphs, or bound the possible value of $k$ in terms of other parameters of the graph. To this end, we need the following concepts. Two vertices $x,y$ are called \emph{false twins} if $N(x)=N(y)$ and $x,y$ are called \emph{true twins} if $N[x]=N[y]$. Two vertices $x,y$ are \emph{twins} if they are false twins or true twins. A vertex $x$ is said to be a \emph{twin} if there exists a vertex $y\in V(G)-\{x\}$ such that $x$ and $y$ are twins in $G$. Notice that two vertices $x,y$ are twins if and only if ${\cal D}_G^*(x,y)=\emptyset.$ \begin{corollary}\label{remark2dimesional} $\ $A connected graph $G$ of order $n\geq 2$ is $2$-metric dimensional if and only if $G$ has twin vertices. \end{corollary} It is clear that $P_{2}$ and $P_{3}$ are $2$-metric dimensional. Now, a specific characterization for $2$-metric dimensional trees is obtained from Theorem \ref{theokmetric} (or from Corollary \ref{remark2dimesional}). A \emph{leaf} in a tree is a vertex of degree one, while a \emph{support vertex} is a vertex adjacent to a leaf.
\begin{corollary}\label{corolTree2} $\ $A tree $T$ of order $n\geq 4$ is $2$-metric dimensional if and only if $T$ contains a support vertex which is adjacent to at least two leaves. \end{corollary} An example of a $2$-metric dimensional tree is the star graph $K_{1,n-1}$, whose $2$-metric dimension is $\dim_2(K_{1,n-1})=n-1$ (see Corollary \ref{corotree2}). On the other hand, an example of a tree $T$ which is not $2$-metric dimensional is drawn in Figure \ref{figTreeD3}. Notice that $S=\{v_{1},v_{3},v_{5},v_{6},v_{7}\}$ is a $3$-metric basis of $T$. Moreover, $T$ is $3$-metric dimensional since $|{\cal D}_T(v_1,v_3)|=3.$ \begin{figure} \caption{$S=\{v_{1},v_{3},v_{5},v_{6},v_{7}\}$ is a $3$-metric basis of $T$.} \label{figTreeD3} \end{figure} A {\em cut vertex} in a graph is a vertex whose removal increases the number of components of the graph and an {\em extreme vertex} is a vertex $v$ such that the subgraph induced by $N[v]$ is isomorphic to a complete graph. Also, a {\em block} is a maximal biconnected subgraph\footnote{A biconnected graph is a connected graph having no articulation vertices.} of the graph. Now, let $\mathfrak{F}$ be the family of sequences of connected graphs $G_1,G_2,\ldots,G_t$, $t\ge 2$, such that $G_1$ is a complete graph $K_{n_1}$, $n_1\ge 2$, and $G_i$, $i\ge 2$, is obtained recursively from $G_{i-1}$ by adding a complete graph $K_{n_i}$, $n_i\ge 2$, and identifying one vertex of $G_{i-1}$ with one vertex of $K_{n_i}$. From this point we will say that a connected graph $G$ is a \emph{generalized tree}\footnote{In some works these graphs are called block graphs.} if and only if there exists a sequence $\{G_1,G_2,\ldots,G_t\}\in \mathfrak{F}$ such that $G_t=G$ for some $t\ge 2$. Notice that in these generalized trees every vertex is either a cut vertex or an extreme vertex. Also, every complete graph used to obtain the generalized tree is a block of the graph. Note that if every $K_{n_i}$ is isomorphic to $K_2$, then $G_t$ is a tree, justifying the terminology used. With these concepts we give the following consequence of Theorem \ref{theokmetric}, which is a generalization of Corollary \ref{corolTree2}. \begin{corollary}\label{coro-gen-tree} $\ $A generalized tree $G$ is $2$-metric dimensional if and only if $G$ contains at least two extreme vertices adjacent to a common cut vertex. \end{corollary} The \emph{Cartesian product graph} $G\square H$ of two graphs $G=(V_{1},E_{1})$ and $H=(V_{2},E_{2})$ is the graph whose vertex set is $V(G\square H)=V_{1}\times V_{2}$ and any two distinct vertices $(x_{1},x_{2}),(y_{1},y_{2})\in V_{1}\times V_{2}$ are adjacent in $G\square H$ if and only if either: \begin{enumerate}[(a)] \item\label{cartesian1}$\ $ $x_{1}=y_{1}$ and $x_{2}\sim y_{2}$, or \item\label{cartesian2}$\ $ $x_{1}\sim y_{1}$ and $x_{2}=y_{2}$. \end{enumerate} \begin{proposition}\label{corolGCartesian} $\ $Let $G$ and $H$ be two connected graphs of order $n\ge 2$ and $n'\ge 3$, respectively. If $G\square H$ is $k$-metric dimensional, then $ k\geq 3$. \end{proposition} \begin{proof} $\ $Notice that for any vertex $(a,b)\in V(G\square H)$, $N_{G\square H}((a,b))=(N_{G}(a)\times \{b\})\cup(\{a\}\times N_{H}(b))$. Now, for any two distinct vertices $(a,b),(c,d)\in V(G\square H)$ we have $a\ne c$ or $b\ne d$, and since $H$ is a connected graph of order greater than two, at least one of $N_{H}(b)\ne \{d\}$ or $N_{H}(d)\ne \{b\}$ holds. Thus, we obtain that $N_{G\square H}((a,b))\ne N_{G\square H}((c,d))$.
Therefore, $G\square H$ does not contain any twins and, by Remark \ref{remarkKMetric} and Corollary \ref{remark2dimesional}, if $G\square H$ is $k$-metric dimensional, then $k\geq 3$. \end{proof} \begin{proposition}\label{propKClyce} $\ $Let $C_{n}$ be a cycle graph of order $n$. If $n$ is odd, then $C_n$ is $(n-1)$-metric dimensional and if $n$ is even, then $C_n$ is $(n-2)$-metric dimensional. \end{proposition} \begin{proof} $\ $We consider two cases: \begin{enumerate}[(1)] \item $\ $ $n$ is odd. For any pair of different vertices $u,v\in V(C_{n})$ there exists only one vertex $w\in V(C_{n})$ such that $w$ does not distinguish $u$ and $v$. Therefore, by Theorem \ref{theokmetric}, $C_n$ is $(n-1)$-metric dimensional. \item $\ $ $n$ is even. In this case, $C_{n}$ is $2$-antipodal\footnote{The diameter of $G=(V,E)$ is defined as $D(G)=\max_{u,v\in V(G)}\{d_{G}(u,v)\}$. We say that $u$ and $v$ are antipodal vertices or mutually antipodal if $d_G(u,v)=D(G)$. We recall that $G=(V,E)$ is $2$-antipodal if for each vertex $x\in V$ there exists exactly one vertex $y\in V$ such that $d_G(x,y)=D(G)$.}. For any pair of vertices $u,v\in V(C_{n})$ such that $d(u,v)=2l$, we can take a vertex $x$ such that $d(u,x)=d(v,x)=l$. So, ${\cal D}_G(u,v)=V(C_n)-\{x,y\}$, where $y$ is antipodal to $x$. On the other hand, if $d(u,v)$ is odd, then ${\cal D}_G(u,v)=V(C_n)$. Therefore, by Theorem \ref{theokmetric}, the graph $C_n$ is $(n-2)$-metric dimensional. \end{enumerate} \end{proof} Now, according to Remark \ref{remarkKMetric} we have that every graph of order $n$, different from $K_2$, is $k$-metric dimensional for some $k\le n-1$. Next we characterize the graphs that are $(n-1)$-metric dimensional. \begin{theorem} $\ $A graph $G$ of order $n\ge 3$ is $(n-1)$-metric dimensional if and only if $G$ is a path or $G$ is an odd cycle. \end{theorem} \begin{proof} $\ $Since $n\ge 3$, by Remark \ref{remarkKMetric}, $G$ is $k$-metric dimensional for some $k\in \{2,\ldots, n-1\}$. Now, for any pair of different vertices $u,v\in V(P_{n})$ there exists at most one vertex $w\in V(P_{n})$ such that $w$ does not distinguish $u$ and $v$. Thus $P_n$ is $(n-1)$-metric dimensional. By Proposition \ref{propKClyce}, we have that if $G$ is an odd cycle, then $G$ is $(n-1)$-metric dimensional. Conversely, let $G$ be an $(n-1)$-metric dimensional graph. Hence, for every pair of different vertices $x,y\in V(G)$ there exists at most one vertex which does not distinguish $x,y$. Suppose $\Delta(G)>2$ and let $v\in V(G)$ be such that $\{u_1,u_2,u_3\}\subset N(v)$. Figure \ref{figCases} shows all the possibilities for the links between these four vertices. Figures \ref{figCases} (a), \ref{figCases} (b) and \ref{figCases} (d) show that $v,u_1$ do not distinguish $u_2,u_3$. Figure \ref{figCases} (c) shows that $u_1,u_2$ do not distinguish $v,u_3$. Thus, from the cases above we deduce that there is a pair of different vertices which is not distinguished by at least two other different vertices. Thus $G$ is not an $(n-1)$-metric dimensional graph, which is a contradiction. As a consequence, $\Delta(G)\le 2$ and we have that $G$ is either a path or a cycle graph. Finally, by Proposition \ref{propKClyce}, we have that if $G$ is a cycle, then $G$ has odd order.
\end{proof} \begin{figure} \caption{Possible cases for a vertex $v$ with three adjacent vertices $u_1,u_2,u_3$.} \label{figCases} \end{figure} \subsection{Bounding the value $k$ for which a graph is $k$-metric dimensional}\label{SectionBoundK-dimensional} In order to continue presenting our results, we need to introduce some definitions. A vertex of degree at least three in a graph $G$ will be called a \emph{major vertex} of $G$. Any end-vertex (a vertex of degree one) $u$ of $G$ is said to be a \emph{terminal vertex} of a major vertex $v$ of $G$ if $d_{G}(u,v)<d_{G}(u,w)$ for every other major vertex $w$ of $G$. The \emph{terminal degree} $\operatorname{ter}(v)$ of a major vertex $v$ is the number of terminal vertices of $v$. A major vertex $v$ of $G$ is an \emph{exterior major vertex} of $G$ if it has positive terminal degree. Let $\mathcal{M}(G)$ be the set of exterior major vertices of $G$ having terminal degree greater than one. Given $w\in \mathcal{M}(G)$ and a terminal vertex $u_{j}$ of $w$, we denote by $P(u_j,w)$ the shortest path that starts at $u_{j}$ and ends at $w$. Let $l(u_{j},w)$ be the length of $P(u_j,w)$. Now, given $w\in \mathcal{M}(G)$ and two terminal vertices $u_{j},u_{r}$ of $w$ we denote by $P(u_{j},w,u_{r})$ the shortest path from $u_{j}$ to $u_{r}$ containing $w$, and by $\varsigma(u_{j},u_{r})$ the length of $P(u_{j},w,u_{r})$. Notice that, by definition of exterior major vertex, $P(u_{j},w,u_{r})$ is obtained by concatenating the paths $P(u_{j},w)$ and $P(u_{r},w)$, where $w$ is the only vertex of degree greater than two lying on these paths. Finally, given $w\in \mathcal{M}(G)$ and the set of terminal vertices $U=\{u_{1},u_{2},\ldots,u_{k}\}$ of $w$, for $j\not=r$ we define $\varsigma(w)=\displaystyle\min_{u_{j},u_{r}\in U}\{\varsigma(u_{j},u_{r})\}$ and $l(w)=\displaystyle\min_{u_{j}\in U}\{l(u_{j},w)\}$. \begin{figure} \caption{A graph $G$ where $\varsigma(G)=3$.} \label{example-G-*} \end{figure} From the local parameters above we define the following global parameter $$\varsigma(G)=\min_{w\in \mathcal{M}(G)}\{\varsigma(w)\}.$$ An example which helps to understand the notation above is given in Figure \ref{example-G-*}. In such a case we have $\mathcal{M}(G)=\{v_3,v_5,v_{15}\}$ and, for instance, $\{v_1,v_8,v_{12}\}$ are terminal vertices of $v_3$. So, $v_3$ has terminal degree three ($\operatorname{ter}(v_3)=3$) and it follows that \begin{flalign*} l(v_3)&=\min\{l(v_{12},v_3),l(v_8,v_3),l(v_1,v_3)\}=\min\{1,2,2\}=1, \end{flalign*} and \begin{flalign*} \varsigma(v_3)&=\displaystyle\min\{\varsigma(v_{12},v_1),\varsigma(v_{12},v_8),\varsigma(v_8,v_1)\}=\displaystyle\min\{3,3,4\}=3. \end{flalign*} Similarly, it is possible to observe that $\operatorname{ter}(v_5)=2$, $l(v_5)=1$, $\varsigma(v_5)=3$, $\operatorname{ter}(v_{15})=2$, $l(v_{15})=2$ and $\varsigma(v_{15})=4$. Therefore, $\varsigma(G)=3$. According to this notation we present the following result. \begin{theorem}\label{coroKcota} $\ $Let $G$ be a connected graph such that $\mathcal{M}(G)\ne \emptyset$. If $G$ is $k$-metric dimensional, then $k\leq \varsigma(G)$. \end{theorem} \begin{proof} $\ $We claim that there exists at least one pair of different vertices $x,y\in V(G)$ such that $\vert{\cal D}_G(x,y)\vert=\varsigma(G)$. To see this, let $w\in \mathcal{M}(G)$ and let $u_{1},u_{2}$ be two terminal vertices of $w$ such that $\varsigma(G)=\varsigma(w)=\varsigma(u_{1},u_{2})$. Let $u'_{1}$ and $u'_{2}$ be the vertices adjacent to $w$ in the shortest paths $P(u_{1},w)$ and $P(u_{2},w)$, respectively. 
Notice that it could happen that $u'_{1}=u_{1}$ or $u'_{2}=u_{2}$. Since every vertex $v\not\in V\left(P(u_{1},w,u_{2})\right)-\{w\}$ satisfies that $d_{G}(u'_{1},v)=d_{G}(u'_{2},v)$, and the only distinctive vertices of $u'_{1},u'_{2}$ are those belonging to $P(u'_{1},u_{1})$ and $P(u'_{2},u_{2})$, we have that $\vert{\cal D}_G(u'_{1},u'_{2})\vert=\varsigma(G)$. Therefore, by Theorem \ref{theokmetric}, if $G$ is $k$-metric dimensional, then $k\leq \varsigma(G)$. \end{proof} The upper bound of Theorem \ref{coroKcota} is tight. For instance, it is achieved for every tree different from a path, as is proved further on in Section \ref{sect-dim-trees}, where the $k$-metric dimension of trees is studied. A {\em clique} in a graph $G$ is a set of vertices $S$ such that the subgraph induced by $S$, denoted by $\langle S\rangle$, is isomorphic to a complete graph. The maximum cardinality of a clique in a graph $G$ is the {\em clique number} and it is denoted by $\omega(G)$. We will say that $S$ is an $\omega(G)$-clique if $|S|=\omega(G)$. \begin{theorem}\label{clique-versus-k} $\ $Let $G$ be a graph of order $n$ different from a complete graph. If $G$ is $k$-metric dimensional, then $k\le n-\omega(G)+1$. \end{theorem} \begin{proof} $\ $Let $S$ be an $\omega(G)$-clique. Since $G$ is not complete, there exists a vertex $v\notin S$ such that $N_S(v)\subsetneq S$. Let $u\in S$ with $v\not\sim u$. If $N_S(v)=S-\{u\}$, then $d(u,x)=d(v,x)=1$ for every $x\in S-\{u\}$. Thus, $\vert{\cal D}_G(u,v)\vert\le n-\omega(G)+1$. On the other hand, if $N_S(v)\ne S-\{u\}$, then there exists $u'\in S-\{u\}$ such that $u'\not\sim v$. Thus, $d(u,v)=d(u',v)=2$ and for every $x\in S-\{u,u'\}$, $d(u,x)=d(u',x)=1$. So, $\vert{\cal D}_G(u,u')\vert\le n-\omega(G)+1$. Therefore, Theorem \ref{theokmetric} leads to $k\le n-\omega(G)+1$. \end{proof} Examples where the previous bound is achieved are those connected graphs $G$ of order $n$ with clique number $\omega(G)=n-1$. In such a case, $n-\omega(G)+1=2$. Notice that in this case there exist at least two twin vertices. Hence, by Corollary \ref{remark2dimesional} these graphs are $2$-metric dimensional. The {\em girth} of a graph $G$ is the length of a shortest cycle in $G$. \begin{theorem}\label{girth-versus-k} $\ $Let $G$ be a graph of order $n$, minimum degree $\delta(G)\ge 2$, maximum degree $\Delta(G)\ge 3$ and girth $\mathtt{g}(G)\ge 4$. If $G$ is $k$-metric dimensional, then $$k\le n-1-(\Delta(G)-2)\sum_{i=0}^{\left\lfloor\frac{\mathtt{g}(G)}{2}\right\rfloor-2}(\delta(G)-1)^i.$$ \end{theorem} \begin{proof} $\ $Let $v\in V$ be a vertex of maximum degree in $G$. Since $\Delta(G)\ge 3$ and $\mathtt{g}(G)\ge 4$, there are at least three different vertices adjacent to $v$ and $ N(v)$ is an independent set\footnote{An independent set or stable set is a set of vertices in a graph, no two of which are adjacent.}. Given $u_1,u_2\in N(v)$ and $i\in \{0,\ldots,\left\lfloor\frac{\mathtt{g}(G)}{2}\right\rfloor-2\}$ we define the following sets. \begin{align*} A_0&=N(v)-\{u_1,u_2\}.\\ A_1&=\bigcup_{x\in A_0}N(x)-\{v\}.\\ A_2&=\bigcup_{x\in A_1}N(x)-A_0.\\ &\ldots\\ A_{\left\lfloor\frac{\mathtt{g}(G)}{2}\right\rfloor-2}&=\bigcup_{x\in A_{\left\lfloor\frac{\mathtt{g}(G)}{2}\right\rfloor-3}}N(x)-A_{\left\lfloor\frac{\mathtt{g}(G)}{2}\right\rfloor-4}. \end{align*} Now, let $A=\{v\}\cup \left(\displaystyle\bigcup_{i=0}^{\left\lfloor\frac{\mathtt{g}(G)}{2}\right\rfloor-2}A_i\right)$.
Since $\delta(G)\ge 2$, we have that $|A|\ge 1+(\Delta(G)-2)\displaystyle\sum_{i=0}^{\left\lfloor\frac{\mathtt{g}(G)}{2}\right\rfloor-2}(\delta(G)-1)^i$. Also, notice that for every vertex $x\in A$, $d(u_1,x)=d(u_2,x)$. Thus, $u_1,u_2$ can only be distinguished by themselves and by at most $n-|A|-2$ other vertices. Therefore, $\vert{\cal D}_G(u_1,u_2)\vert\le n-|A|$ and the result follows by Theorem \ref{theokmetric}. \end{proof} The bound of Theorem \ref{girth-versus-k} is sharp. For instance, it is attained for the graph in Figure \ref{figUpperBound}. Since in this case $n=8$, $\delta(G)=2$, $\Delta(G)=3$ and $\mathtt{g}(G)=5$, we have that $k\le n-1-(\Delta(G)-2)\sum_{i=0}^{\left\lfloor\frac{\mathtt{g}(G)}{2}\right\rfloor-2}(\delta(G)-1)^i=6$. Table \ref{tableNearlyTwin} shows every pair of different vertices of this graph and their corresponding non-trivial distinctive vertices. Notice that by Theorem \ref{theokmetric} the graph is $6$-metric dimensional. \begin{figure} \caption{A graph that satisfies the equality in the upper bound of Theorem \ref{girth-versus-k}.} \label{figUpperBound} \end{figure} \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline \mbox{$x,y$} & \mbox{${\cal D}_G^*(x,y)$}\\ \hline $v_1, v_3$ & $\{v_4, v_5, v_7, v_8\}$\\\hline $v_1, v_5$ & $\{v_2, v_4, v_6, v_8\}$\\\hline $v_1, v_6$ & $\{v_4, v_5, v_7, v_8\}$\\\hline $v_1, v_7$ & $\{v_2, v_3, v_5, v_6\}$\\\hline $v_1, v_8$ & $\{v_2, v_3, v_4, v_7\}$\\\hline $v_2, v_5$ & $\{v_1, v_3, v_4, v_8\}$\\\hline $v_2, v_6$ & $\{v_1, v_3, v_5, v_7\}$\\\hline $v_2, v_7$ & $\{v_1, v_3, v_4, v_8\}$\\\hline $v_3, v_4$ & $\{v_1, v_2, v_5, v_8\}$\\\hline $v_3, v_5$ & $\{v_1, v_2, v_6, v_7\}$\\\hline $v_3, v_6$ & $\{v_4, v_5, v_7, v_8\}$\\\hline $v_3, v_7$ & $\{v_2, v_4, v_6, v_8\}$\\\hline $v_4, v_5$ & $\{v_3, v_6, v_7, v_8\}$\\\hline $v_4, v_8$ & $\{v_1, v_3, v_5, v_7\}$\\\hline $v_5, v_7$ & $\{v_1, v_3, v_4, v_8\}$\\\hline $v_7, v_8$ & $\{v_1, v_4, v_5, v_6\}$\\ \hline \end{tabular} \hspace{0.5cm} \begin{tabular}{|c|c|} \hline \mbox{$x,y$} & \mbox{${\cal D}_G^*(x,y)$}\\ \hline $v_1, v_2$ & $\{v_3, v_4, v_5, v_6, v_8\}$\\\hline $v_1, v_4$ & $\{v_2, v_3, v_5, v_7, v_8\}$\\\hline $v_2, v_3$ & $\{v_1, v_4, v_6, v_7, v_8\}$\\\hline $v_2, v_4$ & $\{v_1, v_5, v_6, v_7, v_8\}$\\\hline $v_2, v_8$ & $\{v_3, v_4, v_5, v_6, v_7\}$\\\hline $v_3, v_8$ & $\{v_1, v_2, v_4, v_5, v_7\}$\\\hline $v_4, v_6$ & $\{v_1, v_2, v_3, v_7, v_8\}$\\\hline $v_4, v_7$ & $\{v_1, v_3, v_5, v_6, v_8\}$\\\hline $v_5, v_6$ & $\{v_1, v_2, v_4, v_7, v_8\}$\\\hline $v_5, v_8$ & $\{v_1, v_3, v_4, v_6, v_7\}$\\\hline $v_6, v_7$ & $\{v_2, v_3, v_4, v_5, v_8\}$\\\hline $v_6, v_8$ & $\{v_1, v_2, v_3, v_4, v_5\}$\\\hline & \\\hline & \\\hline & \\\hline & \\ \hline \end{tabular} \caption{Pairs of vertices of the graph in Figure \ref{figUpperBound} and their non-trivial distinctive vertices.} \label{tableNearlyTwin} \end{table} \section{The $k$-metric dimension of graphs} In this section we present some results that allow us to compute the $k$-metric dimension of several families of graphs. We also give some tight bounds on the $k$-metric dimension of a graph. \begin{theorem}[Monotony of the $k$-metric dimension]\label{theoBoundForKs} $\ $Let $G$ be a $k$-metric dimensional graph and let $k_1,k_2$ be two integers. If $1\le k_1<k_2\le k$, then $\dim_{k_1}(G)<\dim_{k_2}(G)$. \end{theorem} \begin{proof} $\ $Let $B$ be a $k$-metric basis of $G$. Let $x\in B$.
Since all pairs of different vertices in $V(G)$ are distinguished by at least $k$ vertices of $B$, we have that $B-\{x\}$ is a $(k-1)$-metric generator for $G$ and, as a consequence, $\dim_{k-1}(G)\le \left|B-\{x\}\right|<|B|=\dim_{k}(G)$. Proceeding analogously, we obtain that $\dim_{k-1}(G)>\dim_{k-2}(G)$ and, by a finite repetition of the process we obtain the result. \end{proof} \begin{corollary}\label{firstConsequenceMonotony} $\ $Let $G$ be a $k$-metric dimensional graph of order $n$. \begin{enumerate}[{\rm (i)}] \item $\ $For every $r\in\{1,\ldots, k\}$, $\dim_r(G)\ge \dim(G)+(r-1).$ \item $\ $For every $r\in\{1,\ldots, k-1\}$, $\dim_r(G)<n$. \item $\ $If $G\not\cong P_n$, then for any $r\in \{1,\ldots,k\}$, $\dim_{r}(G)\geq r+1.$ \end{enumerate} \end{corollary} \begin{proposition}\label{theoPath2} $\ $Let $G$ be a connected graph of order $n\geq 2$. Then $\dim_{2}(G)=2$ if and only if $G\cong P_{n}$. \end{proposition} \begin{proof} $\;$ It was shown in \cite{Chartrand2000} that $\dim(G)=1$ if and only if $G\cong P_{n}$. (Necessity) If $\dim_{2}(G)=2$, then by Corollary \ref{firstConsequenceMonotony} (i) we have that $\dim(G)=1$, {\it i.e.}, $$2=\dim_{2}(G)\ge \dim(G)+1\ge 2.$$ Hence, $G$ must be isomorphic to a path graph. (Sufficiency) By Corollary \ref{firstConsequenceMonotony} (i) we have $\dim_2(P_n)\ge \dim(P_n)+1=2$ and, since the leaves of $P_n$ distinguish every pair of different vertices of $P_n$, we conclude that $\dim_2(P_n)=2$. \end{proof} Let ${\cal D}_k(G)$ be the set obtained as the union of the sets of distinctive vertices ${\cal D}_G(x,y)$ whenever $\vert{\cal D}_G(x,y)\vert=k$, {\it i.e.}, $${\cal D}_k(G)=\bigcup_{\vert {\cal D}_G(x,y)\vert=k}{\cal D}_G(x,y).$$ \begin{remark}\label{remTauk} $\ $If $G$ is a $k$-metric dimensional graph, then $\dim_{k}(G)\geq \vert {\cal D}_k(G)\vert$. \end{remark} \begin{proof} $\ $Since every pair of different vertices $x,y$ is distinguished only by the elements of ${\cal D}_G(x,y)$, if $\vert {\cal D}_G(x,y)\vert =k$, then for any $k$-metric basis $B$ we have ${\cal D}_G(x,y)\subseteq B$ and, as a consequence, ${\cal D}_k(G)\subseteq B$. Therefore, the result follows. \end{proof} The bound given in Remark \ref{remTauk} is tight. For instance, in Proposition \ref{propTauk} we will show that there exists a family of trees attaining this bound for every $k$. Other examples can be derived from the following result. \begin{proposition}\label{theoDimkn} $\ $Let $G$ be a $k$-metric dimensional graph of order $n$. Then $\dim_k(G)=n$ if and only if $V(G)={\cal D}_k(G)$. \end{proposition} \begin{proof} $\ $Suppose that $V(G)={\cal D}_k(G)$. Now, since every $k$-metric dimensional graph $G$ satisfies that $\dim_k(G)\le n$, by Remark \ref{remTauk} we obtain that $\dim_k(G)=n$. On the contrary, let $\dim_{k}(G)=n$. Note that for every $a,b\in V(G)$, we have $\vert {\cal D}_G(a,b)\vert \ge k$. If there exists at least one vertex $x\in V(G)$ such that $x\notin {\cal D}_k(G)$, then for every $a,b\in V(G)$, we have $\vert {\cal D}_G(a,b)-\{x\}\vert \ge k$ and, as a consequence, $V(G)-\{x\}$ is a $k$-metric generator for $G$, which is a contradiction. Therefore, $V(G)={\cal D}_k(G)$. \end{proof} \begin{corollary}\label{remarkDim2n} $\ $Let $G$ be a connected graph of order $n\geq 2$. Then $\dim_2(G)=n$ if and only if every vertex is a twin. \end{corollary} We will show other examples of graphs that satisfy Proposition \ref{theoDimkn} for $k\ge 3$. 
To this end, we recall that the \emph{join graph} $G+H$ of the graphs $G=(V_{1},E_{1})$ and $H=(V_{2},E_{2})$ is the graph with vertex set $V(G+H)=V_{1}\cup V_{2}$ and edge set $E(G+H)=E_{1}\cup E_{2}\cup \{uv\,:\,u\in V_{1},v\in V_{2}\}$. We give now some examples of graphs satisfying the assumptions of Proposition \ref{theoDimkn}. Let $W_{1,n}=C_n+K_1$ be the \emph{wheel graph} and $F_{1,n}=P_n+K_1$ be the \emph{fan graph}. The vertex of $K_1$ is called the central vertex of the wheel or the fan, respectively. Since $V(F_{1,4})={\cal D}_3(F_{1,4})$ and $V(W_{1,5})={\cal D}_4(W_{1,5})$, by Proposition \ref{theoDimkn} we have that $\dim_3(F_{1,4})=5$ and $\dim_4(W_{1,5})=6$, respectively. Given two non-trivial graphs $G$ and $H$, it holds that any pair of twin vertices $x,y\in V(G)$ or $x,y\in V(H)$ are also twin vertices in $G+H$. As a direct consequence of Corollary \ref{remarkDim2n}, the next result holds. \begin{remark}\label{coroJoinGG} $\ $Let $G$ and $H$ be two nontrivial graphs of order $n_{1}$ and $n_{2}$, respectively. If all the vertices of $G$ and $H$ are twin vertices, then $G+H$ is $2$-metric dimensional and $$\dim_{2}(G+H)=n_{1}+n_{2}.$$ \end{remark} Note that in Remark \ref{coroJoinGG}, the graphs $G$ and $H$ could be non connected. Moreover, $G$ and $H$ could be nontrivial empty graphs. For instance, $N_{r}+N_{s}$, where $N_r$, $N_s$, $r,s>1$, are empty graphs, is the complete bipartite graph $K_{r,s}$ which satisfies that $\dim_{2}(K_{r,s})=r+s$. \subsection{Bounding the $k$-metric dimension of graphs} We begin this subsection with a necessary definition of the \textit{twin equivalence relation} ${\cal R}$ on $V(G)$ as follows: $$x {\cal R} y \longleftrightarrow N_G[x]=N_G[y] \; \; \mbox{\rm or } \; N_G(x)=N_G(y).$$ We have three possibilities for each twin equivalence class $U$: \begin{enumerate}[(a)] \item $\ $ $U$ is singleton, or \item $\ $ $N_G(x)=N_G(y)$, for any $x,y\in U$ (and case (a) does not apply), or \item $\ $ $N_G[x]=N_G[y]$, for any $x,y\in U$ (and case (a) does not apply). \end{enumerate} We will refer to the type (c) classes as the \textit{true twin equivalence classes} \textit{i.e.,} $U$ is a true twin equivalence class if and only if $U$ is not singleton and $N_G[x]=N_G[y]$, for any $x,y\in U$. Let us see three different examples where every vertex is a twin. An example of a graph where every equivalence class is a true twin equivalence class is $K_r+(K_s\cup K_t)$, $r,s,t\ge 2$. In this case, there are three equivalence classes composed by $r,s$ and $t$ true twin vertices, respectively. As an example where no class is composed by true twin vertices we take the complete bipartite graph $K_{r,s}$, $r,s\ge 2$. Finally, the graph $K_r+N_s$, $r,s\ge 2$, has two equivalence classes and one of them is composed by $r$ true twin vertices. On the other hand, $K_1+(K_r\cup N_s)$, $r,s\ge 2$, is an example where one class is singleton, one class is composed by true twin vertices and the other one is composed by false twin vertices. In general, we can state the following result. \begin{remark} $\ $Let $G$ be a connected graph and let $U_1,U_2,\ldots,U_t$ be the non-singleton twin equivalence classes of $G$. Then $$\dim_2(G)\ge \sum_{i=1}^t|U_i|.$$ \end{remark} \begin{proof} $\ $Since for two different vertices $x,y\in V(G)$ we have that $ {\cal D}_2(x,y)=\{x,y\}$ if and only if there exists an equivalence class $U_i$ such that $x,y\in U_i$, we deduce $${\cal D}_2(G)= \bigcup_{i=1}^t U_i.$$ Therefore, by Remark \ref{remTauk} we conclude the proof. 
\end{proof} Notice that the result above leads to Corollary \ref{remarkDim2n}, so this bound is tight. Now we consider the connected graph $G$ of order $r+s$ obtained from a null graph $N_r$ of order $r\ge 2$ and a path $P_s$ of order $s\ge 1$ by connecting every vertex of $N_r$ to a given leaf of $P_s$. In this case, there are $s$ singleton classes and one class, say $U_1$, of cardinality $r$. By the previous result we have $\dim_2(G)\ge \vert U_1\vert=r$ and, since $U_1$ is a 2-metric generator for $G$, we conclude that $\dim_2(G)=r.$ We recall that the {\em strong product graph} $G\boxtimes H$ of two graphs $G=(V_{1},E_{1})$ and $H=(V_{2},E_{2})$ is the graph with vertex set $V\left(G\boxtimes H\right)=V_{1}\times V_{2}$, where two distinct vertices $(x_{1},x_{2}),(y_{1},y_{2})\in V_{1}\times V_{2}$ are adjacent in $G\boxtimes H$ if and only if one of the following holds. \begin{itemize} \item $\ $ $x_{1}=y_{1}$ and $x_{2}\sim y_{2}$, or \item $\ $ $x_{1}\sim y_{1}$ and $x_{2}=y_{2}$, or \item $\ $ $x_{1}\sim y_{1}$ and $x_{2}\sim y_{2}$. \end{itemize} \begin{theorem}\label{Generalizacompleto-por-G} $\ $Let $G$ and $H$ be two nontrivial connected graphs of order $n$ and $n'$, respectively. Let $U_1,U_2,\ldots,U_{t}$ be the true twin equivalence classes of $G$. Then $$\dim_2(G\boxtimes H)\ge n'\sum_{i=1}^t|U_i|.$$ Moreover, if every vertex of $G$ is a true twin, then $$\dim_2(G\boxtimes H)= n n'.$$ \end{theorem} \begin{proof} $\ $For any two vertices $a,c\in U_i$ and $b\in V(H)$, \begin{align*}N_{G\boxtimes H}[(a,b)]&=N_G[a]\times N_H[b]\\ &= N_G[c]\times N_H[b]\\ &=N_{G\boxtimes H}[(c,b)]. \end{align*} Thus, $(a,b)$ and $(c,b)$ are true twin vertices. Hence, $${\cal D}_2(G\boxtimes H)\supseteq \bigcup_{i=1}^t U_i\times V(H).$$ Therefore, by Remark \ref{remTauk} we conclude $\dim_2(G\boxtimes H)\ge n'\displaystyle\sum_{i=1}^t|U_i|.$ Finally, if every vertex of $G$ is a true twin, then $ \displaystyle\bigcup_{i=1}^t U_i=V(G)$ and, as a consequence, we obtain $\dim_2(G\boxtimes H)= nn'.$ \end{proof} We now present a lower bound for the $k$-metric dimension of a $k'$-metric dimensional graph $G$ with $k'\ge k$. To this end, we require the use of the following function for any exterior major vertex $w\in V(G)$ having terminal degree greater than one, {\it i.e.}, $w\in \mathcal{M}(G)$. Notice that this function uses the concepts already defined in Section \ref{SectionBoundK-dimensional}. Given an integer $r\le k'$, \[ I_r(w)=\left\{ \begin{array}{ll} \left(\operatorname{ter}(w)-1\right)\left(r-l(w)\right)+l(w), & \mbox{if } l(w)\le\lfloor\frac{r}{2}\rfloor,\\ & \\ \left(\operatorname{ter}(w)-1\right)\lceil\frac{r}{2}\rceil+\lfloor\frac{r}{2}\rfloor, & \mbox{otherwise.} \end{array} \right. \] In Figure \ref{example-G-*} we give an example of a graph $G$, which helps to clarify the notation above. Since every graph is at least $2$-metric dimensional, we can consider the integer $r=2$ and we have the following. \begin{itemize} \item $\ $Since $l(v_3)=1\le \left\lfloor\frac{r}{2}\right\rfloor$, it follows that $I_r(v_3)=\left(\operatorname{ter}(v_3)-1\right)\left(r-l(v_3)\right)+l(v_3)=(3-1)(2-1)+1=3$. \item $\ $Since $l(v_5)=1\le \left\lfloor\frac{r}{2}\right\rfloor$, it follows that $I_r(v_5)=\left(\operatorname{ter}(v_5)-1\right)\left(r-l(v_5)\right)+l(v_5)=(2-1)(2-1)+1=2$. 
\item $\ $Since $l(v_{15})=2>\left\lfloor\frac{r}{2}\right\rfloor$, it follows that $I_r(v_{15})=\left(\operatorname{ter}(v_{15})-1\right)\left\lceil\frac{r}{2}\right\rceil+\left\lfloor\frac{r}{2}\right\rfloor=(2-1)\left\lceil\frac{2}{2}\right\rceil+\left\lfloor\frac{2}{2}\right\rfloor=2$. \end{itemize} Therefore, according to the result below, $\dim_2(G)\geq 3+2+2=7$. \begin{theorem}\label{theoMuk} $\ $If $G$ is a $k$-metric dimensional graph such that $|\mathcal{M}(G)|\ge 1$, then for every $r\in \{1,\ldots,k\}$, $$\dim_{r}(G)\geq \sum_{w\in \mathcal{M}(G)}I_{r}(w).$$ \end{theorem} \begin{proof} $\ $Let $S$ be an $r$-metric basis of $G$. Let $w\in \mathcal{M}(G)$ and let $u_{i},u_{s}$ be two different terminal vertices of $w$. Let $u'_{i},u'_{s}$ be the vertices adjacent to $w$ in the paths $P(u_{i},w)$ and $P(u_{s},w)$, respectively. Notice that ${\cal D}_G(u'_{i},u'_{s})=$ $V\left(P(u_{i},w,u_{s})\right)-\{w\}$ and, as a consequence, it follows that $\left|S\cap \left(V\left(P(u_{i},w,u_{s})\right)-\{w\}\right)\right|\ge r$. Now, if $\operatorname{ter}(w)=2$, then we have $$\left|S\cap \left(V\left(P(u_{i},w,u_{s})\right)-\{w\}\right)\right|\ge r=I_{r}(w).$$ Now, we assume $\operatorname{ter}(w)>2$. Let $W$ be the set of terminal vertices of $w$, and let $u_{j}'$ be the vertex adjacent to $w$ in the path $P(u_{j},w)$ for every $u_{j}\in W$. Let $U(w)=\displaystyle\bigcup_{u_{j}\in W}V\left(P(u_{j},w)\right)-\{w\}$ and let $x=\displaystyle\min_{u_{j}\in W}\lbrace\left|S\cap V\left(P(u_{j},w)\right)\right|\rbrace$. Since $S$ is an $r$-metric generator of minimum cardinality (it is an $r$-metric basis of $G$), it is satisfied that $0\le x\le \min\lbrace l(w),\lfloor\frac{r}{2}\rfloor\rbrace$. Let $u_{\alpha}$ be a terminal vertex such that $\left|S\cap \left(V\left(P(u_{\alpha},w)\right)-\{w\}\right)\right|=x$. Since for every terminal vertex $u_{\beta}\in W-\{u_{\alpha}\}$ we have that $|S\cap {\cal D}_G(u_{\beta}',u_{\alpha}')|\ge r$, it follows that $\left|S\cap \left(V\left(P(u_{\beta},w)\right)-\{w\}\right)\right|\ge r-x$. Thus, \begin{align*} |S\cap U(w)|&=\left|S\cap \left(V\left(P(u_{\alpha},w)\right)-\{w\}\right)\right|+\\ &+\displaystyle\sum_{\beta=1,\beta\ne \alpha}^{\operatorname{ter}(w)}\left|S\cap \left(V\left(P(u_{\beta},w)\right)-\{w\}\right)\right|\\ &\ge \left(\operatorname{ter}(w)-1\right)(r-x)+x. \end{align*} Now, if $x=0$, then $|S\cap U(w)|\ge \left(\operatorname{ter}(w)-1\right)r>I_r(w)$. On the contrary, if $x>0$, then the function $f(x)=\left(\operatorname{ter}(w)-1\right)(r-x)+x$ is decreasing with respect to $x$. So, the minimum value of $f$ is achieved in the highest possible value of $x$. Thus, $|S\cap U(w)|\ge I_r(w)$. Since $\displaystyle\bigcap_{w\in \mathcal{M}(G)}U(w)=\emptyset$, it follows that $$\displaystyle \dim_{r}(G)\geq \sum_{w\in \mathcal{M}(G)}|S\cap U(w)|\ge \sum_{w\in \mathcal{M}(G)}I_{r}(w).$$ \end{proof} Now, in order to give some consequences of the bound above we will use some notation defined in Section \ref{SectionBoundK-dimensional} to introduce the following parameter. $$\mu(G)=\sum_{v\in{\cal M}(G)}\operatorname{ter}(v).$$ Notice that for $k=1$ Theorem \ref{theoMuk} leads to the bound on the metric dimension of a graph, established by Chartrand \textit{et al.} in \cite{Chartrand2000}. 
In such a case, $I_1(w)=\operatorname{ter}(w)-1$ for all $w\in \mathcal{M}(G)$ and thus, $$\dim(G)\geq \sum_{w\in \mathcal{M}(G)}\left(\operatorname{ter}(w)-1\right)=\mu(G)-|\mathcal{M}(G)|.$$ Next we give the particular cases of Theorem \ref{theoMuk} for $r=2$ and $r=3$. \begin{corollary}\label{coroMu2} $\ $If $G$ is a connected graph, then $$\dim_{2}(G)\geq \mu(G).$$ \end{corollary} \begin{proof} $\ $If $\mathcal{M}(G)=\emptyset$, then $\mu(G)=0$ and the result is direct. Suppose that $\mathcal{M}(G)\ne\emptyset$. Since $I_2(w)=\operatorname{ter}(w)$ for all $w\in \mathcal{M}(G)$, we deduce that $$\dim_{2}(G)\displaystyle\geq \sum_{w\in \mathcal{M}(G)}\operatorname{ter}(w)=\mu(G).$$ \end{proof} \begin{corollary}\label{coroMu3} $\ $If $G$ is $k$-metric dimensional for some $k\ge 3$, then $$\dim_{3}(G)\geq 2\mu(G)-|\mathcal{M}(G)|.$$ \end{corollary} \begin{proof} $\ $If $\mathcal{M}(G)=\emptyset$, then the result is direct. Suppose that $\mathcal{M}(G)\ne\emptyset$. Since $I_3(w)=2 \operatorname{ter}(w)-1$ for all $w\in \mathcal{M}(G)$, we obtain that $$\dim_{3}(G)\geq \sum_{w\in \mathcal{M}(G)}\left(2 \operatorname{ter}(w)-1\right)=2\mu(G)-|\mathcal{M}(G)|.$$ \end{proof} In next section we give some results on trees which show that the bounds proved in Theorem \ref{theoMuk} and Corollaries \ref{coroMu2} and \ref{coroMu3} are tight. Specifically those results are Theorem \ref{theoTreeDimR} and Corollaries \ref{corotree2} and \ref{corotree3}, respectively. \section{The particular case of trees}\label{sect-dim-trees} To study the $k$-metric dimension of a tree it is of course necessary to know first the value $k$ for which a given tree is $k$-metric dimensional. That is what we do next. In this sense, from now on we need the terminology and notation already described in Section \ref{SectionBoundK-dimensional} and also the following one. Given an exterior major vertex $v$ in a tree $T$ and the set of its terminal vertices $v_1,\ldots,v_{\alpha}$, the subgraph induced by the set $\displaystyle\bigcup_{i=1}^{\alpha} V(P(v,v_i))$ is called a \emph{branch} of $T$ at $v$ (a $v$-branch for short). \begin{theorem}\label{theoTreeDimK} $\ $If $T$ is a $k$-metric dimensional tree different from a path, then $k=\varsigma(T)$. \end{theorem} \begin{proof} $\ $Since $T$ is not a path, $\mathcal{M}(T)\ne \emptyset$. Let $w\in \mathcal{M}(T)$ and let $u_{1},u_{2}$ be two terminal vertices of $w$ such that $\varsigma(T)=\varsigma(w)=\varsigma(u_{1},u_{2})$. Notice that, for instance, the two neighbours of $w$ belonging to the paths $P(w,u_{1})$ and $P(w,u_{2})$, say $u'_{1}$ and $u'_{2}$ satisfy $|{\cal D}_T(u'_{1},u'_{2})|=\varsigma(T)$. It only remains to prove that for every $x,y\in V(T)$ it holds that $|{\cal D}_T(x,y)|\ge\varsigma(T)$. Let $w\in \mathcal{M}(T)$ and let $T_{w}=(V_{w},E_{w})$ be the $w$-branch. Also we consider the set of vertices $V'=V(T)-\bigcup_{w\in \mathcal{M}(T) }V_{w}$. Note that $|V_w|\ge \varsigma(T)+1$ for every $w \in \mathcal{M}(T)$. With this fact in mind, we consider three cases. {\bf Case 1:} $x\in V_w$ and $y\in V_{w'}$ for some $w,w'\in \mathcal{M}(T)$, $w\ne w'$. In this case $x,y$ are distinguished by $w$ or by $w'$. Now, if $w$ distinguishes the pair $x,y$, then at most one element of $V_w$ does not distinguish $x,y$ (see Figure \ref{counterexample}). 
So, $x$ and $y$ are distinguished by at least $|V_w|-1$ vertices of $T$ or by at least $|V_{w'}|-1$ vertices of $T$.\\ \begin{figure} \caption{In this example, $w$ distinguishes the pair $x,y$, and $z$ is the only vertex in $V_{w}$ which does not distinguish $x,y$.} \label{counterexample} \end{figure} {\bf Case 2:} $x\in V'$ or $y\in V'$. Thus, $V'\ne \emptyset$ and, as a consequence, $\vert\mathcal{M}(T)\vert \ge 2$. Hence, we have one of the following situations. \begin{itemize} \item $\ $There exist two vertices $w,w'\in \mathcal{M}(T)$, $w\ne w'$, such that the shortest path from $x$ to $w$ and the shortest path from $y$ to $w'$ have empty intersection, or \item $\ $for every vertex $w''\in \mathcal{M}(T)$, it follows that either $y$ belongs to the shortest path from $x$ to $w''$ or $x$ belongs to the shortest path from $y$ to $w''$. \end{itemize} In the first case, $x,y$ are distinguished by vertices in $V_w$ or by vertices in $V_{w'}$ and in the second one, $x,y$ are distinguished by vertices in $V_{w''}$. {\bf Case 3:} $x,y\in V_{w}$ for some $w\in \mathcal{M}(T)$. If $x,y\in V(P(u_{l},w))$ for some $l\in \{1,\ldots,\operatorname{ter}(w)\}$, then there exists at most one vertex of $V(P(u_{l},w))$ which does not distinguish $x,y$. Since $\operatorname{ter}(w)\ge 2$, the vertex $w$ has a terminal vertex $u_{q}$ with $q\ne l$. So, $x,y$ are distinguished by at least $|V(P(u_{l},w,u_{q}))|-1$ vertices, and since $|V(P(u_{l},w,u_{q}))|\ge \varsigma(T)+1$, we are done. If $x\in V(P(u_{l},w))$ and $y\in V(P(u_{q},w))$ for some $l,q\in \{1,\ldots,\operatorname{ter}(w)\}$, $l\ne q$, then there exists at most one vertex of $V(P(u_{l},w,u_{q}))$ which does not distinguish $x,y$. Since $|V(P(u_{l},w,u_{q}))|\ge \varsigma(T)+1$, the result follows. Therefore, $\varsigma(T)=\displaystyle\min_{x,y\in V(T)}\vert {\cal D}_T(x,y)\vert$ and by Theorem \ref{theokmetric} the result follows. \end{proof} Since any path is a particular case of a tree and its behavior with respect to the $k$-metric dimension is rather different, we analyze paths first. In Proposition \ref{theoPath2} we noticed that the $2$-metric dimension of a path $P_{n}$ ($n\geqslant 2$) is two. Here we give a formula for the $k$-metric dimension of any path graph for $k\ge 3$. \begin{proposition}\label{propPath} $\ $Let $k\geq 3$ be an integer. For any path graph $P_{n}$ of order $n\geq k+1$, $$\dim_{k}(P_{n})=k+1.$$ \end{proposition} \begin{proof} $\ $Let $v_{1}$ and $v_{n}$ be the leaves of $P_{n}$ and let $S$ be a $k$-metric basis of $P_n$. Since $|S|\ge k\ge 3$, there exists at least one vertex $w\in S\cap (V(P_{n})-\{v_1,v_n\})$. For any vertex $w\in V(P_{n})-\{v_1,v_n\}$ there exist at least two vertices $u,v\in V(P_{n})$ such that $w$ does not distinguish $u$ and $v$. Hence, $|S|=\dim_{k}(P_{n})\geq k+1$. Now, notice that for any pair of different vertices $u,v\in V(P_{n})$ there exists at most one vertex $w\in V(P_{n})-\{v_{1},v_{n}\}$ such that $w$ does not distinguish $u$ and $v$. Thus, we have that for every $S\subseteq V(P_{n})$ such that $|S|=k+1$ and every pair of different vertices $x,y\in V(P_{n})$, there exist at least $k$ vertices of $S$ which distinguish $x,y$. So $S$ is a $k$-metric generator for $P_n$. Therefore, $\dim_{k}(P_{n})\le |S|=k+1$ and, consequently, the result follows. \end{proof} Having studied the path graphs, we are now able to give a formula for the $r$-metric dimension of any $k$-metric dimensional tree different from a path which, among other consequences, shows that the bound of Theorem \ref{theoMuk} is tight.
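As a computational aside (this sketch is ours and not part of the original development), the formula of Proposition \ref{propPath}, as well as the results below, can be checked by brute force on small instances. The following Python sketch assumes the NetworkX library is available and computes $\dim_k(G)$ directly from the definition of a $k$-metric generator; it is exponential in the order of the graph, so it is only meant for very small examples.
\begin{verbatim}
# Brute-force computation of dim_k(G) from the definition of a
# k-metric generator.  Illustrative sketch only.
from itertools import combinations
import networkx as nx

def distinctive_vertices(G, dist, x, y):
    """D_G(x,y): vertices z with d(z,x) != d(z,y)."""
    return {z for z in G if dist[z][x] != dist[z][y]}

def is_k_metric_generator(G, dist, S, k):
    return all(len(distinctive_vertices(G, dist, x, y) & S) >= k
               for x, y in combinations(G, 2))

def dim_k(G, k):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for r in range(k, G.number_of_nodes() + 1):   # dim_k(G) >= k
        for S in combinations(G, r):
            if is_k_metric_generator(G, dist, set(S), k):
                return r
    return None   # G is not k-metric dimensional for this k

# Sanity check of the proposition: dim_k(P_n) = k + 1 when n >= k + 1.
P7 = nx.path_graph(7)
print([dim_k(P7, k) for k in range(3, 7)])   # expected: [4, 5, 6, 7]
\end{verbatim}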
\begin{theorem}\label{theoTreeDimR} $\ $If $T$ is a tree which is not a path, then for any $r\in \{1,\ldots, \varsigma(T)\}$, $$\dim_{r}(T)=\sum_{w\in \mathcal{M}(T)}I_{r}(w).$$ \end{theorem} \begin{proof} $\ $Since $T$ is not a path, $T$ contains at least one vertex belonging to $\mathcal{M}(T)$. Let $w\in \mathcal{M}(T)$ and let $T_{w}=(V_{w},E_{w})$ be the $w$-branch. Also we consider the set $V'=V(T)-\bigcup_{w\in \mathcal{M}(T)} V_{w}$. For every $w\in \mathcal{M}(T)$, we suppose $u_{1}$ is a terminal vertex of $w$ such that $l(u_{1},w)=l(w)$. Let $U(w)=\{u_1,u_{2},\ldots,u_{s}\}$ be the set of terminal vertices of $w$. Now, for every $u_{j}\in U(w)$, let the path $P(u_{j},w)=u_{j}u^1_{j}u^2_{j}\ldots u^{l(u_{j},w)-1}_{j}w$ and we consider the set $S(u_{j},w)\subset V\left(P(u_{j},w)\right)-\{w\}$ given by: \[ S(u_{1},w)=\left\{ \begin{array}{ll} \left\{u_{1},u^1_{1},\ldots,u^{l(w)-1}_{1}\right\}, & \mbox{if } l(w)\le\lfloor\frac{r}{2}\rfloor \\ \\ \left\{u_{1},u^1_{1},\ldots,u^{\lfloor\frac{r}{2}\rfloor-1}_{1}\right\}, & \mbox{if } l(w)>\lfloor\frac{r}{2}\rfloor . \end{array} \right. \] and for $j\ne 1$, \[ S(u_{j},w)=\left\{ \begin{array}{ll} \left\{u_{j},u^1_{j},\ldots,u^{r-l(w)-1}_{j}\right\}, & \mbox{if } l(w)\le\lfloor\frac{r}{2}\rfloor ,\\ \\ \left\{u_{j},u^1_{j},\ldots,u^{\lceil\frac{r}{2}\rceil-1}_{j}\right\}, & \mbox{if } l(w)>\lfloor\frac{r}{2}\rfloor . \end{array} \right. \] According to this we have, \[ \left|S(u_{j},w)\right|=\left\{ \begin{array}{ll} l(w), & \mbox{if } l(w)\le\lfloor\frac{r}{2}\rfloor \mbox{ and } u_{j}=u_{1},\\ r-l(w), & \mbox{if } l(w)\le\lfloor\frac{r}{2}\rfloor \mbox{ and } u_{j}\ne u_{1},\\ \lfloor\frac{r}{2}\rfloor, & \mbox{if } l(w)>\lfloor\frac{r}{2}\rfloor \mbox{ and } u_{j}=u_{1},\\ \lceil\frac{r}{2}\rceil, & \mbox{if } l(w)>\lfloor\frac{r}{2}\rfloor \mbox{ and } u_{j}\ne u_{1}. \end{array} \right. \] Let $S(w)=\displaystyle\bigcup_{u_{j}\in U(w)}S(u_{j},w)$ and $S=\displaystyle\bigcup_{w\in \mathcal{M}(T)}S(w)$. Since for every $w\in \mathcal{M}(T)$ it follows that $\displaystyle\bigcap_{u_{j}\in U(w)}S(u_{j},w)=\emptyset$ and $\displaystyle\bigcap_{w\in \mathcal{M}(T)}S(w)=\emptyset$, we obtain that $|S|=\displaystyle\sum_{w\in \mathcal{M}(T)}I_{r}(w)$. Also notice that for every $w\in \mathcal{M}(T)$, such that $\operatorname{ter}(w)=2$ we have $|S(w)|= r$ and, if $\operatorname{ter}(w)>2$, then we have $|S(w)|\ge r+1$. We claim that $S$ is an $r$-metric generator for $T$. Let $u,v$ be two distinct vertices of $T$. We consider the following cases. {\bf Case 1:} $u,v\in V_{w}$ for some $w\in \mathcal{M}(T)$. We have the following subcases. {\bf Subcase 1.1:} $u,v\in V(P(u_{j},w))$ for some $j\in \{1,\ldots,\operatorname{ter}(w)\}$. Hence there exists at most one vertex of $S(w)\cap V(P(u_{j},w))$ which does not distinguish $u,v$. If $\operatorname{ter}(w)=2$, then there exists at least one more exterior major vertex $w'\in \mathcal{M}(T)-\{w\}$. So, the elements of $S(w')$ distinguish $u,v$. Since $|S(w')|\ge r$, we deduce that at least $r$ elements of $S$ distinguish $u,v$. On the other hand, if $\operatorname{ter}(w)>2$, then since $|S(w)|\ge r+1$, we obtain that at least $r$ elements of $S(w)$ distinguish $u,v$. {\bf Subcase 1.2:} $u\in V(P(u_{j},w))$ and $v\in V(P(u_{l},w))$ for some $j,l\in \{1,\ldots,\operatorname{ter}(w)\}$, $j\ne l$. According to the construction of the set $S(w)$, there exists at most one vertex of ($S(w)\cap (V(P(u_{j},w,u_{l}))$) which does not distinguish $u,v$. 
Now, if $\operatorname{ter}(w)=2$, then there exists $w'\in \mathcal{M}(T)-\{w\}$. If $d(u,w)=d(v,w)$, then the $r$ elements of $S(w)$ distinguish $u,v$ and, if $d(u,w)\ne d(v,w)$, then the elements of $S(w')$ distinguish $u, v$. On the other hand, if $\operatorname{ter}(w)>2$, then since $|S(w)|\ge r+1$, we deduce that at least $r$ elements of $S(w)$ distinguish $u,v$. {\bf Case 2:} $u\in V_{w}, v\in V_{w'}$, for some $w,w'\in \mathcal{M}(T)$ with $w\ne w'$. In this case, either the vertices in $S(w)$ or the vertices in $S(w')$ distinguish $u,v$. Since $|S(w)|\ge r$ and $|S(w')|\ge r$ we have that $u,v$ are distinguished by at least $r$ elements of $S$. {\bf Case 3:} $u\in V'$ or $v\in V'$. Without loss of generality we assume $u\in V'$. Since $V'\ne \emptyset$, we have that there exist at least two different vertices in $\mathcal{M}(T)$. Hence, we have either one of the following situations. \begin{itemize} \item $\ $There exist two vertices $w,w'\in \mathcal{M}(T)$, $w\ne w'$, such that the shortest path from $u$ to $w$ and the shortest path from $v$ to $w'$ have empty intersection, or \item $\ $for every vertex $w''\in \mathcal{M}(T)$, it follows that either $v$ belongs to every shortest path from $u$ to $w''$ or $u$ belongs to every shortest path from $v$ to $w''$. \end{itemize} Notice that in both situations, since $|S(w)|\ge r$, for every $w\in\mathcal{M}(T)$), we have that $u,v$ are distinguished by at least $r$ elements of $S$. In the first case, $u$ and $v$ are distinguished by the elements of $S(w)$ or by the elements of $S(w')$ and, in the second one, $u$ and $v$ are distinguished by the elements of $S(w'')$. Therefore, $S$ is an $r$-metric generator for $T$ and, by Theorem \ref{theoMuk}, the proof is complete. \end{proof} In the case $r=1$, the formula of Theorem \ref{theoTreeDimR} leads to $$\dim(T)=\mu(T)-|\mathcal{M}(T)|,$$ which is a result obtained in \cite{Chartrand2000}. Other interesting particular cases are the following ones for $r=2$ and $r=3$, respectively. That is, by Theorem \ref{theoTreeDimR} we have the next results. \begin{corollary}\label{corotree2} $\ $If $T$ is a tree different from a path, then $$\dim_{2}(T)=\mu(T).$$ \end{corollary} \begin{corollary}\label{corotree3} $\ $If $T$ is a tree different from a path with $\varsigma(T)\ge 3$, then $$\dim_{3}(T)=2\mu(T)-|\mathcal{M}(T)|.$$ \end{corollary} As mentioned before, the two corollaries above show that the bounds given in Corollaries \ref{coroMu2} and \ref{coroMu3} are achieved. We finish our exposition with a formula for the $k$-metric dimension of a $k$-metric dimensional tree with some specific structure, also showing that the inequality $\dim_{k}(T)\ge \vert{\cal D}_{k}(T)\vert$, given in Remark \ref{remTauk}, can be reached. \begin{proposition}\label{propTauk} $\ $Let $T$ be a tree different from a path and let $k\ge 2$ be an integer. If $\operatorname{ter}(w)=2$ and $\varsigma(w)=k$ for every $w\in \mathcal{M}(T)$, then $\dim_{k}(T)=\vert{\cal D}_{k}(T)\vert$. \end{proposition} \begin{proof} $\ $Since every vertex $w\in \mathcal{M}(T)$ satisfies that $\operatorname{ter}(w)=2$ and $\varsigma(w)=k$, we have that $\varsigma(T)=k$. Thus, by Theorem \ref{theoTreeDimK}, $T$ is $k$-metric dimensional tree. Since $I_k(w)=k$ for every $w\in \mathcal{M}(T)$, by Theorem \ref{theoTreeDimR} we have that $\dim_{k}(T)=k|\mathcal{M}(T)|$. Let $u_{r}, u_{s}$ be the terminal vertices of $w$. 
As we have shown in the proof of Theorem \ref{theoTreeDimK}, for every pair $x,y\in V(T)$ such that $x\notin V\left(P(u_{r},w,u_{s})\right)-\{w\}$ or $y\notin V\left(P(u_{r},w,u_{s})\right)-\{w\}$, it follows that $x,y$ are distinguished by at least $k+1$ vertices of $T$ and so $\vert{\cal D}_T^*(x,y)\vert > k-2$. Hence, if $\vert{\cal D}_T^*(x,y)\vert = k-2$, then $x,y\in V\left(P(u_{r},w,u_{s})\right)-\{w\}$ for some $w\in \mathcal{M}(T)$. If $d(x,w)\ne d(y,w)$, then $x,y$ are distinguished by more than $k$ vertices (those vertices not in $V\left(P(u_{r},w,u_{s})\right)-\{w\}$). Thus, if $\vert{\cal D}_T^*(x,y)\vert = k-2$, then $d(x,w)=d(y,w)$ and, as a consequence, ${\cal D}^*_T(x,y)=V\left(P(u_{r},w,u_{s})\right)-\{x,y,w\}$. Considering that $\left|V\left(P(u_{r},w,u_{s})\right)-\{w\}\right|=k$ and at the same time that $\displaystyle\bigcap_{w\in \mathcal{M}(T)}V\left(P(u_{r},w,u_{s})\right)=\emptyset$, we deduce $\vert{\cal D}_{k}(T)\vert=k|\mathcal{M}(T)|$. Therefore, $\dim_{k}(T)=\vert{\cal D}_{k}(T)\vert$. \end{proof} \begin{figure} \caption{A $3$-metric dimensional tree $T$ for which $\dim_{3}(T)=\vert{\cal D}_{3}(T)\vert$.} \label{figTree3D} \end{figure} Figure \ref{figTree3D} shows an example of a $3$-metric dimensional tree. In this case $\mathcal{M}(T)=\{w,w'\}$, $\operatorname{ter}(w)=\operatorname{ter}(w')=2$ and $\varsigma(w)=\varsigma(w')=3$. Then Proposition \ref{propTauk} leads to $\dim_{3}(T)=\vert{\cal D}_3(T)\vert=|\{u_{1},u_{2},u_{3},u_{1}',u_{2}',u_{3}'\}|=6$. \end{document}
math
60,403
\begin{document} \title{Asymptotic tail behavior of phase-type scale mixture distributions} \author{Leonardo Rojas-Nandayapa} \author{Wangyue Xie} \affil{\footnotesize School of Mathematics and Physics, The University of Queensland, Brisbane, QLD, Australia, \authorcr [email protected], [email protected]} \date{} \maketitle \begin{abstract} We consider \emph{phase-type scale mixture} distributions which correspond to distributions of a product of two independent random variables: a phase-type random variable $Y$ and a nonnegative but otherwise arbitrary random variable $S$ called the \emph{scaling random variable}. We investigate conditions for such a class of distributions to be either light- or heavy-tailed, explore subexponentiality and determine their maximum domains of attraction. Particular focus is given to phase-type scale mixture distributions where the scaling random variable $S$ has discrete support --- such a class of distributions has been recently used in risk applications to approximate heavy-tailed distributions. Our results are complemented with several examples. \keywords{ phase-type; Erlang; discrete scale mixtures; infinite mixtures; heavy-tailed; subexponential; maximum domain of attraction; products; ruin probability.} \end{abstract} \section{Introduction} In this paper, we consider the class of nonnegative distributions defined by the \emph{Mellin--Stieltjes convolution} \citep{Bingham1987} of two nonnegative distributions $G$ and $H$, given by \begin{equation}\label{PHSM} F(x)=\int_0^\infty G(x/s) \mathrm{d} H(s),\qquad x\ge0. \end{equation} A distribution of the form \eqref{PHSM} will be called a \emph{phase-type scale mixture} if $G$ is a (classical) phase-type (PH) distribution \citep[cf.][]{Latouche1999} and $H$ is a proper nonnegative distribution that we shall call the \emph{scaling distribution}. A phase-type scale mixture distribution can be seen as the distribution of a random variable $X:=S\cdot Y$ where $S\sim H$ and $Y\sim G$; accordingly, $S$ is referred to as the \emph{scaling random variable}. This terminology is also explained using conditional arguments: observe that $(X|S=s) \sim G_s$, where $G_s(x):=G(x/s)$ corresponds to the distribution of the (scaled) random variable $s\cdot Y$, which is itself a PH distribution, so the distribution $F$ can be thought of as a mixture of the scaled PH distributions in $\{G_s:s>0\}$ with respect to the scaling distribution $H$. Our motivation for studying the tail behavior of phase-type scale mixtures is their use for approximating heavy-tailed distributions in risk applications \citep{Bladt2014b}. To introduce such an approach, we shall first recall the family of (classical) phase-type (PH) distributions, which corresponds to distributions of absorption times of Markov jump processes with one absorbing state and a finite number of transient states. The PH class is particularly attractive since it is tractable and possesses many desirable properties: densities, cumulative distributions, moments and integral transforms have closed-form expressions in terms of matrix exponentials, and the class is closed under scaling, finite mixtures and finite convolutions (cf.~\cite{Assaf1982,O'Cinneide1992}). The PH class is popular for modelling purposes because it is dense in the nonnegative distributions \citep[cf.][]{Asmussen2003}, so one could in principle approximate any nonnegative distribution with an arbitrary precision.
This classical approach has been widely studied and reliable methodologies for approximating nonnegative distributions are already available \citep[cf.][]{AsmussenNermanOlsson96}. However, distributions in the PH class are light-tailed and belong to the Gumbel domain of attraction exclusively \citep{Kang1999}. Therefore, the PH class cannot correctly capture the characteristic behavior of a heavy-tailed distribution in spite of its denseness. In fact, this approach may deliver unreliable approximations for important quantities of interest, such as the ruin probability of a Cram\'er--Lundberg risk process with heavy-tailed claim size distributions \citep{VatamidouAdanVlasiouZwart2014}. As an alternative, the PH class has been extended to distributions of absorption times of Markov jump processes having a countable number of transient states \citep[this approach is attributed to][]{Neuts1981}. The latter class, which goes under the name of infinite dimensional phase-type distributions (IDPH), is known to contain heavy-tailed distributions. Nevertheless, the IDPH class is no longer mathematically tractable and it is not fully documented yet (to the best of the authors' knowledge, one of the few published references available outlining its mathematical properties is \cite{Dinghua1996}; another reference of interest is \cite{Greiner1999}, who consider infinite mixtures of exponential distributions to approximate power-tailed distributions). To address this issue, \cite{Bladt2015} propose the use of phase-type scale mixtures having discrete scaling distributions to approximate heavy-tailed distributions. Such a class forms a structured subfamily of the IDPH class that contains the PH class, so it is trivially dense in the nonnegative distributions. Two important advantages over the more general IDPH class are that the class of phase-type scale mixture distributions is mathematically tractable and that it contains a rich variety of heavy-tailed distributions. The class of phase-type scale mixture distributions has great potential in applications in engineering, finance and specifically in insurance. As an example of the latter, \cite{Bladt2015} provide renewal results that can be applied to obtain exact expressions for the ruin probability of a classical Cram\'er--Lundberg risk process having claim sizes distributed according to a phase-type scale mixture distribution with discrete scaling. This approach is further explored in \cite{Nardo2016}, where a systematic methodology for approximating arbitrary heavy-tailed distributions via phase-type scale mixtures is provided; such a formulation provides simplified formulas for approximating ruin probabilities with arbitrary claim size distributions. Furthermore, \cite{BladtRojas2017} provide statistical inference procedures based on the EM algorithm to adjust phase-type scale mixtures to heavy-tailed data/distributions. Other references of interest that apply similar ideas to risk models include \cite{HashorvaPakesTang2010} and \cite{VatamidouAdanVlasiouZwart2012}. In spite of the denseness and the mathematical tractability of the class of phase-type scale mixtures, the tail properties of the proposed class are not fully understood yet; this paper concentrates on this issue. In particular, a key aspect in the successful approximation of heavy-tailed distributions via phase-type scale mixtures is the appropriate selection of the scaling distribution.
This paper focuses on the theoretical foundations justifying the selections made in some of the applications mentioned above, as well as on providing general guidelines for selecting appropriate scaling distributions. We collect and adapt some known results which are available in different contexts, and we prove new results that will allow us to provide a characterization of the tail behavior of phase-type scale mixtures, as well as a classification of their maximum domains of attraction. We expect our results to be useful for modelling purposes by providing a better understanding of the advantages and limitations of such an approach, as well as providing criteria for selecting appropriate scaling distributions for approximating general heavy-tailed distributions. Our results are summarized below. Firstly, we concentrate on classifying light- and heavy-tailed distributions. A phase-type scale mixture is heavy-tailed if and only if its scaling distribution has unbounded support. An interesting heuristic interpretation of this result is as follows: the product of a PH random variable and a random variable $S$ is heavy-tailed iff $S$ has unbounded support. We provide a simple proof of this fact but we remark that a proof (unknown to us until recently) was already provided in a different context \citep[cf.][]{SuChen2006,Tang2008b}. Secondly, we focus on the maximum domains of attraction and subexponential properties of the class of phase-type scale mixtures. A classical result for the Fr\'echet case is Breiman's lemma \citep{Breiman1965}, which implies that a phase-type scale mixture with a regularly varying scaling distribution remains regularly varying with the same index (hence subexponential). An analogous closure property exists for the class of Weibullian distributions \citep{ArendarczykDebicki2011}. In addition, we investigate analogous results for scaling distributions in the Gumbel domain of attraction. We show that if a certain higher order derivative of the Laplace--Stieltjes transform of the reciprocal of the scaling random variable $\mathcal{L}_{1/S}(\theta)$ is a von Mises function, then $F\in\mathrm{MDA}(\Lambda)$; in addition, we provide a verifiable condition for subexponentiality. We then specialize to phase-type scale mixture distributions whose scaling distributions have discrete support. Such a class of distributions is of critical importance in applications due to its mathematical tractability, as these correspond to distributions of the absorption time of a Markov jump process having an infinite number of transient states. We outline a simple methodology which allows us to determine their asymptotic behavior by constructing a phase-type scale mixture distribution with continuous scaling and having an asymptotically proportional tail probability. This methodology can be \emph{reverse-engineered} so we can construct discrete scaling distributions for approximating the tail probability of an arbitrary target distribution. The rest of the paper is organized as follows. In Section \ref{mysec2}, we set up notation and summarize some of the standard facts on heavy-tailed, phase-type and related distributions. Then we introduce the class of phase-type scale mixtures and examine some of its asymptotic properties. Our main results are presented in Sections \ref{mysec3} and \ref{mysec4}. Section \ref{mysec3} is devoted to the general case, while Section \ref{mysec4} specializes to discrete scaling distributions. In Section \ref{mysec5}, we present our conclusions.
\section{Preliminaries} \label{mysec2} In this section we provide a summary of some of the concepts needed for this paper. Most results in this section are standard. A reader familiar with phase-type distributions and extreme value theory can safely skip to subsection \ref{PHM}. First we consider the class of \emph{phase-type} (PH) distributions. When a distinction is needed, we will refer to this class of distributions as \emph{classical}, in order to make a clear distinction from the class of phase-type scale mixture distributions. A classical phase-type distribution corresponds to the distribution of the absorption time of a Markov jump process $\{X_t\}_{t\geq0}$ with a finite transient state space $E=\{1,2,\cdots,p\}$ and one absorbing state $0$ \citep[cf.][]{Latouche1999,Asmussen2003}. Phase-type distributions are characterized by a $p$-dimensional row vector $\boldsymbol{\beta}= (\beta_1,\cdots,\beta_p)$ (corresponding to the probabilities of starting the Markov jump process in each of the transient states), and an intensity matrix \begin{equation*} \mathbf{Q}=\left(\begin{array}{cc}0 &\mathbf{0} \\ \boldsymbol{\lambda} & \mathbf{\Lambda} \end{array}\right), \end{equation*} where $\bm{\Lambda}$ is a $p\times p$ sub-intensity matrix. Since rows in an intensity matrix must sum to $0$, we also have $\bm{\lambda}=-\bm{\Lambda e}$, where $\bm{e}$ is the $p$-dimensional column vector of $1$s. Phase-type distributions are denoted $\mbox{PH}(\boldsymbol{\beta},\boldsymbol{\Lambda})$, and their cumulative distribution functions are given by \begin{equation*} G(x)=1-\boldsymbol{\beta}\mathrm{e}^{\mathbf{\Lambda}x}\mathbf{e},\qquad \forall x>0. \end{equation*} In this paper, we are particularly interested in distributions of scaled phase-type random variables $s\cdot Y$ where $Y\sim\mathrm{PH}(\boldsymbol{\beta},\mathbf{\Lambda})$ and $s>0$. From the expression above, it follows easily that $s\cdot Y\sim\mathrm{PH}(\boldsymbol{\beta},\mathbf{\Lambda}/s)$, so the class of phase-type distributions is closed under scaling transformations. The following is a well known result describing the tail behavior of phase-type distributions \citep[cf.][]{Asmussen2003}: \begin{Proposition} \label{myprop2.1} Let $G_s\sim\mathrm{PH}(\boldsymbol{\beta},\mathbf{\Lambda}/s)$. The tail probability of $G_s$ can be written as \begin{equation*} \overline G\,_s(x)=\sum_{j=1}^{m}\sum_{k=0}^{\eta_j-1}\left(\frac{x}{s}\right)^k\mathrm{e}^{\Re(-\lambda_j) x/s} \bigg[c_{jk}^{(1)}\sin(\Im(-\lambda_j)x/s)+c_{jk}^{(2)}\cos(\Im(-\lambda_j) x/s)\bigg]. \end{equation*} Here $m$ is the number of Jordan blocks of the matrix $\mathbf{\Lambda}$, $\{-\lambda_j:j=1,\dots,m\}$ are the corresponding eigenvalues and $\{\eta_{j}:j=1,\dots,m\}$ the dimensions of the Jordan blocks. The values $c_{jk}^{(1)}$, $c_{jk}^{(2)}$ are constants depending on the initial distribution $\boldsymbol{\beta}$, the dimension of the $j$-th Jordan block $\eta_j$ and the generalized eigenvectors of $\mathbf{\Lambda}$. \end{Proposition} All eigenvalues of a sub-intensity matrix $\mathbf{\Lambda}$ have negative real parts and the one with the largest real part is always real. Therefore, the asymptotic behavior of a scaled phase-type distribution is determined by the largest eigenvalue and the largest dimension among the Jordan blocks associated to the largest eigenvalue \citep[see also][]{Asmussen2003,Alexandru2006}.
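As a numerical aside (this illustration is ours and not part of the cited references), the tail of a scaled phase-type distribution can be evaluated directly as $\overline G\,_s(x)=\boldsymbol{\beta}\mathrm{e}^{\mathbf{\Lambda}x/s}\mathbf{e}$. The following Python sketch assumes the NumPy and SciPy libraries and uses an Erlang(2) example with rate $2$; the parameter choices are arbitrary.
\begin{verbatim}
# Tail of a scaled PH distribution: bar{G}_s(x) = beta expm(Lambda x/s) e.
import numpy as np
from scipy.linalg import expm

beta = np.array([1.0, 0.0])              # start in the first phase
Lam  = np.array([[-2.0,  2.0],
                 [ 0.0, -2.0]])          # Erlang(2) with rate 2

def ph_tail(x, s=1.0):
    """Tail probability of s*Y, where Y ~ PH(beta, Lam)."""
    return float(beta @ expm(Lam * x / s) @ np.ones(2))

# Scaling closure: s*Y ~ PH(beta, Lam/s), i.e. the tail at x equals the
# unscaled tail at x/s.
print(ph_tail(3.0, s=2.0), ph_tail(1.5))          # identical values
# Exponential decay driven by the largest eigenvalue (-2):
print([np.log(ph_tail(x)) for x in (2.0, 4.0, 8.0)])
\end{verbatim}
The last line exhibits the roughly linear decay of the log-tail, with slope approaching $2$ in absolute value, in agreement with the discussion above.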
It is also well known that if the sub-intensity matrix $\mathbf\Lambda$ is irreducible, then the tail probabilities of phase-type distributions decay exponentially \citep[cf.\, Proposition IX.1.8][]{AsmussenAlbrecher2011}. The irreducibility assumption on the matrix $\mathbf{\Lambda}$ can be further relaxed if all eigenvalues are real. Also notice that if all the eigenvalues of $\mathbf{\Lambda}$ are real ($\Im(-\lambda_j)=0$), then \begin{equation} \label{PH tail} \overline G\,_s(x)=\sum_{j=1}^{m}\sum_{k=0}^{\eta_j-1}c_{jk}\left(\frac{x}{s}\right)^k\mathrm{e}^{-\lambda_j x/s}. \end{equation} Next, we introduce the class of heavy-tailed distributions that will be used in this paper (various other definitions of heavy-tailed distributions are available in the literature) and discuss several important subfamilies of heavy-tailed distributions. We also provide a brief summary of results connecting extreme value theory with heavy-tailed distributions and subexponentiality. We say that a nonnegative distribution $H$ is \emph{heavy-tailed} if \begin{equation*} \limsup\limits_{s\to\infty}\overline H(s)\mathrm{e}^{\theta s}=\infty,\quad\forall\theta>0, \end{equation*} where $\overline H(s)=1-H(s)$ is the \emph{tail probability} of the distribution $H$. Otherwise, we say that $H$ is a \emph{light-tailed} distribution. The definition of light/heavy-tailed distributions is often considered too general for most practical purposes and it is more common to work instead with certain families of distributions. For instance, the so-called \emph{Embrechts--Goldie} class of distributions \citep{EmbrechtsGoldie1980}, denoted $\mathcal{L}(\lambda)$, consists of nonnegative distributions $H$ having the property \begin{equation*} \lim_{s\to\infty}\dfrac{\overline H(s-t)}{\overline H(s)}=\mathrm{e}^{\lambda t}, \quad \lambda\geq 0,\ \forall t. \end{equation*} Distributions in the class $\mathcal{L}(0)$ are heavy-tailed and these are known as long-tailed distributions. In contrast, if $\lambda>0$ then a distribution in the class $\mathcal{L}(\lambda)$ is light-tailed. From Proposition \ref{myprop2.1}, it is clear that a PH distribution is in $\mathcal{L}(\lambda)$ where $-\lambda$ is the largest eigenvalue of the sub-intensity matrix $\boldsymbol{\Lambda}$. An important subclass of heavy-tailed distributions is that of subexponential distributions \cite[cf.][]{FossKorshunovZachary2011}. Such a class of distributions contains practically all the heavy-tailed distributions commonly used. We say that $H$ belongs to the class of subexponential distributions, denoted $H\in\mathcal{S}$, if \begin{equation*} \limsup_{s\to\infty}\frac{\overline H^{*n}(s)}{\overline H(s)}=n, \qquad n\geq 2, \end{equation*} where $\overline H^{*n}$ is the tail probability of the $n$-fold convolution of $H$. Another important subclass of subexponential distributions that is widely applied in actuarial sciences is the class of regularly varying distributions. A distribution $H$ is regularly varying with index $\alpha>0$ if \begin{equation} \lim_{s\to\infty}\frac{\overline H(st)}{\overline H(s)}=t^{-\alpha},\quad t>0, \end{equation} and it is denoted $H\in\mathcal{R}_{-\alpha}$. Otherwise, if the limit above is $0$ for all $t>1$, then we say that $H$ is a distribution of \emph{rapid variation} and it is denoted $H\in\mathcal{R}_{-\infty}$ \citep[cf.][]{Bingham1987}.
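For concreteness (this brief illustration is ours and not taken from the references above), consider the Pareto tail $\overline H(s)=s^{-\alpha}$, $s\geq 1$, and the standard exponential tail $\overline H(s)=\mathrm{e}^{-s}$. In the first case
\begin{equation*}
\lim_{s\to\infty}\frac{\overline H(st)}{\overline H(s)}=\lim_{s\to\infty}\frac{(st)^{-\alpha}}{s^{-\alpha}}=t^{-\alpha},\qquad t>0,
\end{equation*}
so the Pareto distribution belongs to $\mathcal{R}_{-\alpha}$, whereas in the second case
\begin{equation*}
\lim_{s\to\infty}\frac{\overline H(st)}{\overline H(s)}=\lim_{s\to\infty}\mathrm{e}^{-s(t-1)}=0,\qquad t>1,
\end{equation*}
so the exponential distribution is of rapid variation.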
\subsection{Phase-type scale mixtures}\label{PHM} Next we introduce the class of phase-type scale mixture distributions which is central to this paper. We say a distribution $F(x)$ is a \emph{phase-type scale mixture} with scaling distribution $H$ and phase-type distribution $G\sim \mathrm{PH}(\boldsymbol{\beta},\mathbf{\Lambda})$, if the distribution $F$ can be written as the Mellin--Stieltjes convolution of $H$ and $G$ (see equation \eqref{PHSM} for a definition). For this definition to be valid, it is implicit that $H$ must be nonnegative without an atom at $0$. In particular, when the scaling distribution $H$ is discrete and supported over a countable set of nonnegative numbers $\{s_i:i\in{\mathbb N}\}$, then the Mellin--Stieltjes convolution in \eqref{PHSM} reduces to the following infinite series: \begin{equation*} F(x)=\sum_{i=1}^{\infty}p(i) G(x/s_i), \end{equation*} where $p(i):=H(s_i)-H(s_{i-1})$ is the probability mass function of $H$ with $s_0=0$. It is not difficult to see that a phase-type scale mixture distribution is absolutely continuous and its density function can be written as \begin{align*} f(x)=\int_0^\infty \frac{g(x/s)}s\mathrm{d} H(s), \end{align*} where $g$ is the density of the phase-type distribution. The tail probability of a phase-type scale mixture $\overline F\,:=1-F$ can also be written as a Mellin--Stieltjes convolution of $H$ and $\overline G$: \begin{align*} \overline F\,(x) =1-\int_{0}^{\infty}G(x/s)\mathrm{d} H(s) =\int_0^{\infty}(1-G(x/s))\mathrm{d} H(s) =\int_0^{\infty}\overline G\,(x/s)\mathrm{d} H(s). \end{align*} Therefore, using Proposition \ref{myprop2.1} it is straightforward to see that there exist constants $c_{jk}^{\prime}$ and $c_{k}^{\prime}$, such that \begin{align*} \overline F\,(x)&\le \sum_{j=1}^{m}\sum_{k=0}^{\eta_j-1}c_{jk}^{\prime} \int_0^{\infty}\left(\frac{x}{s}\right)^{k}\mathrm{e}^{\Re(-\lambda_j) x/s}\mathrm{d} H(s)\le \sum_{k=0}^{\eta-1}c_{k}^{\prime}\int_0^{\infty}\left(\frac{x}{s}\right)^{k}\mathrm{e}^{-\lambda x/s}\mathrm{d} H(s). \label{FullTerms} \end{align*} Hence, only the largest (real) eigenvalue $-\lambda$, together with the dimension $\eta$ of its largest associated Jordan block, determines the asymptotic behavior of a phase-type scale mixture distribution. In this paper, we are particularly interested in providing sufficient conditions for a phase-type scale mixture to be subexponential. However, the task of determining whether a given heavy-tailed distribution is subexponential or not can be very challenging. We will resort to extreme value theory to address this issue, since there exists a variety of results relating the subexponential property with maximum domains of attraction. The Weibull domain of attraction is composed of distributions with support bounded above, so a phase-type scale mixture cannot belong to such a domain. The Fr\'echet domain of attraction is characterized by \emph{regular variation} \citep{Haan1970}: \begin{equation*} \overline H\in\mathcal{R}_{-\alpha}\quad\Longleftrightarrow\quad H\in\text{MDA}(\Phi_\alpha). \end{equation*} This characterisation is relevant to us because regularly varying distributions are subexponential. The Gumbel domain of attraction is more involved. It contains both light- and heavy-tailed distributions. A number of results exist for determining the Gumbel domain of attraction and subexponentiality of a certain distribution. We have listed these in the Appendix since they will be used later.
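Returning to the case of discrete scaling above, the series representation of $\overline F$ is straightforward to evaluate numerically by truncation. The following Python sketch is ours and purely illustrative: it assumes NumPy, takes $G$ to be a unit-rate exponential distribution (a one-phase PH distribution) and uses a zeta-like scaling law $p(i)\propto i^{-(\alpha+1)}$ chosen only for illustration. The output exhibits the power-like tail that the mixture inherits from a regularly varying scaling distribution, in line with Breiman's lemma discussed in Section \ref{mysec3}.
\begin{verbatim}
# Tail of a phase-type scale mixture with discrete scaling,
#   bar{F}(x) = sum_i p(i) * bar{G}(x / s_i),
# truncated at N terms.  Here bar{G}(y) = exp(-y).
import numpy as np

alpha, N = 2.0, 10_000
s = np.arange(1, N + 1)
p = s ** (-(alpha + 1.0))
p /= p.sum()                     # probability mass function of S

def F_bar(x):
    """Truncated series for the mixture tail probability."""
    return float(np.sum(p * np.exp(-x / s)))

for x in (10.0, 20.0, 40.0):
    print(x, F_bar(x), F_bar(x) * x ** alpha)   # last column ~ constant
\end{verbatim}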
\section{Tail behavior of scaled random variables} \label{mysec3} This section is devoted to characterising the tail properties of the class of phase-type scale mixture distributions. Firstly, we collect some relevant results about the asymptotic tail behavior of products of random variables, which provide sufficient conditions on the scaling random variable $S$ for its associated phase-type scale mixture distribution to be either light- or heavy-tailed. In addition, we extend this result to provide a criterion for more general distributions; we also provide a simplified proof (Theorem \ref{Products}). Secondly, in Subsection \ref{MDA} we focus on determining the maximum domain of attraction of a phase-type scale mixture distribution according to its scaling distribution. In the Fr\'echet case, Breiman's lemma implies that a phase-type scale mixture distribution remains in the Fr\'echet domain of attraction (hence regularly varying) if the scaling distribution is in the same domain. The converse of Breiman's lemma does not hold true in general, and finding sufficient conditions and counterexamples is considered challenging \citep[cf.][]{MikoschJacobsenRosinski2009,DenisovZwart2005,MikoschJessen2006,Damek2014}. For the Gumbel case, we provide conditions on the Laplace transform of the reciprocal of the scaling random variable $1/S$ so that the associated phase-type scale mixture distribution belongs to the Gumbel domain of attraction, as well as to further determine if it is subexponential. We illustrate with examples that such conditions are verifiable in some important cases. In addition, we also analyse the important class of Weibullian distributions (for a definition see Remark \ref{Weibullian} below) which possesses a closure property under multiplication \citep{ArendarczykDebicki2011}. The result in that paper allows us to determine the exact tail behavior of a phase-type scale mixture having a Weibullian scaling distribution. \subsection{Asymptotic tail behavior} The tail behavior of the distribution of a product of nonnegative random variables has attracted a considerable amount of research interest. For instance, \cite{SuChen2006} show that if two random variables $S_1$ and $S_2$ are such that the distribution of $S_1$ is in $\mathcal{L}(\lambda)$ with $\lambda>0$ and $S_2$ has unbounded support, then the distribution of $S_1\cdot S_2$ is in $\mathcal{L}(0)$ (long-tailed), and thus heavy-tailed \citep[see also][]{Tang2008b}. If one further assumes that $S_2$ is Weibullian with parameter $0<p\le 1$, then \cite{LiuTang2010} show that the product $S_1\cdot S_2$ is subexponential. A result which extends beyond the class $\mathcal{L}(\lambda)$ is in \cite{ArendarczykDebicki2011}, where it is shown that the product of two Weibullian random variables with parameters $p_1$ and $p_2$ is Weibullian with parameter $p_1p_2/(p_1+p_2)$, thus proving that the product of Weibullians can be either light- or heavy-tailed. These results imply that a phase-type scale mixture distribution is heavy-tailed if and only if the scaling distribution has unbounded support. This conclusion can also be obtained from our Theorem \ref{Products} below, where we provide sufficient conditions under which a product of two general random variables can be classified either as light- or heavy-tailed. The simplified proof provided here is elementary. \begin{theorem}\label{Products} Let $S_1$ and $S_2$ be two nonnegative independent random variables with unbounded support, where $S_1\sim H_1$ and $S_2\sim H_2$.
Let $H$ be the distribution of the product $S_1\cdot S_2$. \begin{enumerate} \item If there exist $\theta>0$ and a nonnegative function $\xi(x)$ such that \begin{equation}\label{hypo1} \limsup_{x\to\infty}\mathrm{e}^{\theta x}\Big(\overline H_1(x/\xi(x))+\overline H_2(\xi(x))\Big)=0, \end{equation} then $H$ is a light-tailed distribution. \item If there exists a nonnegative function $\xi(x)$ such that for all $\theta>0$ it holds that \begin{equation}\label{hypo2} \limsup_{x\to\infty}\ \mathrm{e}^{\theta x}\,\overline H_1(x/\xi(x))\cdot\overline H_2(\xi(x))=\infty, \end{equation} then $H$ is a heavy-tailed distribution. \end{enumerate} \end{theorem} \begin{proof} For the first part consider \begin{align*} \limsup_{x\to\infty}\overline H(x)\mathrm{e}^{\theta x} &=\limsup_{x\to\infty}\mathrm{e}^{\theta x}\int_{0}^{\infty} \overline H_1(x/s)\mathrm{d} H_2(s)\\ &=\limsup_{x\to\infty}\bigg[\mathrm{e}^{\theta x}\int_{0}^{\xi(x)} \overline H_1(x/s)\mathrm{d} H_2(s)+ \mathrm{e}^{\theta x}\int_{\xi(x)}^\infty \overline H_1(x/s)\mathrm{d} H_2(s)\bigg]\\ &\le\limsup_{x\to\infty}\left[\mathrm{e}^{\theta x}\overline H_1(x/\xi(x))+ \mathrm{e}^{\theta x} \overline H_2(\xi(x))\right]=0. \end{align*} The last equality holds by the hypothesis \eqref{hypo1}. Hence $H$ is light-tailed. For the second part consider \begin{align*} \limsup_{x\to\infty}\overline H(x)\mathrm{e}^{\theta x} &=\limsup_{x\to\infty}\bigg[\mathrm{e}^{\theta x}\int_{0}^{\xi(x)} \overline H_1(x/s)\mathrm{d} H_2(s)+ \mathrm{e}^{\theta x}\int_{\xi(x)}^\infty \overline H_1(x/s)\mathrm{d} H_2(s)\bigg]\\ &\ge \limsup_{x\to\infty}\left[\mathrm{e}^{\theta x}\overline H_1(x/\xi(x))\overline H_2(\xi(x))\right]=\infty. \end{align*} The last equality holds by hypothesis \eqref{hypo2}. Hence $H$ is heavy-tailed. \end{proof} The conditions in Theorem \ref{Products} can be easily verified and enable us to provide a classification of the asymptotic tail behavior of products of random variables with more general distributions. Notice that the distributions considered in \cite{SuChen2006} correspond to distributions with log-tail probabilities decaying at a linear rate, i.e. $-\log\overline{H_1}(s)=O(s)$, while the distributions in \cite{ArendarczykDebicki2011} have log-tail probabilities decaying at a power rate, i.e. $-\log\overline{H_i}(s)=O(s^{p_i})$, $i=1,2$. The following example considers distributions with log-tail probabilities decaying at an exponential rate, i.e.\ $-\log\overline{H_i}(s)=O(\mathrm{e}^{s})$. \begin{Example}[Gumbellian products] Let $H_i(x)=1-\exp\{-\mathrm{e}^{x}+1\}$, $x>0$. We choose $\xi(x)=x^{\gamma}$, with $0<\gamma<1$. Then \begin{equation*} \lim_{x\to\infty}\overline H(x)\mathrm{e}^{\theta x}=\lim_{x\to\infty}\mathrm{e}^{\theta x+1}\left(\exp\{-\mathrm{e}^{x^{1-\gamma}}\}+\exp\{-\mathrm{e}^{x^{\gamma}}\}\right)=0,\quad \forall \theta>0. \end{equation*} Hence the product of two random variables with Gumbellian-type distributions is always light-tailed. The same holds true if we replace $H_2$ with a Weibullian distribution with shape parameter $p>1$. Choose $\xi(x)=x^{\gamma}$, with $\frac1p\leq\gamma<1$, and observe that \begin{equation*} \lim_{x\to\infty}\overline H(x)\mathrm{e}^{\theta x}=\lim_{x\to\infty}\mathrm{e}^{\theta x}\left(\exp\{-\mathrm{e}^{x^{1-\gamma}}+1\}+x^{\delta}\mathrm{e}^{-x^{\gamma p}}\right)=0,\quad \text{for}\ \theta\in(0,1).
\mathrm{e}nd{equation*} \mathrm{e}nd{Example} \subsection{Maximum domains of attraction and subexponentiality} \label{MDA} The scenario in the Fr\'echet domain of attraction is well understood. Breiman's lemma \citep{Breiman1965} implies that a phase-type scale mixture distribution is in the Fr\'echet domain of attraction if its scaling distribution is in the same domain: \begin{Lemma}[\cite{Breiman1965}] \label{Breiman} If $H\in\mathcal{R}_{-\alpha}$ and $M_G(\alpha+\mathrm{e}psilon)<\infty$ for some $\mathrm{e}psilon>0$, then $F\in\mathcal{R}_{-\alpha}$ and \begin{equation} \overline F\,(x) = M_G(\alpha)\overline H(x) (1+{\mathit{o}}(1)),\qquad x\to\infty, \mathrm{e}nd{equation} where $M_G(\alpha)$ is the $\alpha$-moment of $G$. \mathrm{e}nd{Lemma} Phase-type distributions are light-tailed so all their moments are finite. Therefore, a phase-type scale mixture distribution with a scaling distribution in the Fr\'echet domain of attraction remains in the same domain. Furthermore, the norming constants for a phase-type scale mixture distribution $F$ can be chosen as the norming constants of $H$ divided by the $\alpha$-moment of the phase-type distribution $G$, that is \begin{equation*} d_n=0, \quad c_n=\dfrac{1}{M_G(\alpha)}\left(\dfrac{1}{\overline H}\right)^{\leftarrow}(n). \mathrm{e}nd{equation*} Moreover, when the conditions of Breiman's lemma are satisfied, then the scaling and the phase-type scale mixture distributions are regularly varying with the same index of regular variation, thus implying that the tail probabilities of both distributions are asymptotically proportional (with the reciprocal of the $\alpha$-moment of the phase-type distribution being the proportionality constant). This implies that the class of phase-type scale mixture distributions can provide exact asymptotic approximations of the tail probabilities of regularly varying distributions. It is interesting to note that the converse of Breiman's lemma does not hold true in general. Such a problem is considered to be challenging and has attracted considerable research interest, thus resulting in a rich variety of results proving sufficient conditions and counterexamples; for instance, \cite{MikoschJessen2006} provide a comprehensive list of earlier references; the most general results are given in \cite{MikoschJacobsenRosinski2009} and \cite{DenisovZwart2005} (see also \cite{Damek2014} for a multivariate version). It is not difficult to verify that some subclasses (for instance, exponential, Erlang and hyperexponential) of PH distributions satisfy the sufficient conditions for the converse of Breiman's lemma provided in \cite{MikoschJacobsenRosinski2009}. We also conjecture that in general PH distributions satisfy the above conditions but a proof remains unknown to us. The situation is less understood in the Gumbel domain of attraction. We start by noting that in the Gumbel case, a phase-type scale mixture $F$ and its scaling distribution $H$ will have very different tail behaviors (this is contrast to the Fr\'echet case where Breiman's lemma implies that these have asymptotically proportional tail behavior). In particular, the tail probability of a scaling distribution in the Gumbel domain of attraction is tail equivalent to a von Mises functions, hence rapidly varying. 
In such a case the tail distribution of the phase-type scale mixture will be much heavier than its scaling distribution: \begin{Proposition} If $H\in\mathcal{R}_{-\infty}$, then \begin{equation}\label{tails differ} \limsup_{x\to\infty}\frac{\overline H(x)}{\overline F\,(x)}=0. \mathrm{e}nd{equation} \mathrm{e}nd{Proposition} \begin{proof} To show this we take $t>1$ and observe that there exists a constant $C$ such that \begin{align*} \overline F\,(x)=\mathbb{P}[SY>x]\geq \mathbb{P}[SY>x,Y \geq t]\geq&\mathbb{P}[S>x/t]\mathbb{P}[Y\geq t]=\overline H(x/t)C, \mathrm{e}nd{align*} Then \begin{equation*} \limsup_{x\to\infty}\dfrac{\overline H(x)}{\overline F\,(x)}\leq\dfrac{1}{C}\limsup_{x\to\infty}\dfrac{\overline H(x)}{\overline H(x/t)}=0,\quad t>1. \mathrm{e}nd{equation*} \mathrm{e}nd{proof} The lognormal and Weibullian distributions are rapidly varying. \begin{Remark} This result fleshes out a limitation of the aforementioned approach for approximating distributions in the Gumbel domain of attraction. The tail probability of a phase-type scale mixture distribution will be much heavier than its target distribution, if the scaling distribution is chosen within the same family of target distributions and with similar parameters. We show later that in some cases we are able to construct phase-type scale mixture distributions with the same asymptotic behavior as their target distributions if we vary the value of parameters. Such is the case of Weibullian distributions. \mathrm{e}nd{Remark} Next we look for sufficient conditions of the scaling distribution so its corresponding phase-type scale mixture will belong to the Gumbel domain of attraction and be subexponential. We restrict our focus to phase-type distributions with sub-intensity matrices having only real eigenvalues. \begin{theorem} \label{Lemma.2} Let $V(x)=(-1)^{\mathrm{e}ta-1}\mathcal{L}_{1/S}^{(\mathrm{e}ta-1)}(x)$ where $\mathrm{e}ta$ is the largest dimension among the Jordan blocks associated to the largest eigenvalue of the sub-intensity matrix. If $V(\cdot)$ is a von Mises function, then $F\in\mathrm{MDA}(\Lambda)$. Moreover, $F$ is subexponential if \begin{equation*} \liminf_{x\to\infty}\frac{V(tx)V^{\prime}(x)}{V^{\prime}(tx)V(x)}>1,\qquad \forall t>1. \mathrm{e}nd{equation*} \mathrm{e}nd{theorem} \begin{proof} We can write that \begin{equation*} \overline F\,(x)= \sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1} \int_0^{\infty}c_{jk}\left(\frac{x}{s}\right)^k\mathrm{e}^{-\lambda_j x/s}\mathrm{d} H(s) =\sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1} c_{jk}\, \frac{(-1)^kx^k}{\lambda_j^k}\, \mathcal{L}_{1/S}^{(k)}(\lambda_j x). \mathrm{e}nd{equation*} Since $V(x)=(-1)^{\mathrm{e}ta-1}\mathcal{L}_{1/S}^{(\mathrm{e}ta-1)}(x)$ is a von Mises function, then $V(x)$ is of rapid variation \citep{Bingham1987}. This implies that \begin{equation}\label{FtailGumbel} \overline F\,(x)\sim c\frac{x^{\mathrm{e}ta-1}}{\lambda^{\mathrm{e}ta-1}}V(\lambda x), \mathrm{e}nd{equation} where $c$ is some constant, $-\lambda$ is the largest eigenvalue of the sub-intensity matrix and $\mathrm{e}ta$ is the largest dimension among the Jordan blocks associated to $-\lambda$. Then it is not difficult to see that \begin{equation*} \lim_{x\to\infty}\frac{\overline F\,(x)F^{\prime\prime}(x)}{(F^\prime(x))^2}= \lim_{x\to\infty}\frac{V(\lambda x)(-V^{\prime\prime}(\lambda x))} {\big(-V^{\prime}(\lambda x)\big)^2}=-1. \mathrm{e}nd{equation*} This holds true because by hypothesis $V(x)=(-1)^{\mathrm{e}ta-1}\mathcal{L}_{1/S}^{(\mathrm{e}ta-1)}(x)$ is a von Mises function. 
Hence $F\in\mathrm{MDA}(\Lambda)$ and the first part result follows. For the second part, we observe that the auxiliary function $a(x)=\overline F\,(x)/F^{\prime}(x)$ obeys the following asymptotic equivalence \begin{equation*} a(x)=\frac{\overline F\,(x)}{F^{\prime}(x)}\sim\frac{V(\lambda x)}{-\lambda V^{\prime}(\lambda x)}. \mathrm{e}nd{equation*} The distribution $F$ is subexponential if \begin{equation*} \liminf_{x\to\infty}\frac{a(tx)}{a(x)}= \liminf_{x\to\infty}\frac{V(\lambda tx)V^{\prime}(\lambda x)}{V^{\prime}(\lambda tx)V(\lambda x)}>1,\qquad \forall t>1, \mathrm{e}nd{equation*} hence subexponentiality of $F$ follows. \mathrm{e}nd{proof} Theorem \ref{Lemma.2} can be applied to the lognormal case: \begin{Example}[Lognormal scaling] \label{myex lognormal} Assume $H\sim\mathrm{LN}(\mu,\sigma^2$), then $F$ is a subexponential distribution in the Gumbel domain of attraction. \mathrm{e}nd{Example} \begin{proof} W.l.o.g.\ we can assume $\mu=0$ since $\mathrm{e}^\mu$ is a scaling constant. In such a case the symmetry of the normal distribution implies that the Laplace--Stieltjes transform of $1/S$ is the same as that of $S$, i.e. \begin{equation*} \mathcal{L}_{1/S}(x)=\mathcal{L}_{S}(x). \mathrm{e}nd{equation*} An asymptotic approximation of the $k$-th derivative of the Laplace--Stieltjes transform of the lognormal distribution is given in \cite{Asmussen2014}: \begin{equation*} \mathcal{L}_{S}^{(k)}(x)=(-1)^k\mathcal{L}_S(x)\mathrm{e}xp\{-k\omega_0(x)+\dfrac{1}{2}\sigma_0(x)^2k^2\}(1+{\mathit{o}}(1)), \mathrm{e}nd{equation*} where \begin{equation*} \omega_k(x)=\mathcal{W}(x\sigma^2\mathrm{e}^{k\sigma^2}),\quad \sigma_k(x)^2=\dfrac{\sigma^2}{1+\omega_k(x)}, \mathrm{e}nd{equation*} and $\mathcal{W}(\cdot)$ is the Lambert W function. Hence we verify that \begin{align*} \lim_{x\to\infty}\dfrac{V(x)(-V^{\prime\prime}(x))}{(-V^{\prime}(x))^2} &=\lim_{x\to\infty}\dfrac{\mathrm{e}^{-(\mathrm{e}ta-1)\omega_0(x)+\frac{1}{2}\sigma_0(x)^2(\mathrm{e}ta-1)^2}\cdot\left(-\mathrm{e}^{-(\mathrm{e}ta+1)\omega_0(x)+\frac{1}{2}\sigma_0(x)^2(\mathrm{e}ta+1)^2}\right)}{\mathrm{e}^{-2\mathrm{e}ta\omega_0(x)+\sigma_0(x)^2\mathrm{e}ta^2}}\\ &=-\lim_{x\to\infty}\mathrm{e}xp\{\sigma_0(x)^2\}=-\lim_{x\to\infty}\mathrm{e}xp\left\{\dfrac{\sigma^2}{1+\omega_0(x)}\right\}. \mathrm{e}nd{align*} As $\omega_k(x)$ is asymptotically of order $\log(x)$ as $x\to\infty$, then ${\sigma^2}{(1+\omega_0(x))^{-1}}\to 0$ as $x\to\infty$. Then the last limit is equal to $-1$, so we have shown that $F(x)\in\mathrm{MDA}(\Lambda)$. Furthermore, \begin{align*} \lim_{x\to\infty}\dfrac{a(tx)}{a(x)}&=\lim_{x\to\infty}\dfrac{(-1)^{\mathrm{e}ta-1}\mathcal{L}^{(\mathrm{e}ta-1)}_{1/S}(tx)\cdot(-1)^{\mathrm{e}ta-1}\mathcal{L}^{(\mathrm{e}ta)}_{1/S}(x)}{(-1)^{\mathrm{e}ta-1}\mathcal{L}^{(\mathrm{e}ta)}_{1/S}(tx)\cdot(-1)^{\mathrm{e}ta-1}\mathcal{L}^{(\mathrm{e}ta-1)}_{1/S}(x)}\\ &=\lim_{x\to\infty}\dfrac{\mathrm{e}^{-(\mathrm{e}ta-1)\omega_0(xt)+\frac{1}{2}\sigma_0(xt)^2(\mathrm{e}ta-1)^2}\cdot \mathrm{e}^{-\mathrm{e}ta\omega_0(x)+\frac{1}{2}\sigma_0(x)^2\mathrm{e}ta^2}}{\mathrm{e}^{-\mathrm{e}ta\omega_0(xt)+\frac{1}{2}\sigma_0(xt)^2\mathrm{e}ta^2}\cdot \mathrm{e}^{-(\mathrm{e}ta-1)\omega_0(x)+\frac{1}{2}\sigma_0(x)^2(\mathrm{e}ta-1)^2}}\\ &=\lim_{x\to\infty}\mathrm{e}xp\left\{-\omega_0(x)+\omega_0(xt)+\dfrac{1}{2}\sigma_0(xt)^2(2\mathrm{e}ta-1)+\dfrac{1}{2}\sigma_0(x)^2(1-2\mathrm{e}ta)\right\}\\ &=\lim_{x\to\infty}\mathrm{e}xp\left\{-\omega_0(x)+\omega_0(x)+\omega_0(t)+{\mathit{O}}(\omega_0(x)^{-1})\right\}=t>1. 
\mathrm{e}nd{align*} Thus $F$ is a subexponential distribution. \mathrm{e}nd{proof} \begin{Example}[Exponential scaling] \label{myex4} Let $H\sim\mathrm{e}xp(\beta)$. Then $F$ is a subexponential distribution in the Gumbel domain of attraction. \mathrm{e}nd{Example} \begin{proof} Observe that $1/S$ has an inverse gamma distribution with a Laplace--Stieltjes transform given in terms of a modified Bessel function of the second kind \citep{Raqab1965}: \begin{equation*} \mathcal{L}_{1/S}(x)=\int_0^{\infty}\mathrm{e}^{- x/s}\beta\mathrm{e}^{-\beta s}\mathrm{d} s=2\sqrt{\beta x}\text{BesselK}(1, 2\sqrt{\beta x}). \mathrm{e}nd{equation*} Furthermore, its $n$-th derivative can be calculated explicitly also in terms of a modified Bessel function of the second kind: \begin{equation*} \mathcal{L}_{1/S}^{(n)}(x)=\int_0^{\infty}\left(-\frac{1}{s}\right)^n\mathrm{e}^{-x/s}\beta\mathrm{e}^{-\beta s}\mathrm{d} s=(-1)^n\cdot 2\,\beta^{\frac{n+1}{2}}x^{-\frac{n-1}{2}}\text{BesselK}(n-1, 2\sqrt{\beta x}). \mathrm{e}nd{equation*} Asymptotically it holds true that \begin{equation*} \mathcal{L}_{1/S}^{(n)}(x)\sim (-1)^n\sqrt{\pi}\beta^{\frac{2n+1}{4}} x^{-\frac{2n-1}{4}}\mathrm{e}^{-2\sqrt{\beta x}},\quad x\to\infty. \mathrm{e}nd{equation*} Hence, it follows that \begin{equation*} \lim_{x\to\infty}\dfrac{V(x)(-V^{\prime\prime}(x))}{(-V^{\prime}(x))^2} =-1. \mathrm{e}nd{equation*} Therefore, $V(x)$ is a von Mises function and $F\in\mathrm{MDA}(\Lambda)$. Moreover, if $t>1$ then \begin{align*} \lim_{x\to\infty}\dfrac{a(tx)}{a(x)}=\lim_{x\to\infty}\dfrac{V(tx)V^{\prime}(x)}{V^{\prime}(tx)V(x)}=\sqrt{t}>1. \mathrm{e}nd{align*} Thus F is a subexponential distribution. \mathrm{e}nd{proof} \begin{Remark} Notice that it is possible to generalize the result of the previous example for a gamma scaling distribution, because an expression for the Laplace--Stieltjes transform of an inverse gamma distribution is known and given in terms of a modified Bessel function of the second kind. However, it involves a number of tedious calculations and therefore omitted. Note as well that in such a case it is possible to test directly if $\overline F\,$ is a von Mises function, but the calculations become cumbersome. Finally, we remark that the results of \cite{LiuTang2010} imply the subexponentiality of the exponential case. \mathrm{e}nd{Remark} \begin{Remark} If $H$ is a discrete scaling distribution, then we can obtain an analogue result to that of of Theorem \ref{Lemma.2}. Define \begin{equation*} \mathcal{DL}_{1/S}(x)=\sum_{i=1}^{\infty}\mathrm{e}^{-x/i}p(i) \mathrm{e}nd{equation*} as the Laplace--Stieltjes transform of discrete scaling random variable $S$ with probability mass function $p(i)$. Then the tail probability of the phase-type scale mixture is: \begin{equation*} \overline F\,(x)= \sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1} \sum_{i=1}^{\infty}c_{jk}\left(\frac{x}{i}\right)^k\mathrm{e}^{-\lambda_j x/i}p(i) =\sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1} c_{jk}\, \frac{(-1)^kx^k}{\lambda_j^k}\, \mathcal{DL}_{1/S}^{(k)}(\lambda_j x). \mathrm{e}nd{equation*} If $V(x)=(-1)^{\mathrm{e}ta-1}\mathcal{DL}_{1/S}^{(\mathrm{e}ta-1)}(x)$ is a von Mises function, then $F\in\mbox{MDA}(\Lambda$). \mathrm{e}nd{Remark} We close this section with an important remark regarding Weibullian scalings. 
\begin{Remark}[Weibullian scaling] \label{Weibullian} A nonnegative distribution $H$ is said to be Weibullian with shape parameter $p>0$ \citep{ArendarczykDebicki2011} if \begin{equation*} \overline H(s)= Cs^{\delta}\mathrm{e}xp(-\lambda s^{p})(1+o(1)),\quad \lambda,C>0, \delta\in\mathbb{R}. \mathrm{e}nd{equation*} A Weibullian distribution with parameter $p$ is heavy-tailed if $0<p<1$, while it is light-tailed if $p\ge 1$. Notice that a phase-type distribution is Weibullian with shape parameter equal to $1$. Therefore, Lemma 2.1 of \cite{ArendarczykDebicki2011} implies that a phase-type scale mixture having a Weibullian scaling distribution with scale parameter $p$ will be Weibullian with shape parameter ${p_1}(1+p_1)^{-1}<1$, thus heavy-tailed. Furthermore, Lemma 2.1 in \cite{ArendarczykDebicki2011} provides exact expressions for each of the parameters $C$, $\delta$ and $\lambda$, so in principle one can use this result to replicate exactly the tail behavior of a Weibullian distribution via a phase-type scale mixture distribution. \mathrm{e}nd{Remark} \section{Discrete scaling distributions} \label{mysec4} Next we focus on the case of phase-type scale mixture distributions having scaling distributions supported over countable sets of strictly positive numbers. These distributions are particularly tractable since these correspond to distributions of absorption times of Markov jump processes with an infinite number of transient states. This class of distributions is of great importance for applications involving heavy-tailed phenomena, since a variety of quantities of interest can be calculated exactly. Such is the case of ruin probabilities in the Cr\'amer-Lundberg process having claims sizes distributed according to a phase-type scale mixture \citep[cf.][]{Bladt2014b,Nardo2016}. Notice for instance, that such exact results are not available for the case of continuous scaling distributions. We remark however, that some of the methodologies for determining domains of attraction and subexponentiality described in the previous section are not always implementable in a straightforward way for discrete scaling distributions. One of the main difficulties is the calculation of asymptotic equivalent expressions for the infinite series defining the tail probabilities. Below we describe a simple methodology which can be used to extend results for continuous scaling distributions to their discrete scaling distributions counterparts; such a methodology provides mild conditions under which the asymptotical behavior of an infinite series is asymptotically equivalent to that of a certain function defined via a definite integral. \begin{Proposition} \label{integral approx} Let $I_u:\mathbf{Z}^+\to\mathbb{R}^+$ be collection of functions indexed by $u\in(0,\infty)$. Suppose that for each $u>0$ there exists a measurable and bounded function $I'_u:\mathbb{R}^+\to\mathbb{R}$ such that $I(u;k)=I'(u;k)$ for all $k\in\mathbb{Z}^+$ and \begin{equation*} \int_{0}^{\infty}I'(u;y)\mathrm{d} y - M(u)\le\sum_{k=0}^{\infty}I(u;k)\le\int_{0}^{\infty}I'(u;y)\mathrm{d} y +M(u), \mathrm{e}nd{equation*} where $M(u)\ge\max\{I'(u;y):y>0\}$ is some upper bound for the function $I'(u;y)$. If \begin{equation*} \lim\limits_{u\to\infty}\dfrac{M(u)}{\int_0^{\infty}I'(u;y)\mathrm{d} y}=0, \mathrm{e}nd{equation*} then the following asymptotic relationship holds \begin{align*} \lim_{u\to\infty}\dfrac{\sum_{k=0}^{\infty}I(u;k)}{\int_0^{\infty}I'(u;y)\mathrm{d} y}=1. 
\mathrm{e}nd{align*} \mathrm{e}nd{Proposition} The method provides a verifiable condition under which the infinite series can be replaced by an asymptotic integral. The next example is taken from \cite{Bladt2014b}. \begin{Example}[Zeta scaling] \label{Zeta} Let $\alpha\ge 2$ and assume $H\sim\mathrm{Zeta}(\alpha)$. Such a distribution is determined by $p(i)=i^{-\alpha}/\zeta(\alpha)$, $i\in{\mathbb N}$ and $\zeta(\cdot)$ is the Riemann zeta function. Then $F$ is in the Fr\'echet domain of attraction. \mathrm{e}nd{Example} We remark that Breiman's lemma could have been used instead to determine the exact asymptotic behavior because the tail probability $\overline H(i)$, $i=1,2,\dots$ forms a regularly varying sequence, so $\overline H\in\mathcal{R}_{-\alpha}$ \citep{Bingham1987}. Nevertheless, this example is included here to illustrate the simplicity of the method proposed. \begin{proof} $H$ is supported over all the natural numbers, so the tail probability of corresponding phase-type scale mixture can be written as \begin{equation*} \overline F\,(x)=\sum_{i=1}^{\infty}p(i)\overline G\,(x/i). \mathrm{e}nd{equation*} Recall that the expression of $\overline G\,(\cdot)$ has been given in \mathrm{e}qref{PH tail}, then we have \begin{align*} \overline F\,(x)=\sum_{i=1}^{\infty}\sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1}c_{jk}\left(\frac{x}{i}\right)^{k}\mathrm{e}^{-\lambda_j x/i}\frac{i^{-\alpha}}{\zeta(\alpha)} =\sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1}\sum_{i=1}^{\infty}\dfrac{c_{jk}x^k}{\zeta(\alpha)}i^{-(\alpha+k)}\mathrm{e}^{-\lambda_j x/i}. \mathrm{e}nd{align*} Consider the functions $I'_{jk}(x;y)=x^ky^{-(\alpha+k)}\mathrm{e}^{-\lambda_j x/y}$ and note that each of these functions attains their single local maximum at $\hat{y}={\lambda_j x}(\alpha+k)^{-1}>0$, for all $x>0$. Therefore, \begin{equation*} \int_0^{\infty}I'_{jk}(x;y)\mathrm{d} y-M_{jk}(x;\hat{y})\leq \sum_{i=1}^{\infty}x^{k}i^{-(\alpha+k)}\mathrm{e}^{-\lambda_j x/i} \leq\int_0^{\infty}I'_{jk}(x;y)\mathrm{d} y+M_{jk}(x;\hat{y}). \mathrm{e}nd{equation*} Observe that \begin{equation*} M_{jk}(x;\hat{y})=x^k\mathrm{e}^{-(\alpha+k)}\left(\frac{\lambda_j}{\alpha+k}\right)^{-(\alpha+k)}x^{-(\alpha+k)}=cx^{-\alpha}, \mathrm{e}nd{equation*} and \begin{equation*} I'_{jk}(x):=x^k\int_0^{\infty}y^{-(\alpha+k)}\mathrm{e}^{-\lambda_j x/y}\mathrm{d} y=\frac{\Gamma(\alpha+k-1)}{\lambda^{\alpha+k-1}}x^{-\alpha+1}, \mathrm{e}nd{equation*} so $M_{jk}(x;\hat{y})$ is of negligible order with respect to $I'_{jk}(x)$. Then it follows that \begin{align*} \overline F\,(x)\sim \sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1}\dfrac{c_{jk}}{\zeta(\alpha)}I'_{jk}(x) =\sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1}\dfrac{c_{jk}\Gamma(\alpha+k-1)}{\zeta(\alpha)\lambda^{\alpha+k-1}}x^{-\alpha+1},\quad x\to\infty. \mathrm{e}nd{align*} Thus $F(x)\in$MDA($\Phi_{\alpha-1}$). Let $C=\sum\limits_{j=1}\limits^{m}\sum\limits_{k=0}\limits^{\mathrm{e}ta_j-1}\dfrac{c_{jk}\Gamma(\alpha+k-1)}{\zeta(\alpha)\lambda^{\alpha+k-1}}$, then the norming constants can be chosen as \begin{equation*} d_n=0,\quad c_n=\left(\frac{1}{\overline F\,}\right)^{\leftarrow}(n)=\left(\frac{C}{n}\right)^{\frac{1}{\alpha-1}}. \mathrm{e}nd{equation*} \mathrm{e}nd{proof} \begin{Example}[Geometric scaling] Let $H\sim\mathrm{Geo}(p)$ and $G$ be PH distribution whose sub-intensity matrix has only real eigenvalues. Then $F$ is a subexponential distribution in the Gumbel domain of attraction. \mathrm{e}nd{Example} \begin{proof} Let $p(i)=pq^i$ where $q=1-p$. 
Since the geometric distribution has unbounded support, then the associated phase-type scale mixture is heavy-tailed. We next verify that it belongs to the Gumbel domain of attraction. \begin{align*} \overline F\,(x)=\sum_{i=1}^{\infty}\overline G\,\left(x/i\right)pq^i. \mathrm{e}nd{align*} Let $I'(x;y)=\overline G\,(x/y)p \mathrm{e}xp\{-|\log q|y\}$ satisfies the conditions in Proposition \ref{integral approx}. Since the sine and cosine functions are bounded, then it is not difficult to use Proposition \ref{myprop2.1} to show that there exists a constant $c_1$ such that \begin{equation*} M(x):=I(x;\hat{y})\le x^{\frac{k}{2}}\mathrm{e}^{-2\sqrt{x\lambda |\log q|}}(c_1+{\mathit{o}}(1)),\qquad x\to\infty, \mathrm{e}nd{equation*} where $\lambda$ is the largest eigenvalue in absolute value and $k$ is its largest multiplicity. If the sub-intensity matrix has real eigenvalues then by using Lemma 2.1 in \citep{ArendarczykDebicki2011} we obtain that \begin{align*} \int_0^{\infty}I'(x;y)\mathrm{d} y&=p\int_0^\infty \overline G\,(x/y)\mathrm{e}^{-y|\log q|}\mathrm{d} y= x^{k/2+1/4}\mathrm{e}^{-2\sqrt{x\lambda|\log q|}}(C_1+{\mathit{o}}(1)),\qquad x\to\infty. \mathrm{e}nd{align*} So, the value of $M(x)$ is asymptotically negligible with respect to the value of the integral and we conclude that \begin{equation*} \overline F\,(x)\sim p\int_{0}^{\infty}\overline G\,(x/y)\mathrm{e}^{-y|\log q|}\mathrm{d} y =\frac{p}{|\log q|}\int_0^\infty \overline G\,(x/y)\mathrm{d} H(y), \mathrm{e}nd{equation*} where $H\sim \mathrm{e}xp(|\log q|)$. Hence, by tail equivalence, the distribution $F$ inherits all the asymptotic properties of its continuous counterpart, namely, a phase-type scale distribution with exponential scaling distribution with parameter $|\log q|$. \mathrm{e}nd{proof} \begin{Remark} We shall recall that the geometric version can be seen as the discrete counterpart of the exponential distribution obtained as a discretization. More precisely, the geometric distribution can be seen as a distribution supported over $\mathbb{Z}^+$ and defined by \begin{equation*} H(k)=\mathcal{H}(k), \quad k=0,1,2,\cdots, \mathrm{e}nd{equation*} where $\mathcal{H}\sim\mbox{exp}(|\log q|)$. The probability mass function of $H$ is given by $h(k)=\mathcal{H}(k)-\mathcal{H}(k-1)$. This idea can be extended in order to select scaling distributions for approximating heavy-tailed distributions in the Gumbel domain of attraction. Suppose we want to approximate the tail probability of an absolutely continuous distribution $\mathcal{H}$ supported over $(0,\infty)$ via a discrete phase-type scale mixture distribution. One way to proceed is to construct a discrete distribution supported over $\mathbb{N}$ defined by $h(k)=\mathcal{H}(k)-\mathcal{H}(k-1)$; we refer to this construction as a \mathrm{e}mph{discretization} of $\mathcal{H}$. Moreover, the density of $\mathcal{H}$ can be used to construct a function $I'(u;k)$. In such a case the tail behavior of a phase-type scale mixture having a discretized scaling distribution inherits the asymptotic properties of its continuous counterpart. This idea is better illustrated with the following example, which suggests a methodology for approximating the tail probability of a lognormal distribution. \mathrm{e}nd{Remark} \begin{Example}[Lognormal discretization] Let $H$ be a discrete lognormal distribution with parameters $\mu$, $\sigma$ and supported over $\{0,1,2,\cdots\}$. Assume $\mu=0$. 
The tail probability $\overline F\,$ is given by \begin{equation*} \overline F\,(x)=\sum_{i=1}^{\infty}\overline G\,(x/i)\left[H(i)-H(i-1)\right]=\sum_{i=1}^{\infty}\overline G\,(x/i)\int_{i-1}^ih(y)\mathrm{d} y, \mathrm{e}nd{equation*} where $h(\cdot)$ is the density of lognormal distribution. Since $\overline G\,(x/y)$ is increasing in $y$, then we can easily find a lower bound: \begin{equation*} \overline F\,(x)= \sum_{i=1}^{\infty}\int_{i-1}^i\overline G\,(x/i)h(y)\mathrm{d} y\ge \int_0^{\infty}\overline G\,(x/y)h(y)\mathrm{d} y. \mathrm{e}nd{equation*} For the upper bound, we have \begin{align*} \overline F\,(x)&\le \sum_{i=1}^{\infty}\int_{i-1}^i\overline G\,(x/(y+1))h(y)\mathrm{d} y= \sum_{i=1}^{\infty}\int_{i-1}^i\overline G\,(x/(y+1))[h(y)-h(y+1)+h(y+1)]\mathrm{d} y\\ &\le\int_0^{\infty}\overline G\,(x/y)h(y)\mathrm{d} y+\int_0^{\infty}\overline G\,(x/(y+1))[h(y)-h(y+1)]\mathrm{d} y. \mathrm{e}nd{align*} For the second integral in the above, we have \begin{align*} &\int_0^{\infty}\overline G\,(x/(y+1))[h(y)-h(y+1)]\mathrm{d} y\\ =&\int_0^{1}\overline G\,(x/(y+1))[h(y)-h(y+1)]\mathrm{d} y+\int_1^{\infty}\overline G\,(x/(y+1))[h(y)-h(y+1)]\mathrm{d} y\\ \le & c_1\overline G\,(x/2)+c_2\int_1^{\infty}\overline G\,(x/(y+1))\dfrac{h(y+1)}{(y+1)^{\beta}}\mathrm{d} y, \mathrm{e}nd{align*} where $c_1$, $c_2>0$ are some constants and $0<\beta<1$. It is not difficult to obtain this upper bound: firstly, it is easy to prove for $y\ge 1$, $\log(y+1)-\log(y)\le 1/y$, consequently, $\log^2(y+1)-\log^2(y)\le 2\log(y+1)/y$; then we have \begin{align*} \dfrac{h(y)}{h(y+1)}-1&=\dfrac{y+1}{y}\mathrm{e}xp\left\{\dfrac{\log^2(y+1)-\log^2(y)}{2\sigma^2}\right\}-1\\ &\le \mathrm{e}xp\left\{\dfrac{1}{y}+\dfrac{\log(y+1)}{\sigma^2y}\right\}-1\\ &\le c\left(\dfrac{1}{y}+\dfrac{\log(y+1)}{\sigma^2y}\right), \text{where}\ c>0\ \text{is some constant},\\ &\le \dfrac{c_2}{(y+1)^{\beta}}. \mathrm{e}nd{align*} Define \begin{equation*} I'_{jk}(x):=x^k\int_0^{\infty}y^{-k}\mathrm{e}^{-\lambda_jx/y}h(y)\mathrm{d} y. \mathrm{e}nd{equation*} From Example \ref{myex lognormal}, we know that \begin{align*} \int_0^{\infty}\overline G\,(x/y)h(y)\mathrm{d} y &= \sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1} c_{jk}\int_0^{\infty}(x/y)^k\mathrm{e}^{-\lambda_jx/y}h(y)\mathrm{d} y=\sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1} c_{jk}\dfrac{(-1)^kx^k}{\lambda_j^k}\mathcal{L}_Y^{(k)}(\lambda_jx)\\ &=\sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1} c_{jk}\, \frac{x^k}{\lambda_j^k}\mathcal{L}_{Y}\mathrm{e}xp\{-k\omega_0(\lambda_jx)+\dfrac{1}{2}\sigma_0(\lambda_jx)^2k^2\}. \mathrm{e}nd{align*} So \begin{align*} I'_{jk}(x) &=\left(\frac{x}{\lambda_j}\right)^k\mathcal{L}_Y\mathrm{e}xp\{-k\omega_0(\lambda_jx)+\dfrac{1}{2}\sigma_0(\lambda_jx)^2k^2\}. \mathrm{e}nd{align*} It is obvious that $c_1\overline G\,(x/2)$ vanishes faster than $I'_{jk}(x)$, so we can define \begin{align*} M_{jk}(x):=x^k\int_0^{\infty}y^{-k-\beta}\mathrm{e}^{-\lambda_jx/y}h(y)\mathrm{d} y, \mathrm{e}nd{align*} since \begin{align*} &c_2\int_1^{\infty}\overline G\,(x/(y+1))\dfrac{h(y+1)}{(y+1)^{\beta}}\mathrm{d} y =c_2\int_2^{\infty}\overline G\,(x/y)\dfrac{h(y)}{y^{\beta}}\mathrm{d} y\\ &\le\sum_{j=1}^{m}\sum_{k=0}^{\mathrm{e}ta_j-1} c_{jk}\int_0^{\infty}(x/y)^ky^{-\beta}\mathrm{e}^{-\lambda_jx/y}h(y)\mathrm{d} y. 
\mathrm{e}nd{align*} By a similar approximation as in Example \ref{myex lognormal}, we can see \begin{align*} M_{jk}(x)&=(-1)^{k+\beta}\frac{x^k}{\lambda_j^{k+\beta}}\mathcal{L}_Y^{(k+\beta)}(\lambda_jx)\\ &=\frac{x^k}{\lambda_j^{k+\beta}}\mathcal{L}_Y\mathrm{e}xp\{-(k+\beta)\omega_0(\lambda_jx)+\dfrac{1}{2}\sigma_0(\lambda_jx)^2(k+\beta)^2\}. \mathrm{e}nd{align*} So $M_{jk}(x)$ is negligible compared to integral $I'_{jk}(x)$. Thus, the phase-type scale mixture distribution with discrete lognormal scaling has the same asymptotic behavior as the phase-type scale mixture distribution with lognormal scaling. \mathrm{e}nd{Example} \subsection{Non-lattice supports} The examples in the previous subsection may suggest that a phase-type scale mixture having a discretized scaling distribution will inherit the asymptotic properties of its continuous counterpart. However, such a discretization cannot be made arbitrarily. The following example illustrates this fact. \begin{Example} Let $H\in\mathcal{R}_{-\alpha}$ be a continuous distribution and $S$ be a discrete random variable supported over $\{s_1,s_2,\cdots\}$ satisfying \begin{equation*} \mathbb{P}(S=s_i)=H(s_{i})-H(s_{i-1}), \quad i=1,2,\cdots. \mathrm{e}nd{equation*} Suppose there exists $\mathrm{e}psilon>0$ and $i_0\in{\mathbb N}$ such that $\forall i>i_0$, it holds that $s_{i+1}>s_i(1+\mathrm{e}psilon)$. Then \begin{align*} \limsup_{x\to\infty}\dfrac{\mathbb{P}[S>(1+\mathrm{e}psilon)x]}{\mathbb{P}[S>x]} =\limsup_{i\to\infty}\dfrac{\mathbb{P}[S>(1+\mathrm{e}psilon)s_i]}{\mathbb{P}[S>s_i]} =\limsup_{i\to\infty}\dfrac{\mathbb{P}[S>s_{i}]}{\mathbb{P}[S>s_{i}]} =1. \mathrm{e}nd{align*} Then $S$ does not have a regularly varying distribution. Suppose that $Y\sim\mbox{Erlang}(\lambda,k)$. According to Example 4.4 in \cite{MikoschJacobsenRosinski2009}, the distribution of phase-type scale mixture random variable $S\cdot Y$ is not regularly varying. \mathrm{e}nd{Example} Nevertheless, such a discretization will provide a \mathrm{e}mph{reasonable} approximation to a regularly varying distribution. The following is a continuation of our previous example and it shows that such a distribution satisfies an analogue of Breiman's lemma. \begin{Example}\label{converse of Breiman} Let $K>0$ and define $H_K$ a discrete distribution supported over $\{s_i:i\in\mathbb{Z}^+\}$, where $s_i=\mathrm{e}xp(i/K)$, and determined by \begin{equation*} H_K(s_i)=1-s_{i}^{-\alpha}, \qquad \forall i\in\mathbb{Z}^+. \mathrm{e}nd{equation*} The distribution $H_K$ can be seen as a discretization over a geometric progression of a Pareto distribution having tail probability $H(x)=x^{-\alpha}$ supported over $[1,\infty)$. The following argument shows that $H_K$ is no longer a regularly varying distribution. Notice that for all $t>1$ there exist $n\in\mathbb{Z}^+$ such that $s_{n}< t\le s_{n+1}$, hence \begin{align*} \liminf_{x\to\infty}\frac{\overline H_K(xt)}{\overline H_K(x)}& =s^{-\alpha}_{n+1}, & \limsup_{x\to\infty}\frac{\overline H_K(xt)}{\overline H_K(x)}& =\begin{cases}s^{-\alpha}_n&t<s_{n+1}\\ s^{-\alpha}_{n+1}&t=s_{n+1}. \mathrm{e}nd{cases} \mathrm{e}nd{align*} Thus, according to Example 4.4 in \cite{MikoschJacobsenRosinski2009}, the Mellin--Stieltjes convolution of an Erlang distribution $G$ with the distribution $H$ given above is no longer of regular variation (the conditions described in Proposition \ref{integral approx} are not satisfied for this example either). 
In spite of this, we can still analyse certain aspects of the asymptotic behavior of such a Mellin--Stieltjes convolution. For that purpose, note that the following inequalities hold for all $w>1$ \begin{equation*} \mathrm{e}^{-\alpha/K}\overline H(w)< \overline H_K(w)\le \overline H(w), \mathrm{e}nd{equation*} hence we obtain that \begin{align*} \mathrm{e}^{-\alpha/K}\int_0^\infty \overline H(x/s)dG(s)< \int_0^\infty \overline H_K(x/s)dG(s)\le \int_0^\infty \overline H(x/s)dG(s). \mathrm{e}nd{align*} Using Breiman's lemma we find that \begin{align*} \mathrm{e}^{-\alpha/K}<\liminf \frac{\overline F\,(x)}{M_G(\alpha)\overline H(x)}\le\limsup \frac{\overline F\,(x)}{M_G(\alpha)\overline H(x)}\le 1. \mathrm{e}nd{align*} A heuristic interpretation of the inequalities above is that aysmptotically the tail probability $\overline F\,$ \mathrm{e}mph{oscillates} between two regularly varying tails, so this example illustrates a behavior similar to that described by Breiman's lemma. Notice that the range of oscillation collapses as $K\to\infty$, which is consistent with the fact that $H_K\to H$ weakly. A better asymptotic approximation in the following argument is particularly sharp for numerical purposes. Consider \begin{align*} \overline F\,(x)=\int_0^{\infty}\overline G\,\left(x/s\right)\mathrm{d} H_K(s)&=(1-\mathrm{e}^{-\alpha/K})\sum_{i=0}^{\infty}\overline G\,(x\mathrm{e}^{-i/K})\mathrm{e}^{-\alpha i/K}. \mathrm{e}nd{align*} Let $I(x;i)=\overline G\,(x\mathrm{e}^{-i/K})\mathrm{e}^{-\alpha i/K}$. The infinite series can be approximated via the integral \begin{align*} \int_{0}^{\infty}I(x;y)\mathrm{d} y =\int_{0}^{\infty}\overline G\,\left( x\mathrm{e}^{-y/K}\right)\mathrm{e}^{-\alpha y/K }\mathrm{d} y &=K\int_{1}^{\infty}\overline G\,\left(\frac xw\right)w^{-(\alpha+1) }\mathrm{d} w =\frac{K}\alpha\int_{1}^{\infty}\overline G\,\left(\frac xw\right)\mathrm{d} {H}(w). \mathrm{e}nd{align*} Since $G$ is such that $M_G(\alpha+\mathrm{e}psilon)<\infty$ for all $\mathrm{e}psilon>0$, then Breiman's lemma implies that \begin{align*} {\overline F\,(x)}&\approx \frac{1-\mathrm{e}^{-\alpha/K}}{\alpha/K}M_G(\alpha)\overline H(x). \mathrm{e}nd{align*} This approximation is consistent with the bounds found above, since for all $w>0$ it holds that \begin{equation*} \mathrm{e}^{-w}\le \frac{1-\mathrm{e}^{-w}}{w}\le 1. \mathrm{e}nd{equation*} Hence, the asymptotic approximation suggested is contained in between the asymptotic bounds previously found. \mathrm{e}nd{Example} The previous example demonstrates that the tail behavior of a phase-type scale mixture distribution having a discretized scaling distribution is clearly affected by the selection of the support. Naturally, better approximations will be obtained by taking a finer partition of the support. The natural choice is to use a discretization of the target distribution over some lattice. However, this approach is not always suitable for numerical purposes, because in practice there is only a finite number of terms of the infinite series that can be computed, so these series are typically truncated. By selecting a discretization over a geometric progression, we will obtain infinite series that converge at faster rates, so these can be truncated earlier. More importantly, such geometric progressions still provide reasonable approximations of the tail probability as shown above. 
This approach has been tested successfully in \cite{Nardo2016}, where they considered discretizing a Pareto distribution over a geometric progression and used the corresponding phase-type scale mixture distribution to approximate Pareto claim size distributions in ruin probability calculations. This selection of the scaling distribution is of critical importance in \cite{BladtRojas2017} for estimating the parameters of a phase-type scale mixture distribution via the EM algorithm. Such an estimation procedure is iterative, so in each step it is necessary to compute a number of sufficient statistics involving these infinite series. The selection of a geometric support allows us to compute the estimators within a reasonable time. \section{Conclusion} \label{mysec5} We considered the class of phase-type scale mixtures. Such distributions arise from the product of two random variables $S\cdot Y$, where $S\sim H$ is a nonnegative random variable and $Y\sim G$ is a phase-type random variable. Such a class is mathematically tractable and can be used to approximate heavy-tailed distributions. We provided a collection of results which can be used to determine the asymptotic behavior of a distribution in such a class. For instance, if the scaling distribution $H$ has unbounded support, then the associated phase-type scale mixture distribution is heavy-tailed. We also provided verifiable conditions which can be employed to classify the maximum domains of attraction and determine subexponentiality. In particular, we were able to find phase-type scale mixture distributions with equivalent asymptotic behavior for regularly varying and Weibullian distributions. It is not the case for the lognormal for which it is more difficult to suggest an appropriate scaling distribution. We considered the case of phase-type scale mixture distributions having discrete scaling distributions since these are of critical importance in applications. We described a simple methodology which allows to establish the asymptotic proportionality of these distributions with respect to their continuous counterparts. We exhibited important advantages and limitations of this approach to approximate heavy-tailed distributions and analysed several important examples. We remark that most of the results obtained here can be extended to an analogue class of \mathrm{e}mph{matrix exponential scale mixture distributions} without too much effort. We note that some of our results were proven under the assumption that the phase-type distribution has a sub-intensity matrix having only real eigenvalues. Nevertheless, we conjecture that such results holds for general phase-type and matrix-exponential distributions. We also conjecture that a phase-type distribution is $\alpha$-regularly varying determining but this remains an open problem. \section{Appendix} In this appendix we revise some classical results providing conditions for determining if a distribution belongs to the Gumbel domain of attraction and if it is subexponential. A main result in extreme value theory indicates that a distribution $H$ belongs to the Gumbel domain of attraction iff $\overline H$ is tail-equivalent to a von Mises function. The following provides sufficient conditions for a distribution to be a von Mises function. \begin{theorem}[\cite{Haan1970}]\label{mydef2.2} Let $\overline H$ be a twice differentiable nonnegative distribution with unbounded support. 
Then $\overline H$ is a von Mises function iff there exists $s_0$ such that $\overline H^{\prime\prime}(s)<0$ for all $s>s_0$, and \begin{equation}\label{cond.Gumbel} \lim_{s\to\infty}\frac{\overline H(s)\overline H^{\prime\prime}(s)}{\left(\overline H^{\prime}(s)\right)^2}=-1. \mathrm{e}nd{equation} Moreover, von Mises functions are functions of rapid variation \citep[cf.][]{Bingham1987}. \mathrm{e}nd{theorem} \cite{GoldieResnick1988} provide a sufficient condition for an absolutely continuous distribution $H\in\mathrm{MDA}(\Lambda)$ to be subexponential: \begin{theorem}[\cite{GoldieResnick1988}] \label{Th.GoldieResnick} Let $H\in\mathrm{MDA}(\Lambda)$ be an absolutely continuous function with density $h$, then $H\in\mathcal{S}$ if \begin{equation}\label{cond.Gumbel.Sub} \liminf_{s\to\infty}\frac{\overline H(ts)}{h(ts)}\frac{h(s)}{\overline H(s)}>1,\qquad \forall t>1. \mathrm{e}nd{equation} \mathrm{e}nd{theorem} Therefore, since a phase-type scale mixture distribution is not only absolutely continuous but twice differentiable and its second derivative is negative, then we can verify if it belongs to the Gumbel domain of attraction by just checking the condition \mathrm{e}qref{cond.Gumbel} in Theorem \ref{mydef2.2}. Subexponentiality can be checked via condition \mathrm{e}qref{cond.Gumbel.Sub} in Theorem \ref{Th.GoldieResnick}. \mathrm{e}nd{document}
math
67,731
\begin{document} \title{A generic framework for genuine multipartite entanglement detection} \author{Xin-Yu Xu} \author{Qing Zhou} \author{Shuai Zhao} \author{Shu-Ming Hu} \affiliation{Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China} \affiliation{CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China} \author{Li Li} \email{[email protected]} \author{Nai-Le Liu} \email{[email protected]} \author{Kai Chen} \email{[email protected]} \affiliation{Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China} \affiliation{CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China} \affiliation{Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China} \maketitle \begin{abstract} Design of detection strategies for multipartite entanglement stands as a central importance on our understanding of fundamental quantum mechanics and has had substantial impact on quantum information applications. However, accurate and robust detection approaches are severely hindered, particularly when the number of nodes grows rapidly like in a quantum network. Here we present an exquisite procedure that generates novel entanglement witness for arbitrary targeted state via a generic and operational framework. The framework enjoys a systematic and high-efficient character and allows to substantiate genuine multipartite entanglement for a variety of states that arise naturally in practical situations, and to dramatically outperform currently standard methods. With excellent noise tolerance, our framework should be broadly applicable to witness genuine multipartite entanglement in various practically scenarios, and to facilitate making the best use of entangled resources in the emerging area of quantum network. \end{abstract} \section{Introduction} \label{sec:introduction} As a unique property in quantum theory, entanglement \cite{RevModPhys.81.865} is recognized as a kind of quantum resource \cite{RevModPhys.91.025001} and plays a central role in numerous quantum computing and quantum communication tasks \cite{bennett2000quantum,PhysRevLett.70.1895,RevModPhys.81.1301,feynman1982simulating,deutsch1985quantum}. The ability to generate an increasing number of entangled particles is an essential benchmark for quantum information processing. In past decades, considerable efforts have been made to prepare larger and more complex entangled states in various platforms \cite{Luo620,arute2019quantum,PhysRevLett.105.210504,yao2012,doi:10.1126/science.abg7812,yokoyama2013ultra,PhysRevLett.112.155304}, which experimental systems are currently evolving from several qubits to noisy intermediate scale quantum system (NISQ) \cite{Preskill2018quantumcomputingin}. The developments of quantum technologies raise immediately important questions regarding characterization of quantum entanglement of underlying systems. 
In bipartite systems, various theoretical works have been contributed, such as separability criterions \cite{PhysRevLett.77.1413,HORODECKI1997333,chen205017quantum,rudolph2005further} and entanglement measures \cite{PhysRevLett.95.040504,PhysRevLett.95.210501,plenio2014introduction}, which provide standard tools for characterizing bipartite entanglement. For good reviews, please refer to Refs.\ \cite{RevModPhys.81.865,GUHNE20091,friis2019entanglement}. When it comes to multipartite systems, the problem is much more complicated. The entanglement structure becomes much richer for multipartite systems \cite{PhysRevLett.108.110501,zhou2019detecting}, since the number of possible divisions grows exponentially with the system size \cite{RevModPhys.81.865}. This leads to many types of multipartite entanglement, ranging from non-fully-separable to genuine multipartite entanglement (GME). In the following, we focus on the detection of genuine multipartite entanglement, which is an essential task for multipartite quantum communication and quantum computing tasks. For the detection of GME, many standard tools in the bipartite case, such as separability criterions, become infeasible since they only detect entanglement between two partitions. Meanwhile, a tomographic reconstruction of quantum state required in these methods becomes time-consuming and computationally difficult in the multipartite case. For genuine multipartite entanglement detection, entanglement witness (EW) \cite{HORODECKI19961,terhal2000bell,lewenstein2000optimization,hyllus2005relations,lewenstein2001characterization} provides an elegant solution both theoretically and experimentally without need of having full tomographic knowledge about the state. Moreover, it is also known that witness operator can also be used to estimate entanglement measures \cite{PhysRevLett.98.110502}. On account of simplicity and efficiency of entanglement witness, it has been widely used for experimental certification of GME in many platforms, such as trapped ions \cite{PhysRevX.8.021012,PhysRevLett.106.130506}, photonic qubits \cite{PhysRevLett.95.210502,gao2010experimental,PhysRevX.8.021072,PhysRevLett.124.160503}, and superconducting qubits\cite{PhysRevLett.122.110501}. Most available GME witnesses are tailored towards some specific states, for instance, the Greenberger-Horne-Zeilinger (GHZ) states \cite{Greenberger1989}, W-states \cite{PhysRevA.62.062314}, graph states \cite{PhysRevA.69.062311,hein2006entanglement}, and so on. Despite few general methods for the construction of GME witness have been proposed \cite{PhysRevLett.92.087902,PhysRevLett.106.190502,PhysRevLett.113.100501,PhysRevLett.111.110503}, their performance is very limited, especially as the size of system grows. One major drawback is the limited scope of noise resistance. For example, the fidelity-based method \cite{PhysRevLett.92.087902} is a canonical witness construction and widely used nowadays. Its noise tolerance decreases dramatically as the system size increases. In realistic NISQ systems, however, the noise always inevitably grows with the system size. In fact, it has been shown that the fidelity witnesses fail to detect a large amount of mixed entangled states \cite{PhysRevLett.124.200502}. To find more robust GME witnesses, numerical methods have been introduced \cite{PhysRevLett.106.190502},which, however, suffer from expensive computational costs as the system size grows. 
Hence, although it is known that for any entangled state there exists some EW to detect it \cite{HORODECKI19961}, how to construct a desirable EW to recognize a GME state is still a formidable challenge. In this work, we propose a generic framework to design robust GME witnesses by analytical and systematic construction. We start by introducing an exquisite method for GME witness with a novel lifting from any set of bipartite EWs. This establishes the link between the standard tools developed in the bipartite case and the GME witness construction. We then provide a well-designed class of optimal bipartite EWs that allows the design of robust GME witnesses for arbitrary pure GME states with our method. The performance of this framework on many typical classes of GME states is further evaluated in terms of white noise tolerance. It can be shown that the framework outperforms the most widely used fidelity-based method with certainty, and outperforms much better than the best known EWs in many cases. Finally, benefiting from the high robustness of the resulting witnesses, we also demonstrate further applications of the framework, such as to provide a tighter lower bounds on the genuine multipartite entanglement measures and detecting unfaithful GME states \cite{PhysRevLett.124.200502}. \section{Results} \subsection{Preliminaries} To start with, we first give the precise definition of biseparable, genuine multipartite entanglement and entanglement witness. A pure state is called \textit{biseparable} if it can be written as a tensor product of two state vectors, i.e., $|\psi_A\rangle\otimes|\psi_{\bar{A}}\rangle$. Then a mixed state is called biseparable if it can be decomposed into a mixture of pure biseparable states, formally, \begin{equation} \rho_{bs}=\sum_{A|\bar{A},i}p_{A|\bar{A},i} |\psi_A^i\rangle\langle\psi_A^i|\otimes|\psi_{\bar{A}}^i\rangle\langle\psi_{\bar{A}}^i|, \end{equation} where the summation can be performed over different bipartitions $A|\bar{A}$ of the whole system. A state that is not biseparable is referred to as genuine multipartite entangled. To detect the GME states, the most widely used method is to find an observable $\mathcal{W}_{GME}$ that is nonnegative for all separable states and has negative expectation value on at least one GME state. Then for some multipartite quantum state $\rho$, the fact $Tr(\mathcal{W}_{GME}\rho) < 0$ will reveal the existence of genuine multipartite entanglement, and the $\mathcal{W}_{GME}$ is called a \textit{GME witness}. Moreover, given two EWs $\mathcal{W}_1$ and $\mathcal{W}_2$, if there exists $\lambda > 0$ such that $\mathcal{W}_1 - \lambda \mathcal{W}_2$ is positive semidefinite, i.e., $\mathcal{W}_1 \succeq \lambda \mathcal{W}_2$, one says that $\mathcal{W}_2$ is \textit{finer} than $\mathcal{W}_1$ \cite{lewenstein2000optimization}. The finer witness operator $\mathcal{W}_2$ detects more entangled states than $\mathcal{W}_1$. An EW is \textit{optimal} if no finer EW exists. \subsection{Design GME witness from a complete set of bipartite EWs} Due to its non-negativity over all biseparable states, a GME witness $\mathcal{W}_{GME}$ also serves as bipartite EW with respect to each possible bipartition of the whole system. In other words, there exists a complete set of bipartite EWs $\{\mathcal{W}_{A|\bar{A}}\}$ satisfying $\mathcal{W}_{GME} \succeq \mathcal{W}_{A|\bar{A}}$ for each bipartition $A|\bar{A}$. 
This fact, from the opposite point of view, indicates that the GME witness $\mathcal{W}_{GME}$ is designed based on the set $\{\mathcal{W}_{A|\bar{A}}\}$ according to the constraint $\mathcal{W}_{GME} \succeq \mathcal{W}_{A|\bar{A}}$. This naturally provides a general framework for constructing GME witnesses from a complete set of bipartite EWs. Remarkably, the set $\{\mathcal{W}_{A|\bar{A}}\}$ itself cannot be directly used to detect GME states, as there exist biseparable states that are entangled with respect to every possible bipartition \cite{GUHNE20091}. While there are two crucial issues with such a framework. The first one is how to find the operator satisfying $\mathcal{W}_{GME} \succeq \mathcal{W}_{A|\bar{A}}$, and the second one is to decide which set of bipartite EWs should be used. Optimal solutions to these two problems is hard in general, and there have been only a few previous related studies on these issues. In Ref.\ \cite{PhysRevLett.113.100501}, an alternative solution was proposed to establish a connection between positive maps and multipartite EWs, where EWs detecting multipartite bound entangled state have been obtained. While in the following, we present a novel alternatively solution which is capable of constructing robust GME witnesses. \subsection{An operational framework for constructing robust GME witness} Any mixed GME state contains at least one pure GME state as a component, while the remaining components can be treated as noises. In order to detect mixed GME states with linear EW, it is natural to employ a witness operator for the pure GME component that is sufficiently robust to noise from the other components. In fact, the set of all optimal GME witnesses for all pure GME states will be sufficient to detect all GME states. However, finding all optimal GME witnesses is naturally a formidable task. Therefore, to advance a solution to this problem, we propose an operational framework to construct a class of robust GME witnesses for all pure GME states. To address the problem of lifting any given set of bipartite EWs to multipartite, one can accomplish it in two steps: (1) For the first step, each bipartite EW $\mathcal{W}_{A|\bar{A}}$ is decomposed into some projectors. Note that the entanglement witness is designed for some pure entangled state $|\psi\rangle$. Hence we extract a term $-|\psi\rangle\langle\psi|$ before the decomposition. That is, the bipartite EWs are rewritten as $\mathcal{W}_{A|\bar{A}} = \mathcal{O}_{A|\bar{A}} - |\psi\rangle\langle\psi|$, and a spectral decomposition of $\mathcal{O}_{A|\bar{A}} = \mathcal{W}_{A|\bar{A}}+|\psi\rangle\langle\psi|$ is performed \begin{equation}\label{bEW} \mathcal{O}_{A|\bar{A}}=\sum_{|\vec{v}_{i,A|\bar{A}}\rangle \in \mathcal{S}_{A|\bar{A}}} c_{i,A|\bar{A}} |\vec{v}_{i,A|\bar{A}}\rangle\langle \vec{v}_{i,A|\bar{A}}|, \end{equation} with $\mathcal{S}_{A|\bar{A}}$ being the set of eigenvectors and $c_{i,A|\bar{A}}$ being the corresponding eigenvalues. All these eigenvectors are collected into a set $\mathcal{S} = \cup_{A|\bar{A}} \mathcal{S}_{A|\bar{A}}$. (2). For the second step, the obtained set $\mathcal{S}$ is divided into $m$ subsets $\{\mathcal{S}_k\}_{k=1}^m$, such that the vectors from different subsets are orthogonal with each other. 
Denote $\tilde{I}_k$ as the identity operator on the subspace $V_k$ spanned by the state vectors from subset $\mathcal{S}_k$, and $c_k=max_{|\vec{v}_{i,A|\bar{A}}\rangle \in \mathcal{S}_k} c_{i,A|\bar{A}} $ as the maximal coefficient attached to the state vectors in $\mathcal{S}_k$. With the above preparation and notation, we proceed to the following Theorem: \begin{theorem} Given any pure GME state $|\psi\rangle$ and a set of bipartite EWs $\{\mathcal{W}_{A|\bar{A}}\}$ detecting $|\psi\rangle$ for all possible $A|\bar{A}$, the following operator $\mathcal{\hat{W}}$ \begin{equation} \mathcal{\hat{W}}=\sum_{k=1}^m c_k \tilde{I}_k -|\psi\rangle\langle\psi|, \end{equation} is nonnegative over all biseparable states, where the $c_k$ and $\tilde{I}_k$ have been defined above. \end{theorem} \begin{proof} To prove the statement, it suffices to observe \begin{equation} \begin{aligned} \mathcal{\hat{W}}-\mathcal{W}_{A|\bar{A}}=&\sum_{k=1}^m c_k \tilde{I}_k -\mathcal{O}_{A|\bar{A}} \\ =&\sum_{k=1}^m \left(c_k \tilde{I}_k-\sum_{|v_{i,A|\bar{A}}\rangle \in \mathcal{S}_k \cap \mathcal{S}_{A|\bar{A}}} c_{i,A|\bar{A}} |\vec{v}_{i,A|\bar{A}}\rangle\langle \vec{v}_{i,A|\bar{A}}|\right) \\ \ge& \sum_{k=1}^m c_k\left(\tilde{I}_k-\sum_{|v_{i,A|\bar{A}}\rangle \in \mathcal{S}_k \cap \mathcal{S}_{A|\bar{A}}} |\vec{v}_{i,A|\bar{A}}\rangle\langle \vec{v}_{i,A|\bar{A}}|\right) \\ \ge& 0, \end{aligned} \end{equation} where the inequalities can be derived directly from the definitions of $c_k$ and $\tilde{I}_k$. \end{proof} The above construction can be interpreted geometrically. That is, noise from different subspaces has different degrees of influence on the entanglement properties of the target state. The influence is characterized by the coefficients $c_k$, and a small $c_k$ indicates that noise from this subspace hardly affects the entanglement property of target state. Therefore, Theorem 1 can be seen as robust GME witness construction with the help of some prior knowledge of the target state, which comes from the set of bipartite EWs $\{\mathcal{W}_{A|\bar{A}}\}$. Remarkably, Theorem 1 itself cannot be used as an operational framework for GME witness construction, since the resulting operators can be positive semidefinite and fail to detect any GME state. In fact, one can hardly expect a nontrivial result when the set of bipartite EWs $\mathcal{W}_{A|\bar{A}}$ are chosen randomly. Fortunately, standard tools exist for constructing bipartite EWs based on positive maps. In the following, in order to obtain an operational and generic framework for GME witness construction, we provide a promising choice on the set of bipartite EWs, which are designed for the target states based on partial transposition. Under any given bipartition $A|\bar{A}$, the target state $|\psi\rangle$ can be written in a Schmidt decomposition form $|\psi\rangle=\sum_{i=0}^{r_A-1} \sqrt{\lambda_{i,~A|\bar{A}}}|i_Ai_{\bar{A}}\rangle$, with $r_A$ being the corresponding Schmidt rank. Note that here the local dimension of the Hilbert space need not be fixed. Then we introduce a class of bipartite EWs $\mathcal{W}_{o,A|\bar{A}}$ in order to use them in the construction of GME witness. \begin{equation}\label{eq:obEW} \mathcal{W}_{o,A|\bar{A}}= \sum_{i,j=0}^{r_A-1} \sqrt{\lambda_{i,~A|\bar{A}}\lambda_{j,~A|\bar{A}}} |i_Aj_{\bar{A}}\rangle\langle i_Aj_{\bar{A}}|-|\psi\rangle\langle\psi|. \end{equation} The choice of $\mathcal{W}_{o,A|\bar{A}}$ is mainly based on two considerations. 
Firstly, $\mathcal{W}_{o,A|\bar{A}}+|\psi\rangle\langle\psi|$ naturally takes the decomposition form in the Eq.\ (\ref{bEW}). Secondly, the above $\mathcal{W}_{o,A|\bar{A}}$ are a class of optimal bipartite EWs. For a detailed illustration and discussion on the $\mathcal{W}_{o,A|\bar{A}}$, please refer to Appendix.\ \ref{sec:appendix bipartite EW}. These bipartite EWs, together with Theorem 1, promise a generic framework to construct GME witnesses with certainty. The explicit procedure is as follows: \begin{enumerate}[(1).] \item Firstly, find the set $\mathcal{S}$. For each bipartition $M|\bar{M}$, calculate the Schmidt decomposition of $|\psi\rangle$ with respect to $M|\bar{M}$, \begin{equation} |\psi\rangle=\sum_{i=0}^{r_{M|\bar{M}}-1} \lambda_{i,M|\bar{M}}|\varphi_{i,M}\rangle|\varphi_{i,\bar{M}}\rangle, \end{equation} with $r_{M|\bar{M}}$ being the Schmidt rank under this bipartition. A total of $r^2_{M|\bar{M}}$ vectors will be added to the set $\mathcal{S}$, and each of them has a corresponding coefficient. This is denoted by \begin{equation} \{(\sqrt{\lambda_{i,M|\bar{M}}\lambda_{j,M|\bar{M}}},~|\varphi_{i,M}\rangle|\varphi_{j,\bar{M}}\rangle)\}_{i,j=0}^{r_{M|\bar{M}}-1}. \end{equation} After traversing all possible bipartitions, one will end up with a set of vectors $\mathcal{S}$ as well as their corresponding coefficients, that is, $\{(c_k,~|\psi_k\rangle)\}_{k=1}^{|\mathcal{S}|}$. \item Secondly, find the finest division of $\mathcal{S}$ such that vectors from different subsets are orthogonal with each other. This can be achieved with the following steps: \begin{enumerate}[(i)] \item Put the first element $|\psi_1\rangle$ of $\mathcal{S}$ into an empty subset $\mathcal{S}_1$. \item For every other vector in $\mathcal{S}-\mathcal{S}_1$, if it is not orthogonal with all vectors in the set $\mathcal{S}_1$, it is added into $\mathcal{S}_1$. Repeat this step until no new vector can be added to $\mathcal{S}_1$. \item For the rest vectors in $\mathcal{S}-\mathcal{S}_1$, repeat the above two steps to obtain $\mathcal{S}-\mathcal{S}_1-\mathcal{S}_2$, $\mathcal{S}-\mathcal{S}_1-\mathcal{S}_2-\mathcal{S}_3$, $\cdots$, until one has classified all the elements of $\mathcal{S}$. \item One obtain a division $\mathcal{S}=\sum_{k=1}^{m}\mathcal{S}_k$. \end{enumerate} \item Thirdly, calculate the subspace spanned by the vectors in subset $\mathcal{S}_k$. By performing Schmidt orthogonalization of the vectors in $\mathcal{S}_k$, one can derive the subspace spanned by these vectors and obtain the identity operator $\tilde{I}_k$ on this subspace. \item Finally, for each subset $\mathcal{S}_k$, find the maximal coefficients $c_k$ attached to the vectors in it, and construct a GME witness using Theorem 1. \end{enumerate} There are two remarks to note about this method. Firstly, the resulting witness from the above procedure is always finer than the commonly used GME fidelity witness $\mathcal{W}_F =\lambda I-|\psi\rangle\langle\psi| $ for $|\psi\rangle$, with $\lambda=\max_{A|\bar{A}} \lambda_{0,A|\bar{A}}$ (Here it is assumed that the Schmidt coefficients $\lambda_{i,A|\bar{A}}$ are in decreasing order). To illustrate this, note that if the bipartite EWs are chosen as the bipartite fidelity witness $\mathcal{W}_{F,A|\bar{A}} = \lambda_{0,A|\bar{A}} I-|\psi\rangle\langle\psi|$, by applying Theorem 1, the obtained operator is nothing but the $\mathcal{W}_F$. 
Whereas by checking $\mathcal{W}_{F,A|\bar{A}}-\mathcal{W}_{o,A|\bar{A}} \succeq 0$, it is straightforward to verify that the above $\mathcal{W}_{o,A|\bar{A}}$ is finer than the bipartite fidelity witness $\mathcal{W}_{F,A|\bar{A}}$. Therefore, when Theorem 1 is applied to the set of $\mathcal{W}_{o,A|\bar{A}}$, the resulting GME witness strictly outperforms the corresponding fidelity witness $\mathcal{W}_{F}$. Secondly, one starts from a complete set of bipartite EWs in the above construction, leading to EWs that detect genuine multipartite entanglement. While if one starts from a smaller set of bipartite EWs, the method allows also for flexible applications in verifying other kinds of multipartite entanglement, e.g., characterizing the entanglement depth. \section{Examples} To help a better understanding as well as quantitatively investigating the robustness of the framework, we proceed to some explicit examples, where the white noise tolerance is employed as a figure of merit to evaluate its performance in practice. The white noise tolerance of some witness operator $\mathcal{W}$ for $|\psi\rangle$ is the critical value of $p$ such that the mixed state $pI/d+(1-p)|\psi\rangle\langle\psi|$ is not detected by $\mathcal{W}$. \subsection{$W$-state} To investigate the asymptotic behavior of this framework with an increasing system size, we start with the $n$-qubit $W$-state $|W_n\rangle=(|00\cdots01\rangle$$+|00\cdots10\rangle+$$\cdots+|10\cdots00\rangle)/\sqrt{n}$, which is widely used in quantum information processing tasks. For the $W$-state, we find the GME witness (see Appendix.\ \ref{sec:appendix w state} for a proof.) \begin{equation} \mathcal{W}_{|W_n\rangle}=\frac{n-1}{n}\mathcal{P}^n_1+\frac{\sqrt{\lfloor n/2\rfloor(n-\lfloor n/2\rfloor)}}{n} (\mathcal{P}^n_0 +\mathcal{P}^n_2)-|W_n\rangle\langle W_n|, \end{equation} with $\mathcal{P}^n_i=\sum_m\pi_m(|0\rangle^{\otimes n-i}|1\rangle^{\otimes i})\pi_m(\langle 0|^{\otimes n-i}\langle 1|^{\otimes i})$, where the summation $m$ is over all possible permutation of $|0\rangle^{\otimes n-i}|1\rangle^{\otimes i}$. The $\mathcal{W}_{|W_n\rangle}$ recovers a class of EWs presented in Ref.\ \cite{Bergmann_2013}, which are the most powerful ones for the $W$-state presently. Its white noise tolerance also tends to 1 for an increasing number of qubits. While for the fidelity witness, its white noise tolerance is $1/(n(1-1/2^n))$, tending to $1/n$ for large $n$. \subsection{Graph state} Graph states are a class of genuine multipartite entangled states that are of great importance for measurement-based quantum computation \cite{briegel2009measurement} and quantum error correction \cite{PhysRevA.65.012308}, etc. In Refs.\ \cite{PhysRevLett.106.190502,PhysRevA.84.032310}, the authors have developed powerful entanglement witnesses for graph states. While our framework suggests that there is still much room for improvement in the robustness of these existing results. More specifically, we focus on a typical class of graph state---the $n$-qubit ($n\ge4$) linear cluster states $|Cl_n\rangle$ in this example. The $|Cl_n\rangle$ can be expressed by a set of stabilizers $\{g_i\}_{i=1}^n$, with $g_i=Z_{i-1}X_iZ_{i+1}$ ($2\le i \le n-1$), $g_1=X_1Z_2$ and $g_n=Z_{n-1}X_n$ respectively, where the $X$ and $Z$ are Pauli operators. All the common eigenstates of these stabilizers introduce a complete basis, i.e., the graph state basis. 
This basis can be denoted by $|\vec{a}\rangle_{Cl_n}$, with $\vec{a}=a_1a_2\cdots a_n\in \{0,1\}^n$, such that $g_i|\vec{a}\rangle_{Cl_n}=(-1)^{a_i}|\vec{a}\rangle_{Cl_n}$ for $i=1,\cdots,n$. In particular, $|Cl_n\rangle$ corresponds to $|00\cdots0\rangle_{Cl_n}$. When applied to the linear cluster state, our framework results in a GME witness that is diagonal in the graph state basis (for the explicit construction process, we refer to Appendix.\ \ref{sec:appendix graph state}), \begin{equation} \mathcal{W}_{Cl_n}=\sum_{k=1}^{\lceil n/3 \rceil} \sum_{\vec{a}\in V_k}\frac{1}{2^k-1}|\vec{a}\rangle_{Cl_n}\langle\vec{a}|-|Cl_n\rangle\langle Cl_n|. \end{equation} Here a vector $\vec{a}$ belongs to $V_k$ if at most $k$ of the `$1$'s in $\vec{a}$ can be chosen such that their pairwise distances are all larger than $2$ (for instance, $1101100$ belongs to $V_2$ while $1001011$ belongs to $V_3$). \begin{figure*} \caption{In this figure we illustrate the performance of $\mathcal{W}_{Cl_n}$: its white noise tolerance $p_{Cl_n}$ is shown for an increasing number of qubits $n$ and compared with that of the best known class of EWs of Ref.\ \cite{PhysRevLett.106.190502}.} \label{fig:fig1} \end{figure*} The white noise tolerance $p_{Cl_n}$ of $\mathcal{W}_{Cl_n}$ is presented in Fig.\ \ref{fig:fig1}. It is observed that $\mathcal{W}_{Cl_n}$ can outperform the best known class of EWs provided in Ref.\ \cite{PhysRevLett.106.190502} for $n>5$. Meanwhile, the white noise tolerance $p_{Cl_n}$ exhibits the same asymptotic behavior as in the first example, tending to $1$ for large $n$. We remark that while the resulting EWs are quite robust, they are not optimal. In fact, the optimality of the bipartite EWs employed in the construction is not sufficient to guarantee the optimality of the resulting GME witness. For some explicit target states, one may either analytically or numerically optimize the result, but a systematic and operational improvement of this framework remains an open question. A brief discussion on this issue is provided at the end of Appendix.\ \ref{sec:appendix graph state}. \subsection{Multipartite states admitting Schmidt decomposition} In the above examples, we have benchmarked our method against some well-studied states. We now turn to less investigated states, for which this method remains powerful. A typical class is that of multipartite states admitting a Schmidt decomposition. Without loss of generality, such a state takes the form $|\phi_s\rangle=\sum_{i=0}^{d-1}\sqrt{\lambda_i}|i\rangle^{\otimes n}$, where the $\lambda_i$ are in decreasing order. This class of states includes high-dimensional GHZ states $ |GHZ_n^d\rangle = \sum_{i=0}^{d-1} |i\rangle^{\otimes n}/\sqrt{d}$ as a typical case in which all the Schmidt coefficients are equal. For the multipartite states admitting Schmidt decomposition, our method leads to a class of optimal EWs (see Appendix.\ \ref{sec:appendix GHZ state} for a proof), \begin{equation}\label{eq:eg3} \mathcal{W}_{|\phi_s\rangle}=\sum_{\substack{i,j=0,\\i< j}}^{d-1} \sum_{r=1}^{n-1}\sum_m \sqrt{\lambda_i\lambda_j}\pi_m(|i\rangle^{\otimes r}|j\rangle^{\otimes n-r})\pi_m(\langle i|^{\otimes r}\langle j|^{\otimes n-r})+\sum_{i=0}^{d-1}\lambda_i|i\rangle\langle i|^{\otimes n}-|\phi_s\rangle\langle\phi_s|, \end{equation} where $\pi_m(|i\rangle^{\otimes r}|j\rangle^{\otimes n-r})$ is a permutation of $|i\rangle^{\otimes r}|j\rangle^{\otimes n-r}$ and the summation over $m$ runs over all possible permutations. \begin{figure*} \caption{In sub-figures (a), (b), (c) and (d), we show how the white noise tolerance of different EWs varies with an increasing qudit number $n$, for $d=3,~4,~5,~6$ respectively. The target state is the $d$-dimensional GHZ state, which belongs to the class of states considered in this example. In each sub-figure, the white noise tolerance of the GME witness in Eq.\ (\ref{eq:eg3}) is compared with that of the fidelity-based witness.} \label{fig:ghz} \end{figure*}
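As a quick numerical sanity check of Eq.\ (\ref{eq:eg3}), the following sketch builds $\mathcal{W}_{|\phi_s\rangle}$ for a small case and locates the value of $p$ at which $\mathrm{Tr}[\mathcal{W}_{|\phi_s\rangle}\,\rho(p)]$ changes sign. It is an illustrative script rather than part of the derivation; it assumes NumPy, and all helper names (such as \texttt{ket}) are ours.
\begin{verbatim}
import itertools
import numpy as np

n, d = 3, 3                      # small test case: three qutrits
lam = np.ones(d) / d             # equal Schmidt coefficients -> |GHZ_3^3>
dim = d ** n

def ket(s):
    """Computational-basis ket |s_1 s_2 ... s_n> as a length-d^n vector."""
    v = np.zeros(dim)
    v[int(''.join(map(str, s)), d)] = 1.0
    return v

# |phi_s> = sum_i sqrt(lam_i) |i>^{(x)n}
phi = sum(np.sqrt(lam[i]) * ket([i] * n) for i in range(d))

# W = sum_{i<j} sqrt(lam_i lam_j) (strings mixing i and j)
#     + sum_i lam_i |i...i><i...i|  -  |phi_s><phi_s|
W = -np.outer(phi, phi)
for i in range(d):
    W += lam[i] * np.outer(ket([i] * n), ket([i] * n))
for i, j in itertools.combinations(range(d), 2):
    for s in itertools.product([i, j], repeat=n):
        if i in s and j in s:                  # both symbols must appear
            W += np.sqrt(lam[i] * lam[j]) * np.outer(ket(s), ket(s))

# white noise tolerance: Tr[ W (p I/d^n + (1-p)|phi><phi|) ] = 0
on_state = phi @ W @ phi                       # = sum_i lam_i^2 - 1 < 0
p_star = -on_state / (np.trace(W) / dim - on_state)
print(p_star)   # should reproduce the closed-form expression quoted below
\end{verbatim}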
The white noise tolerance of $\mathcal{W}_{|\phi_s\rangle}$ is $p_{\mathcal{W}_{|\phi_s\rangle}}=(1-\sum_{i=0}^{d-1}\lambda_i^2)/(1-\sum_{i=0}^{d-1}\lambda_i^2+\frac{2^{n-1}-1}{d^n}((\sum_{i=0}^{d-1}\sqrt{\lambda_i})^2-1))$, which tends to $1$ for large $n$ when $d > 2$. As a comparison, the best-known GME witness for this kind of state comes from the fidelity-based method, with $\mathcal{W}_F^{|\psi_s\rangle} = \lambda_0 I- |\phi_s\rangle\langle\phi_s|$. The white noise tolerance of $\mathcal{W}_F^{|\psi_s\rangle}$ is $(1-\lambda_0)/(1-1/d^n)$, which tends to $1-\lambda_0\le 1-1/d$ with an increasing system size. For the special case of $n$-qudit GHZ states $|GHZ_n^d\rangle$, the performance of our construction and of the fidelity-based method is compared in Fig.\ \ref{fig:ghz}, where a significant improvement is demonstrated. Note that for $n$-qubit GHZ states $|GHZ_n\rangle$, the fidelity witness is already optimal, and hence we start from the local dimension $d=3$ in Fig.\ \ref{fig:ghz}. \subsection{The four-qubit singlet state} Multi-qubit singlet states are another interesting family of multi-qubit states. They are invariant under a simultaneous unitary rotation on all qubits ($U^{\otimes n}|\psi\rangle\langle \psi| {U^{\dag}}^{\otimes n} =|\psi\rangle\langle \psi|$). In the four-qubit case, all four-qubit singlet states live in a two-dimensional subspace of the whole Hilbert space. Without loss of generality, such a state can be written as \begin{equation} |\varphi_4\rangle = a |\psi_{12}^-\rangle\otimes |\psi_{34}^-\rangle + e^{i\theta}b |\psi_{13}^-\rangle\otimes |\psi_{24}^-\rangle, \end{equation} with the constraint $a^2+b^2+\cos(\theta)ab=1$ and $|\psi_{12}^-\rangle$ being the two-qubit singlet state $(|01\rangle-|10\rangle)/\sqrt{2}$ on the first two qubits. In particular, for the choice $\theta = \pi/2$, one arrives at a class of four-qubit singlet states determined by a single parameter, $|\varphi_4(a)\rangle = a |\psi_{12}^-\rangle\otimes |\psi_{34}^-\rangle + \sqrt{1-a^2} |\psi_{13}^-\rangle\otimes |\psi_{24}^-\rangle$ with $a \in [-1,1]$. For this class of states $|\varphi_4(a)\rangle$, our framework results in the following GME witness \begin{equation} \mathcal{W}_4(a)= \alpha \mathcal{P}^4_2 +\frac{1}{2}(\mathcal{P}^4_1 + \mathcal{P}^4_3) +\frac{1}{4}(\mathcal{P}^4_0 + \mathcal{P}^4_4)- |\varphi_4\rangle\langle \varphi_4|, \end{equation} where $\alpha = \max\{(1+3(1-a^2))/4, (1+3a^2)/4\} \ge 5/8$. For comparison, the fidelity-based witness for such a state is $\mathcal{W}_4'(a) = \alpha I - |\varphi_4\rangle\langle \varphi_4|$. In Appendix.\ \ref{sec:appendix singlet}, a further discussion of entanglement detection for multi-qubit singlet states based on our framework is provided. To summarize, we have provided a generic framework for detecting an arbitrary target GME state in noisy systems by constructing robust GME witnesses. Firstly, by benchmarking its performance on some well-studied states, it is observed that this framework results in robust GME witnesses that perform comparably to the current best witnesses for these states. For other, less investigated states, the most widely used method to construct EWs is the fidelity-based method.
As shown in these examples, our framework can provide a significant improvement compared with the fidelity-based method. This also leads to the conjecture that a large number of pure GME states become fairly robust to noise as the system size increases. Secondly, the advantage of our framework over the fidelity-based method comes with no experimental overhead. This benefits from the fact that the $\sum_k c_k \tilde{I}_k$ term in this construction is usually diagonal in some well-defined basis, such as the graph state basis or the computational basis. Finally, it should be stressed that Theorem 1 can be applied not only to the class of bipartite EWs shown in Eq.\ (\ref{eq:obEW}), but also to other classes of bipartite EWs. This potentially results in different GME witnesses. A further example is provided in Appendix.\ \ref{sec:appendix generalization}. \section{Applications of the resulting GME witnesses} \subsection{Detection of unfaithfulness} Unfaithful entangled states are a large class of states that cannot be recognized by any fidelity witness and have attracted both theoretical and experimental interest \cite{PhysRevLett.126.140503,zhan2020detecting,PhysRevA.103.042417,PhysRevLett.127.220501}. Therefore, given that we can now construct finer GME witnesses than the fidelity-based ones, it is natural to investigate their ability to detect unfaithful GME states. In general, deciding whether an entangled state is unfaithful is a nontrivial task, since one has to prove that the state is not detected by any fidelity witness, rather than by a particular one. In the bipartite case, a necessary and sufficient criterion for a state $\rho_{AB}$ to be unfaithful has been proposed \cite{PhysRevLett.126.140503}, while in the multipartite case the characterization of unfaithfulness remains an open question. To avoid this difficulty and still verify that an EW indeed detects unfaithfulness, we limit our attention to a special class of states, $\rho(p)=p I/d^n+(1-p) \rho_0$, and exploit the fact that there is an upper bound on the white noise tolerance of any fidelity witness for an arbitrary state. Denote by $\mathcal{W}_F=\alpha I-\rho'$ an arbitrary fidelity witness; one can then derive its white noise tolerance $p_F$ for an arbitrary $\rho(p)$ by solving $Tr(\mathcal{W}_F\rho(p))=0$, which leads to \begin{equation} p_F=\max\{\frac{Tr(\rho_0\rho')-\alpha}{Tr(\rho_0\rho')-1/d^n},0\}. \end{equation} It is then straightforward to see that $p_F\le (1-1/d)/(1-1/d^n)$, due to the fact that $Tr(\rho_0\rho')\le 1$ and $\alpha \ge 1/d$. Hence it can be concluded that an EW can be employed to detect some unfaithful entangled states as long as its white noise tolerance for some state is higher than $(1-1/d)/(1-1/d^n)$. This is precisely the case for many GME witnesses constructed with our framework. For example, in the $n$-qubit case, this upper bound is $1/(2(1-1/2^n))$ and decreases to $1/2$ as $n$ grows, while our framework provides a large number of EWs with white noise tolerance converging to $1$, allowing for the certification of unfaithfulness of many $n$-qubit states. \subsection{Estimating entanglement measures} Moreover, a witness operator is useful not only for entanglement certification, but also for entanglement quantification. To start with, we briefly review the method developed in Ref.\ \cite{PhysRevLett.98.110502} for optimally estimating some entanglement measure $E$ given the expectation value of some witness operator $\mathcal{W}$.
The task can be described as finding the lower bound \begin{equation} \epsilon(w)=\inf_{\rho} \{E(\rho)|Tr(\rho \mathcal{W})=w\}, \end{equation} where the infimum is taken over all states compatible with the data $w=Tr(\rho \mathcal{W})$. Note that $\epsilon(w)$ is a convex function, and thus there exist bounds of the type \begin{equation} \epsilon(w) \ge r\cdot w-c, \end{equation} for an arbitrary $r$. By inserting $w=Tr(\rho \mathcal{W})$ and $E(\rho) \ge \epsilon(w)$, it is observed that \begin{equation} c \ge r \cdot Tr(\rho \mathcal{W})-E(\rho), \end{equation} should be satisfied for any $\rho$. Hence, given a ``slope'' $r$, the optimal constant $c$ is \begin{equation} c= \hat{E}(r\cdot \mathcal{W}):=\sup_{\rho} \{ r\cdot Tr(\rho \mathcal{W})-E(\rho)\}. \end{equation} Finally, an optimal lower bound is obtained after optimizing $r$: \begin{equation} \epsilon(w)=\sup_r \{r\cdot w - \hat{E}(r\cdot \mathcal{W})\}. \end{equation} It should be remarked that, in the following, we limit our discussion to the nontrivial case where a negative expectation value $w$ of a witness operator is observed. In this case, the optimal ``slope'' $r$ should always be negative. Now, suppose that $\mathcal{W}_2$ is a finer EW than $\mathcal{W}_1$, satisfying $\mathcal{W}_2 \preceq \mathcal{W}_1$. It is straightforward to see that \begin{equation} \hat{E}(r\cdot \mathcal{W}_1) \ge \hat{E}(r\cdot \mathcal{W}_2). \end{equation} Therefore, when these two operators $\mathcal{W}_1$ and $\mathcal{W}_2$ have the same expectation value $w_0$, \begin{equation} \begin{aligned} \epsilon_2(w_0)= &\sup_r \{r\cdot w_0 - \hat{E}(r\cdot \mathcal{W}_2)\} \\ \ge &\sup_r \{r\cdot w_0 - \hat{E}(r\cdot \mathcal{W}_1)\} \\ =& \epsilon_1(w_0). \end{aligned} \end{equation} For the same target state $\rho$, the expectation values $w_1$ and $w_2$ of these two witness operators always satisfy $w_1 \ge w_2$, which leads to $\epsilon_2(w_2) \ge \epsilon_2(w_1) \ge \epsilon_1(w_1)$. That is, a finer EW provides a tighter lower bound on the entanglement measure for the same state. Hence, our framework enables a better estimation of measures of genuine multipartite entanglement than the fidelity-based method. To quantitatively investigate the improvement brought by these new GME witnesses, we discuss the estimation of the geometric measure of genuine multipartite entanglement for noisy $n$-partite $d$-dimensional GHZ states $\rho_{n,d}(p)=pI/d^n+(1-p) |GHZ_n^d\rangle\langle GHZ_n^d|$, with $ |GHZ_n^d\rangle = \sum_{i=0}^{d-1} |i\rangle^{\otimes n}/\sqrt{d}$. For an arbitrary multipartite pure state $|\psi\rangle$, the geometric measure of GME is defined by $E_G(|\psi\rangle) = 1-\max_{|\phi_{bs}\rangle}|\langle\phi_{bs}|\psi\rangle|^2$, with $|\phi_{bs}\rangle$ being an arbitrary pure biseparable state. The geometric measure of GME is extended to mixed states by the convex roof construction \begin{equation} E_G(\rho)=\inf \limits_{p_i,|\psi_i\rangle}\sum_i p_i E_G(|\psi_i\rangle), \end{equation} where the minimization runs over all possible decompositions $\rho=\sum_i p_i|\psi_i\rangle\langle\psi_i|$. Based on the result in Ref.\ \cite{PhysRevApplied.13.054022}, one can derive a lower bound $\epsilon_f^{n,d}(p)$ for $E_G(\rho_{n,d}(p))$, \begin{equation} E_G(\rho_{n,d}(p)) \ge \epsilon_f^{n,d}(p) :=1-\gamma(S), \end{equation} where $\gamma(S)=[\sqrt{S}+\sqrt{(d-1)(d-S)}]^2/d^2$ with $S=\max \{1, d(1-p)+p/d^{n-1}\}$. This is just the lower bound related to the fidelity witness $\mathcal{W}_F=I/d-|GHZ_n^d\rangle\langle GHZ_n^d|$.
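The fidelity-based bound above is easy to evaluate. The short sketch below (illustrative only; it assumes NumPy, and the function name is ours) implements the closed-form expression for $\epsilon_f^{n,d}(p)$ just quoted.
\begin{verbatim}
import numpy as np

def eps_f(p, n, d):
    """Fidelity-based lower bound eps_f^{n,d}(p) = 1 - gamma(S) quoted above."""
    S = max(1.0, d * (1 - p) + p / d ** (n - 1))
    gamma = (np.sqrt(S) + np.sqrt((d - 1) * (d - S))) ** 2 / d ** 2
    return 1.0 - gamma

print(eps_f(0.0, 3, 3))   # pure GHZ_3^3: the bound equals 1 - 1/d = 2/3
print(eps_f(0.5, 3, 3))   # noisy state: a small but positive bound
print(eps_f(0.7, 3, 3))   # beyond p ~ 1 - 1/d the bound becomes trivial (0)
\end{verbatim}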
A finer EW is, however, accessible with our method, as shown in the previous section, namely \begin{equation} \begin{aligned} \mathcal{W}_{o,|GHZ_n^d\rangle}=&\sum_{\substack{i,j=0,\\i< j}}^{d-1} \sum_{r=1}^{n-1} \sum_m \frac{1}{d}\pi_m(|i\rangle^{\otimes r}|j\rangle^{\otimes n-r})\pi_m(\langle i|^{\otimes r}\langle j|^{\otimes n-r}) \\ &+\sum_{i=0}^{d-1}\frac{1}{d}|i\rangle\langle i|^{\otimes n}-|GHZ_n^d\rangle\langle GHZ_n^d|. \end{aligned} \end{equation} With the expectation value $w_{n,d}(p)=Tr(\rho_{n,d}(p)\mathcal{W}_{o,|GHZ_n^d\rangle})$ of this finer EW, a lower bound $\epsilon_o^{n,d}(p)$ can be derived by employing the technique developed in Ref.\ \cite{PhysRevLett.98.110502}: \begin{equation} \label{lowerbd} E_G(\rho_{n,d}(p)) \ge \epsilon_o^{n,d}(p) :=\sup \limits_{r}\left\{r\cdot w_{n,d}(p)-\hat{E}_G(r\mathcal{W}_{o,|GHZ_n^d\rangle})\right\}, \end{equation} with $r$ being a real number, and \begin{equation}\label{hatE} \hat{E}_G(r\mathcal{W}_{o,|GHZ_n^d\rangle})=\sup\limits_{|\psi\rangle}\sup\limits_{|\phi_{bs}\rangle} \left\{\langle\psi|(r\mathcal{W}_{o,|GHZ_n^d\rangle}+|\phi_{bs}\rangle\langle\phi_{bs}|)|\psi\rangle-1\right\}, \end{equation} where the maximization runs over all pure states $|\psi\rangle$ and pure biseparable states $|\phi_{bs}\rangle$. Furthermore, in this special case, it can be verified that one has to choose $|\phi_{bs}\rangle$ as a state having the largest overlap with $|GHZ_n^d\rangle$, which results in \begin{equation} \hat{E}_G(r\mathcal{W}_{o,|GHZ_n^d\rangle})=\frac{1-r}{2}+\frac{1}{2}\sqrt{(1-r)^2+4r\frac{d-1}{d}}+\frac{r}{d}-1. \end{equation} By inserting this equation into Eq.\ (\ref{lowerbd}), the lower bound $\epsilon_o^{n,d}(p)$ can be evaluated directly. \begin{figure*} \caption{In this figure, we choose $d=3$ and $n=3,~5,~7,~9$ as examples to compare the lower bound $\epsilon_o^{n,d}(p)$ obtained from our witness with the fidelity-based lower bound $\epsilon_f^{n,d}(p)$.} \label{fig:measure} \end{figure*} In Fig.\ \ref{fig:measure}, we show the results for $d=3$ and $n=3,~5,~7,~9$ to illustrate the performance of our method with an increasing system size. As the number of subsystems grows, the critical value of $p$ at which the lower bound $\epsilon_o^{n,d}(p)$ vanishes tends to $1$. Meanwhile, $\epsilon_o^{n,d}(p)$ is always larger than the bound $\epsilon_f^{n,d}(p)$ above, which vanishes at $p=1-1/d$ for large $n$. That is, the new EWs $\mathcal{W}_{o,|GHZ_n^d\rangle}$ are able to provide a better estimate of the geometric measure of GME for $\rho_{n,d}(p)$. It remains open whether $\epsilon_o^{n,d}(p)$ equals $E_G(\rho_{n,d}(p))$. However, it is still reasonable to expect that such new GME witnesses can provide faithful estimates of entanglement measures without the need for quantum tomography, as they are already sufficiently robust. \section{Conclusion and outlook} In summary, we have developed a generic and practical framework for genuine multipartite entanglement detection, and demonstrated its operability and universality by applying it to typical GME states that arise in practice. In particular, this is achieved using a novel method to bring any complete set of bipartite EWs to a single GME witness. This method allows one to make full use of prior information about the target state to improve the noise resistance. In fact, the resulting GME witnesses turn out to be quite robust, with white noise tolerances converging to $1$ in many cases.
As a consequence, this framework holds great practical potential in real-life situations, especially for detecting entanglement in noisy multipartite or high-dimensional systems. This will play a certain role in facilitating the solution of the very challenging problem of genuine multipartite entanglement detection. In addition to genuine multipartite entanglement, we remark that our method is highly flexible and admits natural generalizations for detecting other types of entanglement. A relevant case is entanglement detection in quantum networks, which is currently under active investigations. In quantum networks, multipartite entanglement exhibits novel features due to the complex network topology \cite{PhysRevLett.125.240505,PhysRevA.103.L060401,PhysRevLett.128.220501}, and better techniques are urgently needed for the characterization of genuine network multipartite entanglement. Finally, it will also be interesting to seek for further extension of the framework in high-order entanglement detection \cite{PhysRevLett.105.210504} as well as bound entanglement detection. \section*{Acknowledgments} We thank Yi-Zheng Zhen for very valuable discussion. This work has been supported by the National Natural Science Foundation of China (Grants No. 62031024, 11874346, 12174375), the National Key R$\&$D Program of China (2019YFA0308700), the Anhui Initiative in Quantum Information Technologies (AHY060200), and the Innovation Program for Quantum Science and Technology (No. 2021ZD0301100). \appendix \section*{Appendix} \section{Proof and discussions of the bipartite EW in Eq. (5)}\label{sec:appendix bipartite EW} \subsection {A class of bipartite entanglement witness} Let $|\phi\rangle$ be an arbitrary pure entangled state in the $d\times d$ dimensional Hilbert space $\mathcal{H}_d\otimes\mathcal{H}_d$. Without loss of generality, one can assume $|\phi\rangle=\sum_{i=0}^{d-1}\sqrt{\lambda_i}|ii\rangle$, where all $\lambda_i\ge 0$ are Schmidt coefficients in decreasing order and $\sum_i \lambda_i=1$. One can define a positive operator $Q$ as \begin{equation}\label{eq:Q} Q=\sum_{\substack{i,j=0,\\i<j}}^{d-1} \sqrt{\lambda_i\lambda_j}(|ij\rangle-|ji\rangle)(\langle ij|-\langle ji|), \end{equation} which can be used for constructing an EW for $|\phi\rangle$. \begin{lemma} The partial transpose of $Q$ provides an optimal EW $\mathcal{W}_o^{|\phi\rangle}$, which $\mathcal{W}_o^{|\phi\rangle}$ reads \begin{equation} \mathcal{W}_o^{|\phi\rangle}=Q^{\Gamma}=\sum_{i,j=0}^{d-1} \sqrt{\lambda_i\lambda_j}|ij\rangle\langle ij|-|\phi\rangle\langle\phi| \end{equation} \end{lemma} \begin{proof} To prove that the $\mathcal{W}_o^{|\phi\rangle}$ is an EW, note that it is of the form $Q^{\Gamma}$ with $Q$ being positive semidefinite ($Q\succeq 0$). Thus for all separable states $Tr(\rho_{sep}\mathcal{W}_o^{|\phi\rangle}) = Tr(\rho_{sep}^{\Gamma}Q)\ge 0$. Meanwhile, $Tr(\mathcal{W}_o^{|\phi\rangle}|\phi\rangle\langle\phi|)=\sum_{i}\lambda_i^2-1=-\sum_{i\ne j}\lambda_i\lambda_j<0$. Then $\mathcal{W}_o^{|\phi\rangle}$ is an EW by definition. To show the optimality of $\mathcal{W}_o^{|\phi\rangle}$, it is sufficient to prove that the set of pure separable states $\{|\phi_1\rangle\otimes|\phi_2\rangle\}$ satisfying $\langle\phi_1|\langle\phi_2|\mathcal{W}_o|\phi_1\rangle|\phi_2\rangle=0$ span the whole Hilbert space $\mathcal{H}_d\otimes\mathcal{H}_d$ \cite{PhysRevA.62.052310}. For qubit case, one has $\mathcal{W}_o^{(2)}=\sqrt{\lambda_0\lambda_1}(|01\rangle-|10\rangle)(\langle 01|-\langle10|)^{\Gamma}$. 
It is easy to verify that the set of separable states $\{|00\rangle,~(|0\rangle+|1\rangle)(|0\rangle+|1\rangle)/2,~(|0\rangle+i|1\rangle)(|0\rangle-i|1\rangle)/2,~|11\rangle\}$ satisfying $Tr(\rho_{sep}\mathcal{W}_o^{(2)}) = 0$. This set of states span the whole Hilbert space $\mathcal{H}_2\otimes\mathcal{H}_2$. In fact, it has been shown that any decomposable EW acting on $\mathcal{H}_2\otimes\mathcal{H}_d$ is optimal iff it takes the form $\mathcal{W}=Q^{\Gamma}$ for some $Q\succeq 0$ \cite{Augusiak_2011}. Similarly, in the qudit case ($d>2$), there exist separable states $\{|ee\rangle,~(|e\rangle+|f\rangle)(|e\rangle+|f\rangle)/2,~(|e\rangle+i|f\rangle)(|e\rangle-i|f\rangle)/2,~|ff\rangle\}$ satisfying $Tr(\rho_{sep}\mathcal{W}_o^{|\phi\rangle})=0$, for each pair $0\le e< f \le d-1$. These states span the same space with $\{|ee\rangle,~|ef\rangle,~|fe\rangle,~|ff\rangle\}$. By iterating over all $e<f$, one ends up with a set of separable states spanning the whole space $\mathcal{H}_d\otimes\mathcal{H}_d$. Thus the EW $\mathcal{W}_o^{|\phi\rangle}$ is optimal. This finishes the proof. \end{proof} \subsection{Detection of bipartite unfaithful state} Remarkably, for the $|\phi\rangle$, the most widely used fidelity witness reads $\mathcal{W}_F^{|\phi\rangle} = \lambda_0 I -|\phi\rangle\langle\phi|$. It is straightforward to observe that $\mathcal{W}_F^{|\phi\rangle} -\mathcal{W}_o^{|\phi\rangle}\succeq 0$, which means that the $\mathcal{W}_o^{|\phi\rangle}$ is finer than the $\mathcal{W}_F^{|\phi\rangle}$. This leads to a byproduct that the $\mathcal{W}_o^{|\phi\rangle}$ can detect unfaithful states. Unfaithful states are entangled states which can not be detected by all fidelity witnesses \cite{PhysRevLett.124.200502}, namely, an entangled state $\rho$ is unfaithful if and only if $Tr(\rho W_F^{|\psi\rangle}) \ge 0$ for all $|\psi\rangle$. Therefore, the relationship $\mathcal{W}_F^{|\phi\rangle} -\mathcal{W}_o^{|\phi\rangle} \succeq 0$ itself is not sufficient to demonstrate that the extra entangled states detected by $\mathcal{W}_o^{|\phi\rangle}$ is unfaithful. And a further clarification is required to justify the statement that $\mathcal{W}_o^{|\phi\rangle}$ detects unfaithful state. Now we would like to provide qualitative and quantitative characterization on the ability to detect unfaithfulness of the $\mathcal{W}_{o}^{|\phi\rangle}$. Consider the class of states $\rho_{|\phi\rangle}(p)=pI/d^2+(1-p)|\phi\rangle\langle\phi|$. From the Observation 1 in Ref.\ \cite{PhysRevLett.126.140503}, it is known that such states are faithful if and only if it is detected by $\mathcal{W}_m=I/d-|\phi_d^+\rangle\langle\phi_d^+|$, with $|\phi_d^+\rangle$ being the maximally entangled state $\sum_{i=0}^{d-1}1/\sqrt{d}|ii\rangle$. By solving $Tr(\rho_{|\phi\rangle}(p)\mathcal{W}_{m})=0$, we obtain that the white noise tolerance of $\mathcal{W}_m$ for $|\phi\rangle$ is \begin{equation} p_f^{|\phi\rangle}=\frac{\sum_{i,j=0}^{d-1}\sqrt{\lambda_i\lambda_j}-1}{\sum_{i,j=0}^{d-1}\sqrt{\lambda_i\lambda_j}-\frac{1}{d}}. \end{equation} That is, $\rho_{|\phi\rangle}(p)$ is faithful when $p<p_f^{|\phi\rangle}$. Similarly, one can obtain the white noise tolerance of $\mathcal{W}_o^{|\phi\rangle}$ for $|\phi\rangle$, which is \begin{equation} p_o^{|\phi\rangle}=\frac{1-\sum_{i=0}^{d-1}\lambda_i^2}{1-\sum_{i=0}^{d-1}\lambda_i^2+\frac{1}{d^2}(\sum_{i,j=0}^{d-1}\sqrt{\lambda_i\lambda_j}-1)}. 
\end{equation} It can be observed that \begin{equation} \begin{aligned} \frac{1/p_{o}^{|\phi\rangle}-1}{1/p_{f}^{|\phi\rangle}-1}=&\frac{(\sum_{i,j=0}^{d-1}\sqrt{\lambda_i\lambda_j}-1)^2}{d(d-1)(1-\sum_i \lambda_i^2)} \\ =&1+\frac{(\sum_{i\ne j}\sqrt{\lambda_i\lambda_j})^2-d(d-1)(1-\sum_i \lambda_i^2)}{d(d-1)(1-\sum_i \lambda_i^2)} \\ \le&1+\frac{d(d-1)(\sum_{i\ne j}\lambda_i\lambda_j)-d(d-1)(1-\sum_i \lambda_i^2)}{d(d-1)(1-\sum_i \lambda_i^2)} \\ =&1+\frac{d(d-1)((\sum_{i=0}^{d-1}\lambda_i)^2-1)}{d(d-1)(1-\sum_i \lambda_i^2)}=1. \end{aligned} \end{equation} In other words, $p_{f}^{|\phi\rangle} \le p_{o}^{|\phi\rangle}$, where the inequality comes from the Cauchy–Schwarz inequality and becomes an equality if $d=2$ or $\lambda_i=1/d$ for all $i$. Therefore, $\mathcal{W}_o^{|\phi\rangle}$ can always detect the unfaithful states $\rho_{|\phi\rangle}(p)$ for $p\in[p_f^{|\phi\rangle},~p_o^{|\phi\rangle})$, unless $|\phi\rangle$ is a two-qubit state or a maximally entangled state. \begin{table}[t] \centering \begin{tabular}{|c |c |c |c |c |c |} \hline d &3 &4&5&6&7 \\ \hline $l_d$&0.2679&0.4202&0.5195&0.5896&0.6624\\ \hline \end{tabular} \caption{ Maximal unfaithful length $l_d$ from the class of EWs $\mathcal{W}_o^{|\phi\rangle}$. We remark that the optimization of $l_d$ may arrive at a local maximum. We use enough random starting points to support the claim that we arrive at the global maximum. } \label{tab:unfaithful_max} \end{table} As a quantitative investigation, we numerically maximize the interval length $l_d$ of $[p_f^{|\phi\rangle},~p_o^{|\phi\rangle})$ over all $|\phi\rangle$ for different local dimensions $d$. We name $l_d$ the maximal unfaithful length from the class of EWs $\mathcal{W}_o^{|\phi\rangle}$, and the results are listed in Table.\ \ref{tab:unfaithful_max} for $d=3,4,\cdots,7$. It can be seen that $l_d$ grows significantly with an increasing dimension $d$, indicating that $\mathcal{W}_o^{|\phi\rangle}$ can greatly outperform the fidelity witness. This is also in agreement with the statement that most states are unfaithful, as claimed in Ref.\ \cite{PhysRevLett.124.200502}. Besides $l_d$, one may also be interested in the average performance of this new class of EWs in unfaithfulness detection. For such a comparison, it is natural to consider the two intervals $[p_f^{|\phi\rangle}, p_e^{|\phi\rangle})$ and $[p_f^{|\phi\rangle}, p_o^{|\phi\rangle})$, where $p_e^{|\phi\rangle}$ is the critical value at which $\rho_{|\phi\rangle}(p)$ becomes separable. The former interval contains all unfaithful $\rho_{|\phi\rangle}(p)$, while the latter contains the part that can be detected by the class of $\mathcal{W}_o^{|\phi\rangle}$. One can then use $avg_{|\phi\rangle} (p_o^{|\phi\rangle}-p_f^{|\phi\rangle})/(p_e^{|\phi\rangle}-p_f^{|\phi\rangle})$ to evaluate the average performance of $\mathcal{W}_o^{|\phi\rangle}$ for detecting unfaithfulness, as shown in Table.\ \ref{tab:unfaithful_avg}. It is observed that a large fraction of the unfaithful states is detected. This is also the premise for the GME witnesses constructed from this class of bipartite EWs to detect multipartite unfaithful states.
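The averages reported in Table \ref{tab:unfaithful_avg} below can be reproduced, at a reduced sample size, with a short script of the following kind. This is a hedged sketch assuming NumPy; the helper names are ours, and the expression for $p_e^{|\phi\rangle}$ is the one quoted in the caption of that table.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def thresholds(lam):
    """p_f, p_o, p_e for |phi> = sum_i sqrt(lam_i)|ii>, lam sorted decreasingly."""
    d = len(lam)
    s = np.sqrt(lam)
    sum_rt = np.sum(np.outer(s, s))            # sum_{ij} sqrt(lam_i lam_j)
    p_f = (sum_rt - 1) / (sum_rt - 1 / d)
    p_o = (1 - np.sum(lam ** 2)) / (1 - np.sum(lam ** 2) + (sum_rt - 1) / d ** 2)
    p_e = d ** 2 * s[0] * s[1] / (1 + d ** 2 * s[0] * s[1])
    return p_f, p_o, p_e

def random_schmidt(d):
    """Random Schmidt vector: a uniform point on the unit sphere, squared, sorted."""
    v = rng.normal(size=d)
    return np.sort((v / np.linalg.norm(v)) ** 2)[::-1]

d, samples = 3, 10000          # 10^7 samples were used for the table; fewer here
ratios = []
for _ in range(samples):
    p_f, p_o, p_e = thresholds(random_schmidt(d))
    if p_e > p_f:              # the unfaithful interval is non-empty
        ratios.append((p_o - p_f) / (p_e - p_f))
print(np.mean(ratios))         # should be close to the d = 3 entry of the table
\end{verbatim}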
\begin{table}[hbtp] \centering \begin{tabular}{|c |c |c |c |c |c |} \hline d &3 &4&5&6&7 \\ \hline $avg_{|\phi\rangle} (p_o^{|\phi\rangle}-p_f^{|\phi\rangle})$ &0.0804&0.0969&0.0963&0.0909&0.0848\\ \hline $avg_{|\phi\rangle} (p_e^{|\phi\rangle}-p_f^{|\phi\rangle})$ &0.1190&0.1460&0.1457&0.1379&0.1286\\ \hline $avg_{|\phi\rangle} \frac{p_o^{|\phi\rangle}-p_f^{|\phi\rangle}}{p_e^{|\phi\rangle}-p_f^{|\phi\rangle}}$ &0.5605&0.5937&0.6089&0.6181&0.6248\\ \hline \end{tabular} \caption{ Average performance of $\mathcal{W}_o^{|\phi\rangle}$ for detecting unfaithfulness. For different local dimension $d$, the average is taken by randomly generating $10^7$ pure states in $\mathcal{H}_d \otimes \mathcal{H}_d$. Since any pure bipartite state admits a Schmidt decomposition $|\phi\rangle = \sum_i \sqrt{\lambda_i} |ii\rangle$, we replace the randomly generated pure bipartite states with random vectors $(\sqrt{\lambda_0},\cdots,\sqrt{\lambda_{d-1}})$ uniformly distributed on the $d$-dimensional unit sphere. Moreover, the critical value $p_e^{|\phi\rangle}$ is $\frac{d^2\sqrt{\lambda_0\lambda_1}}{1+d^2\sqrt{\lambda_0\lambda_1}}$ according to the results in Ref.\ \cite{PhysRevA.59.141}, assuming that the Schmidt coefficients $\lambda_i$ are in decreasing order.} \label{tab:unfaithful_avg} \end{table} \subsection{Generalization of Lemma 1}\label{sec:appendix generalization} Finally, we provide a generalization of Lemma 1. For the above entangled state$|\phi\rangle$, one can construct another positive operator \begin{equation}\label{eq:general_Q} \tilde{Q}=\sum_{\substack{i,j=0,\\i<j}}^{d-1} (\alpha_{ij}|ij\rangle-\beta_{ij}|ji\rangle)(\alpha_{ij}\langle ij|-\beta_{ij}\langle ji|), \end{equation} instead of $Q$, where $\alpha_{ij}\beta_{ij}=\sqrt{\lambda_i\lambda_j}$ and the $\alpha_{ij},\beta_{ij}$ are all positive. The operator $\tilde{\mathcal{W}}_o=\tilde{Q}^{\Gamma}$ is also optimal EW and applicable in our framework for GME witness construction. Here, the proof of the optimality of $\tilde{\mathcal{W}}_o$ is similar to the case in Lemma 1. It is sufficient to verify that the set of state $\{|ee\rangle,~|ff\rangle,~(\sqrt{\beta_{ef}}|e\rangle + \sqrt{\alpha_{ef}}|f\rangle)\otimes(\sqrt{\alpha_{ef}}|e\rangle + \sqrt{\beta_{ef}}|f\rangle),~(\sqrt{\beta_{ef}}|e\rangle + i\sqrt{\alpha_{ef}}|f\rangle)\otimes(\sqrt{\alpha_{ef}}|e\rangle - i\sqrt{\beta_{ef}}|f\rangle)\}_{e,f=0}^{d-1}$ have zero expectation value when measured with $\tilde{\mathcal{W}}_o$, and span the whole $d^2$-dimensional Hilbert space. \section{Proof of the examples} In this section, we will show explicitly how this construction can be applied to some commonly used multipartite entangled states, and make further discussions on the results. \subsection{$W$-state}\label{sec:appendix w state} The $W$-state is an important class of multiqubit entangled states. A class of EWs for $W$-state which can outperform significantly than the fidelity witness has been proposed in Ref.\ \cite{Bergmann_2013}. In Ref.\ \cite{Bergmann_2013}, the authors construct an operator at first, and then prove that this operator is decomposable bipartite EW with respect to all possible bipartitions. While our construction is in the opposite direction. We construct a complete set of bipartite EWs for $W$-state, and lift them to a GME witness. Although different method has been used, our construction recovers the result in Ref.\ \cite{Bergmann_2013}. 
We start with the simplest 3-qubit case, where the target state is \begin{equation} |W_3\rangle=\frac{1}{\sqrt{3}}(|001\rangle+|010\rangle+|100\rangle). \end{equation} For the bipartition $1|23$, the EW constructed from Lemma 1 is of the form \begin{equation} \begin{aligned} \mathcal{W}_{1|23}^{|W_3\rangle} &=\frac{\sqrt{2}}{3}(|000\rangle\langle000|+|1\psi^+\rangle\langle1\psi^+|) \\ &~~~~+\frac{2}{3}|0\psi^+\rangle\langle0\psi^+|+\frac{1}{3}|100\rangle\langle100|-|W_3\rangle\langle W_3|, \end{aligned} \end{equation} with $|\psi^+\rangle=(|01\rangle+|10\rangle)/\sqrt{2}$. For the other two bipartitions, $\mathcal{W}_{2|13}$ and $\mathcal{W}_{3|12}$ can be obtained by permuting the qubits. Then, for $|W_3\rangle$, the set $\mathcal{S}$ reads \begin{equation} \begin{aligned} \mathcal{S}=\{&|000\rangle\langle000|,|001\rangle\langle001|,~|010\rangle\langle010|,~|100\rangle\langle100|,\\ &|0\psi^+\rangle\langle0\psi^+|,~|0_2\psi_{13}^+\rangle\langle0_2\psi_{13}^+|,~|\psi^+0\rangle\langle\psi^+0|,\\ &|1\psi^+\rangle\langle1\psi^+|,~|1_2\psi_{13}^+\rangle\langle1_2\psi_{13}^+|,~|\psi^+1\rangle\langle\psi^+1|\}. \end{aligned} \end{equation} The states in $\mathcal{S}$ can be grouped into 3 subsets according to the procedure in the main text: \begin{equation} \begin{aligned} \mathcal{S}_1&=\{|000\rangle\langle000|\},\\ \mathcal{S}_2&=\{|001\rangle\langle001|,~|010\rangle\langle010|,~|100\rangle\langle100|,\\ &~~~~~~~|0\psi^+\rangle\langle0\psi^+|,~|0_2\psi_{13}^+\rangle\langle0_2\psi_{13}^+|,~|\psi^+0\rangle\langle\psi^+0|\}\\ \mathcal{S}_3&=\{|1\psi^+\rangle\langle1\psi^+|,~|1_2\psi_{13}^+\rangle\langle1_2\psi_{13}^+|,~|\psi^+1\rangle\langle\psi^+1|\}, \end{aligned} \end{equation} and the corresponding coefficients $\alpha_k$ in Theorem 1 are \begin{equation} \alpha_1=\sqrt{2}/3, ~\alpha_2=2/3,~\alpha_3=\sqrt{2}/3, \end{equation} respectively. This results in the GME witness \begin{equation} \begin{aligned} \mathcal{W}_{|W_3\rangle}=&\frac{\sqrt{2}}{3}(|000\rangle\langle000|+|101\rangle\langle101|+|011\rangle\langle011|+|110\rangle\langle110|)\\ &+\frac{2}{3}(|001\rangle\langle001|+|010\rangle\langle010|+|100\rangle\langle100|)-|W_3\rangle\langle W_3|. \end{aligned} \end{equation} Moreover, by employing the generalization of Lemma 1 in Eq.\ (\ref{eq:general_Q}), one obtains \begin{equation} \mathcal{W}'_{1|23}=\left[(a|0\rangle|00\rangle-b|1\rangle|\psi^+\rangle)(a\langle0|\langle00|-b\langle1|\langle\psi^+|)\right]^{\Gamma_1}, \end{equation} where $a$ and $b$ are positive numbers satisfying $ab=\sqrt{2}/3$. The other two bipartite EWs are obtained immediately by rearranging the qubits. For this set of bipartite EWs, the EW $\mathcal{W}_{|W_3\rangle}$ can be generalized into \begin{equation} \begin{aligned} \mathcal{W}_{|W_3\rangle}'=&a^2|000\rangle\langle000|+b^2(|101\rangle\langle101|+|011\rangle\langle011|+|110\rangle\langle110|)\\ &+\frac{2}{3}(|001\rangle\langle001|+|010\rangle\langle010|+|100\rangle\langle100|)-|W_3\rangle\langle W_3|. \end{aligned} \end{equation} In the $n$-qubit case, if the subsystem $A$ contains $m$ qubits, the corresponding bipartite EW from Lemma 1 is of the form ($1\le m\le n-1$) \begin{equation} \mathcal{W}_{m|n-m}=\sqrt{\frac{m(n-m)}{n^2}}\left(|\psi\rangle_{m|n-m}\langle\psi|\right)^{\Gamma_A}, \end{equation} where $|\psi\rangle_{m|n-m}={|0\rangle^{\otimes m}}_A{|0\rangle^{\otimes n-m}}_{\bar{A}}-|W_m\rangle_A|W_{n-m}\rangle_{\bar{A}}$.
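Before moving on to the general $n$-qubit case, the three-qubit construction above can be checked numerically. The sketch below is illustrative only: it assumes NumPy, all helper names are ours, and it simply (i) builds the Lemma-1 bipartite EWs of $|W_3\rangle$ from Schmidt decompositions, (ii) assembles the diagonal witness $\mathcal{W}_{|W_3\rangle}$ stated above, and (iii) checks that $\mathcal{W}_{|W_3\rangle}-\mathcal{W}_{A|\bar{A}}\succeq 0$ for every bipartition while $\langle W_3|\mathcal{W}_{|W_3\rangle}|W_3\rangle<0$.
\begin{verbatim}
import numpy as np

def schmidt_ew(psi, sysA, dims):
    """Lemma-1 witness sum_{ij} sqrt(l_i l_j)|u_i v_j><u_i v_j| - |psi><psi|
    for the bipartition sysA | rest (a plain numerical sketch, real states)."""
    n = len(dims)
    sysB = [k for k in range(n) if k not in sysA]
    perm = list(sysA) + sysB
    dA = int(np.prod([dims[k] for k in sysA]))
    dB = int(np.prod([dims[k] for k in sysB]))
    M = psi.reshape(dims).transpose(perm).reshape(dA, dB)
    U, s, Vh = np.linalg.svd(M)              # Schmidt decomposition
    inv = np.argsort(perm)                   # to undo the qubit reordering
    W = -np.outer(psi, psi)
    for i, li in enumerate(s ** 2):
        for j, lj in enumerate(s ** 2):
            if li > 1e-12 and lj > 1e-12:
                v = np.kron(U[:, i], Vh[j, :])           # |u_i>|v_j>, A first
                v = v.reshape([dims[k] for k in perm]).transpose(inv).ravel()
                W += np.sqrt(li * lj) * np.outer(v, v)
    return W

# |W_3> and the witness stated above (diagonal part in the computational basis)
w3 = np.zeros(8); w3[[1, 2, 4]] = 1 / np.sqrt(3)
diag = np.zeros(8)
diag[[1, 2, 4]] = 2 / 3                      # single-excitation strings
diag[[0, 3, 5, 6]] = np.sqrt(2) / 3          # |000> and two-excitation strings
W_gme = np.diag(diag) - np.outer(w3, w3)

print(w3 @ W_gme @ w3)                       # -1/3 < 0: |W_3> is detected
for sysA in ([0], [1], [2]):
    W_bip = schmidt_ew(w3, sysA, (2, 2, 2))
    print(sysA, np.linalg.eigvalsh(W_gme - W_bip).min() >= -1e-9)
\end{verbatim}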
Then the set $\mathcal{S}$ for $|W_n\rangle$ can still be grouped into 3 subsets: \begin{equation} \{|0^{\otimes n}\rangle\},~\{|\pi_m(0^{\otimes n-1}1)\rangle\},~\{|\pi_m'(0^{\otimes n-2}1^{\otimes 2})\rangle\}, \end{equation} with the corresponding coefficients $\alpha_k$ being \begin{equation} \begin{aligned} \alpha_1&=\max_m \sqrt{\frac{m(n-m)}{n^2}}=\frac{\sqrt{\lfloor n/2\rfloor(n-\lfloor n/2\rfloor)}}{n},\\ \alpha_2&=\max_m \frac{n-m}{n}=\frac{n-1}{n}, \\ \alpha_3&=\max_m \sqrt{\frac{m(n-m)}{n^2}}=\frac{\sqrt{\lfloor n/2\rfloor(n-\lfloor n/2\rfloor)}}{n}. \end{aligned} \end{equation} Therefore we arrive at the following $\mathcal{W}_{|W_n\rangle}$, \begin{equation} \label{eq:w_state} \mathcal{W}_{|W_n\rangle}=\frac{n-1}{n}\mathcal{P}_1+\frac{\sqrt{\lfloor n/2\rfloor(n-\lfloor n/2\rfloor)}}{n} (|0\rangle\langle 0|^{\otimes n} +\mathcal{P}_2)-|W_n\rangle\langle W_n|, \end{equation} with $\mathcal{P}_i=\sum_m\pi_m(|0\rangle^{\otimes n-i}|1\rangle^{\otimes i})\pi_m(\langle 0|^{\otimes n-i}\langle 1|^{\otimes i})$, where the summation $m$ is over all possible permutation of $|0\rangle^{\otimes n-i}|1\rangle^{\otimes i}$. The EW $\mathcal{W}_{|W_n\rangle}$ can also be generalized in a similar manner with the $\mathcal{W}_{|W_3\rangle}$, so as to recover the results of Ref.\ \cite{Bergmann_2013}. Although ending up with the same witness operator, our construction provides a different insight on why the $\mathcal{W}_{|W_n\rangle}$ takes such a form. \subsection{Graph states}\label{sec:appendix graph state} Before discussing the construction of GME witnesses for the graph states, we first give a brief introduction to the graph states. A graph is a pair $G=(V,E)$ of sets, where the elements of $V$ are called vertices, and the elements of $E$ are edges connecting the vertices. For example, $(1,2) $ represents the edge connecting vertex $1$ and $2$. Two vertices are called neighboring if they are connected by an edge. A graph can also be represented by the adjacency matrix $\Gamma$ with \begin{equation} \Gamma_{ij} = \left\{ \begin{aligned} &1, \quad if ~(v_i,~v_j) \in E,\\ &0, \quad otherwise. \end{aligned} \right. \end{equation} Then, an $n$-qubit graph state $|G\rangle$ is defined with an $n$-vertex graph $G$ whose vertices correspond to qubits and edges correspond to control-Z (C-Z) gate between two qubits. Graph state can be expressed with a set of stabilizers \begin{equation} g_i = X_i\prod_{j \in N(i)} Z_j, i = 1,\cdots,n, \end{equation} where $X_i$ and $Z_i$ are Pauli operators on qubit (vertex) $i$, and $N(i)$ is the neighborhood of $i$ (\textit{i.e.} the set of vertices directly connected to $i$ by edges). These operators $g_i$ commute with each other and $|G\rangle$ is the common eigenstate of them such that \begin{equation} \forall i=1,\cdots,n,~g_i|G\rangle=|G\rangle, \end{equation} Moreover, all the $2^n$ common eigenstates of these $g_i$ form a basis named graph state basis. Each term in this basis is uniquely decided by the eigenvalues of $g_i$. As the eigenvalues of $g_i$ are either $1$ or $-1$, the graph state basis can be denoted by a vector $\vec{a} \in \{0,1\}^n$ such that \begin{equation}\label{eq:graph_state_basis} \forall i=1,\cdots,n,~g_i|a_1\cdots a_n\rangle_G = (-1)^{a_i}|a_1\cdots a_n\rangle_G. \end{equation} And the density matrix of $|\vec{a}\rangle_G$ is \begin{equation} |a_1\cdots a_n\rangle_G\langle a_1\cdots a_n|=\prod_{i=1}^{n}\frac{(-1)^{a_i}g_i +I}{2}. \end{equation} Specially, the graph state $|G\rangle$ is denoted as $|0\cdots0\rangle_G$. 
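As a concrete illustration of the stabilizer formalism recalled above, the following sketch (assuming NumPy; the function names are ours and the routine is written for readability, not efficiency) builds the generators $g_i = X_i\prod_{j\in N(i)}Z_j$ from an adjacency matrix and forms the projector onto a graph basis state via the product formula just given.
\begin{verbatim}
import functools
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def stabilizers(adj):
    """Generators g_i = X_i prod_{j in N(i)} Z_j for the given adjacency matrix."""
    n = len(adj)
    gens = []
    for i in range(n):
        factors = [X if k == i else (Z if adj[i][k] else I2) for k in range(n)]
        gens.append(functools.reduce(np.kron, factors))
    return gens

def graph_basis_projector(adj, a):
    """Projector onto |a>_G, i.e. prod_i ((-1)^{a_i} g_i + I)/2."""
    n = len(adj)
    P = np.eye(2 ** n)
    for i, g in enumerate(stabilizers(adj)):
        P = P @ (((-1) ** a[i]) * g + np.eye(2 ** n)) / 2
    return P

# linear cluster graph 1-2-3-4 and the projector onto |0000>_G = |Cl_4><Cl_4|
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
P0 = graph_basis_projector(adj, [0, 0, 0, 0])
print(np.isclose(np.trace(P0), 1.0))         # a rank-one projector, as expected
\end{verbatim}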
Notably, by working in the graph state basis instead of the computational basis, the calculation involved in the GME witness construction can be greatly simplified, without the need to perform Schmidt decompositions explicitly. Firstly, the partial transposition of a graph state is diagonal in the graph state basis, namely, $|\vec{a}_0\rangle_G\langle \vec{a}_0|^{T_A}$ is of the form $\sum_{\vec{a}} c_{\vec{a}} |\vec{a}\rangle_G\langle \vec{a}|$. Meanwhile, the operator $Q$ in Eq.\ (\ref{eq:Q}) can be seen as a combination of the eigenstates of $|G\rangle\langle G|^{T_A}$ with negative eigenvalues. Therefore, when Lemma 1 is applied to the graph state $|G\rangle$, the resulting bipartite EW $\mathcal{W}_{o,A|\bar{A}}^{|G\rangle}$ is diagonal in the graph state basis. In this case, the vectors in the set $\mathcal{S}$ can be taken as the basis vectors $|\vec{a}\rangle_G$, so that the construction in Theorem 1 is easy to carry out. In the following, we propose an explicit procedure for finding the decomposition of $\mathcal{W}_{o,A|\bar{A}}^{|G\rangle}$ in the graph state basis. Firstly, for the given bipartition $A|\bar{A}$, the adjacency matrix $\Gamma$ can be decomposed into the following blocks, \begin{equation}\label{eq:adj_matrix} \begin{pmatrix} G_A & \Gamma_{A|\bar{A}} \\ \Gamma_{A|\bar{A}}^T & G_{\bar{A}} \end{pmatrix}. \end{equation} We denote by $k=\mathrm{rank}(\Gamma_{A|\bar{A}})$ the rank of the submatrix $\Gamma_{A|\bar{A}}$. It is known that a graph state can be transformed into a tensor product of $k$ Bell states across the partition $A|\bar{A}$, using C-Z gates within each partition and local complementation operations \cite{PhysRevA.69.062311}. Here the local complementation $\tau_a$ on a vertex $a$ is defined as follows: $\tau_a : G \to \tau_a(G)$, such that the edge set $E'$ of the new graph $\tau_a(G)$ is $E'= E\cup E\left(N\left(a\right),N\left(a\right)\right)-E\cap E\left(N\left(a\right),N\left(a\right)\right)$. The local complementation $\tau_a(G)$ can be implemented with the following local unitary operation \cite{PhysRevA.69.062311}: \begin{equation} U_a(G) = (-iX_a)^{1/2}\prod_{b\in N(a)}(iZ_b)^{1/2}. \end{equation} After this operation, $|G\rangle$ is turned into $|\tau_a(G)\rangle$ and the stabilizers of $|G\rangle$ transform according to the following equations: \begin{equation}\label{eq:stabizer_transfer} \begin{aligned} &U_a(G)g_b^GU_a(G)^{\dagger}=g_a^{\tau_a(G)}g_b^{\tau_a(G)},~\text{if}~b\in N(a); \\ &U_a(G)g_b^GU_a(G)^{\dagger}=g_b^{\tau_a(G)}, ~~~~~~~~~\text{if}~b\notin N(a). \end{aligned} \end{equation} Meanwhile, we remark that a bipartite EW $\mathcal{W}_{A|\bar{A}}$ for $|G\rangle$ is transformed into another bipartite EW $\mathcal{W}_{A|\bar{A}}'$ for $|G'\rangle$ under any local unitary operation with respect to $A|\bar{A}$. Hence the task of constructing a bipartite EW for the initial graph state $|G\rangle$ is turned into that of finding a bipartite EW for $|Bell\rangle^{\otimes k}$ by employing Lemma 1, a much easier task than the initial one. Secondly, after reversing the above transformation process from $|G\rangle$ to $|Bell\rangle^{\otimes k}$, the EW for $|Bell\rangle^{\otimes k}$, which is diagonal in the Bell basis, is turned back into a bipartite EW for $|G\rangle$ which is diagonal in the graph state basis.
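The local complementation rule above is simple to implement at the level of adjacency matrices. The following minimal sketch (plain NumPy; the function name is ours, and the graph-to-witness bookkeeping is omitted) toggles every edge between neighbours of the chosen vertex, which is all that $\tau_a$ does to the graph.
\begin{verbatim}
import numpy as np

def local_complement(adj, a):
    """Local complementation tau_a: toggle all edges between neighbours of a.
    adj is a symmetric 0/1 adjacency matrix."""
    adj = adj.copy()
    nbrs = np.flatnonzero(adj[a])
    for i in nbrs:
        for j in nbrs:
            if i < j:
                adj[i, j] ^= 1
                adj[j, i] ^= 1
    return adj

# adjacency matrix of the 4-qubit linear cluster graph 1-2-3-4
cl4 = np.zeros((4, 4), dtype=int)
for i in range(3):
    cl4[i, i + 1] = cl4[i + 1, i] = 1

print(local_complement(cl4, 1))   # vertices 0 and 2 become directly connected
\end{verbatim}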
\begin{figure} \caption{Representation of the $n$-qubit linear cluster state as a graph, where the $n$ qubits are connected one by one by C-Z gates.} \label{fig:appendix_fig1} \end{figure} With the above foundation, we move on to an explicit discussion of a typical class of graph states: the linear cluster state $|Cl_n\rangle$. The linear cluster state is represented by the graph in Fig.\ \ref{fig:appendix_fig1}. We call the bipartition $A|\bar{A}$ a rank-$k$ bipartition if $\mathrm{rank}(\Gamma_{A|\bar{A}})=k$, with $\Gamma_{A|\bar{A}}$ defined in Eq.\ (\ref{eq:adj_matrix}). All rank-$1$ bipartitions of the linear cluster state admit only two possible types of subgraph on the boundary $\Gamma_{A|\bar{A}}$ (Fig.\ \ref{fig:appendix_fig2}). \begin{figure} \caption{Possible subgraphs on the boundary across a rank-$1$ bipartition. For a given bipartition $A|\bar{A}$, any other edge is deleted by C-Z gates within each partition.} \label{fig:appendix_fig2} \end{figure} For the type-1 subgraph $G_{i,i+1}$, employing Lemma 1 the bipartite EW reads \begin{equation} \begin{aligned} \mathcal{W}_{G_{i,i+1}}=&\frac{1}{2}\left(|0_i0_{i+1}\rangle_{G_{i,i+1}}\langle 0_i0_{i+1}|+|0_i1_{i+1}\rangle_{G_{i,i+1}}\langle 0_i1_{i+1}|+|1_i0_{i+1}\rangle_{G_{i,i+1}}\langle 1_i0_{i+1}|\right. \\ &\left. +|1_i1_{i+1}\rangle_{G_{i,i+1}}\langle 1_i1_{i+1}|\right) -|G_{i,i+1}\rangle\langle G_{i,i+1}|, \end{aligned} \end{equation} where state vectors such as $|0_i1_{i+1}\rangle_{G_{i,i+1}}$ are graph state basis vectors defined in Eq.\ (\ref{eq:graph_state_basis}), and the `$0$'s on the other vertices are omitted for simplicity hereafter. Note that the $g_j^{G_{i,i+1}}$ can be transformed back to the $g_j^{Cl_n}$ by employing C-Z gates, without disturbing the eigenvalues of the state vectors. Therefore the bipartite EW for the original state $|Cl_n\rangle$ is \begin{equation} \begin{aligned} \mathcal{W}_{A|\bar{A}}=&\frac{1}{2}\left(|Cl_n\rangle\langle Cl_n|+|0_i1_{i+1}\rangle_{Cl_n}\langle 0_i1_{i+1}|+|1_i0_{i+1}\rangle_{Cl_n}\langle 1_i0_{i+1}|\right. \\ &\left. +|1_i1_{i+1}\rangle_{Cl_n}\langle 1_i1_{i+1}|\right)-|Cl_n\rangle\langle Cl_n|, \end{aligned} \end{equation} when formulated in the graph state basis. After normalizing $\mathcal{W}_{A|\bar{A}}$ to meet the constraint $Tr(\mathcal{W}_{A|\bar{A}}|Cl_n\rangle\langle Cl_n|)=-1$, we obtain \begin{equation} \mathcal{W}_{A|\bar{A}}'=|0_i1_{i+1}\rangle_{Cl_n}\langle 0_i1_{i+1}|+|1_i0_{i+1}\rangle_{Cl_n}\langle 1_i0_{i+1}|+|1_i1_{i+1}\rangle_{Cl_n}\langle 1_i1_{i+1}|-|Cl_n\rangle\langle Cl_n|, \end{equation} as the bipartite EW used in our construction. This bipartite EW contributes the following terms to the set $\mathcal{S}$: \begin{equation} \{|0_i1_{i+1}\rangle_{Cl_n},~|1_i0_{i+1}\rangle_{Cl_n},~|1_i1_{i+1}\rangle_{Cl_n}\}. \end{equation} This set is denoted as $\mathcal{S}_{i,type-1}=\{01,~10,~11\}_{i,i+1}$ for short. Meanwhile, the type-2 subgraph $G_{i,i+1,i+2}$ in Fig.\ \ref{fig:appendix_fig2} can be transformed into the type-1 subgraph by applying $C_{Z,(i,i+2)}$, $U_i(G_{i,i+1,i+2})$ and $C_{Z,(i,i+2)}$ sequentially. We remark that the local complementation operation $U_i(G_{i,i+1,i+2})$ may change the corresponding eigenvalue when $g_j^{G_{i,i+1,i+2}}$ turns into $g_j^{G_{i,i+1}}$, which is determined by Eq.\ (\ref{eq:stabizer_transfer}). Therefore, this kind of bipartition contributes the following set of states to the set $\mathcal{S}$: \begin{equation} \mathcal{S}_{i,type-2}=\{010,~101,~111\}_{i,i+1,i+2}. \end{equation}
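The bookkeeping behind the sets $V_k$ used in the summary below (and defined in the main text) is easy to automate. The following short sketch (plain Python; the function name is ours) computes the largest number of `$1$'s in a bit string that are pairwise more than a distance $2$ apart, which is exactly the $k$ labelling the set $V_k$ to which the string belongs; it reproduces the two examples quoted in the main text.
\begin{verbatim}
def v_class(bits):
    """Largest number of '1's whose positions are pairwise more than 2 apart.
    A greedy left-to-right scan is optimal for this selection problem."""
    ones = [i for i, b in enumerate(bits) if b == '1']
    k, last = 0, None
    for pos in ones:
        if last is None or pos - last > 2:
            k, last = k + 1, pos
    return k

print(v_class('1101100'))   # 2, i.e. the string belongs to V_2
print(v_class('1001011'))   # 3, i.e. the string belongs to V_3
\end{verbatim}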
In summary, all the rank-$1$ bipartitions contribute an operator $R_1$ to our construction. If one denotes by $V_1$ the set of vectors from $\{0,1\}^n$ such that the maximal distance between the `$1$'s appearing in each vector is smaller than $3$, then $R_1$ can be written as \begin{equation} R_1=\sum_{\vec{a}\in V_1}|\vec{a}\rangle_{Cl_n}\langle\vec{a}|. \end{equation} For rank-$2$ bipartitions, the boundaries are composed of two rank-$1$ boundaries. For example, if there are two type-$1$ parts, the bipartite EW takes the form \begin{equation} \mathcal{W}_{A|\bar{A}}=\frac{1}{4}\sum_{\vec{a}\in\{0,1\}^4}|\vec{a}_{i,i+1,j,j+1}\rangle_{Cl_n}\langle \vec{a}_{i,i+1,j,j+1}|-|Cl_n\rangle\langle Cl_n|. \end{equation} After normalization, the bipartite EW reads \begin{equation} \mathcal{W}_{A|\bar{A}}=\frac{1}{3}\sum_{\substack{\vec{a}\in\{0,1\}^4,\\\vec{a}\ne\vec{0}}}|\vec{a}_{i,i+1,j,j+1}\rangle_{Cl_n}\langle \vec{a}_{i,i+1,j,j+1}|-|Cl_n\rangle\langle Cl_n|. \end{equation} Such bipartite EWs contribute the following new terms to the set $\mathcal{S}$: \begin{equation} S_{i,type-1}\otimes S_{j,type-1}=\{0101,~0110,~0111,~1001,~1010,~1011,~1101,~1110,~1111\}_{i,i+1,j,j+1}. \end{equation} The contributions of the other possibilities can be determined in a similar manner as above for the type-2 subgraph. All these rank-$2$ bipartitions contribute a set $V_2$ to $\mathcal{S}$. Here a vector from $\{0,1\}^n$ belongs to $V_2$ if at most two of its `$1$'s are pairwise separated by a distance larger than $2$. Finally, all the rank-$2$ bipartitions introduce an operator $R_2$ to our construction, with \begin{equation} R_2=\sum_{\vec{a}\in V_2}\frac{1}{3}|\vec{a}\rangle_{Cl_n}\langle\vec{a}|. \end{equation} For a rank-$k$ bipartition, the subgraph on the boundary is nothing but a combination of $k$ rank-$1$ parts. After repeating the above process, it is shown that all the rank-$k$ bipartitions contribute the following operator $R_k$: \begin{equation} R_k=\sum_{\vec{a}\in V_k}\frac{1}{2^k-1}|\vec{a}\rangle_{Cl_n}\langle\vec{a}|. \end{equation} A vector $\vec{a}$ belongs to $V_k$ if at most $k$ of the `$1$'s in $\vec{a}$ can be chosen such that their pairwise distances are all larger than $2$. It can be observed immediately that $k\le \lceil n/3 \rceil$, indicating that partitions whose rank is higher than $\lceil n/3 \rceil$ give no extra contribution. After considering all the bipartitions, we end up with the GME witness $\mathcal{W}_{Cl_n}$ introduced in the main text, namely, \begin{equation} \mathcal{W}_{Cl_n}=\sum_{k=1}^{\lceil n/3 \rceil} R_k-|Cl_n\rangle\langle Cl_n|. \end{equation} As an example, for the $4$-qubit cluster state, \begin{equation} \mathcal{W}_{Cl_4}=\sum_{\vec{a}\in V_1}|\vec{a}\rangle_G\langle \vec{a}|+\frac{1}{3}\sum_{\vec{a}\in V_2}|\vec{a}\rangle_G\langle \vec{a}|-|G\rangle\langle G|, \end{equation} where $V_1$ is the set $\{0001,~0010,~0011,~0100,~0101,~0110,~0111,~1000,~1010,~1100,~1110\}$, and $V_2$ is the set $\{1001,~1011,~1101,~1111\}$. Remarkably, in the $4$-qubit case, the best known EW is \cite{PhysRevLett.106.190502} \begin{equation} \mathcal{W}_{Cl_4}^{opt}=\sum_{\vec{a}\in V_1}|\vec{a}\rangle_G\langle \vec{a}|-|G\rangle\langle G|. \end{equation} It is finer than the $\mathcal{W}_{Cl_4}$ above. That is, while our approach is already quite powerful, there is still room for improvement. In this particular case, the improvement can be achieved by a more elaborate choice of the set of bipartite EWs, instead of using Lemma 1 only.
If the bipartite EWs for $13|24$ and $14|23$ in the above construction are replaced by \begin{equation} \begin{aligned} \mathcal{W}_{13|24}=&|0001\rangle_{Cl_4}\langle0001| + |0100\rangle_{Cl_4}\langle0100|+|0101\rangle_{Cl_4}\langle0101| +|0011\rangle_{Cl_4}\langle0011| \\ &+|0110\rangle_{Cl_4}\langle0110| +|0111\rangle_{Cl_4}\langle0111| -|Cl_4\rangle\langle Cl_4|, \\ \mathcal{W}_{14|23}=&|0001\rangle_{Cl_4}\langle0001| + |0010\rangle_{Cl_4}\langle0010|+|0101\rangle_{Cl_4}\langle0101| +|0011\rangle_{Cl_4}\langle0011| \\ &+|0110\rangle_{Cl_4}\langle0110| +|0111\rangle_{Cl_4}\langle0111| -|Cl_4\rangle\langle Cl_4|, \end{aligned} \end{equation} respectively, one can recover the $\mathcal{W}_{Cl_4}^{opt}$ with Theorem 1. With this example on $4$-qubit cluster state, we highlight that Lemma 1 is just an alternative choice which ends up with robust GME witnesses. Our construction in fact allows a flexible choice on the set of EWs to be lifted to multipartite case, and a suitable choice can further improve its performance. Moreover, it should be remarked that our discussion was based on the partial transposition throughout this paper, to obtain higher noise resistance. If bipartite EWs in the construction are designed by other positive maps (e.g., the Choi's map), different classes of GME witness can be found. This may help to harness the full potential of Theorem 1 in future work. \subsection{Multipartite states admitting Schmidt decomposition.}\label{sec:appendix GHZ state} A special case of multipartite entangled states is the multipartite states admitting Schmidt decomposition. Without loss of generality, we can assume that such states are of the form $|\phi_s\rangle=\sum_{i=0}^{d-1}\sqrt{\lambda_i}|i\rangle^{\otimes n}$ with $\lambda_i\ge 0 $ in decreasing order. Then the Lemma 1 gives a set of bipartite EWs $\mathcal{W}_{A|\bar{A}}^{|\phi_{SD}\rangle}$: \begin{equation} \mathcal{W}_{A|\bar{A}}^{|\phi_{SD}\rangle}=\sum_{i,j=0}^{d-1} \sqrt{\lambda_i\lambda_j}{|i\rangle^{\otimes k}}_A{|j\rangle^{\otimes n-k}}_{\bar{A}}{\langle i|^{\otimes k}}_A {\langle j|^{\otimes n-k}}_{\bar{A}}-|\phi_s\rangle\langle\phi_s|, \end{equation} where $k=|A|$ is the number of qudits in subsystem $A$. For these bipartite EWs, the set $S$ is \begin{equation} \{\pi_m(|i\rangle^{\otimes r}|j\rangle^{\otimes n-r})\}_{r,i,j,\pi_m}\cup\{|l\rangle^{\otimes n}\}_{l=0}^{d-1}, \end{equation} with $r=1,2,\cdots,n-1$, $i,j=0,1,\cdots,d-1$ ($i<j$) and $\pi_m$ being all possible permutations of $|i\rangle^{\otimes r}|j\rangle^{\otimes n-r}$. Note that all state vectors in $S$ are orthogonal with each other, thus our construction ends up with the following multipartite EW \begin{equation} \begin{aligned} \mathcal{W}_{|\phi_s\rangle}=&\sum_{\substack{i,j=0,\\i< j}}^{d-1} \sum_{r=1}^{n-1} \sum_m \sqrt{\lambda_i\lambda_j}\pi_m(|i\rangle^{\otimes r}|j\rangle^{\otimes n-r})\pi_m(\langle i^{\otimes r}|\langle j|^{\otimes n-r}) \\ &+\sum_{i=0}^{d-1}\lambda_i|i\rangle\langle i|^{\otimes n}-|\phi_s\rangle\langle\phi_s|, \end{aligned} \end{equation} where the summation of $m$ is over all possible permutations $\pi_m(|i\rangle^{\otimes r}|j\rangle^{\otimes n-r})$ of $|i\rangle^{\otimes r}|j\rangle^{\otimes n-r}$. Moreover, similar to the case of proving the optimality of $\mathcal{W}_o^{|\phi\rangle}$ in the first section, one can verify the optimality of ${W}_{|\phi_s\rangle}$ by checking that all the biseparable states satisfying $Tr(\mathcal{W}_{|\phi_s\rangle}\rho_{bs})=0$ span the whole Hilbert space $\mathcal{H}_d^{\otimes n}$. 
\subsection{GME witness for multi-qubit singlet states}\label{sec:appendix singlet} Multi-qubit singlet states are of particular experimental interest, while GME witnesses for them are less investigated. In this example, it is shown that our framework also works well for multi-qubit singlet states. In the main text, we provide the result for a specific class of four-qubit singlet states, while here we begin with a discussion of general four-qubit singlet states \begin{equation} |\varphi_4\rangle = a |\psi_{12}^-\rangle\otimes |\psi_{34}^-\rangle + e^{i\theta}b |\psi_{13}^-\rangle\otimes |\psi_{24}^-\rangle, \end{equation} with the constraint $a^2+b^2+\cos(\theta)ab=1$ and $|\psi_{12}^-\rangle$ being the two-qubit singlet state $(|01\rangle-|10\rangle)/\sqrt{2}$ on the first two qubits. By performing our construction procedure for all four-qubit singlet states, it is observed that the set $\mathcal{S}$ is always divided into $5$ subsets and that the identity operators on the corresponding subspaces are just $\{\mathcal{P}_i^4\}_{i=0}^4$ (the $\mathcal{P}_i^4$ have been defined below Eq.\ (\ref{eq:w_state})). More specifically, the resulting witness is \begin{equation} \mathcal{W}_4 = c_2 \mathcal{P}_2^4 + c_1 (\mathcal{P}_1^4 + \mathcal{P}_3^4) + c_0 (\mathcal{P}_0^4 + \mathcal{P}_4^4) - |\varphi_4\rangle\langle\varphi_4|, \end{equation} with the coefficients given by \begin{equation} \begin{aligned} c_2 &= \max\{1-\frac{3}{4}a^2, 1-\frac{3}{4}b^2, \frac{3}{4}(a^2+b^2) -\frac{1}{2}\}, \\ c_1 &= \frac{1}{2}, \\ c_0 &= \max\{\frac{1}{2}-\frac{1}{4}(a^2+b^2),\frac{1}{4}a^2,\frac{1}{4}b^2\}. \end{aligned} \end{equation} In particular, with the choice $\theta = \pi/2$, this recovers the EW in the main text, while if $a=-1$, $b=1$ and $\theta=0$, $|\varphi_4\rangle$ becomes the biseparable state $|\psi_{14}^-\rangle\otimes |\psi_{23}^-\rangle$ and the corresponding EW becomes positive semidefinite. When the number of qubits grows, achieving a generic expression becomes more complicated. To investigate the GME witness construction in this case, we consider the following six-qubit singlet state \begin{equation} \begin{aligned} |\varphi_6\rangle = &\frac{1}{2}\left(|\psi_{12}^-\rangle\otimes |\psi_{34}^-\rangle \otimes |\psi_{56}^-\rangle + i |\psi_{13}^-\rangle\otimes |\psi_{24}^-\rangle \otimes |\psi_{56}^-\rangle \right. \\ & \left. + i |\psi_{12}^-\rangle\otimes |\psi_{35}^-\rangle \otimes |\psi_{46}^-\rangle - |\psi_{13}^-\rangle\otimes |\psi_{25}^-\rangle \otimes |\psi_{46}^-\rangle\right), \end{aligned} \end{equation} for which we arrive at the GME witness \begin{equation} \mathcal{W}_6= \frac{5}{8} \mathcal{P}_3^6 + \frac{1}{2} (\mathcal{P}_2^6 + \mathcal{P}_4^6) + \frac{1}{4} (\mathcal{P}_1^6 + \mathcal{P}_5^6) + \frac{1}{8} (\mathcal{P}_0^6 + \mathcal{P}_6^6) - |\varphi_6\rangle\langle\varphi_6|. \end{equation} Based on these results, it is reasonable to conjecture that for some $2n$-qubit singlet state $|\varphi_{2n}\rangle$, there exists a GME witness taking the form \begin{equation} \mathcal{W}_{2n}= c_n \mathcal{P}_n^{2n} + \sum_{i=0}^{n-1} c_i(\mathcal{P}_i^{2n} + \mathcal{P}_{2n-i}^{2n})- |\varphi_{2n}\rangle\langle\varphi_{2n}|, \end{equation} with $c_i \ge c_{i-1} \ge 0$ for $i=1,\cdots, n$, where $c_n$ is the maximal squared overlap between $|\varphi_{2n}\rangle$ and biseparable states. Moreover, if $c_i$ scales as $(1/2)^{n-i}$, as in the four- and six-qubit cases, a white noise tolerance tending to $1$ can be expected for a large number of qubits. \end{document}
\begin{document} \title{Exact Solution of the Klein-Gordon Equation for the Hydrogen Atom Including Electron Spin} \centerline{151 Fairhills Dr., Ypsilanti, MI 48197} \centerline{E-mail: [email protected]} \begin{abstract} The term describing the coupling between total angular momentum and energy-momentum in the hydrogen atom is isolated from the radial Dirac equation and used to replace the corresponding orbital angular momentum coupling term in the radial K-G equation. The resulting spin-corrected K-G equation is a second order differential equation that contains no matrices. It is solved here to generate the same energy eigenvalues for the hydrogen atom as the Dirac equation. \end{abstract} \section{Introduction} The Klein-Gordon (K-G) equation is the quantum mechanical expression of the relativistic energy-momentum relationship. Naturally, the principle of energy-momentum conservation is applicable to all particles, irrespective of their spin, but the spin is relevant in so far as it affects the energy-momentum of the particle. The purpose of this paper is to derive a K-G equation that describes the hydrogen atom by including the effect of the spin on the energy-momentum in the Hamiltonian for the electron. The more conventional approach to the hydrogen atom is to factorize the free-field K-G equation using Dirac matrices, giving the Dirac equation. Electromagnetic interactions are then introduced into the Hamiltonian of the Dirac equation through minimal coupling. The result is a system of first-order differential equations that operate on the components of a bi-spinor wavefunction. The spinless K-G equation for the hydrogen atom is introduced in section 2 of this paper. A key point of interest is that the angular momentum term in the radial component of this equation can be expressed in terms of a simple function $\eta_{\epsilon \kappa}$ of two integers $\epsilon$ and $\kappa$. In the context of the spinless K-G equation, $\epsilon=0$ and $\kappa=l$, where $l$ is the orbital angular momentum quantum number. It is shown that both the eigenfunctions and the energy eigenvalues of the K-G equation can be expressed in terms of the $\eta_{0l}$-coupling function. The $\eta_{0l}$-coupling is therefore observed to completely encapsulate the concept of orbital angular momentum in the K-G formalism. In section 3, it is found that the $\eta_{\epsilon \kappa}$ function also pervades Dirac theory, except that in this case $\epsilon=1$ and $\kappa=\pm(j+\frac{1}{2})$, where $j$ is the total angular momentum quantum number of the electron including both orbital angular momentum and spin. On the basis of this result, the replacement $\eta_{0 l} \rightarrow \eta_{1 \kappa}$ is considered as a means of including electron spin in the K-G equation. The spin-corrected K-G equation generates scalar eigenfunctions and contains no matrices. It is solved here to generate the same energy eigenvalues for the hydrogen atom as the Dirac equation. 
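The following Python sketch is our own illustration, anticipating the formulas derived in sections 2 and 3 below: it evaluates the coupling function $\eta_{\epsilon \kappa}$, checks the identity $\eta_{0l}(1-\eta_{0l})=\alpha^2-l(l+1)$ used to rewrite the radial equation, and confirms numerically that the spin-corrected eigenvalues coincide with the standard Dirac spectrum for hydrogen. The numerical value of $\alpha$ and all function names are assumptions of the sketch.
\begin{verbatim}
import numpy as np

alpha = 1 / 137.035999          # fine-structure constant (approximate value)

def eta(eps, kappa):
    # Coupling function eta_{eps,kappa}; the physical (negative) branch is taken.
    return (1 + eps) / 2 - np.sqrt((kappa + (1 - eps) / 2) ** 2 - alpha ** 2)

# Identity behind the rewriting of the radial K-G equation:
# eta_{0l}(1 - eta_{0l}) = alpha^2 - l(l+1).
for l in range(4):
    e = eta(0, l)
    assert np.isclose(e * (1 - e), alpha ** 2 - l * (l + 1))

def E_spin_KG(N, kappa):        # spin-corrected K-G eigenvalue, in units of m0 c^2
    return 1 / np.sqrt(1 + alpha ** 2 / (N + 1 - eta(1, kappa)) ** 2)

def E_dirac(n, kappa):          # standard Dirac eigenvalue for hydrogen
    return 1 / np.sqrt(1 + alpha ** 2 /
                       (n - abs(kappa) + np.sqrt(kappa ** 2 - alpha ** 2)) ** 2)

# kappa = +-(j + 1/2) and N = n - j - 1/2 = n - |kappa|; the formulas agree.
for n in (1, 2, 3):
    for kappa in [k for k in range(-n, n) if k != 0]:
        N = n - abs(kappa)
        assert np.isclose(E_spin_KG(N, kappa), E_dirac(n, kappa))
print("spin-corrected K-G and Dirac eigenvalues agree")
\end{verbatim}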
\section{The Spin-0 K-G Equation} The K-G equation determining the wavefunction $\psi$ for an electron of mass $m_0$ at a radial distance $r$ from a proton can be expressed in the form \begin{equation} \label{eq: KGHA0} -c^2 \nabla^2 \psi + \frac{m_0^2c^4}{\hbar^2}\psi = \left( \imath\frac{\partial}{\partial t} + c \frac{\alpha}{r} \right)^2 \psi \end{equation} where \begin{equation} \label{eq: totE} E\psi = \imath \hbar \frac{\partial \psi}{\partial t} \end{equation} is the total energy of the electron, $\alpha$ is the fine structure constant, $c$ is the velocity of light and $\hbar$ is Planck's constant divided by $2\pi$. Eqs. (\ref{eq: KGHA0}) and (\ref{eq: totE}) thus constitute an approximate model of the hydrogen atom neglecting the spin of the electron and the finite mass of the proton. The solution \cite{JN} to eqs. (\ref{eq: KGHA0}) and (\ref{eq: totE}) in spherical polar coordinates $(r,\theta,\phi)$ takes the separable form \begin{eqnarray} \label{eq: psi_ha} \psi_{\epsilon nlm}(r,\theta,\phi,t) = R_{\epsilon nl}(r)Y_{lm}(\theta, \phi)\exp(-\imath E_{\epsilon nl}t / \hbar) \end{eqnarray} where $n,l,m$ are the hydrogen quantum numbers. The additional index $\epsilon$ has been included so the results in this section can be part of a more general discussion in later sections. For the purposes of this section, $\epsilon$ can be set equal to zero. Eq. (\ref{eq: KGHA0}) separates to give the radial equation \begin{equation} \label{eq: KGHA0_R1} \left[ \frac{1}{r^2}\frac{\partial}{\partial r} \left( r^2 \frac{\partial}{\partial r} \right) + \frac{E_{0nl}^2}{\hbar^2 c^2} + \frac{2E_{0nl}}{\hbar c}\frac{\alpha}{r} - \frac{m_0^2c^2}{\hbar^2} + \frac{\alpha^2}{r^2} - \frac{l(l+1)}{r^2}\right]R_{0nl} = 0 \end{equation} alongside the orbital angular momentum equation \begin{equation} \hat{L}^2Y_{lm} = l(l+1)Y_{lm} \end{equation} where $\hat{L}$ is the orbital angular momentum operator. The solution of eq. (\ref{eq: KGHA0_R1}) is known to take the form \begin{equation} \label{eq: rwav_kg0} R_{0nl}(r) = \frac{\mathcal{N}_{0nl}}{r^{\eta_{0 l}}} \exp \left( -\frac{r}{r_{0nl}} \right)\sum_{k=0}^{n} a_k r^k \end{equation} where \begin{equation} \label{eq: couplingFunc} \eta_{\epsilon \kappa} = \frac{1+\epsilon}{2} \pm \sqrt{\left(\kappa+\frac{1-\epsilon}{2} \right)^2-\alpha^2} \end{equation} $\mathcal{N}_{0nl}$, $r_{0nl}$ and $a_k$ are constants. It is helpful that eq. (\ref{eq: couplingFunc}) can also be used to simplify eq. (\ref{eq: KGHA0_R1}) to give \begin{equation} \label{eq: KGHA0_R2} \left[ \frac{1}{r^2}\frac{\partial}{\partial r} \left( r^2 \frac{\partial}{\partial r} \right) + \frac{E_{0nl}^2}{\hbar^2 c^2} + \frac{2E_{0nl}}{\hbar c}\frac{\alpha}{r} - \frac{m_0^2c^2}{\hbar^2} + \frac{\eta_{0 l}(1-\eta_{0 l})}{r^2} \right]R_{0nl} = 0 \end{equation} Inserting the wavefunction (\ref{eq: rwav_kg0}) into eq. 
(\ref{eq: KGHA0_R2}), it is readily shown that \begin{eqnarray} \label{sum1} \sum_{k=0}^n a_k \left\{ k(k-1)r^{k-2} + \left[(1-\eta_{0 l})\frac{r_{0nl}}{r}-1 \right]\frac{2kr^{k-1}}{r_{0nl}} \right\} + \nonumber \\ \left[\frac{E_{0nl}^2}{\hbar^2 c^2} + \frac{2E_{0nl}}{\hbar c}\frac{\alpha}{r} - \frac{m_0^2c^2}{\hbar^2} - \frac{2(1-\eta_{0l})}{r_{0nl}r}+\frac{1}{r_{0nl}^{2}}\right]\sum_{k=0}^n a_k r^k = 0 \end{eqnarray} In this, terms in $r^n$ and $r^{n-1}$ equate to give \begin{equation} \label{bohrRadius} r_{0nl} = \frac{\hbar c}{\sqrt{m_0^2c^4 - E^2_{0nl}}} \end{equation} \begin{equation} E_{0nl} = \frac{\hbar c (n+1-\eta_{0l})}{\alpha r_{0nl}} \end{equation} Combining these two results leads to the energy eigenvalues for the hydrogen atom from the spinless K-G equation: \begin{equation} \label{energy_ha} E_{0nl} = m_0c^2 \left[ 1 + \frac{\alpha^2}{(n+1-\eta_{0l})^2}\right]^{-1/2} \end{equation} Note, eq. (\ref{eq: couplingFunc}) for $\eta_{\epsilon \kappa}$ contains a $\pm$ sign. The negative sign is usually chosen since in this case eq. (\ref{energy_ha}) corresponds to the Sommerfeld energy spectrum for the hydrogen atom. By comparison, the positive sign predicts a much higher binding energy sometimes called the hydrino state. Clearly, the $\eta_{0l}$-function encapsulates the orbital angular momentum quantum number $l$ in eqs. (\ref{eq: KGHA0_R2}) through (\ref{energy_ha}). It can therefore be said that angular momentum influences energy-momentum in the spin-0 K-G equation through the $\eta_{0l}$-coupling function. \section{The Spin-$\frac{1}{2}$ K-G Equation} The Dirac equation for the hydrogen atom has two 4-component solutions of the form \begin{eqnarray} \Psi^{\pm}=\left( \begin{array}{r} \pm v(r)\chi^{\pm}(\theta,\phi) \\ u(r)\chi^{\mp}(\theta,\phi) \\ \end{array} \right) \end{eqnarray} where $\chi^{\pm}(\theta,\phi)$ and $\chi^{\mp}(\theta,\phi)$ are spinors, \begin{equation} \label{eq: dirac_u} u(r) = r^{-\eta_{1 \kappa}} \exp \left( -\frac{r}{r_{1N\kappa}} \right) \sum_{k=0}^{N} a_k r^k \end{equation} and \begin{equation} \label{eq: dirac_v} v(r) = r^{-\eta_{1 \kappa}} \exp \left( -\frac{r}{r_{1N\kappa}} \right) \sum_{k=0}^{N} b_k r^k \end{equation} are scalar functions, and \begin{equation} \label{bohrRadiusD} r_{1N\kappa} = \frac{\hbar c}{\sqrt{m_0^2c^4 - E^2_{1N\kappa}}} \end{equation} Here, $E_{1N\kappa}$ denotes the energy eigenvalues describing the fine structure of the hydrogen atom. For a more detailed comparison to the literature \cite{DFL}, the integer $\kappa$ is related to the total angular momentum quantum number $j$ through the expression \begin{equation} \kappa = \pm\left(j+\frac{1}{2} \right) \end{equation} and \begin{equation} \label{bigN} N=n-j-\frac{1}{2} \end{equation} where $n$ is the principal quantum number of the hydrogen atom. It is clear from eqs. (\ref{eq: dirac_u}) and (\ref{eq: dirac_v}) that the $\eta_{1\kappa}$-coupling function encapsulates total angular momentum in the Dirac formalism just as the $\eta_{0 l}$-coupling function encapsulates orbital angular momentum in the spinless K-G equation. It is of interest, on the strength of this result, to consider the replacement \begin{equation} \epsilon = 1, \quad n \rightarrow N, \quad l \rightarrow \kappa \end{equation} as a possible means of introducing electron spin into the Hamiltonian of the K-G equation (\ref{eq: KGHA0_R2}). 
This gives \begin{equation} \label{eq: KGHA1_R1} \left[ \frac{1}{r^2}\frac{\partial}{\partial r} \left( r^2 \frac{\partial}{\partial r} \right) + \frac{E_{1N\kappa}^2}{\hbar^2 c^2} + \frac{2E_{1N\kappa}}{\hbar c}\frac{\alpha}{r} - \frac{m_0^2c^2}{\hbar^2} + \frac{\eta_{1 \kappa}(1-\eta_{1 \kappa})}{r^2} \right]R_{1N\kappa} = 0 \end{equation} as a trial form of the radial K-G equation for spin-$\frac{1}{2}$ particles. Eqs. (\ref{eq: KGHA0_R2}) and (\ref{eq: KGHA1_R1}) are identical in form. The solution to eq. (\ref{eq: KGHA1_R1}) is therefore readily inferred to be \begin{equation} \label{eq: rwav_kg1} R_{1N\kappa}(r) = \frac{\mathcal{N}_{1N\kappa}}{r^{\eta_{1 \kappa}}} \exp \left( -\frac{r}{r_{1N\kappa}} \right)\sum_{k=0}^{N} a_k r^k \end{equation} identical in form to eq. (\ref{eq: rwav_kg0}). Inserting this back into eq. (\ref{eq: KGHA1_R1}) gives \begin{eqnarray} \label{sum2} \sum_{k=0}^N a_k \left\{ k(k-1)r^{k-2} + \left[(1-\eta_{1 \kappa})\frac{r_{1N\kappa}}{r}-1 \right]\frac{2kr^{k-1}}{r_{1N\kappa}} \right\} + \nonumber \\ \left[\frac{E_{1N\kappa}^2}{\hbar^2 c^2} + \frac{2E_{1N\kappa}}{\hbar c}\frac{\alpha}{r} - \frac{m_0^2c^2}{\hbar^2} - \frac{2(1-\eta_{1 \kappa})}{r_{1N\kappa}r}+\frac{1}{r_{1N\kappa}^{2}}\right]\sum_{k=0}^N a_k r^k = 0 \end{eqnarray} identical in form to eq. (\ref{sum1}). In eq. (\ref{sum2}), terms in $r^N$ and $r^{N-1}$ equate to give eq. (\ref{bohrRadiusD}) and the relationship \begin{equation} \label{energy_kg1} E_{1N\kappa} = \frac{\hbar c (N+1-\eta_{1\kappa})}{\alpha r_{1N\kappa}} \end{equation} respectively. Combining eqs. (\ref{bohrRadiusD}) and (\ref{energy_kg1}) leads to the expression \begin{equation} \label{energy_dirac1} E_{1N\kappa} = m_0c^2 \left[ 1 + \frac{\alpha^2}{(N+1-\eta_{1\kappa})^2}\right]^{-1/2} \end{equation} These are the energy eigenvalues of the hydrogen atom based on the spin-$1/2$ K-G equation. Using eqs. (\ref{eq: couplingFunc}) and (\ref{bigN}), this result can be rewritten as \begin{equation} \label{energy_dirac2} E_{1N\kappa} = m_0c^2 \left[ 1 + \frac{\alpha^2}{(n-|\kappa|+\sqrt{\kappa^2-\alpha^2})^2}\right]^{-1/2} \end{equation} identical to the energy eigenvalues from the Dirac equation for the hydrogen atom. \section{Concluding Remarks} It has been shown for both the spinless K-G and Dirac equations for the hydrogen atom that a coupling function exists to take account of the influence of angular momentum on the Hamiltonian for the electron. It has been further shown that if the coupling function for the spinning electron from the Dirac equation is used to replace the coupling function in the spinless K-G equation, the spin-corrected K-G equation can be solved exactly to give the same energy eigenvalues as the Dirac equation. \end{document}
\begin{document} \title[Large deviations for SRW on percolation clusters] {\large Quenched large deviations for simple random walks on percolation clusters including long-range correlations} \author[Noam Berger, Chiranjib Mukherjee and Kazuki Okamura]{} \maketitle \thispagestyle{empty} \centerline{\sc Noam Berger \footnote{Hebrew University Jerusalem and TU Munich, Boltzmannstrasse 3, Garching 85748, {\tt [email protected]}}, Chiranjib Mukherjee \footnote{University of M\"unster, Einsteinstrasse 62, M\"unster 48149, Germany, {\tt [email protected]}} and Kazuki Okamura\footnote{Research Institute for Mathematical Sciences, Kyoto University, Kyoto, 606-8502, {\tt [email protected]} }} \renewcommand{\thefootnote}{} \footnote{\textit{AMS Subject Classification:} 60J65, 60J55, 60F10, {60K37}.} \footnote{\textit{Keywords:} Large deviations, random walk on percolation clusters, long-range correlations, random interlacements, Gaussian free field, random cluster model} \centerline{\textit{TU Munich and Hebrew University Jerusalem, University of M\"unster, Kyoto University}} \begin{center} \today \end{center} \begin{quote}{\small {\bf Abstract:} We prove a {\it{quenched}} large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on $\mathbb{Z}^d$ ($d\geq 2$). The models of interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, random interlacements and {the vacant set of random interlacements} (for $d\geq 3$) and the level sets of the Gaussian free field ($d\geq 3$). Inspired by the methods developed by Kosygina, Rezakhanlou and Varadhan (\cite{KRV06}) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (\cite{Y08}) and Rosenbluth (\cite{R06}) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the {\it{pair empirical measures}} of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our set-up lies in the inherent non-ellipticity as well as the lack of {\it{translation-invariance}} stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster. } \end{quote} \section{Motivation, introduction and main results} We consider a simple random walk on the infinite cluster of some bond and site percolation models on $\mathbb{Z}^d$, $d\geq 2$. The percolation models of interest include classical Bernoulli bond and site percolation, as well as models that exhibit long-range correlations, including {the random-cluster model}, random interlacements and the vacant set of random interlacements in $d\geq 3$, and the level set of the Gaussian free field (also for $d\geq 3$). 
Conditional on the event that the origin lies in the infinite open cluster, it is known that a law of large numbers and a quenched central limit theorem hold (see \cite{SS04}, \cite{MP07}, \cite{BB07} and \cite{PRS15}). The treatment of these classical questions for these models needs care because of the inherent {\it{non-ellipticity}} -- a problem which permeates, in several forms, the above-mentioned literature. Questions on large deviation principles (LDP) in the quenched setting for general random walks in {\it{elliptic}} random environments (RWRE) have also been studied. In $d=1$, first Greven and den Hollander (\cite{GdH94}) for i.i.d. and uniformly elliptic random environments, and then Comets, Gantert and Zeitouni (\cite{CGZ00}) for stationary, ergodic and uniformly elliptic random environments, derived a quenched LDP for the mean velocity of a RWRE and obtained explicit variational formulas for the rate function. For $d\geq 1$, Zerner (\cite{Z98}, see also Sznitman \cite{S94}) proved a quenched LDP under the assumption that the logarithm of the random walk transition probabilities possesses a finite $d$-th moment and the random environment enjoys the {\it{nestling property}}. His method is based on proving shape theorems by invoking the sub-additive ergodic theorem. Using the sub-additivity more directly, Varadhan (\cite{V03}) proved a quenched LDP dropping the {\it{nestling}} assumption and assuming uniform ellipticity for the random environment. However, the use of sub-additivity in the above results did not lead to the desired formula for the rate function. Kosygina, Rezakhanlou and Varadhan (\cite{KRV06}) derived a novel method for proving quenched LDP using the {\it{environment seen from the particle}} in the context of a diffusion with a random drift, assuming some growth conditions on the random drift (ellipticity), and obtained a variational formula for the rate function. This method parallels the quenched homogenization of random Hamilton-Jacobi-Bellman (HJB) equations. Rosenbluth (\cite{R06}) adapted this theory to the ``level-1'' large deviation analysis of the rescaled location of a multidimensional random walk in random environments and also obtained a formula for the rate function. The assumption regarding the growth condition on the random drift imposed in \cite{KRV06}, under which homogenization of HJB takes place, or the quenched large deviation principle for the rescaled law of the diffusion holds, now translates to the assumption that the logarithm of the random walk transition probabilities possesses a finite $(d+\varepsilon)$-th moment, for some $\varepsilon>0$ (see \cite{R06}). Under the same moment assumption, Yilmaz (\cite{Y08}) extended this work to a ``level-2'' LDP for the law of the pair empirical measures of the environment Markov chain, and subsequently Rassoul-Agha and Sepp\"al\"ainen (\cite{RS11}) proved a ``level-3'' LDP for the empirical process of the environment Markov chain. Like Rosenbluth (\cite{R06}), both \cite{Y08} and \cite{RS11} obtained variational formulas for the corresponding rate functions. This method has been further exploited for studying the free energy of directed and non-directed random walks in an unbounded random potential (see the works of Rassoul-Agha, Sepp\"al\"ainen and Yilmaz \cite{RSY13, RSY14} and Georgiou et al. \cite{GRSY13,GRS14}). 
We also refer to the works of Armstrong and Souganidis (\cite{AS12}, see also \cite{LS05, LS10, AT14}) for the continuous analogue of \cite{RSY13} concerning homogenization of random Hamilton-Jacobi-Bellman equations in unbounded environments. Roughly speaking, all the results in the aforementioned literature work only under the assumption that $V:=-\log \pi \in L^p(\mathbb{P})$ with $p>d$, where $\pi$ denotes the random walk transition probabilities in the elliptic random environment whose law is denoted by $\mathbb{P}$. Thus, this literature does not cover the case $V= \infty$, pertinent to a random walk on a supercritical percolation cluster, an important model that carries the inherent non-ellipticity of the random environment. In this context, it is the goal of the present article to develop a unifying approach for proving quenched large deviation principles for the distribution of the empirical measures of the environment Markov chain of SRWPC ({\it{level-2}}) and to subsequently deduce the corresponding statement for the rescaled location ({\it{level-1}}) of the walk on the cluster. We start with a precise mathematical layout of the random environments under consideration, including the bond and site percolation models on $\mathbb{Z}^d$. \subsection{The percolation models of interest.}\label{sec-intro-models} {We fix $d\geq 2$ and denote by $\mathbb B_d$ the set of nearest neighbor edges of the lattice $\mathbb{Z}^d$ and by $\mathcal U_d=\{\pm e_i\}_{i=1}^d$ the set of edges from the origin to its nearest neighbors. We will now lay out the basic set-up of the bond and site percolation models on $\mathbb{Z}^d$ which we will be working with in this article.} {For every bond percolation model, we will set $\Omega = \{0,1\}^{\mathbb B_d}$ to be the space of all {\it{percolation configurations}} $\omega=(\omega_b)_{b\in\mathbb B_d}$. In other words, $\omega_b=1$ refers to the edge $b$ being {\it{present}} or {\it{open}}, while $\omega_b=0$ implies that it is {\it{vacant}} or {\it{closed}}. We consider a random subgraph $\mathcal{C}$ of $\mathbb{Z}^d$ whose vertices and edges are $\mathbb{Z}^d$ and the set of open edges, respectively. We call each connected component of the random graph a {\it{cluster}}, and if a cluster contains infinitely many vertices then we call it an {\it infinite cluster}. For $x, y \in \mathbb{Z}^d$, we write $x \sim y$ if $x$ and $y$ are connected in $\mathcal{C}$. If $x \sim y$, let ${\rm d}_{\textrm{ch}}(x,y)$ be the graph distance on the cluster containing $x$ and $y$, that is, the minimal length of a path connecting $x$ and $y$ in $\mathcal{C}$.} Let $\mathcal B$ be the Borel-$\sigma$-algebra on $\Omega$ defined by the product topology. We call elements of $\mathcal B$ {\it events} and say that an event $A$ is {\it increasing} if the following holds: whenever $\omega = (\omega_b)_b \in A$ and $\omega^{\prime} = (\omega^{\prime}_b)_b$ satisfies $\omega^{\prime}_b \ge \omega_b$ for each $b \in \mathbb B_d$, then $\omega^{\prime} \in A$. Note that $\mathbb{Z}^d$ acts as a group on $(\Omega, \mathcal B)$ via translations. In other words, for each $x\in \mathbb{Z}^d$, $\tau_x: \Omega \longrightarrow \Omega$ acts as a {\it{shift}} given by $(\tau_x\omega)_b= \omega_{x+b}, b \in \mathbb B_d$. Let $\mathbb{P}$ be a probability measure on $\Omega$. 
Let $\Omega_0 = \{\omega\colon 0\in \mathcal C_\infty(\omega)\}$, and if $\mathbb{P}(\Omega_0) > 0$, we then define the conditional probability $\mathbb{P}_0$ by $$ \mathbb{P}_0(A) = \mathbb{P}\big(A\big|\Omega_0\big) \qquad A\in\mathcal B. $$ {If we consider a site percolation model, then we let $\Omega = \{0,1\}^{\mathbb Z^d}$. We agree to call a site $x\in \mathbb{Z}^d$ {\it{present}} or {\it{open}} if $\omega_x =1$, and {\it{vacant}} or {\it{closed}} if $\omega_x =0$. The notation we set up for the bond percolation model in the above paragraph now carries over to the site percolation set-up, pertaining to a random subgraph $\mathcal{C}$ of $\mathbb{Z}^d$ whose vertices and edges are the set of open sites and the set of edges whose two endpoints are open sites, respectively.} {We will now postulate a set of conditions imposed on the bond and site percolation models and subsequently describe the explicit models of interest that satisfy these conditions. The general requirements are the following: \\ {\bf{Assumption 1.}} For $\mathbb{P}$-a.e. $\omega$, there exists a unique infinite cluster $\mathcal C_\infty(\omega)$ in $\mathbb{Z}^d$. Note that under this assumption, $\mathbb{P}(\Omega_0) > 0$ and consequently, $\mathbb{P}_0$ is well-defined. \\ {\bf{Assumption 2.}} For each $x \in \mathbb{Z}^d \setminus \{0\}$, $\mathbb{P}$ is invariant and ergodic with respect to the transformation $\tau_x$. \\ {\bf{Assumption 3.}} We assume that there exist $c_1, \dots, c_4 > 0$ such that for each $x \in \mathbb{Z}^d$, \[ \mathbb{P}[{\rm d}_{\textrm{ch}}(0,x) \ge c_1 |x|_1 ; 0,x \in {\mathcal C}_\infty(\omega) ] \leq c_2 \exp(-c_3 (\log |x|_1)^{1+c_4}). \] We will need to impose further assumptions. Let us first define, for any fixed $\omega\in\Omega_0$ and $e\in\mathcal U_d$, \begin{equation}\label{def-n} k(\omega,e)=\inf\{k\geq 1: \tau_{ke}\,\,\omega\in\Omega_0\}. \end{equation} Note that under Assumption 2, by the Poincar\'e recurrence theorem (cf. \cite[Section 2.3]{P89}), $k(\omega,e)$ is finite $\mathbb{P}_0$-a.s. {\bf{Assumption 4.}} With the above definition of $k(\omega,e)$, we then assume that there exist $c_5, c_6>0$ so that \[ \mathbb{P}_0 \big[d_{\mathrm{ch}}(0, k(\omega, e)e) > n \big] \le c_5 \exp(-c_6 n). \] {\bf{Assumption 5.}} The FKG inequality holds, that is, $\mathbb{P}(A \cap B) \ge \mathbb{P}(A)\mathbb{P}(B)$ for any two increasing events $A$ and $B$ in $\Omega$. } We now turn to a precise description of the specific models that we will be concerned with; all of the following models satisfy the requirements listed above (see Lemma \ref{chemdist}--Lemma \ref{lemma-FKG}). \subsubsection{The bond percolation models.} \label{sec-intro-bond} We first describe two classical models related to bond percolation. \noindent $\bullet$ {\bf{I.I.D. bond percolation.}} We fix the {\it{percolation parameter}} $p\in(0,1)$ and denote by $$ \mathbb{P}=\mathbb{P}_p:=\big(p \delta_1+ (1-p) \delta_0\big)^{\mathbb B_d} $$ the product measure with marginals $\mathbb{P}(\omega_b=1)=p=1- \mathbb{P}(\omega_b=0)$. Note that the product measure $\mathbb{P}$ is invariant under the action of the translation group $\{\tau_x\}_x$. It is known that there is a critical percolation probability $p_c=p_c(d)$, which is the infimum of all $p$'s such that $\mathbb{P}(0\in \mathcal C_\infty)>0$. In this paper we only consider the case $p>p_c$. 
By Burton-Keane's uniqueness theorem (\cite{BK89}), the infinite cluster is unique and so $\mathcal C_\infty$ is connected with $\mathbb{P}$-probability one. \medskip \noindent $\bullet$ {\bf{Random cluster model.}} The second example is the random-cluster model, which is a natural extension of Bernoulli bond percolation. However, this model exhibits long range correlations and one necessarily drops the i.i.d. structure present in the first example. Let us briefly recall the basic structure and the salient properties of this model. Let $d \ge 2$, $p \in [0,1]$, $q \ge 1$, and let also $\Lambda$ be a box in $\mathbb{Z}^{d}$ with boundary condition $\xi \in \{0,1\}^{\mathbb B_d}$. Let $\mathbb{P}_{\Lambda, p, q}^{\xi}$ be the random-cluster measure on $\Lambda$, defined as $$ \mathbb{P}_{\Lambda, p, q}^{\xi}(\{\omega\}) = \frac{1}{Z} \,\, p^{n(\omega)}\,\, (1-p)^{|\Lambda| - n(\omega)}\,\, q^{o(\omega)}. $$ Here $Z$ is a normalizing constant that makes $\mathbb{P}_{\Lambda, p, q}^{\xi}$ a probability measure, while $n(\omega)$ is the number of open edges of $\omega$ in $\Lambda$, $|\Lambda|$ is the total number of edges in $\Lambda$, and $o(\omega)$ is the number of open clusters of $\omega_{\Lambda, \xi}$ intersecting $\Lambda$, where $$ \omega_{\Lambda, \xi} =\begin{cases} \omega \quad\mbox{on}\,\,\Lambda \\ \xi \quad\mbox{outside}\,\, \Lambda. \end{cases} $$ Let $$ \mathbb{P}_{p,q}^{\ssup{ b}} = \,\,\, \lim_{\Lambda \to \mathbb{Z}^d} \mathbb{P}_{\Lambda, p, q}^{{\ssup{b}}}. $$ In other words, $\mathbb{P}_{p,q}^{\ssup{b}}$ are the extremal infinite-volume random-cluster measures, with free (for $b=0$) and wired (for $b=1$) boundary conditions, respectively. For each $b\in \{0,1\}$, let $$ p_{c}^{\ssup b}(q) = \inf \bigg\{p \in [0,1] : \mathbb{P}_{p,q}^{\ssup b}(0 \leftrightarrow \infty) > 0\bigg\}, \quad b = 0, 1. $$ Then, $p_{c}^{\ssup 0}(q) = p_{c}^{\ssup 1}(q) \in (0,1)$ (\cite[(5.4)]{G06}) and we write this common value as $p_{c}(q)$. It is well-known that, for both $b=0$ and $b=1$, the measure $\mathbb{P}:=\mathbb{P}_{p, q}^{\ssup{ b}}$ is invariant and ergodic with respect to $\tau_{x}$ for every $x \in \mathbb{Z}^{d} \setminus \{0\}$ and for all $p \in [0,1]$ and $q\geq 1$ (\cite[(4.19) and (4.23)]{G06}). Furthermore, for every $p > p_{c}(q)$, there exists a unique infinite cluster $\mathcal{C}_{\infty}$, $\mathbb{P}_{p,q}^{\ssup b}$-a.s., by \cite[Theorem 5.99]{G06}. For our purpose, we also need the notion of {\it{slab critical probability}}, which is defined as follows. For $d \ge 3$, we let \begin{equation}\label{slab-d3} \begin{aligned} &S(L, n) := [0, L-1] \times [-n, n]^{d-1} \\ & \widehat{p}_{c}(q, L) := \inf \left\{ p : \liminf_{n \to \infty} \inf_{x \in S(L,n)} \mathbb{P}_{S(L,n), p,q}^{\ssup{ 0}}(0 \leftrightarrow x) > 0 \right\} \\ &\widehat{p}_{c}(q) := \lim_{L \to \infty} \widehat{p}_{c}(q, L). \end{aligned} \end{equation} For $d = 2$, for $e_{n} = (n, 0) \in \mathbb{R}^{2}$, we let \begin{equation}\label{slab-d2} \begin{aligned} & {p_{g}(q)} := \sup \left\{p : \lim_{n \to \infty} \frac{- \log \mathbb{P}_{p,q}^{\ssup{ 0}}(0 \leftrightarrow e_{n})}{n} > 0 \right\}, \\ &\widehat{p}_{c}(q) := \frac{q(1-p_{g}(q))}{p_{g}(q) + q(1 - p_{g}(q))} \end{aligned} \end{equation} and we have the bound $1 > \widehat{p}_{c}(q) \ge p_{c}(q)$. 
Although equality is believed to be true in the last relation (\cite[Conjecture 5.103]{G06}), to the best of our knowledge, the only known proofs are available only for the case $q = 1$ (i.e., the case of Bernoulli bond percolation (see Grimmett and Marstrand \cite{GM90}), and for $d = 2$ and every $q\geq 1$ (see Beffara and Duminil-Copin \cite{BD12}), and for $d \ge 3$ and $q=2$ (i.e., {\it{FK-Ising model}}, see Bodineau \cite{B05}). We will henceforth work in the regime that $$ p > \widehat{p}_{c}(q), $$ and will write $\mathbb{P}= \mathbb{P}_{p, q}^{\ssup{ b}}$ and $\mathbb{P}_0=\mathbb{P}(\cdot|\,0\in \mathcal C_\infty)$ throughout the rest of the article. If $d\geq 3$, in order to show that Assumption 4 is satisfied by the random cluster model, for technical reasons we will consider only the free boundary case. We point out that in the process of proving our main results (stated in Section \ref{sec-results}) corresponding to the random cluster model, we prove some geometric properties of this model as a necessary by-product. In particular, we prove a ``chemical distance estimate" between two points in the infinite cluster $\mathcal C_\infty$ (see {Lemma \ref{chemdist}}), and also obtain exponential tail bounds for the graph distance between the origin and the ``first arrival" of the infinite cluster $\mathcal C_\infty$ on each coordinate direction (see {Lemma \ref{lemma-ell}}). Although both results are part of the standard folklore in the i.i.d. percolation literature, the proofs of these two assertions for the random cluster model seem to be new, to the best of our knowledge. \subsubsection{Site percolation models.}\label{sec-intro-site} \ The second class of models we are interested in concerns {\it{site percolations}}, which include the classical Bernoulli i.i.d. percolation as well as models that carry long-range correlation. We turn to short descriptions of these models. \noindent $\bullet$ {\bf{Random interlacements in $d\geq 3$.}} This model was introduced by Sznitman \cite{Sz10}. Let $\mathbb T_N= \big(\mathbb{Z}/N\mathbb{Z}\big)^d$ be the discrete torus in $d\geq 3$. For every $u>0$, the {\it{random interlacement}} $\mathcal I^{\ssup u}$ is defined to be a subset of $\mathbb{Z}^d$ which arises as the local limit, as $N\to\infty$ of the sites visited by a simple random walk in $\mathbb T_N$ until time $\lfloor uN^d \rfloor$. For every finite subset $K\subset\mathbb{Z}^d$ with capacity $\mathrm{cap}(K)$, the distribution of $\mathcal I^{\ssup u}$ is given by $$ \mathbb{P}\big[\mathcal I^{\ssup u} \cap K= {\rm e} mptyset\big]= {\rm e} ^{-u \, \mathrm{cap}(K)}, $$ Furthermore, {for every $u > 0$,} $\mathbb{P}$-almost surely, the set $\mathcal I^{\ssup u}$ is an infinite connected subset of $\mathbb{Z}^d$ (see \cite[(2.21)]{Sz10}), exhibits long range correlations given by \begin{equation}\label{RI-corr} \bigg|\mathbb{P}\big[x,y\in \mathcal I^{\ssup u}\big]- \mathbb{P}\big[x\in \mathcal I^{\ssup u}\big] \, \mathbb{P}\big[y\in \mathcal I^{\ssup u}\big]\bigg| \sim \big(1+|x-y|\big)^{2-d}. {\rm e} nd{equation} {See \cite[(1.68)]{Sz10} for details.} \mathbb{E}dskip \noindent {$\bullet$ {\bf{Vacant set of the random interlacements in $d\geq 3$.}}} The {\it{vacant set of random interlacements}} $\mathcal V^{\ssup u}$ is defined to be the complement of the random interlacement $\mathcal I^{\ssup u}$ at level $u$, i.e., $$ \mathcal V^{\ssup u} = \mathbb{Z}^d \setminus \mathcal I^{\ssup u} \qquad \mathbb{P}\big[K\subset \mathcal V^{\ssup u} \big]= {\rm e} ^{-u \mathrm{cap}(K)}. 
$$ Furthermore, $\mathcal V^{\ssup u}$ also exhibits polynomially decaying correlation as in {\rm e} qref{RI-corr}. It is known that there exists $u_\star\in (0,\infty)$ such that almost surely, for every $u > u_\star$, all connected components of $\mathcal V^{\ssup u}$ are finite (\cite{TW11}), while for $u < u_\star$, $\mathcal V^{\ssup u}$ contains an infinite connected component $\mathcal C_\infty$, which is unique (\cite{T09aap}). Geometric properties of the random interlacements and {the vacant sets of them} have been studied extensively. {Cerny-Popov (\cite{CP12}) obtained sharp estimates on the graph distance in random interlacement}, and, Drewitz-Rath-Sapozhnikov (\cite{DRS14}) obtained sharp estimates on the graph distance { in the vacant set of random interlacement}, assuming that $u\in (0,\overline u)$ for some $\overline u \leq u_\star$, where $\overline u$ is introduced in \cite[Theorem 2.5]{DRS14}. Although it is believed that $\overline u= u_\star$, we will henceforth assume that $$ u<\overline u. $$ and in this regime, as before, we will write $\mathbb{P}_0=\mathbb{P}(\cdot|\, 0\in \mathcal C_\infty)$. \mathbb{E}dskip \noindent $\bullet$ {\bf{{Level sets of } Gaussian free fields in $d\geq 3$.}} This model has a strong background in statistical physics (see \cite{LS86} and \cite{Sh07} for a mathematical survey). The Gaussian free field on $\mathbb{Z}^d$ for $d \geq 3$, is a centered Gaussian field $\varphi = \big(\varphi(x)\big)_{x\in Z^d}$ under the probability measure $\mathbb{P}$ with covariance function $$ \mathbb{E}[\varphi(x)\varphi(y)] = g(x,y)=c_d |x-y|^{2-d}, $$ given by the Green function of the simple random walk on $\mathbb{Z}^d$. This leads to long range correlations exhibited by random field $\varphi$. For every $h\in \mathbb{R}$, the {\it{excursion set above level $h$}} is defined as $$ E^{\geq h} = \{x \in \mathbb{Z}^d \colon \varphi(x)\geq h\} $$ and it is known that there exists $h_\star\in [0,\infty)$ such that for every $h<h_\star$, $\mathbb{P}$-almost surely, $ E^{\geq h}$ contains a unique infinite connected component and for every $h > h_\star$, all the connected components of $ E^{\geq h}$ are finite. Like in the case of random interlacements and vacant set of random interlacements, results on the graph distance for the excursion level set $ E^{\geq h}$ were also obtained in \cite{DRS14} on the sub-regime $(-\infty, \overline h)$ for $\overline h\leq h_\star$. \cite[Remark 2.9]{DRS14} conjectures that $\overline h= h_\star \in (0,\infty)$ in all $d\geq 3$ and as before, we will also assume that $$ h\in (-\infty, \overline h), $$ which guarantees that the level set $E^{\geq h}$ has a unique infinite connected component $\mathcal C_\infty$ and as usual, we will write $\Omega_0=\{0\in\mathcal C_\infty\}$ and will work with the conditional measure $$ \mathbb{P}_0=\mathbb{P}(\cdot| 0\in\mathcal C_\infty). $$ \secdef \subsct\sbsect{The simple random walk on the percolation models.}\label{sec-SRWPC} We now define a (discrete time) simple random walk on the unique supercritical percolation cluster $\mathcal C_\infty$ corresponding to the percolation models discussed in the last section. Let a random walk start at the origin and at each unit of time, the walk moves to a nearest neighbor site chosen uniformly at random from the accessible neighbors. 
More precisely, for each $\omega\in \Omega_0$, $x\in \mathbb{Z}^d$ and $e\in \mathcal U_d$, we set \begin{equation}\label{pidef} \pi_\omega(x,e)=\frac{ \1_{\{\omega_e=1\}} \circ \tau_x}{\sum_{{e^\prime \in \mathcal U_d}} \1_{\{\omega_{e^\prime}=1\}} \circ \tau_x} \in [0,1], \end{equation} and define a simple random walk $X=(X_n)_{n\geq 0}$ as a Markov chain taking values in $\mathbb{Z}^d$ with the transition probabilities \begin{equation}\label{pitransit} \begin{aligned} &P^{\pi,\omega}_{0}(X_0=0)=1,\\ &P_{0}^{\pi,\omega}\big(X_{n+1}= x+e\big| X_n=x\big)=\pi_\omega(x,e). \end{aligned} \end{equation} This is a canonical way to ``put'' the Markov chain on the infinite cluster $\mathcal C_\infty$. Henceforth, we will refer to this Markov chain as the {\it{simple random walk on the percolation cluster}} (SRWPC). Let us remark that in the expression of $\pi_\omega(x,e)$ as well as $P_{0}^{\pi,\omega}$ we have used the notation for the bond percolation models appearing in Section \ref{sec-intro-bond}. Very similar expressions can be used for these objects pertaining to the site percolation models introduced in Section \ref{sec-intro-site} too. To alleviate notation, throughout the rest of the article we will continue to write the expressions \eqref{pidef} and \eqref{pitransit} for the transition kernels $\pi_\omega(x,e)$ and transition probabilities $P_{0}^{\pi,\omega}$ for the SRWPC corresponding to all the percolation models. \section{Main results}\label{sec-results} In Section \ref{sec-results-1} we will introduce the environment Markov chain, its empirical measures and certain relative entropy functionals which will be used later. In Section \ref{sec-results-2}, we will announce our main results. In Section \ref{sec-results-3} we will carry out a sketch of the existing proof technique related to elliptic RWRE (\cite{Y08},\cite{R06}), comment on the approach taken in the present paper regarding SRWPC and underline the differences from the earlier approach. \subsection{The environment Markov chain.}\label{sec-results-1} For each $\omega\in \Omega_0$, we consider the process $(\tau_{X_n}\omega)_{n\geq 0}$ which is a Markov chain taking values in the space of environments $\Omega_0$. This is the {\it{environment seen from the particle}} and it plays an important r\^ole in the present context; see Section 3.1 for a detailed description. We denote by \begin{equation}\label{localtime} \mathfrak L_n= \frac 1n \sum_{k=0}^{n-1} \delta_{\tau_{X_k}\omega,{X_{k+1}-X_{k}}} \end{equation} the empirical measure of the environment Markov chain and the nearest neighbor steps of the SRWPC $(X_n)_{n\geq 0}$. This is a random element of ${\mathcal M}_1(\Omega_0 \times \mathcal U_d)$, the space of probability measures on $\Omega_0\times \mathcal U_d$. Note that $\Omega_0\times\mathcal U_d$ inherits the induced product topology from $\Omega\times \mathcal U_d= \{0,1\}^{\mathbb B_d}\times\mathcal U_d$, while ${\mathcal M}_1(\Omega_0 \times \mathcal U_d)$ is equipped with the usual weak topology, with convergence being determined by convergence of integrals against continuous and bounded functions $f$ on $\Omega_0\times\mathcal U_d$. Note that under the weak topology ${\mathcal M}_1(\Omega_0 \times \mathcal U_d)$ is compact ($\Omega_0 \subset \Omega$ is closed and hence also compact). 
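As a purely illustrative aside (not used anywhere in the proofs), the following Python sketch simulates the SRWPC of \eqref{pidef}--\eqref{pitransit} on a finite box of $\mathbb{Z}^2$ for i.i.d. bond percolation with $p>p_c(2)=1/2$, sampling the environment lazily as the walk explores it, and prints the empirical mean velocity $X_n/n$; the box size, the crude handling of the conditioning on $\{0\in\mathcal C_\infty\}$ and all variable names are our own simplifications.
\begin{verbatim}
import random

random.seed(1)
L, p, n_steps = 200, 0.7, 20000          # box radius, percolation parameter, walk length
U = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # the set of unit steps U_d for d = 2

def key(x, e):
    """Unordered key of the bond between x and x + e."""
    y = (x[0] + e[0], x[1] + e[1])
    return (x, y) if x <= y else (y, x)

bonds = {}                               # lazily sampled percolation configuration omega
def is_open(x, e):
    return bonds.setdefault(key(x, e), random.random() < p)

x, n = (0, 0), 0
while n < n_steps and max(abs(x[0]), abs(x[1])) < L:
    accessible = [e for e in U if is_open(x, e)]
    if not accessible:                   # origin isolated: crude stand-in for
        break                            # conditioning on the event {0 in C_infinity}
    e = random.choice(accessible)        # pi_omega(x, e): uniform over open bonds at x
    x = (x[0] + e[0], x[1] + e[1])
    n += 1

if n > 0:
    print("X_n / n =", (x[0] / n, x[1] / n))   # close to (0, 0), as the LLN predicts;
                                               # deviations from 0 are governed by the
                                               # level-1 rate function discussed below
\end{verbatim}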
The empirical measures $\mathfrak L_n$ were introduced, and their large deviation behavior (in the {\it{quenched setting}}) for elliptic random walks in random environments was studied, by Yilmaz (\cite{Y08}). We note that, via the mapping $(\omega,e)\mapsto (\omega, \tau_e\omega)$, the space ${\mathcal M}_1(\Omega_0 \times \mathcal U_d)$ is embedded into ${\mathcal M}_1(\Omega_0\times \Omega)$, and hence, every element $\mu\in {\mathcal M}_1(\Omega_0\times \mathcal U_d)$ can be thought of as the {\it{pair empirical measure}} of the environment Markov chain. In this terminology, we can define its marginal distributions by \begin{equation}\label{marginals} \begin{aligned} &{\rm d} (\mu)_1(\omega)= \sum_{e\in \mathcal U_d} {\rm d} \mu(\omega,e), \\ & {\rm d} (\mu)_{2}(\omega)= \sum_{e\in \mathcal U_d}\sum_{\omega^\prime\colon \, \tau_e\omega^\prime=\omega} {\rm d} \mu(\omega^\prime,e)=\sum_{e\in \mathcal U_d} {\rm d} \mu(\tau_{-e}\omega,e). \end{aligned} \end{equation} Here $(\mu)_1$ is a measure on $\Omega_0$ and $(\mu)_2$ is a measure on $\Omega$. A relevant subspace of ${\mathcal M}_1(\Omega_0 \times \mathcal U_d)$ is given by \begin{equation}\label{relevantmeasures} \begin{aligned} {\mathcal M}_1^\star={\mathcal M}_{1}^{\star}(\Omega_0\times \mathcal U_d)&=\bigg\{\mu\in{\mathcal M}_1(\Omega_0 \times \mathcal U_d)\colon\, (\mu)_1=(\mu)_2\ll \mathbb{P}_0 \, \,\mbox{and}\, \,\mathbb{P}_0\mbox{-almost surely,}\\ &\qquad\qquad\frac{{\rm d} \mu(\omega,e)}{{\rm d} (\mu)_{1}(\omega)} >0 \,\, \mbox{if and only if}\,\,\omega_e=1\,\mbox{for}\, e\in \mathcal U_d\bigg\}. \end{aligned} \end{equation} We remark that here $(\mu)_1=(\mu)_2$ means that $(\mu)_2$ is supported on $\Omega_0$ and coincides with $(\mu)_1$. Furthermore, Lemma \ref{onetoone} shows that elements in ${\mathcal M}_1^\star$ are in one-to-one correspondence with Markov kernels (w.r.t. the environment process) on $\Omega_0$ which admit invariant probability measures that are absolutely continuous with respect to $\mathbb{P}_0$. Finally, we define a {\it{relative entropy functional}} $\mathfrak I: {\mathcal M}_1(\Omega_0 \times \mathcal U_d) \rightarrow [0,\infty]$ via \begin{equation}\label{Idef} \mathfrak I(\mu) = \begin{cases} \int_{\Omega_0} \sum_{e\in \mathcal U_d} {\rm d}\mu(\omega,e) \log \frac{{\rm d} \mu(\omega,e)}{{\rm d}(\mu)_{1}(\omega) \pi_\omega(0,e)} \quad\mbox{if}\,\mu\in{\mathcal M}_1^\star, \\ \infty\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\mbox{else.} \end{cases} \end{equation} For every continuous, bounded and real-valued function $f$ on $\Omega_0 \times \mathcal U_d$, we denote by $$ \mathfrak I^\star(f)= \sup_{\mu\in{\mathcal M}_1(\Omega_0 \times \mathcal U_d)} \big\{ \langle f,\mu\rangle - \mathfrak I(\mu)\big\} $$ the {\it{Fenchel-Legendre transform}} of $\mathfrak I(\cdot)$. Likewise, for every $\mu\in {\mathcal M}_1(\Omega_0 \times \mathcal U_d)$, $\mathfrak I^{\star\star}(\mu)$ denotes the {\it{Fenchel-Legendre transform}} of $\mathfrak I^\star(\cdot)$. 
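To make the functional \eqref{Idef} concrete, here is a small self-contained Python illustration on a toy model in which $\Omega_0$ is replaced by a three-point cycle and $\pi$ by the fully elliptic kernel assigning probability $1/2$ to each of the two steps; the example, including the tilted measure and all names, is ours and has no direct bearing on the percolation setting.
\begin{verbatim}
import numpy as np

S, steps = range(3), (+1, -1)                  # toy "environments" and unit steps
pi = {(s, e): 0.5 for s in S for e in steps}   # fully elliptic toy kernel

def I(mu):
    """I(mu) = sum_{s,e} mu(s,e) log[ mu(s,e) / ( (mu)_1(s) * pi(s,e) ) ]."""
    mu1 = {s: sum(mu[(s, e)] for e in steps) for s in S}
    return sum(mu[(s, e)] * np.log(mu[(s, e)] / (mu1[s] * pi[(s, e)]))
               for s in S for e in steps if mu[(s, e)] > 0)

# (a) pair measure of the walk itself: the conditional law equals pi, so I = 0.
mu_walk = {(s, e): (1 / 3) * pi[(s, e)] for s in S for e in steps}
# (b) a tilted pair measure with a drift (step +1 with probability 0.8); the first
#     marginal stays uniform and equals the second one, so mu is admissible.
mu_tilt = {(s, e): (1 / 3) * (0.8 if e == +1 else 0.2) for s in S for e in steps}

print(I(mu_walk))   # 0.0
print(I(mu_tilt))   # = KL(Bernoulli(0.8) || Bernoulli(0.5)) ~ 0.193 nats per step
\end{verbatim}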
\secdef \subsct\sbsect{Main results: Quenched large deviation principle.}\label{sec-results-2} We are now ready to state the main result of this paper, which proves a large deviation principle for the distributions $P^{{\mathfrak{p}}i,\omega}_0 \mathfrak L_n^{-1}$ on ${\mathcal M}_1(\Omega_0\times\mathcal U_d)$ (usually called {\it{level-2}} large deviations) and the distributions $P^{{\mathfrak{p}}i,\omega}_0 {\frac {X_n}{n}}^{-1}$ on $\mathbb{R}^d$ (usually called {\it{level-1}} large deviations). Both statements hold true for $\mathbb{P}_0$- almost every $\omega\in \Omega_0$ and in the case of elliptic RWRE, these already exist in the literature (see Yilmaz \cite{Y08} for level-2 large deviations and Rosenbluth \cite{R06} for level-1 large deviations) with the assumption which requires the $p$-th moment of the logarithm of the RWRE transition probabilities to be finite, for $p>d$. In the present context, due to zero transition probabilities of the SRWPC, we necessarily have to drop this moment assumption. Before we announce our main result precisely, let us remind the reader that all the percolation models that were required to satisfy Assumptions 1-5 in Section \ref{sec-intro-models} or were specifically introduced in Section \ref{sec-intro-bond} and Section \ref{sec-intro-site}, are assumed to be supercritical, the origin is always contained in the unique infinite cluster $\mathcal C_\infty$, $\mathbb{P}_0=\mathbb{P}(\cdot| \{0\in\mathcal C_\infty\})$ denotes the conditional environment measure and $P^{{\mathfrak{p}}i,\omega}_0$ stands for the transition probabilities for SRWPC defined in {\rm e} qref{pitransit}. Here is the statement of our first main result. \begin{theorem}[Quenched LDP for the pair empirical measures]\label{thmlevel2} Let $d\geq 2$. Then for $\mathbb{P}_0$- almost every $\omega\in \Omega_0$, the distributions of $\mathfrak L_n$ under $P^{{\mathfrak{p}}i,\omega}_0$ satisfies a large deviation principle in the space of probability measures on ${\mathcal M}_1(\Omega_0 \times \mathcal U_d)$ equipped with the weak topology. The rate function $\mathfrak I^{\star\star}$ is the double Fenchel-Legendre transform of the functional $\mathfrak I$ defined in {\rm e} qref{Idef}. Furthermore, $\mathfrak I^{\star\star}$ is convex and has compact level sets. {\rm e} nd{theorem} In other words, for $\mathbb{P}_0$- almost every $\omega\in \Omega_0$, \begin{equation}\label{ldpub} \limsup_{n\to\infty} \frac 1n \log P^{{\mathfrak{p}}i,\omega}_0 \big(\mathfrak L_n\in \mathcal C\big) \leq -\inf_{\mu\in \mathcal C} \mathfrak I^{\star\star}(\mu) \quad \forall\,\,\mathcal C\subset {\mathcal M}_1(\Omega_0 \times \mathcal U_d)\,\,\bar{\mu}ox{closed}, {\rm e} nd{equation} and \begin{equation}\label{ldplb} \limsup_{n\to\infty} \frac 1n \log P^{{\mathfrak{p}}i,\omega}_0 \big(\mathfrak L_n\in \mathcal G\big) \geq -\inf_{\mu\in \mathcal G} \mathfrak I^{\star\star}(\mu) \quad \forall\,\,\mathcal G\subset {\mathcal M}_1(\Omega_0 \times \mathcal U_d)\,\,\bar{\mu}ox{open}. {\rm e} nd{equation} \noindent A standard computation shows that the functional $\mathfrak I$ defined in {\rm e} qref{Idef} is convex on ${\mathcal M}_1(\Omega_0\times\mathcal U_d)$. The following lemma, whose proof is based on the ``zero speed regime" of the SRWPC under a supercritical drift and is deferred to until Section \ref{sec-6}, shows that $\mathfrak I^{\star\star}\ne \mathfrak I$. \begin{lemma}\label{nonlsc} Let $d\geq 2$. Then $\mathfrak I$ is not lower-semicontinuous on ${\mathcal M}_1(\Omega_0\times \mathcal U_d)$. 
Hence, $\mathfrak I\ne \mathfrak I^{\star\star}$. \end{lemma} We remark that Theorem \ref{thmlevel2} is an easy corollary to the existence of the limit $$ \lim_{n\to\infty} \frac 1n \log E^{\pi,\omega}_0 \big\{\exp\big\{n\big \langle f, \mathfrak L_n\big\rangle\big\}\big\} =\lim_{n\to\infty} \frac 1n \log E^{\pi,\omega}_0 \bigg\{\exp\bigg( \sum_{k=0}^{n-1}f\big(\tau_{X_k}\omega, X_{k+1}-X_{k}\big)\bigg)\bigg\}, $$ for every continuous, bounded function $f$ on $\Omega_0 \times \mathcal U_d$ and the symbol $\langle f, \mu\rangle$ denotes, in this context, the integral $\int_{\Omega_0} {\rm d} \mathbb{P}_0 (\omega) \sum_{e\in \mathcal U_d} f(\omega,e) {\rm d} \mu(\omega,e)$. We formulate it as a theorem. \begin{theorem}[Logarithmic moment generating functions]\label{thmmomgen} For $d\geq 2$, $p> p_c(d)$ and every continuous and bounded function $f$ on $\Omega_0 \times \mathcal U_d$, $$ \lim_{n\to\infty} \frac 1n \log E^{\pi,\omega}_0 \bigg\{\exp\bigg( \sum_{k=0}^{n-1}f\big(\tau_{X_k}\omega, X_{k+1}-X_{k}\big)\bigg)\bigg\} = \sup_{\mu\in {\mathcal M}_{1}^{\star}} \big\{\langle f,\mu\rangle- \mathfrak I(\mu)\big\} \quad\mathbb{P}_0\mbox{-a.s.} $$ \end{theorem} We will first prove Theorem \ref{thmmomgen} and deduce Theorem \ref{thmlevel2} directly. Note that via the contraction map $\xi: {\mathcal M}_1(\Omega_0 \times \mathcal U_d) \longrightarrow \mathbb{R}^d$, $$ \mu\mapsto \int_{\Omega_0} \sum_e \, e\,{\rm d}\mu(\omega,e), $$ we have $\xi(\mathfrak L_{n})= \frac{X_n-X_0}n= \frac {X_n} n$. Our second main result is the following corollary to Theorem \ref{thmlevel2}. \begin{cor}[Quenched LDP for the mean velocity of SRWPC]\label{thmlevel1} Let $d\geq 2$. Then the distributions $P^{\pi,\omega}_0\big(\frac {X_n}n\in \cdot\big)$ satisfy a large deviation principle with a rate function $$ \begin{aligned} J(x)&= \inf_{\mu\colon \xi(\mu)=x} \mathfrak I(\mu)\qquad x\in\mathbb{R}^d. \end{aligned} $$ \end{cor} \begin{remark}\label{rmk-Kubota} Note that Corollary \ref{thmlevel1} has been obtained by Kubota (\cite{K12}) for the SRWPC based on the method of Zerner (\cite{Z98}). Kubota used sub-additivity and overcame the lack of the moment criterion of Zerner by using classical results about the geometry of the percolation cluster. This way he obtained a rate function which is convex and is given by the Legendre transform of the {\it{Lyapunov exponents}} derived by Zerner (\cite{Z98}). However, using the sub-additive ergodic theorem one does not get any expression or formula for the rate function, nor does the sub-additivity seem amenable to deriving a {\it{level 2}} quenched LDP as in Theorem \ref{thmlevel2}. \end{remark} \begin{remark}\label{rmk-Mourrat} Mourrat (\cite{M12}) also considered a level-1 quenched large deviation principle for a model of random walk in random potential containing the non-elliptic case, by taking a strategy similar to \cite{Z98}. Note that the framework in \cite{M12} assigns equal probability to each path of a fixed length in an infinite cluster, so the random walk can be regarded as a Markov chain on the {\it augmented} space obtained by adding a cemetery point to $\mathbb{Z}^d$ (see \cite{Z98-I}). As we will see, our arguments will rely on the random walk being a Markov chain on an infinite percolation cluster $\mathcal{C}_{\infty}$, and it will be intriguing to consider extensions of our results to the random walk in random potential (RWRP) framework considered in \cite{M12}. 
Furthermore, given the broad range of models covered in the present paper, it will also be interesting to consider potentials that are only invariant and ergodic w.r.t. spatial shifts, while dropping the i.i.d. requirement imposed in \cite{M12}. However, in order to derive large deviation principle for random walks on such random environments, it is desirable to have good chemical distance estimates on the infinite cluster. {\rm e} nd{remark} \secdef \subsct\sbsect{Survey of earlier proof technique in the elliptic case and comparison with our method.}\label{sec-results-3} Earlier relevant work for quenched large deviations was carried out by Kosygina-Rezakhanlou-Varadhan (\cite{KRV06}) for elliptic diffusions in a random drift. Rosenbluth (\cite{R06}) first adapted this approach to the case of elliptic RWRE and derived a level-1 quenched large deviation principle for the distribution of the mean-velocity (the so-called {\it{level-1}} large deviations, recall Corollary \ref{thmlevel1}). Yilmaz (\cite{Y08}) then extended Rosenbluth's work on elliptic RWRE to a finer large deviation result for the pair empirical measures of the environment Markov chain (the so-called {\it{level-2}} large deviations, recall Theorem \ref{thmlevel2}). In the present case of deriving similar level-2 quenched large deviations for SRWPC, as a guiding philosophy, we also follow the main steps of Yilmaz (\cite{Y08}). However, due to fundamental obstacles that come up in several facets stemming from the inherent non-ellipticity of the percolation models, an actual execution of the existing method \cite{Y08} fails for the present case of SRWPC. In order to put our present work in context, in this section we will present a brief survey on the existing method that treated the elliptic case of RWRE (\cite{Y08}), and to emphasize the similarities and differences of our approach to the earlier one, and we will also provide a comparative description of the main strategy for the proof of Theorem \ref{thmmomgen} that allows the treatment of models that are non-elliptic (like SRWPC), while simplifying the earlier proof technique used for the elliptic case. This will also underline the technical novelty of the present work. \secdef \subsct\sbsect{Comparison of our proof techniques with the earlier approach used for elliptic RWRE:} As mentioned before, the purpose of the present subsection is to compare the proof techniques in the present paper to those of previous work on elliptic RWRE (\cite{KRV06,R06,Y08}). In particular, and unlike the rest of the paper, the current subsection is intended for readers familiar with the techniques and ideas of those papers. To keep notation consistent, in this survey we will continue to denote by $\mathbb{P}$ the law of a stationary and ergodic random environment and by ${\mathfrak{p}}i(\omega,\cdot)$ we will denote the random walk transition probabilities in the random environment. One of the requirements under which earlier results concerning elliptic RWRE (\cite{R06,Y08}) is the {\it{moment condition}} requiring $\int |\log{\mathfrak{p}}i|^{d+{\rm e} ps}{\rm d}\mathbb{P}<\infty$ for some ${\rm e} ps>0$. 
The crucial argument is the existence of the limiting logarithmic moment generating function (recall Theorem \ref{thmmomgen}) whose proof splits into three main steps: \noindent {\it{Lower bound.}} For models in elliptic RWRE, the lower bound part is based on a classical change of measure argument for the environment Markov chain, followed by an application of an ergodic theorem for the tilted Markov chain. This ergodic theorem is standard (see Kozlov \cite{K85}, Papanicolau-Varadhan \cite{PV81}) in the elliptic case where the (tilted) Markov chain transition probabilities are assumed to be strictly positive (as in the case studied in \cite{Y08}). In the current case of SRWPC, the lower bound also follows the standard method of tilting the environment Markov chain as the elliptic RWRE case. However, for the tilted environment Markov chain for the percolation models, the requisite ergodic theorem needs to be extended to the non-elliptic case which is the content of Theorem \ref{ergodicthm}. \noindent{\it{Upper bound.}} For the elliptic RWRE case, the upper bound part of the proof starts with a ``perturbation" of the exponential moment of the pair empirical measures $\mathfrak L_n$ defined in {\rm e} qref{localtime}. This perturbation comes from integrating certain ``gradient functions" w.r.t. the local times $\mathfrak L_n$, and these gradient functions are intrinsically defined by the spatial action of the translation group $\mathbb{Z}^d$ on the environment space. In the elliptic case (\cite{Y08}, \cite{R06} and \cite{KRV06}), the class $\mathcal K$ of such gradient functions $F\in\mathcal K$ are required to satisfy the {\it{closed loop condition}} that underlines their gradient structure, a moment condition that requires $F\in L^{d+{\rm e} ps}(\mathbb{P})$, and a mean-zero condition that demands $\mathbb{E}^{\mathbb{P}}[F]=0$. Any such $F\in \mathcal K$ leads to its {\it{corrector}} $V_F(\omega,x)=\sum_{j=0}^{n-1} F(\tau_{x_j}\omega,x_{j+1}-x_j)$ which is defined as the integral of the gradient $F$ along any path $x_0,{\rm d}ots,x_n=x$ between two fixed points $x_0$ and $x_n$. Note that the choice of the path does not influence the integral, thanks to the closed loop condition imposed on $F$. For any $F\in\mathcal K$, Rosenbluth (\cite{R06}) then proved that, the corresponding corrector $V_F$ has a ``sub-linear growth at infinity". Roughly speaking, this means, $\mathbb{P}$-almost surely, $|V_F(\omega,x)|=o(|x|)$ as $|x|\to\infty$. This is a crucial technical step in Rosenbluth's work that is proved adapting the original approach of \cite{KRV06} involving Sobolev embedding theorem and invoking Garsia-Rodemich-Rumsey estimate, and his the proof there hinges on the moment condition $F\in L^{d+{\rm e} ps}(\mathbb{P})$.$^1$ \footnote{$^{1}$Recall that the elliptic random environment is also required to satisfy the moment condition $\mathbb{E}^{\mathbb{P}}[|\log{\mathfrak{p}}i|^{d+{\rm e} ps}]<\infty$.} Since for elliptic RWRE, $\mathbb{P}$ is invariant w.r.t. the translations, one then exploits the mean-zero condition of the gradients $F$ and invokes the ergodic theorem to get the desired sub-linearity property. This property implies, in particular, that the effect of the aforementioned perturbation by the corrector $V_F$ in the exponential moment is indeed negligible. This is the crucial argument for the upper bound part for the existing literature on elliptic RWRE. 
Now for the upper bound part for SRWPC, already the aforementioned moment condition of the elliptic case fails (zeroes of SRWPC transition probabilities ${\mathfrak{p}}i$ already make the first moment $\mathbb{E}_0(|\log{\mathfrak{p}}i|)$ possibly infinite). Hence, we are not entitled to follow the method of Rosenbluth (\cite{R06}, see also \cite{GRSY13}) for proving the sub-linear growth property of the correctors. Moreover, the crucial mean-zero condition required in the elliptic case also fails for percolation due to the fundamental fact that the {\it{spatial action of the shifts $\tau_e$ on $\Omega_0$ is not $\mathbb{P}_0$-measure preserving}}. The lack of these two properties requires that we reformulate the conditions on our class of gradients. Besides the closed loop property in the infinite cluster, we demand uniform boundedness of the gradients in $\mathbb{P}_0$-norm and the validity of an ``induced mean-zero property" to circumvent the above mentioned non-invariant nature of the spatial shifts $\tau_e$ w.r.t. $\mathbb{P}_0$, see Section \ref{subsec-classG} for details. With these assumptions, we prove the requisite "sub-linear growth" property of the correctors corresponding to our gradients, see Theorem \ref{sublinearthm}. Our approach for proving this sub-linearity property is therefore different from the existing literature (\cite{R06}, \cite{GRSY13}). Instead, it is based on techniques from ergodic theory, combined with geometric arguments that capture precise control on the ``chemical distance" (or the geodesic distance) between two points $x$ and $y$ in the infinite cluster $\mathcal C_\infty$ (proved in Lemma \ref{chemdist}), as well as exponential tail bounds for the shortest distance between the origin and the first arrival of the cluster in the positive parts of the co-ordinate axes (proved in Lemma \ref{lemma-ell}). Given the above sub-linear growth property on the infinite cluster which holds the pivotal argument, we then carry out the same ``corrector perturbation" approach as in the elliptic case to the desired upper bound property, see Lemma \ref{ub}. \noindent{\it{Equivalence of lower and upper bounds.}} Having established both lower and upper bounds, one then faces the task of matching these two bounds. In the case of elliptic diffusions with a random drift, a seminal idea was introduced in \cite{KRV06} by applying convex variational analysis followed by applications of certain min-max theorems. The success of this ``min-max" approach relies on, among other requirements, ``compactness" of the underlying variational problem. In the elliptic case, this can be achieved by truncating the variational problem at a finite level which allows the application of the min-max theorems, followed by an approximation procedure by letting the truncation level to infinity. In the lattice, i.e., for elliptic RWRE a similar idea was used (\cite{Y08}, \cite{R06}) in order to use the min-max argument. Indeed, by restricting the variational problem to a finite region in the environment space $\Omega$ and taking conditional expectation w.r.t. a finite $\sigma$-algebra $B_k$, \cite{Y08} then used the min-max theorems for every fixed $k$. 
Roughly speaking, this leads to the study of conditional expectations
\begin{equation}\label{def-Fk}
F_k:=\mathbb{E}\big[f_k-f_k\circ \tau_e\,\big|\, B_{k-1}\big],
\end{equation}
for test functions $f_k$, and one needs to prove that $F_k\to F$ as $k\to\infty$ with $F\in \mathcal K$ (where $\mathcal K$ is the class of gradients with the required properties discussed in the upper bound part). Note that, for every fixed $k$, $F_k$ is not a gradient. However, exploiting the underlying assumption $\mathbb{E}^\mathbb{P}[|\log\pi|^{d+\varepsilon}]<\infty$, one shows that $\{F_k\}_k$ remains uniformly bounded in $L^{d+\varepsilon}(\mathbb{P})$, so that one can take a weak limit $F$. After successive applications of the tower property for the conditional expectations, one then proves that the limit $F$ is indeed a gradient (i.e., satisfies the aforementioned closed loop condition) with $F\in L^{d+\varepsilon}(\mathbb{P})$. Furthermore, $\mathbb{E}^{\mathbb{P}}[F]=0$, which comes for free from \eqref{def-Fk} and the {\it{invariant action}} of $\tau_e$ w.r.t.\ the environment law $\mathbb{P}$. In particular, $F\in\mathcal K$, and modulo some technical work, this fact also matches the lower and upper bounds of the limiting logarithmic moment generating function in the elliptic RWRE case.

\noindent Now for the ``equivalence of bounds'' for SRWPC, one can also try to emulate the strategy of \cite{Y08} and \cite{R06} by carrying out the same convex variational analysis and applying the same min-max theorems, restricting to a finite region and conditioning on a finite $\sigma$-algebra $B_k$. However, if one takes the conditional expectation as in \eqref{def-Fk} w.r.t.\ $\mathbb{E}_0$, any attempt to derive the requisite properties (stated in Section \ref{subsec-classG}) of the limiting function $F$ completely fails. Note that a conditional expectation w.r.t.\ $\mathbb{E}_0$ involves the measure $\mathbb{P}_0$, which is not preserved under the action of the shifts $\tau_e$. In particular, we are not entitled to use any tower property. Moreover, conditioning w.r.t.\ a finite $\sigma$-algebra $B_k$ is ill-suited for handling the possibly long excursions of the infinite cluster before it hits the coordinate axes in each direction, which is a crucial issue one has to handle in order to prove the requisite induced mean-zero property of our limiting gradient.

\noindent Therefore, for the equivalence of bounds, we take a different route based on an {\it{entropy coercivity}} and {\it{entropy penalization}} method, which constitutes Section \ref{sec-proof-ldp}. This approach seems to be more natural in that it exploits the built-in structure of relative entropies that is already present in the underlying variational formulas. We make use of the coercivity property of the relative entropies in Lemma \ref{thm-lbub-lemma1} and Lemma \ref{thm-lbub-lemma2} to overcome the lack of compactness in our variational analysis. One advantage of this method is that our variational analysis leads to the study of {\it{gradients}} directly, where we can work with functions
\begin{equation}\label{def-Gn-intro}
G_n(\omega,e)= g_n(\omega)-g_n(\tau_e\omega),
\end{equation}
on the infinite cluster (see Lemma \ref{lemma-last}), instead of relying on conditional expectations as in \eqref{def-Fk}.
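To see the gradient structure of $G_n$ at a glance, note the elementary telescoping identity behind it: along any closed nearest-neighbor path $x_0,x_1,\dots,x_m=x_0$ on the infinite cluster,
\begin{equation*}
\sum_{j=0}^{m-1} G_n(\tau_{x_j}\omega, x_{j+1}-x_j)
=\sum_{j=0}^{m-1}\big[g_n(\tau_{x_j}\omega)-g_n(\tau_{x_{j+1}}\omega)\big]
= g_n(\tau_{x_0}\omega)-g_n(\tau_{x_m}\omega)=0,
\end{equation*}
since $\tau_{x_{j+1}-x_j}\tau_{x_j}=\tau_{x_{j+1}}$. As the closed loop condition is linear in $G$, it is inherited by weak limits, which is the mechanism exploited in Lemma \ref{lemma-last}.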
Given the gradient structure of $G_n$, and the estimates proved in Lemmas \ref{chemdist} and \ref{lemma-ell}, our analysis then also shows that the limiting gradients satisfy all the desired properties formulated in Section \ref{subsec-classG} (see Lemma \ref{lemma-last}), and the lower and upper bounds are readily matched. We also remark that the argument in our approach works equally well for the elliptic RWRE model considered before, see Remark \ref{rmk-simplify}. In particular, our method completely avoids the tedious effort needed in the earlier approach (conditional expectations, the tower property and Mazur's theorem) to show that the limit of $F_k$ defined in \eqref{def-Fk} is a gradient and that the upper and lower bounds coincide. In our approach, any weak limit of $G_n$ defined in \eqref{def-Gn-intro} is immediately a gradient, and this readily makes the lower and the upper bound match (again, it is imperative here that we can work with $G_n$, which is itself a gradient, unlike $F_k$ in \eqref{def-Fk}). We refer to \cite[Sect.\ 3.3]{R06} or \cite[Sect.\ 2.1.3]{Y08} for a comparison with our approach in proving Theorem \ref{thm-lbub}.

\begin{remark}[Differences to the Kipnis-Varadhan corrector]\label{KVremark}
Let us finally remark that the class of gradient functions introduced in Section \ref{subsec-classG} shares some similarities with the gradient of the Kipnis-Varadhan corrector, which is a central object of interest for reversible random motions in random media. Particularly for SRWPC this corrector is crucial for proving a quenched central limit theorem (\cite{SS04}, \cite{MP07}, \cite{BB07}, \cite{PRS15}): the corrector expresses the deformation caused by a harmonic embedding of the random walk in the infinite cluster in $\mathbb{R}^d$, and modulo this deformation, the random walk becomes a martingale. However, our gradient functions defined in Section \ref{subsec-classG} are structurally different from the gradient of the Kipnis-Varadhan corrector. Though they share similar properties as {\it{gradients}}, our gradients lack the above-mentioned {\it{harmonicity}} property enjoyed by the Kipnis-Varadhan corrector. This can be explained by the fact that large deviation lower bounds are based on a certain {\it{tilt}}, which spoils any inherent reversibility of the model, reversibility being a crucial ingredient of Kipnis-Varadhan theory.\qed
\end{remark}

The rest of the article is organized as follows. In Section \ref{sec-lb}, Section \ref{sec-classG} and Section \ref{sec-proof-ldp} we prove the lower bound, the upper bound and the equivalence of bounds for Theorem \ref{thmmomgen}, respectively. Section \ref{sec-6} is devoted to the proofs of Theorem \ref{thmmomgen}, Theorem \ref{thmlevel2}, Corollary \ref{thmlevel1} and Lemma \ref{nonlsc}.

\section{Lower bounds of Theorem \ref{thmlevel2} and Theorem \ref{thmmomgen}}\label{sec-lb}

We first introduce a class of environment Markov chains for SRWPC and prove an ergodic theorem for these in Section \ref{sec-ergthm}. We then derive the lower bounds for Theorem \ref{thmlevel2} and Theorem \ref{thmmomgen} in Section \ref{subsec-lb}.

\subsection{An ergodic theorem for Markov chains on non-elliptic environments}\label{sec-ergthm}

In this section we need some input from the environment seen from the particle, which, with respect to a suitably changed measure, possesses important ergodic properties.
Recall that, given the transition probabilities $\pi$ from \eqref{pidef}, for $\mathbb{P}_0$-almost every $\omega\in \Omega_0$, the process $(\tau_{X_n}\omega)_{n\geq 0}$ is a Markov chain with transition kernel
$$
(R_{\pi} g)(\omega)= \sum_{e\in\mathcal U_d} \pi_\omega(0,e)\, g(\tau_e \omega),
$$
for every function $g$ on $\Omega_0$ which is measurable and bounded. We need to introduce a class of transition kernels on the space of environments. We denote by $\widetilde \Pi$ the space of functions $\tilde\pi: \Omega_0\times \mathcal U_d\rightarrow [0,1]$ which are measurable in $\Omega_0$, satisfy $\sum_{e\in \mathcal U_d} \tilde\pi(\omega,e)=1$ for almost every $\omega\in \Omega_0$, and for every $\omega\in \Omega_0$ and $e\in \mathcal U_d$,
\begin{equation}\label{pitilde}
\tilde\pi(\omega,e)=0 \,\,\,\mbox{if and only if}\,\,\, \pi_\omega(0,e)=0.
\end{equation}
For every $\tilde\pi\in\widetilde\Pi$ and $\omega\in \Omega_0$, we define the corresponding quenched probability distribution of the Markov chain $(X_n)_{n\geq 0}$ by
\begin{equation}
\begin{aligned}
&P^{\tilde\pi,\omega}_0(X_0=0)=1,\\
&P^{\tilde\pi,\omega}_0(X_{n+1}=x+e\,|\, X_n=x)= \tilde\pi(\tau_x\omega,e).
\end{aligned}
\end{equation}
With respect to every $\tilde\pi\in \widetilde \Pi$ we also have a transition kernel
$$
(R_{\tilde\pi} g)(\omega)= \sum_{e\in\mathcal U_d} \tilde\pi(\omega,e)\, g(\tau_e \omega),
$$
for every measurable and bounded $g$. For every measurable function $\phi\geq 0$ with $\int \phi \,{\rm d} \mathbb{P}_0=1$, we say that the measure $\phi\,{\rm d} \mathbb{P}_0$ is $R_{\tilde\pi}$-invariant, or simply $\tilde\pi$-invariant, if
\begin{equation}\label{invdensity}
\phi(\omega)= \sum_{e\in\mathcal U_d} \tilde\pi\big(\tau_{-e}\omega, e\big)\, \phi\big(\tau_{-e}\omega\big).
\end{equation}
Note that in this case,
\begin{equation}\label{invdensity_g}
\int g(\omega)\phi(\omega)\,{\rm d}\mathbb{P}_0(\omega) = \int (R_{\tilde\pi} g)(\omega)\phi(\omega)\,{\rm d}\mathbb{P}_0(\omega),
\end{equation}
for every bounded and measurable $g$. We denote by $\mathcal E$ the set of such pairs $(\tilde\pi,\phi)$, i.e.,
\begin{equation}\label{ergodicpair}
\mathcal E= \bigg\{ (\tilde\pi, \phi)\colon \, \tilde\pi\in \widetilde\Pi,\ \phi\geq 0,\ \mathbb{E}_0(\phi)=1, \ \phi\,{\rm d} \mathbb{P}_0 \ \mbox{is}\ \tilde\pi\mbox{-invariant}\bigg\}.
\end{equation}
We need an elementary lemma which we will be using frequently. Recall the set ${\mathcal M}_1^\star$ from \eqref{relevantmeasures}.
\begin{lemma}\label{onetoone}
There is a one-to-one correspondence between the sets ${\mathcal M}_1^\star$ and $\mathcal E$.
\end{lemma}
\begin{proof}
Given an arbitrary $(\tilde\pi, \phi)\in \mathcal E$, we take
\begin{equation}\label{themap}
\begin{aligned}
{\rm d} \mu(\omega,e)= \tilde\pi(\omega,e)\, \phi(\omega) \,{\rm d} \mathbb{P}_0 = \tilde\pi(\omega,e) \bigg(\sum_{ \tau_e\omega'=\omega} \tilde\pi(\omega',e)\, \phi(\omega')\bigg) \, {\rm d} \mathbb{P}_0.
\end{aligned}
\end{equation}
By \eqref{marginals}, $\mathbb{P}_0$-almost surely,
$$
\begin{aligned}
{\rm d}(\mu)_1(\omega) =\sum_{e\in \mathcal U_d} {\rm d} \mu(\omega,e) = \sum_{ \tau_e\omega'=\omega} \tilde\pi(\omega',e)\, \phi(\omega') \, {\rm d} \mathbb{P}_0 = \sum_{ \tau_e\omega'=\omega}{\rm d} \mu(\omega',e) ={\rm d} (\mu)_2(\omega).
\end{aligned}
$$
Hence, $(\mu)_1=(\mu)_2\ll \mathbb{P}_0$. Furthermore, if the edge $0\leftrightarrow e$ is present in the configuration $\omega$ (i.e., $\omega(e)=1$), then $\pi_\omega(0,e)>0$, and by our requirement \eqref{pitilde},
$$
\frac{{\rm d}\mu(\omega,e)}{{\rm d}(\mu)_1(\omega)} = \tilde\pi(\omega,e)>0.
$$
Hence $\mu \in \mathcal M_1^\star$. Conversely, given an arbitrary $\mu \in \mathcal M_1^\star$, we can choose $(\tilde\pi, \phi)= \big(\frac {{\rm d} \mu}{{\rm d} (\mu)_1}, \frac {{\rm d} (\mu)_1}{{\rm d} \mathbb{P}_0}\big)$ and readily check that $(\tilde\pi, \phi) \in \mathcal E$.
\end{proof}
We now state and prove the following ergodic theorem for the environment Markov chain under every transition kernel $\tilde\pi\in\widetilde\Pi$. Theorem \ref{ergodicthm} is an extension to the non-elliptic case of a similar statement (see Kozlov \cite{K85}, Papanicolaou-Varadhan \cite{PV81}) that holds for elliptic transition kernels $\widetilde\pi(\cdot,e)$.
\begin{theorem}\label{ergodicthm}
Fix $\tilde\pi\in\widetilde\Pi$. If there exists a probability measure $\mathbb Q \ll \mathbb{P}_0$ which is $\tilde\pi$-invariant, then the following three statements hold:
\begin{itemize}
\item $\mathbb Q\sim \mathbb{P}_0$.
\item $\mathbb Q$ is ergodic for the environment Markov chain with transition kernel $\tilde\pi$.
\item There can be at most one such measure $\mathbb Q$.
\end{itemize}
In particular, since $\mathbb Q\sim\mathbb{P}_0$ and $\mathbb Q$ is ergodic, every $\widetilde\pi$-invariant set of environments has $\mathbb{P}_0$-measure $0$ or $1$.
\end{theorem}
\begin{proof}
We fix $\tilde\pi\in \widetilde\Pi$ and let $\mathbb Q\ll \mathbb{P}_0$ be $\tilde\pi$-invariant. We prove the theorem in three steps.

\noindent {\bf{Step 1:}} We will first show that $\frac {{\rm d} \mathbb Q} {{\rm d} \mathbb{P}_0}>0$ $\mathbb{P}_0$-almost surely. This will imply that $\mathbb Q\sim \mathbb{P}_0$. Indeed, assume to the contrary that $0< \mathbb{P}_0(A) <1$, where $A= \big\{\omega\colon \frac {{\rm d} \mathbb Q} {{\rm d} \mathbb{P}_0}(\omega)>0\big\}$. Then $\mathbb Q\sim \mathbb{P}_0(\cdot| A)$. If we sample $\omega_1 \in \Omega_0$ according to $\mathbb Q$ and $\omega_2$ according to $\tilde\pi(\omega_1,\cdot)$, then the distribution of $\omega_2$ is absolutely continuous with respect to $\mathbb Q$ (recall that $\mathbb Q$ is $\tilde\pi$-invariant) and thus the distribution of $\omega_2$ assigns zero measure to $A^c$. This implies that, for almost every $\omega_1\in A$ and every $e\in \mathcal U_d$ such that $\tilde\pi(\omega_1,e)>0$, we have $\tau_e \omega_1 \in A$. Since $\tilde\pi \in\widetilde \Pi$, for almost every $\omega_1\in A$ and every $e\in \mathcal U_d$ such that $\pi_{\omega_1}(0,e)>0$, we have $\tau_e \omega_1 \in A$.
Now if we sample $\omega_1$ according to $\mathbb{P}_0(\cdot |A)$ and $\omega_2$ according to $\pi_{\omega_1}(0,\cdot)$, then, with probability $1$, $\omega_2\in A$. In other words, $A$ is invariant under $\pi$ (more precisely, $A$ is invariant under the Markov kernel $R_\pi$). Since $\mathbb{P}_0$ is $\pi$-ergodic (see \cite[Proposition 3.5]{BB07}), $\mathbb{P}_0(A)\in \{0,1\}$, and since $\mathbb{P}_0(A)>0$ this forces $\mathbb{P}_0(A)=1$, contradicting our assumption.

\noindent {\bf{Step 2:}} Now we prove that $\mathbb Q$ is ergodic for the environment Markov chain with transition kernel $\tilde\pi$. Assume, on the contrary, that for some measurable $D$ we have $\mathbb Q(D)>0$, $\mathbb Q(D^c)>0$ and $D$ is $\tilde\pi$-invariant. Then $\mathbb{P}_0(D)>0$ and $\mathbb{P}_0(D^c)>0$, since $\mathbb Q \sim \mathbb{P}_0$. Further, the conditional measure $\mathbb Q_D(\cdot)= \mathbb Q(\cdot| D)$ is $\tilde\pi$-invariant and $\mathbb Q_D\ll \mathbb{P}_0$. But $\mathbb Q_D(D^c)=0$, and hence $\frac{{\rm d}\mathbb Q_D}{{\rm d} \mathbb{P}_0}=0$ on $D^c$, a set of positive $\mathbb{P}_0$-measure. This contradicts the first step.

\noindent {\bf{Step 3:}} We finally prove the uniqueness of a measure $\mathbb Q$ which is $\tilde\pi$-invariant and absolutely continuous with respect to $\mathbb{P}_0$. Let $\Omega^{\mathbb{Z}}$ be the space of the trajectories $(\dots,\omega_{-1},\omega_0,\omega_1,\dots)$ of the environment chain, and let $\mu_{\mathbb Q}$ be the measure associated to the transition kernel $\tilde\pi$ whose finite dimensional distributions are given by
$$
\mu_\mathbb Q\big((\omega_{-n},\dots,\omega_n) \in A\big)= \int_A \mathbb Q({\rm d} \omega_{-n}) \prod_{j=-n}^{n-1}\tilde\pi\big(\omega_j, {\rm d} \omega_{j+1}\big),
$$
for every finite dimensional cylinder set $A$ in $\Omega^\mathbb{Z}$. Let $T: \Omega^\mathbb{Z} \longrightarrow \Omega^\mathbb{Z}$ be the shift given by $(T\omega)_n= \omega_{n+1}$ for all $n\in \mathbb{Z}$. Since $\mathbb Q$ is $\tilde\pi$-invariant and ergodic, by Birkhoff's theorem,
$$
\lim_{n\to\infty} \frac{1}{n} \sum_{k=0}^{n-1} g \circ T^k = \int g \,{\rm d} \mu_\mathbb Q,
$$
$\mu_\mathbb Q$-almost surely (and hence $\mu_{\mathbb{P}_0}$-almost surely) for every bounded and measurable $g$ on $\Omega^\mathbb{Z}$. Since the environment chain $(\tau_{X_k}\omega)_{k\geq 0}$ has the same law w.r.t.\ $\int P^{\tilde\pi,\omega}_0 \,{\rm d} \mathbb Q$ as $(\omega_0,\omega_1,\dots)$ has w.r.t.\ $\mu_\mathbb Q$, if $f(\omega_0)= g(\omega_0,\omega_1,\dots)$, then
$$
\lim_{n\to\infty} \frac 1 n\sum_{k=0}^{n-1} f \circ \tau_{X_k}= \lim_{n\to\infty} \frac 1 n\sum_{k=0}^{n-1} g \circ T^k = \int g \,{\rm d} \mu_\mathbb Q= \int f \,{\rm d} \mathbb Q,
$$
for every bounded and measurable $f$ on $\Omega$. Since the left hand side does not depend on $\mathbb Q$, any two such measures assign the same integral to every bounded and measurable $f$, and the uniqueness of $\mathbb Q$ follows.
\end{proof}
\begin{cor}\label{ergcor}
For every pair $(\tilde\pi,\phi)\in \mathcal E$ and every continuous and bounded function $f: \Omega_0 \times \mathcal U_d \rightarrow \mathbb{R}$,
$$
\lim_{n\to\infty} \frac 1n \sum_{k=0}^{n-1} f(\tau_{X_k}\omega, X_{k+1}-X_k)= \int_{\Omega_0} \, {\rm d}\mathbb{P}_0\, \phi(\omega) \sum_e f(\omega,e)\, \tilde\pi(\omega,e), \ \ \mbox{$\mathbb{P}_0 \times P^{\tilde\pi, \omega}_{0}$-a.s.}
$$
\end{cor}
\begin{proof}
This is an immediate consequence of Theorem \ref{ergodicthm} and Birkhoff's ergodic theorem.
\end{proof}

\subsection{Proof of lower bounds}\label{subsec-lb}

We now prove the required lower bound \eqref{ldplb}. Its proof follows a standard change of measure argument, given Theorem \ref{ergodicthm}; although the argument is very similar to that of Yilmaz (\cite{Y08}), we present this short proof for the convenience of the reader and to keep the article self-contained. Recall the definition of $\mathfrak I$ from \eqref{Idef}.
\begin{lemma}[The lower bound]\label{lemmalb}
For every open set $\mathcal G$ in ${\mathcal M}_1(\Omega_0 \times \mathcal U_d)$, $\mathbb{P}_0$-almost surely,
\begin{equation}\label{eqlemmalb}
\begin{aligned}
\liminf_{n\to\infty} \frac 1n \log P^{\pi,\omega}_0 \big(\mathfrak L_n \in \mathcal G\big) &\geq - \inf_{\mu\in \mathcal G} \mathfrak I(\mu) \\
&=- \inf_{\mu\in \mathcal G} \mathfrak I^{\star\star}(\mu).
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
For the lower bound in \eqref{eqlemmalb}, it is enough to show that, for every $\mu\in {\mathcal M}_1^\star$ and every open neighborhood $\mathcal U$ containing $\mu$,
\begin{equation}\label{lb0}
\liminf_{n\to\infty} \frac 1n \log P^{\pi,\omega}_0\big(\mathfrak L_n \in \mathcal U\big) \geq - \mathfrak I(\mu).
\end{equation}
Given $\mu\in{\mathcal M}_1^\star$, from Lemma \ref{onetoone} we can get the pair
\begin{equation}\label{map}
(\tilde\pi, \phi)= \bigg(\frac {{\rm d} \mu}{{\rm d} (\mu)_1}, \frac {{\rm d} (\mu)_1}{{\rm d} \mathbb{P}_0}\bigg) \in \mathcal E,
\end{equation}
and by Theorem \ref{ergodicthm},
\begin{equation}\label{lb1}
\lim_{n\to\infty} P^{\tilde\pi,\omega}_0\big(\mathfrak L_n \in \mathcal U\big) =1.
\end{equation}
Further,
$$
\begin{aligned}
P^{\pi,\omega}_0\big(\mathfrak L_n \in \mathcal U\big) &= E^{\tilde\pi,\omega}_0 \bigg\{\1_{\{\mathfrak L_n \in \mathcal U\}} \frac{{\rm d} P_0^{\pi,\omega}}{{\rm d} P_0^{\tilde\pi,\omega}}\bigg\} \\
&=\int {\rm d} P^{\tilde\pi,\omega}_0 \bigg\{\1_{\{\mathfrak L_n \in \mathcal U\}} \exp\bigg\{-\log\,\frac{{\rm d} P_0^{\tilde\pi,\omega}}{{\rm d} P_0^{\pi,\omega}}\bigg\}\bigg\}.
\end{aligned}
$$
Hence, by Jensen's inequality,
$$
\begin{aligned}
\liminf_{n\to\infty}\frac 1n \log P^{\pi,\omega}_0\big(\mathfrak L_n \in \mathcal U\big) &\geq \liminf_{n\to\infty}\frac 1n \log P^{\tilde\pi,\omega}_0\big(\mathfrak L_n \in \mathcal U\big) \\
&\qquad-\limsup_{n\to\infty}\frac 1{nP^{\tilde\pi,\omega}_0\big(\mathfrak L_n \in \mathcal U\big)} \int_{\{\mathfrak L_n \in \mathcal U\}} {\rm d} P^{\tilde\pi,\omega}_0 \bigg\{\log\,\frac{{\rm d} P_0^{\tilde\pi,\omega}}{{\rm d} P_0^{\pi,\omega}}\bigg\}\\
&= -\int {\rm d}\mathbb{P}_0(\omega)\,\phi(\omega) \sum_{|e|=1} \tilde\pi(\omega,e) \log \frac{\tilde\pi(\omega,e)}{\pi_\omega(0,e)}\\
&=- \mathfrak I(\mu),
\end{aligned}
$$
where the first equality follows from \eqref{lb1} and Corollary \ref{ergcor}, and the second equality follows from \eqref{map}. This proves \eqref{lb0}. Finally, since $\mathcal G$ is open, $\inf_{\mu\in \mathcal G} \mathfrak I(\mu)= \inf_{\mu\in \mathcal G} \mathfrak I^{\star\star}(\mu)$ (see \cite{R70}). This proves the equality in \eqref{eqlemmalb} and the lemma.
\end{proof}
We now prove the lower bound for the limiting logarithmic moment generating function required for Theorem \ref{thmmomgen}.
\begin{cor}\label{corlb}
For every continuous and bounded function $f: \Omega_0 \times \mathcal U_d\longrightarrow \mathbb{R}$ and for $\mathbb{P}_0$-almost every $\omega\in \Omega_0$,
\begin{equation}\label{lb}
\begin{aligned}
\liminf_{n\to\infty} \frac 1n\log E^{\pi,\omega}_0 \bigg\{ \exp \bigg(\sum_{k=0}^{n-1} f\big(\tau_{X_k}\omega, X_{k+1}-X_k\big)\bigg)\bigg\} &\geq \sup_{\mu\in {\mathcal M}_{1}^{\star}} \big\{\langle f,\mu\rangle- \mathfrak I(\mu)\big\}\\
&=\sup_{\mu\in {\mathcal M}_{1}(\Omega_0 \times \mathcal U_d)} \big\{\langle f,\mu\rangle- \mathfrak I(\mu)\big\}.
\end{aligned}
\end{equation}
\end{cor}
\begin{proof}
This follows immediately from Varadhan's lemma and Lemma \ref{lemmalb}.
\end{proof}
We denote the variational formula in Corollary \ref{corlb} by
\begin{equation}\label{Lf}
\begin{aligned}
\overline H(f)&=\sup_{\mu\in {\mathcal M}_1^\star} \big\{\langle f, \mu\rangle- \mathfrak I(\mu)\big\}\\
&= \sup_{(\tilde\pi,\phi)\in \mathcal E} \bigg\{\int {\rm d}\mathbb{P}_0(\omega)\,\phi(\omega) \sum_{|e|=1} \tilde\pi(\omega,e) \bigg\{f(\omega,e) - \log \frac{\tilde\pi(\omega,e)}{\pi_\omega(0,e)}\bigg\}\bigg\},
\end{aligned}
\end{equation}
and recall from Lemma \ref{onetoone} the one-to-one correspondence between elements of the set ${\mathcal M}_1^\star$ and the pairs in $\mathcal E$ (see \eqref{map}, \eqref{Idef}). For the variational analysis that follows in Section \ref{sec-proof-ldp}, it is convenient to write down a more tractable representation of the above variational formula. This is based on the following observation, which was already made by Kosygina-Rezakhanlou-Varadhan (\cite{KRV06}) and used by Yilmaz (\cite{Y08}) and Rosenbluth (\cite{R06}). Recall that by \eqref{invdensity_g}, if $(\tilde\pi,\phi)\in \mathcal E$, then for every bounded and measurable function $g$ on $\Omega_0$,
\begin{equation}\label{thm-lbub1}
\sum_e \int \phi(\omega)\, \tilde\pi(\omega,e) \big(g(\omega)- g(\tau_e\omega)\big)\,{\rm d}\mathbb{P}_0(\omega) = 0.
\end{equation}
On the other hand, if $(\tilde\pi,\phi)\notin \mathcal E$, then for some bounded and measurable function $g$ on $\Omega_0$, the above integral on the left hand side is non-zero. By taking constant multiples of such a function $g$, we see that
$$
\inf_{g}\, \int \phi(\omega) \sum_{e\in\mathcal U_d}\tilde\pi(\omega,e) \big(g(\omega)-g(\tau_e\omega)\big)\,{\rm d}\mathbb{P}_0(\omega)=
\begin{cases}
0 &\mbox{if } (\tilde\pi,\phi)\in \mathcal E,\\
-\infty &\mbox{else,}
\end{cases}
$$
with the infimum being taken over every bounded and measurable function $g$.
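Spelled out, the observation behind the last display is elementary: if $(\tilde\pi,\phi)\notin\mathcal E$, pick a bounded measurable $g_0$ with
\begin{equation*}
c:=\int \phi(\omega) \sum_{e\in\mathcal U_d}\tilde\pi(\omega,e)\big(g_0(\omega)-g_0(\tau_e\omega)\big)\,{\rm d}\mathbb{P}_0(\omega)\neq 0;
\end{equation*}
taking $g=\lambda g_0$, $\lambda\in\mathbb{R}$, produces the value $\lambda c$, whose infimum over $\lambda$ is $-\infty$. If instead $(\tilde\pi,\phi)\in\mathcal E$, the integral vanishes for every bounded measurable $g$ by \eqref{thm-lbub1}, so the infimum equals $0$.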
Hence, we can rewrite \eqref{Lf} as
\begin{equation}\label{thm-lbub2a}
\begin{aligned}
\overline H(f) &=\sup_{\phi}\, \sup_{\tilde\pi\in\widetilde\Pi}\,\inf_{g} \bigg[\int {\rm d}\mathbb{P}_0(\omega)\,\phi(\omega) \bigg\{ \sum_{e\in \mathcal U_d} \tilde\pi(\omega,e) \bigg(f(\omega,e) -\log\frac{\tilde\pi(\omega,e)}{\pi_\omega(0,e)} +g(\omega)-g(\tau_e\omega)\bigg)\bigg\}\bigg].
\end{aligned}
\end{equation}

\section{Upper bound for the proof of Theorem \ref{thmmomgen}}\label{sec-classG}

We will now introduce the class of relevant gradient functions in Section \ref{subsec-classG}, derive an important property of these functions in Section \ref{subsec-sublinear} and prove the desired upper bound of Theorem \ref{thmmomgen} in Section \ref{subsec-ub}.

\subsection{The class $\mathcal G_\infty$ of gradients and the corresponding correctors}\label{subsec-classG}

We introduce a class of functions which will play an important role for the large deviation analysis to follow. However, before introducing this class we need the notion of the {\em induced shift} on $\Omega_0$. Recall \eqref{def-n}. Then the {\em induced shift} is defined as
\begin{equation}\label{def-sigmae}
\sigma_e(\omega)=\tau_{k(\omega,e)e}\,\omega.
\end{equation}
It is well-known that, for every $e\in\mathcal U_d$, $\sigma_e\colon \Omega_0\rightarrow \Omega_0$ is $\mathbb{P}_0$-measure preserving and ergodic (\cite[Theorem 3.2]{BB07}). Furthermore, for every $k\in \mathbb{N}$, we inductively set
\begin{equation}\label{def-nk}
n_1(\omega,e)=k(\omega,e), \qquad n_{k+1}(\omega,e)=k(\omega,e)+n_k(\sigma_e\omega,e).
\end{equation}
Now we turn to the definition of $\mathcal G_\infty$. We say that a function $G: \Omega_0 \times \mathcal U_d \longrightarrow \mathbb{R}$ is in the class $\mathcal G_\infty$ if it satisfies the conditions \eqref{unifbound}, \eqref{closedloop} and \eqref{meanzero} listed below:
\begin{itemize}
\item {\bf{Uniform boundedness.}} For every $e\in \mathcal U_d$,
\begin{equation}\label{unifbound}
\mathrm{ess\,sup}_{\mathbb{P}_0}\, G(\cdot,e) \le A< \infty.
\end{equation}
\item {\bf{Closed loop on the cluster.}} Let $(x_0,\dots,x_n)$ be a closed loop on the infinite cluster $\mathcal C_\infty$ (i.e., $x_0,x_1,\dots, x_n$ is a nearest neighbor occupied path so that $x_0=x_n$). Then,
\begin{equation}\label{closedloop}
\sum_{j=0}^{n-1} G(\tau _{x_j} \omega, x_{j+1}-x_j) =0 \quad \mathbb{P}_0\mbox{-almost surely}.
\end{equation}
The closed loop condition has two important consequences. First, along every nearest neighbor occupied path $(x_0, x_1, \dots, x_m)$ so that $x_0=0$ and $x_m=x$ on $\mathcal C_\infty$, for every $G(\cdot,\cdot)$ that satisfies \eqref{closedloop}, we can define the {\it{corrector}} corresponding to $G$ as
\begin{equation}\label{Psidef}
V(\omega,x)= V_G(\omega,x)= \sum_{j=0}^{m-1} G(\tau _{x_j} \omega, x_{j+1}-x_j).
\end{equation}
By \eqref{closedloop}, this definition is clearly independent of the chosen path for almost every $\omega\in \{x\in \mathcal C_\infty\}$.
Also note that, for every $G$ that satisfies \eqref{closedloop}, $V=V_G$ satisfies the following {\it{shift covariance}} condition: for $\mathbb{P}_0$-almost every $\omega\in \Omega_0$ and all $x, y\in \mathcal C_\infty$,
\begin{equation}\label{def-shiftcov}
V(\omega,x)- V(\omega,y)= V(\tau_y\omega, x-y).
\end{equation}
\item {\bf{Zero induced mean:}} Recall the definition of $k(\omega,e)$ from \eqref{def-n} and write
\begin{equation}\label{v_e_def}
v_e= k(\omega,e) \, e
\end{equation}
for every $\omega\in\Omega$ and $e\in \mathcal U_d$. Let $\big\{0=x_0,x_1,\ldots,x_k=k(\omega,e)\,e\big\}$ be an $\omega$-open path from $0$ to $k(\omega,e)\,e$. For every $G(\cdot,\cdot)$ that satisfies \eqref{closedloop}, we again write
\begin{equation}\label{V_ve}
V(\omega,v_e)=V_G(\omega, v_e)= \sum_{i=0}^{k-1} G(\tau_{x_i}\omega,x_{i+1}-x_i).
\end{equation}
Again, the choice of the path does not influence $V(\omega,v_e)$. We then say that $V=V_G$ satisfies the {\it{induced mean-zero property}} if for every $e\in\mathcal U_d$,
\begin{equation}\label{meanzero}
\mathbb{E}_0\big[V(\cdot,v_e)\big]=0.
\end{equation}
\end{itemize}

\subsection{Sub-linear growth of the correctors at infinity}\label{subsec-sublinear}

This section is devoted to the proof of the following important property of functions in the class $\mathcal G_\infty$.
\begin{theorem}[Sub-linear growth at infinity on the cluster]\label{sublinearthm}
For every $G\in \mathcal G_\infty$, $V=V_G$ has at most sub-linear growth at infinity on the infinite cluster, $\mathbb{P}_0$-almost surely. In other words,
$$
\lim_{n\to\infty} \max_{\heap{x\in \mathcal C_\infty}{|x|\leq n}} \frac {|V(\omega,x)|}n =0.
$$
\end{theorem}
Before we present the proof of Theorem \ref{sublinearthm}, which is carried out at the end of this section, we need some important estimates related to the geometry of the infinite percolation cluster $\mathcal C_\infty$, presented in the following two lemmas. Lemma \ref{chemdist} gives a precise bound on the shortest distance of two points in the infinite cluster (the {\it{chemical distance}}), and Lemma \ref{lemma-ell} gives an exponential tail bound on the graph distance between the origin and the first arrival $v_e=k(\omega,e)e$ (recall \eqref{v_e_def}) of the cluster on the positive part of each of the coordinate directions. Both lemmas are well-known in the literature covering i.i.d.\ Bernoulli percolation and the site percolation models discussed in Section \ref{sec-intro-site}. For the random-cluster model, the contents of these two results are new, to the best of our knowledge. Apart from the proof of Theorem \ref{sublinearthm}, both lemmas will be helpful in carrying out our variational analysis in Section \ref{sec-proof-ldp} (see the proof of Lemma \ref{lemma-last-last}). We first turn to the following estimate on the {\it{chemical distance}} ${\rm d}_{\mathrm{ch}}(x,y)={\rm d}_{\mathrm{ch}}(\omega;\,x,y)$ of two points $x,y \in \mathcal C_\infty$, which is defined to be the minimal length of an $\omega$-open path connecting $x$ and $y$ in the configuration $\omega\in \Omega_0$. The following result, originally proved by Antal and Pisztora (\cite{AP96}) for supercritical i.i.d.\ Bernoulli percolation, asserts that the chemical distance of two points in the cluster is comparable to their Euclidean distance.
\begin{lemma}\label{chemdist}
Assumption 3 holds for the percolation models introduced in Section \ref{sec-intro-bond} and Section \ref{sec-intro-site}. In particular, fix $\delta>0$. Then there exists a constant $\rho=\rho(p,d)$ such that, $\mathbb{P}_0$-almost surely, for every $n$ large enough and all points $x,y\in \mathcal C_\infty$ with $|x|<n$, $|y|< n$ and $\delta n/2\leq|x-y|< \delta n$, we have ${\rm d}_{\mathrm{ch}}(x,y) < \rho \delta n$.
\end{lemma}
\begin{proof}
For the i.i.d.\ Bernoulli bond and site percolation models (recall Section \ref{sec-intro-bond}), the statement of this lemma follows from the classical estimate of Antal-Pisztora (\cite[Theorem 1.1]{AP96}). We now prove the lemma for the supercritical random-cluster model. Recall that we assume that $p > \widehat{p}_{c}(q)$. For every $r\geq 0$, we define a box $B_{0}(r) := [-r, r]^{d}$ and set, for any $z \in\mathbb{Z}^d$, $N\in\mathbb{N}$,
$$
B_{z}(N) = \tau_{(2N+1)z} B_{0}(5N/4).
$$
Here $\tau_{z}$ is the translation on $\mathbb{Z}^{d}$ defined by $\tau_{z}(x) = z + x$. We define $R_{z}^{(N)}$ to be the event in $\{0,1\}^{\mathbb B_d}$ that the following three conditions hold:
\begin{itemize}
\item There exists a {\it{unique crossing open cluster}} for $B_{z}(N)$. In other words, there is a connected subset $\mathcal{C}$ of an open cluster such that it is contained in $B_{z}(N)$ and, for each of the $d$ directions, there is a path in $\mathcal{C}$ connecting the left face and the right face of $B_{z}(N)$.
\item The cluster in the above requirement intersects all boxes with diameter larger than $N/10$.
\item All open clusters with diameter larger than $N/10$ are connected in $B_{z}(N)$.
\end{itemize}
Recall the measures $\mathbb{P}_{\Lambda, p,q}^{\ssup \xi}$ and $\mathbb{P}_{p,q}^{\ssup b}$ corresponding to the random-cluster model. Then, with the map
$$
\phi_{N} : \{0,1\}^{\mathbb B_d} \to \{0,1\}^{\mathbb{Z}^d}, \qquad (\phi_{N}\omega)_{z} = \1_{R_{z}^{(N)}}(\omega)\quad\forall\, z \in \mathbb{Z}^{d},
$$
we let
$$
\mathbb{P}_{p,q,N}^{\ssup b}= \mathbb{P}_{p,q}^{\ssup b} \, \circ \, \phi_N^{-1}
$$
be the image measure of $\mathbb{P}_{p,q}^{\ssup b}$. By \cite[Theorem 3.1]{P96} for $d \ge 3$ and \cite[Theorem 9]{CM04} for $d = 2$, we see that there exist constants $c_{1}', c_{2}' > 0$ (depending only on $d$, $p$ and $q$) such that for every $N \ge 1$ and $z \in \mathbb{Z}^{d}$,
$$
\sup_{\xi \in \Omega}\, \mathbb{P}_{B_{z}(N), p, q}^{\ssup \xi}\, \bigg[ \big(R_{z}^{(N)}\big)^{c}\bigg] \le c_{1}'\, {\rm e}^{-c_{2}' N}.
$$
Let $Y_{z} : \{0,1\}^{\mathbb{Z}^d} \to \{0,1\}$ be the projection mapping to the coordinate $z \in \mathbb{Z}^{d}$. By using the DLR property of the random-cluster model (\cite[Section 4.4]{G06}), for both boundary conditions $b$,
$$
\lim_{N \to \infty} \sup_{z \in \mathbb{Z}^{d}} \mathrm{ess\,sup}\, \, \mathbb{P}_{p,q,N}^{\ssup b}\, \bigg[Y_{z} = 0 \,\bigg |\, \sigma\bigg(Y_{x} : |x-z|_{\infty} \ge 2\bigg)\bigg] = 0.
$$
By using \cite[Theorem 1.3]{LSS97}, we see that there exists a function $\overline{p}(\cdot): \mathbb{N}\to [0,1)$ such that $\overline{p}(N) \to 1$ as $N \to \infty$ and the Bernoulli product measure
$$
\mathbb{P}^{\star}_{\overline{p}(N)}=\big(\overline p(N)\,\delta_1\,+\,\big(1-\overline p(N)\big)\,\delta_0\big)^{\mathbb{Z}^d}
$$
on $\{0,1\}^{\mathbb{Z}^d}$ with parameter $\overline p(N)$ is stochastically dominated by $\mathbb{P}_{p, q, N}^{\ssup b}$ for each $N$, i.e., for every increasing event $A$,
$$
\mathbb{P}^{\star}_{\overline{p}(N)}(A) \leq \mathbb{P}_{p, q, N}^{\ssup b}(A).
$$
Given the above estimate, we can now repeat the arguments of \cite[p.\ 1047]{AP96} to conclude that
\begin{equation}\label{est-chemdist}
\mathbb{P}_0\bigg\{{\rm d}_{\mathrm{ch}}(x,y)> \rho |x-y|, \ x, y \in \mathcal C_\infty\bigg\} \leq {\rm e}^{-c|x-y|},
\end{equation}
for some suitably chosen $\rho, c>0$. The Borel-Cantelli lemma and our assumption that $|x-y|\geq \delta n/2$ now conclude the proof of Lemma \ref{chemdist} for the random-cluster model. For the site percolation models (i.e., random interlacements, its vacant set and the level sets of the Gaussian free field) introduced in Section \ref{sec-intro-site}, Lemma \ref{chemdist} follows from the estimate
$$
\mathbb{P}_0\bigg\{{\rm d}_{\mathrm{ch}}(x, y) \geq \rho\, |x-y|, \,x, y \in \mathcal C_\infty\bigg\}\,\leq c_{1} \,{\rm e}^{ - c_1 \,(\log |x-y|)^{1+c_{2}}},
$$
for constants $c_1,c_2>0$ and all $x,y\in\mathbb{Z}^d$. This statement and its proof can be found in \cite[Theorem 1.3]{DRS14}. The above estimate and the assumption $|x-y| \ge \delta n / 2$ conclude the proof of Lemma \ref{chemdist}.
\end{proof}
For every $e\in\mathcal U_d$, we recall that $v_e=k(\omega,e)e$; recall \eqref{def-n} and \eqref{v_e_def}. Let $\ell=\ell(\omega)$ denote the shortest path distance from $0$ to $v_e$. Then we have the following tail estimate on $\ell$:
\begin{lemma}\label{lemma-ell}
Assumption 4 holds for the percolation models introduced in Section \ref{sec-intro-bond} and Section \ref{sec-intro-site}. In particular, for some constants $c_1,c_2>0$,
$$
\mathbb{P}_0\big\{\ell>n\big\}\leq c_1\,{\rm e}^{-c_2 n}.
$$
\end{lemma}
\begin{proof}
Lemma \ref{lemma-ell} follows from \cite[Lemma 4.3]{BB07} for i.i.d.\ Bernoulli bond and site percolation, and from \cite[Section 5]{PRS15} for the site percolation models appearing in Section \ref{sec-intro-site}. We turn to the requisite estimate corresponding to the random-cluster model defined in Section \ref{sec-intro-bond}. Let us first handle the case $d \ge 3$: recall the definition of the slab-critical probability $\widehat p_c(q)$ from \eqref{slab-d3} and recall that we assume $p>\widehat p_c(q)$. Then we can take a large number $L$ so that $p > \widehat p_c(q, L)$ and $[0, L-1]\times\mathbb{Z}^{d-1}$ contains an infinite cluster, which is a subset of the unique infinite cluster $\mathcal C_\infty$. For every $e\in \mathcal U_d$, we recall the definition of $k(\omega,e)$ from \eqref{def-n} and that we write $v_e=k(\omega,e)e$. Also, by symmetry of the random-cluster measure, we can assume $e = e_1$ without loss of generality. Then,
\begin{equation}\label{lemma-ell-1}
\bigg\{|v_e|\geq L n;\, 0\in\mathcal C_\infty\bigg\} \subset \bigcap_{i=1}^n\, \tau_{i Le}(A_L^c),
\end{equation}
where
$$
A_L := \bigcup_{j = 1}^{L} \bigg\{0 \leftrightarrow je \textup{ in } [0,L-1] \times \mathbb{Z}^{d-1} \bigg\}.
$$
We also define, for $m \ge 1$,
$$
A_{L, m} := \bigcup_{j = 1}^{L} \{0 \leftrightarrow je \textup{ in } S(L,m) \}.
$$
Then $\{A_L\}_{L>0}$ and $\{A_{L,m}\}_{m\geq 1}$ are increasing events. Hence, by the DLR property of the random-cluster measure with the free boundary condition (\cite[Definition 4.29]{G06}) and the extremality of the random-cluster measure with the free boundary condition (\cite[Lemma 4.14]{G06}),
$$
\mathbb{P}_{p,q}^{\ssup 0}\left(\bigcap_{i=1}^n\, \tau_{i Le}(A_{L,m}^c)\right) \le \left(1-\mathbb{P}_{S(L,m), p,q}^{\ssup 0}(A_{L,m})\right)^n.
$$
Since $p> \widehat p_c(q, L)$,
$$
\liminf_{m \to \infty} \mathbb{P}_{S(L,m), p,q}^{\ssup 0}(A_{L,m}) > 0.
$$
Hence for some $0 < a(L) < 1$,
\begin{equation}\label{lemma-ell-2}
\mathbb{P}_{p,q}^{\ssup 0}\left(\bigcap_{i=1}^n\, \tau_{i Le}(A_{L}^c)\right) = \lim_{m \to \infty} \mathbb{P}_{p,q}^{\ssup 0}\left(\bigcap_{i=1}^n\, \tau_{i Le}(A_{L,m}^c)\right) \le a(L)^n.
\end{equation}
Then \eqref{lemma-ell-1} and the above estimate imply that, for $d\geq 3$, $\mathbb{P}_0\big\{|v_e|\geq n\big\}$ decays exponentially in $n$.

To prove this statement in $d=2$, we again recall the definition of the slab-critical probability $\widehat p_c(q)$ and note that $p>\widehat p_c(q)$. In this regime, we have exponential decay of the truncated connectivity (see \cite[Theorem 5.108 and the following paragraph]{G06}). In other words, if $\mathcal C$ denotes the open cluster at the origin, then
\begin{equation}\label{lemma-ell-4}
\lim_{n\to\infty}\frac 1n \log \mathbb{P}_{p,q}^{\ssup 0}\bigg\{\big|\mathcal C\big| \geq n^2;\,\big|\mathcal C\big|<\infty\bigg\}<0.
\end{equation}
In this super-critical regime $p>p_c(q)$, we also have exponential decay of dual connectivity (see \cite[Theorems 1 and 2]{BD12}). In other words, in the dual random-cluster model in $d=2$, the probability that two points $x$ and $y$ are connected by a path decays exponentially fast with respect to the distance between $x$ and $y$. We remark that in this case the infinite volume limits of the random-cluster measures with free or wired boundaries are identical if $p > p_c (q)$; see \cite[Theorem 4.63, (4.36) and Theorem 6.17]{G06}. Furthermore, in this case, \cite{BD12} shows $\widehat{p}_c(q) = p_c(q)$ for every $q \ge 1$. To show that $\mathbb{P}_0\big\{|v_e|\geq n\big\}$ decays exponentially in $n$ in $d=2$, we now let $B_n$ be the box $\{1,\dots,n\}\times\{1,\dots, n\}$. Then on the event $\{|v_e|\geq n;\, 0\in\mathcal C_\infty\}$, none of the boundary sites $\{je: j = 1,\dots,n\}$ is in $\mathcal C_\infty$. Hence, either at least one of these sites is in a finite component of size larger than $n$, or there exists a dual crossing of $B_n$ in the direction of $e$. The probabilities of both these events are exponentially small in $n$, by \eqref{lemma-ell-4} and the exponential decay of dual connectivity. Hence $\mathbb{P}_0\big\{|v_e|\geq n\big\}$ decays exponentially in $n$ for $d\geq 2$.

To conclude the proof of Lemma \ref{lemma-ell}, we note that for every $\varepsilon>0$,
\begin{equation}\label{lemaa-ell-3}
\big\{\ell>n\big\} \subset \bigcup_{j=1}^{\lceil\varepsilon n\rceil} \bigg\{{\rm d}_{\mathrm{ch}}(0,je)>n;\,\,0,je\in \mathcal C_\infty\bigg\}\,\, \bigcup \,\,\bigg\{|v_e| \geq \lceil\varepsilon n\rceil\bigg\}.
\end{equation}
Since the $\mathbb{P}_0$-probabilities of the events in the first union are exponentially small by the uniform estimate \eqref{est-chemdist} on the chemical distance ${\rm d}_{\mathrm{ch}}(0, je)$, and $\mathbb{P}_0\big\{|v_e|\geq n\big\}$ also decays exponentially in $n$, we now invoke the union bound, absorb the linear factor coming from the number of events into the exponential bound, and end up with the proof of Lemma \ref{lemma-ell}.
\end{proof}
\begin{lemma}\label{lemma-FKG}
The FKG inequality (i.e., Assumption 5) holds for the percolation models introduced in Section \ref{sec-intro-bond} and Section \ref{sec-intro-site}.
\end{lemma}
\begin{proof}
For the proof of the FKG inequality we refer to \cite[Theorem 4.17]{G06} for the random-cluster model, to \cite{T09} for random interlacements and their vacant set, and to \cite[Remark 1.4]{R15} for the level sets of the Gaussian free field.
\end{proof}
Before we turn to the proof of Theorem \ref{sublinearthm}, we will need another technical fact. Note that, by Birkhoff's ergodic theorem,
$$
\lim_{n\to\infty}\frac 1 {(2n+1)^d} \sum_{|x|\leq n} \, \1\big\{x\in \mathcal C_\infty\big\}=\theta(p) \qquad\mathbb{P}_0\mbox{-a.s.},
$$
where $\theta(p)=\mathbb{P}(0\leftrightarrow\infty)$ is the percolation density. We will need a stronger version of the above result, and its proof will use the one-dimensional pointwise ergodic theorem and an induction argument on the dimension. We will prove this result for every discrete point process (i.e., a shift-invariant ergodic random subset of $\mathbb Z^d$). For our case, we will take the infinite cluster $\mathcal C_\infty$ to be the point process.
\begin{lemma}\label{lemma-ergodic}
Let $\mathcal P$ be a discrete point process in $d$ dimensions, and let $C^{\ssup d}=[a_1,b_1]\times\cdots\times[a_d,b_d]$ be a cube in $\mathbb R^d$. Then for almost every $\omega$,
$$
\lim_{n\to\infty}\frac{|\mathcal P\cap n\,C^{\ssup d}|} {|n\,C^{\ssup d}|} = \Theta,
$$
where $\Theta=\mathbb{P}(0\in\mathcal P)$ is the density of $\mathcal P$.
\end{lemma}
\begin{proof}
We will prove the lemma by induction on the dimension $d$. For $d=1$, the lemma follows directly from the pointwise ergodic theorem, when we subtract the ergodic sum up to $a_1 n$ from the one up to $b_1 n$. We now assume that the statement holds for dimension $d-1$. We fix $\varepsilon>0$ and $K\in\mathbb{N}$, and say that $n\in\mathbb Z$ is good if for every $k>K$,
$$
\left| \frac{|\mathcal P\cap \{n\}\times k([a_2,b_2]\times\cdots\times[a_d,b_d])|} {|k([a_2,b_2]\times\cdots\times[a_d,b_d])|} - \Theta \right| <\varepsilon.
$$
Note that if $K$ is large enough, then by the induction hypothesis the probability that $0$ is good is greater than $1-\varepsilon$. So, by the one-dimensional result, almost surely, for all $n$ large enough, a proportion larger than $1-\varepsilon$ of the numbers in $[a_1n,b_1n]$ are good, and the statement of the lemma follows.
\end{proof}
We now turn to the proof of Theorem \ref{sublinearthm}.

\noindent {\bf{Proof of Theorem \ref{sublinearthm}:}} Let us fix $G\in \mathcal G_\infty$ and, for every nearest neighbor occupied path $0=x_0,\dots,x_n=x$ in $\mathcal C_\infty$, let $V(\omega,x)=V_G(\omega,x)=\sum_{j=0}^{n-1} G(\tau_{x_j}\omega,x_{j+1}-x_j)$ as defined in \eqref{Psidef}.
Recall that we have to show
\begin{equation}\label{eq0}
\lim_{n\to\infty} \max_{\heap{x\in\mathcal C_\infty}{|x|\leq n}} \,\frac{|V(\omega,x)|}n =0\qquad\mathbb{P}_0\mbox{-a.s.}
\end{equation}
Let us first make an observation based on the facts proved in Lemma \ref{chemdist} and Lemma \ref{lemma-ell}. Indeed, with $V(\omega,v_e)$ defined in \eqref{V_ve}, Lemma \ref{lemma-ell} and our uniform boundedness assumption \eqref{unifbound} imply that $\mathbb{E}_0 [|V(\omega,v_e)|] <\infty$. Furthermore, $\mathbb{E}_0[V(\omega, v_e)]=0$ by our induced mean-zero assumption \eqref{meanzero}. If we now write $F(\omega)= V(\omega,v_e)$ and recall the definition of $n_k(\omega,e)$ from \eqref{def-nk}, then, by the shift covariance \eqref{def-shiftcov}, $V(\omega, n_k(\omega,e) \, e)= \sum_{j=0}^{k-1} F \, \circ \,\sigma^j_e(\omega)$. Since the induced shift $\sigma_e:\Omega_0\rightarrow\Omega_0$ is measure-preserving and ergodic, by Birkhoff's ergodic theorem,
\begin{equation}\label{eq00}
\lim_{k\to\infty} \frac 1 k V(\omega, n_k(\omega,e) \, e)=0 \qquad\mathbb{P}_0\mbox{-a.s.}
\end{equation}
We now fix an arbitrary $\varepsilon>0$. We claim that
\begin{equation}\label{eq01}
\lim_{n\to\infty} \frac 1{n^d} \sum_{\heap {x\in \mathcal C_\infty}{|x|\leq n}} \, \1_{\big\{|V(\omega,x)|>\varepsilon n\big\}} =0 \qquad\mathbb{P}_0\mbox{-a.s.}
\end{equation}
Indeed, \eqref{eq00} forms the core of the argument for the proof of \eqref{eq01}: given \eqref{eq00}, the proof of the claim \eqref{eq01} for all the percolation models (including those with long-range correlations) introduced in Section \ref{sec-intro-bond} and Section \ref{sec-intro-site} now closely follows the proof of \cite[Theorem 5.4]{BB07}, deduced there for i.i.d.\ Bernoulli percolation. In fact, the crucial fact \cite[(5.28)]{BB07} can be proved using the FKG inequality. Recall that the FKG inequality asserts that for two increasing events $A$ and $B$ (i.e., events that are preserved by the addition of open edges), $\mathbb{P}(A\cap B)\geq \mathbb{P}(A)\mathbb{P}(B)$. Hence, based on the assertion \eqref{eq00} we have just proved and using Lemma \ref{lemma-FKG}, we can repeat the arguments of \cite[Theorem 5.4]{BB07} to prove the assertion \cite[(5.28)]{BB07} therein and thus deduce \eqref{eq01}. Then, for every $\varepsilon>0$, \eqref{eq01} in particular implies that, for $n$ large enough,
\begin{equation}\label{eq1}
\sum_{\heap {x\in \mathcal C_\infty}{|x|\leq n}} \, \1_{\big\{|V(\omega,x)|>\varepsilon n\big\}} < \varepsilon n^d \qquad\mathbb{P}_0\mbox{-a.s.}
\end{equation}
Let us make another observation based on Lemma \ref{chemdist}. Recall that $\theta(p)>0$ denotes the percolation density, i.e., $\theta(p)$ is the probability that $0$ is in the infinite open cluster $\mathcal C_\infty$. Also, for the arbitrary $\varepsilon>0$ fixed before, let us set
\begin{equation}\label{delta-def}
\delta= \frac 12 \bigg(\frac{4\varepsilon}{\theta(p)}\bigg)^{\frac 1d}.
\end{equation}
Then, by Lemma \ref{chemdist}, for every $x,y\in \mathcal C_\infty$ with $|x|<n$, $|y|<n$ and $\delta n /2\leq |x-y| < \delta n$,
\begin{equation}\label{eq2}
{\rm d}_{\mathrm{ch}}(x,y) < \rho \delta n.
\end{equation}
Finally, let us recall Lemma \ref{lemma-ergodic}.
Hence, for every fixed $\delta>0$, for every $n$ large enough and $\mathbb{P}_0$-almost surely, every ball of radius $\delta n$ centered in $[-n,n]^d$ contains at least $\delta^d (2n)^d \,\frac {\theta(p)} 2$ points of $\mathcal C_\infty$ (Lemma \ref{lemma-ergodic} suffices for the above statement because we take the infinite cluster $\mathcal C_\infty$ as our point process, and we use Lemma \ref{lemma-ergodic} for finitely many cubes $C^{\ssup d}$). Then, for our choice of $\delta$ as required in \eqref{delta-def},
\begin{equation}\label{eq3}
\begin{aligned}
\#\big\{\mbox{points of } \mathcal C_\infty \mbox{ in a ball of radius } \delta n \mbox{ centered in } [-n,n]^d\big\} \geq 2\varepsilon n^d.
\end{aligned}
\end{equation}
Given \eqref{eq1} and \eqref{eq3}, we now claim that, for large enough $n$ and every $x\in [-n,n]^d$, there exists $y\in [-n,n]^d \cap \mathcal C_\infty$ so that $|y-x|< \delta n$ and
$$
|V(\omega,y)| \leq \varepsilon n\qquad\mathbb{P}_0\mbox{-a.s.}
$$
Indeed, by \eqref{eq1} there are at most $\varepsilon n^d$ points $z\in [-n,n]^d\cap\mathcal C_\infty$ such that $|V(\omega,z)| \geq \varepsilon n$, while by \eqref{eq3} there are at least $2\varepsilon n^d$ points in $B_{n\delta}(x)\cap \mathcal C_\infty$. Hence, we have at least one point $y\in [-n,n]^d \cap \mathcal C_\infty$ such that $\delta n /2\leq |y-x|< \delta n$ and $|V(\omega,y)| \leq \varepsilon n$, $\mathbb{P}_0$-almost surely. Let us now prove \eqref{eq0}. Recall the definition of $V$ from \eqref{Psidef}. Then, by \eqref{eq2},
$$
\begin{aligned}
\big| V(\omega,x) - V(\omega,y)\big| &\leq {\rm d}_{\mathrm{ch}}(x,y) \,\max_{e\in\mathcal U_d}\mathrm{ess\,sup}_{\mathbb{P}_0}\, |G(\cdot, e)|\\
&\leq \rho \delta n A,
\end{aligned}
$$
for some $A< \infty$; recall \eqref{unifbound}. Since $|V(\omega,y)| \leq \varepsilon n$, we obtain, $\mathbb{P}_0$-almost surely,
$$
\begin{aligned}
|V(\omega,x)| &\leq |V(\omega,y)|+ \rho \delta n A \\
& \leq \varepsilon n+ \rho \delta n A.
\end{aligned}
$$
Since $\varepsilon>0$ is arbitrary and $\delta\to 0$ as $\varepsilon\to 0$ according to \eqref{delta-def}, Theorem \ref{sublinearthm} is proved. \qed

We have an immediate corollary to Theorem \ref{sublinearthm}.
\begin{cor}\label{cor-sublinear}
Let $G\in \mathcal G_\infty$. For every $\varepsilon>0$, there exists $c_\varepsilon=c_\varepsilon(\omega)$ so that, for every sequence of points $(x_k)_{k=0}^n$ on $\mathcal C_\infty$ with $x_0=0$ and $|x_{k+1}-x_{k}|=1$,
$$
\bigg| \sum_{k=0}^{n-1} G(\tau_{x_k} \omega, x_{k+1}-x_k)\bigg| \leq c_\varepsilon+n\varepsilon.
$$
In particular,
\begin{equation}\label{ub2}
\sum_{k=0}^{n-1} G(\tau_{x_k} \omega, x_{k+1}-x_k) \geq -c_\varepsilon- n\varepsilon.
\end{equation}
\end{cor}

\subsection{Proof of the upper bound for Theorem \ref{thmmomgen}}\label{subsec-ub}

We now prove the upper bound in Theorem \ref{thmmomgen} using the sub-linear growth property of gradient functions established in Theorem \ref{sublinearthm} and Corollary \ref{cor-sublinear}.
\begin{lemma}[The upper bound]\label{ub}
For $\mathbb{P}_0$-almost every $\omega\in \Omega_0$,
$$
\limsup_{n\to\infty} \frac 1n\log E_{0}^{\pi,\omega} \bigg\{ \exp \bigg\{\sum_{k=0}^{n-1} f\big(\tau_{X_k}\omega, X_{k+1}-X_k\big)\bigg\}\bigg\} \leq \inf_{G\in \mathcal G_\infty}\Lambda(f, G),
$$
where
\begin{equation}\label{UfG}
\Lambda(f, G)=\mathrm{ess\,sup}_{\mathbb{P}_0} \bigg(\log \sum_{e} \1_{\{\omega_e=1\}}\,\pi_\omega(0,e) \exp\big\{f(\omega,e)+ G(\omega,e)\big\}\bigg).
\end{equation}
\end{lemma}
\begin{proof}
Fix $G\in \mathcal G_\infty$. By the definition of the Markov chain $P^{\pi,\omega}_0$ we have, $\mathbb{P}_0$-a.s.,
$$
\begin{aligned}
&E^{\pi,\omega}_0\bigg\{\exp\bigg\{ f(\tau_{X_{k}}\omega,X_{k+1}-X_{k})+G(\tau_{X_{k}}\omega,X_{k+1}-X_{k})\bigg\}\bigg|X_{k}\bigg\} \\
&= \sum_{|e|=1} \pi_\omega\big(X_{k},X_{k}+e\big)\, {\rm e}^{f(\tau_{X_{k}}\omega,e)+ G(\tau_{X_{k}}\omega,e)}\\
&=\sum_{|e|=1}\1_{\{(\tau_{X_{k}}\omega)(e)=1\}}\, \pi_\omega\big(X_{k},X_{k}+e\big)\, {\rm e}^{f(\tau_{X_{k}}\omega,e)+ G(\tau_{X_{k}}\omega,e)}\\
&\leq {\rm e}^{\Lambda(f, G)},
\end{aligned}
$$
where the uniform upper bound follows from \eqref{UfG}. Invoking the Markov property and successive conditioning, we have
\begin{equation}\label{ub1}
E^{\pi,\omega}_0\bigg\{\exp\bigg\{\sum_{k=0}^{n-1} \bigg(f(\tau_{X_k}\omega,X_{k+1}-X_{k})+G(\tau_{X_k}\omega,X_{k+1}-X_{k})\bigg)\bigg\}\bigg\} \leq {\rm e}^{n\Lambda(f, G)}.
\end{equation}
We now recall Corollary \ref{cor-sublinear} and plug the lower bound \eqref{ub2} into \eqref{ub1}. Then, if we divide both sides by $n$, take logarithms and pass to $\limsup_{n\to\infty}$, we obtain the upper bound
$$
\limsup_{n\to\infty} \frac 1n\log E^{\pi,\omega}_0 \bigg\{ \exp \bigg\{\sum_{k=0}^{n-1} f\big(\tau_{X_k}\omega, X_{k+1}-X_k\big)\bigg\}\bigg\} \leq \Lambda(f, G)+ \varepsilon.
$$
Letting $\varepsilon\to 0$ and subsequently taking $\inf_{G\in \mathcal G_\infty}$, we finish the proof of the lemma.
\end{proof}

\section{Equivalence of bounds: min-max theorems based on entropic coercivity}\label{sec-proof-ldp}

In this section we turn to the proof of the crucial fact that the lower bound obtained from Corollary \ref{corlb} and the upper bound from Lemma \ref{ub} indeed match. The following theorem contains the key argument of our analysis and will also finish the proof of Theorem \ref{thmmomgen}. Recall the lower bound variational formula $\overline H(f)$ from \eqref{thm-lbub2a}, and the upper bound variational formula $\Lambda(f,G)$ from \eqref{UfG}.
\begin{theorem}[Equivalence of bounds]\label{thm-lbub}
For every continuous and bounded function $f$ on $\Omega_0\times\mathcal U_d$,
$$
\begin{aligned}
\overline H(f)&= \inf_{G\in\mathcal G_\infty} \,\mathrm{ess}\,\sup_{\mathbb{P}_0}\,\bigg(\log\sum_{e\in\mathcal U_d} \1_{\omega(e)=1}\,\pi_\omega(0,e)\, \exp\big\{f(\omega,e)+ G(\omega,e)\big\}\bigg) \\
&=\inf_{G\in\mathcal G_\infty} \Lambda(f,G).
\end{aligned}
$$
\end{theorem}
We will prove Theorem \ref{thm-lbub} in several steps. Note that we already know that $\overline H(f)\leq \inf_{G\in\mathcal G_\infty} \Lambda(f,G)$, and it remains to prove the inequality in the opposite direction.
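For the record, the inequality already at hand follows by combining Corollary \ref{corlb} with Lemma \ref{ub}: writing, for brevity, $Z_n(f):= E^{\pi,\omega}_0\big\{\exp\big\{\sum_{k=0}^{n-1} f(\tau_{X_k}\omega, X_{k+1}-X_k)\big\}\big\}$ for the exponential moment appearing there, we have, $\mathbb{P}_0$-almost surely,
\begin{equation*}
\overline H(f)\;\leq\;\liminf_{n\to\infty}\frac 1n\log Z_n(f)\;\leq\;\limsup_{n\to\infty}\frac 1n\log Z_n(f)\;\leq\;\inf_{G\in\mathcal G_\infty}\Lambda(f,G).
\end{equation*}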
The first step is to invoke a min-max argument to exchange the order of $\sup_{\widetilde\pi}$ and $\inf_g$ in \eqref{thm-lbub2}, and subsequently to solve the maximization problem in $\widetilde\pi$. The resulting assertion is the following.
\begin{lemma}[Entropic coercivity in $\widetilde\pi$]\label{thm-lbub-lemma1}
For every continuous and bounded function $f$ on $\Omega_0\times\mathcal U_d$,
$$
\overline H(f)=\sup_{\phi}\, \inf_{g} \int {\rm d}\mathbb{P}_0(\omega)\,\phi(\omega) L(g,\omega),
$$
where
\begin{equation}\label{thm-lbub5}
L(g, \omega)=L_f(g,\omega)=\log\bigg(\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{f(\omega,e)+ g(\omega)-g(\tau_e\omega)\big\}\bigg).
\end{equation}
\end{lemma}
\begin{proof}
Let us rewrite \eqref{thm-lbub2a} as
\begin{equation}\label{thm-lbub2}
\begin{aligned}
\overline H(f)&= \sup_{\phi}\, \sup_{\tilde\pi\in\widetilde\Pi}\,\inf_{g} \bigg[\int {\rm d}\mathbb{P}_0(\omega)\,\phi(\omega) \bigg\{ \sum_{e\in \mathcal U_d} \tilde\pi(\omega,e) \big(F(\omega,e) -\log\tilde\pi(\omega,e)\big)\bigg\}\bigg],
\end{aligned}
\end{equation}
where
\begin{equation}\label{def-F}
F(\omega,e)=F(\pi,f,g,\omega,e)=f(\omega,e) + \log {\pi_\omega(0,e)} +g(\omega)-g(\tau_e\omega),
\end{equation}
and the infimum is taken over bounded and measurable $g$. Let us fix an arbitrary density $\phi$ (i.e., $\phi\geq 0$ and $\mathbb{E}_0\phi=1$). For every $\widetilde\pi$ and $g$, let us write the functional
\begin{equation}\label{def-frakF}
\begin{aligned}
\mathfrak F(\widetilde\pi,g)&= \int {\rm d}\mathbb{P}_0(\omega)\,\phi(\omega) \bigg\{ \sum_{e\in \mathcal U_d} \tilde\pi(\omega,e) \big[f(\omega,e) + \log {\pi_\omega(0,e)} +g(\omega)-g(\tau_e\omega) -\log\widetilde\pi(\omega,e)\big]\bigg\} \\
&=\int {\rm d}\mathbb{P}_0(\omega)\,\phi(\omega) \bigg\{ \sum_{e\in \mathcal U_d} \tilde\pi(\omega,e) \big[F(\omega,e) -\log\widetilde\pi(\omega,e)\big]\bigg\},
\end{aligned}
\end{equation}
with $F(\omega,e)$ defined in \eqref{def-F} (recall \eqref{thm-lbub2}). First we would like to show that
\begin{equation}\label{thm-lbub-minmax1}
\sup_{\tilde\pi\in\widetilde\Pi}\,\inf_{g} \mathfrak F(\widetilde\pi,g)= \inf_g\,\sup_{\tilde\pi\in\widetilde\Pi} \mathfrak F(\widetilde\pi,g).
\end{equation}
This requires the following coercivity argument. Note that, corresponding to each $\widetilde\pi\in \widetilde\Pi$, we have the entropy functional
$$
\mathrm{Ent}(\mu_{\widetilde\pi})=\int\sum_e \widetilde\pi(\omega,e)\log\widetilde\pi(\omega,e)\, \phi(\omega)\,{\rm d}\mathbb{P}_0(\omega)
$$
for the probability measure ${\rm d}\mu_{\widetilde\pi}(\omega,e)= \widetilde\pi(\omega,e) \big(\phi(\omega)\,{\rm d}\mathbb{P}_0(\omega)\big) \in {\mathcal M}_1(\Omega_0\times \mathcal U_d)$.
Then for every fixed $\phi$, the map $\widetilde\pi\mapsto \mathrm{Ent}(\mu_{\widetilde\pi})$ is convex, lower semi-continuous and has weakly compact sub-level sets (i.e., for every $a\in \mathbb{R}$, the set $\{\widetilde\pi\in\widetilde\Pi\colon \mathrm{Ent}(\mu_{\widetilde\pi}) \leq a\}$ is weakly compact). Furthermore, for every probability density $\phi$, every continuous and bounded function $f$ on $\Omega_0\times\mathcal U_d$ and bounded measurable function $g$, and for every $\widetilde\pi\in \widetilde\Pi$,
$$
\begin{aligned}
\int \phi(\omega) \sum_{e\in\mathcal U_d} \widetilde \pi(\omega,e) F(\omega,e)\,\mathrm{d}\mathbb{P}_0&= \int \phi(\omega) \sum_{e\in\mathcal U_d} \widetilde \pi(\omega,e) \big[f(\omega,e) + \log {\pi_\omega(0,e)} +\big(g(\omega)-g(\tau_e\omega)\big)\big] \,\mathrm{d}\mathbb{P}_0 \\
&\leq \big(\|f\|_\infty+2\|g\|_\infty\big) \,\int \phi(\omega) \sum_{e\in\mathcal U_d} \widetilde \pi(\omega,e)\,\mathrm{d}\mathbb{P}_0\\
&=\|f\|_\infty+2\|g\|_\infty:=C<\infty.
\end{aligned}
$$
We conclude that for every bounded and measurable $g$, the map $\widetilde\pi\mapsto \mathfrak F(\widetilde\pi,g)$ is concave, weakly upper-semicontinuous and has weakly compact super-level sets $\{\widetilde\pi\colon\, \mathfrak F(\widetilde\pi,g)\geq a\}$ for every $a\in \mathbb{R}$. Furthermore, for every $\widetilde\pi\in \widetilde\Pi$, the map $g\mapsto \mathfrak F(\widetilde\pi,g)$ is linear and continuous in $g$. Then, in view of {\it von Neumann's min-max theorem} (p.\ 319, \cite{AE84}), the equality \eqref{thm-lbub-minmax1} holds. Hence,
\begin{equation}\label{thm-lbub3}
\begin{aligned}
\overline H(f) = \sup_{\phi}\, \inf_{g}\, \sup_{\widetilde\pi}\, \bigg[\int \mathrm{d}\mathbb{P}_0(\omega)\,\phi(\omega) \bigg\{ &\sum_{e\in \mathcal U_d} \tilde\pi(\omega,e)\big[F(\omega,e)-\log \widetilde\pi(\omega,e)\big]\bigg\}\bigg].
\end{aligned}
\end{equation}
Since the integrand above depends only locally on $\widetilde\pi$, we can bring the $\sup_{\widetilde\pi\in\widetilde\Pi}$ inside the integral and solve the variational problem
$$
\sup_{\widetilde\pi}\sum_{e\in \mathcal U_d} \tilde\pi(\omega,e)\big[F(\omega,e)-\log \widetilde\pi(\omega,e)\big]
$$
subject to the constraint $\sum_e \widetilde\pi(\cdot,e)=1$, handled via a Lagrange multiplier. The maximizer is
$$
\widetilde\pi(\cdot,e)=\frac{\exp[F(\omega,e)]}{\sum_{e'\in\mathcal U_d}\exp[F(\omega,e')]},
$$
and if we plug this value into \eqref{thm-lbub3} and recall the definition of $F(\omega,e)$ from \eqref{def-F}, then \eqref{thm-lbub3} leads us to
\begin{equation}\label{thm-lbub4}
\begin{aligned}
\overline H(f) &= \sup_{\phi}\, \inf_{g} \bigg[\int \mathrm{d}\mathbb{P}_0(\omega)\,\phi(\omega) \log\bigg(\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{f(\omega,e)+ g(\omega)-g(\tau_e\omega)\big\}\bigg)\bigg] \\
&=\sup_{\phi}\, \inf_{g} \,\,\int \mathrm{d}\mathbb{P}_0(\omega)\,\phi(\omega) L(g,\omega),
\end{aligned}
\end{equation}
which concludes the proof of Lemma \ref{thm-lbub-lemma1}.
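For the reader's convenience, we record the elementary computation behind the last step. By the definition of the maximizer, $F(\omega,e)-\log\widetilde\pi(\omega,e)=\log\sum_{e'\in\mathcal U_d}\exp\{F(\omega,e')\}$ for every $e$, whence
$$
\sup_{\widetilde\pi}\sum_{e\in \mathcal U_d} \tilde\pi(\omega,e)\big[F(\omega,e)-\log \widetilde\pi(\omega,e)\big]
=\log\sum_{e'\in\mathcal U_d}\exp\big\{F(\omega,e')\big\}
=\log\bigg(\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{f(\omega,e)+ g(\omega)-g(\tau_e\omega)\big\}\bigg)=L(g,\omega),
$$
where the last two equalities use \eqref{def-F} and \eqref{thm-lbub5}.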
\end{proof}

Now we would like to exchange $\sup_{\phi}$ and $\inf_{g}$ in \eqref{thm-lbub4}. For this, we need to invoke a {\it compactification} argument based on an {\it entropy penalization method}. This is the content of the next lemma.
\begin{lemma}[Entropy penalization and coercivity in $\phi$]\label{thm-lbub-lemma2}
For every continuous and bounded function $f$ on $\Omega_0\times\mathcal U_d$,
$$
\overline H(f) \geq \liminf_{\varepsilon\to 0}\inf_{g}\, \varepsilon \log \mathbb{E}_0\bigg[\mathrm{e}^{\varepsilon^{-1} L(g,\cdot)}\bigg],
$$
where $L(g,\cdot)$ is the functional defined in \eqref{thm-lbub5}.
\end{lemma}
\begin{proof}
We start from \eqref{thm-lbub4}. For every probability density $\phi\in L^1_+(\mathbb{P}_0)$, note that its entropy functional
$$
\mathrm{Ent}(\phi)= \int \phi(\omega)\,\log\phi(\omega)\,\mathrm{d}\mathbb{P}_0(\omega)
$$
is always non-negative by Jensen's inequality. Hence, for every fixed $\varepsilon>0$, we have the lower bound
\begin{equation}\label{thm-lbub6}
\begin{aligned}
\overline H(f) &\geq \sup_{\phi}\, \inf_{g} \bigg[\int \mathrm{d}\mathbb{P}_0(\omega)\,\phi(\omega)\bigg( L(g,\omega) - \varepsilon \log\phi(\omega)\bigg)\bigg].
\end{aligned}
\end{equation}
Again, $\phi\mapsto \mathrm{Ent}(\phi)$ is convex and weakly lower semicontinuous in $L^1_+(\mathbb{P}_0)$, with its sub-level sets $\{\phi\colon \,\int \phi\log\phi\,\mathrm{d}\mathbb{P}_0\leq a\}$ being weakly compact in $L^1_+(\mathbb{P}_0)$ for all $a\in\mathbb{R}$. Also, by \eqref{thm-lbub5}, for every bounded $f$ on $\Omega_0\times\mathcal U_d$, every bounded $g$, and every $\phi$,
$$
\int \phi(\omega) L(g,\omega)\,\mathrm{d}\mathbb{P}_0= \int \phi(\omega)\log\bigg(\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{f(\omega,e)+ g(\omega)-g(\tau_e\omega)\big\}\bigg)\mathrm{d}\mathbb{P}_0 \leq \|f\|_\infty+2 \|g\|_\infty=C<\infty.
$$
Then, if we write
\begin{equation}\label{def-Acal}
\mathcal A_\varepsilon(g,\phi)=\int \mathrm{d}\mathbb{P}_0(\omega)\,\big[\phi(\omega) L(g,\omega) - \varepsilon \phi(\omega)\log\phi(\omega)\big],
\end{equation}
then, for every $\varepsilon>0$, as in Lemma \ref{thm-lbub-lemma1}, for every bounded and measurable $g$ the map $g\mapsto \mathcal A_\varepsilon(g,\phi)$ is convex and continuous, and the map $\phi\mapsto \mathcal A_\varepsilon(g,\phi)$ is concave and upper semicontinuous with weakly compact super-level sets (i.e., the set $\{\phi\colon\, \mathcal A_\varepsilon(g,\phi) \geq a\}$ is weakly compact for all $a\in\mathbb{R}$). Applying von Neumann's min-max theorem once more, we can swap the order of $\sup_\phi$ and $\inf_g$ in \eqref{thm-lbub6}. Hence,
\begin{equation}\label{thm-lbub8}
\begin{aligned}
\overline H(f) \geq \inf_{g} \,\sup_{\phi}\, \mathcal A_\varepsilon(g,\phi) &=\inf_{g}\, \varepsilon \log \mathbb{E}_0\bigg[\mathrm{e}^{\varepsilon^{-1} L(g,\cdot)}\bigg] \\
&\geq \liminf_{\varepsilon\to 0}\inf_{g}\, \varepsilon \log \mathbb{E}_0\bigg[\mathrm{e}^{\varepsilon^{-1} L(g,\cdot)}\bigg].
\end{aligned}
\end{equation}
We remark that the second identity above follows from a standard perturbation argument in $\phi$ and the definition of $\mathcal A_\varepsilon$ set in \eqref{def-Acal}. Indeed, for every admissible test function $\psi$, we need to solve for $\phi$ by setting
$$
\frac{\mathrm{d}}{\mathrm{d}\eta}\bigg|_{\eta=0} \bigg[\mathcal A_\varepsilon(g,\phi+\eta\psi)\bigg]=0
$$
for every fixed $\varepsilon>0$ and $g$, subject to the condition $\int \phi \,\mathrm{d}\mathbb{P}_0=1$. The solution is given by
$$
\phi(\cdot)= \frac{\exp\{\varepsilon^{-1} L(g,\cdot)\}}{\mathbb{E}_0[\exp\{\varepsilon^{-1} L(g,\cdot)\}]}.
$$
If we substitute this value of $\phi$ in \eqref{def-Acal}, then we are led to the identity \eqref{thm-lbub8}. This concludes the proof of Lemma \ref{thm-lbub-lemma2}.
\end{proof}

We need the following important lemma, whose proof is deferred until the end of the proof of Theorem \ref{thm-lbub}. Recall that $\mathcal U_d=\{\pm u_i\}_{i=1}^d$ denotes the set of nearest neighbors of the origin $0$.
\begin{lemma}\label{lemma-last}
For every given $\eta>0$, there exist a sequence $\varepsilon_n\to 0$ and a sequence $g_n$ of bounded measurable functions such that
\begin{equation}\label{thm-lbub9}
\eta+ \overline H(f) \geq \varepsilon_n \log \mathbb{E}_0\bigg[\mathrm{e}^{\varepsilon_n^{-1} L(g_n,\cdot)}\bigg],
\end{equation}
and for every $u\in\mathcal U_d$ and every $p\geq 1$,
\begin{equation}\label{def-grad-n}
G_n(\omega,u)= \1\{0\in\mathcal C_\infty\}\,\1\{\omega(u)=1\} \,\,\big(g_n(\omega)- g_n(\tau_u\omega)\big)
\end{equation}
converges weakly in $L^p(\mathbb{P}_0)$ as well as in distribution (as random variables) along some subsequence to some $G(\cdot,u)$. Furthermore, $G \in \mathcal G_\infty$.
\end{lemma}
We first assume the above lemma and prove Theorem \ref{thm-lbub}. For this purpose, we need another lemma.
\begin{lemma}\label{lemma-monotonicity}
For every probability measure $\mu$ and every random variable $X$ with finite exponential moments, if we set
$$
\psi(\lambda)=\log\mathbb{E}^{\ssup\mu}\big[\mathrm{e}^{\lambda X}\big],
$$
then the map $\lambda\mapsto \frac{\psi(\lambda)}{\lambda}$ is increasing on $(0,\infty)$.
\end{lemma}
\begin{proof}
Indeed, $\psi(\lambda)$ is convex and twice differentiable in $\lambda$. In particular, $\psi''(\lambda)>0$, $\psi(0)=0$ and
$$
\bigg(\frac{\psi(\lambda)}{\lambda}\bigg)'= \frac{\psi'(\lambda)}{\lambda}- \frac{\psi(\lambda)}{\lambda^2}=\frac{\lambda\psi'(\lambda)-\psi(\lambda)}{\lambda^2}.
$$
Since $\lambda\psi'(\lambda)-\psi(\lambda)$ is $0$ at $\lambda=0$ and $(\lambda\psi'(\lambda)-\psi(\lambda))'=\lambda\psi''(\lambda)>0$, we conclude that $\lambda\mapsto \frac{\psi(\lambda)}{\lambda}$ is increasing in $\lambda>0$.
\end{proof}
We now continue with the proof of Theorem \ref{thm-lbub} assuming Lemma \ref{lemma-last}.

\noindent {\bf{Proof of Theorem \ref{thm-lbub}:}} Fix $\eta>0$.
Note that Lemma \ref{lemma-monotonicity} and \eqref{thm-lbub9} imply that for $\lambda>0$ and large enough $n$,
$$
\begin{aligned}
\eta+ \overline H(f) &\geq \frac 1\lambda \log \mathbb{E}_0\big[\mathrm{e}^{\lambda L(g_n,\cdot)}\big].
\end{aligned}
$$
For every $M>0$, let us remark that $x\mapsto \exp\big\{\min\{x,M\}\big\}$ is bounded and continuous. Then we can plug the expression for $L(g_n,\cdot)$ from \eqref{thm-lbub5} into the last bound and recall the definition of $G_n$ from \eqref{def-grad-n} to get
$$
\begin{aligned}
\eta+ \overline H(f) &\geq\frac 1\lambda \log \mathbb{E}_0\bigg[\exp\bigg\{\lambda\log\bigg(\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{f(\omega,e)+ \min\{G_n(\omega,e),M\}\big\}\bigg)\bigg\}\bigg].
\end{aligned}
$$
Let us also remark that $f$ is continuous and bounded on $\Omega_0\times\mathcal U_d$. If we now let $n\to\infty$, the first part of Lemma \ref{lemma-last} implies that $G_n(\omega,e)$ converges weakly to some $G(\omega,e)$. Hence,
\begin{equation}\label{thm-lbub10}
\eta+ \overline H(f) \geq\frac 1\lambda \log \mathbb{E}_0\bigg[\exp\bigg\{\lambda\log\bigg(\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{f(\omega,e)+ \min\{G(\omega,e),M\}\big\}\bigg)\bigg\}\bigg].
\end{equation}
If we now let $M\uparrow\infty$ and use the monotone convergence theorem, we get
$$
\eta+ \overline H(f) \geq\frac 1\lambda \log \mathbb{E}_0\bigg[\exp\bigg\{\lambda\log\bigg(\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{f(\omega,e)+ G(\omega,e)\big\}\bigg)\bigg\}\bigg].
$$
If we now let $\lambda\to\infty$, we deduce that
\begin{equation}\label{thm-lbub11}
\begin{aligned}
\eta+ \overline H(f) &\geq\mathrm{ess}\,\sup_{\mathbb{P}_0}\, \log\bigg(\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{f(\omega,e)+ G(\omega,e)\big\}\bigg) \\
&\geq\inf_{G\in\mathcal G_\infty}\,\mathrm{ess}\,\sup_{\mathbb{P}_0}\, \log\bigg(\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{f(\omega,e)+ G(\omega,e)\big\}\bigg),
\end{aligned}
\end{equation}
and in the last lower bound we invoked the second part of Lemma \ref{lemma-last}, which asserts that $G\in \mathcal G_\infty$. Since the choice of $\eta>0$ was arbitrary, the last bound proves Theorem \ref{thm-lbub}, assuming Lemma \ref{lemma-last}.
\qed

We now owe the reader the proof of Lemma \ref{lemma-last}.

\noindent {\bf{Proof of Lemma \ref{lemma-last}:}} We will prove Lemma \ref{lemma-last} in two main steps. In the first step we will show that the sequence of formal gradients $G_n$ defined in \eqref{def-grad-n} is uniformly integrable and converges along a subsequence to some $G$. In the next step we will show that the limit $G$ belongs to the class $\mathcal G_\infty$ introduced in Section \ref{subsec-classG}.

\noindent{\bf{Step 1: Proving $L^p(\mathbb{P}_0)$ boundedness and uniform integrability of $G_n$.}} First we want to prove that $G_n$ defined in \eqref{def-grad-n} is uniformly bounded in $L^p(\mathbb{P}_0)$ for every $p\geq 1$ and that $G_n$ is also uniformly integrable.
Note that by \eqref{thm-lbub8}, there exist a sequence $\varepsilon_n\to 0$ and a sequence $(g_n)_n$ of bounded measurable functions so that
$$
\varepsilon_n\log\mathbb{E}_0\bigg[\exp\bigg\{\varepsilon_n^{-1} \log\bigg(\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{f(\omega,e)+ g_n(\omega)-g_n(\tau_e\omega)\big\}\bigg)\bigg\}\bigg] \leq \overline H(f).
$$
Since $f$ is bounded, $f(\omega,e)\geq -\|f\|_\infty$, and by Lemma \ref{lemma-monotonicity} we have in particular
\begin{equation}\label{thm-lbub12}
\mathbb{E}_0\bigg[\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{g_n(\omega)-g_n(\tau_e\omega)\big\}\bigg] \leq \exp\{\overline H(f)+\|f\|_\infty\}.
\end{equation}
Recall that $\mathcal U_d=\{\pm u_i\}_{i=1}^d$ are the nearest neighbors of the origin $0$. For every $u=\pm u_i$, let $\Omega_{0,u}$ denote the set of configurations $\omega$ such that both $0$ and $u$ are in the infinite cluster $\mathcal C_\infty(\omega)$ and the edge $0\leftrightarrow u$ is present (i.e., $\omega_u=1$). Then $\mathbb{P}(\Omega_{0,u})>0$ and we set $\mathbb{P}_{0,u}(\cdot)=\mathbb{P}(\cdot| \Omega_{0,u})$. Now for every $u=\pm u_i$, if the edge $0\leftrightarrow u$ is present, then $\pi_\omega(0,u)\geq 1/(2d)>0$, and for some constant $C>0$, \eqref{thm-lbub12} implies
\begin{equation}\label{thm-lbub13}
\mathbb{E}_{0,u}\big[\exp\big\{g_n(\omega)-g_n(\tau_u\omega)\big\}\big] \leq C.
\end{equation}
Now again by \eqref{thm-lbub12},
$$
\begin{aligned}
\mathbb{E}_{0,u}\bigg[\sum_{e\in\mathcal U_d}\pi_{\tau_u\omega}(0,e)\exp\big\{g_n(\tau_u\omega)-g_n(\tau_e\tau_u\omega)\big\}\bigg] &= \mathbb{E}_0\bigg[\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{g_n(\omega)-g_n(\tau_e\omega)\big\} \,\Big|\, \omega(0,-u)=1 \bigg] \\
&\leq C\, \mathbb{E}_0\bigg[\sum_{e\in\mathcal U_d}\pi_\omega(0,e)\exp\big\{g_n(\omega)-g_n(\tau_e\omega)\big\} \bigg]\\
&\leq C\exp\{\overline H(f)+\|f\|_\infty\}.
\end{aligned}
$$
Now if the edge $0\leftrightarrow u$ is present in the configuration $\omega$ (i.e., $\omega_u=1$), then the edge $-u\leftrightarrow 0$ is present in the configuration $\tau_u\omega$ (i.e., $\pi_{\tau_u\omega}(0,-u)\geq 1/(2d)>0$), and hence, again,
\begin{equation}\label{thm-lbub14}
\mathbb{E}_{0,u}\big[\exp\big\{g_n(\tau_u\omega)-g_n(\tau_{-u}\tau_u\omega)\big\}\big] =\mathbb{E}_{0,u}\big[\exp\big\{g_n(\tau_u\omega)-g_n(\omega)\big\}\big]\leq C.
\end{equation}
Let $G_n$ be the sequence defined in \eqref{def-grad-n}, and let $G_n^+$ and $G_n^-$ denote its positive and negative parts, respectively. Then $|G_n|=G_n^++G_n^-$, and it follows from \eqref{thm-lbub13} and \eqref{thm-lbub14} that the sequence $G_n$ is uniformly bounded in $L^p(\mathbb{P}_0)$ for every $p\geq 1$, and thus it is also uniformly integrable and uniformly tight under $\mathbb{P}_0$. Consequently, $G_n$ converges weakly in $L^p(\mathbb{P}_0)$ and also in distribution (as random variables) along a subsequence to some $G$.
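For completeness, we spell out the elementary passage from the exponential moment bounds \eqref{thm-lbub13}--\eqref{thm-lbub14} to these $L^p$ bounds. Since $G_n(\cdot,u)$ vanishes outside $\Omega_{0,u}$ and equals $g_n-g_n\circ\tau_u$ there, for every integer $p\geq 1$ (which suffices, as the $L^p(\mathbb{P}_0)$ norms are non-decreasing in $p$ on a probability space) we may use the bound $|x|^p\leq p!\,(\mathrm{e}^{x}+\mathrm{e}^{-x})$, valid for all $x\in\mathbb{R}$, to estimate
$$
\mathbb{E}_0\big[|G_n(\cdot,u)|^p\big]\;\leq\;\frac{\mathbb{P}(\Omega_{0,u})}{\mathbb{P}(0\in\mathcal C_\infty)}\,\mathbb{E}_{0,u}\big[|g_n-g_n\circ\tau_u|^p\big]
\;\leq\; p!\,\Big(\mathbb{E}_{0,u}\big[\mathrm{e}^{\,g_n-g_n\circ\tau_u}\big]+\mathbb{E}_{0,u}\big[\mathrm{e}^{-(g_n-g_n\circ\tau_u)}\big]\Big)\;\leq\; 2\,p!\,C,
$$
uniformly in $n$; in particular, boundedness in $L^2(\mathbb{P}_0)$ already yields the uniform integrability of $(G_n(\cdot,u))_n$.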
\noindent{\bf{Step 2: Proving that $G\in \mathcal G_\infty$.}} To conclude that $G\in \mathcal G_\infty$, note that the fact that $G$ is bounded in the essential supremum norm under $\mathbb{P}_0$ follows from the first inequality in the display \eqref{thm-lbub11}.\footnote{Note that the display \eqref{thm-lbub11} followed only from the first part of Lemma \ref{lemma-last} (i.e., the fact that $G_n$ converges weakly along a subsequence to some $G$), which we have just proved in Step 1. In particular, \eqref{thm-lbub11} does not use the second part of Lemma \ref{lemma-last}, which asserts that $G\in \mathcal G_\infty$ and which we are currently proving in Step 2.} Furthermore, the fact that $G$ satisfies the closed loop property \eqref{closedloop} on the infinite cluster $\mathcal C_\infty$ also follows easily from the structure of $G_n$, which is a gradient field on the infinite cluster $\mathcal C_\infty$. Indeed, let $0 = x_0, x_1, \dots, x_j = 0$ be a closed path in the lattice, let us set
$$
B(x_0,\dots,x_j)= \{ 0 = x_0, x_1, \dots, x_j=0 \textup{ is a closed path in } \mathcal C_{\infty} \},
$$
and fix an arbitrary measurable event $A$ in $\Omega_0$. Since $\mathbb{P}_0=\mathbb{P}(\cdot|\{0\leftrightarrow\infty\})$ and $\mathbb{P}$ is invariant under the shifts $\tau_x$ for every $x\in\mathbb{Z}^d$, it follows from the weak $L^2(\mathbb{P}_0)$ convergence of $G_n$ to $G$ from Lemma \ref{lemma-last} that for each $1 \le i \le j$,
$$
\lim_{n \to \infty} \mathbb{E}_0 \bigg[\1_{A \cap B(x_0, x_1, \dots, x_j)} (\omega)\, G_n(\tau_{x_{i-1}}\omega, x_i - x_{i-1})\bigg] = \mathbb{E}_0 \bigg[\1_{A \cap B(x_0, x_1, \dots, x_j)} (\omega)\, G(\tau_{x_{i-1}}\omega, x_i - x_{i-1})\bigg].
$$
By the definition of $G_n$ set forth in \eqref{def-grad-n},
$$
\sum_{i=1}^j G_n(\tau_{x_{i-1}}\omega, x_i - x_{i-1}) = 0 \quad \mbox{ for } \,\mathbb{P}_0\textup{-a.e. } \omega \in B(x_0, x_1, \dots, x_j).
$$
Hence,
$$
\mathbb{E}_0\bigg[ \1_{A \cap B(x_0, x_1, \dots, x_j)} (\omega) \sum_{i = 1}^{j} G(\tau_{x_{i-1}}\omega, x_i - x_{i-1})\bigg] = 0.
$$
Since $A$ is arbitrary,
$$
\1_{B(x_0, x_1, \dots, x_j)} (\omega) \sum_{i = 1}^{j} G(\tau_{x_{i-1}}\omega, x_i - x_{i-1}) = 0 \quad \mathbb{P}_0\textup{-a.s.}
$$
This proves the closed loop property of $G$. It remains to check the induced zero mean property \eqref{meanzero} of $G$. The following lemma will finish the proof of Lemma \ref{lemma-last}. Hence, the proof of Theorem \ref{thm-lbub} will also be concluded.
\begin{lemma}[Induced mean zero property of the limit $G$]\label{lemma-last-last}
The limiting gradient $G$ appearing in Lemma \ref{lemma-last} satisfies the induced mean zero property defined in \eqref{meanzero}. Hence, $G\in \mathcal G_\infty$.
\end{lemma}
\begin{proof}
Let us fix $e\in\mathcal U_d$, recall that $\ell$ denotes the graph distance from $0$ to $v_e=k(\omega,e)e$, and fix a shortest open path $(x_0=0,x_1,\ldots,x_\ell)$ from $0$ to $v_e=k(\omega,e)e$. We also recall from Section \ref{sec-classG} that the induced shift $\sigma_e\colon \Omega_0\rightarrow\Omega_0$ defined by $\sigma_e(\omega)=\tau_{k(\omega,e)e}(\omega)$ is $\mathbb{P}_0$-measure preserving. Hence, for every bounded measurable $g_n$,
\begin{equation}\label{eq:gdifzero}
\mathbb{E}_0\big[g_n(\tau_{k(\omega,e)e}\omega)-g_n(\omega)\big]= \mathbb{E}_0\big[g_n\circ\sigma_e-g_n\big]=0.
\end{equation}
Let us write
$$
F_M=\mathbb{E}_0\bigg[V(\omega,\,k(\omega,e)e)\, \1_{\ell<M}\bigg]= \mathbb{E}_0\bigg[\sum_{i=0}^{\ell-1}\, G\big(\tau_{x_i}\omega, x_{i+1}-x_i\big)\,\, \1_{\ell<M}\bigg].
$$
We claim that
\begin{equation}\label{claim_FM}
F_M\to 0 \quad\mbox{ as }\,\,M\to\infty.
\end{equation}
Note that, by \eqref{def-grad-n},
$$
F_M=\mathbb{E}_0\bigg[\1_{\ell<M}\,\sum_{i=0}^{\ell-1} \lim_{n\to\infty}\, G_n(\tau_{x_i}\omega, x_{i+1}-x_i)\bigg].
$$
We would like to show that, indeed,
\begin{equation}\label{claim-FM-0}
F_M =\mathbb{E}_0\bigg[\1_{\ell<M}\,\,\lim_{n\to\infty}\big(g_n(\tau_{k(\omega,e)e}\omega)-g_n(\omega)\big)\bigg],
\end{equation}
and consequently use \eqref{eq:gdifzero} to conclude
\begin{equation}\label{claim-FM-1}
\big|F_M\big|\leq\liminf_{n\to\infty}\bigg|\mathbb{E}_0\bigg[ {\1}_{\ell\geq M} \big(g_n(\tau_{k(\omega,e)e}\omega)-g_n(\omega)\big)\bigg] \bigg|.
\end{equation}
Indeed, let us fix an integer $j < M$ and a finite path $0 = x_0, x_1, \dots, x_j$ from the origin. Let
$$
B(x_0, x_1, \dots, x_j ) := \{\ell = j,\; 0 = x_0, x_1, \dots, x_j \textup{ is a path in } \mathcal C_{\infty} \}.
$$
Then,
$$
\{\ell = j\} = \bigcup_{0 = x_0, x_1, \dots, x_j} \{\ell = j\} \cap B(x_0, x_1, \dots, x_j ).
$$
We can choose $\widetilde B(x_0, x_1, \dots, x_j ) \subset B(x_0, x_1, \dots, x_j )$ such that we have a finite and disjoint union
\begin{equation}\label{eq-tilde-B}
\{\ell = j\} = \bigcup_{0 = x_0, x_1, \dots, x_j} \{\ell = j\} \cap \widetilde B(x_0, x_1, \dots, x_j ).
\end{equation}
Then it suffices to show that for each fixed path $0 = x_0, x_1, \dots, x_j$,
$$
\lim_{n \to \infty} \mathbb{E}_0\bigg[\1_{\{\ell = j\} \cap \widetilde B(x_0, x_1, \dots, x_j )}\,\,\big(g_n(\tau_{x_j}\omega)-g_n(\omega)\big)\bigg] = \mathbb{E}_0\bigg[\1_{\{\ell = j\} \cap \widetilde B(x_0, x_1, \dots, x_j )}\,\,\sum_{i = 1}^{j} G (\tau_{x_{i-1}}\omega, x_i - x_{i-1})\bigg].
$$
Equivalently, we need to show that for each $i=1,\dots,j$,
$$
\lim_{n \to \infty} \mathbb{E}_0\bigg[\1_{\{\ell = j\} \cap \widetilde B(x_0, x_1, \dots, x_j )}\,\,\big(g_n(\tau_{x_i}\omega) - g_n(\tau_{x_{i-1}}\omega)\big)\bigg] = \mathbb{E}_0\bigg[\1_{\{\ell = j\} \cap \widetilde B(x_0, x_1, \dots, x_j )}\,\,G (\tau_{x_{i-1}}\omega, x_i - x_{i-1})\bigg].
$$
But this statement follows from the fact that
$$
\1_{\{\ell = j\} \cap \widetilde B(x_0, x_1, \dots, x_j )}\,\,\big(g_n(\tau_{x_i}\omega) - g_n(\tau_{x_{i-1}}\omega)\big) = \1_{\{\ell = j\} \cap \widetilde B(x_0, x_1, \dots, x_j )}\, G_n (\tau_{x_{i-1}}\omega, x_i - x_{i-1})
$$
and from Lemma \ref{lemma-last}, which implies convergence of $G_n (\tau_{x_{i-1}}\omega, x_i - x_{i-1})$ to $G (\tau_{x_{i-1}}\omega, x_i - x_{i-1})$ along a subsequence weakly in $L^2$ under the measure $\mathbb{P}(\cdot | x_{i-1}, x_i \in \mathcal{C}_{\infty})$. This proves \eqref{claim-FM-0}, which also implies \eqref{claim-FM-1} when combined with \eqref{eq:gdifzero}.
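To make the last implication explicit: \eqref{eq:gdifzero} gives, for every $n$,
$$
\mathbb{E}_0\big[\1_{\ell<M}\,\big(g_n(\tau_{k(\omega,e)e}\omega)-g_n(\omega)\big)\big]=-\,\mathbb{E}_0\big[\1_{\ell\geq M}\,\big(g_n(\tau_{k(\omega,e)e}\omega)-g_n(\omega)\big)\big],
$$
so the expectations appearing on the right hand side of \eqref{claim-FM-0} and in \eqref{claim-FM-1} agree up to sign for every $n$, and \eqref{claim-FM-1} follows by passing to the limit in \eqref{claim-FM-0}.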
We now need to estimate the right hand side of \eqref{claim-FM-1}, which is
\begin{equation}\label{eq-claim-FM-3}
\begin{aligned}
\bigg|\mathbb{E}_0\bigg[ {\1}_{\ell\geq M} \bigg(g_n(\tau_{k(\omega,e)e}\omega)-g_n(\omega)\bigg)\bigg] \bigg| &=\bigg|\mathbb{E}_0\bigg[ \sum_{j=M}^\infty{\1}_{\ell= j} \bigg(\sum_{i=0}^{j-1} \big(g_n(\tau_{x_i}\omega)-g_n(\tau_{x_{i+1}}\omega)\big)\bigg)\bigg] \bigg|\\
&\leq\mathbb{E}_0\bigg[ \sum_{j=M}^\infty{\1}_{\ell= j} \bigg(\sum_{i=0}^{j-1} \big|G_n(\tau_{x_i}\omega,x_{i+1}-x_i)\big|\bigg)\bigg].
\end{aligned}
\end{equation}
For each $j\geq M$ we will now estimate $\mathbb{E}_0\bigg[{\1}_{\ell= j} \bigg(\sum_{i=0}^{j-1} |G_n(\tau_{x_i}\omega, x_{i+1}-x_i)|\bigg)\bigg]$. For this, it is enough to estimate
$$
\sum_{0 = x_0, \dots, x_j} \mathbb{E}_0\bigg[{\1}_{\{\ell= j\} \cap \widetilde B (x_0, \dots, x_j)} \bigg(\sum_{i=0}^{j-1} \big|G_n(\tau_{x_i}\omega, x_{i+1}-x_i)\big|\bigg)\bigg],
$$
where $\widetilde B(x_0,\dots,x_j)$ is the event defined before so that \eqref{eq-tilde-B} holds. Then by Lemma \ref{lemma-ell}, we can choose two constants $c_1, c_2$ such that for every $j \ge 1$,
$$
\mathbb{P}_0 (\ell = j) \le c_1 \exp(- c_2 j).
$$
Furthermore, let $p, q \geq 1$ be such that $(1/p) + (1/q) =1$ and
\begin{equation}\label{eq-rho}
\rho:=\frac{c_1 (2d)^{1/q}} {\exp(c_2 / p)} <1.
\end{equation}
Now by H\"older's inequality, writing $|G_n|=G_n^+ + G_n^-$ and invoking \eqref{thm-lbub13} and \eqref{thm-lbub14} again, we get
$$
\begin{aligned}
& \mathbb{E}_0\bigg[{\1}_{\{\ell= j\} \cap \widetilde B (x_0, \dots, x_j)}\, |G_n(\tau_{x_i}\omega, x_{i+1}-x_i)| \bigg] \\
&\leq \mathbb{E}_0\bigg[ {\1}_{\{x_{i}, x_{i+1} \in \mathcal{C}_{\infty}\}}\, |G_n(\tau_{x_i}\omega, x_{i+1}-x_i)|^q \bigg]^{1/q}\, \mathbb{P}_0 \bigg[\{\ell= j\} \cap \widetilde B (x_0, \dots, x_j)\bigg]^{1/p} \\
&\leq C_q\, \mathbb{P}_0 \bigg[\{\ell= j\} \cap \widetilde B (x_0, \dots, x_j)\bigg]^{1/p}.
\end{aligned}
$$
Since the number of paths $0 = x_0, \dots, x_j$ is bounded by $(2d)^j$, by H\"older's inequality,
$$
\sum_{0 = x_0, \dots, x_j} \mathbb{P}_0 \big(\{\ell= j\} \cap \widetilde B (x_0, \dots, x_j)\big)^{1/p} \le \left(c_1 (2d)^{1/q}\exp(-c_2 / p)\right)^j.
$$
Hence,
$$
\begin{aligned}
\sum_{0 = x_0, \dots, x_j} \mathbb{E}_0\bigg[{\1}_{\{\ell= j\} \cap \widetilde B (x_0, \dots, x_j)} \bigg(\sum_{i=0}^{j-1} \big|G_n(\tau_{x_i}\omega, x_{i+1}-x_i)\big|\bigg)\bigg] &\leq j \left(c_1 (2d)^{1/q}\exp(-c_2 / p)\right)^j \\
&=j \rho^j.
\end{aligned}
$$
Since $\rho<1$ by our assumption in \eqref{eq-rho}, the claim \eqref{claim_FM} follows from \eqref{eq-claim-FM-3} and \eqref{claim-FM-1}. Recall the definition of the corrector $V(\omega,k(\omega,e)e)=\sum_{i=0}^{\ell-1} G(\tau_{x_i}\omega,x_{i+1}-x_i)$ corresponding to the limit $G$ of $G_n$. To prove the induced mean zero property \eqref{meanzero} for $V$, we have to show that $\mathbb{E}_0\big[V(\omega,k(\omega,e)e)\big]=0$. For this, we note that $V(\omega,k(\omega,e)e)$ is the almost sure pointwise limit of $V(\omega,k(\omega,e)e)\,{\1}_{\ell\leq M}$ as $M\to \infty$.
Furthermore, $|V(\omega,k(\omega,e)e)\,{\1}_{\ell\leq M}|\leq \|G\|_\infty\,\ell$ for all $M$, and $\mathbb{E}_0 (\ell) <\infty$ by Lemma \ref{lemma-ell}, so by the dominated convergence theorem
$$
\mathbb{E}_0\big[V(\omega,k(\omega,e)e)\big] = \lim_{M\to\infty}\mathbb{E}_0\big[V(\omega,k(\omega,e)e)\,{\1}_{\ell\leq M}\big] = \lim_{M\to\infty}F_M= 0,
$$
where the last identity follows from \eqref{claim_FM}. We conclude that $G$ satisfies the induced mean zero property defined in \eqref{meanzero}. Hence, $G\in \mathcal G_\infty$ and the proofs of Lemma \ref{lemma-last-last} and Lemma \ref{lemma-last} are finished. This also concludes the proof of Theorem \ref{thm-lbub}.
\end{proof}

\begin{remark}\label{rmk-simplify}
As mentioned in Section \ref{sec-results-3}, let us point out that in the case of an elliptic RWRE, our arguments based on entropic coercivity used in Lemma \ref{thm-lbub-lemma1} and Lemma \ref{thm-lbub-lemma2} simplify the earlier approach used in \cite{R06,Y08}. Indeed, if $\mathbb{P}$ continues to denote the law of a stationary and ergodic random environment with $\pi$ being the transition probabilities of a random walk in $\mathbb{Z}^d$ such that $\int |\log\pi|^p \,\mathrm{d}\mathbb{P}<\infty$ with $p>d$, then following our earlier arguments we can set (as in \eqref{def-grad-n}) $G_n(\omega,e)=g_n(\omega)-g_n(\tau_e\omega)$. Then, following Step 1 of the proof of Lemma \ref{lemma-last}, we can show that $\{G_n^+\}_n$ as well as $\{G_n^-\}_n$, and hence $\{G_n\}_n$, all remain bounded in $L^p(\mathbb{P})$. Then every weak limit $G$ of $G_n$ defined here is immediately a gradient, so that the closed loop as well as the mean zero property (i.e., $\mathbb{E}^\mathbb{P}(G)=0$) are readily satisfied, thanks to the translation invariance of $\mathbb{P}$.
\end{remark}

\section{Proofs of Theorem \ref{thmmomgen}, Theorem \ref{thmlevel2}, Corollary \ref{thmlevel1} and Lemma \ref{nonlsc}}\label{sec-6}

\noindent {\bf{Proof of Theorem \ref{thmmomgen}:}} The proof of Theorem \ref{thmmomgen} is readily finished by combining the lower bound from Corollary \ref{corlb}, the upper bound from Lemma \ref{ub} and the equivalence of bounds from Theorem \ref{thm-lbub}. \qed

\noindent {\bf{Proof of Theorem \ref{thmlevel2}:}} By Theorem \ref{thmmomgen},
$$
\lim_{n\to\infty} \frac 1n \log E^{\pi,\omega}_0 \big\{\exp\big\{n \langle f, \mathfrak L_n\rangle\big\}\big\} = \sup_{\mu\in{\mathcal M}_1^\star} \big\{ \langle f, \mu\rangle- \mathfrak I(\mu)\big\}= \sup_{\mu\in{\mathcal M}_1(\Omega_0 \times \mathcal U_d)} \big\{ \langle f, \mu\rangle- \mathfrak I(\mu)\big\}= \mathfrak I^\star(f).
$$
Since $\Omega_0$ is a closed subset of $\Omega=\{0,1\}^{\mathbb B_d}$ and hence compact, ${\mathcal M}_1(\Omega_0 \times \mathcal U_d)$ is compact in the weak topology. The upper bound \eqref{ldpub} for all closed sets now follows from \cite[Theorem 4.5.3]{DZ98}. The lower bound \eqref{ldplb} has been proven in Lemma \ref{lemmalb}. \qed

\noindent {\bf{Proof of Corollary \ref{thmlevel1}}}: The claim follows by the contraction principle once we show that $\inf_{\xi(\mu)=x} \mathfrak I(\mu)= \inf_{\xi(\mu)=x} \mathfrak I^{\star\star}(\mu)$. This is easy to check using convexity of $\mathfrak I$ and $\mathfrak I^{\star\star}$.
\qed

\noindent {\bf{Proof of Lemma \ref{nonlsc}: The zero speed regime of SRWPC under a drift.}}

\noindent For every $\beta>1$, we define
$$
\pi^{\ssup \beta}(\omega,e)= \frac{V(e)\, \1_{\{\omega(e)=1\}}} {\sum_{e'\in \mathcal U_d}V(e')\, \1_{\{\omega(e')=1\}}} \in \widetilde\Pi,
$$
where
$$
V(e)= \begin{cases} \beta &\mbox{if } e=e_1,\\ 1 &\mbox{else}. \end{cases}
$$
Let $X_n^{\ssup \beta}$ be the Markov chain with transition probabilities $\pi^{\ssup\beta}$. By \cite{BGP02} and \cite{Sz02}, there exists $\beta_u=\beta_u(p,d)>0$ so that for $\beta>\beta_u$ the limiting speed
$$
\lim_{n\to\infty} \frac{X_n^{\ssup \beta}}{n},
$$
which exists and is an almost sure constant, is zero. For the Bernoulli (bond and site) percolation cases the last statement follows from \cite{BGP02} and \cite[Theorem 4.1]{Sz02}. Since the finite energy property (recall the proof of Lemma \ref{lemma-ell} for the random-cluster model) holds for the random-cluster model \cite{G06} and for the level sets of the Gaussian free field \cite[Remark 1.6]{RS13}, the proof of \cite[Theorem 4.1]{Sz02} is applicable, while in the case of random interlacements (for which the finite energy property fails), the statement regarding the zero speed of the random walk $X_n^{\ssup \beta}$ follows from \cite{FP16}.

\noindent Then, by Kesten's lemma (see \cite{k75}), there exists no $\phi\in L^1(\mathbb{P}_0)$ so that $(\pi^{\ssup\beta},\phi)\in \mathcal E$. We split the proof into two cases. Suppose first that there exists a neighborhood $\mathfrak u$ of $\pi^{\ssup\beta}$ so that every $\tilde\pi^{\ssup\beta} \in \overline {\mathfrak u}$ fails to have an invariant density. Then, for every $\tilde\pi^{\ssup\beta} \in \overline {\mathfrak u}$ and any probability density $\phi\in L^1(\mathbb{P}_0)$, let $\mu_\beta$ be the corresponding element in ${\mathcal M}_1(\Omega_0\times\mathcal U_d)$ (i.e., $\mathrm{d}\mu_\beta(\omega,e)=\pi^{\ssup\beta}(\omega,e)\phi(\omega)\,\mathrm{d}\mathbb{P}_0(\omega)$). Since $(\tilde\pi^{\ssup\beta},\phi)\notin\mathcal E$, by Lemma \ref{onetoone} we have $\mu_\beta\notin {\mathcal M}_1^\star$. Then $\mathfrak I(\mu_\beta)=\infty$ by \eqref{Idef}. If $\mathfrak I$ were lower semicontinuous on ${\mathcal M}_1(\Omega_0\times \mathcal U_d)$, then $\mathfrak I=\mathfrak I^{\star\star}$ and, by Theorem \ref{thmlevel2},
\begin{equation}\label{eq:superexp}
P^{\pi,\omega}_0\big\{\mathfrak L_n \in \mathfrak n\big\}
\end{equation}
would decay super-exponentially for $\mathbb{P}_0$-almost every $\omega\in \Omega_0$, with $\mathfrak n$ being some neighborhood of $\mu_\beta$. However, since for every $\omega$ the relative entropy of $\pi^{\ssup\beta}(\omega,\cdot)$ w.r.t. $\pi_\omega(0,\cdot)$ is bounded below and above, the probability in \eqref{eq:superexp} decays at most exponentially, and we have a contradiction.

Assume now that there exists no such neighborhood $\mathfrak u$ of $\pi^{\ssup\beta}$. Let $\tilde\pi_n\to\pi^{\ssup\beta}$ be such that for all $n\in \mathbb{N}$, $\tilde\pi_n$ has an invariant density $\phi_n$ and $(\tilde\pi_n,\phi_n)\in\mathcal E$.
If $(\mu_n)_n$ is the sequence corresponding to $(\tilde\pi_n,\phi_n)$, since ${\mathcal M}_1(\Omega_0\times\mathcal U_d)$ is compact, $\mu_n\Rightarrow\mu_\beta$ weakly along a subsequence. However, by our choice of $\beta>\beta_u$, $(\pi^{\ssup\beta},\phi)\notin \mathcal E$ for every density $\phi$ and hence $\mu_\beta\notin {\mathcal M}_1^\star$ and $\mathfrak I(\mu_\beta)=\infty$. But
$$
\lim_{n\to\infty} \mathfrak I(\mu_n)= \int \mathrm{d}\mathbb{P}_0\, \phi(\omega) \sum_{e\in \mathcal U_d} \pi^{\ssup\beta}(\omega,e) \log \frac{\pi^{\ssup\beta}(\omega,e)}{\pi_\omega(0,e)},
$$
which is clearly finite. This proves that $\mathfrak I$ is not lower semicontinuous. \qed

{\bf{Acknowledgments.}} Most of the work in this paper was carried out when the second author was a visiting assistant professor at the Courant Institute of Mathematical Sciences, New York University, in the academic year 2015-2016, and its hospitality is gratefully acknowledged. The third author was supported by Grant-in-Aid for Research Activity Start-up (15H06311) and Grant-in-Aid for JSPS Fellows (16J04213). The authors would like to thank S. R. S. Varadhan for reading an early draft of the manuscript and many valuable suggestions. We also thank Firas Rassoul-Agha and Atilla Yilmaz for their comments on a preprint version of the present article. Finally, we would like to thank an anonymous referee for a very careful reading and many valuable suggestions that led to a more elaborate version of our manuscript.

\begin{thebibliography}{WWWW98}

\bibitem[AP96]{AP96} {\sc P. Antal} and {\sc A. Pisztora}. \hskip .11em plus .33em minus .07em On the chemical distance for supercritical Bernoulli percolation, \hskip .11em plus .33em minus .07em {\it Ann. Probab.} {\bf 24}, no. 2, 1036-1048, (1996)

\bibitem[AS12]{AS12} {\sc S. Armstrong} and {\sc P. Souganidis}, \hskip .11em plus .33em minus .07em Stochastic homogenization of Hamilton--Jacobi and degenerate Bellman equations in unbounded environments, \hskip .11em plus .33em minus .07em {\it Journal de Mathematiques Pures et Appliquees}, {\bf 97}, 5, 460-504, 2012

\bibitem[AE84]{AE84} {\sc J.-P. Aubin} and {\sc I. Ekeland}, \hskip .11em plus .33em minus .07em {\it Applied nonlinear analysis}, \hskip .11em plus .33em minus .07em{Pure Appl. Math. (N. Y.)}, A Wiley and Interscience Publication, John Wiley and Sons, Inc., New York, 1984.

\bibitem[AT14]{AT14} {\sc S. Armstrong} and {\sc H. Tran}, \hskip .11em plus .33em minus .07em {Stochastic homogenization of viscous Hamilton-Jacobi equations and application} \hskip .11em plus .33em minus .07em {\it Anal. PDE}, no. 8, (2014), 1969-2007

\bibitem[BD12]{BD12} {\sc V. Beffara} and {\sc H. Duminil-Copin}. \hskip .11em plus .33em minus .07em The self-dual point of the two-dimensional random-cluster model is critical for $q \ge 1$, \hskip .11em plus .33em minus .07em {\it Probab. Theory Related Fields}, {\bf 153} (2012), 511-542.

\bibitem[BB07]{BB07} {\sc N. Berger} and {\sc M. Biskup}. \hskip .11em plus .33em minus .07em Quenched invariance principle for random walk on percolation clusters, \hskip .11em plus .33em minus .07em {\it Probab. Theory Rel. Fields}, {\bf 137}, Issue 1-2, 83-120, (2007)

\bibitem[BGP03]{BGP02} {\sc N. Berger} and {\sc N. Gantert} and {\sc Y. Peres}. \hskip .11em plus .33em minus .07em The speed of biased random walk on percolation clusters, \hskip .11em plus .33em minus .07em {\it Probab. Theory Rel.
Fields}, {\bf 126}, no. 2, 221--242, (2003) \bibitem[B05]{B05} {\sc T. Bodineau}. \hskip .11em plus .33em minus .07em Slab percolation for the Ising model, \hskip .11em plus .33em minus .07em Probab. Theory and Relat. Fields 132 (2005) 83-118. \bibitem[BK89]{BK89} {\sc R. M. Burton} and {\sc M. Keane}, \hskip .11em plus .33em minus .07em {Density and uniqueness in percolation}, \hskip .11em plus .33em minus .07em {\it{Comm. Math. Phys.}}, {\bf 121}, no. 3, 501-505, (1989) { \bibitem[CP12]{CP12} {\sc J. Cerny} and {\sc S. Popov} \hskip .11em plus .33em minus .07em On the internal distance in the interlacement set \hskip .11em plus .33em minus .07em {\it Electron. J. Probab.} 17, no. 29, 1-25. } \bibitem[CGZ00]{CGZ00} {\sc F. Comets}, {\sc N. Gantert} and {\sc O. Zeitouni}, \hskip .11em plus .33em minus .07em{Quenched, annealed and functional large deviations for one dimensional random walks in random environments}, \hskip .11em plus .33em minus .07em{Prob. Theory Rel. Fields}, {\bf 118}, 65-114, (2000), \bibitem[CM04]{CM04} {\sc O. Couronn\'e} and {\sc R. J. Messikh}, \hskip .11em plus .33em minus .07em{ Surface order large deviations for 2D FK-percolation and Potts models}, \hskip .11em plus .33em minus .07em{Stoc. Proc. Appl.} 113 (2004) 81-99. \bibitem[DZ98]{DZ98} {\sc A. Dembo} and {\sc O. Zeitouni,} \hskip .11em plus .33em minus .07em \textit{Large Deviations Techniques and Applications,} \hskip .11em plus .33em minus .07em {Second edition}, Springer, New York (1998). \bibitem[DRS14]{DRS14} {\sc A. Drewitz}, {\sc B. R\'ath}, and {\sc A. Sapozhnikov}, \hskip .11em plus .33em minus .07em {On chemical distances and shape theorems in percolation models with long-range correlations.} \hskip .11em plus .33em minus .07em {J. Math. Phys.} {\bf 55} (2014), no. 8, 083307, 30 pp. \bibitem[FP16]{FP16} {\sc A. Fribergh and S. Popov} \hskip .11em plus .33em minus .07em Biased random walk on the interlacement set, \hskip .11em plus .33em minus .07em arXiv:1610.02979. \bibitem[GM04]{GM04} {\sc O. Garet} and {\sc R. Marchand}, \hskip .11em plus .33em minus .07em Asymptotic shape for the chemical distance and first-passage percolation on the infinite Bernoulli cluster, \hskip .11em plus .33em minus .07em ESAIM Probab. Stat. 8 (2004), 169-199. \bibitem[GRS14]{GRS14} {\sc N. Giorgiou}, {\sc F. Rassoul-Agha} and {\sc T. Sepp\"al\"ainen}, \hskip .11em plus .33em minus .07em{Variational formulas and cocycle solutions for directed polymer and percolation model}, {\hskip .11em plus .33em minus .07em {\it Preprint}}, (2014) \bibitem[GRSY13]{GRSY13} {\sc N. Giorgiou}, {\sc F. Rassoul-Agha}, {\sc T. Sepp\"al\"ainen} and {\sc A. Yilmaz}, \hskip .11em plus .33em minus .07em{Ratios of partition functions for the log-gamma polymer}, {\hskip .11em plus .33em minus .07em {\it Ann. Probab}}, to appear, (2013) \bibitem[GdH98]{GdH94} {\sc A. Greven} and {\sc F. den Hollander}, \hskip .11em plus .33em minus .07em{Large deviations for a random walk in a random environment}, \hskip .11em plus .33em minus .07em{{\it Ann. Prob.}}, {\bf 22}, 1381-1428, (1998) \bibitem[G99]{G99} {\sc G. R. Grimmett}, \hskip .11em plus .33em minus .07em{Percolation}, \hskip .11em plus .33em minus .07em{Springer-Verlag}, 2006. \bibitem[G06]{G06} {\sc G. R. Grimmett}, \hskip .11em plus .33em minus .07em{The random-cluster model}, \hskip .11em plus .33em minus .07em{Springer-Verlag}, 2006. \bibitem[GM90]{GM90} {\sc G. Grimmett} and {\sc J. 
Marstrand}, \hskip .11em plus .33em minus .07em{The supercritical phase of percolation is well behaved,} \hskip .11em plus .33em minus .07em{Proc. R. Soc. Lond. Ser. A} 430 (1990) 439-457. \bibitem[K75]{k75} {\sc H. Kesten}, \hskip .11em plus .33em minus .07em{Sums of stationary sequences can not grow slower than linearly}, \hskip .11em plus .33em minus .07em{Proc. AMS}, {\bf 49}, 205-211, (1975) \bibitem[KV86]{KV86} {\sc C. Kipnis} and {\sc S.R.S.~Varadhan}, \hskip .11em plus .33em minus .07em Limit theorem for additive functionals of reversible Markov chains and application to simple exclusions, \hskip .11em plus .33em minus .07em {\it Comm. Math. Phys.} {\bf{104}}, 1-19, (1986) \bibitem[KRV06]{KRV06} {\sc E. Kosygina}, {\sc F. Rezakhanlou} and {\sc S. R. S. Varadhan.} \hskip .11em plus .33em minus .07em{Stochastic homogenization of Hamilton-Jacobi-Bellmann equations}, \hskip .11em plus .33em minus .07em{\it Comm. Pure Appl. Math.}, {\bf 59}, 1489-1521, (2006) \bibitem[K85]{K85} {\sc S. M. Kozlov}, \hskip .11em plus .33em minus .07em{The averaging effect and walks in inhomogeneous environments.} \hskip .11em plus .33em minus .07em{\it Uspekhi Mat Nayuk}, (Russian math surveys), {\bf 40}, 73-145, (1985) \bibitem[K12]{K12} {\sc N. Kubota}, \hskip .11em plus .33em minus .07em {Large deviations for simple random walks on supercritical percolation clusters.} \hskip .11em plus .33em minus .07em{Kodai Mathematical Journal} {\bf 35}, Number 3, 560-575, (2012) \bibitem[LS86]{LS86} {\sc J. L. Lebowitz} and {\sc H. Saleur.} \hskip .11em plus .33em minus .07em{Percolation in strongly correlated systems.} \hskip .11em plus .33em minus .07em{Phys. A,} 138: pp. 194-205 (1986). \bibitem[LSS97]{LSS97} {\sc T. Liggett, R. Schonmann} and {\sc A. Stacey}, \hskip .11em plus .33em minus .07em {Domination by product measures, } \hskip .11em plus .33em minus .07em {Ann. Probab.} {\bf 25} (1997) 71-95. \bibitem[LS05]{LS05} {\sc P. L. Lions} and {P. Souganidis}, \hskip .11em plus .33em minus .07em{Homogenization for viscous Hamilton-Jacobi equations in stationary, ergodic media,} \hskip .11em plus .33em minus .07em{Comm. Partial Differential Equations}, {\bf 30}, (2005), no. 1-3, 335-376. \bibitem[LS10]{LS10} {\sc P. L. Lions} and {P. Souganidis}, \hskip .11em plus .33em minus .07em{Stochastic homogenization for Hamilton-Jacobi and viscous Hamilton-Jacobi equations with convex nonlinearities-revisited,} \hskip .11em plus .33em minus .07em{Comm. Mathematical Sciences}, {\bf 8}, (2010), no. 2, 627-637. \bibitem[M12]{M12} {\sc J-C. Mourrat}, \hskip .11em plus .33em minus .07em{Lyapunov exponents, shape theorems and large deviations for random walks in random potential,} \hskip .11em plus .33em minus .07em{\it ALEA Lat. Am. J. Probab. Math. Stat}. {\bf 9}, 165-211, (2012) \bibitem[MP07]{MP07} {\sc P. Matheiu} and {\sc A. Piatnitski}, \hskip .11em plus .33em minus .07em Quenched invariance principle for random walks on percolation clusters , \hskip .11em plus .33em minus .07em{Proceedings of the Royal Society A.}, 463, 2287-2307, (2007) \bibitem[PV81]{PV81} {\sc G. C. Papanicolaou} and {\sc S. R. S. Varadhan}, \hskip .11em plus .33em minus .07em{ Boundary value problems with rapidly os- cillating random coefficients}, In Random fields, Vol. I, II (Esztergom, 1979), volume 27 of Colloq. Math. Soc. Janos Bolyai, pages 835-873. North-Holland, Amsterdam, 1981 \bibitem[P89]{P89} {\sc K. Petersen}, \hskip .11em plus .33em minus .07em Ergodic Theory. \hskip .11em plus .33em minus .07em Corrected reprint of the 1983 original. 
Cambridge Studies in Advanced Mathematics, vol 2. Cambridge University Press, Cambridge, 1989. \bibitem[P96]{P96} {\sc A. Pisztora, } \hskip .11em plus .33em minus .07em{Surface order large deviations for Ising, Potts and percolation models, } \hskip .11em plus .33em minus .07em{Probab. Theory Relat. Fields} {\bf 104} (1996) 427-466. \bibitem[PRS15]{PRS15} {\sc E. B. Procaccia}, {\sc R. Rosenthal} and {\sc A. Sapozhnikov}, \hskip .11em plus .33em minus .07em{Quenched invariance principle for simple random walk on clusters in correlated percolation models}, \hskip .11em plus .33em minus .07em{Probab. Theory Relat. Fields} DOI 10.1007/s00440-015-0668-y. \bibitem[R70]{R70} {\sc R. T. Rockafellar}, \hskip .11em plus .33em minus .07em{Convex analysis,} \hskip .11em plus .33em minus .07em{Princeton University Press}, Princeton, N. J., 1997. \bibitem[R15]{R15} {\sc P.-F. Rodriguez}, \hskip .11em plus .33em minus .07em{A 0-1 law for the massive Gaussian free field}, \hskip .11em plus .33em minus .07em{preprint,} arXiv:1505.08169. \bibitem[R06]{R06} {\sc J. Rosenbluth} \hskip .11em plus .33em minus .07em Quenched large deviations for multidimensional random walks in a random environment: a variational formula, \hskip .11em plus .33em minus .07em{\it PhD thesis}, NYU, arxiv:0804.1444v1 \bibitem[RS11]{RS11} {\sc F. Rassoul-Agha} and {\sc T. Sepp\"al\"ainen}, \hskip .11em plus .33em minus .07em{Process-level quenched large deviations for random walk in a random environment}, \hskip .11em plus .33em minus .07em{\it Ann. Inst. H. Poincar\'e Prob. Statist.}, {\bf 47}, 214-242, (2011) \bibitem[RSY13]{RSY13} {\sc F. Rassoul-Agha}, {\sc T. Sepp\"al\"ainen} and {\sc A. Yilmaz}, \hskip .11em plus .33em minus .07em{Quenched free energy and large deviations for random walk in random potential}, \hskip .11em plus .33em minus .07em{\it Comm. Pure and Appl. Math}, {\bf 66}, 202-244, (2013) \bibitem[RSY14]{RSY14} {\sc F. Rassoul-Agha}, {\sc T. Sepp\"al\"ainen} and {\sc A. Yilmaz}, \hskip .11em plus .33em minus .07em{Variational formulas and disorder regimes of random walks in random potential}, \hskip .11em plus .33em minus .07em{\it Bernoulli}, {\bf 23}, 1, (2017), 405-431 \bibitem[RS13]{RS13} {\sc P.-F. Rodriguez and A.-S. Sznitman} \hskip .11em plus .33em minus .07em{Phase transition and level-set percolation for the Gaussian free field}, \hskip .11em plus .33em minus .07em{Commun. Math. Phys.} 320, 571-601 (2013) \bibitem[S07]{Sh07} {\sc S. Shefield} \hskip .11em plus .33em minus .07em{Gaussian free fields for mathematicians}, \hskip .11em plus .33em minus .07em{Probab. Theory and Related Fields} 139 (2007) 521-541. \bibitem[SS04]{SS04} {\sc V. Sidoravicius} and {\sc A. S. Sznitman}, \hskip .11em plus .33em minus .07em Quenched invariance principles for walks on clusters of percolation or among random conductances, \hskip .11em plus .33em minus .07em{Probability theory and related fields.}, 129, 219-244, (2004) \bibitem[S94]{S94} {\sc A. S. Sznitman}, \hskip .11em plus .33em minus .07em {Shape theorem Lyapunov exponents and large deviations for Brownian motion in a Poissonian potential}, \hskip .11em plus .33em minus .07em{\it Comm. Pure. Appl. Math.}, {\bf 47}, 1655-1688, (1994) \bibitem[S03]{Sz02} {\sc A. S. Sznitman}, \hskip .11em plus .33em minus .07em {On the anisotropic walk on the supercritical percolation cluster}, \hskip .11em plus .33em minus .07em{\it Comm. Math. Phys.}, 240, Issue 1-2, 123-148, (2003) \bibitem[S10]{Sz10} {\sc A. S. 
Sznitman}, \hskip .11em plus .33em minus .07em{Vacant set of random interlacements and percolation}, \hskip .11em plus .33em minus .07em{Ann. Math.}, 171 (2), 2039-2087, (2010).

\bibitem[T09]{T09} {\sc A. Teixeira}, \hskip .11em plus .33em minus .07em{Interlacement percolation on transient weighted graphs}, \hskip .11em plus .33em minus .07em{Elec. J. Probab.} 14 (2009), no. 54, 1604-1628.

\bibitem[T09']{T09aap} {\sc A. Teixeira}, \hskip .11em plus .33em minus .07em{On the uniqueness of the infinite cluster of the vacant set of random interlacements}, \hskip .11em plus .33em minus .07em{Adv. Appl. Prob.} 19 (2009), 454-466.

\bibitem[TW11]{TW11} {\sc A. Teixeira and D. Windisch}, \hskip .11em plus .33em minus .07em{On the fragmentation of a torus by random walk.} \hskip .11em plus .33em minus .07em{Comm. Pure Appl. Math.} 64 (12), 1599-1646.

\bibitem[V03]{V03} {\sc S. R. S. Varadhan}, \hskip .11em plus .33em minus .07em{Large deviations for random walk in random environment}, \hskip .11em plus .33em minus .07em{Comm. Pure Appl. Math}, {\bf 56}, Issue 8, 1222-1245, (2003)

\bibitem[Y08]{Y08} {\sc A. Yilmaz}, \hskip .11em plus .33em minus .07em {Quenched large deviations for random walk in random environment}, \hskip .11em plus .33em minus .07em {\it Comm. Pure Appl. Math}, {\bf 62}, Issue 8, 1033-1075, (2009)

\bibitem[Z98]{Z98} {\sc M. Zerner}, \hskip .11em plus .33em minus .07em{Lyapunov exponents and quenched large deviations for multidimensional random walks in random environment}, \hskip .11em plus .33em minus .07em{\it Ann. Prob.}, {\bf{26}}, No. 4, 1446-1476, (1998)

\bibitem[Z98-I]{Z98-I} {\sc M. Zerner}, \hskip .11em plus .33em minus .07em Directional decay of the Green's function for a random nonnegative potential on $\mathbb{Z}^d$. \hskip .11em plus .33em minus .07em {\it Ann. Appl. Probab.} 8 (1), 246-280 (1998).

\end{thebibliography}

\end{document}
\begin{document} \title{On the Baum--Connes conjecture with coefficients for linear algebraic groups} \author{Maarten Solleveld} \address{IMAPP, Radboud Universiteit Nijmegen, Heyendaalseweg 135, 6525AJ Nijmegen, the Netherlands} \email{[email protected]} \date{\today} \thanks{ The author is supported by a NWO Vidi grant "A Hecke algebra approach to the local Langlands correspondence" (nr. 639.032.528).} \subjclass[2010]{46L80, 20G99, 19K99} \maketitle \begin{abstract} We prove the Baum--Connes conjecture with arbitrary coefficients for some classes of groups: (1) Linear algebraic groups over a non-archimedean local field. (2) Linear algebraic groups over the adeles of a global field $k$, provided that at every archimedean place of $k$ the associated Lie group is amenable. (3) All closed subgroups of the above groups. This includes linear algebraic groups over global fields - with the same condition as in (2). \texttt{Proof incomplete, problems in Lemma 2.2.a!} \end{abstract} \tableofcontents \section*{Introduction} Let $G$ be a locally compact group, always assumed to be Hausdorff and second countable. The Baum--Connes conjecture (BC) asserts that two K-groups associated to $G$ are naturally isomorphic. The right hand side, or analytic side, of the conjecture is the K-theory of the reduced $C^*$-algebra $C_r^* (G)$. The left hand side (or topological side) $K_*^{\rm top} (G)$ is the equivariant K-homology of a universal space for proper $G$-actions. Baum and Connes \cite{BaCo} conjectured that the assembly map \[ \mu : K_*^{\rm top} (G) \to K_* (C_r^* (G)) \] is an isomorphism. More general versions of the conjecture include coefficient algebras. For a given $G$-$C^*$-algebra $B$ one can replace $C_r^* (G)$ by the reduced crossed product $C_r^* (G,B)$. The topological side is given in \cite[Definition 9.1]{BCH} in terms of Kasparov KK-theory \cite{Kas}. Namely, if $X$ is a universal space for proper $G$-actions, then \begin{equation}\label{eq:1.4} K_*^{\rm top} (G,B) = K_*^G (X,B) = \lim_{X' \subset X ,\, X' / G \text{ compact}} KK_*^G (C_0 (X'), B) . \end{equation} The assembly map $\mu$ admits a natural generalization to this context \cite[(9.2)]{BCH}. The following is known as the Baum--Connes conjecture with coefficients (BCC). \begin{conjintro}\label{conj:1} Let $G$ be a second countable locally compact group and let $B$ be any $G$-$C^*$-algebra. Then the assembly map \[ \mu^B : K_*^G (X,B) \to K_* (C_r^* (G,B)) \] is an isomorphism (of abelian groups). \end{conjintro} In case Conjecture \ref{conj:1} holds whenever $B$ is the algebra of compact operators on some separable Hilbert space, we say that $G$ satisfies the twisted Baum--Connes conjecture. For an introduction to BC(C) for discrete groups we refer to the very readable booklet \cite{Val}. Major advantages of Conjecture \ref{conj:1} over the ordinary BC (with coefficients $\mathbb C$) are that BCC is inherited by closed subgroups \cite{ChEc2} and by extensions with amenable groups \cite{CEO}. In view of these properties (which make ample use of Kasparov's KK-theory) we sometimes assume in our proofs that our coefficient algebras are separable. But this is actually no restriction, because for exact groups BC with separable coefficients implies BC with arbitrary coefficients (Corollary \ref{cor:A.2}). The main goal of this paper is to verify Conjecture \ref{conj:1} for groups of the form $\mathcal G (R)$, where $\mathcal G$ is a linear algebraic group and $R$ is some topological ring over which $\mathcal G$ is defined.
Some (but not too many) of these groups are compact or have the Haagerup property, see \cite[\S 1.4]{Jul2}. Conjecture \ref{conj:1} is already known in the following cases: \begin{itemize} \item compact groups \cite{Jul}, \item amenable groups and more generally groups with the Haagerup property \cite{HiKa}. \end{itemize} On the other hand, several counterexamples to BCC have been worked out, see \cite{HLS}. All these counterexamples involve non-exact groups \cite{BGW}, so it might be more prudent to state Conjecture \ref{conj:1} only for exact groups. Fortunately all the groups we encounter in this paper are exact (that follows from \cite{KiWa}), so this issue need not bother us. The ordinary Baum--Connes conjecture is known for larger classes of groups than BCC. In particular it has been shown for almost connected locally compact groups \cite{CEN}, for linear algebraic groups over $p$-adic fields \cite{CEN} and for reductive groups over local fields \cite{Laf}. We point out that for some linear algebraic groups over local fields of positive characteristic BC was open (but see below). We refer to \cite{ELN} for a detailed investigation of such groups. Now we come to our main results. \begin{thmintro}\label{thm:3} Let $F$ be a non-archimedean local field and let $\mathcal G$ be a linear algebraic group defined over $F$. Endow $\mathcal G (F)$ with the topology coming from the metric of $F$. Then $\mathcal G (F)$ and all its closed subgroups satisfy BCC. \end{thmintro} For all groups in Theorem \ref{thm:3}, the injectivity of the assembly map $\mu^B$ is due to Kasparov and Skandalis \cite{Kas,KaSk}. The surjectivity of the analogous map in the context of Banach KK-theory was shown by Lafforgue \cite{Laf}. In particular there exists a certain unconditional completion $\mathcal S_t (\mathcal G (F),B)$ of $C_c (\mathcal G (F),B)$ such that \begin{equation}\label{eq:L1} \mu^B_{\mathcal S_t (\mathcal G (F))} : K_*^{\mathcal G (F)} (X,B) \to K_* (\mathcal S_t (\mathcal G (F),B)) \end{equation} is a bijection. See Section \ref{sec:1} for a discussion of these methods. Our new idea is to consider algebras of rapidly decreasing functions from $\mathcal G (F)$ to a coefficient algebra $B$. We require that these functions decrease (in the $L^2$-sense) more rapidly than $\ell^n$ for all $n \in \mathbb Z$, where $\ell : \mathcal G (F) \to \mathbb R$ is a length function coming from the action of $\mathcal G (F)$ on its Bruhat--Tits building $X$. The crucial point is that, with a generalization of results of Vign\'eras \cite{Vig}, one can find such a Fr\'echet algebra which is dense and holomorphically closed in $C_r^* (\mathcal G (F),B)$ (Theorem \ref{thm:2.4}), and contains many elements of $\mathcal S_t (\mathcal G (F),B)$. Together with \eqref{eq:L1} this enables us to establish the surjectivity of $\mu^B$. \\ We also consider linear algebraic groups $\mathcal G$ defined over a global field $k$ (with the discrete topology). Recall that the adelic group $\mathcal G (\mathbf A_k)$ contains $\mathcal G (k)$ as a discrete subgroup. \begin{thmintro}\label{thm:5} (see Theorem \ref{thm:4.3}) \\ Suppose that for every infinite place $v$ of $k$ the Lie group $\mathcal G (k_v)$ is amenable. Then the groups $\mathcal G (\mathbf A_k)$ and $\mathcal G (k)$, as well as all their closed subgroups, satisfy the Baum--Connes conjecture with arbitrary coefficients. \end{thmintro} Of course there are a lot of interesting closed subgroups of $\mathcal G (\mathbf A_k)$ or $\mathcal G (k)$, far too many to list here.
Suffice it to refer to \cite{PlRa} and the references therein. It would be very nice if our method to prove Theorem \ref{thm:3} could be adjusted to a real reductive algebraic group $G$. We tried this, but so far it did not work out. One problem is that rapid decay in an $L^2$-sense does in general not imply rapid decay in an $L^\infty$-sense. This makes it difficult to fit an algebra of rapidly decreasing functions on $G$ in a unconditional completion (in the sense of \cite{Laf}). As far as arbitrary algebraic groups over $\mathbb R$ are involved, our current methods do suffice to show that $\mathcal G (\mathbf A_k)$ satisfies the twisted Baum--Connes conjecture (Theorem \ref{thm:4.5}). For this no condition at the archimedean places of $k$ (as in Theorem \ref{thm:5}) is needed. Finally, we mention one well-known consequence of the surjectivity of the assembly map. Kadison and Kaplansky conjectured that, for a torsion--free discrete group $\Gamma$, the reduced $C^*$-algebra $C_r^* (\Gamma)$ contains no non-trivial idempotents. As explained in \cite[\S 7]{BCH} and \cite[\S 6.3]{Val}, this can be deduced from the surjectivity of \[ \mu : K_0^{\rm top} (\Gamma) \to K_0 (C_r^* (\Gamma)) . \] From Theorems \ref{thm:3} and \ref{thm:5} we get: \begin{thmintro}\label{thm:6} Let $G$ be a group as in Theorem \ref{thm:3} or Theorem \ref{thm:5}. Let $\Gamma$ be a torsion-free subgroup of $G$ which is discrete in the subspace topology. Then $C_r^* (\Gamma)$ contains no idempotents other than 0 and 1. \end{thmintro} {\bf Acknowledgments}. We thank Kang Li, Siegfried Echterhoff and Vincent Lafforgue for their helpful comments and discussions. \section{The methods of Kasparov and Lafforgue} \label{sec:1} In this section we recall important previous results about the Baum--Connes conjecture for groups acting on suitable metric spaces. Let $G$ be a locally compact group, always tacitly assumed to be Hausdorff and second countable. We first suppose that $G$ acts properly and isometrically on an affine building $X$ in the sense of \cite{BrTi1,Tit}. These assumptions include that the action is continuous (as usual for topological group actions). Bruhat and Tits showed that $X$ is a CAT(0)-space \cite[3.2.1]{BrTi1}, that it has unique geodesics \cite[2.5.13]{BrTi1} and that every compact subgroup of $G$ fixes a point of this affine building \cite[Proposition 3.2.4]{BrTi1}. By \cite[Proposition 1.8]{BCH} these properties guarantee that $X$ is a universal space for proper $G$-actions. In particular the domain of the Baum--Connes assembly map becomes $K_*^{\rm top} (G) = K_*^G (X)$. In this section (but not after that) we also consider complete simply connected Riemannian manifolds with nonpositive sectional curvature which is bounded from below and has bounded covariant derivative. When a locally compact group $G$ acts properly and isometrically on such a space $X$, the same arguments as for affine buildings ensure that $X$ is a universal space for proper $G$-actions. Then \eqref{eq:1.4} is again valid. Typical examples are symmetric spaces associated to reductive Lie groups. Another kind of metric spaces to which the results of this section apply are called "bolic" \cite{KaSk2}. More precisely, we fix $\delta \in \mathbb R_{>0}$ and we let $(X,d)$ be a metric space such that: \begin{itemize} \item It is uniformly locally finite. \item $(X,d)$ is weakly $\delta$-geodesic \cite[Definition 2.1]{KaSk2}. 
\item $(X,d)$ satisfies \cite[(B2')]{KaSk2} for the given $\delta$ and satisfies condition \cite[(B1)]{KaSk2} for all $\delta' > 0$. \end{itemize} By the local finiteness, $X$ is discrete as a topological space. If a locally compact group $G$ acts properly and isometrically on $X$, then $X$ cannot be a universal example for proper $G$-actions (unless $G$ is compact), because it is discrete. From now on we assume that $G$ acts properly and isometrically on a space $X$ of one of the above three kinds. With the dual Dirac method Kasparov and Skandalis \cite{KaSk} constructed an extremely useful element $\gamma \in KK^G_0 (\mathbb C,\mathbb C)$. Via the descent map \[ KK^G_0 (\mathbb C,\mathbb C) \to KK_0 (C_r^* (G),C_r^* (G)) \] and the product in KK-theory, $\gamma$ gives rise to an endomorphism of $K_* (C_r^* (G))$. More generally, for every $\sigma$-unital $G$-$C^*$-algebra $B$, $\gamma$ determines an element of $\mathrm{End} \big( K_* (C_r^* (G,B)) \big)$ \cite[\S 3.12]{Kas}. \begin{thm}\label{thm:1.1} \textup{\cite{Kas,KaSk,KaSk2}} \\ Let $G$ and $X$ be as above. \enuma{ \item $\gamma$ is idempotent in $KK_0^G (\mathbb C,\mathbb C)$. \item For every $\sigma$-unital $G$-$C^*$-algebra $B$, the assembly map \[ \mu^B : K_*^G (X,B) \to K_* (C_r^* (G,B)) \] is injective and has image $\gamma \cdot K_* (C_r^* (G,B))$. } \end{thm} Thus BC for $G$ with coefficients $B$ becomes equivalent to: \begin{equation}\label{eq:1.1} \text{the image of } \gamma \text{ in } \mathrm{End} \big( K_* (C_r^* (G,B)) \big) \text{ is the identity.} \end{equation} Obviously \eqref{eq:1.1} would be implied by $\gamma = 1 \in KK_0^G (\mathbb C,\mathbb C)$. Although that statement has indeed been proven for some groups acting on CAT(0)-spaces, it is known to be false for many others. This is where Lafforgue's work \cite{Laf} comes in. He developed a KK-theory for Banach algebras, which admits a natural transformation from Kasparov's KK-theory. The advantage is that the image of $\gamma$ in $KK_{Ban}^G (\mathbb C,\mathbb C)$ can be 1 even if $\gamma \neq 1$ in $KK_0^G (\mathbb C,\mathbb C)$. On the other hand, it is not known whether there exists a natural descent map from $KK_{Ban}^G (\mathbb C,\mathbb C)$ to $KK_{Ban}(C_r^* (G),C_r^* (G))$. For such a descent, one rather has to replace $C_r^* (G)$ by suitable Banach algebra completions of $C_c (G)$. \begin{defn}\label{def:1.2} A norm on $C_c (G)$ (or an a completion thereof) is unconditional if every $f \in C_c (G)$ has the same norm as its absolute value $|f|$. A Banach algebra $\mathcal A (G)$ containing $C_c (G)$ as dense subalgebra is said to be an unconditional completion (for $G$) if its norm is unconditional. \end{defn} For every unconditional completion $\mathcal A (G)$ and every $G$-$C^*$-algebra $B$, there exists a version $\mathcal A (G,B)$ of the crossed product of $B$ with $G$. Furthermore Lafforgue exhibited a Banach algebra version \[ \mu_{\mathcal A (G)}^B : K_*^{\rm top} (G,B) \to K_* (\mathcal A (G,B)) \] of the assembly map. \begin{thm}\label{thm:1.3} \textup{\cite{Laf}} \\ Suppose that a locally compact second countable group $G$ acts properly and isometrically on a space $X$ of one of the above three kinds. For every $G$-$C^*$-algebra $B$, and every unconditional completion $\mathcal A (G)$: \enuma{ \item The image of $\gamma$ in $\mathrm{End}\big( K_* (\mathcal A (G,B)) \big)$ is 1. \item $\mu_{\mathcal A (G)}^B : K_*^{\rm top} (G,B) \to K_* (\mathcal A (G,B))$ is a bijection. 
} \end{thm} The archetypical example of an unconditional completion is $L^1 (G)$. In that case Theorem \ref{thm:1.3} says that \begin{equation}\label{eq:1.2} \mu_{L^ 1 (G)}^B : K_*^{\rm top} (G,B) \to K_* (L^1 (G,B)) \end{equation} is an isomorphism for all $G,B$ as above. In general the bijectivity of \eqref{eq:1.2} is known as the Bost conjecture for $G$ (with coefficients $B$). The difference between $K_* (C_r^* (G,B))$ and $K_* (\mathcal A (G,B))$ is an analytic issue, which is our main concern in this paper. \begin{prop}\label{prop:1.4} \textup{\cite[Proposition 1.6.4]{Laf}} \\ In the setting of Theorem \ref{thm:1.3}, suppose that \[ \norm{f}_{\mathcal A (G)} = \norm{f^*}_{\mathcal A (G)} \geq \norm{f}_{C_r^* (G)} \qquad \forall f \in C_c (G) . \] Then $C_c (G,B) \to C_r^* (G,B)$ extends to a Banach algebra homomorphism $\mathcal A (G,B) \to C_r^* (G,B)$ and (when $B$ is $\sigma$-unital) the induced map \[ K_* (\mathcal A (G,B)) \to K_* (C_r^* (G,B)) \] commutes with multiplication by $\gamma$. \end{prop} Proposition \ref{prop:1.4} and Theorem \ref{thm:1.1} show that in $K_* (C_r^* (G,B))$ the images of $\mu^B$, of $\gamma$ and of $K_* (\mathcal A (G,B))$ coincide. That leads to a criterion for BC with coefficients for $G$: \begin{cor}\label{cor:1.5} In the setting of Theorem \ref{thm:1.3}, suppose that $B$ is $\sigma$-unital and that for every class $p \in K_* (C_r^* (G,B))$ there exists an unconditional completion $\mathcal A (G)$ such that: \begin{itemize} \item $\norm{f}_{\mathcal A (G)} = \norm{f^*}_{\mathcal A (G)} \geq \norm{f}_{C_r^* (G)} \qquad \forall f \in C_c (G)$, \item $p$ lies in the image of $K_* (\mathcal A (G,B)) \to K_* (C_r^* (G,B))$. \end{itemize} Then $\mu^B : K_*^{\rm top} (G,B) \to K_* (C_r^* (G,B))$ is a bijection. \end{cor} Corollary \ref{cor:1.5} applies in particular when \begin{equation}\label{eq:1.3} K_* (\mathcal A (G,B)) \to K_* (C_r^* (G,B)) \end{equation} can be proven to be surjective. In that case the comparison with $K_*^G (X,B)$ shows that \eqref{eq:1.3} is bijective. It is not so easy to establish directly that \eqref{eq:1.3} is surjective. When $\mathcal A (G,B)$ would be closed under the holomorphic functional calculus of $C_r^* (G,B)$, that would follow from the density theorem in K-theory \cite[Th\'eor\`eme A.2.1]{Bos}. Unfortunately, that seems to be rare for general $B$. Let $\mathcal G$ be a reductive algebraic group over a local field $F$ and endow $G = \mathcal G (F)$ with the topology coming from the metric of $F$. Lafforgue \cite{Laf} constructed completions of $C_c (G,B)$ with several relevant properties. Let $\Xi : G \to \mathbb R$ be Harish-Chandra's spherical function and let $\ell : G \to \mathbb R_{\geq 0}$ be a length function associated to the action of $G$ on either its symmetric space ($F$ archimedean) or its Bruhat--Tits building ($F$ non-archimedean). For $t \in \mathbb R$ we define an unconditional norm on $C_c (G,B)$ by \[ \norm{f}_{\mathcal S_t (G,B)} = \sup_{g \in G} \norm{f(g)}_B \Xi (g)^{-1} (1 + \ell (g))^t . \] Let $\mathcal S_t (G,B)$ be the completion of $C_c (G,B)$ with respect to the above norm and abbreviate $\mathcal S_t (G) = \mathcal S_t (G,\mathbb C)$. \begin{prop}\label{prop:3.5} There exists $r_G \in \mathbb N$ such that for all $t > r_G$: \enuma{ \item $\mathcal S_t (G)$ is an unconditional completion of $C_c (G)$; \item $\mathcal S_t (G,B)$ is a Banach algebra; \item $\mathcal S_t (G,B)$ is contained in $C_r^* (G,B)$ and the inclusion map is continuous. 
} \end{prop} \begin{proof} (a) According to \cite[Lemme II.1.5]{Wal} (for $F$ non-archimedean), \cite[p. 279]{HC} (for $F$ archimedean, $\mathcal G$ semisimple) and \cite[Lemma 27]{Vig} ($F$ archimedean) there exists $r_G \in \mathbb N$ such that \[ \int_G \Xi (g)^2 (1 + \ell (g))^{-t} \textup{d}\mu (g) < \infty \quad \text{for all } t > r_G . \] Now \cite[Proposition 4.4.4]{Laf} says that $\mathcal S_t (G)$ is a Banach algebra for $t > r_G$. Since $\norm{f}_{\mathcal S_t (G)}$ only depends on $|f|$, the norm is unconditional, so $\mathcal S_t (G)$ is indeed an unconditional completion of $C_c (G)$.\\ (b) This follows from part (a) and \cite[Proposition 1.5.1]{Laf}.\\ (c) See \cite[Propositions 4.5.2 and 4.8.2]{Laf}. \end{proof} For large $t$, Corollary \ref{cor:1.5} applies to the algebra $\mathcal S_t (G)$. \begin{thm}\label{thm:1.6} \textup{\cite{Laf}} \\ Let $\mathcal G$ be a reductive algebraic group defined over a local field $F$ and write $G = \mathcal G (F)$. For $t > r_G$, $\mathcal S_t (G)$ is an unconditional completion of $C_c (G)$ which is holomorphically closed in $C_r^* (G)$. As a consequence, the Baum--Connes conjecture (with trivial coefficients) holds for $G$. \end{thm} The properties of reductive groups which Lafforgue uses \cite[\S 4.1]{Laf} are quite specific; they are not available for most other groups. Furthermore Theorem \ref{thm:1.6} is not known with nontrivial coefficient algebras. In fact, in \cite[p. 93]{Laf} some obstructions are mentioned. \section{Spaces of rapidly decreasing functions} \label{sec:2} The goal of this section is to make full use of results of Vign\'eras, which produce holomorphically closed subalgebras of $C^*$-algebras. Let $G$ be a locally compact Hausdorff group with a Haar measure $\mu$. Let $B$ be any $G$-$C^*$-algebra. Recall that $C_c (G,B)$ acts on the Hilbert $C^*$-module $L^2 (G,B)$, by a combination of the convolution product of $G$ and the product of $B$. The reduced crossed product $C_r^* (G,B)$ is the closure of $C_c (G,B)$ with respect to the operator norm from $\mathcal B (L^2 (G,B))$. For $a \in C_r^* (G,B)$, we denote the corresponding bounded operator on $L^2 (G,B)$ by $\lambda (a)$. Let $\ell : G \to \mathbb R_{\geq 0}$ be a Borel-measurable length function with \begin{equation}\label{eq:2.1} \ell (g) = \ell (g^{-1}) \quad \text{for all } g \in G . \end{equation} We note that pointwise multiplication by $\sigma := 1 + \ell$ is an unbounded operator on $L^2 (G,B)$. For $A \in \mathcal B (L^2 (G,B))$ we define a (possibly unbounded) operator \[ D(A) : v \mapsto \sigma A(v) - A (\sigma v) . \] Notice that $D(A) = [\sigma,A]$, so that $D$ is a derivation. Consider the vector space \[ V_\ell^\infty (G,B) = \{ a \in C_r^* (G,B) : D^n (\lambda (a)) \in \mathcal B (L^2 (G,B)) \; \forall n \in \mathbb Z_{\geq 0} \} , \] with the topology given by the seminorms \[ a \mapsto \norm{D^n (\lambda (a))}_{\mathcal B (L^2 (G,B))} \quad n \in \mathbb Z_{\geq 0} . \] We will now formulate a version of the results of \cite[\S 7]{Vig} for $V_\ell^\infty (G,B)$. We note that, although Vign\'eras works exclusively with coefficients $\mathbb C$, her arguments are equally valid with other coefficient $G$-$C^*$-algebras. \begin{thm}\label{thm:2.4} Let $G$ be a locally compact group and let $B$ be a $G$-$C^*$-algebra. \enuma{ \item $V_\ell^\infty (G,B)$ is a Fr\'echet algebra containing $C_c (G,B)$. \item The inclusion $V_\ell^\infty (G,B) \to C_r^* (G,B)$ is continuous, with dense image. \item The set of invertible elements in the unitization $V_\ell^\infty (G,B)^+$ is open, and inversion is a continuous map from this set to itself. 
\item An element of $V_\ell^\infty (G,B)^+$ is invertible if and only if its image in $C_r^* (G,B)^+$ is invertible. } \end{thm} \begin{proof} (a) By \cite[Lemma 15]{Vig} $V_\ell^\infty (G,B)$ is a Fr\'echet space. Since $D$ is a derivation, $V_\ell^\infty (G,B)$ is closed under the multiplication of $C_r^* (G,B)$ and multiplication is jointly continuous for the topology of $V_\ell^\infty (G,B)$. Suppose that $a \in C_c (G,B)$. For $v \in L^2 (G,B)$ and $g' \in G$ we compute \begin{align} \nonumber (D (\lambda (a)) v)(g') & = \sigma (g') \int_G a(g) g (v (g^{-1} g')) \textup{d}\mu (g) - \int_G a(g) g (v (g^{-1} g')) \sigma (g^{-1}g') \textup{d}\mu (g) \\ \label{eq:2.2} & = \int_G (\sigma (g') - \sigma (g^{-1} g')) a(g) g (v (g^{-1} g')) \textup{d}\mu (g) \end{align} As $\ell$ is a length function and by \eqref{eq:2.1}: \[ |\sigma (g') - \sigma (g^{-1} g')| = | \ell (g') - \ell (g^{-1} g')| \leq \ell (g^{-1}) = \ell (g) \] By \cite[Theorem 1.2.11]{Sch} $\ell$ is bounded on the support of $a$, say by $C_a$. Then \eqref{eq:2.2} entails \[ \norm{D(\lambda (a))(v)}_{L^2 (G,B)} \leq C_a \norm{a * v}_{L^2 (G,B)} \leq C_a \norm{\lambda (a)}_{\mathcal B (L^2 (G,B))} \norm{v}_{L^2 (G,B)} . \] With induction we see that $D^n (\lambda (a)) \in \mathcal B (L^2 (G,B))$ for all $n \in \mathbb Z_{\geq 0}$, so $C_c (G,B)$ is contained in $V_\ell^\infty (G,B)$. \\ (b) As $\norm{D^0 (\lambda (a))}_{\mathcal B (L^2 (G,B))} = \norm{a}_{C_r^* (G,B)}$, the inclusion is continuous. Its image contains $C_c (G,B)$, so is dense in $C_r^* (G,B)$.\\ (c) Any invertible element of $V_\ell^\infty (G,B)^+$ is of the form $z + a$ with $z \in \mathbb C^\times$ and $a \in V_\ell^\infty (G,B)$. As multiplication by $z^{-1} \in \mathbb C^\times$ is certainly continuous, it suffices to consider the subset $1 + V_\ell^\infty (G,B)$ of $V_\ell^\infty (G,B)^+$. For $a \in V_\ell^\infty (G,B)$ with $\norm{a}_{C_r^* (G,B)} < 1$, \cite[Lemma 16]{Vig} shows that \begin{equation}\label{eq:2.4} 1 - a \text{ is invertible in } V_\ell^\infty (G,B)^+ . \end{equation} (See \cite[Theorem 5.12]{SolThesis} for an analogous argument in a different context.) The same calculation entails that $a \mapsto (1 - a)^{-1}$ is continuous around 0 in $V_\ell^\infty (G,B)$.\\ (d) This follows from parts (a),(b),(c), \eqref{eq:2.4} and \cite[Lemma 17]{Vig}. \end{proof} Suppose that $K \subset G$ is a compact open subgroup such that $\ell$ is $K$-biinvariant. Let $e_K \in L^2 (G)$ be $\mu (K)^{-1}$ times the indicator function of $K$. This is a projection in $C_r^* (G)$ and in the multiplier algebra of $C_r^* (G,B)$. Right multiplication by $e_K$ just means averaging a measurable function $f : G \to B$ over $K$, making it right-$K$-invariant. Left multiplication of $f$ by $e_K$ can described explicitly as \begin{equation}\label{eq:2.12} (e_K * f)(g) = \mu (K)^{-1} \int_K k (f(k^{-1}g)) \textup{d}\mu (k) . \end{equation} If $f = e_K * f$, then we say that $f$ is twisted left-$K$-invariant. Equivalently, $f(kg) = k (f(g))$ for all $g \in G, k \in K$. For $r \in \mathbb R$ we define a norm on $C_c (G,B)$ by \[ \nu_r (f) = \Big( \int_G \norm{f(g)}_B^2 \sigma (g)^{2r} \textup{d}\mu (g) \Big)^{1/2} . \] \texttt{Note: $\nu_0 (f)$ is not the norm of $f$ in the Hilbert $C^*$-module $L^2 (G,B)$,\\ that would be \[ \norm{f}_{L^2 (G,B)} = \norm{ \int_G f(g) f(g)^* \textup{d} \mu (g) }_B^{1/2} \] } Let $S_\ell^\infty (G,B)$ be the completion of $C_c (G,B)$ with respect to the family of norms $\nu_r \; (r \in \mathbb Z)$. 
Following \cite{Vig}, we call it the space of $\ell$-rapidly decreasing functions in $L^2 (G,B)$. See \cite{Sch} for many similar dense Fr\'echet subspaces of $L^2 (G,B)$. Let $e_K S_\ell^\infty (G, B) e_K$ be the subspace of $S_\ell^\infty (G,B)$ consisting of right-$K$-invariant, twisted left $K$-invariant maps. Equivalently, $e_K S_\ell^\infty (G, B) e_K$ is the closure of\\ $e_K C_c (G, B) e_K$ with respect to the norms $\nu_r \; (r \in \mathbb Z)$. Write $e_K C_r^* (G, B) e_K$ and $e_K V_\ell^\infty (G, B) e_K$ for the subalgebras of right-$K$-invariant, twisted left-$K$-invariants element in, respectively, $C_r^* (G,B)$ and $V_\ell^\infty (G,B)$. Notice that $e_K$ is the identity element of the multiplier algebra of $e_K C_r^* (G, B) e_K$. \begin{lem}\label{lem:2.6} \enuma{ \item $e_K V_\ell^\infty (G, B) e_K \subset e_K S_\ell^\infty (G, B) e_K$. \texttt{Probably not true!} \item $e_K V_\ell^\infty (G, B) e_K$ is closed under the holomorphic functional calculus of\\ $e_K C_r^* (G, B) e_K$. } \end{lem} \begin{proof} (a) The unitization $B^+$ of $B$ is also a $G$-$C^*$-algebra, in a natural way. Theorem \ref{thm:2.4} also applies with $B^+$ instead of $B$. The identity element of $e_K C_r^* (G, B^+) e_K$ is $e_K$. For $a \in e_K V_\ell^\infty (G, B) e_K \subset e_K V_\ell^\infty (G, B^+) e_K$: \[ D(\lambda (a)) (e_K) = \sigma (a * e_K) - a * (\sigma e_K) = \sigma a - a = \ell a . \] With induction we obtain $D^n (\lambda (a)) (e_K) = \ell^n a$. By assumption\\ $D^n (\lambda (a)) \in \mathcal B (L^2 (G,B^+))$, so \[ \ell^n a \in L^2 (G,B^+) \quad \forall n \in \mathbb Z_{\geq 0}. \] \texttt{The norm of $L^2 (G,B^+)$ differs from $\nu_0$, it is not clear whether the above implies that $\nu_r (a)$ is finite!} This says that $\nu_r (a) < \infty$ for all $r \in \mathbb Z$, so \[ a \in S_\ell^\infty (G,B^+) \cap e_K V_\ell^\infty (G, B) e_K \subset e_K S_\ell^\infty (G, B) e_K . \] (b) Recall from \cite[\S A.1.5]{Bos} that the holomorphic functional calculi of\\ $(e_K V_\ell^\infty (G, B) e_K)^+$ and $(e_K C_r^* (G, B) e_K)^+$ can both be expressed as \[ f(a) = (2 \pi i)^{-1} \int_\Gamma F(z) (z - a)^{-1} \textup{d} z , \] where $\Gamma$ is a suitable contour around the spectrum of $a$, and $F$ is a primitive of a holomorphic function $f$. From this expression we see that it suffices to prove that every element of $(e_K V_\ell^\infty (G, B) e_K)^+$ which is invertible in $(e_K C_r^* (G, B) e_K)^+$, is already invertible in $(e_K V_\ell^\infty (G, B) e_K)^+$. Let $a \in (e_K V_\ell^\infty (G, B ) e_K)^+ \cap \big( (e_K C_r^* (G, B) e_K)^+ \big)^\times$. Notice that $(e_K C_r^* (G, B) e_K)^+$ is naturally embedded in $e_K C_r^* (G, B^+) e_K$. We can identify its unit element with $e_K$. This has to be distinguished from the unit element of $C_r^* (G,B^+)^+$, which we denote simply by 1. Then $a + (1 - e_K)$ is invertible in $C_r^* (G,B^+)^+$, with inverse $a^{-1} + (1 - e_K)$. By Theorem \ref{thm:2.4}.d \[ a^{-1} + (1 - e_K) \in V_\ell^\infty (G,B^+)^+ . \] Then also \[ a^{-1} = e_K ( a^{-1} + 1 - e_K) \in V_\ell^\infty (G,B^+)^+ . \] At the same time $a^{-1} \in (e_K C_r^* (G, B ) e_K)^+$, so \[ a^{-1} \in V_\ell^\infty (G,B^+)^+ \cap (e_K C_r^* (G, B ) e_K)^+ \subset (e_K V_\ell^\infty (G,B) e_K)^+ . 
\qedhere \] \end{proof} From the density theorem for K-theory \cite[Th\'eor\`eme A.2.1]{Bos}, Theorem \ref{thm:2.4} and Lemma \ref{lem:2.6}.b we immediately conclude: \begin{cor}\label{cor:3.1} The Fr\'echet algebra homomorphisms \[ V_\ell^\infty (G, B) \to C_r^* (G, B) \text{ and } e_K V_\ell^\infty (G, B) e_K \to e_K C_r^* (G, B) e_K \] induce isomorphisms on K-theory. \end{cor} \section{Linear algebraic groups over non-archimedean local fields} \label{sec:3} In this paragraph $F$ is a non-archimedean local field and $\mathcal G$ is a connected reductive group defined over $F$. We endow $G = \mathcal G (F)$ with the topology coming from the metric of $F$, making it into a locally compact totally disconnected Hausdorff group. We denote the Bruhat--Tits building of $\mathcal G (F)$ by $X$. More generally our below arguments work for (possibly disconnected) quasi-reductive groups over non-archimedean local fields, by \cite{Sol2}. But since every quasi-reductive group is embedded in a reductive group, nothing would be gained by working in that generality. For background on the upcoming notions, we refer to \cite{Tit}. We fix a special vertex $x_0$ of $X$ and we let $G_{x_0}$ be its stabilizer in $G$. By the properness of the action, $G_{x_0}$ is compact. Because $G$ preserves the polysimplicial structure of $X$, the $G$-orbit of $x_0$ consists of vertices. Those lie discretely in $X$, so $G_{x_0}$ is open in $G$. We normalize the Haar measure of $G$ so that $\mu (G_{x_0}) = 1$. We define \[ \ell : G \to \mathbb R_{\geq 0}, \quad \ell (g) = d (g x_0, x_0 ). \] Since $G$ acts continuously and isometrically on $X$, this is a continuous length function and $\ell (g) = \ell (g^{-1})$. Notice that $\ell$ is biinvariant under $G_{x_0}$. Let $S$ be a maximal $F$-split torus of $G$, such that $x_0$ lies in the apartment $\mathbb A_S$ of $X$ associated to $S$. Then $M := Z_G (S)$ is a minimal Levi subgroup of $G$. It has a unique maximal compact subgroup, namely $M_{\mathrm{cpt}} = M \cap G_{x_0}$. Then $M / M_{\mathrm{cpt}}$ can be identified with a lattice in the apartment $\mathbb A_S$. Let $P = M U$ be a minimal parabolic subgroup of $G$, with unipotent radical $U$ and Levi factor $M$. (To be precise, one should say something like $\mathcal P$ is a minimal parabolic $F$-subgroup of $\mathcal G$, and $G = \mathcal G (F), P = \mathcal P (F)$.) Recall the Iwasawa decomposition: \begin{equation}\label{eq:2.3} G = P G_{x_0}. \end{equation} The torus $Z(M)^\circ$ acts algebraically on the Lie algebra of $U$, and that representation decomposes as a direct sum of algebraic characters $\chi : Z(M)^\circ \to F^\times$. Let $\norm{}_F$ denote the norm of $F$. As $Z(M)^\circ$ is cocompact in $M$, $\norm{\chi}_F$ extends uniquely to a character $M \to \mathbb R_{>0}$. We write \[ M^+ = \{ m \in M : \norm{\chi (m)}_F \leq 1 \text{ for all } \chi \text{ which appear in Lie}(U) \} . \] The Cartan decomposition says that the natural map \begin{equation}\label{eq:2.7} M^+ / M_{\mathrm{cpt}} \to G_{x_0} \backslash G / G_{x_0} \quad \text{is bijective.} \end{equation} Notice that $\delta_P (m) \leq 1$ for all $m \in M^+$, where $\delta_P : P \to \mathbb R_{>0}$ is the modular function. Using \eqref{eq:2.3} we extend $\delta_P$ to a right-$G_{x_0}$-invariant function on $G$. From \cite[\S II.1]{Wal} we recall that Harish-Chandra's $\Xi$-function \[ \Xi (g) = \int_{G_{x_0}} \delta_P (kg)^{1/2} \textup{d}\mu (k) \] is $G_{x_0}$-biinvariant. 
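For orientation, it may help to keep the classical example $G = SL_2 (\mathbb Q_p)$ in mind; the following estimates are standard and are recorded here only as an illustration of the objects $\ell$, $\delta_P$ and $\Xi$ (we write $\asymp$ for equality up to positive constants). In this case $X$ is the $(p+1)$-regular tree, $G_{x_0} = SL_2 (\mathbb Z_p)$, and $M^+ / M_{\mathrm{cpt}}$ is represented by the elements $m_n = \mathrm{diag}(p^n, p^{-n})$ with $n \in \mathbb Z_{\geq 0}$. One has
\[ \ell (m_n) = 2n, \qquad \delta_P (m_n) = p^{-2n}, \qquad \Xi (m_n) \asymp (1+n)\, p^{-n}, \qquad \mu (G_{x_0} m_n G_{x_0}) \asymp p^{2n} , \]
so that
\[ \int_G \Xi (g)^2 (1 + \ell (g))^{-t} \textup{d}\mu (g) \asymp \sum_{n \geq 0} p^{2n} (1+n)^2 p^{-2n} (1 + 2n)^{-t} < \infty \quad \text{for all } t > 3 , \]
i.e. one may take $r_G = 3$ for this group.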
Recall from Proposition \ref{prop:3.5} that for $t \in \mathbb R_{> r_G}$ there exist unconditional completions $\mathcal S^t (G)$ and Banach algebras $\mathcal S_t (G,B)$ Let $K \subset G_{x_0}$ be an open subgroup. Then $K$ is also closed in the compact Hausdorff group $G_{x_0}$, and hence compact. Since $\ell$ and $\Xi$ are $G_{x_0}$-biinvariant, they descend to functions $K \backslash G / K \to \mathbb R$. For a right-$K$-invariant, twisted left-$K$-invariant measurable function $f : G \to B$, \eqref{eq:2.12} and the $G$-invariance of the norm of $B$ imply that $\norm{f (g)}_B$ is $K$-biinvariant. Writing $\sigma = 1 + \ell$, we find that for such $f$: \[ \norm{f}_{\mathcal S_t (G,B)} = \sup_{g \in K \backslash G / K} \norm{f(g)}_B \Xi (g)^{-1} \sigma (g)^t . \] Imposing (twisted) $K$-biinvariance enables us to fit functions in $\mathcal S_t (G,B)$: \begin{lem}\label{lem:2.1} For all $f \in e_K S_\ell^\infty (G,B) e_K$ and all $t > r_G$: $\norm{f}_{\mathcal S_t (G,B)} < \infty$. In particular $\mathcal S_t (G,B)$ contains all elements of $V_\ell^\infty (G,B)$ that are right-invariant and twisted left-invariant under some compact open subgroup of $G_{x_0}$. \end{lem} \begin{proof} Since $K$ is open and compact, the space $K \backslash G / K$ is discrete and its elements have finite volume (from the measure on $G$). Choose a set of representatives $k_i \; (i = 1,\ldots ,[G_{x_0}:K])$ for $G_{x_0} / K$ and a set of representatives $m_j (j \in J)$ for $M^+ / M_{\mathrm{cpt}}$. By the Cartan decomposition \eqref{eq:2.7}, the natural map \begin{equation}\label{eq:2.8} \{ k_{i'}^{-1} m_j k_i : 1 \leq i,i' \leq [G_{x_0} : K], j \in J \} \to K \backslash G / K \end{equation} is surjective. The map \begin{equation}\label{eq:2.9} M / M_{\mathrm{cpt}} \to X : m M_{\mathrm{cpt}} \mapsto m x_0 \end{equation} sends $M / M_{\mathrm{cpt}}$ bijectively to a lattice in the apartment $\mathbb A_S$. Combining \eqref{eq:2.8} and \eqref{eq:2.9} with the definition of $\ell$, we deduce that $K \backslash G / K$ has polynomial growth with respect to $\ell$. Knowing that, \cite[Lemma 9]{Vig} says that the rapid decay of $f$ (in the $L^2$-sense) is equivalent to \begin{equation}\label{eq:2.11} \sup_{g \in K \backslash G / K} \norm{f(g)}_B \mu (KgK)^{1/2} \sigma (g)^t < \infty \quad \text{for all } t \in \mathbb R . \end{equation} Notice that $\mu (KgK)$ and $\sigma (g)$ are $G_{x_0}$-biinvariant. From \cite[p. 241]{Wal} we see that there exist $C_K, C'_K \in \mathbb R_{>0}$ such that \begin{equation}\label{eq:2.5} C_K \delta_P (m)^{-1} \leq \mu (K m K) \leq C'_K \delta_P (m)^{-1} \quad \text{for all } m \in M^+. \end{equation} By \eqref{eq:2.8} and \eqref{eq:2.5}, the condition \eqref{eq:2.11} is equivalent to \begin{equation}\label{eq:2.10} \sup_{i,i',j} \norm{f(k_{i'}^{-1} m_j k_i)}_B \delta_P (m_j)^{-1/2} \sigma (m_j)^t < \infty \quad \text{for all } t \in \mathbb R . \end{equation} We recall from \cite[Lemma II.1.1]{Wal} that there exist $C_1,C_2 \in \mathbb R_{>0}$ and $d \in \mathbb N$ such that \[ C_1 \delta_P (m)^{1/2} \leq \Xi (m) \leq C_2 \delta_P (m)^{1/2} \sigma (m)^d \quad \text{for all } m \in M^+. \] With that, \eqref{eq:2.10} becomes equivalent to \[ \sup_{i,i',j} \norm{f(k_{i'}^{-1} m_j k_i)}_B \Xi (m_j)^{-1} \sigma (m_j)^t < \infty \quad \text{for all } t \in \mathbb R . 
\] In view of the $K$-biinvariance of $\norm{f(g)}_B$ and $G_{x_0}$-biinvariance of the other involved terms, this says that \[ \norm{f}_{\mathcal S_t (G,B)} = \sup_{g \in G} \norm{f(g)}_B \Xi (g)^{-1} \sigma (g)^t \] is finite. For the second claim we use Lemma \ref{lem:2.6}.a. \end{proof} In view of Corollary \ref{cor:3.1}, Lemma \ref{lem:2.1} and Proposition \ref{prop:3.5}, every class from\\ $K_* ( e_K C_r^* (G, B) e_K)$ can be represented by elements of matrix algebras over\\ $\mathcal S_t (G,B)^+$ (for $t > r_G$). This enables us to apply Corollary \ref{cor:1.5} and to prove: \begin{thm}\label{thm:3.2} Let $F$ be a non-archimedean local field and let $\mathcal G$ be a connected reductive algebraic group defined over $F$. Endow $G = \mathcal G (F)$ with the topology coming from the metric of $F$. Let $B$ be a $\sigma$-unital $G$-$C^*$-algebra. The assembly map \[ \mu^B : K_*^{\rm top} (G,B) \to K_* (C_r^* (G,B)) \] is a bijection. \end{thm} \begin{proof} As $G$ is totally disconnected and locally compact, its identity element admits a neighborhood basis consisting of compact open subgroups $K$ \cite[\S 3.4.6]{Bou}. In particular $\bigcup_K e_K C_c (G , B) e_K$ is dense in $C_c (G,B)$. We partially order these subgroups $K$ by reverse inclusion. Then \[ \{ e_K : K \subset G \text{ compact open subgroup} \} \] is an approximate identity consisting of projections in the multiplier algebra of $C_r^* (G,B)$. Consequently \begin{equation}\label{eq:3.3} C_r^* (G,B) = \varinjlim_K e_K C_r^* (G,B) e_K , \end{equation} where the limits are taken in the category of $C^*$-algebras. By the continuity of topological K-theory \begin{equation}\label{eq:3.2} K_* (C_r^* (G,B)) = \varinjlim_K K_* ( e_K C_r^* (G, B) e_K) . \end{equation} Pick any class $p \in K_* (C_r^* (G,B))$. By \eqref{eq:3.2} and Corollary \ref{cor:3.1} it lies in the image of $K_* ( e_K V_\ell^\infty (G, B) e_K )$, for a suitable compact open subgroup $K \subset G_{x_0}$. Then Lemma \ref{lem:2.1} and Proposition \ref{prop:3.5}.b imply that $p$ can be represented by an element in a matrix algebra over the Banach algebra $\mathcal S_t (G,B)^+$, for any $t > r_G$. In particular $p$ lies in the image of $K_* (\mathcal S_t (G,B)) \to K_* (C_r^* (G,B))$. This holds for arbitrary $p$, so $K_* (\mathcal S_t (G,B)) \to K_* (C_r^* (G,B))$ is surjective. In Proposition \ref{prop:3.5}.a we saw that $\mathcal S_t (G)$ is an unconditional completion of $C_c (G)$. As $\Xi (g^{-1}) = \Xi (g)$ \cite[Lemme II.1.4]{Wal}, \[ \norm{f^*}_{\mathcal S_t (G)} = \norm{f}_{\mathcal S_t (G)} \quad \text{for all } f \in C_c (G). \] Proposition \ref{prop:3.5}.c enables us to rescale this norm so that \[ \norm{f}_{C_r^* (G)} \leq \norm{f}_{\mathcal S_t (G)} \quad \text{for all } f \in C_c (G). \] Now we checked all the assumptions of Corollary \ref{cor:1.5}, so we can finally apply that result. \end{proof} Using the permanence properties of the Baum--Connes conjecture with coefficients (discussed in the appendix), we can generalize Theorem \ref{thm:3.2} to larger classes of groups. \begin{thm}\label{thm:3.3} Let $G$ be as in Theorem \ref{thm:3.2} and let $H$ be a closed subgroup of $G$. Let $G'$ be a second countable, exact, locally compact group with an amenable closed normal subgroup $N$ such that $G'/N$ is isomorphic (as topological group) to $H$. Then $H$ and $G'$ satisfy BC with arbitrary coefficients. 
\end{thm} \begin{proof} Every separable $C^*$-algebra is $\sigma$-unital, so Theorem \ref{thm:3.2} says in particular that $G$ satisfies BC with arbitrary separable coefficients. Apply Theorem \ref{thm:5.3} and to $G$ and $H$ to get the desired result for $H$. Then apply Theorem \ref{thm:A.4} to $H,G'$ and $N$ to obtain the claim for $G'$. \end{proof} We note that Theorem \ref{thm:3.3} applies to every linear algebraic group over $F$, because such a group can be embedded as a closed subgroup in $GL_n (F)$ for some $n \in \mathbb N$. \section{Linear algebraic groups over global fields} \label{sec:4} In this section we consider linear algebraic groups $\mathcal G$ defined over a global field $k$. The points of $\mathcal G$ over the ring of adeles of $k$ form a locally compact group, usually called an adelic group. BC (with trivial coefficients) for reductive adelic groups has been obtained by Baum, Millington and Plymen in \cite{BMP1}. Later Chabert, Echterhoff and Oyono-Oyono \cite[Theorem 0.7]{CEO} were able to show that all linear algebraic adelic groups over number fields satisfy BC. We will generalize these results to all linear algebraic groups and all global fields. Like the aforementioned work, our proofs rely on the following. \begin{thm}\textup{\cite[Theorem 1.1]{BMP2}} \label{thm:4.2} \\ Let $G$ be a second countable locally compact group. Let $(G_n )_{n=1}^\infty$ be an increasing sequence of open subgroups, with $\bigcup_{n=1}^\infty G_n = G$. Let $B$ be a $G$-$C^*$-algebra and suppose that each $G_n$ satisfies BC with coefficients $B$. Then $G$ satisfies the Baum--Connes conjecture with coefficients $B$. \end{thm} Let $\mathbf A_{k,\mathrm{fin}}$ be the ring of finite adeles of $k$, that is, the restricted product of the non-archimedean completions $k_v$. Let $(v_i )_{i=1}^n$ be an ordering of the finite places of $k$ and let $\mathfrak o_{v_i}$ denote the ring of integers of $k_{v_i}$. Then $\mathbf A_{k,\mathrm{fin}}$ can be expressed as the increasing union of the open subrings \begin{equation}\label{eq:4.1} \mathbf A_{k,n} := k_{v_1} \times \cdots \times k_{v_n} \times \prod\nolimits_{i>n} \mathfrak o_{v_i} . \end{equation} The ring of adeles $\mathbf A_k$ is the direct product $\mathbf A_{k,\mathrm{fin}} \times \prod_{v | \infty} k_v$, where the latter product runs over all infinite places of $k$. When $k$ is a global function field, there are no infinite places and $\mathbf A_k = \mathbf A_{k,\mathrm{fin}}$. On the other hand, every number field does possess infinite places (but only finitely many). Like in \eqref{eq:4.1} we can write $\mathbf A_k$ as the increasing union of the open subrings \begin{equation}\label{eq:4.2} \mathbf A_{k,n} \times \prod_{v | \infty} k_v = k_{v_1} \times \cdots \times k_{v_n} \times \prod_{i>n} \mathfrak o_{v_i} \times \prod_{k|\infty} k_v . \end{equation} Via the diagonal embedding, $k$ can be realized as a discrete cocompact subring of $\mathbf A_k$ \cite[Theorem IV.2.2]{Wei}. \begin{thm}\label{thm:4.3} Let $\mathcal G$ be a linear algebraic group defined over a global field $k$. \enuma{ \item $\mathcal G (\mathbf A_{k,\mathrm{fin}})$ satisfies BC with arbitrary coefficients. \item Suppose that, for every infinite place $v$ of $k$, $\mathcal G (k_v)$ satisfies BC with arbitrary separable coefficients (e.g. $\mathcal G (k_v)$ is compact or solvable or, more generally, amenable). Then the adelic group $\mathcal G (\mathbf A_k)$ satisfies BC with coefficients. 
} \end{thm} \begin{proof} The proof of part (a) is analogous to that of part (b) and slightly simpler, so we omit it.\\ (b) By \eqref{eq:4.1} $\mathcal{G}(\mathbf{A}_{k,\mathrm{fin}})$ is the increasing union of its open subgroups $\mathcal G (\mathbf A_{k,n}) \times \prod_{v | \infty} \mathcal G (k_v)$. By Theorem \ref{thm:4.2} it suffices to establish the theorem for each of the subgroups \[ G_n := \mathcal G (\mathbf A_{k,n}) \times \prod_{v | \infty} \mathcal G (k_v) = \mathcal G (k_{v_1}) \times \cdots \times \mathcal G (k_{v_n}) \times \prod_{i > n} \mathcal G (\mathfrak o_{v_i}) \times \prod_{v | \infty} \mathcal G (k_v) . \] By Theorem \ref{thm:3.3} each $\mathcal G (k_{v_i})$ satisfies BCC. By Tychonoff's Theorem the product of compact groups $\prod\nolimits_{i > n} \mathcal G (\mathfrak o_{v_i})$ is again compact. By \cite[Theorem 3.17.i]{ChEc2} BC with separable coefficients is inherited by finite direct products of groups, so $G_n$ satisfies BC with arbitrary separable coefficients. With Corollary \ref{cor:A.2} we can lift the separability requirement. \end{proof} With the permanence properties of BCC from the appendix, we can generalize Theorem \ref{thm:4.3}. For $\mathcal G (k)$ the below was already stated in \cite[Theorem 1.3]{BMP2}. \begin{cor}\label{cor:4.6} Let $\mathcal G$ be a linear algebraic group defined over a global field $k$ and suppose that, for every infinite place $v$ of $k$, $\mathcal G (k_v)$ is amenable. Let $H$ be a closed subgroup of $\mathcal G (\mathbf A_k)$, for instance $\mathcal G (k)$ with the discrete topology. Let $G'$ be a second countable, exact, locally compact group with an amenable closed normal subgroup $N$ such that $G'/N$ is isomorphic (as topological group) to $H$. Then $H$ and $G'$ satisfy BC with arbitrary coefficients. \end{cor} \begin{proof} Apply Theorems \ref{thm:4.3} and \ref{thm:5.3} and to $G$ and $H$ to get the desired result for $H$. Since $k$ embeds discretely in $\mathbf A_k$, $\mathcal G (k)$ embeds in $\mathcal G (\mathbf A_k)$ as a discrete subgroup, and it is an example of such an $H$. Then apply Theorem \ref{thm:A.4} to $H,G'$ and $N$ to obtain the claim for $G'$. \end{proof} Unfortunately part (b) of Theorem \ref{thm:4.3} does not apply to all linear algebraic groups over number fields, because the Baum--Connes conjecture with coefficients is still open for many reductive Lie groups. A strong result in that direction was proven by Chabert, Echterhoff and Nest. We present a slightly simplified version: \begin{thm}\label{thm:4.4} \textup{\cite[Theorem 1.2]{CEN}} \\ Let $G$ be a second countable locally compact group, such that the identity component $G^\circ$ is a Lie group and $G / G^\circ$ is compact. (For instance, $G$ can be a finite dimensional Lie group with only finitely many components.) Then $G$ satisfies the twisted Baum--Connes conjecture. \end{thm} We generalize this to adelic groups. \begin{thm}\label{thm:4.5} Let $\mathcal G$ be a linear algebraic group defined over a global field $k$. Then the adelic group $\mathcal G (\mathbf A_k)$ satisfies the twisted Baum--Connes conjecture. \end{thm} \begin{proof} Let $\mathcal K (H)$ be the algebra of compact operators on a separable Hilbert space $H$, and suppose that it carries the structure of a $\mathcal G (\mathbf A_k)$-$C^*$-algebra. We have to show that $\mathcal G (\mathbf A_k)$ satisfies BC with coefficients $\mathcal K (H)$. Write $N = \prod_{v | \infty} \mathcal G (k_v)$. 
As the $\mathbb R$-points of an algebraic group, this is a (finite dimensional) Lie group with only finitely many connected components. Suppose that $L \subset \mathcal G (\mathbf A_k)$ is a closed subgroup containing $N$ as cocompact subgroup. Then \[ L / N \subset \mathcal G (\mathbf A_k) / N \cong \mathcal G (\mathbf A_{k,\mathrm{fin}}) \] is totally disconnected, and hence $L^\circ = N^\circ$. Consequentely $L / L^\circ = L / N^\circ$ is compact, and by Theorem \ref{thm:4.4} $L$ satisfies BC with coefficients $\mathcal K (H)$. The above checks that the conditions of \cite[Theorem 2.1]{CEO} are fulfilled. The statement of \cite[Theorem 2.1]{CEO} is: BC for $\mathcal G (\mathbf A_k)$ with coefficients $\mathcal K (H)$ is equivalent to BC for $(\mathcal G (\mathbf A_k),N)$ with coefficients $C_r^* (N,\mathcal K (H))$. Here the action of $(\mathcal G (\mathbf A_k), N)$ on $C_r^* (N,\mathcal K (H))$ is twisted. By \cite[Theorem 1]{Ech} this twisted action is $\mathcal G (\mathbf A_k)$-equivariantly Morita equivalent to an ordinary action of $\mathcal G (\mathbf A_k) / N$ on another $C^*$-algebra, say $B$. Actually, since \[ \mathcal G (\mathbf A_k) = N \times \mathcal G (\mathbf A_{k,\mathrm{fin}}) \] we may take $B = C_r^* (N,\mathcal K (H))$. Then BC for $(\mathcal G (\mathbf A_k),N)$ with coefficients \\ $C_r^* (N,\mathcal K (H))$ is equivalent to BC for $\mathcal G (\mathbf A_k)/N \cong \mathcal G (\mathbf A_{k,\mathrm{fin}})$ with coefficients $B$ \cite[Proposition 5.6]{ChEc1}. The latter holds by Theorem \ref{thm:4.3}.a. \end{proof} \appendix \section{Permanence properties of Baum--Connes with coefficients} For technical reasons, we prove some of the results in the body of our paper initially only for separable coefficient algebras. In this appendix we discuss how the Baum--Connes conjecture with separable coefficients for an exact group $G$ implies BC for $G$ with coefficients in an arbitrary $G$-$C^*$-algebra. This is made possible by the work of Chabert--Echterhoff \cite{ChEc2} on the continuity of the topological side of BCC. Let $G$ be a second countable, locally compact group and let $B$ be any $G$-$C^*$-algebra. Let $\{ B_i : i \in I\}$ be the set of separable $G$-stable sub-$C^*$-algebras of $B$, partially ordered by inclusion. The second countablity of $G$ entails that every element of $B$ is contained in such a separable subalgebra $B_i$. It follows that \begin{equation}\label{eq:A.1} \lim_{i \in I} B_i \cong B \qquad \text{as } G-C^*\text{-algebras.} \end{equation} \begin{prop}\label{prop:A.1} Let $G,B$ and the $B_i$ be as above. There is a natural isomorphism \[ \lim_{i \in I} K_*^{\rm top} (G,B_i) \cong K_*^{\rm top} (G,B) . \] \end{prop} \begin{proof} In \cite[Proposition 7.1]{ChEc2} this was proven when $B$ is separable and $\{ B_i : i \in I\}$ is an arbitrary inductive system of separable $G$-$C^*$-algebras with direct limit $B$. We check that the arguments in \cite{ChEc2} also work when $B$ is not separable. In \cite{ChEc2} $K_*^{\rm top} (G,B)$ is exhibited as a direct limit of groups $KK_*^G (C_0 (X),B_i)$, where $i \in I$ and $X$ runs through some collection of proper $G$-spaces. The maps relating these KK-groups to the limit group are given by Kasparov products \begin{equation}\label{eq:A.2} KK_*^G (C_0 (X'),C_0 (X)) \otimes_\mathbb Z KK_*^G (C_0 (X),B_i) \otimes_\mathbb Z KK_*^G (B_i,B) \to KK_*^G (C_0 (X'),B) . 
\end{equation} The only involved element of $KK_*^G (B_i,B)$ is associated to the inclusion $B_i \to B$, while the only relevant elements of $KK_*^G (C_0 (X'),C_0 (X))$ are those induced by a continuous map $X \to X'$ and Bott elements in $KK_*^G (C_0 (X \otimes \mathbb R), C_0 (X))$ or $KK_*^G (C_0 (X'), C_0 (X' \otimes \mathbb R))$. As Chabert and Echterhoff observe, the associativity of the Kasparov product is needed to exchange the order of certain direct limits. After that, their arguments do not use any properties of the separable coefficient algebras $B_i$, they only involve various constructions with commutative $C^*$-algebras. We point out that, by \cite[Theorems 2.11 and 2.14.5]{Kas}, the associativity of the Kasparov product in \eqref{eq:A.2} holds even if $B$ is not separable. With this in mind, the entire proof of \cite[Proposition 7.1]{ChEc2} also applies to our possibly non-separable $G$-$C^*$-algebra $B$. \end{proof} As the continuity of the analytic side of Baum--Connes follows from exactness of the group, Proposition \ref{prop:A.1} has the following consequence: \begin{cor}\label{cor:A.2} Let $G$ be a second countable, exact, locally compact group. Suppose that $G$ satisfies the Baum--Connes conjecture with coefficients in any separable $G$-$C^*$-algebra. Then $G$ satisfies BCC, that is, BC with arbitrary (possibly non-separable) coefficients. \end{cor} \begin{proof} The inclusion $B_i \to B$ and the naturality of the assembly map \cite[\S 9]{BCH} yield a commutative diagram \[ \begin{array}{ccc} K_*^{\rm top} (G,B_i) & \xrightarrow{\mu^{B_i}} & K_* (C_r (G,B_i)) \\ \downarrow & & \downarrow \\ K_*^{\rm top} (G,B) & \xrightarrow{\mu^B} & K_* (C_r (G,B)) \end{array} \] From the exactness of $G$ and \eqref{eq:A.1} we get a natural isomorphism \[ \lim_{i \in I} C_r^* (G,B_i) \cong C_r^* (G,B) . \] Combine these with Proposition \ref{prop:A.1}. Using the assumption that every $\mu^{B_i}$ is an isomorphism, we find that $\mu^B$ is an isomorphism as well. \end{proof} From \cite[Theorem 2.5]{ChEc2} and Corollary \ref{cor:A.2} we get: \begin{thm}\label{thm:5.3} Let $G$ be a second countable, exact, locally compact group and let $H$ be a closed subgroup of $G$. Suppose that $G$ satisfies the Baum--Connes conjecture with arbitrary separable coefficients. Then $H$ satisfies BCC, that is, BC with arbitrary coefficients. \end{thm} Chabert, Echterhoff and Oyono-Oyono \cite{CEO} proved a permanence property of BCC with respect to extensions, which we now generalize to arbitrary coefficient algebras. \begin{thm}\label{thm:A.4} Let $G$ be a second countable, exact, locally compact group and let $N$ be a closed normal amenable subgroup of $G$. Suppose that $G/N$ satisfies BC with arbitrary separable coefficients. Then $G$ satisfies BCC (with arbitrary coefficients). \end{thm} \begin{proof} By Corollary \ref{cor:A.2} it suffices to prove BC for $G$ with coefficients in an arbitrary separable $G$-$C^*$-algebra $B$. Suppose that $L$ is an extension of $N$ by a compact group. Then $L$ inherits the amenability of $N$, so by \cite{HiKa} it satisfies BC with arbitrary separable coefficients. This shows that the assumptions of \cite[Theorem 2.1]{CEO} are satisfied by $(G,N)$. As moreover $B$ is separable, we may apply that result. It says that the bijectivity of the assembly map $\mu^B$ is equivalent to: \[ \text{the pair } (G,N) \text{ satisfies Baum--Connes with coefficients } C_r^* (N,B). \] The statement involves a twisted action of $(G,N)$ on $C_r^* (N,B)$. 
Fortunately, by \cite[Theorem 1]{Ech} this twisted action is $G$-equivariantly Morita equivalent to an ordinary action of $G/N$ on another separable $C^*$-algebra, say $B'$. Then Baum--Connes for $(G,N)$ with coefficients $C_r^* (N,B)$ is equivalent to Baum--Connes for $G/N$ with coefficients $B'$ \cite[Proposition 5.6]{ChEc1}. That holds by assumption. \end{proof} \end{document}
\begin{document} \begin{center} \Large\textbf{Spectral asymptotics for Stretched Fractals}\\ \large Elias Hauser\footnote{Institute of Stochastics and Applications, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany, E-mail: [email protected]}\end{center} \tableofcontents \begin{abstract} The Stretched Sierpinski Gasket (or Hanoi attractor) was the subject of several prior works. In this work we use this idea of \textit{stretching} self-similar sets to obtain non-self-similar ones. We are able to do this for a subset of the connected p.c.f. self-similar sets that fulfill a certain connectivity condition. We construct Dirichlet forms and study the associated self-adjoint operators by calculating the Hausdorff dimension w.r.t. the resistance metric as well as the leading term of the eigenvalue counting function. \end{abstract} \section{Introduction} In this work we introduce the so called \textit{stretched fractals}, which originate by altering the construction of p.c.f. self-similar sets. We construct Dirichlet forms on these non-self-similar sets and conduct spectral asymptotics on the associated self-adjoint operators.\\ Studying the asymptotic behavior of the spectrum of laplacians is an important tool in physics, for example, to understand the behavior of heat and waves in the underlying media. We are in particular interested in the asymptotic growth of the eigenvalues. For the classical laplacian on bounded domains $\Omega\subset \mathbb{R}^d$ we know that the Dirichlet eigenvalue counting function $N_D^\Omega$ grows asymptotically like \begin{align*} N_D^\Omega(x)\sim x^\frac{d}{2} \end{align*} This result is originally due to Weyl \cite{we11}. Nature, however, is not built up of smooth structures, but rather porous, disordered and finely structured material. In the 1970s, therefore, interest grew in studying fractals, which are much better suited to describe natural structures and phenomena. To understand the physical behavior on such objects we need a laplacian. Kigami constructed such an operator first on the Sierpinski Gasket \cite{kig89} and later on the so called p.c.f. self-similar sets \cite{kig93} via a sequence of operators on the approximating graphs. This is called the analytical approach. Shima \cite{shim} and Fukushima-Shima \cite{fushim} calculated the leading term of the eigenvalue counting function of the laplacian on the Sierpinski Gasket. Later Kigami and Lapidus calculated the asymptotic growth for p.c.f. self-similar sets in \cite{kig93}. Contrary to a conjecture by Berry in \cite{ber1,ber2}, the exponent of the leading term does not coincide with $\frac{d_H}2$, where $d_H$ denotes the Hausdorff dimension. \\ In this work we want to examine some non-self-similar sets and calculate the leading term for operators in such a case. One example is the Stretched Sierpinski Gasket, which was analyzed geometrically in \cite{af12}. \begin{figure} \caption{Stretching the Sierpinski Gasket} \label{ssgfigure} \end{figure} Let $p_1,p_2,p_3$ be the vertex points of an equilateral triangle with side length 1 and let $\alpha \in(0,1)$. Define \begin{align*} G_i(x)&:=\frac{1-\alpha}2(x-p_i) +p_i, \ i\in \{1,2,3\} \\[0.1cm] e_1&:=\{\lambda G_2(p_3)+(1-\lambda)G_3(p_2) \ : \lambda \in (0,1)\},\quad e_2,e_3 \ \text{analogously} \end{align*} The unique compact set $K$ which fulfills \begin{align*} K=G_1(K)\cup G_2(K) \cup G_3(K) \cup e_1\cup e_2\cup e_3 \end{align*} is called the Stretched Sierpinski Gasket. 
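For concreteness we record a small computation which follows directly from these definitions: since $G_2(p_3) = p_2 + \frac{1-\alpha}2 (p_3 - p_2)$ and $G_3(p_2) = p_3 + \frac{1-\alpha}2 (p_2 - p_3)$, both endpoints of $e_1$ lie on the segment from $p_2$ to $p_3$, at distance $\frac{1-\alpha}2$ from $p_2$ resp. $p_3$. Hence each connecting line $e_i$ has length \begin{align*} 1 - 2\cdot\tfrac{1-\alpha}2 = \alpha , \end{align*} so the parameter $\alpha$ measures how far the three copies $G_i(K)$ are stretched apart.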
In \cite{afk17} the authors constructed resistance forms on this set and in \cite{haus17} the author calculated the leading term of the associated operators after introducing measures. These results were refined in \cite{haus18} where it was shown that there are oscillations in the leading term of the eigenvalue counting function. These oscillations are typical for such highly symmetrical sets. \\ In this work we want to generalize the idea of \textit{stretching} to more p.c.f. self-similar sets which we will call \textit{stretched fractals}. This work is structured as follows. In chapter~\ref{chapter2} we construct stretched fractals and include some examples. In chapter~\ref{chapter3} we build a sequence of approximating graphs and introduce the notion of \textit{regular sequences of harmonic structures} which is a generalization of the term \textit{harmonic structure} in the self-similar case. Afterwards we construct resistance forms on stretched fractals in chapter~\ref{chapter4}, presumed we have a harmonic structure. In chapter~\ref{chap_meas_oper} we describe measures on stretched fractals which allows us to get Dirichlet forms from the resistance forms and thus the self-adjoint operators that we want to study. In chapter~\ref{chap_cond_haus} we introduce some conditions that are necessary to calculate both Hausdorff dimension in resistance metric as well as the leading term of the eigenvalue counting function which is done in chapter~\ref{chap_cond_haus} resp. chapter~\ref{chapter7}. Lastly we list some open problems and further ideas in chapter~\ref{chapter8}. \section{Stretched fractals}\label{chapter2} In \cite{haus17,haus18} we analyzed the Stretched Sierpinski Gasket analytically. It is constructed by lowering the contraction ratios of the similitudes of the self-similar Sierpinski Gasket and filling the arising holes with one-dimensional lines (Figure~\ref{ssgfigure}). We want to generalize this construction of \textit{stretching} to more self-similar fractals. For the Sierpinski Gasket $S$ it was essential that two copies $F_i(S)$ and $F_j(S)$ only intersect at a single point. Therefore, it is clear how we have to connect these copies if we stretch them apart. In general we need the fractal that we want to stretch to be finitely ramified. In this case we can connect the copies that get stretched away from each other by one-dimensional lines. These are the so called \textit{p.c.f. self-similar fractals} introduced by Kigami in \cite{kig93}. The notion of p.c.f. self-similar sets is well known thus we only want to recall the most important properties and also alter the notion slightly. \subsection{Definition of stretched fractals} Let $(F_1,\ldots,F_N)$ be the IFS of a connected p.c.f. self-similar fractal $F\subset \mathbb{R}^d$. That means \begin{align*} F=\bigcup_{i=1}^N F_i(F) \end{align*} where $F_i$ are contracting similitudes with distinct unique fixed points $q_i$.\\ We will introduce some notation that is commonly used. We denote the alphabet by $\mathcal{A}:=\{1,\ldots,N\}$ and all words of finite length $\mathcal{A}^\ast:=\bigcup_{n\geq 1}\mathcal{A}^n$ and $\mathcal{A}^\ast_0:=\bigcup_{n\geq 0}\mathcal{A}^n$ if we also want to include the empty word. For $w=(w_1,\ldots,w_n)$ we denote by $F_w:=F_{w_1}\circ\ldots\circ F_{w_n}$ the composition of the similitudes and by $F_w:=\operatorname{id}$ the identity if $w=\emptyset$ is the empty word.\\ We can now define the critical set $\mathcal{C}$. 
This set plays an important role in the construction of \textit{stretched fractals}: \begin{align*} \mathcal{C}:=\bigcup_{\substack{i,j\in \mathcal{A} \\ i\neq j}} F_i(F)\cap F_j(F) \end{align*} That means $\mathcal{C}$ are the points where $1$-cells meet. With this we define the so called post critical set $\mathcal{P}$. \begin{align*} \mathcal{P}:=\{x\in F \ | \ \exists w\in \mathcal{A}^\ast: F_w(x)\in \mathcal{C} \} \end{align*} The post critical set $\mathcal{P}$ consists of all points that get mapped to the critical set $\mathcal{C}$ by finite compositions of the similitudes $F_1,\ldots,F_N$. For p.c.f. self-similar sets we know that $\#\mathcal{P}<\infty$. For nested fractals $\mathcal{P}$ is made up of the essential fixed points (see \cite[Example 8.5]{kig93}). In general $\mathcal{P}$ can have elements that are no fixed points (see Hata's tree in chapter~\ref{hataex}). To still be able to stretch these fractals we need to make an assumption for $\mathcal{P}$. We only consider connected p.c.f. self-similar sets, such that \begin{align} \forall p \in \mathcal{P} \ \exists w\in \mathcal{A}^\ast_0 \text{ and a fixed point } q_i\in\mathcal{P} \text{ such that } p=F_w(q_i) \tag{C1}\label{pcfcond}\\ q_i\notin\mathcal{C} \ \forall i\in\mathcal{A} \tag{C2} \label{pcfcond2} \end{align} That means, each post critical point is the image of a fixed point under finite composition of the similitudes or one itself. This is obviously true for nested fractals and it is also true for Hata's tree which is not a nested fractal. The fixed points are not allowed to be critical points themselves.\\ We want to introduce a quantity that describes the \textit{level of connectedness} at $\mathcal{C}$. This value is called the \textit{multiplicity} of a point $c\in \mathcal{C}$ and it counts how many $1$-cells meet at $c$. We can define this value for all $x\in F$: \begin{align*} \rho(x):=\#\{i\in\mathcal{A} \ | \ x\in F_i(F)\} \end{align*} We can also count the $n$-cells that meet at $x$: \begin{align*} \rho_n(x):=\#\{w\in\mathcal{A}^n\ |\ x\in F_w(F)\} \end{align*} As it turns out this gives us the same value $\rho(c)=\rho_n(c)$ for $c\in \mathcal{C}$. This fact was proved by Lindstr\o m in \cite[Prop. IV.16]{lin90} for nested fractals but it only used the nesting property which is also true for all p.c.f. self-similar sets \cite{kig93}. Now the critical set $\mathcal{C}$ with $\rho(c)$ for all $c\in \mathcal{C}$ describes how the fractal is connected. In particular we have $\rho(c)\geq 2$ for all $c\in\mathcal{C}$. Next we want to be able to say which post critical points get mapped to $c\in \mathcal{C}$. \begin{align*} \forall c\in\mathcal{C} \ : \ &\exists w^{c,1},\ldots, w^{c,\rho(c)}\in \mathcal{A}^\ast \text{ and fixed points } q^c_1,\ldots,q^c_{\rho(c)}\in\mathcal{P} \text{ such that}\\ &F_{w^{c,l}}(q^c_l)=c, \ \forall l\in\{1,\ldots,\rho(c)\} \\ &\text{where } w^{c,l}_1 \text{ are pairwise distinct.} \end{align*} We can do this since we consider p.c.f. self-similar sets that fulfill (\ref{pcfcond}). The first letters $w^{c,l}_1$ are different which indicates that $c$ belongs to $\rho(c)$ many different $1$-cells.\\ We can now define a new IFS $(G_1,\ldots,G_N)$ in the following way with $0<\alpha<1$: \begin{align*} G_i:=\alpha(F_i-q_i)+q_i \end{align*} This procedure lowers the contraction ratio of $F_i$ by multiplying it with $\alpha$ and it does this in a way that preserves the fixed point. It sort of compresses the image of $F_i$ linearly into its fixed point. 
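To see how this works in the simplest case (this merely recovers the construction from the introduction), consider the Sierpinski Gasket with $F_i(x) = \frac 12 (x - p_i) + p_i$ and fixed points $q_i = p_i$. Then \begin{align*} G_i(x) = \alpha\big(F_i(x) - p_i\big) + p_i = \frac{\alpha}2 (x - p_i) + p_i , \end{align*} so the contraction ratio drops from $\frac 12$ to $\frac{\alpha}2$; up to replacing $\alpha$ by $1-\alpha$, these are exactly the similitudes that were used for the Stretched Sierpinski Gasket in the introduction.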
The attractor of $(G_1,\ldots,G_N)$ will be denoted by $\Sigma_\alpha$.\\ By lowering the contraction ratios the copies get disconnected. I.e. the attractor of the new IFS becomes totally disconnected. The copies were connected at the critical set $\mathcal{C}$. We want to save the degree of connectedness by introducing one-dimensional lines connecting the copies.\\[.2cm] $\forall c\in\mathcal{C}$ define \begin{align*} e_{c,l}:=\{\lambda G_{w^{c,l}}(q^c_l)+(1-\lambda)c, \ \lambda\in[0,1]\}, \ \forall l\in\{1,\ldots,\rho(c)\} \end{align*} $G_{w^{c,l}}(q^c_l)$ is the point that got stretched away from $c$. Due to (\ref{pcfcond2}) $e_{c,l}$ is a one-dimensional object for all $c,l$. For $w\in\mathcal{A}^\ast$ we denote $e^w_{c,l}:=G_w(e_{c,l})$. Now we can define the stretched fractal associated to the p.c.f. self-similar set $F$: \begin{definition} The unique compact set $K_\alpha$ that fulfills the equation \begin{align*} K_\alpha =\bigcup_{i=1}^N G_i(K_\alpha) \cup \bigcup_{c\in \mathcal{C}}\bigcup_{l=1}^{\rho(c)} e_{c,l} \end{align*} is called the \textit{stretched fractal} associated to $F$.\\ \noindent The unique compact set $\Sigma_\alpha$ that fulfills \begin{align*} \Sigma_\alpha=\bigcup_{i=1}^N G_i(\Sigma_\alpha) \end{align*} is called the fractal part of $K_\alpha$. \end{definition} We can imagine the construction by fixing the points of $\mathcal{C}$ and stretching the copies away from each $c\in\mathcal{C}$ and then adding lines connecting the copies with $c$ like a spider's web. Since the fixed points of $G_i$ and $F_i$ are the same we ensure that $q^c_l$ and thus $G_{w^{c,l}}(q^c_l)$ are elements of $\Sigma_\alpha$. By this $K_\alpha$ is a connected set. Therefore, (C1) ensures connectedness. We will include a few examples of stretched fractals at the end of this chapter.\\ Solutions of equations like the one in this definition are already known. Barnsley denoted such a setting in \cite[Chapter 3.4]{bar06} by \textit{IFS with condensation} where $\bigcup_{c\in \mathcal{C}}\bigcup_{l=1}^{\rho(c)} e_{c,l}$ is called the condensation set. Since this is compact, so is the unique solution $K_\alpha$. In \cite{fra18} Jonathan Fraser called such a solution an \textit{inhomogeneous self-similar set} and calculated the box dimension. This is much harder than to calculate the Hausdorff dimension since the box dimension is not countably stable. In particular the lower box dimension is not even finitely stable. The result for the Hausdorff dimension with respect to the Euclidean metric is calculated very easily due to its countable stability. From \cite[Lemma 3.9]{sn08} we know with the so called \textit{orbital set} $\mathcal{O}$: \begin{align*} \mathcal{O}=\bigcup_{w\in\mathcal{A}^\ast_0}G_w\left(\bigcup_{c\in \mathcal{C}}\bigcup_{l=1}^{\rho(c)} e_{c,l}\right) \end{align*} that \begin{align*} K_\alpha=\Sigma_\alpha\cup \mathcal{O}=\overline{\mathcal{O}} \end{align*} \begin{proposition}\label{prop21} \begin{align*} \dim_{H,e}(K_\alpha)=\max\{\dim_{H,e}(\Sigma_\alpha),1\} \end{align*} \end{proposition} This value strongly depends on the stretching parameter $\alpha$. The resistance forms, however, will only depend on the topology which does not depend on $\alpha$: \begin{proposition}\label{prop22} The $K_\alpha$ are pairwise homeomorphic for different $\alpha$. \end{proposition} \begin{proof} We denote by $G_w^\alpha$ the similitudes which correspond to $K_\alpha$ as well as $e_{c,l}^{\alpha,w}$ for $w\in\mathcal{A}^\ast_0$. 
We know that $\Sigma_\alpha$ is homeomorphic to $\mathcal{A}^\mathbb{N}$ by the coding map $\iota^\alpha$ which maps $\mathcal{A}^\mathbb{N}$ to $\Sigma_\alpha$ by \begin{align*} \iota^\alpha(w)=\bigcap_{n\geq 1} G^\alpha_{w_1,\ldots,w_n}(K_\alpha) \end{align*} For $\alpha_1,\alpha_2\in (0,1)$ with $\alpha_1\neq \alpha_2$ we thus know that $\Sigma_{\alpha_1}$ and $\Sigma_{\alpha_2}$ are homeomorphic by the homeomorphism \begin{align*} \varphi_{\alpha_1,\alpha_2}:=\iota_{\alpha_2}\circ (\iota_{\alpha_1})^{-1} \end{align*} Also we know that $e_{c,l}^{\alpha}$ is homeomorphic to $[0,1]$ for all $c\in \mathcal{C} $ and $l\in \{1,\ldots,\rho(c)\}$. We denote the homeomorphism by $\iota_{c,l}^\alpha$. We can extend $\varphi_{\alpha_1,\alpha_2}$ to all $e_{c,l}^{\alpha_1,w}$ with $w\in \mathcal{A}^\ast_0$ by \begin{align*} \varphi_{\alpha_1,\alpha_2}|_{e^{\alpha_1,w}_{c,l}}:=G_w^{\alpha_2}\circ \iota_{c,l}^{\alpha_2}\circ (\iota_{c,l}^{\alpha_1})^{-1}\circ (G_w^{\alpha_1})^{-1} \end{align*} We see that the extended $\varphi_{\alpha_1,\alpha_2}$ is a homeomorphism between $K_{\alpha_1}$ and $K_{\alpha_2}$. \end{proof} We therefore omit the parameter $\alpha$ in the notation and only write $K$ for the stretched fractal. Similar we only write $\Sigma$ for $\Sigma_\alpha$. This also means that the Hausdorff dimension with respect to the Euclidean metric is not a very good quantity to describe the analysis of $K$. In chapter~\ref{chap_cond_haus} we are going to calculate the Hausdorff dimension with respect to the resistance metric, which is much better suited for this job.\\ We give some further notation. For $w\in \mathcal{A}^\ast_0$ \begin{itemize} \item $K_w:=G_w(K)$, $K_n:=\bigcup_{w\in\mathcal{A}^n}K_w$ \item $J_n:=\overline{K\backslash K_n}$ \item $\Sigma_w:=G_w(\Sigma)$, $\Sigma_n:=\bigcup_{w\in\mathcal{A}^n}\Sigma_w$ \end{itemize} We take the closure of $K\backslash K_n$ when defining $J_n$ to include the endpoints of the one-dimensional lines. $K_w$ is called an $n$-cell if $|w|=n$. \subsection{Examples} In this section we include some examples of p.c.f. self-similar fractals that we can stretch. \subsubsection{Stretched Sierpinski Gasket} The Stretched Sierpinski Gasket was subject of prior work. It was analyzed geometrically in \cite{af12} and analytically in \cite{af13} and \cite{akt16}. In \cite{afk17} the authors introduced so called completely symmetric resistance forms which satisfy full symmetry of the set. In \cite{haus17} the leading term of the eigenvalue counting function of the associated operators was calculated. \begin{figure} \caption{Stretched Sierpinski Gasket} \end{figure} The Sierpinski Gasket has three similitudes and three critical points which all have multiplicity 2. The post critical set consists of all three fixed points of the similitudes which are the corner points of the big triangle (see \cite[Example 8.2]{kig93}). The words $w^{c,l}$ all have length one since the Sierpinski Gasket is nested. \subsubsection{Stretched Level 3 Sierpinski Gasket} \begin{figure} \caption{Stretched Level 3 Sierpinski Gasket} \end{figure} The Level 3 Sierpinski Gasket has six similitudes and seven critical points. Six of the critical points have multiplicity 2, the inner critical point, however, has multiplicity 3. By connecting the copies of all three 1-cells that got stretched away from this point to it, we keep the level of connectedness. The post critical set consists of the essential fixed points which are again the corner points of the outer triangle. 
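As a small sanity check, which is not spelled out above but follows directly from the construction: each critical point $c$ receives $\rho(c)$ connecting lines, so the Stretched Level 3 Sierpinski Gasket has \begin{align*} \sum_{c\in\mathcal{C}}\rho(c) = 6\cdot 2 + 1\cdot 3 = 15 \end{align*} connecting lines $e_{c,l}$ at the first level, three of which meet at the inner critical point.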
\subsubsection{Stretched Sierpinski Gasket in higher dimensions} There is a generalization of the Sierpinski Gasket to higher dimensions \cite{dh77}. These are nested fractals that can be stretched. \begin{figure} \caption{Stretched Sierpinski Gasket in $\protect\mathbb{R}^d$} \end{figure} In $\mathbb{R}^d$ we have $d+1$ similitudes, where each copy of the stretched fractal is connected to all other copies by connecting lines via the critical points. We have $\frac{d(d+1)}{2}$ critical points which all have multiplicity 2. The post critical points are all the fixed points of the similitudes. \subsubsection{Stretched Lindstr\o m Snowflake} \begin{figure} \caption{Stretched Lindstr\o m Snowflake} \end{figure} The Lindstr\o m Snowflake was introduced by Lindstr\o m in \cite{lin90} as an example of a nested fractal. It has seven similitudes and twelve critical points which all have multiplicity 2. The post critical set consists of the essential fixed points, which are the fixed points of the outer six similitudes. \subsubsection{Stretched Vicsek Set} \begin{figure} \caption{Stretched Vicsek Set} \end{figure} The Vicsek Set consists of five similitudes and four critical points with multiplicity 2. The post critical points are the fixed points of the outer four similitudes. \subsubsection{Stretched Hata's tree}\label{hataex} We have the following similitudes \begin{align*} F_1(x)&=\frac 1{\sqrt{12}}\begin{pmatrix}\sqrt{3} &1\\1&-\sqrt{3}\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix} \\ F_2(x)&=\frac 23\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}x_1\\ x_2\end{pmatrix}+\begin{pmatrix}\frac 13\\ 0\end{pmatrix} \end{align*} This corresponds to the case $\alpha=\frac 12 +\frac{\sqrt{3}}6 i$ in the notation of \cite{hata85} (this $\alpha$ is not to be confused with our stretching parameter). \begin{figure} \caption{Hata's tree} \end{figure} There is one critical point at $c=\begin{pmatrix}\frac 13\\0\end{pmatrix}$ with multiplicity 2 and the post critical set is \begin{align*} \mathcal{P}:=\left\{\begin{pmatrix}0\\0 \end{pmatrix}, \begin{pmatrix}1\\0 \end{pmatrix}, \begin{pmatrix} 1/2 \\ 1/\sqrt{12} \end{pmatrix}\right\} \end{align*} We have $\begin{pmatrix} 1/2 \\ 1/\sqrt{12} \end{pmatrix}=F_1\begin{pmatrix}1\\0 \end{pmatrix}$ and $c=F_1\begin{pmatrix} 1/2 \\ 1/\sqrt{12} \end{pmatrix}=F_2\begin{pmatrix}0\\0\end{pmatrix}$, which means we have the words $w^{c,1}=2$ and $w^{c,2}=11$. Therefore, even though Hata's tree is not nested (since it lacks the symmetry axiom), it fulfills our conditions (\ref{pcfcond}) and (\ref{pcfcond2}) and thus we are able to stretch it.\\ Stretching this set with $\alpha=\frac 9{10}$ gives us the following two similitudes \begin{align*} G_1(x)&=\frac 9{10\sqrt{12}}\begin{pmatrix}\sqrt{3} &1\\1&-\sqrt{3}\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix} \\ G_2(x)&=\frac 35\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}x_1\\ x_2\end{pmatrix}+\begin{pmatrix}\frac 25\\ 0\end{pmatrix} \end{align*} According to the construction we need to connect the points $G_1^2\begin{pmatrix}1\\0 \end{pmatrix}$ and $G_2\begin{pmatrix}0\\0\end{pmatrix}$ with $c$. 
This leads to the connecting lines \begin{align*} e_1=\{\lambda(\tfrac 9{10})^2\begin{pmatrix}\frac 13\\0\end{pmatrix}+(1-\lambda)\begin{pmatrix}\frac 13\\0\end{pmatrix}, \lambda\in[0,1]\}\\ e_2=\{\lambda \begin{pmatrix}\frac 13\\0\end{pmatrix} +(1-\lambda) \begin{pmatrix}\frac 25\\0\end{pmatrix}, \lambda\in[0,1]\} \end{align*} \begin{figure} \caption{Stretched Hata's tree} \end{figure} \section{Graph approximation and harmonic structures}\label{chapter3} To be able to introduce Dirichlet forms on stretched fractals we need to approximate $K$ by a sequence of finite graphs and choose resistances on the graph edges. This is the goal of this chapter. \subsection{Graph approximation} We start with the post critical set as vertices and connect all of them pairwise. \begin{align*} V_0:=\mathcal{P}, \ E_0:=\{\{x,y\}\ | \ x,y\in V_0, \ x\neq y\} \end{align*} In the next graph we have two kinds of vertices. The first kind originates from applying the similitudes $G_j$ to the points of $V_0$: \begin{align*} P_1:=\bigcup_{j=1}^N G_j(V_0) \end{align*} The other kind consists of the critical points, which describe how we want to connect the cells $G_j(V_0)$: \begin{align*} C_1:=\mathcal{C} \end{align*} The union of these two parts gives us the set of vertices $V_1:=P_1\cup C_1$. \\ Now we want to describe how these vertices are connected. The points of $G_j(V_0)$ should be connected in the same way as $V_0$ was. This gives us the edge relation on $P_1$: \begin{align*} E_1^\Sigma :=\{\{G_ix,G_iy\}\ | \ \{x,y\}\in E_0, \ i\in\mathcal{A}\} \end{align*} The points that got stretched away from points in $\mathcal{C}$ should again be connected with these points to reflect the geometry of $K$. \begin{align*} E_1^I:=E_{1,1}^I:=\{\{c,G_{w^{c,l}}(q^c_l)\}\ | \ c\in\mathcal{C}, l\in\{1,\ldots,\rho(c)\}\} \end{align*} We know that $G_{w^{c,l}}(q^c_l)$ is an element of $P_1$. This gives us the graph $(V_1,E_1)$ with vertices $V_1=P_1\cup C_1$ and edge set $E_1:=E_1^\Sigma\cup E_1^I$. \begin{figure} \caption{$\protect(V_1,E_1)$ for the Stretched Level 3 Sierpinski Gasket} \end{figure} We are now ready to define the whole sequence of graphs. In general the vertices will consist of two different kinds of points. \begin{align*} P_n&:=\bigcup_{w\in\mathcal{A}^n}G_w (V_0)\\ C_{k,k}&:=\bigcup_{w\in\mathcal{A}^{k-1}}G_w (\mathcal{C})\\ C_n&:=\bigcup_{k=1}^n C_{k,k}\\ \Rightarrow V_n&:=P_n\cup C_n \end{align*} Similarly we define the edge set. \begin{align*} E_n^\Sigma&:=\{\{G_wx,G_wy\}\ | \ \{x,y\}\in E_0, \ w\in\mathcal{A}^n\}\\ E_{k,k}^I&:=\{\{G_wx,G_wy\}\ | \ \{x,y\}\in E_{1,1}^I, \ w\in\mathcal{A}^{k-1}\}\\ E_n^I&:=\bigcup_{k=1}^n E_{k,k}^I\\ \Rightarrow E_n&:=E_n^\Sigma\cup E_n^I \end{align*} We will call the edges in $E_n^I$ connecting edges and the ones in $E_n^\Sigma$ fractal edges.\\ This leads to a sequence of graphs $\Gamma_n:=(V_n,E_n)$. We introduce some notation: \begin{align*} V_\ast:=\bigcup_{n\geq 0} V_n\\ P_\ast:=\bigcup_{n\geq 1} P_n\\ C_\ast:=\bigcup_{n\geq 1} C_n \end{align*} We know from general theory that $P_\ast $ is dense in $\Sigma$. \begin{align*} \Rightarrow \overline{V}_\ast=\Sigma\cup C_\ast \end{align*} This can be seen with \cite[Lemma 3.9]{sn08} if we choose $\mathcal{C}$ as the condensation set or inhomogeneity. \subsection{Harmonic structures} Until now we have the approximating graphs. We still need resistances on the edges in order to define quadratic forms and thus operators. 
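Before turning to resistances, it may help to see the recursion for $(V_n,E_n)$ carried out concretely. The following Python sketch is again purely illustrative (it assumes the Stretched Sierpinski Gasket with $\alpha=\tfrac 12$ and concrete coordinates chosen only for the example): it builds $V_n=P_n\cup C_n$ and $E_n=E_n^\Sigma\cup E_n^I$ exactly as defined above and prints their sizes. These are the graphs on which the resistances defined next will live.
\begin{verbatim}
import itertools
import math

# Minimal sketch (illustrative): the approximating graphs (V_n, E_n) of the
# Stretched Sierpinski Gasket, following the recursion for P_n, C_n, E_n.
alpha = 0.5
P = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]   # V_0 = post critical set

def G(i, x):
    return (P[i][0] + 0.5 * alpha * (x[0] - P[i][0]),
            P[i][1] + 0.5 * alpha * (x[1] - P[i][1]))

def G_word(w, x):
    for i in reversed(w):                 # G_w = G_{w_1} o ... o G_{w_n}
        x = G(i, x)
    return x

def words(n):
    return list(itertools.product(range(3), repeat=n))

V0 = set(P)
E0 = {frozenset(e) for e in itertools.combinations(P, 2)}
crit = [((P[i][0] + P[j][0]) / 2, (P[i][1] + P[j][1]) / 2, i, j)
        for i, j in itertools.combinations(range(3), 2)]            # critical points
E11 = {frozenset({(cx, cy), G(i, P[j])}) for cx, cy, i, j in crit} | \
      {frozenset({(cx, cy), G(j, P[i])}) for cx, cy, i, j in crit}  # E_{1,1}^I

def graph(n):
    P_n = {G_word(w, q) for w in words(n) for q in V0}
    C_n = {G_word(w, (cx, cy)) for k in range(1, n + 1)
           for w in words(k - 1) for cx, cy, _, _ in crit}
    E_sigma = {frozenset(G_word(w, p) for p in e) for w in words(n) for e in E0}
    E_I = {frozenset(G_word(w, p) for p in e)
           for k in range(1, n + 1) for w in words(k - 1) for e in E11}
    return P_n | C_n, E_sigma | E_I

for n in range(4):
    V_n, E_n = (V0, E0) if n == 0 else graph(n)
    print(n, len(V_n), len(E_n))
\end{verbatim}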
Define resistance functions \begin{align*} r_n:E_n\rightarrow [0,\infty] \end{align*} that assign each edge in $E_n$ a resistance.\\[.2cm] We want to choose resistances on $E_n$ in such a way that the electrical networks $(V_n,E_n,r_n)$ are all equivalent and thus a compatible sequence (compare \cite[Def. 2.5]{kig03}). Similar to the self-similar case it suffices to have the existence of $r_0$ and $r_1$ such that $(V_0,E_0,r_0)$ and $(V_1,E_1,r_1)$ are equivalent. Such values (or functions) will be called a harmonic structure in analogy to the self-similar case (compare \cite[Def. 9.5]{kig03}). \\ For an edge $e=\{x,y\}$ we write $G_i(e):=\{G_i(x),G_i(y)\}$. Choose values \begin{align*} r_e:=r_0(e)\in(0,\infty], \ \forall e \in E_0\\ \rho_e:=r_1(e)\in(0,\infty) , \ \forall e\in E_1^I \end{align*} and $0<\lambda<1$. Then define $r_1$ on the remaining edges in $E^\Sigma_1$ as follows: \begin{align*} r_1(G_i(e)):=\lambda r_0(e), \ \forall e \in E_0, i\in \mathcal{A} \end{align*} With this we have chosen all values for $r_0$ and $r_1$. Since we allow that $r_0(e)=\infty$ we need to make sure that the network is connected. \begin{definition} Let $(V,E)$ be a finite graph and $r: E\rightarrow [0,\infty]$. \\We call an electrical network $(V,E,r)$ connected if for all $p,\tilde p\in V$ there exist $\{p_0,p_1\},\ldots,\{p_{n-1},p_n\}\in E$ with $p_0=p$ and $p_n=\tilde p$ such that $r(\{p_i,p_{i+1}\})<\infty$ for all $i\in\{0,\ldots,n-1\}$.\\ \end{definition} If the electrical networks $(V_0,E_0,r_0)$ and $(V_1,E_1,r_1)$ are equivalent and the network $(V_0,E_0,r_0)$ is connected we call $(r_0,\lambda,\{\rho_e\}_{e\in E_1^I})$ a harmonic structure for $(G_1,\ldots,G_N)$. \\[.2cm] \textit{Electrically equivalent} can also be expressed in terms of quadratic forms. We also write $r_0(x,y):=r_0(\{x,y\})$. \begin{align*} E_0(f):=\sum_{\{x,y\}\in E_0} \frac 1{r_0(x,y)}(f(x)-f(y))^2\\ E_1(f):=\sum_{\{x,y\}\in E_1} \frac 1{r_1(x,y)}(f(x)-f(y))^2 \end{align*} with $r_0$ and $r_1$ chosen like before. The trace of $E_1(\cdot)$ on $V_0$ is \begin{align*} E_1|_{V_0}(g)=\inf\{E_1(f)\ | \ f:V_1\rightarrow \mathbb{R}, \ f|_{V_0}=g\} \end{align*} We can now give a definition of harmonic structures. \begin{definition}[Harmonic structure] $(r_0,\lambda,\{\rho_e\}_{e\in E_1^I})$ is a harmonic structure on~$K$ if and only if \begin{enumerate} \item $(V_0,E_0,r_0)$ is connected \item $E_1|_{V_0}(g)=E_0(g)\quad \text{for all}\quad g:V_0\rightarrow \mathbb{R}$ \end{enumerate} \end{definition} For fixed $r_0$ we cannot expect that $\lambda$ and $\{\rho_e\}_{e\in E_1^I}$ are unique. In fact, this is a major feature of stretched fractals (see \cite{afk17} or chapter~\ref{chapexamples}).\\ For the Stretched Level 3 Sierpinski Gasket you can see the resistances in the following figure.\\ \begin{figure} \caption{Resistances on $\protect(V_0,E_0)$ and $\protect(V_1,E_1)$} \end{figure} In the next graph approximation the electrical network $(V_2,E_2,r_2)$ has to be equivalent to $(V_1,E_1,r_1)$ and thus to $(V_0,E_0,r_0)$. The edges in $E_1^I$ are still part of $E_2$ and are not transformed in any way, so they will have the same resistance as in $(V_1,E_1,r_1)$. The cells $G_iV_0$ get divided in the same fashion as $V_0$ was in the first step but the resistances are now scaled by $\lambda$ compared to the values on $(V_0,E_0,r_0)$. We can therefore choose another \textit{harmonic structure} to get electrically equivalent networks. 
We choose the same resistances for all $1$-cells.\\ For the Stretched Level 3 Sierpinski Gasket you can see the second graph approximation in the following picture. The dotted lines indicate that the problem of choosing resistances is exactly the same as before in the first graph approximation. \begin{figure} \caption{Resistances on second graph approximation} \end{figure} We can follow this procedure in each step and thus we have to choose a sequence of harmonic structures: \begin{align*} \mathcal{R}:=(r_0,\lambda_i,\{\rho_e^i\}_{e\in E_1^I})_{i\geq 1} \end{align*} such that $(r_0,\lambda_i,\{\rho_e^i\}_{e\in E_1^I})$ is a harmonic structure for all $i$. Notice that $r_0$ has to be the same for all harmonic structures.\\ With this sequence we can define the values for $r_n$: \begin{enumerate} \item $r_n$ on $E_n^\Sigma$ \begin{align*} r_n(G_we):=\lambda_1\cdots\lambda_n r_0(e), \\ e\in E_0, w\in\mathcal{A}^n \end{align*} \item $r_n$ on $E_n^I$ \begin{align*} r_1(e)&=\rho^1_e, \ e\in E^I_1\\[.4cm] r_n(G_we)&=\lambda_1\cdots \lambda_{|w|}\rho_e^{|w|+1}, \\ w&\in \bigcup_{k=1}^{n-1} \mathcal{A}^k, \ e\in E_1^I, \ n\geq 2 \end{align*} \end{enumerate} By the definition of harmonic structures $(V_n,E_n,r_n)_{n\geq 0}$ is a sequence of equivalent electrical networks.\\[.2cm] Since $r_0$ is fixed for the whole sequence of harmonic structures we omit it in the notation of $\mathcal{R}$ whenever we don't explicitly need it. Additionally, for the sake of notation, we write $\boldsymbol{\rho}^i:=\{\rho_e^i\}_{e\in E_1^I}$. \begin{definition}[Regular sequence of harmonic structures] Let $\mathcal{R}=(\lambda_i,\boldsymbol{\rho}^i)_{i\geq 1}$ be a sequence of harmonic structures (with fixed $r_0$). We call $\mathcal{R}$ a regular sequence of harmonic structures if it fulfills the following two conditions: \begin{enumerate}[(1)] \item $\exists \lambda^\ast<1$ such that $\lambda_i\leq \lambda^\ast$ for all $i$ \item $\rho^\ast:=\sup\{\rho\ | \ \rho \in\boldsymbol{\rho}^i, \ i\geq 1\}<\infty$ \end{enumerate} \end{definition} Condition $(1)$ is an immediate generalization of regular harmonic structures from \cite[Def. 9.5]{kig03}. Condition $(2)$ is a technical condition that we need in order to show the existence of resistance forms on $K$. \subsection{Examples}\label{chapexamples} We only have to consider the graphs $(V_0,E_0)$ and $(V_1,E_1)$ and choose resistances on the edges in accordance with this chapter such that the electrical networks are equivalent. \subsubsection{Stretched Sierpinski Gasket} This has been handled in \cite{afk17}. However, the edge set $E^I_1$ used there was slightly different, since the copies were connected by only one end-to-end edge. \begin{figure} \caption{Harmonic structure on Stretched Sierpinski Gasket} \label{harm_ssg} \end{figure} Let us choose the resistances as in Figure~\ref{harm_ssg}. That means $r_0\equiv 1$. With this choice we are in the framework of \cite{afk17}. From this work we know that \begin{align} \frac 53\lambda+\rho=1 \label{ssgharmeq} \end{align} This can be seen by a quick calculation with the help of the $\Delta$-Y transformation. All sequences $\mathcal{R}=(\lambda_i,\boldsymbol{\rho}^i)_{i\geq 1}$ with $\boldsymbol{\rho}^i\equiv \frac {\rho_i} 2$ and $\tfrac 53\lambda_i+\rho_i=1$ for all $i$ are regular sequences of harmonic structures. From (\ref{ssgharmeq}) we know that $0<\lambda_i< \frac 35$, where the upper bound $\tfrac 35$ is exactly the renormalization factor in the self-similar case \cite[Example 8.2]{kig93}. 
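As a sanity check, relation (\ref{ssgharmeq}) can also be verified numerically. The following Python sketch is illustrative only; it assumes the combinatorial structure of $(V_1,E_1)$ described above, with each of the two connecting edges at a critical point carrying resistance $\rho/2$ (consistent with $\boldsymbol{\rho}^i\equiv\frac{\rho_i}2$). It computes the effective resistance between two points of $V_0$ via the pseudoinverse of the weighted graph Laplacian; for any $\lambda\in(0,\tfrac 35)$ with $\rho=1-\tfrac 53\lambda$ it returns $\tfrac 23$, the effective resistance between two corners of the unit triangle $(V_0,E_0,r_0\equiv 1)$.
\begin{verbatim}
import numpy as np

def effective_resistance(n_vertices, edges, x, y):
    """Effective resistance between x and y in a resistor network.

    edges: list of (u, v, resistance); uses the pseudoinverse of the
    weighted graph Laplacian."""
    L = np.zeros((n_vertices, n_vertices))
    for u, v, r in edges:
        c = 1.0 / r                       # conductance
        L[u, u] += c; L[v, v] += c
        L[u, v] -= c; L[v, u] -= c
    e = np.zeros(n_vertices)
    e[x], e[y] = 1.0, -1.0
    return float(e @ np.linalg.pinv(L) @ e)

lam = 0.3
rho = 1.0 - 5.0 / 3.0 * lam               # the harmonic-structure relation
# 0,1,2 = points of V_0; 3..8 = remaining stretched corners; 9..11 = critical pts
P1, P2, P3, a12, a13, a21, a23, a31, a32, c12, c13, c23 = range(12)
edges = [(P1, a12, lam), (P1, a13, lam), (a12, a13, lam),    # copy 1
         (P2, a21, lam), (P2, a23, lam), (a21, a23, lam),    # copy 2
         (P3, a31, lam), (P3, a32, lam), (a31, a32, lam),    # copy 3
         (a12, c12, rho / 2), (a21, c12, rho / 2),           # connecting edges
         (a13, c13, rho / 2), (a31, c13, rho / 2),
         (a23, c23, rho / 2), (a32, c23, rho / 2)]

print(effective_resistance(12, edges, P1, P2))   # ~ 2/3
\end{verbatim}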
\subsubsection{Stretched Level 3 Sierpinski Gasket} We choose the resistances on $(V_0,E_0)$ and $(V_1,E_1)$ in the following way. \begin{figure} \caption{Harmonic structure on Stretched Level 3 Sierpinski Gasket} \label{harm_ssg3} \end{figure} These networks are electrically equivalent if and only if the following equation holds. \begin{align*} 5\lambda^2+\lambda\left(\frac{25}3\rho-\frac 73\right)+5\rho^2-\rho=0 \end{align*} We can show that this allows pairs $(\lambda,\rho)$ for all $\lambda\in(0,\frac 7{15})$. If we choose such a harmonic structure in each step we get regular sequences of harmonic structures. The upper limit $\frac 7{15}$ for $\lambda$ is exactly the renormalization in the self-similar case \cite{str00}. \subsubsection{Stretched Sierpinski Gasket in higher dimensions} The graph $(V_0,E_0)$ is the complete graph with $d+1$ vertices where all edges have resistance $1$. In the first graph approximation we have $d+1$ complete graphs which are all connected, via a critical point each, to all other $d$ complete graphs. The remaining vertices are the fixed points of the similitudes. The resistances of the edges in the complete graphs are $\lambda$ and the ones on the connecting edges are $\frac \rho 2$. \begin{figure} \caption{Harmonic structure on Sierpinski Gasket in $\protect\mathbb{R}^d$} \label{harm_ssgd} \end{figure} With the help of the star-mesh transformation, which is a generalization of the $\Delta$-Y transformation and originally due to Campbell \cite{ca11}, we can show that these networks are equivalent if and only if \begin{align*} \lambda\cdot \frac{d+3}{d+1}+\rho=1 \end{align*} which means that we can reach every $\lambda\in(0,\frac{d+1}{d+3})$ by a pair $(\lambda,\rho)$. The upper limit is the renormalization in the self-similar case \cite{dh77}. \subsubsection{Stretched Vicsek Set} Let us choose the following resistances: \begin{figure} \caption{Harmonic structure on Stretched Vicsek Set} \label{harm_svicsek} \end{figure} A quick calculation shows that these networks are equivalent if $(\lambda,\rho)$ satisfies \begin{align*} 3\lambda+ 4\rho=1 \end{align*} The choice of $r_0\equiv 1$ comes from the resistances in the self-similar case. In that case this is the only choice which gives us a non-degenerate harmonic structure (see \cite[pp. 85--86]{barl98}). We use this information and generalize it to the stretched case. We can reach all $\lambda\in(0,\frac 13)$, where again the upper bound is the renormalization in the self-similar case \cite{barl98}. \subsubsection{Stretched Hata's tree} We consider the graphs $(V_0,E_0)$ and $(V_1,E_1)$ with the following resistances: \begin{figure} \caption{Harmonic structure on Stretched Hata's tree} \label{harm_shata} \end{figure} Note that the resistance on the dotted edge is $\infty$ but the network is still connected. The networks are equivalent if \begin{align*} \lambda^2+\lambda+\rho=1 \end{align*} This is solvable for all $\lambda\in(0,\frac{\sqrt{5}-1}2)$, where the upper bound is the renormalization in the self-similar case \cite[Example 8.4]{kl93}. Note, however, that $r_0$ explicitly depends on $\lambda$. Since $r_0$ has to stay the same when we choose a sequence of harmonic structures, we see that in this case $(\lambda_i)_{i\geq 1}$ has to be constant. \section{Resistance forms}\label{chapter4} In this chapter we want to construct resistance forms on stretched fractals. The theory of resistance forms can be found in \cite{kig12}; we include the definition at this point. 
\begin{definition}\label{defires} Let $X$ be a set. A pair $(\mathcal{E},\mathcal{F})$ is called a resistance form on $X$ if it satisfies the following conditions (RF1) through (RF5): \begin{enumerate}[(RF1)] \item $\mathcal{F}$ is a linear subspace of $\ell(X)=\{u|u:X\rightarrow\mathbb{R}\}$ containing constants and $\mathcal{E}$ is a non-negative symmetric quadratic form on $\mathcal{F}$. $\mathcal{E}(u)=0$ if and only if $u$ is constant on $X$. \item Let $\sim$ be the equivalence relation on $\mathcal{F}$ defined by $u\sim v$ if and only if $u-v$ is constant on $X$. Then $(\mathcal{F}/{\sim},\mathcal{E})$ is a Hilbert space. \item If $x\neq y$, then there exists $u\in \mathcal{F}$ such that $u(x)\neq u(y)$. \item For any $p,q\in X$, \begin{align*} \sup\left\{\frac{|u(p)-u(q)|^2}{\mathcal{E}(u)}\ |\ u\in \mathcal{F},\mathcal{E}(u)>0\right\} \end{align*} is finite. The above supremum is denoted by $R_{(\mathcal{E},\mathcal{F})}(p,q)$ and it is called the resistance metric on $X$ associated with the resistance form $(\mathcal{E},\mathcal{F})$. \item For any $u\in \mathcal{F}, \overline{u}\in \mathcal{F}$ and $\mathcal{E}(\overline{u})\leq \mathcal{E}(u)$, where $\overline{u}$ is defined by \begin{align*} \overline{u}(p)=\begin{cases} 1 & \text{if } u(p)\geq 1\\ u(p) & \text{if } 0<u(p)<1\\ 0 & \text{if } u(p)\leq 0 \end{cases} \end{align*} \end{enumerate} \end{definition} \vspace*{1cm} We consider regular sequences of harmonic structures (with fixed $r_0$) \begin{align*} \mathcal{R}=(\lambda_i,\underbrace{\{\rho^i_{\{x,y\}}, \ \{x,y\}\in E_1^I\}}_{\boldsymbol{\rho}^i:=})_{i\geq 1} \end{align*} i.e. $\exists \lambda^\ast <1$ with $\lambda_i\leq \lambda^\ast, \ \forall i$. Also we have $r_n$ like before and write $r_n(e)=r_n(\{x,y\})=:r_n(x,y)$.\\ The resistance form will consist of two parts that represent the fractal and the line part that is present in these stretched fractals. The fractal part is very similar to the usual resistance form on the self-similar set (i.e. attractor of the $F_i$).\\[.2cm] We will first construct a resistance form on $V_\ast$ which doesn't consider the one-dimensional lines. This can be extended to a quadratic form on the closure of $V_\ast$ w.r.t. the resistance metric. Next we show that the Euclidean and resistance metric introduce the same topology, that means the closure of $V_\ast$ is the same with either one. We thus have a resistance form on $\Sigma\cup C_\ast$. The next step is to substitute parts of the resistance form and introduce Dirichlet energies on the one-dimensional lines. This is then shown to be a resistance form on the whole set $K$. Again the resistance metric (on $K$) introduces the same topology as the Euclidean metric. \subsection{Resistance form on $\protect V_\ast$} First we define a quadratic form on the approximating graphs that is associated to the energy on the electrical network.\\ \textbf{1. Fractal part}\\[.1cm] Let $u: V_0\rightarrow \mathbb{R}$. \begin{align*} \hat{\mathcal{E}}_{\mathcal{R},0}(u):=Q_{r_0}^\Sigma(u):=\sum_{\{x,y\}\in E_0} \frac 1{r_0(x,y)}(u(x)-u(y))^2 \end{align*} With this define a quadratic form for $u:V_n\rightarrow \mathbb{R}$ by \begin{align*} \hat{\mathcal{E}}_{\mathcal{R},n}^\Sigma(u):=\sum_{w\in\mathcal{A}^n} \frac 1{\lambda_1\cdots \lambda_n}Q_{r_0}^\Sigma (u\circ G_w) \end{align*} Use the abbreviation $\delta_n:=\lambda_1\cdots \lambda_n$.\\ \textbf{2. Line part}\\[.1cm] For $u:V_1\rightarrow \mathbb{R}$. 
\begin{align*} Q_{\boldsymbol{\rho}}^I(u):=\sum_{\{x,y\}\in E_1^I}\frac 1{\rho_{\{x,y\}}} (u(x)-u(y))^2 \end{align*} Again we can use this to define a quadratic form for $u:V_n\rightarrow \mathbb{R}$: \begin{align*} \hat{\mathcal{E}}_{\mathcal{R},n}^I(u):=Q_{\boldsymbol{\rho}^1}^I(u)+\sum_{k=2}^n \frac 1{\lambda_1\cdots \lambda_{k-1}}\underbrace{\sum_{w\in\mathcal{A}^{k-1}}Q_{\boldsymbol{\rho}^k}^I(u\circ G_w)}_{Q_{\boldsymbol\rho^k,k}^I(u):=} \end{align*} We denote $\gamma_1:=1$ and $\gamma_k:=\delta_{k-1}=\lambda_1\cdots \lambda_{k-1}$ for $k\geq 2$. With this notation we can write, for $n\geq 1$, \begin{align*} \hat{\mathcal{E}}_{\mathcal{R},n}^I(u):=\sum_{k=1}^n \frac 1{\gamma_k}Q_{\boldsymbol \rho^k,k}^I(u) \end{align*} We thus obtain a quadratic form on $\ell(V_n)=\{u|u:V_n\rightarrow \mathbb{R}\}$ for $n\geq 1$. \begin{align*} \hat{\mathcal{E}}_{\mathcal{R},n}(u):=\hat{\mathcal{E}}^\Sigma_{\mathcal{R},n}(u)+\hat{\mathcal{E}}^I_{\mathcal{R},n}(u) \end{align*} Since $V_n$ is finite, all these quadratic forms are resistance forms, and since the graphs form a sequence of equivalent electrical networks, the resistance forms $(\hat{\mathcal{E}}_{\mathcal{R},n},\ell(V_n))_{n\geq 0}$ form a compatible sequence of resistance forms. That means $(\hat{\mathcal{E}}_{\mathcal{R},n}(u|_{V_n}))_{n\geq 0}$ is a non-decreasing sequence for all $u\in \ell( V_\ast)$ and, therefore, a limit exists in $[0,\infty]$. \begin{align*} \hat{\mathcal{E}}_\mathcal{R}(u):=\lim_{n\rightarrow\infty}\hat{\mathcal{E}}_{\mathcal{R},n}(u|_{V_n}) \end{align*} which is defined on \begin{align*} \hat{\mathcal{F}}_{\mathcal{R}}:=\{u\ |\ u\in\ell(V_\ast), \ \lim_{n\rightarrow\infty}\hat{\mathcal{E}}_{\mathcal{R},n}(u|_{V_n})<\infty\} \end{align*} From general theory it follows that $(\hat{\mathcal{E}}_\mathcal{R},\hat{\mathcal{F}}_\mathcal{R})$ is a resistance form on $V_\ast$ (see \cite[Theorem 3.13]{kig12}). \subsection{Resistance form on $\protect \Sigma\cup C_\ast$} By general theory (again \cite[Theorem 3.13]{kig12}) we know that this can be extended to a resistance form on $\overline{V}_\ast$ where the closure is taken w.r.t. the resistance metric of $(\hat{\mathcal{E}}_\mathcal{R},\hat{\mathcal{F}}_\mathcal{R})$. We will denote this metric by $\hat{R}_{\mathcal{R}}(\cdot,\cdot)$. We are, however, interested in a resistance form on $\Sigma\cup C_\ast$. We want to show that the resistance metric and the Euclidean metric induce the same topology and that, therefore, we get a resistance form on $\overline{V}_\ast=\Sigma\cup C_\ast$, where the closure can be taken with either metric.\\[.2cm] Denote by $R_{\mathcal{R},m}(\cdot,\cdot)$ the resistance metric on $V_m$ coming from $(\hat{\mathcal{E}}_{\mathcal{R},m},\ell(V_m))$. The diameter of a set $X$ w.r.t. a metric $d$ is denoted by $\operatorname{diam}(X,d):=\sup\{d(x,y) \ : \ x,y\in X\}$. \begin{lemma}\label{lem41} $\operatorname{diam}(V_m,R_{\mathcal{R},m})\leq c<\infty, \ \forall m$ with a constant $c$ that only depends on $\lambda^\ast$, $\rho^\ast$ and $r_0$. \end{lemma} \begin{proof} Define a constant: \begin{align*} C:=\left(\sum_{c \in \mathcal{C}}\rho(c)\right)\rho^\ast+ N\sum_{\substack{e\in E_0\\ r_0(e)<\infty}} r_0(e) \end{align*} The first sum is exactly the number of connecting edges in $(V_1,E_1)$, that is, we could also write $\#E_1^I$. We then multiply it by an upper bound for all $\rho^i_e$. The second part is the sum of all finite resistances in $E_0$, multiplied by the number of similitudes. Note that $\lambda_1\leq 1$. 
That means $C$ is an upper bound for the sum of all finite resistances on $(V_1,E_1)$, independent of the choice of $\boldsymbol \rho$ from the $\boldsymbol{\rho}^i$.\\ Now let $q$ be any point of $V_1$, then it holds that \begin{align*} R_{\mathcal{R},1}(q,p)\leq C, \ \forall p\in V_0 \end{align*} Since $(V_1,E_1)$ is connected there is a path from $q$ to $p$ in which each edge has finite resistance and is only used once. In $C$ we count each edge of $E_1$ with finite resistance and, therefore, get an upper bound for the summed resistances along this path. Due to the triangle inequality and the fact that the effective resistance is always less than or equal to the direct resistance we get the desired inequality.\\ \begin{figure} \caption{Connect $\protect V_1$ with $\protect V_0$} \end{figure} Next let $q_1$ be any point of $V_2$ and look for a path to the nearest point in $V_1$ (call it $p_1$). The problem is the same as from $V_1$ to $V_0$, but the resistances are multiplied by $\lambda_1$. That means \begin{align*} R_{\mathcal{R},2}(q_1,p_1)\leq \lambda_1 C \end{align*} \begin{figure} \caption{Connect $\protect V_2$ with $\protect V_1$} \end{figure} Now let $q\in V_n$; we want to define a sequence of points in $V_k$ from some $p\in V_0$ to $q$. First assume that $q\in P_n$, that is, $q=G_{w_1\cdots w_n}(\tilde p)$ for some $\tilde p\in V_0$. Then define \begin{align*} q_n&:=q\\ q_k&:=G_{w_1\cdots w_k}(\tilde p), \ k =1,\ldots,n-1\\ q_0&:=p\in V_0,\ (\text{arbitrary}) \end{align*} \begin{figure} \caption{Path from $\protect q$ to $\protect p$} \end{figure} Actually we can choose any point $\tilde p\in V_0$ for the definition of $q_k$; it is only important that $q_k$ and $q_{k+1}$ are in the same $(k+1)$-cell. If $q$ is not in $P_n$, that is, $q\in C_n$, we have to add an additional point $q_{n}\in P_n$. Choose one that is connected to $q$ in $\Gamma_n$ and define $q_{n+1}=q$. This is always possible and the resistance of this edge is always $\leq \rho^\ast$. \begin{align*} \Rightarrow R_{\mathcal{R},n}(q,p)&\leq \underbrace{\rho^\ast}_{\text{if $q$ is not in $P_n$}} + \sum_{k=1}^n R_{\mathcal{R},n}(q_k,q_{k-1})\\ &\leq \rho^\ast +\sum_{k=1}^nR_{\mathcal{R},k}(q_k,q_{k-1})\\ &\leq \rho^\ast + \sum_{k=1}^n\underbrace{\lambda_1\cdots \lambda_{k-1}}_{:=1 \text{ for } k=1}C\\ &\leq \rho^\ast + C\sum_{k=1}^n (\lambda^\ast)^{k-1}\\ &\leq \rho^\ast + C\sum_{k=0}^\infty (\lambda^\ast)^{k}=:\tilde C<\infty \end{align*} This holds since the sequence of harmonic structures is regular and therefore $\lambda^\ast<1$.\\ Now if $q,\tilde q\in V_n$, then choose any point $p\in V_0$. \begin{align*} \Rightarrow R_{\mathcal{R},n}(q,\tilde q)&\leq R_{\mathcal{R},n}(q,p)+R_{\mathcal{R},n}(p,\tilde q)\\ &\leq 2\tilde C \end{align*} Therefore \begin{align*} \operatorname{diam}(V_n,R_{\mathcal{R},n})\leq 2\tilde C, \ \forall n \end{align*} The points $q_0,\ldots,q_n$ can be chosen almost arbitrarily; the only condition is that $q_{k-1}$ and $q_k$ are in the same $k$-cell. Because of this we are allowed to choose the same point in $V_0$ for $q$ and $\tilde q$. \end{proof} In the self-similar case a certain rescaling property of the resistance form was very important. We have something similar here, although not quite as clean. \begin{lemma}[Rescaling of $\hat{\mathcal{E}}_\mathcal{R}$] \label{lemrescalinghat} Let $\mathcal{R}=(\lambda_i,\boldsymbol \rho^i)_{i\geq 1}$ be a sequence of harmonic structures for $r_0$ and let $\mathcal{R}^{(n)}:=(\lambda_{n+i},\boldsymbol\rho^{n+i})_{i\geq 1}$ be the shifted sequence that starts at index $n+1$. 
Then it holds for $u\in \hat{\mathcal{F}}_\mathcal{R}$ that $u\circ G_w\in\hat{\mathcal{F}}_{\mathcal{R}^{(n)}}$ for all $w\in\mathcal{A}^n$ and: \begin{align*} \hat{\mathcal{E}}_{\mathcal{R}}(u)=\sum_{w\in\mathcal{A}^n}\frac 1{\delta_n}\hat{\mathcal{E}}_{\mathcal{R}^{(n)}}(u\circ G_w)+\sum_{k=1}^n\frac 1{\gamma_k} Q_{\boldsymbol \rho^k,k}^I(u) \end{align*} \end{lemma} \begin{proof} \begin{align*} \hat{\mathcal{E}}_{\mathcal{R},n+m}(u)&=\hat{\mathcal{E}}^\Sigma_{\mathcal{R},n+m}(u)+\hat{\mathcal{E}}^I_{\mathcal{R},n+m}(u)\\ &=\sum_{w\in\mathcal{A}^{n+m}}\frac 1{\delta_{n+m}}Q_{r_0}^\Sigma(u\circ G_w)+\sum_{k=1}^{m+n}\frac 1{\gamma_k} \sum_{w\in\mathcal{A}^{k-1}}Q_{\boldsymbol\rho^k}^I(u\circ G_w)\\ &=\sum_{w\in\mathcal{A}^n} \frac 1{\delta_n}\sum_{\tilde w\in\mathcal{A}^m}\frac 1{\lambda_{n+1}\cdots\lambda_{n+m}}Q_{r_0}^\Sigma(u\circ G_{ w}\circ G_{\tilde w})\\ &\hspace*{.5cm} + \sum_{k=1}^n \frac 1{\gamma_k} Q_{\boldsymbol{\rho}^k,k}^I(u)\\ &\hspace*{.5cm} + \sum_{w\in\mathcal{A}^n}\frac 1{\delta_n}\sum_{k=1}^{m}\underbrace{\frac 1{\lambda_{n+1}\cdots \lambda_{n+k-1}}}_{:=1 \text{ for } k=1}\sum_{\tilde w\in\mathcal{A}^{k-1}}Q_{\boldsymbol\rho^{n+k}}^I(u\circ G_w\circ G_{\tilde w})\\ &=\sum_{w\in\mathcal{A}^n}\frac 1{\delta_n}\left(\hat{\mathcal{E}}^\Sigma_{\mathcal{R}^{(n)},m}(u\circ G_w)+\hat{\mathcal{E}}^I_{\mathcal{R}^{(n)},m}(u\circ G_w)\right)+\sum_{k=1}^n \frac 1{\gamma_k}Q_{\boldsymbol\rho^k,k}^I(u)\\ &=\sum_{w\in\mathcal{A}^n}\frac 1{\delta_n}\hat{\mathcal{E}}_{\mathcal{R}^{(n)},m}(u\circ G_w)+\sum_{k=1}^n\frac 1{\gamma_k}Q_{\boldsymbol\rho^k,k}^I(u) \end{align*} By taking the limit $m\rightarrow\infty$ we get the desired result. \end{proof} \begin{lemma} \label{lem43} $\operatorname{diam}(G_wV_\ast,\hat R_{\mathcal{R}})\leq c\delta_n$ for all $w\in\mathcal{A}^n$ with a constant $c$ only depending on $\lambda^\ast$, $\rho^\ast$ and $r_0$. \end{lemma} \begin{proof} From the rescaling we immediately get for all $w\in\mathcal{A}^n$: \begin{align*} \frac 1{\delta_n}\hat{\mathcal{E}}_{\mathcal{R}^{(n)}}(u\circ G_w) \leq \hat{\mathcal{E}}_{\mathcal{R}}(u) \end{align*} Let $p,q\in G_w(V_\ast)$, that is, there exist $x,y\in V_\ast$ such that $p=G_w(x)$ and $q=G_w(y)$. For $u\in\hat{\mathcal{F}}_\mathcal{R}$ \begin{align*} \frac{|u(p)-u(q)|^2}{\hat{\mathcal{E}}_\mathcal{R}(u)}\leq \delta_n \frac{|u(G_w(x))-u(G_w(y))|^2}{\hat{\mathcal{E}}_{\mathcal{R}^{(n)}}(u\circ G_w)}\leq \delta_n \hat R_{\mathcal{R}^{(n)}}(x,y) \end{align*} Since $x,y\in V_\ast$ there exists $k\in\mathbb{N}$ with $x,y\in V_k$. Then the effective resistance between $x$ and $y$ can be calculated as the effective resistance on the graph $(V_k,E_k)$ with the resistance function that belongs to the sequence $\mathcal{R}^{(n)}$. From Lemma~\ref{lem41} we know that there is a constant $c$ that only depends on $\lambda^\ast$, $\rho^\ast$ and $r_0$, and thus the bound is valid for $\mathcal{R}^{(n)}$ for all $n$. \begin{align*} \hat R_{\mathcal{R}^{(n)}}(x,y)=R_{\mathcal{R}^{(n)},k}(x,y)\leq c \end{align*} This leads to \begin{align*} \frac{|u(p)-u(q)|^2}{\hat{\mathcal{E}}_\mathcal{R}(u)}\leq \delta_n c, \ \forall u \in\hat{\mathcal{F}}_\mathcal{R} \end{align*} Taking the supremum over all $u\in \hat{\mathcal{F}}_\mathcal{R}$ leads to $\hat R_{\mathcal{R}}(p,q)\leq \delta_n c$. This holds for all $p,q\in G_w(V_\ast)$, which gives us the desired result. \end{proof} This means the diameter of $n$-cells tends to $0$ as the cells get smaller, i.e.\ as $n$ gets larger. This will be very important when comparing Cauchy sequences. 
Roughly speaking: Cauchy sequences eventually have to stay in cells of decreasing size (or in some fixed $C_k$). The diameter of small $n$-cells (i.e. big $n$) goes to zero in the resistance metric as well as in the Euclidean metric. We now want to give a precise proof of this fact. \begin{lemma} \label{lem44} The topology of $V_\ast$ is the same with either the resistance metric $\hat{R}_\mathcal{R}$ or the Euclidean metric. \end{lemma} \begin{proof} We show that $(V_\ast,\hat{R}_\mathcal{R})$ and $(V_\ast,|\cdot|_e)$ have the same Cauchy sequences.\\ First let $(x_i)_{i\geq 1}$ be a Cauchy sequence with respect to the Euclidean metric $|\cdot|_e$ in $V_\ast$. Then there are two possibilities:\\ Since $P_n$ and $C_n$ are positively separated w.r.t. $|\cdot|_e$, there is either an $i_0\geq 1$ and an $x$ in some $C_n$ with $x_i=x$ for all $i\geq i_0$, or there is a $w\in\mathcal{A}^\mathbb{N}$ $(w=(w_1w_2\cdots))$ such that $\forall m \ \exists i_m\geq 1$ with $\forall i\geq i_m: x_i\in G_{w_1\cdots w_m}(V_\ast)$. In fact, this is only true because the $n$-cells are also positively separated. This is due to the stretching and it is not true in the self-similar case!\\ In the first case the sequence is obviously also a Cauchy sequence with respect to the resistance metric. Let us therefore look at the second case. \\ From Lemma~\ref{lem43} we know that $\operatorname{diam}(G_{w_1\cdots w_m}(V_\ast),\hat{R}_{\mathcal{R}})\leq \delta_m c \rightarrow 0$. Therefore, we have for all $k,l\geq i_m$ that $\hat{R}_{\mathcal{R}}(x_k,x_l)\leq \delta_m c$, which makes $(x_i)_{i\geq 1}$ a Cauchy sequence w.r.t. the resistance metric.\\ Now take any Cauchy sequence $(x_i)_{i\geq 1}$ with respect to the resistance metric $\hat{R}_\mathcal{R}$. We have \begin{align*} V_\ast=\bigcup_{w\in\mathcal{A}^n}G_w(V_\ast)\ \dot\cup\ C_n \end{align*} For $w\in\mathcal{A}^n$ define \begin{align*} u&\equiv 1, \ \text{on } G_w(V_\ast)\\ u&\equiv 0, \ \text{on } G_w(V_\ast)^c \end{align*} We can easily see that $u\in \hat{\mathcal{F}}_\mathcal{R}$ and thus \begin{align*} \hat{R}_{\mathcal{R}}(x,y)\geq \frac{|u(x)-u(y)|^2}{\hat{\mathcal{E}}_\mathcal{R}(u)}=\frac 1{\hat{\mathcal{E}}_\mathcal{R}(u)}>0 \end{align*} for all $x\in G_w(V_\ast)$ and $y\in G_w(V_\ast)^c$. Therefore \begin{align*} \inf\{\hat{R}_{\mathcal{R}}(x,y) \ : \ x\in G_w(V_\ast), y \in C_n\}&>0 \\ \text{and also } \ \inf\{\hat{R}_{\mathcal{R}}(x,y) \ : \ x\in G_w(V_\ast), y \in G_{\tilde w}(V_\ast), \ \tilde w\neq w\}&>0 \end{align*} Since we have only finitely many $n$-cells we can even find a common bound for all $n$-cells. This means that the $n$-cells are positively separated w.r.t. $\hat{R}_\mathcal{R}$ and also positively separated from $C_n$. We can therefore use the same argument as before: either the sequence eventually equals some $x$ in some $C_n$, or there is a decreasing sequence of cells in which all but finitely many of the $x_i$ lie. In either case $(x_i)_{i\geq 1}$ is also a Cauchy sequence w.r.t. the Euclidean metric. \end{proof} Due to this lemma we know that $\overline V_\ast=\Sigma\cup C_\ast$, where the closure is taken with respect to the resistance metric $\hat R_\mathcal{R}$, provided $\mathcal{R}$ is a regular sequence of harmonic structures. If $\mathcal{R}$ is not regular we are not able to prove this result. In this case it could happen that $\overline V_\ast$ is a proper subset of $\Sigma\cup C_\ast$ and thus we don't get a resistance form on $\Sigma\cup C_\ast$. This is analogous to the self-similar case (compare \cite[Prop. 20.7]{kig12}). 
Therefore, the choice of the terms \textit{regular} and \textit{harmonic structure} is justified. \subsection{Resistance form on $\protect K$} Until now we have resistance forms on $\Sigma\cup C_\ast$. However, we want to have resistance forms on $K$; that means we need to replace the squared differences along the edges representing connecting lines by a form that takes into account all values of $u$ along such a line and not just at its endpoints. For these one-dimensional lines we can use the usual Dirichlet energy. \\ Consider the edges in $E_1^I$. These are in one-to-one correspondence with the connecting lines $e_{c,l}$. For $\{x,y\}\in E_1^I$ we know that $x$ and $y$ are the endpoints of $e_{c,l}$ and thus define $\xi_{e_{c,l}}(t):=\xi_{xy}(t):=tx+(1-t)y$, for $t\in [0,1]$. In this way $u|_{e_{c,l}}\circ \xi_{xy}$ becomes a function on $[0,1]$. Consider the Dirichlet energy on this line \begin{align*} \mathcal{D}_{e_{c,l}}(u):=\mathcal{D}_{xy}(u):=\int_0^1 \left(\frac{d(u\circ \xi_{xy})}{dz}\right)^2dz \end{align*} This is defined whenever $u|_{e_{c,l}}\circ \xi_{e_{c,l}}$ is in $H^1[0,1]$. We see that it does not depend on the orientation of $\xi_{xy}$; therefore, the choice of endpoints of $e_{c,l}$ is not important.\\ For the first summand of the quadratic form we need to sum over all edges in $E_1^I$. \begin{align*} \mathcal{D}_{\boldsymbol\rho}(u):=\sum_{\{x,y\}\in E_1^I} \frac 1{\rho_{\{x,y\}}} \mathcal{D}_{xy}(u) \end{align*} Now we define the quadratic form $\mathcal{E}_\mathcal{R}$ that will replace $\hat{\mathcal{E}}_\mathcal{R}$, analogously to its definition: \begin{align*} \mathcal{E}^I_{\mathcal{R},n}(u):=\sum_{k=1}^n \frac 1{\gamma_k} \underbrace{\sum_{w\in\mathcal{A}^{k-1}} \mathcal{D}_{\boldsymbol \rho^k}(u\circ G_w)}_{\mathcal{D}_{\boldsymbol \rho^k,k}(u):=} \end{align*} and the whole form \begin{align*} \mathcal{E}_{\mathcal{R},n}(u):=\mathcal{E}_{\mathcal{R},n}^\Sigma(u)+\mathcal{E}_{\mathcal{R},n}^I(u) \end{align*} where $\mathcal{E}_{\mathcal{R},n}^\Sigma:=\hat{\mathcal{E}}_{\mathcal{R},n}^\Sigma$. Again we want to take the limit of these quadratic forms, but we first have to make sure that this limit is well defined. We introduce some further notation. \begin{align*} H^1(e^w_{c,l}):=\{u|u: e^w_{c,l}\rightarrow \mathbb{R}, \ u\circ \xi_{e^w_{c,l}} \in H^1[0,1]\} \end{align*} \begin{lemma}\label{lem45} $(\mathcal{E}_{\mathcal{R},n}(u))_{n\geq 0}$ is non-decreasing for $u\in C(K)$ with $u|_{e^w_{c,l}}\in H^1(e^w_{c,l})$ for all $c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}$ and $w\in\mathcal{A}^\ast_0$. \end{lemma} \begin{proof} \textit{(1.)} \begin{align*} \hat{\mathcal{E}}^I_{\mathcal{R},n}(u)\leq \mathcal{E}^I_{\mathcal{R},n}(u), \ \forall u \end{align*} This is true since $(u(1)-u(0))^2=\left(\int_0^1 u'(x)\,dx\right)^2\leq \int_0^1 \left(\frac{du}{dx}\right)^2 dx$ for all $u\in H^1[0,1]$ by the Cauchy--Schwarz inequality. This is the only difference between $\hat{\mathcal{E}}^I_{\mathcal{R},n}$ and $ \mathcal{E}^I_{\mathcal{R},n}$; the sums and prefactors are the same. 
In general we thus have $Q_{\boldsymbol\rho}^I(u)\leq \mathcal{D}_{\boldsymbol\rho}(u)$.\\[.2cm] \textit{(2.)} \begin{align*} \mathcal{E}_{\mathcal{R},n}(u)\leq \mathcal{E}_{\mathcal{R},n+1}(u), \ \forall u \end{align*} Since $(\lambda_{n+1},\boldsymbol\rho^{n+1})$ is a harmonic structure we have \begin{align*} Q_{r_0}^\Sigma(u)&\leq \frac 1{\lambda_{n+1}} \sum_{i\in \mathcal{A}}Q_{r_0}^\Sigma(u\circ G_i)+Q_{\boldsymbol \rho^{n+1}}^I(u)\\ \Rightarrow \sum_{w\in\mathcal{A}^n}Q_{r_0}^\Sigma(u\circ G_w)&\leq \frac 1{\lambda_{n+1}} \sum_{w\in \mathcal{A}^{n+1}}Q_{r_0}^\Sigma(u\circ G_w)+\sum_{w\in\mathcal{A}^n}Q_{\boldsymbol \rho^{n+1}}^I(u\circ G_w) \end{align*} Applying $(1.)$ to $Q_{\boldsymbol\rho^{n+1}}^I$ and multiplying both sides by $\frac 1{\delta_n}=\frac 1{\gamma_{n+1}}$ we get \begin{align*} \mathcal{E}_{\mathcal{R},n}^\Sigma(u)\leq \mathcal{E}_{\mathcal{R},n+1}^\Sigma(u)+\frac 1{\gamma_{n+1}}\sum_{w\in\mathcal{A}^n}\mathcal{D}_{\boldsymbol\rho^{n+1}}(u\circ G_w) \end{align*} If we now add $\sum_{k=1}^n\frac 1{\gamma_k} \mathcal{D}_{\boldsymbol\rho^k,k}(u)$ on both sides we get the desired result. \end{proof} We see that the limit is a well-defined object in $[0,\infty]$ and we therefore write \begin{align*} \mathcal{E}_{\mathcal{R}}(u):=\lim_{n\rightarrow\infty} \mathcal{E}_{\mathcal{R},n}(u) \end{align*} and define the domain as the set of functions with finite energy: \begin{align*} \mathcal{F}_\mathcal{R}:=\left\{u\ | \ u \in C(K), \ \begin{array}{l} u|_{e^w_{c,l}}\in H^1(e^w_{c,l})\ \forall c\in\mathcal{C}, l\in\{1,\ldots,\rho(c)\},w\in \mathcal{A}^\ast_0, \\ \lim_{n\rightarrow \infty}\mathcal{E}_{\mathcal{R},n}(u)<\infty\end{array}\right\} \end{align*} This form fulfills the same rescaling as $\hat{\mathcal{E}}_\mathcal{R}$: \begin{lemma}[Rescaling of $\mathcal{E}_\mathcal{R}$] \label{lemrescaling}For all $u\in\mathcal{F}_\mathcal{R}$ and $w\in\mathcal{A}^n$ we have $u\circ G_w\in\mathcal{F}_{\mathcal{R}^{(n)}}$ and \begin{align*} \mathcal{E}_\mathcal{R}(u)=\sum_{w\in\mathcal{A}^n}\frac 1{\delta_n} \mathcal{E}_{\mathcal{R}^{(n)}}(u\circ G_w) + \sum_{k=1}^n \frac 1{\gamma_k} \mathcal{D}_{\boldsymbol{\rho}^k,k}(u) \end{align*} \end{lemma} \begin{proof}This works exactly as for $\hat{\mathcal{E}}_\mathcal{R}$ in Lemma~\ref{lemrescalinghat}. \end{proof} We now want to show that $(\mathcal{E}_\mathcal{R},\mathcal{F}_\mathcal{R})$ is indeed a resistance form. \begin{theorem}\label{theoresform} Let $\mathcal{R}=(\lambda_i,\boldsymbol{\rho}^i)_{i\geq 1}$ be a regular sequence of harmonic structures on $K$. Then $(\mathcal{E}_\mathcal{R},\mathcal{F}_\mathcal{R})$ is a resistance form on $K$ and the associated resistance metric induces the same topology as the Euclidean metric. \end{theorem} In order to prove Theorem~\ref{theoresform} we have to verify (RF1)--(RF5) of Definition~\ref{defires}. 
\begin{lemma}\label{lem48} There is a constant $c$ only depending on $\lambda^\ast$, $\rho^\ast$ and $r_0$ such that we have for all $u\in\mathcal{F}_\mathcal{R}$ and $x,y\in K$ \begin{align*} |u(x)-u(y)|^2\leq c \mathcal{E}_\mathcal{R}(u) \end{align*} \end{lemma} \begin{proof} We have three distinct cases: \begin{enumerate}[(1)] \item $x,y\in \overline V_\ast$ \item $x\in \overline V_\ast$, $y\notin \overline V_\ast$ \item $x,y\notin \overline V_\ast$ \end{enumerate} For case (1) notice: If $u\in \mathcal{F}_\mathcal{R}$ we have $u|_{\overline V_\ast}\in \hat{\mathcal{F}}_\mathcal{R}$ since \begin{align*} \hat{\mathcal{E}}_\mathcal{R}(u|_{\overline V_\ast})\leq \mathcal{E}_\mathcal{R}(u) \end{align*} Since $(\hat{\mathcal{E}}_\mathcal{R},\hat{\mathcal{F}}_\mathcal{R})$ can be extended to a resistance form on $\overline V_\ast$ we get from Lemma~\ref{lem43} \begin{align*} |u(x)-u(y)|^2\leq c_1 \hat{\mathcal{E}}_\mathcal{R}(u|_{\overline V_\ast})\leq c_1 \mathcal{E}_\mathcal{R}(u) \end{align*} with a constant $c_1$ only depending on $\lambda^\ast$, $\rho^\ast$ and $r_0$.\\ Now consider case (2): $x\in \overline V_\ast$ and $y\notin \overline V_\ast$. That means $y$ is in some $e^w_{c,l}$ with $c\in\mathcal{C}$, $l\in\{1,\ldots,\rho(c)\}$, $w\in\mathcal{A}^\ast_0$ and in particular it is not one of the endpoints. Let $p$ be one of the endpoints of $e^w_{c,l}$; we may choose $p:=G_w(c)$. Then $p\in \overline V_\ast$, which means \begin{align*} |u(p)-u(x)|^2\leq c_1\mathcal{E}_\mathcal{R}(u) \end{align*} Now $y,p\in e^w_{c,l}$ and the resistance of $e^w_{c,l}$ is $\gamma_{|w|+1}\rho^{|w|+1}_{c,l}$. We thus have \begin{align*} |u(y)-u(p)|^2\leq \underbrace{\gamma_{|w|+1}\rho^{|w|+1}_{c,l}}_{\leq \rho^\ast} \frac 1{\gamma_{|w|+1}\rho^{|w|+1}_{c,l}}\mathcal{D}_{e^w_{c,l}}(u)\leq \rho^\ast \mathcal{E}_\mathcal{R}(u) \end{align*} The last inequality holds since the Dirichlet energy on $e_{c,l}^w$ is only one part of the whole energy. Since $(a+b)^2\leq 2a^2+2b^2$ we get \begin{align*} |u(x)-u(y)|^2&=|u(x)-u(p)+u(p)-u(y)|^2\\ &\leq 2|u(x)-u(p)|^2+2|u(p)-u(y)|^2\\ &\leq 2(c_1+\rho^\ast) \mathcal{E}_\mathcal{R}(u) \end{align*} We can use these two cases to handle the last one, (3): $x,y\notin \overline V_\ast$. \\ Choose any $p\in \overline V_\ast$, then \begin{align*} |u(x)-u(p)|^2\leq 2(c_1+\rho^\ast)\mathcal{E}_\mathcal{R}(u)\\ |u(y)-u(p)|^2\leq 2(c_1+\rho^\ast)\mathcal{E}_\mathcal{R}(u) \end{align*} and thus \begin{align*} |u(x)-u(y)|^2&\leq 2 |u(x)-u(p)|^2+2|u(y)-u(p)|^2\\ &\leq \underbrace{8(c_1+\rho^\ast)}_{c:=}\mathcal{E}_\mathcal{R}(u) \end{align*} Therefore, it holds for all $x,y\in K$ and $u\in \mathcal{F}_\mathcal{R}$: \begin{align*} |u(x)-u(y)|^2\leq c\mathcal{E}_\mathcal{R}(u) \end{align*} \end{proof} In analogy to Lemma~\ref{lem43} we can refine this result with the help of the rescaling property. \begin{corollary} \label{cor49} Let $x,y\in K_w$ with $w\in \mathcal{A}^n$ and $u\in \mathcal{F}_\mathcal{R}$. Then \begin{align*} |u(x)-u(y)|^2\leq c\delta_n\mathcal{E}_\mathcal{R}(u) \end{align*} \end{corollary} \begin{proof} Since the constant $c$ from Lemma~\ref{lem48} only depends on $\lambda^\ast$, $\rho^\ast$ and $r_0$ it also holds for $(\mathcal{E}_{\mathcal{R}^{(n)}},\mathcal{F}_{\mathcal{R}^{(n)}})$. There are $x^\prime, y^\prime \in K$ with $x=G_w(x^\prime)$ and $y=G_w(y^\prime)$. 
From the rescaling we know $u\circ G_w\in \mathcal{F}_{\mathcal{R}^{(n)}}$ and thus \begin{align*} |u(x)-u(y)|^2&=|u(G_w(x^\prime))-u(G_w(y^\prime))|^2\\ &\leq c\mathcal{E}_{\mathcal{R}^{(n)}}(u\circ G_w)\\ &\leq c\delta_n \mathcal{E}_\mathcal{R}(u) \end{align*} \end{proof} \begin{proof}[Proof of Theorem~\ref{theoresform}:] \ \\ (RF1): $\mathcal{F}_\mathcal{R}$ is a linear space and $\mathcal{E}_\mathcal{R}(u)\geq 0$ is obviously satisfied. If $\mathcal{E}_\mathcal{R}(u)=0$, then $\hat{\mathcal{E}}_\mathcal{R}(u|_{\overline V_\ast})=0$. Since this is a resistance form on $\overline V_\ast$ we know that $u$ is constant on $\overline V_\ast$. Also we know that $\mathcal{D}_{e^w_{c,l}}(u)=0$ for all $e^w_{c,l}$ and thus $u$ is constant on all of them. Since $G_w(c)\in e^w_{c,l}\cap \overline V_\ast$ the constants have to be the same on all parts and, therefore, $u$ is constant on $K$.\\[.2cm] (RF2): Fix any $p\in V_0$; then it is enough to show that $\mathcal{F}_{\mathcal{R},0}:=\{u|u\in\mathcal{F}_\mathcal{R}, \ u(p)=0\}$ is complete with respect to $\mathcal{E}_\mathcal{R}$. Let $(u_n)_{n\geq 1}$ be a Cauchy sequence with respect to $\mathcal{E}_\mathcal{R}$, i.e. \begin{align*} \mathcal{E}_\mathcal{R}(u_n-u_m)\rightarrow 0, \ \text{as } n,m\rightarrow\infty \end{align*} \begin{align*} |u_n(x)-u_m(x)|^2&=|(u_n-u_m)(x)-(u_n-u_m)(p)|^2\\ &\leq c\mathcal{E}_\mathcal{R}(u_n-u_m) \end{align*} That means $(u_n)_{n\geq 1}$ converges uniformly and, therefore, there is a $u\in C(K)$ with $u_n\rightarrow u$. Since $\mathcal{D}_{e^w_{c,l}}$ is a resistance form itself and $\mathcal{D}_{e^w_{c,l}}(u_n-u_m)\rightarrow 0$ we get that $u|_{e^w_{c,l}}\in H^1(e^w_{c,l})$. \\[.2cm] It remains to show that $u_n\rightarrow u$ with respect to $\mathcal{E}_\mathcal{R}$ and that $\mathcal{E}_\mathcal{R}(u)<\infty$. \begin{align*} \mathcal{E}_{\mathcal{R},k}(u_n-u_m)\leq \mathcal{E}_\mathcal{R}(u_n-u_m)\leq \underbrace{\sup_{m\geq n}\mathcal{E}_\mathcal{R}(u_n-u_m)}_{<\infty} \end{align*} If we let $m$ go to infinity we get \begin{align*} \mathcal{E}_{\mathcal{R},k}(u_n-u)\leq \sup_{m\geq n}\mathcal{E}_\mathcal{R}(u_n-u_m) \end{align*} We were able to replace $u_m$ by $u$ in the limit since in $\mathcal{E}_{\mathcal{R},k}$ only squared differences of $u$ and Dirichlet energies appear, and we already know that $u_m$ converges to $u$ with respect to these. Next we let $k\rightarrow\infty$: \begin{align*} \mathcal{E}_\mathcal{R}(u_n-u)\leq \sup_{m\geq n}\mathcal{E}_\mathcal{R}(u_n-u_m) \end{align*} That means $u_n-u\in\mathcal{F}_\mathcal{R}$, and since $u_n\in\mathcal{F}_\mathcal{R}$ this implies $u\in\mathcal{F}_\mathcal{R}$ because $\mathcal{F}_\mathcal{R}$ is a linear space. Also for $n\rightarrow\infty $ we get \begin{align*} \mathcal{E}_\mathcal{R}(u_n-u)\rightarrow 0\quad \text{ and thus } \quad u_n\xrightarrow{\mathcal{E}_\mathcal{R}}u. \end{align*} (RF3): (1) $x$ or $y\notin \overline V_\ast$. \\ Without loss of generality let $x$ be this point. Then there exists an $e^w_{c,l}$ with $x\in e^w_{c,l}$ but $x\notin \overline V_\ast$. \begin{align*} \Rightarrow \exists u\in\mathcal{F}_\mathcal{R}: \ u(x)=1, \ u|_{(e^w_{c,l})^c}\equiv 0 \end{align*} For example we can use linear interpolation between $x$ and the endpoints of $e^w_{c,l}$. Then $u\in \mathcal{F}_\mathcal{R}$ and $u(\tilde y)<1$ for all $\tilde y\neq x$.\\ (2) $x,y\in \overline V_\ast$:\\ We find a $u$ in the domain of the extension of $(\hat{\mathcal{E}}_\mathcal{R},\hat{\mathcal{F}}_\mathcal{R})$ to $\overline V_\ast$ with $u(x)\neq u(y)$. 
We can extend $u$ to a function $\tilde u$ by linear interpolation on all $e^w_{c,l}$. Then \begin{align*} \mathcal{E}_\mathcal{R}(\tilde u)=\hat{\mathcal{E}}_\mathcal{R}(u) \end{align*} and thus $\tilde u\in \mathcal{F}_\mathcal{R}$ with $\tilde u(x)=u(x)\neq u(y)=\tilde u(y)$.\\[.2cm] (RF4): This follows with Lemma~\ref{lem48}. \\[.2cm] (RF5): We have $\overline u=(0\vee u)\wedge 1$. It is clear that $\overline u \in C(K)$. Also $\overline u|_{e^w_{c,l}}\in H^1(e^w_{c,l})$.\\ We see that \begin{align*} |\overline u(x)-\overline u(y)|^2\leq |u(x)-u(y)|^2, \ \forall u,x,y \end{align*} and also \begin{align*} \mathcal{D}_{e^w_{c,l}}(\overline u)\leq \mathcal{D}_{e^w_{c,l}}( u) \end{align*} for all $e^w_{c,l}$. \begin{align*} &\mathcal{E}_{\mathcal{R},n}(\overline u)\leq \mathcal{E}_{\mathcal{R},n}(u)\\ \Rightarrow \ &\mathcal{E}_\mathcal{R}(\overline u)\leq \mathcal{E}_\mathcal{R}(u) \end{align*} $\Rightarrow \overline u \in \mathcal{F}_\mathcal{R}$.\\ So far we have shown that $(\mathcal{E}_\mathcal{R},\mathcal{F}_\mathcal{R})$ is a resistance form on $K$. It remains to show that the topologies with respect to the resistance and the Euclidean metric are the same.\\ Let $\iota:(K,|\cdot|_e)\rightarrow (K,R_\mathcal{R})$ be the identity mapping and $(x_n)_{n\geq 1}$ a sequence in $K$ with $x_n\xrightarrow{|\cdot|_e} x$. We have to show that $(x_n)_{n\geq 1}$ also converges to $x$ in the resistance metric in order to show that $\iota$ is continuous. \\[.2cm] We have three cases: \begin{enumerate}[(1)] \item $x$ lies in the interior of some $e^w_{c,l}$ \item $x\in C_\ast$ \item $x\in \overline V_\ast\backslash C_\ast=\Sigma$ \end{enumerate} Consider (1) $\Rightarrow \exists n_0\geq 0 \ : \ \forall n\geq n_0 \ : \ x_n\in e^w_{c,l}$. Let $u\in\mathcal{F}_\mathcal{R}$ \begin{align*} \frac{|u(x_n)-u(x)|^2}{\mathcal{E}_\mathcal{R}(u)}\leq \frac{|u(x_n)-u(x)|^2}{(\gamma_{|w|+1}\rho^{|w|+1}_{c,l})^{-1}\mathcal{D}_{e^w_{c,l}}(u)} \end{align*} Now $\mathcal{D}_{e^w_{c,l}}$ itself is a resistance form and its associated resistance metric is $\frac{|x-y|_e}{\operatorname{diam}(e^w_{c,l},|\cdot|_e)}$. \begin{align*} \Rightarrow \frac{|u(x_n)-u(x)|^2}{\mathcal{E}_\mathcal{R}(u)}\leq \gamma_{|w|+1}\rho^{|w|+1}_{c,l} \frac{|x_n-x|_e}{\operatorname{diam}(e^w_{c,l},|\cdot|_e)} \end{align*} \begin{align*} \Rightarrow R_\mathcal{R}(x_n,x)\leq \frac{\gamma_{|w|+1}\rho^{|w|+1}_{c,l}}{\operatorname{diam}(e^w_{c,l},|\cdot|_e)} |x_n-x|_e \xrightarrow{n\rightarrow\infty} 0 \end{align*} (2) $x\in C_\ast$, i.e. $x=G_w(c)$ for some $c\in\mathcal{C}$ and $w\in\mathcal{A}^\ast_0$. Then $\exists n_0\geq 0 $ such that \begin{align*} \forall n \geq n_0\ : \ x_n\in \bigcup_{l\in \{1,\ldots,\rho(c)\}}e^w_{c,l} \end{align*} That means the sequence may jump between the various lines that are connected to $x=G_w(c)$ in this ``spider's web''. We decompose the sequence $(x_n)_{n\geq 1}$ into the subsequences $\{x_n| \ x_n\in e^w_{c,l}\}$, $\forall l$, and $\{x_n|\ x_n=x\}$. For the latter it is clear that $R_\mathcal{R}(x_n,x)\rightarrow 0$, and to the former we can apply (1). We thus have $R_\mathcal{R}(x_n,x)\rightarrow 0$ along all subsequences and thus for the whole sequence $(x_n)_{i\geq 1}$.\\ (3) There is a word $w\in \mathcal{A}^\mathbb{N}$ such that $x=\lim_{m\rightarrow\infty} G_{w_1\cdots w_m}(p)$ (for an arbitrary $p\in K$). 
\\ Now either (I) for all $m$ we have $x_n\in G_{w_1\cdots w_m}(K)$ for $n$ big enough, \\ or (II) the sequence $(x_n)_{n\geq 1}$ can be divided into two parts, where the first contains all points that behave as in (I) and the second contains all points that do not (i.e. that lie in some lines $e^w_{c,l}$). \\ For the first case (I) we know that the diameter of $n$-cells goes to $0$ by Corollary~\ref{cor49} and thus the sequence converges in the resistance metric. For the second case (II) we can apply the ideas we already introduced. \\ That means the identity map $\iota: (K,|\cdot|_e)\rightarrow(K,R_\mathcal{R})$ is continuous. Since $(K,|\cdot|_e)$ is compact, so is $(K,R_\mathcal{R})$, and thus $\iota^{-1}$ is also continuous. Therefore, the topologies are the same. \end{proof} \section{Measures and operators}\label{chap_meas_oper} Until now we have resistance forms. To get Dirichlet forms and thus operators we need to introduce measures. These measures have to fulfill some requirements: they have to be locally finite (in particular finite, due to the compactness of $K$) and to be supported on the whole set $K$. \subsection{Measures} We want to describe the measures on $K$ as the sum of a fractal part and a line part, in accordance with the geometric structure of $K$. \\ It is clear what the fractal part of the measure should look like. We want as much symmetry as possible. Therefore, we use the normalized self-similar measure on $K$ which distributes mass equally onto the $m$-cells: \begin{align*} \mu_\Sigma(K_w)=\mu_\Sigma(\Sigma_w)=\left(\frac 1N\right)^{|w|} \end{align*} This gives us a measure on $K$ that fulfills: \begin{align*} \mu_\Sigma=\sum_{i=1}^N \frac 1N \cdot \mu_\Sigma \circ G_i^{-1} \end{align*} We see, however, that $\mu_\Sigma$ is only supported on the fractal dust $\Sigma$, which is the attractor of $(G_1,\ldots,G_N)$. This is a proper subset of $K$. Therefore, $\mu_\Sigma$ doesn't have full support. That means we can't use $\mu_\Sigma$ to get Dirichlet forms. This measure is too rough to measure the one-dimensional lines. We therefore need another measure that is able to measure these lines.\\ For this line part we want to ignore the length of $e^w_{c,l}$ with respect to the one-dimensional Lebesgue measure $\lambda^1$. Since we are analyzing $K$ only topologically, this value does not give us much information. We need a measure that assigns these lines some weights such that the weights have a finite sum. For the initial lines $e_{c,l}$ we set \begin{align*} \mu_I(e_{c,l}):=a_{c,l} \end{align*} with $a_{c,l}>0$ for $c\in \mathcal{C}$ and $l\in \{1,\ldots,\rho(c)\}$.\\ How should this measure scale for the lines $e^w_{c,l}$? For symmetry reasons we want the scaling to be independent of the $m$-cell under consideration. We thus define \begin{align*} \mu_I(e^w_{c,l}):=\beta^{|w|}a_{c,l} \end{align*} with some $\beta>0$. We easily see that we need $\beta<\frac 1N$ to get a finite measure on $J:=\bigcup_{n\geq 1}J_n$. On the lines we define the measure as follows \begin{align*} \mu_I|_{e^w_{c,l}}:= \beta^{|w|}a_{c,l} \cdot \frac{\lambda^1}{\lambda^1(e^w_{c,l})}, \ w\in \mathcal{A}^\ast_0 \end{align*} This means it behaves like the one-dimensional Lebesgue measure on $e^w_{c,l}$, but normalized and then scaled by $\beta^{|w|}a_{c,l}$. Therefore, it doesn't depend on the value of $\lambda^1(e^w_{c,l})$. If $\beta<\frac 1N$ we have $a:=\mu_I(J)<\infty$. We choose the $a_{c,l}$ such that $\mu_I(J)=1$, by dividing by $a$. 
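Before carrying out this normalization in general (which is done right below), here is a small numerical sketch in Python. It assumes the Stretched Sierpinski Gasket, i.e.\ $N=3$ copies and six initial lines $e_{c,l}$, and, purely for illustration, equal weights $a_{c,l}$. It confirms that choosing $\sum_{c,l}a_{c,l}=1-\beta N$ indeed gives $\mu_I(J)=1$, and it previews the cell scaling $\mu_I(K_w)=\beta^{|w|}$ derived in the next paragraph.
\begin{verbatim}
# Minimal numerical sketch (illustrative): the line measure mu_I for the
# Stretched Sierpinski Gasket, N = 3 copies, six initial lines e_{c,l}
# with (for simplicity) equal weights a_{c,l}.
N = 3
beta = 0.2                                  # any beta < 1/N
a = [(1 - beta * N) / 6] * 6                # normalization: sum a_{c,l} = 1 - beta*N

def mu_I_total(depth=200):
    # mu_I(J) = sum over words w of beta^{|w|} * sum_{c,l} a_{c,l};
    # there are N^{|w|} words of each length
    return sum((N * beta) ** k * sum(a) for k in range(depth))

def mu_I_cell(m, depth=200):
    # line mass inside one fixed m-cell K_w: only the lines e^{w w'}_{c,l}
    # with prefix w contribute, each scaled by an extra factor beta^m
    return beta ** m * mu_I_total(depth)

print(mu_I_total())                         # ~ 1.0
for m in range(4):
    print(m, mu_I_cell(m), beta ** m)       # the two columns agree
\end{verbatim}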
Calculating $\mu_I(J)$ shows that this normalization amounts to \begin{align*} \sum_{\substack{c\in\mathcal{C}\\l\in\{1,\ldots,\rho(c)\}}} a_{c,l}=1-\beta N \end{align*} If $\beta\rightarrow 0$ then more mass is distributed to bigger lines (big in the sense of short words $w$), and if $\beta\rightarrow \frac 1N$ the mass is distributed more equally, which reflects the geometry better. As a matter of fact $\beta=\frac 1N$ is not possible, which means the real geometry of $K$ is always somewhat distorted by $\mu_I$.\\ We know that $J$ is dense in $K$. Therefore, $\mu_I$ has full support and can be used to get Dirichlet forms. The measures that we will consider will be convex combinations of the two measures:\\ Let $\eta \in (0,1]$ and define \begin{align*} \mu_\eta:=\eta \mu_I+(1-\eta)\mu_\Sigma \end{align*} $\eta=0$ is not allowed, since $\mu_\Sigma$ doesn't have full support. $\mu_1=\mu_I$, however, can be used alone. In this case we don't have any fractal part in the measure and this will be reflected in the spectral asymptotics.\\ Now we want to know how $\mu_\eta$ scales with $w$. For the fractal part this is clear. We have \begin{align*} \mu_\Sigma(K_w)=\left(\frac 1N\right)^{|w|} \end{align*} For the line part we have the following, using that by construction all cells $K_w$ with $|w|$ fixed carry the same $\mu_I$-mass. \begin{align*} 1&=N^{|w|} \mu_I(K_w)+ \sum_{\substack{c\in \mathcal{C}\\l\in\{1,\ldots,\rho(c)\}\\ \tilde w: |\tilde w|<|w|}}\mu_I(e^{\tilde w}_{c,l})\\ &=N^{|w|} \mu_I(K_w)+\sum_{k=0}^{|w|-1}N^k\cdot\sum_{\substack{c\in\mathcal{C}\\l\in\{1,\ldots,\rho(c)\}}}a_{c,l}\beta^k\\ &=N^{|w|} \mu_I(K_w)+\frac{1-(\beta N)^{|w|}}{1-\beta N}\sum_{\substack{c\in\mathcal{C}\\l\in\{1,\ldots,\rho(c)\}}}a_{c,l}\\ &=N^{|w|} \mu_I(K_w)+1-(\beta N)^{|w|} \end{align*} That means $\mu_I(K_w)=\beta^{|w|}$. For $\mu_\eta$ this leads to \begin{align*} \beta^{|w|}\leq \mu_\eta(K_w)\leq \left(\frac 1N\right)^{|w|} \end{align*} \subsection{Operators} With these measures we can define Dirichlet forms and, therefore, operators on $L^2(K,\mu_\eta)$. Let $\mathcal{R}$ be a regular sequence of harmonic structures. \\ \indent Now since $(K,R_\mathcal{R})$ is compact we have $\mathcal{D}_\mathcal{R}:=\overline{\mathcal{F}_\mathcal{R}\cap C_0(K)}^{\mathcal{E}^{\frac 12}_{\mathcal{R},1}}=\mathcal{F}_\mathcal{R}$. \begin{lemma} \label{lem51} Let $\mathcal{R}$ be a regular sequence of harmonic structures. Then $(\mathcal{E}_\mathcal{R},\mathcal{D}_\mathcal{R})$ is a regular Dirichlet form on $L^2(K,\mu_\eta)$. \end{lemma} \begin{proof} From Theorem~\ref{theoresform} we know that $(\mathcal{E}_\mathcal{R},\mathcal{F}_\mathcal{R})$ is a resistance form on $K$. By \cite[Cor. 6.4]{kig12} $(\mathcal{E}_\mathcal{R},\mathcal{F}_\mathcal{R})$ is regular and the statement then follows with \cite[Theo. 9.4]{kig12}. \end{proof} Introducing Dirichlet boundary conditions we get another Dirichlet form with $\mathcal{D}_\mathcal{R}^0:=\{u|u\in\mathcal{D}_\mathcal{R}, \ u|_{V_0}\equiv 0\}$. \begin{lemma} \label{lem52}Let $\mathcal{R}$ be a regular sequence of harmonic structures.\\ Then $(\mathcal{E}_\mathcal{R}|_{\mathcal{D}_\mathcal{R}^0\times\mathcal{D}_\mathcal{R}^0}, \mathcal{D}_\mathcal{R}^0)$ is a regular Dirichlet form on $L^2(K,\mu_\eta|_{K\backslash V_0})$. \end{lemma} \begin{proof} This follows with Lemma~\ref{lem51} and \cite[Theorem 10.3]{kig12} or \cite[Theorem 4.4.3]{fot}. \end{proof} We denote the associated self-adjoint operators with dense domains by $-\Delta_N^{\mu_\eta,\mathcal{R}}$ and $-\Delta_D^{\mu_\eta,\mathcal{R}}$, respectively. 
\begin{lemma}\label{lem53}$-\Delta_N^{\mu_\eta,\mathcal{R}}$ and $-\Delta_D^{\mu_\eta,\mathcal{R}}$ have discrete non-negative spectrum. \end{lemma} \begin{proof} Since $(K,R_\mathcal{R})$ is compact it follows with \cite[Lemma 9.7]{kig12} that the inclusion map $\iota: \mathcal{D}_\mathcal{R}\hookrightarrow C(K)$, with respect to the norms $\mathcal{E}_\mathcal{R}^{\frac 12}$ resp.\ $||\cdot ||_{\infty}$, is a compact operator. Since the inclusion map from $C(K)$ to $L^2(K,\mu_\eta)$ is continuous, the inclusion from $\mathcal{D}_\mathcal{R}$ to $L^2(K,\mu_\eta)$ is a compact operator and, therefore, the spectrum of $-\Delta_N^{\mu_\eta,\mathcal{R}}$ is discrete and non-negative with \cite[Theo. 5 Chap. 10]{bs87}. Since $\mathcal{D}_\mathcal{R}^0\subset \mathcal{D}_\mathcal{R}$ the same follows for $-\Delta_D^{\mu_\eta,\mathcal{R}}$ by \cite[Theo. 4 Chap. 10]{bs87}. \end{proof} \section{Conditions and Hausdorff dimension in resistance metric}\label{chap_cond_haus} In chapter~\ref{chap_meas_oper} we constructed Dirichlet forms and thus self-adjoint operators on stretched fractals. We needed regular sequences of harmonic structures to do so. Now we want to analyze these operators by calculating some values that give a further description of the underlying fractal. These values are the Hausdorff dimension calculated with respect to the resistance metric and the asymptotic growth of the eigenvalue counting function. But to be able to do this we need to introduce some conditions on the sequences of harmonic structures. \subsection{Conditions}\label{conditions} We need the following conditions: We only consider regular sequences of harmonic structures $\mathcal{R}=(\lambda_k,\boldsymbol{\rho}^k)_{k\geq 1}$ such that there exists a $\lambda\in(0,\lambda^\ast]$ with \setcounter{equation}{0} \begin{align} \sum_{k=1}^\infty| \lambda- \lambda_k|<\infty \label{eqcond} \end{align} With the limit comparison test we can easily show that then $\sum_{k=1}^\infty |\ln(\lambda^{-1}\lambda_k)|$ converges and thus \begin{align*} \prod_{i=1}^\infty \lambda^{-1}\lambda_i \in (0,\infty) \end{align*} This means the sequence $a_m:=\prod_{i=1}^m \lambda^{-1}\lambda_i$ is bounded from above and below by positive constants $\tilde\kappa_2$ resp.\ $\tilde\kappa_1$, which means for $\delta_m=\prod_{i=1}^m \lambda_i$: \begin{align*} \tilde\kappa_1 \lambda^m \leq \delta_m \leq \tilde\kappa_2 \lambda^m \end{align*} For $\delta_m^{(n)}=\lambda_{n+1}\cdots \lambda_{n+m}=\frac{\delta_{n+m}}{\delta_n}$ this means \begin{align*} \frac{\tilde \kappa_1}{\tilde \kappa_2} \lambda^m \leq \delta^{(n)}_m \leq \frac{\tilde \kappa_2}{\tilde \kappa_1} \lambda^m \end{align*} Without loss of generality we can assume that $\tilde \kappa_1\leq 1 \leq \tilde \kappa_2$ and thus with $\kappa_1:=\frac{\tilde \kappa_1}{\tilde \kappa_2} $ and $\kappa_2:=\frac{\tilde \kappa_2}{\tilde \kappa_1} $ we get for all $n$ and $m$: \begin{align*} \kappa_1\lambda^m\leq \delta^{(n)}_m\leq \kappa_2\lambda^m \end{align*} This means if we have a regular sequence of harmonic structures with (\ref{eqcond}) we have control over the resistances that appear in the rescaling of the quadratic form in Lemma~\ref{lemrescaling}. Moreover, we have this control for all sequences $\mathcal{R}^{(n)}$ with the same constants $\kappa_1$ and $\kappa_2$. \subsection{Hausdorff dimension in resistance metric} The Hausdorff dimension is a value which describes the size of a set. It strongly depends on the metric chosen to compute it. In Proposition~\ref{prop21} we calculated the Hausdorff dimension of stretched fractals with respect to the Euclidean metric.
This, however, is not a very meaningful value when it comes to describing the analysis on the set. We saw that the resistance forms do not depend on the stretching parameter but only on the topology of $K$. The resistance metric is a better choice to describe the analytic structure of the stretched fractal, so we want to calculate the Hausdorff dimension with respect to this resistance metric. \begin{align*} J:=\hspace*{0.4cm}\smashoperator[lr]{\bigcup_{\substack{c\in \mathcal{C}, w\in \mathcal{A}^\ast_0,\\ l\in \{1,\ldots,\rho(c)\}}}}\hspace*{0.3cm} e^w_{c,l}=\bigcup_{n\geq 1}J_n \end{align*} Then $K=\Sigma\cup J$ (note: not disjoint). We calculate the dimension of the two parts and, due to the stability of the Hausdorff dimension, the dimension of the union will be the larger of the two. \begin{lemma} \label{lem61}For any regular sequence of harmonic structures $\mathcal{R}$ we have \begin{align*}\dim_{H,R_\mathcal{R}} J=1\end{align*} \end{lemma} \begin{proof} We show that $\dim_{H,R_\mathcal{R}}(e^w_{c,l})=1$ for all $e^w_{c,l}$. The result follows with $\sigma$-stability.\\[.2cm] To show this we want to find constants $a,b$ with \begin{align*} a|x-y|_e\leq R_\mathcal{R}(x,y)\leq b|x-y|_e \end{align*} for all $x,y\in e^w_{c,l}$.\\ (1.) $R_\mathcal{R}(x,y)\leq b|x-y|_e$\\[.1cm] For this we consider $u\in \mathcal{F}_\mathcal{R}$ with $u(x)=1$ and $u(y)=0$. \begin{align*} \Rightarrow \mathcal{E}_\mathcal{R}(u)&\geq \frac 1{\gamma_{|w|}\rho^{|w|+1}_{c,l}} \mathcal{D}_{e^w_{c,l}}(u)\\ &\geq \frac 1{\gamma_{|w|}\rho^{|w|+1}_{c,l}} \frac{\operatorname{diam}(e^w_{c,l},|\cdot|_e)}{|x-y|_e} \end{align*} for all such $u$. This means for the resistance metric \begin{align*} R_\mathcal{R}(x,y)\leq \frac{\gamma_{|w|}\rho^{|w|+1}_{c,l}}{\operatorname{diam}(e^w_{c,l},|\cdot|_e)} |x-y|_e \end{align*} (2.) $R_\mathcal{R}(x,y)\geq a|x-y|_e$\\[.1cm] Without loss of generality let $|x-G_w(c)|_e>|y-G_w(c)|_e$. Then define $u$ as follows: \begin{align*} u(x)=0 , \ u(y)=1, \ \text{ and linear interpolation between them} \end{align*} Furthermore, continue $u$ with the constant value $0$ from $x$ to the endpoint of the edge which is not $G_w(c)$, and with the constant value $1$ from $y$ to $G_w(c)$. \\ Now we want to copy this behavior onto the other lines $e^w_{c,\tilde l}$ with $\tilde l\in\{1,\ldots,\rho(c)\}$ and $\tilde l\neq l$. That means we want that \begin{align*} u\circ \xi_{e^w_{c,\tilde l}}(t) = u\circ \xi_{e^w_{c,l}}(t) ,\ \forall t\in[0,1] \end{align*} Outside of these edges we set the function to be constant $0$. You can see the construction in Figure~\ref{resspider}. \begin{figure} \caption{Construction of $\protect u$ on connecting lines} \label{resspider} \end{figure} Then $u\in\mathcal{F}_\mathcal{R}$ and we can calculate the energy of $u$. \begin{align*} \mathcal{E}_\mathcal{R}(u)=\left(\sum_{\tilde l\in \{1,\ldots,\rho(c)\}} \frac 1{\gamma_{|w|}\rho^{|w|+1}_{c,\tilde l}}\right) \cdot \frac{\operatorname{diam}(e^w_{c,l},|\cdot|_e)}{|x-y|_e} \end{align*} Note that the different lines $e^w_{c,\tilde l}$ can have different lengths (w.r.t.\ $|\cdot|_e$), but since we transferred the function in such a way that the proportions of the different parts of $u$ stay the same, the energy takes this form.
Since $u$ is one of the functions admissible in the supremum \begin{align*} R_\mathcal{R}(x,y)=\sup\left\{\frac{|u(x)-u(y)|^2}{\mathcal{E}_\mathcal{R}(u)}\ \Big|\ u\in \mathcal{F}_\mathcal{R}, \ \mathcal{E}_\mathcal{R}(u)>0\right\} \end{align*} we get \begin{align*} R_\mathcal{R}(x,y)\geq a |x-y|_e \end{align*} The constants $a,b$ depend on various quantities, but they are fixed for a given $e^w_{c,l}$ and the estimates hold for all $x,y\in e^w_{c,l}$. \end{proof} Next we want to calculate the Hausdorff dimension of $\Sigma$. This is a self-similar set and $R_\mathcal{R}|_\Sigma$ is a metric on $\Sigma$. We can apply the ideas of \cite{kig95} to calculate this value. \begin{lemma} \label{lem62}Let $\mathcal{R}$ be a regular sequence of harmonic structures that fulfills the conditions. Then \begin{align*}\operatorname{diam}(\Sigma_w,R_\mathcal{R})\leq c \lambda^n,\ \forall w\in\mathcal{A}^n \end{align*} \end{lemma} \begin{proof} We know from Corollary~\ref{cor49} that \begin{align*} \operatorname{diam}(K_w,R_\mathcal{R})\leq c\delta_n \end{align*} Since $\Sigma_w\subset K_w$ we get \begin{align*} \operatorname{diam}(\Sigma_w,R_\mathcal{R})\leq \operatorname{diam}(K_w,R_\mathcal{R})\leq c\delta_n\leq c\tilde\kappa_2 \lambda^n \end{align*} \end{proof} \begin{lemma} \label{lem63}Let $\mathcal{R}$ be a regular sequence of harmonic structures that fulfills the conditions. Then there is an $M\geq 0$ and $c>0$ such that for all $x\in \Sigma$ we have \begin{align*} \#\{w\in \mathcal{A}^n \ | \ R_\mathcal{R}(x,\Sigma_w)\leq c\lambda^n\}\leq M+1, \ \forall n\in \mathbb{N} \end{align*} \end{lemma} \begin{proof} Since \begin{align*} R_\mathcal{R}(x,y)=\sup\left\{\frac{|u(x)-u(y)|^2}{\mathcal{E}_\mathcal{R}(u)}\ | \ u\in \mathcal{F}_\mathcal{R}, \ \mathcal{E}_\mathcal{R}(u)>0\right\} \end{align*} we get for a fixed $u\in \mathcal{F}_\mathcal{R}$ with $u(x)=0$ and $u(y)=1$ \begin{align*} R_\mathcal{R}(x,y)\geq \frac 1{\mathcal{E}_\mathcal{R}(u)} \end{align*} We are looking for a $u$ such that this estimate is good enough. Let $w\in \mathcal{A}^n$, $y\in \Sigma_w$ and $x\in \Sigma\backslash \Sigma_w$. We want to define a function $u_n$ on $V_n$ and then extend it harmonically to a $\tilde u_n\in\mathcal{F}_\mathcal{R}$. Under harmonic extension the energy doesn't change, so we are able to calculate $\mathcal{E}_\mathcal{R}(\tilde u_n)$. Define \begin{align*} u_n:= 1, \ \text{ on } G_w(V_0) \end{align*} Now search for all $n$-cells that are connected to $G_w(V_0)$ through some $c\in C_\ast$. There are at most $M:=\#\mathcal{C}\#V_0$ many of those (see \cite[Lemma 3.3]{kig95}). Set $u_n=1$ on all $c\in C_\ast$ that are connected to $G_w(V_0)$ by some line in $J$ and also set $u_n=1$ on those endpoints of these lines that lie in the other $n$-cells. Set $u_n=0$ on all remaining points of $V_n$. This procedure is illustrated in the following figure for the Stretched Level 3 Sierpinski Gasket. \begin{figure} \caption{Construction of $\protect u_n$} \end{figure} Next we extend $u_n$ harmonically to $\tilde u_n\in \mathcal{F}_\mathcal{R}$.
We can calculate the energy by \begin{align*} \mathcal{E}_\mathcal{R}(\tilde u_n)=\mathcal{E}_{\mathcal{R},n}(u_n)&\leq M\cdot \#E_0\cdot \frac 1{\delta_n\min_{e\in E_0}r_0(e)}\\ &\leq \frac{M\#E_0}{\kappa_1\min_{e\in E_0}r_0(e)}\cdot \lambda^{-n} \end{align*} This leads to \begin{align*} R_\mathcal{R}(x,y)\geq \frac{\kappa_1\min_{e\in E_0}r_0(e)}{M\#E_0}\cdot \lambda^n \end{align*} This procedure can be done for all $y\in \Sigma_w$ and all $x$ such that $\tilde u_n(x)=0$, that is, for all $x$ that lie neither in $\Sigma_w$ nor in one of the $n$-cells connected to it. There are, therefore, at most $M+1$ many $n$-cells (including $\Sigma_w$ itself) for which this construction doesn't work. This gives us the desired result. \end{proof} Now we are able to calculate the Hausdorff dimension of $K$. \begin{theorem} \label{theodim}Let $\mathcal{R}=(\lambda_i,\boldsymbol\rho^i)_{i\geq 1}$ be a regular sequence of harmonic structures that fulfills the conditions, then \begin{align*} \dim_{H,R_\mathcal{R}}(K)=\max\left\{1,\frac{\ln N}{-\ln \lambda}\right\} \end{align*} \end{theorem} \begin{proof} From Lemmata~\ref{lem62} and \ref{lem63} it follows with \cite[Theo. 2.4. or Cor. 1.3]{kig95} that \begin{align*} \dim_{H,R_\mathcal{R}}(\Sigma)=\frac{\ln N}{-\ln \lambda} \end{align*} With Lemma~\ref{lem61} and $K=\Sigma\cup J$ we get the result. \end{proof} \subsection{Examples} With this result and the harmonic structures that we calculated in chapter~\ref{chapexamples} we are now able to calculate the values of the Hausdorff dimension w.r.t.\ the resistance metric of these stretched fractals for different choices of regular sequences of harmonic structures that fulfill the conditions of chapter~\ref{conditions}. For comparison we also list the values in the self-similar case. The last two columns list all possible values in the stretched case. \renewcommand{\arraystretch}{2} \begin{center} \begin{tabular}{|C{3cm}|C{3cm}|C{2.5cm} L{2.1cm}|} \cline{2-4} \multicolumn{1}{c|}{}& \multicolumn{3}{c|}{$\operatorname{dim}_{H,R_\mathcal{R}}$ } \\ \cline{2-4} \multicolumn{1}{c|}{}& self-similar & \multicolumn{2}{c|}{stretched}\\ \hline Sierpinski Gasket & $\frac{\ln 3}{-\ln \frac 35}$ & $\max\{1,\frac{\ln 3}{-\ln \lambda}\}$, & $ \lambda\in(0,\frac 35]$ \\ \hline Level 3 Sierpinski Gasket &$\frac{\ln 6}{-\ln \frac 7{15}}$ & $\max\{1,\frac{\ln 6}{-\ln \lambda}\}$, & $ \lambda\in(0,\frac 7{15}]$ \\ \hline Sierpinski Gasket in $\mathbb{R}^d$ & $\frac{\ln (d+1)}{-\ln (\frac {d+1}{d+3})}$ & $\max\{1,\frac{\ln (d+1)}{-\ln \lambda}\}$,& $ \lambda\in(0,\frac {d+1}{d+3}]$ \\ \hline Vicsek Set & $\frac{\ln 5}{-\ln \frac 13}$ & $\max\{1,\frac{\ln 5}{-\ln \lambda}\}$,& $ \lambda\in(0,\frac 13]$\\ \hline Hata's tree & $\frac{\ln 2}{\ln 2-\ln(\sqrt{5}-1)}$ &$\max\{1,\frac{\ln 2}{-\ln\lambda} \}$, & $\lambda\in (0,\frac{\sqrt{5}-1}2)$ \\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{1} The values for the self-similar case were calculated in general in \cite{kig95}. With the corresponding renormalization factors we get the values listed above. In general the value in the stretched case is less than or equal to the one in the self-similar case. We, however, are able to get the same value in all but one case. For instance, for the Stretched Sierpinski Gasket any admissible sequence with $\lambda=\frac 35$ yields the self-similar value $\frac{\ln 3}{-\ln\frac 35}\approx 2.15$, while for $\lambda\leq\frac 13$ the line part dominates and the dimension drops to $1$. For Hata's tree we saw that we can only choose constant sequences $\lambda_i$. Therefore, they cannot approach the upper bound and thus we can't reach the same value as in the self-similar case. \section{Spectral asymptotics}\label{chapter7} Let $\mu$ be any of the allowed measures $\mu_\eta$ with $\eta\in(0,1]$ and $\mathcal{R}$ a regular sequence of harmonic structures.
Due to Lemma~\ref{lem53} we can write the eigenvalues in non-decreasing order and study the eigenvalue counting function. We denote by $\lambda_k^{N,\mu,\mathcal{R}}$ the $k$-th eigenvalue of $-\Delta_N^{\mu,\mathcal{R}}$ resp. $\lambda_k^{D,\mu,\mathcal{R}}$ for $-\Delta_D^{\mu,\mathcal{R}}$ with $k\geq 1$. Now we can define the eigenvalue counting functions \begin{align*} N_N^{\mu,\mathcal{R}}(x):=\# \{k\geq 1 | \lambda_k^{N,\mu,\mathcal{R}}\leq x\}\\ N_D^{\mu,\mathcal{R}}(x):=\# \{k\geq 1 | \lambda_k^{D,\mu,\mathcal{R}}\leq x\} \end{align*} Since $\mathcal{D}_\mathcal{R}^0\subset \mathcal{D}_\mathcal{R}$ and $\dim \mathcal{D}_\mathcal{R} /\mathcal{D}_\mathcal{R}^0=\#V_0$ (this quotient space can be identified with the space of harmonic functions, which are determined by their values on $V_0$) we get \begin{align*} N_D^{\mu,\mathcal{R}}(x)\leq N_N^{\mu,\mathcal{R}}(x)\leq N_D^{\mu,\mathcal{R}}(x)+\#V_0, \ \forall x \end{align*} We want to study the asymptotic behavior of the eigenvalue counting functions. However, we can only calculate the leading term of the eigenvalue counting functions for regular sequences of harmonic structures that fulfill the conditions of chapter~\ref{chap_cond_haus}. In the following we state the results for such sequences. \subsection{Results} The next theorem summarizes the results for the leading term for various regular sequences of harmonic structures and measures. \begin{theorem}\label{theospec} Let $\mathcal{R}$ be a regular sequence of harmonic structures that fulfills the conditions and $\mu=\mu_\eta$ with $\eta\in(0,1]$. Then there exist constants $0<C_1,C_2<\infty$ and $x_0\geq 0$ such that for all $x\geq x_0$: \begin{align*} C_1x^{\frac 12 d_S^{\mathcal{R},\mu}}\leq N_D^{\mu,\mathcal{R}}(x)\leq N_N^{\mu,\mathcal{R}}(x)\leq C_2x^{\frac 12 d_S^{\mathcal{R},\mu}} \end{align*} with \begin{align*} d_S^{\mathcal{R},\mu}=\begin{cases} \max\{1,\frac{\ln N^2}{\ln N- \ln \lambda}\}, \ \text{for } \mu=\mu_\eta \text{ with } \eta \in (0,1)\\[.2cm] \max\{1,\frac{\ln N^2}{- \ln( \beta\lambda)}\}, \ \text{for } \mu=\mu_1=\mu_I, \ \text{if} \ \beta\neq \frac 1{N^2\lambda} \end{cases} \end{align*} \end{theorem}\vspace*{0.3cm} The leading term is the maximum of the two values. One value corresponds to the fractal part inside the stretched fractal. However, if $\lambda$ gets too small the one-dimensional lines become the dominant part and the exponent $d_S^{\mathcal{R},\mu}$ becomes $1$.\\ The constants $C_1$ and $C_2$ depend on $\mathcal{R}$ and $\mu$. We call the value $d_S^{\mathcal{R},\mu_{0.5}}(K)=:d_S^{\mathcal{R}}(K)$ the spectral dimension of the stretched fractal $K$. We see that the scaling parameter $\beta$ of the line part of the measure doesn't appear in the leading term if the measure has a fractal part (i.e.\ if $\eta<1$). \\ We see that the choice of the regular sequence of harmonic structures as well as the choice of the measure has a big influence on the analysis on $K$. \begin{remark}In Theorem~\ref{theodim} we calculated $\dim_{H,R_\mathcal{R}}(K)=\max\{1,\frac{\ln N}{-\ln \lambda}\}$. We see that the following relation holds: \begin{align*} d_S^{\mathcal{R}}(K)=\frac{2\dim_{H,R_\mathcal{R}}(K)}{\dim_{H,R_\mathcal{R}}(K)+1} \end{align*} This relation was shown to hold for p.c.f.\ self-similar sets in \cite{kl93}, and we have just seen that it is also valid for stretched fractals. \end{remark} \subsection{Examples} We want to list the values for the examples for which we calculated the harmonic structures and compare them to the self-similar case.
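Before listing all values, consider the Stretched Sierpinski Gasket as a quick consistency check (a worked example using only the quantities already stated above): with $N=3$ and an admissible sequence with $\lambda=\frac 35$, Theorem~\ref{theospec} gives
\begin{align*}
d_S^{\mathcal{R},\mu_\eta}=\max\left\{1,\frac{\ln 9}{\ln 3-\ln\frac 35}\right\}=\frac{\ln 9}{\ln 5}\approx 1.365,
\end{align*}
which coincides with the self-similar value and indeed satisfies the relation of the remark, since $\dim_{H,R_\mathcal{R}}=\frac{\ln 3}{-\ln\frac 35}\approx 2.15$ and $\frac{2\cdot 2.15}{2.15+1}\approx 1.365$.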
The measure $\tilde\mu_\Sigma$ that we use in the self-similar case is the self-similar measure that assigns each $n$-cell the same weight. \renewcommand{\arraystretch}{2.3} \begin{center} \begin{tabular}{|C{3cm}|C{1.8cm}|C{3cm}| C{3cm}|} \cline{2-4} \multicolumn{1}{c|}{}& \multicolumn{3}{c|}{$d_S^{\mu,\mathcal{R}}$ } \\ \cline{2-4} \multicolumn{1}{c|}{}& self-similar & \multicolumn{2}{c|}{stretched}\\ \hline Measure & $\tilde \mu_\Sigma$ & $\mu_\eta$,\ $\eta\in(0,1)$ &$\mu_1$\\ \hline \hline & & $\max\{1,\frac{\ln 9}{\ln 3- \ln \lambda}\}$ & $\max\{1,\frac{\ln 9}{- \ln (\beta \lambda)}\}$ \\ \cline{3-4} Sierpinski Gasket&$\frac{\ln 9}{\ln 5} $ &\multicolumn{2}{c|}{$ \lambda\in(0,\frac 35]$} \\ \cline{3-4} &&$\beta \in (0,\frac 13)$&$\beta \in (0,\frac 13), \beta\neq \frac 1{9\lambda}$ \\ \hline & & $\max\{1,\frac{2\ln 6}{\ln 6- \ln \lambda}\}$ & $\max\{1,\frac{2\ln 6}{- \ln (\beta \lambda)}\}$ \\ \cline{3-4} \begin{minipage}[c][0.5cm]{\linewidth}\begin{center} Level 3 Sierpinski Gasket\end{center}\end{minipage} & $\frac{2\ln 6}{\ln 6-\ln\frac 7{15}} $&\multicolumn{2}{c|}{$ \lambda\in(0,\frac 7{15}]$}\\ \cline{3-4} & &$\beta \in (0,\frac 16)$&$\beta \in (0,\frac 16), \beta\neq \frac 1{6^2\lambda}$ \\ \hline & & $\max\{1,\frac{2\ln(d+1)}{\ln (d+1)- \ln \lambda}\}$& $\max\{1,\frac{2\ln(d+1)}{- \ln (\beta \lambda)}\}$\\ \cline{3-4} \begin{minipage}[c][0.5cm]{\linewidth}\begin{center} Sierpinski Gasket in $\mathbb{R}^d$\end{center}\end{minipage}&$\frac{2\ln(d+1)}{\ln(d+3) } $&\multicolumn{2}{c|}{$ \lambda\in(0,\frac {d+1}{d+3}]$}\\ \cline{3-4} &&$\beta \in (0,\frac 1{d+1})$& \renewcommand{\arraystretch}{1.2} \begin{tabular}{c} $\beta \in (0,\frac 1{d+1})$,\\$ \beta\neq \frac 1{(d+1)^2\lambda}$ \end{tabular}\renewcommand{\arraystretch}{2} \\ \hline & & $\max\{1,\frac{2\ln 5}{\ln 5- \ln \lambda}\}$& $\max\{1,\frac{2\ln 5}{- \ln (\beta \lambda)}\}$\\ \cline{3-4} Vicsek Set&$\frac{2\ln 5}{\ln 15} $&\multicolumn{2}{c|}{ $ \lambda\in(0,\frac 13]$} \\ \cline{3-4} &&$\beta \in (0,\frac 15)$&$\beta \in (0,\frac 15), \beta\neq \frac 1{5^2\lambda}$ \\ \hline && $\max\{1, \frac{\ln 4}{\ln 2-\ln\lambda} \}$& $ \max\{1,\frac{\ln 4}{-\ln(\beta\lambda)} \}$\\ \cline{3-4} Hata's tree & $\frac{\ln 4}{\ln 4-\ln (\sqrt{5}-1)}$&\multicolumn{2}{c|}{ $ \lambda\in(0,\frac{\sqrt{5}-1}2)$} \\ \cline{3-4} &&$\beta \in (0,\frac 12)$&$\beta \in (0,\frac 12), \beta\neq \frac 1{4\lambda}$ \\ \hline \end{tabular} \end{center} \renewcommand{\arraystretch}{1} The values for the self-similar column come from the result of \cite{kl93} together with the renormalization factors for these examples. As for the Hausdorff dimension, the values of $d_S^{\mu,\mathcal{R}}$ are less than or equal to the corresponding values in the self-similar case. They can reach the same value in all examples but Hata's tree. \subsection{Proof of Theorem~\protect\ref{theospec}} We will carry out the proof for $\mu=\mu_\eta$ with $\eta\in(0,1)$ and in the end show what happens for $\mu=\mu_1$. The main technique for the proof is the Dirichlet-Neumann bracketing as in \cite{kaj10}, where it was applied to self-similar sets. We split the proof into the upper and the lower estimate. \subsubsection{Upper estimate} We obtain the upper estimate by successively adding new Neumann boundary conditions at the points $V_m\backslash V_0$, thus making the form domain bigger and, therefore, increasing the eigenvalue counting function.
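The mechanism behind this Dirichlet-Neumann bracketing is the min-max principle (a standard fact for forms with discrete spectrum, stated here in the notation of this chapter, see e.g.\ \cite[Chap.\ 10]{bs87}): if $\mathcal{D}\subset\mathcal{D}'$ are form domains in $L^2(K,\mu)$ and $\mathcal{E}'$ is a form on $\mathcal{D}'$ with $\mathcal{E}'(u)\leq\mathcal{E}(u)$ for all $u\in\mathcal{D}$, then
\begin{align*}
\lambda_k(\mathcal{E},\mathcal{D})=\min_{\substack{L\subset\mathcal{D}\\ \dim L=k}}\ \max_{\substack{u\in L\\ u\neq 0}}\frac{\mathcal{E}(u)}{\|u\|_\mu^2}\ \geq\ \lambda_k(\mathcal{E}',\mathcal{D}'),\quad \forall k\geq 1,
\end{align*}
and consequently $\#\{k\,|\,\lambda_k(\mathcal{E},\mathcal{D})\leq x\}\leq \#\{k\,|\,\lambda_k(\mathcal{E}',\mathcal{D}')\leq x\}$ for all $x$.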
We can introduce the Neumann conditions by defining the following domains: \begin{align*} \mathcal{D}_\mathcal{R}^{K_m}&:=\{u|u\in L^2(K_m,\mu|_{K_m}),\ \exists f\in \mathcal{D}_\mathcal{R} : f|_{K_m}=u\}\\ \mathcal{D}_\mathcal{R}^{J_m}&:=\{u|u\in L^2(J_m,\mu|_{J_m}), \ \forall e^w_{c,l}\subset J_m\ \exists f\in \mathcal{D}_\mathcal{R} : f|_{e^w_{c,l}}=u|_{e^w_{c,l}}\} \end{align*} Since the lines $e^w_{c,l}$ in $J_m$ are decoupled by the new Neumann boundary conditions we can see that \begin{align*} \mathcal{D}_\mathcal{R}^{J_m}=\bigoplus_{\substack{c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\w\in \mathcal{A}^k, k<m}} H^1(e^w_{c,l}) \end{align*} We also notice that $\mathcal{D}_\mathcal{R}^{K_m}\perp \mathcal{D}_\mathcal{R}^{J_m}$ and \begin{align*} \mathcal{D}_\mathcal{R}\subset \mathcal{D}_\mathcal{R}^{K_m}\oplus\mathcal{D}_\mathcal{R}^{J_m} \end{align*} We can define a new quadratic form $\tilde{\mathcal{E}}_\mathcal{R}$ on this bigger domain for $f=g+h$ with $g\in \mathcal{D}_\mathcal{R}^{K_m}$ and $h\in \mathcal{D}_\mathcal{R}^{J_m}$. \begin{align*} \tilde{\mathcal{E}}_\mathcal{R}(f):=\mathcal{E}^\Sigma_\mathcal{R}(g)+\sum_{k=m+1}^\infty \frac 1{\gamma_k}\mathcal{D}_{\boldsymbol{\rho}^k,k}(g)+\sum_{k=1}^m\frac 1{\gamma_k}\mathcal{D}_{\boldsymbol{\rho}^k,k}(h) \end{align*} and \begin{align*} \mathcal{E}_\mathcal{R}^{K_m}(g)&:=\mathcal{E}^\Sigma_\mathcal{R}(g)+\sum_{k=m+1}^\infty \frac 1{\gamma_k}\mathcal{D}_{\boldsymbol{\rho}^k,k}(g)\\ \mathcal{E}_\mathcal{R}^{J_m}(h)&:=\sum_{k=1}^m\frac 1{\gamma_k}\mathcal{D}_{\boldsymbol{\rho}^k,k}(h) \end{align*} \begin{lemma} \label{lem72} $(\tilde{\mathcal{E}}_\mathcal{R},\mathcal{D}_\mathcal{R}^{K_m}\oplus\mathcal{D}_\mathcal{R}^{J_m})$, $(\mathcal{E}_\mathcal{R}^{K_m},\mathcal{D}_\mathcal{R}^{K_m})$ and $(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m})$ are regular Dirichlet forms with discrete non-negative spectrum and $\tilde{\mathcal{E}}_\mathcal{R}=\mathcal{E}_\mathcal{R}^{K_m}\oplus\mathcal{E}_\mathcal{R}^{J_m}$. \end{lemma} \begin{proof} $(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m})$ is just the sum of scaled Dirichlet energies on one-dimensional edges, hence it is a regular Dirichlet form on $L^2(J_m,\mu|_{J_m})$ with discrete non-negative spectrum. Since $K_m$ is closed $(\mathcal{E}_\mathcal{R}^{K_m},\mathcal{D}_\mathcal{R}^{K_m})$ is a regular resistance form due to \cite[Theo. 8.4]{kig12} and hence a regular Dirichlet form on $L^2(K_m,\mu|_{K_m})$ with \cite[Theo. 9.4]{kig12}. Due to the same Theorem \cite[Theo. 8.4]{kig12} it follows that the associated resistance metric equals the restriction of $R_\mathcal{R}$ to ${K_m}\times{K_m}$. Therefore, since $K_m$ is closed we know that $(K_m,R_\mathcal{R}|_{K_m})$ is compact. The proof for discrete non-negative spectrum works as in the proof of Lemma~\ref{lem53}. The results for $\tilde{\mathcal{E}}_\mathcal{R}$ follow immediately. \end{proof} The eigenvalue counting function has many dependencies. For a Dirichlet form~$\mathcal{E}$ with domain $\mathcal{D}$ in the Hilbert space $L^2(K,\mu)$ we denote the eigenvalue counting function at point $x$ by $N(\mathcal{E},\mathcal{D},\mu,x)$. This is the same as the eigenvalue counting function of the self-adjoint operator associated to the Dirichlet form. In our case the measure is always $\mu$ or its restriction to the particular part. We will therefore omit it in the notation.
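One further elementary observation will be used repeatedly (the direct sums are the ones from Lemma~\ref{lem72}): if a Dirichlet form decomposes as a direct sum of forms on orthogonal subspaces, its spectrum is the union of the individual spectra counted with multiplicity, so the eigenvalue counting functions simply add up, i.e.
\begin{align*}
N(\mathcal{E}_1\oplus\mathcal{E}_2,\mathcal{D}_1\oplus\mathcal{D}_2,x)=N(\mathcal{E}_1,\mathcal{D}_1,x)+N(\mathcal{E}_2,\mathcal{D}_2,x),\quad \forall x.
\end{align*}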
For the eigenvalue counting functions of the newly introduced Dirichlet forms this means: \begin{align*} N_N^{\mu,\mathcal{R}}(x)\leq N(\mathcal{E}_\mathcal{R}^{K_m},\mathcal{D}_\mathcal{R}^{K_m},x)+N(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m},x), \ \forall x\geq 0 \end{align*} The introduction of the new Neumann boundary conditions leads to the decoupling of the $m$-cells and the lines adjoining them. Therefore, the calculations can be done separately. We start with $(\mathcal{E}_\mathcal{R}^{K_m},\mathcal{D}_\mathcal{R}^{K_m})$ which we will call the fractal part.\\ \textbf{U.1: Fractal part $(\mathcal{E}_\mathcal{R}^{K_m},\mathcal{D}_\mathcal{R}^{K_m})$}\\[.1cm] Define new measures on $K$ as follows for $w\in \mathcal{A}^\ast$. \begin{align*} \mu^w:=\mu(K_w)^{-1}\mu\circ G_w \end{align*} $\mu^w$ is a measure on the whole $K$ but it only reflects the features of $\mu$ on $K_w$. We notice a few immediate properties. \begin{align*} \mu^w(K)=\mu(K_w)^{-1}\mu(K_w)=1, \ \forall w \end{align*} as well as \begin{align*} \int_K u\circ G_wd\mu^w=\mu(K_w)^{-1}\int_{K_w} ud\mu \end{align*} Now for the upper estimate of the fractal part we use the so-called \textit{uniform Poincar\'e inequality} (see \cite{kaj10}) with a constant $C_{PI}\in (0,\infty)$. We define $\mathcal{R}^{(0)}:=\mathcal{R}$; then it holds for all $n\geq 0$ and all $u\in \mathcal{D}_{\mathcal{R}^{(n)}}$ that \begin{align*} \mathcal{E}_{\mathcal{R}^{(n)}}(u)\geq C_{PI} \int_K |u-\overline u^{\mu^w}|^2d\mu^w \end{align*} where $\overline u^\nu=\int_K u d\nu$. The constant $C_{PI}$ is independent of $n$ as well as $w$. This can be seen easily. Due to Lemma~\ref{lem48} we know that there is a constant $\mathcal{M}\in(0,\infty)$ only depending on $\lambda^\ast$, $\rho^\ast$ and $r_0$ with \begin{align*} R_{\mathcal{R}^{(n)}}(p,q)\leq \mathcal{M}, \ \forall p,q\in K , \ \forall n \end{align*} \begin{align*} \mathcal{M}\mathcal{E}_{\mathcal{R}^{(n)}}(u)\geq R_{\mathcal{R}^{(n)}}(p,q)\mathcal{E}_{\mathcal{R}^{(n)}}(u)&\geq |u(p)-u(q)|^2\\[0.2cm] \Rightarrow \int_K\int_K \mathcal{M}\mathcal{E}_{\mathcal{R}^{(n)}}(u) d\mu^w(q)d\mu^w(p)&\geq \int_K\int_K |u(p)-u(q)|^2d\mu^w(q)d\mu^w(p)\\ &\geq \int_K \left( u(p)-\int_K u(q)d\mu^w(q)\right)^2d\mu^w(p)\\ &=\int_K|u(p)-\overline u^{\mu^w}|^2d\mu^w(p) \end{align*} \begin{align*} \Rightarrow \mathcal{E}_{\mathcal{R}^{(n)}}(u)\geq \frac 1{\mathcal{M}\mu^w(K)^2}\int_K |u-\overline u^{\mu^w}|^2d\mu^w &=\frac 1{\mathcal{M}}\int_K |u-\overline u^{\mu^w}|^2d\mu^w \end{align*} That means we have $C_{PI}=\frac 1{\mathcal{M}}$ which holds for all $\mathcal{R}^{(n)}$.\\ We have $N^m$ independent cells in $K_m$, which means that the first $N^m$ eigenvalues are all $0$, because the functions that are constant on each $m$-cell belong to $\mathcal{D}_\mathcal{R}^{K_m}$. We are interested in the first non-zero eigenvalue which we will call $\lambda^m_{N^m+1}$. Let $u\in \mathcal{D}_\mathcal{R}^{K_m}$ be a normalized eigenfunction for this eigenvalue $\lambda^m_{N^m+1}$; then $u$ is orthogonal to every $v$ that is constant on the $m$-cells, since such a $v$ is a linear combination of eigenfunctions corresponding to lower eigenvalues.
\begin{align*} \lambda_{N^m+1}^m&=\mathcal{E}_\mathcal{R}^{K_m}(u)\\ &\stackrel{(1)}{=}\frac 1{\delta_m}\sum_{w\in \mathcal{A}^m} \mathcal{E}_{\mathcal{R}^{(m)}}(u\circ G_w)\\ &\stackrel{PI}{\geq}\frac 1{\kappa_2 \lambda^m} \sum_{w\in \mathcal{A}^m} C_{PI}\underbrace{\int_K|u\circ G_w-\overline{u\circ G_w}^{\mu^w}|^2d\mu^w}_{=:\star} \end{align*} In (1) we used the rescaling of the energy (Lemma~\ref{lemrescaling}). For $\star$ we have \begin{align*} &\int_K (u\circ G_w -\overline{u\circ G_w}^{\mu^w})^2d\mu^w\\ &\hspace*{1.5cm}=\int_K (u\circ G_w)^2d\mu^w - 2\int_K u\circ G_w\cdot \overline{u\circ G_w}^{\mu^w}d\mu^w + \underbrace{\int_K (\overline{u\circ G_w}^{\mu^w})^2d\mu^w}_{\geq 0}\\ &\hspace*{1.5cm}\geq \frac 1{\mu(K_w)}\int_{K_w} u^2d\mu-2\frac 1{\mu(K_w)}\underbrace{\int_{K_w} u \cdot \overline{u\circ G_w}^{\mu^w}d\mu}_{=0, \text{ since } u\perp\text{const.}}\\ &\hspace*{1.5cm}= \frac 1{\mu(K_w)} \int_{K_w} u^2d\mu \end{align*} Back to $\lambda_{N^m+1}^m$: \begin{align*} \Rightarrow \lambda^m_{N^m+1} &\geq \frac 1{\kappa_2 \lambda^m}\sum_{w\in\mathcal{A}^m} C_{PI}\frac 1{\mu(K_w)} \int_{K_w} u^2d\mu \\ &\geq \lambda^{-m} \frac {C_{PI}}{\kappa_2\max_{w\in\mathcal{A}^m} \mu(K_w)}\int_K u^2d\mu \\ &\geq \frac{\lambda^{-m}}{N^{-m}} \frac{C_{PI}}{\kappa_2}=C_u\left(\frac N\lambda\right)^m \end{align*} We have $\lambda_{N^m+1}^m\geq C_u (N/\lambda)^m$, which means that $$x< C_u(N/\lambda)^m \Rightarrow N(\mathcal{E}_\mathcal{R}^{K_m},\mathcal{D}_\mathcal{R}^{K_m},x)\leq N^m$$ For $x\geq C_u$ take $m\in \mathbb{N}$ such that $C_u(N\lambda^{-1})^{m-1}\leq x< C_u(N\lambda^{-1})^m$. (It's always true that $N\lambda^{-1}>1$.) \begin{align*} \Rightarrow N(\mathcal{E}_\mathcal{R}^{K_m},\mathcal{D}_\mathcal{R}^{K_m},x)&\leq N^m \leq N\cdot N^{m-1}= N \left( \left(\frac N\lambda\right)^{\frac{\ln(N)}{\ln(N/\lambda)}}\right)^{m-1}\\ &=N \left(\left( \frac N\lambda\right)^{m-1}\right)^{\frac{\ln(N)}{\ln(N/\lambda)}}\leq N\left( \frac x{C_u} \right)^{\frac{\ln(N)}{\ln(N/\lambda)}}\\ &\leq \underbrace{N C_u^{-{\frac{\ln(N)}{\ln(N/\lambda)}}}}_{C_2^\prime:=} x^{\frac{\ln(N)}{\ln(N/\lambda)}} \end{align*} \textbf{U.2 Line part $(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m})$}\\[.1cm] Due to the decoupling through the Neumann boundary conditions the domain and form split into \begin{align*} \mathcal{E}^{J_m}_\mathcal{R}&=\bigoplus_{\begin{array}{c} c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\ w\in \mathcal{A}^n , n<m\end{array} }\frac 1{\gamma_{|w|+1}\rho^{|w|+1}_{c,l}} \int_0^1 \left(\frac{d(\cdot\circ \xi_{e^w_{c,l}})}{dx}\right)^2 dx\\ \mathcal{D}_\mathcal{R}^{J_m}&=\bigoplus_{\begin{array}{c}c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\ w\in \mathcal{A}^n , n<m\end{array} }H^1(e^w_{c,l}) \end{align*} Then it holds for the eigenvalue counting function that \begin{align*} N&(\mathcal{E}^{J_m}_\mathcal{R},\mathcal{D}^{J_m}_\mathcal{R},x)= \\&\sum_{\begin{array}{c} c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\ w\in \mathcal{A}^n , n<m\end{array} }N\left(\frac 1{\gamma_{|w|+1}\rho^{|w|+1}_{c,l}} \int_0^1 \left(\frac{d(\cdot\circ \xi_{e^w_{c,l}})}{dx}\right)^2 dx, H^1(e^w_{c,l}),x\right) \end{align*} On each line, the measure enters the eigenvalue problem only through the $L^2$-norm, which it scales in the following way: \begin{align*} \int_{e^w_{c,l}} u^2\, d\mu =\eta\, a_{c,l}\beta^{|w|}\int_0^1\left(u\circ \xi_{e^w_{c,l}}\right)^2 dx \end{align*} Therefore, there is a one-to-one correspondence of the eigenvalues between the standard Neumann Laplacian on
$(0,1)$ and the restriction of the energy to one edge. \begin{align*} N\left(\frac 1{\gamma_{|w|+1}\rho^{|w|+1}_{c,l}} \int_0^1 \left(\frac{d(\cdot\circ \xi_{e^w_{c,l}})}{dx}\right)^2 dx, H^1(e^w_{c,l}),x\right)&\\=N(-\Delta_N|_{(0,1)},&\eta\, a_{c,l}\beta^{|w|}\gamma_{|w|+1}\rho^{|w|+1}_{c,l}x) \end{align*} With \begin{align*} N(-\Delta_N|_{(0,1)},x)\leq \frac 1\pi \sqrt{x}+1, \ \forall x\geq 0 \end{align*} (the Neumann eigenvalues on the unit interval are $(k\pi)^2$, $k\geq 0$) we get \begin{align*} N(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m},x)&=\hspace*{1.3cm}\smashoperator[lr]{\sum_{\begin{array}{c}c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\ w\in \mathcal{A}^n , n<m\end{array}} }\hspace*{1cm}N(-\Delta_N|_{(0,1)},\eta\, a_{c,l}\beta^{|w|}\gamma_{|w|+1}\rho^{|w|+1}_{c,l}x)\\ &\leq \sum_{\begin{array}{c}c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\ w\in \mathcal{A}^n , n<m\end{array}} \left(\frac 1\pi \sqrt{\eta\, a_{c,l}\beta^{|w|}\gamma_{|w|+1}\rho^{|w|+1}_{c,l}x}+1\right) \end{align*} Since $\eta\leq 1$ as well as $a_{c,l}\leq 1$ and $\rho^{k}_{c,l}\leq \rho^\ast$ we have \setcounter{equation}{1} \begin{align} N(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m},x)&\leq \sum_{\begin{array}{c}c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\ w\in \mathcal{A}^n , n<m\end{array}} \left(\frac 1\pi \sqrt{\beta^{|w|}\gamma_{|w|+1}\rho^\ast x}+1\right)\nonumber\\ &=\sum_{\begin{array}{c} w\in\mathcal{A}^n \\ n<m\end{array}}\#E_I^1( \frac{ \sqrt{\rho^\ast}}\pi\sqrt{\beta^{|w|}\gamma_{|w|+1} x}+1)\nonumber\\ &=\sum_{k=0}^{m-1} N^k\#E_I^1( \frac{ \sqrt{\rho^\ast}}\pi\sqrt{\beta^{k}\gamma_{k+1} x}+1)\nonumber\\ &=\sum_{k=0}^{m-1} \#E_I^1 N^k + \sum_{k=0}^{m-1}\#E_I^1\frac {\sqrt{\rho^\ast}}\pi\sqrt{N^{2k}\beta^k\gamma_{k+1} x}\nonumber\\ &\leq\#E_I^1 \frac{N^m-1}{N-1}+\sum_{k=0}^{m-1}\#E_I^1\frac {\sqrt{\rho^\ast}\sqrt{\kappa_2}}\pi\sqrt{N^{2k}\beta^k\lambda^k x}\nonumber\\ &\leq \frac {\#E_I^1}{N-1} N^m+ \frac{\#E_I^1\sqrt{\rho^\ast}\sqrt{\kappa_2x}}\pi \sum_{k=0}^{m-1} \sqrt{N^2\beta \lambda}^k\label{equpperbound} \end{align} From here on we have to distinguish a few cases: \begin{enumerate} \item $\lambda>\frac 1N$ and $\frac 1{N^2\lambda}\leq\beta<\frac 1N$ \item ($\lambda>\frac 1N$ and $0<\beta<\frac 1{N^2\lambda}$) or $\lambda \leq \frac 1N$ \end{enumerate} Let us consider the first case and additionally assume that $\beta\neq \frac 1{N^2\lambda}$. Then $N^2\beta\lambda>1$ and we get from (\ref{equpperbound}): \begin{align*} N(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m},x)&\leq \frac {\#E_I^1}{N-1} N^m+ \frac{\#E_I^1\sqrt{\rho^\ast}\sqrt{\kappa_2}}{\pi(\sqrt{N^2\beta\lambda}-1)}\sqrt{N^2\beta \lambda}^m\sqrt x \end{align*} For the fractal part we chose $m$ according to $x$ by $C_u(N\lambda^{-1})^{m-1}\leq x< C_u(N\lambda^{-1})^m$.
Therefore, \begin{align*} N(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m},x)&\leq \frac {\#E_I^1}{N-1} N^m+ \frac{\#E_I^1\sqrt{\rho^\ast}\sqrt{\kappa_2}}{\pi(\sqrt{N^2\beta\lambda}-1)}\sqrt{N^2\beta\lambda}^m \sqrt{C_u(N\lambda^{-1})^m}\\ &=\frac {\#E_I^1}{N-1} N^m+ \frac{\#E_I^1\sqrt{\rho^\ast}\sqrt{\kappa_2C_u}}{\pi(\sqrt{N^2\beta\lambda}-1)}\sqrt{N^3\beta}^m \end{align*} Since $\beta<\frac 1N$ we get \begin{align*} N(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m},x)&\leq \frac {\#E_I^1}{N-1} N^m+ \frac{\#E_I^1\sqrt{\rho^\ast}\sqrt{\kappa_2C_u}}{\pi(\sqrt{N^2\beta\lambda}-1)}N^m \end{align*} Now if $\beta=\frac 1{N^2\lambda}$ we can change to $\tilde \beta:=\beta+\epsilon$ with $\frac 1{N^2\lambda}< \tilde \beta <\frac 1N$ and still get the result, since the right-hand side of (\ref{equpperbound}) is increasing in $\beta$.\\ This means we get a constant $C_2^{\prime\prime}$ such that for $x$ with $C_u(N\lambda^{-1})^{m-1}\leq x< C_u(N\lambda^{-1})^m$ we have \begin{align*} N(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m},x)\leq C_2^{\prime\prime} N^m \end{align*} With the same calculations as for the fractal part we get the same order $\frac{\ln (N)}{\ln (N/\lambda)}$ for the upper bound. That means for $x\geq C_u$ there is a constant $C_2$, such that \begin{align*} N_N^{\mu,\mathcal{R}}(x)\leq C_2 x^{\frac{\ln (N)}{\ln (N/\lambda)}} \end{align*} We still have to show the second case. Here we always have $N^2\beta\lambda<1$. This means we get from (\ref{equpperbound}): \begin{align*} N(\mathcal{E}_\mathcal{R}^{J_m},\mathcal{D}_\mathcal{R}^{J_m},x)&\leq \frac {\#E_I^1}{N-1} N^m+ \frac{\#E_I^1\sqrt{\rho^\ast}\sqrt{\kappa_2x}}\pi \sum_{k=0}^{\infty} \sqrt{N^2\beta \lambda}^k\\ &= \frac {\#E_I^1}{N-1} N^m+ \frac{\#E_I^1\sqrt{\rho^\ast}\sqrt{\kappa_2}}\pi \frac{1}{1-\sqrt{N^2\beta\lambda}}\cdot x^\frac 12 \end{align*} For the first term with $N^m$ the calculation from before gives us the upper bound with order $\frac{\ln(N)}{\ln(N/\lambda)}$. Now if $\lambda>\frac 1N$ this is bigger than $\frac 12$ and thus it is the dominant order of growth. \\ However, if $\lambda\leq \frac 1N$ we have $\frac{\ln(N)}{\ln(N/\lambda)}\leq \frac 12$ and thus $x^{\frac 12}$ is the leading term. \\ These estimates give us the desired upper bounds. \subsubsection{Lower estimate} The idea to get a lower bound is to add new Dirichlet boundary conditions on $V_m$ which makes the domain smaller and thus lowers the eigenvalue counting function. \begin{align*} \mathcal{D}_{\mathcal{R},m}^0&:=\{u|u\in \mathcal{D}_\mathcal{R}^0,\ u|_{V_m}\equiv 0\}\\ \mathcal{D}_{\mathcal{R},w}^0&:=\{u|u \in \mathcal{D}_{\mathcal{R},m}^0 ,\ u|_{K_w^c}\equiv 0\}, \ w\in \mathcal{A}^m\\ \mathcal{D}_{\mathcal{R}, e^w_{c,l}}^0&:= \{u|u\in \mathcal{D}_{\mathcal{R},m}^0 ,\ u|_{(e^w_{c,l})^c}\equiv 0\},\ w\in \mathcal{A}^k, k<m \end{align*} With this we have \begin{lemma}\label{lem73} $(\mathcal{E}_\mathcal{R},\mathcal{D}_{\mathcal{R},m}^0)$, $(\mathcal{E}_\mathcal{R},\mathcal{D}_{\mathcal{R},w}^0)$ and $(\mathcal{E}_\mathcal{R},\mathcal{D}^0_{\mathcal{R},e_{c,l}^w})$ are regular Dirichlet forms with discrete non-negative spectrum. \end{lemma} \begin{proof} Since $K\backslash V_m$ is open, $(\mathcal{E}_\mathcal{R},\mathcal{D}_{\mathcal{R},m}^0)$ is a regular Dirichlet form with \cite[Theorem 10.3]{kig12} or \cite[Theorem 4.4.3]{fot}. Since $\mathcal{D}_{\mathcal{R},m}^0\subset \mathcal{D}_\mathcal{R}^0$ the spectrum is discrete and non-negative with \cite[Theo. 4 Chap. 10]{bs87}.
Since $K_w\backslash V_m$ for $w\in \mathcal{A}^m$ and $e^w_{c,l}\backslash V_m$ for $w\in \mathcal{A}^{k}$ with $k<m$ are also open, the rest of the statement follows analogously. \end{proof} Again we get an estimate on the eigenvalue counting function: \begin{align*} N(\mathcal{E}_\mathcal{R}|_{\mathcal{D}_{\mathcal{R},m}^0\times \mathcal{D}_{\mathcal{R},m}^0}, \mathcal{D}_{\mathcal{R},m}^0,x)\leq N_D^{\mu,\mathcal{R}}(x) \end{align*} Due to the finite ramification and the fact that functions in $\mathcal{D}_{\mathcal{R},m}^0$ have to vanish on $V_m$, this domain splits into the direct sum of the domains restricted to the different parts. \begin{align*} \mathcal{D}_{\mathcal{R},m}^0=\left(\bigoplus_{w\in\mathcal{A}^m}\mathcal{D}_{\mathcal{R},w}^0\right) \bigoplus \left(\bigoplus_{\substack{c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\ w\in \mathcal{A}^n, n<m}}\mathcal{D}_{\mathcal{R}, e^w_{c,l}}^0\right) \end{align*} That means for the eigenvalue counting function, $\forall x\geq 0$ \begin{align*} \sum_{w\in\mathcal{A}^m} N(\mathcal{E}_{\mathcal{R}},\mathcal{D}_{\mathcal{R},w}^0,x)+\sum_{\substack{c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\ w\in \mathcal{A}^n, n<m}}N(\mathcal{E}_\mathcal{R},\mathcal{D}_{\mathcal{R},e^w_{c,l}}^0,x)\leq N_D^{\mu,\mathcal{R}}(x) \end{align*} Again due to the decoupling, the individual eigenvalue counting functions can be calculated separately.\\ \textbf{L.1 Fractal part $(\mathcal{E}_\mathcal{R},\mathcal{D}_{\mathcal{R},w}^0)$}\\[.1cm] This time we want an upper estimate on the first eigenvalue of $(\mathcal{E}_\mathcal{R}, \mathcal{D}_{\mathcal{R},w}^0)$, which is positive due to the Dirichlet boundary conditions. This gives us a lower estimate for $N(\mathcal{E}_\mathcal{R},\mathcal{D}_{\mathcal{R},w}^0,x)$. The first eigenvalue can be calculated via the variational characterization \begin{align*} \lambda_1^w=\inf_{\substack{u\in \mathcal{D}_{\mathcal{R},w}^0\\ u\neq 0}} \frac{\mathcal{E}_\mathcal{R}(u)}{||u||_\mu^2} \end{align*} where $||u||_\mu$ denotes the $L^2$ norm with respect to $\mu$. This leads to \begin{align*} \lambda^w_1\leq \frac{\mathcal{E}_\mathcal{R}(u)}{||u||_\mu^2}, \ \text{ for each } u\in \mathcal{D}_{\mathcal{R},w}^0, \ u\neq 0 \end{align*} The idea is to find a $u\in \mathcal{D}_{\mathcal{R},w}^0$ which is ``good enough''.\\ Let us consider a fixed $m$-cell $K_w$. We have Dirichlet boundary conditions on $V_m$. There are $\#V_0$ many points of $V_m$ in $K_w$. Take the smallest $j\in\mathbb{N}$ such that $N^j>\#V_0$. There are $N^j$ many $m+j$-cells inside $K_w$, which means that there is at least one that doesn't include any points of $V_m$. Therefore, there are no Dirichlet boundary conditions anywhere in this $m+j$-cell $K_{\hat w}$ with $|\hat w|=m+j$. We, however, have to look for an even smaller cell. We want to repeat the same procedure and look for a cell inside $K_{\hat w}$ that contains no points of $V_{m+j}$. With the same arguments there is an $m+2j$-cell $K_{\tilde w}$ with $|\tilde w|=m+2j$ inside $K_{\hat w}$ that fulfills this requirement.\\ We now want to construct a function on $K_w$ that is in $\mathcal{D}_{\mathcal{R},w}^0$ with the help of $K_{\tilde w}$. The construction is very similar to the one in the proof of Lemma~\ref{lem63} where we calculated the Hausdorff dimension of $K$ with respect to the resistance metric. Define $u_m$ on $G_{\tilde w}(V_0)$ to be constant $1$. Now search for all $m+2j$-cells that are connected to $K_{\tilde w}$ through some $c\in \mathcal{C}_\ast$. There are at most $M=\#\mathcal{C}\#V_0$ many of those.
Set $u_m=1$ on all $c\in C_\ast$ that are connected to $G_{\tilde w}(V_0)$ in $E_{m+2j}$ and also $1$ on all other points that are connected to these $c$. By the way we chose $K_{\tilde w}$ and $K_{\hat w}$ we made sure that all points where we set $u_m$ to be $1$ are not in $V_m$. On all other points of $V_{m+2j}$ we choose $u_m$ to be $0$. Then extend $u_m$ harmonically to be a function in $\mathcal{D}_{\mathcal{R},w}^0$. Note that the Dirichlet conditions on $V_m$ are fulfilled. Again, similar to Lemma~\ref{lem63}, we can calculate the energy of $u_m$. \begin{align*} \mathcal{E}_\mathcal{R}(u_m)=\mathcal{E}_{\mathcal{R},m+2j}(u_m)&\leq M\cdot \#E_0\cdot \frac 1{\delta_{m+2j}\min_{e\in E_0}r_0(e)}\\ &\leq \frac{M\#E_0}{\kappa_1 \min_{e\in E_0}r_0(e)}\cdot \lambda^{-(m+2j)} \end{align*} We also need a lower estimate for the $L^2$-norm of $u_m$ to get an upper estimate on $\lambda_1^w$. But we know that $u_m$ is constant $1$ on $K_{\tilde w}$. Therefore \begin{align*} ||u_m||_\mu^2&=\int_{K_w}|u_m|^2d\mu\\ &\geq \int_{K_{\tilde w}} \underbrace{|u_m|^2}_{=1}d\mu\\ &=\mu(K_{\tilde w})\\[.2cm] \Rightarrow \lambda_1^w&\leq \frac {M\#E_0}{\kappa_1\min_{e\in E_0}r_0(e)} \frac{\lambda^{-(m+2j)}}{\mu(K_{\tilde w})} \end{align*} We recall that our measures are $\mu=\mu_\eta=\eta\mu_I+(1-\eta)\mu_\Sigma$. That means we have \begin{align*} \mu(K_w)=\mu_\eta(K_w)&\geq (1-\eta)\mu_\Sigma(K_w)\\ &=(1-\eta)\left(\frac 1N\right)^{|w|} \end{align*} This leads to \begin{align*} \lambda_1^w&\leq \frac {M\#E_0}{\kappa_1\min_{e\in E_0}r_0(e)(1-\eta)}(N\lambda^{-1})^{m+2j}\\ &=\underbrace{\frac {M\#E_0(N\lambda^{-1})^{2j}}{\kappa_1\min_{e\in E_0}r_0(e)(1-\eta)}}_{C_l:=} \cdot \left(\frac N\lambda\right)^m \end{align*} Note that $j$ is independent of $m$. For $x\geq C_l(N\lambda^{-1})$ choose $m\in \mathbb{N}$ such that \begin{align*} C_l(N\lambda^{-1})^m\leq x<C_l(N\lambda^{-1})^{m+1} \end{align*} For these $x$ it holds that at least one eigenvalue of $(\mathcal{E}_\mathcal{R},\mathcal{D}_{\mathcal{R},w}^0)$ is at most $x$. \begin{align*} N(\mathcal{E}_\mathcal{R},\mathcal{D}_{\mathcal{R},w}^0,x)&\geq 1\\ \Rightarrow \sum_{w\in \mathcal{A}^m} N(\mathcal{E}_\mathcal{R},\mathcal{D}_{\mathcal{R},w}^0,x)&\geq N^m = \frac 1N ((N\lambda^{-1})^{m+1})^{\frac{\ln N}{\ln (N\lambda^{-1})}}\\ &\geq \underbrace{\frac 1N C_l^{-\frac{\ln N}{\ln (N\lambda^{-1})}}}_{C_1:=}x^{\frac{\ln N}{\ln (N\lambda^{-1})}} \end{align*} \textbf{L.2 Line part}\\[.1cm] In the previous calculations we saw that the fractal part already gives a lower bound with the same order as the upper bound for $\lambda>\frac 1N$. Therefore, the influence of the line part cannot be bigger than the fractal part. We can use the trivial estimate \begin{align*} \sum_{\begin{array}{c}c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\ w\in \mathcal{A}^n , n<m\end{array}}N(\mathcal{E}_\mathcal{R}|_{\mathcal{D}_{\mathcal{R},e^w_{c,l}}^0\times \mathcal{D}_{\mathcal{R},e^w_{c,l}}^0}, \mathcal{D}_{\mathcal{R},e^w_{c,l}}^0, x)\geq 0 \end{align*} If, however, $\lambda\leq \frac 1N$ this order of $\frac{\ln(N)}{\ln(N/\lambda)}$ is at most $\frac 12$, so it is not the one we want.
To achieve the right one, we can use just one of the one-dimensional lines, say $e_{c,l}$: \begin{align*} \sum_{\begin{array}{c}c\in \mathcal{C}, l\in\{1,\ldots,\rho(c)\}\\ w\in \mathcal{A}^n , n<m\end{array}}&N(\mathcal{E}_\mathcal{R}|_{\mathcal{D}_{\mathcal{R},e^w_{c,l}}^0\times \mathcal{D}_{\mathcal{R},e^w_{c,l}}^0}, \mathcal{D}_{\mathcal{R},e^w_{c,l}}^0, x)\\ &\geq N(\mathcal{E}_\mathcal{R}|_{\mathcal{D}_{\mathcal{R},e_{c,l}}^0\times \mathcal{D}_{\mathcal{R},e_{c,l}}^0}, \mathcal{D}_{\mathcal{R},e_{c,l}}^0, x)\\ &\geq N(-\Delta_D|_{(0,1)},a_{c,l}\rho^1_{c,l}x)\\ &\geq \frac 1\pi \sqrt{a_{c,l}\rho^1_{c,l}} \cdot x^{\frac 12}-1 \end{align*} This suffices to show the desired result if our measure includes the fractal part.\\ \textbf{Remaining: }$\boldsymbol{\mu=\mu_1=\mu_I}$\\[.1cm] We still need to show the case $\mu=\mu_1=\mu_I$. Then we know that \begin{align*} \mu_I(K_w)=\beta^{|w|} \end{align*} Whenever we used $(1-\eta)\left(\frac 1N\right)^{|w|}\leq \mu_\eta(K_w)\leq \left(\frac 1N\right)^{|w|}$ in the proof for $\mu_\eta$ with $\eta\in (0,1)$ we can replace this estimate by \begin{align*} \mu(K_w)=\beta^{|w|} \end{align*} For $\beta\neq\frac 1{N^2\lambda}$ the rest of the proof works exactly as in the case $\eta\in(0,1)$ and this leads to the growth order $x^{\frac 12 d_S^{\mathcal{R},\mu_1}}$ with \begin{align*} d_S^{\mathcal{R},\mu_1}=\max\left\{1,\frac{\ln N^2}{-\ln (\beta \lambda)}\right\} \end{align*} However, if $\beta=\frac 1{N^2\lambda}$, i.e.\ $N^2\beta\lambda=1$, we can't change $\beta$ to $\tilde \beta=\beta+\epsilon$ as in the case $\eta\in(0,1)$, since we need the exact value of $\beta$ for the subsequent calculation. This leads to an additional $\log(x)$ term in the upper bound. We will not include this result in the theorem since it doesn't fit the other cases. $\square$ \section{Outlook and further research}\label{chapter8} \subsection*{Existence of regular harmonic structures} The idea of this work was very similar to \cite{kl93}. Namely, if we have a regular harmonic structure we can choose a sequence and thus get resistance forms. After choosing a measure we get Dirichlet forms and thus operators. We showed the existence of regular harmonic structures for a few examples by explicitly calculating them. The question remains in which cases such a regular harmonic structure exists. One possible approach is to show that if we have a harmonic structure in the self-similar case, this also induces one in the stretched case. In all our examples this was the case, since we always used the same resistances on $(V_0,E_0)$ as in the self-similar case. This means the choice of $r_0$ was influenced by the existence of a regular harmonic structure on the self-similar set. If we have no way to compare it to the self-similar case we would still like to prove existence for as many sets as possible. The first class of fractals for which we would like to try this is stretched nested fractals. As in \cite{lin90} this could mean getting the existence without knowing the value of $\lambda$. \subsection*{Comparison of $\protect d_S$ in the self-similar and the stretched case} We saw in the examples that the values for the Hausdorff dimension and the leading term of the asymptotics in the stretched case are less than or equal to those in the self-similar case. We believe this is always true. We can give heuristic arguments for this conjecture. If we set the resistances $\rho=0$ on all connecting edges in $E^I_1$ this would mean that the points that were connected by such an edge get identified with each other.
This gives us back the first graph approximation in the self-similar case. $(V_0,E_0)$ is the same for both the self-similar and the stretched case. By increasing $\rho>0$ on the edges in $E^I_1$ we still want to have an equivalent network for $(V_0,E_0)$. This means that the effective resistance between those points in $V_0$ has to stay the same. However, we know from general electrical network theory that if we increase the resistances on the connecting edges, the resistances on the fractal edges in $E_1^\Sigma$ have to decrease in order to keep the effective resistances at the same level. For Hata's tree we saw that the same value as in the self-similar case is not attainable. This gives rise to the question in which cases this is possible, and to the task of finding criteria that characterize such stretched fractals. \subsection*{More general harmonic structures and measures} The harmonic structures that we used are very symmetric. We have the same renormalization in each cell. This is a big restriction and there will likely be stretched fractals for which there is no regular harmonic structure that fulfills this symmetry. This means we need to generalize our notion of harmonic structure to allow different scalings in different cells. We, however, believe that there isn't any new difficulty in obtaining the Hausdorff dimension and the leading term in the asymptotics. We need to introduce a few more indices and to make sure the scalings for the different cells each converge to a limit. With such conditions we should be able to prove the results with a combination of the proof in this work and the ideas from \cite{kl93} or \cite{kaj10} concerning partitions of the word space. The same holds for the measures that we used. These were very symmetric and they should be replaced by more general ones. We want to allow different scalings in different cells for both the fractal part and the line part of the measure. But again, there should be no new difficulties in obtaining the Hausdorff dimension and the leading term of the spectral asymptotics by connecting the ideas of this work and \cite{kl93,kaj10}. \subsection*{Does the fractal part of the resistance form really exist?} Besides constructing resistance forms on the Stretched Sierpinski Gasket, the authors of \cite{afk17} also examined the resistance forms $\mathcal{E}_\mathcal{R}$ in more detail. In particular they studied the fractal part $\mathcal{E}_\mathcal{R}^\Sigma$ and showed that it only survives in a special case. In our notation this is the case $\sum_{i\geq 1} |\lambda_i-\frac 35|<\infty$. In all other cases we have $f\in \mathcal{F}_\mathcal{R} \Rightarrow \mathcal{E}_\mathcal{R}^\Sigma(f)=0$. This question can be generalized to stretched fractals: when does the fractal part $\mathcal{E}_\mathcal{R}^\Sigma$ in the resistance form of a stretched fractal survive? The immediate conjecture is the following: if $\lambda_{ss}$ is the renormalization factor in the self-similar case, then $\mathcal{E}_\mathcal{R}^\Sigma$ survives if and only if $\sum_{i\geq 1} |\lambda_i-\lambda_{ss}|<\infty$ holds for the sequence of regular harmonic structures. However, it is not possible to apply the same proof as in \cite{afk17}, since it strongly depends on the explicit value $\lambda_{ss}=\frac 35$, and these values are not known in general. \subsection*{More stretching} We were able to stretch p.c.f.\ self-similar fractals that fulfilled a certain connectivity condition (\ref{pcfcond},\ref{pcfcond2}). We did this by introducing one-dimensional lines. We could also fill the holes with other objects than just lines.
For example, for each $c\in \mathcal{C}$ we could fill the hole with a fractal that has $\rho(c)$ boundary points. We saw that the one-dimensionality of the lines influenced the dimension as well as the leading term of the spectral asymptotics. It would be interesting to see how other objects would influence these values. \begin{figure} \caption{Filling the hole with self-similar Sierpinski Gasket} \end{figure} We can also stretch sets that are not p.c.f., for example, the unit square $[0,1]^2$. It is the attractor of four similitudes \begin{align*} F_1\begin{pmatrix}x_1\\x_2\end{pmatrix}&=\begin{pmatrix}0.5&0\\0&0.5 \end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}\\ F_2\begin{pmatrix}x_1\\x_2\end{pmatrix}&=\begin{pmatrix}0.5&0\\0&0.5 \end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}+\begin{pmatrix}0.5\\0\end{pmatrix}\\ F_3\begin{pmatrix}x_1\\x_2\end{pmatrix}&=\begin{pmatrix}0.5&0\\0&0.5 \end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}+\begin{pmatrix}0\\0.5\end{pmatrix}\\ F_4\begin{pmatrix}x_1\\x_2\end{pmatrix}&=\begin{pmatrix}0.5&0\\0&0.5 \end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}+\begin{pmatrix}0.5\\0.5\end{pmatrix} \end{align*} However, there is no obvious way to connect the copies if we lower the contraction ratios. We could still use one-dimensional lines, but it is not obvious where to place them. This procedure changes the connectedness structure and gives us a completely new fractal, which has to be analyzed geometrically and analytically. We have to place the lines in such a way that they connect the \mbox{1-cells}~$\Sigma_i$ to ensure connectedness. \begin{figure} \caption{Stretched unit square - version 1} \end{figure} We can also place the lines somewhere else. \begin{figure} \caption{Stretched unit square - version 2} \end{figure} Another way to connect the copies is to use two-dimensional areas. \begin{figure} \caption{Stretched unit square - version 3} \end{figure} This gives us a completely different fractal. The two-dimensional part will dominate the geometric and analytical appearance. There are many possible ways to connect the copies, ranging from one- to two-dimensional objects. This gives rise to many new and interesting fractals. \end{document}
\begin{document} \title{\LARGE \bf Distributed and Constrained \( \mathcal{H}_2 \) Control} \thispagestyle{empty} \pagestyle{empty} \begin{abstract} Design of optimal distributed linear feedback controllers to achieve a desired aggregate behavior, while simultaneously satisfying state and input constraints, is a challenging but important problem in many applications. System level synthesis is a recent technique which has been used to reparametrize the optimal control problem as a convex program. Prior work on system level synthesis with state and input constraints has either imposed closed-loop finite impulse response and locality constraints or, when these constraints were lifted using a simple pole approximation, has considered only a centralized design. However, closed-loop finite impulse response and locality constraints cannot be satisfied in many applications. Furthermore, the centralized design using the simple pole approximation lacks robustness to communication failures and disturbances, has high computational cost and does not preserve data privacy of local controllers. The main contribution of this work is to develop a distributed solution to system level synthesis with the simple pole approximation, in order to incorporate state and input constraints without closed-loop finite impulse response or locality constraints. To achieve this, it is first shown that the dual of this problem is a distributed consensus problem. Then, an algorithm is developed based on the alternating direction method of multipliers to solve the dual while recovering a primal solution, and a convergence certificate is provided. Finally, the method's performance is demonstrated on a test case of control design for distributed energy resources that collectively provide stability services to the power grid. \end{abstract} \copyrightnotice \section{Introduction} Optimal design of linear feedback controllers with state and input constraints is a challenging but important problem. One celebrated approach is the Youla parametrization \cite{youla_param} which casts the optimal control problem as a convex program in terms of the Youla parameter, while input-output parametrization (IOP) \cite{iop} is a more recent method that focuses on output feedback. Along this direction, the recent work \cite{sls} introduced system level synthesis (SLS) whereby controllers are parametrized in terms of the closed-loop system responses. The resulting optimization problem is convex yet infinite-dimensional, hence intractable in general. To overcome this in the setting with state and input constraints, the authors employ a finite impulse response (FIR) approximation of the closed-loop transfer functions \cite{chen2019slsconstraints}. Unfortunately, this approximation is not feasible for stabilizable but uncontrollable systems and, even when feasible, gives rise to deadbeat control, which suffers from a number of shortcomings such as high computational cost and lack of robustness to uncertainty and disturbances due to its large control gains \cite{dbc_shortcomings}. Furthermore, \cite{sls}, \cite{chen2019slsconstraints} include additional closed-loop locality constraints, which are helpful for distributed implementations of SLS, but are not satisfied in many networked systems with coupling throughout the different areas in the network, such as power systems.
By contrast, in \cite{msls_placeholder} a computationally efficient simple pole approximation (SPA) is developed for which the closed-loop is not FIR, there are no closed-loop locality constraints, suboptimality certificates are derived, feasibility for stabilizable plants is guaranteed and prior knowledge of the system's optimal poles can be integrated. This approach does not result in deadbeat control since the system responses are not FIR. Moreover, state and input constraints can be non-conservatively incorporated in the SPA formulation as illustrated in \cite{spa_dvpp_placeholder}. The goal of this work is to enable a distributed implementation of SLS with SPA. This offers a number of advantages over the centralized approach, including robustness to communication failures, uncertainty and disturbances, scalability due to the distribution of the computational burden and preservation of data privacy of agents owing to the absence of a central coordinator. Achieving this distributed deployment ultimately amounts to solving a distributed optimization problem. In particular, the considered problem structure of SPA subject to a peer-to-peer communication setup belongs to the recently introduced Distributed Aggregative Optimization (DAO) framework \cite{dao_main}. Existing DAO algorithms typically require differentiability and strong convexity of the objective function (see \cite{dao_main}, \cite{dao_2}) which is not guaranteed to hold for SPA control design. Although \cite{dao_3} relaxes the strong convexity assumption, it requires vanishing step sizes, which substantially reduces the convergence rate. Moreover, these methods do not have the ability to incorporate constraints coupled across multiple devices, which can often appear in practice (e.g.\ see \autoref{sec:test_case}). The main contribution of this work lies in the development of a new optimal linear feedback control design method with non-conservative state and input constraints, no closed-loop FIR or locality constraints, and a distributed and convex implementation. To do so, starting from the centralized formulation of SLS with SPA we show that the dual is a consensus problem and, thus, can be solved using the distributed algorithm proposed in \cite{consensus_admm}, which is based on the alternating direction method of multipliers (ADMM) \cite{admm_classic}. This dual consensus ADMM approach was employed in \cite{dual_consensus_zero} and \cite{dual_consensus_large} to solve decentralized resource allocation problems. Here, a general distributed optimization scheme is developed, which is an extension of these prior methods to a larger class of objective functions. Under weak assumptions, convergence certificates are provided that guarantee convergence of the primal variables to the set of primal solutions, which is a stronger convergence result than in prior work \cite{dual_consensus_zero,dual_consensus_large}, where either stronger assumptions are made (such as smoothness, strong convexity, and full rank of associated matrices), or it is only shown that each limit point is a minimizer of the primal problem, without any guarantee that such limit points exist (so the primal variables could diverge towards infinity). Then, the algorithm is specialized for the SPA control design by exploiting the underlying problem structure to simplify the ADMM subproblems, thus improving computational efficiency.
A recent application of distributed control is heterogeneous ensemble control \cite{verena_dvpp}, where the goal is to design local controllers so that in aggregate they achieve a desired dynamic behavior as well as possible, subject to device limitations and coupling constraints. Prior distributed solution methods for this problem rely primarily on heuristics for disaggregating desired behavior among the agents, cannot explicitly incorporate state and input constraints, and require manual tuning of controller parameters \cite{verena_dvpp, bjork_dvpp}. The distributed control design developed here has bounded suboptimality \cite{msls_placeholder}, explicitly includes state, input and coupling constraints non-conservatively, and does not require any manual tuning related to controller specifications, addressing the limitations of prior work. Its effectiveness for solving heterogeneous ensemble control is demonstrated in \autoref{sec:test_case} for the control of distributed energy resources (DERs) that collectively provide frequency control to the power grid. The remainder of this paper is structured as follows. In \autoref{sec:problem_formulation}, we review SLS and SPA, and formulate the optimal control design problem. In \autoref{sec:dual_consensus}, we abstract the previous problem, develop a distributed ADMM-based algorithm to solve it and establish its convergence. \autoref{sec:test_case} explains our simulation setup and demonstrates our algorithm's performance, while \autoref{sec:conclusion} concludes the paper. \textit{Notation:} Let \( \mathbb{N}, \mathbb{R}, \mathbb{C} \) respectively denote the set of natural, real and complex numbers, while \( \overline{\mathbb{R}} = \mathbb{R} \cup \{+\infty\} \). \( \mathbb{R}^n \) is the \( n \)-dimensional Euclidean space and \( \norm{\cdot} \) its norm, and let \( \mathbb{R}^n_{\mathrm{ex}} := \mathbb{R}^n \cup \{ \infty \} \) denote the extended Euclidean space, i.e., the one-point compactification of \( \mathbb{R}^n \). Given a matrix \( A \) we denote by \( A^T \) its transpose, and for some matrix \( B \) of appropriate dimensions \( [A;B] \) is their vertical concatenation. A function \( f : \mathbb{R}^n \to \overline{\mathbb{R}} \) is \textit{proper} and \textit{closed} if its epigraph is, respectively, non-empty and closed. Let \( \Gamma_n \) denote the set of proper, closed and convex functions \( f: \mathbb{R}^n \to \overline{\mathbb{R}} \). Then, for \( f \in \Gamma_n \) the \textit{conjugate} of \( f \) is defined as \( \conj{f}(y) := \sup_x \{ y^T x - f(x) \} \) and satisfies \( \conj{f} \in \Gamma_n \), the \textit{proximal operator} is \( \proxim{f}^{\rho}(x) := \argmin_y \{f(y) + \frac{\rho}{2} \norm{x - y}_2^2\} \) for any \( \rho > 0 \), and the \textit{subdifferential} is the set-valued map \( \partial f(x) := \{ u \in \mathbb{R}^n ~|~ (\forall y \in \mathbb{R}^n) ~ f(y) \geq f(x) + u^T (y - x) \} \). We let \( \mathcal{I}_{\mathcal{X}} \) denote the \textit{indicator function} of \( \mathcal{X} \subseteq \mathbb{R}^n \) for which it holds \( \mathcal{I}_{\mathcal{X}}(x) = 0 \) for \( x \in \mathcal{X} \), and \( \mathcal{I}_{\mathcal{X}}(x) = + \infty \) otherwise. For any sequence \( (x^k)_{k \in \mathbb{N}} \subseteq \mathbb{R}^n_{\mathrm{ex}} \) let \( \omega(x^k) \) denote the \( \omega \) limit set: the set of points \( y \in \mathbb{R}^n_{\mathrm{ex}} \) such that there exists a subsequence of \( (x^k)_{k \in \mathbb{N}} \) which converges to \( y \). 
For any set \( \mathcal{S} \subseteq \mathbb{R}^n_{\mathrm{ex}} \), we say that \( (x^k)_{k \in \mathbb{N}} \) converges to \( \mathcal{S} \), denoted by \( x^k \to \mathcal{S} \), if for every open neighborhood \( V \) of \( \mathcal{S} \), there exists \( N > 0 \) finite such that \( k \geq N \) implies \( x^k \in V \). \section{Problem Formulation and Background} \label{sec:problem_formulation} We consider a collection \( \mathcal{N} = \{1, \ldots, N\} \) of discrete-time LTI systems, referred to as agents, with local dynamics \begin{equation} \begin{aligned} x_i^{k+1} & = A_i x_i^k + B_i u_i^k + \hat{B}_i w^k \\ y_i^k & = C_i x_i^k \end{aligned} \end{equation} for each agent \( i \in \mathcal{N} \) and time step \( k \in \mathbb{N} \). We denote by \( x_i^k \in \mathbb{R}^{n_{x,i}} \) the state and by \( u_i^k \in \mathbb{R}^{n_{u,i}} \) the individual control signal of each system, \( w^k \in \mathbb{R}^{n_{w}} \) is an external disturbance, while \( y_i^k \in \mathbb{R}^{n_{y,i}} \) is the output. We endow each agent with a dynamic state feedback controller of the form \( U_i(z) = K_i(z) X_i(z) \), where \( X_i(z)\) and \( U_i(z) \) are the \( z \)-transforms of the signals \( x_i^k \) and \( u_i^k\), respectively. Then, the agent-specific closed-loop transfer function mapping disturbance to output is \( \tfwy{,i}(z) = C_i (zI - A_i - B_i K_i(z))^{-1} \hat{B}_i \) and \( \tfwx{,i}(z), \tfwu{,i}(z) \) are defined similarly, while we let \( \Phi_{\text{des}}(z) \) be a desired transfer function, i.e., a design choice. Our goal is to solve the following model matching problem \begin{equation} \label{eq:sls_initial} \begin{alignedat}{2} & \underset{K_1(z), \ldots, K_N(z)}{\textrm{minimize }}~ && \Big\lVert \sum_{i \in \mathcal{N}}\tfwy{,i}(z) - \Phi_{\text{des}}(z) \Big\rVert_{\mathcal{H}_2}^2 \\ & ~~\textrm{subject to} && \tfwx{,i}(z), \tfwu{,i}(z) \in \mathcal{R} \end{alignedat} \end{equation} where \( \norm{\cdot}_{\mathcal{H}_2} \) is the \( \mathcal{H}_2 \) norm and \( \mathcal{R} \) is the Hardy space of real, rational, strictly proper and stable transfer functions that additionally satisfy time-dependent constraints on states and inputs for a collection of known disturbance signals (which can include, e.g., impulses and/or steps). Unfortunately, \eqref{eq:sls_initial} is non-convex in \( K_i(z) \). In their seminal work \cite{sls}, the authors propose SLS whereby the previous problem is reformulated by using \( \tfwx{,i}(z), \tfwu{,i}(z) \) as design variables. This reparametrization renders the problem convex, at the price of imposing additional affine constraints. The resulting controller for each agent can be recovered as \( K_i(z) = \tfwu{,i}(z) \tfwx{,i}^{-1}(z) \), although realizations exist that do not require any transfer function inversion. Crucially, controller recovery relies solely on local information, i.e., \( \tfwx{,i}(z), \tfwu{,i}(z) \), so no additional communication is necessary for this step. Nonetheless, the problem is infinite-dimensional, and hence intractable in this form. An FIR approximation is proposed in \cite{chen2019slsconstraints} to obtain a tractable control design problem, but it suffers from the drawbacks discussed in the introduction. 
In order to address the limitations of the closed-loop FIR and locality constraints, \cite{msls_placeholder} proposed an approximation of \( \tfwx{,i}(z), \tfwu{,i}(z) \) using simple poles as follows \begin{align} \tfwx{,i}(z) = \sum_{p \in \mathcal{P}_i}^{} G_{p,i} \frac{1}{z - p}, ~ \tfwu{,i}(z) = \sum_{p \in \mathcal{P}_i}^{} H_{p,i} \frac{1}{z - p} \end{align} where \( \mathcal{P}_i \subseteq \mathbb{C} \) is a fixed finite set of complex poles inside the unit disk and closed under complex conjugation, and the decision variables \( G_{p,i}, H_{p,i} \) are complex matrix coefficients associated with pole \( p \in \mathcal{P}_i \) and agent \( i \in \mathcal{N} \). The authors prove convergence to a globally optimal solution as the number of poles increases, and suboptimality bounds based on the geometry of the pole selection are provided \cite{msls_placeholder}. This approximation renders \eqref{eq:sls_initial} both convex and tractable. Following \cite{msls_placeholder}, we denote the impulse response of \( \Phi_{x,i} \) and \( \Phi_{u,i} \) at time step \( k \) as \( \mathcal{J}^k [\Phi_{x,i}] := \sum_{p \in \mathcal{P}_i} G_{p,i} p^{k-1} \) and \( \mathcal{J}^k [\Phi_{u,i}] := \sum_{p \in \mathcal{P}_i} H_{p,i} p^{k-1} \), respectively. Then, to solve problem \eqref{eq:sls_initial} we recognize that the \( \mathcal{H}_2 \) norm is well-approximated by the Frobenius norm of the (sufficiently large) finite-horizon impulse response \cite{msls_placeholder}. Moreover, for a known disturbance signal \( (w^k)_{k \in \mathbb{N}} \) the state of system \( i \in \mathcal{N} \) at step \( k \in \mathbb{N} \) is given by \( x_i^k = \sum_{l=0}^{k} \mathcal{J}^{k-l} [\Phi_{x,i}] \hat{B}_i w^l \), while the input is \( u_i^k = \sum_{l=0}^{k} \mathcal{J}^{k-l} [\Phi_{u,i}] \hat{B}_i w^l \). Crucially, observe that \( \mathcal{J}^k [\Phi_{x,i}] \) (resp.\ \( \mathcal{J}^k [\Phi_{u,i}] \)) depends linearly on the decision variable \( G_{p,i} \) (resp.\ \( H_{p,i} \)) and hence the constraint \( x_i^k \in \mathcal{C} \) (resp.\ \( u_i^k \in \mathcal{C} \)) is convex for any convex set \( \mathcal{C} \). In fact, we may impose this constraint for a collection of disturbance signals while retaining convexity. The last step in our problem formulation is to convert the decision variables from complex matrices to real vectors by representing the real and imaginary parts of \( G_{p,i} \) and \( H_{p,i} \) as separate matrices and then employing vectorization. The resulting optimization problem can be represented in the abstract form \begin{subequations} \label{eq:abmm} \begin{alignat}{2} & \underset{x_1, \ldots, x_N}{\textrm{minimize }}~ && \Big\lVert \sum\nolimits_{i \in \mathcal{N}} D_i x_i - d \Big\rVert ^2 \\ & \textrm{subject to}~~~ && E_i x_i = e_i, ~ \forall i \in \mathcal{N} \label{eq:abmm_eq} \\ & && M_i x_i \leq m_i, ~ \forall i \in \mathcal{N} \label{eq:abmm_ineq} \\ & && \sum\nolimits_{i \in \mathcal{N}} N_i x_i = 0 \label{eq:abmm_couple}, \end{alignat} \end{subequations} where \( x_1, \ldots, x_N \) are the agent-specific decision variables, while \( d, e_i, m_i \) and \( D_i, M_i, N_i \) are vectors and matrices of appropriate dimensions. A detailed and centralized version of \eqref{eq:abmm} can be found in \cite{spa_dvpp_placeholder}. The objective function corresponds to the original problem where \( \sum_{i \in \mathcal{N}} D_i x_i \) and \( d \) represent the finite-horizon aggregate and desired impulse response, respectively. 
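As an illustration of the linearity argument above, the following minimal sketch (in Python, with purely hypothetical poles, dimensions and data; it is not the implementation used in this paper) checks numerically that the finite-horizon impulse response \( \mathcal{J}^k[\Phi_{x,i}] \) is a linear function of the coefficients \( G_{p,i} \), which is precisely why state and input constraints on finite-horizon trajectories remain convex in the decision variables.
\begin{verbatim}
import numpy as np

# Illustrative sketch (not the paper's code): the finite-horizon impulse
# response of a simple-pole approximation is linear in the pole coefficients.
rng = np.random.default_rng(0)
T = 20                      # finite horizon used to approximate the H2 norm
n_x, n_w = 3, 1             # hypothetical state / disturbance dimensions
poles = np.array([0.5, 0.3 + 0.4j, 0.3 - 0.4j])   # inside the unit disk

def impulse_response(G):    # G: (len(poles), n_x, n_w) complex coefficients
    # J^k[Phi_x] = sum_p G_p * p**(k-1), for k = 1, ..., T (real part kept)
    return np.stack([sum(G[i] * p ** (k - 1) for i, p in enumerate(poles))
                     for k in range(1, T + 1)]).real

G1 = rng.standard_normal((3, n_x, n_w)) + 1j * rng.standard_normal((3, n_x, n_w))
G2 = rng.standard_normal((3, n_x, n_w)) + 1j * rng.standard_normal((3, n_x, n_w))
a, b = 2.0, -0.7
# linearity check: J[a*G1 + b*G2] == a*J[G1] + b*J[G2]
print(np.allclose(impulse_response(a * G1 + b * G2),
                  a * impulse_response(G1) + b * impulse_response(G2)))
\end{verbatim}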
Constraints \eqref{eq:abmm_eq}, \eqref{eq:abmm_ineq} express state, input and output constraints of the system under a collection of worst-case disturbances; \eqref{eq:abmm_eq} also includes the SLS affine constraints. Finally, coupling constraints among the devices are prescribed in \eqref{eq:abmm_couple}. In this work, we aim at solving \eqref{eq:abmm} under the assumption that agents do not share their individual problem data and only peer-to-peer communication is possible. Formally, the communication network is described by the graph \( \mathcal{G} = (\mathcal{N}, \mathcal{E}) \), where nodes correspond to agents and \( \mathcal{E} \subseteq \mathcal{N} \times \mathcal{N} \) is the set of edges. The pair \( (i,j) \in \mathcal{N} \times \mathcal{N} \) belongs to \( \mathcal{E} \) if and only if agents \( i \) and \( j \) can directly communicate. We denote by \( \mathcal{N}_i := \{j \in \mathcal{N} ~|~ (i,j) \in \mathcal{E} \} \) the set of neighbors of agent \( i \in \mathcal{N} \) and by \( d_i := | \mathcal{N}_i| \) their number. \section{Main Results} \label{sec:dual_consensus} In this section, we develop a distributed algorithm to solve a generalization of \eqref{eq:abmm} and provide a convergence certificate. The algorithm and results are then specialized to \eqref{eq:abmm}. \subsection{Dual Consensus ADMM} For the sake of generality, we consider the problem \begin{equation} \label{eq:admm_primal} \begin{alignedat}{2} & \underset{x_1, \ldots, x_N}{\textrm{minimize }}~ && \sum\nolimits_{i \in \mathcal{N}}^{} f_i(x_i) + g\left( \sum\nolimits_{i \in \mathcal{N}} Q_i x_i \right) \\ \end{alignedat} \end{equation} where \( x_i \in \mathbb{R}^{n_i}, Q_i \in \mathbb{R}^{m \times n_i} \) and let \( \boldsymbol{x} := [x_1; \ldots; x_N] \in \mathbb{R}^n \) with \( n := \sum_{i \in \mathcal{N}} n_i \). Problem \eqref{eq:abmm} is a special case of \eqref{eq:admm_primal} under the substitution \( f_i := \mathcal{I}_{\mathcal{X}_i},~ \mathcal{X}_i := \{ x_i \in \mathbb{R}^{n_i} ~|~ E_i x_i = e_i, ~ M_i x_i \leq m_i \},~ Q_i := [D_i;N_i],~ g\left( [z_1; z_2] \right) := \norm{z_1 - d}^2 + \mathcal{I}_{\{0\}}(z_2), \) where the dimensions of vectors \( z_1, z_2 \) correspond to the number of rows of \( D_i \) and \( N_i \), respectively. We will study \eqref{eq:admm_primal} under the following assumption. \begin{assumption} \label{ass:admm} \ \begin{enumerate} \item \( g \in \Gamma_m \) and \( f_i \in \Gamma_{n_i} \), for all \( i \in \mathcal{N} \). \item A primal-dual solution exists and strong duality holds. \item The graph \( \mathcal{G} \) is connected and undirected. \( \square \) \end{enumerate} \end{assumption} For our problem \eqref{eq:abmm}, feasibility corresponds to stabilizability of each system by \cite[Lemma 4.2]{sls} and implies Assumptions 1.1 and 1.2, with the latter following by Slater's constraint qualification \cite[\S 5.2.3]{boyd_convex} since all constraints are affine. Solving \eqref{eq:admm_primal} with a distributed algorithm is challenging due to the coupling introduced by \( \sum\nolimits_{i \in \mathcal{N}} Q_i x_i \). In the sequel, we will solve the dual of \eqref{eq:admm_primal} using the ADMM-based distributed algorithm proposed in \cite{consensus_admm}, and then recover a primal solution. 
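For concreteness, the substitution of \eqref{eq:abmm} into the abstract form \eqref{eq:admm_primal} given above can be mimicked numerically as in the following minimal sketch; all data are synthetic and purely hypothetical, and the sketch is illustrative only. It checks that, on any point satisfying the coupling constraint \eqref{eq:abmm_couple}, evaluating \( g \) at the aggregate \( \sum_i Q_i x_i \) recovers the model-matching cost of \eqref{eq:abmm}.
\begin{verbatim}
import numpy as np

# f_i = indicator of the local polyhedron X_i,  Q_i = [D_i; N_i],
# g([z1; z2]) = ||z1 - d||^2 + indicator_{0}(z2)   (hypothetical data).
rng = np.random.default_rng(1)
N, n_i, m1, m2 = 3, 4, 5, 2            # agents, local dim, rows of D_i / N_i
D = [rng.standard_normal((m1, n_i)) for _ in range(N)]
Nc = [rng.standard_normal((m2, n_i)) for _ in range(N)]
d = rng.standard_normal(m1)
Q = [np.vstack([D[i], Nc[i]]) for i in range(N)]     # Q_i = [D_i; N_i]

def g(z, tol=1e-9):
    z1, z2 = z[:m1], z[m1:]
    return np.inf if np.linalg.norm(z2) > tol else np.linalg.norm(z1 - d) ** 2

# build a point satisfying the coupling constraint sum_i N_i x_i = 0
x = [rng.standard_normal(n_i) for _ in range(N)]
x[0] -= np.linalg.lstsq(Nc[0], sum(Nc[i] @ x[i] for i in range(N)),
                        rcond=None)[0]
agg = sum(Q[i] @ x[i] for i in range(N))
print(np.isclose(g(agg),
                 np.linalg.norm(sum(D[i] @ x[i] for i in range(N)) - d) ** 2))
\end{verbatim}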
The method in \cite{consensus_admm} addresses the \textit{decomposed consensus} optimization problem \begin{equation} \label{eq:consensus} \begin{alignedat}{2} & \underset{y}{\textrm{minimize }} ~ && \sum\nolimits_{i \in \mathcal{N}} \zeta_i(y) + \xi_i(y) \end{alignedat} \end{equation} where \( \zeta_i, \xi_i \in \Gamma_m \) for all \( i \in \mathcal{N} \). The relevant consensus ADMM algorithm is outlined in \autoref{alg:decomposed_consensus}. Intuitively, \autoref{alg:decomposed_consensus} is derived by applying ADMM to \eqref{eq:consensus} while allowing agents to have local estimates of \( y \) and enforcing the estimates of neighboring agents to coincide (see \cite[App. A.2]{dual_consensus_large} for a detailed derivation). We employ this algorithm because the dual of \eqref{eq:admm_primal} is a decomposed consensus problem, as we show in the subsequent analysis. Our approach resembles and extends those in \cite{dual_consensus_zero}, \cite{dual_consensus_large}, where the authors respectively address the cases where \( g = \mathcal{I}_{\{0\}} \) and \( g = \mathcal{I}_{\mathcal{K}} \), for some nonempty, closed and convex cone \( \mathcal{K} \). To derive its dual, we rewrite \eqref{eq:admm_primal} as \begin{equation} \label{eq:admm_primal_eq} \begin{alignedat}{2} & \underset{w,x_1, \ldots, x_N}{\textrm{minimize }}~ && \sum\nolimits_{i \in \mathcal{N}}^{} f_i(x_i) + g\left( w \right) \\ & \textrm{subject to}~~~ && \sum\nolimits_{i \in \mathcal{N}} Q_i x_i = w \end{alignedat} \end{equation} and the Lagrangian function for this problem reads \begin{align*} \mathcal{L}(\boldsymbol{x},w,y) & = \sum_{i \in \mathcal{N}} f_i(x_i) + g(w) + y^T \bigg(\sum_{i \in \mathcal{N}} Q_i x_i - w \bigg) \end{align*} where \( y \in \mathbb{R}^m \) is the dual variable. The dual function is \begin{align*} h(y) & = \inf_{x,w} \mathcal{L}(x,w,y) \\ & = \inf_{x} \Big\{ \sum_{i \in \mathcal{N}} \left( f_i(x_i) + y^T Q_i x_i \right) \Big\} + \inf_{w} \left\{ g(w) - y^T w \right\} \\ & = - \sum\nolimits_{i \in \mathcal{N}} \conj{f}_i( -Q_i^T y) - \conj{g}(y) \end{align*} thus giving rise to the dual problem \begin{equation} \label{eq:admm_dual} \begin{alignedat}{2} & \underset{y}{\textrm{maximize }}~ && - \sum\nolimits_{i \in \mathcal{N}} \left( \conj{f}_i( -Q_i^T y) + \frac{1}{N} \conj{g}(y) \right) \end{alignedat}, \end{equation} which clearly belongs to the family of \eqref{eq:consensus}. Consequently, \autoref{alg:decomposed_consensus} is applicable with \( \zeta_i := \conj{f}_i \circ ( -Q_i^T ) \) and \( \xi_i := \frac{1}{N} \conj{g} \). \begin{algorithm} \caption{Decomposed Consensus ADMM} \begin{algorithmic}[1] \State \textbf{choose} \( \sigma, \rho > 0 \) \State \textbf{initialize} for all \( i \in \mathcal{N} \): \( p_i^0 = 0, y_i^0, z_i^0, s_i^0 \in \mathbb{R}^m \) \Repeat: for all \( i \in \mathcal{N} \) \State Exchange \( y_i^k \) with neighbors \( \mathcal{N}_i \) \State \( p_i^{k+1} \leftarrow p_i^k + \rho \sum_{j \in \mathcal{N}_i} \left( y_i^k - y_j^k \right) \) \State \( s_i^{k+1} \leftarrow s_i^k + \sigma (y_i^k - z_i^k) \) \State \label{step:consensus_zeta} \( y_i^{k+1} \leftarrow \argmin_{y_i} \bigl\{ \zeta_i(y_i) + y_i^T(p_i^{k+1} + s_i^{k+1}) \bigr. \) \( ~~~~~~~~~~~~~~~~~~~~~ \bigl. 
+\frac{\sigma}{2} \norm{y_i - z_i^k}^2 + \rho \sum_{j \in \mathcal{N}_i} \norm{y_i - \frac{y_i^k + y_j^k}{2}}^2 \bigr\} \) \State \label{step:consensus_xi} \( z_i^{k+1} \leftarrow \argmin_{z_i} \{ \xi_i(z_i) - z_i^T{s_i^{k+1}} + \frac{\sigma}{2} \norm{z_i - y_i^{k+1}}^2 \} \) \Until{termination criterion is satisfied} \end{algorithmic} \label{alg:decomposed_consensus} \end{algorithm} Observe that the functions \( \conj{f}_i, \conj{g} \) may not be known in closed form, hence complicating steps \ref{step:consensus_zeta} and \ref{step:consensus_xi} of \autoref{alg:decomposed_consensus}, which would require solving nested optimization problems. To overcome this, we will reformulate these update rules in terms of the original functions. By completing the square, we can equivalently rewrite step \ref{step:consensus_xi} as \begin{equation} \label{eq:proximal_reform} \begin{alignedat}{1} & \argmin\nolimits_{z_i} \Big\{ \frac{1}{N} \conj{g}(z_i) + \frac{\sigma}{2} \norm{z_i - \frac{s_i^{k+1} + \sigma y_i^{k+1}}{\sigma}}^2 \Big\} \\ & = \proxim{\conj{g}}^{N \sigma} \Big( \frac{s_i^{k+1}}{\sigma} + y_i^{k+1} \Big) \\ & = \frac{s_i^{k+1}}{\sigma} + y_i^{k+1} - \frac{1}{N \sigma} \proxim{g}^{1/(N \sigma)} \Big(N \big(s_i^{k+1} + \sigma y_i^{k+1}\big) \Big) \end{alignedat} \end{equation} where the last equality holds since \( x = \proxim{g}^{1/\gamma}(x) + \gamma \proxim{\conj{g}}^{\gamma}(x/\gamma) \) by \cite[Th. 14.3(ii)]{convex_monotone2017}. Similarly, square completion for step \ref{step:consensus_zeta} yields \begin{equation*} \argmin_{y_i} \Big\{ \conj{f}_i(-Q_i^T y_i) + \frac{\sigma + 2 \rho d_i}{2} \big\lVert y_i - \frac{1}{\sigma + 2 \rho d_i} r_i^{k+1} \big\rVert ^2 \Big\} \end{equation*} where we defined for brevity \( r_i^{k+1} := \rho \sum_{j \in \mathcal{N}_i}(y_i^k + y_j^k) + \sigma z_i^k - p_i^{k+1} - s_i^{k+1} \). Observe that the minimizer of the previous problem equals \( \prox_{f_i^{\star} \circ \left( - Q_i^T \right)}^{\sigma + 2 \rho d_i} \left(\frac{r_i^{k+1}}{\sigma + 2 \rho d_i} \right) \) and, using \cite[Lemma B.1]{dual_consensus_large} with \( \mathcal{C} := \mathbb{R}^m \), the previous problem admits the solution \begin{equation} \label{eq:new_y_update} y_i^{k+1} \leftarrow \frac{1}{\sigma + 2 \rho d_i} \left ( Q_i x_i^{k+1} + r_i^{k+1} \right ) \end{equation} introducing the auxiliary variable \begin{equation} \label{eq:x_update} x_i^{k+1} \in \argmin_{x_i} \Big\{ f_i(x_i) + \frac{1}{2 (\sigma + 2 \rho d_i)} \norm{Q_i x_i + r_i^{k+1}}^2 \Big\}. \end{equation} Note that \eqref{eq:x_update} is guaranteed to be feasible but may, in general, admit an unbounded solution set. This can lead to the situation described in the introduction, where the primal variables have no limit points and instead diverge to infinity. To alleviate this, we introduce the artificial constraint \( \norm{x_i}_{\infty} \leq M_i \) for \( M_i > 0 \). The parameters \( M_i \) need not be known in advance since we may increase \( M_i \) whenever the additional constraint renders the problem infeasible or the resulting solution is suboptimal for the original problem, as will be shown in \autoref{alg:dual_conensus_admm}. To do so, we solve \eqref{eq:x_update} both with and without the artificial constraint \( \norm{x_i}_\infty \leq M_i \) and compare the results. 
If \eqref{eq:x_update} with the artificial constraint is infeasible or has higher cost than \eqref{eq:x_update} without the artificial constraint, then we know we have not yet found an optimal solution to \eqref{eq:x_update}, so we double \( M_i \) and repeat this procedure. The result is that the solution of \eqref{eq:x_update} with the artificial constraint will converge to an optimal solution of \eqref{eq:x_update} without this artificial constraint exponentially fast. This observation, along with the following assumption, will be used to show boundedness of \( x^{k}_i \) for all \( i \in \mathcal{N}, k \in \mathbb{N} \) in our convergence analysis. Then, we will be able to show that the limit set of the primal variables is nonempty, and that the primal variables converge to this limit set, which will guarantee convergence of our algorithm. \begin{assumption} \label{ass:minim_contin} For each \( i \in \mathcal{N} \) and \( \tilde{r}_i \in \mathbb{R}^m \) there exist \( R > 0 \) and \( M_i \) finite, such that \( \norm{r_i^{k+1} - \tilde{r}_i} \leq R \) implies that the solution set of \eqref{eq:x_update} intersects the set \( \{x_i \in \mathbb{R}^{n_i} ~|~ \norm{x_i}_{\infty} \leq M_i \}\). \( \square \) \end{assumption} Assumption \ref{ass:minim_contin} is fairly weak and holds for a large class of problems, including \eqref{eq:abmm}. Applying \eqref{eq:proximal_reform}-\eqref{eq:x_update} to \autoref{alg:decomposed_consensus} yields \autoref{alg:dual_conensus_admm}, where subproblems have been reformulated in terms of the original functions and, therefore, the structure of the primal \eqref{eq:admm_primal} is preserved. \autoref{alg:dual_conensus_admm} also includes the additional infinity norm constraints. Note that, if the solution set of \eqref{eq:x_update} is bounded, the additional step \ref{step:x_update} can be disregarded. \begin{algorithm} \caption{Dual consensus ADMM for \eqref{eq:admm_primal}.} \begin{algorithmic}[1] \State \textbf{choose} \( \sigma, \rho > 0 \) \State \textbf{initialize} for all \( i \in \mathcal{N} \): \( p_i^0 = 0, ~ y_i^0, z_i^0, s_i^0 \in \mathbb{R}^m \) \Repeat: for all \( i \in \mathcal{N} \): \State exchange \( y_i^k \) with neighbors \( \mathcal{N}_i \) \State \( p_i^{k+1} \leftarrow p_i^k + \rho \sum_{j \in \mathcal{N}_i} \left( y_i^k - y_j^k \right) \) \State \( s_i^{k+1} \leftarrow s_i^k + \sigma (y_i^k - z_i^k) \) \label{step:s_update} \State \( r_i^{k+1} \leftarrow \rho \sum_{j \in \mathcal{N}_i}(y_i^k + y_j^k) + \sigma z_i^k - p_i^{k+1} - s_i^{k+1} \) \State \label{step:x_preupdate}Compute the optimal value of \eqref{eq:x_update} \State \label{step:x_update} \( x_i^{k+1} \leftarrow \text{Update using \eqref{eq:x_update}} \text{ with } \norm{x_i}_{\infty} \leq M_i. \) ~~~~~~~~~~~~~~~~~If infeasible or suboptimal: \( M_i \leftarrow 2 M_i \), repeat Step \ref{step:x_update}. \State \label{step:y_update} \( y_i^{k+1} \leftarrow \text{Update using \eqref{eq:new_y_update}} \) \State \label{step:z_update} \( z_i^{k+1} \leftarrow \text{Update using \eqref{eq:proximal_reform}} \) \Until{termination criterion is satisfied} \end{algorithmic} \label{alg:dual_conensus_admm} \end{algorithm} \subsection{Distributed Optimal Control Design} In this subsection, we specialize \autoref{alg:dual_conensus_admm} to \eqref{eq:abmm} and exploit its structure to derive more efficient iterations. 
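Before doing so, the following sketch illustrates one possible implementation of a generic iteration of \autoref{alg:dual_conensus_admm} on a toy instance of \eqref{eq:admm_primal} with \( f_i = 0 \) and \( g(z) = \norm{z - d}^2 \); since the least-squares subproblem then has a unique solution, the \( M_i \) safeguard of step \ref{step:x_update} is omitted. All data, dimensions and parameter choices are hypothetical, and this is not the code used for the experiments of \autoref{sec:test_case}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, n_i, m = 3, 2, 4
Q = [rng.standard_normal((m, n_i)) for _ in range(N)]
d = rng.standard_normal(m)
neighbors = {0: [1], 1: [0, 2], 2: [1]}     # connected path graph
sigma = rho = 1.0

def prox_g(v, r):                           # prox_g^r(v) for g(z) = ||z - d||^2
    return (2 * d + r * v) / (2 + r)

y = [np.zeros(m) for _ in range(N)]; z = [np.zeros(m) for _ in range(N)]
p = [np.zeros(m) for _ in range(N)]; s = [np.zeros(m) for _ in range(N)]
x = [np.zeros(n_i) for _ in range(N)]

for k in range(500):
    y_old = [yi.copy() for yi in y]         # values exchanged with neighbors
    for i in range(N):
        di = len(neighbors[i])
        p[i] = p[i] + rho * sum(y_old[i] - y_old[j] for j in neighbors[i])
        s[i] = s[i] + sigma * (y_old[i] - z[i])
        r_i = (rho * sum(y_old[i] + y_old[j] for j in neighbors[i])
               + sigma * z[i] - p[i] - s[i])
        # x-update (12): with f_i = 0 this is an (unconstrained) least-squares
        x[i] = np.linalg.lstsq(Q[i], -r_i, rcond=None)[0]
        # y-update (11)
        y[i] = (Q[i] @ x[i] + r_i) / (sigma + 2 * rho * di)
        # z-update via the Moreau identity (9)
        v = N * (s[i] + sigma * y[i])
        z[i] = s[i] / sigma + y[i] - prox_g(v, 1.0 / (N * sigma)) / (N * sigma)
    if k % 100 == 99:
        res = np.linalg.norm(sum(Q[i] @ x[i] for i in range(N)) - d)
        print(f"iter {k+1:4d}  model-matching residual {res:.2e}")
\end{verbatim}
The printed residual should decrease towards zero, in line with the convergence analysis below; on the control problem \eqref{eq:abmm}, the only change is that the x-update becomes a QP, as discussed next.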
In particular, using \( f_i = \mathcal{I}_{\mathcal{X}_i} \) where \( \mathcal{X}_i \) are polyhedral, we recognize that the problems in steps \eqref{step:x_preupdate} and \eqref{step:x_update} are QPs. Similarly, the proximal involved in step \ref{step:z_update} can be computed in closed form \begin{equation*} \proxim{g}^{1/(N \sigma) }([z_1;z_2]) = \bigg[\frac{2 N \sigma}{2 N \sigma + 1}d + \frac{1}{2 N \sigma +1} z_1;0 \bigg]. \end{equation*} Thus, our algorithm requires each agent to solve just two QPs per iteration for large enough \( M_i \). \subsection{Convergence Analysis} Next, we will establish convergence of \autoref{alg:dual_conensus_admm} and show how a primal solution can be recovered. Firstly, we recall several useful statements from convex analysis. \begin{lemma} \label{lemma:fermat} Let \( y,z \in \mathbb{R}^n, f \in \Gamma_n \) and consider the sequence \( (x^k, u^k)_{k \in \mathbb{N}} \) such that \( x^k \to x, u^k \to u \) and \( u^k \in \partial f(x^k) \) for all \( k \in \mathbb{N} \). Then, \begin{enumerate} \item \( \argmin f = \{x \in \mathbb{R}^n ~|~ 0 \in \partial f(x)\} \) \cite[Th. 16.3]{convex_monotone2017}, \item \( y \in \partial \conj{f}(z) \iff z \in \partial f(y) \) \cite[Cor. 16.30]{convex_monotone2017}, \item \( u \in \partial f(x) \) \cite[Cor. 16.36]{convex_monotone2017}. \( \square \) \end{enumerate} \end{lemma} We will employ the previous lemma to prove our main result. \begin{theorem} The iterates \( ((y_i^{k})_{i \in \mathcal{N}})_{k \in \mathbb{N}}, ((z_i^{k})_{i \in \mathcal{N}})_{k \in \mathbb{N}} \) of \autoref{alg:dual_conensus_admm} converge to a maximizer of the dual \eqref{eq:admm_dual}, while \( ((x_i^{k})_{i \in \mathcal{N}})_{k \in \mathbb{N}} \) converges to the set of minimizers of the primal \eqref{eq:admm_primal}. \( \square \) \end{theorem} \begin{proof} Initially, we recall that \autoref{alg:dual_conensus_admm} is derived by applying ADMM to \eqref{eq:consensus} and that ADMM is a special case of the Douglas-Rachford splitting algorithm \cite{admm_drs}. Invoking \cite[Cor. 28.3]{convex_monotone2017} it holds \( y^k_i \to y^{\star}, ~ z^k_i \to y^{\star} \) for all \( i \in \mathcal{N} \), where \( y^{\star} \) is a maximizer of \eqref{eq:admm_dual}. Further, we note that the first-order optimality conditions for a primal-dual solution of \eqref{eq:admm_primal} correspond to saddle-points of the Lagrangian and, hence, are given by \begin{subequations} \label{eq:optimality} \begin{align} 0 & \in \partial_{x_i} \mathcal{L}(x,w,y) = \partial f_i(x_i) + Q_i^T y, ~\forall i \in \mathcal{N} \label{eq:optimality_1} \\ 0 & \in \partial_{w} \mathcal{L}(x,w,y) = \partial g(w) - y \label{eq:optimality_2} \\ 0 & = \nabla_y \mathcal{L}(x,w,y) = \sum\nolimits_{i \in \mathcal{N}} Q_i x_i - w. \label{eq:optimality_3} \end{align} \end{subequations} We denote \( \boldsymbol{x}^k := (x^k_i)_{i \in \mathcal{N}}, \boldsymbol{x}^{\star} \in \omega(\boldsymbol{x}^k), w^{k} := \sum_{i \in \mathcal{N}} s_i^{k} \) and \( w^{\star}:=\lim_{k \to \infty}w^k \) which is well-defined by step \ref{step:s_update} since \( y^k_i, z^k_i \to y^{\star} \). In the sequel, we will show that \( (\boldsymbol{x}^{\star}, y^{\star}, w^{\star}) \) satisfies \eqref{eq:optimality} which implies that \( \boldsymbol{x}^{\star} \) is a primal minimizer for every \( \boldsymbol{x}^{\star} \in \omega(\boldsymbol{x}^k) \). 
Consider step \ref{step:x_update} and note that, by construction of the update (doubling \( M_i \) until neither infeasibility nor suboptimality occurs), \( x_i^{k+1} \) is a minimizer of \eqref{eq:x_update}. Employing \autoref{lemma:fermat}.1 yields \begin{align*} 0 & \in \partial f_i(x_i^{k+1}) + \frac{1}{\sigma + 2 \rho d_i} Q_i^T \left( Q_i x_i^{k+1} + r_i^{k+1} \right) \\ & = \partial f_i(x_i^{k+1}) + Q_i^T y_i^{k+1} \end{align*} which corresponds to \eqref{eq:optimality_1} being satisfied in each iteration and, by \autoref{lemma:fermat}.3, \eqref{eq:optimality_1} is satisfied by \( (\boldsymbol{x}^{\star}, y^{\star}) \). Next, invoking \autoref{lemma:fermat}.1 for step \ref{step:z_update} in the form of \eqref{eq:proximal_reform}, summing over all \( i \in \mathcal{N} \) and rearranging terms we obtain \begin{align*} w^{k+1} - \sigma \sum_{i \in \mathcal{N}} (z_i^{k+1} - y_i^{k+1} ) \in \frac{1}{N} \sum_{i \in \mathcal{N}} \partial \conj{g}(z_i^{k+1}). \end{align*} Then, taking the limit and utilizing \autoref{lemma:fermat}.3 results in \begin{align*} \lim_{k \to \infty} w^{k+1} \in \frac{1}{N} \sum_{i \in \mathcal{N}} \partial \conj{g}\Big(\lim_{k \to \infty} z_i^{k+1}\Big) & \implies w^{\star} \in \partial \conj{g}(y^{\star}) \\ & \implies y^{\star} \in \partial g(w^{\star}) \end{align*} where the first implication stems from \( z^k_i \to y^{\star} \) while the second follows from \autoref{lemma:fermat}.2, thus showing \eqref{eq:optimality_2}. Finally, substituting \( r_i^{k+1} \) in step \ref{step:y_update} and summing over all \( i \in \mathcal{N} \) it holds \begin{align*} \sum_{i \in \mathcal{N}} Q_i x_i^{k+1} - w^{k+1} & = \sigma \sum_{i \in \mathcal{N}} (y_i^{k+1} - z_i^k) + \sum_{i \in \mathcal{N}} p_i^{k+1} \\ & ~~~~~ + \rho \sum_{i \in \mathcal{N}} \sum_{j \in \mathcal{N}_i} (2 y_i^{k+1} - y_i^k - y_j^k) \end{align*} where \( \sum_{i \in \mathcal{N}} p_i^{k+1} = 0 \) for all \( k \in \mathbb{N} \), by properties of the consensus ADMM algorithm \cite[App. A.1]{dual_consensus_large}. Hence, we see that the right-hand side vanishes as \( k \to \infty \), thus implying \( \sum_{i \in \mathcal{N}} Q_i x_i^{\star} = w^{\star}\), which amounts to \eqref{eq:optimality_3}. Therefore, we have established that \( (\boldsymbol{x}^{\star}, y^{\star}, w^{\star}) \) satisfies \eqref{eq:optimality}. To conclude the proof, it suffices to show that \( \boldsymbol{x}^k \to \omega(\boldsymbol{x}^k) \) since every \( \boldsymbol{x}^{\star} \in \omega(\boldsymbol{x}^k) \) is primal optimal. Since \( y_i^k, z_i^k \to y^{\star} \) and \( p_i^k \) and \( s_i^k \) also converge to limits, we have that \( r_i^k \) converges to a limit, say \( r_i^{\star} \), and by definition there exists \( \overline{N} \) finite such that \( k \geq \overline{N} \) implies \( \norm{r_i^k - r_i^{\star}} \leq R \) for all \( i \in \mathcal{N} \). Therefore, by Assumption \ref{ass:minim_contin} and step \ref{step:x_update}, we deduce that \( (\boldsymbol{x}^k)_{k \geq \overline{N}} \) is bounded. As \( \overline{N} \) is finite, \( (\boldsymbol{x}^k)_{k \in \mathbb{N}} \) is also bounded and thus contained in a compact set, which implies that \( \omega(\boldsymbol{x}^k) \) is nonempty and \( \boldsymbol{x}^k \to \omega(\boldsymbol{x}^k) \) by virtue of \cite[App. A.2]{khalil1996nonlinear}, hence concluding the proof. 
\end{proof} \begin{remark} The prior works \cite{dual_consensus_zero} and \cite{dual_consensus_large} ensure primal optimality of any limit point of \( ((x_i^{k})_{i \in \mathcal{N}})_{k \in \mathbb{N}} \) for their algorithms; nonetheless, as mentioned above, limit points are not guaranteed to exist for their algorithms, so the primal variables can diverge to infinity in those cases. To see this, recall that the optimization problem in update rule \eqref{eq:x_update} need not be strongly convex, e.g., when \( Q_i \) has a nontrivial null space, which implies that \( x_i^{k+1} \) can diverge to infinity. Our Assumption \ref{ass:minim_contin}, along with step \ref{step:x_update} of \autoref{alg:dual_conensus_admm}, precludes this behavior and establishes the existence of limit points of \( ((x_i^{k})_{i \in \mathcal{N}})_{k \in \mathbb{N}} \), thus ensuring convergence of the primal variables to the set of primal minimizers. \end{remark} \section{Test Case} \label{sec:test_case} To demonstrate the effectiveness of our proposed control design, we will consider the control of an aggregation of DERs, referred to as a virtual power plant (VPP), including wind turbines (WTs), photovoltaics (PVs) and energy storage devices (ESs), that collectively provide fast frequency regulation to the power grid. \begin{figure*} \caption{Scenario 1} \label{subfig:perfect_dec} \caption{Scenario 2} \label{subfig:imperfect_dec} \caption{Relative Suboptimality} \label{subfig:suboptimality} \caption{(a), (b) Power output of each device \( P_i^k \), as deviation from its setpoint. Dashed lines correspond to power constraints \( |P^k_i| \leq \overline{P}_i \).} \label{fig:powers} \end{figure*} \subsection{Power Grid Model} We adopt a first-order system representation for both the grid frequency and the grid power, of the form \begin{align} \omega^{k+1} & = \omega^k + 0.083(P_{g}^k + P_{vpp}^k + P_{ext}^k) \\ {P}_{g}^{k+1} & = 0.9944~P_{g}^k +0.0015~\omega^k \end{align} where \( \omega^k \) is the frequency of the grid, \( P_{g}^k, P_{vpp}^k \) denote the power of the grid and the VPP, whereas \( P_{ext}^k \) models external power injections or losses and is a step function of size 70. All variables correspond to deviations from setpoints. \subsection{DER Model} For individual DERs, we discretize the continuous-time model in \cite{verena_dvpp}, which results in the following discrete-time LTI description \begin{equation*} \renewcommand*{\arraystretch}{0.9} \begin{aligned} x^{k+1}_i &= \Vector{1 - \frac{hk_{I}}{k_{P}} & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & \frac{\tau_i-h}{\tau_i} } x^k_i + \Vector{0 \\ 0 \\ \frac{h}{\tau_i}} u^k_i + \Vector{\frac{-h k_{I}}{k_{P}^2} \\ \frac{1}{k_{P}} \\ 0} \omega^k \\ P^k_i & = \Vector{0 & 1025 & -143} x^k_i \end{aligned} \end{equation*} where \( k_I = 1700, k_P = 150 \) are constants and \( h = 0.0167 \) is the discretization time step in seconds. The parameter \( \tau_i \) is the time constant of each DER and is equal to 1.3, 0.55 and 0.15 seconds for WTs, PVs and ESs, respectively. For simplicity of exposition, parameters will be shared among DERs of the same type. The power output of each device is \( P_i^k \) and it holds that \( P_{vpp}^k = \sum_{i \in \mathcal{N}} P_i^k. \) Our considered VPP is composed of 3 WTs, 3 PVs and 3 ESs communicating over a randomly-generated connected undirected graph. 
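As a sanity check on the DER model above, the following sketch assembles the given state-space matrices and computes the open-loop power response \( P_i^k \) to a unit frequency step with \( u_i^k = 0 \); the peak of \( |P_i^k| \) over the horizon is the kind of quantity bounded by the constraints of \eqref{eq:abmm}. The parameter values are those given in the text, while the unit step, the horizon and all variable names are illustrative choices only (this is not our simulation setup). Note that the frequency channel is identical across device types; only the actuation channel depends on \( \tau_i \).
\begin{verbatim}
import numpy as np

h, k_I, k_P = 0.0167, 1700.0, 150.0
tau = 1.3                                             # WT time constant

A = np.array([[1.0 - h * k_I / k_P, 0.0, 0.0],
              [1.0,                 0.0, 0.0],
              [0.0, 0.0, (tau - h) / tau]])
B  = np.array([0.0, 0.0, h / tau])                    # control input u_i^k
Bw = np.array([-h * k_I / k_P ** 2, 1.0 / k_P, 0.0])  # frequency input omega^k
C  = np.array([0.0, 1025.0, -143.0])                  # power output P_i^k

x, peak = np.zeros(3), 0.0
for k in range(300):                                  # ~5 s at h = 0.0167 s
    x = A @ x + B * 0.0 + Bw * 1.0                    # u_i^k = 0, unit step
    peak = max(peak, abs(C @ x))
print(f"peak |P_i^k| over 5 s for a unit frequency step: {peak:.2f}")
\end{verbatim}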
We explicitly include communication and computation delays related to our distributed algorithm in our simulations, by assuming that one iteration of \autoref{alg:dual_conensus_admm} is performed every 6 time steps of the controlled system. This means that DER controllers are updated significantly more slowly than the time scale of the faster system dynamics. \subsection{Control Design Parameters} The main component of our controller is the desired aggregate behavior, which we designate as a low-pass filter with time constant 0.1 and gain 53.1, according to grid operator specifications. For \( \mathcal{P}_i \), we use 10 simple poles for each agent; these include the poles of the desired transfer function and of the plant, while the remaining poles are chosen along a spiral inside the unit disk as described in \cite[Section \uppercase\expandafter{\romannumeral 4\relax}]{msls_placeholder}. Engineering limitations on the device level are specified as upper bounds on \( |P^k_i| \), denoted \( \overline{P}_i \), under a worst-case frequency signal which we choose as a step input of size 0.25. We set \( \overline{P}_i \) to 2.1, 2.9 and 27.5 for WTs, PVs and ESs, respectively, which correspond to device-specific percentages of the nominal power \( \hat{P}_{i} \) of each device, where the values for WTs and PVs are small because their nominal operation is close to their maximum power capacity. We additionally constrain ESs to have zero output at steady-state, to avoid impacting their state of charge. Finally, we enforce the contribution of WTs and PVs to the aggregate steady-state power output of the VPP to be proportional to their nominal power, i.e., \( P_i^{\infty}/P_{vpp}^{\infty} = \hat{P}_{i} \), in order to achieve fair power sharing in accordance with grid codes. This specification is encapsulated as a coupling constraint of the form \eqref{eq:abmm_couple} and ensures that the power deviation introduced to compensate \( P_{ext} \) is distributed fairly among devices. \subsection{Simulations} In the following simulations, we utilize \autoref{alg:dual_conensus_admm} in an online fashion in the sense that device controllers are iteratively updated during system operation. For comparison, we also demonstrate the response obtained via a centralized solution of \eqref{eq:abmm}, as well as that of the desired transfer function. We set \( \rho = \sigma = 0.1 \) in \autoref{alg:dual_conensus_admm} for all simulations. In our first scenario, we deploy \autoref{alg:dual_conensus_admm} cold-started, i.e., with \( p_i^0 = y_i^0 = z_i^0 = s_i^0 = 0 \) for all \( i \in \mathcal{N} \), which corresponds to the challenging case of a disturbance occurring while local controllers are being initialized. We consider operating conditions where perfect model matching is feasible. \autoref{subfig:perfect_dec} demonstrates the power output of each device, whereas \autoref{fig:perfect_all} compares the distributed, centralized and desired response for the aggregate power output \( P_{vpp}^k \). Our distributed scheme achieves similar performance to the centralized approach, with small oscillations during the initial transient (see \autoref{fig:perfect_all_detail}). 
\begin{figure} \caption{Scenario 1: Full simulation} \label{fig:perfect_all} \caption{Scenario 1: Initial transient} \label{fig:perfect_all_detail} \caption{Scenario 2: Full simulation} \label{fig:imperfect_all} \caption{Scenario 2: Initial transient} \label{fig:imperfect_all_detail} \caption{Aggregate power response \( P_{vpp} \).} \label{fig:all_responses} \end{figure} In our second scenario, we consider a decrease of nominal power from WTs, e.g., due to a decrease in local wind speeds, thus tightening their power limit constraints to 0.612 and making perfect model matching infeasible. The individual and aggregate device output is shown in \autoref{subfig:imperfect_dec} and \autoref{fig:imperfect_all}, respectively. In this case, we perform 50 iterations of \autoref{alg:dual_conensus_admm} before starting the simulation, which more closely corresponds to the nominal case where local controllers are computed in a distributed fashion before deployment. Performance is significantly improved during the transient, where oscillations are limited (see \autoref{fig:perfect_all_detail} and \autoref{fig:imperfect_all_detail}), and at steady state, where the coupling constraints are satisfied more closely (see \autoref{subfig:imperfect_dec}). This improvement is justified by the rapid convergence of \autoref{alg:dual_conensus_admm}, reaching a relative suboptimality of \( 10^{-5} \) w.r.t.\ the optimal value of \eqref{eq:abmm} within 100 iterations, as shown in \autoref{subfig:suboptimality}, which is why the warm-started Scenario 2 shows improved performance over the cold-started Scenario 1. Moreover, the reduced nominal power of the WTs leads to a decreased contribution to the aggregate steady-state power output, as prescribed by the coupling constraints, while the opposite behavior is observed for PVs. The ESs are more active in Scenario 2 and compensate for the WTs, especially during the initial transient, given that ESs are characterized by the smallest time constant. Further, \autoref{subfig:imperfect_dec} clearly indicates that devices satisfy constraints non-conservatively and are being operated at their limits. Finally, our distributed method yields very similar responses to the centralized solution, even under online deployment, as shown in \autoref{fig:all_responses}. \section{Conclusion} \label{sec:conclusion} In this paper, we proposed a distributed optimal control design of local state feedback controllers for a set of interconnected agents with state and input constraints. Our approach employs SLS with SPA to prescribe a desired aggregate behavior and impose device constraints non-conservatively via a convex optimization problem, without closed-loop finite impulse response or locality constraints. The main challenge addressed was solving SLS with SPA in a distributed manner, for which we developed a dual consensus ADMM algorithm and, under weak assumptions, provided a convergence certificate that guarantees convergence to the set of primal minimizers, unlike similar dual consensus ADMM algorithms where divergence of the primal variables was possible. Additionally, our optimization algorithm extended existing theoretical results on DAO by handling coupling constraints and lifting certain common, yet restrictive, regularity conditions on the objective, e.g., strong convexity and differentiability. We demonstrated the effectiveness of our method on a control design problem for DERs that collectively provide stability services to the power grid. 
Future work includes investigating the convergence rate of the proposed distributed optimization algorithm and developing distributed solutions for SLS with SPA in the case of \( \mathcal{H}_{\infty} \) control synthesis. \end{document}
\begin{document} \title{A Dehn surgery description of regular finite cyclic covering spaces of rational homology spheres} \begin{abstract} We provide related Dehn surgery descriptions for rational homology spheres and a class of their regular finite cyclic covering spaces. As an application, we use the surgery descriptions to relate the Casson invariants of the covering spaces to that of the base space. Finally, we show that this places restrictions on the number of finite and cyclic Dehn fillings of the knot complements in the covering spaces beyond those imposed by Culler-Gordon-Luecke-Shalen and Boyer-Zhang. \end{abstract} \noindent {\em Keywords:} Covering space; Dehn surgery; Casson invariant; Dehn filling \noindent {\em AMS classification:} 57 \section{Introduction} It is well-known that any closed, oriented, connected 3-manifold may be obtained by Dehn surgery on a link in $S^3$. In recent years, 3-manifold theorists have exploited this fact, computing certain 3-manifold invariants by describing how the invariant changes under Dehn surgery. This has been an effective method for computing invariants of individual manifolds. However, until now, there was no known procedure for relating Dehn surgery descriptions of manifolds with those of their covering spaces. Therefore, although invariants for a manifold and a covering space of the manifold could each be computed using Dehn surgery formulas, no general statements could easily be made regarding the relationship between the invariants. Let $(X, \tilde{X})$ be a 3-manifold pair, where $X$ is a rational homology sphere and $\tilde{X}$ is a regular finite cyclic covering space of $X$, say with $\pi_1(X)/\phi_*(\pi_1(\tilde{X})) = {\displaystyle\Bbb Z} / k {\displaystyle\Bbb Z}$, where \mbox{$\phi:\tilde{X}\rightarrow X$} is the projection map. Since this group is abelian, we may factor the quotient map $\pi_1(X)\rightarrow \pi_1(X)/\phi_*(\pi_1(\tilde{X}))$ through the first homology group ${\rm H}_1(X;\displaystyle\Bbb Z)$. \begin{defn} We call the covering $\tilde{X}\rightarrow X$ \mbox{\bf torsion-split} if there exists a homology decomposition ${\rm H}_1(X;{\displaystyle\Bbb Z}) = {\displaystyle\Bbb Z} / kp {\displaystyle\Bbb Z} \oplus H$ (where possibly $H = 0$) satisfying: \begin{itemize} \item[i)] the decomposition is a decomposition of the torsion linking pairing on ${\rm H}_1(X)$: i.e. if $\alpha$ is a generator of the ${\displaystyle\Bbb Z} / kp {\displaystyle\Bbb Z}$-summand and $\beta_1, \beta_2, \ldots, \beta_j$ is a torsion basis for $H$, then $link(\alpha,\alpha) = m/n$ for some $m$ relatively prime to $n$ and $link(\alpha,\beta_i) = 0$ for all $i$ \item[ii)] any generator of the ${\displaystyle\Bbb Z} / kp {\displaystyle\Bbb Z}$-summand maps to a generator in $\pi_1(X)/\phi_*(\pi_1(\tilde{X}))$ under the quotient map $ {\rm H}_1(X) \rightarrow \pi_1(X)/\phi_*(\pi_1(\tilde{X}))$ \item[iii)] $H$ maps to 0 in $\pi_1(X)/\phi_*(\pi_1(\tilde{X}))$. \end{itemize} \end{defn} In this paper, we provide a Dehn surgery description for torsion-split regular $k$-fold cyclic covering space pairs $(X, \tilde{X})$ with base space a rational homology sphere. Specifically, we prove the following: Let $K$ be a knot in $S^3$, and let $L = (L_1,L_2,\ldots,L_n)$ be a link in $S^3$. Assume $K$ bounds a Seifert surface $\Sigma$ with the property that there exist Seifert surfaces $\Sigma_1, \Sigma_2,\ldots,\Sigma_n$ for $L_1,L_2,\ldots,L_n$ disjoint from a neighborhood of $\Sigma$. 
Let $p$, $q$, and $k$ be integers with $1 \leq q$, $1 \leq |p|$, and $k > 1$. Assume $kp$ and $q$ are relatively prime. Let $M$ be the 3-manifold obtained by $kp/q$-Dehn surgery on $K$ in $S^3$ followed by surgery on $L$ with surgery coefficients $I = (i_1,i_2,\ldots,i_n)$, where $i_j = \pm 1$ for each $j$. Note that for each $j = 1,2,\ldots,n$, the component $L_j$ of $L$ has $k$ disjoint lifts in the $k$-fold branched cyclic cover of $S^3$ branched along $K$, since $(K,L_j)$ is a boundary link. Let $\tilde{M}$ be the 3-manifold obtained by $p/q$-Dehn surgery on the lift of $K$ to the $k$-fold branched cyclic cover of $S^3$ branched along $K$, followed by surgery on the $k$ lifts of $L$ with surgery coefficient $i_j$ for every lift of $L_j$. \begin{theorem} $\tilde{M}$ is a regular $k$-fold cyclic covering space of $M$. \end{theorem} Call $(K,L,k,p,q,I)$ a {\em pairwise Dehn surgery description} for $(M,\tilde{M})$. Then we also have \begin{theorem} Let $(X,\tilde{X})$ be a torsion-split regular $k$-fold cyclic covering space pair with base space $X$ a rational homology sphere. Then $(X,\tilde{X})$ has a pairwise Dehn surgery description. \end{theorem} Thus, every torsion-split regular $k$-fold cyclic covering space pair over a rational homology sphere has a pairwise Dehn surgery description. This, then, may be used for computing 3-manifold invariants for the pair. This is a necessary first step toward drawing general conclusions about the relationships between invariants for $X$ and those for $\tilde{X}$. The paper is outlined as follows. In Section 2, we show that the manifolds generated by the pairwise Dehn surgery description $(K,L,k,p,q,I)$ are a regular $k$-fold cyclic covering space pair. In Section 3, we show that all torsion-split regular $k$-fold cyclic covering space pairs over rational homology spheres arise in this way. In Section 4, as an application, we compute Casson invariants for the pair $(X,\tilde{X})$. In Section 5, we discuss the implications of our work for generating examples of finite and cyclic Dehn fillings of 3-manifolds with toral boundary. Specifically, let $K$ be a knot homologous to 0 in a closed, connected, oriented 3-manifold $Y$, and let $X$ be the result of $p/q$-Dehn surgery on $K$ in $Y$. We show that if $\pi_1(X)$ is finite (resp. cyclic), then so too is the fundamental group of the 3-manifold obtained by $r/q$-Dehn surgery on the lift of $K$ to the $p/r$-fold cyclic branched cover of $Y$ branched along $K$, where $r$ is any positive integer dividing $p$. Thus, one example of a finite or cyclic Dehn filling may generate many more such examples. Moreover, this restricts the possible number of finite and cyclic surgery slopes in the cyclic covering spaces of $Y - nbhd(K)$. Finally, in the appendix, we demonstrate some relationships between certain Alexander polynomials which are required in Section 4. We remark that the construction presented here may generalize to pairs $(X,\tilde{X})$ for which $X$ is not a rational homology sphere. This theory, together with applications, will be the topic of a future paper. I am indebted to Andrew Clifford for his help throughout the writing process and to Steve Boyer for his corrections to an early version of this paper. I also thank Nancy Hingston and Andy Nicas for their helpful comments. \section{Generating covering space pairs} Retain all notation from Section 1. Further, denote by $\overline{S}^3_K$ the $k$-fold cyclic covering space of $S^3$ branched over $K$. 
Let $N$ be the closed 3-manifold resulting from $kp/q$-Dehn surgery on $K$ in $S^3$, and let $\tilde{N}$ denote the 3-manifold which is the result of $p/q$-Dehn surgery on the lift of $K$ to $\overline{S}^3_K$. Finally, if $Y$ is any 3-manifold and $K$ is a knot in $Y$, we abuse notation and denote by $Y-K$ the compact 3-manifold formed by removing an open tubular neighborhood of $K$. Before proving Theorem 1.2, we prove the following: \begin{lemma} $\tilde{N}$ is a regular $k$-fold cyclic covering space of $N$. \end{lemma} \noindent {\em Proof of Lemma 2.1:} Let $\overline{K}$ denote the lift of $K$ to $\overline{S}^3_K$, and let $\overline{m}$ denote its meridian. Let $m$ denote the meridian of $K$. Let $l$ denote the preferred longitude of $K$ on the boundary of $S^3 - K$, and let $\overline{l}$ be a longitude in the boundary of $\overline{S}^3_K - \overline{K}$ which lifts $l$. Then $\tilde{N}$ is obtained by gluing a solid torus to $\overline{S}^3_K - \overline{K}$, sending a meridian of the solid torus to the simple closed curve $\overline{m}^p \overline{l}^q$. Similarly, $N$ is obtained by gluing a solid torus to $S^3 - K$, sending a meridian of the solid torus to $m^{kp}l^q$. Let $\phi$ denote the regular $k$-fold cyclic covering map of the knot complements. We wish to show that $\phi$ extends to a regular $k$-fold cyclic (unbranched) covering map $\tilde{N} \rightarrow N$. Note that $\phi$ restricts to a regular $k$-fold cyclic covering map from the boundary of $\overline{S}^3_K - \overline{K}$ to the boundary of $S^3 - K$. Thus, the map from the boundary of the solid torus in $\tilde{N}$ to the boundary of the solid torus in $N$ is a regular $k$-fold cyclic covering map. Now a $k$-fold cyclic covering map from the boundary of one solid torus to the boundary of another solid torus extends to a regular $k$-fold cyclic covering map of the solid tori if and only if the meridian of the initial solid torus is taken to the meridian of the final solid torus. In our case, the meridian of the solid torus in $\tilde{N}$ is $\overline{m}^p\overline{l}^q$, and the meridian of the solid torus in $N$ is $m^{kp}l^q$. But $\phi$ takes $\overline{m}$ to $m^k$ and $\overline{l}$ to $l$. Hence, $\phi(\overline{m}^p\overline{l}^q) = m^{kp}l^q$. It follows that $\phi$ takes a meridian of the solid torus in $\tilde{N}$ to a meridian of the solid torus in $N$, and therefore $\phi$ extends to a covering map $\tilde{N}\rightarrow N$. $\Box$ We now prove Theorem 1.2. \noindent {\em Proof of Theorem 1.2:} We know that $\tilde{N}$ is a regular $k$-fold cyclic covering space of $N$ by Lemma 2.1. Let $\phi$ denote the covering map. We abuse notation and denote the image of $L_j$ in $N$ by $L_j$ for $j = 1,2,\ldots,n$. Let $\overline{L}_j$ denote the inverse image under $\phi$ of $L_j$ for each $j$. Since $(K,L_j)$ is a boundary link in $S^3$ for each $j$, we see that $\overline{L}_j$ consists of $k$ disjoint simple closed curves in $\tilde{N}$. Choose a Seifert surface $\Sigma$ for $K$ and Seifert surfaces $\Sigma_j$ for $L_j$ in $S^3$ such that $\Sigma \bigcap \Sigma_j = \emptyset$. Then the image of $\Sigma_j$ in $N$ is a Seifert surface for $L_j$ in $N$, and it lifts to $k$ Seifert surfaces $\overline{\Sigma}_{j,1},\overline{\Sigma}_{j,2}, \ldots, \overline{\Sigma}_{j,k}$ for the $k$ lifts $\overline{L}_{j,1},\overline{L}_{j,2},\ldots, \overline{L}_{j,k}$ of $L_j$ in $\overline{L}_j$. 
Recall that $\overline{S}^3_K$ may be explicitly constructed according to the following outline: let $Y^0 = \Sigma \times (-1,1)$ be an open bicollar of $\Sigma$, and let $Y = Y^0/ (K\times (-1,1) \sim K)$. Let $Y^-$ be the manifold obtained by removing $K$ from $Y$. Glue $k$ copies of $S^3 - \Sigma$ together along $k$ copies of $Y^-$, alternating copies of $S^3 - \Sigma$ with copies of $Y^-$. Finally, glue $K$ back in to compactify. For a precise description of this construction, see [R, pp 128 - 131 and pp 297-298]. Now since $(K,L_j)$ is a boundary link in $S^3$, we see that the $k$ lifts of $\Sigma_j$ to $\overline{S}^3_K$ are contained in the $k$ disjoint copies of $S^3 - Y$. It follows that these lifts of $\Sigma_j$ are disjoint. Then clearly the Seifert surfaces $\overline{\Sigma}_{j,1}, \overline{\Sigma}_{j,2}, \ldots,\overline{\Sigma}_{j,k}$ in $\tilde{N}$ are also disjoint. Now the Dehn surgery on $L_j$ may be carried out in a neighborhood $Z_j$ of $\Sigma_j$. Moreover, if we take $Z_j$ to be sufficiently small, then the inverse image $\phi^{-1}(Z_j)$ consists of $k$ disjoint copies $\overline{Z}_{j,i}$ of $Z_j$ which are neighborhoods of the $\overline{\Sigma}_{j,i}$. Clearly the Dehn surgery on $L_j$ in $Z_j$ induces an identical Dehn surgery on $\overline{L}_{j,i}$ in $\overline{Z}_{j,i}.$ Therefore every point of $M$ has $k$ distinct inverse images in $\tilde{M}$, each with a neighborhood which is carried homeomorphically to a neighborhood of the point in $M$. Thus, $\tilde{M}$ is a $k$-fold covering space of $M$. Finally, note that for each $j$, the automorphism group ${\displaystyle\Bbb Z}/ k {\displaystyle\Bbb Z}$ of $\phi:\tilde{N} \rightarrow N$ cyclically permutes the $k$ disjoint copies of the Seifert surface for $L_j$ in $\tilde{N}$. Clearly, then, the automorphism group of the covering space $\tilde{M}\rightarrow M$ is also ${\displaystyle\Bbb Z} / k {\displaystyle\Bbb Z}$, and the images of the $k$ lifts of $\overline{L}_{j,i}$ after Dehn surgery are permuted by the automorphism group. Since the covering space $\tilde{N}\rightarrow N$ was regular, it follows that $\tilde{M}\rightarrow M$ is regular. $\Box$ We remark that the covering $\tilde{M} \rightarrow M$ is clearly torsion-split by construction. \section{Completeness of the construction} In this section, we show that the construction described in Section 1 is complete in that it generates all torsion-split regular $k$-fold cyclic covering space pairs $(X,\tilde{X})$ over rational homology spheres. We prove: \noindent {\bf Theorem 1.3} {\em Let $(X,\tilde{X})$ be a torsion-split regular $k$-fold cyclic covering space pair with base space $X$ a rational homology sphere. Then $(X,\tilde{X})$ has a pairwise Dehn surgery description.} We first prove the following lemma, which is a straightforward generalization of a lemma of S. Boyer and D. Lines [BL]. \begin{lemma} Let $W$ be a rational homology sphere with ${\rm H}_1(W;{\displaystyle\Bbb Z}) = {\displaystyle\Bbb Z} / n {\displaystyle\Bbb Z} \oplus H$ for some finite abelian group $H$. Assume further that the homology decomposition arises as a decomposition of the torsion linking pairing on ${\rm H}_1(W)$. Then there is a 3-manifold $V$ with ${\rm H}_1(V;{\displaystyle\Bbb Z}) = H$, a knot ${\cal K}$ homologous to 0 in $V$, and an integer $m$ such that $W$ is the result of $n/m$-Dehn surgery on ${\cal K}$ in $V$. 
\end{lemma} \noindent {\em Proof of Lemma 3.1:} Let $\alpha$ be a generator of the ${\displaystyle\Bbb Z} / n {\displaystyle\Bbb Z}$ summand of ${\rm H}_1(W;\displaystyle\Bbb Z)$. Represent $\alpha$ by a curve $C$ in $W$. Note that the torsion subgroup of ${\rm H}_1(W - C;\displaystyle\Bbb Z)$ is just $H$. To see this, note that by assumption, $link(\alpha,\alpha)= t/n$ for some integer $t$ relatively prime to $n$, where $link(\mbox{\_},\mbox{\_})$ is the torsion linking pairing on ${\rm H}_1(W)$. Therefore $C$ intersects the surface with boundary $nC$, and $nC$ does not bound in $W - C$. On the other hand, there exist generators $\beta_i$ of $H$ such that $link(\alpha,\beta_i) = 0$. It follows that $\beta_i$ has the same finite order in ${\rm H}_1(W - C)$ as in ${\rm H}_1(W)$. Let $T(C)$ be a tubular neighborhood of $C$. Then ${\rm H}_2(W, W - T(C);\displaystyle\Bbb Z)$ is infinite cyclic with generator a meridional disk of $T(C)$. From the exact sequence \[0\rightarrow {\rm H}_2(W ,W - T(C);{\displaystyle\Bbb Z}) \rightarrow {\rm H}_1(W - T(C);{\displaystyle\Bbb Z}) \rightarrow {\rm H}_1(W;{\displaystyle\Bbb Z}) \rightarrow 0\] it follows that ${\rm H}_1(W - T(C);{\displaystyle\Bbb Z}) \cong {\displaystyle\Bbb Z} \oplus H$. Let $C'$ be a simple closed curve on $\partial T(C)$ generating the infinite cyclic summand of ${\rm H}_1(W - T(C);\displaystyle\Bbb Z)$. Attach a solid torus to $W - T(C)$ sending the meridian to $C'$; call the resulting manifold $V$. Let ${\cal K}$ denote the core of the surgery torus. Then ${\rm H}_1(V;{\displaystyle\Bbb Z}) \cong H$, and $W$ is the result of $n/m$-Dehn surgery on ${\cal K}$ for some integer $m$. Moreover, ${\cal K}$ is homologous to 0 in $V$, since all curves on $\partial T(C)$ represent 0 in $H$. $\Box$ \noindent We now prove the theorem. \noindent {\em Proof of Theorem 1.3:} Let $\tilde{X}\rightarrow X$ be a torsion-split regular $k$-fold cyclic covering space with $X$ a rational homology sphere. Let ${\rm H}_1(X;{\displaystyle\Bbb Z}) = {\displaystyle\Bbb Z} / kp {\displaystyle\Bbb Z} \oplus H$ be a homology decomposition for $X$ obeying properties {\em i) - iii)} of Definition 1.1. We may apply Lemma 3.1 to find a 3-manifold $V$ with ${\rm H}_1(V;{\displaystyle\Bbb Z}) = H$, a knot ${\cal K}$ homologous to 0 in $V$, and an integer $q$ such that $X$ is the result of $kp/q$-Dehn surgery on ${\cal K}$ in $V$. Let $\Sigma$ be a Seifert surface for ${\cal K}$ in $V$. It is well-known that there exists a link ${\cal L}=({\cal L}_1,{\cal L}_2,\ldots,{\cal L}_n)$ in $V$ such that $S^3$ is the result of surgery on ${\cal L}$. Let $L = (L_1,L_2,\ldots,L_n)$ be the image of ${\cal L}$ in $S^3$. Then $V$ may be obtained from $S^3$ by surgeries on the components of $L$. Moreover, we may choose ${\cal L}$ in such a way that the surgery coefficients for the components of $L$ are all $\pm 1$. Choose $\alpha_1,\alpha_2,\ldots,\alpha_{2g}$ a collection of simple closed curves on $\Sigma$ representing a homology basis for $\Sigma$. Isotope ${\cal L}$ without changing any crossings of ${\cal L}$ so that the linking number of ${\cal L}_i$ and $\alpha_j$ is 0 for any $i = 1,2,\ldots,n$ and any $j = 1,2,\ldots,2g$ and so that ${\cal L}_i \cap \Sigma = \emptyset$. We continue to denote by ${\cal L}$ and $L$ the images of ${\cal L}$ and $L$ under the isotopy. Let $K$ be the image of ${\cal K}$ in $S^3$ after surgery on ${\cal L}$. Abusing notation yet again, we denote by $\Sigma$ and $\alpha_j$ the images of $\Sigma$ and $\alpha_j$ in $S^3$. 
Note that the linking number of $L_i$ and $\alpha_j$ is 0 for any $i = 1,2,\ldots,n$ and any $j = 1,2,\ldots,2g$ by Lemma A.5. Furthermore $L_i$ does not meet $\Sigma$. We require the following \begin{lemma} Let $C$ be a knot in $S^3$, and let $S$ be a Seifert surface for $C$. Let $x_1,x_2,\ldots,x_{2g}$ be a collection of curves on $S$ representing a basis for ${\rm H}_1(S;{\displaystyle\Bbb Z})$. Let $D$ be a knot in $S^3$ such that $D \cap S = \emptyset$ and $lk(x_i,D)=0$ for $i = 1,2,\ldots,2g$. Then $D$ bounds a Seifert surface disjoint from $S$. \end{lemma} \noindent {\em Proof of lemma:} Let $S'$ be a Seifert surface for $D$ meeting $S$ transversely. If $S' \cap S = \emptyset$, we are done. Otherwise, $S' \cap S$ is a collection $y_1,y_2,\ldots,y_m$ of oriented simple closed curves. Now the homology class represented by $y_1+y_2+\ldots +y_m$ in ${\rm H}_1(S;{\displaystyle\Bbb Z})$ must be 0. For if this class were non-zero, then there would be a curve $x_i$ on $S$ whose oriented intersection number with $y_1+y_2+\ldots +y_m$ was non-zero. Then $x_i$ would have non-zero intersection number with $S'$ and hence would have non-zero linking number with $D$. Now remove from $S'$ the components of $S' - S$ which do not meet $D$. The resulting surface $S''$ has boundary $y_1 \cup y_2 \cup \ldots \cup y_m \cup D$. Since the sum of the $y_i$'s represents 0 in ${\rm H}_1(S;{\displaystyle\Bbb Z})$, the curves $y_i$ cobound a collection of subsurfaces of $S$. Then we may glue a collection of parallel copies of these surfaces to the appropriate boundary components of $S''$ to form a new two-sided surface $S'''$ with boundary $D$. Pushing the parallel copies of subsurfaces of $S$ apart and away from $S$, we will find that $S'''$ is embedded and disjoint from $S$. $\Box$ Now returning to the proof of the theorem: Applying the lemma to each of the link components $L_i$, we may find Seifert surfaces $\Sigma_1,\Sigma_2,\ldots,\Sigma_n$ for $L_1,L_2,\ldots,L_n$, respectively, such that $\Sigma \cap \Sigma_i = \emptyset$ for $i =1,2,\ldots,n$. We show \noindent {\em Claim:} $(K,L,k,p,q,I)$ is a pairwise Dehn surgery description for $(X,\tilde{X})$. \noindent {\em Proof of Claim:} It is clear from the construction that $(K,L,k,p,q,I)$ is a pairwise Dehn surgery description for some pair $(X,X')$ with base space $X$. Let $\psi: X' \rightarrow X$ be the projection map. Then $\psi_*(\pi_1(X'))$ is the kernel of the homomorphism \[\pi_1(X)\rightarrow {\rm H}_1(X;{\displaystyle\Bbb Z}) \rightarrow {\displaystyle\Bbb Z} / k {\displaystyle\Bbb Z},\] where the latter homomorphism is the projection \[{\rm H}_1(X;{\displaystyle\Bbb Z}) \rightarrow {\rm H}_1(X;{\displaystyle\Bbb Z}) / H = {\displaystyle\Bbb Z} / kp {\displaystyle\Bbb Z} \rightarrow {\displaystyle\Bbb Z} / k {\displaystyle\Bbb Z}.\] But this kernel is precisely $\phi_*(\pi_1(\tilde{X}))$. Hence $X' = \tilde{X}$. $\Box$ \section{ Casson-Walker invariants for pairs} In 1985, Andrew Casson defined an invariant $\lambda$ for integral homology 3-spheres. Roughly, this invariant counts the signed equivalence classes of SU(2)-representations of the fundamental group of the 3-manifold. This invariant was extended to an invariant for oriented rational homology 3-spheres by Kevin Walker in [W], and Christine Lescop derived a combinatorial formula extending the invariant to arbitrary closed, oriented 3-manifolds in [L]. A number of mathematicians have explored the Casson-Walker invariant for branched covers of links in $S^3$. 
David Mullins computed the invariant for 2-fold branched covers in the case when the 2-fold branched cover is a rational homology sphere in [M]. More general results for $k >2$ may be found in [GR]. The invariants for $n$-fold branched covers $\overline{S}^3_K$ of particular families of knots have been computed by J. Hoste, A. Davidow, and K. Ishibe in [H], [D], and [I]. Garoufalidis generalizes several of these formulas in [G]. We study the Casson-Walker invariants of pairs $(X,\tilde{X})$. As in the early sections of the paper, we assume $\tilde{X} \rightarrow X$ is a torsion-split regular $k$-fold cyclic covering over a rational homology sphere. In this section, we assume further that $\tilde{X}$ is a rational homology sphere and that $(X,\tilde X)$ has a Dehn surgery description $(K,L,k,p,q,I)$ with $\overline S^3_K$ a rational homology sphere. In what follows, for any pair of non-zero integers $x$ and $y$ which are relatively prime, let $s(x,y)$ denote the Dedekind sum defined by \[s(x,y) = sign (y)\sum_{j = 1}^{|x|}((j/y))((jx/y))\] where \[((z)) = \left\{ \begin{array}{ll} 0 & z \in {\displaystyle\Bbb Z}\\ z - [z] - 1/2 & \mbox{else} \end{array} \right. \] For a knot $C$ in a rational homology sphere, let $\Delta_C$ denote the Alexander polynomial of $C$, normalized so that it is symmetric in $t^{1/2}$ and $t^{-1/2}$ and so that $\Delta_C(1)= 1$. We show \begin{theorem} Let $(K,L,k,p,q,I)$ be a pairwise Dehn surgery description for $(X,\tilde{X})$. Then \[\lambda(\tilde{X}) = k \lambda(X) + q/p (\Delta^{''}_{\overline{K}}(1) - \Delta_K^{''}(1)) - k s(q,kp) + s(q,p) + \lambda(\overline{S}^3_K).\] Here, $\overline{S}^3_K$ denotes the $k$-fold branched cyclic cover of $S^3$ branched along $K$ and $\overline{K}$ denotes the lift of $K$ to $\overline{S}^3_K$, as above. \end{theorem} Note that $\Delta_{\overline{K}}$ can be computed from $\Delta_K$. This relationship is described in the appendix. We remark further that in the case $k=2$, the invariant $\lambda(\overline{S}^3_K)$ can be computed whenever $\tilde{X}$ is a rational homology sphere using the work of Mullins. For $k > 2$, the results of Garoufalidis and Rozansky apply. For certain families of knots, the invariant $\lambda(\overline{S}^3_K)$ can be computed for any value of $k$ using the results of Hoste, Davidow, Ishibe, and Garoufalidis. {\em Proof of Theorem 4.1:} Retain all notation from previous sections. We begin by noting that \begin{equation} \lambda(N) = (q/kp) \Delta^{''}_K (1) + s(q,kp) \end{equation} and \begin{equation} \lambda(\tilde{N}) = \lambda(\overline{S}^3_K) + (q/p) \Delta^{''}_{\overline{K}}(1) + s(q,p) \end{equation} by Proposition 6.2 of [W], since $K$ is a knot in $S^3$ and since $\overline{K}$ is homologous to 0 in $\overline{S}^3_K$. Now $X$ is obtained from $N$ by $I$-surgery on $L$. We know that $\lambda(X) - \lambda(N)$ may be obtained using the surgery formulae developed by Walker in [W]. These formulae depend on the coefficients $I$, as well as the link $L$ and the Alexander polynomials of the components of $L$. Similarly $\tilde{X}$ is obtained from $\tilde{N}$ by $I$-surgery on each lift $\overline{L}_j$ of $L$ in $\tilde{N}$. Now by Proposition A.7 in the appendix, the Alexander polynomial of each component of $\overline{L}$ is equal to that of the corresponding component of $L$. 
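As an aside to the statement of Theorem 4.1, the Dedekind sums entering the formula are elementary to evaluate. The following minimal sketch (ours, not part of the argument; the function names are our own) implements exactly the convention displayed above, namely $s(x,y)= sign(y)\sum_{j=1}^{|x|}((j/y))((jx/y))$ with the sawtooth function $((z))$.

\begin{verbatim}
# Minimal sketch (not from the paper): the Dedekind sum s(x,y) in the
# convention displayed above, computed exactly with rational arithmetic.
from fractions import Fraction

def sawtooth(z):                 # ((z)) = 0 if z is an integer,
    z = Fraction(z)              #         z - [z] - 1/2 otherwise
    if z.denominator == 1:
        return Fraction(0)
    return z - (z.numerator // z.denominator) - Fraction(1, 2)

def dedekind_sum(x, y):          # s(x,y) = sign(y) sum_{j=1}^{|x|} ((j/y))((jx/y))
    sign_y = 1 if y > 0 else -1
    return sign_y * sum(sawtooth(Fraction(j, y)) * sawtooth(Fraction(j * x, y))
                        for j in range(1, abs(x) + 1))

print(dedekind_sum(3, 5))        # example evaluation
\end{verbatim}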
Therefore, since a neighborhood of each component $\overline{L}_{j,i}$ of $\overline{L}_j$ in $\tilde{N}$ is carried homeomorphically onto a neighborhood of $L_j$ in $N$ and the coefficients $I$ correspond, it is clear that for any $j$, the Casson-Walker invariant of the manifold $\tilde{N}_j$ obtained by doing $I$-surgery on $\overline{L}_j$ in $\tilde{N}$ obeys the formula $\lambda(\tilde{N}_j) - \lambda(\tilde{N}) = \lambda(X) - \lambda(N)$. But the lifts $\overline{\Sigma}_{j,i}$ of $\Sigma_j$ are contained in disjoint copies of $S^3 - Y$ in $\tilde{N}$, as shown in the proof of Theorem 1.2, so $\overline{\Sigma}_{j,i} \cap \overline{\Sigma}_{l,m} = \emptyset$ if $i \neq m$. Then also $\lambda(\tilde{N}_{j,l}) - \lambda(\tilde N_j) = \lambda(X)-\lambda(N)$, where $\tilde N_{j,l}$ is the manifold obtained by doing $I$-surgery on the image of $\overline{L}_l$ in $\tilde N_j$, for $l \neq j$. It follows that \begin{equation} \lambda(\tilde{X})-\lambda(\tilde{N}) = k (\lambda(X) - \lambda(N)). \end{equation} The theorem follows after combining equations (1) - (3): by (3), $\lambda(\tilde{X}) = \lambda(\tilde{N}) + k[\lambda(X) - \lambda(N)]$, and substituting (1) and (2) for $\lambda(N)$ and $\lambda(\tilde{N})$ yields the formula of the theorem. $\Box$ We remark that analogous results for the generalizations of $\lambda$ counting representations in SO(3), U(2), Spin(4), and SO(4) may be immediately obtained using the results of [C]. \section{Cyclic and finite Dehn surgeries} In recent years, many new and exciting results have come to light concerning Dehn fillings of 3-manifolds with toral boundary. Among these are the Cyclic Surgery Theorem of Culler, Gordon, Luecke, and Shalen [CGLS] and the work of Boyer and Zhang concerning finite fillings [BZ1] and [BZ2]. We review some basic definitions. \begin{defn} Let $K$ be a knot homologous to 0 in an oriented 3-manifold $Y$. A \mbox{\bf slope} of $K$ is the unoriented isotopy class of a non-trivial simple closed curve in $\partial(Y-K)$. The \mbox{\bf distance} between two slopes is their geometric intersection number. \end{defn} Recall that for any knot homologous to 0 in an oriented 3-manifold, the set of slopes of the knot is canonically identified with ${\overline{\displaystyle\Bbb Q}}$. Thus, we denote by $p/q$ the slope which is the isotopy class of a curve which is $p$ times a meridian plus $q$ times a longitude. \begin{defn} With $K$ and $Y$ as above, suppose that $p/q$ is a slope of $K$ such that the manifold $X$ which is the result of $p/q$ Dehn surgery on $K$ in $Y$ has finite fundamental group. Then we call $p/q$ a \mbox{\bf finite surgery slope} of $K$. Similarly, if $X$ has cyclic fundamental group, we call $p/q$ a \mbox{\bf cyclic surgery slope} of $K$. \end{defn} The results of Culler, Gordon, Luecke, and Shalen and of Boyer and Zhang state that for most irreducible knot complements $Y-K$ as above, there are at most 3 cyclic surgery slopes of $K$ and at most 6 finite surgery slopes. Moreover, these slopes are at distance at most 1 from one another in the cyclic case and at most 5 in the finite case. A detailed survey of work in this area may be found in [B]. Here, we note that given a knot $K$ homologous to 0 in a closed, oriented 3-manifold $Y$ and a finite or cyclic surgery slope of $K$, our work leads to explicit examples of more such fillings. Specifically, let $\overline{Y}_K$ denote the $k$-fold branched cyclic cover of $Y$ branched along $K$, and let $\overline{K}$ denote the lift of $K$ to $\overline{Y}_K$. We show \begin{theorem} Let $K$ be a knot homologous to 0 in a closed oriented 3-manifold $Y$. Let $k$, $p$, and $q$ be integers with $k > 1$, $|p| \geq 1$, and $q \geq 1$. 
Then $kp/q$ is a finite surgery slope of $K$ if and only if $p/q$ is a finite surgery slope of $\overline{K}$ in $\overline{Y}_K$. Moreover if $kp/q$ is a cyclic surgery slope of $K$, then $p/q$ is a cyclic surgery slope of $\overline{K}$. Finally, if $p/q$ is a cyclic surgery slope of $\overline{K}$ and $p \neq 1$, then $kp/q$ is a cyclic surgery slope of $K$. \end{theorem} \noindent {\em Proof:} Let $X$ denote the manifold resulting from $kp/q$-Dehn surgery on $K$, and let $\tilde{X}$ denote the result of $p/q$-Dehn surgery on $\overline{K}$. As in the proof of Theorem 1.3, we may find a knot $C$ and a link $L = (L_1,L_2,\ldots,L_n)$ in $S^3$ satisfying \begin{itemize} \item there exist Seifert surfaces $\Sigma$ and $\Sigma_1, \Sigma_2,\ldots,\Sigma_n$ for $C$ and $L_1,L_2,\ldots,L_n$, respectively, with $\Sigma\cap\Sigma_j = \emptyset$ for $j=1,2,\ldots,n$ \item $Y$ is the result of $I$-surgery on $L$ for some $I=(i_1,i_2,\ldots,i_n)$ with $i_j = \pm 1$ \item $K$ is the image of $C$ in $Y$. \end{itemize} (Note that none of these steps requires $Y$ to be a rational homology sphere.) Now $(C,L,k,p,q,I)$ is a Dehn surgery description for some pair $(Z,\tilde Z)$. But since $(C,L_i)$ is a boundary link for $i=1,2,\ldots,n$, it is clear that $kp/q$-surgery on $C$ followed by $I$-surgery on the image of $L$ yields the same manifold as $I$-surgery on $L$ followed by $kp/q$-surgery on the image of $C$. Hence $X \cong Z$. Also, since $(C,L_i)$ is a boundary link, we see that $\overline Y_K$ may be obtained by $I$-surgery on the $k$ lifts of $L$ to $\overline S^3_C$. Moreover $p/q$-surgery on $\overline C$ in $\overline S^3_C$ followed by $I$-surgery on the images of each of the $k$ lifts of $L$ to $\overline S^3_C$ yields the same manifold as $I$-surgery on each of the lifts of $L$ to $\overline S^3_C$ followed by $p/q$-surgery on the image $\overline K$ of $\overline C$ in $\overline Y_K$. Hence $\tilde X \cong \tilde Z$. Now it follows from Theorem 1.2 that $\tilde{X}$ is a regular $k$-fold cyclic covering space of $X$. Therefore the fundamental group of $\tilde{X}$ is an index $k$ subgroup of that of $X$. Then clearly $\pi_1(X)$ is finite if and only if $\pi_1(\tilde{X})$ is finite, and $\pi_1(\tilde{X})$ is cyclic if $\pi_1(X)$ is cyclic. Finally, if $\pi_1(\tilde{X})$ is cyclic and $p \neq 1$, so $\pi_1(\tilde{X}) = {\displaystyle\Bbb Z}/p {\displaystyle\Bbb Z}$, then $\pi_1(X) = {\rm H}_1(X) = {\displaystyle\Bbb Z}/ k p {\displaystyle\Bbb Z}$, since ${\rm H}_1(X; {\displaystyle\Bbb Z})$ has a ${\displaystyle\Bbb Z}/ k p {\displaystyle\Bbb Z}$ summand and $\pi_1(\tilde{X})$ is index $k$ in $\pi_1(X)$. This proves the theorem. $\Box$ In light of [CGLS], [BZ1], and [BZ2], then, finding a finite or cyclic surgery slope $p/q$ for some knot $K$ homologous to 0 in an oriented 3-manifold $Y$ severely restricts the set of possible finite and cyclic surgery slopes not only of $K$ itself, but also of lifts of $K$ to branched covers of $Y$ branched along $K$ of all orders dividing $p$. For example, it is a result of Fintushel and Stern [FS] that 18- and 19-surgeries on the $(-2,3,7)$ pretzel knot yield lens spaces. It follows that 1-surgery on the lift of the $(-2,3,7)$ pretzel knot to either the 18- or 19-fold branched cover of $S^3$ branched along the pretzel knot yields $S^3$. Further, we find that $p$-surgery on the lift of the knot to the $(18/p)$-fold branched cover of $S^3$ branched along the knot also yields a lens space for $p = 2$, 3, 6, or 9. 
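For the reader's convenience, the divisor bookkeeping behind the last two statements (our summary, not an additional result) is simply the list of ways of writing the slope $18/1$ as $kp/q$ with $q=1$ and $k>1$: \[ 18 = kp, \qquad (k,p) \in \{(18,1),\, (9,2),\, (6,3),\, (3,6),\, (2,9)\}. \] Applying Theorem 5.3 to the cyclic surgery slope $18/1$ of the pretzel knot gives the cyclic surgery slope $p/1$ for its lift in the corresponding $k$-fold branched cover; the case $p=1$, $k=18$ is the $S^3$ statement above, and the cases $p = 2, 3, 6, 9$ are the lens space surgeries just quoted.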
Finally, we note that the results of [CGLS], [BZ1], and [BZ2] may be strengthened for manifolds of the form $\overline{Y}_K - \overline{K}$. We have the following corollary, where $Y$, $K$, $\overline{Y}_K$, and $\overline{K}$ are defined as above: \begin{cor} Suppose $Y - K$ is irreducible and is not a Seifert fibered space. Then $\overline{K}$ has at most one cyclic surgery slope $p/q$ with $p \neq 1$. If also $Y - K$ is not a cable on the twisted $I$-bundle over the Klein bottle, then the distance between any two finite surgery slopes of $\overline{K}$ is at most $5/k$. If $Y - K$ is also hyperbolic, then the distance between a cyclic surgery slope $p/q$ with $p \neq 1$ and any finite surgery slope is at most $2/k$. Moreover the distance between any two finite surgery slopes is at most $3/k$. \end{cor} \noindent{\em Proof:} By Theorem 5.3, for any $p$ and $q$ with $|p|\geq 1$ and $q\geq 1$, we know that $p/q$ is a finite surgery slope of $\overline{K}$ if and only if $kp/q$ is a finite surgery slope of $K$ and that $kp/q$ is a cyclic surgery slope of $K$ if $p/q$ is a cyclic surgery slope of $\overline{K}$ with $p \neq 1$. The distance between $kp'/q'$ and $kp/q$ is $k$ times the distance between $p'/q'$ and $p/q$. The assertions follow from the restrictions on the distance between cyclic (resp. finite) surgery slopes $kp/q$ for $K$ imposed by [CGLS], [BZ1], and [BZ2]. $\Box$ \appendix \section{Alexander polynomials in $\overline S^3_K$} In this section, we relate the Alexander polynomials $\Delta_{\overline K}$ and $\Delta_{\overline L_{j,i}}$ in $\overline S^3_K$ to $\Delta_K$ and $\Delta_{L_j}$, respectively. These results are used in Section 4. We begin by recalling the definition of the Alexander polynomial of a knot in a rational homology sphere. Details can be found in [M]. Let $C$ be a knot in a rational homology sphere, and denote by $Z$ the complement of a tubular neighborhood of $C$. Further, denote by $\tilde Z$ the infinite cyclic cover of $Z$ determined by $\pi_1(Z) \rightarrow {\rm H}_1(Z;{\displaystyle\Bbb Z})/torsion$. Then ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ is a module over ${\displaystyle\Bbb Q}[{\displaystyle\Bbb Z}]$. Now ${\displaystyle\Bbb Q}[{\displaystyle\Bbb Z}]$ is a principal ideal domain, so \[{\rm H}_1(\tilde Z;{\displaystyle\Bbb Q}) \cong {\displaystyle\Bbb Q}[{\displaystyle\Bbb Z}]/(p_1) \oplus {\displaystyle\Bbb Q}[{\displaystyle\Bbb Z}]/(p_2) \oplus \ldots \oplus {\displaystyle\Bbb Q}[{\displaystyle\Bbb Z}]/(p_r). \] We define the {\em order} of ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ to be the product ideal $(p_1p_2\ldots p_r)$. \begin{defn} An \mbox{\bf Alexander polynomial} $\Delta_C(t)$ of $C$ is any polynomial generating the order of ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$. We also call $\Delta_C(t)$ an \mbox{\bf Alexander polynomial} of the ${\displaystyle\Bbb Q}[{\displaystyle\Bbb Z}]$-module ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$. \end{defn} This is well-defined up to multiplication by polynomials of the form $c t^k$, where $c \in {\displaystyle\Bbb Q}$, where $t$ generates the deck transformations of the covering $\tilde{Z} \rightarrow Z$, and where $k \in {\displaystyle \Bbb Z}$. Henceforth we write $p(t) \equiv q(t)$ if $p(t)$ and $q(t)$ are polynomials with $p(t) = ct^kq(t)$ for some $c \in {\displaystyle\Bbb Q}$ and some $k \in {\displaystyle\Bbb Z}$. We first relate $\Delta_{\overline K}$ to $\Delta_K$. We prove the following theorem, which was pointed out to me by Steve Boyer. The proof offered is that of Boyer. 
\begin{prop} (Boyer) \[\Delta_{\overline K}(t^k) \equiv \prod_{j=0}^{k-1} \Delta_K (\zeta^j t)\] where $\zeta$ is a primitive $k$th root of unity. \end{prop} \noindent {\em Proof (Boyer):} Let $A$ be the matrix of multiplication by $t$ in ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ with respect to the ${\displaystyle\Bbb Q}$-vector space structure. We first show \begin{lemma} (Boyer) $\Delta_K(t) \equiv |A - tI|$ \end{lemma} \noindent {\em Proof of lemma (Boyer):} Since ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ decomposes as \[{\displaystyle\Bbb Q}[t,t^{-1}]/p_1(t) \oplus {\displaystyle\Bbb Q}[t,t^{-1}]/p_2(t) \oplus \ldots \oplus {\displaystyle\Bbb Q}[t,t^{-1}]/p_r(t),\] we see that $A = \oplus_{j=1}^r A_j$, where $A_j$ is the matrix of multiplication by $t$ in ${\displaystyle\Bbb Q}[t,t^{-1}]/p_j(t)$. Thus it suffices to show that \[|A_j - tI| \equiv p_j(t).\] But writing \mbox{$p_j(t) = b_0 + b_1 t + b_2 t^2 + \ldots + b_{s-1} t^{s-1} + t^s$}, we see that ${\displaystyle\Bbb Q}[t,t^{-1}]/p_j(t)$ has basis $\{1,t,t^2,\ldots,t^{s-1}\}$. Then \[ A_j = \left [ \begin{array}{lllll} 0 & 0 & \ldots & 0 & -b_0\\ 1 & 0 & \ldots & 0 & -b_1\\ 0 & 1 & \ldots & 0 & -b_2\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ 0 & 0 & \ldots & 1 & -b_{s-1} \end{array} \right ] \] It follows that $|A_j - tI| = (-1)^s p_j(t)$. $\Box$ Now define a new ${\displaystyle\Bbb Q}[t,t^{-1}]$-module structure on ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ with multiplication \mbox{$t\ast m = t^k m$}. Clearly ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ with the new multiplication is again a finitely generated, torsion ${\displaystyle\Bbb Q}[t,t^{-1}]$-module and hence a finite dimensional ${\displaystyle\Bbb Q}$-vector space. We show \begin{lemma}(Boyer) Let $\Delta_k(t)$ denote an Alexander polynomial for ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ with the new ${\displaystyle\Bbb Q}[t,t^{-1}]$-module structure. Then \[ \Delta_k(t^k) \equiv \prod_{j=0}^{k-1} \Delta(\zeta^j t)\] \end{lemma} \noindent{\em Proof of lemma (Boyer):} If $A$ is the matrix of multiplication by $t$ in ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ with the original ${\displaystyle\Bbb Q}[t,t^{-1}]$-module structure, then $A^k$ is the matrix of multiplication by $t$ in ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ with the new ${\displaystyle\Bbb Q}[t,t^{-1}]$-module structure. Therefore by the previous lemma, \begin{eqnarray} \Delta_k(t^k) & \equiv & |A^k - t^k I| \\ & = & \prod_{j=0}^{k-1} |A - \zeta^j t I|\\ & = & \prod_{j=0}^{k-1} \Delta(\zeta^j t) \end{eqnarray} This proves the lemma. $\Box$ But now note that an Alexander polynomial $\Delta_{\overline K}(t)$ is an Alexander polynomial of ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ with the second ${\displaystyle\Bbb Q}[t,t^{-1}]$-module structure, while $\Delta_K(t)$ is an Alexander polynomial of ${\rm H}_1(\tilde Z;{\displaystyle\Bbb Q})$ with the original ${\displaystyle\Bbb Q}[t,t^{-1}]$-module structure. The theorem follows. $\Box$ Thus, renaming $u = t^k$, say, we obtain $\Delta_{\overline{K}}(u)$ in terms of $\Delta_K$. This can be further simplified as follows: write \begin{equation} \Delta_K(t) = c_0 + c_1(t + t^{-1}) + c_2(t^2 + t^{-2}) + \ldots + c_n(t^n + t^{-n}). 
\end{equation} Applying Boyer's theorem to $(7)$ and noting that all cross-terms cancel, we see that \begin{equation} \Delta_{\overline K}(t^k) \equiv c_0^k + c_1^k \prod_{j=0}^{k-1}(\zeta^j t + \zeta^{-j} t^{-1}) + c_2^k \prod_{j=0}^{k-1}(\zeta^{2j} t^2 + \zeta^{-2j} t^{-2}) + \ldots + c_n^k \prod_{j=0}^{k-1}(\zeta^{nj} t^n + \zeta^{-nj} t^{-n}). \end{equation} In particular, if $k$ is odd, note that $\prod_{j=0}^{k-1}(\zeta^{rj} t^r + \zeta^{-rj} t^{-r}) = t^{rk}+ t^{-rk}$ for any $r$. Then equation (8) becomes \begin{equation} \Delta_{\overline K}(t^k) \equiv c_0^k + c_1^k(t^k+t^{-k}) + c_2^k(t^{2k} + t^{-2k}) + \ldots + c_n^k(t^{nk} + t^{-nk}) \end{equation} and finally \begin{equation} \Delta_{\overline K}(u) \equiv c_0^k + c_1^k(u + u^{-1}) + c_2^k(u^2 + u^{-2}) + \ldots + c_n^k(u^n+u^{-n}). \end{equation} We now turn to relating $\Delta_{\overline L_{j,i}}$ to $\Delta_{L_j}$. We return to the convention that $\Delta_C$ is symmetric in $t^{1/2}$ and $t^{-1/2}$ and that $\Delta_C(1)=1$, so that $\Delta_C$ is uniquely defined. For the remainder of the paper, for any rational homology sphere $M$, let $lk_M(\_,\_)$ denote the linking number in $M$. \begin{lemma} Let $C$ and $C'$ be knots in a rational homology sphere $M$ such that \mbox{$lk_M(C,C')=0$}. Let $\hat M$ denote the manifold resulting from $p/q$-Dehn surgery on $C'$ in $M$. Then \mbox{$lk_{\hat M}(\hat C,\hat D) = lk_M(C,D)$} for any knot $D$ in $M$, where $\hat C$ and $\hat D$ are the images of $C$ and $D$ in $\hat M$. \end{lemma} \noindent{\em Proof:} Fix a knot $D$ in $M$. Suppose $C$ represents an element of order $m$ in ${\rm H}_1(M;{\displaystyle\Bbb Z})$. Since $lk_M(C,C')=0$, we may choose a two-sided surface $\Sigma$ with boundary $m$ times $C$ which is disjoint from a neighborhood of $C'$ and which meets $D$ transversely. Then Dehn surgery on $C'$ does not affect a neighborhood of $\Sigma$ and hence does not affect the oriented intersection of $\Sigma$ and $D$. The assertion follows. $\Box$ We now show \begin{lemma} Let $C$ and $D$ be knots in a rational homology sphere $M$. Suppose $C$ is homologous to 0 and bounds a Seifert surface $\Sigma$ disjoint from $D$ satisfying $lk_M(D,\alpha_j)=0$ for $j=1,2,\ldots,2g$, where $\alpha_1,\alpha_2,\ldots,\alpha_{2g}$ is a collection of simple closed curves on $\Sigma$ representing a basis of ${\rm H}_1(\Sigma;{\displaystyle\Bbb Z})$. Let $M'$ denote the manifold resulting from $p/q$-Dehn surgery on $D$ in $M$, and let $C'$ denote the image of $C$ in $M'$. Then the Alexander polynomial of $C'$ in $M'$ is equal to the Alexander polynomial of $C$ in $M$. \end{lemma} \noindent {\em Proof:} Let $\Sigma'$ be the image of $\Sigma$ in $M'$. Let $\alpha'_1, \alpha'_2, \ldots, \alpha'_{2g}$ denote the images of $\alpha_1, \alpha_2, \ldots, \alpha_{2g}$, respectively, in $M'$. The Alexander polynomial $\Delta_C$ of $C$ in $M$ is the determinant of the matrix $A$ with entries $a_{i,j} = lk_M(\alpha^+_i,\alpha_j) - t lk_M(\alpha^-_i,\alpha_j)$, while the Alexander polynomial $\Delta_{C'}$ of $C'$ in $M'$ is the determinant of the matrix $A'$ with entries \mbox{$a'_{i,j} = lk_{M'}(\alpha^{'+}_i,\alpha'_j) - t lk_{M'}(\alpha^{'-}_i,\alpha'_j)$}. (Here $x^+$ and $x^-$ denote the plus- and minus-pushoffs of a simple closed curve $x$ on $\Sigma$ or $\Sigma'$.) (See [W,Appendix B], for example.) 
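As a brief aside before completing the proof, the determinant formula just quoted is easy to evaluate in examples. For a knot in $S^3$, with the convention $V_{ij}=lk(\alpha_i^+,\alpha_j)$ (our choice of convention, not taken from the paper), one has $lk(\alpha_i^-,\alpha_j)=V_{ji}$, so the matrix $A$ is $V-tV^T$ for a Seifert matrix $V$. The following computer-algebra sketch (ours) checks this for a standard Seifert matrix of the trefoil.

\begin{verbatim}
# Minimal sketch (not from the paper): the determinant formula of the proof
# above, for the trefoil in S^3.  V[i][j] plays the role of lk(a_i^+, a_j),
# so A = V - t*V^T has entries lk(a_i^+,a_j) - t*lk(a_i^-,a_j).
import sympy as sp

t = sp.symbols('t')
V = sp.Matrix([[-1, 1], [0, -1]])      # a standard Seifert matrix of the trefoil
A = V - t * V.T
delta = sp.expand(A.det())             # t**2 - t + 1
print(delta)
# normalize so that it is symmetric in t^(1/2), t^(-1/2) with Delta(1) = 1:
print(sp.expand(delta / t))            # t - 1 + 1/t
\end{verbatim}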
But since $lk_M(D,\alpha_j)=0$ for $j=1,2,\ldots,2g$, we may apply Lemma A.5 to show that $ lk_{M'}(\alpha^{'+}_i,\alpha'_j) = lk_M(\alpha^+_i,\alpha_j)$ and $lk_{M'}(\alpha^{'-}_i,\alpha'_j) = lk_M(\alpha^-_i,\alpha_j)$ for all $i$ and $j$. It follows that $A' = A$, and hence $\Delta_{C'} = \Delta_C$. $\Box$ \begin{prop} Let $(K,L)$ be a boundary link in $S^3$, and let $\overline{L}$ be a lift of $L$ in the branched $k$-fold cover $\overline{S}^3_K$ for some $k$. The Alexander polynomial of $\overline L$ in $\overline S^3_K$ is equal to the Alexander polynomial of $L$ in $S^3$. \end{prop} \noindent {\em Proof:} Choose a link $C = (C_1,C_2,\ldots,C_n)$ in $S^3$ and surgery coefficients $r = (r_1,r_2,\ldots,r_n)$ so that $r$-Dehn surgery on $C$ yields $S^3$ and so that the image $K'$ of $K$ is an unknot. (See [R, Section 6D], for example.) Isotoping $C$ as necessary, changing crossings of $C$ and $L$ but changing no crossings of $C \cup K$, we may assume that $L$, $K$, and $C_i$ bound Seifert surfaces $\Sigma_L$, $\Sigma_K$, and $\Sigma_i$, respectively, with $\Sigma_L \cap \Sigma_K = \Sigma_L \cap \Sigma_i = \emptyset$ for $i = 1,2,\ldots,n$. Let $L'$ denote the image of $L$ under $r$-surgery on $C$. Let $\alpha_1,\alpha_2,\ldots,\alpha_{2g}$ be a collection of simple closed curves on $\Sigma_L$ representing a basis of ${\rm H}_1(\Sigma_L;{\displaystyle\Bbb Z})$. By Lemma A.5, the linking number of the image of any component $C_i$ with the image of any curve $\alpha_j$ is 0 at any stage of the sequence of surgeries on $C_1,C_2,\ldots,C_n$. Then we may apply Lemma A.6 repeatedly to show that the Alexander polynomial $\Delta_{L'} = \Delta_L$. Furthermore, the linking number of $K'$ and the image of any $\alpha_j$ in $S^3$ at the end of the surgery sequence is 0. Then applying Lemma 3.2, we find that $(L',K')$ and $(L',C'_i)$ are boundary links, where the $C'_i$ are the images of the $C_i$ in $S^3$ after the surgery. In fact, since $K'$ is an unknot, we see that $(L',K')$ is splittable. Now consider the $k$-fold branched cover of $S^3$ branched along the unknot $K'$, which is again $S^3$. Since $(L',K')$ is splittable, we see that $L'$ has $k$ lifts $\overline L'_1,\overline L'_2,\ldots,\overline L'_k$ lying in disjoint 3-balls, and the Alexander polynomials satisfy $\Delta_{\overline L'_i} = \Delta_{L'}$ for $i = 1,2,\ldots,k$. Let $\overline C'$ denote the inverse image of $C'$ in $\overline S^3_{K'}$. Finally, it is clear that surgery on $\overline C'$ with appropriate coefficients yields $\overline S^3_K$ and takes $\overline L'_i$ onto $\overline L_i$ for $i =1,2,\ldots,k$. The assertion follows. $\Box$ \begin{flushleft} Department of Mathematics and Statistics \newline The College of New Jersey \newline Ewing, NJ 08628 \newline USA \newline [email protected] \end{flushleft} \end{document}
math
45,030
\begin{document} \title{Shortcuts to adiabaticity by superadiabatic iterations} \author{S. Ib\'a\~{n}ez} \affiliation{Departamento de Qu\'{\i}mica F\'{\i}sica, Universidad del Pa\'{\i}s Vasco - Euskal Herriko Unibertsitatea, Apdo. 644, Bilbao, Spain} \author{Xi Chen} \affiliation{Departamento de Qu\'{\i}mica F\'{\i}sica, Universidad del Pa\'{\i}s Vasco - Euskal Herriko Unibertsitatea, Apdo. 644, Bilbao, Spain} \affiliation{Department of Physics, Shanghai University, 200444 Shanghai, People's Republic of China} \author{J. G. Muga} \affiliation{Departamento de Qu\'{\i}mica F\'{\i}sica, Universidad del Pa\'{\i}s Vasco - Euskal Herriko Unibertsitatea, Apdo. 644, Bilbao, Spain} \affiliation{Department of Physics, Shanghai University, 200444 Shanghai, People's Republic of China} \begin{abstract} Different techniques to speed up quantum adiabatic processes are currently being explored for applications in atomic, molecular and optical physics, such as transport, cooling and expansions, wavepacket splitting, or internal state control. Here we examine the capabilities of superadiabatic iterations to produce a sequence of shortcuts to adiabaticity. The general formalism is worked out as well as examples for population inversion in a two-level system. \end{abstract} \pacs{37.10.De, 32.80.Qk, 42.50.-p, 03.65.Ca} \maketitle \section{Introduction} There is currently much interest in speeding up quantum adiabatic processes for applications such as fast cold-atom or ion transport, expansions, wave-packet splitting or internal state population and state control \cite{review}. Different techniques have been put forward and/or applied. Among them, Demirplak and Rice \cite{DR03,DR05,DR08}, and Berry \cite{Berry2009} proposed the addition of a suitable counterdiabatic term $H_{cd}^{(0)}(t)$ to the time dependent Hamiltonian $H_0(t)$ whose adiabatic dynamics is to be implemented. With that term added, transitions in the instantaneous eigenbasis $\{|n_0(t)\rangle\}$ of $H_0(t)$ are suppressed, $H_0(t) |n_0(t)\rangle = E_n^{(0)}(t) |n_0(t)\rangle$, while there are in general transitions in the instantaneous eigenbasis of the new full Hamiltonian $H_0+H_{cd}^{(0)}$. Experiments that implement these ideas have recently been performed in different two-level systems \cite{Oliver,Zhang}. The same $H_{cd}^{(0)}(t)$ also appears naturally when studying the adiabatic approximation of the reference system, the one that evolves with $H_0(t)$, see e.g. \cite{Mes}. The reference system behaves adiabatically, following the eigenstates of $H_0(t)$, when the counterdiabatic term is negligible, and the adiabatic approximation is close to the actual dynamics. This is made evident in an interaction picture (IP) based on the unitary transformation $A_0(t)=\sum_n |n_0(t)\rangle\langle n_0(0)|$. (The ``parallel-transport'' condition $\langle n_0(t)|\dot{n}_0(t)\rangle=0$ is assumed hereafter to define the phases.) From the Schr\"odinger equation $i\hbar \partial_t \psi_0(t)=H_0(t)\psi_0(t)$ and defining $\psi_1(t)=A_0^\dagger\psi_0(t)$, the IP equation $i\hbar \partial_t \psi_1(t)=H_1(t)\psi_1(t)$ is deduced, where $H_1(t)=A_0^\dagger(t)(H_0(t)-K_0(t))A_0(t)$ is the effective IP Hamiltonian and $K_0(t)=i\hbar \dot{A}_0(t)A_0^\dagger(t)$ is a coupling term. 
If $K_0(t)$ is zero or negligible, $H_1(t)$ becomes diagonal in the basis $\{|n_0(0)\rangle\}$, so that the IP equation becomes an uncoupled system with solutions \begin{equation} |\psi_1(t)\rangle=U_1(t) |\psi_1(0)\rangle, \end{equation} where \begin{equation} U_1(t)=\sum_n |n_0(0)\rangle e^{-\frac{i}{\hbar}\int_0^t E_n^{(0)}(t')dt'} \langle n_0(0)| \end{equation} is the unitary evolution operator for the uncoupled system. Correspondingly, from $|\psi_0(t)\rangle= A_0(t) |\psi_1(t)\rangle$, \begin{equation} |\psi_0^{(1)}(t)\rangle=\sum_n |n_0(t)\rangle e^{-\frac{i}{\hbar}\int_0^t E_n^{(0)}(t')dt'} \langle n_0(0)|\psi_0(0)\rangle, \end{equation} where we have used $|\psi_1(0)\rangle=|\psi_0(0)\rangle$ since $A_0(0)=1$ by construction. The same solution, which, for a non-zero $K_0(t)$, is only approximate, may become exact by adding to the IP Hamiltonian the counterdiabatic term $A_0^\dagger(t)K_0(t)A_0(t)$. This requires an external intervention and changes the physics of the original system, so that $U_1(t)$ describes the evolution exactly. In the IP the modified Hamiltonian is $H^{(1)}(t)=H_1(t)+A_0^\dagger(t)K_0(t)A_0(t)= A_0^\dagger(t)H_0(t)A_0(t)$, and in the Schr\"odinger picture (SP) the additional term becomes simply $K_0$. The modified Schr\"odinger Hamiltonian is $H_0^{(1)}(t)=H_0(t)+K_0(t)$, so we identify $H_{cd}^{(0)}(t)=K_0(t)$. A ``small'' coupling term $K_0(t)$ that makes the adiabatic approximation a good one also implies a small counterdiabatic manipulation but, irrespective of the size of $K_0(t)$, $H_0^{(1)}(t)$ provides a shortcut to slow adiabatic following because it keeps the populations in the instantaneous basis of $H_0(t)$ invariant, in particular at the final time $t_f$. Moreover, if $K_0(0) = 0$ and $K_0(t_f) = 0$, then $H_0^{(1)}=H_0$ at $t=0$ and $t=t_f$. This is useful in practice to ensure the continuity of the Hamiltonian at the boundary times: usually $H_0(t<0)=H_0(0)$ and $H_0(t>t_f)=H_0(t_f)$, so $K_0(t<0)=K_0(t>t_f)=0$, i.e., $H_0(t)$ is the actual Hamiltonian before and after the process. The previous formal framework may be repeated iteratively to define further IPs by diagonalizing the effective Hamiltonians of each IP. These iterations were used to establish {\it generalized adiabatic invariants} and {\it adiabatic invariants of $n$-th order} by Garrido \cite{Garrido}. Berry also used this iterative procedure to calculate a sequence of corrections to Berry's phase for cyclic processes with finite slowness, and introduced the concept of ``superadiabaticity'' \cite{Berry1987}. For later developments and applications see e.g. \cite{Berry1990,pertur,Joli,DR08,master,MagReson,Berry&Uzdin,Uzdin&Moiseyev,Moiseyev,Oliver,Sara12}. The idea of superadiabatic iterations is best understood by working out explicitly the next interaction picture:\footnote{The first IP and iteration just described, with dynamics governed by $H_1(t)$, generates the modified dynamics based on $H_0^{(1)}$ in the SP. This iteration may naturally be termed ``adiabatic'', since the unitary transformation used, $A_0$, relies on the usual adiabatic basis. Moreover this is the IP used to perform the adiabatic approximation by neglecting $K_0$. The second iteration may be considered as the first ``superadiabatic'' one.} let us start with $i\hbar\partial_t \psi_1(t)=H_1(t)\psi_1(t)$ and treat it as if it were, formally, a Schr\"odinger equation. 
The diagonalization of $H_1(t)$ provides the eigenbasis $\{|n_1(t)\rangle\}$, {$H_1(t)|n_1(t)\rangle = E_n^{(1)}(t)|n_1(t)\rangle$}, that we fix again with the parallel transport condition, $\langle n_1(t)|\dot{n}_1(t)\rangle=0$. A new unitary operator $A_1=\sum_n|n_1(t)\rangle\langle n_1(0)|$ plays now the same role as $A_0$ in the first (adiabatic) IP. It defines a new interaction picture wave function $\psi_2(t)=A_1^\dagger(t)\psi_1(t)$ that satisfies $i\hbar\partial_t \psi_2(t)=H_2(t)\psi_2(t)$, where $H_2(t)=A_1^\dagger(t)(H_1(t)-K_1(t))A_1(t)$ and $K_1=i\hbar\dot{A}_1A_1^\dagger$. If $K_1$ is zero or ``small'' enough, i.e., if a (first order) superadiabatic approximation is valid, the dynamics would be uncoupled in the new interaction picture, namely, \begin{equation} |\psi_2(t)\rangle=U_2(t)|\psi_2(0)\rangle, \end{equation} where \begin{equation} U_2(t)= \sum_n |n_1(0)\rangle e^{-\frac{i}{\hbar}\int_0^t E_n^{(1)}(t')dt'} \langle n_1(0)| \end{equation} is the approximate evolution operator in the second IP for uncoupled motion. It may happen that a process is not adiabatic, since $K_0(t)$ may not be neglected, but (first-order) superadiabatic when $K_1(t)$ can be neglected. Transforming back to the Schr\"odinger picture, $|\psi_0^{(2)}(t)\rangle = A_0(t) A_1(t) U_2(t) |\psi_2(0)\rangle$ becomes \begin{eqnarray} |\psi_0^{(2)}(t)\rangle &=& \sum_n\sum_m |m_0(t)\rangle \langle m_0(0)|n_1(t)\rangle e^{-\frac{i}{\hbar}\int_0^t E_n^{(1)}(t')dt'} \nonumber \\ &\times& \langle n_1(0)|\psi_0(0)\rangle, \label{psi02} \end{eqnarray} and $|\psi_2(0)\rangle=|\psi_0(0)\rangle$ since $A_0(0)=A_1(0)=1$. Garrido distinguished two different aspects \cite{Garrido}: \begin{itemize} \item Generalized adiabaticity: The evolution operator $A_0(t) A_1(t) U_2(t)$ provides an approximation to the actual (Schr\"odinger) dynamics up to a correction term of order $1/t_f^{2}$. This is so without imposing any boundary conditions (BCs) at $t=0$ and $t=t_f$ on the Hamiltonian $H_0$. \item Higher order adiabaticity: $A_0(t) A_1(t) U_2(t)$ does not guarantee in general that $|n_0(0)\rangle$ evolves into $|n_0(t_f)\rangle$, up to a phase factor. If this is the objective, in other words, if a superadiabatic approximation should behave, at final times, like the adiabatic approximation, up to phase factors, then some BCs have to be imposed. Garrido discussed how generalized adiabaticity implies higher order adiabaticity when BCs at the boundary times are imposed on the derivatives of $H_0$. \end{itemize} (Garrido's distinction does not apply in \cite{Berry1987} since there it is assumed from the start that all the derivatives of the Hamiltonian $H_0$ vanish at the (infinite) time edges.) The second aspect is crucial to design shortcuts to adiabaticity for finite process times using the superadiabatic iterative structure, so let us be more specific. First notice that Eq. (\ref{psi02}) becomes exact if the term $A_1^{\dag} K_1 A_1$ is added to the IP Hamiltonian, so that now the modified IP Hamiltonian is $H^{(2)} = H_2+A_1^{\dag} K_1 A_1= A_1^{\dag} H_1 A_1$. Then the modified SP Hamiltonian becomes $H_0^{(2)}(t)=H_0(t)+H_{cd}^{(1)}(t)$, where $H_{cd}^{(1)}(t) = A_0(t) K_1(t) A_0^{\dag}(t)$. 
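To make the iterative construction concrete, the following is a minimal numerical sketch (ours, not the authors' code; the function names, grid choices and finite-difference derivatives are our own) that builds $A_0$, $K_0=H_{cd}^{(0)}$, $H_1$, $A_1$, $K_1$ and $H_{cd}^{(1)}=A_0K_1A_0^{\dag}$ on a time grid for a generic $2\times 2$ Hamiltonian, enforcing the parallel-transport condition discretely. The illustrative parameters are those of the Landau-Zener example of Sect. IV, with $\hbar=1$, frequencies in MHz and times in $\mu$s.

\begin{verbatim}
# Minimal numerical sketch (not the authors' code): adiabatic and first
# superadiabatic frames for a 2x2 Hamiltonian, with discrete parallel transport.
import numpy as np

hbar = 1.0
alpha, Omega0, tf = -20.0, 0.2, 0.2          # MHz^2, MHz, microseconds
ts = np.linspace(0.0, tf, 4001)
dt = ts[1] - ts[0]

def H0(t):                                   # Landau-Zener: linear Delta, constant Omega_R
    Delta = alpha * (t - tf / 2.0)
    return 0.5 * hbar * np.array([[-Delta, Omega0], [Omega0, Delta]], dtype=complex)

def frame(Hs):
    """A_j(t) = sum_n |n_j(t)><n_j(0)| from parallel-transported eigenvectors."""
    Vs = []
    for i, H in enumerate(Hs):
        _, V = np.linalg.eigh(H)             # columns: instantaneous eigenvectors
        if i > 0:
            for n in range(V.shape[1]):      # fix phases so <n(t_{i-1})|n(t_i)> > 0
                ov = np.vdot(Vs[-1][:, n], V[:, n])
                V[:, n] *= np.conj(ov) / abs(ov)
        Vs.append(V)
    Vs = np.array(Vs)
    return Vs @ Vs[0].conj().T               # A_j(t_i) = V(t_i) V(0)^dagger

def coupling(As):
    """K_j = i hbar (dA_j/dt) A_j^dagger, by finite differences on the grid."""
    dA = np.gradient(As, dt, axis=0)
    return 1j * hbar * dA @ np.transpose(As.conj(), (0, 2, 1))

Hs0 = np.array([H0(t) for t in ts])
A0 = frame(Hs0)
K0 = coupling(A0)                            # H_cd^(0) in the Schroedinger picture
Hs1 = np.transpose(A0.conj(), (0, 2, 1)) @ (Hs0 - K0) @ A0   # H_1 = A_0^+(H_0 - K_0)A_0
A1 = frame(Hs1)
K1 = coupling(A1)
Hcd1 = A0 @ K1 @ np.transpose(A0.conj(), (0, 2, 1))          # H_cd^(1) = A_0 K_1 A_0^+

i = len(ts) // 2
print("H_cd^(0)(tf/2):\n", np.round(K0[i], 3))
print("H_cd^(1)(tf/2):\n", np.round(Hcd1[i], 3))
\end{verbatim}

For these parameters the printed $H_{cd}^{(0)}(t_f/2)$ should be essentially antidiagonal and purely imaginary ($\propto\sigma_y$), with amplitude close to $\hbar|\alpha|/(2\Omega_{0,lz})$, of the order of the $|Y_{max}|$ entry quoted for $H_0^{(1)}$ in Table \ref{table1} of Sect. IV.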
However, quite generally the populations of the final state (\ref{psi02}) in the adiabatic basis $\{|n_0(t_f)\rangle\}$ will be different from the ones of the adiabatic process, unless (a) $\{|n_0(0)\rangle\} = \{|n_1(t_f)\rangle\}$, up to phase factors, and (b) $\{|n_0(0)\rangle\} =\{|n_1(0)\rangle\}$, also up to phase factors. (a) is satisfied when $K_0(t_f)=0$. This makes $H_1(t_f)$ diagonal in the basis $\{|n_0(0)\rangle \}$ and $[H_1(t_f),H_0(0)]=0$. (b) is satisfied when $K_0(0)=0$, which implies $H_1(0)=H_0(0)$. In summary, the requirement is that the eigenstates of $H_1(t)$ at $t=0$ and $t=t_f$ coincide with the eigenstates of $H_0(0)$. If, in addition, $K_1(0)=K_1(t_f)=0$, not only the populations, but also the initial and final Hamiltonians are the same for the ``corrected'' and for the reference processes, namely $H_0^{(2)}(0)= H_0(0)$ and $H_{0}^{(2)} (t_f)= H_0(t_f)$. Further iterations define higher order superadiabatic frames with IP equations $i\hbar\partial_t\psi_j(t)=H_j\psi_j(t)$, where \begin{eqnarray} H_j&=&A_{j-1}^\dagger (H_{j-1}-K_{j-1})A_{j-1}, \label{Hj} \\ K_j&=&i\hbar\dot{A}_jA_j^\dagger, \label{Kj} \end{eqnarray} with $A_j(t)= \sum_n |n_j(t)\rangle \langle n_j(0)|$ and $H_j(t) |n_j(t)\rangle = E_n^{(j)}(t) |n_j(t)\rangle$. As $A_j(0)=1$ by construction, there is a common initial state $| \psi_j(0) \rangle=| \psi_0(0) \rangle$ for all iterations. The general form for the modified IP Hamiltonians is $H^{(j)} = H_j+A_{j-1}^{\dag} K_{j-1} A_{j-1}= A_{j-1}^{\dag} H_{j-1} A_{j-1}$. Thus, the form of the modified Hamiltonians in the SP is \begin{equation} H_0^{(j)} = H_0+H_{cd}^{(j-1)}, \end{equation} where the SP counterdiabatic term is \begin{equation} \label{Hcd} H_{cd}^{(j)} = B_j K_j B_j^\dagger = i\hbar B_j \dot{A}_j A_j^\dagger B_j^\dagger, \end{equation} with $B_j = A_0 \cdots A_{j-1}$ and $B_0=I$. If $K_{j-1}(t)$ is small or negligible, $H_j(t)$ becomes diagonal in the basis $\{ |n_{j-1}(0)\rangle \}$ and the IP equation becomes an uncoupled system with solutions $|\psi_j(t)\rangle = U_j(t) |\psi_j(0)\rangle$, where \begin{equation} U_j(t)= \sum_n |n_{j-1}(0)\rangle e^{-\frac{i}{\hbar}\int_0^t E_n^{(j-1)}(t')dt'} \langle n_{j-1}(0)| \end{equation} is the approximate evolution operator in the $j$-th IP. Correspondingly, the approximate solution in the SP is given by $|\psi_0^{(j)}(t)\rangle = A_0(t) A_1(t) \cdots A_{j-1}(t) U_j(t) |\psi_0(0)\rangle$. This solution becomes exact if the $H_{cd}^{(j-1)}$ term is added to $H_0$, although in general the populations of $|\psi_0^{(j)}(t_f)\rangle$ in the adiabatic basis $\{|n_0(t_f)\rangle\}$ will be different from the ones of the adiabatic process, unless appropriate BCs are imposed. These boundary conditions are made explicit in the next section, and correspond partially to the conditions discussed by Garrido in \cite{Garrido} to define ``higher order adiabaticity''. Is there any advantage in using one or another counter-diabatic scheme? There are several reasons that could make higher order schemes attractive in practice: one is that the structure of the $H_{cd}^{(j)}$ may change with $j$. For example, for a two-level atom population inversion problem, $H_{cd}^{(0)}=\hbar(\dot{\Theta}_0/2)\sigma_y$, whereas $H_{cd}^{(1)}=\hbar(\dot{\Theta}_1/2)(\cos\Theta_0 \sigma_x-\sin\Theta_0\sigma_z)$, where the $\Theta_j$ are the polar angles corresponding to the Cartesian components of the Hamiltonian $H_j$, and the $\sigma_u$, with $u=x,y,z$, are Pauli matrices \cite{Sara12}. 
(We shall use the Cartesian decomposition $X\sigma_x+Y\sigma_y+Z\sigma_z$ for different Hamiltonians below.) A second reason is that, for a fixed process time, the cd-terms tend to be smaller in norm as $j$ increases, up to a value of $j$ at which they begin to grow \cite{MagReson}. An optimal iteration may thus be set \cite{Berry1990,MagReson}. The ``asymptotic character'' of the superadiabatic coupling terms and the eventual divergence of the sequence can be traced back to the existence of non-adiabatic transitions, even if they are small \cite{Berry1987}. To generate shortcuts one should pay attention, though, not only to the size of the cd-terms but also to the feasibility or approximate fulfillment of the required BCs at the boundary times. Thus, it may happen that an ``optimal iteration'', of minimal norm for the cd-term, fails to provide a shortcut because of the BCs, as illustrated below in Sect. IV. \section{Boundary conditions for shortcuts to adiabaticity via superadiabatic iterations} In this section we set the boundary conditions that guarantee that $H_0^{(j)}(t)$ provides a shortcut to adiabaticity. We have seen that for $j=1$ no conditions are required. For $j=2$ we need that $\{|n_{1}(t_f)\rangle \}=\{|n_0(0)\rangle \}$ and $\{|n_{1}(0)\rangle \}=\{|n_0(0)\rangle \}$, (as before in these and similar expressions in brackets, the equalities should be understood up to phase factors), i.e., $K_0(t_f)=K_0(0)=0$. For the iterations $j>2$ we need that (a) $\{|n_{j-1}(0)\rangle \}=\{|n_0(0)\rangle \}$, which occurs when $K_{j-2}(0)=K_{j-3}(0)= ... =K_1(0)=K_0(0)=0$, and (b) $\{|n_{j-1}(t_f)\rangle \}=\{|n_{j-2}(0)\rangle \}$, $\{|n_{j-2}(t_f)\rangle \}=\{|n_{j-3}(0)\rangle \}$, $\{|n_{j-3}(t_f)\rangle \}=\{|n_{j-4}(0)\rangle \}$, ... , and $\{|n_1(t_f)\rangle \}=\{|n_0(0)\rangle \}$. This amounts to imposing $K_{j-2}(t_f)=K_{j-3}(t_f)= ... =K_1(t_f)=K_0(t_f)=0$. The vanishing of $K_{j'}(0)$ for $j'\leq j-2$ implies that $H_0(0)=H_1(0)=...=H_{j-1}(0)$, so (a) and (b) combined may be summarized as $\{|n_{j'}(0)\rangle \}=\{|n_{j'}(t_f)\rangle \}=\{|n_0(0)\rangle \}$ for all $j'\leq j-1$. Garrido showed that canceling out the first $l$ time derivatives of $H_0(0)$ and $H_0(t_f)$ makes $K_j(0)=0$ and $K_j(t_f)=0$, for $j=1, ... ,l-1$, respectively \cite{Garrido}. However, canceling out the derivatives of $H_0$ is a sufficient but not a necessary condition to cancel the coupling terms, so we find it more useful to focus instead on the coincidence of the bases; this is exemplified in Sect. IV. \section{Alternative framework with a constant basis} An alternative to the formal framework described so far provides computational advantages. It was implicitly applied by Demirplak and Rice for a two-level system \cite{DR08}. We shall here generalize and formulate explicitly this approach and show its essential equivalence to the former. The main idea is to use instead of the $A_j$ a different set of unitary operators, $\tilde{A}_j(t) = \sum_{n} |\tilde{n}_j(t)\rangle \langle n|$, to define the sequence of interaction pictures, where $|\tilde{n}_j(t)\rangle$ are eigenstates of the new IP Hamiltonians $\tilde{H}_{j}(t)$, {such that $\tilde{H}_{j}(t) |\tilde{n}_j(t)\rangle = \tilde{E}_n^{(j)}(t) |\tilde{n}_j(t)\rangle$,} and $\{ |n\rangle \}$ is a constant orthonormal basis {\it equal for all} $j$, which in principle does not necessarily coincide with $|n_j(0)\rangle$. Similarly to Eq. 
(\ref{Hj}), \begin{equation} \label{H_j+1'} \tilde{H}_{j} = \tilde{A}_{j-1}^\dagger(\tilde{H}_{j-1}-\tilde{K}_{j-1})\tilde{A}_{j-1}, \end{equation} where $\tilde{K}_j = i\hbar \dot{\tilde{A}}_j \tilde{A}_j^\dagger = i \hbar \sum_{n} |\dot{\tilde{n}}_j(t)\rangle\langle \tilde{n}_j(t)|$. The counterdiabatic terms in the SP are introduced as before, $\tilde{H}_{cd}^{(j)} = \tilde{B}_j \tilde{K}_j \tilde{B}_j^\dagger$, where $\tilde{B}_j = \tilde{A}_0 \cdots \tilde{A}_{j-1}$ with $\tilde{B}_0=I$. We shall next show that these cd-terms are independent of the chosen constant basis, so that $\tilde{H}_{cd}^{(j)}(t) = H_{cd}^{(j)}(t)$. Therefore, it is worth using $\tilde{A}_j(t)$ instead of $A_j(t)$ since they are simpler operators and significantly facilitate the manipulations as a common basis is used. Let us start with the first iteration. Since $\tilde{H}_{0}(t)=H_{0}(t)$, then $\tilde{E}_n^{(0)}(t)= E_n^{(0)}(t)$, $|\tilde{n}_0(t)\rangle = |n_0(t)\rangle$, and $\tilde{K}_0=K_0$, so $\tilde{H}_{cd}^{(0)}=H_{cd}^{(0)}$. In addition, from Eq. (\ref{Hj}), $H_0-K_0 = A_0 H_{1}A_0^\dagger$, and substituting it in Eq. (\ref{H_j+1'}) leads to \begin{equation} \label{H_1'} \tilde{H}_1= u_0 H_1 u_0^\dagger, \end{equation} where we have defined a constant unitary operator \begin{eqnarray} u_0 &=& \tilde{A}_0^\dagger A_0 = \sum_{n} |n\rangle \langle n_0(0)|, \nonumber \\ \dot{u}_0 &=& 0. \nonumber \end{eqnarray} Using \begin{equation} \label{H_E} H_{j}(t) = \sum_{n} |n_{j}(t)\rangle E_n^{(j)}(t) \langle n_{j}(t)| \end{equation} and \begin{equation} \label{H_E'} \tilde{H}_{j}(t) = \sum_{n} |\tilde{n}_{j}(t)\rangle \tilde{E}_n^{(j)}(t) \langle \tilde{n}_{j}(t)|, \end{equation} for $j=1$ in Eq. (\ref{H_1'}), we get that $\tilde{E}_n^{(1)}(t) = E_n^{(1)}(t)$ and $|\tilde{n}_{1}(t)\rangle = u_0 |n_{1}(t)\rangle$, while $|n\rangle = u_0 |n_{0}(0)\rangle$. Expanding $\tilde{H}_{cd}^{(1)} = \tilde{A}_0 \tilde{K}_1 \tilde{A}_0^\dagger$ we have that \begin{eqnarray} \tilde{H}_{cd}^{(1)}(t) = i \hbar \sum_{n,m,l,p} |\tilde{n}_0(t)\rangle \langle n|\dot{\tilde{m}}_1(t)\rangle \langle m|l\rangle \langle \tilde{l}_1(t)|p\rangle \langle \tilde{p}_0(t)|. \nonumber \end{eqnarray} Using now $\langle m|l\rangle = \delta_{m l}$, $|\tilde{n}_0(t)\rangle = |n_0(t)\rangle$, $|n\rangle = u_0 |n_{0}(0)\rangle$, and $|\tilde{n}_{1}(t)\rangle = u_0 |n_{1}(t)\rangle$, it follows that $\tilde{H}_{cd}^{(1)}=H_{cd}^{(1)}$. Also, $\tilde{K}_1 = \tilde{A}_0^\dagger A_0 K_1 A_0^\dagger \tilde{A}_0 = u_0 K_1 u_0^\dagger$. Repeating these steps for $j\geqslant1$, $\tilde{H}_{j} = u_{j-1} H_{j} u_{j-1}^\dagger$ and $\tilde{K}_{j} = u_{j-1} K_{j} u_{j-1}^\dagger$, where \begin{eqnarray} u_j&=& \tilde{A}_j^\dagger u_{j-1} A_j = \sum_{n} |n\rangle \langle n_j(0)|, \nonumber \\ \dot{u}_j&=& 0. \nonumber \end{eqnarray} This leads to $\tilde{E}_n^{(j)}(t) = E_n^{(j)}(t)$, $|\tilde{n}_{j}(t)\rangle = u_{j-1} |n_{j}(t)\rangle$, and $|n\rangle = u_{j-1} |n_{j-1}(0)\rangle$. Thus, for all $j\geq 0$, \begin{equation} \tilde{H}_{cd}^{(j)} = H_{cd}^{(j)}. \nonumber \end{equation} The boundary conditions to achieve shortcuts to adiabaticity take the same form as for the original framework in the previous section. Since $\tilde{K}_{0} = K_{0}$ and $\tilde{K}_{j} = u_{j-1} K_{j} u_{j-1}^\dagger$ for $j \geqslant 1$, for the $j$-th iteration, {with $j>1$}, we need that $\tilde{K}_0(0)=\tilde{K}_1(0)=...=\tilde{K}_{j-2}(0)=0$, and $\tilde{K}_0(t_f)=\tilde{K}_1(t_f)=...=\tilde{K}_{j-2}(t_f)=0$. 
Let us recall that no conditions were required for $j=1$, although, as shown in the next section, using a convenient (constant or initial adiabatic) basis for specific Hamiltonians may also lead to conditions for $j=1$. \section{Two-level atom} The general formalism will now be applied to the two-level atom. Assuming a semiclassical interaction between a laser electric field and the atom, the electric dipole and the rotating wave approximations, the Hamiltonian of the system in a laser-adapted IP (that plays the role of the Schr\"{o}dinger picture of the previous section) is \begin{equation} H_{0}(t)=\frac{\hbar}{2} \left(\begin{array}{cc} -\Delta(t) & \Omega_{R}(t) \\ \Omega_{R}(t) & \Delta(t) \end{array} \right), \label{H0_2level} \end{equation} where $\Omega_{R}(t)$ is the Rabi frequency, assumed real, and $\Delta(t)$ is the detuning, in the ``bare basis'' of the two level system, $|1\rangle = (\tiny{\begin{array} {c} 1\\ 0 \end{array}})$, $|2\rangle = (\tiny{\begin{array} {c} 0\\ 1 \end{array}})$. The Hamiltonians of the consecutive interaction pictures can be written as \cite{DR08} \begin{equation} \tilde{H}_j(t)=\left( \begin{array}{cc} Z_j(t) & X_j(t)-iY_j(t) \\ X_j(t)+iY_j(t)& -Z_j(t) \end{array} \right), \label{Hj_2level} \end{equation} or $\tilde{H}_j=X_j\sigma_x+Y_j\sigma_y+Z_j\sigma_z$ \cite{Sara12}. Then, $X_0(t)=\hbar \Omega_{R}(t)/2$, $Y_0(t)=0$ and $Z_0(t)=- \hbar \Delta(t)/2$. $X_j$, $Y_j$, and $Z_j$ are the Cartesian coordinates of the ``trajectory'' of $\tilde{H}_j(t)$. It is also useful to consider the corresponding polar, azimuthal and radial spherical coordinates, $\Theta_j(t)$, $\Phi_j(t)$, and $R_j(t)$ \cite{DR08,Sara12}, that satisfy \begin{eqnarray} \label{spherical} \cos(\Theta_j) &=& \frac{Z_j}{R_j} ,\,\,\,\, \sin(\Theta_j)= \frac{P_j}{R_j} ,\,\,\,\, 0 \leq \Theta_j \leq \pi, \nonumber \\ \cos(\Phi_j) &=& \frac{X_j}{P_j} ,\,\,\,\, \sin(\Phi_j)= \frac{Y_j}{P_j} ,\,\,\,\, 0 \leq \Phi_j \leq 2\pi, \,\,\,\,\,\,\,\, \end{eqnarray} with $R_j= \sqrt{X_j^2+Y_j^2+Z_j^2}$ and $P_j= \sqrt{X_j^2+Y_j^2}$, where the positive branch is taken. The eigenvalues of $\tilde{H}_j(t)$ are $E_1^{(j)}=-R_j$ and $E_2^{(j)}=R_j$, and the corresponding eigenstates $\{|\tilde{n}_j(t)\rangle \}$ are \begin{eqnarray} \label{eigenstates} |\tilde{1}_{j}\rangle &=& e^{i \varepsilon_j} \left[ e^{-i \Phi_j /2} \sin{\left(\frac{\Theta_j}{2}\right)} |1\rangle - e^{i \Phi_j /2} \cos{\left(\frac{\Theta_j}{2}\right)} |2\rangle \right], \nonumber \\ |\tilde{2}_{j}\rangle &=& e^{-i \varepsilon_j} \left[ e^{-i \Phi_j /2} \cos{\left(\frac{\Theta_j}{2}\right)} |1\rangle + e^{i \Phi_j/2} \sin{\left(\frac{\Theta_j}{2}\right)} |2\rangle \right], \nonumber \\ \end{eqnarray} where the phase \begin{equation} \label{epsilon} \varepsilon_j(t)= - \frac{1}{2} \int_0^t \dot{\Phi}_j(t') \cos{[\Theta_j(t')]} dt' \end{equation} is introduced to fulfill the parallel transport condition $\langle \tilde{n}_j|\dot{\tilde{n}}_j \rangle=0$. We define $\tilde{A}_j=|\tilde{1}_j(t)\rangle\langle 1|+|\tilde{2}_j(t)\rangle\langle 2|$. The matrix $\tilde{A}_j(t)$ under these conditions is \begin{equation} \tilde{A}_j=\left( \begin{array}{cc} \sin{\left(\frac{\Theta_j}{2}\right)} e^{i\varepsilon_j-i\Phi_j /2} & \cos{\left(\frac{\Theta_j}{2}\right)} e^{-i\varepsilon_j-i\Phi_j /2} \\ -\cos{\left(\frac{\Theta_j}{2}\right)} e^{i\varepsilon_j+i\Phi_j /2} & \sin{\left(\frac{\Theta_j}{2}\right)} e^{-i\varepsilon_j+i\Phi_j/2} \end{array} \right). \label{Aj} \end{equation} Then, from Eq. 
(\ref{Kj}), \begin{eqnarray} \label{K_j} \tilde{K}_j &=& \frac{\hbar}{2} \left[ -\dot{\Theta}_j \sin{\left(\Phi_j \right)}- \frac{\dot{\Phi}_j}{2} \cos{\left( \Phi_j \right)} \sin{\left(2\Theta_j \right)} \right] \sigma_x \nonumber \\ &+& \frac{\hbar}{2} \left[ \dot{\Theta}_j \cos{\left(\Phi_j \right)}- \frac{\dot{\Phi}_j}{2} \sin{\left( \Phi_j \right)} \sin{\left(2\Theta_j \right)} \right] \sigma_y \nonumber \\ &+& \frac{\hbar \dot{\Phi}_j}{2} \sin^2{\left(\Theta_j \right)} \sigma_z. \end{eqnarray} Note that $\tilde{A}_j^\dagger \tilde{K}_j \tilde{A}_j = i\hbar \tilde{A}_j^\dagger \dot{\tilde{A}}_j$ has only non-diagonal elements in the bare basis $\{|1\rangle, |2\rangle \}$ \cite{DR08}. From Eq. (\ref{Hj}), the Cartesian coordinates of $\tilde{H}_{j+1}(t)$ are \begin{eqnarray} \label{XYZ} X_{j+1} &=& \frac{\hbar}{2} \left[ \dot{\Theta}_{j} \sin{\left(2\varepsilon_{j} \right)}- \dot{\Phi}_{j} \sin{\left( \Theta_{j} \right)} \cos{\left(2\varepsilon_{j} \right)} \right], \nonumber \\ Y_{j+1} &=& \frac{\hbar}{2} \left[ -\dot{\Theta}_{j} \cos{\left(2\varepsilon_{j} \right)}- \dot{\Phi}_{j} \sin{\left( \Theta_{j} \right)} \sin{\left(2\varepsilon_{j} \right)} \right], \nonumber \\ Z_{j+1} &=& -R_{j}. \end{eqnarray} In general, if $\Phi_j(t)$ is constant for a particular $j=J$, then $\dot{\Phi}_J(t)=0$, and from Eq. (\ref{epsilon}), $\varepsilon_J(t)=0$. Thus, taking into account Eq. (\ref{XYZ}), we have that $X_{J+1}(t)=0$ and $Y_{J+1}(t)=-\hbar \dot{\Theta}_J/2$. Equation (\ref{spherical}) leads to $\Phi_{J+1}(t)= \{ \pi/2,3\pi/2 \}$, with $\pi/2$ when $Y_{J+1} > 0$ ($\dot{\Theta}_J < 0$), and $3\pi/2$ when $Y_{J+1} < 0$ ($\dot{\Theta}_J > 0$). If $Y_{J+1}=0$, $\Phi_{J+1}$ is discontinuous, and $\Theta_{J+1}=\pi$. Therefore, $\varepsilon_{J+1}(t)= \{ 0,\pm \pi/2 \}$. From here, several general conditions can be deduced for $j' >J$: $\Phi_{j'>J}(t)= \{ \pi/2,3\pi/2 \}$, $\varepsilon_{j' > J}(t)= \{ 0,\pm \pi/2 \}$, $X_{j'>J}(t)=0$, and $Y_{J+1}(t)=-\hbar \dot{\Theta}_J/2$ or $Y_{j' > J+1}(t)= \pm \hbar \dot{\Theta}_{j'-1}/2$. Moreover, from Eq. (\ref{K_j}), $\tilde{K}_{j'>J}= \pm (\hbar \dot{\Theta}_{j'} /2) \sigma_x$ with positive sign if $\Phi_{j'}(t)= 3\pi/2$ and negative sign if $\Phi_{j'}(t)= \pi/2$. Eq. (\ref{spherical}) and $Y_0(t)=0$ imply $\Phi_0(t)=0$ if $X_0(t)>0$ and $\Phi_0(t)=\pi$ if $X_0(t)<0$. We may thus take $J=0$ and apply the above relations, for example $\dot{\Phi}_0(t)=0$ and $\varepsilon_0(t)=0$.\footnote{The analysis in this paragraph follows closely \cite{DR08}, but some of the results differ, in particular the values allowed for the phases $\varepsilon_{j'>J}$.} As we mentioned before, the method fails as a shortcut to adiabaticity when the boundary conditions are not well fulfilled. In order to have a shortcut generated by the iteration {$j$} we require that $\Delta(t)$ and $\Omega_R(t)$ are such that { \begin{eqnarray} \label{IC} |\tilde{1}_{j'}(0)\rangle &\approx& |1\rangle ,\,\,\,\,\,\,\,\, |\tilde{2}_{j'}(0)\rangle \approx |2\rangle, \\ |\tilde{1}_{j'}(t_f)\rangle &\approx& |1\rangle ,\,\,\,\,\,\,\,\, |\tilde{2}_{j'}(t_f)\rangle \approx |2\rangle, \label{IC2} \end{eqnarray} for $0<j'< j$, up to phase factors.} For $j'=0$ a natural and simple assumption is that the bare basis coincides initially with the adiabatic basis, i.e., Eq. (\ref{IC}); at $t_f$ we assume that the bare and adiabatic bases also coincide, allowing for permutations in the indices and phase factors. At $t=0$, using Eq. (\ref{eigenstates}), taking into account that, from Eq. 
(\ref{epsilon}), $\varepsilon_{j'}(0)=0$, and that $\Phi_0(t)=0$, $\sin{\left[\Theta_{j'}(0)/2\right]}=1$ and $\cos{\left[\Theta_{j'}(0)/2\right]}=0$ are required, or $\Theta_{j'}(0)=\pi$. Then, $\cos{[\Theta_{j'}(0)]}= Z_{j'}(0)/R_{j'}(0)= -1$. This condition is fulfilled if \begin{equation} \label{cond_1} Z_{j'}^2(0) \gg X_{j'}^2(0)+Y_{j'}^2(0), \end{equation} as long as $Z_{j'=0}(0)<0$, and knowing that $Z_{j'>0}(t)=-R_{j'-1}(t)<0$. The condition (\ref{cond_1}) can be simplified for specific $j'$-values as \begin{eqnarray} |Z_0(0)| &\gg& |X_0(0)|, \label{con00} \\ |Z_{j'>0}(0)| &\gg& |Y_{j'>0}(0)|. \label{cond_1_0} \end{eqnarray} At $t=t_f$, \begin{equation} Z_{j'}^2(t_f) \gg X_{j'}^2(t_f)+Y_{j'}^2(t_f) \end{equation} should be satisfied, where now $\Theta_0(t_f)$ can be either $0$, if $Z_0(t_f)>0$, or $\pi$ if $Z_0(t_f)<0$, and $\Theta_{j'>0}(t_f)=\pi$. As before, this condition splits into \begin{eqnarray} |Z_0(t_f)| &\gg& |X_0(t_f)|, \label{con0tf} \\ |Z_{j'>0}(t_f)| &\gg& |Y_{j'>0}(t_f)|. \label{cond_1_tf} \end{eqnarray} \begin{figure} \caption{\label{H_lz} $X$, $Y$ and $Z$ components of $H_0$ and of $H_0^{(j)}$, $j=1,2,3,4$, for the Landau-Zener protocol.} \end{figure} As an example we consider now a Landau-Zener scheme for $H_0$ (for the Allen-Eberly scheme we have found similar results), and study the behavior of $H_0^{(j)}$ with $j=1,2,3,4$, and the populations of the bare states driven by these Hamiltonians. \begin{table} \caption{Maxima of the $X$ and $Y$ components of $H_0$ and $H_0^{(j)}$ for $j=1,2,3,4,5$. Parameters: $\alpha=-20$ MHz$^2$, $\Omega_{0,lz}=0.2$ MHz, and $t_f=0.2$ $\mu$s.} \begin{center} \begin{tabular}{@{\hspace{1pt}} c@{\hspace{5pt}} @{\hspace{5pt}} c@{\hspace{5pt}} @{\hspace{5pt}} c @{\hspace{1pt}}} \hline\hline \\ [-2 ex] Hamiltonian & $|X_{max}|/\hbar$ (MHz) & $|Y_{max}|/\hbar$ (MHz) \\ [2ex] \hline $H_0/\hbar$ & 0.1 & 0 \\ [1.5 ex] $H_0^{(1)}/\hbar$ & 0.1 & 49.9 \\ [1.5 ex] $H_0^{(2)}/\hbar$ & 10 & 0 \\ [1.5 ex] $H_0^{(3)}/\hbar$ & 8.4 & 2.8 \\ [1.5 ex] $H_0^{(4)}/\hbar$ & 46.8 & 28.1 \\ [1.5 ex] $H_0^{(5)}/\hbar$ & 56.2 & 62.8 \\ [2 ex] \hline \end{tabular} \end{center} \label{table1} \end{table} For the Landau-Zener model $\Delta(t)$ is linear in time and $\Omega_{R}(t)$ is constant, \begin{eqnarray} \label{AE_model} \Delta_{lz}(t) &=& \alpha (t-t_f/2), \nonumber \\ \Omega_{R, lz}(t) &=& \Omega_{0,lz}, \end{eqnarray} where $\alpha$ is the chirp, and $\Omega_{0,lz}$ is a constant Rabi frequency. Condition (\ref{con00}) can be restated as \begin{equation} \label{cond_2} t_f \gg 2 \left|\frac{\Omega_{0,lz}}{\alpha}\right|. \end{equation} We consider the parameters $\alpha=-20$ MHz$^2$, $\Omega_{0,lz}=0.2$ MHz, and $t_f=0.2$ $\mu$s for which the dynamics with $H_0$ is non-adiabatic, see Appendix A. Fig. \ref{H_lz} shows $X$, $Y$ and $Z$ components of $H_0$ and $H_0^{(j)}$, with $j=1,2,3,4$. In Figs. \ref{H_lz}a and \ref{H_lz}b and in Table \ref{table1} we see that $H_0^{(2)}$ (corresponding to the first superadiabatic iteration) is optimal with respect to applied intensities. Moreover it cancels the $Y$-component completely, which is a simplifying practical advantage in some realizations of the two-level system \cite{Oliver,Sara12}. From the second superadiabatic iteration on, both intensities start to increase again. For the parameters above, condition (\ref{cond_2}) is satisfied since $t_f=20 \times |\Omega_{0,lz}/\alpha|$, but not so condition (\ref{cond_1_0}). Fig. \ref{H_lz} shows the disagreement between $H_0^{(j)}$ and $H_0$, at $t=0$ and $t=t_f$ for $j>1$. Fig. 
\ref{P1_lz} shows that only $H_0^{(1)}= H_0+ H_{cd}^{(0)}$ inverts the population of $|1\rangle$, $P_1(t)$, whereas the rest of the Hamiltonians fail to do so. \begin{figure} \caption{\label{P1_lz} Population of the bare state $|1\rangle$, $P_1(t)$, driven by $H_0$ and by $H_0^{(j)}$, $j=1,2,3,4$.} \end{figure} \section{Discussion} In this paper we have investigated the use of quantum superadiabatic iterations (a non-convergent sequence of nested interaction pictures) to produce shortcuts to adiabaticity. Each superadiabatic iteration may be used in two ways: ({\it i}) to generate a superadiabatic approximation to the dynamics, or ({\it ii}) to generate a counterdiabatic term that, when added to the original Hamiltonian, makes the approximate dynamics exact. The second approach, however, does not automatically generate shortcuts to adiabaticity, namely, a Hamiltonian that produces in a finite time the same final populations as the adiabatic dynamics. The boundary conditions needed for the second approach to generate a shortcut have been spelled out. This work parallels the investigation by Garrido, who established conditions under which approach ({\it i}) provides an adiabatic-like approximation \cite{Garrido}. We have also described an alternative framework to the usual set of superadiabatic equations which offers some computational advantages, and have applied the general formalism to the particular case of a two-level system. A superadiabatic iteration that is optimal with respect to the norm of the counterdiabatic term is not necessarily the best shortcut, or in fact a shortcut at all, because of the possible failure of the boundary conditions. We end by mentioning further questions worth investigating on the superadiabatic framework as a shortcut-to-adiabaticity generator. For example, operations other than the population control of two-level systems (such as transport or expansions of cold atoms) remain to be studied. Unitary transformations may also be applied to simplify the Hamiltonian structure making use of symmetries \cite{Sara12}. They have been discussed before as a way to modify the first (adiabatic) iteration \cite{Berry1990,Sara12}, and applied to perform a fast population inversion of a condensate in the bands of an optical lattice \cite{Oliver}, but a systematic application and study, e.g., of the order with respect to the small (slowness) parameter, in particular for higher superadiabatic iterations, are still pending. A comparison with other methods to generate shortcuts, at both formal and practical levels, would also be useful. A preliminary step in this direction, relating and comparing the invariant-based inverse engineering approach to the counterdiabatic approach of the first (adiabatic) iteration, was presented in \cite{Inv_Berry}; see also Appendix B. Finally, comparisons among superadiabatic iterations themselves remain to be performed, in particular regarding practical aspects such as the transient excitations involved \cite{energy}. We are grateful to O. Morsch and M. Berry for discussions. We acknowledge funding by Projects No. IT472-10, No. FIS2009-12773-C02-01, and the UPV/EHU program UFI 11/55. S. I. acknowledges Basque Government Grant No. BFI09.39. X. C. thanks the National Natural Science Foundation of China (Grant No. 61176118) and Grant No. 12QH1400800.
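As a brief numerical aside before the appendices, the iteration of Eq. (\ref{XYZ}) can be sketched in a few lines of Python for the Landau-Zener parameters above, restricted to the $\dot{\Phi}_j=0$ branch (so $X_{j+1}=0$, $Y_{j+1}=-\hbar\dot{\Theta}_j/2$, $Z_{j+1}=-R_j$). The sketch tracks the Cartesian components of the successive frame Hamiltonians $\tilde{H}_j$, which are not the corrected Hamiltonians $H_0^{(j)}$ of Table \ref{table1}, although the first-frame value $\max|Y_1|\approx 50$ MHz is consistent with the $|Y_{max}|$ value quoted there for $H_0^{(1)}$. The unit conventions ($\hbar=1$, MHz, $\mu$s) and the finite-difference derivative are choices made here, not taken from the text.
\begin{verbatim}
import numpy as np

# Sketch of the iteration in Eq. (XYZ) for the Landau-Zener example, restricted to the
# dot(Phi)_j = 0 branch: X_{j+1} = 0, Y_{j+1} = -hbar*dTheta_j/dt / 2, Z_{j+1} = -R_j.
# Units: hbar = 1, frequencies in MHz, time in microseconds (a convention chosen here).

alpha, Omega0, tf = -20.0, 0.2, 0.2      # chirp (MHz^2), Rabi frequency (MHz), final time (us)
t = np.linspace(0.0, tf, 4001)

# Cartesian components of H_0, using Omega_R = 2X/hbar and Delta = -2Z/hbar (Appendix B)
X = 0.5 * Omega0 * np.ones_like(t)
Y = np.zeros_like(t)
Z = -0.5 * alpha * (t - tf / 2)

# Boundary-condition check, cf. Eq. (cond_2): t_f / |Omega0/alpha| should be large (= 20 here)
print("t_f / |Omega0/alpha| =", tf / abs(Omega0 / alpha))

for j in range(4):
    R = np.sqrt(X**2 + Y**2 + Z**2)
    Theta = np.arccos(np.clip(Z / R, -1.0, 1.0))   # polar angle of (X, Y, Z), Eq. (spherical)
    Ynext = -0.5 * np.gradient(Theta, t)           # Y_{j+1} = -hbar * dTheta_j/dt / 2
    X, Y, Z = np.zeros_like(t), Ynext, -R          # Eq. (XYZ) with epsilon_j = 0
    print(f"frame j={j+1}: max|Y| = {np.abs(Y).max():6.1f} MHz, max|Z| = {np.abs(Z).max():6.2f} MHz")
\end{verbatim}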
\appendix \section{Adiabaticity and boundary conditions for the Landau-Zener protocol} The adiabaticity condition for a two-level atom driven by the Hamiltonian (\ref{H0_2level}) is \cite{Xi_PRL} \begin{equation} \frac{1}{2} |\Omega_a(t)| \ll |\Omega(t)|, \end{equation} where $\Omega_a(t) \equiv [\Omega_R(t) \dot{\Delta}(t)-\dot{\Omega}_R(t) \Delta(t)] / \Omega^2(t)$ and $\Omega = \sqrt{\Delta^2 + \Omega_R^2}$. For the Landau-Zener scheme this condition takes the form \begin{equation} |\alpha| \ll 2 \Omega_{0, lz}^2. \label{adiab_cond} \end{equation} The inequalities that $\alpha$ must satisfy so that the system is adiabatic and also fulfills the boundary condition (\ref{cond_2}) are \begin{equation} 2 |\Omega_{0, lz}|/t_f \ll |\alpha| \ll 2 \Omega_{0, lz}^2. \label{adiab_bc} \end{equation} Fig. \ref{conditions} shows the region (shaded area) for which $\alpha$ satisfies $20 |\Omega_{0, lz}|/t_f < |\alpha| < 0.2 \Omega_{0, lz}^2$ when $t_f=2$ $\mu$s. No such area exists in the depicted domain for $t_f=0.2$ $\mu$s. For this shorter time the critical point, where $1/t_f= \Omega_{0, lz}/100$, corresponds to $\Omega_{0, lz} =500$ MHz and detunings of up to $5$ GHz. Both may be problematic, as very large laser intensities and detunings could excite other transitions. \begin{figure} \caption{\label{conditions} Region (shaded area) in which $\alpha$ satisfies $20 |\Omega_{0, lz}|/t_f < |\alpha| < 0.2\, \Omega_{0, lz}^2$ for $t_f=2$ $\mu$s.} \end{figure} \section{Invariants} The superadiabatic sequence may be pictured as an attempt to find a higher order frame for which a coupling term $K_j$ is zero in the dynamical equation, so that there are no transitions in some basis. This would mean that the states that the system follows exactly have been found, in other words, the eigenvectors of a dynamical invariant $I(t)$ \cite{Lewis_Riesenfeld, Lewis_Leach, Inv_Berry}. When counter-diabatic terms are added, it is easy to construct invariants for $H_0^{(j)}$ from the instantaneous eigenstates of $H_0(t)$. However, quite generally this is not enough to generate a shortcut to adiabaticity because the boundary conditions to perform a quasi-adiabatic process (one that ends up with the same populations as the adiabatic one) may not be satisfied. A way out is to design the invariant first, and then $H(t)$ from it, satisfying the boundary conditions $[I(t),H(t)]=0$ at $t=0$ and $t=t_f$, and such that $H(0)=H_0(0)$ and $H(t_f)=H_0(t_f)$ \cite{Inv_Berry,review}. \begin{figure} \caption{\label{Omega_Delta} $\Omega_R(t)$ and $\Delta(t)$ for the invariant-based protocol constructed from the polynomial ansatz, with $t_f=0.2$ $\mu$s.} \end{figure} \begin{figure} \caption{\label{Omega1_Delta1} $X_0(t)$ and $Z_0(t)$ for the Landau-Zener protocol with $\Omega_{0,lz}=30$ MHz, together with the $Y(t)$ component of $H_0^{(1)}$ and the $X(t)$ and $Z(t)$ components of $H_0^{(2)}$, for $t_f=0.2$ $\mu$s.} \end{figure} \begin{figure} \caption{\label{p1_p2} Bare-state populations for the protocols of Fig. \ref{Omega1_Delta1}.} \end{figure} For the general Hamiltonian in Eq. (\ref{H0_2level}), a dynamical invariant of the corresponding Schr\"odinger equation may be parameterized as \cite{Inv_Berry} \begin{equation} \label{I} I (t)= \frac{\hbar}{2} \nu \left(\begin{array}{cc} \cos{\gamma(t)} & \sin{\gamma(t)} e^{ i \beta(t)} \\ \sin{\gamma(t)} e^{-i \beta(t)} & -\cos{\gamma(t)} \end{array}\right), \end{equation} where $\nu$ is an arbitrary constant with units of frequency to keep $I(t)$ with dimensions of energy. From the invariance condition for $I$, \begin{equation} \frac{dI (t)}{dt} \equiv \frac{\partial{I}(t)}{\partial{t}} - {\frac{i}{\hbar}} [I(t),H_0(t)] = 0, \end{equation} the functions $\gamma(t)$ and $\beta(t)$ must satisfy the differential equations \begin{eqnarray} \dot{\gamma} &=& \Omega_{R}\sin\beta, \nonumber\\ \dot{\beta} &=& \Delta + \Omega_{R}\cos\beta \cot\gamma.
\label{schrpure} \end{eqnarray} To achieve a population inversion, the boundary values for $\gamma$ should be $\gamma(0)=\pi$ and $\gamma(t_f)=0$. Assuming a polynomial ansatz \cite{shortcut_harmonic_traps,transport,Inv_Berry} for $\gamma(t)$ and $\beta(t)$, as $\gamma(t)=\sum_{n=0}^3 a_n t^n$ with the boundary conditions $\gamma(0)=\pi$, $\gamma(t_f)= \dot{\gamma}(0)= \dot{\gamma}(t_f)=0$, and $\beta(t)=\sum_{n=0}^4 b_n t^n$ with the boundary conditions $\beta(0)= \beta(t_f/2)= \beta(t_f)= -\pi/2$, $\dot{\beta}(t_f)=-\pi/(2t_f)$, and $\dot{\beta}(0)=\pi/(2t_f)$, we can construct $\Delta$ and $\Omega_R$ \cite{Inv_Berry}. These two functions are shown in Fig. \ref{Omega_Delta}, for $t_f=0.2$ $\mu$s ($\Omega_R=2X/\hbar$ and $\Delta=-2Z/\hbar$). For the same process time $t_f$ we also plot in Fig. \ref{Omega1_Delta1} $X_{0}(t)$ and $Z_{0}(t)$ for a Landau-Zener protocol in which the Rabi frequency is slightly larger than the maximum required for the invariant-based protocol: $\Omega_{0,lz}=30$ MHz. As explained in the previous appendix, an unreasonably high laser intensity would be required to make it adiabatic while satisfying the bare-state condition at the edges, and $\Omega_{0,lz}=30$ MHz is still too small to satisfy Eq. (\ref{adiab_bc}). This is evident in the failure to invert the population, see Fig. \ref{p1_p2}. We use $\alpha=-2800$ MHz$^2$ to have the bare states as eigenvectors at the time edges, which implies a rather large detuning. Fig. \ref{Omega1_Delta1} also depicts the $Y(t)$ component of $H_0^{(1)}$ and the $X(t)$ and $Z(t)$ components of $H_0^{(2)}$ for $t_f=0.2$ $\mu$s. With these parameters these Hamiltonians provide shortcuts to adiabaticity, see Fig. \ref{p1_p2}, but they use very high detunings compared to those of the invariant-based protocol. This example does not mean, however, that invariant-based engineering is systematically more efficient. Invariant-based engineering and the counterdiabatic approach provide families of protocols that depend on the chosen interpolating auxiliary functions in the first case and on the reference Hamiltonian $H_0$ in the second. Their potential equivalence was studied in \cite{Inv_Berry}. \end{document}
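For completeness, the invariant-based construction of Appendix B can be reproduced numerically: solve for the cubic $\gamma(t)$ and quartic $\beta(t)$ satisfying the boundary conditions listed above, then invert Eq. (\ref{schrpure}) to obtain $\Omega_R=\dot{\gamma}/\sin\beta$ and $\Delta=\dot{\beta}-\Omega_R\cos\beta\cot\gamma$. A minimal Python sketch follows; evaluating on interior points to sidestep the $\cot\gamma$ endpoint singularity (which is cancelled analytically by $\cos\beta=0$ there) is a numerical convenience introduced here.
\begin{verbatim}
import numpy as np

tf = 0.2  # microseconds; frequencies then come out in MHz (a unit choice made here)

# Cubic gamma(t): gamma(0)=pi, gamma(tf)=0, dgamma(0)=dgamma(tf)=0 (boundary conditions above)
A = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [1, tf, tf**2, tf**3],
              [0, 1, 2*tf, 3*tf**2]], dtype=float)
a = np.linalg.solve(A, [np.pi, 0.0, 0.0, 0.0])

# Quartic beta(t): beta(0)=beta(tf/2)=beta(tf)=-pi/2, dbeta(0)=pi/(2 tf), dbeta(tf)=-pi/(2 tf)
B = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [1, tf/2, (tf/2)**2, (tf/2)**3, (tf/2)**4],
              [1, tf, tf**2, tf**3, tf**4],
              [0, 1, 2*tf, 3*tf**2, 4*tf**3]], dtype=float)
b = np.linalg.solve(B, [-np.pi/2, np.pi/(2*tf), -np.pi/2, -np.pi/2, -np.pi/(2*tf)])

poly  = lambda c, t: sum(cn * t**n for n, cn in enumerate(c))
dpoly = lambda c, t: sum(n * cn * t**(n - 1) for n, cn in enumerate(c) if n > 0)

# Interior points only: cot(gamma) diverges at t = 0, tf, where cos(beta) = 0 keeps the product finite
t = np.linspace(1e-4 * tf, (1 - 1e-4) * tf, 2001)
gamma, beta   = poly(a, t), poly(b, t)
dgamma, dbeta = dpoly(a, t), dpoly(b, t)

Omega_R = dgamma / np.sin(beta)                            # from dgamma = Omega_R sin(beta)
Delta   = dbeta - Omega_R * np.cos(beta) / np.tan(gamma)   # from dbeta = Delta + Omega_R cos(beta) cot(gamma)

print(f"max |Omega_R| = {np.abs(Omega_R).max():.2f} MHz, max |Delta| = {np.abs(Delta).max():.2f} MHz")
\end{verbatim}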
math
40,469
\begin{document} \title{Measuring the Capabilities of Quantum Computers} \author{Timothy Proctor\textsuperscript{*}} \author{Kenneth Rudinger} \author{Kevin Young} \author{Erik Nielsen} \author{Robin Blume-Kohout} \affiliation{Quantum Performance Laboratory, Sandia National Laboratories, Albuquerque, NM 87185, USA and Livermore, CA 94550, USA} \date{\today} \begin{abstract} { A quantum computer has now solved a specialized problem believed to be intractable for supercomputers, suggesting that quantum processors may soon outperform supercomputers on scientifically important problems. But flaws in each quantum processor limit its capability by causing errors in quantum programs, and it is currently difficult to predict what programs a particular processor can successfully run. We introduce techniques that can efficiently test the capabilities of any programmable quantum computer, and we apply them to twelve processors. Our experiments show that current hardware suffers complex errors that cause structured programs to fail up to an order of magnitude earlier --- as measured by program size --- than disordered ones. As a result, standard error metrics inferred from random disordered program behavior do not accurately predict performance of useful programs. Our methods provide efficient, reliable, and scalable benchmarks that can be targeted to predict quantum computer performance on real-world problems. } \end{abstract} \maketitle \noindent Quantum processors are on the verge of realizing their promise to revolutionize computing. A quantum processor has now executed programs believed to defy classical simulation \cite{arute2019quantum}, and many hybrid quantum/classical algorithms have appeared that offer the possibility of near-term computational advantage \cite{preskill2018quantum}. Publicly available quantum processors continue to proliferate, and with them a widespread interest in running application-inspired quantum programs. But contemporary quantum processors are plagued by errors that will cause many of these programs to fail. Existing tools for characterization and benchmarking \cite{arute2019quantum, neill2018blueprint, boixo2018characterizing, cross2018validating, magesan2011scalable, emerson2007symmetrized, emerson2005scalable, proctor2018direct, linke2017experimental, harrow2017quantum, wright2019benchmarking, erhard2019characterizing, flammia2019efficient} probe the magnitude and type of these errors. But none of them provide direct insight into a processor's capability --- the programs it can run successfully --- and most are not practical on devices that are large enough to potentially demonstrate a quantum advantage. In this work we introduce the first scalable benchmark that is able to efficiently probe and summarize the capability of any gate-model quantum computer, and we present the first systematic study of the capabilities of publicly accessible quantum processors. The errors suffered by multi-qubit quantum processors are complex and varied, often including effects such as crosstalk \cite{gambetta2012characterization}, coherent noise \cite{huang2019performance, kueng2015comparing, murphy2019controlling}, and drift \cite{mavadia2017experimental, proctor2019detecting}. Simple models for device performance that ignore this complexity offer inaccurate predictions, while complex models are generally intractable to learn or computationally taxing to use. 
Instead, we argue that the capability of a processor is best probed by running a set of representative test quantum programs whose measured output can be verified classically. \begin{figure*} \caption{{\bf A scalable method for benchmarking a quantum computer's capability.}} \label{fig:1} \end{figure*} While several benchmarks have been proposed, few are efficiently verifiable. IBM's quantum volume benchmark \cite{cross2018validating}, like many application-derived benchmarks \cite{linke2017experimental, harrow2017quantum, wright2019benchmarking}, becomes infeasible to verify by classical simulation as the number of qubits grows. Google leveraged the extreme difficulty of verifying the results of their cross-entropy benchmarking circuits to demonstrate ``quantum supremacy'' \cite{arute2019quantum, neill2018blueprint, boixo2018characterizing}. Other benchmarks present different problems. For example, Clifford randomized benchmarking \cite{magesan2011scalable,emerson2007symmetrized,emerson2005scalable} uses a class of programs that, while efficiently verifiable, require so many gates when compiled on more than 3-5 qubits that they almost never run correctly on today's processors \cite{proctor2018direct}. Moreover, all of these benchmarks rely strictly on randomized, disordered programs. This limits their sensitivity to coherent noise \cite{kueng2015comparing}, and so they are unlikely to reflect the performance of structured programs that implement quantum algorithms. We solve all of these problems by introducing a family of benchmarks that can probe the capability of any gate-model quantum processor --- including large ones capable of quantum advantage. To build these benchmarks, we begin with quantum circuits of varied sizes and structures that constitute challenging tasks for a processor. Then we apply a procedure called “mirroring” \footnote{Our circuit mirroring technique is introduced in detail in Appendix~\ref{sec:mirroring}.} that transforms any circuit $C$ into a related suite $\{M_C\}$ of “mirror circuits” that are efficiently verifiable (see Fig.~\ref{fig:1}a). Mirroring concatenates the original circuit $C$ with a quasi-inverse $\tilde{C}^{-1}$ that reverses $C$ up to a Pauli operation, and inserts special layers of operations before, after, and between $C$ and $\tilde{C}^{-1} $. Quasi-inversion, inspired by the Loschmidt echo \cite{loschmidt1876uber} and early work on randomized benchmarking \cite{emerson2005scalable,emerson2007symmetrized}, ensures that each $M_C$ has a definite and easily verified target output, while the extra layers preserve the original circuit’s sensitivity to errors so that performance on $\{M_C\}$ faithfully represents performance on $C$. Unlike test circuits that yield high-entropy target distributions \cite{boixo2018characterizing,cross2018validating}, a mirror circuit’s performance is easily quantified by the probability $S$ of observing the ideal outcome. Mirror circuit benchmarks measure --- and inform prospective programmers about --- a processor’s capability to run specific programs (quantum circuits), rather than its ability to produce specific distributions \cite{arute2019quantum} or unitary transformations \cite{cross2018validating}. The properties probed by such a benchmark are determined by the properties of the circuits in it. Mirror circuits can be efficiently constructed from circuits involving any number of qubits (circuit width, $w$) and logical cycles (circuit depth, $d$).
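A toy sketch of the basic idea behind mirroring is given below (it is not the full construction of Appendix~\ref{sec:mirroring}, which also randomizes the inserted layers): concatenate a circuit with a central Pauli layer and the layer-by-layer inverse, then read off the deterministic target bit string by ideal simulation. The two-qubit example circuit and the gate choices are illustrative only and are not taken from the paper.
\begin{verbatim}
import numpy as np
from functools import reduce

# Single-qubit Clifford gates and CNOT as unitaries (example gate set; illustrative only).
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j]); CNOT = np.eye(4)[[0, 1, 3, 2]]

def kron_all(ops):
    return reduce(np.kron, ops)

# A small width-2 base circuit C, stored as a list of layers (each layer = one 4x4 unitary).
C = [kron_all([H, S]), CNOT, kron_all([S, H])]

# Toy mirror circuit: C, then a central Pauli layer, then the inverses of C's layers in reverse order.
central_pauli = kron_all([X, I2])
M = C + [central_pauli] + [L.conj().T for L in reversed(C)]

# Ideal simulation from |00>: the composite unitary is the central Pauli conjugated by C, which is
# itself a Pauli (up to phase) because C is a Clifford circuit, so the outcome is deterministic.
psi = np.zeros(4); psi[0] = 1.0
for layer in M:
    psi = layer @ psi
probs = np.abs(psi) ** 2
target = format(int(np.argmax(probs)), "02b")
print("output distribution:", np.round(probs, 6), "-> target bit string:", target)
\end{verbatim}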
They can have any structure, enabling construction of benchmarks that serve as proxies for any quantum program. We built benchmarks from disordered (Fig.~\ref{fig:1}b) and highly structured (Fig.~\ref{fig:1}c) circuits, using gates that respect each processor’s connectivity, to probe different aspects of performance. We ran randomized mirror circuit benchmarks \footnote{A complete definition of randomized mirror circuits and the sampling distributions used in this experiment are given in Appendix~\ref{sec:rmcs}. Experimental details are given in Appendix~\ref{sec:b1}.} on twelve publicly accessible quantum computers from IBM \cite{ibmq2} and Rigetti Computing \cite{rigetti-qcs}. Their measured capabilities are displayed in Fig.~\ref{fig:1}d, using the framework of volumetric benchmarking \cite{blume2019volumetric}. We probed each device at exponentially spaced ranges of circuit widths $w$ and benchmark depths $d$ \footnote{All circuits within a benchmark share a fixed $O(1)$ number of overhead layers; a mirror circuit’s benchmark depth counts only its non-overhead layers, and it equals twice the depth of the original circuit. This is explained further in the appendices.}, and for each width $w$ we tested several different embeddings of $w$ qubits into the available physical qubits. For each $d$, $w$, and embedding we ran 40 randomized mirror circuits. For each shape $(w,d)$, Fig.~\ref{fig:1}d shows the best, worst, and average case polarization $P = (S - \nicefrac{1}{2^w})/(1- \nicefrac{1}{2^w})$ for the best-performing $w$-qubit embedding. The polarization $P$ is a rescaling of the success probability $S$ that corrects for few-qubit effects. For example, $S =\nicefrac{1}{2}$ is reasonably good performance for a width-10 circuit ($P\simeq\nicefrac{1}{2}$) but represents total failure for a width-1 circuit ($P=0$). The volumetric benchmark plots \cite{blume2019volumetric} displayed in Fig.~\ref{fig:1}d provide an at-a-glance summary of these devices' capabilities to run random disordered circuits. They also encode considerable detail about the nature of the errors that limit capability. The mean polarization at each shape indicates the expected performance of a random circuit of that shape, and it is closely related to the fidelity of the logic gates (a standard measure of gate quality). The maximum and minimum polarization, $P_{\max}$ and $P_{\min}$, provide estimates of best- and worst-case capability, and their difference captures the variability --- how reliably do width and depth predict whether a random circuit will succeed? A large difference implies that whether a circuit can be successfully run on that processor depends not only on the circuit's shape, but also on its exact arrangement of gates, i.e., its structure. Our experiments reveal that certain processors' performance is strongly structure-dependent (e.g., Aspen 6) whereas other processors' performance is nearly structure-invariant (e.g., Vigo). This is highlighted by comparing the dotted lines in Fig.~\ref{fig:1}d that show the frontiers beyond which $P_{\mathrm{min}}$ (red) and $P_{\mathrm{max}}$ (green) fall below $1/e$ \footnote{The $\nicefrac{1}{e}$ threshold is arbitrary but convenient; under a simple, naive error model where each gate fails with probability $\epsilon \ll 1$, the frontier will include all circuits of size $\leq \nicefrac{1}{\epsilon}$. The maximum and minimum frontiers are calculated so that any discrepancy between them is statistically significant at $p=0.05$.
The details of the statistical analysis are given in Appendix~\ref{sec:b1}.}. When a processor's performance is strongly structure-dependent, standard metrics derived from the average performance of random circuits \cite{boixo2018characterizing, cross2018validating, magesan2011scalable} will not reliably predict whether it can successfully run any particular randomly sampled circuit. The success probability of a quantum circuit is dictated by a complex interplay between the structure present in that circuit and the structure of the errors. If errors are completely structureless (i.e., global depolarization), all circuits of a given shape will have the same success probability. But structureless errors are rare in quantum hardware. Error rates vary across qubits and noise is often correlated in time or space. Our results for randomized circuits provide a glimpse of this interplay. But random circuits are inefficient probes of structured errors \cite{kueng2015comparing}, because a typical randomized mirror circuit is almost completely disordered in space and time (Fig.~\ref{fig:1}b). To study the effects of structure, we can incorporate explicit long-range order, such as periodic arrangements of gates, into mirror circuits (Fig.~\ref{fig:1}c). Periodic mirror circuits can be extremely sensitive to structured errors, supporting linear growth of coherent errors \cite{blume2016certifying} just as ordered lattice systems support ballistic transport of excitations~\cite{kohn1957quantum}. To investigate the interplay between circuit and error structures in real hardware, we benchmarked eight quantum processors using mirror circuits both with and without long-range order. We used periodic mirror circuits constructed by repeating a short unit cell of circuit layers (Fig.~\ref{fig:1}c) selected so that every circuit with $w >1$ had a two-qubit gate density of $\xi \approx \nicefrac{1}{8}$. Concurrently, we ran similar but randomized mirror circuits, sampled so that $\xi=\nicefrac{1}{8}$ in expectation. All circuits have $\xi \leq \nicefrac{1}{2}$, and deviations from $\xi = \nicefrac{1}{8}$ are small circuit-size effects. We sampled and ran 40 circuits of each type at a range of widths and depths, using the best qubits according to the manufacturer's published error rates \footnote{A complete definition of randomized mirror circuits, periodic mirror circuits, and the sampling distributions used in this experiment are given in Appendices~\ref{sec:rmcs} and~\ref{sec:pmcs}. Experimental details are given in Appendix~\ref{sec:b2}.}. Results for four representative devices \footnote{The results for all eight processors are included in Appendix~\ref{sec:b2}.} are summarized in Fig.~\ref{fig:2}. \begin{figure} \caption{{\bf Randomized benchmarks do not predict structured circuit performance.}} \label{fig:2} \end{figure} We found that the worst-case performance ($P_{\mathrm{min}}$) of periodic circuits was worse than that of disordered circuits for every processor, as shown in Fig.~\ref{fig:2}a. For some processors, the difference is dramatic --- e.g., Aspen 4 ran every width-1 disordered circuit up to depth $128$ successfully ($P \geq \nicefrac{1}{e}$), but failed on periodic circuits of depth 32. We conclude that testing a processor with disordered circuits cannot reliably predict whether that processor will be capable of running circuits with long-range order.
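The polarization $P = (S - \nicefrac{1}{2^w})/(1- \nicefrac{1}{2^w})$ and the $\nicefrac{1}{e}$ frontier test used in Fig.~\ref{fig:1}d are straightforward to compute from per-circuit success probabilities. A minimal Python sketch is shown below; the numbers in it are invented toy inputs standing in for measured success probabilities, not experimental data.
\begin{verbatim}
import numpy as np

def polarization(S, w):
    """P = (S - 1/2^w) / (1 - 1/2^w): success probability rescaled to correct for few-qubit effects."""
    return (S - 2.0**-w) / (1.0 - 2.0**-w)

# Toy results: {(width w, benchmark depth d): success probabilities of the test circuits at that shape}
results = {
    (1, 4): [0.99, 0.98, 0.97], (1, 16): [0.93, 0.90, 0.85],
    (2, 4): [0.90, 0.75, 0.70], (2, 16): [0.65, 0.45, 0.30],
    (4, 4): [0.70, 0.40, 0.20], (4, 16): [0.35, 0.15, 0.08],
}

threshold = 1.0 / np.e   # the 1/e success threshold used for the frontiers in Fig. 1d
for (w, d), S_list in sorted(results.items()):
    P = polarization(np.array(S_list), w)
    print(f"w={w}, d={d:2d}: Pmin={P.min():5.2f} Pmean={P.mean():5.2f} Pmax={P.max():5.2f}"
          f" | worst case above 1/e: {P.min() >= threshold}, best case above 1/e: {P.max() >= threshold}")
\end{verbatim}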
Since circuits for quantum algorithms typically have long-range order, benchmarks like periodic mirror circuits are needed to predict the performance of algorithmic circuits \footnote{The structure in algorithm circuits can be reduced using a variety of randomization techniques \cite{wallman2015noise,campbell2017shorter}, but it is not possible to remove all structure in algorithm circuits.}. We used our benchmarks to investigate one final question: can conventional error rates be used to predict a processor's capability? IBM and Rigetti publish an error rate for each logic operation (gates and readouts) in each device, updated every day after recalibration \cite{ibmq2,rigetti-qcs}. The presumption that these error rates can be used to accurately predict circuit success probabilities is the grounds for interpreting them as a measure of device quality. We recorded these error rates at the time of our experiments, and used them to predict the success probability for every circuit that we ran \footnote{The method used to calculate these predictions is given in Appendix~\ref{sec:predictions}.}. Fig.~\ref{fig:2}c shows a scatter plot comparing this prediction to experimental observations. In every case, the observed failure rates are dispersed widely around the prediction. This confirms the presence of unmodeled structure in the errors. The predictions are also biased towards over-optimism, suggesting the existence of significant error sources that are not captured by the error rates. Comparing Figs.~\ref{fig:2}a and \ref{fig:2}b shows that, for every device, the observed worst-case performance is significantly worse than the performance predicted using published error rates. However, those error rates do not appear to be wrong --- they correctly predict the average performance of one- and two-qubit disordered circuits in most cases. Instead, we conclude that these discrepancies stem from unmodeled structure. Structured errors affect structured and disordered circuits differently, and this cannot be captured by simple error rates. The discrepancy between our observations and the predictions of the error rates reveals the types of structure present in the errors. All tested processors display performance that declines faster with circuit width than the error rates predict. This is a signature of crosstalk \cite{blume2019volumetric}. Similarly, the worst-case success rate of periodic circuits decays faster with depth than predicted, and than observed for disordered circuits. This is a signature of coherent errors \cite{blume2019volumetric,blume2016certifying}. Mirror circuits with configurable structure are a simple tool for measuring the impact of these errors in large circuits like those needed for algorithms, so that they can be quantified and suppressed (e.g., with better calibrations) as necessary. \begin{figure} \caption{{\bf Empirical capability regions.}} \label{fig:3} \end{figure} We have shown how to use mirror circuit benchmarks for detailed analysis of quantum processors' performance. But our original goal was to capture performance in a simple and intuitive way. So, in Fig.~\ref{fig:3}, we concisely summarize the performance of all eight devices tested with both kinds of mirror circuits, by dividing the circuit width $\times$ depth plane into ``success'', ``indeterminate'', and ``fail'' regions. They correspond to the circuit shapes at which (respectively) all, some, and none of the test circuits succeeded ($P \geq \nicefrac{1}{e}$).
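The success/indeterminate/fail classification just described is simple to express in code; the sketch below does so for a handful of invented per-shape polarizations (the full experimental analysis is described in the appendices, and the numbers here are illustrative only).
\begin{verbatim}
import numpy as np

def capability_region(polarizations, threshold=1 / np.e):
    """Classify a circuit shape from the polarizations of all test circuits at that shape:
    'success' if all succeeded, 'fail' if none did, 'indeterminate' otherwise."""
    P = np.asarray(polarizations)
    if (P >= threshold).all():
        return "success"
    if (P < threshold).all():
        return "fail"
    return "indeterminate"

# Toy per-shape polarizations, pooling periodic and randomized test circuits (invented numbers)
shapes = {
    (1, 4): [0.97, 0.95, 0.92], (2, 16): [0.62, 0.41, 0.13],
    (4, 16): [0.44, 0.28, 0.09], (8, 64): [0.12, 0.05, 0.01],
}
for shape, P in sorted(shapes.items()):
    print(f"shape (w, d) = {shape}: {capability_region(P)}")
\end{verbatim}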
These empirical capability regions allow potential users to predict what circuits a processor is likely to be capable of running. A circuit whose shape falls into a processor's ``success'' or ``fail'' regions is likely to succeed or fail (respectively), because the test circuits probe both extremes of performance by including a variety of disordered and periodic circuits at each circuit shape. Conversely, a processor's ability to successfully run a specific circuit whose shape falls within its ``indeterminate'' region depends unavoidably on that circuit's structure. Capability regions depend on two-qubit gate density ($\xi \approx \nicefrac{1}{8}$ in Fig.~\ref{fig:3}) and the threshold for success ($\nicefrac{1}{e}$ in Fig.~\ref{fig:3}), and can be easily adapted to particular applications by setting these parameters. Quantum computational power is a double-edged sword. The infeasibility of simulating quantum processors with 50+ qubits offers the possibility of computational speedups \cite{arute2019quantum, preskill2018quantum}, but simultaneously poses real problems for testing and assessing their capability. As processors grow, users and computer engineers will need scalable, efficient and flexible benchmarks that can measure and communicate device capabilities. Mirror circuit benchmarks demonstrate that this is possible, and highlight the scientific value of carefully designed benchmarks. *[email protected] \begin{small} \noindent \textbf{Acknowledgements:} This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research through the Quantum Testbed Program and the Accelerated Research in Quantum Computing (ARQC) program, and the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia National Laboratories is a multi-program laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of the U.S. Department of Energy, or the U.S. Government, or the views of IBM, or Rigetti Computing. We thank both the IBM Q and Rigetti Computing teams for extensive technical support, in particular Amy Brown, Jerry Chow, Jay Gambetta, Sebastian Hassinger, Ali Javadi, Francisco Jose Martin Fernandez, Peter Karalekas, Ryan Karle, Douglas McClure, David McKay, Paul Nation, Nicholas Ochem, Chris Osborn, Eric Peterson, Diego Moreda Rodriguez, Mark Skilbeck, Maddy Tod, and Chris Wood. \end{small} \section{Overview of the appendices}\label{sec:overview} In the main text, we used \emph{mirror circuit benchmarks} to probe quantum computers' capabilities. In these appendices: \begin{enumerate} \item We explain why mirror circuits constitute a \emph{good} benchmark. \item We detail our experiments and data analysis. \end{enumerate} In this overview we explain what \emph{kind} of benchmark we seek to construct, we list desiderata for such a benchmark, and we provide a guide for the remainder of these appendices. \subsection{The kind of benchmark we constructed} Benchmarking a device means commanding it to perform a set of tasks, and measuring its performance on them. The measured performance should be \emph{meaningful}. 
Prospective users should be able to extrapolate straightforwardly, from benchmark results, approximately how well the device would perform on \emph{their} use cases. But devices can be used in different ways, and for different tasks. Distinct use cases require distinct benchmarks. For example, a quantum computer can be commanded to (1) run a particular circuit, (2) apply a particular unitary, or (3) generate samples from a particular distribution. These task classes are categorically distinct, but each has real-world relevance. Google, in their demonstration of quantum supremacy \cite{arute2019quantum}, benchmarked their Sycamore chip (against a supercomputer) by its performance at sampling a \emph{distribution}. IBM's quantum volume benchmark \cite{cross2018validating} challenges quantum processors to perform specific \emph{unitaries}, and cautions that it's cheating to sample from the resultant distribution without performing the specified unitary. Randomized benchmarking \cite{knill2008randomized, magesan2011scalable, proctor2018direct} commands a processor to run specific \emph{circuits}, each one of which produces a trivial unitary and a trivial distribution. These illustrate three different ways that a quantum processor's task can be defined. Here, we have adopted the third approach. We benchmark processors by specifying concrete \emph{circuits}, not unitaries or distributions. Therefore, these benchmarks measure a processor's ability to run circuits. Their results should enable users to predict how well that processor will run \emph{other} circuits with similar properties. Relative to the other paradigms mentioned above, this approach emphasizes the reliability of the processor's gates. Our paradigm isolates that aspect of performance from other properties, like qubit connectivity, gate set expressiveness, or the performance of a processor's classical compilation software. Such benchmarks are and will be particularly useful to low-level quantum programmers who express their programs or algorithms as concrete circuits made of native gates, and then wish to predict how large a circuit can be run. Benchmarks rooted in the other paradigms mentioned above are complementary, emphasizing other aspects of performance. No single benchmark or paradigm is sufficient to capture all use cases. \subsection{Desiderata for benchmark circuits}\label{sec:desiderata} The specific benchmarks we use in the main text are particular cases produced by a general process. This process is designed to generate a set of circuits suitable for benchmarking from one or more exemplar circuits $C$ that represent a particular use case. A good question to ask is ``If $C$ is a representative circuit, why not simply run $C$ itself as a benchmark?'' Doing this presents two problems. First, since the point of a benchmark is to measure performance, we \emph{must} be able to evaluate how well or accurately a given processor has run our benchmark circuits. For many interesting and representative circuits, this is or will be impractical because good quantum algorithms can generate results that aren't classically simulable, and/or solve problems outside of NP (i.e., the result is not efficiently verifiable). Second, many circuits $C$ are intrinsically \emph{subroutines}, whose performance we wish to predict in contexts (i.e., within larger programs) that are \emph{a priori} unknown or only partially known. 
A benchmark needs to run $C$ in context --- at a minimum, after state initialization and before measurement of all the qubits --- and a good benchmark must place it in \emph{representative} contexts, so that users can infer or predict how it is likely to perform in the specific context of \emph{their} use cases. Even when $C$ is not a subroutine, but a full algorithm that defines its own context, the transformations required to make it easy-to-verify (solving the first problem above) can change that context, requiring additional work to ensure that $C$'s performance is probed in contexts that are representative of its original function. To solve these problems, we need a process that transforms a user-specified circuit $C$ into a set of circuits or \emph{test suite} $\mathbb{S}(C)$, that can be run exhaustively or sampled from, and which satisfies the following key desiderata: \begin{enumerate} \item Even if $C$ is a subroutine that needs to be embedded into a larger circuit, every circuit in $\mathbb{S}(C)$ has a fully specified context including state initialization and measurement. \item Each circuit in $\mathbb{S}(C)$ has a well-defined and easy to simulate \emph{target output}, which it would produce if implemented without errors, so that the performance of an imperfect implementation can be measured straightforwardly. \item The success probabilities of the circuits in $\mathbb{S}(C)$ are \emph{representative} of how $C$ would perform in the context[s] where it might be used (which may be unknown). \end{enumerate} \subsection{Mirror circuit benchmarks} We have developed a set of circuit transformations, collectively called \emph{mirroring}, that generate a set of benchmarking circuits from a user-specified circuit $C$, and that can be used to satisfy the above desiderata. These transformations generate \emph{mirror circuit benchmarks}. In our experiments we ran two particular types of mirror circuit benchmark: \emph{randomized mirror circuits} and \emph{periodic mirror circuits}. Appendices~\ref{sec:defs}-\ref{sec:pmcs} are dedicated to introducing these benchmarking methods: \begin{itemize} \item In Appendix~\ref{sec:defs} we introduce our notation and definitions, and review the background material required to present both our benchmarking methods and the theory supporting them. \item In Appendix~\ref{sec:layer-set} we discuss the relative merits of defining benchmarking circuits over a standardized gate set versus over a gate set that is native to a particular processor, and we specify the approach that we take in our experiments. \item In Appendix~\ref{sec:mirroring} we introduce our circuit mirroring transformations, and show how and why they satisfy the above desiderata. \item In Appendix~\ref{sec:rmcs} we introduce randomized mirror circuits, and the particular types of randomized mirror circuits used in our experiments. \item In Appendix~\ref{sec:pmcs} we introduce periodic mirror circuits, and the particular type of periodic mirror circuits used in our experiments. \end{itemize} Although Appendices~\ref{sec:defs}-\ref{sec:pmcs} discuss certain aspects of our experiments, they are primarily focused on describing our benchmarking methods in a general way that is applicable to almost any quantum computer.
The final three Appendices focus on our particular experiments and the corresponding data analysis: \begin{itemize} \item In Appendix~\ref{sec:predictions} we explain how we used each processor's published error rates to predict the success probabilities of the mirror circuits that we ran. \item In Appendix~\ref{sec:b1} we detail the randomized mirror circuit experiment, and the corresponding data analysis, that is summarized in Fig.~1d of the main text. We will refer to this as \emph{experiment \#1} throughout these appendices \item In Appendix~\ref{sec:b2} we detail the randomized and periodic mirror circuits experiment, and the corresponding data analysis, that is summarized in Figs.~2 and 3 of the main text. We will refer to this as \emph{experiment \#2} throughout these appendices. \end{itemize} It is \emph{not} necessary to read these appendices in chronological order. Each appendix has been written to be as self contained as possible. \section{Definitions}\label{sec:defs} The purpose of this appendix is to define our notation and review the background material required throughout these appendices. \subsection{Quantum circuits} We use quantum circuits extensively in this paper, to define tasks and programs for quantum computers. Quantum circuits have been used so ubiquitously in the literature, for so many purposes, that it is difficult to define them in a simple yet universally valid way. Broadly speaking, a quantum circuit \emph{describes} a (possibly complex) operation to be performed on a quantum computer, by specifying an arrangement of ``elementary'' operations (e.g., logic gates or subroutines) in sequence or in parallel, which if performed on the quantum computer will transform its state in a particular way. All the circuits that we consider in this paper can be represented, and implemented, as a series of \emph{layers}. \subsubsection{Logic layers and unitaries}\label{sec:layers} A $w$-qubit logic layer is an \emph{instruction} to apply physical operations that implement a particular unitary evolution on $w$ qubits. We denote the unitary corresponding to $L$ by $U(L) \in \text{SU}(2^w)$. Here $\text{SU}(2^w)$ denotes the $2^w$-dimensional special unitary group represented as matrices acting on the $2^w$-dimensional complex vector space $\mathcal{H}_{w}$ of pure $w$-qubit quantum states. It will also often be convenient to use the superoperator representation of a unitary, so we define $\mathcal{U}(L)$ to be the linear map \begin{equation} \mathcal{U}(L)[\rho] := U(L) \rho U(L)^{\dagger}, \end{equation} where $\rho$ is a $w$-qubit density operator (a unit-trace positive semi-definite operator on $\mathcal{H}_{w}$), representing a general $w$-qubit quantum state. We consider a logic layer $L$ to be entirely defined by the unitary $U(L)$, so --- by definition --- there is only one logic layer corresponding to each unitary. There will usually be many ways to implement a particular layer. Our methods are entirely agnostic as to how a layer is implemented, except that an attempt must be made to faithfully implement the unitary it defines. We use $L^{-1}$ to denote the logic layer satisfying \begin{equation} U(L^{-1}) = U(L)^{-1}. 
\end{equation} There are two additional, special layers that can appear in our quantum circuits: an \emph{initialization} or state preparation layer $I$ that initializes all qubits in the $\ket{0}$ state, and a \emph{readout} or measurement layer $R$ that reads out all qubits in the computational basis, producing a classical bit string and terminating the circuit. Initialization can only appear as the first layer in a circuit, and readout can only appear as the last layer. These layers are not unitary, and $U(\cdot)$ is not defined for them. \subsubsection{Quantum circuits}\label{sec:circuits} A quantum circuit $C$ over a $w$-qubit logic layer set $\mathbb{L}_w$ is a sequence of $d \geq 0$ logic layers that are all elements from $\mathbb{L}_w$. We will write this as \begin{equation} C = L_d L_{d-1} \cdots L_2 L_1, \end{equation} where each $L_i \in \mathbb{L}_w$, and we use a convention where the circuit is read from right to left. The circuit $C$ is an instruction to applying its constituent logic layers, $L_1$, $L_2$, $\dots$, in sequence. For the benchmarking purposes that we are concerned with in this paper, operations across multiple layers must \emph{not} be combined or compiled together by implementing a physical operation that enacts their composite unitary. This notion of strict ``barriers'' between circuit layers is required in many benchmarking and characterization methods \cite{magesan2011scalable,magesan2012characterizing,blume2016certifying}, and we use it throughout this work. We consider two categories of quantum circuits, which have significantly different roles. \emph{Quantum input / quantum output} (QI/QO) circuits do \emph{not} use the initialization or readout layers. \emph{Fixed input / classical output} (FI/CO) circuits begin with an initialization layer, and end with a readout layer. There is a canonical mapping from QI/QO circuits to FI/CO circuits (by adding the initialization and readout layers) and back (by stripping them off). QI/QO circuits generally appear as subroutines. A QI/QO circuit $C$ encodes a unitary map $U(C)$ on $w$ qubits given by \begin{equation} U(C) =U(L_d) \cdots U(L_2) U(L_1). \end{equation} FI/CO circuits represent complete, runnable quantum programs. A FI/CO circuit $C$ encodes a probability distribution \begin{equation} \mathsf{Pr}(x \mid C) = \left|\bra{x} U(L_d) \cdots U(L_2) U(L_1) \ket{0}^{\otimes w}\right|^2, \end{equation} over length-$w$ classical bit-strings, $x$. \subsubsection{Circuit width, depth, size and shape}\label{sec:width-and-depth} The circuit $C= L_dL_{d-1} \cdots L_2L_1$ defined over the $w$-qubit layer set $\mathbb{L}_w$ has \begin{itemize} \item a \emph{width} of $w$, \item a \emph{depth} of $d$, \item a \emph{size} of $w d$, and \item a \emph{shape} of $(w,d)$. \end{itemize} The depth of a circuit is defined explicitly with respect to that circuit's specific layer set $\mathbb{L}_w$. Each specific quantum processor has a ``native'' layer set, generally corresponding to logic layers that can be implemented in a single unit of time. For most processors, each native layer is some combination of one- and two-qubit gates in parallel. We do \emph{not} assume that every layer in the set $\mathbb{L}_w$ is native, nor that it can even be implemented with a short sequence of the native logic layers. So implementing a circuit of depth $d$ could require many more than $d$ units of physical time. 
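To make these definitions concrete, the sketch below represents a small FI/CO circuit as a list of layer unitaries, forms $U(C)=U(L_d)\cdots U(L_2)U(L_1)$, evaluates $\mathsf{Pr}(x\mid C)$, and reports the circuit's width, depth, size and shape. The specific two-qubit example layers are illustrative choices, not taken from the paper.
\begin{verbatim}
import numpy as np
from functools import reduce

kron = lambda ops: reduce(np.kron, ops)
I2 = np.eye(2); H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]   # control = first tensor factor in this ordering

# A FI/CO circuit: initialization, logic layers L_1, ..., L_d, then readout.
# Only the logic layers are stored; initialization and readout are handled in the simulation.
w = 2
layers = [kron([H, I2]), CNOT, kron([I2, H])]   # L_1, L_2, L_3

d = len(layers)
print(f"width w = {w}, depth d = {d}, size w*d = {w * d}, shape = {(w, d)}")

# U(C) = U(L_d) ... U(L_2) U(L_1): later layers act on the left.
U_C = reduce(lambda acc, L: L @ acc, layers, np.eye(2**w))

# Pr(x | C) = |<x| U(C) |0...0>|^2
psi0 = np.zeros(2**w); psi0[0] = 1.0
probs = np.abs(U_C @ psi0) ** 2
for x in range(2**w):
    print(f"Pr({format(x, f'0{w}b')} | C) = {probs[x]:.3f}")
\end{verbatim}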
Every (circuit paradigm) benchmark is defined by a set of circuits, and for every benchmark there is a set of ``overhead'' layers that are common to, and shared by, every circuit in the benchmark. At a minimum, this overhead includes initialization and readout layers. Therefore, in the context of a specific benchmark, we define three different depths for a circuit: \begin{enumerate} \item The \emph{full depth} $d_0$ of a circuit is the total number of layers, including initialization and readout, as defined above. \item The \emph{benchmark depth} $d$ of a circuit is the total number of non-overhead layers, $d = d_0 - \mathrm{const}$, where the constant is the same for every circuit in a benchmark. \item The \emph{physical depth} of a circuit is the total time taken to run a circuit assuming that every gate can be performed in single clock cycle (a single unit of time). Because this can depend strongly on hardware constraints, such as restrictions on parallelism, we do not use the physical depth in this work. \end{enumerate} The circuit mirroring procedure that we discuss below typically adds overhead layers, and in our experiments there are five overhead layers (initialization, readout, and three extra logic layers). In the main text, we report the benchmark depth defined by $d = d_0 - 5$. \subsubsection{Pauli layers}\label{sec:pls} The $w$-qubit Pauli layers $\mathbb{P}_w$ are the $4^w$ logic layers that instruct the processor to implement $w$-fold tensor products of the four standard Pauli operators $I$, $X$, $Y$ and $Z$. For all $Q_1,Q_2 \in \mathbb{P}_w$, \begin{equation} \mathcal{U}(Q_2 Q_1) = \mathcal{U}(Q_3) \end{equation} for some $Q_3 \in \mathbb{P}_w$, i.e., $\mathcal{U}(\mathbb{P}_w)$ is a group, where $\mathcal{U}(\mathbb{L}) := \{\mathcal{U}(L)\}_{L \in \mathbb{L}}$ for any layer set $\mathbb{L}$. The Pauli operators induce bit flips and/or phase-flips on the qubits. So, if \begin{equation} \mathcal{U}(L_dL_{d-1} \cdots L_2L_1) = \mathcal{U}(Q) \end{equation} for some Pauli layer $Q\in \mathbb{P}_w$, then the circuit $C = R L_dL_{d-1} \cdots L_2L_1 I$ will deterministically output a $w$-bit string that is specified by $Q$. This is a property that holds for all our benchmarking circuits. For any such circuit, its \emph{target} bit string is the unique $w$-bit string that the circuit will output if it is implemented perfectly. \subsubsection{Clifford layers and circuits}\label{sec:cls} All the benchmarking circuits in our experiments contain only Clifford layers. A $w$-qubit logic layer $L$ is a Clifford layer if, for each $Q \in \mathbb{P}_w$, \begin{equation} \mathcal{U}\left(L Q L^{-1}\right) = \mathcal{U}\left(Q'\right) \end{equation} for some Pauli layer $Q' \in \mathbb{P}_w$ \cite{aaronson2004improved}. Note that the Pauli layers are also Clifford layers, and $\mathcal{U}(\mathbb{C}_w)$ is a group where $\mathbb{C}_w$ denotes the set of all $w$-qubit Clifford layers. If a circuit contains only Clifford layers we refer to it as a Clifford circuit. \subsection{Modeling quantum processors}\label{sec:markovian} In these appendices we will show how a processor's performance on our mirror circuit benchmarks depends on the magnitude and type of the imperfections in that processor. Here we introduce our notation for modeling errors in quantum processors, and review the relevant standard definitions. 
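Before turning to error models, the defining property of a Clifford layer stated above (it conjugates every Pauli to a Pauli, up to a phase) can be checked numerically for small widths. The brute-force comparison against all Paulis in the sketch below is an illustration only, not how this would be done at scale; the non-Clifford $T$ gate is included as a counterexample.
\begin{verbatim}
import numpy as np
from itertools import product
from functools import reduce

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])

def pauli_group(w):
    """All 4^w w-qubit Pauli unitaries (global phases omitted)."""
    return [reduce(np.kron, ops) for ops in product([I2, X, Y, Z], repeat=w)]

def is_clifford(U, w):
    """True if U Q U^dagger is proportional to some Pauli for every w-qubit Pauli Q."""
    P = pauli_group(w)
    for Q in P:
        conj = U @ Q @ U.conj().T
        # |Tr(Q'^dagger conj)| = 2^w if and only if conj is proportional to Q' (Cauchy-Schwarz)
        if not any(np.isclose(abs(np.trace(Qp.conj().T @ conj)), 2**w) for Qp in P):
            return False
    return True

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])
for name, U in [("H", H), ("S", S), ("T", T)]:
    print(name, "is Clifford:", is_clifford(U, 1))
\end{verbatim}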
\subsubsection{The Markovian error model} We will use $\Lambda(\cdot)$ to map from instructions --- layers or circuits --- to a mathematical object that models a processor's implementation of that instruction. In particular: \begin{itemize} \item For a FI/CO circuit $C$, $\Lambda(C)$ is the distribution over $w$-bit strings that each run of $C$ on that processor is sampling from. \item For a QI/QO circuit $C$, $\Lambda(C)$ denotes a map from $w$-qubit quantum states to $w$-qubit quantum states. \end{itemize} Our theory will use the \emph{Markovian error model} \cite{sarovar2019detecting} in which \begin{itemize} \item $\Lambda(I)$ is a fixed $w$-qubit density operator. \item For any unitary logic layer $L$, $\Lambda(L)$ is a fixed completely positive and trace preserving (CPTP) linear map from $w$-qubit density operators to $w$-qubit density operators. \item $\Lambda(R)$ is a positive-operator valued measure (POVM), i.e., \begin{equation} \Lambda(R)= \{\Lambda(R)_b\}_{b \in \mathbb{B}_w}, \end{equation} where $\mathbb{B}_w$ is the set of $w$-bit strings, the $\Lambda(R)_b$ are positive operators, and $\sum_b \Lambda(R)_b = \mathds{1}$. \end{itemize} The map implemented by a QI/QO circuit $C=L_d\cdots L_2L_1$ is then \begin{equation} \Lambda(C) = \Lambda(L_d) \cdots \Lambda(L_2)\Lambda(L_1), \end{equation} where we have denoted composition of linear maps by multiplication (i.e., $\Lambda(L')\Lambda(L)$ represents the composition of the two linear maps). Similarly, for a FI/CO circuit $C=RL_d\cdots L_2L_1I$, $\Lambda(C)$ is a probability distribution over $w$-bit strings where the probability of the bit-string $b$ is \begin{equation} \Lambda(C)_{b} = \text{Tr}\left[ \Lambda(R)_b\Lambda(L_d) \cdots \Lambda(L_1)[\Lambda(I)] \right], \end{equation} This error model can describe many common error modes in quantum processors --- including local coherent, stochastic and amplitude damping errors, as well as complex many-qubit errors like stochastic or coherent crosstalk \cite{gambetta2012characterization,sarovar2019detecting}. \subsubsection{Stochastic Pauli channels}\label{sec:pauli-channels} Stochastic Pauli channels, and the special case of depolarizing channels, will have an important role in our theory of mirror circuit benchmarks. A $w$-qubit stochastic Pauli channel is parameterized by a probability distribution over the $4^w$ Pauli operators: $\{\gamma_{Q}\}_{Q \in \mathbb{P}_w}$ with $\sum_{Q \in \mathbb{P}_w} \gamma_Q = 1$ and $\gamma_Q \geq 0$. The stochastic Pauli channel specified by $\{\gamma_{Q}\}$ has the action \begin{equation} \mathcal{E}_{\text{pauli},\{\gamma_{Q}\}}[\rho] := \sum_{Q \in \mathbb{P}_w} \gamma_Q U(Q) \rho U(Q)^{-1}. \label{eq:pauli-channel} \end{equation} A $w$-qubit depolarizing channel ($\mathcal{D}_{w,\epsilon}$) is a special case of a stochastic Pauli channel that is parameterized only by an error rate $\epsilon$: \begin{equation} \mathcal{D}_{w,\epsilon}[\rho] :=(1 - \epsilon) \rho + \frac{\epsilon}{4^w - 1} \sum_{Q \in \mathbb{P}_{w, \text{err.}}} U(Q) \rho U(Q)^{-1}, \label{eq:global-dep} \end{equation} where $\mathbb{P}_{w,\text{err.}}$ is the Pauli layers excluding the identity Pauli layer. A $w$-qubit depolarizing channel is \emph{not} the $w$-fold tensor product of one-qubit depolarizing channels, that is, $\mathcal{D}_{w,\epsilon} \neq \mathcal{D}_{1,\epsilon'}^{\otimes w}$ for any $\epsilon'$ except for the special cases of the identity channel ($\epsilon=0$) and the maximally depolarizing channel ($\epsilon =(4^w-1)/4^w$). 
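These channel definitions can be verified directly with a few lines of numpy. The sketch below constructs $\mathcal{E}_{\text{pauli},\{\gamma_Q\}}$ and $\mathcal{D}_{w,\epsilon}$ from Eqs.~(\ref{eq:pauli-channel}) and (\ref{eq:global-dep}) and also illustrates the point made next, namely that the global depolarizing channel is not a tensor product of local ones; the particular candidate $\epsilon'$ tried is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np
from itertools import product
from functools import reduce

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])

def paulis(w):
    return [reduce(np.kron, ops) for ops in product([I2, X, Y, Z], repeat=w)]

def stochastic_pauli_channel(gammas, w):
    """Return the map rho -> sum_Q gamma_Q Q rho Q^dagger."""
    P = paulis(w)
    return lambda rho: sum(g * Q @ rho @ Q.conj().T for g, Q in zip(gammas, P))

def depolarizing_channel(eps, w):
    """Global w-qubit depolarizing channel D_{w,eps}."""
    gammas = [1 - eps] + [eps / (4**w - 1)] * (4**w - 1)
    return stochastic_pauli_channel(gammas, w)

# Compare the 2-qubit global depolarizing channel with a tensor product of 1-qubit ones
eps = 0.1
rho = np.zeros((4, 4)); rho[0, 0] = 1.0                 # |00><00|
out_global = depolarizing_channel(eps, 2)(rho)

D1 = depolarizing_channel(1 - np.sqrt(1 - eps), 1)      # a candidate eps' (illustrative choice)
rho1 = np.diag([1.0, 0.0])
out_product = np.kron(D1(rho1), D1(rho1))

print("trace preserved:", np.isclose(np.trace(out_global), 1.0))
print("global == product of local channels:", np.allclose(out_global, out_product))
\end{verbatim}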
A $w$-qubit depolarizing channel induces highly correlated errors, whereas the $w$-fold tensor product of one-qubit depolarizing channels induces independent errors. \subsubsection{Process fidelity}\label{sec:fidelity} As we will show later, performance on our mirror circuit benchmarks has a relationship to the fidelity of the processor's implementation of the circuit[s] from which that benchmark was constructed, via ``mirroring.'' There are two commonly used definitions for the ``process fidelity'' --- the average fidelity and the entanglement fidelity. The average fidelity ($F_a$) of a $w$-qubit process $\mathcal{E}$ to the identity process is defined by \cite{nielsen2002simple} \begin{equation} F_{a}(\mathcal{E}) := \int d\psi \, \bra{\psi} \mathcal{E}[\ket{\psi}\bra{\psi}] \ket{\psi}, \label{eq:agf} \end{equation} where the integral is over the unique $\text{SU}(2^w)$-invariant measure on pure states. The entanglement fidelity ($F_{e}$) is defined by \cite{nielsen2002simple} \begin{equation} F_{e}(\mathcal{E}) := \bra{\psi_e} ( \mathcal{E} \otimes \mathcal{I})[\ket{\psi_e}\bra{\psi_e}] \ket{\psi_e}, \label{eq:ef} \end{equation} where $\mathcal{I}$ is the $w$-qubit identity superoperator (i.e., $\mathcal{I}[\rho]=\rho$), and $\ket{\psi_e}$ is any maximally entangled state in $\mathcal{H}_w\otimes \mathcal{H}_w$. The entanglement and average fidelity are related via \cite{nielsen2002simple}: \begin{equation} F_{e}(\mathcal{E}) = \left(1 + \nicefrac{1}{2^w}\right)F_a(\mathcal{E}) - \nicefrac{1}{2^w}. \label{eq:Fe-Fa} \end{equation} The average infidelity ($\epsilon_a$) and entanglement infidelity ($\epsilon_e$) are simply defined by \begin{align} \epsilon_{a}(\mathcal{E}) &:= 1 - F_{a}(\mathcal{E}),\label{eq:inf1}\\ \epsilon_{e}(\mathcal{E}) &:= 1 - F_{e}(\mathcal{E}).\label{eq:inf2} \end{align} Although the average fidelity is more widely used in the literature, for our purposes the entanglement fidelity is more relevant. This is because $F_e$ accounts for errors that are only apparent when a circuit is used as a subroutine inside a circuit on more qubits, whereas $F_a$ does not (note that $F_{e} < F_a$, unless $F_a=F_e=0$ or $1$). Therefore, this is the definition that we use for `the process [in]fidelity'. The entanglement infidelity of a stochastic Pauli channel has a simple and intuitive property: it is equal to the probability that the channel induces any Pauli error, i.e., \begin{equation} \epsilon_{e}\left(\mathcal{E}_{\text{pauli},\{\gamma_{Q}\}}\right) = \sum_{Q \in \mathbb{P}_{w,\text{err.}}} \gamma_{Q}. \end{equation} In the special case of a depolarizing channel, $\epsilon_{e}(\mathcal{D}_{w, \epsilon}) = \epsilon$. \section{Volumetric circuit benchmarks}\label{sec:layer-set} The benchmarks constructed and deployed in this paper are examples of \emph{volumetric benchmarks} \cite{blume2019volumetric}. This means that each circuit in the benchmark has a well-defined width $w$ and depth $d$, that circuits with a range of $w$ and $d$ are selected, and that the data analysis sorts those circuits by $w$ and $d$. Therefore, it is essential that the nature of these circuits \emph{and} the precise operational meaning of width and depth be stated clearly. A circuit's width is the number of qubits required to run it, and its depth is the number of layers that appear in it. But both of these definitions are subject to non-obvious subtleties, especially depth. Depth is defined with respect to a particular set of logic operations (see Appendix~\ref{sec:width-and-depth}).
Therefore, the benchmarking analysis depends critically on \emph{which} set of logic layers were used to define the benchmark circuits. The purpose of this appendix is to discuss several ways to choose layer sets, and then to describe the layer sets used in our experiments. \subsection{Constructing layer sets from gate sets} Layers are just instructions defining $w$-qubit unitary operations (see Appendix~\ref{sec:layers}). Many diverse layer sets could be defined for circuit benchmarks. For example, it is possible to define layers that perform very complicated unitaries that have to be compiled into complex circuits of one- and two-qubit gates. Conversely, it is possible to define layers that can be performed in a single clock cycle (on a specified processor). The layer sets we use in this paper are composed of layers that are closer to the second example --- their ``physical depth'' (the number of clock cycles required for implementation) is relatively small. In the layer sets used for our benchmarks, every allowed $w$-qubit layer is constructed by combining one- and two-qubit gates, chosen from a small gate set, in parallel. Each of the $w$ qubits is acted on by at most one gate. The gate set contains an \emph{idle} gate, and every qubit \emph{not} targeted by a nontrivial gate is said to be acted on by that idle gate. By saying that individual gates are ``combined in parallel'', we are not saying that the processor has to implement them simultaneously. Recall that a layer defines a unitary, not an implementation. We are defining layers that \emph{could in principle} be implemented in parallel, within a single time step, by a processor that (1) can perform every gate in the gate set in a single time step, and (2) can perform them simultaneously. But real processors are not required to do so --- the individual gates in a layer can be serialized and/or compiled into more elementary operations. A precise description of how our layer sets are constructed from gate sets is as follows: \begin{enumerate} \item A $k$-qubit gate $G$ is an instruction to perform a specific unitary on $k$ qubits. We only consider $k=1,2$. \item A gate set $\mathbb{G} = \{G_1\ldots G_n\}$ is a list of 1- and 2-qubit gates. Each gate could, in principle, be applied to any qubit (for 1-qubit gates) or any ordered pair of qubits (for 2-qubit gates). However, connectivity constraints (see below) can be specified, and they restrict the qubits and/or ordered pairs of qubits on which a given gate can be applied. \item We consider only gate sets that contain (a) exactly \emph{one} 2-qubit gate; (b) a 1-qubit ``idle gate''; and (c) any number of additional 1-qubit gates. \item A $w$-qubit layer is constructed by assigning gates from $\mathbb{G} = \{G\}$ to specific qubits. In each $w$-qubit layer, each of the $w$ qubits is acted on by exactly one gate, which may be the idle gate. \end{enumerate} A $w$-qubit layer set $\mathbb{L}_w$ can be constructed, as above, by starting with a gate set and generating \emph{all} possible $w$-qubit layers of this form. We define smaller layer sets by allowing all and only those layers that respect: \begin{enumerate} \item \emph{A connectivity constraint} ($\Upsilon_{c}$) that specifies which qubits, or ordered pairs of qubits, each gate can be assigned to. We call this a connectivity constraint because the most important type of assignment constraint is a limitation on which ordered pairs of qubits the two-qubit gate can be applied to. 
The connectivity constraint can be defined to respect a processor's directed connectivity graph, so that a two-qubit gate \emph{only} appears in layers if that processor can implement it natively. The (undirected) connectivity graphs for all twelve processors that we benchmarked are shown in Fig.~1d. \item \emph{A parallelization constraint} ($\Upsilon_{p}$) that specifies which assigned gates are allowed to appear together in a layer. This can be used to respect a processor's limited ability to perform some gates in parallel, e.g., perhaps only a single two-qubit gate can be performed in a layer. \end{enumerate} Enforcing these constraints can reduce (or eliminate) the need for additional circuit compilation at run time. This can simplify further analyses of the benchmark results, such as estimation of per-gate error rates. \subsection{Layer sets for benchmark circuits} The procedure given above defines a canonical layer set for each ($\mathbb{G}$, $\Upsilon_{c}$, $\Upsilon_{p}$), which contains all the layers that can be built from $\mathbb{G}$ and are consistent with the constraints $\Upsilon_{c}$ and $\Upsilon_{p}$. A benchmark's layer set determines two of its properties: \begin{itemize} \item The circuits that can be constructed and included in the benchmark. \item How depth is defined and calculated for a given circuit. \end{itemize} The first property impacts what aspect of processor performance the benchmark measures, while the second impacts how that performance is quantified. So the choice of layer set --- i.e., of $\mathbb{G}$, $\Upsilon_{c}$, and $\Upsilon_{p}$ --- is significant. A standardized, architecture-blind layer set can be defined by making the $\Upsilon_{c}$ and $\Upsilon_{p}$ constraints trivial --- i.e., allowing \emph{all} gate assignments and placing no restrictions on parallelization --- and choosing a generic architecture-independent $\mathbb{G}$ such as \ensuremath{\mathsf{CNOT}}\xspace plus all 24 single-qubit Clifford gates (or all single-qubit gates if non-Clifford gates are allowed.) At the other extreme, we can define an architecture-specific layer set by choosing $\mathbb{G}$, $\Upsilon_{c}$, and $\Upsilon_{p}$ to match the `native' layer set of a specific processor that is to be benchmarked. Both are viable, useful options. Benchmarking circuits defined over these two extreme layer sets, respectively, probe different properties of a processor. Performance on benchmarks defined over native layer sets will correlate directly with the error rate of the native gates, and will not capture how ``useful'' those native gates are, or how much the processor is limited by connectivity or lack of parallelism. Conversely, benchmarks defined over a standardized layer set with no connectivity constraints will penalize processors with lower connectivity (relative to ``native layer'' benchmarks), because each two-qubit gate between qubits that are non-adjacent for a particular processor will have to be decomposed into a sequence of gates on adjacent qubits. In principle, this can be a desirable property, because it is expected to capture performance on realistic algorithm circuits. But it is also hard to calibrate. Exactly \emph{how} a particular benchmark of this type penalizes lower connectivity will depend on the details of the benchmarking circuits. Different algorithmic circuits are expected to incur different amounts of overhead (penalty) when embedded into a particular connectivity \cite{holmes2020impact,linke2017experimental}. 
Capturing this behavior faithfully may require designing a different benchmark for each type of algorithm circuit. \subsection{The layer sets for experiments \#1 and \#2} In our experiments, we intentionally avoid the complexities of limited connectivity by using layer sets that respect a processor's connectivity graph (in contrast to, e.g., Refs.~\cite{linke2017experimental, cross2018validating}). In particular, we choose a layer set constructed from: \begin{enumerate} \item A gate set consisting of a processor's native two-qubit gate and a subset of the single-qubit Clifford group (see Appendices~\ref{sec:b1-gates} and~\ref{sec:b2-gates} for details). The native two-qubit gate for IBM Q processors is \ensuremath{\mathsf{CNOT}}\xspace~\cite{ibmq2}, and for Rigetti processors it is \ensuremath{\mathsf{CPHASE}}\xspace~\cite{rigetti-qcs}. (Note that here ``native'' means the entangling gate exposed by the processor's interface, which may or may not correspond to the ``raw'' two-qubit gate implemented in hardware.) \item A connectivity constraint corresponding to the processor's directed connectivity graph (see Fig.~1d for the undirected connectivity graphs for all twelve processors that we benchmarked). Note that this means that specific width-$w$ benchmarking circuits cannot be constructed until we have chosen a subset of $w$ physical qubits on which to run them, because different subsets of qubits in a processor may have different connectivity graphs. We do not allow a disconnected subset of $w$ qubits to be chosen. \item No parallelization constraint. A layer can contain active (i.e., non-idle) gates on all the qubits, regardless of whether the processor actually implements all those gates at the same time. This does \emph{not} mean that a processor necessarily runs the gates from a layer in parallel (for Rigetti's processors, it is our understanding that only one active gate is implemented at a time, so every layer is serialized \cite{rigetti-qcs}). It merely means that we define circuit depth with respect to a layer set with parallel gates. On a processor that serializes every $w$-qubit layer, the compiled circuit's \emph{physical} depth (number of clock cycles required) may be up to a factor of $w$ higher than the benchmark depth. (The factor may be less than $w$ because some gates could take zero time when serialized, e.g., an idle gate can be skipped.) \end{enumerate} It could be argued that our choice for the layer set of each processor does not provide a ``fair'' comparison between the processors. For example, a processor will typically perform better on our benchmarks if the connections in the connectivity graph corresponding to the worst performing two-qubit gates are removed. This is a direct consequence of our decision to benchmark the full set of native operations of a processor. But no single choice of layer set can provide a uniquely ``fair'' comparison of two processors. Processors are described by a complex set of performance characteristics, which can only be fully explored and compared by using multiple benchmarks. Some should capture the limitations stemming from restricted connectivity, while others should not. We anticipate that mirror circuit benchmarks \emph{will} be easily adapted to explore aspects of performance related to device connectivity (as other benchmarks already do \cite{linke2017experimental, cross2018validating}), but that is distinct and future work.
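For concreteness, the following Python sketch encodes the canonical layer-set rule used throughout these appendices: a candidate layer belongs to $\mathbb{L}_w$ if every qubit is acted on by exactly one gate from the gate set, the two-qubit gate appears only on allowed directed edges (the connectivity constraint $\Upsilon_c$), and the layer passes the parallelization constraint $\Upsilon_p$. The gate names, data structures, and function signature are illustrative assumptions, not the code used to generate our benchmarks.
\begin{verbatim}
# Sketch of a membership test for the canonical layer set induced by
# (G, Upsilon_c, Upsilon_p). "I" denotes the one-qubit idle gate.
ONE_QUBIT_GATES = {"I", "X90", "Z90"}    # illustrative one-qubit gate names
TWO_QUBIT_GATE = "CNOT"                  # the single two-qubit gate in G

def layer_in_Lw(layer, qubits, allowed_edges, parallel_ok=lambda layer: True):
    # layer: list of (gate_name, target_qubits) pairs
    acted_on = []
    for gate, targets in layer:
        if gate in ONE_QUBIT_GATES and len(targets) == 1:
            acted_on.extend(targets)
        elif gate == TWO_QUBIT_GATE and tuple(targets) in allowed_edges:
            acted_on.extend(targets)
        else:
            return False                 # unknown gate or disallowed edge
    if sorted(acted_on) != sorted(qubits):
        return False                     # each qubit needs exactly one gate
    return parallel_ok(layer)            # the parallelization constraint

# Example: a CNOT on the directed edge (0, 1) and an idle on qubit 2.
print(layer_in_Lw([("CNOT", (0, 1)), ("I", (2,))],
                  qubits=[0, 1, 2], allowed_edges={(0, 1), (1, 2)}))  # True
\end{verbatim}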
\subsection{Self-inverse layer sets} A layer set $\mathbb{L}_w$ is \emph{self-inverse} if and only if $L^{-1} \in \mathbb{L}_w$ for all $L \in \mathbb{L}_w$. All the benchmarks and layer sets that we construct and use in this paper are self-inverse (in particular, note that \ensuremath{\mathsf{CNOT}}\xspace and \ensuremath{\mathsf{CPHASE}}\xspace are self-inverse gates). It is possible to construct layer sets $\mathbb{L}_w$ that (1) are \emph{not} self-inverse, and (2) include one or more layers whose inverse requires a very deep circuit (i.e., many layers in $\mathbb{L}_w$). But this has few or no practical consequences for applying the methods we present here --- in all the commonly found native layer sets we are aware of, generating $U^{-1}$ requires approximately (and often exactly) the same circuit depth as generating $U$ for any unitary $U \in \text{SU}(2^w)$. Throughout the rest of these appendices we assume a self-inverse layer set without further comment. \begin{figure} \caption{\textbf{Transforming any Clifford circuit into mirror benchmarking circuits.}} \label{fig:clifford-mirroring} \end{figure} \section{Circuit mirroring}\label{sec:mirroring} The mirror circuit benchmarks used in the main text were constructed using a set of circuit transformation procedures that we call, collectively, \emph{mirroring}. Mirroring transformations take arbitrary circuits, and create suites of benchmarking circuits that are closely related to the original circuit[s], but satisfy the benchmarking desiderata stated in Appendix~\ref{sec:desiderata} above. In this appendix, we introduce and motivate these transformations. First, we summarize the \emph{specific} mirroring procedure used for the experiments we performed. Then, we present each of the transformations that make up mirroring separately, because their utility extends beyond the specific procedure we used in this paper. \subsection{Circuit mirroring as used in our experiments} We refer to the specific circuit transformation used to generate the mirror circuit benchmarks used in our experiments as \emph{subroutine Clifford circuit} (SCC) mirroring. SCC mirroring is illustrated in Fig.~\ref{fig:clifford-mirroring}. A \emph{Clifford subroutine} is any QI/QO circuit composed entirely of Clifford layers. SCC mirroring maps any Clifford subroutine $C$ to an ensemble $\mathbb{S}(C)$ of circuits that are suitable for benchmarking. SCC mirroring can be applied to FI/CO Clifford circuits, i.e., fully specified quantum programs composed of Clifford gates, by simply stripping away the program's initialization and readout layers. But, as we discuss below, SCC mirroring is designed to probe the performance of $C$ \emph{as a subroutine} --- i.e., with the expectation that it will not necessarily be applied to the $\ket{0}^{\otimes w}$ state, but to an arbitrary input state, generated in the context of a larger circuit that we do not know \emph{a priori}. So SCC mirroring is not optimized to probe performance in the single FI/CO context, or any other specific context. Given a Clifford QI/QO circuit $C = L_{d}L_{d-1} \cdots L_2 L_1$ defined over the layer set $\mathbb{L}_w$, the circuits in $\mathbb{S}(C)$ are defined as the following sequence of layers: \begin{enumerate} \item[(a)] The initialization layer $I$ that initializes all $w$ qubits to $\ket{0}$. \item[(b)] A layer $L_{0}$ drawn from $\mathbb{C}_{1}^w$ = \{all $24^w$ $w$-fold tensor products of the 24 single-qubit Clifford gates\}. \item[(c)] The circuit $C = L_d L_{d-1} \cdots L_2 L_1$.
\item[(d)] A layer $Q_0$ drawn from $\mathbb{P}_w$ = \{all $4^w$ $w$-qubit Pauli layers\}. \item[(e)] The \emph{quasi-inversion} circuit \begin{equation} \tilde{C}^{-1} = \tilde{L}_1^{-1}\tilde{L}_2^{-1} \cdots \tilde{L}_{d-1}^{-1} \tilde{L}_{d}^{-1}, \end{equation} where each \emph{quasi-inversion} layer $ \tilde{L}^{-1}_i$ is the unique circuit layer satisfying \begin{equation} \mathcal{U}(L_i \tilde{L}^{-1}_i) = \mathcal{U}(Q_i), \end{equation} where $Q_i$ is a Pauli layer that is drawn from a user-specified distribution. (For example, this Pauli layer can be sampled from $\mathbb{P}_w$ uniformly and independently for each quasi-inversion layer. Alternatively, it can be set to the identity layer, so that each quasi-inversion layer is simply the inverse layer, i.e., $\tilde{L}^{-1} = L^{-1}$. We detail the choices made in our experiments later.) \item[(f)] The layer $L_{0}^{-1} \in \mathbb{C}_{1}^w$ that inverts the Clifford layer $L_{0}$ performed in step (b). \item[(g)] The readout layer $R$ that measures every qubit in the computational basis. \end{enumerate} All circuits in $\mathbb{S}(C)$ therefore have the form: \begin{equation} \mathbb{S}(C) = \left\{\,R\,L_0^{-1}\,\tilde{C}^{-1}\,Q_0\,C\,L_0\,I\, \right\}. \end{equation} The circuits in $\mathbb{S}(C)$ can be constructed by enumerating $L_{0}$ over the $24^w$ Clifford layers, $Q_0$ over the $4^w$ Pauli layers, and all other $Q_i$ according to the user-specified distribution. More practically, they can be \emph{sampled} by drawing $L_{0}$ and $Q_0$ uniformly at random from those layer sets, and drawing each $Q_i$ from the given distribution. Each circuit in $\mathbb{S}(C)$ is defined over the layer set $\mathbb{L}_w' = \mathbb{L}_w \cup \mathbb{P}_w \cup \mathbb{C}_{1}^w$ (where $\mathbb{A} \cup \mathbb{B}$ denotes the union of sets $\mathbb{A}$ and $\mathbb{B}$), and it has shape $(w, 2d+5)$. So if $C$'s original layer set $\mathbb{L}_w$ does not contain $\mathbb{P}_w$ and $\mathbb{C}_{1}^w$, then the circuits in $\mathbb{S}(C)$ are defined over a larger layer set than $C$. This generally has no meaningful consequences, because those single-qubit Pauli and Clifford layers can almost always be implemented with shallow circuits over native layers, with relatively low error rates (at least compared with layers containing 2-qubit gates). The generally negligible error rates of these ``extra'' layers motivate their exclusion from benchmark depth (see Appendix~\ref{sec:width-and-depth} above) --- we define the benchmark depth of a circuit in $\mathbb{S}(C)$ as its full depth $d_0$ minus five, i.e., as $2d$ where $d$ is the depth of $C$. SCC mirroring is motivated by the benchmarking desiderata that we presented in Appendix~\ref{sec:overview}. So we will now demonstrate that it satisfies each of them. \textbf{The first requirement} is that each circuit in $\mathbb{S}(C)$ must have an entirely specified context --- i.e., it must be a complete, runnable quantum program. SCC mirroring satisfies this by construction (because the $I$ and $R$ layers are explicitly included). \textbf{The second requirement} is that each circuit in $\mathbb{S}(C)$ must have a target output that is easy to compute on a conventional computer. SCC mirroring also satisfies this requirement, although the explanation is a bit longer.
Any circuit $C_{\text{scc}} \in \mathbb{S}(C)$ has the form $ C_{\text{scc}} = R C_{\text{scc}}^{(0)} I$ where \begin{align} C_{\text{scc}}^{(0)} = L_0^{-1} \tilde{L}_1^{-1} \tilde{L}_2^{-1} \cdots \tilde{L}_{d}^{-1} Q_0 L_d \cdots L_2L_1 \end{align} is the central (QI/QO) part of the circuit, which we now show implements an easily computed Pauli operation. For any $w$-qubit Pauli layer $Q^{(1)} \in \mathbb{P}_w$ and any $L_i \in \mathbb{L}_w$, \begin{align} \mathcal{U}\left(\tilde{L}_{i}^{-1} Q^{(1)} L_i\right) &= \mathcal{U}\left(Q_i L_{i}^{-1} Q^{(1)} L_i\right), \\ & = \mathcal{U}\left(Q_i Q^{(2)}\right),\\ & = \mathcal{U}\left(Q^{(3)}\right), \end{align} for some $Q^{(2)},Q^{(3)} \in \mathbb{P}_w$. The second equality holds because the Pauli group is closed under conjugation by Clifford operations, and the last because the Pauli group is closed under multiplication. Therefore \begin{equation} \mathcal{U}(C_{\text{scc}}^{(0)}) = \mathcal{U}(Q'), \end{equation} for some Pauli layer $Q' \in \mathbb{P}_w$. This Pauli layer can be calculated efficiently in the circuit's size on a conventional computer using, e.g., the ``CHP'' code of Aaronson \cite{aaronson2004improved} (CHP can simulate large circuits over many thousands of qubits in less than a second on an ordinary laptop). Therefore, if performed without errors, each circuit in $\mathbb{S}(C)$ always produces a unique and deterministic bit string specified by that circuit's $Q'$. Since each circuit in $\mathbb{S}(C)$ has a unique target output, how well a given processor ran that circuit is easily quantified by its success probability ($S$). $S$ is just the probability of seeing the target bit string, and it can be estimated efficiently from data. In our data analysis we rescale $S$ to the \emph{polarization} $P = (S - \nicefrac{1}{2^w})/(1 - \nicefrac{1}{2^w})$, for the reasons discussed in the main text and in Appendix~\ref{sec:pol}. But this is just a linear rescaling, which has no impact on the theory discussed here. So in the rest of this appendix we will analyze $S$ instead of $P$. \textbf{The third requirement} (and the most subtle) is that the performance of the circuits in $\mathbb{S}(C)$ must be \emph{representative} of how $C$ would perform in the context[s] where it might be used. This desideratum is what requires us to map $C$ to an \emph{ensemble} of circuits (rather than just a single circuit), and it therefore motivates each of the randomized elements in the procedure outlined above. To show that SCC mirroring satisfies the third desideratum, we represent a processor's imperfect implementation of the QI/QO circuit $C$ by a $w$-qubit superoperator $\Lambda(C)$ (see Appendix~\ref{sec:defs}). This superoperator can be written as \begin{equation} \Lambda(C) = \mathcal{E}(C)\mathcal{U}(C), \end{equation} where $\mathcal{E}(C)$ is an error map. If the processor can run $C$ perfectly, $\mathcal{E}(C)$ would be the identity superoperator $\mathcal{I}$. As we will explain in the remainder of this appendix, SCC mirroring creates a test suite $\mathbb{S}(C)$ with the following properties: \begin{enumerate} \item For any error superoperator $\mathcal{E}(C) \neq \mathcal{I}$ there is a circuit in $\mathbb{S}(C)$ for which $S < 1$. That is, unless a processor can implement $C$ perfectly in all contexts, there is at least one circuit in $\mathbb{S}(C)$ that will bear witness to the error. \item The \emph{expected} value of $S$ for a circuit sampled from $\mathbb{S}(C)$ is closely related to the process fidelity of $\mathcal{E}(C)$. 
Therefore, the expected value of $S$ is approximately probing the performance of a processor on $C$ in a uniformly random context. We make this statement more precise later in this appendix. \end{enumerate} These two properties are a well-motivated sense in which a processor's performance on a set of benchmarking circuits derived from $C$ can be representative of the processor's performance on $C$. But it is not the only well-motivated interpretation of ``representative performance''. SCC mirroring creates a benchmark whose \emph{average} performance is closely related to the \emph{average} fidelity with which the processor implements $C$. A benchmark that captured the processor's \emph{worst-case} performance on $C$ --- i.e., the maximum probability, over all possible contexts, of getting the wrong output from running $C$ in that context --- would arguably be even more desirable. But no benchmark can extract this information efficiently in $w$, because there are $e^{O(w)}$ possible contexts (e.g., input states). Capturing worst-case performance, without additional prior information, requires exhaustively exploring all of those contexts, which is infeasible. So the notion of ``representative performance'' achieved by SCC mirroring is not unique, but it is both natural and achievable. The remainder of this appendix presents the collection of circuit transformations that, together, constitute \emph{mirroring}. Combined in a specific way, they generate the SCC mirroring procedure explained above. Since all of the experiments we report in the main text use SCC mirroring exclusively, our primary aim is to prove that SCC mirroring satisfies the two properties stated above. But the mirroring transformations listed here are more powerful. They can also be used to generate (1) benchmarking circuits with different properties, and (2) benchmarks from non-Clifford circuits. So a secondary aim of this appendix is to explain the transformations independently, and illustrate this extensibility. \subsection{Transformation 1: simple circuit mirroring}\label{sec:simple-mirroring} Many classical programs have a unique ``right'' answer, which makes it easy to detect (and benchmark) errors in classical computers. But interesting quantum circuits don't generally produce definite outcomes (i.e., a unique bit string) even when run without errors. Instead, the post-measurement outcome of generic quantum programs is a high-entropy \emph{distribution} over bit strings, and it can be extremely costly to verify that this distribution matches the target, i.e., that the \emph{right} distribution is being produced. So to enable benchmarks derived from generic circuits, the first thing we need is a way of transforming interesting quantum circuits so that they \emph{do} produce definite outcomes. The rather obvious solution is time reversal, and we call the particular transformation that we use \emph{simple circuit mirroring}. This transformation turns any circuit into a definite-outcome circuit, satisfying our second requirement, at the cost of creating some new problems that we will address later. Simple circuit mirroring is essentially a type of Loschmidt echo \cite{loschmidt1876uber}. 
It maps \emph{any} shape $(w,d)$ QI/QO circuit $C = L_d \cdots L_2L_1$ over some self-inverse layer set $\mathbb{L}_w$ into a single shape $(w,2d+2)$ FI/CO circuit $M(C)$ over $\mathbb{L}_w$, \begin{equation} M(C) = R\,C^{-1}\,C\,I, \end{equation} consisting of: \begin{enumerate}[label=(\roman*)] \item The initialization layer $I$ that initializes all $w$ qubits to $\ket{0}$. \item The circuit $C= L_dL_{d-1} \cdots L_2L_1$. \item The \emph{inversion circuit} \begin{equation} C^{-1} = L_1^{-1} L_2^{-1} \cdots L_{d-1}^{-1}L_{d}^{-1}, \end{equation} consisting of the layers of $C$ in the reverse order and with each layer $L$ replaced with its inverse $L^{-1}$. \item The readout layer $R$ that measures every qubit in the computational basis. \end{enumerate} The inversion circuit $C^{-1}$ implements the inverse unitary to $C$, i.e., \begin{equation} U(C)U(C^{-1}) = \mathds{1}. \end{equation} Therefore, for any circuit $C$, if $M(C)$ is performed without error, it will deterministically return the all-zeros bit string. Simple circuit mirroring achieves the first two of our three desiderata for a circuit transformation (see above) for generating a benchmarking suite $\mathbb{S}(C)$ from a circuit $C$: the single-element set $\mathbb{S}_{sm}(C) = \{M(C)\}$ generated by simple circuit mirroring contains a single circuit with an entirely specified context (it is a complete program) and an efficiently simulable target output (it is the all-zeros bit string). Simple circuit mirroring is a good starting point for satisfying the third desideratum, but, unaltered, it does not meet it. The circuit suite $\mathbb{S}_{sm}(C) = \{M(C)\}$ generated by simple circuit mirroring is not representative of $C$ in \emph{any} meaningful sense. (Unless strong assumptions are made about the types of errors that a processor is subject to, $M(C)$ is only representative of $C$ in the trivial sense that a processor's performance on $M(C)$ is representative of its performance on $C$ in the context of inserting $C$ into that simple mirror circuit.) The limitations of simple circuit mirroring all stem from the fact that it involves running $C$ in a single context. Three specific effects that limit the usefulness of $\mathbb{S}_{sm}(C)$ are: \begin{enumerate} \item \emph{Systematic error cancellation}. In simple circuit mirroring, the circuit $C$ is always followed by the circuit $C^{-1}$. This means that it is possible for systematic (coherent) errors in the implementation of $C$ to exactly cancel with systematic errors in the implementation of $C^{-1}$. For example, if $\Lambda(C) = \mathcal{V}$ and $\Lambda(C^{-1}) = \mathcal{V}^{-1}$ for some unitary superoperator $\mathcal{V}$ then $S=1$, up to contributions from errors in qubit initialization and readout. This does \emph{not} require that $\Lambda(C)$ is even close to the target evolution $\mathcal{U}(C)$. This is a well-known effect with the Loschmidt echo, which tests whether an evolution can be reversed, not whether a desired evolution can be implemented accurately. \item \emph{A single input state}. The state input into $C$ is always $\ket{0}^w$, so simple circuit mirroring is insensitive to any errors that do not impact $\ket{0}^w$. \item \emph{A single measurement basis}. The measurement is always in the computational basis, so simple circuit mirroring is insensitive to any errors that, once commuted through the circuit, manifest as errors that have no observable impact after projection onto $\bra{0}^w$ (such as dephasing or coherent $\hat{z}$-axis errors).
\end{enumerate} The three additional circuit transformation tools that we introduce below can be used to place $C$ in a wider range of contexts. They start from $\mathbb{S}_{sm}(C)$ and map it to an altered and (typically) enlarged benchmarking suite. It is convenient to think of these three tools as a set of three configurable circuit transformations that are applied in order. \subsection{Transformation 2: inserting a central subroutine} \label{sec:central-subroutine} The first weakness of simple circuit mirroring, highlighted above, is that it can hide errors in $C$, because errors in $C^{-1}$ might systematically cancel out errors in $C$. To solve this problem, we introduce another transformation called \emph{central subroutine insertion}, which we apply to the test suite $\mathbb{S}_{sm}(C) = \{M(C)\}$ obtained from simple circuit mirroring. It consists of inserting each of a set $\mathbb{A}$ of subroutines --- i.e., QI/QO circuits --- between $C$ and $C^{-1}$. This transformation acts on $\mathbb{S}_{sm}(C)$ as: \begin{equation} \{ M(C) = R C^{-1} C I \} \to \{ M_A(C) = R C^{-1} A C I \}_{A \in \mathbb{A}}. \end{equation} Central subroutine insertion generates a larger circuit suite, \begin{equation} \mathbb{S}_{\mathbb{A}}(C) = \{M_A(C) \}_{A \in \mathbb{A}}, \end{equation} that can be run exhaustively or sampled from. The point of the central subroutine is to prevent systematic errors in $C$ and $C^{-1}$ from canceling each other. It only works if $\mathbb{A}$ is chosen carefully, to satisfy three competing criteria: \begin{enumerate} \item The subroutines in $\mathbb{A}$ should be sufficiently diverse that no possible error mode on $C^{-1}$ can systematically cancel out errors on $C$ in \emph{every} circuit in $\mathbb{S}_{\mathbb{A}}(C)$. As an obvious example, an $\mathbb{A}$ containing only the trivial, depth-0 circuit would not be sufficiently diverse. \item Each circuit in $\mathbb{S}_{\mathbb{A}}(C)$, when run without error, should output a single, efficiently calculable bit string. In some scenarios, achieving this requirement will require an additional transformation, as explained in Transformation 3 below. \item The subroutines in $\mathbb{A}$ should be implementable with shallow circuits, so that running the circuits in $\mathbb{S}_{\mathbb{A}}(C)$ is not much harder than running simple mirror circuits. \end{enumerate} To make the ``sufficiently diverse'' condition above precise, we consider the linear map $\mathscr{L}_{C, \Lambda}$ on $w$-qubit superoperators (a so-called \emph{super-duper-operator}~\cite{crooks2008quantum}) defined by: \begin{equation} \mathscr{L}_{C,\Lambda}(\mathcal{S}) = \Lambda(C^{-1}) \mathcal{S} \Lambda(C). \end{equation} This map is parameterized by (1) a circuit $C$, and (2) a processor's $\Lambda(\cdot)$ map. We say that a processor implements a circuit $C$ perfectly if and only if $\Lambda(C) = \mathcal{U}(C)$. Therefore, a processor perfectly implements both $C$ and $C^{-1}$ if and only if, for \emph{every} superoperator $\mathcal{S}$, \begin{equation} \mathscr{L}_{C,\Lambda}(\mathcal{S}) = \mathscr{L}_{C,\mathcal{U}}(\mathcal{S}).\label{eq:Ll=Lu} \end{equation} Simple circuit mirroring cannot tell us whether this is the case. It only tells us about $\mathscr{L}_{C,\Lambda}(\mathcal{I})$, where $\mathcal{I}$ is the identity superoperator, because the processor's implementation of the QI/QO component of the simple mirror circuit $M(C)$ is \begin{equation} \Lambda(C^{-1}C) = \mathscr{L}_{C,\Lambda}(\mathcal{I}). \end{equation} So simple circuit mirroring cannot be sensitive to all possible errors in $\Lambda(C)$ and $\Lambda(C^{-1})$, because Eq.~\eqref{eq:Ll=Lu} might hold for $\mathcal{S}=\mathcal{I}$, but not for all $\mathcal{S}$. We can use $\mathscr{L}_{C,\Lambda}$ to more precisely state the first of our criteria for $\mathbb{A}$, introduced above. For any $\Lambda(C)$ and $\Lambda(C^{-1})$ superoperators for which $\Lambda(C) \neq \mathcal{U}(C)$ and/or $\Lambda(C^{-1}) \neq \mathcal{U}(C^{-1})$, there must exist an $A\in \mathbb{A}$ such that \begin{equation} \mathscr{L}_{C,\Lambda}(\mathcal{U}(A)) \neq \mathscr{L}_{C,\mathcal{U}}(\mathcal{U}(A)). \end{equation} Without assumptions about the constituent superoperators, this holds if and only if $\mathcal{U}(\mathbb{A})=\{\mathcal{U}(A)\}_{A\in\mathbb{A}}$ spans the vector space of $w$-qubit superoperators. (Because $\Lambda(C)$ and $\Lambda(C^{-1})$ must be completely positive and trace preserving maps, there are interesting edge cases where we can learn everything about $\Lambda(C)$ and $\Lambda(C^{-1})$ with a smaller set $\mathbb{A}$.) Ideally, $\mathcal{U}(\mathbb{A})$ should span that space \emph{uniformly} (as does, e.g., an orthonormal basis) to maximize sensitivity to all possible errors. However, constructing a set of circuits that spans the superoperator space requires nontrivial circuits. So in SCC mirroring, we settle for a slightly weaker (but much simpler) construction that detects \emph{almost} all errors. We choose an $\mathbb{A}$ containing all the $w$-qubit Pauli layers $\mathbb{P}_w$ (see Fig.~\ref{fig:clifford-mirroring}). When $C$ is a Clifford circuit as in the main text (we address non-Clifford circuits briefly in the next subsection of this appendix), the Pauli layers are a particularly powerful choice for the following reasons. \begin{enumerate} \item \emph{Sensitivity to all small errors}. The Pauli group $\mathcal{U}(\mathbb{P}_w)$ does \emph{not} span superoperator space (there are only $4^w$ elements in $\mathcal{U}(\mathbb{P}_w)$, but superoperator space has dimension $4^w \times 4^w = 16^w$), but it has a property that is almost as good in this context. If $\mathscr{L}_{C,\Lambda} (\mathcal{U}(Q)) = \mathscr{L}_{C,\mathcal{U}}(\mathcal{U}(Q))$ for all $Q\in \mathbb{P}_w$ then this implies that $\Lambda(C) =\mathcal{U}(QC)$ and $\Lambda(C^{-1}) =\mathcal{U}(C^{-1}Q)$ for some Pauli layer $Q$ that is the same in both equations, i.e., the correct unitaries are implemented up to multiplication by some Pauli operator. So the only errors that go undetected by $\mathbb{A}=\mathbb{P}_w$ are large, discrete, and unlikely except in an adversarial context. \item \emph{Faithfulness in infidelity}. More than merely making all small errors \emph{detectable}, the Pauli group construction ensures that their average impact on the benchmark circuits faithfully reflects their impact in $C$ and $C^{-1}$. If we average uniformly over $\mathbb{A}$, the effect of inserting a random Pauli layer between $C$ and $C^{-1}$ is to perform a Pauli twirl on the error maps for $C$ and $C^{-1}$, reducing them to stochastic Pauli channels \cite{knill2005quantum, wallman2015noise,ware2018experimental}. For small errors, this ensures that the fidelity of the full benchmark circuit is very close to the product of the fidelities of $C$ and $C^{-1}$. \item \emph{Efficiently calculable target outputs}. For any Pauli layer $Q$, $ \mathcal{U}(C^{-1} Q C) = \mathcal{U}(Q'),$ for some Pauli layer $Q'$.
So $M_Q(C)$ will always output a single, efficiently calculable bit string determined by $Q'$, if implemented perfectly. \item \emph{Unbiased target outputs}. Uniform sampling from $\mathbb{S}_{\mathbb{P}_w}(C)$ ensures that the target bit string is uniformly random, so biased readout errors cannot artificially boost or suppress the success probabilities $S$ of the circuits in $\mathbb{S}_{\mathbb{P}_w}(C)$ (again, on average). \item \emph{Low-depth circuits.} Any Pauli layer can be implemented with a low-depth circuit over the native layer-set of a typical processor. \end{enumerate} In the remainder of this appendix we will consider only the case of $\mathbb{A}=\mathbb{P}_w$. \subsection{Transformation 3: replacing the inversion circuit with a suite of quasi-inversion circuits} The reason that errors can systematically cancel in simple circuit mirroring is that $C$ is always followed by the same circuit, $C^{-1}$. Inserting a central subroutine prevents this error cancellation, but we can also reduce the correlation between layers in a mirror circuit by replacing $C^{-1}$ with a \emph{quasi-inversion} circuit $\tilde{C}^{-1}$. For a Clifford circuit $C$, this transformation maps each $M_Q(C)$ circuit to a set of circuits where the inverse circuit $C^{-1}$ has been replaced by each of a set of \emph{quasi-inversion} subroutines $\mathbb{Q}$. It is the map: \begin{equation} M_Q(C) = RC^{-1}QCI \to \{M_{Q,\tilde{C}}(C) = R \tilde{C}^{-1} Q C I\}_{\tilde{C}^{-1} \in \mathbb{Q}}, \end{equation} where $\mathbb{Q}(C)$ consists of all circuits of the form \begin{equation} \tilde{C}^{-1} =\tilde{L}^{-1}_1 \tilde{L}_2^{-1} \cdots \tilde{L}_{d-1}^{-1} \tilde{L}_{d}^{-1}. \end{equation} Here each $\tilde{L}_{i}^{-1}$ runs over some set of $L$-dependent layers $\mathbb{Q}_1(L)$ that all implement unitaries that are equivalent to $\mathcal{U}(L^{-1})$ up to multiplication by a Pauli operator. Different choices for $\mathbb{Q}_1$ result in different transformations. In our benchmarking experiments we use two transformations: the trivial transformation given by $\mathbb{Q}_1(L) = \{L^{-1}\}$, and the transformation in which $\mathbb{Q}_1(L)$ consists of all $4^w$ layers $\tilde{L}^{-1}$ that satisfy $\mathcal{U}(\tilde{L}^{-1}) = \mathcal{U}(\tilde{Q}'L^{-1})$ for some $\tilde{Q}' \in \mathbb{P}_w$ (which is similar to Pauli frame randomization \cite{knill2005quantum, wallman2015noise, ware2018experimental}). A similar transformation can be used to create mirror circuits with a central Pauli subroutine from non-Clifford circuits: in that case we choose the quasi-inverse layers as a function of the central Pauli layer $Q$, which allows us to construct quasi-inverse circuits for which the entire circuit implements a Pauli operator. As we do not use non-Clifford circuits in our experiments, we leave further details of this technique to future work. \subsection{Transformation 4: inserting preparation and measurement subroutines} The last of our circuit transformations is intended to address the last two limitations of simple circuit mirroring listed at the end of Appendix~\ref{sec:simple-mirroring}: that only the $\ket{0}^w$ state is input into $C$, and that readout is always in the computational basis. These limitations mean that any errors in the implementation of $C$ that do not affect the $\ket{0}^w$ state do not contribute to the failure rate of a simple mirror circuit, nor to the failure rates of the circuits in the expanded suites obtained from Transformations 2 and 3. 
If the circuit $C$ will only ever be applied to $\ket{0}^w$, then this is not a flaw as it represents the desired context. But to capture any other contexts, we need to implement additional input states and measurement bases so that performance on the benchmarking suite is representative of the ability to perform $C$ in generic contexts. We do this by inserting ``fiducial'' \cite{blume2016certifying} subroutines just after initialization and before readout, respectively. This procedure is parameterized by a set of QI/QO circuits $\mathbb{F}$ and it maps each circuit $M_{Q,\tilde{C}}(C) = R \tilde{C}^{-1} Q C I$ to a test suite \begin{equation} M_{Q,\tilde{C}}(C) \to \{M_{Q,\tilde{C},F}(C) = R F^{-1} \tilde{C}^{-1} Q C F I\}_{F\in \mathbb{F}}. \end{equation} Together, the four transformations generate the circuit suite \begin{equation} \mathbb{S}_{\mathbb{P}_w, \mathbb{Q}, \mathbb{F}}(C) = \{M_{Q,\tilde{C},F}(C) \}_{Q \in \mathbb{P}_w, \tilde{C}^{-1} \in \mathbb{Q}(C), F \in \mathbb{F}}, \end{equation} which can be run exhaustively or sampled from. We need to choose $\mathbb{F}$ to satisfy four competing criteria: \begin{enumerate} \item Each circuit in $\mathbb{S}_{\mathbb{P}_w, \mathbb{Q}, \mathbb{F}}(C)$ should still have an efficiently calculable target bit-string. \item The fiducial subroutines should be implementable with shallow circuits over a typical processor's native gates, so that errors in these subroutines do not dominate the failure rate of the circuits in $\mathbb{S}_{\mathbb{P}_w,\mathbb{Q}, \mathbb{F}}(C)$, except perhaps for very shallow $C$. \item The fiducials should be sufficiently diverse that they reveal all errors that are visible in any of the contexts in which $C$ will be used. In the case of a subroutine $C$ that is to be used in an \emph{a priori} entirely unknown context, this means that if $\Lambda( \tilde{C}^{-1} Q C) \neq \mathcal{U}(\tilde{C}^{-1} Q C)$ then there should be at least one $F \in \mathbb{F}$ for which $S < 1$ for the corresponding circuit. \item (Stretch goal) The fiducials should generate circuits that are \emph{uniformly sensitive} to all possible errors in $\Lambda( \tilde{C}^{-1} Q C)$, ensuring that the average performance over randomly sampled fiducials is closely related to the process fidelity of $\Lambda( \tilde{C}^{-1} Q C)$. \end{enumerate} The first criterion is achieved by setting $\mathbb{F}$ to any subset of the $w$-qubit Clifford layers $\mathbb{C}_w$. The third criterion is satisfied if and only if $\mathbb{F}$ is informationally complete (i.e., its elements are sufficient for process tomography). The fourth criterion is achieved by a set $\mathbb{F}$ that generates a 2-design, such as the set of $w$-qubit stabilizer states, which is generated by the full $w$-qubit Clifford layer set $\mathbb{C}_w$ \cite{gross2007evenly, dankert2009exact}. But the elements of $\mathbb{C}_w$ cannot be implemented with $O(1)$ depth circuits, so the full $w$-qubit Clifford group cannot satisfy our second criterion. In fact, no 2-design can be generated with $O(1)$ depth circuits over one- and two-qubit gates. We therefore choose to set $\mathbb{F} = \mathbb{C}_1^w$, where $ \mathbb{C}_1^w$ denotes the $w$-fold tensor product of the single-qubit Clifford group. Setting $\mathbb{F} = \mathbb{C}_1^w$ satisfies criteria (1), (2), and (3). It does not directly satisfy criterion (4), as $ \mathbb{C}_1^w$ does \emph{not} generate a 2-design.
However, when combined with some simple and efficient data processing, $\mathbb{F} = \mathbb{C}_1^w$ does satisfy the fourth criterion. To understand why, observe that averaging over these fiducials performs a type of group-twirl \cite{gambetta2012characterization}. Fiducials from $\mathbb{C}_1^w$ implement the twirling map $\mathscr{T}$ that acts on $w$-qubit superoperators as \begin{equation} \mathscr{T}(\mathcal{E}) = \frac{1}{24^w} \sum_{L \in\mathbb{C}_1^{\otimes w}} \mathcal{U}(L) \mathcal{E} \mathcal{U}(L^{-1}). \end{equation} This twirl projects any superoperator onto the space spanned by $w$-fold tensor products of one-qubit depolarizing channels \cite{gambetta2012characterization} (in practice there will be errors in the fiducial subroutines, so they do not implement a perfect twirl. However, it is known that the effect of twirling is robust under weak error \cite{proctor2017randomized, wallman2017randomized, merkel2018randomized}). So averaging over the fiducials converts $\Lambda( \tilde{C}^{-1} Q C)$ into a stochastic Pauli channel with a distribution over Pauli errors that, for each of the $w$ qubits, has a uniform marginal distribution over the three Pauli errors, $X$, $Y$ and $Z$. This guarantees good (but not uniform) sensitivity to \emph{all} errors, because, although the $Z$ errors cause no observable failure --- i.e., the correct bit is output by the qubit on which the error occurs --- both $X$ and $Y$ errors flip the output bit of that qubit. So the rate of unobserved $Z$ errors can be inferred from the observed rate of bit flips. This implies that there is a simple function of data that is equal to $F_e(\Lambda( \tilde{C}^{-1} Q C))$, up to contributions from errors in the initialization, readout and fiducial subroutines. For $k=0,1,\dots,w$, let $h_k$ denote the probability of the circuit producing a bit string whose Hamming distance from the target bit string is $k$ --- so $h_0 = S$ and $\sum_{k=0}^{w} h_k = 1$. For $k=0,1,\dots,w$, let $p_k$ denote the probability that $\mathscr{T}(\Lambda( \tilde{C}^{-1} Q C))$ induces any weight $k$ error --- so $p_0$ is the probability of no error, meaning that \begin{equation} p_0 = F_e(\Lambda( \tilde{C}^{-1} Q C)), \end{equation} and $\sum_{k=0}^{w} p_k = 1$. These distributions are related by \begin{equation} \vec{h} = M \vec{p}, \end{equation} where $M_{jk}$ is the probability that a weight-$k$ error causes $j$ bit flips on the target bit string. Now a weight-$k$ error causes $j$ bit flips if $j$ of the $k$ Pauli errors are not $Z$. Because the probability of all three Pauli errors is equal, this is simply given by \begin{equation} M_{jk} = {{k}\choose{j}} \frac{2^j}{3^k}, \end{equation} for $j \leq k$, with $M_{jk} = 0$ for $j > k$. By inverting this equation we obtain: \begin{equation} p_0 = \sum_{k=0}^{w} \left( - \frac{1}{2}\right)^k h_k.\label{eq:p0} \end{equation} The Hamming distance distribution can be efficiently estimated from data (it is a distribution over $w+1$ elements). So we can use this relationship to efficiently estimate the process fidelity of $\Lambda( \tilde{C}^{-1} Q C)$ --- up to contributions from errors in the initialization, readout and fiducial subroutines, which, if desired, could be estimated and removed using standard techniques \cite{magesan2011scalable}. It would therefore be well-motivated to use the right-hand-side of Eq.~\eqref{eq:p0}, in place of $S$ or $P$ (the polarization), as a quantifier of how successfully a mirror circuit with randomized single-qubit Clifford fiducials has run. 
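A minimal sketch of this estimator is given below (the function and variable names are ours, and the sketch makes no attempt to propagate statistical uncertainty or to remove SPAM-error contributions).
\begin{verbatim}
# Sketch of Eq. (p0): estimate the Hamming-distance distribution h_k from the
# observed bit strings and compute p_0 = sum_k (-1/2)^k h_k.
from collections import Counter

def estimate_p0(observed_bitstrings, target_bitstring):
    w = len(target_bitstring)
    distances = Counter(sum(a != b for a, b in zip(s, target_bitstring))
                        for s in observed_bitstrings)
    n = len(observed_bitstrings)
    h = [distances.get(k, 0) / n for k in range(w + 1)]   # estimated h_k
    return sum((-0.5) ** k * hk for k, hk in enumerate(h))

# Example with w = 2: 80% exact successes and 20% single bit flips gives
# p_0 = 0.8 - 0.5 * 0.2 = 0.7.
print(estimate_p0(["00"] * 80 + ["01"] * 20, "00"))
\end{verbatim}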
We do not do so in this work, however, for two reasons. First, $S$ and $P$ are arguably more intuitive. Second, using $p_0$ instead of $S$ or $P$ makes little difference to our results, and no difference to our scientific conclusions. One of the reasons for this is that $p_0 \approx S$ when most of the observed incorrect bit strings are a large Hamming distance from the target bit string. This will typically occur when $C$ is a wide and deep circuit containing many two-qubit gates (which will spread errors). \begin{figure*} \caption{\textbf{Randomized mirror circuits.}} \label{fig:rmcs} \end{figure*} \section{Randomized mirror circuits} \label{sec:rmcs} Our experiments used two kinds of mirror circuits: \emph{randomized mirror circuits} and \emph{periodic mirror circuits}. In this appendix we define randomized mirror circuits. Although the definitions in this appendix are self-contained, the mirroring transformations used to construct them were introduced and motivated in Appendix~\ref{sec:mirroring}. \subsection{Definition} Our experiments used randomized mirror circuits built from alternating layers of randomized Pauli gates and Clifford gates chosen from a sampling distribution $\Omega$ over a Clifford layer set $\mathbb{L}_w$. In the second half of the circuit, each of the $\Omega$-random layers is inverted, but the Pauli layers are independently resampled randomly. The sampling distribution $\Omega$ is configurable, and is used to vary and fine-tune the properties of the benchmark. (A related construction plays a role in direct randomized benchmarking \cite{proctor2018direct}.) A width-$w$ randomized mirror circuit with a benchmark depth of $d$ (see schematic in Fig.~\ref{fig:rmcs}) consists of: \begin{itemize} \item[(a)] An initialization layer that prepares all $w$ qubits in $\ket{0}$. \item[(b)] A layer of uniformly random single-qubit Clifford gates on each qubit. \item[(c)] A sequence of $\nicefrac{d}{4}$ independently sampled pairs of layers, where each pair consists of \begin{enumerate} \item A layer of uniformly random Pauli gates on each qubit. \item A layer sampled from $\Omega$. \end{enumerate} \item[(d)] A layer of uniformly random Pauli gates on each qubit. \item[(e)] The layers from step (c) in the reverse order with: \begin{enumerate} \item Each $\Omega$-random layer replaced with its inverse. \item Each Pauli layer independently resampled. \end{enumerate} \item[(f)] The inverse of the Clifford layer from step (b). \item[(g)] A readout layer that measures each qubit in the computational basis. \end{itemize} Randomized mirror circuits can have any width $w$, and any benchmark depth that is a multiple of four (i.e., $d=4k$ for some integer $k \geq 0$). As with all our benchmark circuits, note that the full depth $d_0$ of the circuit is $d_0 = d + 5$ --- the benchmark depth ignores the constant contribution of the five layers in steps (a), (b), (d), (f) and (g). The bulk of a randomized mirror circuit is occupied by $\Omega$-random layers (which are the heart of the construction) and random Pauli layers. The random Pauli layers play a simple role: they maximize the disorder of each circuit (no matter what $\Omega$ is used) and locally scramble errors. They impose local basis randomization, which ensures that systematic, coherent errors on the layers almost surely do not align or anti-align, and therefore do not interfere constructively or destructively over many circuit layers.
This has an effect somewhat similar to Pauli frame randomization \cite{knill2005quantum, wallman2015noise, ware2018experimental}. However, in contrast to Pauli frame randomization, the Pauli layers in our circuits are \emph{not} resampled each time the circuit is run. This is because our aim is not to convert all types of error into stochastic Pauli errors --- instead we are aiming to benchmark a processor's performance on disordered circuits. (Note, however, that a processor is free to implement our benchmarking circuits using randomized gate implementations. As discussed above, our construction is agnostic as to how the layers are implemented.) The central random Pauli layer, which appears in \emph{all} our mirror circuits (including the periodic ones shown in Fig.~\ref{fig:pmcs}), plays a special role. It prevents cancellation of errors between a circuit $C$ and the ``quasi-inverse'' circuit $\tilde{C}^{-1}$ used to mirror $C$. The alternating layers of randomized Pauli gates --- which only appear in \emph{randomized} mirror circuits --- play a similar role for each layer. They limit the degree to which coherent errors can systematically add or cancel between layers, on average. The $\Omega$-random layers also prevent systematic addition and cancellation, but the addition of the uniformly random Pauli layers causes the rate at which coherent errors systematically add or cancel to depend only weakly on $\Omega$. This is convenient, because varying $\Omega$ is useful for generating varied and interesting benchmarking circuit ensembles. The theory of direct randomized benchmarking \cite{proctor2018direct} can be used to show that the mean success probability of a randomized mirror circuit sampled according to $\Omega$ is closely related to the process fidelity of an $\Omega$-random circuit layer. Similar relationships hold for other kinds of randomized circuit \cite{magesan2011scalable, boixo2018characterizing, cross2018validating}. However, we do not use this relationship in this paper, so we do not pursue it further here. \subsection{Circuit samplers} Varying the distribution $\Omega$ over Clifford layers provides a way to tune and control important properties of the random mirror circuit benchmark. One of the most important properties is the density of two-qubit gates within the circuits, which we denote $\xi$. Each of our experiments used a distribution $\Omega$ over layers constructed from the following native gate set: \begin{itemize} \item a set of single-qubit Clifford gates, $\mathbb{G}_1$, each of which may be applied to any qubit, and \item a single two-qubit Clifford gate that may be applied to any pair of connected qubits. \end{itemize} We define $\Omega$ distributions constructively, by defining \emph{samplers} that generate layers. Assigning two-qubit gates is the trickiest part of this sampling, and to do so we make use of an \emph{edge sampler} that we denote $\chi$. An edge sampler $\chi$ takes as input the connectivity graph of the $w$ qubits being benchmarked, and (usually) a parameter to control the number of edges that will be sampled. It outputs a subset of edges that have no qubits (nodes) in common. This can be done in several ways, and we discuss the particular edge samplers we used in a moment. We can use any such edge sampler $\chi$ to sample a $w$-qubit layer (which defines an $\Omega$) as follows: \begin{enumerate} \item Use $\chi$ to select a set of disjoint connected pairs of qubits from the $w$ available qubits.
\item Add a two-qubit gate on each edge selected in Step 1. \item Assign a uniformly random single-qubit gate from $\mathbb{G}_1$ to each remaining qubit. \end{enumerate} This sampling allows us to control the two-qubit gate density in the circuits, while guaranteeing that a typical circuit is always highly disordered. To specify a particular distribution $\Omega$, we only need to specify the edge sampler $\chi$. Below are the $\chi$ samplers used in our two experiments. \subsubsection{The circuit sampling of experiment \#1}\label{sec:e1-sampler} The randomized mirror circuits of experiment \#1 were sampled using a particularly simple edge sampler ($\chi_1$). It returns either \emph{zero} edges (with probability $\nicefrac{1}{2}$), or a single edge selected uniformly at random from the $w$-qubit connectivity sub-graph (with probability $\nicefrac{1}{2}$). This sampling algorithm is not appropriate for arbitrarily large processors, because the expected two-qubit gate density of the circuits it generates ($\bar{\xi}$) goes to zero as $w \to \infty$. However, it is simple and transparent, and in the 1-16 qubit regime of our experiments it generates a useful array of circuits. The low density of two-qubit gates ensures low enough error rates that we can actually probe how device performance varies with $d$ and $w$ (rather than seeing the success probability drop below measurable levels even for very small circuits). \subsubsection{The circuit sampling of experiment \#2}\label{sec:e2-sampler} The randomized mirror circuits of experiment \#2 are sampled using an edge sampler ($\chi_{\bar{\xi}}$) that we call the \emph{edge grab}. It is parameterized by the expected two-qubit gate density of the sampled circuits, $\bar{\xi}$. This sampler is designed for generating randomized mirror circuit benchmarks on arbitrarily large processors. Before we introduce the edge grab, we need to clarify our definition of two-qubit gate density ($\xi)$. The two-qubit gate density of a circuit $C$ with shape $(w,d)$ that contains $\alpha$ two-qubit gates is defined as $\xi = \nicefrac{2\alpha}{wd}$. If the circuit is thought of as a $w \times d$ lattice, this is the proportion of the lattice sites that are occupied by a two-qubit gate. In this work we use the benchmark depth to define $\xi$. The edge grab procedure $\chi_{\bar{\xi}}$ is defined as follows: \begin{enumerate} \item \emph{Select a candidate set of edges $E$}. Initialize $E$ to the empty set, and initialize $E_{r}$ to the set of all edges in the connected sub-graph of the $w$ qubits. Then, until $E_{r}$ is the empty set: \begin{enumerate} \item[1.1] Select an edge $v$ uniformly at random from $E_{r}$. \item[1.2] Add $v$ to $E$ and remove all edges that have a qubit in common with $v$ from $E_{r}$. \end{enumerate} \item \emph{Select a subset of the candidate edges}. For each edge in $E$, include it in the final edge set with a probability of $w\bar{\xi}/|E|$ where $|E|$ is the total number of edges in $E$. \end{enumerate} The expected number of selected edges is $w \bar{\xi}$, so this sampler generates a $w$-qubit layer with an expected two-qubit gate density of $2\bar{\xi}$. Because only half of the layers in a randomized mirror circuit are sampled using $\chi$, randomized mirror circuits sampled according to the edge grab sampler have an expected two-qubit gate density of $\bar{\xi}$. Individual circuits' two-qubit gate density will fluctuate around this value, but the ensemble variance of $\xi$ converges to zero as the circuit size increases. 
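A minimal Python sketch of this edge-grab sampler follows (the data structures and random-number handling are illustrative assumptions, not the sampling code used in our experiments).
\begin{verbatim}
# Sketch of the edge-grab sampler: pick a maximal set of disjoint candidate
# edges at random, then keep each candidate with probability w*xi_bar/|E| so
# that the expected number of two-qubit gates in the layer is w*xi_bar.
import random

def edge_grab(edges, w, xi_bar, rng=random):
    # edges: undirected edges (i, j) of the w-qubit connectivity sub-graph
    remaining = list(edges)
    candidates = []
    while remaining:                      # Step 1: build the candidate set E
        v = rng.choice(remaining)
        candidates.append(v)
        remaining = [e for e in remaining if not set(e) & set(v)]
    keep_prob = w * xi_bar / len(candidates)
    assert keep_prob <= 1, "xi_bar too large for this connectivity"
    return [e for e in candidates if rng.random() < keep_prob]   # Step 2

# Example: the two-qubit-gate locations of one layer, for 4 qubits in a line
# and xi_bar = 1/8; the remaining qubits get random one-qubit gates.
print(edge_grab([(0, 1), (1, 2), (2, 3)], w=4, xi_bar=0.125))
\end{verbatim}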
This sampling algorithm has another nice property: the probability of sampling a particular $w$-qubit layer $L$ is non-zero for every $L \in \mathbb{L}_w$ (except if $\bar{\xi}=0$ or $\bar{\xi} = \nicefrac{1}{2}$). The edge grab algorithm is invalid if $w\bar{\xi}/|E| > 1$ for any possible candidate edge set $E$. For an even number of fully-connected qubits, $\bar{\xi}$ can take any value between 0 and $\nicefrac{1}{2}$ (note that $\nicefrac{1}{2}$ is the maximum possible $\xi$ in a randomized mirror circuit, as half the layers in these circuits contain only single-qubit gates). But for any other connectivity the maximum achievable value of $\bar{\xi}$ is smaller. In our experiments, we set $\bar{\xi} = \nicefrac{1}{8}$. This is an achievable value of $\bar{\xi}$ in the edge grab algorithm for all the processors that we benchmarked. \begin{figure*} \caption{\textbf{Periodic mirror circuits.}} \label{fig:pmcs} \end{figure*} \section{Periodic mirror circuits} \label{sec:pmcs} Our experiments consisted of running two types of mirror circuit benchmark: \emph{randomized mirror circuits} and \emph{periodic mirror circuits}. In this appendix we define the class of periodic mirror circuits, and the specific periodic mirror circuits that we ran in our experiments. These circuits are constructed using the mirroring circuit transformations introduced in Appendix~\ref{sec:mirroring}, but note that the definitions in this appendix are self-contained. \subsection{Definition} Fig.~\ref{fig:pmcs} illustrates the form of our periodic mirror circuits. They are based on repetitions of a low-depth \emph{germ circuit} $C_g$, named following the terminology of gate set tomography \cite{blume2016certifying}. For a given germ circuit $C_g$ of shape $(w,d_g)$, a width-$w$ periodic mirror circuit with a benchmark depth of $d$ consists of: \begin{itemize} \item[(a)] An initialization layer placing all $w$ qubits in the $\ket{0}$ state. \item[(b)] A layer of uniformly random single-qubit Clifford gates on each qubit. \item[(c)] A depth $\nicefrac{d}{2}$ circuit constructed by repeating $C_g$ $\left\lceil \nicefrac{d}{2d_g} \right\rceil$ times, and removing the final $\left(d_g\left\lceil \nicefrac{d}{2d_g} \right\rceil - \nicefrac{d}{2}\right)$ layers, so that the result has depth exactly $\nicefrac{d}{2}$. \item[(d)] A layer of uniformly random Pauli operators on each qubit. \item[(e)] The layers from step (c) in the reverse order and with each layer replaced with its inverse. \item[(f)] The inverse of the Clifford layer from step (b). \item[(g)] A layer reading out each qubit in the computational basis. \end{itemize} Periodic mirror circuits can have any width $w$, and any benchmark depth $d$ that is an integer multiple of two. As with all our benchmark circuits, note that the full depth $d_0$ of the circuit is $d_0 = d + 5$, so we have removed the constant contribution of the five layers in (a), (b), (d), (f) and (g) from our definition of the benchmark depth. \subsection{Selecting the germ circuit} Defining a specific set of periodic mirror circuits --- or a specific distribution over periodic mirror circuits --- requires choosing a method for selecting germ circuits $C_g$. Repeating a specific germ amplifies the effect of some errors, while suppressing others \cite{blume2016certifying}. For example, a single-qubit germ circuit consisting of a single $X$ gate amplifies coherent over/under-rotation errors in the $X$ gate, but it suppresses the effect of ``tilt'' errors --- i.e., $Y$ or $Z$ Hamiltonians that change the rotation axis of the $X$ gate.
(For example, if $X$ is implemented perfectly except that it is followed by a small $\hat{z}$-axis coherent error, then a circuit consisting of an even number of $X$ gates composes to an exact identity). It is possible, in principle, to construct germs that, collectively, amplify all the parameters in a specific error model \cite{blume2016certifying}. But a general model of Markovian errors on $w$ qubits contains $16^w-4^w$ parameters \emph{per layer}, and amplifying all those parameters is infeasible. We could define a much smaller error model and construct germs that amplify all its parameters, but this is only well-motivated if that smaller model accurately describes the tested processors. We therefore take a different approach: we sample germs at random, using an algorithm that is biased towards amplifying parameters that are likely to be physically important. \subsubsection{The germ selection of experiment \#2}\label{sec-germ-e2} We ran periodic mirror circuits in experiment \#2. Here we describe the germ sampling algorithm that we used. It is composed of two steps. The first step constructs a germ circuit composed of only single-qubit gates, and the second step replaces some of these gates with two-qubit gates. This protocol is somewhat complicated, but was designed for investigating our specific scientific question --- the effect of circuit order on circuit failure rates --- and it is not intended as a general-purpose germ selection routine. We expect that different algorithms for generating periodic mirror circuits will be useful for, e.g., creating standardized benchmarks. \noindent \textbf{Step 1}: The first step in our algorithm is to create a width-$w$ germ circuit $C_g$ that contains only single-qubit gates from some set $\mathbb{G}_1$ (in our experiments, $\mathbb{G}_1$ was the 24-element group of single-qubit Clifford gates). Our specific sampling algorithm was: \begin{enumerate} \item \emph{Select a germ depth $d_{g}$}. We do this by setting $d_{g} = 2^x$ with probability $1/2^{x+1}$ for $x=0,1,2,\dots$, and then truncating the depth to $8$. That is, if $d_{g} > 8$ we set $d_{g} = 8$. An exponentially decaying probability truncated at depth 8 is useful because a depth-$d$ circuit constructed by repeating a germ of length $d_g$ is only periodic if $d > d_g$. Current processors cannot run very deep circuits without an error almost certainly occurring, so we can only study periodicity by repeating relatively shallow germ circuits. \item \emph{Select a local germ for each qubit}. For each of the $w$ qubits, indexed by $i$, we independently select a local germ $C_{l,i}$ by: \begin{enumerate} \item[2.1] Setting $d_{l,i} = 2^x$ with probability $1/2^{x+1}$ (for $x=0,1,2,\dots$), and, if the selected $d_{l,i}$ is greater than $d_{g}$, then setting $d_{l,i} = d_{g}$. \item[2.2] Setting $C_{l,i}$ to a uniformly random depth-$d_{l,i}$ sequence of single-qubit gates from $\mathbb{G}_1$. \end{enumerate} \item \emph{Combine the local germs}. Construct a germ circuit $C_g$ of depth $d_g$ by combining the $w$ independently selected local germs in parallel. To create a depth $d_g$ circuit, the local germ for qubit $q$ is repeated $\nicefrac{d_g}{d_{l,q}}$ times, where $d_{l,q}$ is the depth of that local germ. By construction, this consists of an integer number of repetitions of each local germ.
\end{enumerate}
We designed a sampling algorithm that has a strong bias towards shallow local germs --- e.g., the marginal probability of a depth-1 local germ is $\nicefrac{3}{4}$ --- because depth-1 germs amplify a particularly important class of errors that includes coherent over/under-rotations.
\noindent \textbf{Step 2}: The second step in our randomized germ selection algorithm is to replace some of the gates in $C_g$ with two-qubit gates (unless $w=1$, in which case this step is skipped). In order to test the hypothesis that periodic circuits perform worse than disordered circuits, we chose an algorithm that generates germs with a two-qubit gate density of $\xi \leq \nicefrac{1}{8}$, because $\nicefrac{1}{8}$ is the expected two-qubit gate density in the randomized mirror circuits that we ran alongside these periodic mirror circuits (see above, and note that these experiments are detailed further in Appendix~\ref{sec:b2}). This means that, if we observe worse performance on periodic mirror circuits, this cannot be explained by a higher $\xi$. The algorithm that we used, defined for $w>1$, is as follows:
\begin{enumerate}
\item Set $r$ to the minimum positive integer that satisfies $\nicefrac{2}{r d_g w} < \nicefrac{1}{8}$, where $d_g$ is the current germ's depth, and then replace the germ circuit $C_g$ with $r$ repetitions of $C_g$. This means that we can place at least one two-qubit gate within the germ and still obtain $\xi \leq \nicefrac{1}{8}$.
\item For each layer in the updated germ select a set of edges $E_{l}$, with $l= 1, 2, \dots, r d_g$, using the first step of the ``edge grab'' sampling algorithm (see Appendix~\ref{sec:rmcs}). Then combine them into a single set $E_g$ consisting of layer-index and edge pairs.
\item Place $n = \nicefrac{r d_g w}{16}$ two-qubit gates into the germ, by
\begin{enumerate}
\item selecting $n$ layer-index and edge pairs from $E_g$, uniformly at random, and
\item replacing the one-qubit gates at each of these positions in the germ with a two-qubit gate.
\end{enumerate}
\end{enumerate}
Note that germs generated via this algorithm have a two-qubit gate density of $\xi \leq \nicefrac{1}{8}$. However, periodic mirror circuits generated from these germs \emph{can} have a two-qubit gate density slightly above $\xi$, because a germ is only partially repeated in a depth-$d$ periodic mirror circuit if the germ circuit's depth is not a factor of $\nicefrac{d}{2}$.
\section{Predicting mirror benchmarks from a processor's error rates} \label{sec:predictions}
Figure 2c of the main text compares the \emph{measured} results of our benchmarks with the \emph{predicted} performance based on the published error rates provided for each of the quantum processors. In this appendix we explain how we obtain these predictions. The set of ``error rates'' $\{\epsilon\}$ provided for a given processor can consist of many different performance metrics estimated in many different ways. For example, the entire error rate set $\{\epsilon\}$ could consist of a single heuristic error rate for the entire processor. At the opposite extreme, $\{\epsilon\}$ could consist of all the parameters of a detailed process matrix error model fit using, e.g., gate set tomography \cite{blume2016certifying}. For the processors in our experiments, the contents of the error rate sets lie between these two extremes. They include summary error rates for the native logic operations, with the gate error rates measured by randomized benchmarking.
The error rate set represents a valuable description of a processor's performance, but it does not immediately imply a detailed predictive model for the processor. In order to predict a circuit's success probability from the provided error rates $\{\epsilon\}$, we need to construct a predictive formula or model in which the only parameters are (1) these error rates, and (2) the circuit. \subsection{Standard error rates}\label{sec:standard-error-rates} The exact metrics that constitute the reported error rates, $\{\epsilon\}$, display some minor variation across processors. But all of these can be straightforwardly transformed into a ``standard form'' capable of describing each of the processors we benchmarked. This standard form consists of: \begin{itemize} \item The estimated entanglement infidelity for each available single-qubit gate $G$ on each possible target physical qubit $i$, which we denote $\epsilon(G_i)$. \item The estimated entanglement infidelity for each available two-qubit gate, indexed by the target physical qubits, $i$ and $j$, which we denote $\epsilon(G_{i,j})$. \item A readout error rate $\epsilon(i)$ for each physical qubit $i$ defined by \begin{equation} \epsilon(i) =\frac{1}{2}\big( \;\mathsf{Pr}(1 \vert 0) + \mathsf{Pr}(0 \vert 1)\;\big), \end{equation} where $\mathsf{Pr}(x\vert y)$ is the probability of reading out $x$ on qubit $i$ after preparing that qubit in the state $\ket{y}$. \end{itemize} Initialization errors are not reported separately, and are instead implicitly included in the readout error rate, so $\epsilon(i)$ can be thought of as an average state preparation and measurement (SPAM) error. Further note that this standard form explicitly utilizes the \emph{entanglement} infidelity, rather than the \emph{average gate} infidelity that is the usual error metric associated with randomized benchmarking \cite{magesan2011scalable} (and which is the error metric used by IBM Q and Rigetti). The two infidelities are simply related to each other by the linear rescaling of Eq.~\eqref{eq:Fe-Fa}. \subsection{Constructing a predictive model} This standard form given above for the error rate set $\{\epsilon\}$ does not directly constitute a predictive model. Below we describe several increasingly detailed approaches for converting the descriptive error rates into predictive models of circuit success probabilities, and we highlight the method that we actually used. \subsubsection{A simple error accumulation formula} The simplest approach to predicting the success probability $S$ of a circuit $C$ is to compute the probability that \emph{no} error happens over the course of the circuit. This is simply the product of one minus the error rates of all the operations in $C$. In the small error and small circuit limit, the predicted \emph{failure} rate $(1-S)$ is then approximately the sum of the error rates of the constituent operations. So for the circuit $C=RL_dL_{d-1} \cdots L_2L_1 I$, we have: \begin{equation} S = s(R)s(L_d) \cdots s(L_2)s(L_1) s(I), \label{eq:S-naive-pred} \end{equation} where $s(L)$ is the success probability of layer $L$ given by the product of one minus the error rates of the layer's constituent operations: \begin{itemize} \item For the initialization layer $I$, $s(I)= 1$. As discussed above, errors in the initialization are captured by the ``readout'' error rates. 
\item For a gate layer $L$,
\begin{align}
s(L) &= \prod_{G \in L} (1-\epsilon(G)),
\end{align}
where the product is over the particular one- and two-qubit gates (on particular qubits) from which $L$ is constructed.
\item For the readout layer $R$,
\begin{align}
s(R) &= \prod_{i \in \mathbb{Q}} (1-\epsilon(i)), \label{eq:readout-layer-error}
\end{align}
where $\mathbb{Q}$ is the set of indices of the qubits on which $C$ acts.
\end{itemize}
\subsubsection{The global depolarization model}
Equation~\eqref{eq:S-naive-pred} is simple and intuitive, but it is flawed. This is because it implicitly assumes that two or more errors cannot cancel, and so it predicts that $S\to 0$ as the circuit depth $d \to \infty$, rather than $S \to \nicefrac{1}{2^w}$. So, instead, we use a formula that corrects for this. We predict $S$ using
\begin{equation}
S = \nicefrac{1}{2^w} + (s(R) - \nicefrac{1}{2^w})\lambda(L_d)\lambda(L_{d-1}) \cdots \lambda(L_1),
\end{equation}
where
\begin{equation}
\lambda(L) = \frac{1}{1-4^w}\left( 1 -4^w \prod_{G \in L} \left( 1-\epsilon(G) \right) \right).
\end{equation}
Although this formula might seem much more complex than Eq.~\eqref{eq:S-naive-pred}, it follows simply from modeling the error in each gate layer as a \emph{global} $w$-qubit depolarizing channel [see Eq.~\eqref{eq:global-dep}] with an entanglement fidelity equal to the product of the entanglement fidelities of the constituent gates --- which is how entanglement fidelity composes under tensor products. Moreover, note that this formula for $S$ depends \emph{approximately} only on the number of times each gate (and readout) appears in the circuit. This holds only approximately because errors compose differently when they occur in parallel or in series (errors on different qubits that occur in the same layer cannot cancel, whereas errors in different layers \emph{can} cancel).
\subsubsection{The local depolarization model}
An alternative model in which to embed the error rates is a local depolarizing model. In this model, each one-qubit gate $G_i$ is modeled as the perfect unitary followed by the one-qubit depolarizing channel $\mathcal{D}_{1,\epsilon(G_i)}$, and each two-qubit gate $G_{i,j}$ is modeled as the perfect unitary followed by the two-qubit depolarizing channel $\mathcal{D}_{2,\epsilon(G_{i,j})}$ [again, see Eq.~\eqref{eq:global-dep} for the definition of a $w$-qubit depolarizing channel]. This is arguably more physically well-motivated than the global depolarizing model, because it is consistent with the characterization experiments from which the error rates are extracted --- that is, under this model, one- and two-qubit randomized benchmarking will return the error rates used in the model (up to scaling differences between average gate and entanglement infidelity). However, unlike the previous two models, the local depolarization model does not lend itself to a compact, analytical formula for the success probability. In order to make predictions from a local depolarizing model it is necessary to simulate the circuit. Because our benchmarks use Clifford circuits, weak simulation (i.e., sampling from the circuit's output distribution) under local depolarization is efficient in both the circuit depth $d$ and width $w$. Strong simulation (i.e., computing the success probability exactly) is expensive, however, scaling exponentially in $d$. Somewhat surprisingly, the success probabilities predicted by this model are typically \emph{approximately} the same as those predicted by a corresponding global depolarizing model.
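For concreteness, the following is a minimal sketch of how the global depolarizing prediction above can be evaluated from a standard-form error rate set. It is not the code that we used, and the data structures and function names are hypothetical.
\begin{verbatim}
# Minimal sketch (hypothetical data structures): evaluate the global
# depolarizing prediction for one circuit. `layers` is a list of gate layers,
# each given as a list of the entanglement infidelities of its gates;
# `readout_errors` lists the readout error rates of the circuit's qubits.

def layer_lambda(gate_error_rates, width):
    # Polarization of one layer: a global w-qubit depolarizing channel whose
    # entanglement fidelity is the product of the gates' fidelities.
    fidelity = 1.0
    for eps in gate_error_rates:
        fidelity *= 1.0 - eps
    dim_sq = 4 ** width
    return (1.0 - dim_sq * fidelity) / (1.0 - dim_sq)

def predict_success_probability(layers, readout_errors, width):
    s_readout = 1.0
    for eps in readout_errors:
        s_readout *= 1.0 - eps
    lam = 1.0
    for layer in layers:
        lam *= layer_lambda(layer, width)
    base = 1.0 / 2 ** width  # asymptotic success probability of a deep circuit
    return base + (s_readout - base) * lam

# Example: a width-2 circuit with three gate layers.
print(predict_success_probability(
    layers=[[1e-3, 1e-3], [2e-2], [1e-3, 1e-3]],
    readout_errors=[3e-2, 3e-2],
    width=2))
\end{verbatim}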
The two predictions agree because, under either model, a circuit's success probability is controlled only by (1) the rate at which errors occur and (2) the rate at which errors cancel. The error occurrence rate is equal in both models, and the error cancellation rate is \emph{almost} equal in both models \emph{unless} there is a large variance in the gate error rates on different qubits. For these reasons, we choose to use the global depolarizing model in this work.
\section{Experiment \#1} \label{sec:b1}
The purpose of this appendix is to describe the benchmarking experiments and data analysis summarized in Fig.~1d of the main text. Throughout these appendices we refer to these benchmarking experiments collectively as \emph{experiment \#1}. This appendix is not intended to be self-contained, and we make explicit references to earlier appendices when necessary. This appendix consists of two parts: in Appendix~\ref{sec:e1-exps} we detail the experiments, and in Appendix~\ref{sec:e1-analysis} we detail the data analysis.
\subsection{Experimental details}\label{sec:e1-exps}
Experiment \#1 used randomized mirror circuits to benchmark each of the twelve processors shown schematically in Fig.~1d. The benchmarking circuits were designed using a procedure we refer to as \emph{benchmark \#1} that can be applied to any gate-model quantum information processor. Benchmark \#1 has two notable properties. First, it was designed specifically for processors with fewer than $\sim 20$ qubits (in contrast to the benchmark of experiment \#2, described in Appendix~\ref{sec:b2}). Second, these benchmarking experiments took place over a period of time during which our methods were still under active development (the experiment dates range from July 2018 to November 2019), and so some minor aspects of the procedure changed over this time. We will note these changes explicitly as we introduce the benchmark. It was not possible to re-run all of the experiments with identical procedures, because not all of the processors were available for the full period of this research (in particular, IBM Q Rueschlikon, IBM Q Tenerife, Rigetti Agave and Rigetti Aspen 6 were no longer available in autumn 2019). This contrasts with experiment \#2 (see Appendix~\ref{sec:b2}), which is entirely standardized across the eight tested processors.
\subsubsection{Circuit benchmarking algorithms}\label{sec:b-alg}
Benchmark \#1 is an algorithmic approach for generating mirror circuit benchmarks to run on generic gate-model quantum information processors. It utilizes the following processor-specific information:
\begin{enumerate}
\item A single-qubit gate set $\mathbb{G}_1$. We assume that all gates in $\mathbb{G}_1$ can be applied to any qubit on the processor.
\item A two-qubit gate $G_2$. Without loss of generality, this gate is assumed to be asymmetric and may be applied to any adjacent qubits on the processor's directed connectivity graph. (Fig.~1d displays the undirected connectivity graphs for each of the twelve processors we tested.)
\end{enumerate}
The algorithm then generates a suite of circuits to run on the target processor. The qubits in each circuit are explicitly assigned to specific physical qubits, and the circuits are composed of layers built from the $\mathbb{G}_1$ and $G_2$ gates allowed by the device's connectivity. The motivation for choosing this sort of benchmarking circuit is covered in detail in Appendix~\ref{sec:layer-set}.
\subsubsection{Gate set}\label{sec:b1-gates}
The IBM Q and Rigetti processors use different native gate sets.
For this reason, we chose different gate sets for the IBM Q and Rigetti processors:
\begin{itemize}
\item IBM Q processors:
\subitem $\mathbb{G}_1 = \mathbb{C}_1$, where $\mathbb{C}_1$ is the set of all 24 single-qubit Clifford gates,
\subitem $G_2 = \ensuremath{\mathsf{CNOT}}\xspace$.
\item Rigetti processors:
\subitem $\mathbb{G}_1$ comprises an idle gate, the three other single-qubit Pauli gates, and $\pm \nicefrac{\pi}{2}$ rotations around $\hat{x}$ and $\hat{z}$. $\mathbb{G}_1$ is a strict subset of $\mathbb{C}_1$.
\subitem $G_2 = \ensuremath{\mathsf{CPHASE}}\xspace$.
\end{itemize}
In contrast, in experiment \#2 we standardized the single-qubit gate set (to $\mathbb{G}_1 = \mathbb{C}_1$).
\subsubsection{Circuit shapes}
The first step in the benchmark \#1 algorithm is to select the set of circuit shapes at which to construct benchmarking circuits. For an $n$-qubit processor, we chose the circuit shapes $(w,d)\in\mathbb{W}_n\times\mathbb{D}$, where:
\begin{align}
\mathbb{W}_n &= \{\,2^j \;\vert\; j \in [0\,..\,\lfloor \log_2(n)\rfloor\,] \,\} \;\cup\; \{n\}\\
&=\{1,2,4,\ldots,n\}, \\
\mathbb{D} &= \{0\} \cup \{ \,4 \lfloor 1.4^j \rfloor \; \vert\; j \in [1..13] \}\\
&= \{ 0, 4, 8, 12, 20, 28, 40, 56, 80, 112, 160, 224, 316 \}.
\end{align}
The circuit widths $w\in\mathbb{W}_n$ are powers of 2, with $w=n$ additionally included as the largest width (regardless of whether $n$ itself is a power of 2 or not). The benchmark depths $d\in\mathbb{D}$ are approximately exponentially spaced. We enforce that all depths are an integer multiple of 4, as this is a requirement of randomized mirror circuits (see Appendix~\ref{sec:rmcs}). For six of the twelve experiments (IBM Q Rueschlikon, IBM Q Melbourne, IBM Q Tenerife, Rigetti Agave, Rigetti Aspen 4 and Rigetti Aspen 6) we excluded the depths $\{56, 112, 160, 224, 316\}$. For the other six experiments, which were all on 5-qubit IBM Q processors, we iteratively excluded the largest depth as the width was increased, i.e., the shapes $(2,316)$, $(4,316)$, $(4,224)$, $(5,316)$, $(5,224)$, and $(5,160)$ were excluded. These choices were made in order to reduce the number of circuits required and/or due to limitations in what a particular processor could run.
\subsubsection{Circuit embeddings}
For any circuit width $w < n$ we must select a set (or several sets) of $w$ connected physical qubits \emph{before} generating our benchmark circuits. This is because our benchmarks use layers of native gates, so we need to ensure that our circuits respect the connectivity constraints of the $w$ selected physical qubits. For most common connectivity graphs, as $n$ increases there is a rapidly increasing number of distinct connected sets of $w$ qubits for any non-extremal width (i.e., a width $w$ satisfying $1\ll w\ll n$). For each width $w$ we select multiple width-$w$ sets $s_w$. We do so as follows:
\begin{itemize}
\item For a processor of $n \leq 5$ qubits, for each width we select every possible set of $w$ connected qubits.
\item For a processor of $n > 5$ qubits, for each width $w$ we select $\lceil \nicefrac{n}{w} \rceil$ sets of $w$ connected qubits such that every qubit is in at least one set of each size (here $\lceil \cdot \rceil$ denotes the ceiling function, i.e., rounding up).
\end{itemize}
\subsubsection{Circuit sampling}
For each processor, each circuit shape $(w,d)$, and each chosen set of $w$ qubits ($s_w$) we sampled 40 shape-$(w,d)$ randomized mirror circuits acting on those $w$ qubits.
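To summarize the experiment design described above, the following is a minimal sketch of the loop over circuit shapes and qubit sets (all names are hypothetical; the randomized mirror circuit sampler itself is discussed next).
\begin{verbatim}
# Minimal sketch of the benchmark #1 design loop (hypothetical names). The
# callable sample_circuit(width, depth, qubits) stands in for the randomized
# mirror circuit sampler described below.

CIRCUITS_PER_SHAPE = 40

def design_benchmark(widths, depths, qubit_sets, sample_circuit):
    # qubit_sets[w] is the list of connected w-qubit sets chosen for width w.
    # Returns a dict mapping (width, depth, qubit set) to a list of 40 circuits.
    design = {}
    for w in widths:
        for qubits in qubit_sets[w]:
            for d in depths:
                design[(w, d, tuple(qubits))] = [
                    sample_circuit(w, d, qubits)
                    for _ in range(CIRCUITS_PER_SHAPE)
                ]
    return design
\end{verbatim}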
These circuits were constructed using the $\chi_1$ sampler introduced in Appendix~\ref{sec:e1-sampler}. The code that we used to perform this sampling has been incorporated into the open-source software package \texttt{pyGSTi} \cite{nielsen2020probing, pygstiversion0.9.9.1}. \subsubsection{Experimental details} We ran benchmark \#1 on the twelve processors shown in Fig.~1d. The experiments were run using the online access services of IBM Q \cite{ibmq2} and Rigetti \cite{rigetti-qcs}. Both IBM Q and Rigetti routinely recalibrate their processors; all of the circuits were run within a single calibration window. Each circuit was repeated 1024 times, except for the experiments on Rigetti Agave, where each circuit was repeated 1000 times. For our first six experiments (IBM Q Melbourne, IBM Q Rueschlikon, IBM Q Tenerife, Rigetti Agave, Rigetti Aspen 4, and Rigetti Aspen 6), equal-depth, single-qubit circuits on different qubits were implemented simultaneously \cite{gambetta2012characterization}. That is, for each processor and each circuit depth $d$, the $40n$ width-1 circuits were combined into 40 width-$n$ circuits consisting of running one of the 40 depth-$d$ circuits for each qubit in parallel. For the circuit embedding strategy for processors of more than five qubits, this approximately halves the total number of circuits that need to be run. In an ideal processor, running these circuits in parallel has no effect on their outputs, but for real processors this is typically not the case, due to pulse spillover and other crosstalk effects \cite{gambetta2012characterization, sarovar2019detecting}. So, in our later six experiments (IBM Q Yorktown, IBM Q Ourense, IBM Q Essex, IBM Q London, IBM Q Vigo, and IBM Q Burlington), we ran the width-1 circuits separately. We did not parallelize any of the $w>1$ circuits in any of our experiments, as two-qubit gate crosstalk is known to often be a significant effect in superconducting chips \cite{rudinger2018probing, proctor2018direct, harper2019efficient}. \subsection{Data analysis}\label{sec:e1-analysis} The results of experiment \#1 are summarized in the \emph{volumetric benchmarking} plots \cite{blume2019volumetric} of Fig.~1d. Here we explain the data analysis used to generate these plots. In this appendix we use notation that distinguishes between a circuit's true success probability ($S$) and an observed success probability ($\hat{S}$) obtained from a finite number of repetitions of that circuit. As noted above, for some processors we left out depth 56 in order to reduce the total number of circuits. For these processors, in the plots in Fig.~1d the boxes (and frontiers) at depths 40 and 80 are stretched horizontally to meet at depth 56, so that there is no empty space in the plots. \subsubsection{Circuit polarization}\label{sec:pol} For each circuit that we ran, we calculate the observed polarization \begin{equation} \hat{P} = (\hat{S} - \nicefrac{1}{2^w})/(1- \nicefrac{1}{2^w}), \end{equation} where $\hat{S}$ is that circuit's observed success probability and $w$ is the circuit's width. The polarization removes few-qubit effects. If a processor is subject only to depolarizing noise, then $1 \geq S \geq \nicefrac{1}{2^w}$, and as $d \to \infty$ then $S \to \nicefrac{1}{2^w}$ for any shape $(w,d)$ circuit. This is because a deep circuit will output $w$ uniformly random bits. So, under this noise model, $P\to 0$ as $d\to \infty$ and $1 \geq P \geq 0$ for any width circuit. 
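As a minimal sketch (with hypothetical variable names), the observed polarization of a single circuit can be computed from its measurement counts as follows.
\begin{verbatim}
# Minimal sketch (hypothetical names): observed success probability and
# polarization of one circuit. `counts` maps measured bit strings to the
# number of times they were observed; `target` is the circuit's target
# bit string.

def observed_polarization(counts, target, width):
    shots = sum(counts.values())
    s_hat = counts.get(target, 0) / shots   # observed success probability
    base = 1.0 / 2 ** width                 # random-guessing baseline, 1/2^w
    return (s_hat - base) / (1.0 - base)

# Example: a width-2 circuit, 1024 shots, target bit string '00'.
print(observed_polarization({'00': 900, '01': 50, '10': 40, '11': 34}, '00', 2))
\end{verbatim}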
Note that $\hat{P}$ can be negative, which can be caused by finite sampling or because $P$ itself can be negative under more general error models. \subsubsection{Selecting the best qubits} For each benchmarked circuit shape $(w,d)$ and each benchmarked subset of $w$ qubits ($s_w$) we ran 40 distinct randomized mirror circuits and measured 40 corresponding observed polarizations $\hat{P}$. These polarizations are collected into a set $\hat{\mathbf{P}}(w,d,s_w)$ for each circuit shape and qubit set. For each width $w$, the first step in our analysis is to identify the single set of qubits $b_{w}$ that we deem to have performed the best on our benchmarking circuits. We then discard the data for all other qubit sets, and generate volumetric benchmarking plots using only $\hat{\mathbf{P}}(w,d) \equiv \hat{\mathbf{P}}(w,d, b_w)$. We select $b_w$ to be the $w$ qubits with the largest $d_{\text{mean}}$, where $d_{\text{mean}}$ is the smallest depth at which the mean polarization drops below $\nicefrac{1}{e}$. When more than one of the benchmarked sets of $w$ qubits have the same value for $d_{\text{mean}}$ we choose the set of qubits with the largest mean polarization at that depth. This process means that we have selected the $w$ qubit subsets that maximize the depth of the processor's mean polarization $\nicefrac{1}{e}$ frontier, which is the solid black line in each panel of Fig.~1d (discussed below). \subsubsection{Maximum, minimum and mean polarization} In the volumetric benchmarking plot for each processor in Fig.~1d, we display the best, average, and worst case polarization versus circuit shape for the best-performing sets of qubits. That is, at each circuit shape $(w,d)$, we plot the maximum, mean, and minimum of $\hat{\mathbf{P}}(w,d)$. A circuit's polarization can be negative, so we truncate each of our performance metrics to zero. In the case of the mean, this truncation occurs \emph{after} averaging. \subsubsection{Performance frontiers} In each panel of Fig.~1d we plot three frontiers, corresponding to the circuit shapes at which the maximum, mean and minimum polarizations drop below $\nicefrac{1}{e}$. For performance frontiers it is often useful to account for finite sampling effects, i.e., to adjust for the finite number of repetitions of each circuit. Details of this statistical analysis are given below. For now, we assume a statistic-specific function $f$ that takes $\hat{\mathbf{P}}(w,d)$, the set of observed polarizations, and returns ``pass'' or ``fail'' for that circuit shape. It is convenient to enforce that the frontier be monotonic, in the sense that as width or depth is increased the boundary is guaranteed to only be crossed once. So, given an $f$ function, the frontier is calculated as follows: \begin{enumerate} \item For each tested circuit shape $(w, d)$ use $f(\hat{\mathbf{P}}(w,d))$ to designate that circuit shape as a ``pass'' or a ``fail''. \item Set the frontier to the border of the largest region $R$ for which, if $(w^*,d^*) \in R$, $w\le w^*$, and $d\le d^*$, then $f(\hat{\mathbf{P}}(w,d))=$ ``pass''. \end{enumerate} Of course, frontiers may be calculated for any threshold value. We choose $\nicefrac{1}{e}$ because circuit polarization will decay exponentially with the benchmark depth under the simplest error model --- uniform, layer-independent depolarization (i.e., a global depolarizing channel with the same error rate for every circuit layer). 
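For concreteness, the following is a minimal sketch (hypothetical data structures) of the monotonic ``pass'' region defined by steps 1--2 above; the plotted frontier is the border of this region.
\begin{verbatim}
# Minimal sketch: the monotonic "pass" region. `passed[(w, d)]` is the
# pass/fail verdict f(P_hat(w, d)) for each tested circuit shape.

def pass_region(passed):
    region = set()
    for (w_star, d_star) in passed:
        # (w*, d*) is in the region only if every tested shape it dominates
        # (w <= w*, d <= d*) was designated a "pass".
        if all(verdict for (w, d), verdict in passed.items()
               if w <= w_star and d <= d_star):
            region.add((w_star, d_star))
    return region
\end{verbatim}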
When the decay is approximately exponential the frontier is a visual representation of the rate of this approximately exponential decay. \subsubsection{Accounting for finite sample fluctuations}\label{sec:hypothesis} The most appropriate method for accounting for the finite number of repetitions of each circuit ($N$) when calculating a statistic's frontier depends on the inferences that will be made from that frontier. In the case of the mean, we use the ``raw'' frontier that has no finite sampling adjustments. That is, for the mean, we use an $f$ function that simply returns ``pass'' if the mean of $\hat{\mathbf{P}}(w,d)$ is above $\nicefrac{1}{e}$ and otherwise it returns ``fail''. This is an unbiased estimate of whether the mean polarization is above or below the threshold value, and so it is a natural choice. Different choices are possible of course, and \emph{statistical hypothesis testing} \cite{lehmann2006testing} provides a rigorous framework for constructing broad classes of thresholding functions. For the case of the mean, for instance, one may desire a function $f$ that hypothesizes the mean is above a threshold, returning ``fail'' if and only if there is \emph{statistically significant} evidence that the mean is below the $\nicefrac{1}{e}$ threshold value. For the maximum and minimum frontiers, we will utilize this hypothesis testing framework exclusively. In the main text we use the observation of a substantial discrepancy between the maximum and minimum frontiers as evidence that that processor is subject to highly structured errors. We therefore chose to calculate the maximum and minimum frontiers using a statistical hypothesis test that is designed so that the boundaries will be equal if there is no statistically significant evidence in the data to the contrary. This therefore guarantees that any observed discrepancy is not simply an artifact of finite $N$. At each circuit shape, we start from the null hypotheses $H_0$ that either $H_{\uparrow}$ is true or $H_{\downarrow}$ is true, where: \begin{itemize} \item $H_{\uparrow}$ is the hypothesis that every circuit of this shape that we ran has a polarization that is above the $\nicefrac{1}{e}$ threshold. \item $H_{\downarrow}$ is the hypothesis that every circuit of this shape that we ran has a polarization that is below the $\nicefrac{1}{e}$ threshold. \end{itemize} Note that these hypotheses are about the circuits that we ran, \emph{not} the distribution of circuits from which they were sampled. Starting from the $H_0$ hypothesis at each circuit shape encodes our aim of starting from the assumption that the maximum and minimum frontiers are equal. Only if we can reject $H_0$ at a given circuit shape, using a statistical hypothesis test with 5\% significance, do we assign ``pass'' to the maximum polarization and ``fail'' to the minimum polarization. Otherwise we assign ``pass'' or we assign ``fail'' to both statistics (using the strategy outlined below). To test the null hypothesis $H_0$ at a given circuit shape, we perform two statistical hypothesis tests at $5\%$ significance --- one that tests for evidence to reject $H_{\uparrow}$, and one that tests for evidence to reject $H_{\downarrow}$. Because we must reject both $H_{\uparrow}$ and $H_{\downarrow}$ to reject $H_0$, the significance of this type of test for $H_0$ is $5\%$. The two tests that we use are equivalent, so we only describe the test of $H_{\uparrow}$. 
This test is more simply described in terms of each circuit's observed success probability $\hat{S}$, rather than in terms of the polarizations. The statistical hypothesis test of $H_{\uparrow}$ that we use consists of $K$ log-likelihood ratio tests \cite{rudinger2018probing}, where $K$ is the number of circuits of that shape (here $K=40$). We test whether each observed success probability $\hat{S}$ is consistent with the null hypothesis that it is the average of $N$ draws from a 0/1-valued ``coin'' whose probability $S$ of outputting 1 is above $T_S = (1 - \nicefrac{1}{2^w})\nicefrac{1}{e} + \nicefrac{1}{2^w}$ ($T_S$ is the $\nicefrac{1}{e}$ polarization threshold rescaled to the equivalent success probability threshold). Because $K$ hypothesis tests are performed, we must account for multiple comparisons in order to maintain the overall test significance at $5\%$. We do so using the Benjamini-Hochberg procedure \cite{benjamini1995controlling}. We then reject $H_{\uparrow}$ if any of the tests indicate that their circuit's $S$ is below $T_S$. This is similar to rejecting $H_{\uparrow}$ if the smallest p-value in these $K$ tests is smaller than $\nicefrac{0.05}{K}$, which is the well-known Bonferroni correction, but this testing procedure is more powerful. (The Benjamini-Hochberg procedure with significance $\alpha$ guarantees that, if all the tested null hypotheses are true, the probability of rejecting one or more null hypotheses is at most $\alpha$. This is known as weak control of the family-wise error rate. As we are using these 40 tests as a method for testing the composite null hypothesis $H_{\uparrow}$, which is true if and only if all of the individual null hypotheses are true, this is sufficient to maintain the test significance.)
There are four possible results of these two hypothesis tests, corresponding to all combinations of rejecting or not rejecting $H_{\uparrow}$ and $H_{\downarrow}$. As we already noted, if we reject both hypotheses (and so reject $H_0$) then we assign ``pass'' to the maximum polarization and ``fail'' to the minimum polarization. Otherwise, we assign the same output to both the maximum and minimum polarization, as follows:
\begin{itemize}
\item If we reject $H_{\downarrow}$ but not $H_{\uparrow}$ then we designate both the maximum and minimum polarization as ``pass.''
\item If we reject $H_{\uparrow}$ but not $H_{\downarrow}$ then we designate both the maximum and minimum polarization as ``fail.''
\item If we reject neither $H_{\uparrow}$ nor $H_{\downarrow}$ then we designate both the maximum and minimum polarization as ``pass'' if the maximum polarization is further from the $\nicefrac{1}{e}$ threshold than the minimum polarization, and as ``fail'' otherwise.
\end{itemize}
Alternative techniques for generating frontiers from data may be preferable in other contexts.
\section{Experiment \#2}\label{sec:b2}
The purpose of this appendix is to describe the experiments, and the corresponding data analysis, that are summarized in Figs.~2-3 of the main text. Throughout these appendices we refer to this as \emph{experiment \#2}. This appendix is not intended to be self-contained, and we make explicit references to earlier appendices when necessary. This appendix consists of two parts: in Appendix~\ref{sec:e2-exps} we detail the experiments, and in Appendix~\ref{sec:e2-analysis} we detail the data analysis.
\subsection{Experimental details} \label{sec:e2-exps} Experiment \#2 used both randomized mirror circuits (see Appendix~\ref{sec:rmcs}) and periodic mirror circuits (see Appendix~\ref{sec:pmcs}) to benchmark each of eight processors and to compare their performance on disordered and ordered circuits. The benchmarking circuits were designed using a procedure that we refer to as \emph{benchmark \#2}. As with benchmark \#1 (see Appendix~\ref{sec:b1}), this procedure can be applied to any gate-model quantum information processor. \subsubsection{The gate set}\label{sec:b2-gates} As with benchmark \#1 of experiment \#1, benchmark \#2 is an algorithmic approach for generating mirror circuit benchmarks to run on generic gate-model quantum information processors. It is parameterized by a processor's two-qubit gate $G_2$ and the processor's directed connectivity graph. Unlike benchmark \#1, it uses a standardized single-qubit gate set $\mathbb{G}_1$ consisting of all 24 single-qubit Clifford gates ($\mathbb{C}_1$) for all processors. As in experiment \#1, we used the native two-qubit gate for each processor, which is $\ensuremath{\mathsf{CNOT}}\xspace$ for IBM Q processors, and $\ensuremath{\mathsf{CPHASE}}\xspace$ for Rigetti processors. \subsubsection{The circuit shapes} The first step in the benchmark \#2 algorithm is to select the set of circuit shapes at which to construct benchmarking circuits. For an $n$-qubit processor, we chose the circuit shapes $(w,d)\in\mathbb{W}_n\times\mathbb{D}$, where: \begin{align*} &\mathbb{W}_n = \{1, 2, 3, 4, \dots, n\},\\ &\mathbb{D} = \{0, 4, 8, 16, 32, 64, 128, 256, 512\}. \end{align*} Exponentially spaced widths would likely be preferable for larger processors, but for the processors we tested, running circuits at an exhaustive set of widths is feasible. For the larger widths we excluded the largest depths, as the processors' error rates implied that all circuits of these shapes would almost certainly all fail. The exact combination of circuit shapes tested can be seen in Fig.~\ref{fig:b2-vbs}. Circuits with depths of 1024 and above were not included only because the IBM Q interface did not allow these circuits to be run. \subsubsection{The circuit embeddings} For each processor, we ran width-$w$ circuits on a single set of $w$ qubits. We chose the $w$ qubits predicted to perform the best on our benchmark, according to a simple heuristic based on the processor's published error rates. As is the case for choosing the ``best'' performing qubits from data --- which we did in the data analysis for experiment \#1 (see Appendix~\ref{sec:b1}) --- there are many reasonable ways to use the error rates to choose this qubit set. 
Using the standard-form error rate set $\{\epsilon\}$ introduced in Appendix~\ref{sec:standard-error-rates}, we do so as follows:
\begin{enumerate}
\item We model the success probability for a shape $(w,d)$ benchmarking circuit on the qubit set $q_w$ as
\begin{equation}
S = \left(s(R)- \nicefrac{1}{2^w}\right) \lambda_1^{d w (1 - \xi)}\lambda_2^{\nicefrac{d w \xi}{2}} + \nicefrac{1}{2^w},
\label{eq:q-selection}
\end{equation}
where
\begin{itemize}
\item $s(R)$ is the success rate of the readout error layer for those qubits, defined in Eq.~\eqref{eq:readout-layer-error},
\item $\xi$ is the target two-qubit gate density of the benchmarking circuits (in these experiments, $\xi = 0$ for $w=1$ and $\xi =\nicefrac{1}{8}$ otherwise),
\item $\lambda_1 = 1 - \frac{4 \epsilon_1}{3}$, where $\epsilon_1$ is the mean error rate of the one-qubit gates on the qubits in $q_w$, and
\item $\lambda_2 = 1 - \frac{16 \epsilon_2}{15}$, where $\epsilon_2$ is the mean error rate of the two-qubit gates between the qubits in $q_w$.
\end{itemize}
This formula is a heuristic for predicting the expected success probability of a shape-$(w,d)$ randomized mirror circuit with a two-qubit gate density of $\xi$.
\item For each connected qubit subset of size $w$, we find the depth $d$ at which Eq.~\eqref{eq:q-selection} predicts that $S=\nicefrac{1}{e}(1 - \nicefrac{1}{2^w}) + \nicefrac{1}{2^w}$. In terms of polarization ($P$) this is the depth at which this equation predicts that $P =\nicefrac{1}{e}$. Note that $d$ is not restricted to being an integer, and it can be negative.
\item For each width $w$, we select the connected qubit subset for which this depth is maximized.
\end{enumerate}
This procedure is one reasonable method for selecting the ``best'' set of qubits using only a set of generic error rates for those qubits --- but note that there are many possible alternative heuristics, and we do not claim that our choice is optimal (a minimal sketch of this heuristic is given below).
\subsubsection{The circuit sampling}
In this experiment we ran randomized mirror circuits and periodic mirror circuits. As the aim was to investigate the role of circuit order/disorder on circuit failure rates, the two circuit families were designed to have similar properties.
\begin{itemize}
\item The randomized mirror circuits were sampled using the edge-grab sampler introduced in Appendix~\ref{sec:e2-sampler}. The \emph{expected} two-qubit gate density was set to $\nicefrac{1}{8}$.
\item The periodic mirror circuits were sampled using the algorithm introduced in Appendix~\ref{sec-germ-e2}. The two-qubit gate density of these circuits is approximately bounded by $\nicefrac{1}{8}$ (it is rigorously bounded by $\nicefrac{1}{8}$ except when a partial repetition of a germ is required --- see Appendix~\ref{sec-germ-e2}).
\end{itemize}
We sampled 40 randomized mirror circuits and 40 periodic mirror circuits for each circuit shape $(w,d)$. As with experiment \#1, the circuits are constructed \emph{after} identifying the expected best $w$-qubit set at each width $w$. This allows us to ensure that the benchmark circuits respect connectivity constraints. Because the periodic mirror circuits are also randomly sampled from a distribution (see Appendix~\ref{sec-germ-e2}), for the remainder of this appendix we will refer to the randomized mirror circuits in this experiment as \emph{disordered mirror circuits}. The code that we used to perform this sampling has been incorporated into the open-source software package \texttt{pyGSTi} \cite{nielsen2020probing, pygstiversion0.9.9.1}.
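The following is the minimal sketch of the qubit-selection heuristic of Eq.~\eqref{eq:q-selection} referred to above. It is not the code that we used, and the function names are hypothetical; it simply solves Eq.~\eqref{eq:q-selection} for the depth at which $P = \nicefrac{1}{e}$ and maximizes that depth over the candidate qubit sets.
\begin{verbatim}
import math

# Minimal sketch (hypothetical names): for one candidate connected qubit set,
# predict the benchmark depth at which the polarization of Eq. (q-selection)
# drops to 1/e, then pick the candidate set that maximizes this depth.
# Assumes s(R) > 1/2^w for every candidate set.

def predicted_1_over_e_depth(width, readout_errors, eps1_mean, eps2_mean,
                             xi=1 / 8):
    s_readout = 1.0
    for eps in readout_errors:
        s_readout *= 1.0 - eps
    lam1 = 1.0 - 4.0 * eps1_mean / 3.0     # one-qubit depolarizing parameter
    lam2 = 1.0 - 16.0 * eps2_mean / 15.0   # two-qubit depolarizing parameter
    base = 1.0 / 2 ** width
    if width == 1:
        xi = 0.0                           # width-1 circuits have no 2-qubit gates
    # Solve (s_R - base)/(1 - base) * lam1^(d w (1-xi)) * lam2^(d w xi/2) = 1/e.
    log_decay_per_depth = (width * (1.0 - xi) * math.log(lam1)
                           + width * xi / 2.0 * math.log(lam2))
    log_target = math.log((1.0 - base) / (math.e * (s_readout - base)))
    return log_target / log_decay_per_depth

def best_qubit_set(candidates):
    # `candidates` maps each connected qubit set (a tuple of qubit labels) to
    # (readout_errors, eps1_mean, eps2_mean) computed from the reported rates.
    return max(candidates,
               key=lambda q: predicted_1_over_e_depth(len(q), *candidates[q]))
\end{verbatim}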
\subsubsection{Experimental details}
We ran benchmark \#2 on eight of the twelve processors that we tested in experiment \#1. The four devices benchmarked in experiment \#1 but not in experiment \#2 (IBM Q Rueschlikon, IBM Q Tenerife, Rigetti Agave and Rigetti Aspen 6) were no longer available once we had designed this benchmark. At the time of these experiments, 8 of the 16 qubits in Rigetti Aspen 4 were not functioning, so it was tested as an 8-qubit processor (whereas all 16 qubits were available when we ran benchmark \#1 on Aspen 4). The experiments were run using the online access services of IBM Q \cite{ibmq2} and Rigetti \cite{rigetti-qcs}. We implemented two ``passes'' through the circuits \cite{rudinger2018probing}: the circuits were looped through, with each circuit repeated 1024 times, and then this loop was repeated a second time (see Fig.~\ref{fig:stability} for more details). Both passes through all the circuits were run within the same calibration window. Unlike in experiment \#1, none of the circuits were run in parallel.
\begin{figure} \caption{{\bf Random benchmarks do not predict structured circuit performance.}} \label{fig:b2-vbs} \end{figure}
\begin{figure*} \caption{\textbf{Quantifying temporal instability.}} \label{fig:stability} \end{figure*}
\subsection{Data analysis}\label{sec:e2-analysis}
The results of experiment \#2 are summarized in Figs.~2 and 3 of the main text. For brevity, Fig.~2 shows the results for only four of the eight benchmarked processors, so in Fig.~\ref{fig:b2-vbs} we expand on Fig.~2 to show the results for all eight processors. In the remainder of this appendix we explain the data analysis used to generate these plots. We use notation that explicitly distinguishes between a circuit's true success probability ($S$) and the observed success probability ($\hat{S}$) obtained from a finite number of repetitions of that circuit. Because the data was taken in two passes, it is useful to further distinguish between a circuit's true and observed success probability at the time of the first pass through the circuits ($S_1$ and $\hat{S}_1$, respectively) and a circuit's true and observed success probability at the time of the second pass through the circuits ($S_2$ and $\hat{S}_2$, respectively). If the processor is stable then $S_1 = S_2$ for every circuit, but this is not guaranteed to be true, as drift is a common problem in quantum processors \cite{rudinger2018probing, proctor2019detecting, wan2019quantum}.
\subsubsection{Quantifying processor instability}\label{sec:stability}
The first step in our data analysis identifies instability in the processors by comparing the two passes through the benchmarking circuits using the statistically rigorous hypothesis testing technique of Ref.~\cite{rudinger2018probing}. Although the results of this analysis are an informative performance benchmark in their own right, this analysis was primarily implemented so that the presence (or absence) of detectable instability could be used to inform other aspects of the data analysis. The formal aim of this analysis is to assess whether there is statistically significant evidence in the data that $S_1 \neq S_2$ for any circuit. The analysis performs statistical hypothesis tests of the null hypothesis that $S_1 = S_2$ for each circuit. Fig.~\ref{fig:stability} plots $(1-\hat{S}_1)$ versus $(1-\hat{S}_2)$ for every periodic (upper row) and disordered (lower row) mirror circuit that was run on each processor (the columns).
Solid circles (transparent stars) denote the observed failure probabilities that are (are not) sufficiently different to constitute statistically significant evidence that $S_1 \neq S_2$ for that circuit, according to the hypothesis tests of Ref.~\cite{rudinger2018probing}. (The procedure of Ref.~\cite{rudinger2018probing} is designed for strong control of the family-wise error rate. We implemented the hypothesis test at $5\%$ significance. The test significance is not corrected to account for the fact that we are testing eight different processors.) There is statistically significant evidence in the data that all eight processors are unstable between the two passes through the circuits, i.e., for every processor there is evidence that there is at least one circuit for which $S_1 \neq S_2$. However, the magnitude of the instability varies dramatically between processors. For example, the difference between $\hat{S}_1$ and $\hat{S}_2$ is small for \emph{every} circuit that was run on IBM Q Yorktown, whereas many circuits exhibit large differences between $\hat{S}_1$ and $\hat{S}_2$ for IBM Q Melbourne and Rigetti Aspen 4. We note that the experiment on IBM Q Melbourne took approximately 10 hours whereas all other experiments took under 3.5 hours, so the difference between the observed instability on IBM Q Melbourne and the other IBM Q devices should not be used to infer that IBM Q Melbourne suffered from worse instabilities than the other IBM Q devices. As we explain further below, because of the results of this instability analysis we discarded the data from the second pass through the circuits --- i.e., we used only the data from the first pass in the remainder of the analysis.
\subsubsection{Worst-case volumetric benchmarks}
In Fig.~\ref{fig:b2-vbs}a (and Fig.~2a) we summarize the difference between the success rates of periodic and disordered circuits in terms of worst-case performance. For each processor and each benchmarked shape $(w,d)$, Fig.~\ref{fig:b2-vbs}a (and Fig.~2a) shows the observed polarization versus circuit shape $(w,d)$ for periodic (outer squares) and disordered (inner squares) circuits, minimized over all the test circuits of shape $(w^*,d^*)$ where $w^* \leq w$ and $d^* \leq d$. This was calculated using only the data from the first pass through the circuits. The observed minimum polarization is a biased estimate of the true minimum polarization over a circuit ensemble. If each circuit's success probability were stable over time, this bias could be removed by using the data from the first pass through the circuits to select the worst-performing circuit for each circuit shape, and then using the data from the second pass to estimate these circuits' polarizations. However, the validity of that strategy is based on the assumption of stability, and there are large instabilities between the two passes for some processors (see Fig.~\ref{fig:stability}). Because the purpose of Fig.~2a is to \emph{compare} periodic and disordered circuits, and because we ran the same number of periodic and disordered mirror circuits of each shape and repeated every circuit the same number of times, we chose to make no adjustment for finite sampling in the analysis for Fig.~2a.
\subsubsection{Comparing to the predictions of each processor's error rates}
In Fig.~\ref{fig:b2-vbs}b-c (and Fig.~2b-c) we compare our experimental results to predictions derived from each processor's published error rates.
The method used to predict the success probability of a specific circuit is explained in Appendix~\ref{sec:predictions}. Fig.~\ref{fig:b2-vbs}c simply plots the predicted failure probability (i.e., one minus the predicted success probability) against the observed failure probability $(1 - \hat{S})$ for every circuit that we ran. This is arranged by processor (the rows) and is further split into periodic and disordered mirror circuits (the left and right columns, respectively). As with all of Fig.~\ref{fig:b2-vbs}c, we include only the data from the first pass through the circuits. In Fig.~\ref{fig:b2-vbs}b we show volumetric benchmarking plots of the \emph{predicted} worst-case performance implied by the processors' error rates. As discussed above, the analysis resulting in Fig.~\ref{fig:b2-vbs}a does not correct for finite sampling bias in the estimate of each of the minimum polarizations. To ensure that Fig.~\ref{fig:b2-vbs}b may be fairly compared to Fig.~\ref{fig:b2-vbs}a, we simulate this bias using a standard parametric bootstrap. For each processor:
\begin{enumerate}
\item We generated 1000 bootstrapped data sets, by sampling an ``observed'' success probability for each circuit that we ran, given by the average of 1024 draws from a 0/1-valued ``coin'' whose success probability is set to the predicted success probability of that circuit.
\item For each bootstrapped data set, we implemented exactly the same analysis that was applied to the experimental data to generate Fig.~\ref{fig:b2-vbs}a. This analysis computes a statistic $\lambda(w,d)$ at each circuit shape $(w,d)$, so this step results in 1000 bootstrapped values of each $\lambda(w,d)$.
\item The predicted $\lambda(w,d)$ is then set to the mean of the 1000 bootstrapped values.
\end{enumerate}
\subsubsection{Empirical capability regions}
In Fig.~3 of the main text, we summarize the performance of all eight processors on both periodic and disordered mirror circuits, by dividing the circuit width $\times$ depth plane into ``success'', ``indeterminate'', and ``fail'' regions. These regions correspond to the circuit shapes at which all, some, and none of the 80 test circuits succeeded, respectively, where a circuit is considered to succeed if $P \geq \nicefrac{1}{e}$. To estimate these regions from the data we use statistical hypothesis testing. We start from the null hypothesis that, at shape $(w,d)$, every circuit succeeds ($P \geq \nicefrac{1}{e}$) or every circuit fails ($P < \nicefrac{1}{e}$), and we assign a circuit shape to the ``indeterminate'' region only if statistical hypothesis testing on the data allows us to reject this null hypothesis at $5\%$ statistical significance. The statistical hypothesis testing is performed using the same technique that we used to generate the performance frontiers from the data in experiment \#1 (this is described in detail in Appendix~\ref{sec:hypothesis}), and the analysis uses only the data from the first pass through the circuits. The reason for using this hypothesis testing framework is that it addresses a bias towards including \emph{every} circuit shape in the ``indeterminate'' region. To understand this, consider running $K$ circuits of shape $(w,d)$ with each circuit repeated $N$ times and with these $K$ circuits sampled from some circuit ensemble in which every circuit has a success probability that is not exactly 1 or 0.
Then, for fixed $N$, the probability of the \emph{observed} polarization being above $\nicefrac{1}{e}$ for at least one of these $K$ sampled shape $(w,d)$ circuits and being below $\nicefrac{1}{e}$ for at least one of these $K$ sampled shape $(w,d)$ circuits converges to 1 as $K$ increases, even if every circuit in the ensemble has a success probability well above the threshold value. \end{document}
\begin{document} \title{On differences between DP-coloring and list coloring} \begin{abstract} \noindent DP-coloring (also known as correspondence coloring) is a generalization of list coloring introduced recently by Dvo\v r\' ak and Postle~\cite{DP15}. Many known upper bounds for the list-chromatic number extend to the DP-chromatic number, but not all of them do. In this note we describe some properties of DP-coloring that set it aside from list coloring. In particular, we give an example of a planar bipartite graph with DP-chromatic number $4$ and prove that the edge-DP-chromatic number of a $d$-regular graph with $d\geqslant 2$ is always at least $d+1$. \end{abstract} \section{Introduction} \subsection{Basic notation and conventions} We use $\mathbb{N}$ to denote the set of all nonnegative integers. For a set $S$, $\powerset{S}$ denotes the power set of $S$, i.e., the set of all subsets of $S$. All graphs considered here are finite, undirected, and simple, except in Section~\ref{sec:multi}, which mentions (loopless) multigraphs. For a graph~$G$, $V(G)$ and $E(G)$ denote the vertex and the edge sets of $G$ respectively. For a subset $U \subseteq V(G)$, $G[U]$ is the subgraph of $G$ induced by $U$. For two subsets $U_1$, $U_2 \subseteq V(G)$, $E_G(U_1, U_2) \subseteq E(G)$ is the set of all edges of $G$ with one endpoint in $U_1$ and the other one in $U_2$. The maximum degree of $G$ is denoted by $\Delta(G)$. \subsection{Graph coloring, list coloring, and DP-coloring} Recall that a \emph{proper coloring} of a graph $G$ is a function $f \colon V(G) \to C$, where $C$ is a set of \emph{colors}, such that $f(u) \neq f(v)$ for each edge $uv \in E(G)$. The \emph{chromatic number} $\chi(G)$ of $G$ is the smallest $k \in \mathbb{N}$ such that there exists a proper coloring $f \colon V(G) \to C$ with $|C| = k$. \emph{List coloring} is a generalization of ordinary graph coloring that was introduced independently by Vizing~\cite{Viz76} and Erd\H{o}s, Rubin, and Taylor~\cite{ERT79}. As in the case of ordinary graph coloring, let $C$ be a set of \emph{colors}. A \emph{list assignment} for a graph $G$ is a function $L \colon V(G) \to \powerset{C}$; if $|L(u)| = k$ for all $u \in V(G)$, then~$L$ is called a \emph{$k$-list assignment}. A proper coloring $f \colon V(G) \to C$ is called an \emph{$L$-coloring} if $f(u) \in L(u)$ for each $u \in V(G)$. The \emph{list-chromatic number} $\chi_\ell(G)$ of $G$ is the smallest $k \in \mathbb{N}$ such that $G$ admits an $L$-coloring for every $k$-list assignment $L$ for $G$. An immediate consequence of this definition is that $\chi_\ell(G) \geqslant \chi(G)$ for all graphs~$G$, since ordinary coloring is the same as $L$-coloring with $L(u) = C$ for all $u \in V(G)$. On the other hand, it is well-known that the gap between $\chi(G)$ and $\chi_\ell(G)$ can be arbitrarily large; for instance, $\chi(K_{n, n}) = 2$, while $\chi_\ell(K_{n,n}) = (1 + o(1))\log_2(n) \to \infty$ as $n \to \infty$, where $K_{n,n}$ denotes the complete bipartite graph with both parts having size $n$. In this paper we study a further generalization of list coloring that was recently introduced by Dvo\v r\' ak and Postle~\cite{DP15}; they called it \emph{correspondence coloring}, and we call it \emph{DP-coloring} for short. In the setting of DP-coloring, not only does each vertex get its own list of available colors, but also the identifications between the colors in the lists can vary from edge to edge. \begin{defn}\label{defn:cover} Let $G$ be a graph. 
A \emph{cover} of $G$ is a pair $\Cov{H} = (L, H)$, consisting of a graph $H$ and a function $L \colon V(G) \to \powerset{V(H)}$, satisfying the following requirements: \begin{enumerate}[labelindent=\parindent,leftmargin=*,label=(C\arabic*)] \item the sets $\set{L(u) \,:\,u \in V(G)}$ form a partition of $V(H)$; \item for every $u \in V(G)$, the graph $H[L(u)]$ is complete; \item if $E_H(L(u), L(v)) \neq \varnothing$, then either $u = v$ or $uv \in E(G)$; \item \label{item:matching} if $uv \in E(G)$, then $E_H(L(u), L(v))$ is a matching. \end{enumerate} A cover $\Cov{H} = (L, H)$ of $G$ is \emph{$k$-fold} if $|L(u)| = k$ for all $u \in V(G)$. \end{defn} \begin{remk} The matching $E_H(L(u), L(v))$ in Definition~\ref{defn:cover}\ref{item:matching} does not have to be perfect and, in particular, is allowed to be empty. \end{remk} \begin{defn} Let $G$ be a graph and let $\Cov{H} = (L, H)$ be a cover of $G$. An \emph{$\Cov{H}$-coloring} of $G$ is an independent set in $H$ of size $|V(G)|$. \end{defn} \begin{remk}\label{remk:single} By definition, if $\Cov{H} = (L, H)$ is a cover of $G$, then $\set{L(u)\,:\, u \in V(G)}$ is a partition of~$H$ into $|V(G)|$ cliques. Therefore, an independent set $I \subseteq V(H)$ is an $\Cov{H}$-coloring of $G$ if and only if $|I \cap L(u)| = 1$ for all $u \in V(G)$. \end{remk} \begin{defn} Let $G$ be a graph. The \emph{DP-chromatic number} $\chi_{DP}(G)$ of $G$ is the smallest $k \in \mathbb{N}$ such that $G$ admits an $\Cov{H}$-coloring for every $k$-fold cover $\Cov{H}$ of $G$. \end{defn} \begin{exmp}\label{exmp:cycles} Figure~\ref{fig:cycle} shows two distinct $2$-fold covers of the $4$-cycle $C_4$. Note that $C_4$ admits an $\Cov{H}_1$-coloring but not an $\Cov{H}_2$-coloring. In particular, $\chi_{DP}(C_4) \geqslant 3$; on the other hand, it can be easily seen that $\chi_{DP}(G) \leqslant \Delta(G) + 1$ for any graph $G$, and so we have $\chi_{DP}(C_4) = 3$. A similar argument demonstrates that $\chi_{DP}(C_n) = 3$ for any cycle $C_n$ of length $n \geqslant 3$. \end{exmp} \begin{figure} \caption{Two distinct $2$-fold covers of a $4$-cycle. } \label{fig:cycle} \end{figure} One can construct a cover of a graph $G$ based on a list assignment for~$G$, thus showing that list coloring is a special case of DP-coloring and, in particular, $\chi_{DP}(G) \geqslant \chi_\ell(G)$ for all graphs $G$. \begin{figure} \caption{A graph with a $2$-list assignment and the corresponding $2$-fold cover.} \label{fig:list} \end{figure} More precisely, let $G$ be a graph and suppose that $L \colon V(G) \to \powerset{C}$ is a list assignment for~$G$, where $C$ is a set of colors. Let $H$ be the graph with vertex set $$ V(H) \coloneqq \set{(u, c)\,:\, u \in V(G) \text{ and } c \in L(u)}, $$ in which two distinct vertices $(u, c)$ and $(v, d)$ are adjacent if and only if \begin{itemize} \item[--] either $u = v$, \item[--] or else, $uv \in E(G)$ and $c = d$. \end{itemize} For each $u \in V(G)$, set $$ L'(u) \coloneqq \set{(u, c) \,:\, c \in L(u)}. $$ Then $\Cov{H} \coloneqq (L', H)$ is a cover of $G$, and there is a natural bijective correspondence between the $L$-colorings and the $\Cov{H}$-colorings of $G$. Indeed, if $f \colon V(G) \to C$ is an $L$-coloring of $G$, then the set $$ I_f \coloneqq \set{(u, f(u)) \,:\, u \in V(G)} $$ is an $\Cov{H}$-coloring of $G$. 
Conversely, given an $\Cov{H}$-coloring $I \subseteq V(H)$ of $G$, $|I \cap L'(u)| = 1$ for all $u \in V(G)$, so one can define an $L$-coloring $f_I \colon V(G) \to C$ by the property $$(u, f_I(u)) \in I \cap L'(u)$$ for all $u \in V(G)$. \subsection{DP-coloring vs. list coloring and the results of this note} Some upper bounds on list-chromatic number hold for DP-chromatic number as well. For instance, it is easy to see that $\chi_{DP}(G) \leqslant d +1$ for any $d$-degenerate graph $G$. Dvo\v r\' ak and Postle~\cite{DP15} observed that for any planar graph $G$, $\chi_{DP}(G) \leqslant 5$ and, moreover, $\chi_{DP}(G) \leqslant 3$ if $G$ is a planar graph of girth at least $5$ (these statements are extensions of classical results of Thomassen~\cite{Tho94, Tho95} on list colorings). Furthermore, there are statements about list coloring whose only known proofs involve DP\-/coloring in essential ways. For example, the reason why Dvo\v r\' ak and Postle originally introduced DP\-/coloring was to prove that every planar graph without cycles of lengths $4$ to $8$ is $3$-list-colorable~\cite[Theorem~1]{DP15}, thus answering a long-standing question of Borodin~\cite[Problem~8.1]{Bor13}. Another example can be found in~\cite{BK16}, where Dirac's theorem on the minimum number of edges in critical graphs~\cite{Dir57, Dir74} is extended to the framework of DP-colorings, yielding a solution to the problem, posed by Kostochka and Stiebitz~\cite{KS02}, of classifying list-critical graphs that satisfy Dirac's bound with equality. On the other hand, DP-coloring and list coloring are also strikingly different in some respects. For instance, Bernshteyn~\cite[Theorem~1.6]{Ber16} showed that the DP-chromatic number of every graph with average degree $d$ is $\Omega(d/\log d)$, i.e., close to linear in $d$. Recall that due to a celebrated result of Alon~\cite{Alo00}, the list-chromatic number of such graphs is $\Omega(\log d)$, and this bound is sharp for ``small'' bipartite graphs. In spite of this, known upper bounds on list-chromatic numbers often have the same order of magnitude as in the DP\-/coloring setting. For example, by Johansson's theorem~\cite{Joh}, triangle-free graphs $G$ of maximum degree~$\Delta$ satisfy $\chi_\ell(G) = O(\Delta/\log\Delta)$. The same asymptotic upper bound holds for $\chi_{DP}(G)$~\cite[Theorem~1.7]{Ber16}. Recently, Molloy~\cite{Mol} refined Johansson's result to $\chi_\ell(G) \leqslant (1 + o(1)) \Delta/\ln\Delta$, and this improved bound, including the constant factor, also generalizes to DP-colorings~\cite{BerJM}. Important tools in the study of list coloring that do not generalize to the framework of DP\-/coloring are the orientation theorems of Alon and Tarsi~\cite{AT92} and the closely related Bondy--Boppana--Siegel lemma (see~\cite{AT92}). Indeed, they can be used to prove that even cycles are $2$-list-colorable, while the DP-chromatic number of any cycle is $3$, regardless of its length (see Example~\ref{exmp:cycles}). In this note we demonstrate the failure in the context of DP-coloring of two other list-coloring results whose proofs rely on either the Alon--Tarsi method or the Bondy--Boppana--Siegel lemma. A well-known application of the orientation method is the following result: \begin{theo}[{Alon--Tarsi~\cite[Corollary~3.4]{AT92}}]\label{theo:AT} Every planar bipartite graph is $3$-list-colorable. 
\end{theo} We show that Theorem~\ref{theo:AT} does not hold for DP-colorings (note that every planar triangle-free graph is $3$-degenerate, hence $4$-DP-colorable): \begin{theo}\label{theo:plan_bip} There exists a planar bipartite graph $G$ with $\chi_{DP}(G) = 4$. \end{theo} This answers a question of Grytczuk (personal communication, 2016). We prove Theorem~\ref{theo:plan_bip} in Section~\ref{sec:plan_bip}. Our second result concerns edge colorings. Recall that the \emph{line graph} $\mathsf{Line}(G)$ of a graph $G$ is the graph with vertex set $E(G)$ such that two vertices of $\mathsf{Line}(G)$ are adjacent if and only if the corresponding edges of $G$ share an endpoint. The chromatic number, the list-chromatic number, and the DP-chromatic number of $\mathsf{Line}(G)$ are called the \emph{chromatic index}, the \emph{list-chromatic index}, and the \emph{DP-chromatic index} of $G$ and are denoted by $\chi'(G)$, $\chi'_\ell(G)$, and $\chi'_{DP}(G)$ respectively. The following hypothesis is known as the \emph{Edge List Coloring Conjecture} and is a major open problem in graph theory: \begin{conj}[{Edge List Coloring Conjecture, see~\cite{JT95}}]\label{conj:ELCC} For every graph $G$, $\chi'_\ell(G) = \chi'(G)$. \end{conj} In an elegant application of the orientation method, Galvin~\cite{Gal95} verified the Edge List Coloring Conjecture for bipartite graphs: \begin{theo}[{Galvin~\cite{Gal95}}] For every bipartite graph $G$, $\chi'_\ell(G) = \chi'(G) = \Delta(G)$. \end{theo} We show that this famous result fails for DP-coloring; in fact, it is impossible for a $d$-regular graph $G$ with $d \geqslant 2$ to have DP-chromatic index $d$: \begin{theo}\label{theo:reg_ind} If $d \geqslant 2$, then every $d$-regular graph $G$ satisfies $\chi'_{DP}(G) \geqslant d+1$. \end{theo} We prove Theorem~\ref{theo:reg_ind} in Section~\ref{sec:reg_ind}. Vizing~\cite{Viz64} proved that the inequality $\chi'(G) \leqslant \Delta(G) + 1$ holds for all graphs $G$. He also conjectured the following weakening of the Edge List Coloring Conjecture: \begin{conj}[{Vizing}]\label{conj:viz} For every graph $G$, $\chi'_\ell(G) \leqslant \Delta(G)+1$. \end{conj} We do not know if Conjecture~\ref{conj:viz} can be extended to DP-colorings: \begin{problem} Do there exist graphs $G$ with $\chi'_{DP}(G) \geqslant \Delta(G) + 2$? \end{problem} In Section~\ref{sec:multi} we discuss two natural ways to define edge-DP-colorings for multigraphs. According to one of them, the DP-chromatic index of the multigraph $K^d_2$ with two vertices joined by $d$ parallel edges is $2d$. \section{Proof of Theorem~\ref{theo:plan_bip}}\label{sec:plan_bip} In this section we construct a planar bipartite graph $G$ with DP-chromatic number $4$. The main building block of our construction is the graph $Q$ shown in Figure~\ref{fig:cube} on the left, i.e., the skeleton of the $3$-dimensional cube. Let $\Cov{F} = (L, F)$ denote the cover of $Q$ shown in Figure~\ref{fig:cube} on the right. \begin{figure} \caption{The graph $Q$ (left) and its cover $\Cov{F} \label{fig:cube} \end{figure} \begin{lemma}\label{lemma:K} The graph $Q$ is not $\Cov{F}$-colorable. \end{lemma} \begin{proof} Suppose, towards a contradiction, that $I$ is an $\Cov{F}$-coloring of $Q$. Since $L(a) = \set{x}$, we have $x \in I$, and, similarly, $y \in I$. Since $z_1$ is the only vertex in $L(c_1)$ that is not adjacent to $x$ or $y$, we also have $z_1 \in I$, and, similarly, $z_2 \in I$. 
This leaves only $2$ vertices available in each of $L(d_1)$, $L(d_2)$, $L(d_3)$, and $L(d_4)$, and it is easy to see that these $8$ vertices do not contain an independent set of size $4$ (cf.~the cover $\Cov{H}_2$ of the $4$-cycle shown in Figure~\ref{fig:cycle} on the right). \end{proof} Consider $9$ pairwise disjoint copies of $Q$, labeled $Q_{ij}$ for $1 \leqslant i$, $j \leqslant 3$. For each vertex $u \in V(Q)$, its copy in $Q_{ij}$ is denoted by $u_{ij}$. Let $\Cov{F}_{ij} = (L_{ij}, F_{ij})$ be a cover of $Q_{ij}$ isomorphic to $\Cov{F}$. Again, we assume that the graphs $F_{ij}$ are pairwise disjoint and use $u_{ij}$ to denote the copy of a vertex $u \in V(F)$ in $F_{ij}$. Let $G$ be the graph obtained from the (disjoint) union of the graphs $Q_{ij}$ by identifying the vertices $a_{11}$, \ldots, $a_{33}$ to a new vertex $a^\ast$ and the vertices $b_{11}$, \ldots, $b_{33}$ to a new vertex~$b^\ast$. Let $H$ be the graph obtained from the union of the graphs $F_{ij}$ by identifying, for each $1 \leqslant i$, $j \leqslant 3$, the vertices $x_{i1}$, $x_{i2}$, $x_{i3}$ to a new vertex $x_i$ and the vertices $y_{1j}$, $y_{2j}$, $y_{3j}$ to a new vertex $y_j$. Define the map $L^\ast \colon V(G) \to \powerset{V(H)}$ as follows: $$ L^\ast(u) \coloneqq \begin{cases} L_{ij}(u) &\text{if } u \in V(Q_{ij});\\ \set{x_1, x_2, x_3} &\text{if } u = a^\ast;\\ \set{y_1, y_2, y_3} &\text{if } u = b^\ast. \end{cases} $$ Then $\Cov{H} \coloneqq (L^\ast, H)$ is a $3$-fold cover of $G$. We claim that $G$ is not $\Cov{H}$-colorable. Indeed, suppose that $I$ is an $\Cov{H}$-coloring of $G$ and let $i$ and $j$ be the indices such that $\set{x_i, y_j} \subset I$. Then $I$ induces an $\Cov{F}_{ij}$-coloring of $Q_{ij}$, which cannot exist by Lemma~\ref{lemma:K}. Since $G$ is evidently planar and bipartite, the proof of Theorem~\ref{theo:plan_bip} is complete. \section{Proof of Theorem~\ref{theo:reg_ind}}\label{sec:reg_ind} Let $d \geqslant 2$ and let $G$ be an $n$-vertex $d$-regular graph. If $\chi'(G) = d+1$, then $\chi'_{DP}(G) \geqslant d+1$ as well, so from now on we will assume that $\chi'(G) = d$. In particular, $n$ is even. Indeed, a proper coloring of $\mathsf{Line}(G)$ is the same as a partition of $E(G)$ into matchings, and if $n$ is odd, then $d$ matchings can cover at most $d(n-1)/2 < dn/2 = |E(G)|$ edges of $G$. Let $uv \in E(G)$ and let $G'\coloneqq G-uv$. Our argument hinges on the following simple observation: \begin{lemma}\label{lemma:same} Let $C$ be a set of size $d$ and let $f \colon E(G') \to C$ be a proper coloring of $\mathsf{Line}(G')$. For each $w \in \{u,v\}$, let $f_w$ denote the unique color in $C$ not used in coloring the edges incident to $w$. Then $f_u =f_v$. \end{lemma} \begin{proof} For each $c \in C$, let $M_c \subseteq E(G')$ denote the matching formed by the edges $e$ with $f(e) = c$. Then $|M_c| \leqslant n/2$ for all $c \in C$. Moreover, by definition, $\max \set{|M_{f_u}|, |M_{f_v}|} \leqslant n/2 - 1$. Thus, if $f_u \neq f_v$, then $$ \frac{dn}{2} - 1 = |E(G')| = \sum_{c \in C} |M_c| \leqslant \frac{dn}{2} - 2; $$ a contradiction. 
\end{proof} Let $\mathbb{Z}_d$ denote the additive group of integers modulo $d$ and let $H$ be the graph with vertex set $$V(H) \coloneqq E(G) \times \mathbb{Z}_d,$$ in which the following pairs of vertices are adjacent: \begin{itemize} \item[--] $(e, i)$ and $(e, j)$ for $e \in E(G)$ and $i$, $j \in \mathbb{Z}_d$ with $i \neq j$; \item[--] $(e, i)$ and $(h, i)$ for $eh \in E(\mathsf{Line}(G'))$ and $i \in \mathbb{Z}_d$; \item[--] $(uv, i)$ and $(uv', i)$ for $uv' \in E(G')$ and $i \in \mathbb{Z}_d$; \item[--] $(uv, i)$ and $(u'v, i + 1)$ for $u'v \in E(G')$ and $i \in \mathbb{Z}_d$. \end{itemize} For each $e \in E(G)$, let $L(e) \coloneqq \set{e} \times \mathbb{Z}_d$. Then $\Cov{H} \coloneqq (L, H)$ is a $d$-fold cover of $\mathsf{Line}(G)$. We claim that $\mathsf{Line}(G)$ is not $\Cov{H}$-colorable (which proves Theorem~\ref{theo:reg_ind}).
Indeed, suppose that $I$ is an $\Cov{H}$-coloring of $\mathsf{Line}(G)$. For each $e \in E(G')$, let $f(e)$ denote the unique element of $\mathbb{Z}_d$ such that $(e, f(e)) \in I$. Then $f$ is a proper coloring of $\mathsf{Line}(G')$ with $\mathbb{Z}_d$ as its set of colors. Let $i \in \mathbb{Z}_d$ be the unique element that is not used in coloring the edges incident to $u$ (in the notation of Lemma~\ref{lemma:same}, $i = f_u$). Then the only element of $L(uv)$ that can, and therefore must, belong to $I$ is $(uv, i)$. On the other hand, Lemma~\ref{lemma:same} implies that $i$ is also the unique element of $\mathbb{Z}_d$ that is not used in coloring the edges incident to $v$, and, in particular, for some $u'v \in E(G')$, $f(u'v) = i+1$. Since $(uv, i)$ and $(u'v, i+1)$ are adjacent vertices of $H$, $I$ is not an independent set, which is a contradiction.
\section{Edge-DP-colorings of multigraphs}\label{sec:multi} One can extend the notion of DP-coloring to loopless multigraphs, see~\cite{BKP16}. The definitions are almost identical; the only difference is that in Definition~\ref{defn:cover}, \ref{item:matching} is replaced by the following: \begin{enumerate}[labelindent=\parindent,leftmargin=*,label=(C\arabic*)] \item[(C4$'$)] If $u$ and $v$ are connected by $t \geqslant 1$ edges in $G$, then $E_H(L(u), L(v))$ is a union of $t$ matchings. \end{enumerate} An interesting property of DP-coloring of multigraphs is that the DP-chromatic number of a multigraph may be larger than its number of vertices. For example, the multigraph $K^t_k$ obtained from the complete graph $K_k$ by replacing each edge with $t$ parallel edges satisfies $$ \chi_{DP}(K^t_k) = \Delta(K^t_k) + 1 = tk - t + 1. $$ (See~\cite[Lemma~7]{BKP16}.) Similarly to the case of simple graphs, the \emph{line graph} $\mathsf{Line}(G)$ of a multigraph $G$ is the graph with vertex set $E(G)$ such that two vertices of $\mathsf{Line}(G)$ are adjacent if and only if the corresponding edges of $G$ share at least one endpoint. Notice that, in particular, $\mathsf{Line}(G)$ is always a simple graph. Sometimes, instead of $\mathsf{Line}(G)$, it is more natural to consider the \emph{line multigraph} $\mathsf{MLine}(G)$, in which two edges of $G$ sharing both endpoints give rise to vertices of $\mathsf{MLine}(G)$ joined by a pair of edges. Line multigraphs were used, e.g., in the seminal paper by Galvin~\cite{Gal95} and also in~\cite{BKW97, BKW98}. Somewhat surprisingly, Shannon's bound $\chi'(G) \leqslant 3\Delta(G)/2$~\cite{Sha49} on the chromatic index of a multigraph $G$ does not extend to $\chi_{DP}(\mathsf{MLine}(G))$.
Indeed, if $G \cong K^d_2$, i.e., if $G$ is the $2$-vertex multigraph with $d$ parallel edges, then $\mathsf{MLine}(G) \cong K^2_d$, so $$\chi_{DP}(\mathsf{MLine}(G)) = \chi_{DP}(K^2_d)= 2d-1 = 2 \Delta(G) -1.$$ This is in contrast with the result in~\cite{BKW97} that $\chi'_\ell(G)\leqslant 3\Delta(G)/2$ for every multigraph $G$. However, we conjecture that the analog of Shannon's theorem holds for line \emph{graphs}: \begin{conj} For every multigraph $G$, $\chi_{DP}(\mathsf{Line}(G)) \leqslant 3\Delta(G)/2$. \end{conj} \end{document}
\begin{document} \begin{center} \textbf{\Large Mid-infrared homodyne balanced detector for quantum light characterization} \\ Tecla Gabbrielli\textsuperscript{1,2,*}, Francesco Cappelli\textsuperscript{1,2}, Natalia Bruno\textsuperscript{1,2}, Nicola Corrias\textsuperscript{1,2}, Simone Borri\textsuperscript{1,2,3}, Paolo De Natale\textsuperscript{1,2,3}, Alessandro Zavatta\textsuperscript{1,2,3} \\ \textit{\small\textsuperscript{1} Istituto Nazionale di Ottica (CNR-INO), Largo Enrico Fermi 6, 50125 Florence, Italy} \\ \textit{\small\textsuperscript{2} European Laboratory for Non-linear Spectroscopy (LENS), Via nello Carrara 1, 50019 Sesto Fiorentino, Florence, Italy} \\ \textit{\small\textsuperscript{3}Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Firenze, 50019 Sesto Fiorentino, Florence, Italy} * [email protected] \end{center} \begin{abstract} We present the characterization of a novel balanced homodyne detector operating in the mid-infrared. The challenging task of revealing non-classicality in mid-infrared light, e.~g. in quantum cascade lasers emission, requires a high-performance detection system. Through the intensity noise power spectral density analysis of the differential signal coming from the incident radiation, we show that our setup is shot-noise limited. We discuss the experimental results with a view to possible applications to quantum technologies, such as free-space quantum communication. \end{abstract} \section{Introduction} Balanced homodyne detection is an effective measurement technique, widely used by the quantum optics community~\cite{yuen1983noise,Shapiro:1985,raymer1995ultrafast,Lambrecht_1996,loudon:2000quantum,zavatta2002time,sasaki2006multimode,kumar2012versatile}, as it allows the reconstruction of quantum states of light by retrieving their field quadratures. It is based on a differential measurement carried out after mixing the signal of interest with a reference radiation, named Local Oscillator (LO), on a 50/50 beam splitter followed by two identical detectors. This technique has all the advantages of the balanced detection, where the common noise (e.~g. correlated noise due to the photon-generation process or amplification) is suppressed by measuring the difference between the two balanced parts of the optical beam. Any possible classically-correlated contribution affecting the measurement is, therefore, cancelled out increasing the detection sensitivity as required for reaching the standard quantum limit~\cite{Schumaker:84}. Investigation of quantum light through balanced homodyne detection has been extensively pursued in the near-infrared. As a matter of fact, this is the region where the first efficient systems have been developed~\cite{raymer1995ultrafast,Lambrecht_1996,loudon:2000quantum,zavatta2002time} and utilized~\cite{costanzo2017measurement,Zavatta:2020}. Besides, the technological progress of near-infrared components goes hand-in-hand with the worldwide demand for communications devices at telecom wavelengths~\cite{kaiser2016fully,mondain2019chip}. In this framework, balanced homodyne detection is a useful tool, widely exploited for continuous-variable quantum communication both in optical fiber and free-space links~\cite{Ralph:1999,Semenov:2009,Elser:2009}. The mid-infrared (MIR) spectral region ($\lambda > \SI{3}{\micro \meter}$) is a promising alternative to the near-infrared for free-space-optical communication~\cite{Temporao}. 
In fact, considering the well-reduced Rayleigh scattering cross section compared to the visible/near-infrared, the atmosphere's MIR transparency window between 3 and \SI{5}{\micro m} makes MIR radiation an excellent candidate for free-space communication applications. Up to now, MIR light has been widely investigated and employed for spectroscopy applications. Here indeed many molecules of atmospheric and astrophysical interest can be investigated on their strongest ro-vibrational transitions~\cite{wysocki:2005,lee:2007,Bartalini:2009,Galli:2013a,Galli:2013b,Galli:2014a,Galli:2016b,Coddington:2016,Campo:2017,Consolino:2018,Borri:2019a,Picque:2019,Karlovets:2020}. Also in this field, the availability of quantum MIR sources can set the scene for compelling quantum sensing applications~\cite{RevModPhys.Degen:2017}. The goal of this work is to explore the extension of quantum balanced homodyne detection to the MIR spectral region, by demonstrating novel technologies for fully exploiting the advantages to operate in a quantum regime. Balanced detection has already been investigated in the MIR for classical applications such as frequency-modulation spectroscopy~\cite{carlisle:1989}, difference-frequency laser spectroscopy~\cite{chen:1998}, balanced radiometric detection~\cite{sonnenfroh:2001}, and Doppler-free spectroscopy~\cite{Bartalini:09}. Other optical schemes suitable for single-photon quantum applications, such as coincidence measurements~\cite{mancinelli2017} or free-space Quantum Key Distribution with discrete variables~\cite{aellen2008feasibility}, have so far been studied. In this work, we evaluate the possibility of investigating continuous-variable quantum physics in the MIR through a novel Balanced Homodyne Detector (BHD). In particular, our BHD has been tested with Quantum Cascade Lasers (QCLs), chip-scale semiconductor-heterostructure devices based on intersubband transitions in quantum wells, operating in the mid-to-far infrared~\cite{Faist:1994,Tombez:2013a}. The development of our BHD allows the investigation of the quantum properties on QCLs radiation which are yet unexplored. Broadband QCLs can emit frequency combs due to the high third-order non-linearity which characterizes the active region and enables a Four-Wave Mixing (FWM) parametric process in their waveguide~\cite{hugi:2012,Friedli:2013,Riedi:2015,Burghoff:2014,Faist:2016,Cappelli:2016,CappelliConsolino:2019,Mezzapesa:2019,Consolino:2020}. From a quantum optical point of view, FWM makes QCLs potential non-classical state emitters. Indeed, the possibility of engineering squeezed and color entangled states via FWM has been already demonstrated in several optical systems~\cite{levenson:1985,slusher:1985squeezing,mccormick:2007,Dutt:2015}. In the following sections we show that the novel BHD here presented is shot-noise-limited and suitable for directly unveiling non-classicality in MIR light. This represents the first experimental step for the investigation and exploitation of non-classical correlations in the light emitted by QCLs. \section{Methods}\label{sec:Theory of balanced detection} \subsection{Theory of balanced detection} \label{subsec:theory} We describe the theory of BHD, composed of a 50/50 beam splitter and two detectors~\cite{loudon:2000quantum}, to test whether a given detector can operate at the shot-noise level when a single-mode radiation is used as the LO. 
Typically, the LO is assumed to be a coherent state with a well-defined photon-number variance, equal to the mean number of emitted photons~\cite{loudon:2000quantum}. In the following description, we consider the incident radiation to have an arbitrary variance $(\Delta n_R )^2$ to take into account the extra noise that is present in our QCL sources (see section~\ref{sec:Results and discussion}). \\In a real setup, optical signals are affected by losses caused by absorption and reflections from optical components. In Quantum Optics theory, losses can be represented as a beam splitter that couples the radiation with the vacuum, characterized by the coefficients $R=i(\sqrt{1-\eta_1})$ and $T= \sqrt{\eta_1}$ where $\eta_1$ is the overall optical transmission efficiency, taking into account any attenuation due to the different components of the experimental setup~\cite{loudon:2000quantum}. The optical losses budget should be carefully addressed according to the specific application of the balanced detector. In the case of homodyne detection, the LO acts as the reference radiation and the relevant optical losses are the ones affecting the signal of interest mixed via the beam splitter with the LO~\cite{loudon:2000quantum}. On the contrary, when the balanced detector is used for characterizing the statistics of the laser source employed as LO, the LO becomes the radiation under study and, therefore, the optical losses affecting it become relevant. The two different scenarios are discussed in the corresponding experimental context in section~\ref{sec:ExpSetup}. \begin{figure} \caption{ Scheme of the sum and difference measurement, where an attenuation $1-\eta_1$ is placed before the 50/50 beam splitter and the two real detectors are considered with quantum efficiency $\eta_{qe} \label{fig:sommadiffreale} \end{figure} In practice, also the two detectors have losses, resulting in a ratio between flowing electrons and number of incident photons lower than one (quantum efficiency $\eta_{qe} < 100\%$). These losses can be modelled with a beam splitter as well placed before an ideal detector ($D_3$ and $D_4$, Fig.~\ref{fig:sommadiffreale}). For this model, the real detectors are assumed to be identical (same quantum efficiency), with no saturation, and with an instantaneous and linear responsivity in time. Any time dependence in the creation and annihilation operators is neglected. Furthermore, the detection system is assumed to be perfectly balanced to benefit from the advantages of a balanced detection in term of noise suppression~\cite{Schumaker:84,loudon:2000quantum}. In a setup as the one depicted in Fig.~\ref{fig:sommadiffreale}, it is possible to derive a relation between the real detected quantities (labelled with $D$ in the equations below) and the ones of the incident radiation. The currents at the outputs of the two detectors, $\hat{I}_{3D}$ and $\hat{I}_{4D}$, are proportional to the incident flux of photons onto the corresponding detectors. 
Therefore, integrating the sum and the difference of the output signals over the measurement time leads to the following results: \begin{eqnarray} \langle \hat{N}^D_+ \rangle & = & \eta \langle \hat{n}_R \rangle \label{eq:sum},\\ \left (\Delta {N}^D_+ \right)^2 & = & \eta^2 \left (\Delta {n}_R \right)^2 + \eta (1-\eta) \langle \hat{n}_R \rangle \label{eq:realvariancesum},\\ \langle \hat{N}^D_- \rangle & = & 0 \, \label{eq:dif},\\ \left(\Delta N^D_- \right)^2 & = & \eta \langle \hat{n}_R \rangle \label{eq:realvariance}, \end{eqnarray} where $\hat{N}^D_+$ (Eqs.~\ref{eq:sum} and \ref{eq:realvariancesum}) is the sum of the detected photon-number signals, $\hat{N}^D_-$ (Eqs.~\ref{eq:dif} and \ref{eq:realvariance}) is the difference, $\eta = \eta_1 \eta_{qe}$, and $\hat{n}_R$ is the number of photons emitted by the LO source. From this derivation, it is clear that an accurate analysis of the losses is needed: losses reduce the measured signal and add an extra term to the variance of the sum given by the coupling with the vacuum field. An excess of attenuation can lead to a signal lying under the background noise floor of the setup and/or to a measured light statistics dominated by the vacuum fluctuations. In the case of a coherent state $\left (\Delta {n}_R \right)^2 = \langle \hat{n}_R \rangle $, the retrieved sum signal via Eq.~\eqref{eq:realvariancesum} is at the shot-noise level $ \left (\Delta {N}^D_+ \right)^2 = \eta \langle \hat{n}_R \rangle $ and the vacuum does not alter the measured light statistics. More generally, in a regime not dominated by vacuum fluctuations, by comparing Eq.~\eqref{eq:realvariancesum} with Eq.~\eqref{eq:realvariance}, it is possible to understand whether the statistics of the incident light (LO) is shot-noise limited. \subsection{Homodyne detector characterization} As discussed in the previous section, it is possible to characterize a balanced homodyne detection setup by sending only the LO on the beam splitter and measuring the difference signal between the two detector outputs. In the limit of the detector's linear responsivity, the differential noise (Eq.~\eqref{eq:realvariance}) is directly proportional to the incident power of the LO and corresponds to the shot noise. Furthermore, the balanced detector can be applied for the characterization of the LO statistics: in principle, the laser intensity noise can be retrieved from the sum of the two detectors output signals (Eq.~\eqref{eq:realvariancesum}). It is possible to understand if the statistics of the LO is shot-noise-limited by comparing the sum (Eq~\eqref{eq:realvariancesum}) with the difference (Eq.~\eqref{eq:realvariance}). The BHD differential background noise, which comprises the dark current of the detectors and the electronic noise, sets the BHD sensitivity limit and determines the minimum measurable noise level. The clearance, given by the noise power ratio between the measured shot noise and the differential background noise, is an important feature for a BHD, as it contributes to the overall detection efficiency~\cite{appel2007electronic}. The maximum clearance of the BHD is obtained for the maximum incident LO power before detector saturation. Working in the linear regime is essential for having a direct link between the current's statistics at the detector output and the photon statistics of the incident radiation and, consequently, to get accurate results. Finally, another crucial point for the test of the BHD is a thorough analysis of the losses. 
Indeed, as described in Eqs.~\eqref{eq:realvariancesum} and ~\eqref{eq:realvariance}, each loss (e.g. due to optical elements or quantum efficiencies of the detectors) couples the light under analysis with the vacuum field, and thus affects the measured statistics~\cite{loudon:2000quantum}. \subsection{Experimental setup} \label{sec:ExpSetup} \begin{figure} \caption{Sketch of the BHD characterization setup. A single-mode QCL is used as LO and is sent on the BHD made of a 50/50 beam splitter (BS) and two \ce{HgCdTe} \label{fig:50-50setup} \end{figure} The setup used to characterize the BHD is schematically shown in Fig.~\ref{fig:50-50setup}. It is composed of a 50/50 \ce{CaF2} beam splitter, coated for a wavelength range from~\SIrange{2}{8}{\micro\meter}, and two commercial preamplified photovoltaic \ce{HgCdTe} detectors (VIGO, PVI-4TE-5-2x2) characterized by a nominal bandwidth of \SI{180}{\mega\hertz} and a spectral response spanning from~\SIrange{2.5}{5.0}{\micro \meter} \cite{note3}. These detectors are equipped with a commercial two-stage preamplifying system (VIGO, MIP-10-250M-F-M4): the first stage is a DC-coupled transimpedance amplifier, the second stage is an AC-coupled amplifier with a measured gain of 26.5 in voltage. The detectors are cooled down to $T=\SI{200}{K}$ by a four-stage-peltier cooling system via a thermoelectric cooler controller (VIGO,PTCC-01-BAS). The 50/50 splitting is done with a precision $|R|^2-|T|^2= 0.2\%$, calculated via DC signals. The signals at the output of the detectors are acquired in the time domain with a sample rate of \SI{625}{\mega S/s} through two different channels of an oscilloscope with a bandwidth of \SI{200}{\mega \hertz}. The time duration of each acquisition is \SI{1}{\milli \second}. \begin{figure} \caption{Detectors responsivities measured at \SI{4.72} \label{fig:responsivity} \end{figure} We tested the BHD with two different continuous-wave single-mode QCLs emitting at $\lambda=\SI{4.47}{\micro \meter}$ and $\lambda=\SI{4.72}{\micro \meter}$. The two lasers are powered by ultra-low-noise current drivers (ppqSense, QubeCL15-P) with a typical current noise density of $\SI{200}{pA/\sqrt{\mathrm{Hz}}}$, to minimize excess technical noise. The QCL radiation first passes through a MIR optical isolator (wavelength working range from \SIrange{4.5}{4.7}{\micro\meter}) and is then attenuated by a variable attenuator used for controlling the LO power impinging on the BHD independently of the laser's operating regime. In the case of maximum transmission through the variable attenuator, taking into account the contribution from all the optical elements (attenuator, isolator, beam splitter, lenses, mirrors), the total optical transmission is \SI{47 \pm 1}{\%} at \SI{4.72}{\micro \meter} and \SI{55 \pm 1}{\%} at \SI{4.47}{\micro \meter}. As shown in section~\ref{sec:Theory of balanced detection}, the quantum efficiency is another key parameter to be taken into account in the loss budget. It can be calculated from the responsivities ($\mathcal{R}$), reported in Fig.~\ref{fig:responsivity}, as $\eta_{qe} = \mathcal{R} hc/ (\lambda e)$, where $h$ is the Plank constant, $c$ is the speed of light, and $e$ is the electron charge. This leads to a quantum efficiency of \SI{33 \pm 1}{\%} for both the two detectors at $\lambda=\SI{4.72}{\micro \meter}$ (Figs~\ref{fig:responsivity} (a) and (b)). 
At $\lambda=\SI{4.47}{\micro \meter}$ the responsivities of the detectors are higher (Figs.~\ref{fig:responsivity} (c) and (d)), in agreement with the curve reported in the datasheet, which has a maximum at \SI{4.5}{\micro \meter}. The corresponding quantum efficiency is \SI{41 \pm 1}{\%}. \paragraph{Losses budget for balanced homodyne detection.} For homodyne detection applications of the BHD, we must consider both the quantum efficiency and the optical losses contribution due to the optical components from the beam splitter on. This leads to a detection efficiency up to 40\%. In the MIR, a large contribution to the losses comes from the Fresnel reflection off the optical elements, due to the relatively large refractive index of their materials. Notice that the quantum efficiency of the \ce{HgCdTe} detectors is not limited by fundamental properties of the material, but mostly by its purity. Besides, the available anti-reflection coatings for this spectral region are generally less effective and much more expensive than the ones for visible and near-infrared wavelengths, where, thanks to a more advanced technology, higher quantum efficiencies are easier to achieve, as required for advanced measurements such as quantum state tomography~\cite{lvovsky2009continuous}. In addition to the adoption of effective anti-reflection coatings, the detection efficiency can be enhanced by increasing the absorption probability of the light by the photodiode. This can be done, for example, by placing a golden surface on the back of the semiconductor medium acting as a retroreflector. \paragraph{Losses budget for laser source characterization.} To characterize the laser source that we aim to employ as LO, we can use the presented detector (Fig.\ref{fig:50-50setup}) as a direct balanced detector, in which we mix the light under study with the vacuum and compare the sum with the difference of the two acquired signals, as reported in section~\ref{sec:Theory of balanced detection}. In this scenario, the LO is no longer a reference radiation but it becomes the light under investigation itself. For studying the light emitted by the laser source, both optical losses and quantum efficiency are relevant, as they change the statistics of the measured light field via coupling with the vacuum (Eq.~\eqref{eq:realvariancesum}). With an optical transmissivity of 55\% and a quantum efficiency of 41\%, the total achievable maximum detection efficiency is around 23\%. It is worth noting that this value depends both on the detection system and on the source. Indeed, when the laser output power overcomes the detector saturation level (e.~g. $ P>\SI{1.2}{mW}$, Figs.~\ref{fig:responsivity} (a) and (b)), an attenuator is required. This introduces losses and affects the overall efficiency. Moreover, when using lasers which are sensitive to optical feedback as QCLs, an optical isolator is required. This is the case of our setup, as shown in Fig.~\ref{fig:50-50setup}, in which an optical isolator with a transmissivity around 70\% (60\%) at \SI{4.47}{\micro\meter} (\SI{4.72}{\micro\meter}) is used to prevent optical feedback perturbing laser operation. In optimal conditions, that is an emission power below the detector saturation level and no isolator, only the losses due to the remaining optical elements (beam splitter, lenses and detectors) have to be considered and the detection efficiency can increase up to 40\%. 
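To make the loss model of section~\ref{sec:Theory of balanced detection} concrete, the following Python sketch (a minimal Monte-Carlo illustration, not part of the actual data analysis) propagates classical light with a small amount of excess intensity noise through an overall efficiency $\eta=\eta_1 \eta_{qe}$ and an ideal 50/50 split, and reproduces Eqs.~\eqref{eq:sum}--\eqref{eq:realvariance} numerically. The efficiencies are the ones quoted above for \SI{4.47}{\micro \meter}; the sample size, the mean photon number and the amount of excess noise are arbitrary choices made only for the example.
\begin{verbatim}
# Minimal Monte-Carlo check of the sum/difference relations of Sec. 2.1 for
# classical light: binomial thinning models the overall efficiency
# eta = eta_1*eta_qe, and independent 50/50 routing of the surviving photons
# models the beam splitter.
import numpy as np

rng    = np.random.default_rng(0)
N      = 200_000          # number of simulated acquisitions (arbitrary)
n_mean = 1.0e6            # mean LO photon number per acquisition (arbitrary)
eta    = 0.55 * 0.41      # optical transmission x quantum efficiency (this section)

# Super-Poissonian LO: Poisson statistics plus 0.1% classical intensity noise.
lam = n_mean * (1.0 + 1.0e-3 * rng.standard_normal(N))
n_R = rng.poisson(np.clip(lam, 0.0, None))

n_det = rng.binomial(n_R, eta)      # photons surviving all losses
n_3   = rng.binomial(n_det, 0.5)    # photons reaching detector D3
n_4   = n_det - n_3                 # photons reaching detector D4

N_plus, N_minus = n_3 + n_4, n_3 - n_4
print(N_plus.mean(),  eta * n_R.mean())                              # <N_+> = eta <n_R>
print(N_plus.var(),   eta**2*n_R.var() + eta*(1 - eta)*n_R.mean())   # (Delta N_+)^2
print(N_minus.mean())                                                # <N_-> = 0
print(N_minus.var(),  eta * n_R.mean())                              # (Delta N_-)^2 = eta <n_R>
\end{verbatim}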
\section{Results and discussion} \label{sec:Results and discussion} The AC signals acquired at the outputs of the two \ce{HgCdTe} detectors are digitally summed and/or subtracted. In Fig.~\ref{fig:INPSDfreq}, the Intensity Noise Power Spectral Density (INPSD) is calculated~\cite{note2} and the spectra are compared with the INPSD of the difference of the AC background signals. As described in section~\ref{sec:Theory of balanced detection}, in the linear-responsivity regime the variance of the sum and of the difference signals, measured as INPSD, provide information about the intensity noise of the incident radiation and the corresponding shot-noise level, respectively. \begin{figure} \caption{(a) Example of INPSDs of the sum (orange trace) and the difference (blue trace) of the AC signals compared to the difference of the detector background (grey trace) and the difference of the two oscilloscope channels backgrounds (petroleum trace). The dashed black line represents the theoretical one-sided PSD shot-noise level for an ideal detector (i.e. with an infinite bandwidth). For frequencies higher than $\SI{100} \label{fig:INPSDfreq} \end{figure} \\The sum INPSD, shown in Fig.~\ref{fig:INPSDfreq}~(a) (orange trace), represents the detected intensity noise obtained using the single-mode QCL emitting at $\lambda=\SI{4.72}{\micro \meter}$, driven at \SI{712}{mA}, at a working temperature of \SI{18}{\celsius}, and after an optical attenuation of 93\%. Despite the considerable attenuation, by comparing the sum with the difference (blue trace) we can infer that the detected intensity noise of the laser is above the shot-noise level. Giving a closer look at the differential measurement in Fig.~\ref{fig:INPSDfreq}~(a) it is possible to determine the optimal working frequency range for the BHD as the interval between \SI{1}{MHz} and \SI{100}{MHz} approximately, where the difference INPSD has the typical white-noise flat trend and it is compatible with the expected ideal one-sided Power Spectral Density (PSD) of the shot noise (dashed black line). This is defined as $ PSD_{\mathrm{shot-noise}} = 2 e I$, where $e$ is the electron charge and $I$ is the detector output current~\cite{rice2016:shotnoiseinfrequency}. For high frequencies, above \SI{100}{MHz}, the data drop below the ideal shot-noise level is due to the finite bandwidth of the setup (detector and oscilloscope). The cut-off is measured as the \SI{-3}{dB} drop point of the signal, resulting in a measured bandwidth of \SI{120}{MHz}. The shot-noise-sensitivity limit of the balanced detector is given by the grey trace, which is the INPSD of the difference of the AC background signals. We have also verified that the contribution of the oscilloscope background to the electronic noise is negligible (petroleum trace). Indeed, the oscilloscope differential noise is more than \SI{20}{dB} below the detector background noise, reaching \SI{30}{dB} at \SI{30}{MHz}. Another important parameter in the BHD characterization is the Common-mode Rejection Ratio (CMRR). The CMRR is defined as the ratio between the INPSD of the sum and the INPSD of the difference and characterizes quantitatively the noise suppression capability of the system. To measure the CMRR at different frequencies, we modulated the intensity of the laser with a square-wave signal at \SI{1}{MHz}. In this way, it is possible to test the CMRR of the BHD simultaneously at different frequencies, as the square-wave spectrum is composed of the odd harmonics of the fundamental frequency. 
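For concreteness, the following Python sketch illustrates how such a frequency-resolved CMRR estimate can be extracted from the two digitized channels (the sampling parameters match those of section~\ref{sec:ExpSetup}, but the traces below are synthetic placeholders rather than recorded data, and the analysis code actually used may differ in its details).
\begin{verbatim}
# Sketch: CMRR as the ratio of the sum- and difference-signal power spectral
# densities, evaluated at the odd harmonics of a 1 MHz square-wave modulation.
import numpy as np
from scipy.signal import welch

fs = 625e6                                  # sample rate, 625 MS/s
t = np.arange(int(1e-3 * fs)) / fs          # 1 ms record, as in the acquisitions
rng = np.random.default_rng(1)

# Placeholder traces standing in for the two digitized AC channels (volts):
# a common 1 MHz square-wave modulation plus uncorrelated noise on each channel.
common = 1e-3 * np.sign(np.sin(2 * np.pi * 1e6 * t))
v3 = 0.5 * common + 1e-5 * rng.standard_normal(t.size)
v4 = 0.5 * common + 1e-5 * rng.standard_normal(t.size)

f, psd_sum = welch(v3 + v4, fs=fs, nperseg=2**16)
_, psd_dif = welch(v3 - v4, fs=fs, nperseg=2**16)

for k in (1, 3, 5, 7):                      # odd harmonics of the square wave
    i = np.argmin(np.abs(f - k * 1e6))
    print(f"{k} MHz: CMRR = {10 * np.log10(psd_sum[i] / psd_dif[i]):.1f} dB")
\end{verbatim}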
For this characterization we used the laser under the same conditions as in Fig.~\ref{fig:INPSDfreq}~(a). For each frequency component of the square wave we have computed the CMRR as the ratio between the INPSD of the sum and that of the difference, as shown in Fig.~\ref{fig:INPSDfreq}~(b) (blue circles) \cite{note4}. The same measurements have been performed for characterizing the CMRR of the oscilloscope (grey circles) to test the instrumental limit of this measurement. In particular, we have sent the same square-wave signal, equally split, into the two channels of the oscilloscope used for the acquisition and, by measuring the INPSD of the sum and difference, we have estimated the CMRR of the oscilloscope. The analysis in Fig.~\ref{fig:INPSDfreq}~(b) is performed in the flat working region of the BHD (i.e. from \SI{1}{MHz} to \SI{100}{MHz}, as already discussed). Notice that while at lower frequencies the CMRR is limited by the oscilloscope, this is not the case for higher frequencies, where the CMRR of the BHD is lower. Nevertheless, a high-frequency region around \SI{30}{MHz} can be found where the CMRR is still over \SI{20}{dB}.
\begin{figure} \caption{INPSD of the difference of the AC output signals versus the incident power of the radiation. Each point corresponds to the average level in the selected frequency window of \SI{3}{MHz} centred at \SI{30}{MHz}.} \label{fig:INPSDvsP} \end{figure}
In general, to minimize the effect of the background on the measured INPSD it is convenient to identify the optimal working region of the BHD, where the shot noise of the incident radiation is well above the background noise and the responsivity is kept linear at the same time. Indeed, only in the linear-responsivity regime of the detectors are the current fluctuations directly proportional to the photon-number fluctuations. Therefore, by measuring the statistics of the current, it is possible to obtain direct information on the statistics of the light under investigation. Furthermore, this is the optimal working region for quantum applications as well. Indeed, it is exactly in the range where the shot noise is well above the background noise that it is possible to unveil sub-shot-noise fluctuations, expected for quantum states such as intensity-squeezed states~\cite{loudon:2000quantum}. Given these considerations, we selected a \SI{3}{MHz} frequency window, centred at \SI{30}{MHz}, for the spectral analysis. Here, the sum signal is less perturbed by classical and technical noise contributions and the incident radiation noise is indeed closer to the shot-noise level and more compatible with a coherent state. At the same time, this frequency window is far enough from the high-frequency cut-off of the detectors. By computing the average level of the spectra in this window, we verified that the INPSD of the difference shows a linear trend with the LO incident optical power and that the BHD is shot-noise limited. This demonstrates that our detector is suitable for measuring the fluctuations of the incident radiation at the shot-noise level and below. As shown in Fig.~\ref{fig:INPSDvsP}, at \SI{30}{MHz} the measured differential INPSDs are up to 7 (6) times above the background level for $\lambda = \SI{4.47}{\micro \meter}$ (\SI{4.72}{\micro \meter}), i.~e. the maximum clearance of the BHD is 7 (6). This leads to an equivalent optical efficiency, as defined in~\cite{appel2007electronic}, of 86\% for incident radiation at a wavelength of \SI{4.47}{\micro \meter} (83\% at \SI{4.72}{\micro \meter}).
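These figures are consistent with treating the residual differential background as an equivalent loss channel. Assuming that a clearance $C$ corresponds to an equivalent optical efficiency of $1-1/C$, which is the identification that reproduces the numbers quoted above (cf.~\cite{appel2007electronic}), the arithmetic can be checked with the short Python sketch below.
\begin{verbatim}
# Equivalent optical efficiency of the differential background, assuming the
# identification eta_eq = 1 - 1/C for a clearance C, and the resulting
# effective quantum efficiency eta_eff = eta_eq * eta_qe.
for label, C, eta_qe in [("4.47 um", 7, 0.41), ("4.72 um", 6, 0.33)]:
    eta_eq = 1 - 1 / C
    print(f"{label}: eta_eq = {eta_eq:.0%}, eta_eff = {eta_eq * eta_qe:.0%}")
# -> 4.47 um: eta_eq = 86%, eta_eff = 35%
# -> 4.72 um: eta_eq = 83%, eta_eff = 27%
\end{verbatim}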
By multiplying this value by the measured quantum efficiency (41\% at $\lambda=\SI{4.47}{\micro \meter}$), we find an effective quantum efficiency of 35\%. To achieve the best performance in terms of clearance and linear responsivity, the incident power of the LO has to be accurately selected. According to our characterization, the optimal LO power impinging on the beam splitter is \SI{2.0}{mW} at \SI{4.47}{\micro \meter} and \SI{2.5}{mW} at \SI{4.72}{\micro \meter}. Notice that in the characterization at \SI{4.47}{\micro \meter} (Fig.~\ref{fig:INPSDvsP}) saturation is achieved just above \SI{2.0}{mW} incident power, significantly lower than the saturation level measured for \SI{4.72}{\micro \meter} ($P>\SI{2.5}{mW}$). This is in agreement with the responsivity peak of our HgCdTe detectors, which is centred around \SI{4.5}{\micro \meter}. It is also important to notice that the saturation level depends on the transimpedance amplification system as well: proper adjustment of the electronic amplification allows the detection of higher LO powers while preventing early power saturation.
\begin{figure} \caption{Clearance measured at \SI{4.47}{\micro \meter} and \SI{4.72}{\micro \meter}.} \label{fig:Clearance} \end{figure}
A thorough analysis of the performance of the BHD in terms of clearance and linearity is shown in Fig.~\ref{fig:Clearance}. In particular, the clearance spectra (a,b), for several incident power values and for both the wavelengths used, are plotted in the \SIrange{1}{100}{MHz}-frequency region, i.e. where the differential spectrum is flat, as evidenced in the previous discussion (Fig.~\ref{fig:INPSDfreq}~(a)). Specifically, Figs.~\ref{fig:Clearance}~(a) and (b) show two different stages of saturation: in graph (a) the clearance at \SI{2.20}{mW} (orange trace) is characterized by a saturation visible in the whole spectrum, while in graph (b) the curve corresponding to \SI{2.54}{mW} is at the edge of power saturation, clearly visible only for high frequencies (>\SI{40}{MHz}). Consistently with the analysis shown in Fig.~\ref{fig:INPSDvsP}, in Fig.~\ref{fig:Clearance} the spectra exhibit a higher clearance before saturation at \SI{4.47}{\micro \meter}, reaching a value up to 8 and, consequently, an effective quantum efficiency up to 36\%. To assess the linearity of the clearance with the incident power on the BHD, we have integrated the spectra in a \SI{3}{MHz} window centered at different frequencies, as reported in Figs.~\ref{fig:Clearance}~(c) and (d). In Fig.~\ref{fig:Clearance}~(c) it is clear that for an incident power of \SI{2.0}{mW} the clearance deviates from the linear trend for frequencies above \SI{10}{MHz}, suggesting the onset of saturation. In summary, from Figs.~\ref{fig:Clearance}~(c) and (d) it is possible to conclude that the detector shows a linear behavior with the incident power until the saturation level is reached, for Fourier frequencies ranging from \SIrange{10}{80}{MHz}. In addition, the graphs show that the clearance decreases as the central frequency of the analysis increases. The achieved values of overall detection efficiency and clearance of our BHD are very encouraging. With the present values, our BHD will be able to detect quantum states of light in the MIR such as squeezed states. Quantum characterization of single-photon states as well as more exotic quantum states is also possible, although quantum state tomography requires quantum efficiencies above 50\% to retrieve negative-valued Wigner functions \cite{lvovsky2009continuous}.
However, more sophisticated criteria can be applied to certify non-classicality of the light under investigation when the quantum efficiency is lower than 50\% \cite{biagi:2021}. In contrast, for quantum information processing such as continuous-variable quantum teleportation \cite{paris:2003} and for long-distance continuous-variable quantum communication in free space (e.g. satellite-based \cite{Dequal2021}), the overall detection efficiency needs to be improved.
\section{Conclusion} We have presented a mid-infrared balanced detection system suitable for quantum characterization of light via homodyne detection. In particular, we have proven its capability to achieve shot-noise-limited detection, by showing that the noise power of the differential signal retrieved at the output of the beam splitter is directly proportional to the incident power. The main features of the setup are \SI{120}{MHz} bandwidth, quantum efficiency up to 41\%, saturation for incident power higher than \SI{2.0}{mW} at the peak-responsivity wavelength of the HgCdTe detectors, and 50/50 DC splitting ratio with 0.2\% uncertainty. In this work, the wavelength dependence of the BHD responsivity and saturation is studied by testing the setup with two QCLs emitting at \SI{4.72}{\micro \meter} and \SI{4.47}{\micro \meter}. The spectral analysis of the clearance, performed for different values of the incident power on the BHD and at different FFT frequencies, shows that the maximum clearance compatible with a linear response and no saturation is up to 8, leading to an effective quantum efficiency of 36\%. This value is achievable at a Fourier frequency of \SI{10}{MHz}, which is also an optimal working region for the CMRR, there exceeding \SI{30}{dB}. The CMRR analysis shows a significant frequency dependence in the noise extinction in the presented setup: the CMRR decreases as the frequency increases. In general, for balanced homodyne detection applications, the optimal working region for the LO is where the clearance is maximum. Indeed, this is the optimal range where sub-shot-noise fluctuations can be observed. Furthermore, since the two outputs are combined digitally, the BHD can easily be adapted to compute not only the sum and the difference but also the product or more complex combinations of the two signals. This makes our BHD a versatile tool suitable not only for homodyne detection but also for other schemes, e.~g. second-order correlation measurements, that require a mathematical operation on split optical beams. This detector can be used for classical measurements as well, taking full advantage of 50/50 balanced detection, where the common noise contributions of the two balanced signals are subtracted down to the shot-noise level. Given its versatility, this detector represents an important step towards the quantum characterization of mid-infrared light. The first test results reported here show that such a scheme is suitable for demonstrating quantum-state generation in mid-infrared sources.
\section*{Acknowledgments} The authors gratefully thank the collaborators within the consortium of the Qombs Project: Prof. Dr. Jérome Faist (ETH Zurich) for having provided the quantum cascade lasers and the company ppqSense for having provided the ultra-low-noise laser current drivers (QubeCLs).
\\ The Authors acknowledge financial support from the European Union’s Horizon 2020 Research and Innovation Programme (Qombs Project, FET Flagship on Quantum Technologies grant n. 820419; Laserlab-Europe Project grant n. 871124) and from the Italian ESFRI Roadmap (Extreme Light Infrastructure - ELI Project). \section*{Disclosures} The authors declare no conflicts of interest. \section*{Data availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \end{document}
\begin{document} \title{A numerical study of branching and stability of solutions to three-dimensional martensitic phase transformations using gradient-regularized, non-convex, finite strain elasticity} \author{K. Sagiyama\thanks{Mechanical Engineering, University of Michigan}, S. Rudraraju\thanks{Mechanical Engineering, University of Michigan} and K. Garikipati\thanks{Mechanical Engineering and Mathematics, University of Michigan, corresponding author \textsf{[email protected]}}} \maketitle \abstract{ In the setting of continuum elasticity, phase transformations involving martensitic variants are modeled by a free energy density function that is non-convex in strain space. Here, we adopt an existing mathematical model in which we regularize the non-convex free energy density function by higher-order gradient terms at finite strain and derive boundary value problems via the standard variational argument applied to the corresponding total free energy, inspired by Toupin's theory of gradient elasticity. These gradient terms are to preclude existence of arbitrarily fine microstructures, while still allowing for existence of multiple solution branches corresponding to local minima of the total free energy; these are classified as \emph{metastable} solution branches. The goal of this work is to solve the boundary value problem numerically in three dimensions, observe solution branches, and assess stability of each branch by numerically evaluating the second variation of the total free energy. We also study how these microstructures evolve as the length-scale parameter, the coefficient of the strain gradient terms in the free energy, approaches zero. } $ $\\ \keywords{phase-transformation, twinning, three-phase equilibrium, meta-stability, non-convex free energy.} \section{Introduction}\label{S:introduction} Many multi-component solids, such as shape memory alloys (\textsf{NiTi}), involve phase transformations from cubic austenite to tetragonal martensite crystal structures. The tetragonal lattice is characterized by transformation strains relative to the undistorted reference cubic structure. The strain splits the symmetry group of the cubic lattice into three equivalent sub-groups, each of which corresponds to a tetragonal lattice oriented along one of the cubic crystal axes. These tetragonal variants accommodate themselves in a body to achieve configurations that are local energy minimizers while maintaining kinematic compatibility. As a result, \emph{twin} microstructures form with a tiled appearance due to the near constancy of strain within each twinned sub-domain. The underlying phenomenology can be described by a free energy density function that is non-convex in a frame-invariant strain measure, to account for the finite deformation, and admits three minima corresponding to the tetragonal variants. The classical variational treatment of elasticity only identifies stationary points, while of particular interest are the \emph{metastable} solution branches that correspond to local minima of the total energy. These metastable branches are to be identified in this work by examining the stability of solutions obtained by numerically solving the boundary value problems arising from the variational formulation. Configurations that minimize the total free energy on a given domain with boundaries have been studied in the setting of sharp-interface models by constructing sequences that converge weakly to the minimizer \cite{Ball1987,Chipot1988}. 
Although this approach provides good insight into various classes of problems, it allows for arbitrarily fine twin microstructures---a non-physical aspect of the mathematical formulation resulting from the absence of interface energies associated with the martensitic phase boundaries. Diffuse-interface models resolve this pathology by including higher-order, strain gradient-dependent terms representing the interfacial energy; the coefficients of the higher-order gradient terms, which control the twin interface thicknesses, give rise to \emph{length-scale parameters}. Because diffuse-interface models directly account for the total free energy, they also make it straightforward to solve general boundary value problems (provided that the higher-order gradients in the partial differential equation can be suitably treated), to treat energy wells of unequal depths, and to investigate stability/metastability of solution branches via the second variation of the total energy. One-dimensional models have been intensively studied in this context: Carr et al. \cite{Carr1984} showed, for standard one-dimensional problems with Dirichlet and higher-order Neumann boundary conditions, that mere inclusion of the higher-order strain gradient energy terms only leaves a pair of stable solutions, the global minimizers of the total energy, which have only a single \emph{phase boundary}. These solutions, however, do not represent experimentally obtained microstructures that have twin layers separated by multiple phase boundaries. This gap in the representation was resolved by Truskinovsky and Zanzotto \cite{Truskinovsky1995,Truskinovsky1996} and by Vainchtein and co-workers \cite{Vainchtein1998,Vainchtein1999} by adding to the model an elastic support that represents the multi-dimensional effect. This allowed the successful recovery of metastable solution branches corresponding to local minimizers of the total energy. Healey and Miller \cite{Healey2007} studied an anti-plane shear model of martensite-martensite phase transformations for pure Dirichlet problems in two dimensions, and obtained metastable solution branches over a range of values of the length scale parameter. Numerical work in this field has included \emph{branch-tracking} techniques to obtain metastable solution branches and evaluation of the second variation of the total free energy to assess their stability \cite{Vainchtein1998,Vainchtein1999,Healey2007}. While one-dimensional problems and some restricted, linearized two-dimensional problems may be partially aided by analysis, the complete, nonlinear, three-dimensional treatment at finite strain with general boundary conditions must be numerical. Rudraraju et al. \cite{Rudraraju2014} have adopted spline-based (isogeometric analysis) numerical methods to obtain three-dimensional solutions to general boundary value problems of Toupin's theory of gradient elasticity at finite strain \cite{Toupin1962}. This approach makes it possible to study a wide diversity of problems. In this communication we present numerical solutions to diffuse-interface problems of phase transformations between martensitic variants under traction loading in three dimensions. The fundamental framework to study metastable branches is the same as that used previously in the literature \cite{Vainchtein1998,Vainchtein1999,Healey2007}. We obtain numerical solutions to boundary value problems from a starting guess.
Particular solution branches are tracked as the strain gradient length scale parameter is varied, and stability of a given branch is determined by numerically examining the second variation of the total free energy. To the best of our knowledge, this is the first three-dimensional study of twin microstructures and their stability using gradient-regularized non-convex elasticity. Of further note is that we apply arbitrary boundary conditions. Crucial to our work is the numerical framework derived from the work of Rudraraju et al. \cite{Rudraraju2014}. In Sec. \ref{S:One-dimensional_example} we present an overview of the fundamental ideas using a simple problem in one dimension. Three-dimensional problems are then studied in Sec. \ref{S:Three-dimensional_example} employing virtually the same numerical techniques. Conclusions and future studies are discussed in Sec. \ref{S:Conclusion}. \section{A one-dimensional primer}\label{S:One-dimensional_example} Branching in three dimensions being our eventual concern, it is instructive to first study a related problem in one dimension. The one-dimensional free energy density, $\Psi_{\text{1D}}$, is defined as a function of strain and strain-gradient derived from the solution field $u(X)$ as:\\ \begin{align} \Psi_{\text{1D}}=(u_{,X}^4-2u_{,X}^2)+l^2 u_{,XX}^2,\label{E:Psi_1d} \end{align} where $l$ is the strain gradient length-scale parameter. This energy density \eqref{E:Psi_1d} is non-convex with respect to the strain component $u_{,X}$; see Fig. \ref{Fi:plot_free-energy-density}. In the absence of the strain-gradient contribution; i.e., with $l=0$ in \eqref{E:Psi_1d}, this non-convex density function characterizes fields $u$ that are composed purely of two variants, one with $u_{,X}=-1$ and the other with $u_{,X}=+1$; see Fig. \ref{Fi:ux_1d_alt}. In this setting laminae (sub-domains) of these two variants form with arbitrary size, and in principle, infinitely fine microstructures can develop. The length scale parameter $l$ precludes the existence of such twinned microstructures of infinite fineness by penalizing the interfaces between them. It also introduces a characteristic length scale to the problem. \begin{figure} \caption{ Plots of (\subref{Fi:plot_free-energy-density} \label{Fi:plot_strain-energy-density_soln} \end{figure} We seek solution fields $u \in \mathcal{S}_{1D}$ on $\overline{\Omega}_{1D}$, where $\Omega_{1D}=(0,1)$, that satisfy standard and higher-order Dirichlet boundary conditions: \begin{subequations} \begin{align} u=0,\; u_{,X}=0 \quad &\text{ on } X=0,\\ u=d,\; u_{,X}=0 \quad &\text{ on } X=1, \end{align} \label{E:bc_u_1D} \end{subequations} where $d=2^{-10}$, and globally/locally minimize the total free energy corresponding to the density function \eqref{E:Psi_1d}: \begin{align} \Pi_{\text{1D}}:=\int_{\Omega_{\text{1D}}}\Psi_{\text{1D}}\hspace{0.1cm}\mathrm{d}\Omega_{\text{1D}}. 
\end{align} To this end, we define admissible test functions $w \in \mathcal{V}_{1D}$ that satisfy: \begin{subequations} \begin{align} w=0,\; w_{,X}=0 \quad &\text{ on } X=0,\\ w=0,\; w_{,X}=0 \quad &\text{ on } X=1, \end{align} \label{E:bc_w_1D} \end{subequations} and solve the following weak form of the boundary value problem derived from the variational argument: Find $u \in \mathcal{S}_{1D}$ such that $\forall w \in\mathcal{V}_{1D}$, \begin{align} D\Pi_{\text{1D}}[w]=\int_{\Omega_{\text{1D}}}(w_{,X}P+w_{,XX}B)\hspace{0.1cm}\mathrm{d}\Omega_{\text{1D}}=0,\label{E:D_Pi_1D} \end{align} where $P$ is the first Piola-Kirchhoff stress and $B$ is the higher-order stress, defined as: \begin{subequations} \begin{align} P&:=\partial\Psi_{\text{1D}}/\partial u_{,X},\label{P1D}\\ B&:=\partial\Psi_{\text{1D}}/\partial u_{,XX}\label{B1D}. \end{align} \end{subequations} Stability of the solutions is then assessed by examining the positive definiteness of the second variation: \begin{align} D^2\Pi_{\text{1D}}[w,w]=\int\limits_{\Omega_{\text{1D}}}\left( w_{,X}\frac{\partial^2\Psi_{\text{1D}}}{\partial u_{,X}\partial u_{,X}}w_{,X}+ w_{,X}\frac{\partial^2\Psi_{\text{1D}}}{\partial u_{,X}\partial u_{,XX}}w_{,XX}+ w_{,XX}\frac{\partial^2\Psi_{\text{1D}}}{\partial u_{,XX}\partial u_{,X}}w_{,X}+ w_{,XX}\frac{\partial^2\Psi_{\text{1D}}}{\partial u_{,XX}\partial u_{,XX}}w_{,XX} \right)\hspace{0.1cm}\mathrm{d}\Omega_{\text{1D}}>0,\label{E:D2_Pi_1D} \end{align} where we have explicitly retained the symmetric second and third terms of the integrand for clarity of the development. Note that the weak form \eqref{E:D_Pi_1D}, together with \eqref{E:bc_u_1D} and \eqref{E:bc_w_1D}, leads to the following strong form: \begin{align*} -P_{,X}+B_{,XX}=0\quad\text{ on }\Omega_{\text{1D}},\label{E:strong_1d} \end{align*} which possesses fourth-order spatial derivatives due to the constitutive relations \eqref{E:Psi_1d}, \eqref{P1D} and \eqref{B1D}. This strong form is not investigated further in this work.
\subsection{The numerical framework} We seek numerical solutions $u^h\in \mathcal{S}_{1D}^h\subset\mathcal{S}_{1D}$ to a discretized counterpart of the weak form \eqref{E:D_Pi_1D} with the test functions $w^h\in\mathcal{V}_{1D}^h\subset\mathcal{V}_{1D}$ where: \begin{subequations} \begin{align} \mathcal{S}_{1D}^h &= \{v^h \in H^2(\Omega_\text{1D})\vert v^h = 0,\; v^h_{,X} = 0\;\text{on} \; X = 0, \; v^h = d,\; v^h_{,X} = 0\; \text{on} \; X = 1\}, \\ \mathcal{V}_{1D}^h &= \{v^h \in H^2(\Omega_\text{1D})\vert v^h = 0,\; v^h_{,X} = 0\;\text{on} \; X = \{0,1\}\}, \end{align} \end{subequations} where $H^2$ represents the standard Sobolev space of square-integrable functions with square-integrable first and second derivatives. The problem was solved using isogeometric analysis (IGA) with fourth-order B-spline basis functions defined on $1024$ elements of uniform size on $\Omega_{\text{1D}}$; see Cottrell et al. \cite{Cottrell2009} for a comprehensive treatment of IGA, and Rudraraju et al. \cite{Rudraraju2014} for its application to the current problem framework, but with weak enforcement of Dirichlet boundary conditions. We use the quadruple-precision floating-point format and solve for solutions up to an absolute tolerance of $10^{-25}$ in the Euclidean norm of the residual corresponding to the discretized version of \eqref{E:D_Pi_1D}. The higher than typical precision and more stringent tolerance are important to verify convergence to extrema/saddle points of the rapidly fluctuating free energy functional that governs this problem.
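The derivatives of $\Psi_{\text{1D}}$ that enter the discretized residual \eqref{E:D_Pi_1D} and the Hessian associated with \eqref{E:D2_Pi_1D} are elementary; as an independent check, separate from the IGA implementation, they can be generated symbolically, for instance with the following Python/SymPy sketch (illustrative only).
\begin{verbatim}
# Symbolic check of the 1D constitutive relations P and B and of the second
# derivatives of Psi_1D that enter the Hessian of the discretized energy.
import sympy as sp

uX, uXX, l = sp.symbols('u_X u_XX l', real=True)
Psi = (uX**4 - 2*uX**2) + l**2 * uXX**2     # free energy density defined above

P = sp.diff(Psi, uX)       # first Piola-Kirchhoff stress: 4*u_X**3 - 4*u_X
B = sp.diff(Psi, uXX)      # higher-order stress: 2*l**2*u_XX
H = sp.Matrix([[sp.diff(Psi, a, b) for b in (uX, uXX)] for a in (uX, uXX)])
print(P, B)
print(H)   # [[12*u_X**2 - 4, 0], [0, 2*l**2]]; the cross terms vanish here
\end{verbatim}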
This rapid fluctuation of the free energy underlies the existence of families of stable/unstable solutions, which is the crux of this work. The second variation in \eqref{E:D2_Pi_1D} is discretized on the same B-spline basis, and stability is assessed by examining the positive definiteness of the resulting symmetric Hessian matrix using the eigenvalue solver \textsf{FEAST v3.0} \cite{Polizzi2009} to a relative accuracy of $O(10^{-8})$. Figures were produced using \textsf{mathgl 2.3.0}. \subsection{Solution branches and branch-tracking} Fig. \ref{Fi:PI-l_1d_orig} shows the total free energy, $\Pi_{\text{1D}}$, of solutions to the boundary value problem, plotted against the length scale parameter $l$, where six representative branches are labeled as A - F. Numerically computed strains, $u_{,X}$, are plotted against $X$ in Fig. \ref{Fi:uX-X} for branches A - F at selected values of $l$. Here, we outline the procedure that we used to obtain the branches shown in Fig. \ref{Fi:PI-l_1d_orig}. In solving the nonlinear boundary value problem \eqref{E:D_Pi_1D} and \eqref{E:bc_u_1D} with \eqref{E:bc_w_1D}, we observed that the homogeneous initial guess $u_{\text{init}}=0$ always captures the branch of highest energy at each $l$, as seen in Fig. \ref{Fi:PI-l_1d}, where these solutions are represented by green squares; discontinuities present in the sequence of green squares are good indicators of the existence of multiple branches. The blue solid lines in Fig. \ref{Fi:PI-l_1d}, on the other hand, are obtained by a simple \emph{branch-tracking} technique. We first chose a starting value of $l$ and an initial guess for the solution $u_{\text{init}}$, and solved the problem for $\bar{u}(l)$. We then incremented/decremented $l$ by a small amount $\Delta l$ and solved this updated problem for $\bar{u}(l+\Delta l)$ using $\bar{u}(l)$ as the initial guess. We repeated this process of using the previous solution as the initial guess for the updated problem to extend the smooth energy curves shown in Fig. \ref{Fi:PI-l_1d}. This method helped us to \emph{stay} on the branch that the very first solution happened to fall onto. In our numerical experiments the first solutions were obtained in two different ways: using the homogeneous initial guess and using random initial guesses. Specifically, branches B, D, and F were first solved using the homogeneous initial guess at $l=0.20$, $l=0.10$, and $l=0.08$, respectively, followed by incrementation/decrementation of $l$. Branches B, D, and F obtained in this way respect the \emph{geometric symmetry} (cf. \cite{Truskinovsky1996}) of the boundary value problem, at least for large enough $l$; contrary to the case considered in Sec. 3.4 of \cite{Truskinovsky1996}, elastic supports are absent in our problem, and one can see that the translated solution fields $\tilde{u}(X):=u(X)-d/2$ of these branches satisfy $\tilde{u}(X)=-\tilde{u}(1-X)$, and thus $\tilde{u}_{,X}(X)=\tilde{u}_{,X}(1-X)$. Branches A, C, and E, on the other hand, are asymmetric. Those branches were obtained using random initial guesses at $l=0.10$, $l=0.10$, and $l=0.05$, respectively, followed by branch-tracking. \subsection{Stability} We performed a numerical stability analysis by evaluating the positive definiteness of the Hessians corresponding to the second variation \eqref{E:D2_Pi_1D} of the continuous problem for branches A - F, of increasing total free energy. We recall that if we find multiple solution branches corresponding to local minima of the total free energy, these are classified as \emph{metastable}.
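The branch-tracking and stability-classification procedure just described can be summarized by the following minimal Python sketch. Here \texttt{solve} and \texttt{hessian} are placeholders for the nonlinear solve of the discretized weak form \eqref{E:D_Pi_1D} and for the assembly of the discretized second variation \eqref{E:D2_Pi_1D}; they are assumptions of this illustration rather than part of our implementation, which computes the relevant eigenvalues with \textsf{FEAST} on the sparse Hessian:
\begin{verbatim}
import numpy as np

def track_branch(solve, hessian, u_start, l_start, l_stop, dl):
    """Follow one solution branch by stepping the length scale parameter l
    and reusing the previous converged solution as the initial guess."""
    branch = []
    u = u_start
    for l in np.arange(l_start, l_stop, dl):   # dl may be negative (decrementing l)
        u = solve(l, u_init=u)                 # stay on the branch of the first solution
        lam_min = np.linalg.eigvalsh(hessian(u, l))[0]   # smallest Hessian eigenvalue
        branch.append({"l": l, "u": u, "stable": lam_min > 0.0, "lam_min": lam_min})
    return branch
\end{verbatim}
A positive smallest eigenvalue along a branch corresponds to a (meta)stable solution in the sense used below.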
Our analysis showed that the lowest branch, A, is stable up to $l=0.225$, at which value it meets branch B in the $\Pi_{\text{1D}}-l$ space; branch B is stable for $l>0.225$; and no metastable solution branch exists at any $l$; see Figs. \ref{Fi:PI-l_1d_orig} and \ref{Fi:PI-l_1d_zoom1}. In \cite{Carr1984} it was shown that, if higher-order Neumann boundary conditions $u_{,XX}=0$ are applied at both ends instead of the higher-order Dirichlet conditions $u_{,X}=0$ as in \eqref{E:bc_u_1D}, the only stable branch is the one of lowest energy and no metastable branch exists for the type of energy density defined in \eqref{E:Psi_1d}. Although the boundary conditions employed here are different, leading to the particular $\Pi_{\text{1D}}-l$ free energy landscapes in Fig. \ref{Fi:PI-l_1d_orig}, our observation is essentially consistent with the analysis given in \cite{Carr1984}. \begin{figure} \caption{Plots of the total free energy $\Pi_{\text{1D}}$ versus the length scale parameter $l$ for branches A - F: green squares denote solutions obtained with the homogeneous initial guess and blue solid lines denote branch-tracked solutions.} \label{Fi:PI-l_1d} \end{figure} \begin{figure} \caption{Plots of the computed strain $u_{,X}$ versus $X$ for branches A - F at selected values of $l$.} \label{Fi:uX-X} \end{figure} \section{Branching and stability of solutions in three-dimensional, non-convex elasticity}\label{S:Three-dimensional_example} We now turn our attention to the main focus of this communication: branching in three-dimensional problems. In related two-dimensional work \cite{Healey2007}, the branching of solutions was studied for two-phase elastic solids with pure Dirichlet boundaries using a non-convex free energy density function also regularized by strain-gradient terms. There, the associated Euler-Lagrange equation admitted the trivial solution, allowing for a local bifurcation analysis. The equations were first linearized to find \emph{bifurcation points}, i.e. values of the length-scale parameter at which solution branches bifurcate from the trivial solution. The local bifurcation analysis was then followed by a global bifurcation analysis, where solution branches were continued along the length-scale parameter from those bifurcation points, solving the original nonlinear equation using a branch-tracking technique. The stability of those branches was then assessed by numerically checking the positive definiteness of the second variation of the total free energy. The same technique was also used by Vainchtein and co-workers \cite{Vainchtein1999} for one-dimensional problems. Here, we carry out a numerical, three-dimensional study of an elastic solid that undergoes phase transformations between three tetragonal variants under traction loads, and of the solution branches that form as a result. We chose to work on a body subject to traction because one of our ultimate goals is the simulation of shape-memory alloys, where such traction boundary conditions naturally arise. For the boundary value problem governed by non-convex elasticity, regularized by Toupin's theory of gradient elasticity at finite strain, only purely numerical approaches, such as the methods described in Sec. \ref{S:One-dimensional_example}, are feasible for computing branches, rather than approaches based on local bifurcation analysis. The stability of each solution is assessed as described in Sec. \ref{S:One-dimensional_example} using the second variation of the total free energy.
\begin{figure} \caption{ (\subref{Fi:3well_diagram} \label{Fi:plot_contour_3variants} \end{figure} The non-dimensionalized free-energy density function is defined in terms of gradients and strain gradients of the displacement field $\boldsymbol{u}(\boldsymbol{X})$ as: \begin{align} \Psi&: =B_1e_1^2 +B_2(e_2^2+e_3^2) +B_3e_3(e_3^2-3e_2^2) +B_4(e_2^2+e_3^2)^2 +B_5(e_4^2+e_5^2+e_6^2)\notag\\ &\phantom{:}+l^2(e_{2,1}^2+e_{2,2}^2+e_{2,3}^2+e_{3,1}^2+e_{3,2}^2+e_{3,3}^2), \label{E:Psi} \end{align} where $B_1,...,B_5$ are constants and the following reparameterized strain measures were used: \begin{subequations} \begin{alignat}{3} &e_1=\frac{E_{11}+E_{22}+E_{33}}{\sqrt{3}},\qquad &&e_2 =\frac{E_{11}-E_{22}}{\sqrt{2}},\qquad &&e_3 =\frac{E_{11}+E_{22}-2E_{33}}{\sqrt{6}} \label{E:strains1-3}\\ &e_4=E_{23}=E_{32},\qquad &&e_5 =E_{13}=E_{31},\qquad &&e_6 =E_{12}=E_{21},\label{E:strains4-6} \end{alignat} \end{subequations} where $E_{IJ}=1/2(F_{kI}F_{kJ}-\delta_{IJ})$ are components of the Green-Lagrange strain tensor, $F_{iJ}=\delta_{iJ}+u_{i,J}$ being components of the deformation gradient tensor. Here as elsewhere ${(\hspace{1pt}\cdot\hspace{1pt})_{,J}}$ denotes spatial derivatives with respect to the reference rectangular Cartesian coordinate variable $X_J$ ($J=1,2,3$). Throughout this work we set $B_5=180$, $B_1=3.25B_5$, $B_2=-1.5/r^2$, $B_3=1.0/r^3$, and $B_4=1.5/r^4$, where $r=0.25$, unless otherwise noted. At $e_2,e_3 = 0$, corresponding to deformations that reduce to volumetric dilatations in the infinitesimal strain limit, the free energy density function \eqref{E:Psi} possesses a local maximum in $e_2-e_3$ space and represents the cubic austenite crystal structure. The reference, unstrained state is also in the cubic austenite structure. Thus defined, $\Psi$ is non-convex with respect to the strain variables $e_2$ and $e_3$ and possesses three minima, or \emph{energy wells}, of unit depth located at a distance of $0.25$ from the origin on the $e_2-e_3$ plane; see Fig. \ref{Fi:3well_diagram}. These three energy wells represent three martensitic variants of symmetrically equivalent tetragonal crystal structures, elongated in the $X_1$-, $X_2$-, and $X_3$-directions, respectively, that are colored/numbered in Fig. \ref{Fi:3well_diagram}. As an example, a characteristic configuration that achieves minimum energy density of $-1$ almost everywhere for $l=0$ in \eqref{E:Psi} appears in Fig. \ref{Fi:twin}, showing laminae of variant 1 and variant 2. In our numerical example a tetragonal variant is regarded as present at a point of the body if the energy density on this $e_2-e_3$ plane is less than $-0.5$ at that point. We are interested in solution fields $\boldsymbol{u}(\boldsymbol{X})$ on a unit cube $\overline{\Omega}$, where $\Omega=(0,1)^3$, that satisfy the following Dirichlet boundary conditions: \begin{subequations} \begin{align} u_i=0,\; u_{i,1}=0 \quad &\text{ on } X_1=0,\quad i=1,2,3,\\ u_1=0,\; u_{1,1}=0 \quad &\text{ on } X_1=1, \end{align} \label{E:bc_u} \end{subequations} and globally/locally minimize the total free energy corresponding to the density function \eqref{E:Psi}: \begin{align} \Pi:=\int_{\Omega}\Psi\hspace{0.1cm}\mathrm{d}\Omega-\int_{\Gamma_{X_1=1}}(u_2T_2+u_3T_3)\hspace{0.1cm}\mathrm{d}\Gamma_{X_1=1},\label{E:Pi} \end{align} where $T_2=T_3=0.01$ are the standard tractions on the reference boundary $X_1=1$ denoted by $\Gamma_{X_1=1}$. 
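Before completing the statement of the boundary value problem, it may help to make the three-well structure of \eqref{E:Psi} concrete. The following minimal Python sketch evaluates the reparameterized strains \eqref{E:strains1-3}--\eqref{E:strains4-6} and the local (gradient-free) part of \eqref{E:Psi} for a homogeneous deformation lying in one of the wells; the strain-gradient contribution is omitted since it involves $\nabla\boldsymbol{F}$, and the particular deformation is chosen purely for illustration:
\begin{verbatim}
import numpy as np

# material parameters used in this work (unless otherwise noted)
B5 = 180.0
B1 = 3.25 * B5
r = 0.25
B2, B3, B4 = -1.5 / r**2, 1.0 / r**3, 1.5 / r**4

def reparameterized_strains(F):
    """e1..e6 from the deformation gradient F via the Green-Lagrange strain."""
    E = 0.5 * (F.T @ F - np.eye(3))
    e1 = (E[0, 0] + E[1, 1] + E[2, 2]) / np.sqrt(3.0)
    e2 = (E[0, 0] - E[1, 1]) / np.sqrt(2.0)
    e3 = (E[0, 0] + E[1, 1] - 2.0 * E[2, 2]) / np.sqrt(6.0)
    return e1, e2, e3, E[1, 2], E[0, 2], E[0, 1]

def psi_local(F):
    """Local (gradient-free) part of the free energy density, i.e. Psi with l = 0."""
    e1, e2, e3, e4, e5, e6 = reparameterized_strains(F)
    return (B1 * e1**2 + B2 * (e2**2 + e3**2) + B3 * e3 * (e3**2 - 3.0 * e2**2)
            + B4 * (e2**2 + e3**2)**2 + B5 * (e4**2 + e5**2 + e6**2))

if __name__ == "__main__":
    # homogeneous tetragonal stretch along X_3: E = diag(t, t, -2t), sqrt(6)*t = -0.25,
    # so that (e1, e2, e3) = (0, 0, -0.25) sits in one of the three wells
    t = -0.25 / np.sqrt(6.0)
    F = np.diag([np.sqrt(1.0 + 2.0 * t)] * 2 + [np.sqrt(1.0 - 4.0 * t)])
    print(psi_local(F))   # approximately -1.0, the well depth quoted in the text
\end{verbatim}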
The standard traction vanishes on the boundaries $X_2 = \{0,1\}$, $X_3 = \{0,1\}$, and higher-order tractions vanish wherever higher-order Dirichlet conditions are not prescribed \cite{Toupin1962,Toupin1964}. We define admissible test functions $\boldsymbol{w} \in \mathcal{V}$ that satisfy: \begin{subequations} \begin{align} w_i=0,\; w_{i,1}=0 \quad &\text{ on } X_1=0,\quad i=1,2,3,\\ w_1=0,\; w_{1,1}=0 \quad &\text{ on } X_1=1, \end{align} \label{E:bc_w} \end{subequations} and solve the following weak form of the boundary value problem derived from variational arguments \cite{Toupin1962,Toupin1964,Rudraraju2014}: \begin{align} D\Pi[\boldsymbol{w}]=\int_{\Omega}(w_{i,J}P_{iJ}+w_{i,JK}B_{iJK})\hspace{0.1cm}\mathrm{d}\Omega-\int_{\Gamma_{X_1=1}}(w_2T_2+w_3T_3)\hspace{0.1cm}\mathrm{d}\Gamma_{X_1=1}=0,\quad\forall\boldsymbol{w}\in \mathcal{V}\label{E:D_Pi} \end{align} where the first Piola-Kirchhoff stress tensor and the higher-order stress tensor in component form are: \begin{subequations} \begin{align} P_{iJ}&:=\partial\Psi/\partial F_{iJ},\label{P3D}\\ B_{iJK}&:=\partial\Psi/\partial F_{iJ,K}.\label{B3D} \end{align} \end{subequations} We then assess stability of each solution by numerically checking the positive definiteness of the second variation as: \begin{align} D^2\Pi[\boldsymbol{w},\boldsymbol{w}]&=\int\limits_{\Omega}\left( w_{i,I}\frac{\partial^2\Psi}{\partial F_{iI}\partial F_{jJ}}w_{j,J} +w_{i,I}\frac{\partial^2\Psi}{\partial F_{iI}\partial F_{jJ,K}}w_{j,JK} +w_{i,IL}\frac{\partial^2\Psi}{\partial F_{iI,L}\partial F_{jJ}}w_{j,J} +w_{i,IL}\frac{\partial^2\Psi}{\partial F_{iI,L}\partial F_{jJ,K}}w_{j,JK} \right)\hspace{0.1cm}\mathrm{d}\Omega>0,\label{E:D2_PI} \end{align} where, as in the one-dimensional case, we have explicitly retained the symmetric second and third terms of the integrand for clarity of the development. Note that, following standard variational arguments \cite{Toupin1962,Toupin1964}, one can derive the strong form of the boundary value problem corresponding to the weak form \eqref{E:D_Pi} and \eqref{E:bc_u} with \eqref{E:bc_w} as: \begin{align} -P_{iJ,J}+B_{iJK,JK}=0\quad\text{ on }\Omega,\notag \end{align} along with Neumann/higher-order Neumann conditions: \begin{subequations} \begin{alignat}{2} P_{i1}-B_{i11,1}-2B_{i12,2}-2B_{i13,3}&=T_i,\; B_{i11}&&=0,\; (i=2,3) \;\text{on}\; X_1=1,\notag\\ P_{iL}-2(B_{iL1,1}+B_{iL2,2}+B_{iL3,3})+B_{iLL,L}&=0,\;B_{iLL}&&=0,\;(i=1,2,3,\; \text{no sum on } L)\; \text{on}\; X_L=\{0,1\}\;(L=2,3).\notag \end{alignat} \end{subequations} A more detailed treatment of these boundary conditions is found in the works of Toupin \cite{Toupin1962,Toupin1964}. \subsection{Numerics} We seek numerical solutions $\boldsymbol{u}^h\in \mathcal{S}^h\subset\mathcal{S}$ to a finite-dimensional counterpart of the weak form \eqref{E:D_Pi} defined for $\boldsymbol{w}^h\in\mathcal{V}^h\subset\mathcal{V}$, where: \begin{subequations} \begin{align} \mathcal{S}^h &= \{\boldsymbol{v}^h \in H^2(\Omega)\vert v_i^h=0,\; v^h_{i,1}=0\;\text{on} \; X_1=0,\;\text{for}\; i=1,2,3,\;v^h_1=0,\;v^h_{1,1}=0\;\text{on}\; X_1 = 1\}, \\ \mathcal{V}^h &= \{\boldsymbol{v}^h \in H^2(\Omega)\vert v_i^h=0,\; v^h_{i,1}=0\;\text{on} \; X_1=0,\;\text{for}\; i=1,2,3,\;v^h_1=0,\;v^h_{1,1}=0\;\text{on}\; X_1 = 1\}. \end{align} \end{subequations} The problem was solved using IGA. 
The finite-dimensional subspaces $\mathcal{S}^h$ and $\mathcal{V}^h$ were constructed using a second-order, $C^1$-continuous, B-spline basis defined in three dimensions on $64^3$, $128^3$, and $256^3$ elements of uniform size, which enforce the Dirichlet/higher-order Dirichlet conditions \emph{strongly}. IGA was previously employed to solve a range of boundary value problems with Toupin's theory of gradient elasticity at finite strain by Rudraraju et al. \cite{Rudraraju2014}, with higher-order Dirichlet conditions applied \emph{weakly}. Our code \cite{IGAP4GradElast} is written in \textsf{C}. We use \textsf{Mathematica 10} to symbolically produce elementwise residual/tangent evaluation routines, \textsf{PETSc 3.7.4} \cite{petsc-web-page,petsc-user-ref,petsc-efficient} for iterative linear/nonlinear solvers, \textsf{SLEPc 3.7.3} \cite{slepc1,slepc2,slepc3} for an eigenvalue problem solver, and \textsf{mathgl 2.3.0} for plots. Specifically, \textsf{MINRES} with Jacobi preconditioner and a backtracking line search method with cubic-order approximation were chosen for iterative solvers. We used the double-precision floating-point format with absolute tolerance of $10^{-12}$ on the residual of the discretized, matrix-vector weak form. The floating point precision and residual tolerance were relaxed relative to the one-dimensional problem for numerical efficiency. In practice, the lower precision and less stringent tolerance were found to be adequate after using the more demanding thresholds in the one-dimensional case. The second variation \eqref{E:D2_PI} was discretized on the same B-spline basis, and the stability of each solution was assessed by extracting the lower end of the spectrum of eigenvalues of the corresponding symmetric Hessian matrices. For the eigenvalue problem, we used an absolute convergence error tolerance of $1\times 10^{-6}$. \begin{figure} \caption{Plots of total free energy $\Pi$ v.s. length scale parameter $l$ for branches A - E on (\subref{Fi:PI-l_orig} \label{Fi:PI-l} \end{figure} \subsection{Solution branches and branch-tracking} We solved the boundary value problem on the $64^3$ mesh, obtained five different branches, denoted by A - E, and computed the total free energy for these solutions over ranges of values of $l$; these computed values appear as solid curves in Fig. \ref{Fi:PI-l}. At selected values of $l$, $l=0.0625$, $0.0750$, $0.1000$, $0.1500$, and/or $0.2000$, solutions were refined on a $128^3$ mesh and a $256^3$ mesh, and are plotted in Fig. \ref{Fi:PI-l} by red squares and black '+' signs, respectively. Reparameterized strains, $e_2$ and $e_3$ \eqref{E:strains1-3}, obtained on the $128^3$ mesh are plotted in Figs. \ref{Fi:e1} and \ref{Fi:e2} for each branch at selected values of $l$. In addition, values of $e_2$ and $e_3$ were computed at $33^3$ uniformly spaced points in the body and were plotted on the $e_2-e_3$ space in Fig. \ref{Fi:e1-e2}, superposed on the three-well diagram presented in Fig. \ref{Fi:3well_diagram}. The strain states corresponding to tetragonal variants 1, 2, and 3 appear in the orange, green, and brown wells, respectively. Fig. \ref{Fi:phase} shows three-variant plots that delineate sub-domains of the variants; material points that lie in the interfaces between any two variants are colored in dark gray. Note that while branches A and B very nearly overlap in the $\Pi-l$ space of Fig. \ref{Fi:PI-l}, the strains, microstructures and energy landscapes of these branches are actually vastly different as seen in Figs. 
\ref{Fi:e1}-\ref{Fi:phase}. The curves in Fig. \ref{Fi:PI-l} were obtained using a simple branch-tracking technique employed in the one-dimensional example in Sec. \ref{S:One-dimensional_example} with slight modification. For the purpose of demonstration, we aimed to obtain \emph{moderately low-energy} microstructures for which tetragonal variants are well developed in the body for the free energy coefficient $B_5=180$ and in the vicinity of $l=0.10$. Since such low-energy microstructures are under large strains, direct computation at $(B_5,l)=(180,0.10)$ is not practical as we in general do not have \emph{good} initial guesses with which we can obtain converged solutions in three dimensions. We thus first computed high-energy solutions under small strains at relatively large values of $(B_5,l)$, and then employed branch-trackings in decreasing $B_5$- and $l$-directions down to $(B_5,l)=(180,0.10)$, which would develop lower-energy microstructures under larger strains. Small-strain, high-energy solutions were obtained using either the homogeneous initial guess or random initial guesses of small magnitude. The former was used for branch E and the latter was used for branches A-D. To eventually obtain solutions well resolved on the $64^3$ mesh, we first computed the initial high-energy solutions on an $8^3$ mesh or on a $16^3$ mesh and successively refined them before branch-trackings. Finer initial meshes, say $32^3$, would produce finer and more interesting microstructures at the end of the iterative process, but at the expense of greater computational complexity. Our goal being demonstration of the series of techniques, however, we do not pursue this approach here. For branch E, for instance, a solution was first computed with $(B_5,l)=(500,0.54)$ on the $16^3$ mesh using the homogeneous initial guess and successively refined onto the $64^3$ mesh, which was then followed by branch-trackings as $(B_5,l)=(500,0.54)\rightarrow(500,0.15)\rightarrow(180,0.15)\rightarrow(180,0.10)$. Branch D, on the other hand, was first computed with $(B_5,l)=(500,0.50)$ using a random initial guess of $\vert\boldsymbol{u}^h\vert \sim O(10^{-2})$ on the $8^3$ mesh, successively refined onto the $64^3$ mesh, and subject to branch-trackings as $(B_5,l)=(500,0.54)\rightarrow(500,0.27)\rightarrow(180,0.27)\rightarrow(180,0.10)$. \subsection{Numerical convergence} For added confidence that the branches obtained in this study are not numerical artifacts, we studied convergence of these solutions with mesh refinement; at selected values: $l=0.0625$, $0.0750$, $0.1000$, $0.1500$, and/or $0.2000$, solutions were refined on a $128^3$ mesh and a $256^3$ mesh, and corresponding energy values were plotted in Fig. \ref{Fi:PI-l} as red squares and black '+' signs, respectively. Fig. \ref{Fi:PI-l_orig} implies that, in general, solutions computed on the $64^3$ mesh are energetically well converged and Fig. \ref{Fi:PI-l_zoom} indicates that solutions are better resolved for larger values of $l$, where the interface thickness is wider and microstructures are coarser. Refinement of branches A and B was more challenging especially for larger values of $l$, and refinement on the $256^3$ mesh was only performed for $l\leq 0.1000$ for these branches. These branches also experience slightly larger deviation when refined as seen in Fig. \ref{Fi:PI-l_zoom}. We did not pursue these problems further as these branches are of little practical interest. 
Fig. \ref{Fi:convergence}, on the other hand, shows the distribution of the three variants making up the microstructure at $l=0.0625$ for the $64^3$, $128^3$, and $256^3$ meshes. Attention is drawn to the near-complete convergence of solutions on the $64^3$ mesh in the sense of microstructure. \subsection{Stability/metastability} The numerical study of the boundary value problem was followed by a stability analysis, in which the positive definiteness of the Hessian derived from the second variation \eqref{E:D2_PI} was numerically checked for each branch. Table \ref{Ta:eig} shows the smallest eigenvalues of the Hessians for branches A - E at different refinement levels and selected values of $l$; positive values imply stability/metastability. Although its smallest eigenvalue is negative at $l=0.0750$, probably due to poor resolution, branch A is most likely stable, which is consistent with its most likely being the lowest-energy branch. Branch B, on the other hand, seems to gain stability somewhere between $l=0.1000$ and $l=0.1500$. Portions of branches C and D are also good candidates for metastability, considering the convergence behavior of the eigenvalues with mesh refinement. The stability/metastability behavior of the branches with decreasing $l$ points to the existence of as yet undiscovered branches with increasingly fine microstructure. Their resolution is limited only by the numerical expense of ever finer meshes. \subsection{Parametric dependence of twin microstructures} One can make several important observations for the solutions obtained in this section. As was also observed in one dimension, the interfaces between tetragonal variants, which are represented by the dark gray regions in Fig. \ref{Fi:phase}, become sharper as the length scale parameter $l$ decreases, with the state of much of the material descending into the energy wells, as indicated in Fig. \ref{Fi:e1-e2}. These plots also highlight how a richer microstructure develops at lower values of $l$, with more material points being localized to those wells that are sparsely populated at higher $l$. Fig. \ref{Fi:phase} also suggests that the interface thickness is proportional to $l$ and that, at a fixed value of $l$, the thickness is virtually the same over different branches -- an observation that can also be made for the one-dimensional problem in Sec. \ref{S:One-dimensional_example} from Fig. \ref{Fi:uX-X}. One can further infer from Figs. \ref{Fi:PI-l} and \ref{Fi:phase} that, regardless of stability, the equilibrium solutions achieving relatively low total free energy form via tetragonal variants with twin interfaces. This is consistent with the limiting case of $l=0$ shown in Fig. \ref{Fi:twin}, which follows from a pure energy minimization argument. Fig. \ref{Fi:top}, for instance, shows the twin structures observed in branches B and D at $l=0.0625$; Fig. \ref{Fi:twin} is repeated here to ease comparison. \begin{figure} \caption{Field values of $e_2$ for branches A - E on deformed configurations for selected values of $l$. Solutions on the $128^3$ mesh have been overlaid with a $32^3$ plotting mesh. } \label{Fi:e1} \end{figure} \begin{figure} \caption{Field values of $e_3$ for branches A - E on deformed configurations for selected values of $l$. Solutions on the $128^3$ mesh have been overlaid with a $32^3$ plotting mesh.
} \label{Fi:e2} \end{figure} \begin{figure} \caption{Contours of $(e_2,e_3)$ for branches A - E computed from the solutions on the $128^3$ mesh at $33^3$ uniformly spaced points. Selected values of $l$ are indicated. The three-well contour diagram, Fig. \ref{Fi:3well_diagram} \label{Fi:e1-e2} \end{figure} \begin{figure} \caption{Distribution of the three tetragonal variants for branches A - E on deformed configurations for selected values of $l$. Solutions on the $128^3$ mesh have been overlaid with a $32^3$-plotting mesh. The three tetragonal variants are indicated by different colors; see Fig. \ref{Fi:3well_diagram} \label{Fi:phase} \end{figure} \begin{figure} \caption{Convergence study of the distribution of tetragonal variants in the microstructure for branches B, C, and D. Solutions were computed at $l=0.0625$ on the $64^3$, $128^3$, and $256^3$ meshes. All plots were overlaid with a $32^3$ plotting mesh. } \label{Fi:convergence} \end{figure} \begin{table} \begin{center} \begin{tabular}[t]{|c|r|r|r|r|r|r|} \hline Branch & mesh & $l=0.0625$ & $l=0.0750$ & $l=0.1000$ & $l=0.1500$ & $l=0.2000$\\ \hline & $64^3$ & & -0.088755 & -0.065765 & -0.040403 & -0.027748 \\ E & $128^3$ & & -0.020364 & -0.013910 & -0.007705 & -0.004932 \\ & $256^3$ & & -0.003886 & -0.002482 & -0.001261 & -0.009806 \\ \hline & $64^3$ & -0.000301 & -0.000285 & -0.000284 & -0.000283 & -0.000206 \\ D & $128^3$ & -0.000041 & -0.000038 & -0.000038 & -0.000037 & -0.000027 \\ & $256^3$ & -0.000005 & -0.000005 & -0.000005 & -0.000005 & -0.011602 \\ \hline & $64^3$ & -0.046564 & -0.022921 & -0.000227 & -0.000264 & -0.000169 \\ C & $128^3$ & -0.006671 & -0.003216 & -0.000030 & -0.000034 & -0.000023 \\% & $256^3$ & -0.000861 & -0.000408 & -0.000004 & -0.000004 & -0.006915 \\ \hline & $64^3$ & -0.007714 & -0.008696 & -0.008139 & 0.000352 & 0.000350 \\ B & $128^3$ & -0.001112 & -0.001184 & -0.001133 & 0.000046 & 0.000046 \\ & $256^3$ & -0.000143 & -0.000152 & -0.000147 & & \\ \hline & $64^3$ & & -0.003220 & 0.000313 & 0.000352 & 0.000350 \\ A & $128^3$ & & -0.000522 & 0.000041 & 0.000046 & 0.000046 \\ & $256^3$ & & -0.000070 & & & \\ \hline \end{tabular} \end{center} \caption{Smallest eigenvalues of the Hessians corresponding to the discretized counterpart of the second variation \eqref{E:D2_PI} for branches A - E at different refinement levels for selected values of $l$. Positive values indicate stable/metastable solutions. } \label{Ta:eig} \end{table} \begin{figure} \caption{(\subref{Fi:twin2} \label{Fi:top} \end{figure} \section{Conclusion and future works}\label{S:Conclusion} We have considered martensitic phase transformations in three dimensions that are modeled by a free energy density function that is non-convex in strain space, and is regularized by Toupin's theory of gradient elasticity at finite strain. There exist three minima in the non-convex free energy density in strain space, corresponding to three, symmetrically equivalent, tetragonal, martensitic variants. The single maximum represents the cubic, austenite. Our primary interest was to establish numerical procedures to obtain solution branches corresponding to the extrema/saddle points of the total free energy, and to assess their stability. To this end, we have employed a simple branch-tracking technique to continuously follow a solution branch along the strain gradient length-scale parameter starting from the first solution computed using either a random initial guess or the homogeneous initial guess. 
The stability of each solution was then investigated in terms of the positive definiteness, or lack thereof, of the second variation of the total free energy. Each solution branch corresponds to a distinct twinned microstructure for the same boundary value problem. The phase interfaces between energetically stable tetragonal variants in a microstructure become sharper as the length scale parameter decreases. The microstructures of certain branches themselves become richer with variants missing at higher values of $l$ emerging at lower $l$. To our knowledge this is the first work that comprehensively studies branching of solutions and observed twin structures in three-dimensional diffuse-interface problems based on a non-convex density function regularized by strain gradient terms. This work forms a foundation to study shape-memory alloys under loading, where different microstructures are experimentally observed for the same set of boundary conditions. A proper investigation of that class of problems also requires the incorporation of elastodynamics, in which case, the variations from initial conditions lead to different solution branches and therefore different microstructures for the same set of boundary conditions. A more direct comparison with experiments also needs a treatment of plasticity coupled with twinning as modelled here. This work also provides a basis to study the homogenized response of a material exhibiting the microstructures corresponding to different solution branches. From such a study it may be possible to develop reduced order, effective constitutive models that also incorporate the evolution of martensitic microstructures. \section*{Acknowledgments} The numerical formulation and computations have been carried out as part of research supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award \#DE-SC0008637 that funds the PRedictive Integrated Structural Materials Science (PRISMS) Center at University of Michigan. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. We used XSEDE resources \cite{xsede} through the Campus Champions program. The numerical computations in three dimensions presented here also made intensive use of resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Finally, this research was supported in part through computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor. \end{document}
\begin{document} \title{Topological optimization of quantum key distribution networks} \author{R~All\'eaume$^1$, F~Roueff$^1$, E~Diamanti$^1$ and N~L\"utkenhaus$^{2,3}$} \address{$^1$ Telecom ParisTech \& LTCI - CNRS, Paris, France} \address{$^2$ University of Erlangen, Germany} \address{$^3$ Institute for Quantum Computing, Waterloo, Canada} \ead{[email protected]} \begin{abstract} A Quantum Key Distribution (QKD) network is an infrastructure that allows the realization of the key distribution cryptographic primitive over long distances and at high rates with information-theoretic security. In this work, we consider QKD networks based on trusted repeaters from a topology viewpoint, and present a set of analytical models that can be used to optimize the spatial distribution of QKD devices and nodes in specific network configurations in order to guarantee a certain level of service to network users, at a minimum cost. We give details on new methods and original results regarding such cost minimization arguments applied to QKD networks. These results are likely to become of high importance when the deployment of QKD networks will be addressed by future quantum telecommunication operators. They will therefore have a strong impact on the design and requirements of the next generation of QKD devices. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} Quantum Key Distribution (QKD) is a technology that uses the properties of quantum mechanics to realize an important cryptographic primitive: key distribution~\footnote{More accurately, the primitive is that of secret key agreement using a public quantum channel and a public authenticated classical channel.}. Unlike the techniques used in traditional ``classical'' cryptography, for which the security relies on the conjectured computational hardness of certain mathematical problems, QKD security can be formally proven. Secret keys established via QKD are information-theoretically secure, which implies that any adversary trying to eavesdrop cannot obtain any information on the transmitted keys at any point in the future, even if she possesses extremely large computational resources. The communication channels needed to perform QKD consist in an optical channel, on which well-controlled quantum states of light are exchanged, and a classical channel that is used for signaling during the quantum exchanges and for the classical post-processing phase, namely key reconciliation. Their combination forms a communication link, over which quantum key distribution allows two distant users to exchange a specific type of data, in particular secret keys. In this sense, QKD is by nature a telecommunication technology, and so \emph{QKD links} can be combined with appropriately designed nodes to form \emph{QKD networks}. The performance of QKD links has rapidly improved in the last years. Starting from pioneering experiments in the 90s~\cite{bennett:jcrypto92}, important steps have been taken to bring QKD from the laboratory to the open field. Thanks to the continuous efforts invested in developing better QKD protocols and hardware, in parallel to the advancement of security proofs (see~\cite{gisin:rmp02,dusek:pino06,scarani:qp08} for reviews), the performance that can now be achieved, in terms of attainable communication distance, secret key generation rate and reliability, positions QKD as the first quantum information processing technology reaching a level of maturity sufficient to target deployment over real-world networks. 
Indeed, off-the-shelf QKD systems are now commercially available~\cite{idsqmagiq}, and the first QKD networks have recently been implemented~\cite{elliott:njp02,elliott:qp05,secoqc}. Up to now, research in QKD has focused on building and optimizing individual systems to reach the longest possible distance and/or the highest possible secret bit rate, without taking into account the cost of such systems. However, as the perspective of deploying QKD networks becomes a reality, the question of optimal resource allocation, intrinsically linked to cost considerations, becomes relevant and important, as is the case for any telecommunication network infrastructure. It therefore becomes necessary to consider QKD from a cost perspective, and in particular to study the potential trade-offs of cost and performance that can occur in this context. Following the above arguments, we consider in this work the design of QKD networks from a topology viewpoint, and present techniques and analytical models that can be used to optimize the spatial distribution of QKD devices and QKD nodes within specific network architectures in order to guarantee a given level of service to the network users, at a minimum cost. We also study how cost minimization arguments influence the optimal working points of QKD links. We show in particular that, from the perspective of QKD networks, individual QKD links should be operated at an optimal working distance that can be significantly shorter than their maximum attainable distance. The paper is structured as follows. In section~\ref{sec:QKDnetworks}, we define a QKD network and discuss the topology and characteristics of the network architecture that we consider in this work. We also introduce the concept of a backbone network structure. In section~\ref{sec:Optimization}, we present our calculations and results on network topological optimization based on cost arguments. In particular, we provide a comprehensive set of modeling tools and cost function calculations in specific network configurations, and discuss the effect of our results on the design of practical QKD networks. Finally, in section~\ref{sec:perspectives}, we discuss open questions and future perspectives for QKD networks. \section{QKD networks} \label{sec:QKDnetworks} \noindent \emph{Definition and types of QKD networks} \noindent Extending the range of quantum key distribution systems to very long distances, and allowing the exchange of secret keys between multiple users, necessitates the development of a network infrastructure connecting multiple individual QKD links. Indeed, QKD links are inherently only adapted to point-to-point key exchange between the two endpoints of a quantum channel, while the signal-to-noise ratio decrease occurring with propagation loss ultimately limits their attainable range. It is then natural to consider QKD networks as a means to overcome these limitations. A QKD network is an infrastructure composed of QKD links, \emph{i.e.} pairs of QKD devices linked by a quantum and a classical communication channel connecting two separate locations, or nodes. These links are then used to connect multiple distant nodes. Based on these resources and using appropriate protocols, this infrastructure can enable the unconditionally secure distribution of symmetric secret keys between any pair of legitimate users accessing the network.
QKD networks can be categorized in two general groups~\cite{salvail:jcs09}: networks that create an end-to-end quantum channel between the two users, and networks that require a transport of the key over many intermediate trusted nodes. In the first group, we find networks in which a classical optical function such as switching or multiplexing is applied at the node level on the quantum signals sent over the quantum channel. This approach allows multi-user QKD but cannot be used to extend the key distribution distance. Much more advanced members of this group are the quantum repeater based QKD networks. Quantum repeaters~\cite{briegel:prl98} can create a perfect end-to-end quantum channel by distributing entanglement between any two network users. The implementation of quantum repeaters, however, requires complex quantum operations and quantum memories, whose realization remains an experimental challenge. The same is true for the simpler version of quantum repeaters, namely quantum relays~\cite{collins:jmo05}, which on the one hand do not require a quantum memory but on the other cannot arbitrarily extend the QKD communication distance.\\ \noindent \emph{Trusted repeater QKD networks: characteristics and assumptions} \noindent In this work, we are interested in the second group of networks, which we call \emph{trusted repeater QKD networks}. In these networks, the nodes act as trusted relays that store locally QKD-generated keys in classical memories, and then use these keys to perform long-distance key distribution between any two nodes of the network. Therefore, trusted repeater QKD networks do not require nodes equipped with quantum memories; they only require QKD devices and classical memories as well as processing units placed within secure locations, and can thus be deployed with currently available technologies. Indeed, the implementation of such networks has been the subject of several international projects~\cite{elliott:qp05,secoqc,dianati:scn08, peev:inprep09}. As we will see in detail in the following section, the analysis of trusted repeater QKD networks from a topology viewpoint and with the goal of achieving optimization based on cost considerations involves modeling several characteristics of such a network, namely the \emph{user distribution}, the \emph{node distribution}, the \emph{call traffic}, and the \emph{traffic routing}. The user and node distributions, denoted by $\Pi$ and $M$ respectively, will be considered as Poisson stochastic point processes, and will be thus modeled using convenient stochastic geometry tools. Modeling the traffic demand is particularly subtle because of the variation with respect to time and distance that this traffic may feature in a real network. Calculations here will neglect these variations and will be performed under the assumption of a uniform call volume between any pair of users, denoted as $V$. Finally, routing in trusted repeater QKD networks is performed according to the following general principle: First, local keys are generated over QKD links and are stored in nodes that are placed on both ends of each link. Global key distribution is then performed over a QKD path, \emph{i.e.} a one-dimensional chain of trusted relays connected by QKD links, establishing a connection between two end nodes. Secret keys are forwarded, in a hop-by-hop fashion, along these QKD paths. To ensure their secrecy, one-time pad encryption and information-theoretically secure authentication, both realized with a local QKD key, are performed. 
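The hop-by-hop forwarding principle just described can be illustrated with a minimal Python sketch. The sketch only shows the one-time-pad relaying of a key along a chain of trusted nodes; the authentication of the classical messages, which is equally necessary for information-theoretic security, and all key-management aspects are deliberately omitted, and the function names are ours:
\begin{verbatim}
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def relay_key(global_key, link_keys):
    """Transport global_key from A to B over a chain of trusted nodes.
    link_keys[i] is the local QKD key shared by nodes i and i+1; a fresh
    one-time pad is applied on every hop."""
    ciphertext = xor_bytes(global_key, link_keys[0])          # A encrypts for node 1
    for i in range(1, len(link_keys)):
        plaintext = xor_bytes(ciphertext, link_keys[i - 1])   # node i decrypts ...
        ciphertext = xor_bytes(plaintext, link_keys[i])       # ... and re-encrypts
    return xor_bytes(ciphertext, link_keys[-1])               # B decrypts the last hop

if __name__ == "__main__":
    key = os.urandom(32)                          # key to be shared between A and B
    links = [os.urandom(32) for _ in range(4)]    # four links, three trusted relays
    assert relay_key(key, links) == key
\end{verbatim}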
End-to-end information-theoretic security is thus obtained between the end nodes, provided that the intermediate nodes can be trusted.\\ \noindent \emph{Quantum backbone network architecture} \noindent Introducing hierarchy into network design can be an extremely convenient architectural tool because it allows one to break complex structures into smaller and more flexible ensembles. Indeed, such hierarchical levels offer an efficient way to help solve resource allocation problems arising in networks, ranging from network routing to network deployment planning. In this work, we will associate the notion of hierarchy in QKD networks with the existence of what we will call a \emph{quantum backbone network}. In classical networks and especially the Internet, a backbone line is a larger transmission line that carries data gathered from smaller lines that interconnect with it. By analogy with this definition, the backbone QKD network is an infrastructure for key transport that gathers the secret key traffic from many individual QKD links. QKD backbone links and nodes clearly appear as mutualized resources shared to provide service to many pairs of users. Keeping the fruitful analogy with classical networks, we will call \emph{access QKD links} the point-to-point links used to connect QKD end users to their nearest QKD backbone node. The principle of traffic routing that we described above can be conveniently transposed to the context of backbone networks. In this case, traffic from individual users is gathered locally at backbone QKD nodes. This mutualized traffic is then routed hop-by-hop over the backbone structure. Furthermore, it is important to note that the node and user point process distributions are distinct when a backbone network is considered, which might not be the case in a network without a backbone.\\ In the following, we will derive cost functions for different QKD network configurations, under the above assumptions regarding the topology and the way traffic is routed in these networks, and as a function of the characteristics of individual QKD links. We will then use the results to discuss how QKD networks should be dimensioned, the optimal working points of QKD links, as well as the interest of adopting a hierarchical architecture, materialized by the existence of a backbone, in QKD networks. \section{Topological optimization based on cost arguments} \label{sec:Optimization} \subsection{QKD links: characterizing the rate versus distance} \label{subsec:qkdlinkrate} The main element underlying the cost optimization related to the deployment of quantum networks is the intrinsic performance of QKD links. This performance can essentially be summarized by the function $R(\ell)$, which gives the rate, in bit/s, of secret key that can be established over a QKD link of length $\ell$. Clearly, this secret key bit rate varies from system to system, and comparisons between systems are thus difficult to establish. Moreover, comparisons have to be related to the security proofs for which the secret key bit rates have been derived. Security proofs are not yet fully categorized, although important steps in this direction have been taken~\cite{scarani:qp08}. As shown in figure~\ref{fig:RateQKDLink}, the typical curve describing the variation with distance of the logarithm of the mean rate of secret bit establishment $R(\ell)$ can be essentially separated into two parts: \begin{figure} \caption{Typical profile of the Rate versus Distance curve for a single QKD link.
} \label{fig:RateQKDLink} \end{figure} \begin{itemize} \item A {\bf linear} part, which is the region where the rate of secret key establishment varies as a given power of the propagation attenuation. Since the attenuation $\eta(\ell)$ varies exponentially with distance, $\log R(\ell)$ is linear in $\ell$. \item An {\bf exponential drop-off} at longer distances, where the error rate rapidly increases due to the growing contribution of detection dark counts. In this region, the decrease of the secret key rate is multi-exponential with distance. The slope of the curve representing $\log R(\ell)$ thus becomes increasingly steep until a maximum distance is reached. \end{itemize} For completeness, it is also important to mention the possibility that, for short distances, the secret bit rate could be limited by a saturation of the detection setup. This will be the case if the repetition rate at which the quantum signals are sent in the quantum channel exceeds the bandwidth of the detector. We will, however, not investigate this possibility any further in the remainder of this work. The behavior of the secret bit rate function $R(\ell)$ can be described using essentially three parameters, schematically shown in figure~\ref{fig:RateQKDLink}: \begin{enumerate} \item The secret bit rate at zero distance, $R_0$; \item The scaling parameter $\lambda_{\textrm{\tiny QKD}}$ in the linear region, such that $R(\ell)= R_0 \, e^{-\ell/\lambda_{\textrm{\tiny QKD}}}$; \item The distance at which the drop-off of the rate sets in, which is comparable to the maximum attainable distance, $D_{\textrm{\tiny drop}} \sim D_{\textrm{\tiny max}}$. \end{enumerate} $R_0$ is determined by the maximum clock rate of the QKD system. In QKD relying on photon-counting detection setups, $R_0$ is limited by the performance of the detectors, and is usually in the Mbit/s range. Clearly, solutions that improve the performance of the detectors have a direct impact on $R_0$~\cite{diamanti:pra05,yuan:apl07,hadfield:oe05,ma:ieeecl07}. For QKD systems relying on continuous variables~\cite{grosshans:nature03}, based on homodyne detection performed with fast photodiodes, the experimental bound on $R_0$ can be significantly higher, potentially in the Gbit/s range. The computational complexity of the reconciliation, however, currently limits $R_0$ to the Mbit/s range in the practical demonstrations performed so far~\cite{lodewyck:pra07}. The scaling parameter $\lambda_{\textrm{\tiny QKD}}$ is essentially determined by the attenuation $\eta(\ell)$ over a quantum channel of length $\ell$, and by a coefficient $r$ that is mainly related to the security proof that can be applied to the experimental system. In the case of a typical network based on optical fibers, the attenuation $\eta(\ell)$ can be parametrized by an attenuation coefficient $\alpha$ (in dB/km) as $\eta(\ell)= 10^{- \alpha \ell/10}$ (for the scaling of the attenuation in free space, see~\cite{scarani:qp08}). In the linear part of the curve shown in figure~\ref{fig:RateQKDLink}, the rate $R(\ell)$ varies as a given power $r$ of the attenuation, $R(\ell)= R_0 \, \eta(\ell)^r$. We can thus define the scaling parameter as $\lambda_{\textrm{\tiny QKD}} = 10/(\alpha \,r\,\log(10))$. For QKD performed at telecom wavelengths, with protocols optimized for long distance operation, we can take $\alpha = 0.22$~dB/km and $r=1$, which leads to $\lambda_{\textrm{\tiny QKD}} = 19.7$~km as the typical scaling distance for such QKD systems.
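These scaling relations can be reproduced with a few lines of Python; the value of $R_0$ below is merely an assumed Mbit/s-class figure used for illustration:
\begin{verbatim}
import numpy as np

def lambda_qkd(alpha_db_per_km=0.22, r=1.0):
    """Scaling length of the linear regime, lambda = 10 / (alpha * r * ln 10)."""
    return 10.0 / (alpha_db_per_km * r * np.log(10.0))

def rate(ell_km, R0=1.0e6, alpha_db_per_km=0.22, r=1.0):
    """Linear-regime secret key rate R(l) = R0 * eta(l)^r, eta(l) = 10^(-alpha*l/10)."""
    return R0 * (10.0 ** (-alpha_db_per_km * ell_km / 10.0)) ** r

if __name__ == "__main__":
    lam = lambda_qkd()
    print(round(lam, 1))           # 19.7 (km), as quoted above
    print(rate(lam) / rate(0.0))   # ~0.37 = 1/e: the rate drops by a factor e over lambda
\end{verbatim}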
This parameter is important since, as we shall see in the following, the optimal working distance of QKD links will essentially scale as $\lambda_{\textrm{\tiny QKD}}$. Finally, the rapid drop-off of the secret key rate at distances around $D_{\textrm{\tiny drop}}$ arises when the probability to detect a signal sent in the quantum channel, $p_s$, becomes comparable to the probability to detect a dark count per detection time slot, $p_d$. This occurs around the distance $D_{\textrm{\tiny drop}}$, for which we have $p_s \simeq \exp(-D_{\textrm{\tiny drop}}/\lambda_{\textrm{\tiny QKD}}) \times \eta_{d}$, where $\eta_{d}$ represents the detector efficiency. We thus find $D_{\textrm{\tiny drop}} \simeq \lambda_{\textrm{\tiny QKD}} \, \log(\eta_d/p_d)$. In practice, when working with InGaAs single-photon avalanche photodiodes (SPADs) operating at 1550~nm, the ratio $\eta_d/p_d$ is optimized by varying the different external parameters of the detector, such as the temperature, gate voltage or time slot duration. The best published performances for InGaAs SPADs \cite{zbinden:apb98,kosaka:el03} report dark count values of $p_d \simeq 10^{-7} \, \textrm{to} \, 10^{-6}$ for a detection efficiency $\eta_d$ around $10 \%$, which leads to $D_{\textrm{\tiny drop}} \sim D_{\textrm{\tiny max}} \sim 100-120$~km for QKD systems employing such detectors. For a similar detection efficiency, the best available superconducting single-photon detectors (SSPDs) present dark counts $p_d \simeq 10^{-8} \, \textrm{to} \, 10^{-6}$~\cite{korneev:jstqe07}, leading to a maximum distance that can reach 140~km. \subsection{Toy model for QKD network cost derivation: a linear chain between two users} \label{subsec:chain} \noindent \emph{The linear chain as a simple asymptotic model of a quantum backbone network} \noindent As a first example of QKD network cost derivation and optimization, we will consider what we will call the linear chain scenario. In particular, we consider two users, A and B, that want to rely on QKD to exchange secret keys in a scenario that imposes the use of several QKD links: \begin{itemize} \item The two QKD users are \emph{very far away}: their distance is $L = ||AB||$ with $L \gg D_{\textrm{\tiny max}}$. \item The two QKD users are exchanging secret bits at a \emph{very high rate}. We will call $V$ the volume of calls between the two users A and B (units of $V$: secret key bits per second), and will assume $V \gg R_0$. \end{itemize} Because of the first condition, many intermediate nodes have to be used as trusted key relays to ensure key transport over QKD links from A to B. Because of the second condition, many QKD links have to be deployed in parallel to reach a secret key distribution rate capacity at least equal to the traffic volume. The linear chain QKD network scenario is in a sense the simplest situation in which an infrastructure such as a quantum backbone network, described in section~\ref{sec:QKDnetworks}, is required. It therefore provides an interesting toy model for cost optimization and topological considerations.\\ \noindent \emph{Cost model: assumptions and definitions} \noindent The generic purpose of cost optimization is to ensure a given objective in terms of service, at the minimum cost. In the case of the linear chain scenario, this objective is to be able to offer a secret bit rate of $V$~bit/s between two users A and B separated by a distance $L$, while minimizing the cost of the network infrastructure to be deployed.
In this and all subsequent models, we will take as the total cost $\mathcal{C}$ of a QKD network the cost of the equipment to be deployed to build the network. This can be seen as a simplifying assumption, since it is common, in network planning, to differentiate between capital and operating expenditures. We have chosen here to restrict our models to the capital expenditures of QKD networks and will consider that their cost arises from two sources: \begin{itemize} \item The cost of QKD link equipment to be deployed. We will denote as $C_{\textrm{\tiny QKD}}$ the unit cost per QKD link. $C_{\textrm{\tiny QKD}}$ essentially corresponds to the cost of a pair of QKD devices. Note that here we implicitly assume that the deployment of optical fibers is \emph{for free}, or more precisely that it is done independently and prior to the deployment of a QKD network. \item The cost of node equipment, which we denote as $C_{\textrm{\tiny node}}$. $C_{\textrm{\tiny node}}$ typically corresponds to the hardware cost (for example, some specific kind of routers need to be deployed inside QKD nodes), as well as the cost of the security infrastructure that is needed to make a QKD node a trusted and secure location. \end{itemize} As explained before and shown in figure~\ref{fig:1DQKDChain}, a linear chain QKD network is composed of a one-dimensional chain where adjacent QKD nodes are connected by QKD chain segments, each segment being potentially composed of multiple QKD links to ensure that a capacity equal to the traffic volume is reached.\\ \begin{figure} \caption{The one-dimensional QKD chain linking two QKD users, Alice and Bob, over a distance $L$. Since $L$ is considered much longer than the maximum span of a QKD link, $D_{\textrm{\tiny max}}$, intermediate trusted nodes act as key relays, and each chain segment may consist of several QKD links operated in parallel.} \label{fig:1DQKDChain} \end{figure} \noindent \emph{Total cost of the linear chain QKD network} \noindent For convexity reasons, discussed in more detail at the end of this section, the topology ensuring the minimum cost corresponds to placing QKD nodes at regular intervals between A and B. We denote by $\ell$ the distance between two intermediate nodes, which then corresponds to the distance over which QKD links are operated within the linear chain QKD network. As we shall see, the question of cost minimization will reduce to finding the optimum value of the QKD link operating distance, $\ell^{\textrm{\tiny opt}}$, for the linear chain QKD network. There are clearly two antagonistic effects in the dependence of the total cost of the considered network on $\ell$: \begin{itemize} \item On the one hand, if QKD links are operated over long distances, their secret bit capacity $R(\ell)$ decreases. This imposes the deployment of more QKD links in parallel on each chain segment linking two adjacent QKD nodes, and thus tends to increase the total cost. \item On the other hand, it is clear that increasing the operating distance $\ell$ decreases the required number of intermediate trusted relay nodes, which leads to a decreased cost. \end{itemize} The optimum operating distance $\ell^{\textrm{\tiny opt}}$ corresponds to the value of $\ell$ that minimizes the total cost function $\mathcal{C}$: \begin{equation} \mathcal{C} = C_{\textrm{\tiny QKD}} \, \frac{L}{\ell} \, \frac{V}{R(\ell)} + C_{\textrm{\tiny node}} \frac{L}{\ell} \label{eq:C1D} \end{equation} It is important to note that, in the above equation, we have made the assumption that we can neglect the effects of discretisation.
This means that the length of the chain, $L$, can be considered much longer than the length of individual QKD links, $\ell$, and that the traffic volume $V$ can be considered as a continuous quantity, neglecting the discrete jumps associated with variations in the number of calls. \noindent \emph{Cost minimization and optimum working distance of QKD links} \noindent In the asymptotic limit of very high traffic volume $V$, the cost of nodes can be neglected in comparison with the cost of QKD devices. The expression of the total cost in equation~(\ref{eq:C1D}) then reduces to the first term, and we have the following interesting properties: \begin{itemize} \item The total cost is directly proportional to the product of the traffic volume $V$ and the total distance $L$. \item Optimizing the total cost $\mathcal{C}$ is equivalent to minimizing $C(\ell)/\ell$, where $C(\ell) = C_{\textrm{\tiny QKD}}/R(\ell)$ is the cost per unit of secret key rate on one link. \end{itemize} Furthermore, assuming that QKD links are operated in the linear part of their characteristic (see figure~\ref{fig:RateQKDLink}), we can write $C(\ell) = \frac{C_{\textrm{\tiny QKD}}}{R_0} e^{\, \ell/\lambda_{\textrm{\tiny QKD}}}$. Then, the value of $\ell^{\textrm{\tiny opt}}$ that minimizes the quantity $C(\ell)/\ell$ can be explicitly derived as \begin{equation} \ell^{\textrm{\tiny opt}} = \lambda_{\textrm{\tiny QKD}} \; , \end{equation} where $\lambda_{\textrm{\tiny QKD}}$ was defined in section \ref{subsec:qkdlinkrate} as the natural scaling parameter of the function $R(\ell)$. In the general case, the second term of the cost function in equation~(\ref{eq:C1D}), corresponding to the cost of nodes, cannot be neglected. This second term does not depend on the volume of traffic $V$, and is always decreasing with $\ell$. As a consequence, the optimum operating distance that minimizes $\mathcal{C}$ will always be greater than $\lambda_{\textrm{\tiny QKD}}$, the value minimizing the first term in equation~(\ref{eq:C1D}). Under the assumption that the optimum distance remains in the linear part of the function $\log R(\ell)$, we can derive the following implicit relation for $\ell^{\textrm{\tiny opt}}$: \begin{equation} \ell^{\textrm{\tiny opt}} = \lambda_{\textrm{\tiny QKD}} \, \Big( 1 + \frac{C_{\textrm{\tiny node}} }{C_{\textrm{\tiny QKD}}} \, \frac{R_0}{V} e^{\, -\ell^{\textrm{\tiny opt}}/\lambda_{\textrm{\tiny QKD}}} \Big) \label{eq:LoptWithNode} \end{equation} The above equation allows for a quantitative discussion of the ``weight'' of the nodes in the behavior of the cost function. Indeed, we can see that the influence of the node cost is potentially important and can lead to an optimum working distance that is significantly greater than $\lambda_{\textrm{\tiny QKD}}$ when $ \frac{C_{\textrm{\tiny node}} }{C_{\textrm{\tiny QKD}}} \, \frac{R_0}{V} \gg 1$.\\ \noindent \emph{Existence of an optimum working distance and convexity of $C(\ell)$} \noindent In most of the explicit derivations performed in this work, we assume a purely linear dependency of $\log R(\ell)$ on $\ell$. This assumption is convenient but remains an approximation, since it does not take into account the drop-off of $R(\ell)$ occurring around $D_{\textrm{\tiny drop}}$. It is however possible to demonstrate the existence of an optimum working distance for QKD links in a more general case, by relying solely on the assumption that the function $R(\ell)$ is log-concave, \emph{i.e.} that $\log R(\ell)$ is concave.
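Before developing this convexity argument, the behavior of the optimum working distance can be illustrated numerically for the purely exponential model of $R(\ell)$. In the following minimal Python sketch, the cost ratio, the traffic volume and $R_0$ are arbitrary values chosen for illustration only, and the optimum is obtained by directly minimizing equation~(\ref{eq:C1D}) over $\ell$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

LAMBDA = 19.7         # scaling length of the QKD links (km)
R0 = 1.0e6            # zero-distance secret key rate (bit/s), illustrative value

def rate(ell):
    return R0 * np.exp(-ell / LAMBDA)      # linear regime of the R(l) characteristic

def chain_cost(ell, L, V, C_qkd, C_node):
    """Total cost of the linear chain, Eq. (eq:C1D), for a link spacing ell."""
    return C_qkd * (L / ell) * (V / rate(ell)) + C_node * (L / ell)

def optimal_spacing(L, V, C_qkd=1.0, C_node=0.0):
    res = minimize_scalar(chain_cost, bounds=(1e-3, 100.0), method="bounded",
                          args=(L, V, C_qkd, C_node))
    return res.x

if __name__ == "__main__":
    # negligible node cost: the optimum spacing is the scaling length itself
    print(optimal_spacing(L=1000.0, V=1.0e9))              # ~19.7 km
    # comparable node and device costs with V = R0: the optimum moves to longer links
    print(optimal_spacing(L=1000.0, V=1.0e6, C_node=1.0))  # ~25 km
\end{verbatim}
The second case corresponds to solving the implicit relation~(\ref{eq:LoptWithNode}) with $\frac{C_{\textrm{\tiny node}}}{C_{\textrm{\tiny QKD}}}\frac{R_0}{V}=1$, for which $\ell^{\textrm{\tiny opt}}\simeq 1.28\,\lambda_{\textrm{\tiny QKD}}$.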
\noindent \emph{Existence of an optimum working distance and convexity of $C(\ell)$} \noindent In most of the explicit derivations performed in this work, we assume a purely linear dependency of $\log R(\ell)$ on $\ell$. This assumption is convenient but remains an approximation since it does not take into account the drop-off of $R(\ell)$ occurring around $D_{\textrm{\tiny drop}}$. It is however possible to demonstrate the existence of an optimum working distance for QKD links in a more general case, by solely relying on the assumption that the function $R(\ell)$ is log-concave, \emph{i.e.} that $\log R(\ell)$ is concave. The log-concavity of $R(\ell)$ can be checked on a simple model inspired by the secret key rate formula for the BB84 QKD protocol with perfect single photons~\cite{scarani:qp08}. In particular, in this case we have $R(p)= 1 - 2 h(p)$, where $h(p)$ is the binary entropy associated with a quantum bit error rate $p$, and we assume that the dependence of the error rate $p$ on the distance is of the form $p = a + b / \eta(\ell) = a + b\, \mathrm{e}^{\, \ell/\lambda_{\textrm{\tiny QKD}}} $, where $a$ and $b$ are parameters linked to the detection system. In this setup, it is straightforward to verify numerically that $\log R(\ell)$ is concave for all reasonable values of $a$ and $b$. Since $C(\ell)$, the per-unit cost of secret bit rate on a QKD link, is proportional to $1/R(\ell)$, the log-concavity of $R(\ell)$ implies the log-convexity of $C(\ell)$, which itself implies the convexity of $C(\ell)$. Finally, we can write the total cost of the linear chain QKD network as the sum of the cost of each chain segment and the cost of the node equipment, namely $$ \mathcal{C}(\ell_0,\dots,\ell_n) = V \, \sum_{i=0}^{n} C(\ell_i) + n\, C_{\textrm{\tiny node}} \;. $$ In the above equation, $\ell_0$ denotes the distance between A and the first node, $\ell_k$, $k=1,\dots,n-1$, the distance between the $k$th node and the $(k+1)$th node, and $\ell_n$ the distance between the last node and B. For a convex function $C$, the minimization of $\sum_{i=0}^{n} C(\ell_i)$ under the constraint $\sum_{i=0}^{n} \ell_i=L$, where $L$ is the distance between A and B, is obtained with $\ell_i=L/(n+1)$ for all $i$. Once we set $\ell_i=L/(n+1)$, the cost expression in the above equation only depends on $n$, or equivalently on $\ell=L/(n+1)$. For large $L$, we can disregard the fact that $L/\ell$ must be an integer and approximate $(n+1)/n$ by 1, which then leads to equation~(\ref{eq:C1D}). \subsection{Cost of QKD networks: towards more general models} \label{subsec:costgeneral} The linear chain toy model developed in section~\ref{subsec:chain} provides an interesting intuition into the behavior of the cost function. The most important result is that, in the limit of large traffic rates and/or low cost of QKD nodes, the QKD network cost optimization reduces to the minimization of $C(\ell)/\ell \sim 1 / (R(\ell) \ell)$. This leads to the existence of an optimum working distance, $\ell^{\textrm{\tiny opt}}$, at which QKD links need to be operated in order to minimize the global cost of the network deployment. The linear chain QKD network model is however too restrictive in many aspects: it is one-dimensional and limited to the description of a network providing service to two users. We will now consider more general models, which allow us to study the more realistic case of QKD networks spanning a two-dimensional area and providing service to a large number of users.\\ \noindent \emph{Modeling network spatial processes with stochastic geometry} \noindent Stochastic geometry is a very useful mathematical tool for modeling telecommunication networks. It has the advantage of being able to describe the essential spatial characteristics of a network using a small number of parameters~\cite{baccelli:ts97}. It thus allows one to study some general characteristics of a given network, like the behavior of its cost function, under a restricted set of assumptions. This approach fits well with the objectives of this work, and so we have employed stochastic geometry tools to model a QKD backbone network.
As we shall see, instead of calculating the cost of a QKD network for fixed topologies and traffic usage, we will try to understand the general behavior of the cost function by calculating the \emph{average} cost function, where the average will be taken over some probability distributions of spatial processes modeling QKD user and QKD node locations. The collection of spatial locations of the QKD nodes over the plane will be represented by a spatial point process $M=\{ X_i\}$. Then, as illustrated in figure~\ref{fig:Voronoi}, we define a corresponding partition of the plane~\footnote{More accurately, the geometrical object we consider here is a tessellation, the boundaries of which are neglected.} as the ensemble of the convex polygons $\{ D_i\}$, known as the Vorono\"{\i} cells of nucleus $\{ X_i\}$. Each Vorono\"{\i} cell $ D_i$ is constructed by taking the intersection of the half-planes bounded by the bisectors of the segments $[X_i, X_j]$ and containing $ X_i$. The system of all the cells creates the so-called Vorono\"{\i} partition. Finally, we define the Delaunay graph as the graph whose vertices are the $\{ X_i\}$ and whose edges are formed by connecting each Vorono\"{\i} cell nucleus $X_i$ with the nuclei of the adjacent Vorono\"{\i} cells.\\ \begin{figure} \caption{Thick black lines: Vorono\"{\i} cells associated with the backbone nodes (the cell nuclei); the edges of the corresponding Delaunay graph connect the nuclei of adjacent cells.} \label{fig:Voronoi} \end{figure} \noindent \emph{User distribution and traffic} \noindent In the remainder of this paper, and in contrast to the linear chain toy model developed in section~\ref{subsec:chain}, we will consider QKD networks providing secret key distribution service to a large number of users, distributed over a two-dimensional area. The user distribution will be modeled by a Poisson stochastic point process, $\Pi=\{U_i\}$, defined over the support $D$ of size $L \times L$, while the average number of QKD users will be denoted by $\mu$. The point process $\Pi$ will also be assumed to have an intensity density $f$ satisfying $\mu=\int f <\infty$, which means that for every set $E$ the number of users within $E$ is a Poisson random variable with mean $\int_E f$. Finally, whenever this additional assumption proves useful for performing the desired calculations, we will consider that the distribution of users is homogeneous over $D$, \emph{i.e.} that the intensity function $f$ is constant over $D$. We will denote this constant user density by $1/\alpha_{\textrm{\tiny u}}^2$ so that $\alpha_{\textrm{\tiny u}}$ corresponds to a distance (it can be shown that for large $L$, $\alpha_{\textrm{\tiny u}}/2$ is the average distance between the origin and the point $U_i$ closest to the origin). We will have in this case: \begin{equation} \label{eq:mudef} \mu=\int f = \left(L/\alpha_{\textrm{\tiny u}}\right)^2 \; . \end{equation} For the traffic model, we will generalize the assumption made for the linear chain QKD network model: the traffic between any pair of QKD users will be seen as an aggregate volume of calls (expressed in units of secret key exchange rate). The volume of traffic will be assumed to be the same between any pair of users, and will be denoted by $V$.\\ \noindent \emph{QKD networks with or without a hierarchical architecture} \noindent As was discussed in section~\ref{sec:QKDnetworks}, it is interesting to study to what extent deploying a structure such as a backbone, which is synonymous with the existence of hierarchy in a network, would be advantageous in the case of QKD networks.
To this end, continuing to place ourselves in the perspective of cost optimization, we will derive cost functions for QKD network models with or without a quantum backbone. The obtained results will then allow us to establish comparisons and thus discuss the interest of hierarchy in quantum networks. \subsection{Cost function for a two-dimensional network without backbone: the generalized QKD chain model} \label{subsec:2Dchain} A direct way to generalize the two-user one-dimensional chain model presented in section~\ref{subsec:chain} is simply to assume that a chain of QKD links and intermediate nodes will be deployed between each pair of users $u$ and $v$ within the QKD network. Each chain will therefore be dimensioned in order to accommodate a volume $V$ of calls. The routing of calls is trivial on such a network. The distance between the intermediate nodes on a chain will be denoted by $\ell$, as in section~\ref{subsec:chain}. Here as well, we neglect the effects of discretisation, \emph{i.e.} the length of the chains, $||u-v||$, will be considered much longer than the length of individual QKD links, $\ell$, and the traffic volume $V$ will be considered a continuous quantity. Under these assumptions, we know that the cost associated with a pair of users located respectively at positions $u$ and $v$ and exchanging a volume $V$ of calls is (see equation~(\ref{eq:C1D})) \begin{equation} \mathcal{C}^{\textrm{\tiny pair}}(u,v) = V \, ||u-v|| \, C(\ell)/\ell \,+ \, (||u-v||/\ell) C_{\textrm{\tiny node}} \label{eq:Cchain} \end{equation} Recall that the distribution of users is described by a Poisson point process $\Pi=\{U_i\}$. Then, we can calculate the average total cost of the QKD network, $\mathcal{C}$, by summing up the costs $\mathcal{C}^{\textrm{\tiny pair}}(U_k,U_l)$ associated with the QKD chains deployed between each pair of users over $k\neq l$ and then averaging this sum over the stochastic user point process $\Pi$: \begin{eqnarray} \label{eq:Cchaintotal} \mathcal{C} & = \mathbb E \left[\sum_{k \neq l} \mathcal{C}^{\textrm{\tiny pair}}(U_k, U_l)\right] \nonumber \\ & = \mathbb E \left[\sum_{k \neq l} V \, ||U_k-U_l|| \, C(\ell)/\ell \,+ \, (||U_k-U_l||/\ell)\, C_{\textrm{\tiny node}}\right] \nonumber \\ & = ( V \, C(\ell)/\ell \, + C_{\textrm{\tiny node}}/\ell ) \,\delta \;, \end{eqnarray} where $\delta$ is the average sum of distances over all pairs of two different users, namely \begin{equation} \label{eq:deltaDef} \delta=\mathbb E \left[\sum_{k \neq l} ||U_k-U_l||\right] \;. \end{equation} For a homogeneous Poisson point process $\Pi$ with spatial density of users $\alpha_{\textrm{\tiny u}}^{-2}$ over a square domain $D$ of size $L\times L$, it is possible to perform the exact integral calculation of $\delta$, yielding \begin{equation} \label{eq:deltaVal} \delta=\gamma \,L^5/\alpha_{\textrm{\tiny u}}^{4}\quad\textrm{with}\quad\gamma = \frac{1}{3} \log ( 1+ \sqrt2) + \frac{2 + \sqrt2}{15} \simeq 0.5214\;. \end{equation}
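The constant $\gamma$ coincides with the mean distance between two points drawn independently and uniformly in a unit square, which makes equation~(\ref{eq:deltaVal}) easy to cross-check numerically. A short Monte Carlo sketch in Python:
\begin{verbatim}
# Monte Carlo check of the constant gamma appearing in
# delta = gamma * L^5 / alpha_u^4: gamma is the mean distance between
# two independent uniform points in the unit square.
import math, random

random.seed(0)
n_samples = 10**6
acc = 0.0
for _ in range(n_samples):
    x1, y1, x2, y2 = (random.random() for _ in range(4))
    acc += math.hypot(x1 - x2, y1 - y2)

gamma_exact = math.log(1 + math.sqrt(2)) / 3 + (2 + math.sqrt(2)) / 15
print(acc / n_samples, gamma_exact)   # both close to 0.5214
\end{verbatim}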
\subsection{Cost function for a two-dimensional QKD network with backbone} \label{subsec:cost2Dbackbone} The backbone architectures we will consider in this work are \emph{topological}: for a given distribution of QKD nodes, which will be either deterministic (section~\ref{subsec:square}) or stochastic (section~\ref{subsec:stochasticQBB}), the backbone cells and backbone links will strictly coincide with the Vorono\"{\i} cells and the edges of the corresponding Delaunay graph defined above, respectively.\\ \noindent \emph{Routing traffic over a QKD backbone network} \noindent The hierarchical structure of the backbone provides a convenient way to solve the routing problem; we have adopted the following scheme in our cost calculations. For a given origin-destination pair of users (A,B) wishing to exchange a volume of calls $V_{AB}$, the traffic is routed in the following way: \begin{itemize} \item The traffic goes from A to its nearest QKD backbone node $X_A$ (center of the backbone cell containing A), through a single QKD link (an access link). \item The traffic is routed through the {\bf optimal (least costly) path} over the backbone QKD network from $X_A$ to $X_B$ (the QKD node closest to B). \item The traffic goes from $X_B$ to B. \end{itemize} The routing rule defined above can be characterized as \emph{geographical}, in the sense that it is driven by distance considerations. However, determining the optimal path in a given backbone network of arbitrary topology may not be a tractable problem. Even in standard networks, where the optimal path is the shortest one, an analytic computation of the average length/cost is not always possible. In the context of backbone nodes distributed as a Poisson point process, an alternative suboptimal routing policy, the so-called \emph{Markov path}, has been proposed, and leads to an analytic computation of the average path length. In QKD networks, the cost is a non-linear function of the length and some adjustments are required. We consider two different geometries for the backbone: \begin{enumerate} \item A square backbone QKD network (section~\ref{subsec:square}), \emph{i.e.} a regular structure where nodes and links form a regular graph of degree 4. In this case finding the length of the shortest path between two nodes is trivial: backbone nodes $X_A$, $X_B$ can be designated by Cartesian coordinates $(x_A, y_A)$, $(x_B, y_B)$ and the shortest path length is simply $|x_A - x_B| + |y_A - y_B| $. Moreover, cost calculations are simplified using the fact that the links between two neighboring nodes of the backbone all have the same length. \item A stochastic backbone network (section~\ref{subsec:stochasticQBB}), where backbone nodes are distributed following a random point process and backbone cells are the corresponding Vorono\"{\i} partition. For this stochastic backbone, we have used a routing technique called \emph{Markov-path routing} for which, as previously established by Tchoumatchenko \emph{et al.}~\cite{tchoumatchenko:phd99,baccelli:aap00}, the average length of routes can be calculated. In the following, we will adapt these calculations to our cost function $C(\ell)$.\\ \end{enumerate} \noindent \emph{Generic derivation of the cost function for QKD backbone networks} \noindent For a QKD network with a backbone structure, we define $M=\{X_i\}$ as the point process of the network node distribution, and $\Pi=\{U_i\}$ as the point process of the network user distribution, with intensity density $f$.
Each node $X_i$ is connected to some nodes in its neighborhood and to the clients belonging to the associated cell $D_i$. In the following, we will assume that $M$ is statistically independent of $\Pi$, and that the cells $D_i$ are the Vorono\"{\i} cells associated with $M$, that is \begin{equation} \label{eq:voronoiCell} D_i= \left\{x\;:\;\|x-X_i\|\leq \inf_{j\neq i}\|x-X_j\|\right\}\;. \end{equation} In the case of the QKD backbone network, our routing policy allows us to calculate $C^{\textrm{\tiny pair}}(u,v;M)$, the QKD equipment cost associated with sending one unit of call between users $u$ and $v$, over a network whose backbone nodes are described by the point process $M$: \begin{equation*} C^{\textrm{\tiny pair}}(u,v;M) = \left\{ \begin{array}{ll} C(\|u-X_i\|)+C(\|v-X_i\|) \\ \;\;\;\;\;\;\;\;\textrm{ if } u,v\in D_i \\ C(\|u-X_i\|)+C(\|v-X_j\|)+ C^{\textrm{\tiny hop}}(i,j;M) \\ \;\;\;\;\;\;\;\;\textrm{ if } u\in D_i\textrm{ and }v\in D_j\textrm{ with } i\neq j \;, \end{array} \right. \end{equation*} where $C(\ell)$ is the cost of sending a secret bit over a QKD link of length $\ell$ and $C^{\textrm{\tiny hop}}(i,j;M)$ is the cost to send a secret bit between the nodes $X_i$ and $X_j$ of the backbone for the given routing policy. Given that the traffic volume between each pair of users is $V$, the average total cost $\mathcal{C}$ of the QKD network then reads \begin{equation*} \mathcal{C} = \mathcal{C}^{\textrm{\tiny QKD}} + \mathcal{C}^{\textrm{\tiny node}} = V \times \mathbb E \left[\sum_{k\neq l} C^{\textrm{\tiny pair}}(U_k,U_l;M)\right] + C_{\textrm{\tiny node}}\,N^2 \;, \end{equation*} where $N^2$ is the average number of nodes of the backbone deployed in the domain $D$ of size $L\times L$. Here $\mathbb E$ denotes the average cost over the spatial distributions of users and backbone nodes, that is over the realizations of $\Pi$ and $M$. Since $M$ and $\Pi$ are assumed to be independently distributed, we may compute this average successively with respect to $M$ and $\Pi$. The total cost, averaged only over $\Pi$, can be decomposed as follows: \begin{eqnarray*} \fl \mathbb E \left[\sum_{k\neq l} C^{\textrm{\tiny pair}}(U_k,U_l;M) \right] & = \int C^{\textrm{\tiny pair}}(u,v;M) \, f(u)\,f(v) \,du\,dv \\ & = \sum_k \int_{D_k\times D_k} \left\{C(\|u-X_k\|)+C(\|v-X_k\|)\right\}\, f(u)\,f(v) \,du\,dv \\ & \hspace{0.1cm} + \sum_{k\neq l}\int_{D_k\times D_l} \left\{C(\|u-X_k\|)+C(\|v-X_l\|)+ C^{\textrm{\tiny hop}}(k,l;M)\right\}\,f(u)\,f(v) \,du\,dv \\ & = \sum_k\sum_l \int_{D_k\times D_l} \left\{C(\|u-X_k\|)+C(\|v-X_l\|)\right\}\, f(u)\,f(v) \,du\,dv \\ & \hspace{0.1cm} + \sum_{k\neq l}\int_{D_k\times D_l} C^{\textrm{\tiny hop}}(k,l;M)\,f(u)\,f(v) \,du\,dv \end{eqnarray*} As we can see from the last expression, the total cost $\mathcal{C}$ can be separated into three terms: \begin{equation} \label{eq:totalCost} \mathcal{C} =: C^{\textrm{\tiny loc}} +C^{\textrm{\tiny bb}} + \mathcal{C}^{\textrm{\tiny node}}\;, \end{equation} where $C^{\textrm{\tiny loc}}$ takes into account all connections from one client to the closest backbone node, $C^{\textrm{\tiny bb}}$ all connections from one backbone node to another, and $\mathcal{C}^{\textrm{\tiny node}}$ is the cost of node equipment. The explicit models that we will study will allow us to compare the behavior of these different terms and thus to understand how QKD network backbone topologies can be optimized.
\subsection{Cost calculations for two explicit quantum backbone models} \label{subsec:costcalc} \subsubsection{Cost of the square backbone QKD network} \label{subsec:square} \paragraph{Network model:} We consider, as a first simple example, the case of a QKD backbone network that has a perfectly regular topology, and for which the shortest path length between two backbone nodes is easily determined. The architecture we consider is the following: users are distributed as previously over a large area $D$ of size $L\times L$ and the backbone QKD network is a regular graph of degree 4, \emph{i.e.} the backbone QKD nodes and links constitute a square network. The structure of the square backbone QKD network and the way a call is routed are summarized in figure~\ref{fig:SquareBB}. The free parameter with respect to which we will perform the cost optimization is the size of the backbone cells, $\alpha_{\textrm{\tiny bb}}$. We will also make the assumption that the user density function $f$ is uniform over $D$. \begin{figure} \caption{Structure of a two-dimensional regular square backbone network: a regular array of cells of dimension $\alpha_{\textrm{\tiny bb}}\times\alpha_{\textrm{\tiny bb}}$, with a backbone node at the center of each cell.} \label{fig:SquareBB} \end{figure} \paragraph{Computation of $C^{\textrm{\tiny bb}} $ for the square network:} We set $X_k=k\alpha_{\textrm{\tiny bb}}$ and $D_k=X_k+\alpha_{\textrm{\tiny bb}}[-1/2,1/2]^2$ with $k\in\mathbb Z^2$ and, for all $k\neq l$, \begin{equation*} C^{\textrm{\tiny hop}}(k,l;M) = \|k-l\|_1\, C(\alpha_{\textrm{\tiny bb}}) \; . \end{equation*} Here, $\|k-l\|_1$ corresponds to the number of hops between $X_k$ and $X_l$ and $C(\alpha_{\textrm{\tiny bb}})$ to the per-bit cost of one hop. Calling $\mu_i$ the average number of QKD users in a backbone cell $i$, we have: \begin{equation} \label{eq:Ibb} C^{\textrm{\tiny bb}} = V \sum_{k\neq l}\mu_k\mu_l \, C^{\textrm{\tiny hop}}(k,l;M) \end{equation} Hence, \begin{equation*} C^{\textrm{\tiny bb}} = V C(\alpha_{\textrm{\tiny bb}}) \, \boldsymbol{\mu}^T\Gamma\boldsymbol{\mu} \;, \end{equation*} where $\boldsymbol{\mu}$ is the column vector with entries $\mu_k$, $k\in\mathbb Z^2$, and $\Gamma$ is the Toeplitz array indexed on $\mathbb Z^2$ with entries $\Gamma_{k,l}=\|k-l\|_1$. Since the density of users $f$ is constant and equal to $\sigma$ on its support $D$, where $D:=\bigcup_{k\in\{0,\dots,N-1\}^2} D_k$, $\mu_k$ is the same for all cells $D_k$: $\mu_k= \mu/ N^2$, with $N^2$ denoting the total number of backbone cells, and $\mu=(L/\alpha_{\textrm{\tiny u}})^2$ the mean number of users over $D$ (see equation~(\ref{eq:mudef})). Hence, we find \begin{equation*} C^{\textrm{\tiny bb}}= V C(\alpha_{\textrm{\tiny bb}})\, \mu^2/N^{4}\,\sum_{k,l\in\{0,\dots,N-1\}^2}\|k-l\|_1 \; . \end{equation*} Now, we compute \begin{eqnarray*} \sum_{k,l\in\{0,\dots,N-1\}^2}\|k-l\|_1 & = & \sum_{k_1,l_1=0}^{N-1}\sum_{k_2,l_2=0}^{N-1}\sum_{i=1}^2 |k_i-l_i| \\ & = & 2 \sum_{k_1,l_1=0}^{N-1}\sum_{k_2,l_2=0}^{N-1} |k_1-l_1| = 2 \,N^{2}\,\sum_{k,l=0}^{N-1} |k-l|\\ & = & 4 \,N^{2}\,\sum_{k=0}^{N-1} \sum_{l<k} |k-l| \\ & \sim & \frac{2}3 \,N^{5}\;, \end{eqnarray*} where the asymptotic equivalence holds as $N\to\infty$.
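The asymptotic factor $2/3$ is easy to confirm numerically; the short Python sketch below evaluates the Manhattan-distance sum exactly for a few values of $N$ and compares it with $\frac{2}{3}N^5$.
\begin{verbatim}
# Check that sum_{k,l in {0..N-1}^2} ||k-l||_1 ~ (2/3) N^5 as N grows.
def manhattan_pair_sum(n):
    one_d = sum(abs(k - l) for k in range(n) for l in range(n))
    return 2 * n * n * one_d   # the 2D sum reduces to 2 N^2 times the 1D sum

for n in (10, 50, 200):
    print(n, manhattan_pair_sum(n) / n**5)   # tends to 2/3 ~ 0.6667
\end{verbatim}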
Using $N\sim L/\alpha_{\textrm{\tiny bb}}$ and equation~(\ref{eq:mudef}), we obtain, as $N\to\infty$, \begin{equation} \label{eq:CbbSquare} C^{\textrm{\tiny bb}}\sim V \, \frac{\mu^2}{N^4} C(\alpha_{\textrm{\tiny bb}}) \, \frac{2}3 \,N^{5} = \frac 23 \, \frac{C(\alpha_{\textrm{\tiny bb}})}{\alpha_{\textrm{\tiny bb}}\,\alpha_{\textrm{\tiny u}}^{4}} \, L^5 \,V = \frac 23 \, \frac{C(\alpha_{\textrm{\tiny bb}})}{\alpha_{\textrm{\tiny bb}}} \, \mu^2 V \,L \;. \end{equation} In the latter expression, we have four multiplicative terms: \begin{enumerate} \item $2/3$, a constant depending only on the dimension and the geometry of the backbone network (for a cube of dimension $d$, we could generalize our calculation and would find $d/3$); \item $C(\alpha_{\textrm{\tiny bb}})/\alpha_{\textrm{\tiny bb}}$, a cost function depending only on the distance $\alpha_{\textrm{\tiny bb}}$ between the nodes of the backbone; \item $\mu^2 \, V$, the square of the mean number of users times the volume of calls per pair of users, \emph{i.e.} in our communication model, the total volume of the communications over which the total cost is computed; \item $L$, the size of the support of $f$, that is, of the domain where the users lie. \end{enumerate} To better understand the derived expression for $C^{\textrm{\tiny bb}}$, it is interesting to compare it with $ C^{\textrm{\tiny loc}} $ and $\mathcal{C}^{\textrm{\tiny node}}$. Indeed, we can show that $C^{\textrm{\tiny loc}} \simeq V\,\mu^2 \,\overline{C}$, where $\overline{C}$ stands for the per-bit cost function $C$ averaged over one cell. In the case of the square network with $\alpha_{\textrm{\tiny bb}}\times\alpha_{\textrm{\tiny bb}}$ square cells, these cells are contained between two circles of radius $\alpha_{\textrm{\tiny bb}}/2$ and $ \alpha_{\textrm{\tiny bb}} \, \sqrt{2}/2 < \alpha_{\textrm{\tiny bb}}$. Since $C$ is an increasing function of distance we have $\overline{C} < C(\alpha_{\textrm{\tiny bb}})$, and we can thus derive the following important property: {\bf In the limit of large networks, \emph{i.e.} for $L \gg \alpha_{\textrm{\tiny bb}}$, the backbone cost is dominant over the local cost.} We will see in the following section that this property is preserved for a backbone with randomly positioned nodes and an appropriate routing policy. Furthermore, we will see that for large $L$, the backbone node equipment cost $\mathcal{C}^{\textrm{\tiny node}}$ is negligible. Therefore, to optimize the cost~(equation~(\ref{eq:totalCost})), we only need to minimize $C^{\textrm{\tiny bb}}$. Assuming a square regular backbone, this means choosing $\alpha_{\textrm{\tiny bb}}$ so as to minimize $C(\alpha_{\textrm{\tiny bb}})/\alpha_{\textrm{\tiny bb}}$, exactly as in the case of the linear chain QKD network model of section~\ref{subsec:chain}. Hence, if we take $C(\ell) = \frac{C_{\textrm{\tiny QKD}}}{R_0} e^{\, \ell/\lambda_{\textrm{\tiny QKD}}}$, the cost is minimized for \begin{equation} \label{eq:alphaOptSquareBB} \alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}} = \lambda_{\textrm{\tiny QKD}}\;. \end{equation} \subsubsection{Cost calculation for a stochastic QBB with Markov-path routing} \label{subsec:stochasticQBB} \paragraph{} We now compute $C^{\textrm{\tiny loc}}$ and $C^{\textrm{\tiny bb}}$ in the case where the routing policy is the so-called Markov path, as proposed in~\cite{baccelli:aap00}, where some general formulae are given for computing average costs in a general framework (see also \cite{tchoumatchenko:phd99}).
The routing policy is defined as follows. First, all pairs of nodes whose cells share a common edge are connected. The corresponding graph is a Delaunay graph. Next, given two users A and B with respective positions $u$ and $v$, we define a finite sequence of nodes $X_{k_0},X_{k_1},\dots,X_{k_n}$, namely the nuclei of the successive cells encountered when drawing a line from $u$ to $v$. This routing policy is illustrated in figure~\ref{fig:Voronoi}. By definition, $X_{k_0}$ and $X_{k_n}$ are the centers of the cells containing $u$ and $v$ respectively and \begin{eqnarray} \label{eq:ClocStationaryM} C^{\textrm{\tiny loc}} & = & V \times\int_{D\times D}\mathbb E\left[C(\|u-X_{k_0}\|)+C(\|v-X_{k_n}\|)\right]\,f(u)\,f(v) \,du\,dv \nonumber \\ & = & V\;\mu^2\;\kappa^{\textrm{\tiny loc}}\;, \end{eqnarray} where $\mu:=\int f$ is the average total number of users and, by stationarity of the point process $M$, \begin{equation*} \kappa^{\textrm{\tiny loc}}= \mathbb E\left[C(\|u-X_{k_0}\|)\right]+ \mathbb E\left[C(\|v-X_{k_n}\|)\right]= 2\;\mathbb E\left[C(\|X_0\|)\right] \end{equation*} with $X_0$ defined as the center of the cell containing the origin. Note that $\kappa^{\textrm{\tiny loc}}$ denotes the average local cost per secret bit and per pair of users. If $M$ is a Poisson point process with intensity $\alpha_{\textrm{\tiny bb}}^{-2}$, we further have \begin{equation*} \mathbb P(\|X_0\| > t)=\mathbb P(\#\{X_k\;:\;\|X_k\|\leq t\}=0)=\exp(-\pi t^2\alpha_{\textrm{\tiny bb}}^{-2})\;, \end{equation*} and hence \begin{equation} \label{eq:kappalocHomPoissonM} \fl \;\;\;\; \kappa^{\textrm{\tiny loc}} = 4\pi\alpha_{\textrm{\tiny bb}}^{-2} \; \int_{\mathbb R_+} C(t) \; t \; \exp(-\pi t^2\alpha_{\textrm{\tiny bb}}^{-2}) dt = 4\pi \; \int_{\mathbb R_+} C(\alpha_{\textrm{\tiny bb}} u) \; u \; \exp(-\pi u^2) du \; . \end{equation} For $C^{\textrm{\tiny bb}}$, we can write \begin{equation*} C^{\textrm{\tiny bb}}=V\times\int_{D\times D}\mathbb E\left[\sum_{i=1}^n C(\|X_{k_{i}}-X_{k_{i-1}}\|)\right]\,f(u)\,f(v) \,du\,dv\;. \end{equation*} Applying~\cite[Theorem~2]{baccelli:aap00} or the results (in particular Theorem~2.41 and Remark~2.4.2) in section~2.4 of \cite{tchoumatchenko:phd99} (as done in Corollaries~2.5.1 and~2.5.2 in~\cite{tchoumatchenko:phd99}), we obtain \begin{equation*} \mathbb E\left[\sum_{i=1}^n C(\|X_{k_{i}}-X_{k_{i-1}}\|)\right]= \kappa^{\textrm{\tiny bb}}\, \|u-v\| \, , \end{equation*} where \begin{equation} \label{eq:kappaBBHomPoissonM} \fl \kappa^{\textrm{\tiny bb}} := 2\alpha_{\textrm{\tiny bb}}^{-1} \int_{(r,\psi,\phi)\in\mathcal{A}} C\left(2\alpha_{\textrm{\tiny bb}} r\sin(\{\psi-\phi\}/2)\right) \,\{\cos(\phi)-\cos(\psi)\}\,r^2\,\mathrm{e}^{-\pi\,r^2}\,d\psi\,d\phi\,dr \; , \end{equation} and $\mathcal{A}=\mathbb R_+\times\{(\psi,\phi):\,0<|\phi|\leq\psi<\pi\}$. Finally we find that \begin{equation} \label{eq:CBBStationaryM} C^{\textrm{\tiny bb}}=V\,\kappa^{\textrm{\tiny bb}}\, \delta\;, \end{equation} where $\delta$ is the average total distance between two different users defined in equation~(\ref{eq:deltaDef}) and computed in equation~(\ref{eq:deltaVal}), and $\kappa^{\textrm{\tiny bb}}$ denotes the average backbone cost per secret bit and per unit length of the distance separating a pair of users.
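In practice, the constants $\kappa^{\textrm{\tiny loc}}$ and $\kappa^{\textrm{\tiny bb}}$ can be obtained by direct numerical quadrature of equations~(\ref{eq:kappalocHomPoissonM}) and~(\ref{eq:kappaBBHomPoissonM}). The Python sketch below does so for the exponential cost $C(\ell)=\frac{C_{\textrm{\tiny QKD}}}{R_0}e^{\ell/\lambda_{\textrm{\tiny QKD}}}$, using SciPy quadrature and purely illustrative parameter values.
\begin{verbatim}
# Numerical evaluation of kappa_loc and kappa_bb for the exponential per-bit
# cost C(l) = (C_QKD/R0)*exp(l/lambda_qkd).  Parameter values are
# illustrative only.
import numpy as np
from scipy.integrate import quad, tplquad

C_QKD, R0, LAM = 1.0, 1.0, 20.0   # cost and rate units, lambda_qkd in km
ALPHA_BB = 25.0                   # backbone node spacing parameter, in km

def cost(length):
    return (C_QKD / R0) * np.exp(length / LAM)

# kappa_loc = 4*pi * int_0^inf C(alpha_bb*u) * u * exp(-pi*u^2) du
kappa_loc, _ = quad(lambda u: 4 * np.pi * cost(ALPHA_BB * u) * u
                    * np.exp(-np.pi * u**2), 0, np.inf)

# kappa_bb = (2/alpha_bb) * integral over {r>0, 0<|phi|<=psi<pi} of
#   C(2*alpha_bb*r*sin((psi-phi)/2))*(cos(phi)-cos(psi))*r^2*exp(-pi*r^2)
def integrand(r, phi, psi):       # innermost variable first (tplquad order)
    return (cost(2 * ALPHA_BB * r * np.sin((psi - phi) / 2))
            * (np.cos(phi) - np.cos(psi)) * r**2 * np.exp(-np.pi * r**2))

val, _ = tplquad(integrand,
                 0, np.pi,                            # psi
                 lambda psi: -psi, lambda psi: psi,   # phi in (-psi, psi)
                 lambda psi, phi: 0, lambda psi, phi: np.inf)  # r in (0, inf)
kappa_bb = 2.0 / ALPHA_BB * val

print(kappa_loc, kappa_bb)
\end{verbatim}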
From equations~(\ref{eq:totalCost}),~(\ref{eq:ClocStationaryM}) and~(\ref{eq:CBBStationaryM}), and observing that here the average total number of backbone cells is $N^2=(L/\alpha_{\textrm{\tiny bb}})^2$, we find \begin{equation} \label{eq:totalCostStationary} \mathcal{C} = C^{\textrm{\tiny loc}} +C^{\textrm{\tiny bb}} + \mathcal{C}^{\textrm{\tiny node}}=V\times\left[\mu^2\kappa^{\textrm{\tiny loc}}+\delta\kappa^{\textrm{\tiny bb}}\right] + C_{\textrm{\tiny node}}(L/\alpha_{\textrm{\tiny bb}})^{2} \;, \end{equation} where $\mu^2$ and $\delta$ are related to the spatial distribution of the users, and $\kappa^{\textrm{\tiny loc}}$ and $\kappa^{\textrm{\tiny bb}}$ are constants related to the geometry of the backbone and to the routing policy. For users uniformly distributed in a square of side length $L$ with intensity $\alpha_{\textrm{\tiny u}}^{-2}$, we have $\mu^2\simeq (L/\alpha_{\textrm{\tiny u}})^4$ and $\delta\simeq L^5/\alpha_{\textrm{\tiny u}}^4$. Using~(\ref{eq:kappalocHomPoissonM}),~(\ref{eq:kappaBBHomPoissonM}),~(\ref{eq:totalCostStationary}) and the above approximations of $\mu^2$ and $\delta$, we see that the total cost $\mathcal{C}$ only depends on $L$, $\alpha_{\textrm{\tiny u}}$ and $\alpha_{\textrm{\tiny bb}}$. Now, for given $\alpha_{\textrm{\tiny u}}$ and $L$, we take $\alpha_{\textrm{\tiny bb}}$ so that $\mathcal{C}$ is minimized and examine which term in the right-hand side of~(\ref{eq:totalCostStationary}) dominates the total cost $\mathcal{C}$ as $L\to\infty$ in this context. To this end, we first study each term separately. We let $c$ denote a constant not depending on $L$ and $\alpha_{\textrm{\tiny bb}}$ in the following reasoning. Observe that since $C$ is convex and increasing, $C(\ell)\geq c\times \ell$. Using this in~(\ref{eq:kappalocHomPoissonM}) and in~(\ref{eq:kappaBBHomPoissonM}), we get $C^{\textrm{\tiny loc}}\geq c\,\alpha_{\textrm{\tiny bb}} L^4$ and $C^{\textrm{\tiny bb}}\geq c\, L^5$, respectively. Concerning the last term, we have $\mathcal{C}^{\textrm{\tiny node}}\approx c\, L^2/\alpha_{\textrm{\tiny bb}}^2$. It follows that at fixed $L$, $C^{\textrm{\tiny loc}}\to\infty$ as $\alpha_{\textrm{\tiny bb}}\to\infty$ and $\mathcal{C}^{\textrm{\tiny node}}\to\infty$ as $\alpha_{\textrm{\tiny bb}}\to0$, from which we can deduce that the optimal $\alpha_{\textrm{\tiny bb}}$ stays away from 0 and $\infty$. Now, clearly, if $\alpha_{\textrm{\tiny bb}}$ stays away from 0 and $\infty$, the above bounds show that $C^{\textrm{\tiny bb}}$ dominates as $L\to\infty$. Hence, for large $L$, the optimal value of $\alpha_{\textrm{\tiny bb}}$ is the one that minimizes $C^{\textrm{\tiny bb}}$ or, equivalently, $\kappa^{\textrm{\tiny bb}}$. To find this optimal value, the following result is useful for an exponential cost $C(\ell) = \frac{C_{\textrm{\tiny QKD}}}{R_0} e^{\, \ell/\lambda_{\textrm{\tiny QKD}}}$: \begin{lem} \label{lem:kappaLocComp} Define $\kappa^{\textrm{\tiny bb}}$ as in equation~(\ref{eq:kappaBBHomPoissonM}) with $C(\ell) = \frac{C_{\textrm{\tiny QKD}}}{R_0} e^{\, \ell/\lambda_{\textrm{\tiny QKD}}}$.
Then the following analytical formula holds \begin{equation*} \kappa^{\textrm{\tiny bb}}=C_{\textrm{\tiny QKD}} R_0^{-1} \lambda_{\textrm{\tiny QKD}}^{-1} \frac4\pi\left[ \mathrm{e}^{\alpha_{\textrm{\tiny bb}}^2/(\pi \lambda_{\textrm{\tiny QKD}}^2)}\{1+\mathrm{erf}(\alpha_{\textrm{\tiny bb}}/(\sqrt{\pi}\lambda_{\textrm{\tiny QKD}}))\}+ \lambda_{\textrm{\tiny QKD}}/\alpha_{\textrm{\tiny bb}}\right]\;, \end{equation*} where \begin{equation*} \mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x\mathrm{e}^{-t^2}\,dt\;. \end{equation*} \end{lem} \noindent \emph{Proof.} Let $s= \lambda_{\textrm{\tiny QKD}}/\alpha_{\textrm{\tiny bb}}$. We have \begin{eqnarray*} & & \int_{(r,\psi,\phi)\in\mathcal{A}} \exp\left(2s^{-1} r\sin(\{\psi-\phi\}/2)\right) \,\{\cos(\phi)-\cos(\psi)\}\,r^2\,\mathrm{e}^{-\pi\,r^2}\,d\psi\,d\phi\,dr \\ & & = 8\int_{v=0}^{\pi/2}\int_{r=0}^{\infty}\exp(2s^{-1}r\sin(v)-\pi r^2)\,r^2\,\sin(v)\,dv\,dr . \end{eqnarray*} Integrating with respect to $r$ yields \begin{eqnarray*} \fl \kappa^{\textrm{\tiny bb}} & = C_{\textrm{\tiny QKD}} R_0^{-1} \lambda_{\textrm{\tiny QKD}}^{-1} \\ \fl & \times\left[\frac2\pi+\frac{4s}{\pi} \int_{v=0}^{\pi/2}\sin(v)\{1+2\sin^2(v)/(\pi s^2)\}\exp(\sin^2(v)/(\pi s^2))\{1+\mathrm{erf}(\sin(v)/(\sqrt{\pi}s))\}\,dv\right]\;. \end{eqnarray*} Further computations yield \begin{equation*} \kappa^{\textrm{\tiny bb}}=C_{\textrm{\tiny QKD}} R_0^{-1} \lambda_{\textrm{\tiny QKD}}^{-1} \frac4\pi\left[ \mathrm{e}^{1/(\pi s^2)}\{1+\mathrm{erf}(1/(s\sqrt{\pi}))\}+s\right]\;, \end{equation*} which is the desired expression.\\ Using Lemma~\ref{lem:kappaLocComp}, the $\alpha_{\textrm{\tiny bb}}$ minimizing $\kappa^{\textrm{\tiny bb}}$, denoted as $\alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}}$ below, can easily be calculated using a numerical procedure. We find \begin{equation} \alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}} \approx 1.2490 \, \lambda_{\textrm{\tiny QKD}} \;. \end{equation} This result should be compared with the result of equation~(\ref{eq:alphaOptSquareBB}), where the backbone geometry is deterministic and also characterized by the node intensity $1/ \alpha_{\textrm{\tiny bb}}^2$. The two results show that the choice of the backbone geometry and routing policy does influence the optimal node intensity, albeit in a modest way. \subsection{From cost optimization results to QKD network planning} \label{subsec:QKDnetplanning} \noindent \emph{Matching QKD network topology with the optimum working distance of QKD links} \noindent The calculations in sections~\ref{subsec:square} and \ref{subsec:stochasticQBB} point to one common result: it appears that, for large networks, the costs associated with the QKD devices that have to be deployed in backbone nodes to serve the demand are always dominant over the local costs, associated with the end connections between QKD users and backbone nodes. Moreover, the optimization of backbone costs indicates that the minimum cost will be reached when the typical distance between backbone nodes is of the order of $\lambda_{\textrm{\tiny QKD}}$, the scaling parameter of the curve $R(\ell)$. These results lead to the following statements: \begin{itemize} \item When a QKD network deployment is planned, it seems optimal to choose the location of network nodes so that QKD links will be operated over distances comparable to the optimal distance $\ell^{\textrm{\tiny opt}}$. As we have seen in our different models, $\ell^{\textrm{\tiny opt}}$ is always lower bounded by a pre-factor times $\lambda_{\textrm{\tiny QKD}}$.
Indeed, when the total cost of node equipment can be neglected compared to the cost of QKD devices, as is the case for large networks, the optimum distance $\ell^{\textrm{\tiny opt}}$ is indeed comparable to $\lambda_{\textrm{\tiny QKD}}$, which is roughly equal to 20~km. This indicates that current QKD technologies, for which $D_{\textrm{\tiny max}}$ is already significantly larger than 20~km, are well suited for metropolitan operation. On the other hand, the typical distance between amplifiers, in optical wide area networks, is of the order of 80~km. If we wanted to deploy trusted QKD networks with the current generation of QKD devices, the QKD links would have to be operated close to their maximum distance, where the unit of secret bit rate becomes very expensive. Although technically already feasible, the deployment of wide area QKD networks thus remains a challenge. We can however anticipate that this challenge will be overcome within the next years, as new generations of QKD protocols and devices, able to generate keys at higher rates and over larger maximum distances, are already being presented~\cite{stucki:qp08, leverrier:qp08, dixon:qp08}. \item The results on cost minimization that we have obtained could provide some helpful guidelines for QKD device developers: they may help promote the idea that what will really matter, from the perspective of real network deployment, is to optimize their systems around typical network-optimum working distances. Optimizing QKD devices in this regime means reducing the cost of a unit bit rate at a \emph{reasonable} distance, where the throughput of the QKD link is not considerably smaller than $R_0$. It will of course always be profitable to design QKD devices that can reach very long distances but, as discussed in~\cite{alleaume:inprep09}, from a system development point of view it can be significantly different to optimize QKD devices to reach the longest possible distance $D_{\textrm{\tiny max}}$ and to optimize them so that the cost per unit of bit rate is as low as possible around the distance $\ell^{\textrm{\tiny opt}}$ minimizing network costs.\\ \end{itemize} \noindent \emph{In which regime are backbones useful?} \noindent We would now like to use our calculation results to analyze in which regime QKD backbones become \emph{economically interesting}, \emph{i.e.} under which conditions it is interesting to introduce some hierarchy and resource mutualization in QKD networks, in order to decrease the total deployment cost. In the previous sections we have performed cost calculations that can be used to establish some quantitative comparisons between: \begin{itemize} \item The cost of a QKD network with no hierarchy, as in the generalized linear chain QKD network, whose cost calculations have been performed in section~\ref{subsec:2Dchain}. \item The cost of a QKD network with one level of hierarchy, which is the case of the square backbone QKD network studied in section~\ref{subsec:square}. \end{itemize} Since these two cost calculations have been performed under the same assumptions regarding user distribution and traffic demand, we can use the results given in equations~(\ref{eq:Cchaintotal}) and (\ref{eq:CbbSquare}) to compare the total network deployment costs, respectively for the generalized linear chain model and for a QKD network with a square backbone (for which we have seen that we could neglect the cost of the local access network).
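Before stating the comparison condition formally, the two cost expressions can be evaluated numerically. The Python sketch below evaluates equation~(\ref{eq:Cchaintotal}) and the square-backbone cost (equation~(\ref{eq:CbbSquare}) plus the node term, with the local access cost neglected); all parameter values are purely illustrative, and both operating distances are simply set to $\lambda_{\textrm{\tiny QKD}}$, their common large-$V$ optimum.
\begin{verbatim}
# Illustrative comparison of the total deployment cost of (i) the generalized
# linear chain model and (ii) the square backbone model (local access cost
# neglected).  All parameter values below are illustrative only.
import math

C_QKD, R0, LAM = 50_000.0, 1_000.0, 20.0   # device cost, R_0 (bit/s), km
C_NODE = 100_000.0                         # cost of one trusted node
GAMMA = math.log(1 + math.sqrt(2)) / 3 + (2 + math.sqrt(2)) / 15  # ~0.5214

def per_bit_cost(l):                       # C(l) = (C_QKD/R0) * exp(l/LAM)
    return C_QKD / R0 * math.exp(l / LAM)

def chain_cost(L, alpha_u, V, l):          # generalized chain cost
    delta = GAMMA * L**5 / alpha_u**4
    return (V * per_bit_cost(l) / l + C_NODE / l) * delta

def square_backbone_cost(L, alpha_u, V, a_bb):   # backbone + node terms
    mu = (L / alpha_u) ** 2
    return (2.0 / 3.0 * per_bit_cost(a_bb) / a_bb * mu**2 * V * L
            + C_NODE * (L / a_bb) ** 2)

L, alpha_u, V = 500.0, 50.0, 1e-3          # km, km, secret bit/s per user pair
print(chain_cost(L, alpha_u, V, l=LAM))
print(square_backbone_cost(L, alpha_u, V, a_bb=LAM))
\end{verbatim}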
The condition under which it will be more cost effective to deploy a quantum backbone than to connect all pairs of users by one-dimensional chains of QKD links can be described by the following inequality between the respective optimal costs \begin{eqnarray} \fl \mathcal{C}_{\textrm{\tiny 2D,chain}}^{\textrm{\tiny opt,chain}} \geq \mathcal{C}_{\textrm{\tiny 2D,square}}^{\textrm{\tiny opt,square}} \nonumber \\ \fl \Leftrightarrow \Big( V \, C(\ell^{\textrm{\tiny opt}})/\ell^{\textrm{\tiny opt}} + C_{\textrm{\tiny node}}/\ell^{\textrm{\tiny opt}} \Big) \gamma \sigma^2 L^5 \, \geq \, \frac{2}{3} \, C(\alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}})/\alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}} \, \sigma^2 L^5 \, V + \, C_{\textrm{\tiny node}} \, {L^2}/{{\alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}}}^2} \label{ineq:chainsquare1} \end{eqnarray} The above inequality is not very convenient to handle because in general $ \alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}} \neq \ell^{\textrm{\tiny opt}} $. However, \begin{eqnarray} \mathcal{C}_{\textrm{\tiny 2D,chain}}^{\textrm{\tiny opt,chain}} \geq \mathcal{C}_{\textrm{\tiny 2D,square}}^{\textrm{\tiny opt,square}} \Rightarrow \mathcal{C}_{\textrm{\tiny 2D,chain}}^{\textrm{\tiny opt,square}} \geq \mathcal{C}_{\textrm{\tiny 2D,square}}^{\textrm{\tiny opt,square}} \label{implication:chainsquare1} \end{eqnarray} Thus, we can derive a necessary condition under which the deployment of a backbone for a QKD network is a better solution than a design that would solely rely on the generalized linear chain of QKD links to transport the traffic: \begin{eqnarray} \fl \mathcal{C}_{\textrm{\tiny 2D,chain}}^{\textrm{\tiny opt,square}} \geq \mathcal{C}_{\textrm{\tiny 2D,square}}^{\textrm{\tiny opt,square}} \Leftrightarrow \, C_{\textrm{\tiny node}} \, ( \sigma^2 L^3 \alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}} \, \gamma - 1 ) \, \geq \, C(\alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}}) V \, \sigma^2 L^3 \alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}} \, (\frac{2}{3} - \gamma) \nonumber \\ \fl \Leftrightarrow \,C_{\textrm{\tiny node}} \, ( \sigma^2/ {\sigma^\ast}^2 - 1) \, \geq \, C(\alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}}) \, V \, \sigma^2/ {\sigma^\ast}^2 \, ( \frac {2}{3 \gamma} - 1) \label{implication:chainsquare2} \end{eqnarray} with $\sigma^{\ast} = 1/ \sqrt{ L^3 \alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}} \, \gamma}\; .$\\ Keeping in mind that $\frac {2}{3 \gamma} - 1$ is a positive number, we can use the last inequality to make the following observations: \begin{itemize} \item First, it appears that if the user density $\sigma$ is smaller than $\sigma^{\ast}$, which we can qualify as a \emph{critical user density}, then inequality~(\ref{implication:chainsquare2}) can never be satisfied. This means that below $\sigma^\ast$ it will never be interesting to deploy a backbone. This result has a clear interpretation: backbone infrastructures can only be interesting in the case where sharing resources offers a cost reduction, and the incentive to share a backbone infrastructure can only exist if there are enough users. The minimum total number of users required to have a cost incentive towards backbone deployment is $\sigma^{\ast}\, L^2 = \sqrt{ L / (\gamma \alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}})} $.
\item In case $\sigma$ is larger than the critical user density $\sigma^{\ast}$, we enter a regime where there will be an incentive to deploy a quantum backbone essentially if the cost of a node $C_{\textrm{\tiny node}}$ dominates over the cost of the QKD link equipment to be deployed, which scales as $C(\alpha_{\textrm{\tiny bb}}^{\textrm{\tiny opt}}) V$. This also has a clear interpretation: if we take the extreme case where the cost of building a node (and installing node equipment inside it) is zero, we can foresee that there will be no incentive to build a backbone: it will always be cheaper to deploy direct chains between each pair of users. The motivation to build a backbone arises when the effort associated with opening a QKD node is significant. This will of course be the case if QKD node equipment is expensive, as we can see from equation~(\ref{implication:chainsquare2}), but it is also intuitive that, in case significant efforts are required to build new QKD nodes, mutualization of nodes through a backbone structure will be a cost-effective solution. \end{itemize} \section{Conclusion and Perspectives} \label{sec:perspectives} In this paper, we performed a topological analysis of quantum key distribution networks with trusted repeater nodes. In particular, under specific assumptions on the user and node distributions as well as the call traffic and routing in such networks, we derived cost functions for different network architectures. We first considered a linear chain network as a basic model that served the purpose of illustrating the main techniques and ideas that we used, and then moved on to more advanced network configurations that were in some cases enhanced with a backbone structure. Using cost minimization arguments, we obtained results on the optimal working points of QKD links and the spatial distribution of QKD nodes, and examined the importance of introducing hierarchy into QKD networks. Our results indicate that, in the context of QKD networks, it is more cost-effective and therefore advantageous to operate individual QKD links at their optimal working point, which is in general significantly shorter than the maximum span of such links. This conclusion motivates research on new experimental trade-offs in practical QKD systems, and can be illustrated by considering examples of such systems where the characteristics of either a hardware component (for example a single-photon detector) or a software algorithm (for example a reconciliation code) can be experimentally manipulated as a function of distance~\cite{alleaume:inprep09}. In general, it is clear that, as the realization of more and more advanced QKD networks approaches the realm of actual deployment, it becomes necessary to orient the research on QKD devices and links towards cost-related directions, and to extend the techniques we have presented here to more sophisticated network technologies and architectures. \ack We acknowledge financial support from the Integrated European Project SECOQC (Grant No. IST-2002-506813). R. A. and E. D. acknowledge financial support from the French National Research Agency Projects PROSPIQ (ANR-06-NANO-041-05) and SEQURE (ANR-07-SESU-011-01). N. L. acknowledges support from the NSERC Innovation Platform QuantumWorks, a NSERC Discovery Grant, and the Ontario Centers of Excellence. \section*{References} \end{document}
\begin{document} \title{On Performance Estimation in Automatic Algorithm Configuration} \begin{abstract} Over the last decade, research on automated parameter tuning, often referred to as automatic algorithm configuration (AAC), has made significant progress. Although the usefulness of such tools has been widely recognized in real world applications, the theoretical foundations of AAC are still very weak. This paper addresses this gap by studying the performance estimation problem in AAC. More specifically, this paper first proves the universal best performance estimator in a practical setting, and then establishes theoretical bounds on the estimation error, i.e., the difference between the training performance and the true performance for a parameter configuration, considering finite and infinite configuration spaces respectively. These findings were verified in extensive experiments conducted on four algorithm configuration scenarios involving different problem domains. Moreover, insights for enhancing existing AAC methods are also identified. \end{abstract} \section{Introduction} Many high-performance algorithms for solving computationally hard problems, ranging from exact methods such as mixed integer programming solvers to heuristic methods such as local search, involve a large number of free parameters that need to be carefully tuned to achieve their best performance. In many cases, finding performance-optimizing parameter settings is performed manually in an ad-hoc way. However, manual tuning is often intensive in terms of human effort \cite{lopez2016irace}, and thus there have been many attempts to automate this process (see \cite{hutter2009paramils} for a comprehensive review); this is usually referred to as automatic algorithm configuration (AAC) \cite{Hoos121}. Many AAC methods such as ParamILS \cite{hutter2009paramils}, GGA/GGA+ \cite{ansotegui2009gender,AnsoteguiMSST15}, irace \cite{lopez2016irace} and SMAC \cite{hutter2011sequential} have been proposed in the last few years. They have been used for boosting an algorithm's performance in a wide range of domains such as the Boolean satisfiability problem (SAT) \cite{hutter2009paramils}, the traveling salesman problem (TSP) \cite{lopez2016irace,LiuT019}, answer set programming (ASP) \cite{HutterLFLHLS14} and machine learning \cite{FeurerKESBH15,KotthoffTHHL17}. Despite the notable success achieved in applications, the theoretical aspects of AAC have rarely been investigated. To the best of our knowledge, the first theoretical analysis of AAC was given by \citet{Birattari2004}, in which the author analyzed expectations and variances of different performance estimators that estimate the true performance of a given parameter configuration on the basis of $N$ runs of the configuration. It is concluded in \cite{Birattari2004} that performing one single run on each of $N$ different problem instances guarantees that the variance of the estimate is minimized, which has served as a guide in the design of the performance estimation mechanisms in later AAC methods including irace, ParamILS and SMAC. It is noted that the analysis in \cite{Birattari2004} assumes that infinitely many problem instances can be sampled for configuration evaluation. However, in practice we are often only given a finite set of training instances \cite{Hoos121}.
Recently, \citet{KleinbergLL17} introduced a new algorithm configuration framework named Structured Procrastination (SP), which is guaranteed to find an approximately optimal parameter configuration within a logarithmic factor of the optimal runtime in a worst-case sense. Furthermore, the authors showed that the gap between the worst-case runtimes of existing methods (ParamILS, GGA, irace, SMAC) and SP could be arbitrarily large. These results were later extended in \cite{WeiszGS18,WeiszGS19}, in which the authors proposed new methods, called LEAPSANDBOUNDS (LB) and CapsAndRuns (CR), with better runtime guarantees. However, there is a discrepancy between the algorithm configuration problem addressed by these methods (SP, LB and CR) and the problem that is most frequently encountered in practice. More specifically, these methods are designed to find parameter configurations with approximately optimal performances on the input (training) instances, while in practice it is more desirable to find parameter configurations that will perform well on new unseen instances rather than just the training instances \cite{Hoos121}. Indeed, one of the most critical issues that needs to be addressed in AAC is the over-tuning phenomenon \cite{Birattari2004}, in which the found parameter configuration has excellent training performance but performs badly on new instances \footnote{ To appropriately evaluate AAC methods, in the literature, including widely used benchmarks (e.g., AClib \cite{HutterLFLHLS14}) and major contests (e.g., the Configurable SAT Solver Challenge (CSSC) \cite{HutterLBBHL17}), the common scheme is to use an independent test set that has never been used during the configuration procedure to test the found configurations.}. Based on the above observations, this paper extends the results of \cite{Birattari2004} in several aspects. First, this paper introduces a new formulation of the algorithm configuration problem (Definition~\ref{def:AAC_definition}), which concerns the optimization of the expected performance of the configured algorithm on an instance distribution $\mathcal{D}$. Compared to the setting considered by \citet{Birattari2004}, in which $\mathcal{D}$ is directly given (and thus can be sampled infinitely), in the problem considered here $\mathcal{D}$ is unknown and inaccessible, and the assumption is that the input training instances (and the test instances) are sampled i.i.d.\ from $\mathcal{D}$. Therefore, when solving this configuration problem, we can only use the given finite training instances. One key difficulty is that the true performance of a parameter configuration is not directly accessible. Consequently, we can only run a configuration on the training instances to obtain an estimate of its true performance. A natural and important question is thus, given a finite computational budget, e.g., $N$ runs of the configuration, how to allocate these runs over the training instances to obtain the most reliable estimate. Moreover, given that we can obtain an estimate of the true performance, is it possible to quantify the difference between the estimate and the true performance? The second and the most important contribution of this paper is that it answers the above questions theoretically. More specifically, this paper first introduces a universal best performance estimator (Theorem~\ref{theorem:best_estimator}) that always distributes the $N$ runs of a configuration to all training instances as evenly as possible, such that the performance estimate is most reliable.
Then this paper investigates the estimation error, i.e., the difference between the training performance (the estimate) and the true performance, and establishes a bound on the estimation error that holds for all configurations in the configuration space, assuming the cardinality of the configuration space is finite (Theorem~\ref{theorem:finite}). It is shown that the bound deteriorates as the number of the considered configurations increases. Since in practice the cardinality of the considered configuration space could be considerably large or even infinite, by making two mild assumptions on the considered configuration scenarios, we remove the dependence on the cardinality of the configuration space and finally establish a new bound on the estimation error (Theorem~\ref{theorem:infinite}). The effectiveness of these results has been verified in extensive experiments conducted on four configuration scenarios involving problem domains including SAT, ASP and TSP. Some potential directions for improving current AAC methods based on these results have also been identified. \section{Algorithm Configuration Problem} \label{sec:2} In a nutshell, the algorithm configuration problem concerns the optimization of the free parameters of a given parameterized algorithm (called the target algorithm) such that its performance is optimized. Let $\mathcal{A}$ denote the target algorithm and let $p_{1},...,p_{h}$ be the parameters of $\mathcal{A}$. Denote the set of possible values for each parameter $p_{i}$ as $\Theta_{i}$. A parameter configuration $\theta$ (or simply configuration) of $\mathcal{A}$ refers to a complete setting of $p_{1},...,p_{h}$, such that the behavior of $\mathcal{A}$ on a given problem instance is completely specified (up to randomization of $\mathcal{A}$ itself). The configuration space $\bm{\Theta} = \Theta_{1} \times \Theta_{2}...\times \Theta_{h}$ contains all possible configurations of $\mathcal{A}$. For brevity, henceforth we will not distinguish between $\theta$ and the instantiation of $\mathcal{A}$ with $\theta$. In real applications $\mathcal{A}$ is often randomized, and its output is determined by the configuration $\theta$ used, the input instance $z$ and the random seed $v$. Let $\mathcal{D}$ denote a probability distribution over a space $\mathcal{Z}$ of problem instances from which $z$ is sampled. Let $\mathcal{G}$ be a probability distribution over a space $\mathcal{V}$ of random seeds from which $v$ is sampled. In practice $\mathcal{G}$ is often given implicitly through a random number generator. Given an instance $z$ and a seed $v$, the quality of $\theta$ at $(z, v)$ is measured by a utility function $f_{\theta}:\mathcal{Z} \times \mathcal{V} \rightarrow [L,U]$, where $L,U$ are bounded real numbers. In practice, this means running $\theta$ with $v$ on $z$ and mapping the result of this run to a scalar score. Note that how the mapping is done depends on the considered performance metric. For example, if we are interested in optimizing the quality of the solutions found by $\mathcal{A}$, then we might take the (normalized) cost of the solution output by $\mathcal{A}$ as the utility; if we are interested in minimizing the computational resources consumed by $\mathcal{A}$ (such as runtime, memory or communication bandwidth), then we might take the quantity of the consumed resource of the run as the utility.
No matter which performance metric is considered, in practice the value of $f_{\theta}$ is bounded for all $\theta \in \bm{\Theta}$, i.e., for all $\theta \in \bm{\Theta}$ and all $(z,v) \in \mathcal{Z} \times \mathcal{V}$, $f_{\theta}(z,v) \in [L, U]$. To measure the performance of $\theta$, the expected value of the utility scores of $\theta$ across different $(z,v)$, which is the most widely adopted criterion in AAC applications \cite{Hoos121}, is considered here. More specifically, as presented in Definition~\ref{def:AAC_definition}, the performance of $\theta$, denoted as $u(\theta)$, is its expected utility score over the instance distribution $\mathcal{D}$ and the random seed distribution $\mathcal{G}$. Without loss of generality, we always assume that a smaller value of $u(\theta)$ is better. The goal of the algorithm configuration problem is to find a configuration from the configuration space $\bm{\Theta}$ with the best performance. \begin{definition}[Algorithm Configuration Problem] \label{def:AAC_definition} Given a target algorithm $\mathcal{A}$ with configuration space $\bm{\Theta}$, an instance distribution $\mathcal{D}$ defined over space $\mathcal{Z}$, a random seed distribution $\mathcal{G}$ defined over space $\mathcal{V}$ and a utility function $f_{\theta}:\mathcal{Z} \times \mathcal{V} \rightarrow [L,U]$ that measures the quality of $\theta$ at $(z, v)$, the algorithm configuration problem is to find a configuration $\theta^{\star}$ from $\bm{\Theta}$ with the best performance: \[ \label{eq:denitionofutheta} \theta^{\star} \in \argmin_{\theta \in \bm{\Theta}} u(\theta), \] where $u(\theta) = \mathbb{E}_{z \sim \mathcal{D}, v \sim \mathcal{G}}[f_{\theta}(z,v)]$. \end{definition} In practice, $\mathcal{D}$ is usually unknown and $u(\theta)$ cannot be computed analytically. Instead, we usually have a set of problem instances $\{z_{1},...,z_{K}\}$, called training instances, which are assumed to be sampled i.i.d.\ from $\mathcal{D}$. To estimate $u(\theta)$, a series of experiments of $\theta$ on $\{z_1, z_2,...,z_K\}$ can be run. As presented in Definition~\ref{def:exsetting}, an experimental setting $S_{N}$ to estimate $u(\theta)$ is to run $\theta$ on $\{z_{1},...,z_{K}\}$ $N$ times, each time with a random seed sampled i.i.d.\ from $\mathcal{G}$. \begin{definition}[Experimental Setting $S_{N}$] \label{def:exsetting} Given a configuration $\theta$, a set of $K$ training instances $\{z_{1},...,z_{K}\}$ and the total number $N$ of runs of $\theta$, an experimental setting $S_{N}$ to estimate $u(\theta)$ is a list of $N$ tuples, in which each tuple $(z,v)$ consists of an instance $z$ and a random seed $v$, meaning a single run of $\theta$ with $v$ on $z$. Let $n_{i}$ denote the number of runs performed on $z_{i}$ (note that $n_{i}$ could be 0, meaning $\theta$ will not be run on $z_{i}$). It holds that $\sum_{i=1}^{K} n_{i}=N$ and $S_{N}$ could be written as: \[ \begin{aligned} S_{N}=[&(z_{1}, v_{1,1}),...,(z_{1},v_{1,n_{1}}),...,(z_{i}, v_{i,1}),...,\\ &(z_{i},v_{i,n_{i}}),...,(z_{K}, v_{K,1}),...,(z_{K},v_{K,n_{K}})]. \end{aligned} \] \end{definition} After performing the $N$ runs of $\theta$ as specified in $S_{N}$, the utility scores of these runs are aggregated to estimate $u(\theta)$. The following estimator $\hat{u}_{S_{N}}(\theta)$, which calculates the mean utility across all runs and is widely adopted in AAC methods \cite{hutter2009paramils,lopez2016irace,hutter2011sequential}, is presented in Definition~\ref{def:estimator}.
\begin{definition}[Estimator $\hat{u}_{S_{N}}(\theta)$] \label{def:estimator} Given a configuration $\theta$ and an experimental setting $S_{N}$, the training performance of $\theta$, which is an estimate of $u(\theta)$, is given by: \[ \label{eq:estimator} \hat{u}_{S_{N}}(\theta) = \frac{1}{N}\sum_{i=1}^{K}\sum_{j=1}^{n_{i}}f_{\theta}(z_{i}, v_{i, j}). \] \end{definition} Different experimental settings represent different performance estimators, which behave differently. It is thus necessary to investigate which $S_N$ is the best. \section{Universal Best Performance Estimator} \label{section3} To determine the values of $n_{1},...,n_{K}$ in $S_{N}$, \citet{Birattari2004} analyzed expectations and variances of $\hat{u}_{S_{N}}(\theta)$, and concluded that $\hat{u}_{S^{\circ}_{N}}(\theta)$ with $S^{\circ}_{N}:=[(z_{1}, v_{1,1}),(z_{2},v_{2,1}),...,(z_{N},v_{N,1})]$ has the minimal variance. It is noted that the analysis in \cite{Birattari2004} assumes that infinitely many problem instances can be sampled from $\mathcal{D}$; thus for performing $N$ runs of $\theta$, as specified in $S^{\circ}_{N}$, it is always best to sample $N$ instances from $\mathcal{D}$ and perform a single run of $\theta$ on each instance. In other words, $S^{\circ}_{N}$ is established on the premise that the number of training instances $K$ can always be set equal to $N$. However, in practice we usually only have a finite number of training instances. In the case that $K \not= N$, which $S_{N}$ is the best? Theorem~\ref{theorem:best_estimator} answers this question for an arbitrary relationship between $K$ and $N$. Before presenting Theorem~\ref{theorem:best_estimator}, some necessary definitions are introduced. Given a configuration $\theta$ and an instance $z$, the expected utility of $\theta$ within $z$, denoted as $u_{z}(\theta)$, is $\mathbb{E}_{\mathcal{G}}[f_{\theta}(z,v)|z]$. The variance of the utility of $\theta$ within $z$, denoted as $\sigma^{2}_{z}(\theta)$, is $\mathbb{E}_{\mathcal{G}}[(f_{\theta}(z,v)-u_{z}(\theta))^{2}|z]$. Based on $u_{z}(\theta)$ and $\sigma^{2}_{z}(\theta)$, the expected within-instance variance $\bar{\sigma}^{2}_{WI}(\theta)$ of $\theta$ and the across-instance variance $\bar{\sigma}^{2}_{AI}(\theta)$ of $\theta$ are defined in Definition~\ref{def:within_instance} and Definition~\ref{def:across_instance}, respectively. \begin{definition}[Expected within-instance Variance of $\theta$] $\bar{\sigma}^{2}_{WI}(\theta)$ is the expected value of $\sigma^{2}_{z}(\theta)$ over instance distribution $\mathcal{D}$: \small \[ \bar{\sigma}^{2}_{WI}(\theta) = \mathbb{E}_{\mathcal{D}}[\sigma^{2}_{z}(\theta)]. \] \label{def:within_instance} \end{definition} \begin{definition}[Across-instance Variance of $\theta$] $\bar{\sigma}^{2}_{AI}(\theta)$ is the variance of $u_{z}(\theta)$ over instance distribution $\mathcal{D}$: \small \[ \bar{\sigma}^{2}_{AI}(\theta) = \mathbb{E}_{\mathcal{D}}[(u_{z}(\theta) - u(\theta))^{2}]. \] \label{def:across_instance} \end{definition} The expectation and the variance of an estimator $\hat{u}_{S_{N}}(\theta)$ are presented in Lemma~\ref{lem:unbiased} and Lemma~\ref{lem:variance}, respectively. The proofs are omitted here due to space limitations. \begin{lemma} \label{lem:unbiased} The expectation of $\hat{u}_{S_{N}}(\theta)$ is $u(\theta)$, that is, $\hat{u}_{S_{N}}(\theta)$ is an unbiased estimator of $u(\theta)$ no matter how $n_{1},...,n_{K}$ in $S_{N}$ are set: \small \[ \mathbb{E}_{S_{N}}[\hat{u}_{S_{N}}(\theta)] = u(\theta).
\] \end{lemma} \begin{lemma} \label{lem:variance} The variance of $\hat{u}_{S_{N}}(\theta)$ is given by: \small \begin{equation} \label{eq:variance} \mathbb{E}_{S_{N}}[(\hat{u}_{S_{N}}(\theta)-u(\theta))^{2}] = \frac{1}{N} \bar{\sigma}^{2}_{WI}(\theta) + \frac{\Sigma_{i=1}^{K}n_{i}^{2}}{N^{2}} \bar{\sigma}^{2}_{AI}(\theta). \end{equation} \end{lemma} \begin{theorem} \label{theorem:best_estimator} Given a configuration $\theta$, a training set of $K$ instances and the total number $N$ of runs of $\theta$, the universal best estimator $\hat{u}_{S^{\ast}_{N}}(\theta)$ for $u(\theta)$ is obtained by setting $S^{\ast}_{N}$ such that $n_{i} \in \{\lfloor{\frac{N}{K}}\rfloor, \lceil{\frac{N}{K}}\rceil \}$ for all $i \in \{1,2,...,K\}$, s.t. $\sum_{i=1}^{K} n_{i}=N$. $\hat{u}_{S^{\ast}_{N}}(\theta)$ is an unbiased estimator of $u(\theta)$ and has the minimal variance among all possible estimators. \end{theorem} \begin{proof} By Lemma~\ref{lem:unbiased}, $\hat{u}_{S^{\ast}_{N}}(\theta)$ is an unbiased estimator of $u(\theta)$. We now prove that $\hat{u}_{S^{\ast}_{N}}(\theta)$ has the minimal variance. By Lemma~\ref{lem:variance}, the variance of $\hat{u}_{S_{N}}(\theta)$ is $\frac{1}{N} \bar{\sigma}^{2}_{WI}(\theta) + \frac{\Sigma_{i=1}^{K}n_{i}^{2}}{N^{2}} \bar{\sigma}^{2}_{AI}(\theta)$. Since $N$ and $K$ are fixed, and $\bar{\sigma}^{2}_{WI}(\theta)$ and $\bar{\sigma}^{2}_{AI}(\theta)$ are constants for a given $\theta$, we need to minimize $\sum_{i=1}^{K}n_{i}^{2}$, s.t. $\sum_{i=1}^{K}n_{i}=N$. Define $Q_{n}=\sqrt{\sum_{i=1}^{K}n_{i}^{2}}$ and $\bar{n}=\frac{\sum_{i=1}^{K}n_{i}}{K}=\frac{N}{K}$; it then follows that $Q_{n}^{2}=K\bar{n}^{2}+\sum_{i=1}^{K}(n_{i}-\bar{n})^{2}$. Then it suffices to prove that $Q_{n}^{2}$ is minimized under the condition $n_{i} \in \{\lfloor{\frac{N}{K}}\rfloor, \lceil{\frac{N}{K}}\rceil \}$ for all $i \in \{1,2,...,K\}$. Assume $Q_{n}^{2}$ is minimized while the condition is not satisfied; then there must exist $n_{i}$ and $n_{j}$ such that $n_{i}-n_{j}>1$, and we have $(n_{i}-\bar{n})^{2}+(n_{j}-\bar{n})^{2} > (n_{i}-\bar{n})^{2}+(n_{j}-\bar{n})^{2}-2(n_{i}-n_{j})+2 = (n_{i}-\bar{n}-1)^{2}+(n_{j}-\bar{n}+1)^{2}$. This contradicts the assumption that $Q_{n}^{2}$ is minimized. The proof is complete. \end{proof} Theorem~\ref{theorem:best_estimator} states that it is always best to distribute the $N$ runs of $\theta$ over all training instances as evenly as possible, in which case $\max_{i,j \in \{1,...,K\}} |n_{i}-n_{j}| \leq 1$, regardless of whether $K = N$ or $K \neq N$. When $K=N$, $S^{\ast}_{N}$ is actually equivalent to $S^{\circ}_{N}$, which performs a single run of $\theta$ on each instance. When $K \neq N$, $S^{\ast}_{N}$ performs $\lceil\frac{N}{K}\rceil$ runs of $\theta$ on each of $(N\ \mathrm{mod}\ K)$ instances and $\lfloor\frac{N}{K}\rfloor$ runs on each of the remaining instances. It is worth mentioning that practical AAC methods including ParamILS, SMAC and irace actually adopt the same or quite similar estimators as $\hat{u}_{S^{\ast}_{N}}(\theta)$. Theorem~\ref{theorem:best_estimator} provides a theoretical guarantee for these estimators, and $\hat{u}_{S^{\ast}_{N}}(\theta)$ will be further evaluated in the experiments. \section{Bounds on Estimation Error} Although Theorem~\ref{theorem:best_estimator} presents the estimator with the universal minimal variance, it cannot provide any information about how large the estimation error, i.e., $u(\theta)-\hat{u}_{S_{N}}(\theta)$, could be.
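Before bounding this error, it is worth making the even-split setting $S^{\ast}_{N}$ of Theorem~\ref{theorem:best_estimator} concrete. The following sketch (ours; names are illustrative, and \texttt{utility} refers to the illustrative utility function sketched in Section~\ref{sec:2}) allocates the $N$ runs over the $K$ training instances so that every $n_{i} \in \{\lfloor N/K\rfloor, \lceil N/K\rceil\}$ and then computes the mean-utility estimate of Definition~\ref{def:estimator}.

\begin{verbatim}
import random

def even_split_setting(instances, N, rng=None):
    """Build S*_N: distribute N runs over the K training instances as
    evenly as possible, drawing one fresh random seed per run.  Returns
    the experimental setting as a list of (instance, seed) tuples."""
    rng = rng or random.Random(0)
    K = len(instances)
    base, extra = divmod(N, K)   # n_i = base + 1 for the first `extra` instances
    setting = []
    for i, z in enumerate(instances):
        n_i = base + (1 if i < extra else 0)
        setting.extend((z, rng.randrange(2**31)) for _ in range(n_i))
    return setting

def training_performance(utility, theta, setting):
    """Training performance u_hat_{S_N}(theta): mean utility over all runs."""
    return sum(utility(theta, z, v) for (z, v) in setting) / len(setting)
\end{verbatim}

With the run allocation fixed in this way, the remaining question is how far the resulting training performance can deviate from $u(\theta)$, which the bounds below address.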
Bounds on estimation error are useful in both theory and practice because we could use them to establish bounds on the true performance $u(\theta)$, given that in algorithm configuration process the training performance $\hat{u}_{S_{N}}(\theta)$ is actually known. In general, given a configuration $\theta$, its training performance $\hat{u}_{S_{N}}(\theta)$ is a random variable because the training instances and the random seeds specified in $S_{N}$ are drawn from distributions $\mathcal{D}$ and $\mathcal{G}$, respectively. Thus we focus on establishing probabilistic inequalities for $u(\theta)-\hat{u}_{S_{N}}(\theta)$, i.e., for any $0<\delta<1$, with probability at least $1-\delta$, there holds $u(\theta)-\hat{u}_{S_{N}}(\theta) \leq A(\delta)$. In particular, probabilistic bounds on uniform estimation error, i.e., $\sup_{\theta \in \bm{\Theta}}[u(\theta)-\hat{u}_{S_{N}}(\theta)]$, that hold for all $\theta \in \bm{\Theta}$ are established. Recalling that Lemma~\ref{lem:unbiased} states $\mathbb{E}_{S_{N}}[\hat{u}_{S_{N}}(\theta)] = u(\theta)$, the key technique for deriving bounds on $u(\theta)-\hat{u}_{S_{N}}(\theta)$ is the concentration inequality presented in Lemma~\ref{lem:concen} that bounds how $\hat{u}_{S_{N}}(\theta)$ deviates from its expected value $u(\theta)$. \begin{lemma}[Bernstein's Inequality \cite{bernstein1927theory}] Let $X_{1},X_{2},...,X_{n}$ be independent centered bounded random variables, i.e., $\mathrm{Prob}\{|X_{i}| \leq a\}=1$ and $\mathbb{E}[X_{i}]=0$. Let $\sigma^{2}=\frac{1}{n}\sum_{i=1}^{n}Var[X_{i}]$ where $Var[X_{i}]$ is the variance of $X_{i}$. Then for any $\epsilon>0$ we have \[ \mathrm{Prob}\{\frac{1}{n}\sum_{i=1}^{n}X_{i} \geq \epsilon\} \leq \mathrm{exp}(-\frac{n\epsilon^{2}}{2\sigma^{2}+\frac{2a\epsilon}{3}}). \] \label{lem:berstein} \end{lemma} \begin{lemma} Given a configuration $\theta$, an experimental setting $S_{N}=[(z_{1}, v_{1,1}),...,(z_{K},v_{K,n_{K}})]$ and a performance estimator $\hat{u}_{S_{N}}(\theta)=\frac{1}{N}\sum_{i=1}^{K}\sum_{j=1}^{n_{i}}f_{\theta}(z_{i}, v_{i, j})$. Let $\tau_{\theta}^{2}=\bar{\sigma}^{2}_{WI}(\theta) + \frac{\sum_{i=1}^{K}n_{i}^{2}}{N}\bar{\sigma}^{2}_{AI}(\theta)$. Let $C=U-L$, where $L,U$ are the lower bound and the upper bound of $f_{\theta}$ respectively (see Definition~\ref{def:AAC_definition}), and let $n=\max\{n_{1},n_{2},...,n_{K}\}$. Then for any $\epsilon>0$, we have \[ \mathrm{Prob}\{u(\theta) - \hat{u}_{S_{N}}(\theta) \geq \epsilon \} \leq \mathrm{exp}(-\frac{N\epsilon^{2}}{2\tau_{\theta}^{2}+\frac{2nC\epsilon}{3}}). \] \label{lem:concen} \end{lemma} \begin{proof} Define random variables $x_{i,j}=u(\theta)-f_{\theta}(z_{i}, v_{i,j})$, and define random variables $X_{i}=\sum_{j=1}^{n_{i}}x_{i,j}$. First we prove that $X_{1},...,X_{K}$ satisfy the conditions in Lemma~\ref{lem:berstein}. $\mathbb{E}[X_{i}]=\sum_{j=1}^{n_{i}}\mathbb{E}[x_{i,j}]=\sum_{j=1}^{n_{i}}[u(\theta)-\mathbb{E}[f_{\theta}(z_{i},v_{i,j})]]=0$. By Definition~\ref{def:AAC_definition}, $\mathrm{Prob}\{L \leq f_{\theta}(z_{i},v_{i,j}) \leq U\}=1$, it holds that $L \leq u(\theta) \leq U$ (since $u(\theta) = \mathbb{E}[f_{\theta}(z_{i},v_{i,j})]$). Thus we have $\mathrm{Prob}\{|x_{i,j}| \leq U-L\}=1$ and $\mathrm{Prob}\{|X_{i}| \leq n(U-L) \}=1$. For any $p \neq q$, $X_{p}$ and $X_{q}$ are independent. Thus $X_{1},X_{2},...,X_{K}$ are independent random variables. Let $\bar{X}=\frac{1}{K}\sum_{i=1}^{K}X_{i}$. 
By Lemma~\ref{lem:berstein}, it holds that, for any $\epsilon>0$, $\mathrm{Prob}\{\bar{X} > \epsilon\} \leq \mathrm{exp}(-\frac{K\epsilon^{2}}{2\sigma^{2}+\frac{2nC\epsilon}{3}})$, where $\sigma^{2}=\frac{1}{K}\sum_{i=1}^{K}Var[X_{i}]$. Notice that $\frac{K}{N}\bar{X}=u(\theta) - \hat{u}_{S_{N}}$; thus it holds that for any $\epsilon>0$, \begin{equation} \label{eq:midresult1} \mathrm{Prob}\{u(\theta) - \hat{u}_{S_{N}}>\epsilon\} \leq \mathrm{exp}(-\frac{N\epsilon^{2}}{\frac{2K}{N}\sigma^{2}+\frac{2nC\epsilon}{3}}). \end{equation} The rest of the proof focuses on $\sigma^{2}$. Since $\mathbb{E}[X_{i}]=0$, $Var[X_{i}]=\mathbb{E}[(X_{i}-\mathbb{E}[X_{i}])^{2}]=\mathbb{E}[X_{i}^{2}]$. Substituting $X_{i}$ with $\sum_{j=1}^{n_{i}}x_{i,j}$, we have $Var[X_{i}]=\sum_{j=1}^{n_{i}}\mathbb{E}[x_{i,j}^{2}] +\sum_{1 \leq j < l \leq n_{i}}2\mathbb{E}[x_{i,j}x_{i,l}]$. We analyze $\mathbb{E}[x_{i,j}^{2}]$ and $\mathbb{E}[x_{i,j}x_{i,l}]$ in turn. $\mathbb{E}[x_{i,j}^{2}]=Var[x_{i,j}]+\mathbb{E}[x_{i,j}]^{2}=Var[x_{i,j}]+0=\bar{\sigma}^{2}_{WI}(\theta)+\bar{\sigma}^{2}_{AI}(\theta)$ (by setting $N,K=1$ in Eq.~(\ref{eq:variance})). $\mathbb{E}[x_{i,j}x_{i,l}]=\mathbb{E}[(f_{\theta}(z_{i},v_{i,j})-u(\theta))(f_{\theta}(z_{i},v_{i,l})-u(\theta))]=\mathbb{E}[f_{\theta}(z_{i},v_{i,j})f_{\theta}(z_{i},v_{i,l})]-u(\theta)^{2}$. Given an instance $z_{i}$, $f_{\theta}(z_{i},v_{i,j})$ and $f_{\theta}(z_{i},v_{i,l})$ are independent because $v_{i,j}$ and $v_{i,l}$ are sampled i.i.d.\ from $\mathcal{G}$. Thus it holds that: \small \begin{align} &\mathbb{E}[f_{\theta}(z_{i},v_{i,j})f_{\theta}(z_{i},v_{i,l})] = \mathbb{E}_{\mathcal{D}}[\mathbb{E}_{\mathcal{G}} [f_{\theta}(z_{i},v_{i,j}) f_{\theta}(z_{i},v_{i,l}) |z_i]] \nonumber \\ &= \mathbb{E}_{\mathcal{D}}[\mathbb{E}_{\mathcal{G}} [f_{\theta}(z_{i},v_{i,j})|z_{i}] \mathbb{E}_{\mathcal{G}} [f_{\theta}(z_{i},v_{i,l})|z_{i}]] =\mathbb{E}_{\mathcal{D}}[u_{z_{i}}(\theta)^{2}]. \nonumber \end{align} By the fact that $\mathbb{E}_{\mathcal{D}}[u_{z}(\theta)]=u(\theta)$, $\mathbb{E}_{\mathcal{D}}[u_{z_i}(\theta)^{2}] - u(\theta)^{2} =\mathbb{E}_{\mathcal{D}}[(u_{z_i}(\theta)-u(\theta))^{2}]=\bar{\sigma}^{2}_{AI}(\theta)$. The last step is by Definition~\ref{def:across_instance}. Summing up the above results, we have $Var[X_{i}]=n_{i}(\bar{\sigma}^{2}_{WI}(\theta)+\bar{\sigma}^{2}_{AI}(\theta))+n_{i}(n_{i}-1)\bar{\sigma}^{2}_{AI}(\theta) =n_{i}\bar{\sigma}^{2}_{WI}(\theta) + n_{i}^{2}\bar{\sigma}^{2}_{AI}(\theta)$. Thus $\sigma^{2}=\frac{1}{K}\sum_{i=1}^{K}Var[X_{i}]=\frac{N}{K}\bar{\sigma}^{2}_{WI}(\theta) + \frac{\sum_{i=1}^{K}n_{i}^{2}}{K}\bar{\sigma}^{2}_{AI}(\theta)$. Substituting $\sigma^{2}$ in Eq.~(\ref{eq:midresult1}) with this result completes the proof. \end{proof} \subsection{On Configuration Space with Finite Cardinality} Theorem~\ref{theorem:finite} presents the bound for the uniform estimation error when $\bm{\Theta}$ is of finite cardinality. \begin{theorem} \label{theorem:finite} Consider a performance estimator $\hat{u}_{S_{N}}(\theta)$. Let $\theta^{\dagger}=\argmax_{\theta \in \bm{\Theta}}\tau_{\theta}^{2}$, where $\tau_{\theta}^{2} = \bar{\sigma}^{2}_{WI}(\theta) + \frac{\sum_{i=1}^{K}n_{i}^{2}}{N}\bar{\sigma}^{2}_{AI}(\theta)$, and let $\tau^{2}=\tau_{\theta^{\dagger}}^{2}$, $\bar{\sigma}^{2}_{WI}=\bar{\sigma}^{2}_{WI}(\theta^{\dagger})$ and $\bar{\sigma}^{2}_{AI}=\bar{\sigma}^{2}_{AI}(\theta^{\dagger})$. Let $n=\max\{n_{1},n_{2},...,n_{K}\}$ and $C=U-L$.
If $\bm{\Theta}$ is of finite cardinality, i.e., $\bm{\Theta}=\{\theta_{1},\theta_{2},...,\theta_{m}\}$, then for any $0<\delta<1$, with probability at least $1-\delta$, there holds: \small \begin{align} \label{eq:finite} \sup_{\theta \in \bm{\Theta}}&[u(\theta)-\hat{u}_{S_{N}}(\theta)] \nonumber \\ &\leq \frac{2nC\ln{\frac{m}{\delta}}}{3N} + \sqrt{2\ln{\frac{m}{\delta}}(\frac{1}{N}\bar{\sigma}^{2}_{WI} + \frac{\sum_{i=1}^{K}n^{2}_{i}}{N^{2}}\bar{\sigma}^{2}_{AI})}. \end{align} \end{theorem} \begin{proof} By Lemma~\ref{lem:concen}, for a given configuration $\theta$ and any $\epsilon>0$, it holds that $ \mathrm{Prob}\{u(\theta) - \hat{u}_{S_{N}}(\theta) \geq \epsilon \} \leq \mathrm{exp}(-\frac{N\epsilon^{2}}{2\tau_{\theta}^{2}+\frac{2nC\epsilon}{3}}) $. By the union bound, $ \mathrm{Prob}\{\sup_{\theta \in \bm{\Theta}}[u(\theta) - \hat{u}_{S_{N}}(\theta)] \geq \epsilon \} \leq \sum_{i=1}^{m} \mathrm{Prob}\{u(\theta_{i}) - \hat{u}_{S_{N}}(\theta_{i}) \geq \epsilon\} \leq m \mathrm{exp}(-\frac{N\epsilon^{2}}{2\tau^{2}+\frac{2nC\epsilon}{3}}). $ Let $\delta=m \mathrm{exp}(-\frac{N\epsilon^{2}}{2\tau^{2}+\frac{2nC\epsilon}{3}})$; then $\epsilon$ is solved as: $\epsilon=\frac{1}{2N} [\frac{2nC}{3}\ln{\frac{m}{\delta}} + \sqrt{(\frac{2nC}{3}\ln{\frac{m}{\delta}})^{2} +8N\tau^{2}\ln{\frac{m}{\delta}} }] \leq \frac{2nC\ln{\frac{m}{\delta}}}{3N} + \sqrt{2\ln{\frac{m}{\delta}}(\frac{\tau^{2}}{N})}. $ Substituting $\tau^{2}$ with $\bar{\sigma}^{2}_{WI} + \frac{\sum_{i=1}^{K}n_{i}^{2}}{N}\bar{\sigma}^{2}_{AI}$ proves Theorem~\ref{theorem:finite}. \end{proof} Note that for different $S_{N}$, the bounds on the right side of Eq.~(\ref{eq:finite}) are different. The proof of Theorem~\ref{theorem:best_estimator} shows that $\sum_{i=1}^{K}n_{i}^{2}$, s.t. $\sum_{i=1}^{K}n_{i}=N$, is minimized under the condition $n_{i} \in \{\lfloor{\frac{N}{K}}\rfloor, \lceil{\frac{N}{K}}\rceil \}$ for all $i \in \{1,2,...,K\}$. Moreover, it is easy to verify that $n=\max\{n_{1},n_{2},...,n_{K}\}$ is also minimized under the same condition, in which case $n=\lceil\frac{N}{K}\rceil$. Thus we can immediately obtain Corollary~\ref{cor:finite}. \begin{corollary} \label{cor:finite} The estimator $\hat{u}_{S^{\ast}_{N}}$ established in Theorem~\ref{theorem:best_estimator} has the best bound for the uniform estimation error in Theorem~\ref{theorem:finite}. Given that $K$ divides $N$, for any $0<\delta<1$, with probability at least $1-\delta$, there holds: \small \begin{equation*} \sup_{\theta \in \bm{\Theta}}[u(\theta)-\hat{u}_{S_{N}^{\ast}}(\theta)] \leq \frac{2C\ln{\frac{m}{\delta}}}{3K} + \sqrt{2\ln{\frac{m}{\delta}}(\frac{1}{N}\bar{\sigma}^{2}_{WI} + \frac{1}{K}\bar{\sigma}^{2}_{AI})}. \end{equation*} \end{corollary} \subsection{On Configuration Space with Infinite Cardinality} In practice the cardinality of $\bm{\Theta}$ could be considerably large (e.g., $10^{12}$), in which case the bound provided by Theorem~\ref{theorem:finite} could be very loose. Moreover, when the cardinality of $\bm{\Theta}$ is infinite, Theorem~\ref{theorem:finite} does not apply anymore. To address these issues, we establish a new uniform error bound without dependence on the cardinality of $\bm{\Theta}$, based on two mild assumptions given below. \begin{assumption}\label{ass} \begin{enumerate}[(a)] \item We assume there exists $R>0$ such that $\bm{\Theta}\subseteq B_R$, where $B_{R}=\{\mathbf{w}\in\mathbb{R}^h:\|\mathbf{w}\|_2\leq R\}$ is a ball of radius $R$ and $\|\mathbf{w}\|_2=\sqrt{\sum_{i=1}^{h}w_i^2}$ for $\mathbf{w}=(w_1,\ldots,w_h)$.
\item We assume that for any $(z,v) \in \mathcal{Z} \times \mathcal{V}$, the utility function $f$ is $L$-Lipschitz continuous, i.e., $|f_{\theta}(z,v)-f_{\tilde{\theta}}(z,v)| \leq L||\theta -\tilde\theta||_{2}$ for all $\theta,\tilde{\theta}\in\bm{\Theta}$. \end{enumerate} \end{assumption} Part (a) of Assumption \ref{ass} means the ranges of the values of all parameters considered are bounded, which holds in nearly all practical algorithm configuration scenarios \cite{HutterLFLHLS14}. Part (b) of Assumption \ref{ass} poses limitations on how fast $f_{\theta}$ can change across $\bm{\Theta}$. This assumption is also mild in the sense that configurations with similar parameter values are expected to result in similar behaviors of $\mathcal{A}$, and thus similar performance. The key technique for deriving the new bound is \textit{covering numbers}, as defined in Definition~\ref{def:covering_numers}. \begin{definition} \label{def:covering_numers} Let $\mathcal{F}$ be a set and $d$ be a metric. For any $\eta>0$, a set $\mathcal{F}^\triangle \subset \mathcal{F}$ is called an $\eta$-cover of $\mathcal{F}$ if for every $f \in \mathcal{F}$ there exists an element $g \in \mathcal{F}^\triangle$ satisfying $d(f,g) \leq \eta$. The covering number $\mathcal{N}(\eta,\mathcal{F}, d)$ is the cardinality of the minimal $\eta$-cover of $\mathcal{F}$: \[ \mathcal{N}(\eta,\mathcal{F},d):=\min\{|\mathcal{F}^\triangle|:\mathcal{F}^\triangle\text{ is an $\eta$-cover of }\mathcal{F}\}. \] \end{definition} Lemma~\ref{lem:bound_on_cover_B} presents a covering number bound on $B_R$. \begin{lemma}[\cite{pisier1999volume}] \label{lem:bound_on_cover_B} \[\ln\mathcal{N}(\eta,B_R,d_2)\leq h\ln(3R/\eta),\] where $d_2(\mathbf{w},\tilde{\mathbf{w}})=\|\mathbf{w}-\tilde{\mathbf{w}}\|_2$. \end{lemma} Since $\bm{\Theta} \subset B_{R}$, it is easy to verify that $\ln\mathcal{N}(\eta,\bm{\Theta},d_2) \leq \ln\mathcal{N}(\eta,B_R,d_2)$. Based on the $L$-Lipschitz continuity assumption, Lemma~\ref{lem:bound_on_cover_F} establishes a bound for $\mathcal{N}(\eta,\mathcal{F},d_\infty)$, where $\mathcal{F}=\{f_{\theta}:\theta \in \bm{\Theta}\}$. \begin{lemma} \label{lem:bound_on_cover_F} Let $\mathcal{F}=\{f_{\theta}:\theta \in \bm{\Theta}\}$ and $ d_\infty(f_{\theta},f_{\tilde{\theta}}) =\sup_{(z,v) \in \mathcal{Z} \times \mathcal{V}} |f_{\theta}(z,v)-f_{\tilde{\theta}}(z,v)|$. If Assumption \ref{ass} holds, then $\ln\mathcal{N}(\eta,\mathcal{F},d_\infty) \leq h\ln(3RL/\eta)$. \end{lemma} \begin{proof} For any $\theta,\tilde{\theta}\in \bm{\Theta}$, by the Lipschitz continuity we know $d_\infty(f_\theta,f_{\tilde{\theta}})\leq L\|\theta-\tilde{\theta}\|_2$. Then, any $(\eta/L)$-cover of $B_R$ w.r.t. $d_2$ implies an $\eta$-cover of $\mathcal{F}$ w.r.t. $d_\infty$. This together with Lemma \ref{lem:bound_on_cover_B} implies the stated result. The proof is complete. \end{proof} \begin{table*}[tbp] \centering \caption{Summary of the configuration scenarios and the gathered performance matrix in each scenario. $h$ is the number of parameters of the target algorithm. $T_{\max}$ is the cutoff time. The \textit{portgen} generator \cite{johnson2007experimental} was used to generate the TSP instances (in which the cities are randomly distributed). For each scenario, $\bm{\Theta}_{M}$ was composed of the default parameter configuration and $M-1$ random configurations.
} \scalebox{0.66}{ \begin{tabular}{c|c|c|c|c|c|c} \hline Scenario& Algorithm& \multicolumn{1}{l|}{Domain} &Benchmark & $M$ & $P$ & \multicolumn{1}{l}{$T_{\max}$} \\ \hline SATenstein-QCP & SATenstein \cite{DKhudaBukhshXHL16}, $h=54$ & SAT& Randomly selected from QCP \cite{aaai/GomesS97} & 500 & 500 & 5s \\ \hline clasp-weighted-sequence & clasp \cite{lpnmr/GebserKNS07a}, $h=98$ & ASP& ``small'' type weighted-sequence \cite{padl/LierlerSTW12} & 500 & 120 & 25s \\ \hline LKH-uniform-400 & LKH \cite{helsgaun2000effective}, $h=23$ & TSP& Generated by \textit{portgen} \cite{johnson2007experimental}, \#city=400 & 500 & 250 & 10s \\ \hline LKH-uniform-1000 & LKH \cite{helsgaun2000effective}, $h=23$ & TSP& Generated by \textit{portgen} \cite{johnson2007experimental}, \#city=1000 & 500 & 250 & 10s \\ \hline \end{tabular}} \label{tab:scenarios} \end{table*} \captionsetup[sub]{font=small} \begin{figure*} \caption{SATenstein-QCP} \caption{clasp-weighted-sequence} \caption{LKH-uniform-400} \caption{LKH-uniform-1000} \caption{Estimation error for different estimators in different scenarios at $r_1=0.5$.} \label{fig:com_estimator} \end{figure*} With the bound for $\mathcal{N}(\eta,\mathcal{F},d_\infty)$, the new bound for $u(\theta)-\hat{u}_{S_{N}}(\theta)$ is established in Theorem~\ref{theorem:infinite}. \begin{lemma}\label{lem:inequality-solution} For any positive constants $k, l, b, c$, the inequality $\epsilon^k l+b\ln\epsilon\geq c$ has a solution \begin{equation*}\label{eq:inequality-solution} \epsilon_0=\left(\frac{c+b\max(\ln l-\ln c,0)/k}{l}\right)^{1/k}. \end{equation*} \end{lemma} \begin{theorem} \label{theorem:infinite} If Assumption \ref{ass} holds and $h\ln(12LR)\geq1$, then for any $0<\delta<1$, with probability $1-\delta$ there holds: \small \begin{align*} &\sup_{\theta \in \bm{\Theta}}[u(\theta)-\hat{u}_{S_{N}}(\theta)] \\ & \leq \sqrt{\frac{h\ln(12LR)+\ln(\frac{1}{\delta})+\frac{1}{2}h\ln\frac{N}{8\tau^2+\frac{4nC}{3}}}{N}\Big(8\tau^2+\frac{4nC}{3}\Big)}, \end{align*} where $n, C,\tau^{2}, \bar{\sigma}^{2}_{WI},\bar{\sigma}^{2}_{AI}$ are defined the same as in Theorem~\ref{theorem:finite}. \end{theorem} \begin{proof} Without loss of generality we can assume $\epsilon\leq1$. Let $\{f_{\theta_1},...,f_{\theta_m}\}$ be an $\epsilon/4$-cover of $\mathcal{F}$ with $m=\mathcal{N}(\epsilon/4,\mathcal{F},d_\infty)$, where $\mathcal{F}=\{f_{\theta}:\theta \in \bm{\Theta}\}$. By Definition~\ref{def:covering_numers}, for any $f_{\theta} \in \mathcal{F}$ there exists $f_{\theta_{j}} \in \{f_{\theta_1},...,f_{\theta_m}\}$ such that $d_\infty(f_{\theta}, f_{\theta_{j}}) =\sup_{(z,v) \in \mathcal{Z} \times \mathcal{V}} |f_{\theta}(z,v)-f_{\theta_{j}}(z,v)| \leq \epsilon/4$; it follows that $|\mathbb{E}[f_{\theta}(z,v)]-\mathbb{E}[f_{\theta_{j}}(z,v)]| =|u(\theta)-u(\theta_{j})| \leq \epsilon/4 $ and $ |\hat{u}_{S_{N}}(\theta)-\hat{u}_{S_{N}}(\theta_{j})| \leq \frac{1}{N}\sum_{i=1}^{K}\sum_{l=1}^{n_{i}}|f_{\theta}(z_{i}, v_{i, l}) - f_{\theta_{j}}(z_{i}, v_{i, l})| \leq \epsilon/4. $ Then, \small \begin{align*} & \sup_{\theta\in\bm{\Theta}}\big[u(\theta)-\hat{u}_{S_N}(\theta)\big] \leq\\ & \sup_{\theta\in\bm{\Theta}}\Big[u(\theta)-u(\theta_j)+u(\theta_j)-\hat{u}_{S_N}(\theta_j)+\hat{u}_{S_N}(\theta_j)-\hat{u}_{S_N}(\theta)\Big]\\ &\leq \frac{\epsilon}{2}+\max_{j\in\{1,..,m\}}[u(\theta_{j})-\hat{u}_{S_{N}}(\theta_{j})].
\end{align*} It then follows that $\mathrm{Prob}\{ \sup_{\theta \in \bm{\Theta}}[u(\theta)-\hat{u}_{S_{N}}(\theta)] \geq \epsilon \} \leq \mathrm{Prob}\{ \max_{j\in\{1,..,m\}}[u(\theta_{j})-\hat{u}_{S_{N}}(\theta_{j})] \geq \epsilon/2 \}\leq \sum_{j=1}^{m} \mathrm{Prob}\{ [u(\theta_{j})-\hat{u}_{S_{N}}(\theta_{j})] \geq \epsilon/2 \}\leq m \mathrm{exp}(-\frac{\frac{N}{4}\epsilon^{2}}{2\tau^{2}+\frac{nC\epsilon}{3}})$, where the last inequality is due to Lemma \ref{lem:concen}. We need to find an $\epsilon$ satisfying $\mathrm{exp}(h\ln(12RL/\epsilon)-\frac{N\epsilon^{2}}{8\tau^{2}+\frac{4nC\epsilon}{3}})\leq \delta$, for which it suffices to find a solution of (by $\epsilon\leq1$) $\frac{N\epsilon^2}{8\tau^2+\frac{4nC}{3}}+h\ln\epsilon\geq h\ln(12LR)+\ln(1/\delta)$. This inequality takes the form of the inequality in Lemma~\ref{lem:inequality-solution} (the proof of Lemma~\ref{lem:inequality-solution} is omitted here due to space limitations). We can apply Lemma \ref{lem:inequality-solution} to show that a solution is (note $h\ln(12LR)\geq1$) \[ \epsilon=\bigg(\frac{h\ln(12LR)+\ln(1/\delta)+2^{-1}h\ln\frac{N}{8\tau^2+\frac{4nC}{3}}}{\frac{N}{8\tau^2+\frac{4nC}{3}}}\bigg)^{\frac{1}{2}}. \] \end{proof} \subsection{Discussion} There are some important findings from the above results. First, both Theorem~\ref{theorem:finite} and Theorem~\ref{theorem:infinite} relate the bounds on $u(\theta)-\hat{u}_{S_{N}}(\theta)$ to the complexity of $\bm{\Theta}$, and the bounds deteriorate as the complexity increases. This means that as the considered configuration space gets more complex, the estimation error could be larger. Second, as expected, as $N$ and $K$ get larger, the estimation error gets smaller, and $\hat{u}_{S_{N}}(\theta)$ will converge to $u(\theta)$ with probability 1 as $N \rightarrow \infty$ and $K \rightarrow \infty$. Third, Corollary~\ref{cor:finite} shows that, for the estimator $\hat{u}_{S^{\ast}_{N}}(\theta)$, which is widely used in current AAC methods, the gain in error reduction decreases rapidly as $N$ and $K$ get larger (as also shown in Figure~\ref{fig:analysis} in the experiments), and the effects of increasing $N$ and $K$ also depend on $\bar{\sigma}^{2}_{WI}$ and $\bar{\sigma}^{2}_{AI}$, two quantities varying across different algorithm configuration scenarios. Thus, for enhancing current AAC methods, instead of fixing $N$ as a large number (e.g., SMAC sets $N$ to 2000 by default) and using as many training instances as possible, it is more desirable to use different $N$ and $K$ according to the configuration scenario considered, in which case $N$ and $K$ may be adjusted dynamically in the configuration process as more data are gathered to estimate $\bar{\sigma}^{2}_{WI}$ and $\bar{\sigma}^{2}_{AI}$. \section{Experiments} \label{section5} In this section, we present our experimental studies. First we introduce our experimental setup. Then, we verify our theoretical results in two respects: 1) comparison of different performance estimators; 2) the effects of different values of $m$ (the number of considered configurations), $N$ (the number of runs of $\theta$ to estimate $u(\theta)$) and $K$ (the number of training instances) on the estimation error. We conducted experiments based on a re-sampling approach \cite{Birattari2004}, which is often used for time-consuming empirical analysis. Specifically, we considered 4 different scenarios.
We selected two scenarios, SATenstein-QCP and clasp-weighted-sequence, from the Algorithm Configuration Library (AClib) \cite{HutterLFLHLS14} and built two new scenarios, LKH-uniform-400/1000. For each scenario, we gathered an $M \times P \times 5$ matrix containing the performances of $M$ configurations on $P$ instances, with each configuration run on each instance 5 times. Let $\bm{\Theta}_{M}$ be the set of the $M$ configurations and $\mathcal{Z}_{P}$ be the set of the $P$ instances. In the experiments, when acquiring the performance of a configuration $\theta$ on an instance, instead of actually running $\theta$, the value stored in the corresponding entry of the matrix was used. The details of the scenarios and the performance matrices are summarized in Table~\ref{tab:scenarios}. In the experiments the optimization goal considered is the runtime needed to solve the problem instances (for SAT and ASP) or to find the optima of the problem instances (for TSP). In particular, the performance metric was set to Penalized Average Runtime-10 (PAR-10) \cite{hutter2009paramils}, which counts a timeout as 10 times the given cutoff time. For convenience, henceforth we will use ``$\mathrm{split}_{P_{1}|P_{2}}$'' to denote that we subsequently select, at random and without replacement, $P_{1}$ and $P_{2}$ instances from $\mathcal{Z}_{P}$ as training instances and test instances, respectively. For a given $\theta$, we always used the performance obtained by an estimator on the training instances as its training performance, and used its performance on the test instances as its true performance. We use $\mathrm{uniform\_es\_error(\bm{\Theta})}$ to denote the maximal estimation error across the configurations in $\bm{\Theta}$. All the experiments were conducted on Xeon machines, each with 128 GB RAM and 24 cores (2.20 GHz, 30 MB cache), running CentOS. The code was implemented based on AClib \cite{HutterLFLHLS14}\footnote{The code and the complete experiment results are available at \url{https://github.com/EEAAC/ac_estimation_error}}. \captionsetup[sub]{font=small} \begin{figure*} \caption{SATenstein-QCP} \caption{clasp-weighted-sequence} \caption{LKH-uniform-400} \caption{LKH-uniform-1000} \caption{Uniform estimation error at different $m$, $N$ and $K$ and the fit functions based on the theoretical results.} \label{fig:analysis} \end{figure*} \captionsetup[sub]{font=small} \begin{figure*} \caption{clasp-weighted-sequence} \caption{LKH-uniform-1000} \caption{LKH-uniform-400} \caption{SATenstein-QCP} \caption{Estimation error on $\theta^{\ast}$ at different $m$, $N$ and $K$ and the fit functions based on the theoretical results.} \label{fig:train_bound} \end{figure*} \textbf{Comparison of Different Estimators.} We compared $\hat{u}_{S^{\ast}_{N}}(\theta)$ with two estimators $\hat{u}_{S^{\dagger}_{N}}(\theta)$ and $\hat{u}_{S^{\circ}_{N}}(\theta)$. For evaluating $\theta$, $\hat{u}_{S^{\dagger}_{N}}(\theta)$ repeatedly selects an instance from $\mathcal{Z}_{P}$ at random without replacement, and runs $\theta$ 5 times on the instance, as long as the total number of runs of $\theta$ does not exceed $N$. $\hat{u}_{S^{\dagger}_{N}}(\theta)$ is greedier than $\hat{u}_{S^{\ast}_{N}}(\theta)$ in the sense that it ensures the estimated performance of $\theta$ on the used instances is as accurate as possible. Another estimator, $\hat{u}_{S^{\circ}_{N}}(\theta)$, is the one presented in \cite{Birattari2004}, which repeatedly selects an instance from $\mathcal{Z}_{P}$ at random with replacement, and runs $\theta$ a single time on the instance.
$\hat{u}_{S^{\circ}_{N}}(\theta)$ has more randomness than $\hat{u}_{S^{\ast}_{N}}(\theta)$ since it does not ensure that $N$ runs of $\theta$ are distributed evenly on all instances. We set $K=r_{1}P$ and $N=r_{2}K$, and ranged $r_{1}$ from 0.1 to 0.5 with a step of 0.05, $r_{2}$ from 0.25 to 4.0 with a step of 0.25. To reduce the variations of our experiments, for each combination of $r_{1}$ and $r_{2}$, we $\mathrm{split}_{{K}|P/2}$ for 2500 times, and on each split, we obtained the estimation error of an estimator on each $\theta \in \bm{\Theta}$, and then calculated the mean value, which was further averaged over all splits. That is, for each combination of $r_{1}$ and $r_{2}$, we obtained a mean estimation error for each estimator. Due to space limitations, we only present the results in terms of error bars (mean $\pm$ std) at $r_{1}=0.5$ in Figure~\ref{fig:com_estimator}. The results at other values of $r_{1}$ are similar. Figure~\ref{fig:com_estimator} is in line with Theorem~\ref{theorem:best_estimator}. Overall $\hat{u}_{S^{\ast}_{N}}(\theta)$ is the best estimator among the three, and its performance advantage is remarkable when $N$ is small. When $N$ gets larger, it is expected, and as shown in Figure~\ref{fig:com_estimator}, that the estimation error for all three estimators will converge to 0. The fact that $\hat{u}_{S^{\ast}_{N}}(\theta)$ is better than $\hat{u}_{S^{\circ}_{N}}(\theta)$ indicates that it is necessary to distribute $N$ runs of $\theta$ as evenly as possible over all instances. \textbf{Estimation Error at Different $\bm{m}$, $\bm{N}$ and $\bm{K}$.} We always fixed two values while ranging the other one. We ranged $m$ from 1 to $M$, while setting $K=P/2$ and $N=5K$. We ranged $N$ from 1 to $5K$ while setting $K=P/2$ and $m=M$. We ranged $K$ from 1 to $P/2$ while setting $N=5K$ and $m=M$. For a given $m$, we $\mathrm{split}_{K|P/2}$ for 2500 times, and on each split, we started with an empty set $\bm{\Theta}_{train}$ of configurations and then repeatedly expanded $\bm{\Theta}_{train}$ by adding a configuration randomly selected from $\bm{\Theta}_{M}\setminus\bm{\Theta}_{train}$. Each time a new configuration $\theta$ was added to $\bm{\Theta}_{train}$, $\mathrm{uniform\_es\_error(\bm{\Theta}_{train})}$ was recorded, which was further averaged over all 2500 splits. That is, for each $m$, we obtained a mean value of $\mathrm{uniform\_es\_error(\bm{\Theta}_{train})}$, denoted as $\mathrm{uniform\_es\_error(\bm{\Theta}_{train}, m)}$. Similarly, for a given $N$ or a given $K$, we always $\mathrm{split}_{K|P/2}$ for 2500 times, and on each split, we obtained $\mathrm{uniform\_es\_error(\bm{\Theta}_{M})}$, and then averaged it over all splits. Thus for each considered $N$ and $K$, we obtained $\mathrm{uniform\_es\_error(\bm{\Theta}_{M}, N)}$ and $\mathrm{uniform\_es\_error(\bm{\Theta}_{M}, K)}$, respectively. Due to space limitations, we only present parts of the results in Figure~\ref{fig:analysis} and other results are very similar. 
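For concreteness, the quantity $\mathrm{uniform\_es\_error}(\bm{\Theta})$ used in these experiments can be computed from a gathered performance matrix as in the sketch below (ours, not part of the released code; it assumes a NumPy array \texttt{perf} of shape $M \times P \times 5$ holding the PAR-10 scores described in Table~\ref{tab:scenarios}, and it uses the even-split estimator with the same number of runs on every training instance).

\begin{verbatim}
import numpy as np

def uniform_es_error(perf, train_idx, test_idx, runs_per_instance=5):
    """Maximal estimation error over all configurations.

    perf:      array of shape (M, P, 5) with utility (PAR-10) scores
    train_idx: indices of the K training instances (the split)
    test_idx:  indices of the test instances (proxy for u(theta))
    """
    # Training performance: mean over the selected runs on training instances.
    train_perf = perf[:, train_idx, :runs_per_instance].mean(axis=(1, 2))
    # "True" performance: mean over all runs on the held-out test instances.
    true_perf = perf[:, test_idx, :].mean(axis=(1, 2))
    return float(np.max(true_perf - train_perf))

# Example split_{K|P/2} (P, K are placeholders for the actual sizes):
# rng = np.random.default_rng(0)
# perm = rng.permutation(P)
# train_idx, test_idx = perm[:K], perm[K:K + P // 2]
\end{verbatim}

Varying $m$ amounts to restricting the first axis of \texttt{perf} to a subset of configurations; the split sizes and index sets above are placeholders standing in for the actual experimental protocol.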
To verify whether our analysis (Theorem~\ref{theorem:finite}) correctly captures the dependence of estimation error on $N$, $M$ and $K$, we also plot the function $f(m)=a\ln{m}+b\sqrt{\ln{m}}$ for $m$, $f(N)=a+b\sqrt{1/N}$ for $N$ and $f(K)=a/K+b\sqrt{1/K}$ for $K$ in Figure~\ref{fig:analysis}, where the parameters $a,b$ are computed by fitting $f$ with the data collected in the experiments, i.e., $\{m \mapsto \mathrm{uniform\_es\_error(\bm{\Theta}_{train}, m)}:m \in \{1,...,M\}\}$, $\{N \mapsto \mathrm{uniform\_es\_error(\bm{\Theta}_{M}, N)}:N \in \{1,...,\frac{5}{2}P\}\}$ and $\{K \mapsto \mathrm{uniform\_es\_error(\bm{\Theta}_{M}, K)}: K \in \{1,...,\frac{1}{2}P\}\}$, respectively. Overall Figure~\ref{fig:analysis} demonstrates that our analysis managed to capture the dependence of uniform estimation error on $m$, $N$ and $K$. It is worth noting that in the experiments the effects of increasing $m$, $N$ and $K$ depend on $\bar{\sigma}^{2}_{WI}$ and $\bar{\sigma}^{2}_{AI}$, which vary across configuration scenarios. Moreover, the estimation error becomes very low and quite stable when $N$ approaches $K/2$, which means running $\theta$ on half of the training instances could already obtain a reliable estimate of $u(\theta)$. It is also meaningful to investigate whether our analysis could reflect how the estimation error on $\theta^{\ast}$, i.e., the configuration with the best training performance in $\bm{\Theta}$, denoted as $\mathrm{train\_es\_error(\bm{\Theta})}$, would change. We conducted the same experiments as described above to gather $\mathrm{train\_es\_error(\bm{\Theta}_{train}, m)}$, $\mathrm{train\_es\_error(\bm{\Theta}_{M},N)}$ and $\mathrm{train\_es\_error(\bm{\Theta}_{M},K)}$. Figure~\ref{fig:train_bound} plots the results and the fit functions. It could be seen that although the bounds are not directly established for $\mathrm{train\_es\_error(\bm{\Theta})}$, the findings also apply to it to a considerable extent. \section{Conclusion} The main results of this paper include the universal best performance estimator and bounds on the uniform estimation error, which were verified in extensive experiments. Possible future directions include data-dependent bounds that are tighter and computable from realization of training instances and analysis of the notorious over-tuning phenomenon based on the results in this paper. \fontsize{9.0pt}{10.3pt} \selectfont \end{document}
\begin{document} \baselineskip=17pt \hbox{} {} \title[A conjecture of Zhi-Wei Sun on determinants over finite fields] {A conjecture of Zhi-Wei Sun on determinants over finite fields} \date{} \author[H.-L. Wu, Y.-F. She and H.-X. Ni]{Hai-Liang Wu, Yue-Feng She and He-Xia Ni*} \thanks{2020 {\it Mathematics Subject Classification}. Primary 11C20; Secondary 11L05, 11R29. \newline\indent {\it Keywords}. determinants, the Legendre symbol, finite fields. \newline \indent The first author was supported by the National Natural Science Foundation of China (Grant No. 12101321 and Grant No. 11971222) and the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province (Grant No. 21KJB110002). The third author was supported by the National Natural Science Foundation of China (Grant No. 12001279).} \thanks{*Corresponding author.} \address {(Hai-Liang Wu) School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, People's Republic of China} \email{\tt [email protected]} \address {(Yue-Feng She) Department of Mathematics, Nanjing University, Nanjing 210093, People's Republic of China} \email{{\tt [email protected]}} \address{(He-Xia Ni) Department of Applied Mathematics, Nanjing Audit University, Nanjing 211815, People's Republic of China} \email{\tt [email protected]} \begin{abstract} In this paper, we study certain determinants over finite fields. Let $\mathbb{F}_q$ be the finite field of $q$ elements and let $a_1,a_2,\cdots,a_{q-1}$ be all the nonzero elements of $\mathbb{F}_q$. Let $T_q=\left[\frac{1}{a_i^2-a_ia_j+a_j^2}\right]_{1\le i,j\le q-1}$ be a matrix over $\mathbb{F}_q$. We obtain the explicit value of $\det T_q$. Also, as a consequence of our result, we confirm a conjecture posed by Zhi-Wei Sun. \end{abstract} \maketitle \section{Introduction} Let $R$ be a commutative ring. Then for any $n\times n$ matrix $M=[a_{ij}]_{1\le i,j\le n}$ with $a_{ij}\in R$, we use $\det M$ or $|M|$ to denote the determinant of $M$. Let $p$ be an odd prime and let $(\frac{\cdot}{p})$ be the Legendre symbol. Carlitz \cite{carlitz} studied the following matrix $$C_p(\lambda)=\bigg[\lambda+\left(\frac{i-j}{p}\right)\bigg]_{1\le i,j\le p-1}\ \ \ \ \ (\lambda\in\mathbb{C}).$$ Carlitz \cite[Theorem 4]{carlitz} proved that the characteristic polynomial of $C_p(\lambda)$ is $$P_{\chi}(t)=(t^2-(-1)^{(p-1)/2}p)^{(p-3)/2}(t^2-(p-1)\lambda-(-1)^{(p-1)/2}).$$ Later Chapman \cite{chapman,evil} investigated many interesting variants of $C_p$. Moreover, Chapman \cite{evil} posed a challenging conjecture on the determinant of the $\frac{p+1}{2}\times\frac{p+1}{2}$ matrix $$E_p=\bigg[\big(\frac{j-i}{p}\big)\bigg]_{1\le i,j\le\frac{p+1}{2}}.$$ Due to the difficulty of evaluating $\det E_p$, Chapman called this the ``evil'' determinant. Finally, by using sophisticated matrix decompositions, Vsemirnov \cite{M1,M2} solved this problem completely. Along this line, in 2019 Sun \cite{ffadeterminant} studied the following matrix $$S_p=\bigg[\left(\frac{i^2+j^2}{p}\right)\bigg]_{1\le i,j\le \frac{p-1}{2}},$$ and Sun \cite[Theorem 1.2(iii)]{ffadeterminant} showed that $-\det S_p$ is always a quadratic residue modulo $p$.
In the same paper, Sun also investigated the matrix $$A_p=\bigg[\frac{1}{i^2+j^2}\bigg]_{1\le i,j\le \frac{p-1}{2}}.$$ Sun \cite[Theorem 1.4(ii)]{ffadeterminant} proved that when $p\equiv3\pmod4$ the $p$-adic integer $2\det A_p$ is always a quadratic residue modulo $p$. In addition, let $$T_p=\bigg[\frac{1}{i^2-ij+j^2}\bigg]_{1\le i,j\le p-1}.$$ Sun \cite[Remark 1.3]{ffadeterminant} posed the following conjecture. \begin{conjecture}[Zhi-Wei Sun]\label{Conjecture of Sun} Let $p\equiv2\pmod3$ be an odd prime. Then $2\det T_p$ is a quadratic residue modulo $p$. \end{conjecture} Let $\mathbb{F}_q$ be the finite field of $q$ elements and let $$\mathbb{F}_q^{\times}=\mathbb{F}_q\setminus\{0\}=\{a_1,a_2,\cdots,a_{q-1}\}.$$ Motivated by this conjecture, we define a matrix $T_q$ over $\mathbb{F}_q$ by $$T_q=\bigg[\frac{1}{a_i^2-a_ia_j+a_j^2}\bigg]_{1\le i,j\le q-1}.$$ We obtain the following generalized result. \begin{theorem}\label{Thm. A} Let $q\equiv 2\pmod 3$ be an odd prime power and let $$T_q=\bigg[\frac{1}{a_i^2-a_ia_j+a_j^2}\bigg]_{1\le i,j\le q-1}.$$ Then $$\det T_q=(-1)^{\frac{q+1}{2}}2^{\frac{q-2}{3}}\in\mathbb{F}_p,$$ where $p$ is the characteristic of $\mathbb{F}_q$. \end{theorem} \begin{remark} We give two examples here. Note that we also view $T_p$ as a matrix over $\mathbb{F}_p$ if $p$ is an odd prime. {\rm (i)} If $p=5$, then $$\det T_p=\frac{11}{596232}=\frac{1}{2}=-2.$$ {\rm (ii)} If $p=11$, then \begin{align*} \det T_p=\frac{393106620416000000}{23008992710579652367225919172202284572822491031943}=\frac{4}{6}=2^3. \end{align*} \end{remark} As a direct consequence of our theorem, we confirm Sun's conjecture. \begin{corollary}\label{Corollary A} Conjecture \ref{Conjecture of Sun} holds. \end{corollary} The outline of this paper is as follows. In Section 2, we will prove some lemmas which are the key elements in the proof of our theorem. The proofs of Theorem \ref{Thm. A} and Corollary \ref{Corollary A} will be given in Section 3. \section{Some Preparations} Given any polynomials $A(T),B(T)\in\mathbb{F}_q[T]$, we say that $A(T)$ and $B(T)$ are equivalent (denoted by $A(T)\sim B(T)$) if $A(x)=B(x)$ for each $x\in\mathbb{F}_q$. Let $\chi_3(\cdot)=(\frac{\cdot}{3})$ be the quadratic character modulo $3$. We first have the following lemma. \begin{lemma}\label{Lemma equivalent reduced polynomials} Let $q\equiv 2\pmod 3$ be an odd prime power and let \begin{equation}\label{Eq. definition of G(T)} G(T)=1+\frac{1}{3}\sum_{k=2}^{q-2}\left(\chi_3(k)+\chi_3(1-k)\right)T^{k-1} +\frac{1}{3}T^{q-2}-\frac{2}{3}T^{q-1}. \end{equation} Then $$(T^2+T+1)^{q-2}\sim G(T).$$ \end{lemma} \begin{proof} We first show that $T^2+T+1$ is irreducible in $\mathbb{F}_q[T]$. Set $q=p^r$ with $p$ prime and $r\in\mathbb{Z}^{+}$. As $q\equiv2\pmod3$, clearly $p\equiv2\pmod3$ and $2\nmid r$. Hence $$(-3)^{\frac{q-1}{2}}=(-3)^{\frac{p-1}{2}(1+p+p^2+\cdots+p^{r-1})}=\left(\frac{-3}{p}\right)^{1+p+p^2+\cdots+p^{r-1}}=(-1)^r=-1.$$ This implies that $-3$ is not a square in $\mathbb{F}_q$. Suppose now that $T^2+T+1$ is reducible in $\mathbb{F}_q[T]$. Then there exists an element $\alpha\in\mathbb{F}_q$ such that $\alpha^2+\alpha+1=0$. This implies $(2\alpha+1)^2=-3$, which is a contradiction. Hence $T^2+T+1$ is irreducible in $\mathbb{F}_q[T]$.
Moreover, since $$T^q-T=\prod_{x\in\mathbb{F}_q}\left(T-x\right),$$ we have $T^2+T+1\nmid T^q-T$ and hence $T^2+T+1$ is coprime with $T^q-T$. Now via a computation, we obtain \begin{equation*} (T^2+T+1)^2G(T)\equiv T^2+T+1\equiv (T^2+T+1)^q\pmod{(T^q-T)\mathbb{F}_q[T]}. \end{equation*} As $T^2+T+1$ is coprime with $T^q-T$, we obtain $$(T^2+T+1)^{q-2}\equiv G(T)\pmod{(T^q-T)\mathbb{F}_q[T]}.$$ This implies $$(T^2+T+1)^{q-2}\sim G(T).$$ In view of the above, we have completed the proof. \end{proof} We need the following lemma (cf. \cite[Lemma 10]{K2}). \begin{lemma}\label{Lemma formula for determinants} Let $R$ be a commutative ring and let $n$ be a positive integer. Set $P(T)=p_{n-1}T^{n-1}+\cdots+p_1T+p_0\in R[T]$. Then $$ \det[P(X_iY_j)]_{1\le i,j\le n} =\prod_{i=0}^{n-1}p_i\prod_{1\le i<j\le n}\left(X_j-X_i\right)\left(Y_j-Y_i\right). $$ \end{lemma} Now let $m$ be a positive integer. We introduce some basic facts on the permutations over $\mathbb{Z}/m\mathbb{Z}$. Fix an integer $a$ with $(a,m)=1$. Then the map $x\ {\rm mod}\ m \mapsto ax\ {\rm mod}\ m$ induces a permutation $\pi_a(m)$ over $\mathbb{Z}/m\mathbb{Z}$. Lerch \cite{ML} determined the sign of this permutation. \begin{lemma}\label{Lemma permutation} Let ${\rm sgn}(\pi_a(m))$ denote the sign of the permutation $\pi_a(m)$. Then $${\rm sgn}(\pi_a(m))=\begin{cases}(\frac{a}{m})&\mbox{if $m$ is odd},\\1&\mbox{if}\ m\equiv2\pmod4,\\(-1)^{\frac{a-1}{2}}&\mbox{if}\ m\equiv0\pmod4,\end{cases}$$ where $(\frac{\cdot}{m})$ denotes the Jacobi symbol if $m$ is odd. \end{lemma} Recall that $$\mathbb{F}_q^{\times}=\mathbb{F}_q\setminus\{0\}=\left\{a_1,a_2,\cdots,a_{q-1}\right\}.$$ The map $a_j\mapsto a_j^{-1}$ $(j=1,2,\cdots,q-1)$ induces a permutation $\sigma_{-1}$ on $\mathbb{F}_q^{\times}$. We also need the following lemma. \begin{lemma}\label{Lemma Inv of the inverse permutation} Let notations be as above. Then $${\rm sgn}(\sigma_{-1})={\rm sgn}(\pi_{-1}(q-1))=(-1)^{\frac{q+1}{2}}.$$ \end{lemma} \begin{proof} Fix a generator $g$ of $\mathbb{F}_q^{\times}$. Let $f$ be the bijection on $\mathbb{F}_q^{\times}$ which sends $a_j$ to $g^j$ $(j=1,2,\cdots,q-1)$. Then it is easy to see that $${\rm sgn}(\sigma_{-1})={\rm sgn}(f\circ \sigma_{-1}\circ f^{-1}).$$ Note that $f\circ \sigma_{-1}\circ f^{-1}$ is the permutation on $\mathbb{F}_q^{\times}$ which sends $g^j$ to $g^{-j}$ $(j=1,2,\cdots,q-1)$. This permutation indeed corresponds to the permutation $\pi_{-1}(q-1)$ over $\mathbb{Z}/(q-1)\mathbb{Z}$ which sends $j$ {\rm mod} $(q-1)$ to $-j$ {\rm mod} $(q-1)$. Now our desired result follows from Lemma \ref{Lemma permutation}. This completes the proof. \end{proof} \section{Proof of The Main Result} {\bf Proof of Theorem \ref{Thm. A}.} Recall that $$T_q=\bigg[\frac{1}{a_i^2-a_ia_j+a_j^2}\bigg]_{1\le i,j\le q-1}.$$ By Lemma \ref{Lemma permutation} $$ \det T_q =(-1)^{\frac{q-1}{2}}\det\bigg[\frac{1}{a_i^2+a_ia_j+a_j^2}\bigg]_{1\le i,j\le q-1}. $$ Also, $$ \det\bigg[\frac{1}{a_i^2+a_ia_j+a_j^2}\bigg]_{1\le i,j\le q-1} =\prod_{j=1}^{q-1}\frac{1}{a_j^2}\cdot\det\bigg[\frac{1}{(a_i/a_j)^2+a_i/a_j+1}\bigg]_{1\le i,j\le q-1}. $$ Since $q\equiv2\pmod3$, we have $a_i^2+a_ia_j+a_j^2\neq0$ for any $1\le i,j\le q-1$. Hence for any $1\le i,j\le q-1$ we have $$ \frac{1}{(a_i/a_j)^2+a_i/a_j+1}= \left((a_i/a_j)^2+a_i/a_j+1\right)^{q-2}.
$$ By Lemma \ref{Lemma equivalent reduced polynomials} we have $(T^2+T+1)^{q-2}\sim G(T)$, where $G(T)$ is defined by (\ref{Eq. definition of G(T)}). Hence $$\left((a_i/a_j)^2+a_i/a_j+1\right)^{q-2}=G(a_i/a_j),$$ for any $1\le i,j\le q-1$. As $(a_i/a_j)^{q-1}=1$ for any $1\le i,j\le q-1$, we have $$G(a_i/a_j)=H(a_i/a_j),$$ where $$ H(T)=G(T)-\frac{2}{3}+\frac{2}{3}T^{q-1}=\frac{1}{3}+\frac{1}{3} \sum_{k=2}^{q-2}\left(\chi_3(k)+\chi_3(1-k)\right)T^{k-1} +\frac{1}{3}T^{q-2}. $$ Let $$S(T)=\prod_{1\le j\le q-1}\left(T-a_j\right)$$ and let $S'(T)$ be the formal derivative of $S(T)$. It is easy to verify that $$S(T)=\prod_{1\le j\le q-1}\left(T-a_j\right)=T^{q-1}-1.$$ By this it is clear that $S'(T)=(q-1)T^{q-2}=-T^{q-2}$ and \begin{equation}\label{Eq. Production of all aj} \prod_{1\le j\le q-1}a_j=-1. \end{equation} By the above we obtain \begin{equation}\label{Eq. A in the proof of theorem} \det T_q=(-1)^{\frac{q-1}{2}}\det\left[H(a_i/a_j)\right]_{1\le i,j\le q-1}. \end{equation} By Lemma \ref{Lemma formula for determinants} we know that $\det[H(a_i/a_j)]_{1\le i,j\le q-1}$ is equal to $$ \frac{1}{3^{q-1}}\prod_{k=2}^{q-2}\left(\chi_3(k)+\chi_3(1-k)\right)\prod_{1\le i<j\le q-1}\left(a_j-a_i\right)\left(\frac{1}{a_j}-\frac{1}{a_i}\right). $$ We first consider the product $$ \prod_{1\le i<j\le q-1}\left(a_j-a_i\right)\left(\frac{1}{a_j}-\frac{1}{a_i}\right). $$ By Lemma \ref{Lemma Inv of the inverse permutation} it is easy to see that \begin{equation*} \prod_{1\le i<j\le q-1}\left(a_j-a_i\right)\left(\frac{1}{a_j}-\frac{1}{a_i}\right)= (-1)^{\frac{q+1}{2}}\prod_{1\le i<j\le q-1}\left(a_j-a_i\right)^2. \end{equation*} It is easy to verify that \begin{align*} \prod_{1\le i<j\le q-1}(a_j-a_i)^2 &=(-1)^{\frac{(q-1)}{2}}\prod_{1\le i\neq j \le q-1}(a_j-a_i)\\ &=(-1)^{\frac{(q-1)}{2}}\prod_{1\le j\le q-1}\prod_{i\neq j}(a_j-a_i)\\ &=(-1)^{\frac{(q-1)}{2}}\prod_{1\le j\le q-1}S'(a_j)\\ &=(-1)^{\frac{q-1}{2}}\prod_{1\le j\le q-1}\frac{-1}{a_j}=(-1)^{\frac{q+1}{2}}. \end{align*} The last equality follows from (\ref{Eq. Production of all aj}). Hence \begin{equation} \prod_{1\le i<j\le q-1}\left(a_j-a_i\right)\left(\frac{1}{a_j}-\frac{1}{a_i}\right)=1. \end{equation} We now turn to the product $$ \prod_{k=2}^{q-2}\left(\chi_3(k)+\chi_3(1-k)\right). $$ By definition $$\chi_3(k)+\chi_3(1-k)=\begin{cases}1&\mbox{if}\ k\equiv 0,1\pmod 3, \\-2&\mbox{if}\ k\equiv 2\pmod 3. \end{cases}$$ Hence \begin{equation}\label{Eq. C in the proof of theorem} \prod_{k=2}^{q-2}\left(\chi_3(k)+\chi_3(1-k)\right)=(-2)^{\frac{q-2}{3}}. \end{equation} In view of (\ref{Eq. A in the proof of theorem})-(\ref{Eq. C in the proof of theorem}), we obtain $$\det T_q=(-1)^{\frac{q+1}{2}}2^{\frac{q-2}{3}}\in\mathbb{F}_p,$$ where $p$ is the characteristic of $\mathbb{F}_q$. This completes the proof.\qed {\bf Proof of Corollary \ref{Corollary A}.} Let $p\equiv2\pmod3$ be an odd prime. Then by Theorem \ref{Thm. A} we have $$\left(\frac{\det T_p}{p}\right) =\left(\frac{-1}{p}\right)^{\frac{p+1}{2}}\left(\frac{2}{p}\right)^{\frac{p-2}{3}} =\left(\frac{2}{p}\right).$$ This completes the proof. \qed
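As a quick numerical sanity check (ours, not part of the argument above), Theorem \ref{Thm. A} can be verified for small primes $p\equiv 2\pmod 3$ by computing $\det T_p$ over $\mathbb{F}_p$ directly and comparing it with $(-1)^{\frac{p+1}{2}}2^{\frac{p-2}{3}}$; for $p=5$ and $p=11$ this reproduces the values $-2$ and $2^3$ of the Remark in Section 1. A short Python sketch:

\begin{verbatim}
def det_mod_p(M, p):
    """Determinant of a square matrix over F_p via Gaussian elimination."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] % p), None)
        if pivot is None:
            return 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)
        for r in range(c + 1, n):
            factor = M[r][c] * inv % p
            M[r] = [(M[r][k] - factor * M[c][k]) % p for k in range(n)]
    return det % p

for p in (5, 11, 17, 23):                   # primes p = 2 (mod 3)
    T = [[pow(i*i - i*j + j*j, p - 2, p)    # 1/(i^2 - ij + j^2) in F_p
          for j in range(1, p)] for i in range(1, p)]
    predicted = (-1)**((p + 1)//2) * pow(2, (p - 2)//3, p) % p
    assert det_mod_p(T, p) == predicted
\end{verbatim}

By Theorem \ref{Thm. A} the assertion should hold for every listed prime; note that the entries are well defined because $i^2-ij+j^2$ is never divisible by $p$ when $p\equiv 2\pmod 3$.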
\begin{thebibliography}{99} \baselineskip=17pt \bibitem{carlitz} L. Carlitz, Some cyclotomic matrices, Acta Arith. 5 (1959), 293--308. \bibitem{chapman} R. Chapman, Determinants of Legendre symbol matrices, Acta Arith. 115 (2004), 231--244. \bibitem{evil} R. Chapman, My evil determinant problem, preprint, December 12, 2012, available from http://empslocal.ex.ac.uk/people/staff/rjchapma/etc/evildet.pdf. \bibitem{K2} C. Krattenthaler, Advanced determinant calculus: a complement, Linear Algebra Appl. 411 (2005), 68--166. \bibitem{ML} M. Lerch, Sur un th\'{e}or\`{e}me de Zolotarev, Bull. Intern. de l'Acad. Fran\c{c}ois Joseph 3 (1896), 34--37. \bibitem{ffadeterminant} Z.-W. Sun, On some determinants with Legendre symbol entries, Finite Fields Appl. 56 (2019), 285--307. \bibitem{M1} M. Vsemirnov, On the evaluation of R. Chapman's ``evil determinant'', Linear Algebra Appl. 436 (2012), 4101--4106. \bibitem{M2} M. Vsemirnov, On R. Chapman's ``evil determinant'': case $ p \equiv 1 \pmod 4$, Acta Arith. 159 (2013), 331--344. \end{thebibliography} \end{document}
\begin{document} \catchline{}{}{}{}{} \title{Quantum synchronization and correlations of two qutrits in a non-Markovian bath} \author{Jian-Song Zhang} \address{Department of Applied Physics, East China Jiaotong University, Nanchang 330013, People's Republic of China\\ [email protected]} \maketitle \begin{history} \received{Day Month Year} \revised{Day Month Year} \end{history} \begin{abstract} We investigate the quantum synchronization and correlations of two qutrits in one non-Markovian environment using the hierarchy equation method. There is no direct interaction between the two qutrits and each qutrit interacts with the same non-Markovian environment. The influence of the temperature of the bath, the correlation time, and the coupling strength between the qutrits and the bath on the quantum synchronization and correlations of the two qutrits is studied without the Markovian, Born, and rotating wave approximations. We also discuss the influence of dissipation and dephasing on the synchronization of the two qutrits. In the presence of dissipation, phase locking between two qutrits without any direct interaction can be achieved when each qutrit interacts with the common bath. Two qutrits within one common bath cannot be synchronized in the purely dephasing case. In addition, the Arnold tongue can be significantly broadened by decreasing the correlation time of the qutrits and the bath. Markovian baths are more suitable for synchronizing qutrits than non-Markovian baths. \end{abstract} \keywords{quantum synchronization; quantum correlations; non-Markovian environment.} \markboth{J.-S. Zhang} {Quantum synchronization and correlations of two qutrits in a non-Markovian bath} \section{Introduction} Synchronization, which describes the adjustment of the rhythms of self-sustained oscillators due to an interaction, is a fundamental phenomenon of nonlinear science. This phenomenon has been observed in physical, chemical, biological, and social systems \cite{1}. In recent years, many efforts have been devoted to extending the concept of synchronization to quantum systems such as Van der Pol oscillators \cite{2,3,4}, atomic ensembles \cite{5,6}, trapped ions \cite{7}, and cavity optomechanics \cite{8,9,10,11}. In general, a quantum system can be either continuous or discrete. In the previous studies \cite{2,3,4,5,6,7,8,9,10,11}, most authors have considered the quantum synchronization of continuous-variable systems with classical analogs since they can be described by quasiprobability distributions in phase space such as the Wigner function. For example, in Refs. \cite{8,9,10,11}, the authors investigated the quantum synchronization of optomechanical systems formed by optical and mechanical modes \cite{12,13,14,15}. Measures of complete and phase synchronization of continuous-variable quantum systems have been proposed \cite{16}. For discrete-variable systems without a classical analogue, the Pearson product-moment correlation coefficient can be used to measure the degree of synchronization of spin systems \cite{17}. The authors of Ref. \cite{17} investigated the synchronization of two qubits in a common environment using the Bloch-Redfield master equation and found that two qubits cannot be synchronized in the purely dephasing case. Recently, a measure of quantum synchronization using the Husimi Q representation and the concept of spin coherent states was suggested by Roulet and Bruder \cite{18}. This measure can be used to study the synchronization of discrete-variable systems including qubits and qutrits.
The authors pointed out that qubits cannot be synchronized since they lack a valid limit cycle, and that a spin 1 can be phase-locked to a weak external driving \cite{18}. Later, the authors investigated the quantum synchronization and entanglement generation of two qutrits using the Lindblad master equation \cite{19}. Very recently, the quantum synchronization of two quantum oscillators within one common dissipative environment at zero temperature was investigated with the help of a path integral formalism \cite{20}. In the previous works \cite{17,18,19}, the Markovian and Born approximations were employed and the temperature of the bath was assumed to be zero. Note that the rotating wave approximation was used in the previous work \cite{20}. Thus, the influence of the temperature of the bath and the non-Markovian effects were not taken into account in the above works. In the present paper, we study the quantum synchronization and correlations of two qutrits within one common bath using the hierarchy equation method \cite{21,22,23,24}. The two qutrits have no direct interaction. In particular, in the derivation of the hierarchy equations, the Markovian, Born, and rotating wave approximations are not used. The hierarchy equation method is a high-performance method and is suitable for strong- and ultrastrong-coupling systems like chemical and biophysical systems \cite{25,26,27,28}. Our results show that the measures of quantum synchronization and correlations could increase with the increase of the coupling strength between each qutrit and the common bath. The influence of the temperature of the bath depends heavily on the detuning of the two qutrits. If the detuning is much smaller than the frequencies of the two qutrits, then the maximal value of the measure of quantum synchronization increases with the increase of the temperature of the bath. However, if the detuning is not much smaller than the frequencies of the two qutrits, the temperature of the bath could play a destructive role in the synchronization of the two qutrits. In addition, the correlation time of the qutrits and the bath plays an important role in the generation of quantum synchronization and correlations. Phase locking between two qutrits without direct interaction can be achieved if they are put into one bath and dissipation is taken into account. In particular, two qutrits cannot be synchronized in the purely dephasing case. The Arnold tongue of synchronization and quantum correlations (measured by quantum mutual information) can be obtained in the present model. The shape of the Arnold tongue can be adjusted by the temperature, coupling strength, and correlation time of the system. The organization of this paper is as follows. In Sec. II, we introduce the model and the hierarchy equation method. In Sec. III, we briefly review the measures of quantum synchronization and correlations. In Sec. IV, we investigate the influence of the temperature, coupling strength, and correlation time of the system on the quantum synchronization and correlations of two qutrits. In Sec. V, we summarize our results. \section{Model and hierarchy equation method} In this section, we introduce the model and the hierarchy equation method used in the present work. We consider a system formed by two qutrits with no direct interaction; the free Hamiltonian is (setting $\hbar = 1$) \begin{eqnarray} H_S = \omega_1 J_1^z + \omega_2 J_2^z, \end{eqnarray} where $\omega_1$ and $\omega_2$ are the frequencies of qutrit 1 and qutrit 2, respectively.
The detuning between the two qutrits is $\Delta = \omega_2 - \omega_1$. We assume the two qutrits are put into a common thermal bath. The free Hamiltonian of the thermal bath is \begin{eqnarray} H_B = \sum_k \omega_k b_k^{\dag} b_k, \end{eqnarray} where $\omega_k$ is the frequency of the $k$th mode of the thermal bath. The interaction Hamiltonian of the two qutrits and the bath is \begin{eqnarray} H_I = \sum_k g_k V(b_k^{\dag} + b_k), \end{eqnarray} where $g_k$ is the coupling strength between the qutrits and the $k$th mode of the bath. Here, $b_k^{\dag}$ and $b_k$ are the creation and annihilation operators of the thermal bath, and $V$ is the system operator coupled to the bath. Without loss of generality, we suppose \begin{eqnarray} V = (1 + h)(J_1^z + J_2^z) + (1 - h)(J_1^x + J_2^x), \end{eqnarray} where $h$ is an anisotropy coefficient with $-1 \leq h \leq 1$. In the interaction picture, the dynamics of the present system is \cite{22} \begin{eqnarray} \rho^I_S(t) &=& U(t) \rho_S(0), \label{sol} \\ U(t) &=& \mathcal{T} \exp\{ -\int _0^t dt_2\int _0^{t_2} dt_1 V(t_2)^{\times}[\Re[C(t_2 - t_1)]V(t_1)^{\times} \nonumber \\ && + i \Im[C(t_2 - t_1)] V(t_1)^\diamond] \}, \end{eqnarray} where $\rho_S$ is the reduced density matrix of the system and $\mathcal{T}$ is the chronological time-ordering operator. Here, $O_1^{\times}O_2 \equiv [O_1,O_2] = O_1O_2 - O_2O_1$ and $O_1^{\diamond}O_2 \equiv \{O_1,O_2\} = O_1O_2 + O_2O_1$. Note that $\Re[C(t_2 - t_1)]$ and $\Im[C(t_2 - t_1)]$ are the real and imaginary parts of the bath time-correlation function $C(t_2 - t_1) = \langle B(t_2)B(t_1)\rangle$, respectively, with $B(t) = \sum_k (g_k b_k e^{-i\omega_k t} + g_k^*b_k^{\dag}e^{i\omega_k t})$. In the present work, we choose the Drude-Lorentz spectrum \cite{21,22,23,24} \begin{eqnarray} J(\omega) = \omega \frac{2\lambda\gamma}{\pi(\gamma^2 + \omega^2)}, \end{eqnarray} where $\lambda$ is the coupling strength between the qutrits and the bath, and $\gamma$ represents the width of the spectral distribution of the bath modes. The quantity $1/\gamma$ represents the correlation time of the bath. In particular, if $\gamma$ is much larger than any other frequency scale, the Markovian approximation is valid. For a bath with the Drude-Lorentz spectrum, the bath correlation function is \cite{23} \begin{eqnarray} \langle B(t_2)B(t_1)\rangle &=& \sum_{k=0}^{\infty} c_k e^{-\nu_k |t_2 - t_1|}, \label{bath_corr}\\ \nu_k &=& \frac{2\pi k }{\beta}(1 - \delta_{0k}) + \gamma \delta_{0k},\\ c_k &=& \frac{4\gamma \lambda \nu_k}{\beta(\nu_k^2 - \gamma^2)}(1 - \delta_{0k}) + \gamma \lambda \left[\cot\left(\frac{\gamma\beta}{2}\right) - i\right]\delta_{0k}, \end{eqnarray} where $\beta = 1/(k_B T)$ is the inverse temperature of the thermal bath. Using Eqs.~(\ref{sol}) and (\ref{bath_corr}), the dynamics of the model can be described by the hierarchy equations \cite{21} \begin{eqnarray} \dot{\rho}^n(t) &=& -(iH_S^{\times} + \sum_{\mu = 1,2}\sum_{k=0}^M n_{\mu k}\nu_k)\rho^n(t)\nonumber\\ && - \sum_{\mu = 1,2} (\frac{2\lambda}{\beta\gamma} - i\lambda - \sum_{k=0}^M\frac{c_k}{\nu_k}) V^{\times}_{\mu}V^{\times}_{\mu}\rho^n(t) \nonumber\\ && - i\sum_{\mu = 1,2}\sum_{k=0}^M n_{\mu k}[c_k V_{\mu}\rho^{n_{\mu k} ^-}(t) - c_k^* \rho^{n_{\mu k} ^-}(t)V_{\mu}]\nonumber\\ && - i\sum_{\mu = 1,2}\sum_{k=0}^M V^{\times}_{\mu} \rho^{n_{\mu k} ^+}(t). \end{eqnarray} Note that $\rho^{n_{\mu k} ^+} = \rho^{n_{\mu k} \rightarrow n_{\mu k} + 1}$ ($\rho^{n_{\mu k} ^-} = \rho^{n_{\mu k} \rightarrow n_{\mu k} - 1}$) denotes an increase (decrease) in the $\mu k$'th component of the multi-index.
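As an illustration of how the exponential expansion in Eq.~(\ref{bath_corr}) enters a numerical implementation, the sketch below tabulates the coefficients $\nu_k$ and $c_k$ of the Drude-Lorentz bath and builds the coupling operator $V$. It is a minimal sketch under our own assumptions (NumPy, a finite Matsubara cutoff $M$, illustrative parameter values) and not the hierarchy-equation solver itself.
\begin{verbatim}
import numpy as np

def drude_lorentz_coefficients(lam, gamma, beta, M):
    """Exponents nu_k and weights c_k of C(t) = sum_k c_k exp(-nu_k |t|)
    for the Drude-Lorentz spectrum, truncated at k = M."""
    nu = np.array([gamma] + [2.0 * np.pi * k / beta for k in range(1, M + 1)])
    c = np.zeros(M + 1, dtype=complex)
    c[0] = gamma * lam * (1.0 / np.tan(gamma * beta / 2.0) - 1j)
    for k in range(1, M + 1):
        c[k] = 4.0 * gamma * lam * nu[k] / (beta * (nu[k]**2 - gamma**2))
    return nu, c

def coupling_operator(h, Jz, Jx, I3):
    """V = (1+h)(J1^z + J2^z) + (1-h)(J1^x + J2^x); h = 1 is purely dephasing."""
    Jz_tot = np.kron(Jz, I3) + np.kron(I3, Jz)
    Jx_tot = np.kron(Jx, I3) + np.kron(I3, Jx)
    return (1.0 + h) * Jz_tot + (1.0 - h) * Jx_tot

# Illustrative parameters (omega_1 = 1): lambda = 0.05, gamma = 0.2, beta = 0.3
nu, c = drude_lorentz_coefficients(lam=0.05, gamma=0.2, beta=0.3, M=3)
\end{verbatim}
In practice the cutoff $M$ is increased until the results converge, in the same spirit as the truncation of the hierarchy itself.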
It is worth noting that in the derivation of the above equation, the Markovian, Born, and rotating wave approximations are not used. The hierarchy equation method is an exact method which is also suitable for strong- and ultrastrong-coupling systems. The density matrix of two qutrits at arbitrary time can be obtained from the initial state of the system and the above hierarchy equation of motion. In the present work, we assume two qutrits are put into a common bath, i.e., $V_1 = V_2 = V$. \section{Measures of quantum synchronization and correlations} For a discrete-variable system, one can use the Husimi Q representation to describe the phase portrait of a spin coherent state. In general, a spin coherent state is defined as \cite{18,19} \begin{eqnarray} |\theta, \phi\rangle = e^{-i\phi J_z} e^{-i\theta J_y} |J, J\rangle, \end{eqnarray} with the completeness relation \begin{eqnarray} \int_0^\pi d\theta \sin{\theta} \int_0^{2\pi} d\phi |\theta, \phi\rangle \langle \theta, \phi| = (4\pi)/(2J + 1). \end{eqnarray} For a spin 1 system, we have \begin{eqnarray} |\theta, \phi\rangle &=& \frac{e^{-i\phi}}{2}(1 + \cos\theta) |1,1\rangle + \frac{\sin{\theta}}{\sqrt{2}} |1,0\rangle \nonumber\\ && + \frac{e^{i\phi}}{2} (1 - \cos{\theta}) |1,-1\rangle. \label{spin_coherent_states} \end{eqnarray} The measure of quantum synchronization proposed by Roulet and Bruder is defined as \cite{19} \begin{eqnarray} S_{r}(\phi) &=& \int_0^{2 \pi} d\phi_2\int_0^{\pi} d\theta_1 \int_0^{\pi} d\theta_2 \sin{\theta_1} \sin{\theta_2} \nonumber\\ && \times Q(\theta_1, \theta_2, \phi + \phi_2, \phi_2) - \frac{1}{2\pi}, \label{def_S} \end{eqnarray} where \begin{eqnarray} Q(\theta_1, \theta_2, \phi + \phi_2, \phi_2) &=& \frac{9}{16\pi^2} (\langle \theta_1, \phi + \phi_2|\otimes\langle \theta_2, \phi_2|) \nonumber\\ && \rho (| \theta_1, \phi + \phi_2 \rangle \otimes | \theta_2, \phi_2\rangle). \label{Q} \end{eqnarray} Here, $Q(\theta_1, \theta_2, \phi + \phi_2, \phi_2)$ is the Husimi Q function and $\phi = \phi_1 - \phi_2$ is the relative phase of two spins. It can be viewed as a phase-space distribution of density matrix $\rho$ based on spin coherent states. Note that $S_r(\phi)$ depends upon the relative phase $\phi$ explicitly. Physically, it can be used to estimate whether two spins have tendency towards phase locking \cite{19}. If $S_r(\phi)$ is always zero, then there is no fixed phase relation of two spins, i.e., no phase locking of two spins. Using Eqs. (\ref{spin_coherent_states})-(\ref{Q}), we obtain the measure of synchronization of two spins 1 as \begin{eqnarray} S_r(\phi) &=& \frac{(32 \xi + 9\pi^2 \eta)}{256\pi}, \\ \xi &=& e^{2i\phi} \rho_{37} + e^{-2i\phi} \rho_{73}, \\ \eta &=& e^{i\phi}(\rho_{24} + \rho_{35} + \rho_{57} + \rho_{68}) \nonumber\\ && + e^{-i\phi}(\rho_{42} + \rho_{53} + \rho_{75} + \rho_{86}), \end{eqnarray} where $\rho_{jk}$ is the element of density matrix $\rho$. In order to measure the entanglement of two spins, we employ the logarithmic negativity which is defined by \cite{29,30} \begin{equation} E(\rho)\equiv \log_2{(1+2N)}= \log_2||\rho^{T}||, \end{equation} with $\rho^{T}$ being the partial transpose of density matrix $\rho$. Here, $||\rho^{T}||$ is the trace norm of $\rho^{T}$ and $N$ is negativity defined by \cite{29,30} \begin{equation} N\equiv\frac{||\rho^{T}||-1}{2}. \end{equation} $N$ is the absolute value of the sum of the negative eigenvalues of $\rho^{T}$. 
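The closed-form expression for $S_r(\phi)$ and the logarithmic negativity can be evaluated directly from a $9\times 9$ two-qutrit density matrix. The sketch below is our own illustration: it assumes NumPy, the basis ordering $|1,1\rangle, |1,0\rangle, |1,-1\rangle$ for each qutrit, and maps the paper's 1-based matrix elements $\rho_{jk}$ to 0-based array indices.
\begin{verbatim}
import numpy as np

def S_r(rho, phi):
    """S_r(phi) = (32*xi + 9*pi^2*eta)/(256*pi); rho_{jk} -> rho[j-1, k-1]."""
    xi = np.exp(2j * phi) * rho[2, 6] + np.exp(-2j * phi) * rho[6, 2]
    eta = (np.exp(1j * phi) * (rho[1, 3] + rho[2, 4] + rho[4, 6] + rho[5, 7])
           + np.exp(-1j * phi) * (rho[3, 1] + rho[4, 2] + rho[6, 4] + rho[7, 5]))
    return np.real(32.0 * xi + 9.0 * np.pi**2 * eta) / (256.0 * np.pi)

def logarithmic_negativity(rho, dims=(3, 3)):
    """E = log2 of the trace norm of the partial transpose of rho."""
    d1, d2 = dims
    r = rho.reshape(d1, d2, d1, d2)
    rho_pt = r.transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh(rho_pt)))
    return np.log2(trace_norm)
\end{verbatim}
For a Hermitian $\rho$ the combination $32\xi + 9\pi^2\eta$ is automatically real, so taking the real part only discards numerical round-off.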
Now, we consider the quantum mutual information $I$ as a measure of the total correlations between two subsystems \cite{19,31}: \begin{eqnarray} I = S(\rho_1) + S(\rho_2) - S(\rho), \end{eqnarray} with $\rho_1 = Tr_2(\rho)$ and $\rho_2 = Tr_1(\rho)$. Note that $S(\rho) = - Tr[\rho \ln(\rho)]$ is the von Neumann entropy of the density matrix $\rho$. In Ref. \cite{31}, the authors proposed the mutual information as an order parameter for the quantum synchronization of a quantum system. \section{Discussions} \subsection{Influence of coupling strength $\lambda$} \begin{figure} \caption{$S_r(\phi)$ is plotted as a function of $\phi$ for $\lambda = 0$ (red line), $\lambda = 0.02 \omega_1$ (green line), and $\lambda = 0.05 \omega_1$ (blue line). The parameters are $\beta = 0.3/\omega_1, \gamma = 2\omega_1, \Delta = 0.01\omega_1$, and $h = -1$. } \label{fig1} \end{figure} In Fig. 1, we plot the steady-state $S_r(\phi)$ as a function of the relative phase $\phi$ of the two spins for different values of the coupling strength $\lambda$. From Fig. 1, one can see that if the coupling constant is zero, then $S_r(\phi)$ is always zero and there is no fixed phase relation between the two spins. This implies that the two spins cannot be synchronized in the case of $\lambda = 0$. Physically, in the case of $\lambda = 0$, there is no direct or indirect interaction between the two qutrits. It is obvious that two qutrits cannot be synchronized without any interaction. On the other hand, the maximal value of $S_r(\phi)$ increases with the coupling strength $\lambda$. For example, the maximum of $S_r(\phi)$ can be about 0.037 for $\lambda = 0.05\omega_1$. Therefore, the two spins can be synchronized in the presence of the interaction between the spins and the common bath. \subsection{Influence of anisotropy coefficient $h$} \begin{figure} \caption{$S_r(\phi)$ is plotted as a function of $\phi$ for $\gamma = 0.2\omega_1$ (upper panel) and $\gamma = 4\omega_1$ (lower panel). The parameters are $\beta = 0.3/\omega_1, \Delta = 0.01\omega_1$, and $\lambda = 0.03 \omega_1$. } \label{fig2} \end{figure} We now turn to discuss the influence of the anisotropy coefficient $h$ on the synchronization of the two spins. The synchronization of two qubits within a common Markovian environment has been investigated by employing the Bloch-Redfield master equation \cite{17}. It was found that two qubits cannot be synchronized in the purely dephasing case. The Markovian and Born approximations were employed in that work \cite{17}. In the following, we show that two spins cannot be synchronized in the purely dephasing case without using the Markovian and Born approximations. In Fig. 2, we plot $S_r(\phi)$ as a function of $\phi$ for different values of $h$ with $\gamma = 0.2\omega_1$ (upper panel) and $\gamma = 4\omega_1$ (lower panel). One can clearly see that the maximal value of $S_r(\phi)$ decreases with increasing $h$. In particular, the values of $S_r(\phi)$ for $\gamma = 0.2 \omega_1$ (upper panel) and $\gamma = 4 \omega_1$ (lower panel) are always zero if $h = 1$, so the two spins cannot be synchronized in the purely dephasing case. Note that, in Ref. \cite{17}, the authors assumed that $\gamma \gg \omega_1$ and $\gamma \gg \omega_2$ in order to ensure the validity of the Markovian approximation. In the present work, however, we use the hierarchy equation method to investigate the system without the Markovian and Born approximations. More precisely, it is not necessary to assume $\gamma \gg \omega_1$ and $\gamma \gg \omega_2$ in our work.
We extend the result of Ref. \cite{17} to the case of a non-Markovian bath, i.e., two spins without direct interaction cannot be synchronized in the purely dephasing case. We find that dissipation is indispensable for the synchronization of two spins in either a Markovian or a non-Markovian environment. \subsection{Influence of temperature} \begin{figure} \caption{$S_r(\phi)$ is plotted as a function of $\phi$ with $\Delta = 0.001\omega_1$ (upper panel) and $\Delta = 0.1\omega_1$ (lower panel). The parameters are $\gamma = 0.2 \omega_1$, $\lambda = 0.05 \omega_1$, and $h = -1$. } \label{fig3} \end{figure} \begin{figure} \caption{$S_r(\phi)$ is plotted as a function of $\phi$ with $\Delta = 0.001\omega_1$ (upper panel) and $\Delta = 0.1\omega_1$ (lower panel). The parameters are $\gamma = 20 \omega_1$, $\lambda = 0.05 \omega_1$, and $h = -1$. } \label{fig4} \end{figure} The synchronization of two spins has been studied with the help of the Lindblad master equation, where the temperature of the bath was assumed to be zero \cite{18,19}. In this section, we investigate the influence of the temperature of the bath. Comparing the upper and lower panels of Fig. 3 ($\gamma = 0.2\omega_1$) and Fig. 4 ($\gamma = 20\omega_1$), we see that the effect of the bath temperature depends crucially on the detuning between the two spins. On the one hand, if the detuning is much smaller than the frequencies of the spins ($\Delta \ll \omega_i$), the maximal value of $S_r(\phi)$ increases with the temperature, as one can see from the upper panels of Figs. 3 and 4. On the other hand, the maximum of $S_r(\phi)$ decreases with increasing temperature if $\Delta = 0.1\omega_1$, as one can see from the lower panels of Figs. 3 and 4. One possible reason for the different influences of the bath temperature on $S_r(\phi)$ for different detunings $\Delta$ is as follows. The interactions between the qutrits and the common bath play an important role in the generation of $S_r(\phi)$. The two qutrits interact with each other indirectly via their direct interactions with the common bath. The temperature of the common bath plays a constructive role in this process. However, as the system evolves, the interactions between the common bath and the two qutrits can disturb the dynamics of the two qutrits. In this case, the temperature of the bath plays a destructive role. The steady-state value of the quantum synchronization measure is a result of these two competing effects of the common bath. If the detuning $\Delta$ is very small, the two qutrits can be synchronized in a short time and the temperature of the bath plays a constructive role. However, if the detuning $\Delta$ is large enough, it takes a long time to synchronize the two qutrits and the temperature of the bath plays a destructive role. \subsection{Arnold tongue} \begin{figure} \caption{The logarithmic negativity $E$, mutual information $I$, and $S_r(\phi=0)$ are plotted as functions of the dimensionless time $\omega_1 t$ for $\Delta = 0.001\omega_1$, $\lambda = 0.05\omega_1$, $\beta = 0.3/ \omega_1$, and $h = -1$. } \label{fig5} \end{figure} \begin{figure} \caption{The Arnold tongue of the present system. The quantum mutual information $I$ (left panel) and maximal value of $S_r(\phi)$ (right panel) are plotted as functions of the detuning $\Delta$ and coupling strength $\lambda$ with $\gamma = 0.2 \omega_1$, $\beta = 0.3/ \omega_1$, and $h = -1$. } \label{fig6} \end{figure} \begin{figure} \caption{The Arnold tongue of the present system.
The quantum mutual information $I$ (left panel) and maximal value of $S_r(\phi)$ (right panel) are plotted as functions of the detuning $\Delta$ and coupling strength $\lambda$ with $\gamma = 4 \omega_1$, $\beta = 0.3/ \omega_1$, and $h = -1$. } \label{fig7} \end{figure} In Fig. 5, we plot the logarithmic negativity $E$, mutual information $I$, and $S_r(\phi=0)$ as functions of the dimensionless time $\omega_1 t$. The entanglement first increases and then decreases with time. Eventually, the entanglement becomes zero at $\omega_1 t \approx 1.5$, while $I$ and $S_r$ are not zero at this time. After a certain time interval, the values of $I$ and $S_r$ no longer change with time and the two spins are synchronized. In order to see the steady-state mutual information and synchronization measure more clearly, we plot the quantum mutual information $I$ (left panel) and the maximum of $S_r(\phi)$ (right panel) as functions of the detuning $\Delta$ and coupling strength $\lambda$ in Figs. 6 and 7. The Arnold tongue, which is a characteristic feature of synchronization, can be observed in these figures. We calculate the logarithmic negativity of the two spins for many different parameters and find that there is no steady-state entanglement even in the presence of synchronization. This result is similar to those of previous works \cite{3,31}. Consequently, the mutual information has been proposed as an order parameter for quantum synchronization \cite{31}. In the present work, we assume there is no direct interaction between the two spins and find that they are not entangled in the steady state, i.e., $E(\rho_{steady}) = 0$. However, the mutual information of the two spins at steady state can be larger than zero. Therefore, we plot the mutual information of the two spins in Figs. 6 and 7. Comparing Fig. 6 and Fig. 7, we find that the Arnold tongue can be adjusted by the parameter $\gamma$. In particular, the Arnold tongue in Fig. 6 is very narrow, and it is usually very difficult to observe the synchronization of two spins experimentally in this regime \cite{1}. If we increase the parameter $\gamma$, the Arnold tongue is broadened significantly, as one can see from Fig. 7. Therefore, the synchronization of two spins could be observed in experiments more easily if we increase the parameter $\gamma$. \section{Conclusions} In the present work, we have studied the quantum synchronization and correlations of two qutrits in one non-Markovian environment with the help of the hierarchy equation method. There is no direct interaction between the two qutrits, and each qutrit interacts with the common non-Markovian bath. In order to measure the quantum synchronization of discrete systems, we adopted the measure $S_r(\phi)$ proposed by Roulet and Bruder \cite{18,19}. This measure is based on the Husimi Q representation and spin coherent states. We have investigated the influence of the temperature, correlation time, and coupling strength between the qutrits and the bath on the quantum synchronization and correlations of the two qutrits without using the Markovian, Born, and rotating wave approximations. The influence of dissipation and dephasing on the synchronization of the two qutrits was also discussed. We first discussed the influence of the coupling strength between the qutrits and the bath on the quantum synchronization of the two qutrits. If there is no interaction between each qutrit and the common bath, then the qutrits do not interact with each other at all. Obviously, they cannot be synchronized in this case.
If we increase the coupling strength between the qutrits and the bath, they can be synchronized when dissipation is taken into account. In particular, we found that two spins without direct interaction in a non-Markovian bath cannot be synchronized in the purely dephasing case, which is a generalization of the Markovian result \cite{17}. In other words, dissipation is indispensable for the quantum synchronization of two spins in a non-Markovian or Markovian bath. Then, we studied the influence of the temperature of the common bath on the quantum synchronization of the two spins. Our results show that the influence of the temperature of the common bath depends heavily on the detuning between the two spins. If the detuning is much smaller than the frequencies of the two spins, the maximal value of $S_r(\phi)$ increases with the temperature. However, when the detuning is not much smaller than the frequencies of the two spins, the maximal value of $S_r(\phi)$ decreases with increasing temperature. Finally, we plotted the maximal value of $S_r(\phi)$ as a function of the detuning $\Delta$ and coupling strength $\lambda$. The Arnold tongue, which is a characteristic feature of synchronization, can be observed in the present model. The logarithmic negativity of the two spins was also calculated for many different parameters. We find that there is no steady-state entanglement even in the presence of synchronization \cite{3,31}. Therefore, we plot the mutual information of the two spins. The Arnold tongue can be adjusted significantly by the parameter $\gamma$. In particular, the Arnold tongue is very narrow in the non-Markovian case $\gamma < \omega_i$ ($i = 1, 2$). Thus, it is usually very difficult to observe the synchronization of two spins experimentally in the non-Markovian case \cite{1}. If we increase the parameter $\gamma$, the Arnold tongue is broadened significantly. Therefore, the synchronization of two spins could be observed in experiments more easily if they are put into a Markovian environment. \section*{Acknowledgments} This work is supported by the National Natural Science Foundation of China (Grant Nos. 11047115, 11365009 and 11065007) and the Scientific Research Foundation of Jiangxi (Grant Nos. 20122BAB212008 and 20151BAB202020). \end{document}
\begin{document} \title[Abstract Model Repair]{Abstract Model Repair\rsuper*} \author[G.~Chatzieleftheriou]{George~Chatzieleftheriou\lowercase{$^a$}} \address{$^a$Department of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece} \email{[email protected]} \author[B.~Bonakdarpour]{Borzoo~Bonakdarpour\lowercase{$^b$}} \address{$^b$Department of Computing and Software, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4L7, Canada} \email{[email protected]} \author[P.~Katsaros]{Panagiotis~Katsaros\lowercase{$^c$}} \address{$^c$Department of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece} \email{[email protected]} \author[S.~A.~Smolka]{Scott~A.~Smolka\lowercase{$^d$}} \address{$^d$Department of Computer Science, Stony Brook University, Stony Brook, NY 11794-4400, USA} \email{[email protected]} \keywords{Model Repair, Model Checking, Abstraction Refinement} \titlecomment{{\lsuper*}A preliminary version of the paper has appeared in~\cite{GBSK12}} \begin{abstract} Given a Kripke structure $M$ and CTL formula $\phi$, where $M$ does not satisfy $\phi$, the problem of \emph{Model Repair} is to obtain a new model $M'$ such that $M'$ satisfies $\phi$. Moreover, the changes made to $M$ to derive $M'$ should be minimum with respect to all such $M'$. As in model checking, \emph{state explosion} can make it virtually impossible to carry out model repair on models with infinite or even large state spaces. In this paper, we present a framework for model repair that uses \emph{abstraction refinement} to tackle state explosion. Our framework aims to repair Kripke Structure models based on a Kripke Modal Transition System abstraction and a 3-valued semantics for CTL. We introduce an abstract-model-repair algorithm for which we prove soundness and semi-completeness, and we study its complexity class. Moreover, a prototype implementation is presented to illustrate the practical utility of abstract-model-repair on an Automatic Door Opener system model and a model of the Andrew File System 1 protocol. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} Given a model $M$ and temporal-logic formula $\phi$, \emph{model checking}~\cite{CES09} is the problem of determining whether or not $M \models \phi$. When this is not the case, a model checker will typically provide a \emph{counterexample} in the form of an execution path along which $\phi$ is violated. The user should then process the counterexample manually to correct $M$. An extended version of the model-checking problem is that of \emph{model repair}: given a model $M$ and temporal-logic formula $\phi$, where $M \not\models \phi$, obtain a new model $M'$, such that $M' \models \phi$. The problem of Model Repair for Kripke structures and Computation Tree Logic (CTL)~\cite{EH85} properties was first introduced in~\cite{BEGL99}. \emph{State explosion} is a well known limitation of automated formal methods, such as model checking and model repair, which impedes their application to systems having large or even infinite state spaces. Different techniques have been developed to cope with this problem. In the case of model checking, \emph{abstraction}~\cite{CGL94,LGSBBP95,GS97,DGG97,GHJ01} is used to create a smaller, more abstract version $\hat{M}$ of the initial concrete model $M$, and model checking is performed on this smaller model. For this technique to work as advertised, it should be the case that if $\hat{M} \models \phi$ then $M \models \phi$. 
Motivated by the success of abstraction-based model checking, we present in this paper a new framework for Model Repair that uses \emph{abstraction refinement} to tackle state explosion. The resulting \emph{Abstract Model Repair} (AMR) methodology makes it possible to repair models with large state spaces, and to speed-up the repair process through the use of smaller abstract models. The major contributions of our work are as follows: \begin{itemize} \item We provide an AMR framework that uses Kripke structures (KSs) for the concrete model $M$, Kripke Modal Transition Systems (KMTSs) for the abstract model $\hat{M}$, and a 3-valued semantics for interpreting CTL over KMTSs~\cite{HJS01}. An iterative refinement of the abstract KMTS model takes place whenever the result of the 3-valued CTL model-checking problem is undefined. If the refinement process terminates with a KMTS that violates the CTL property, this property is also falsified by the concrete KS $M$. Then, the repair process for the refined KMTS is initiated. \item We strengthen the Model Repair problem by additionally taking into account the following \emph{minimality} criterion (refer to the definition of Model Repair above): the changes made to $M$ to derive $M'$ should be minimum with respect to all $M'$ satisfying $\phi$. To handle the minimality constraint, we define a metric space over KSs that quantifies the structural differences between them. \item We introduce an Abstract Model Repair algorithm for KMTSs, which takes into account the aforementioned minimality criterion. \item We prove the soundness of the Abstract Model Repair algorithm for the full CTL and the completeness for a major fragment of it. Moreover, the algorithm's complexity is analyzed with respect to the abstract KMTS model size, which can be much smaller than the concrete KS. \item We illustrate the utility of our approach through a prototype implementation used to repair a flawed Automatic Door Opener system~\cite{BK08} and the Andrew File System 1 protocol. Our experimental results show significant improvement in efficiency compared to a concrete model repair solution. \end{itemize} \noindent\emph{Organization. } \ The rest of this paper is organized as follows. Sections~\ref{sec:mc} and~\ref{sec:abstr} introduce KSs, KMTSs, as well as abstraction and refinement based on a 3-valued semantics for CTL. Section~\ref{sec:mrp} defines a metric space for KSs and formally defines the problem of Model Repair. Section~\ref{sec:absmrp} presents our framework for Abstract Model Repair, while Section~\ref{sec:alg} introduces the abstract-model-repair algorithm for KMTSs and discusses its soundness, completeness and complexity properties. Section~\ref{sec:exp} presents the experimental evaluation of our method through its application to the Andrew File System 1 protocol (AFS1). Section~\ref{sec:relwork} considers related work, while Section~\ref{sec:concl} concludes with a review of the overall approach and pinpoints directions for future work. \section{Kripke Modal Transition Systems} \label{sec:mc} \enlargethispage{\baselineskip} Let $AP$ be a set of {\em atomic propositions}. Also, let $Lit$ be the set of {\em literals}: \[ Lit = AP \; \cup \; \{ \neg p \mid p \in AP\} \] \begin{defi} \label{def:ks} A {\em Kripke Structure} (KS) is a quadruple $M = (S, S_{0}, R, L)$, where: \begin{enumerate} \item $S$ is a finite set of {\em states}. \item $S_{0}\subseteq S$ is the set of {\em initial states}. 
\item $R\subseteq S \times S$ is a {\em transition relation} that must be total, i.e., $$\forall s \in S: \exists s' \in S:R(s,s').$$ \item $L: S \rightarrow 2^{Lit}$ is a state {\em labeling function}, such that $$\forall s \in S: \forall p \in AP: p \in L(s) \Leftrightarrow \neg p \notin L(s).\eqno{\qEd}$$ \qedhere \end{enumerate} \end{defi} \noindent The fourth condition in Def.~\ref{def:ks} ensures that any atomic proposition $p \in AP$ has one and only one truth value at any state.\\ \noindent \emph{Example.} We use the Automatic Door Opener system (ADO) of~\cite{BK08} as a running example throughout the paper. The system, given as a KS in Fig~\ref{fig:ado_system}, requires a three-digit code $(p_{0},p_{1},p_{2})$ to open a door, allowing for one and only one wrong digit to be entered at most twice. Variable $\mathit{err}$ counts the number of errors, and an alarm is rung if its value exceeds two. For the purposes of our paper, we use a simpler version of the ADO system, given as the KS $M$ in Fig.~\ref{fig:ado_initial}, where the set of atomic propositions is $AP = \{q\}$ and $q \equiv (open = true)$. \begin{figure} \caption{The Automatic Door Opener (ADO) System.} \label{fig:ado_system} \end{figure} \begin{defi} \label{def:kmts} A {\em Kripke Modal Transition System} (KMTS) is a 5-tuple $\hat{M} = (\hat{S}, \hat{S_{0}},$ $R_{must}, R_{may}, \hat{L})$, where: \begin{enumerate} \item $\hat{S}$ is a finite set of \emph{states}. \item $\hat{S_{0}}\subseteq \hat{S}$ is the set of \emph{initial states}. \item $R_{must} \subseteq \hat{S} \times \hat{S}$ and $R_{may} \subseteq \hat{S} \times \hat{S}$ are \emph{transition relations} such that $R_{must} \subseteq R_{may}$. \item $\hat{L}: \hat{S} \rightarrow 2^{Lit}$ is a state-labeling such that $\forall \hat{s} \in \hat{S}$, $\forall p \in AP$, $\hat{s}$ is labeled by {\em at most} one of $p$ and $\neg p$.\qed \end{enumerate} \end{defi} \noindent A KMTS has two types of transitions: \emph{must-transitions}, which exhibit \emph{necessary} behavior, and \emph{may-transitions}, which exhibit \emph{possible} behavior. Must-transitions are also may-transitions. The ``at most one'' condition in the fourth part of Def.~\ref{def:kmts} makes it possible for the truth value of an atomic proposition at a given state to be {\em unknown}. This relaxation of truth values in conjunction with the existence of may-transitions in a KMTS constitutes a \emph{partial modeling} formalism. Verifying a CTL formula $\phi$ over a KMTS may result in an undefined outcome ($\bot$). We use the \emph{3-valued semantics}~\cite{HJS01} of a CTL formula $\phi$ at a state $\hat{s}$ of KMTS $\hat{M}$. \begin{defi} \label{def:ctl3_semantics} {\bf \cite{HJS01}} Let $\hat{M} = (\hat{S}, \hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ be a KMTS. The 3-valued semantics of a CTL formula $\phi$ at a state $\hat{s}$ of $\hat{M}$, denoted as $(\hat{M},\hat{s}) \models^{3} \phi$, is defined inductively as follows: \begin{itemize} \item If $\phi = \mathit{false}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$ \end{itemize} \item If $\phi = \mathit{true}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$ \end{itemize} \item If $\phi = p$ where $p \in AP$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff $p \in \hat{L}(\hat{s})$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff $\neg p \in \hat{L}(\hat{s})$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. 
\end{itemize} \item If $\phi = \neg\phi_{1}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff $[(\hat{M},\hat{s}) \models^{3} \phi_{1} ] = \mathit{false}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff $[(\hat{M},\hat{s}) \models^{3} \phi_{1} ] = \mathit{true}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \end{itemize} \item If $\phi = \phi_{1} \, \vee \, \phi_{2}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff $[(\hat{M},\hat{s}) \models^{3} \phi_{1}] = \mathit{true}$ or $[(\hat{M},\hat{s}) \models^{3} \phi_{2}] = \mathit{true}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff $[(\hat{M},\hat{s}) \models^{3} \phi_{1}] = \mathit{false}$ and $[(\hat{M},\hat{s}) \models^{3} \phi_{2}] = \mathit{false}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \end{itemize} \item If $\phi = \phi_{1} \, \wedge \, \phi_{2}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff $[(\hat{M},\hat{s}) \models^{3} \phi_{1}] = \mathit{true}$ and $[(\hat{M},\hat{s}) \models^{3} \phi_{2}] = \mathit{true}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff $[(\hat{M},\hat{s}) \models^{3} \phi_{1}] = \mathit{false}$ or $[(\hat{M},\hat{s}) \models^{3} \phi_{2}] = \mathit{false}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \end{itemize} \item If $\phi = AX\phi_{1}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff for all $\hat{s}_{i}$ such that $(\hat{s},\hat{s}_{i}) \in R_{may}$, $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{true}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff there exists some $\hat{s}_{i}$ such that $(\hat{s},\hat{s}_{i}) \in R_{must}$ and $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{false}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \end{itemize} \item If $\phi = EX\phi_{1}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff there exists $\hat{s}_{i}$ such that $(\hat{s},\hat{s}_{i}) \in R_{must}$ and $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{true}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff for all $\hat{s}_{i}$ such that $(\hat{s},\hat{s}_{i}) \in R_{may}$, $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{false}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \end{itemize} \item If $\phi = AG\phi_{1}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff for all may-paths $\pi_{may} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$ and for all $\hat{s}_{i} \in \pi_{may}$ it holds that $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{true}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff there exists some must-path $\pi_{must} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$, such that for some $\hat{s}_{i} \in \pi_{must}$, $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{false}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \end{itemize} \item If $\phi = EG\phi_{1}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff there exists some must-path $\pi_{must} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$, such that for all $\hat{s}_{i} \in \pi_{must}$, $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{true}$. 
\item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff for all may-paths $\pi_{may} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$, there is some $\hat{s}_{i} \in \pi_{may}$ such that $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{false}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \end{itemize} \item If $\phi = AF\phi_{1}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff for all may-paths $\pi_{may} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$, there is a $\hat{s}_{i} \in \pi_{may}$ such that $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{true}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff there exists some must-path $\pi_{must} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$, such that for all $\hat{s}_{i} \in \pi_{must}$, $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{false}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \end{itemize} \item If $\phi = EF\phi_{1}$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff there exists some must-path $\pi_{must} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$, such that there is some $\hat{s}_{i} \in \pi_{must}$ for which $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{true}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff for all may-paths $\pi_{may} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$ and for all $\hat{s}_{i} \in \pi_{may}$, $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{1}] = \mathit{false}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \end{itemize} \item If $\phi = A(\phi_{1} \, U \, \phi_{2})$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff for all may-paths $\pi_{may} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$, there is $\hat{s}_{i} \in \pi_{may}$ such that $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{2}] = \mathit{true}$ and $\forall j < i: [(\hat{M},\hat{s}_{j}) \models^{3} \phi_{1}] = true$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff there exists some must-path $\pi_{must} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$, such that \begin{itemize} \item[i.] for all $0\leq k< |\pi_{must}|:$\\ $(\forall j < k : [(\hat{M},\hat{s}_{j}) \models^{3} \phi_{1}] \neq \mathit{false}) \Rightarrow ([(\hat{M},\hat{s}_{k}) \models^{3} \phi_{2}] = \mathit{false})$ \item[ii.] $(\text{for all } 0 \leq k < |\pi_{must}|:[(\hat{M},\hat{s}_{k}) \models^{3} \phi_{2}] \neq \mathit{false}) \Rightarrow |\pi_{must}| = \infty$ \end{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \end{itemize} \item If $\phi = E(\phi_{1}U\phi_{2})$ \begin{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{true}$, iff there exists some must-path $\pi_{must} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$ such that there is a $\hat{s}_{i} \in \pi_{must}$ with $[(\hat{M},\hat{s}_{i}) \models^{3} \phi_{2}] = \mathit{true}$ and for all $j < i, [(\hat{M},\hat{s}_{j}) \models^{3} \phi_{1}] = \mathit{true}$. \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \mathit{false}$, iff for all may-paths $\pi_{may} = [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$ \begin{itemize} \item[i.] for all $0 \leq k < |\pi_{may}|:$\\ $(\forall j < k : [(\hat{M},\hat{s}_{j}) \models^{3} \phi_{1}] \neq \mathit{false}) \Rightarrow ([(\hat{M},\hat{s}_{k}) \models^{3} \phi_{2}] = \mathit{false})$ \item[ii.] 
$(\text{for all } 0 \leq k < |\pi_{may}| : [(\hat{M},\hat{s}_{k}) \models^{3} \phi_{2}] \neq \mathit{false}) \Rightarrow |\pi_{may}| = \infty$ \end{itemize} \item $[(\hat{M},\hat{s}) \models^{3} \phi ] = \bot$, otherwise. \qed \end{itemize} \end{itemize} \end{defi}\enlargethispage{\baselineskip} \noindent From the 3-valued CTL semantics, it follows that must-transitions are used to check the truth of existential CTL properties, while may-transitions are used to check the truth of universal CTL properties. This works inversely for checking the refutation of CTL properties. In what follows, we use $\models$ instead of $\models^{3}$ in order to refer to the 3-valued satisfaction relation. \section{Abstraction and Refinement for 3-Valued CTL} \label{sec:abstr} \subsection{Abstraction} \emph{Abstraction} is a state-space reduction technique that produces a smaller abstract model from an initial {\em concrete} model, so that the result of model checking a property $\phi$ in the abstract model is preserved in the concrete model. This can be achieved if the abstract model is built with certain requirements~\cite{CGL94,GHJ01}. \begin{defi} \label{def:abs_kmts} Given a KS $M = (S, S_{0}, R, L)$ and a pair of total functions $(\alpha : S \rightarrow \hat{S}, \gamma : \hat{S} \rightarrow 2^{S})$ such that $$\forall s \in S: \forall\hat{s} \in \hat{S}: (\alpha(s) = \hat{s} \Leftrightarrow s \in \gamma(\hat{s}))$$ the KMTS $\alpha(M) = (\hat{S}, \hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ is defined as follows: \begin{enumerate} \item $\hat{s} \in \hat{S_{0}}$ iff $\exists s \in \gamma(\hat{s})$ such that $s \in S_{0}$ \item $lit \in \hat{L}(\hat{s})$ only if $\forall s \in \gamma(\hat{s}): lit \in L(s)$ \item $R_{must} = \left\{(\hat{s_{1}},\hat{s_{2}}) \mid \forall s_{1} \in \gamma(\hat{s_{1}}): \exists s_{2} \in \gamma(\hat{s_{2}}): (s_{1},s_{2}) \in R\right\}$ \item $R_{may} = \left\{(\hat{s_{1}},\hat{s_{2}}) \mid \exists s_{1} \in \gamma(\hat{s_{1}}): \exists s_{2} \in \gamma(\hat{s_{2}}): (s_{1},s_{2}) \in R\right\}$\qed \end{enumerate} \end{defi} For a given KS $M$ and a pair of abstraction and concretization functions $\alpha$ and $\gamma$, Def.~\ref{def:abs_kmts} introduces the KMTS $\alpha(M)$ defined over the set $\hat{S}$ of \emph{abstract states}. In our AMR framework, we view $M$ as the \emph{concrete model} and the KMTS $\alpha(M)$ as the \emph{abstract model}. Any two concrete states $s_{1}$ and $s_{2}$ of $M$ are abstracted by $\alpha$ to a state $\hat{s}$ of $\alpha(M)$ if and only if $s_{1}$, $s_{2}$ are elements of the set $\gamma(\hat{s})$ (see Fig~\ref{fig:abstract_concrete}). A state of $\alpha(M)$ is initial \emph{if and only if} at least one of its concrete states is initial as well. An atomic proposition in an abstract state is true (respectively, false), \emph{only if} it is also true (respectively, false) in all of its concrete states. This means that the value of an atomic proposition may be unknown at a state of $\alpha(M)$. A must-transition from $\hat{s_{1}}$ to $\hat{s_{2}}$ of $\alpha(M)$ exists, if and only if there are transitions from all states of $\gamma(\hat{s_{1}})$ to at least one state of $\gamma(\hat{s_{2}})$ $(\forall\exists-condition)$. Respectively, a may-transition from $\hat{s_{1}}$ to $\hat{s_{2}}$ of $\alpha(M)$ exists, if and only if there is at least one transition from some state of $\gamma(\hat{s_{1}})$ to some state of $\gamma(\hat{s_{2}})$ $(\exists\exists-condition)$. 
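To make the $\forall\exists$/$\exists\exists$ conditions of Def.~\ref{def:abs_kmts} concrete, the following Python sketch builds $\alpha(M)$ from an explicit KS. It is an illustration only (states are assumed hashable, $\alpha$ is given as a dictionary, and the labeling $\hat{L}$ is chosen maximally, i.e., as the intersection of the concrete labelings, which is one admissible choice under the ``only if'' condition); it is not the implementation evaluated in Section~\ref{sec:exp}.
\begin{verbatim}
from collections import defaultdict

def abstract_kmts(S, S0, R, L, alpha):
    """Construct alpha(M) = (S_hat, S0_hat, R_must, R_may, L_hat) from the
    KS M = (S, S0, R, L) and the abstraction function alpha (a dict)."""
    gamma = defaultdict(set)                       # concretization gamma(s_hat)
    for s in S:
        gamma[alpha[s]].add(s)
    S_hat = set(gamma)
    S0_hat = {alpha[s] for s in S0}                # initial iff some concrete state is initial
    L_hat = {sh: set.intersection(*(L[s] for s in gamma[sh])) for sh in S_hat}
    succ = defaultdict(set)
    for (s1, s2) in R:
        succ[s1].add(s2)
    R_must, R_may = set(), set()
    for sh1 in S_hat:
        for sh2 in S_hat:
            hits = [bool(succ[s1] & gamma[sh2]) for s1 in gamma[sh1]]
            if any(hits):                          # exists-exists condition
                R_may.add((sh1, sh2))
            if all(hits):                          # forall-exists condition
                R_must.add((sh1, sh2))
    return S_hat, S0_hat, R_must, R_may, L_hat
\end{verbatim}
Since the $\forall\exists$ condition implies the $\exists\exists$ condition for non-empty $\gamma(\hat{s}_{1})$, the construction automatically yields $R_{must} \subseteq R_{may}$.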
\begin{figure} \caption{Abstraction and Concretization.} \label{fig:abstract_concrete} \end{figure} \begin{defi}\enlargethispage{\baselineskip} \label{def:concretize_kmts} Given a pair of total functions $(\alpha : S \rightarrow \hat{S}, \gamma : \hat{S} \rightarrow 2^{S})$ such that $$\forall s \in S: \forall \hat{s} \in \hat{S}: (\alpha(s) = \hat{s} \Leftrightarrow s \in \gamma(\hat{s}))$$ and a KMTS $\hat{M} = (\hat{S}, \hat{S_{0}}, R_{must}, R_{may}, \hat{L})$, the set of KSs $\gamma(\hat{M}) = \{M \mid M = (S, S_{0}, R, L)\}$ is defined such that for all $M \in \gamma(\hat{M})$ the following conditions hold: \begin{enumerate} \item $s \in S_{0}$ iff $\alpha(s) \in \hat{S_{0}}$ \item $lit \in L(s)$ if $lit \in \hat{L}(\alpha(s))$ \item $(s_{1},s_{2}) \in R$ iff \begin{itemize}{} \item $\exists s_{1}^{\prime} \in \gamma(\alpha(s_{1})): \exists s_{2}^{\prime} \in \gamma(\alpha(s_{2})) : (\alpha(s_{1}),\alpha(s_{2})) \in R_{may}$, and \item $\forall s_{1}^{\prime} \in \gamma(\alpha(s_{1})): \exists s_{2}^{\prime} \in \gamma(\alpha(s_{2})) : (\alpha(s_{1}),\alpha(s_{2})) \in R_{must}$ \qed \end{itemize}{} \end{enumerate} \end{defi} \noindent For a given KMTS $\hat{M}$ and a pair of abstraction and concretization functions $\alpha$ and $\gamma$, Def.~\ref{def:concretize_kmts} introduces a set $\gamma(\hat{M})$ of \emph{concrete} KSs. A state $s$ of a KS $M \in \gamma(\hat{M})$ is initial if its abstract state $\alpha(s)$ is also initial. An atomic proposition in a concrete state $s$ is true (respectively, false) if it is also true (respectively, false) in its abstract state $\alpha(s)$. A transition from a concrete state $s_{1}$ to another concrete state $s_{2}$ exists if and only if \begin{itemize} \item{} there are concrete states $s_{1}^{\prime} \in \gamma(\alpha(s_{1}))$ and $s_{2}^{\prime} \in \gamma(\alpha(s_{2}))$, where $(\alpha(s_{1}),\alpha(s_{2})) \in R_{may}$, and \item{} there is at least one concrete state $s_{2}^{\prime} \in \gamma(\alpha(s_{2}))$ such that for all $s_{1}^{\prime} \in \gamma(\alpha(s_{1}))$ it holds that $(\alpha(s_{1}),\alpha(s_{2})) \in R_{must}$. \end{itemize} \paragraph{Abstract Interpretation.} A pair of abstraction and concretization functions can be defined within an \emph{Abstract Interpretation}~\cite{CC77,CC79} framework. Abstract interpretation is a theory for a set of abstraction techniques, for which important properties for the model checking problem have been proved~\cite{DGG97,D96}. \begin{defi} \label{def:mixsimul} \emph{~\cite{DGG97,GJ02}} Let $M = (S, S_{0}, R, L)$ be a concrete KS and $\hat{M}$ = $(\hat{S}, \hat{S_{0}}, R_{must},$ $R_{may}, \hat{L})$ be an abstract KMTS. A relation $H \subseteq S \times \hat{S}$ for $M$ and $\hat{M}$ is called a \emph{mixed simulation}, when $H(s,\hat{s})$ implies: \begin{itemize} \item $\hat{L}(\hat{s}) \subseteq L(s)$ \item if $r = (s,s^{\prime}) \in R$, then there exists $\hat{s}^{\prime} \in \hat{S}$ such that $r_{may} = (\hat{s},\hat{s}^{\prime}) \in R_{may}$ and $(s^{\prime},\hat{s}^{\prime}) \in H$. \item if $r_{must} = (\hat{s},\hat{s}^{\prime}) \in R_{must}$, then there exists $s^{\prime} \in S$ such that $r = (s,s^{\prime}) \in R$ and $(s^{\prime},\hat{s}^{\prime}) \in H$.\qed \end{itemize} \end{defi} \noindent The abstraction function $\alpha$ of Def.~\ref{def:abs_kmts} is a mixed simulation for the KS $M$ and its abstract KMTS $\alpha(M)$.
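The conditions of Def.~\ref{def:mixsimul} are directly checkable for finite models. The sketch below (our own illustration, plain Python over the same set-based encoding as before) tests whether a candidate relation $H \subseteq S \times \hat{S}$ is a mixed simulation.
\begin{verbatim}
from collections import defaultdict

def is_mixed_simulation(H, R, L, R_must, R_may, L_hat):
    """H is a set of pairs (s, s_hat); R, R_must, R_may are sets of pairs;
    L and L_hat map states to sets of literals."""
    abstract_of = defaultdict(set)   # s     -> {s_hat | (s, s_hat) in H}
    concrete_of = defaultdict(set)   # s_hat -> {s | (s, s_hat) in H}
    for (s, sh) in H:
        abstract_of[s].add(sh)
        concrete_of[sh].add(s)
    for (s, sh) in H:
        if not L_hat[sh] <= L[s]:                 # condition 1: weaker abstract labeling
            return False
        for (s1, s2) in R:                        # condition 2: R matched by R_may
            if s1 == s and not any((sh, sh2) in R_may for sh2 in abstract_of[s2]):
                return False
        for (sh1, sh2) in R_must:                 # condition 3: R_must matched by R
            if sh1 == sh and not any((s, s2) in R for s2 in concrete_of[sh2]):
                return False
    return True
\end{verbatim}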
\begin{thm} \label{theor:preserv} \emph{\cite{GJ02}} Let $H \subseteq S \times \hat{S}$ be a mixed simulation from a KS $M = (S, S_{0}, R, L)$ to a KMTS $\hat{M} = (\hat{S}, \hat{S_{0}}, R_{must}, R_{may}, \hat{L})$. Then, for every CTL formula $\phi$ and every $(s,\hat{s}) \in H$ it holds that \[ [(\hat{M},\hat{s}) \models \phi] \neq \bot \Rightarrow [(M,s) \models \phi] = [(\hat{M},\hat{s}) \models \phi] \] \end{thm} \noindent Theorem~\ref{theor:preserv} ensures that if a CTL formula $\phi$ has a definite truth value (i.e., true or false) in the abstract KMTS, then it has the same truth value in the concrete KS. When we get $\bot$ from the 3-valued model checking of a CTL formula $\phi$, the result of model checking property $\phi$ on the corresponding KS can be either true or false.\\ \noindent \emph{Example.} An abstract KMTS $\hat{M}$ is presented in Fig.~\ref{fig:ado_initial}, where all the states labeled by $q$ are grouped together, as are all states labeled by $\neg q$. \begin{figure} \caption{The KS and KMTSs for the ADO system.} \label{fig:ado_initial} \label{fig:ado_refined} \label{fig:ado_ks_kmts} \end{figure} \subsection{Refinement} When the outcome of verifying a CTL formula $\phi$ on an abstract model using the 3-valued semantics is $\bot$, then a \emph{refinement} step is needed to acquire a more \emph{precise} abstract model. In the literature, there are refinement approaches for the 2-valued CTL semantics~\cite{CGJLV00,CPR05,CGR07}, as well as a number of techniques for the 3-valued CTL model checking~\cite{GHJ01,SG04,SG07,GLLS07}. The refinement technique that we adopt is an automated two-step process based on~\cite{CGJLV00,SG04}: \begin{enumerate} \item Identify a \emph{failure state} in $\alpha(M)$ using the algorithms in~\cite{CGJLV00,SG04}; the cause of failure for a state $\hat{s}$ stems from an atomic proposition having an undefined value in $\hat{s}$, or from an outgoing may-transition from $\hat{s}$. \item Produce the abstract KMTS $\alpha_{\mathit{Refined}}(M)$, where $\alpha_{\mathit{Refined}}$ is a new abstraction function as in Def.~\ref{def:abs_kmts}, such that the identified failure state is refined into two states. If the cause of failure is an undefined value of an atomic proposition in $\hat{s}$, then $\hat{s}$ is split into states $\hat{s}_{1}$ and $\hat{s}_{2}$, such that the atomic proposition is true in $\hat{s}_{1}$ and false in $\hat{s}_{2}$. Otherwise, if the cause of failure is an outgoing may-transition from $\hat{s}$, then $\hat{s}$ is split into states $\hat{s}_{1}$ and $\hat{s}_{2}$, such that there is an outgoing must-transition from $\hat{s}_{1}$ and no outgoing may- or must-transition from $\hat{s}_{2}$. \end{enumerate} The described refinement technique does not necessarily converge to an abstract KMTS with a definite model checking result. A promising approach in order to overcome this restriction is by using a different type of abstract model, as in~\cite{SG04}, where the authors propose the use of Generalized KMTSs, which ensure monotonicity of the refinement process.\\ \noindent \emph{Example. } Consider the case where the ADO system requires a mechanism for opening the door from any state with a direct action. This could be an action done by an expert if an immediate opening of the door is required. This property can be expressed in CTL as $\phi = AGEXq$. 
Observe that in $\alpha(M)$ of Fig.~\ref{fig:ado_initial}, the absence of a must-transition from $\hat{s}_{0}$ to $\hat{s}_{1}$, i.e. to a state where $[(\alpha(M),\hat{s}_{1}) \models q] = true$, in conjunction with the existence of a may-transition from $\hat{s}_{0}$ to $\hat{s}_{1}$, results in an undefined model-checking outcome for $[(\alpha(M),\hat{s}_{0}) \models \phi]$. Notice that state $\hat{s}_{0}$ is the failure state, and the may-transition from $\hat{s}_{0}$ to $\hat{s}_{1}$ is the cause of the failure. Consequently, $\hat{s}_{0}$ is refined into two states, $\hat{s}_{01}$ and $\hat{s}_{02}$, such that the former has no transition to $\hat{s}_{1}$ and the latter has an outgoing must-transition to $\hat{s}_{1}$. Thus, the may-transition which caused the undefined outcome is eliminated and for the refined KMTS $\alpha_{\mathit{Refined}}(M)$ it holds that $[(\alpha_{\mathit{Refined}}(M),\hat{s}_{1}) \models \phi] = \mathit{false}$. The initial KS and the refined KMTS $\alpha_{\mathit{Refined}}(M)$ are shown in Fig.~\ref{fig:ado_refined}. \section{The Model Repair Problem} \label{sec:mrp} In this section, we formulate the problem of Model Repair. A metric space over Kripke structures is defined to quantify their structural differences. This allows us to take into account the \emph{minimality of changes} criterion in Model Repair. \noindent Let $\pi$ be a function on the set of all functions $f: X \rightarrow Y$ such that: \[ \pi(f) = \{(x, f(x)) \mid x \in X\} \] \noindent A \emph{restriction operator} (denoted by $\upharpoonright$) for the domain of function $f$ is defined such that for $X_{1} \subseteq X$, \[ f\upharpoonright_{X_{1}} = \{(x, f(x)) \mid x \in X_{1}\} \] By $S^C$, we denote the complement of a set $S$. \begin{defi} \label{def:metric_space} For any two $M = (S,S_{0},R,L)$ and $M^{\prime} = (S^{\prime},S^{\prime}_{0},R^{\prime},L^{\prime})$ in the set $K_{M}$ of all KSs, where \begin{itemize} \item[] $S^{\prime} = (S \cup S_{\mathit{IN}}) - S_{\mathit{OUT}}$ for some $S_{\mathit{IN}} \subseteq S^{C}$, $S_{\mathit{OUT}} \subseteq S$, \item[] $R^{\prime} = (R \cup R_{\mathit{IN}}) - R_{\mathit{OUT}}$ for some $R_{\mathit{IN}} \subseteq R^{C}$, $R_{\mathit{OUT}} \subseteq R$, \item[] $L^{\prime} : S^{\prime} \rightarrow 2^{Lit}$, \end{itemize} the {\em distance function} $d$ over $K_{M}$ is defined as follows: \[ d(M,M^{\prime}) = |S\,\Delta \, S^{\prime}| + |R \, \Delta \, R^{\prime}| + \frac{|\pi(L\upharpoonright_{S\cap S^{\prime}}) \,\Delta \, \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})|}{2} \] with $A \, \Delta \, B$ representing the symmetric difference $(A-B)\cup(B-A)$.\qed \end{defi} \noindent For any two KSs defined over the same set of atomic propositions $AP$, the function $d$ counts the number of differences $|S\,\Delta\, S^{\prime}|$ in the state spaces, the number of differences $|R\,\Delta\, R^{\prime}|$ in the transition relations, and the number of common states with altered labeling. \begin{prop} \label{prop:metric_space} The ordered pair $(K_{M},d)$ is a metric space. \end{prop} \begin{proof} We use the fact that the cardinality of the symmetric difference between any two sets is a distance metric.
It holds that: \begin{enumerate} \item $|S\Delta S^{\prime}| \geq 0$, $|R\Delta R^{\prime}| \geq 0$ and $|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})| \geq 0$ (non-negativity) \item $|S\Delta S^{\prime}| = 0$ iff $S = S^{\prime}$, $|R\Delta R^{\prime}| = 0$ iff $R = R^{\prime}$ and $|\pi(L\upharpoonright_{S\cap S^{\prime}})|\Delta |\pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})| = 0$ iff $\pi(L\upharpoonright_{S\cap S^{\prime}}) = \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})$ (identity of indiscernibles) \item $|S\Delta S^{\prime}| = |S^{\prime}\Delta S|$, $|R\Delta R^{\prime}| = |R^{\prime}\Delta R|$ and $|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})| =\\ |\pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}}) \Delta \pi(L\upharpoonright_{S\cap S^{\prime}})|$(symmetry) \item $|S^{\prime}\Delta S^{\prime\prime}| \leq |S^{\prime}\Delta S| + |S \Delta S^{\prime\prime}|$, $|R^{\prime}\Delta R^{\prime\prime}| \leq |R^{\prime}\Delta R| + |R \Delta R^{\prime\prime}|$, \\ $|\pi(L^{\prime}\upharpoonright_{S^{\prime}\cap S^{\prime\prime}})\Delta \pi(L^{\prime\prime}|_{S^{\prime}\cap S^{\prime\prime}})| \leq |\pi(L^{\prime}\upharpoonright_{S^{\prime}\cap S})\Delta \pi(L\upharpoonright_{S^{\prime}\cap S})| + \\ |\pi(L\upharpoonright_{S\cap S^{\prime\prime}})\Delta \pi(L^{\prime\prime}|_{S\cap S^{\prime\prime}})|$ \\ (triangle inequality) \end{enumerate} We will prove that $d$ is a metric on $K_{M}$. Suppose $M, M^{\prime}, M^{\prime\prime} \in K_{M}$ \begin{itemize} \item It easily follows from (1) that $d(M,M^{\prime}) \geq 0$ (non-negativity) \item From (2), $d(M,M^{\prime}) = 0$ iff $M = M^{\prime}$ (identity of indiscernibles) \item Adding the equations in (3), results in $d(M,M^{\prime}) = d(M^{\prime},M)$ (symmetry) \item If we add the inequalities in (4), then we get $d(M^{\prime},M^{\prime\prime}) \leq d(M^{\prime},M) + d(M,M^{\prime\prime})$ (triangle inequality) \end{itemize} So, the proposition is true. 
\end{proof} \begin{defi} \label{def:metric_space_kmts} For any two $\hat{M}$ = $(\hat{S}, \hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ and $\hat{M}^{\prime}$ = $(\hat{S}^{\prime}, \hat{S_{0}}^{\prime}, R_{must}^{\prime},$ $R_{may}^{\prime}, \hat{L}^{\prime})$ in the set $K_{\hat{M}}$ of all KMTSs, where \begin{itemize} \item[] $\hat{S}^{\prime} = (\hat{S} \cup \hat{S}_{\mathit{IN}}) - \hat{S}_{\mathit{OUT}}$ for some $\hat{S}_{\mathit{IN}} \subseteq \hat{S}^{C}$, $\hat{S}_{\mathit{OUT}} \subseteq \hat{S}$, \item[] $\hat{R}_{must}^{\prime} = (\hat{R}_{must} \cup \hat{R}_{\mathit{IN}}) - \hat{R}_{\mathit{OUT}}$ for some $\hat{R}_{\mathit{IN}} \subseteq \hat{R}_{must}^{C}$, $\hat{R}_{\mathit{OUT}} \subseteq \hat{R}_{must}$, \item[] $\hat{R}_{may}^{\prime} = (\hat{R}_{may} \cup \hat{R}_{\mathit{IN}}^{\prime}) - \hat{R}_{\mathit{OUT}}^{\prime}$ for some $\hat{R}_{\mathit{IN}}^{\prime} \subseteq \hat{R}_{may}^{C}$, $\hat{R}_{\mathit{OUT}}^{\prime} \subseteq \hat{R}_{may}$, \item[] $\hat{L}^{\prime} = \hat{S}^{\prime} \rightarrow 2^{LIT}$, \end{itemize} the {\em distance function} $\hat{d}$ over $K_{\hat{M}}$ is defined as follows: \[ \begin{split} \hat{d}(M,M^{\prime}) = |\hat{S} \, \Delta \, \hat{S}^{\prime}| + |\hat{R}_{must} \, \Delta \, \hat{R}_{must}^{\prime}| + |(\hat{R}_{may} - \hat{R}_{must}) \, \Delta \, (\hat{R}_{may}^{\prime} - \hat{R}_{must}^{\prime})| + \\ \frac{|\pi(\hat{L}\upharpoonright_{\hat{S}\cap \hat{S}^{\prime}}) \, \Delta \, \pi(\hat{L}^{\prime}\upharpoonright_{\hat{S}\cap \hat{S}^{\prime}})|}{2} \end{split} \] with $A \Delta B$ representing the symmetric difference $(A-B)\cup(B-A)$. \end{defi} \noindent We note that $\hat{d}$ counts the differences between $\hat{R}_{may}^{\prime}$ and $\hat{R}_{may}$, and those between $\hat{R}_{must}^{\prime}$ and $\hat{R}_{must}$ separately, while avoiding to count the differences in the latter case twice (we remind that must-transitions are also included in $\hat{R}_{may}$). \begin{prop} \label{prop:kmts_metric_space} The ordered pair $(K_{\hat{M}},\hat{d})$ is a metric space. \end{prop} \begin{proof} The proof is done in the same way as in Prop.~\ref{prop:metric_space}. \end{proof} \begin{defi} Given a KS $M$ and a CTL formula $\phi$ where $M \not\models \phi$, the Model Repair problem is to find a KS $M^{\prime}$, such that $M^{\prime} \models \phi$ and $d(M,M^{\prime})$ is minimum with respect to all such $M^{\prime}$. \end{defi} \noindent The Model Repair problem aims at modifying a KS such that the resulting KS satisfies a CTL formula that was violated before. The distance function $d$ of Def.~\ref{def:metric_space} features all the attractive properties of a distance metric. Given that no quantitative interpretation exists for predicates and logical operators in CTL, $d$ can be used in a model repair solution towards selecting minimum changes to the modified KS. \section{The Abstract Model Repair Framework} \label{sec:absmrp} Our AMR framework integrates 3-valued model checking, model refinement, and a new algorithm for selecting the repair operations applied to the abstract model. The goal of this algorithm is to apply the repair operations in a way, such that the number of structural changes to the corresponding concrete model is minimized. The algorithm works based on a partial order relation over a set of basic repair operations for KMTSs. This section describes the steps involved in our AMR framework, the basic repair operations, and the algorithm. 
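Before describing the repair process, we note that the distance $d$ of Def.~\ref{def:metric_space} is inexpensive to evaluate: since the labeling term counts exactly the common states whose labels changed, $d$ reduces to symmetric differences of finite sets. The following sketch is our own illustration (plain Python, literals stored as frozensets) and not part of the prototype of Section~\ref{sec:exp}.
\begin{verbatim}
def ks_distance(M1, M2):
    """d(M, M') = |S sym S'| + |R sym R'| + #{s in S and S' : L(s) != L'(s)}."""
    S1, _, R1, L1 = M1
    S2, _, R2, L2 = M2
    relabeled = sum(1 for s in S1 & S2 if L1[s] != L2[s])
    return len(S1 ^ S2) + len(R1 ^ R2) + relabeled

# Two toy structures differing in one transition and one state label:
M  = ({0, 1}, {0}, {(0, 1), (1, 1)}, {0: frozenset({'~q'}), 1: frozenset({'q'})})
Mp = ({0, 1}, {0}, {(0, 1), (1, 0)}, {0: frozenset({'q'}),  1: frozenset({'q'})})
assert ks_distance(M, Mp) == 3      # |R sym R'| = 2, one relabeled state
\end{verbatim}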
\subsection{The Abstract Model Repair Process} \label{subsec:absmrp_approach} \begin{figure} \caption{Abstract Model Repair Framework.} \label{fig:abs_repair} \end{figure} The process steps shown in Fig.~\ref{fig:abs_repair} rely on the KMTS abstraction of Def.~\ref{def:abs_kmts}. These are the following: \begin{description} \item[Step 1.] Given a KS $M$, a state $s$ of $M$, and a CTL property $\phi$, let us call $\hat{M}$ the KMTS obtained as in Def.~\ref{def:abs_kmts}. \item[Step 2.] For state $\hat{s} = \alpha(s)$ of $\hat{M}$, we check whether $(\hat{M},\hat{s}) \models \phi$ by 3-valued model checking. \begin{description} \item[Case 1.] If the result is \emph{true}, then, according to Theorem~\ref{theor:preserv}, $(M,s) \models \phi$ and there is no need to repair $M$. \item[Case 2.] If the result is \emph{undefined}, then a refinement of $\hat{M}$ takes place, and: \begin{description} \item[Case 2.1.] If an $\hat{M}_{Refined}$ is found, the control is transferred to Step~2. \item[Case 2.2.] If a refined KMTS cannot be retrieved, the repair process terminates with a failure. \end{description} \item[Case 3.] If the result is \emph{false}, then, from Theorem~\ref{theor:preserv}, $(M,s) \not\models \phi$ and the repair process is enacted; the control is transferred to Step 3. \end{description} \item[Step 3.] The \emph{AbstractRepair} algorithm is called for the abstract KMTS ($\hat{M}_{Refined}$ or $\hat{M}$ if no refinement has occurred), the state $\hat{s}$ and the property $\phi$. \begin{description} \item[Case 1.] \emph{AbstractRepair} returns an $\hat{M}^{\prime}$ for which $(\hat{M}^{\prime},\hat{s}) \models \phi$. \item[Case 2.] \emph{AbstractRepair} fails to find an $\hat{M}^{\prime}$ for which the property holds true. \end{description} \item[Step 4.] If \emph{AbstractRepair} returns an $\hat{M}^{\prime}$, then the process ends with selecting the subset of KSs from $\gamma(\hat{M}^{\prime})$, with elements whose distance $d$ from the KS $M$ is minimum with respect to all the KSs in $\gamma(\hat{M}^{\prime})$. \end{description} \subsection{Basic Repair Operations} \label{subsec:basic_ops} We decompose the KMTS repair process into seven basic repair operations: \begin{description} \item[AddMust] Adding a must-transition \item[AddMay] Adding a may-transition \item[RemoveMust] Removing a must-transition \item[RemoveMay] Removing a may-transition \item[ChangeLabel] Changing the labeling of a KMTS state \item[AddState] Adding a new KMTS state \item[RemoveState] Removing a disconnected KMTS state \end{description} \subsubsection{Adding a must-transition} \begin{defi}[AddMust] \label{def:AddMust} For a given KMTS $\hat{M} = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ and $\hat{r}_{n} = (\hat{s}_{1},\hat{s}_{2}) \notin R_{must}$, $AddMust(\hat{M},\hat{r}_{n})$ is the KMTS $\hat{M^{\prime}} = (\hat{S}, \hat{S_{0}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L})$ such that $R_{must}^{\prime} = R_{must} \cup \{\hat{r}_{n}\}$ and $R_{may}^{\prime} = R_{may} \cup \{\hat{r}_{n}\}$. \qed \end{defi} Since $R_{must} \subseteq R_{may}$, $\hat{r}_{n}$ must also be added to $R_{may}$, resulting in a new may-transition if $\hat{r}_{n} \notin R_{may}$. Fig.~\ref{fig:AddMust} shows how the basic repair operation \emph{AddMust} modifies a given KMTS. The newly added transitions are in bold. 
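Operationally, \emph{AddMust} is the simplest of the basic repair operations: it only inserts the new pair into both transition relations, so the repaired KMTS differs from the original in exactly one element of $R_{must}$ (and possibly one of $R_{may}$). A minimal sketch, using our own tuple-of-sets encoding of KMTSs, is the following.
\begin{verbatim}
def add_must(kmts, r):
    """Basic repair operation AddMust: insert r into R_must and hence into R_may."""
    S, S0, R_must, R_may, L = kmts
    assert r not in R_must, "AddMust expects a transition not already in R_must"
    return (S, S0, R_must | {r}, R_may | {r}, L)

# Usage (hypothetical abstract states): repaired = add_must(kmts_hat, ('s0_hat', 's1_hat'))
\end{verbatim}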
\begin{figure} \caption{\emph{AddMust}} \label{fig:AddMust} \end{figure} \begin{prop} \label{prop:AddMust} For any $\hat{M}^{\prime} = AddMust(\hat{M},\hat{r}_{n})$, it holds that $\hat{d}(\hat{M},\hat{M}^{\prime}) = 1$.\qed \end{prop} \begin{defi} \label{def:add_must_ks} Let $M = (S,S_{0},R,L)$ be a KS and let $\alpha(M) = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ be the abstract KMTS derived from $M$ as in Def.~\ref{def:abs_kmts}. Also, let $\hat{M}^{\prime} = AddMust(\alpha(M),\hat{r}_{n})$ for some $\hat{r}_{n} = (\hat{s}_{1},\hat{s}_{2}) \notin R_{must}$. The set $K_{min} \subseteq \gamma(\hat{M}^{\prime})$ of all KSs whose distance $d$ from $M$ is minimized is: \begin{equation} K_{min} = \{M^{\prime} \mid M^{\prime} = (S, S_{0}, R \cup R_{n}, L)\} \end{equation} where $R_{n}$ is given for one $s_{2} \in \gamma(\hat{s}_{2})$ as follows: \begin{equation} \nonumber R_{n} = \bigcup_{s_{1} \in \gamma(\hat{s}_{1})} \{(s_{1}, s_{2}) \mid \nexists s \in \gamma(\hat{s}_{2}): (s_{1},s) \in R\} \eqno{\qEd} \end{equation} \end{defi} \noindent Def.~\ref{def:add_must_ks} implies that when the \emph{AbstractRepair} algorithm applies \emph{AddMust} to the abstract KMTS $\hat{M}$, a set of KSs is retrieved from the concretization of $\hat{M}^{\prime}$. The same holds for all other basic repair operations and consequently, when \emph{AbstractRepair} finds a repaired KMTS, one or more KSs can be obtained for which property $\phi$ holds. \begin{prop} \label{prop:add_must} For all $M^{\prime} \in K_{min}$, it holds that $1 \leq d(M,M^{\prime}) \leq \left|S\right|$. \end{prop} \begin{proof} Recall that $$d(M,M^{\prime}) = |S\Delta S^{\prime}| + |R\Delta R^{\prime}| + \frac{|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})|}{2}$$ Since $|S\Delta S^{\prime}| = 0$ and $|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})| = 0$, $d(M,M^{\prime}) = |R\Delta R^{\prime}| = |R - R^{\prime}| + |R^{\prime} - R| = 0 + |R_{n}|$. Since $|R_{n}| \geq 1$ and $|R_{n}| \leq |S|$, it follows that $1 \leq d(M,M^{\prime}) \leq \left|S\right|$. \end{proof} From Prop.~\ref{prop:add_must}, we conclude that a lower and an upper bound exist for the distance between $M$ and any $M^{\prime} \in K_{min}$. \subsubsection{Adding a may-transition} \begin{defi}[AddMay] \label{def:AddMay} For a given KMTS $\hat{M} = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ and $\hat{r}_{n} = (\hat{s}_{1},\hat{s}_{2}) \notin R_{may}$, $AddMay(\hat{M},\hat{r}_{n})$ is the KMTS $\hat{M^{\prime}} = (\hat{S}, \hat{S_{0}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L})$ such that $R_{must}^{\prime} = R_{must} \cup \{\hat{r}_{n}\}$ if $\left|S_{1}\right| = 1$ or $R_{must}^{\prime} = R_{must}$ if $\left|S_{1}\right| > 1$ for $S_{1} = \{s_{1} \mid s_{1} \in \gamma(\hat{s}_{1})\}$ and $R_{may}^{\prime} = R_{may} \cup \{\hat{r}_{n}\}$. \qed \end{defi} From Def.~\ref{def:AddMay}, we conclude that there are two different cases when adding a new may-transition $\hat{r}_{n}$: a must-transition is also added or it is not. In fact, $\hat{r}_{n}$ is also a must-transition if and only if the set of the corresponding concrete states of $\hat{s}_{1}$ is a singleton. Fig.~\ref{fig:AddMay} displays the two different cases of applying basic repair operation \emph{AddMay} to a KMTS.
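Returning to Def.~\ref{def:add_must_ks}, the following sketch (again in Python, with names of our own choosing) shows how the concrete transition set $R_{n}$ induced by a single \emph{AddMust} step could be computed, under the assumption that the concretization function $\gamma$ is available as an explicit mapping; it is only an illustration of the definition.
\begin{verbatim}
# Illustrative sketch of R_n (names are ours). R is the concrete transition
# relation (a set of pairs), gamma_s1 and gamma_s2 are the concrete states
# abstracted by s1-hat and s2-hat, and s2 is the chosen target in gamma(s2-hat).
def added_concrete_transitions(R, gamma_s1, gamma_s2, s2):
    assert s2 in gamma_s2
    return {(s1, s2)
            for s1 in gamma_s1
            if not any((s1, s) in R for s in gamma_s2)}
\end{verbatim}
Each choice of $s_{2} \in \gamma(\hat{s}_{2})$ yields one element of $K_{min}$; since $(\hat{s}_{1},\hat{s}_{2}) \notin R_{must}$, at least one concrete source lacks a transition into $\gamma(\hat{s}_{2})$, which is consistent with the bounds $1 \leq |R_{n}| \leq |S|$ used in Prop.~\ref{prop:add_must}.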
\begin{figure} \caption{\emph{AddMay}} \label{fig:AddMay} \end{figure} \begin{prop} \label{prop:AddMay} For any $\hat{M}^{\prime} = AddMay(\hat{M},\hat{r}_{n})$, it holds that $\hat{d}(\hat{M},\hat{M}^{\prime}) = 1$.\qed \end{prop} \begin{defi} \label{def:add_may_ks} Let $M = (S,S_{0},R,L)$ be a KS and let $\alpha(M) = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ be the abstract KMTS derived from $M$ as in Def.~\ref{def:abs_kmts}. Also, let $\hat{M}^{\prime} = AddMay(\alpha(M),\hat{r}_{n})$ for some $\hat{r}_{n} = (\hat{s}_{1},\hat{s}_{2}) \notin R_{may}$. The set $K_{min} \subseteq \gamma(\hat{M}^{\prime})$ of all KSs whose structural distance $d$ from $M$ is minimized is given by: \begin{equation} K_{min} = \{M^{\prime} \mid M^{\prime} = (S, S_{0}, R \cup \{r_{n}\}, L)\} \end{equation} where $r_{n} \in R_{n}$ and $R_{n} = \{r_{n}=(s_{1},s_{2}) \mid s_{1} \in \gamma(\hat{s}_{1}), s_{2} \in \gamma(\hat{s}_{2})$ and $r_{n} \notin R\}$.\qed \end{defi} \begin{prop} \label{prop:add_may} For all $M^{\prime} \in K_{min}$, it holds that $d(M,M^{\prime}) = 1$. \end{prop} \begin{proof} $d(M,M^{\prime}) = |S\Delta S^{\prime}| + |R\Delta R^{\prime}| + \frac{|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})|}{2}$. Because $|S\Delta S^{\prime}| = 0$ and $|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})| = 0$, $d(M,M^{\prime}) = |R\Delta R^{\prime}| = |R - R^{\prime}| + |R^{\prime} - R| = 0 + |\{r_{n}\}| = 1$. So, we have proved that $d(M,M^{\prime}) = 1$. \end{proof} \subsubsection{Removing a must-transition} \begin{defi}[RemoveMust] \label{def:RemoveMust} For a given KMTS $\hat{M} = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ and $\hat{r}_{m} = (\hat{s}_{1},\hat{s}_{2}) \in R_{must}$, $RemoveMust(\hat{M},\hat{r}_{m})$ is the KMTS $\hat{M^{\prime}} = (\hat{S}, \hat{S_{0}}, R_{must}^{\prime},$ $R_{may}^{\prime}, \hat{L})$ such that $R_{must}^{\prime} = R_{must} - \{\hat{r}_{m}\}$ and $R_{may}^{\prime} = R_{may} - \{\hat{r}_{m}\}$ if $\left|S_{1}\right| = 1$ or $R_{may}^{\prime} = R_{may}$ if $\left|S_{1}\right| > 1$ for $S_{1} = \{s_{1} \mid s_{1} \in \gamma(\hat{s}_{1})\}$.\qed \end{defi} Removing a must-transition $\hat{r}_{m}$ can, in some special and rather rare cases, also result in the deletion of the may-transition $\hat{r}_{m}$. In fact, this occurs if transitions to the concrete states of $\hat{s}_{2}$ exist from only one of the concrete states of $\hat{s}_{1}$. These two cases for function \emph{RemoveMust} are presented graphically in Fig.~\ref{fig:RemoveMust}. \begin{figure} \caption{\emph{RemoveMust}} \label{fig:RemoveMust} \end{figure} \begin{prop} \label{prop:RemoveMust} For any $\hat{M}^{\prime} = RemoveMust(\hat{M},\hat{r}_{m})$, it holds that $\hat{d}(\hat{M},\hat{M}^{\prime}) = 1$.\qed \end{prop} \begin{defi} \label{def:remove_must_ks} Let $M = (S,S_{0},R,L)$ be a KS and let $\alpha(M) = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ be the abstract KMTS derived from $M$ as in Def.~\ref{def:abs_kmts}. Also, let $\hat{M}^{\prime} = RemoveMust(\alpha(M),\hat{r}_{m})$ for some $\hat{r}_{m} = (\hat{s}_{1},\hat{s}_{2}) \in R_{must}$.
The set $K_{min} \subseteq \gamma(\hat{M}^{\prime})$ of all KSs whose structural distance $d$ from $M$ is minimized is given by: \begin{equation} K_{min} = \{M^{\prime} \mid M^{\prime} = (S, S_{0}, R - R_{m}, L)\} \end{equation} where $R_{m}$ is given for one $s_{1} \in \gamma(\hat{s}_{1})$ as follows: \begin{equation} \nonumber R_{m} = \bigcup_{s_{2} \in \gamma(\hat{s}_{2})} \{(s_{1}, s_{2}) \in R\} \eqno{\qEd} \end{equation} \end{defi} \begin{prop} \label{prop:remove_must} For all $M^{\prime} \in K_{min}$, it holds that $1 \leq d(M,M^{\prime}) \leq \left|S\right|$. \end{prop} \begin{proof} $d(M,M^{\prime}) = |S\Delta S^{\prime}| + |R\Delta R^{\prime}| + \frac{|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})|}{2}$. Because $|S\Delta S^{\prime}| = 0$ and $|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})| = 0$, $d(M,M^{\prime}) = |R\Delta R^{\prime}| = |R - R^{\prime}| + |R^{\prime} - R| = |R_{m}| + 0 = |R_{m}|$. It holds that $|R_{m}| \geq 1$ and $|R_{m}| \leq |S|$. So, we have proved that $1 \leq d(M,M^{\prime}) \leq \left|S\right|$. \end{proof} \subsubsection{Removing a may-transition} \begin{defi}[RemoveMay] \label{def:RemoveMay} For a given KMTS $\hat{M} = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ and $\hat{r}_{m} = (\hat{s}_{1},\hat{s}_{2}) \in R_{may}$, $RemoveMay(\hat{M},\hat{r}_{m})$ is the KMTS $\hat{M^{\prime}} = (\hat{S}, \hat{S_{0}}, R_{must}^{\prime}, R_{may}^{\prime},$ $\hat{L})$ such that $R_{must}^{\prime} = R_{must} - \{\hat{r}_{m}\}$ and $R_{may}^{\prime} = R_{may} - \{\hat{r}_{m}\}$. \qed \end{defi} Def.~\ref{def:RemoveMay} ensures that removing a may-transition $\hat{r}_{m}$ implies the removal of a must-transition, if $\hat{r}_{m}$ is also a must-transition. Otherwise, there are no changes in the set of must-transitions $R_{must}$. Fig.~\ref{fig:RemoveMay} shows how function \emph{RemoveMay} works in both cases. \begin{figure} \caption{\emph{RemoveMay}} \label{fig:RemoveMay} \end{figure} \begin{prop} \label{prop:RemoveMay} For any $\hat{M}^{\prime} = RemoveMay(\hat{M},\hat{r}_{m})$, it holds that $\hat{d}(\hat{M},\hat{M}^{\prime}) = 1$.\qed \end{prop} \begin{defi} \label{def:remove_may_ks} Let $M = (S,S_{0},R,L)$ be a KS and let $\alpha(M) = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ be the abstract KMTS derived from $M$ as in Def.~\ref{def:abs_kmts}. Also, let $\hat{M}^{\prime} = RemoveMay(\alpha(M),\hat{r}_{m})$ for some $\hat{r}_{m} = (\hat{s}_{1},\hat{s}_{2}) \in R_{may}$ with $\hat{s}_{1},\hat{s}_{2} \in \hat{S}$. The KS $M^{\prime} \in \gamma(\hat{M}^{\prime})$ whose structural distance $d$ from $M$ is minimized is given by: \begin{equation} M^{\prime} = (S, S_{0}, R - R_{m}, L) \end{equation} where $R_{m} = \{r_{m}=(s_{1},s_{2}) \mid s_{1} \in \gamma(\hat{s}_{1}), s_{2} \in \gamma(\hat{s}_{2})$ and $r_{m} \in R\}$.\qed \end{defi} \begin{prop} \label{prop:remove_may} For $M^{\prime}$, it holds that $1 \leq d(M,M^{\prime}) \leq \left|S\right|^{2}$. \end{prop} \begin{proof} $d(M,M^{\prime}) = |S\Delta S^{\prime}| + |R\Delta R^{\prime}| + \frac{|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})|}{2}$. Because $|S\Delta S^{\prime}| = 0$ and $|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})| = 0$, $d(M,M^{\prime}) = |R\Delta R^{\prime}| = |R - R^{\prime}| + |R^{\prime} - R| = |R_{m}| + 0 = |R_{m}|$. It holds that $|R_{m}| \geq 1$ and $|R_{m}| \leq |S|^{2}$.
So, we have proved that $1 \leq d(M,M^{\prime}) \leq \left|S\right|^{2}$. \end{proof} \subsubsection{Changing the labeling of a KMTS state} \begin{defi}[ChangeLabel] \label{def:ChangeLabel} For a given KMTS $\hat{M} = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$, a state $\hat{s} \in \hat{S}$ and an atomic CTL formula $\phi$ with $\phi \in 2^{LIT}$, $ChangeLabel(\hat{M},\hat{s},\phi)$ is the KMTS $\hat{M^{\prime}} = (\hat{S}, \hat{S_{0}}, R_{must}, R_{may}, \hat{L^{\prime}})$ such that $\hat{L^{\prime}} = ( \hat{L} - \{\hat{l}_{old}\} ) \cup \{\hat{l}_{new}\}$ for $\hat{l}_{old} = (\hat{s},lit_{old})$ and $\hat{l}_{new} = (\hat{s},lit_{new})$ where $lit_{new} = \hat{L}(\hat{s}) \cup \{ lit \mid lit \in \phi \} - \{ \neg lit \mid lit \in \phi \}$. \qed \end{defi} The basic repair operation \emph{ChangeLabel} makes it possible to repair a model by changing the labeling of a state, thus without inducing any changes in the structure of the model (number of states or transitions). Fig.~\ref{fig:ChangeLabel} presents the application of \emph{ChangeLabel} in a graphical manner. \begin{figure} \caption{\emph{ChangeLabel}} \label{fig:ChangeLabel} \end{figure} \begin{prop} \label{prop:ChangeLabel} For any $\hat{M}^{\prime} = ChangeLabel(\hat{M},\hat{s},\phi)$, it holds that $\hat{d}(\hat{M},\hat{M}^{\prime}) = 1$.\qed \end{prop} \begin{defi} \label{def:change_label_ks} Let $M = (S,S_{0},R,L)$ be a KS and let $\alpha(M) = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ be the abstract KMTS derived from $M$ as in Def.~\ref{def:abs_kmts}. Also, let $\hat{M}^{\prime} = ChangeLabel(\alpha(M),\hat{s},\phi)$ for some $\hat{s} \in \hat{S}$ and $\phi \in 2^{LIT}$. The KS $M^{\prime} \in \gamma(\hat{M}^{\prime})$ whose structural distance $d$ from $M$ is minimized is given by: \begin{equation} M^{\prime} = (S, S_{0}, R, (L - L_{old}) \cup L_{new}) \end{equation} where \begin{equation} \nonumber L_{old} = \{ l_{old} = (s,lit_{old}) \mid s \in \gamma(\hat{s}), s \in S, \neg lit_{old} \not\in \phi \; \text{and} \; l_{old} \in L \} \end{equation} \begin{equation} \nonumber L_{new} = \{ l_{new} = (s,lit_{new}) \mid s \in \gamma(\hat{s}), s \in S, lit_{new} \in \phi \; \text{and} \; l_{new} \notin L \} \end{equation} \qed \end{defi} \begin{prop} \label{prop:change_label} For $M^{\prime}$, it holds that $1 \leq d(M,M^{\prime}) \leq |S|$. \end{prop} \begin{proof} $d(M,M^{\prime}) = |S\Delta S^{\prime}| + |R\Delta R^{\prime}| + \frac{|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})|}{2}$. Because $|S\Delta S^{\prime}| = 0$ and $|R\Delta R^{\prime}| = 0$, $d(M,M^{\prime}) = \frac{|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})|}{2}= \frac{|L_{old}| + |L_{new}|}{2} = |L_{old}| = |L_{new}|$. It holds that $|L_{new}| \geq 1$ and $|L_{new}| \leq |S|$. So, we have proved that $1 \leq d(M,M^{\prime}) \leq |S|$. \end{proof} \subsubsection{Adding a new KMTS state} \begin{defi}[AddState] \label{def:AddState} For a given KMTS $\hat{M} = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ and a state $\hat{s}_{n} \notin \hat{S}$, $AddState(\hat{M},\hat{s}_{n})$ is the KMTS $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}}, R_{must}, R_{may}, \hat{L^{\prime}})$ such that $\hat{S^{\prime}} = \hat{S} \cup \{\hat{s}_{n}\}$ and $\hat{L^{\prime}} = \hat{L} \cup \{\hat{l}_{n}\}$, where $\hat{l}_{n} = (\hat{s}_{n},\bot)$.
\qed \end{defi} The most important issue for function $AddState$ is that the newly created abstract state $\hat{s}_{n}$ is isolated, i.e., there are no incoming or outgoing transitions for this state; additionally, the labeling of this new state is $\bot$. Another conclusion from Def.~\ref{def:AddState} is that the inserted state is not permitted to be initial. Application of function $AddState$ is presented graphically in Fig.~\ref{fig:AddState}. \begin{figure} \caption{\emph{AddState}} \label{fig:AddState} \end{figure} \begin{prop} \label{prop:AddState} For any $\hat{M}^{\prime} = AddState(\hat{M},\hat{s}_{n})$, it holds that $\hat{d}(\hat{M},\hat{M}^{\prime}) = 1$.\qed \end{prop} \begin{defi} \label{def:add_state_ks} Let $M = (S,S_{0},R,L)$ be a KS and let $\alpha(M) = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ be the abstract KMTS derived from $M$ as in Def.~\ref{def:abs_kmts}. Also, let $\hat{M}^{\prime} = AddState(\alpha(M),\hat{s}_{n})$ for some $\hat{s}_{n} \notin \hat{S}$. The KS $M^{\prime} \in \gamma(\hat{M}^{\prime})$ whose structural distance $d$ from $M$ is minimized is given by: \begin{equation} M^{\prime} = (S \cup \{s_{n}\}, S_{0}, R, L \cup \{l_{n}\}) \end{equation} where $s_{n} \in \gamma(\hat{s}_{n})$ and $l_{n} = (s_{n},\bot)$. \qed \end{defi} \begin{prop} \label{prop:add_state} For $M^{\prime}$, it holds that $d(M,M^{\prime}) = 1$. \end{prop} \begin{proof} $d(M,M^{\prime}) = |S\Delta S^{\prime}| + |R\Delta R^{\prime}| + \frac{|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})|}{2}$. Because $|R\Delta R^{\prime}| = 0$ and $|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})| = 0$, $d(M,M^{\prime}) = |S\Delta S^{\prime}| = |S - S^{\prime}| + |S^{\prime} - S| = 0 + |\{s_{n}\}| = 1$. So, we have proved that $d(M,M^{\prime}) = 1$. \end{proof} \subsubsection{Removing a disconnected KMTS state} \begin{defi}[RemoveState] \label{def:RemoveState} For a given KMTS $\hat{M} = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ and a state $\hat{s}_{r} \in \hat{S}$ such that $\forall \hat{s} \in \hat{S} : (\hat{s},\hat{s}_{r}) \not\in R_{may} \, \wedge \, (\hat{s}_{r},\hat{s}) \not\in R_{may}$, $RemoveState(\hat{M},\hat{s}_{r})$ is the KMTS $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}, R_{may}, \hat{L^{\prime}})$ such that $\hat{S^{\prime}} = \hat{S} - \{\hat{s}_{r}\}$, $\hat{S_{0}^{\prime}} = \hat{S_{0}} - \{\hat{s}_{r}\}$ and $\hat{L^{\prime}} = \hat{L} - \{\hat{l}_{r}\}$, where $\hat{l}_{r} = (\hat{s}_{r},lit) \in \hat{L}$. \qed \end{defi} From Def.~\ref{def:RemoveState}, it is clear that the state being removed must be isolated, i.e., there are no may- or must-transitions to or from this state. This means that before applying \emph{RemoveState} to an abstract state, all its incoming and outgoing transitions must have been removed using other basic repair operations. \emph{RemoveState} is also used for the elimination of dead-end states, when such states arise during the repair process. Fig.~\ref{fig:RemoveState} presents the application of \emph{RemoveState} in a graphical manner.
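Complementing the figure, the next sketch (Python, names ours, reusing the set-based KMTS representation from the earlier sketch) illustrates the applicability check and the effect of \emph{RemoveState}; it is an illustration of Def.~\ref{def:RemoveState}, not an implementation.
\begin{verbatim}
# Illustrative sketch of RemoveState; names are ours.
def can_remove_state(kmts, s_r):
    """The state must be disconnected: no may-transition (and hence no
    must-transition) enters or leaves it."""
    return all(s_r not in edge for edge in kmts.may)

def remove_state(kmts, s_r):
    assert can_remove_state(kmts, s_r)
    return KMTS(states=kmts.states - {s_r},
                initial=kmts.initial - {s_r},
                must=kmts.must,
                may=kmts.may,
                label={s: lits for s, lits in kmts.label.items() if s != s_r})
\end{verbatim}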
\begin{figure} \caption{\emph{RemoveState}} \label{fig:RemoveState} \end{figure} \begin{prop} \label{prop:RemoveState} For any $\hat{M}^{\prime} = RemoveState(\hat{M},\hat{s}_{r})$, it holds that $\hat{d}(\hat{M},\hat{M}^{\prime}) = 1$.\qed \end{prop} \begin{defi} \label{def:remove_state_ks} Let $M = (S,S_{0},R,L)$ be a KS and let $\alpha(M) = (\hat{S},\hat{S_{0}}, R_{must}, R_{may}, \hat{L})$ be the abstract KMTS derived from $M$ as in Def.~\ref{def:abs_kmts}. Also, let $\hat{M}^{\prime} = RemoveState(\alpha(M),\hat{s}_{r})$ for some $\hat{s}_{r} \in \hat{S}$ with $\hat{l}_{r} = (\hat{s}_{r},lit) \in \hat{L}$. The KS $M^{\prime} \in \gamma(\hat{M}^{\prime})$ whose structural distance $d$ from $M$ is minimized is given by: \begin{equation} M^{\prime} = (S^{\prime}, S_{0}^{\prime}, R^{\prime}, L^{\prime}) \mbox{ s.t. } S^{\prime} = S - S_{r}, S_{0}^{\prime} = S_{0} - S_{r}, R^{\prime} = R, L^{\prime} = L - L_{r} \end{equation} where $S_{r} = \{ s_{r} \mid s_{r} \in S \mbox{ and } s_{r} \in \gamma(\hat{s}_{r}) \}$ and $L_{r} = \{ l_{r} = (s_{r},lit) \mid l_{r} \in L \}$. \qed \end{defi} \begin{prop} \label{prop:remove_state} For $M^{\prime}$, it holds that $1 \leq d(M,M^{\prime}) \leq |S|$. \end{prop} \begin{proof} $d(M,M^{\prime}) = |S\Delta S^{\prime}| + |R\Delta R^{\prime}| + \frac{|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})|}{2}$. Because $|R\Delta R^{\prime}| = 0$ and $|\pi(L\upharpoonright_{S\cap S^{\prime}})\Delta \pi(L^{\prime}\upharpoonright_{S\cap S^{\prime}})| = 0$, $d(M,M^{\prime}) = |S\Delta S^{\prime}| = |S - S^{\prime}| + |S^{\prime} - S| = |S_{r}| + 0 = |S_{r}|$. It holds that $|S_{r}| \geq 1$ and $|S_{r}| \leq |S|$. So, we have proved that $1 \leq d(M,M^{\prime}) \leq |S|$. \end{proof} \subsubsection{Minimality Of Changes Ordering For Basic Repair Operations} \label{subsec:minimal_basic_ops} The distance metric $d$ of Def.~\ref{def:metric_space} reflects the need to quantify structural changes in the concrete model that are attributed to model repair steps applied to the abstract KMTS. Every such repair step implies multiple structural changes in the concrete KSs, due to the use of abstraction. In this context, our distance metric is an essential means for the effective application of the abstraction in the repair process. Based on the upper bound given by Prop.~\ref{prop:add_must} and all the respective results for the other basic repair operations, we introduce the partial ordering shown in Fig.~\ref{fig:order_basic_ops}. This ordering is used in our \emph{AbstractRepair} algorithm to \emph{heuristically} select at each step the basic repair operation that \textit{generates the KSs with the least changes}. When it is possible to apply more than one basic repair operation with the same upper bound, our algorithm successively uses them until a repair solution is found, in an order based on the computational complexity of their application. \enlargethispage{2\baselineskip} If, instead of our approach, all possible repaired KSs were checked in order to identify the basic repair operation with the minimum changes, this would defeat the purpose of using abstraction, since such a check would inevitably depend on the size of the concrete KSs.
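A compact way to summarize the ordering is by the worst-case bounds proved above. The sketch below (Python; the names and the tie-breaking cost function are ours) shows how a repair engine could rank the applicable operations by these bounds, in the spirit of Fig.~\ref{fig:order_basic_ops}; the ordering actually used by \emph{AbstractRepair} is the one depicted in that figure.
\begin{verbatim}
# Illustrative ranking of the basic repair operations by the upper bounds
# on concrete changes proved in this section (n = |S|); names are ours.
UPPER_BOUND = {
    "AddMay":      lambda n: 1,        # d(M, M') = 1
    "AddState":    lambda n: 1,        # d(M, M') = 1
    "AddMust":     lambda n: n,        # 1 <= d(M, M') <= |S|
    "RemoveMust":  lambda n: n,        # 1 <= d(M, M') <= |S|
    "ChangeLabel": lambda n: n,        # 1 <= d(M, M') <= |S|
    "RemoveState": lambda n: n,        # 1 <= d(M, M') <= |S|
    "RemoveMay":   lambda n: n * n,    # 1 <= d(M, M') <= |S|^2
}

def candidate_order(applicable_ops, n, application_cost):
    """Sort applicable operations by their worst-case number of concrete
    changes; ties are broken by an (assumed) cost of applying them."""
    return sorted(applicable_ops,
                  key=lambda op: (UPPER_BOUND[op](n), application_cost(op)))
\end{verbatim}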
\begin{figure} \caption{Minimality of changes ordering of the set of basic repair operations} \label{fig:order_basic_ops} \end{figure} \section{The Abstract Model Repair Algorithm} \label{sec:alg} The \emph{AbstractRepair} algorithm used in Step 3 of our repair process is a recursive, syntax-directed algorithm, where the syntax for the property $\phi$ in question is that of CTL. The same approach is followed by the SAT model checking algorithm in~\cite{HR04} and a number of model repair solutions applied to concrete KSs~\cite{ZD08,CR09}. In our case, we aim to the repair of an abstract KMTS by successively calling primitive repair functions that handle atomic formulas, logical connectives and CTL operators. At each step, the repair with the least changes for the concrete model among all the possible repairs is applied first. \begin{algorithm}[t] \caption{AbstractRepair} \label{alg:main} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi$ in PNF for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ where $\hat{s}_{c_{i}} \in \hat{S}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. \IF {$\phi$ is $false$} \RETURN FAILURE \ELSIF {$\phi \in LIT$} \RETURN $AbstractRepair_{ATOMIC}(\hat{M},\hat{s},\phi,C)$ \ELSIF {$\phi$ is $\phi_{1} \wedge \phi_{2}$} \RETURN $AbstractRepair_{AND}(\hat{M},\hat{s},\phi,C)$ \ELSIF {$\phi$ is $\phi_{1} \vee \phi_{2}$} \RETURN $AbstractRepair_{OR}(\hat{M},\hat{s},\phi,C)$ \ELSIF {$\phi$ is $OPER\phi_{1}$} \RETURN $AbstractRepair_{OPER}(\hat{M},\hat{s},\phi,C)$ \STATE where $OPER \in \{AX,EX,AU,EU,AF,EF,AG,EG\}$ \ENDIF \end{algorithmic} \end{algorithm} The main routine of \emph{AbstractRepair} is presented in Algorithm~\ref{alg:main}. If the property $\phi$ is not in Positive Normal Form, i.e. negations are applied only to atomic propositions, then we transform it into such a form before applying Algorithm~\ref{alg:main}. An initially empty set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ is passed as an argument in the successive recursive calls of \emph{AbstractRepair}. We note that these constraints can also specify \emph{existing} properties that should be preserved during repair. If $C$ is not empty, then for the returned KMTS $\hat{M}^{\prime}$, it holds that $(\hat{M^{\prime}},\hat{s}_{c_{i}}) \models \phi_{c_{i}}$ for all $(\hat{s}_{c_{i}},\phi_{c_{i}}) \in C$. For brevity, we denote this with $\hat{M}^{\prime} \models C$. We use $C$ in order to handle conjunctive formulas of the form $\phi = \phi_{1} \wedge \phi_{2}$ for some state $\hat{s}$. In this case, \emph{AbstractRepair} is called for the KMTS $\hat{M}$ and property $\phi_{1}$ with $C = \{ (\hat{s},\phi_{2}) \}$. The same is repeated for property $\phi_{2}$ with $C = \{ (\hat{s},\phi_{1}) \}$ and the two results are combined appropriately. 
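To make the use of the constraint set $C$ for conjunctions more concrete, the following sketch (Python, with placeholder functions \texttt{abstract\_repair} and \texttt{kmts\_distance} of our own naming, and with FAILURE represented by \texttt{None}) mirrors the treatment just described; the authoritative description is Algorithm~\ref{alg:AND}.
\begin{verbatim}
# Illustrative sketch of the handling of conjunctions; names are ours.
# abstract_repair(M, s, phi, C) returns a repaired KMTS or None (FAILURE),
# and kmts_distance is the d-hat sketch given earlier.
def repair_and(M, s, phi1, phi2, C, abstract_repair, kmts_distance):
    candidates = []
    for first, second in ((phi1, phi2), (phi2, phi1)):
        M1 = abstract_repair(M, s, first, C)
        if M1 is None:
            continue
        # Repair the second conjunct while constraining the first to keep holding.
        M2 = abstract_repair(M1, s, second, C | {(s, first)})
        if M2 is not None:
            candidates.append(M2)
    if not candidates:
        return None
    # Keep the candidate closest to the original model (cf. MinimallyChanged).
    return min(candidates, key=lambda Mc: kmts_distance(M, Mc))
\end{verbatim}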
For any CTL formula $\phi$ and KMTS state $\hat{s}$, \emph{AbstractRepair} either outputs a KMTS $\hat{M}^{\prime}$ for which $(\hat{M^{\prime}},\hat{s}) \models \phi$ or else returns FAILURE, if such a model cannot be found. This is the case when the algorithm handles conjunctive formulas and a KMTS that simultaneously satisfies all conjuncts cannot be found. \begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{ATOMIC}$} \label{alg:ATOMIC} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi$ where $\phi$ is an atomic formula for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ where $\hat{s}_{c_{i}} \in \hat{S}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. \STATE $\hat{M^{\prime}} := ChangeLabel(\hat{M},\hat{s},\phi)$ \IF {$\hat{M^{\prime}} \models C$} \RETURN $\hat{M^{\prime}}$ \ELSE \RETURN FAILURE \ENDIF \end{algorithmic} \end{algorithm} \begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{OR}$} \label{alg:OR} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi = \phi_{1} \vee \phi_{2}$ for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = ( (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) )$ where $\hat{s}_{c_{i}} \in \hat{S}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$, $\hat{s} \in \hat{S^{\prime}}$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. \STATE $RET_{1} := AbstractRepair(\hat{M},\hat{s},\phi_{1},C)$ \STATE $RET_{2} := AbstractRepair(\hat{M},\hat{s},\phi_{2},C)$ \IF { $RET_{1} \neq FAILURE$ \&\& $RET_{2} \neq FAILURE $ } \STATE $\hat{M}_{1} := RET_{1}$ \STATE $\hat{M}_{2} := RET_{2}$ \STATE $\hat{M^{\prime}} := MinimallyChanged(\hat{M},\hat{M_{1}},\hat{M_{2}})$ \ELSIF { $RET_{1} \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET_{1}$ \ELSIF { $RET_{2} \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET_{2}$ \ELSE \RETURN FAILURE \ENDIF \RETURN $\hat{M}^{\prime}$ \end{algorithmic} \end{algorithm} \begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{AND}$} \label{alg:AND} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi = \phi_{1} \wedge \phi_{2}$ for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = ( (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) )$ where $\hat{s}_{c_{i}} \in \hat{S}$ and $\phi_{c_{i}}$ is a CTL formula. 
\ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$, $\hat{s} \in \hat{S^{\prime}}$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. \STATE $RET_{1} := AbstractRepair(\hat{M},\hat{s},\phi_{1},C)$ \STATE $RET_{2} := AbstractRepair(\hat{M},\hat{s},\phi_{2},C)$ \STATE $C_{1} := C \cup \{ (\hat{s},\phi_{1}) \}$, $C_{2} := C \cup \{(\hat{s},\phi_{2})\}$ \STATE $RET_{1}^{\prime} := FAILURE$, $RET_{2}^{\prime} := FAILURE$ \IF { $RET_{1} \neq FAILURE$ } \STATE $\hat{M}_{1} := RET_{1}$ \STATE $RET_{1}^{\prime} := AbstractRepair(\hat{M}_{1},\hat{s},\phi_{2},C_{1})$ \IF { $RET_{1}^{\prime} \neq FAILURE$ } \STATE $\hat{M}_{1}^{\prime} := RET_{1}^{\prime}$ \ENDIF \ENDIF \IF { $RET_{2} \neq FAILURE$ } \STATE $\hat{M}_{2} := RET_{2}$ \STATE $RET_{2}^{\prime} := AbstractRepair(\hat{M}_{2},\hat{s},\phi_{1},C_{2})$ \IF { $RET_{2}^{\prime} \neq FAILURE$ } \STATE $\hat{M}_{2}^{\prime} := RET_{2}^{\prime}$ \ENDIF \ENDIF \IF { $RET_{1}^{\prime} \neq FAILURE$ \&\& $RET_{2}^{\prime} \neq FAILURE $ } \STATE $\hat{M^{\prime}} := MinimallyChanged(\hat{M},\hat{M}_{1}^{\prime},\hat{M}_{2}^{\prime})$ \ELSIF { $RET_{1}^{\prime} \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET_{1}^{\prime}$ \ELSIF { $RET_{2}^{\prime} \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET_{2}^{\prime}$ \ELSE \RETURN FAILURE \ENDIF \RETURN $\hat{M}^{\prime}$ \end{algorithmic} \end{algorithm} \begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{AG}$} \label{alg:AG} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi = AG\phi_{1}$ for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ where $\hat{s}_{c_{i}} \in \hat{S}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. \IF {$(\hat{M},\hat{s}) \not\models \phi_{1}$} \STATE $RET := AbstractRepair(\hat{M},\hat{s},\phi_{1},C)$ \IF { $RET == FAILURE$ } \RETURN FAILURE \ELSE \STATE $\hat{M^{\prime}} := RET$ \ENDIF \ELSE \STATE $\hat{M^{\prime}} := \hat{M}$ \ENDIF \FORALL{ reachable states $\hat{s}_{k}$ through may-transitions from $\hat{s}$ such that $(\hat{M^{\prime}},\hat{s}_{k}) \not\models \phi_{1}$ } \STATE $RET := AbstractRepair(\hat{M^{\prime}},\hat{s}_{k},\phi_{1},C)$ \IF { $RET == FAILURE$ } \RETURN FAILURE \ELSE \STATE $\hat{M^{\prime}} := RET$ \ENDIF \ENDFOR \IF { $\hat{M^{\prime}} \models C$ } \RETURN $\hat{M^{\prime}}$ \ENDIF \RETURN FAILURE \end{algorithmic} \end{algorithm} \subsection{Primitive Functions} \label{subsec:alg_prim_func} Algorithm~\ref{alg:ATOMIC} describes $AbstractRepair_{ATOMIC}$, which, for a simple atomic formula, updates the labeling of the input state with the given atomic proposition. Disjunctive formulas are handled by repairing the disjunct leading to the minimum change (Algorithm~\ref{alg:OR}), while conjunctive formulas are handled by the algorithm with the use of constraints (Algorithm~\ref{alg:AND}). Algorithm~\ref{alg:AG} describes the primitive function $AbstractRepair_{AG}$, which is called when $\phi = AG\phi_{1}$.
If $AbstractRepair_{AG}$ is called for a state $\hat{s}$, it recursively calls \emph{AbstractRepair} for $\hat{s}$ and for all reachable states through may-transitions from $\hat{s}$ which do not satisfy $\phi_{1}$. The resulting KMTS $\hat{M}^{\prime}$ is returned, if it does not violate any constraint in $C$. \begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{EX}$} \label{alg:EX} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi = EX\phi_{1}$ for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ where $\hat{s}_{c_{i}} \in \hat{M}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. \IF {there exists $\hat{s}_{1} \in \hat{S}$ such that $(\hat{M},\hat{s}_{1}) \models \phi_{1}$} \FORALL {$\hat{s}_{i} \in \hat{S}$ such that $(\hat{M},\hat{s}_{i}) \models \phi_{1}$} \STATE $\hat{r}_{i} := (\hat{s},\hat{s}_{i})$, $\hat{M^{\prime}} := AddMust(\hat{M},\hat{r}_{i})$ \IF {$\hat{M^{\prime}} \models C$} \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDFOR \ELSE \FORALL{direct must-reachable states $\hat{s}_{i}$ from $\hat{s}$ such that $(\hat{M},\hat{s}_{i}) \not\models \phi_{1}$} \STATE $RET := AbstractRepair(\hat{M},\hat{s}_{i},\phi_{1},C)$ \IF { $RET \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET$ \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDFOR \STATE $\hat{M^{\prime}} := AddState(\hat{M},\hat{s}_{n})$, $\hat{r}_{n} := (\hat{s},\hat{s}_{n})$, $\hat{M^{\prime}} := AddMust(\hat{M^{\prime}},\hat{r}_{n})$ \STATE $\hat{r}_{n} := (\hat{s}_{n},\hat{s}_{n})$ \STATE $\hat{M^{\prime}} := AddMay(\hat{M^{\prime}},\hat{r}_{n})$ \STATE $RET := AbstractRepair(\hat{M^{\prime}},\hat{s}_{n},\phi_{1},C)$ \IF { $RET \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET$ \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDIF \RETURN FAILURE \end{algorithmic} \end{algorithm} $AbstractRepair_{EX}$ presented in Algorithm~\ref{alg:EX} is the primitive function for handling properties of the form $EX\phi_{1}$ for some state $\hat{s}$. At first, $AbstractRepair_{EX}$ attempts to repair the KMTS by adding a must-transition from $\hat{s}$ to a state that satisfies property $\phi_{1}$. If a repaired KMTS is not found, then \emph{AbstractRepair} is recursively called for an immediate successor of $\hat{s}$ through a must-transition, such that $\phi_{1}$ is not satisfied. If a constraint in $C$ is violated, then (i) a new state is added, (ii) \emph{AbstractRepair} is called for the new state and (iii) a must-transition from $\hat{s}$ to the new state is added. The resulting KMTS is returned by the algorithm if all constraints of $C$ are satisfied. 
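The cascade of attempts in $AbstractRepair_{EX}$ can be summarized as follows (Python sketch, names ours; \texttt{sat} stands for a 3-valued model checking call, \texttt{holds} for checking the constraint set, and FAILURE is \texttt{None}); the authoritative description remains Algorithm~\ref{alg:EX}.
\begin{verbatim}
# Illustrative sketch of the strategy of AbstractRepair_EX; names are ours.
def add_must(M, r):
    return KMTS(M.states, M.initial, M.must | {r}, M.may | {r}, M.label)

def repair_ex(M, s, phi1, C, abstract_repair, sat, holds):
    # 1. Prefer a single AddMust towards a state that already satisfies phi1.
    for t in M.states:
        if sat(M, t, phi1):
            M1 = add_must(M, (s, t))
            if holds(M1, C):
                return M1
    # 2. Otherwise, repair phi1 at some direct must-successor of s.
    for (s1, t) in M.must:
        if s1 == s and not sat(M, t, phi1):
            M1 = abstract_repair(M, t, phi1, C)
            if M1 is not None:
                return M1
    # 3. Last resort: add a fresh state (an empty label set stands in for the
    #    bottom labeling here), connect s to it with a must-transition, give it
    #    a may self-loop, and repair phi1 on it.
    s_n = object()
    M1 = KMTS(M.states | {s_n}, M.initial, M.must | {(s, s_n)},
              M.may | {(s, s_n), (s_n, s_n)}, {**M.label, s_n: frozenset()})
    M2 = abstract_repair(M1, s_n, phi1, C)
    return M2 if M2 is not None and holds(M2, C) else None
\end{verbatim}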
\begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{AX}$} \label{alg:AX} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi = AX\phi_{1}$ for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ where $\hat{s}_{c_{i}} \in \hat{M}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. \STATE $\hat{M^{\prime}} := \hat{M}$ \STATE $RET := FAILURE$ \FORALL {direct may-reachable states $\hat{s}_{i}$ from $\hat{s}$ with $(\hat{s},\hat{s}_{i}) \in R_{may}$} \IF {$(\hat{M^{\prime}},\hat{s}_{i}) \not\models \phi_{1}$} \STATE $RET := AbstractRepair(\hat{M^{\prime}},\hat{s}_{i},\phi_{1},C)$ \IF {$RET == FAILURE$} \STATE BREAK \ENDIF \STATE $\hat{M^{\prime}} := RET$ \ENDIF \ENDFOR \IF {$RET \neq FAILURE$} \RETURN $\hat{M^{\prime}}$ \ENDIF \STATE $\hat{M^{\prime}} := \hat{M}$ \FORALL {direct may-reachable states $\hat{s}_{i}$ from $\hat{s}$ with $\hat{r}_{i} := (\hat{s},\hat{s}_{i}) \in R_{may}$} \IF {$(\hat{M^{\prime}},\hat{s}_{i}) \not\models \phi_{1}$} \STATE $\hat{M^{\prime}} := RemoveMay(\hat{M^{\prime}},\hat{r}_{i})$ \ENDIF \ENDFOR \IF {there exists direct may-reachable state $\hat{s}_{1}$ from $\hat{s}$ such that $(\hat{s},\hat{s}_{1}) \in R_{may}$} \IF {$\hat{M^{\prime}} \models C$} \RETURN $\hat{M^{\prime}}$ \ENDIF \ELSE \FORALL {$\hat{s}_{j} \in \hat{S}$ such that $(\hat{M^{\prime}},\hat{s}_{j}) \models \phi_{1}$} \STATE $\hat{r}_{j} := (\hat{s},\hat{s}_{j})$, $\hat{M^{\prime}} := AddMay(\hat{M^{\prime}},\hat{r}_{j})$ \IF {$\hat{M^{\prime}} \models C$} \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDFOR \STATE $\hat{M^{\prime}} := AddState(\hat{M},\hat{s}_{n})$ \IF {$\hat{s}_{n}$ is a dead-end state} \STATE $\hat{r}_{n} := (\hat{s}_{n},\hat{s}_{n})$, $\hat{M^{\prime}} := AddMay(\hat{M^{\prime}},\hat{r}_{n})$ \ENDIF \STATE $RET := AbstractRepair(\hat{M^{\prime}},\hat{s}_{n},\phi_{1},C)$ \IF { $RET \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET$, $\hat{r}_{n} := (\hat{s},\hat{s}_{n})$, $\hat{M^{\prime}} := AddMay(\hat{M^{\prime}},\hat{r}_{n})$ \IF {$\hat{M^{\prime}} \models C$} \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDIF \ENDIF \RETURN FAILURE \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:AX} presents primitive function $AbstractRepair_{AX}$ which is used when $\phi = AX\phi_{1}$. Firstly, $AbstractRepair_{AX}$ tries to repair the KMTS by applying $AbstractRepair$ for all direct may-successors $\hat{s}_{i}$ of $\hat{s}$ which do not satisfy property $\phi_{1}$, and in the case that all the constraints are satisfied the new KMTS is returned by the function. If such states do not exist or a constraint is violated, all may-transitions $(\hat{s},\hat{s}_{i})$ for which $(\hat{M},\hat{s}_{i}) \not\models \phi_{1}$, are removed. If there are states $\hat{s}_{i}$ such that $r_{m} := (\hat{s},\hat{s}_{i}) \in R_{may}$ and all constraints are satisfied then a repaired KMTS has been produced and it is returned by the function. Otherwise, a repaired KMTS results by the application of $AddMay$ from $\hat{s}$ to all states $\hat{s}_{j}$ which satisfy $\phi_{1}$. 
If any constraint is violated, then the KMTS is repaired by adding a new state, applying $AbstractRepair$ to this state for property $\phi_{1}$ and adding a may-transition from $\hat{s}$ to this state. If all constraints are satisfied, the repaired KMTS is returned. \begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{EG}$} \label{alg:EG} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi = EG\phi_{1}$ for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ where $\hat{s}_{c_{i}} \in \hat{S}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. \STATE $\hat{M}_{1} := \hat{M}$ \IF {$(\hat{M},\hat{s}) \not\models \phi_{1}$} \STATE $RET := AbstractRepair(\hat{M},\hat{s},\phi_{1},C)$ \IF { $RET == FAILURE$ } \RETURN FAILURE \ENDIF \STATE $\hat{M}_{1} := RET$ \ENDIF \WHILE {there exists maximal path $\pi_{must} := [\hat{s}_{1},\hat{s}_{2},...]$ such that $\forall \hat{s}_{i} \in \pi_{must}$ it holds that $(\hat{M}_{1},\hat{s}_{i}) \models \phi_{1}$} \STATE $\hat{r}_{1} := (\hat{s},\hat{s}_{1})$, $\hat{M^{\prime}} := AddMust(\hat{M}_{1},\hat{r}_{1})$ \IF { $\hat{M^{\prime}} \models C$ } \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDWHILE \WHILE {there exists maximal path $\pi_{must} := [\hat{s},\hat{s}_{1},\hat{s}_{2},...]$ such that $\forall \hat{s}_{i} \neq \hat{s} \in \pi_{must}$ it holds that $(\hat{M}_{1},\hat{s}_{i}) \not\models \phi_{1}$} \STATE $\hat{M^{\prime}} := \hat{M}_{1}$ \FORALL {$\hat{s}_{i} \in \pi_{must}$} \IF {$(\hat{M}_{1},\hat{s}_{i}) \not\models \phi_{1}$} \STATE $RET := AbstractRepair(\hat{M^{\prime}},\hat{s}_{i},\phi_{1},C)$ \IF { $RET \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET$ \ELSE \STATE continue to next path \ENDIF \ENDIF \ENDFOR \RETURN $\hat{M^{\prime}}$ \ENDWHILE \STATE $\hat{M^{\prime}} := AddState(\hat{M}_{1},\hat{s}_{n})$ \STATE $RET := AbstractRepair(\hat{M^{\prime}},\hat{s}_{n},\phi_{1},C)$ \IF { $RET \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET$ \STATE $\hat{r}_{n} := (\hat{s},\hat{s}_{n})$, $\hat{M^{\prime}} := AddMust(\hat{M^{\prime}},\hat{r}_{n})$ \IF {$\hat{s}_{n}$ is a dead-end state} \STATE $\hat{r}_{n} := (\hat{s}_{n},\hat{s}_{n})$, $\hat{M^{\prime}} := AddMust(\hat{M^{\prime}},\hat{r}_{n})$ \ENDIF \IF { $\hat{M^{\prime}} \models C$ } \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDIF \RETURN FAILURE \end{algorithmic} \end{algorithm} $AbstractRepair_{EG}$ which is presented in Algorithm~\ref{alg:EG} is the primitive function which is called when input CTL property is in the form of $EG\phi_{1}$. Initially, if $\phi_{1}$ is not satisfied at $\hat{s}$ $AbstractRepair$ is called for $\hat{s}$ and $\phi_{1}$, and a KMTS $\hat{M}_{1}$ is produced. At first, a must-transition is added from $\hat{s}$ to a state $\hat{s}_{1}$ of a maximal must-path (i.e. a must-path in which each transition appears at most once) $\pi_{must} := [\hat{s}_{1},\hat{s}_{2},...]$ such that $\forall \hat{s}_{i} \in \pi_{must}$, $(\hat{M}_{1},\hat{s}_{i}) \models \phi_{1}$. If all constraints are satisfied, then the repaired KMTS is returned. 
Otherwise, a KMTS is produced by recursively calling $AbstractRepair$ to all states $\hat{s}_{i} \neq \hat{s}$ of any maximal must-path $\pi_{must} := [\hat{s}_{1},\hat{s}_{2},...]$ with $\forall \hat{s}_{i} \in \pi_{must}$, $(\hat{M}_{1},\hat{s}_{i}) \not\models \phi_{1}$. If there are violated constraints in $C$, then a repaired KMTS is produced by adding a new state, calling $AbstractRepair$ for this state and property $\phi_{1}$ and calling $AddMust$ to insert a must-transition from $\hat{s}$ to the new state. The resulting KMTS is returned by the algorithm, if all constraints in $C$ are satisfied. \begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{AF}$} \label{alg:AF} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi = AF\phi_{1}$ for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ where $\hat{s}_{c_{i}} \in \hat{S}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. \STATE $\hat{M^{\prime}} := \hat{M}$ \WHILE {there exists maximal path $\pi_{may} := [\hat{s},\hat{s}_{1},...]$ such that $\forall \hat{s}_{i} \in \pi_{may}$ it holds that $(\hat{M^{\prime}},\hat{s}_{i}) \not\models \phi_{1}$} \FORALL {$\hat{s}_{i} \in \pi_{may}$} \STATE $RET := AbstractRepair(\hat{M^{\prime}},\hat{s}_{i},\phi_{1},C)$ \IF { $RET \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET$ \STATE continue to next path \ENDIF \ENDFOR \RETURN FAILURE \ENDWHILE \RETURN $\hat{M}^{\prime}$ \end{algorithmic} \end{algorithm} $AbstractRepair_{AF}$ shown in Algorithm~\ref{alg:AF} is called when the CTL formula $\phi$ is in the form of $AF\phi_{1}$. While there is maximal may-path $\pi_{may} := [\hat{s},\hat{s}_{1},...]$ such that $\forall \hat{s}_{i} \in \pi_{may}$, $(\hat{M^{\prime}},\hat{s}_{i}) \not\models \phi_{1}$, $AbstractRepair_{AF}$ tries to obtain a repaired KMTS by recursively calling $AbstractRepair$ to some state $\hat{s}_{i} \in \pi_{may}$. If all constraints are satisfied to the new KMTS, then it is returned as the repaired model. \begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{EF}$} \label{alg:EF} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi = EF\phi_{1}$ for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ where $\hat{s}_{c_{i}} \in \hat{S}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. 
\FORALL {must-reachable states $\hat{s}_{i}$ from $\hat{s}$ with $(\hat{M},\hat{s}_{i}) \not\models \phi_{1}$ or $\hat{s}_{i} := \hat{s}$} \FORALL {$\hat{s}_{k} \in \hat{S}$ such that $(\hat{M},\hat{s}_{k}) \models \phi_{1}$ } \STATE $\hat{r}_{k} := (\hat{s}_{i},\hat{s}_{k})$, $\hat{M^{\prime}} := AddMust(\hat{M},\hat{r}_{k})$ \IF {$\hat{M^{\prime}} \models C$} \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDFOR \ENDFOR \FORALL {must-reachable states $\hat{s}_{i}$ from $\hat{s}$ with $(\hat{M},\hat{s}_{i}) \not\models \phi_{1}$ } \STATE $RET := AbstractRepair(\hat{M},\hat{s}_{i},\phi_{1},C)$ \IF { $RET \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET$ \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDFOR \STATE $\hat{M}_{1} := AddState(\hat{M^{\prime}},\hat{s}_{n})$, $RET := AbstractRepair(\hat{M}_{1},\hat{s}_{n},\phi_{1},C)$ \IF { $RET \neq FAILURE$ } \STATE $\hat{M}_{1} := RET$ \FORALL {must-reachable states $\hat{s}_{i}$ from $\hat{s}$ with $(\hat{M},\hat{s}_{i}) \not\models \phi_{1}$ or $\hat{s}_{i} := \hat{s}$} \STATE $\hat{r}_{i} := (\hat{s}_{i},\hat{s}_{n})$, $\hat{M^{\prime}} := AddMust(\hat{M}_{1},\hat{r}_{i})$ \IF {$\hat{s}_{n}$ is a dead-end state} \STATE $\hat{r}_{n} := (\hat{s}_{n},\hat{s}_{n})$, $\hat{M^{\prime}} := AddMust(\hat{M^{\prime}},\hat{r}_{n})$ \ENDIF \IF { $\hat{M^{\prime}} \models C$ } \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDFOR \ENDIF \RETURN FAILURE \end{algorithmic} \end{algorithm} $AbstractRepair_{EF}$ shown in Algorithm~\ref{alg:EF} is called when the CTL property $\phi$ is in the form $EF\phi_{1}$. Initially, a KMTS is acquired by adding a must-transition from a must-reachable state $\hat{s}_{i}$ from $\hat{s}$ to a state $\hat{s}_{k} \in \hat{S}$ such that $(\hat{M},\hat{s}_{k}) \models \phi_{1}$. If all constraints are satisfied then this KMTS is returned. Otherwise, a KMTS is produced by applying $AbstractRepair$ to a must-reachable state $\hat{s}_{i}$ from $\hat{s}$ for $\phi_{1}$. If none of the constraints is violated then this KMTS is returned. At any other case, a new KMTS is produced by adding a new state $\hat{s}_{n}$, recursively calling $AbstractRepair$ for this state and $\phi_{1}$ and adding a must-transition from $\hat{s}$ or from a must-reachable $\hat{s}_{i}$ from $\hat{s}$ to $\hat{s}_{n}$. If all constraints are satisfied, then this KMTS is returned as a repaired model by the algorithm. \begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{AU}$} \label{alg:AU} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi = A(\phi_{1}U\phi_{2})$ for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ where $\hat{s}_{c_{i}} \in \hat{S}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE. 
\STATE $\hat{M}_{1} := \hat{M}$ \IF {$(\hat{M},\hat{s}) \not\models \phi_{1}$} \STATE $RET := AbstractRepair(\hat{M},\hat{s},\phi_{1},C)$ \IF { $RET == FAILURE$ } \RETURN FAILURE \ELSE \STATE $\hat{M}_{1} := RET$ \ENDIF \ENDIF \WHILE {there exists path $\pi_{may} := [\hat{s}_{1},...,\hat{s}_{m}]$ such that $\forall \hat{s}_{i} \in \pi_{may}$ it holds that $(\hat{M}_{1},\hat{s}_{i}) \models \phi_{1}$ and there does not exist $\hat{r}_{m} := (\hat{s}_{m},\hat{s}_{n}) \in R_{may}$ such that $(\hat{M}_{1},\hat{s}_{n}) \models \phi_{2}$} \FORALL {$\hat{s}_{j} \in \pi_{may}$ for which $(\hat{M}_{1},\hat{s}_{j}) \not\models \phi_{2}$ with $\hat{s}_{j} \neq \hat{s}_{1}$ } \STATE $RET := AbstractRepair(\hat{M}_{1},\hat{s}_{j},\phi_{2},C)$ \IF { $RET \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET$ \STATE continue to next path \ENDIF \ENDFOR \RETURN FAILURE \ENDWHILE \RETURN $\hat{M^{\prime}}$ \end{algorithmic} \end{algorithm} $AbstractRepair_{AU}$ is presented in Algorithm~\ref{alg:AU} and is called when $\phi = A(\phi_{1}U\phi_{2})$. If $\phi_{1}$ is not satisfied at $\hat{s}$, then a KMTS $\hat{M}_{1}$ is produced by applying $AbstractRepair$ to $\hat{s}$ for $\phi_{1}$. Otherwise, $\hat{M}_{1}$ is the same as $\hat{M}$. A new KMTS is produced as follows: for all may-paths $\pi_{may} := [\hat{s}_{1},...,\hat{s}_{m}]$ such that $\forall \hat{s}_{i} \in \pi_{may}$, $(\hat{M}_{1},\hat{s}_{i}) \models \phi_{1}$ and for which there does not exist $\hat{r}_{m} := (\hat{s}_{m},\hat{s}_{n}) \in R_{may}$ with $(\hat{M}_{1},\hat{s}_{n}) \models \phi_{2}$, $AbstractRepair$ is called for property $\phi_{2}$ for some state $\hat{s}_{j} \in \pi_{may}$ with $(\hat{M}_{1},\hat{s}_{j}) \not\models \phi_{2}$. If the resulting KMTS satisfies all constraints, then it is returned as a repair solution. \begin{algorithm}[htb] \floatname{algorithm}{Algorithm} \caption{$AbstractRepair_{EU}$} \label{alg:EU} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE $\hat{M} = (\hat{S}, \hat{S}_{0}, R_{must}, R_{may}, \hat{L})$, $\hat{s} \in \hat{S}$, a CTL property $\phi = E(\phi_{1}U\phi_{2})$ for which $(\hat{M},\hat{s}) \not\models \phi$, and a set of constraints $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ where $\hat{s}_{c_{i}} \in \hat{S}$ and $\phi_{c_{i}}$ is a CTL formula. \ENSURE $\hat{M^{\prime}} = (\hat{S^{\prime}}, \hat{S_{0}^{\prime}}, R_{must}^{\prime}, R_{may}^{\prime}, \hat{L^{\prime}})$ and $(\hat{M^{\prime}},\hat{s}) \models \phi$ or FAILURE.
\STATE $\hat{M}_{1} := \hat{M}$ \IF {$(\hat{M},\hat{s}) \not\models \phi_{1}$} \STATE $RET := AbstractRepair(\hat{M},\hat{s},\phi_{1},C)$ \IF { $RET == FAILURE$ } \RETURN FAILURE \ELSE \STATE $\hat{M}_{1} := RET$ \ENDIF \ENDIF \WHILE { there exists path $\pi_{must} := [\hat{s}_{1},...,\hat{s}_{m}]$ such that $\forall \hat{s}_{i} \in \pi_{must}$, $(\hat{M}_{1},\hat{s}_{i}) \models \phi_{1}$} \FORALL { $\hat{s}_{j} \in \hat{S}$ with $(\hat{M}_{1},\hat{s}_{j}) \models \phi_{2}$} \STATE $\hat{r}_{j} := (\hat{s}_{m},\hat{s}_{j})$, $\hat{M}^{\prime} := AddMust(\hat{M}_{1},\hat{r}_{j})$ \IF { $\hat{M^{\prime}} \models C$ } \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDFOR \ENDWHILE \STATE $\hat{M^{\prime}} := AddState(\hat{M}_{1},\hat{s}_{k})$ \STATE $RET := AbstractRepair(\hat{M^{\prime}},\hat{s}_{k},\phi_{2},C)$ \IF { $RET \neq FAILURE$ } \STATE $\hat{M^{\prime}} := RET$ \STATE $\hat{r}_{n} := (\hat{s},\hat{s}_{k})$, $\hat{M^{\prime}} := AddMust(\hat{M^{\prime}},\hat{r}_{n})$ \IF {$\hat{s}_{k}$ is a dead-end state} \STATE $\hat{r}_{k} := (\hat{s}_{k},\hat{s}_{k})$, $\hat{M^{\prime}} := AddMust(\hat{M^{\prime}},\hat{r}_{k})$ \ENDIF \IF { $\hat{M^{\prime}} \models C$ } \RETURN $\hat{M^{\prime}}$ \ENDIF \ENDIF \RETURN FAILURE \end{algorithmic} \end{algorithm} $AbstractRepair_{EU}$, presented in Algorithm~\ref{alg:EU}, is called when the input CTL formula $\phi$ is of the form $E(\phi_{1}U\phi_{2})$. Firstly, if $\phi_{1}$ is not satisfied at $\hat{s}$, then $AbstractRepair$ is called for $\hat{s}$ and $\phi_{1}$ and a KMTS $\hat{M}_{1}$ is produced for which $(\hat{M}_{1},\hat{s}) \models \phi_{1}$. Otherwise, $\hat{M}_{1}$ is the same as $\hat{M}$. A new KMTS is produced as follows: for a must-path $\pi_{must} := [\hat{s}_{1},...,\hat{s}_{m}]$ such that $\forall \hat{s}_{i} \in \pi_{must}$, $(\hat{M}_{1},\hat{s}_{i}) \models \phi_{1}$ and for a state $\hat{s}_{j} \in \hat{S}$ with $(\hat{M}_{1},\hat{s}_{j}) \models \phi_{2}$, a must-transition is added from $\hat{s}_{m}$ to $\hat{s}_{j}$. If all constraints are satisfied then the new KMTS is returned. Alternatively, a KMTS is produced by adding a new state $\hat{s}_{k}$, recursively calling $AbstractRepair$ for $\phi_{2}$ and $\hat{s}_{k}$, and adding a must-transition from $\hat{s}$ to $\hat{s}_{k}$. If no constraint is violated, then this is a repaired KMTS and it is returned by the function. \subsection{Properties of the Algorithm} \label{subsec:alg_props} \emph{AbstractRepair} is \emph{well-defined}~\cite{BGS07}, in the sense that the algorithm always makes progress and eventually returns either a KMTS $\hat{M}^{\prime}$ such that $(\hat{M}^\prime,\hat{s}) \models \phi$ or FAILURE, for any input $\hat{M}$, $\phi$ and $C$ with $(\hat{M},\hat{s}) \not\models \phi$. Moreover, the algorithm steps are well-ordered, as opposed to existing concrete model repair solutions~\cite{CR11,ZD08} that entail nondeterministic behavior. \subsubsection{Soundness} \label{subsubsec:alg_soundness} \begin{lem} \label{theor:sound_help} Let $\hat{M}$ be a KMTS, $\phi$ a CTL formula with $(\hat{M},\hat{s}) \not\models \phi$ for some state $\hat{s}$ of $\hat{M}$, and let $C = \{ (\hat{s}_{c_{1}},\phi_{c_{1}}), (\hat{s}_{c_{2}},\phi_{c_{2}}), ..., (\hat{s}_{c_{n}},\phi_{c_{n}}) \}$ be a set of constraints with $(\hat{M},\hat{s}_{c_{i}}) \models \phi_{c_{i}}$ for all $(\hat{s}_{c_{i}},\phi_{c_{i}}) \in C$.
If $AbstractRepair(\hat{M},\hat{s},\phi,C)$ returns a KMTS $\hat{M}^{\prime}$, then $(\hat{M}^{\prime},\hat{s}) \models \phi$ and $(\hat{M}^{\prime},\hat{s}_{c_{i}}) \models \phi_{c_{i}}$ for all $(\hat{s}_{c_{i}},\phi_{c_{i}}) \in C$. \end{lem} \begin{proof} We use structural induction on $\phi$. For brevity, we write $\hat{M} \models C$ to denote that $(\hat{M},\hat{s}_{c_{i}}) \models \phi_{c_{i}}$, for all $(\hat{s}_{c_{i}},\phi_{c_{i}}) \in C$. \paragraph{Base Case: } \begin{itemize} \item if $\phi = \top$, the lemma is trivially true, because $(\hat{M},\hat{s}) \models \phi$. \item if $\phi = \bot$, then $AbstractRepair(\hat{M},\hat{s},\phi,C)$ returns FAILURE at line 2 of Algorithm~\ref{alg:main} and the lemma is also trivially true. \item if $\phi = p \in AP$, $AbstractRepair_{ATOMIC}(\hat{M},\hat{s},p,C)$ is called at line 4 of Algorithm~\ref{alg:main} and an $\hat{M^{\prime}} = ChangeLabel(\hat{M},\hat{s},p)$ is computed at line 1 of Algorithm~\ref{alg:ATOMIC}. Since $p \in \hat{L}^{\prime}(\hat{s})$ in $\hat{M^{\prime}}$, from 3-valued semantics of CTL over KMTSs we have $(\hat{M^{\prime}},\hat{s}) \models \phi$. Algorithm~\ref{alg:ATOMIC} returns $\hat{M^{\prime}}$ at line 3, if and only if $\hat{M}^{\prime} \models C$ and the lemma is true. \end{itemize} \paragraph{Induction Hypothesis:} For CTL formulae $\phi_{1}, \phi_{2}$, the lemma is true. Thus, for $\phi_{1}$ (resp. $\phi_{2}$), if $AbstractRepair(\hat{M},\hat{s},\phi_{1},C)$ returns a KMTS $\hat{M}^{\prime}$, then $(\hat{M^{\prime}},\hat{s}) \models \phi_{1}$ and $\hat{M}^{\prime} \models C$. \paragraph{Inductive Step:} \begin{itemize} \item if $\phi = \phi_{1} \vee \phi_{2}$, then $AbstractRepair(\hat{M},\hat{s},\phi,C)$ calls $AbstractRepair_{OR}(\hat{M},\hat{s},\phi_{1} \vee \phi_{2},C)$ at line 8 of Algorithm~\ref{alg:main}. From the induction hypothesis, if a KMTS $\hat{M}_{1}$ is returned by $AbstractRepair(\hat{M},\hat{s},\phi_{1},C)$ at line 1 of Algorithm~\ref{alg:OR} and a KMTS $\hat{M}_{2}$ is returned by $AbstractRepair(\hat{M},\hat{s},\phi_{2},C)$ respectively, then $(\hat{M}_{1},\hat{s}) \models \phi_{1}$, $\hat{M}_{1} \models C$ and $(\hat{M}_{2},\hat{s}) \models \phi_{2}$, $\hat{M}_{2} \models C$. $AbstractRepair_{OR}(\hat{M},\hat{s},\phi_{1} \vee \phi_{2},C)$ returns at line 8 of Algorithm~\ref{alg:main} the KMTS $\hat{M^{\prime}}$, which can be either $\hat{M}_{1}$ or $\hat{M}_{2}$. Therefore, $(\hat{M^{\prime}},\hat{s}) \models \phi_{1}$ or $(\hat{M^{\prime}},\hat{s}) \models \phi_{2}$ and $\hat{M^{\prime}} \models C$ in both cases. From 3-valued semantics of CTL, $(\hat{M^{\prime}},\hat{s}) \models \phi_{1} \vee \phi_{2}$ and the lemma is true. \item if $\phi = \phi_{1} \wedge \phi_{2}$, then $AbstractRepair(\hat{M},\hat{s},\phi,C)$ calls $AbstractRepair_{AND}(\hat{M},\hat{s},\phi_{1} \wedge \phi_{2},C)$ at line 6 of Algorithm~\ref{alg:main}. From the induction hypothesis, if at line 1 of Algorithm~\ref{alg:AND} $AbstractRepair(\hat{M},\hat{s},\phi_{1},C)$ returns a KMTS $\hat{M}_{1}$, then $(\hat{M}_{1},\hat{s}) \models \phi_{1}$ and $\hat{M}_{1} \models C$. Consequently, $\hat{M}_{1} \models C_{1}$, where $C_{1} = C \cup \{(\hat{s},\phi_{1})\}$. At line 7, if $AbstractRepair(\hat{M}_{1},\hat{s},\phi_{2},C_{1})$ returns a KMTS $\hat{M}_{1}^{\prime}$, then from the induction hypothesis $(\hat{M}_{1}^{\prime},\hat{s}) \models \phi_{2}$ and $\hat{M}_{1}^{\prime} \models C_{1}$.
In the same manner, if the calls at lines 2 and 12 of Algorithm~\ref{alg:AND} return the KMTSs $\hat{M}_{2}$ and $\hat{M}_{2}^{\prime}$, then from the induction hypothesis $(\hat{M}_{2},\hat{s}) \models \phi_{2}$, $\hat{M}_{2} \models C$ and $(\hat{M}_{2}^{\prime},\hat{s}) \models \phi_{1}$, $\hat{M}_{2}^{\prime} \models C_{2}$ with $C_{2} = C \cup {(\hat{s},\phi_{2})}$. The KMTS $\hat{M^{\prime}}$ at line 6 of Algorithm~\ref{alg:main} can be either $\hat{M}_{1}^{\prime}$ or $\hat{M}_{2}^{\prime}$ and therefore, $(\hat{M^{\prime}},\hat{s}) \models \phi_{1}$, $(\hat{M^{\prime}},\hat{s}) \models \phi_{2}$ and $\hat{M^{\prime}} \models C$. From 3-valued semantics of CTL it holds that $(\hat{M^{\prime}},\hat{s}) \models \phi_{1} \wedge \phi_{2}$ and the lemma is true. \item if $\phi = EX\phi_{1}$, $AbstractRepair(\hat{M},\hat{s},\phi,C)$ calls $AbstractRepair_{EX}(\hat{M},\hat{s},EX\phi_{1},C)$ at line 10 of Algorithm~\ref{alg:main}. If a KMTS $\hat{M}^{\prime}$ is returned at line 5 of Algorithm~\ref{alg:EX}, there is a state $\hat{s}_{1}$ with $(\hat{M},\hat{s}_{1}) \models \phi_{1}$ such that $\hat{M}^{\prime} = AddMust(\hat{M},(\hat{s},\hat{s}_{1}))$ and $\hat{M^{\prime}} \models C$. From 3-valued semantics of CTL, we conclude that $(\hat{M^{\prime}},\hat{s}) \models EX\phi_{1}$. If a $\hat{M}^{\prime}$ is returned at line 11, there is $(\hat{s},\hat{s}_{1}) \in R_{must}$ such that $(\hat{M^{\prime}},\hat{s}_{1}) \models \phi_{1}$ and $\hat{M^{\prime}} \models C$ from the induction hypothesis, since $\hat{M^{\prime}}=AbstractRepair(\hat{M},\hat{s}_{1},\phi_{1},C)$. From 3-valued semantics of CTL, we conclude that $(\hat{M^{\prime}},\hat{s}) \models EX\phi_{1}$. If a $\hat{M}^{\prime}$ is returned at line 18, a must transition $(\hat{s},\hat{s}_{n})$ to a new state has been added and $\hat{M}^{\prime}=AbstractRepair(AddMust(\hat{M},(\hat{s},\hat{s}_{n})),\hat{s}_{n},\phi_{1},C)$. Then, from the induction hypothesis $(\hat{M^{\prime}},\hat{s}_{n}) \models \phi_{1}$, $\hat{M^{\prime}} \models C$ and from 3-valued semantics of CTL, we also conclude that $(\hat{M^{\prime}},\hat{s}) \models EX\phi_{1}$. \item if $\phi = AG\phi_{1}$, $AbstractRepair(\hat{M},\hat{s},\phi,C)$ calls $AbstractRepair_{AG}(\hat{M},\hat{s},AG\phi_{1},C)$ at line 10 of Algorithm~\ref{alg:main}. If $(\hat{M},\hat{s}) \not\models \phi_{1}$ and $AbstractRepair(\hat{M},\hat{s},\phi_{1},C)$ returns a KMTS $\hat{M}_{0}$ at line 2 of Algorithm~\ref{alg:AG}, then from the induction hypothesis $(\hat{M}_{0},\hat{s}) \models \phi_{1}$ and $\hat{M}_{0} \models C$. Otherwise, $\hat{M}_{0} = \hat{M}$ and $(\hat{M}_{0},\hat{s}) \models \phi_{1}$ also hold true. If Algorithm~\ref{alg:AG} returns a $\hat{M}^{\prime}$ at line 16, then $\hat{M}^{\prime} \models C$ and $\hat{M}^{\prime}$ is the result of successive $AbstractRepair(\hat{M_{i}},\hat{s}_{k},\phi_{1},C)$ calls with $\hat{M_{i}}=AbstractRepair(\hat{M}_{i-1},\hat{s}_{k},\phi_{1},C)$ and $i=1, . . .$, for all may-reachable states $\hat{s}_{k}$ from $\hat{s}$ such that $(\hat{M}_{0},\hat{s}_{k}) \not\models \phi_{1}$. From the induction hypothesis, $(\hat{M}^{\prime},\hat{s}_{k}) \models \phi_{1}$ and $\hat{M^{\prime}} \models C$ for all such $\hat{s}_{k}$ and from 3-valued semantics of CTL we conclude that $(\hat{M^{\prime}},\hat{s}) \models AG\phi_{1}$. \end{itemize} \noindent We prove the lemma for all other cases in a similar manner. 
\end{proof} \begin{thm}[Soundness] \label{theor:sound} Let $\hat{M}$ be a KMTS and $\phi$ a CTL formula with $(\hat{M},\hat{s}) \not\models \phi$, for some $\hat{s}$ of $\hat{M}$. If $AbstractRepair(\hat{M},\hat{s},\phi,\emptyset)$ returns a KMTS $\hat{M}^{\prime}$, then $(\hat{M}^{\prime},\hat{s}) \models \phi$. \end{thm} \begin{proof} We use structural induction on $\phi$ and Lemma~\ref{theor:sound_help} in the inductive step for $\phi_{1} \wedge \phi_{2}$. \paragraph{Base Case:} \begin{itemize} \item if $\phi = \top$, Theorem~\ref{theor:sound} is trivially true, because $(\hat{M},\hat{s}) \models \phi$. \item if $\phi = \bot$, then $AbstractRepair(\hat{M},\hat{s},\bot,\emptyset)$ returns FAILURE at line 2 of Algorithm~\ref{alg:main} and the theorem is also trivially true. \item if $\phi = p \in AP$, $AbstractRepair_{ATOMIC}(\hat{M},\hat{s},p,\emptyset)$ is called at line 4 of Algorithm~\ref{alg:main} and an $\hat{M^{\prime}} = ChangeLabel(\hat{M},\hat{s},p)$ is computed at line 1. Because of the fact that $p \in \hat{L}^{\prime}(\hat{s})$ in $\hat{M^{\prime}}$, from 3-valued semantics of CTL over KMTSs we have $(\hat{M^{\prime}},\hat{s}) \models \phi$. Algorithm~\ref{alg:ATOMIC} returns $\hat{M^{\prime}}$ at line 3 because $C$ is empty, and the theorem is true. \end{itemize} \paragraph{Induction Hypothesis:} For CTL formulae $\phi_{1}$, $\phi_{2}$, the theorem is true. Thus, for $\phi_{1}$ (resp. $\phi_{2}$), if $AbstractRepair(\hat{M},\hat{s},\phi_{1},\emptyset)$ returns a KMTS $\hat{M}^{\prime}$, then $(\hat{M^{\prime}},\hat{s}) \models \phi_{1}$. \paragraph{Inductive Step:} \begin{itemize} \item if $\phi = \phi_{1} \vee \phi_{2}$, then $AbstractRepair(\hat{M},\hat{s},\phi,\emptyset)$ calls $AbstractRepair_{OR}(\hat{M},\hat{s},\phi_{1} \vee \phi_{2},\emptyset)$ at line 8 of Algorithm~\ref{alg:main}. From the induction hypothesis, if $AbstractRepair(\hat{M},\hat{s},\phi_{1},\emptyset)$ returns a KMTS $\hat{M}_{1}$ at line 1 of Algorithm~\ref{alg:OR} and $AbstractRepair(\hat{M},\hat{s},\phi_{2},\emptyset)$ returns a KMTS $\hat{M}_{2}$ respectively, then $(\hat{M}_{1},\hat{s}) \models \phi_{1}$ and $(\hat{M}_{2},\hat{s}) \models \phi_{2}$. $AbstractRepair_{OR}(\hat{M},\hat{s},\phi_{1} \vee \phi_{2},\emptyset)$ returns at line 8 of Algorithm~\ref{alg:main} the KMTS $\hat{M^{\prime}}$, which can be either $\hat{M}_{1}$ or $\hat{M}_{2}$. Therefore, $(\hat{M^{\prime}},\hat{s}) \models \phi_{1}$ or $(\hat{M^{\prime}},\hat{s}) \models \phi_{2}$. From 3-valued semantics of CTL, $(\hat{M^{\prime}},\hat{s}) \models \phi_{1} \vee \phi_{2}$ and the theorem is true. \item if $\phi = \phi_{1} \wedge \phi_{2}$, then $AbstractRepair(\hat{M},\hat{s},\phi,\emptyset)$ calls $AbstractRepair_{AND}(\hat{M},\hat{s},\phi_{1} \wedge \phi_{2},\emptyset)$ at line 6 of Algorithm~\ref{alg:main}. From the induction hypothesis, if at line 1 of Algorithm~\ref{alg:AND} $AbstractRepair(\hat{M},\hat{s},\phi_{1},\emptyset)$ returns a KMTS $\hat{M}_{1}$, then $(\hat{M}_{1},\hat{s}) \models \phi_{1}$. Consequently, $\hat{M}_{1} \models C_{1}$, where $C_{1} = \emptyset \cup \{(\hat{s},\phi_{1})\}$. At line 7, if $AbstractRepair(\hat{M}_{1},\hat{s},\phi_{2},C_{1})$ returns a KMTS $\hat{M}_{1}^{\prime}$, then from Lemma~\ref{theor:sound_help} $(\hat{M}_{1}^{\prime},\hat{s}) \models \phi_{2}$ and $\hat{M}_{1}^{\prime} \models C_{1}$.
Likewise, if the calls at lines 2 and 12 of Algorithm~\ref{alg:AND} return the KMTSs $\hat{M}_{2}$ and $\hat{M}_{2}^{\prime}$, then from the induction hypothesis $(\hat{M}_{2},\hat{s}) \models \phi_{2}$ and from Lemma~\ref{theor:sound_help} $(\hat{M}_{2}^{\prime},\hat{s}) \models \phi_{1}$, $\hat{M}_{2}^{\prime} \models C_{2}$ with $C_{2} = \emptyset \cup \{(\hat{s},\phi_{2})\}$. The KMTS $\hat{M^{\prime}}$ at line 6 of Algorithm~\ref{alg:main} can be either $\hat{M}_{1}^{\prime}$ or $\hat{M}_{2}^{\prime}$ and therefore, $(\hat{M^{\prime}},\hat{s}) \models \phi_{1}$ and $(\hat{M^{\prime}},\hat{s}) \models \phi_{2}$. From 3-valued semantics of CTL it holds that $(\hat{M^{\prime}},\hat{s}) \models \phi_{1} \wedge \phi_{2}$ and the theorem is true. \item if $\phi = EX\phi_{1}$, $AbstractRepair(\hat{M},\hat{s},\phi,\emptyset)$ calls $AbstractRepair_{EX}(\hat{M},\hat{s},EX\phi_{1},\emptyset)$ at line 10 of Algorithm~\ref{alg:main}. If a KMTS $\hat{M}^{\prime}$ is returned at line 5 of Algorithm~\ref{alg:EX}, there is a state $\hat{s}_{1}$ with $(\hat{M},\hat{s}_{1}) \models \phi_{1}$ such that $\hat{M}^{\prime} = AddMust(\hat{M},(\hat{s},\hat{s}_{1}))$. From 3-valued semantics of CTL, we conclude that $(\hat{M^{\prime}},\hat{s}) \models EX\phi_{1}$. If an $\hat{M}^{\prime}$ is returned at line 11, there is $(\hat{s},\hat{s}_{1}) \in R_{must}$ such that $(\hat{M^{\prime}},\hat{s}_{1}) \models \phi_{1}$ from the induction hypothesis, since $\hat{M^{\prime}} = AbstractRepair(\hat{M},\hat{s}_{1},\phi_{1},\emptyset)$. From 3-valued semantics of CTL, we conclude that $(\hat{M^{\prime}},\hat{s}) \models EX\phi_{1}$. If an $\hat{M}^{\prime}$ is returned at line 18, a must transition $(\hat{s},\hat{s}_{n})$ to a new state has been added and $\hat{M}^{\prime} = AbstractRepair(AddMust(\hat{M},(\hat{s},\hat{s}_{n})),\hat{s}_{n},\phi_{1},\emptyset)$. Then, from the induction hypothesis $(\hat{M^{\prime}},\hat{s}_{n}) \models \phi_{1}$ and from 3-valued semantics of CTL, we also conclude that $(\hat{M^{\prime}},\hat{s}) \models EX\phi_{1}$. \item if $\phi = AG\phi_{1}$, $AbstractRepair(\hat{M},\hat{s},\phi,\emptyset)$ calls $AbstractRepair_{AG}(\hat{M},\hat{s},AG\phi_{1},\emptyset)$ at line 10 of Algorithm~\ref{alg:main}. If $(\hat{M},\hat{s}) \not\models \phi_{1}$ and $AbstractRepair(\hat{M},\hat{s},\phi_{1},\emptyset)$ returns a KMTS $\hat{M}_{0}$ at line 2 of Algorithm~\ref{alg:AG}, then from the induction hypothesis $(\hat{M}_{0},\hat{s}) \models \phi_{1}$. Otherwise, $\hat{M}_{0} = \hat{M}$ and $(\hat{M}_{0},\hat{s}) \models \phi_{1}$ also holds true. If Algorithm~\ref{alg:AG} returns an $\hat{M}^{\prime}$ at line 16, this KMTS is the result of successive calls of $AbstractRepair(\hat{M_{i}},\hat{s}_{k},\phi_{1},\emptyset)$ with $\hat{M_{i}}=AbstractRepair(\hat{M}_{i-1},\hat{s}_{k},\phi_{1},\emptyset)$ and $i = 1, \ldots$, for all may-reachable states $\hat{s}_{k}$ from $\hat{s}$ such that $(\hat{M}_{0},\hat{s}_{k}) \not\models \phi_{1}$. From the induction hypothesis, $(\hat{M}^{\prime},\hat{s}_{k}) \models \phi_{1}$ for all such $\hat{s}_{k}$ and from 3-valued semantics of CTL we conclude that $(\hat{M^{\prime}},\hat{s}) \models AG\phi_{1}$. \end{itemize} \noindent We prove the theorem for all other cases in the same way. \end{proof} \noindent Theorem~\ref{theor:sound} shows that \emph{AbstractRepair} is \emph{sound} in the sense that if it returns a KMTS $\hat{M}^{\prime}$, then $\hat{M}^{\prime}$ satisfies property $\phi$.
In this case, from the definitions of the basic repair operations, it follows that one or more KSs can be obtained for which $\phi$ holds true. \subsubsection{Semi-completeness} \label{subsubsec:alg_completeness} \begin{defi}[\emph{mr}-CTL] Given a set $AP$ of atomic propositions, we define the syntax of a CTL fragment inductively in Backus-Naur form: \begin{align*} \phi ::= &\bot \, | \, \top \, | \, p \, | \, (\neg \phi) \, | \, (\phi \vee \phi) \, | \, AXp \, | \, EXp \, | \, AFp \\ & | \, EFp \, | \, AGp \, | \, EGp \, | \, A[p \, U \, p] \, | \, E[p \, U \, p] \end{align*} where $p$ ranges over $AP$. \end{defi} \emph{mr}-CTL includes most of the CTL formulae apart from those with nested path quantifiers or conjunction. \begin{thm}[Completeness] \label{theor:complete} Given a KMTS $\hat{M}$ and an \textit{mr}-CTL formula $\phi$ with $(\hat{M},\hat{s}) \not\models \phi$, for some $\hat{s}$ of $\hat{M}$, if there exists a KMTS $\hat{M}^{\prime\prime}$ over the same set $AP$ of atomic propositions with $(\hat{M}^{\prime\prime},\hat{s}) \models \phi$, then $AbstractRepair(\hat{M},\hat{s},\phi,\emptyset)$ returns a KMTS $\hat{M}^{\prime}$ such that $(\hat{M}^{\prime},\hat{s}) \models \phi$. \end{thm} \begin{proof} We prove the theorem using structural induction on $\phi$. \paragraph{Base Case:} \begin{itemize} \item if $\phi = \top$, Theorem~\ref{theor:complete} is trivially true, because for any KMTS $\hat{M}$ it holds that $(\hat{M},\hat{s}) \models \phi$. \item if $\phi = \bot$, then the theorem is trivially true, because there does not exist a KMTS $\hat{M}^{\prime\prime}$ such that $(\hat{M}^{\prime\prime},\hat{s}) \models \phi$. \item if $\phi = p \in AP$, there is a KMTS $\hat{M}^{\prime\prime}$ with $p \in \hat{L}^{\prime\prime}(\hat{s})$ and therefore $(\hat{M}^{\prime\prime},\hat{s}) \models \phi$. Algorithm~\ref{alg:main} calls $AbstractRepair_{ATOMIC}(\hat{M},\hat{s},p,\emptyset)$ at line 4 and an $\hat{M^{\prime}} = ChangeLabel(\hat{M},\hat{s},p)$ is computed at line 1 of Algorithm~\ref{alg:ATOMIC}. Since $C$ is empty, $\hat{M^{\prime}}$ is returned at line 3 and $(\hat{M}^{\prime},\hat{s}) \models \phi$ from 3-valued semantics of CTL. Therefore, the theorem is true. \end{itemize} \paragraph{Induction Hypothesis:} For \emph{mr}-CTL formulae $\phi_{1}$, $\phi_{2}$, the theorem is true. Thus, for $\phi_{1}$ (resp. $\phi_{2}$), if there is a KMTS $\hat{M}^{\prime\prime}$ over the same set $AP$ of atomic propositions with $(\hat{M}^{\prime\prime},\hat{s}) \models \phi_{1}$, then $AbstractRepair(\hat{M},\hat{s},\phi_{1},\emptyset)$ returns a KMTS $\hat{M}^{\prime}$ such that $(\hat{M}^{\prime},\hat{s}) \models \phi_{1}$. \paragraph{Inductive Step:} \begin{itemize} \item if $\phi = \phi_{1} \vee \phi_{2}$, from the 3-valued semantics of CTL a KMTS that satisfies $\phi$ exists if and only if there is a KMTS satisfying either $\phi_{1}$ or $\phi_{2}$. From the induction hypothesis, if there is a KMTS $\hat{M}_{1}^{\prime\prime}$ with $(\hat{M}_{1}^{\prime\prime},\hat{s}) \models \phi_{1}$, $AbstractRepair(\hat{M},\hat{s},\phi_{1},\emptyset)$ at line 1 of Algorithm~\ref{alg:OR} returns a KMTS $\hat{M}_{1}^{\prime}$ such that $(\hat{M}_{1}^{\prime},\hat{s}) \models \phi_{1}$. Respectively, $AbstractRepair(\hat{M},\hat{s},\phi_{2},\emptyset)$ at line 2 of Algorithm~\ref{alg:OR} can return a KMTS $\hat{M}_{2}^{\prime}$ with $(\hat{M}_{2}^{\prime},\hat{s}) \models \phi_{2}$.
In any case, if either $\hat{M}_{1}^{\prime}$ or $\hat{M}_{2}^{\prime}$ exists, for the KMTS $\hat{M}^{\prime}$ that is returned at line 13 of Algorithm~\ref{alg:OR} we have $(\hat{M}^{\prime},\hat{s}) \models \phi_{1}$ or $(\hat{M}^{\prime},\hat{s}) \models \phi_{2}$ and therefore $(\hat{M}^{\prime},\hat{s}) \models \phi$. \item if $\phi = EX\phi_{1}$, from the 3-valued semantics of CTL a KMTS that satisfies $\phi$ at $\hat{s}$ exists if and only if there is a KMTS satisfying $\phi_{1}$ at some direct must-successor of $\hat{s}$. If in the KMTS $\hat{M}$ there is a state $\hat{s}_{1}$ with $(\hat{M},\hat{s}_{1}) \models \phi_{1}$, then the new KMTS $\hat{M}^{\prime} = AddMust(\hat{M},(\hat{s},\hat{s}_{1}))$ is computed at line 3 of Algorithm~\ref{alg:EX}. Since $C$ is empty, $\hat{M}^{\prime}$ is returned at line 5 and $(\hat{M^{\prime}},\hat{s}) \models EX\phi_{1}$. Otherwise, if there is a direct must-successor $\hat{s}_{i}$ of $\hat{s}$, $AbstractRepair(\hat{M},\hat{s}_{i},\phi_{1},\emptyset)$ is called at line 8. From the induction hypothesis, if there is a KMTS $\hat{M}^{\prime\prime}$ with $(\hat{M}^{\prime\prime},\hat{s}_{i}) \models \phi_{1}$, then a KMTS $\hat{M}^{\prime}$ is computed such that $(\hat{M}^{\prime},\hat{s}_{i}) \models \phi_{1}$ and therefore the theorem is true. If there are no must-successors of $\hat{s}$, a new state $\hat{s}_{n}$ is added and subsequently connected with a must-transition from $\hat{s}$. $AbstractRepair$ is then called for $\phi_{1}$ and $\hat{s}_{n}$ as previously, and the theorem also holds true. \item if $\phi = AG\phi_{1}$, from the 3-valued semantics of CTL a KMTS that satisfies $\phi$ at $\hat{s}$ exists if and only if there is a KMTS satisfying $\phi_{1}$ at $\hat{s}$ and at each may-reachable state from $\hat{s}$. $AbstractRepair(\hat{M},\hat{s},\phi_{1},\emptyset)$ is called at line 2 of Algorithm~\ref{alg:AG} and, from the induction hypothesis, if there is a KMTS $\hat{M}_{0}^{\prime}$ with $(\hat{M}_{0}^{\prime},\hat{s}) \models \phi_{1}$, then a KMTS $\hat{M}_{0}$ is computed such that $(\hat{M}_{0},\hat{s}) \models \phi_{1}$. $AbstractRepair$ is subsequently called for $\phi_{1}$ and for all may-reachable $\hat{s}_{k}$ from $\hat{s}$ with $(\hat{M}_{0},\hat{s}_{k}) \not\models \phi_{1}$ one-by-one. From the induction hypothesis, if there is a KMTS $\hat{M}_{i}^{\prime}$ that satisfies $\phi_{1}$ at each such $\hat{s}_{k}$, then all $\hat{M}_{i}=AbstractRepair(\hat{M}_{i-1},\hat{s}_{k},\phi_{1},\emptyset)$, $i = 1, \ldots$, satisfy $\phi_{1}$ at $\hat{s}_{k}$ and the theorem holds true. \end{itemize} \noindent We prove the theorem for all other cases in the same way. \end{proof} \noindent Theorem~\ref{theor:complete} shows that \emph{AbstractRepair} is \emph{semi-complete} with respect to full CTL: if there is a KMTS that satisfies an \emph{mr}-CTL formula $\phi$, then the algorithm finds one such KMTS. \subsection{Complexity Issues} \label{subsec:alg_complex} AMR's complexity analysis is restricted to \emph{mr}-CTL, for which the algorithm has been proved complete. For these formulas, we show that AMR is upper bounded by a polynomial expression in the state space size and the number of may-transitions of the abstract KMTS, and also depends on the length of the \emph{mr}-CTL formula. For CTL formulas with nested path quantifiers and/or conjunction, AMR looks for a repaired model satisfying all conjuncts (constraints), which increases the worst-case execution time exponentially in the state space size of the abstract KMTS.
In general, as shown in~\cite{BK12}, the complexity of model repair algorithms worsens as their degree of completeness increases, but AMR has the advantage of working exclusively over an abstract model with a reduced state space compared to its concrete counterpart. Our complexity analysis for \emph{mr}-CTL is based on the following results. For an abstract KMTS $\hat{M} = (\hat{S}, \hat{S_{0}},$ $R_{must}, R_{may}, \hat{L})$ and an \emph{mr}-CTL property $\phi$, (i) 3-valued CTL model checking is performed in $O(|\phi| \cdot (|\hat{S}|+|R_{may}|))$~\cite{GHJ01}, (ii) Depth First Search (DFS) of states reachable from $\hat{s} \in \hat{S}$ is performed in $O(|\hat{S}|+|R_{may}|)$ in the worst case or in $O(|\hat{S}|+|R_{must}|)$ when only must-transitions are accessed, (iii) finding a maximal path from $\hat{s} \in \hat{S}$ using Breadth First Search (BFS) is performed in $O(|\hat{S}|+|R_{may}|)$ for may-paths and in $O(|\hat{S}|+|R_{must}|)$ for must-paths. We analyze the computational cost for each of AMR's primitive functions: \begin{itemize} \item if $\phi = p \in AP$, $AbstractRepair_{ATOMIC}$ is called and the operation $ChangeLabel$ is applied, which is in $O(1)$. \item if $\phi = EX\phi_{1}$, then $AbstractRepair_{EX}$ is called and the applied operations with the highest cost are: (1) finding a state satisfying $\phi_{1}$, which depends on the cost of 3-valued CTL model checking and is in $O(|\hat{S}| \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$, (2) finding a must-reachable state, which is in $O(|\hat{S}| + |R_{must}|)$. These operations are called at most once and the overall complexity for this primitive function is therefore in $O(|\hat{S}| \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$. \item if $\phi = AX\phi_{1}$, then $AbstractRepair_{AX}$ is called and the most costly operations are: (1) finding a may-reachable state, which is in $O(|\hat{S}| + |R_{may}|)$, and (2) checking if a state satisfies $\phi_{1}$, which is in $O(|\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$. These operations are called at most $|\hat{S}|$ times and the overall bound class is $O(|\hat{S}| \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$. \item if $\phi = EF\phi_{1}$, $AbstractRepair_{EF}$ is called and the operations with the highest cost are: (1) finding a must-reachable state, which is in $O(|\hat{S}| + |R_{must}|)$, (2) checking if a state satisfies $\phi_{1}$ with its bound class being $O(|\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$ and (3) finding a state that satisfies $\phi_{1}$, which is in $O(|\hat{S}| \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$. These three operations are called at most $|\hat{S}|$ times and consequently, the overall bound class is $O(|\hat{S}|^{2} \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$. \item if $\phi = AF\phi_{1}$, $AbstractRepair_{AF}$ is called and the most costly operation is: finding a maximal may-path violating $\phi_{1}$ in all states, which is in $O(|\hat{S}| \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$. This operation is called at most $|\hat{S}|$ times and therefore, the overall bound class is $O(|\hat{S}|^2 \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$.
\end{itemize} In the same way, it is easy to show that: (i) if $\phi = EG\phi_{1}$, then $AbstractRepair_{EG}$ is in $O(|\hat{S}| \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{must}|))$, (ii) if $\phi = AG\phi_{1}$, then $AbstractRepair_{AG}$ is in $O(|\hat{S}| \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$, (iii) if $\phi = E(\phi_{1}U\phi_{2})$, then the bound class of $AbstractRepair_{EU}$ is $O(|\hat{S}| \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{must}|))$, (iv) if $\phi = A(\phi_{1}U\phi_{2})$ then $AbstractRepair_{AU}$ is in $O(|\hat{S}|^2 \cdot |\phi_{1}| \cdot (|\hat{S}|+|R_{may}|))$. For an \emph{mr}-CTL property $\phi$, the main body of the algorithm is called at most $|\phi|$ times and the overall bound class of the AMR algorithm is $O(|\hat{S}|^2 \cdot |\phi|^{2} \cdot (|\hat{S}|+|R_{may}|))$. \subsection{Application} \label{sec:app} We present the application of \emph{AbstractRepair} on the ADO system from Section~\ref{sec:mc}. After the first two steps of our repair process, \emph{AbstractRepair} is called for the KMTS $\alpha_{\mathit{Refined}}(M)$ that is shown in Fig.~\ref{fig:ado_refined}, the state $\hat{s}_{01}$ and the CTL property $\phi = AGEXq$. \emph{AbstractRepair} calls $AbstractRepair_{AG}$ with arguments $\alpha_{\mathit{Refined}}(M)$, $\hat{s}_{01}$ and $AGEXq$. The $AbstractRepair_{AG}$ algorithm at line 10 triggers a recursive call of \emph{AbstractRepair} with the same arguments. Eventually, $AbstractRepair_{EX}$ is called with arguments $\alpha_{\mathit{Refined}}(M)$, $\hat{s}_{01}$ and $EXq$, which in turn calls \emph{AddMust} at line 3, thus adding a must-transition from $\hat{s}_{01}$ to $\hat{s}_{1}$. \emph{AbstractRepair} terminates by returning a KMTS $\hat{M^{\prime}}$ that satisfies $\phi = AGEXq$. The repaired KS $M^{\prime}$ is the single element in the set of KSs derived by the concretization of $\hat{M^{\prime}}$ (cf. Def.~\ref{def:add_must_ks}). The execution steps of \emph{AbstractRepair} and the obtained repaired KMTS and KS are shown in Fig.~\ref{fig:ado_repair_process} and Fig.~\ref{fig:ado_repaired} respectively. \begin{figure} \caption{Repair of ADO system using abstraction.} \label{fig:ado_repair_process} \label{fig:ado_repaired} \label{fig:ado_repair} \end{figure} \noindent Although the ADO is not a system with a large state space, this example shows that the repair process is accelerated by the proposed use of abstraction. If, on the other hand, model repair were applied directly to the concrete model, new transitions would have been inserted from all the states labeled with $\neg open$ to the one labeled with \emph{open}. In the ADO, we have seven such states, but in a system with a large state space this number can be significantly higher. The repair of such a model without the use of abstraction would be impractical. \section{Experimental Results: The Andrew File System 1 (AFS1) Protocol} \label{sec:exp} In this section, we provide experimental results for the relative performance of a prototype implementation of our AMR algorithm in comparison with a prototype implementation of a concrete model repair solution~\cite{ZD08}. The results serve as a proof of concept for the use of abstraction in model repair and demonstrate the practical utility of our approach. As a model we use a KS for the Andrew File System Protocol 1 (AFS1)~\cite{WV95}, which has been repaired for a specific property in~\cite{ZD08}. AFS1 is a client-server cache coherence protocol for a distributed file system.
Four values are used for the client's belief about a file (nofile, valid, invalid, suspect) and three values for the server's belief (valid, invalid, none). A property that is not satisfied by the AFS1 protocol, expressed in CTL, is: \[ AG((Server.belief = valid) \rightarrow (Client.belief = valid)) \] \begin{figure} \caption{The KS and the KMTS of the AFS1 protocol after the 2nd refinement step.} \label{fig:afs1_refined2_ks} \label{fig:afs1_refined2_kmts} \label{fig:afs1_ks_kmts} \end{figure} \begin{figure} \caption{The repaired KMTS and KS of the AFS1 protocol.} \label{fig:afs1_repaired_kmts} \label{fig:afs1_repaired_ks} \label{fig:afs1_repaired_ks_kmts} \end{figure} We define the atomic proposition $p$ as $Server.belief = valid$ and $q$ as $Client.belief = valid$, and the property is thus written as $AG(p \rightarrow q)$. The KS for the AFS1 protocol is depicted in Fig.~\ref{fig:afs1_refined2_ks}. State colors show how the states are abstracted in the KMTS of Fig.~\ref{fig:afs1_refined2_kmts}, which is derived after the 2nd refinement step of our AMR framework (Fig.~\ref{fig:abs_repair}). The shown KMTS and the CTL property of interest are given as input to our prototype AMR implementation. To obtain larger models of AFS1, we have extended the original model by adding one more possible value for three model variables. Three new models are obtained with gradually increasing state space size. The results of our experiments are presented in Table~\ref{table:exp_results}. The time needed by the AMR prototype to repair the original AFS1 model and its extensions is between 124 and 836 times smaller than the time needed for concrete model repair. The repaired KMTS and KS for the original AFS1 model are shown in Fig.~\ref{fig:afs1_repaired_ks_kmts}. An interesting observation from the application of the AMR algorithm on the repair of the AFS1 KS is that the distance $d$ (cf. Def.~\ref{def:metric_space}) of the repaired KS from the original KS is less than the corresponding distance obtained from the concrete model repair algorithm in~\cite{ZD08}. This result demonstrates in practice the effect of the minimality-of-changes ordering on which the AMR algorithm is based (cf. Fig.~\ref{fig:order_basic_ops}). \begin{table}[t] \begin{center} \begin{tabular}{ | p{4cm} | p{2cm} | p{2cm} | p{2cm} | p{2cm} |} \hline Models & Concrete States & Concr. Repair (Time in sec.) & AMR (Time in sec.) & Improvement (times) \\ \hline $AFS1$ & $26$ & $17.4$ & $0.14$ & $124$ \\ \hline $AFS1 (Extension 1)$ & $30$ & $24.9$ & $0.14$ & $178$ \\ \hline $AFS1 (Extension 2)$ & $34$ & $35.0$ & $0.14$ & $250$ \\ \hline $AFS1 (Extension 3)$ & $38$ & $117.0$ & $0.14$ & $836$ \\ \hline \end{tabular} \end{center} \caption{Experimental results of AMR with respect to concrete repair} \label{table:exp_results} \end{table} \section{Related Work} \label{sec:relwork} To the best of our knowledge, this is the first work that suggests the use of abstraction as a means to counter the state space explosion in search of a Model Repair solution. However, abstraction and in particular abstract interpretation has been used in \emph{program synthesis}~\cite{VYY2010}, a different but related problem to Model Repair. Program synthesis refers to the automatic generation of a program based on a given specification. Another related problem where abstraction has been used is that of \emph{trigger querying}~\cite{AK14}: given a system $M$ and a formula $\phi$, find the set of scenarios that trigger $\phi$ in $M$.
The related work in the area of \emph{program repair} does not consider KSs as the program model; the only exception is the work reported in~\cite{SDE08}. In this context, abstraction has been previously used in the repair of data structures~\cite{ZMK13}. The problem of repairing a Boolean program has been formulated in~\cite{SJB05,JGB07,GBC06,EJ12} as finding a winning strategy for a game between two players. Another line of research on program repair treats the repair as a search problem and applies innovative evolutionary algorithms~\cite{A11}, \emph{behavioral programming} techniques~\cite{HKMW12} or other informal heuristics~\cite{WC08,AAG11,WPFSBMZ10}. Focusing exclusively on the area of Model Repair without the use of abstraction, it is worth mentioning the following approaches. The first work on Model Repair with respect to CTL formulas was presented in~\cite{A95}. The authors used only the removal of transitions and showed that the problem is NP-complete. Another interesting early attempt to introduce the Model Repair problem for CTL properties is the work in~\cite{BEGL99}. The authors build on the AI techniques of abductive reasoning and theory revision and propose a repair algorithm with relatively high computational cost. A formal algorithm for Model Repair in the context of KSs and CTL is presented in~\cite{ZD08}. The authors admit that their repair process strongly depends on the model's size and they do not attempt to provide a solution for handling conjunctive CTL formulas. In~\cite{CR09}, the authors try to render model repair applicable to large KSs by using ``table systems'', a concise representation of KSs that is implemented in the NuSMV model checker. A limitation of their approach is that table systems cannot represent all possible KSs. In~\cite{ZKZ10}, tree-like local model updates are introduced with the aim of making the repair process applicable to large-scale domains. However, the proposed approach is only applicable to the universal fragment of the CTL. A number of works attempt to ensure completeness for increasingly larger fragments of the CTL by introducing ways of handling the constraints associated with conjunctive formulas. In~\cite{KPYZ10}, the authors propose the use of constraint automata for ACTL formulas, while in~\cite{CR11} the authors introduce the use of protected models for an extension of the CTL. Neither of the two methods is directly applicable to formulas of the full CTL. The Model Repair problem has also been addressed in many other contexts. In~\cite{E12}, the author uses a distributed algorithm and the processing power of computing clusters to fight the time and space complexity of the repair process. In~\cite{MLB11}, an extension of the Model Repair problem has been studied for Labeled Transition Systems. In~\cite{BGKRS11}, we have provided a solution for the Model Repair problem in probabilistic systems. Another recent effort for repairing discrete-time probabilistic models has been proposed in~\cite{PAJTK15}. In~\cite{BBG11}, model repair is applied to the \emph{fault recovery} of component-based models. Finally, a slightly different but also related problem is that of Model Revision, which has been studied for UNITY properties in~\cite{BEK09,BK08-OPODIS} and for CTL in~\cite{GW10}. Other methods in the area of fault-tolerance include the work in~\cite{gr09}, which uses discrete controller synthesis, and~\cite{fb15}, which employs SMT solving.
Another interesting work in this direction is in~\cite{df09}, where the authors present a repair algorithm for fault-tolerance in a fully connected topology, with respect to a temporal specification. \section{Conclusions} \label{sec:concl} In this paper, we have shown how abstraction can be used to cope with the state explosion problem in Model Repair. Our model-repair framework is based on Kripke Structures, a 3-valued semantics for CTL, and Kripke Modal Transition Systems, and features an abstract-model-repair algorithm for KMTSs. We have proved that our AMR algorithm is sound for the full CTL and complete for a subset of CTL. We have also proved that our AMR algorithm is upper bounded by a polynomial expression in the size of the abstract model for a major fragment of CTL. To demonstrate its practical utility, we applied our framework to an Automatic Door Opener system and to the Andrew File System 1 protocol. As future work, we plan to apply our method to case studies with larger state spaces, and investigate how abstract model repair can be used in different contexts and domains. A model repair application of high interest is in the design of fault-tolerant systems. In~\cite{bka12}, the authors present an approach for the repair of a distributed algorithm such that the repaired one features fault-tolerance. The input to this model repair problem includes a set of uncontrollable transitions such as the faults in the system. The model repair algorithm used works on concrete models and it can therefore solve the problem only for a limited number of processes. In this respect, we believe that this application could benefit from the use of abstraction in our AMR framework. At the level of extending our AMR framework, we aim to search for ``better'' abstract models, in order to either restrict failures due to refinement or ensure completeness for a larger fragment of the CTL. We will also investigate different notions of minimality in the changes introduced by model repair and the applicability of abstraction-based model repair to probabilistic, hybrid and other types of models. \section{Acknowledgment} This work was partially sponsored by Canada NSERC Discovery Grant 418396-2012 and NSERC Strategic Grants 430575-2012 and 463324-2014. The research was also co-financed by the European Union (European Social Fund ESF) and Greek national funds through the Operational Program ``Education and Lifelong Learning'' of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thalis Athens University of Economics and Business - SOFTWARE ENGINEERING RESEARCH PLATFORM. \end{document}
\begin{document} \title{Random walks and forbidden minors III: $\mathrm{poly}(d\varepsilon^{-1})$-time partition oracles for minor-free graph classes} \thispagestyle{empty} \abstract{Consider the family of bounded degree graphs in any minor-closed family (such as planar graphs). Let $d$ be the degree bound and $n$ be the number of vertices of such a graph. Graphs in these classes have hyperfinite decompositions, where, for a sufficiently small $\varepsilon > 0$, one removes $\varepsilon dn$ edges to get connected components of size independent of $n$. An important tool for sublinear algorithms and property testing for such classes is the \emph{partition oracle}, introduced by the seminal work of Hassidim-Kelner-Nguyen-Onak (FOCS 2009). A partition oracle is a local procedure that gives consistent access to a hyperfinite decomposition, without any preprocessing. Given a query vertex $v$, the partition oracle outputs the component containing $v$ in time independent of $n$. All the answers are consistent with a single hyperfinite decomposition. The partition oracle of Hassidim et al. runs in time $d^{\mathrm{poly}(d\varepsilon^{-1})}$ per query. They pose the open problem of whether $\mathrm{poly}(d\varepsilon^{-1})$-time partition oracles exist. Levi-Ron (ICALP 2013) give a refinement of the previous approach, to get a partition oracle that runs in time $d^{\log(d\varepsilon^{-1})}$ per query. In this paper, we resolve this open problem and give $\mathrm{poly}(d\varepsilon^{-1})$-time partition oracles for bounded degree graphs in any minor-closed family. Unlike the previous line of work based on combinatorial methods, we employ techniques from spectral graph theory. We build on a recent spectral graph theoretical toolkit for minor-closed graph families, introduced by the authors to develop efficient property testers. A consequence of our result is a $\mathrm{poly}(d\varepsilon^{-1})$-query tester for \emph{any monotone and additive} property of minor-closed families (such as bipartite planar graphs). Our result also gives $\mathrm{poly}(d\varepsilon^{-1})$-query algorithms for additive $\varepsilon n$-approximations for problems such as maximum matching, minimum vertex cover, maximum independent set, and minimum dominating set for these graph families. } \setcounter{page}{1} \section{Introduction} \label{sec:intro} The algorithmic study of planar graphs is a fundamental direction in theoretical computer science and graph theory. Classic results like the Kuratowski-Wagner characterization \cite{K30, W37}, linear time planarity algorithms \cite{HT74}, and the Lipton-Tarjan separator theorem underscore the significance of planar graphs \cite{LiptonT:80}. The celebrated theory of Robertson-Seymour gives a grand generalization of planar graphs through minor-closed families \cite{RS:12, RS:13, RS:20}. This has led to many deep results in graph algorithms, and an important toolkit is provided by separator theorems and associated decompositions \cite{AST:94}. Over the past decade, there have been many advances in \emph{sublinear} algorithms for planar graphs and minor-closed families. We focus on the model of random access to bounded degree adjacency lists, introduced by Goldreich-Ron~\cite{GR02}. Let $G = (V,E)$ be a graph with vertex set $V = [n]$ and degree bound $d$. The graph is accessed through \emph{neighbor queries}: there is an oracle that, on input $v \in V$ and $i \in [d]$, returns the $i$th neighbor of $v$. (If none exist, it returns $\bot$.)
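To make the query model concrete, the following is a minimal Python sketch of the neighbor-query access described above; the class and method names are illustrative assumptions rather than code from any existing library, and an algorithm in this model touches the graph only through such queries.
\begin{verbatim}
class NeighborOracle:
    """Random access to a bounded-degree adjacency list (illustrative sketch).

    adj[v] lists the (at most d) neighbors of vertex v; vertices are 0..n-1.
    """

    def __init__(self, adj, d):
        self.adj = adj
        self.d = d
        self.queries = 0   # number of neighbor queries, the complexity measure

    def neighbor(self, v, i):
        """Return the i-th neighbor of v (0-indexed), or None (the symbol
        "bot" in the text) if v has fewer than i+1 neighbors."""
        self.queries += 1
        if i < self.d and i < len(self.adj[v]):
            return self.adj[v][i]
        return None
\end{verbatim}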
One of the key properties of bounded-degree graphs in minor-closed families is that they exhibit hyperfinite decompositions. A graph $G$ is hyperfinite if $\forall \; 0 < \varepsilon < 1$, one can remove $\varepsilon dn$ edges from $G$ and obtain connected components of size independent of $n$ (we refer to these as pieces). For minor-closed families, one can remove $\varepsilon dn$ edges and get pieces of size $O(\varepsilon^{-2})$. The seminal result of Hassidim-Kelner-Nguyen-Onak (HKNO) \cite{HKNO} introduced the notion of \emph{partition oracles}. This is a local procedure that provides ``constant-time'' access to a hyperfinite decomposition. The oracle takes a query vertex $v$ and outputs the piece containing $v$. Each piece is of size independent of $n$, and at most $\varepsilon dn$ edges go between pieces. Furthermore, all the answers are consistent with a single hyperfinite decomposition, despite there being no preprocessing or explicit coordination. (All queries use the same random seed, to ensure consistency.) Partition oracles are extremely powerful as they allow a constant time procedure to directly access a hyperfinite decomposition. As observed in previous work, partition oracles lead to a plethora of property testing results and sublinear time approximation algorithms for minor-closed graph families~\cite{HKNO,NS13}. In some sense, one can think of partition oracles as a moral analogue of Szemer\'edi's regularity lemma for dense graph property testing: it is a decomposition tool that immediately yields a litany of constant time (or constant query) algorithms. We give a formal definition of partition oracles. (We deviate somewhat from the definition in Chap. 9.5 of Goldreich's book~\cite{G17-book} by including the running time as a parameter, instead of the set size.) \begin{definition} \label{def:oracle} Let $\mathcal{P}$ be a family of graphs with degree bound $d$ and $T: (0,1) \to \mathbb{N}$ be a function. A procedure $\boldsymbol{A}$ is an \emph{$(\varepsilon,T(\varepsilon))$-partition oracle} for $\mathcal{P}$ if it satisfies the following properties. The deterministic procedure takes as input random access to $G = (V,E)$ in $\mathcal{P}$, random access to a random seed $r$ (of length polynomial in graph size), a proximity parameter $\varepsilon > 0$, and a vertex $v$ of $G$. (We will think of fixing $G, r, \varepsilon$, so we use the notation $\boldsymbol{A}_{G,r,\varepsilon}$. All probabilities are with respect to $r$.) The procedure $\boldsymbol{A}_{G,r,\varepsilon}(v)$ outputs a set of vertices and satisfies the following properties. \begin{enumerate} \item (Consistency) The sets $\{\boldsymbol{A}_{G,r,\varepsilon}(v)\}$, over all $v$, form a partition of $V$. Also, these sets $\boldsymbol{A}_{G,r,\varepsilon}(v)$ induce connected graphs for all $v \in V$. \item (Cut bound) With probability (over $r$) at least $2/3$, the number of edges between the sets $\boldsymbol{A}_{G,r,\varepsilon}(v)$ is at most $\varepsilon dn$. \item (Running time) For every $v$, $\boldsymbol{A}_{G,r,\varepsilon}(v)$ runs in time $T(\varepsilon)$. \end{enumerate} \end{definition} We stress that there is no explicit ``coordination'' or sharing of state between calls to $\boldsymbol{A}_{G,r,\varepsilon}(v)$ and $\boldsymbol{A}_{G,r,\varepsilon}(v')$ (for $v \neq v'$).
There is no global preprocessing step once the random seed is fixed. The consistency guarantee holds with probability $1$. Note that the running time $T(\varepsilon)$ is clearly an upper bound on the size of the sets $\boldsymbol{A}_{G,r,\varepsilon}(v)$. For minor-closed families, one can convert any partition oracle to one that outputs sets of size $O(\varepsilon^{-2})$ with a constant factor increase in the cut bound. (Refer to the end of Sec. 9.5 in~\cite{G17-book}.) The challenge in partition oracles is to bound the running time $T(\varepsilon)$. HKNO gave a partition oracle with running time $(d\varepsilon^{-1})^{\mathrm{poly}(d\varepsilon^{-1})}$. Levi-Ron \cite{LR15} built on the ideas from HKNO and dramatically improved the bound to $(d\varepsilon^{-1})^{\log (d\varepsilon^{-1})}$. Yet, for all minor-closed families, one can (in linear time) remove $\varepsilon dn$ edges to get connected components of size $O(\varepsilon^{-2})$. HKNO raise the natural open question as to whether $(\varepsilon,\mathrm{poly}(d\varepsilon^{-1}))$-partition oracles exist. In this paper, we resolve this open problem. \begin{theorem} \label{thm:main-intro} Let $\mathcal{P}$ be the set of $d$-bounded degree graphs in a minor-closed family. There is an $(\varepsilon,\mathrm{poly}(d\varepsilon^{-1}))$-partition oracle for $\mathcal{P}$. \end{theorem} \subsection{Consequences} \label{sec:conseq} As observed by HKNO and Newman-Sohler \cite{NS13}, partition oracles have many consequences for property testing and sublinear algorithms. Recall the definition of property testers. Let $\mathcal{Q}$ be a property of graphs with degree bound $d$. The distance of $G$ to $\mathcal{Q}$ is the minimum number of edge additions/removals required to make $G$ have $\mathcal{Q}$, divided by $dn$. A property tester for $\mathcal{Q}$ is a randomized procedure that takes query access to an input graph $G$ and a proximity parameter, $\varepsilon > 0$. If $G \in \mathcal{Q}$, the tester accepts with probability at least $2/3$. If the distance of $G$ to $\mathcal{Q}$ is at least $\varepsilon$, the tester rejects with probability at least $2/3$. We often measure the query complexity as well as time complexity of the tester. A direct consequence of \Thm{main-intro} is an ``efficient'' analogue (for monotone and additive properties) of a theorem of Newman-Sohler stating that all properties of hyperfinite graphs are testable. A graph property closed under vertex/edge removals is called \emph{monotone}. A graph property closed under disjoint union of graphs is called \emph{additive}. \begin{theorem} \label{thm:testers} Let $\mathcal{Q}$ be any monotone and additive property of bounded degree graphs of a minor-closed family. There exists a $\mathrm{poly}(d\varepsilon^{-1})$-query tester for $\mathcal{Q}$. If membership in $\mathcal{Q}$ can be determined exactly in polynomial (in input size) time, then $\mathcal{Q}$ has $\mathrm{poly}(d\varepsilon^{-1})$-time testers. \end{theorem} An appealing consequence of \Thm{testers} is that the property of bipartite planar graphs can be tested in $\mathrm{poly}(d\varepsilon^{-1})$ time. For any fixed subgraph $H$, the property of $H$-free planar graphs can be tested in the same time. And all of these bounds hold for any minor-closed family.
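Both the testers of \Thm{testers} and the estimators of \Thm{approx} below follow the same sampling recipe: query the partition oracle at a few random vertices and aggregate a per-piece quantity. The following Python sketch shows the recipe for estimating an additive parameter; it assumes a callable partition_oracle(v) that returns the piece containing $v$ and an exact per-piece routine f_exact, and all names are hypothetical.
\begin{verbatim}
import random

def estimate_sum_over_pieces(n, partition_oracle, f_exact, samples):
    """Estimate sum_P f_exact(P) over the pieces P of the hyperfinite
    decomposition exposed by partition_oracle (illustrative sketch).

    A uniformly random vertex lands in piece P with probability |P|/n,
    so f_exact(P)/|P| averaged over random vertices is an unbiased
    estimate of (1/n) * sum_P f_exact(P).
    """
    total = 0.0
    for _ in range(samples):
        v = random.randrange(n)
        piece = partition_oracle(v)     # local, poly(d/eps)-time call
        total += f_exact(piece) / len(piece)
    return n * total / samples
\end{verbatim}
For a parameter $f$ that changes by $O(1)$ per edge modification (as in \Thm{approx}), the sum over pieces differs from $f(G)$ by at most $O(\varepsilon d n)$, since only $\varepsilon dn$ edges cross between pieces; a $\mathrm{poly}(\varepsilon^{-1})$ number of samples then gives an additive $\varepsilon n$-approximation with constant probability.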
As observed by Newman-Sohler, partition oracles give sublinear query algorithms for any additive graph parameter that is ``robust'' to edge changes. Again, \Thm{main-intro} implies an efficient version for minor-closed families. \begin{theorem} \label{thm:approx} Let $f$ be a real-valued function on graphs that changes by $O(1)$ on edge addition/removals, and has the property that $f(G_1 \cup G_2) = f(G_1) + f(G_2)$ for graphs $G_1, G_2$ that are not connected to each other. For any minor-closed family $\mathcal{P}$, there is a randomized algorithm that, given $\varepsilon > 0$ and $G \in \mathcal{P}$, outputs an additive $\varepsilon n$-approximation to $f(G)$ and makes $\mathrm{poly}(d\varepsilon^{-1})$ queries. If $f$ can be computed exactly in polynomial time, then the above algorithm runs in $\mathrm{poly}(d\varepsilon^{-1})$ time. \end{theorem} The functions captured by \Thm{approx} are quite general. Functions such as maximum matching, minimum vertex cover, maximum independent set, minimum dominating set, maxcut, etc. all have the robustness property. As a compelling application of \Thm{approx}, we can get $(1+\varepsilon)$-approximations\footnote{The maximum matching is $\Omega(n/d)$ for a connected bounded degree graph. One simply sets $\varepsilon \ll 1/d$ in \Thm{approx}.} for the maximum matching in planar (or any minor-closed family) graphs in $\mathrm{poly}(d\varepsilon^{-1})$ time. These theorems are easy consequences of \Thm{main-intro}. Using the partition oracle, an algorithm can essentially assume that the input is a collection of connected components of size $\mathrm{poly}(d\varepsilon^{-1})$, and run an exact algorithm on a collection of randomly sampled components. We sketch the proofs in \Sec{appl}. \subsection{Related work} \label{sec:related} The subject of property testing and sublinear algorithms in bounded degree graphs is a vast topic. We refer the reader to Chapters 9 and 10 of Goldreich's textbook \cite{G17-book}. We focus on the literature relevant to sublinear algorithms for minor-closed families.
The second idea is to use random edge contractions to reduce the graph size. Recursive applications lead to hyperfinite decompositions, and the partition oracles of HKNO and Levi-Ron simulate this recursive procedure. This is extremely non-trivial, and leads to a recursive local procedure with a depth dependent on $\varepsilon$. Levi-Ron do a careful simulation, ensuring that the recursion depth is at most $\log(d\varepsilon^{-1})$, but this simulation requires looking at neighborhoods of radius $\log(d\varepsilon^{-1})$. Following this approach, there is little hope of getting a recursion depth independent of $\varepsilon$, which is required for a $\mathrm{poly}(d\varepsilon^{-1})$-time procedure. Much of the driving force behind this work was the quest for a $\mathrm{poly}(d\varepsilon^{-1})$-time tester for planarity. This question was resolved recently using a different approach from spectral graph theory, which was itself developed for sublinear time algorithms for finding minors~\cite{KSS:18, KSS:19}. A major inspiration is the random walk based one-sided bipartiteness tester of Goldreich-Ron \cite{GR99}. This paper is a continuation of that line of work, and is a further demonstration of the power of spectral techniques for sublinear algorithms. The tools build on local graph partitioning techniques pioneered by Spielman-Teng \cite{ST12}, which are themselves based on classic mixing time results of Lov\'{a}sz-Simonovits \cite{LS:90}. In this paper, we develop new diffusion-based local partitioning tools that form the core of partition oracles. We also mention other key results in the context of sublinear algorithms for minor-closed families, notably the Czumaj et al.\ \cite{C14} upper bound of $O(\sqrt n)$ for testing cycle minor-freeness, the Fichtenberger et al.\ \cite{FLVW:17} upper bound of $O(n^{2/3})$ for testing $K_{2,r}$-minor-freeness, and $\mathrm{poly}(d\varepsilon^{-1})$ testers for outerplanarity and bounded treewidth graphs~\cite{YI:15,EHNO11}. \section{Main Ideas} \label{sec:ideas} The starting point for this work are the spectral methods used in~\cite{KSS:18,KSS:19}. These methods discover cut properties within a neighborhood of radius $\mathrm{poly}(d\varepsilon^{-1})$, without explicitly constructing the entire neighborhood. One of the key tools used in these results is a local partitioning algorithm, based on techniques of Spielman-Teng~\cite{ST12}. The algorithm takes a seed vertex $s$, performs a diffusion from $s$ (equivalently, performs many random walks) of length $\mathrm{poly}(d\varepsilon^{-1})$, and tracks the diffusion vector to detect a low conductance cut around $s$ in $\mathrm{poly}(d\varepsilon^{-1})$ time. We will use the term \emph{diffusions}, instead of random walks, because we prefer the deterministic picture of a unit of ``ink'' spreading through the graph. A key lemma in previous results states that, for graphs in minor-closed families, this procedure succeeds from more than $(1-\varepsilon)n$ seed vertices. This yields a global algorithm to construct a hyperfinite decomposition with components of $\mathrm{poly}(d\varepsilon^{-1})$ size. Pick a vertex $s$ at random, run the local partitioning procedure to get a low conductance cut, remove and recurse. Can there be a local implementation of this algorithm? Let us introduce some setup. We will think of a global algorithm that processes seed vertices in some order.
Given each seed vertex $s$, a local partitioning algorithm generates a low conductance set $C(s)$ containing $s$ (this is called a cluster). The final output is the collection of these clusters. For any vertex $v$, let the \emph{anchor} of $v$ be the vertex $s$ such that $v \in C(s)$. A local implementation boils down to finding the anchor of query vertex $v$. Observe that at any point of the global procedure, some vertices have been clustered, while the remaining ones are still \emph{free}. The global procedure described above seems hopeless for a local implementation. The cluster $C(s)$ is generated by diffusion in some subgraph $G'$ of $G$, which was the set of free vertices when seed $s$ was processed. Consider a local procedure trying to discover the anchor of $v$. It would need to figure out the free set corresponding to every potential anchor $s$, so that it can faithfully simulate the diffusion used to cluster $v$. From an implementation standpoint, it seems that the natural local algorithm is to use diffusions from $v$ in $G$ to discover the anchor. But diffusion in a subgraph $G'$ is markedly different from diffusion in $G$ and difficult to simulate locally. Our first goal is to design a partitioning method using diffusions directly in $G$. {\bf Finding low conductance cuts in subsets, by diffusion in supersets:} Let us now modify the global algorithm with this constraint in mind. At some stage of the global algorithm, there is a set $F$ of free vertices. We need to find a low conductance cut contained in $F$, while running random walks in $G$. Note that we must be able to deal with $F$ as small as $O(\varepsilon n)$. Thus, random walks (even starting from $F$) will leave $F$ quite often; so how can these walks/diffusions find cuts in $F$? One of our main insights is that these challenges can be dealt with, even for diffusions of $\mathrm{poly}(d\varepsilon^{-1})$ length. We show that, for a uniform random vertex $s \in F$, a spectral partitioning algorithm that performs diffusion from $s$ in $G$ can detect low conductance cuts contained in $F$. Diffusion in the superset (all of $V$) provides information about the subset $F$. This is a technical and non-trivial result, and crucially uses the spectral properties of minor-closed families. Note that diffusions from $F$ can spread very rapidly in short random walks, even in planar graphs. Consider a graph $G$, where $F$ is a path on $\varepsilon n$ vertices, and there is a tree of size $1/\varepsilon$ rooted at every vertex of $F$. Diffusions from any vertex in $F$ will initially be dominated by the trees, and one has to diffuse for at least $1/\varepsilon$ timesteps before structure within $F$ can be detected. Thus, the proof of our theorem has to look at average behavior over a sufficiently large time horizon before low conductance cuts in $F$ are ``visible''. Remarkably, it suffices to look at $\mathrm{poly}(d\varepsilon^{-1})$ timesteps to find structure in $F$, because of the behavior of diffusions in minor-closed families. The main technical tool used is the Lov\'{a}sz-Simonovits curve technique \cite{LS:90}, whose use was pioneered by Spielman-Teng \cite{ST12}. We also use the truncated probability vector technique from Spielman-Teng to give cleaner implementations and proofs. A benefit of using diffusion (instead of random walks) on truncated vectors is that the clustering becomes deterministic.
{\bf The problem of ordering the seeds:} With one technical hurdle out of the way, we end up at another gnarly problem. The above procedure only succeeds if the seed is in $F$. Quite naturally, one does not expect to get any cuts in $F$ by diffusing from a random vertex in $G$. From the perspective of the global algorithm, this means that we need some careful ordering of the seeds, so that low conductance cuts are discovered. Unfortunately, we also need local implementations of this ordering. The authors struggled with carrying out this approach, but to no avail. To rid ourselves of the ordering problem, let us consider the following, almost naive global algorithm. First, order the vertices according to a uniform random permutation. At any stage, there is a free set $F$. We process the next seed vertex $s$ by running some spectral partitioning procedure, to get a low conductance cut $C(s)$. Simply output $C(s) \cap F$ (instead of $C(s)$) as the new cluster, and update $F$ to $F \setminus C(s)$. It is easy to locally implement this procedure. To find the anchor of $v$, perform a diffusion of $\mathrm{poly}(\varepsilon^{-1})$ timesteps from $v$. For every vertex $s$ with high enough value in the diffusion vector, determine if $C(s) \ni v$. The vertex $s$ that is lowest according to the random ordering is the anchor of $v$. Unfortunately, there is little hope of bounding the number of edges cut by the clustering. When $s$ is processed, it may be that $s \notin F$, and there is no guarantee on $C(s) \cap F$. Can we modify the procedure to bound the number of cut edges, but still maintain its ease of local implementability? {\bf The amortization argument:} Consider the scenario when $F = \Theta(\varepsilon n)$. Most of the subsequent seeds processed are not in $F$ and there is no guarantee on the cluster conductance. But every $\Theta(1/\varepsilon)$ seeds (in expectation), we will get a ``good'' seed $s$ contained in $F$, such that $C(s) \cap F$ is a low conductance set. (This is promised by the diffusion algorithm that we develop in this paper, as discussed earlier.) Our aim is to perform some amortization, to argue that $|C(s) \cap F|$ is so large that we can ``charge'' away the edges cut by the previous $\Theta(1/\varepsilon)$ seeds. This amortization is possible because our spectral tools give us much flexibility in the (low) conductances obtained. Put differently, we essentially prove the existence of many cuts of extremely low conductance, and show that it is ``easy'' for a diffusion-based algorithm to find such cuts. (This is connected to the spectral behavior of minor-closed families.) As a consequence, we can actually pre-specify the size of the low conductance cuts obtained. We show that as long as $|F| = \Omega(\varepsilon n)$, we can find a \emph{size threshold} $k = \mathrm{poly}(\varepsilon^{-1})$ such that for at least $\Omega(\varepsilon^2n)$ vertices $s \in F$, a spectral partitioning procedure seeded at $s$ can find a cut of size $\Theta(k)$ and conductance at most $\varepsilon^c$. Moreover, this cut is guaranteed to contain at least $\varepsilon^{c'} k$ vertices in $F$, despite the procedure being oblivious to $F$. The parameter $c$ can be easily tuned, so we can increase $c$ arbitrarily while keeping $c'$ fixed, at the cost of polynomial increases in running time. This tunability is crucial to our amortization argument.
We also show that given query access to $F$, a size threshold $k$ can be computed in $\mathrm{poly}(d\varepsilon^{-1})$ time. So when the global algorithm processes seed $s$, it runs the above spectral procedure to try to obtain a set of size $\Theta(k)$ with conductance at most $\varepsilon^c$. (If the procedure fails, the global algorithm simply sets $C(s) = \{s\}$.) Thus, we cut $O(\varepsilon^c kd)$ edges for each seed processed. But after every $O(1/\varepsilon)$ seeds, we choose a ``good'' seed such that $|C(s) \cap F| > \varepsilon^{c'} k$. The total number of edges cut is $O(\varepsilon^c kd \times \varepsilon^{-1}) = O(\varepsilon^{c-1} kd)$. The total number of new vertices clustered is at least $\varepsilon^{c'}k$. Because we can tune parameters with much flexibility, we can set $c \gg c'$. So the total number of edges cut is $O(\varepsilon^{c-c'-1}d)$ times the number of vertices clustered, where $c-c'-1 > 1$. Overall, we will cut only $O(\varepsilon nd)$ edges. {\bf Making it work through phases:} Unfortunately, as the process described above continues, $F$ shrinks. Thus, the original choice of $k$ might not work, and the guarantees on $|C(s) \cap F|$ for good seeds no longer hold. So we need to periodically recompute the value of $k$. In a careful analysis, we show that this recomputation is only required $\mathrm{poly}(\varepsilon^{-1})$ times. Formally, we implement the recomputation through \emph{phases}. Each vertex is independently assigned to one of $\mathrm{poly}(\varepsilon^{-1})$ phases. (Technically, we choose the phase of a vertex by sampling an independent geometric random variable. We heavily use the memoryless property of the geometric distribution.) For each phase, the value of $k$ is fixed. The local partition oracle will compute these size thresholds for all phases, as a $\mathrm{poly}(d\varepsilon^{-1})$ time preprocessing step. The oracle (for $v$) runs a diffusion from $v$ to get a collection of candidate anchors. For each candidate $s$, the oracle determines its phase, runs the spectral partitioning algorithm with correct phase parameters, and determines if the candidate's low conductance cut contains $v$. The anchor is simply such a candidate of minimum phase, with ties broken by vertex id. \subsection{Outline of sections} \label{sec:outline} The algorithm description and proof have many moving parts, encapsulated by different sections. \Sec{prelims} begins by discussing the truncated diffusion process, the main algorithmic tool for partitioning. We then describe the global partitioning algorithm {\tt globalPartition}{} (modulo a preprocessing step called {\tt findr}), which is far more convenient to analyze. It will be readily apparent that this global procedure outputs a partition of $G$ into connected components; the main challenge is to bound the number of edges cut. Within \Sec{prelims}, we discuss how to implement {\tt globalPartition}{} by a local procedure. By ensuring that the output of the local procedure is identical to {\tt globalPartition}, we prove the consistency property of \Def{oracle}. We then perform a fairly straightforward running time analysis, which proves the running time property of \Def{oracle}. The real heavy lifting begins in \Sec{findr}, where we describe the procedure {\tt findr}{} that computes the size thresholds.
This section is devoted to proving salient properties of the size thresholds output by {\tt findr}. The analysis hinges on the diffusion and cut properties stated in \Thm{restrict-cut}, which is the main tool connecting minor-freeness, diffusions, and local partitioning. \Sec{amort} uses all these tools to prove the cut bound of {\tt globalPartition}. At this stage, the description and guarantees of the partition oracle are complete, modulo the proof of \Thm{restrict-cut}. The proof of \Thm{restrict-cut} is split across several sections. In \Sec{diffusion}, we use the hyperfiniteness of minor-closed families to prove properties of truncated diffusions on minor-free families. \Sec{ls-cluster} has the key spectral calculations, where the Lov\'{a}sz-Simonovits curve technique is used to find low conductance cuts. This section has the crucial insights that allow for partitioning in the free set, using diffusions in the overall graph. \Sec{appl} has short proofs of the applications \Thm{testers} and \Thm{approx}. These are provided for completeness, since identical calculations appear in the proof of Theorem 9.28 in~\cite{G17-book}. \section{Global partitioning and its local implementation} \label{sec:prelims} There are a number of parameters that are used in the algorithm. We list them out here for reference. It is convenient to fix the value of $\varepsilon$ in advance, so that all the values of the following parameters are fixed. Note that all these parameters are polynomial in $\varepsilon$. We will express all running times as polynomials in these parameters, ensuring that all running times are $\mathrm{poly}(\varepsilon^{-1})$. \parameterinfo \subsection{Truncated diffusion} \label{sec:trunc} The main process used to find sets of the partition is a \emph{truncated diffusion}. We assume that the input graph $G$ is connected, has $n$ vertices, and degree bound $d$. Define the symmetric random walk matrix $M$ as follows. For every edge $(u,v)$, $M_{u,v} = M_{v,u} = 1/2d$. For every vertex $v$, $M_{v,v} = 1-d(v)/2d$, where $d(v)$ is the degree of $v$. The matrix $M$ is doubly stochastic, symmetric, and the (unique) stationary distribution is the uniform distribution. Given a vector $\vec{x} \in (\mathbb R^+)^n$, diffusion is the evolution $M^t\vec{x}$. We define a truncated version, where after every step, small values are removed. For any vector $\vec{x}$, let $\supp(\vec{x})$ denote the support of the vector. \begin{definition} \label{def:trun} Define the operator $\widehat{M} \colon (\mathbb R^+)^n \to (\mathbb R^+)^n$ as follows. For $\vec{x} \in (\mathbb R^+)^n$, the vector $\widehat{M}\vec{x}$ is obtained by zeroing out all coordinates in $M\vec{x}$ whose value is at most $\rho$. For $t > 1$, the operator $\widehat{M}^t$ is the $t$-step truncated diffusion, and is recursively defined as $\widehat{M}(\widehat{M}^{t-1}\vec{x})$. Define $\trunp{v}{t}{w}$ to be the coordinate corresponding to vertex $w$ in the $t$-step truncated diffusion starting from vertex $v$. \end{definition} We stress that the $t$-step truncated diffusion is obtained from a standard diffusion by truncating low values at \emph{every} step of the diffusion. Note that as the truncated diffusion progresses, the $l_1$-norm of the vector may decrease at each step. Importantly, for any distribution vector $\vec{x}$, $\supp(\widehat{M}^t \vec{x})$ has size at most $\rho^{-1}$. We heavily use this property in our running time analysis. 
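To make \Def{trun} concrete, the following is a minimal Python sketch (not part of the formal algorithm) of the $t$-step truncated diffusion on an adjacency-list representation; the function name and the sparse-dictionary representation are our own illustrative choices.
\begin{verbatim}
def truncated_diffusion(adj, d, v, t, rho):
    """t-step truncated diffusion vector hat{M}^t 1_v, kept as a sparse dict.

    adj: dict mapping each vertex to a list of its neighbors (degree <= d).
    After every step, coordinates whose value is at most rho are zeroed out,
    so the support never exceeds 1/rho and each step costs O(d/rho) time.
    """
    vec = {v: 1.0}
    for _ in range(t):
        nxt = {}
        for u, mass in vec.items():
            deg = len(adj[u])
            # lazy walk: stay with prob 1 - deg/(2d), move along each edge with prob 1/(2d)
            nxt[u] = nxt.get(u, 0.0) + mass * (1.0 - deg / (2.0 * d))
            for w in adj[u]:
                nxt[w] = nxt.get(w, 0.0) + mass / (2.0 * d)
        vec = {u: p for u, p in nxt.items() if p > rho}  # truncation
    return vec
\end{verbatim}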
We define \emph{level sets}, a standard concept in spectral partitioning algorithms. Somewhat abusing notation, for vertex $v \in V$, we use $\vec{v}$ to denote the unit vector in $(\mathbb R^+)^n$ corresponding to the vertex $v$. (We never use the vector notation for any other kind of vector.) \begin{definition} \label{sec:level} For vertex $v \in V$, length $t$, and threshold $k$, let $\level{v}{t}{k}$ be the set of vertices corresponding to the $k$ largest coordinates in $\widehat{M}^t\vec{v}$ (ties are broken by vertex id). For any set $S$ of vertices, the conductance of $S$ is $\Phi(S) := E(S,\overline{S})/[2\min(|S|,|\overline{S}|)d]$. (We use $E(S,\overline{S})$ to denote the number of edges between $S$ and its complement.) \end{definition} We describe the key subroutine that finds low conductance cuts. It performs a sweep cut over the truncated diffusion vector. \noindent \clustercode \\ \begin{claim} \label{clm:cluster} The procedure \texttt{cluster}$(v,t,k)$ runs in time $O(\rho^{-1}td\log(\rho^{-1}td) + kd\log k)$. The output set $C$ has the following properties. (i) $v \in C$. (ii) If $C$ is not a singleton, then $|C| \in [k,2k]$, $\Phi(C) \leq \phi$, and $C \subseteq \supp(\widehat{M}^t\vec{v})$. \end{claim} \begin{proof} The latter properties are apparent from the description of \texttt{cluster}. We analyze the running time. The convenience of the truncated diffusion is that it can be computed exactly by a deterministic process. First, for any $b \geq 1$, we show that the running time to compute $\widehat{M}^b\vec{v}$ is $O(\rho^{-1}bd)$. Note that for any $b$, $\supp(\widehat{M}^b\vec{v})$ has size at most $\rho^{-1}$, since $M$ is a stochastic matrix and all non-zero entries in $\widehat{M}^b\vec{v}$ have value at least $\rho$. Given the vector $\widehat{M}^b\vec{v}$, the vector $\widehat{M}^{b+1}\vec{v}$ can be computed by determining $M\widehat{M}^b\vec{v}$ and then zeroing out coordinates that are at most $\rho$. This process can be done in $O(d| \supp(\widehat{M}^b\vec{v})|) = O(\rho^{-1}d)$ time. By summing this running time over all timesteps, we get that the total time is $O(\rho^{-1}bd)$. Thus, $\widehat{M}^t\vec{v}$ can be computed exactly in $O(\rho^{-1}td)$ time. To compute the level sets, one can sort the coordinates of this vector (breaking ties by id), and process them in decreasing order. One can iteratively store $\level{v}{t}{k}$ in a dictionary data structure. Given $\Phi(\level{v}{t}{k})$, one can compute $\Phi(\level{v}{t}{k+1})$ by $O(d)$ lookups into the dictionary. The total running time of this step is $O(kd\log k)$. \end{proof}
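The procedure \texttt{cluster}{} itself is given by the pseudocode referenced above. Purely as an illustration of \Clm{cluster}, here is one plausible Python rendering of the sweep, reusing the \texttt{truncated\_diffusion} sketch from \Sec{trunc}; the tie-breaking and the choice among qualifying level sets are our assumptions, not the verbatim procedure.
\begin{verbatim}
def conductance(adj, d, S):
    """Phi(S) = E(S, S-bar) / (2 * min(|S|, n - |S|) * d)."""
    S = set(S)
    n = len(adj)
    if not S or len(S) == n:
        return 0.0
    cut = sum(1 for u in S for w in adj[u] if w not in S)
    return cut / (2.0 * min(len(S), n - len(S)) * d)

def cluster_sketch(adj, d, v, t, k, rho, phi):
    """Sweep over level sets of the truncated diffusion from v.

    Returns a level set of size in [k, 2k] that contains v, has conductance
    at most phi, and lies inside supp(hat{M}^t 1_v); otherwise returns {v}.
    """
    vec = truncated_diffusion(adj, d, v, t, rho)
    order = sorted(vec, key=lambda u: (-vec[u], u))  # heaviest first, ties by id
    for size in range(min(2 * k, len(order)), k - 1, -1):  # try larger level sets first
        level_set = set(order[:size])
        if v in level_set and conductance(adj, d, level_set) <= phi:
            return level_set
    return {v}
\end{verbatim}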
The job of the preprocessing is to find the following sets of values, which are used for two goals: (i) ordering vertices, (ii) setting parameters for calls to \texttt{cluster}. The preprocessing generates, for all vertices $v$, the following values. \begin{asparaitem} \item $h_v$: The \emph{phase} of $v$. \item $k_v$: The size threshold of $v$. \item $t_v$: The walk length of $v$. \end{asparaitem} Before giving the procedure description, we explain how these values are generated. {\em Phases:} For each $v$, $h_v$ is set to $\min(X,\overline{h})$, where $X$ is independently sampled from $Geo(\delta)$, the geometric distribution with parameter $\delta$. Moreover $\overline{h} := 2 \delta^{-1}\log(\delta^{-1})$, so the maximum phase value is capped. {\em Size thresholds:} The computation of these thresholds is the most complex part of our algorithm (and analysis), and is the ``magic ingredient" that makes the partition oracle possible. We first run a procedure ${\tt findr}$ that runs in $\mathrm{poly}(\varepsilon^{-1})$ time and outputs a set of \emph{phase size thresholds} $k_1, k_2, \ldots, k_{\overline{h}}$. All the thresholds have value at most $\rho^{-1}$ and $k_{\overline{h}}$ will be zero. The (involved) description of {\tt findr}{} and its properties are in \Sec{findr}. For now, it suffices to say that its running time is $\mathrm{poly}(\varepsilon^{-1})$, and that it outputs phase size thresholds. The size threshold for a vertex $v$ is simply $k_{h_v}$, corresponding to the phase it belongs to. {\em Walk lengths:} These are simply chosen independently and uniformly in $[1,\ell]$. The analysis is more transparent when we assume that all the randomness used by the algorithm is in a random seed $\boldsymbol{R}$, of $O(n\cdot\mathrm{poly}(\varepsilon^{-1}))$ length. The seed $\boldsymbol{R}$ is passed as an argument to the partitioning procedure, which uses $\boldsymbol{R}$ to generate all the values described above. (For convenience, we will assume random access to the adjacency list of $G$, without passing the graph as a parameter.) It is convenient to define an ordering on the vertices, given these values. For cleaner notation, we drop the dependence on $\boldsymbol{R}$. \begin{definition} \label{def:order} For vertices $u, v \in V$, we say that $u \prec v$ if either $h_u < h_v$, or $h_u = h_v$ and the id of $u$ is less than that of $v$. \end{definition} \fbox{ \begin{minipage}{0.9\textwidth} {{\tt globalPartition}$(\boldsymbol{R})$} \\ Preprocessing: \begin{compactenum} \item For every $v \in V$: \begin{compactenum} \item Use $\boldsymbol{R}$ to set $h_v := \min(X,\overline{h})$ ($X \sim Geo(\delta)$). \item Use $\boldsymbol{R}$ to set $t_v$ uniformly at random in $[1,\ell]$. \end{compactenum} \item Call {\tt findr}$(\boldsymbol{R})$ to generate values $k_1, k_2, \ldots, k_{\overline{h}}$. For every $v \in V$, set $k_v = k_{h_v}$. \end{compactenum} Partitioning: \begin{compactenum} \item Initialize the partition $\boldsymbol{P}$ as an empty collection. Initialize the free set $F := V$. \item For all vertices $v \in V$ in increasing order of $\prec$: \begin{compactenum} \item Compute $C = \texttt{cluster}(v, t_v, k_v)$. \item Add the connected components of $C \cap F$ to the partition $\boldsymbol{P}$. \item Reset $F = F \setminus C$. \end{compactenum} \item Output $\boldsymbol{P}$. \end{compactenum} \end{minipage} } Since all of our subsequent discussions are about {\tt globalPartition}, we abuse notation and assume that the preprocessing is fixed. 
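The following Python sketch mirrors {\tt globalPartition}{} at a high level. It reuses \texttt{cluster\_sketch} from the previous sketch and takes the size thresholds as an input, since {\tt findr}{} is only described in \Sec{findr}; the helper \texttt{connected\_components} and all names are illustrative.
\begin{verbatim}
import random

def sample_geometric(delta):
    """X ~ Geo(delta): number of delta-biased coin flips up to the first success."""
    x = 1
    while random.random() >= delta:
        x += 1
    return x

def connected_components(adj, S):
    """Connected components of the subgraph induced on the vertex set S."""
    S, comps = set(S), []
    while S:
        comp, stack = set(), [S.pop()]
        while stack:
            u = stack.pop()
            comp.add(u)
            for w in adj[u]:
                if w in S:
                    S.discard(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def global_partition_sketch(adj, d, delta, ell, rho, phi, size_thresholds):
    """size_thresholds: dict mapping each phase h in {1, ..., h_bar} to k_h
    (produced by findr in the actual algorithm; taken as an input here)."""
    h_bar = len(size_thresholds)
    h = {v: min(sample_geometric(delta), h_bar) for v in adj}  # phases
    t = {v: random.randint(1, ell) for v in adj}               # walk lengths
    partition, free = [], set(adj)
    for v in sorted(adj, key=lambda u: (h[u], u)):             # the ordering "prec"
        k_v = size_thresholds[h[v]]
        C = cluster_sketch(adj, d, v, t[v], k_v, rho, phi) if k_v > 0 else {v}
        partition.extend(connected_components(adj, C & free))
        free -= C
    return partition
\end{verbatim}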
We refer to \texttt{cluster}$(v)$ to denote \texttt{cluster}$(v, t_v, k_v)$. These are the only calls to \texttt{cluster}{} that are ever discussed, so it is convenient to just parametrize by the vertex argument. Furthermore, for ease of notation, we sometimes refer to the output of the procedure as \texttt{cluster}$(v)$. We observe that the output $\boldsymbol{P}$ is indeed a partition of $V$ into connected components. At any intermediate step, the free set $F$ is precisely the set of vertices that have not been assigned to a cluster. Note that \texttt{cluster}$(v)$ always contains $v$ (\Clm{cluster}), so all vertices eventually enter (the sets of) $\boldsymbol{P}$. We note that $v$ might not be in $F$ when \texttt{cluster}$(v)$ is called. This may lead to new components in $\boldsymbol{P}$ that do not involve $v$, which may actually not be low conductance cuts. This may seem like an oversight: why initiate diffusion clusters from vertices that are already partitioned? Many challenges in our analysis arise from such clusters. On the other hand, such an ``oblivious" partitioning scheme leads to a simple local implementation. \subsection{The local implementation} \label{sec:local} A useful definition in the local implementation is that of \emph{anchors} of vertices. As mentioned earlier, we fix the output of the preprocessing (which is equivalent to fixing $\boldsymbol{R}$). \begin{definition} \label{def:anchor} Consider the run of {\tt globalPartition}$(\boldsymbol{R})$. The \emph{anchor} of a vertex $v$ is the (unique) vertex $w$ such that the component in $\boldsymbol{P}$ containing $v$ was created by the call to \texttt{cluster}$(w)$. \end{definition} Suppose we label every vertex by its anchor. We can easily determine the sets of $\boldsymbol{P}$ locally (see the sketch below). \begin{claim} \label{clm:anchor} The sets of $\boldsymbol{P}$ are exactly the maximal connected components of vertices with the same anchor. \end{claim} \begin{proof} We prove by induction over the $\prec$ ordering of vertices. The base case is vacuously true. Suppose, just before $v$ is considered, all current sets in $\boldsymbol{P}$ are maximal connected components of vertices with the same anchor, which cannot be $v$. No vertex in $F$ can have an anchor yet; otherwise, it would be clustered and part of (a set in) $\boldsymbol{P}$. All the new vertices clustered have $v$ as anchor. Moreover, the sets added to $\boldsymbol{P}$ are precisely the maximal connected components with $v$ as anchor. \end{proof} We come to a critical definition that allows for searching for anchors. We define the ``inverse ball" of a vertex: this is the set of all vertices that reach $v$ through truncated diffusions. We note that reachability is not symmetric, because the diffusion is truncated at every step. \begin{definition} \label{def:ball} For $v \in V$, let $IB(v) = \{w \ | \ \exists t \in [0,\ell], v \in \supp(\widehat{M}^t\vec{w})\}$. \end{definition} \begin{claim} \label{clm:ballsize} $|IB(v)| \leq \ell\rho^{-1}$. \end{claim} \begin{proof} All vertices $w \in IB(v)$ have the property that (for some $t \leq \ell$) $\trunp{w}{t}{v} \neq 0$. That implies that $\pvector{w}{t}{v} \geq \rho$. By the symmetry of the random walk, $\pvector{v}{t}{w} \geq \rho$. For any fixed $t$, there are at most $\rho^{-1}$ such vertices $w$. Overall, there can be at most $\ell\rho^{-1}$ vertices in $IB(v)$. \end{proof}
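As a concrete reading of \Clm{anchor}: given any black-box procedure that returns the anchor of a vertex, the set of $\boldsymbol{P}$ containing $v$ can be recovered by a BFS that only expands vertices with the same anchor. This is the idea behind the procedure {\tt findPartition}{} described later in this subsection; a minimal Python sketch, with a hypothetical \texttt{anchor\_of} oracle, is as follows.
\begin{verbatim}
from collections import deque

def partition_set_of(adj, v, anchor_of):
    """Return the set of the partition containing v, given an anchor oracle.

    By the claim above, this set is the maximal connected component of
    vertices sharing v's anchor, so a BFS restricted to same-anchor
    vertices recovers it.
    """
    a = anchor_of(v)
    seen, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen and anchor_of(w) == a:
                seen.add(w)
                queue.append(w)
    return seen
\end{verbatim}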
\begin{lemma} \label{lem:anchor} The anchor of $v$ is the smallest vertex (according to $\prec$) in the set $\{s \ | \ s \in IB(v) \ \textrm{and} \ v \in \textrm{\texttt{cluster}}(s)\}$. \end{lemma} \begin{proof} Let the anchor of $v$ be the vertex $u$. We first argue that $u$ is in the given set. Clearly, $v \in \texttt{cluster}(u)$. If $u=v$, then $u = v \in IB(v)$ and we are done. Suppose $u \neq v$. Then $\texttt{cluster}(u)$ is not a singleton (since it contains $v$). By \Clm{cluster}, $\texttt{cluster}(u)$ is contained in the support of $\widehat{M}^{t_u}\vec{u}$, implying that $v \in \supp(\widehat{M}^{t_u}\vec{u})$. Thus, $u \in IB(v)$ and the anchor $u$ is present in the given set. It remains to argue that $u$ is the smallest such vertex. Suppose there exists $u' \prec u$ in this set. In {\tt globalPartition}{}, \texttt{cluster}$(u')$ is called before \texttt{cluster}$(u)$. At the end of this call, $v$ is partitioned and would have $u'$ as its anchor. Contradiction. \end{proof} We are set for the local implementation. For a vertex $v$, we compute $IB(v)$ and run \texttt{cluster}$(u)$ for all $u \in IB(v)$. By \Lem{anchor}, we can compute the anchor of $v$, and by \Clm{anchor}, we can perform a BFS to find all connected vertices with the same anchor. We begin with a procedure that computes $IB(v)$. Since the truncated diffusion is not symmetric, this requires a little care. We use $N(u)$ to denote the neighborhood of vertex $u$. \fbox{ \begin{minipage}{0.9\textwidth} {{\tt findIB}$(v)$} \begin{compactenum} \item Initialize $S = \{v\}$. \item For every $t = 1, \ldots, \ell$: \begin{compactenum} \item For every $w \in S \cup N(S)$, compute $\widehat{M}^t\vec{w}$. If $v \in \supp(\widehat{M}^t\vec{w})$, add $w$ to $S$. \end{compactenum} \item Return $S$. \end{compactenum} \end{minipage} } \begin{claim} \label{clm:findball} The output of {\tt findIB}$(v)$ is $IB(v)$. The running time is $O(d^2\ell^3\rho^{-2})$. \end{claim} \begin{proof} We prove by induction on $t$, that after $t$ iterations of the loop, $S$ is the set $\{w \ | \ \exists t' \in [0,t], v \in \supp(\widehat{M}^{t'}\vec{w})\}$. The base case $t=0$ holds because $S$ is initialized to $\{v\}$. Now for the induction. Consider some $w$ such that $v \in \supp(\widehat{M}^{t+1}\vec{w})$. This means that $(1-d(w)/2d)\trunp{w}{t}{v} + (1/2d)\sum_{w' \in N(w)} \trunp{w'}{t}{v} \geq \rho$. Since the LHS is an average, for some $w' \in N(w) \cup \{w\}$, $\trunp{w'}{t}{v} \geq \rho$. Hence, $v \in \supp(\widehat{M}^t\vec{w'})$, and by induction $w' \in S$ at the beginning of the $(t+1)$th iteration. The inner loop will consider $w$ (as it is either $w'$ or a neighbor of $w'$), correctly determine that $v \in \supp(\widehat{M}^{t+1}\vec{w})$, and add it to $S$. By construction, every (new) vertex $w$ added to $S$ has the property that $v \in \supp(\widehat{M}^{t+1}\vec{w})$. This completes the induction and the output property. For the running time, observe that for all iterations, $S \subseteq IB(v)$. By \Clm{ballsize}, $|S| \leq \ell\rho^{-1}$. Hence, $S \cup N(S)$ has size $O(d\ell\rho^{-1})$. The computation of each $\widehat{M}^t\vec{w}$ can be done in $O(d\ell\rho^{-1})$ time, since the distribution vector after each step has support size at most $\rho^{-1}$. The total running time of each iteration is $O(d^2\ell^2\rho^{-2})$. There are at most $\ell$ iterations, leading to a total running time of $O(d^2\ell^3\rho^{-2})$. \end{proof} We can now describe the local partitioning oracle (modulo the description of {\tt findr}). 
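Before doing so, here is a minimal Python sketch of {\tt findIB}{} (reusing the \texttt{truncated\_diffusion} sketch from \Sec{trunc}). It recomputes each truncated diffusion from scratch and is therefore not optimized, but it mirrors the growth of $S$ analyzed in \Clm{findball}.
\begin{verbatim}
def find_inverse_ball(adj, d, v, ell, rho):
    """Sketch of findIB(v): all w with v in supp(hat{M}^t 1_w) for some t in [0, ell].

    If w reaches v at step t+1, then w itself or one of its neighbors reaches v
    at step t, so it suffices to examine S and its neighborhood at every step
    (this mirrors the induction in the correctness proof above).
    """
    S = {v}
    for t in range(1, ell + 1):
        candidates = set(S)
        for u in S:
            candidates.update(adj[u])
        for w in candidates:
            if w not in S and v in truncated_diffusion(adj, d, w, t, rho):
                S.add(w)
    return S
\end{verbatim}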
\fbox{ \begin{minipage}{0.9\textwidth} {{\tt findAnchor}$(v, \boldsymbol{R})$} \begin{compactenum} \item Run {\tt findr}$(\boldsymbol{R})$ to get the set $K = \{k_1, k_2, \ldots, k_{\overline{h}}\}$. \item Run {\tt findIB}$(v)$ to compute $IB(v)$. \item Initialize $A = \emptyset$. \item For every $s \in IB(v)$: \begin{compactenum} \item Using $\boldsymbol{R}$ determine $h_s, t_s$. Using $K$, determine $k_s$. \item Compute $C = \texttt{cluster}(s,t_s,k_s)$. \item If $C \ni v$, then add $s$ to $A$. \end{compactenum} \item Output the smallest vertex according to $\prec$ in $A$. \end{compactenum} \end{minipage} } \fbox{ \begin{minipage}{0.9\textwidth} {{\tt findPartition}$(v, \boldsymbol{R})$} \begin{compactenum} \item Call {\tt findAnchor}$(v,\boldsymbol{R})$ to get the anchor $s$. \item Perform BFS from $v$. For every vertex $w$ encountered, first call {\tt findAnchor}$(w,\boldsymbol{R})$. If the anchor is $s$, add $w$ to the BFS queue (else, ignore $w$). \item Output the set of vertices that entered the BFS queue. \end{compactenum} \end{minipage} } \\ The following claim is a direct consequence of \Lem{anchor} and \Clm{findball}. \begin{claim} \label{clm:findanchor} The procedure {\tt findAnchor}$(v,\boldsymbol{R})$ outputs the anchor of $v$ and runs in time $O((d\ell\rho^{-1})^3)$ plus the running time of {\tt findr}. \end{claim} \begin{proof} Observe that {\tt findAnchor}$(v,\boldsymbol{R})$ finds $IB(v)$, computes \texttt{cluster}$(s)$ for each $s \in IB(v)$, and outputs the smallest (by $\prec$) $s$ such that $v \in \texttt{cluster}(s)$. By \Lem{anchor}, the output is the anchor of $v$. By \Clm{findball}, the running time of {\tt findIB}$(v)$ is $O(d^2\ell^3\rho^{-2})$. The number of calls to \texttt{cluster}{} is $|IB(v)|$, which is at most $\ell\rho^{-1}$ (\Clm{ballsize}). Each call to \texttt{cluster}{} runs in time $O(d\ell\rho^{-2})$, by \Clm{cluster} and the fact that $k_s \leq \rho^{-1}$. Ignoring the call to {\tt findr}, the total running time is $O(d^2\ell^3\rho^{-3})$. \end{proof} \begin{theorem} \label{thm:findpart} The output of {\tt findPartition}$(v,\boldsymbol{R})$ is precisely the set in $\boldsymbol{P}$ containing $v$, where $\boldsymbol{P}$ is the partition output by {\tt globalPartition}$(\boldsymbol{R})$. The running time of {\tt findPartition}$(v,\boldsymbol{R})$ is $O((d\ell\rho^{-1})^4)$ plus the running time of {\tt findr}. \end{theorem} \begin{proof} By \Clm{findanchor}, {\tt findAnchor}{} correctly outputs the anchor. By \Clm{anchor}, the set $S$ in $\boldsymbol{P}$ containing $v$ is exactly the maximal connected component of vertices sharing the same anchor (as $v$). The set $S$ in $\boldsymbol{P}$ is generated in {\tt globalPartition}$(\boldsymbol{R})$ by a call to \texttt{cluster}, whose output is a set of size at most $2\rho^{-1}$ (by \Clm{cluster}, since $k_s \leq \rho^{-1}$). The total number of calls to {\tt findAnchor}{} made by {\tt findPartition}$(v,\boldsymbol{R})$ is $O(d\rho^{-1})$, since a call is made to either a vertex in the set $S$ or a neighbor of $S$. Overall, the total running time is $O((d\ell\rho^{-1})^4)$ plus the running time of {\tt findr}. (Instead of calling {\tt findr}{} in each call to {\tt findAnchor}, one can simply store its output.) \end{proof} \section{Coordination through the size thresholds: the procedure {\tt findr}} \label{sec:findr} We now come to the heart of our algorithm: coordination through {\tt findr}. This section gives the crucial ingredient in arguing that the partitioning scheme does not cut too many edges. 
The ordering of vertices (to form clusters) is chosen independently of the graph structure. It is highly likely that, as the partitioning proceeds, newer \texttt{cluster}$(v)$ sets overlap heavily with the existing partition. Such clusters may cut many new edges, without clustering enough vertices. Note that \texttt{cluster}$(v)$ is a low conductance cut only in the original graph; it might have high conductance restricted to $F$ (the current free set). To deal with such ``bad" clusters, we need to prove that every so often, \texttt{cluster}$(v)$ will successfully partition enough new vertices. Such ``good" clusters allow the partitioning scheme to suffer many bad clusters. This argument is finally carried out by a careful charging argument. First, we need to argue that such good clusters exist. The key tool is given by the following theorem, which is proved using spectral graph theoretic methods. We state the theorem as an independent statement. \begin{theorem} \label{thm:restrict-cut} Let $G$ be a bounded degree graph in a minor-closed family. Let $F$ be an arbitrary set of vertices of size at least $\beta n$. There exists a size threshold $k \leq \rho^{-1}$ such that the following holds. For at least $(\beta^2/\log^2\beta^{-1})n$ vertices $s \in F$, there are at least $(\beta/\log^2\beta^{-1})\ell$ timesteps $t \leq \ell$ such that: there exists $k' \in [k,2k]$ such that (i) $\level{s}{t}{k'} \subseteq \supp(\widehat{M}^t\vec{s})$, (ii) $\Phi(\level{s}{t}{k'} \cup \{s\}) < \phi$, and (iii) $|\level{s}{t}{k'} \cap F| \geq \beta^3 k$. \end{theorem} The proof of this theorem is deferred to \Sec{ls-cluster}. In this section, we apply this theorem to complete the description of the partition oracle and prove its guarantees. We discuss the significance of this theorem. The diffusion used to define $\level{s}{t}{k'}$ occurs in $G$, but we are promised a low conductance cut with non-trivial intersection with $F$ (since $\phi \ll \beta^3$). Moreover, such cuts are obtained for a non-trivial fraction of timesteps, so we can choose one uar. Given oracle access to membership in $F$, it is fairly easy to find such a size threshold by random sampling. {\em The importance of phases:} Recall the global partitioning procedure {\tt globalPartition}. We can think of the partitioning process as divided into phases, where the $h$th phase involves calling \texttt{cluster}$(v,t_v,k_v)$ for all vertices $v$ whose phase value is $h$. Consider the free set at the beginning of a phase $h$, denoting it $F_h$. We apply \Thm{restrict-cut} to determine the size threshold $k_h$. Since all $k_v$ values in this phase are precisely $k_h$, this size threshold ``coordinates" all clusters in this phase. As the phase proceeds, the free set shrinks, and the size threshold $k_h$ stops satisfying the properties of \Thm{restrict-cut}. Roughly speaking, at this point, we start a new phase $h+1$, and recompute the size threshold. The frequency of recomputation is chosen carefully to ensure that the total running time remains $\mathrm{poly}(\varepsilon^{-1})$. We now discuss the randomness involved in selecting phases and why geometric random variables are used. Recall that $h_v$ is independently (for all $v$) set to be $\min(X, \overline{h})$, where $X \sim Geo(\delta)$. We first introduce some notation regarding phases. \begin{definition} \label{def:phase-v} The \emph{phase $h$ seeds}, denoted $V_h$, are the vertices whose phase value is $h$. Formally, $V_h = \{v \ | \ h_v = h\}$. 
We use $V_{< h}$ to denote $\bigcup_{h' < h} V_{h'}$. (We analogously define $V_{\leq h}, V_{\geq h}$.) The \emph{free set at phase $h$}, denoted $F_h$, is the free set $F$ in {\tt globalPartition}, just before the first phase $h$ vertex is processed. Formally, $F_h = V \setminus \bigcup_{v \in V_{<h}} \texttt{cluster}(v)$. \end{definition} One can think of the $V_h$s being generated iteratively. Assume that we have fixed the vertices in $V_1, \ldots, V_{h-1}$. All other vertices are in $V_{\geq h}$, implying that $h_v \geq h$ for such vertices. By the memorylessness of the geometric distribution, $\Pr[h_v = h \ | \ h_v \geq h] = \delta$. Thus, we can imagine that $V_{h}$ is generated by independently sampling each element of $V_{\geq h}$ with probability $\delta$. We restate this observation as \Clm{geo-phase}. \Clm{vh-size} is a simple Chernoff bound argument. Before proceeding, we state some standard Chernoff bounds (Theorem 1.1 of~\cite{DuPa-book}). \begin{theorem} \label{thm:chernoff} Let $X_1, X_2, \ldots, X_r$ be independent random variables in $[0,1]$. Let $X := \sum_i X_i$ and $\mu := \hbox{\bf E}[X]$. \begin{asparaitem} \item $\Pr[X \geq 3\mu/2] \leq \exp(-\mu/12)$. \item $\Pr[X \leq \mu/2] \leq \exp(-\mu/8)$. \item For $t \geq 6\mu$, $\Pr[X \geq t] \leq 2^{-t}$. \end{asparaitem} \end{theorem} \begin{claim} \label{clm:geo-phase} For all $v \in V$ and $1 < h < \overline{h}$, $\Pr[v \in V_h \ | \ v \in V_{\geq h}] = \delta$. \end{claim} \begin{claim} \label{clm:vh-size} Let $h < \overline{h}$. Condition on the randomness used to specify $V_1, V_2, \ldots, V_{h-1}$. Let $S$ be an arbitrary subset of $V_{\geq h}$. With probability at least $1-2\exp(-\delta |S|/12)$ over the choice of $V_h$, $|S \cap V_h| \in [\delta|S|/2, 2\delta|S|]$. \end{claim} \begin{proof} For every $s \in S$, let $X_s$ be the indicator random variable for $s \in V_h$. By \Clm{geo-phase} and the independent phase choices for each vertex, the $X_s$ are independent Bernoullis with probability $\delta$. By the Chernoff bounds of \Thm{chernoff}, $\Pr[\sum_{s \in S} X_s \leq \delta|S|/2] \leq \exp(-\delta|S|/8)$ and $\Pr[\sum_{s \in S} X_s \geq 2\delta|S|] \leq \exp(-\delta|S|/12)$. A union bound completes the proof. \end{proof} \begin{claim} \label{clm:last-phase} With probability at least $1-2^{-\delta n}$, $|V_{\overline{h}}| \leq \delta n$. \end{claim} \begin{proof} Recall that $\overline{h}$ is the last phase and $\overline{h} = 2 \delta^{-1}\log(\delta^{-1})$. The probability that $X \sim Geo(\delta)$ is at least $2 \delta^{-1}\log(\delta^{-1})$ is $(1-\delta)^{2 \delta^{-1}\log(\delta^{-1})-1} < \delta/6$. Hence, the probability that any given vertex lies in $V_{\overline{h}}$ is at most $\delta/6$, and the expectation of $|V_{\overline{h}}|$ is at most $\delta n/6$. By the Chernoff bound of \Thm{chernoff}, $\Pr[|V_{\overline{h}}| \geq \delta n] \leq 2^{-\delta n}$. \end{proof} With this preamble, we proceed to the description of {\tt findr}{} and the main properties of its output. \subsection{The procedure {\tt findr}} \label{sec:sub-findr} It is convenient to assume that for all $v$, $h_v$ and $t_v$ have been chosen. These quantities are chosen independently for each vertex using simple distributions, so we will not carry the randomness used to decide them as explicit arguments. Recall that the output of {\tt findr}{} is the set of size thresholds $\{k_1, k_2, \ldots, k_{\overline{h}}\}$. It is convenient to use $K_h$ to denote $\{k_1, k_2, \ldots, k_{h}\}$. 
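Before describing the membership oracle, we note that \Clm{geo-phase} is simply the memorylessness of the geometric distribution. The following tiny Python simulation (with arbitrary, illustrative parameters) can be used as a sanity check that $\Pr[h_v = h \mid h_v \geq h]$ is indeed $\delta$ for phases below the cap $\overline{h}$.
\begin{verbatim}
import random

def sample_phase(delta, h_bar):
    """h_v = min(X, h_bar) with X ~ Geo(delta)."""
    x = 1
    while random.random() >= delta:
        x += 1
    return min(x, h_bar)

def estimate_conditional(delta, h, h_bar, trials=200000):
    """Estimate Pr[h_v = h | h_v >= h]; for h < h_bar this should be close to delta."""
    hits = total = 0
    for _ in range(trials):
        hv = sample_phase(delta, h_bar)
        if hv >= h:
            total += 1
            hits += (hv == h)
    return hits / total

# Example with illustrative parameters: estimate_conditional(0.1, 5, 47)
# is approximately 0.1.
\end{verbatim}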
Before describing {\tt findr}, we define a procedure that is a membership oracle for $F_h$. \noindent \fbox{ \begin{minipage}{0.9\textwidth} {{\tt IsFree}$(u,h,K_{h-1})$} \begin{compactenum} \item If $h = 1$, output YES. \item Run {\tt findIB}$(u)$ to determine $IB(u)$. Let $C$ be $IB(u) \cap V_{< h}$. \item Using $K_{h-1}$, determine $k_v$ for all $v \in C$. \item For all $v \in C$, compute \texttt{cluster}$(v,t_v,k_v)$. If the union contains $u$, output NO. Else, output YES.\label{step:union} \end{compactenum} \end{minipage} } \begin{claim} \label{clm:isfree} Assume that $K_{h-1}$ is provided correctly. Then {\tt IsFree}$(u,h,K_{h-1})$ outputs YES iff $u \in F_h$. The running time is $O((d\ell\rho^{-1})^3)$. \end{claim} \begin{proof} If $h=1$, then all vertices are free (this is the free set before {\tt globalPartition}{} begins any partitioning). Assume $h > 1$. So $F_h = V \setminus \bigcup_{v \in V_{<h}} \texttt{cluster}(v)$. If $u \notin F_h$, then there exists $v \in V_{<h}$ such that $u \in \texttt{cluster}(v)$. By construction, \texttt{cluster}$(v)$ is contained in $\supp(\widehat{M}^t \vec{v})$ for some $t \leq \ell$. Thus, $v \in IB(u)$ and $h_v < h$. Hence, $v$ will be considered in \Step{union} and the union will contain $u$. The output is NO. For the converse, observe that if the output is NO, then there is a $v \in V_{< h}$ such that $u \in \texttt{cluster}(v)$. Hence, $u \notin F_h$. Now for the running time analysis. The running time of {\tt findIB}$(u)$ is $O(d^2\ell^3\rho^{-2})$ (\Clm{findball}) and $|C| \leq \ell\rho^{-1}$ (\Clm{ballsize}). Each call to \texttt{cluster}{} takes $O(d\ell\rho^{-2})$ time (\Clm{cluster}). The total running time is $O((d\ell\rho^{-1})^3)$. \end{proof} We have the necessary tools to define the procedure {\tt findr}. We will need the following definition in our description and analysis of {\tt findr}. \begin{definition} \label{def:viable} Assume $|\free{h}| \geq \beta n$. A vertex $s \in V_{\geq h}$ is called \emph{$(h,k)$-viable} if $C := \texttt{cluster}(s,t_s,k)$ is not a singleton and $|C \cap \free{h}| \geq \beta^3 k$. (If $|\free{h}| < \beta n$, no vertex is $(h,k)$-viable.) \end{definition} Let us motivate this definition. When $C := \texttt{cluster}(s,t_s,k)$ is not a singleton, it is a low conductance cut of $\Theta(k)$ vertices. The vertex $s$ is $(h,k)$-viable if $C$ contains a non-trivial fraction of the free vertices available in the $h$th phase. The viable vertices are those from which clustering will make significant ``progress" in the $h$th phase. For each $h$, the procedure {\tt findr}{} searches for values of $k$ that lead to many $(h,k)$-viable vertices. In the next section, we prove that having sufficiently many clusters come from viable vertices ensures the cut bound of \Def{oracle}. \noindent\fbox{ \begin{minipage}{0.9\textwidth} {{\tt findr}$(\boldsymbol{R})$} \begin{compactenum} \item For $h = 1$ to $\overline{h}$: \begin{compactenum} \item Sample $\beta^{-10}$ uar vertices independently. Let $S_h$ be the multiset of sampled vertices that are in phase $\geq h$. \item If $|S_h| \leq \beta^{-9}/2$, set $k_h = 0$ and continue the for loop. Else, reset $S_h$ to the multiset of the first $\beta^{-8}$ vertices of $S_h$. \item For $k \in [\rho^{-1}]$ and for every $s \in S_h$: \label{step:loop} \begin{compactenum} \item Compute $C := \texttt{cluster}(s,t_s,k)$. \item For all $u \in C$, call {\tt IsFree}$(u,h,K_{h-1})$ to determine if $u \in F_{h}$. 
\item If $C$ is not a singleton and $|C \cap F_{h}| \geq \beta^3 k$, mark $s$ as being $(h,k)$-viable. \end{compactenum} \item If there exists some $k$ such that there are at least $12\beta^{4}|S_h|$ $(h,k)$-viable vertices, assign an arbitrary such $k$ as $k_h$. Else, assign $k_h := 0$. \label{step:setthresh} \end{compactenum} \item Output $K_{\overline{h}} = \{k_1, k_2, \ldots, k_{\overline{h}}\}$. \end{compactenum} \end{minipage} } \begin{claim} \label{clm:findr-time} The running time of {\tt findr}{} is $O((d\ell\delta^{-1}\rho^{-1})^5)$. \end{claim} \begin{proof} There are $\overline{h} = 2 \delta^{-1}\log(\delta^{-1})$ iterations. We compute the running time of each iteration. There are at most $\rho^{-1}\beta^{-8}$ calls to \texttt{cluster}, each of which takes $O(d\ell\rho^{-2})$ time by \Clm{cluster}. For each call to \texttt{cluster}, there are at most $\rho^{-1}$ calls to {\tt IsFree}. Each call to {\tt IsFree}{} takes $O((d\ell\rho^{-1})^3)$ time (\Clm{isfree}). The running time of each iteration is $O(\beta^{-10} + d\ell\rho^{-3}\beta^{-8} + d^3\ell^3\rho^{-5}\beta^{-8})$. By the parameter settings, since $\ell^2 \geq \varepsilon^{2\cdot\ellexp} \geq (\betaval)^{-8} = \beta^{-8}$, the running time of each iteration is $O((d\ell\rho^{-1})^5)$. The total running time is $O((d\ell\delta^{-1}\rho^{-1})^5)$. \end{proof} The following theorem gives the main guarantee of {\tt findr}. The proof is a fairly straightforward Chernoff bound on top of an application of \Thm{restrict-cut}. Quite simply, the proof just says the following. \Thm{restrict-cut} shows the existence of $(h,k)$ pairs for which many vertices are viable. The {\tt findr}{} procedure finds such pairs by random sampling. \begin{theorem} \label{thm:findr} The following property of the values $K_{\overline{h}}$ output by {\tt findr}$(\boldsymbol{R})$ and the preprocessing choices holds with probability at least $1-\exp(-1/\varepsilon)$ over all the randomness in $\boldsymbol{R}$. For all $h \leq \overline{h}$, if $|\free{h}| \geq \beta n$, at least $\beta^5 \delta n$ vertices in $V_{h}$ are $(h,k_h)$-viable. \end{theorem} \begin{proof} The proof has two parts. In the first part, we argue that whp, if $|\free{h}| \geq \beta n$, then a non-zero $k_h$ is output. This part is an application of \Thm{restrict-cut}. In the second part, we prove that (whp), if a non-zero $k_h$ is output, then it satisfies the desired properties. This part is proven using a simple Chernoff bound argument. Fix an $h$. Condition on any choice of $V_1, V_2, \ldots, V_{h-1}$ such that $|\free{h}| \geq \beta n$. Note that $V_{\geq h} \supseteq \free{h}$, since all vertices in $V_{< h}$ are necessarily clustered by the $h$th phase. (Recall that \texttt{cluster}$(v)$ always contains $v$.) Hence, $|V_{\geq h}| \geq \beta n$. There will be numerous low probability ``bad" events that we need to track. We will describe these bad events, and refer to their probabilities as ``Error 1", ``Error 2", etc. {\bf Error 1, $\exp(-\beta^{-8})$.} The probability that a uar vertex is in $V_{\geq h}$ is at least $\beta$, and the expected size of $S_h$ is at least $\beta \times \beta^{-10} = \beta^{-9}$. By the Chernoff bound of \Thm{chernoff}, $\Pr[|S_h| \leq \beta^{-9}/2] \leq \exp(-\beta^{-9}/12)$ $\leq \exp(-\beta^{-8})$. Thus, with probability at least $1-\exp(-\beta^{-8})$, \Step{loop} is reached and $S_h$ is a multiset of $\beta^{-8}$ iid uar elements of $V_{\geq h}$. 
Let us assume that $S_h$ is such a multiset, and prove that a non-zero $k_h$ is output whp. We bring out the main tool, \Thm{restrict-cut}. Since $|\free{h}| \geq \beta n$, there exists a size threshold $k \leq \rho^{-1}$ such that the following holds. For at least $(\beta^2/\log^2\beta^{-1})n$ vertices $s \in \free{h}$, there are at least $(\beta/\log^2\beta^{-1})\ell$ timesteps $t$ such that: there exists $k' \in [k,2k]$ such that (i) $\level{s}{t}{k'} \subseteq \supp(\widehat{M}^t\vec{s})$, (ii) $\Phi(\level{s}{t}{k'} \cup \{s\}) < \phi$, and (iii) $|\level{s}{t}{k'} \cap \free{h}| \geq \beta^3 k$. For any such $(s, t, k)$ triple, consider a call to \texttt{cluster}$(s,t,k)$. Observe that the call will output the largest level set of size in $[k,2k]$ satisfying (i) and (ii). Hence, it will output a (non-singleton) $\level{s}{t}{k''}$ such that $k' \leq k'' \leq 2k$ and (i) and (ii) hold. Note that $\level{s}{t}{k''} \supseteq \level{s}{t}{k'}$, so the third item will also hold. Thus, \emph{if $t_s$ is set to one of these $(\beta/\log^2\beta^{-1})\ell$ timesteps $t$}, then $s$ will be $(h,k)$-viable. {\bf Error 2, $\exp(-\beta^{-1})$.} Let us fix a size threshold $k$ promised by \Thm{restrict-cut}. The probability that a uar element of $V_{\geq h}$ is marked as $(h,k)$-viable is at least the product of the probability of choosing an appropriate $s$ and the probability that $t_s$ is chosen appropriately. Thus, the probability of finding an $(h,k)$-viable vertex is at least $(\beta^2/\log^2\beta^{-1}) \times (\beta/\log^2\beta^{-1}) = \beta^3/\log^4\beta^{-1}$. This probability is independent for all vertices in $V_{\geq h}$. By the Chernoff bound in \Thm{chernoff}, with probability at least $1-\exp(-\beta^4 |S_h|/12)$, at least $\beta^3 |S_h|/(2\log^4\beta^{-1}) \geq 12\beta^4|S_h|$ $(h,k)$-viable vertices are discovered in {\tt findr}. In this case, in \Step{setthresh}, $k_h$ is set to a non-zero value. The probability of this event happening is at least $1-\exp(-\beta^{-8})-\exp(-\beta^4|S_h|/8)$ $\geq 1 - \exp(-\beta^{-1})$. (Recall that whp $S_h$ is a multiset of $\beta^{-8}$ iid uar vertices. In the union bound above, the first ``bad event" is $S_h$ \emph{not} having $\beta^{-8}$ vertices and the second ``bad event" is discovering too few viable vertices.) We have concluded that whp, if $|\free{h}| \geq \beta n$, then $k_h$ is non-zero. We move to the second part of the proof, which asserts that (with high probability) an output non-zero $k_h$ has the desired properties. Condition on any choice of the preprocessing. Note that the randomness is only over the choice of $S_h$. Fix any $k \leq \rho^{-1}$. Suppose that the number of $(h,k)$-viable vertices in $V_{\geq h}$ is at most $2\beta^5n$. Then, the expected number of such vertices in $S_h$ is at most $2\beta^5 n/|V_{\geq h}| \times |S_h| \leq 2\beta^4 |S_h|$. (We use the lower bound $|V_{\geq h}| \geq |\free{h}| \geq \beta n$.) {\bf Error 3, $2^{-12\beta^{-4}}$.} Let $X_k$ denote the random variable counting the number of $(h,k)$-viable vertices in $S_h$. Since $X_k$ is distributed as a binomial, by the Chernoff bound of \Thm{chernoff}, $\Pr[X_k > 12\beta^4|S_h|] \leq 2^{-12\beta^4|S_h|}$. Note that when $X_k < 12\beta^4|S_h|$, $k_h$ cannot be set to $k$. All in all, for any $h$, any choice of the $t_v$s, and any choice of $k$, if \Step{loop} is reached and the number of $(h,k)$-viable vertices in $V_{\geq h}$ is at most $2\beta^5n$, then $k_h \neq k$ with probability at least $1-2^{-12\beta^{-4}}$. 
Taking the contrapositive: whp, if $k_h \neq 0$ (so \Step{loop} must have been reached), then the number of $(h,k_h)$-viable vertices in $V_{\geq h}$ is at least $2\beta^5n$. {\bf Error 4, $2\exp(-\delta\beta^5n/12)$.} Suppose the number of $(h,k_h)$-viable vertices in $V_{\geq h}$ is at least $2\beta^5n$. By \Clm{vh-size} applied to the set of $(h,k_h)$-viable vertices in $V_{\geq h}$, with probability at least $1-2\exp(-\delta\beta^5n/12)$, the number of such viable vertices in $V_h$ is at least $\delta\beta^5n$. We take a union bound over the $2 \delta^{-1}\log(\delta^{-1})$ values of $h$, the $\rho^{-1}$ values of $k$, and all errors encountered thus far. The total error probability is at most $2 \delta^{-1}\log(\delta^{-1})\cdot\rho^{-1}(\exp(-\beta^{-8}) + \exp(-\beta^{-1}) + 2^{-12\beta^{-4}} + 2\exp(-\delta\beta^5n/12))$. Note that $2 \delta^{-1}\log(\delta^{-1}), \beta^{-1}, \rho^{-1}$ are $\mathrm{poly}(\varepsilon^{-1})$, and thus the total error probability is at most $\exp(-\varepsilon^{-1})$. With the remaining probability, the following holds. For all phases $h$, if $|\free{h}| \geq \beta n$, a non-zero $k_h$ is output. If a non-zero $k_h$ is output, the number of $(h,k_h)$-viable vertices in $V_{h}$ is at least $\delta\beta^5n$. \end{proof} \section{Proving the cut bound: the amortization argument} \label{sec:amort} We come to the final piece of proving the guarantees of \Thm{main-intro}. We need to prove that the number of edges cut by the partition of {\tt globalPartition}{} is at most $\varepsilon nd$. This requires an amortization argument, explained below. For the sake of exposition, we will ignore constant factors in this high-level description. One of the important takeaways is how various parameters are chosen to prove the cut bound. Consider a phase $h$ where $|\free{h}| \geq \beta n$. Let us upper bound the number of edges cut by the clustering done in this phase. Roughly speaking, $|V_h| = \delta n$, so there are $\delta n$ clusters created in this phase. Each cluster in this phase has at most $2k_h$ vertices. The number of edges cut by each such cluster is at most $2\phi k_h d$ (since \texttt{cluster}{} outputs a low conductance cut; ignore singleton outputs). So the total number of edges cut is at most $2\phi \delta k_h nd$. Let us now lower bound the number of new vertices that are partitioned in phase $h$; this is the set $\free{h} \setminus \free{h+1}$. For each $(h,k_h)$-viable $v$ in $V_h$, \texttt{cluster}$(v)$ contains at least $\beta^3 k_h$ vertices in $\free{h}$. These will be newly partitioned vertices. Here comes the primary difficulty: the clusters for the different such $v$ might not be disjoint. We need to lower bound the union of the clustered vertices in $\free{h}$. An alternate description of the challenge is as follows. We are only guaranteed that clusters from viable vertices $v$ contain many vertices in $\free{h}$, the free set at the \emph{beginning} of phase $h$. What we really need is for the cluster from $v$ to contain many free vertices \emph{at the time} that $v$ is processed. Phases were introduced to solve this problem. By reducing $\delta$, we can limit the size of $V_h$, thereby limiting the intersection between the clusters produced in this phase. We now explain the math behind this argument. Consider some $w \in \free{h}$ and let $c_w$ be the number of vertices in $V_{\geq h}$ that cluster $w$ (call these seeds). Thus, $c_w = |\{s \ | \ s \in V_{\geq h}, w \in \texttt{cluster}(s)\}|$. 
The vertex $w$ is clustered in phase $h$ iff one of these $c_w$ seeds is selected in $V_h$. By \Clm{geo-phase}, each such seed is independently selected in $V_h$ with probability $\delta$. The probability that $w$ is clustered in this phase is precisely $1-(1-\delta)^{c_w}$. Crucially, $c_w \leq |IB(w)| \leq \ell\rho^{-1}$. We chose $\delta$ so that $\delta\ell\rho^{-1} \ll 1$, hence $1-(1-\delta)^{c_w} \approx \delta c_w$. Thus, the expected number of newly clustered vertices is at least $\sum_{w \in \free{h}} \delta c_w$. By rearranging summations, $\sum_{w \in \free{h}} c_w = \sum_{v \in V_{\geq h}} |\texttt{cluster}(v) \cap \free{h}|$. For every $(h,k_h)$-viable vertex $v$ in $V_{\geq h}$, $|\texttt{cluster}(v) \cap \free{h}| \geq \beta^3 k_h$. The arguments in the proof of \Thm{findr} show that there are $\beta^5 n$ such vertices in $V_{\geq h}$ whp. Hence, we can lower bound (in expectation) the number of newly clustered vertices as follows: $$\sum_{w \in \free{h}} \delta c_w \geq \delta\cdot(\beta^5 n)\cdot (\beta^3 k_h) = \delta\beta^8 k_h n$$ We upper bounded the number of edges cut by $2\phi \delta k_h nd$. The ratio of edges cut to vertices clustered is $8\phi \beta^{-8}d$. The parameters are set to ensure that $8\phi \beta^{-8} \ll \varepsilon$, so the total number of edges cut is $\varepsilon nd$. The formal analysis requires some care to deal with conditional probabilities and dependencies between various phases. Also, \Thm{findr} talks about $V_h$ and not $V_{\geq h}$, which necessitates some changes. But the essence of the argument is the same. Our main theorem is a cut bound for {\tt globalPartition}. \begin{theorem} \label{thm:edge-cut} The expected number of edges cut by the partitioning of {\tt globalPartition}$(\boldsymbol{R})$ is at most $\varepsilon nd$. \end{theorem} We will break up the proof into two technical claims. Somewhat abusing notation, we say a vertex in $V_{\geq h}$ is $h$-viable if it is $(h,k_h)$-viable. \begin{claim} \label{clm:cut-cluster} $$ \hbox{\bf E}[\textrm{\# edges cut by {\tt globalPartition}$(\boldsymbol{R})$}] \leq 32 \phi\beta^{-8}d^2 \Big(\sum_{h < \overline{h}} \hbox{\bf E}[\sum_{v \in V_h} |\texttt{cluster}(v) \cap \free{h}|]\Big) + 2\beta nd $$ \end{claim} \begin{proof} The proof goes phase by phase. We call a phase significant if $|\free{h}| \geq \beta n$. Edges cut in a significant phase are also called significant. Observe that the total number of edges cut is at most the number of significant edges cut plus $\beta nd$. (This contributes to the extra additive term in the claim statement.) Below, we will bound the total number of significant edges cut. By \Clm{last-phase}, with probability at least $1-2^{-\delta n}$, $|V_{\overline{h}}| \leq \delta n$. Note that $|\free{\overline{h}}| \leq |V_{\geq \overline{h}}| = |V_{\overline{h}}|$. (The equality is because this is the last phase.) Since $\delta n < \beta n$, the expected number of significant edges cut in the last phase is at most $2^{-\delta n} nd < 1$. Now assume that $h < \overline{h}$. Consider the edges cut in the $h$th phase. Consider any choice of $V_1, V_2, \ldots, V_{h-1}$ and $k_1, k_2, \ldots, k_{h}$. If $|\free{h}| < \beta n$, no significant edges are cut. Let us assume that $|\free{h}| \geq \beta n$. Each set \texttt{cluster}$(v)$ output in this phase is either a singleton or a set of size at most $2k_h$ and conductance at most $\phi$. 
In either case, the number of edges cut by removing \texttt{cluster}$(v) \cap F$ (in {\tt globalPartition}) is at most $2\phi k_h d + d$. Note that $2\phi k_h d \geq 1$ (otherwise, by the connectedness of $G$, there can never be a set of size at most $2k_h$ of conductance $\leq \phi$). Hence, the number of significant edges cut by a single cluster is at most $2\phi k_h(d+d^2) \leq 4\phi k_h d^2$. Note that $|V_{\geq h}| \geq |\free{h}| \geq \beta n$ and $|V_{\geq h}|$ is obviously at most $n$. By \Clm{vh-size} with $S = V_{\geq h}$, with probability at least $1-2\exp(-\delta \beta n/12)$ over the choice of $V_h$, $|V_h| \leq 2\delta n$. Hence, the total number of significant edges cut is at most $4\phi k_h d^2 \times 2\delta n = 8\phi\delta k_h d^2n$. By \Thm{findr}, with probability at least $1-\exp(-\varepsilon^{-1})$, if $|\free{h}| \geq \beta n$, at least $\beta^5 \delta n$ vertices in $V_h$ are $h$-viable. Call this event ${\cal E}$. For every $h$-viable vertex in $V_h$, $|\texttt{cluster}(v) \cap \free{h}| \geq \beta^3 k_h$. For convenience, let $X_h := \sum_{v \in V_h} |\texttt{cluster}(v) \cap \free{h}|$. Conditioned on ${\cal E}$, $X_h \geq \beta^8 (\delta k_h n)$. Recall that with probability at least $1-2\exp(-\delta\beta n/12)$, the number of significant edges cut in this phase is at most $8\phi d^2(\delta k_h n)$. If ${\cal E}$ occurs, we can apply the bound $\beta^{-8}X_h \geq \delta k_h n$ and upper bound the number of significant edges cut in this phase by $8\phi\beta^{-8}d^2 X_h$. Thus, with probability at least $1-\exp(-\varepsilon^{-1})-2\exp(-\delta\beta n/12)$, the number of significant edges cut in phase $h$ is at most $(8\phi\beta^{-8}d^2)X_h$. In other words, there is an event $\mathcal{F}_h$ conditioned on which the above bound holds, and $\Pr[\mathcal{F}_h] \geq 1-\exp(-\varepsilon^{-1})-2\exp(-\delta\beta n/12)$. In the calculation below, we break into conditional expectations and use the fact that $\delta = \mathrm{poly}(\varepsilon)$, $\beta = \Theta(\varepsilon)$, and that the number of phases is at most $2 \delta^{-1}\log(\delta^{-1})$. We also use the fact that $X_h$ is non-negative. \begin{eqnarray} & & \sum_h \hbox{\bf E}[\textrm{\# significant edges cut in phase $h$}] \leq \sum_h \left(\Pr[\mathcal{F}_h]\, \hbox{\bf E}[8\phi\beta^{-8}d^2 X_h \mid \mathcal{F}_h] + \Pr[\overline{\mathcal{F}_h}]\, nd\right) \\ & \leq & 8\phi\beta^{-8}d^2 \sum_h \hbox{\bf E}[X_h] + 2 \delta^{-1}\log(\delta^{-1})\big(\exp(-\varepsilon^{-1})+2\exp(-\delta\beta n/12)\big)nd \leq 8\phi\beta^{-8}d^2\sum_h \hbox{\bf E}[X_h] + \beta nd/2 \end{eqnarray} To this bound, we add the expected number of edges cut in the last phase (at most $1$) and the number of non-significant edges cut (at most $\beta nd$). This completes the proof. \end{proof} \begin{claim} \label{clm:charging} $$ \sum_{h < \overline{h}} \hbox{\bf E}[\sum_{{v \in V_h}} |\texttt{cluster}(v) \cap \free{h}|] \leq 4n $$ \end{claim} \begin{proof} We will apply the following charging argument. When a vertex $v$ is processed in {\tt globalPartition}$(\boldsymbol{R})$ in phase $h$, we add one unit of charge to every vertex in $\texttt{cluster}(v) \cap \free{h}$. Note that the total amount of charge is exactly the quantity we wish to bound. Crucially, note that any vertex $w$ receives charge in at most one phase: the phase in which it leaves the free set. We will prove that the expected charge that any vertex receives is at most 4 units, which will prove the claim. Fix a vertex $w$. 
Let $\chi$ be the random variable denoting the charge that $w$ receives, and ${\cal E}_{h}$ be the event that $w$ receives charge in phase $h$. Since $w$ receives charge in exactly one phase, $\hbox{\bf E}[\chi] = \sum_h \hbox{\bf E}[\chi | {\cal E}_h] \Pr[{\cal E}_h]$. We will prove that, for all $h$, $\hbox{\bf E}[\chi | {\cal E}_h] \leq 4$, which implies that $\hbox{\bf E}[\chi] \leq 4$ as desired. To analyze $\hbox{\bf E}[\chi | {\cal E}_h]$, first condition on a setting of $V_1, V_2, \ldots, V_{h-1}$ (such that $w \in \free{h}$) and all other preprocessing for all vertices. We refer to this setting as the event ${\cal C}$. The randomness for specifying $V_h$ has not been set. The event ${\cal E}_h$ occurs if there is a $v \in V_h$ such that $w \in \texttt{cluster}(v)$. The charge $\chi$ is the number of vertices $v \in V_h$ such that $w \in \texttt{cluster}(v)$. Let $c$ be the number of such vertices in $V_{\geq h}$. Note that any such $v$ lies in $IB(w)$, and by \Clm{ballsize}, $c \leq \ell\rho^{-1}$. By \Clm{geo-phase}, every vertex in $V_{\geq h}$ is in $V_h$ independently with probability $\delta$. Hence, $\Pr[{\cal E}_h | {\cal C}] = 1-(1-\delta)^c$. Note that $\delta c \leq \delta \ell \rho^{-1} < 1/2$, by the parameter settings. Hence $(1-\delta)^c \leq 1-\delta c + (\delta c)^2 \leq 1-\delta c/2$ and $\Pr[{\cal E}_h | {\cal C}] \geq \delta c/2$. Note that $\hbox{\bf E}[\chi | {\cal C}] = \delta c \leq 2 \delta c$, since each of the $c$ seeds lies in $V_h$ independently with probability $\delta$. Since $\chi = 0$ whenever ${\cal E}_h$ does not occur, $\hbox{\bf E}[\chi \mid {\cal E}_h, {\cal C}] = \hbox{\bf E}[\chi | {\cal C}]/\Pr[{\cal E}_h | {\cal C}] \leq (2\delta c)/(\delta c/2) = 4$. Note that the event ${\cal E}_h$ can be partitioned according to the different ${\cal C}$ events. Hence $\hbox{\bf E}[\chi | {\cal E}_h] = \sum_{{\cal C}} \hbox{\bf E}[\chi \mid {\cal E}_h, {\cal C}] \Pr[{\cal C} \mid {\cal E}_h] \leq 4$. This completes the proof. \end{proof} \Thm{edge-cut} follows by a direct application of these claims and plugging in the parameter values. \begin{proof} (of \Thm{edge-cut}) By \Clm{cut-cluster} and \Clm{charging}, the expected number of edges cut by {\tt globalPartition}$(\boldsymbol{R})$ is at most $128\phi\beta^{-8}d\cdot nd + 2\beta nd$. Plugging in the parameters $\phi = d^{-1}\varepsilon^{10}$, $\beta = \betaval$, and noting that $\varepsilon$ is sufficiently small, the expectation is at most $\varepsilon nd$. \end{proof} We can now wrap up the proof of \Thm{main-intro}, showing the existence of $(\varepsilon, \mathrm{poly}(d\varepsilon^{-1}))$-partition oracles for minor-closed families. \begin{proof} (of \Thm{main-intro}) The procedure for the partition oracle is {\tt findPartition}$(v,\boldsymbol{R})$. Let us prove each property of \Def{oracle}. Consistency: By \Thm{findpart}, the partition created by calls to {\tt findPartition}$(v,\boldsymbol{R})$ is precisely the same as the partition created by {\tt globalPartition}$(\boldsymbol{R})$. Cut bound: By \Thm{edge-cut}, the expected number of edges cut is at most $\varepsilon nd$. Running time: The running time of {\tt findPartition}$(v,\boldsymbol{R})$ is $O((d\ell\rho^{-1})^4)$ plus the running time of {\tt findr}. The running time of {\tt findr}{} is $O((d\ell\delta^{-1}\rho^{-1})^5)$, by \Clm{findr-time}. By the parameter settings, $\ell, \delta^{-1}, \rho^{-1}$ are all $\mathrm{poly}(d\varepsilon^{-1})$. 
Hence, the total running time of {\tt findPartition}$(v,\boldsymbol{R})$ is also $\mathrm{poly}(d\varepsilon^{-1})$. \end{proof} \section{Diffusion Behavior on Minor-Free Families}\label{sec:diffusion} In this section, we state and prove the main theorem about diffusions on minor-free graph classes. This is the (only) part of the paper where minor-freeness makes an appearance. \Thm{goodseed} is used in the proof given in \Sec{ls-cluster}. For convenience, we recall the parameters involved. \parameterinfo \begin{theorem} \label{thm:goodseed} Let $G$ be a bounded degree graph in a minor-closed family. Let $F$ be an arbitrary subset of at least $\beta n$ vertices. There are at least $\beta^2 n/8$ vertices $s \in F$ such that: for at least $\beta\ell/8$ timesteps $t \in [\ell]$, $\widehat{M}^t\vec{s}(F) \geq \beta/16$. \end{theorem} We note that this theorem holds for all graphs, if we replace the truncated walk $\widehat{M}$ by the standard random walk $M$. The main insight is that, for $G$ in a minor-closed family, ``polynomial" truncation of the walk distribution does not significantly affect its behavior. The main property of bounded degree minor-free graphs we require is hyperfiniteness, as expressed by Proposition 4.1 of~\cite{AST90} (also used as Lemma 3.3 of~\cite{KSS:19}). \begin{theorem} \label{thm:ast} There is an absolute constant $\gamma$ such that the following holds. Let $H$ be a graph on $r$ vertices. Suppose $G$ is an $H$-minor-free graph. Then, for all $b \in \mathbb N$, there exists a set of at most $\gamma r^{3/2}n/\sqrt{b}$ vertices whose removal leaves $G$ with all connected components of size at most $b$. \end{theorem} The key stepping stone to proving \Thm{goodseed} is \Lem{walk}, which shows that truncation does not significantly affect walk distributions from many vertices. Let us first state a simple fact on $l_1$-norms. \begin{fact} \label{fact:norm} Let $\vec{x}$ and $\vec{y}$ be vectors with non-negative entries, such that for all coordinates $i$, $\vec{x}(i) \geq \vec{y}(i)$. Then $\|\vec{x} - \vec{y}\|_1 = \|\vec{x}\|_1 - \|\vec{y}\|_1$. \end{fact} \begin{proof} $\|\vec{x} - \vec{y}\|_1 = \sum_i |\vec{x}(i) - \vec{y}(i)| = \sum_i (\vec{x}(i) - \vec{y}(i)) = \|\vec{x}\|_1 - \|\vec{y}\|_1$. \end{proof} This fact bears relevance for us, since truncations of walk distribution vectors only reduce coordinates. \begin{lemma} \label{lem:walk} For at least $(1-\rho^{1/8})n$ vertices $v$, the following holds. For every $t \leq \ell$, $\| M^t \vec{v} - \widehat{M}^t \vec{v}\|_1 \leq \ell \rho^{1/9}$. \end{lemma} \begin{proof} Let $H$ be an arbitrary forbidden minor for the minor-closed family of interest. We first apply \Thm{ast} with $b = \lceil 1/\sqrt{\rho} \rceil$, and let $C$ be the set of edges incident to the removed vertices. Then $|C| \leq \gamma r^{3/2} \rho^{1/4} dn$, and the removal of $C$ leaves connected components of size at most $\lceil 1/\sqrt{\rho} \rceil \leq 2/\sqrt{\rho}$. For convenience, set the constant $\gamma' := \gamma r^{3/2}$. We will need the following claim. \begin{claim} \label{clm:trapped} For at least $(1-\rho^{1/8})n$ vertices $v$, the probability that an $\ell$-length random walk from $v$ encounters an edge of $C$ is at most $\gamma' \ell \rho^{1/8}$. \end{claim} \begin{proof} The proof is a Markov bound argument. Suppose not; so there exist strictly more than $\rho^{1/8}n$ vertices $v$ such that an $\ell$-length random walk from $v$ encounters an edge of $C$ with probability at least $\gamma' \ell \rho^{1/8}$. 
Consider an $\ell$-length random walk that starts from the uniform (also stationary) distribution. The above assumption implies that the expected number of $C$ edges encountered is $> \rho^{1/8} \cdot \gamma' \ell \rho^{1/8} = \gamma' \ell \rho^{1/4}$. On the other hand, since the walk remains in the stationary distribution, for all $t \leq \ell$, the probability of encountering an edge in $C$ at the $t$th step is precisely $|C|/2dn$. (Recall that, at stationarity, the lazy random walk has probability $1/2dn$ of taking any given edge.) By linearity of expectation, the expected number of $C$ edges encountered is $\ell |C|/2dn$. By the bound of \Thm{ast}, $\ell |C|/2dn \leq \gamma' \ell \rho^{1/4}$, contradicting the bound obtained from the assumption. \end{proof} Consider such a vertex $v$, as promised by \Clm{trapped}. Let $S$ be the connected component containing $v$ after removing the edge cut $C$. Let $q_t$ be the probability that the walk from $v$ leaves $S$ at the $t$th step; by \Clm{trapped}, $\sum_{t \leq \ell} q_t \leq \gamma' \ell \rho^{1/8}$. Let $M_S$ be the transition matrix of the random walk $M$ \emph{restricted to $S$}. Note that $M_S$ is not necessarily stochastic. We will use the truncated walk $\widehat{M}_S$. Observe that $\|\widehat{M}^t \vec{v}\|_1 \geq \|\widehat{M}^t_S \vec{v}\|_1$. Since all coordinates of $\widehat{M}^t \vec{v}$ are at most those of $M^t \vec{v}$, by \Fact{norm}, $\|M^t \vec{v} - \widehat{M}^t \vec{v}\|_1 = \|M^t \vec{v}\|_1 - \|\widehat{M}^t \vec{v}\|_1$. Since $\|M^t \vec{v}\|_1 = 1 = \|\vec{v}\|_1$ and $\|\widehat{M}^t \vec{v}\|_1 \geq \|\widehat{M}^t_S \vec{v}\|_1$, we can upper bound as follows by a telescoping sum. \begin{eqnarray} \|M^t \vec{v} - \widehat{M}^t \vec{v}\|_1 & \leq & \sum_{l = 1}^t \Big(\|\widehat{M}^{l-1}_S\vec{v}\|_1 - \|\widehat{M}^{l}_S \vec{v}\|_1\Big) \\ & = & \sum_{l=1}^t \Big(\|\widehat{M}^{l-1}_S\vec{v}\|_1 - \|M_S\widehat{M}^{l-1}_S\vec{v}\|_1 + \|M_S\widehat{M}^{l-1}_S\vec{v}\|_1 - \|\widehat{M}^{l}_S \vec{v}\|_1\Big)\label{eq:norm-diff} \end{eqnarray} The quantity $\|\widehat{M}^{l-1}_S\vec{v}\|_1 - \|M_S\widehat{M}^{l-1}_S\vec{v}\|_1$ is exactly the probability that a single step (according to $M$) from $\widehat{M}^{l-1}_S\vec{v}$ leaves $S$. Since all coordinates in $\widehat{M}^{l-1}_S\vec{v}$ are at most those of $M^{l-1}\vec{v}$, this probability is at most $q_l$. The quantity $\|M_S\widehat{M}^{l-1}_S\vec{v}\|_1 - \|\widehat{M}^{l}_S \vec{v}\|_1$ is the probability mass lost by the truncation of $M_S\widehat{M}^{l-1}_S\vec{v}$. We apply the trivial bound $\rho|S|$. This is where the hyperfiniteness plays a role; since $|S| \leq 2/\sqrt{\rho}$, $\|M_S\widehat{M}^{l-1}_S\vec{v} - \widehat{M}^{l}_S \vec{v}\|_1 \leq \rho \cdot 2/\sqrt{\rho} = 2\sqrt{\rho}$. We sum all these bounds over $l \leq t$, and plug into \Eqn{norm-diff}. We bound $\|M^t \vec{v} - \widehat{M}^t \vec{v}\|_1 \leq \sum_{l \leq t} q_l + 2t \sqrt{\rho}$. By the properties of $v$, this is at most $\gamma'\ell \rho^{1/8} + 2\ell\sqrt{\rho} \leq \ell\rho^{1/9}$ (for sufficiently small $\rho$). \end{proof} We are now ready to prove \Thm{goodseed}. We will need the following simple ``reverse Markov" inequality for bounded random variables. \begin{fact} \label{fact:markov} Let $X$ be a random variable taking values in $[0,1]$ such that $\hbox{\bf E}[X] \geq \delta$. Then $\Pr[X \geq \delta/2] \geq \delta/2$. \end{fact} \begin{proof} Let $p := \Pr[X \geq \delta/2]$. 
\begin{eqnarray*} \delta \leq \hbox{\bf E}[X] & = & \Pr[X \geq \delta/2] \hbox{\bf E}[X | X \geq \delta/2] + \Pr[X < \delta/2] \hbox{\bf E}[X | X < \delta/2] \\ & \leq & p + (1-p)(\delta/2) \leq p + \delta/2 \end{eqnarray*} Rearranging yields $p \geq \delta/2$. \end{proof} \begin{proof} (of \Thm{goodseed}) Define $\theta_{s,t}$ as follows. For $s \in F$ and $t \in [\ell]$: if $t$ is odd, $\theta_{s,t} = 0$. If $t$ is even, then $\theta_{s,t}$ is the probability that the $t$-length random walk starting from $s$ ends in $F$. Let us pick a uar source vertex $s \in F$ and a uar length $t \in [\ell]$. We use the fact that $M$ is a symmetric matrix. We use ${\bf 1}_F$ to denote the all $1$s vector on $F$. \begin{eqnarray} \hbox{\bf E}_{s,t}[\theta_{s,t}]= {\bf 1}^T_F \sum_{i=1}^{\ell/2} (M^{2i}/\ell) ({\bf 1}_F/|F|) = (\ell|F|)^{-1} \sum_{i \leq \ell/2} {\bf 1}^T_F M^{2i} {\bf 1}_F = (\ell|F|)^{-1} \sum_{i \leq \ell/2} \|M^i {\bf 1}_F\|^2_2 \label{eq:tau} \end{eqnarray} Note that $\|M^i{\bf 1}_F\|_1 = |F|$, so by Jensen's inequality, $ \|M^i {\bf 1}_F\|^2_2 \geq |F|^2/n$. Plugging into \Eqn{tau}, $\hbox{\bf E}_{s,t}[\theta_{s,t}] \geq (\ell|F|)^{-1} \times (\ell/2)|F|^2/n = |F|/2n \geq \beta/2$. For any $s$, $\hbox{\bf E}_t[\theta_{s,t}] \leq 1$. By \Fact{markov}, there are at least $\beta|F|/4$ vertices $s \in F$ such that $\hbox{\bf E}_t[\theta_{s,t}] \geq \beta/4$. Again applying \Fact{markov}, for at least $\beta|F|/4$ vertices $s \in F$, there are at least $\beta \ell/8$ timesteps $t \in [\ell]$ such that $\theta_{s,t} \geq \beta/8$, implying that $M^t\vec{s}(F) \geq \beta/8$. By \Lem{walk}, there are at least $(1-\rho^{1/8})n$ vertices $s$ such that for all $t \leq \ell$, $\| M^t \vec{s} - \widehat{M}^t \vec{s}\|_1 \leq \ell \rho^{1/9} = d^{6 - 60/9}\varepsilon^{-\ellexp + \rhoexp/9} \leq \beta/16$. By the parameter settings, $\rho^{1/8} n < \varepsilon^{\rhoexp/8} n \leq \beta|F|/8$. Invoking the bound from the previous paragraph, there are at least $\beta|F|/8$ vertices $s \in F$ satisfying both the property of \Lem{walk} and the condition at the end of the previous paragraph. For all such vertices $s$, for all $t \leq \ell$, $\widehat{M}^t\vec{s}(F) \geq M^t\vec{s}(F) - \beta/16$. Thus, for all such $s$, there are at least $\beta\ell/8$ timesteps $t \in [\ell]$ such that $\widehat{M}^t\vec{s}(F) \geq \beta/16$. \end{proof} \section{The proof of \Thm{findr}: local partitioning within $F$} \label{sec:ls-cluster} We repeat the parameter values for convenience. \parameterinfo Recall that \Thm{restrict-cut} asserts that there are many $s \in F$ from which (level sets of) diffusions in $G$ discover low conductance cuts in $F$. We use the Lov\'{a}sz-Simonovits curve to represent the truncated diffusion vector, and keep track of the vertices of $F$ with respect to the curve. This is done via a careful adaptation of the Lov\'{a}sz-Simonovits method, as presented in \Lem{flatten}. The main technical tool which we will use in our analysis is the Lov\'{a}sz-Simonovits method, introduced in~\cite{LS:90}, whose use for clustering was pioneered by~\cite{ST12}. \begin{definition} For a non-negative vector $\mathbf{p}$ over $V$, the function $I: \mathbb R^n \times [0,n] \rightarrow [0, 1]$ is defined as $$ I(\mathbf{p}, x) = \max_{\substack{\mathbf{w} \in [0, 1]^n \\ \sum \mathbf{w}(u) = x }}\sum_{u \in V} \mathbf{p}(u)\mathbf{w}(u)$$ This is equivalent to summing the $x$ heaviest entries of $\mathbf{p}$ when $x$ is an integer, and linearly interpolating between these values otherwise.
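For instance, for the purely illustrative vector $\mathbf{p} = (0.5, 0.3, 0.2)$ (already sorted in decreasing order), we have $I(\mathbf{p},1) = 0.5$, $I(\mathbf{p},2) = 0.8$, $I(\mathbf{p},3) = 1$, and, by interpolation, $I(\mathbf{p},1.5) = 0.65$.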
For notational convenience, we define: $$I_{s, t}(x) = I(\widehat{M}^t \vec{s}, x)\textrm{.}$$ \end{definition} Note that $I_{s,t}$ is a concave curve. \subsection{The Lov\'{a}sz-Simonovits lemma} \label{sec:ls} The fundamental lemma of Lov\'{a}sz-Simonovits is the following (Lemma 1.4 of~\cite{LS:90}, also refer to Theorem 7.3.3 of Lecture 7 of~\cite{Sp-notes}). \begin{lemma} \label{lem:ls} Let $\overline{x} = \min(x,n-x)$. Consider any non-negative vector $\vec{p}$, and let $S_x$ denote the level set of $M\vec{p}$ with $x$ vertices. $$ I(M\vec{p}, x) \leq (1/2)(I(\vec{p},x-2\overline{x}\Phi(S_x)) + I(\vec{p},x+2\overline{x}\Phi(S_x))) $$ \end{lemma} The concavity of the curves implies monotonicity: $I(M\vec{p},x) \leq I(\vec{p},x)$ for all $x$. The application of this lemma to our setting leads to the following statement. \begin{lemma} \label{lem:lstrun} For all $t \leq \ell$ and $x \leq 1/\rho$, $$ \ls{s}{t}(x) \leq (1/2)(\ls{s}{t-1}(x(1-\Phi(\level{s}{t}{x}))) + \ls{s}{t-1}(x(1+\Phi(\level{s}{t}{x}))))$$ \end{lemma} Let $\lin{t}{w}{y}$ be the straight line between the points $(w,\ls{s}{t}(w))$ and $(y,\ls{s}{t}(y))$. \begin{lemma} \label{lem:flatten} Let $t_0 < t_1 < \ldots < t_h$ be time steps. Suppose $\forall i \leq h$ and $x \in [w,y]$: $\level{s}{t_i}{x} \subseteq \supp(\widehat{M}^{t_i} \vec{s})$ $\Longrightarrow$ $\Phi(\level{s}{t_i}{x}) \geq \psi$. Then, $\forall i \leq h, \forall x \in [w,y]$ $$ \ls{s}{t_i}(x) \leq \lin{t_0-1}{w}{y}(x) + \sqrt{\min(x-w, y-x)} (1-\psi^2/128)^i $$ \end{lemma} \begin{proof} For convenience, let $\Delta_x = \min(x-w, y-x)$. We prove the statement by induction over $i$. For the base case, take $i=0$ and consider the following cases. \begin{asparaitem} \item Suppose $x = w$ or $x = y$. By monotonicity, $\ls{s}{t_0}(x) \leq \ls{s}{t_0-1}(x)$. Since $x \in \{w,y\}$, the latter is exactly $\lin{t_0-1}{w}{y}(x)$. \item Suppose $x \in [w+1, y-1]$. Then $\Delta_x \geq 1$ and $\ls{s}{t_0}(x) \leq 1 \leq \sqrt{\Delta_x}$. \item Suppose $x \in (w,w+1)$. Note that $\Delta_x = x-w < 1$. By the definition of the LS curve, $\ls{s}{t_0}(x) = \ls{s}{t_0}(w) + (x-w)(\ls{s}{t_0}(w+1) - \ls{s}{t_0}(w)) $ $\leq \ls{s}{t_0-1}(w) + \sqrt{x-w}$ $\leq \lin{t_0-1}{w}{y}(x) + \sqrt{\Delta_x}$. \item Suppose $x \in (y-1,y)$. An identical argument to the above holds. \end{asparaitem} Now for the induction. Suppose the premise holds at step $t_i$. Namely, for $x \in [w,y]$, for all level sets $\level{s}{t_i}{x}$ contained inside $\supp(\widehat{M}^{t_i} \vec{s})$, $\Phi(\level{s}{t_i}{x}) \geq \psi$. We would like to upper bound $\ls{s}{t_i}(x)$. To this end, let us consider some $x \in [w,y]$. By \Lem{lstrun}, \begin{eqnarray} \ls{s}{t_i}(x) & \leq & (1/2)[\ls{s}{{t_i-1}}(x(1-\Phi(\level{s}{t_i}{x}))) + \ls{s}{{t_i-1}}(x(1+\Phi(\level{s}{t_i}{x})))] \\ & \leq & (1/2)[\ls{s}{{t_{i-1}}}(x(1-\Phi(\level{s}{t_i}{x}))) + \ls{s}{{t_{i-1}}}(x(1+\Phi(\level{s}{t_i}{x})))] \end{eqnarray} \noindent The second inequality follows by monotonicity, since $t_{i-1} \leq t_i - 1$. Note that $\Delta_x = \min(x-w, y-x) \leq x$ for all $x \in [w,y]$. \Clm{step-by-step-drop} (which we prove after the current lemma) shows the following. \begin{claim}\label{clm:step-by-step-drop} For all $1 \leq i \leq h$, for all $x \in [w,y]$, the following holds \begin{eqnarray} \ls{s}{t_i}(x) & \leq & (1/2)[\ls{s}{{t_{i-1}}}(x-\Delta_x\psi/4) + \ls{s}{{t_{i-1}}}(x+\Delta_x \psi/4)] \label{eq:ls-recur} \end{eqnarray} \end{claim} \noindent Now, let $x_L = x - \Delta_x \psi/4$ and $x_R = x + \Delta_x \psi/4$.
Using \Clm{step-by-step-drop} we get \begin{eqnarray} \ls{s}{t_i}(x) & \leq & (1/2)[\lin{t_0-1}{w}{y}(x_L) + \sqrt{\Delta_{x_L}} (1-\psi^2/128)^{i-1} \nonumber \\ & & + \lin{t_0-1}{w}{y}(x_R) + \sqrt{\Delta_{x_R}}(1-\psi^2/128)^{i-1}] \\ & = & (1/2)[\lin{t_0-1}{w}{y}(x_L) + \lin{t_0-1}{w}{y}(x_R)] \nonumber\\ & & + (1/2)[\sqrt{\Delta_{x_L}} (1-\psi^2/128)^{i-1} + \sqrt{\Delta_{x_R}}(1-\psi^2/128)^{i-1}] \label{eq:ls} \end{eqnarray} \noindent Here, \Eqn{ls} follows from the induction hypothesis. Since $\lin{t_0-1}{w}{y}$ is a linear function, the first term is exactly $\lin{t_0-1}{w}{y}(x)$. We analyze the second term. We first assume that $\Delta_{x} = x-w$ (instead of $y-x$). \begin{eqnarray} \Delta_{x_L} & = & \min(x-\psi\Delta_{x}/4 - w, y-x+\psi\Delta_{x}/4) \\ & = & \min((1-\psi/4)\Delta_{x}, y-x+\psi\Delta_{x}/4) \leq (1-\psi/4)\Delta_{x} \end{eqnarray} Analogously, \begin{eqnarray} \Delta_{x_R} &= & \min(x+\psi\Delta_{x}/4 - w, y-x-\psi\Delta_{x}/4) \\ & = & \min((1+\psi/4)\Delta_{x}, y-x-\psi\Delta_{x}/4) \leq (1+\psi/4)\Delta_{x} \end{eqnarray} Thus, the second term of \Eqn{ls} is at most $(1/2)(1-\psi^2/128)^{i-1}\sqrt{\Delta_{x}}(\sqrt{1-\psi/4} + \sqrt{1+\psi/4})$. Now, we consider $\Delta_{x} = y-x$. \begin{eqnarray} \Delta_{x_L} & = & \min(x-\psi\Delta_{x}/4 - w, y-x+\psi\Delta_{x}/4) \\ & = & \min(x-\psi\Delta_{x}/4 - w, (1+\psi/4)\Delta_{x}) \leq (1+\psi/4)\Delta_{x} \end{eqnarray} Analogously, \begin{eqnarray} \Delta_{x_R} &= & \min(x+\psi\Delta_{x}/4 - w, y-x-\psi\Delta_{x}/4) \\ & = & \min(x+\psi\Delta_{x}/4 - w, (1-\psi/4)\Delta_{x}) \leq (1-\psi/4)\Delta_{x} \end{eqnarray} In this case as well, the second term of \Eqn{ls} is at most $(1/2)(1-\psi^2/128)^{i-1}\sqrt{\Delta_{x}}(\sqrt{1-\psi/4} + \sqrt{1+\psi/4})$. In both cases, we can upper bound \Eqn{ls} as follows. (We use the inequality $\frac{\sqrt{1-z} + \sqrt{1+z}}{2} \leq 1-z^2/8$.) $$ \ls{s}{t_i}(x) \leq \lin{t_0-1}{w}{y}(x) + (1-\psi^2/128)^{i-1}\sqrt{\Delta_{x}}\frac{\sqrt{1-\psi/4} + \sqrt{1+\psi/4}}{2} \leq \lin{t_0-1}{w}{y}(x) + (1-\psi^2/128)^i\sqrt{\Delta_{x}}$$ \end{proof} Now, we establish \Clm{step-by-step-drop}, the missing piece in the above proof. \begin{proof} (of \Clm{step-by-step-drop}) Let $x_{max} \in [w,y]$ be the maximum value of $x \in [w,y]$ for which $\level{s}{t_i}{x}$ is still inside the support of the truncated diffusion at the $t_i$-th step. We split into three cases: $x \leq x_{max}$, $x \in (x_{max}, x_{max} + \Delta_{x_{max}}\psi/2]$, $x > x_{max} + \Delta_{x_{max}}\psi/2$. Note that in the latter two cases, $\level{s}{t_i}{x}$ is not contained in $\supp(\widehat{M}^{t_i}\vec{s})$. {\bf Case 1, $x \leq x_{max}$:} Here $\level{s}{t_i}{x} \subseteq \supp(\widehat{M}^{t_i} \vec{s})$, so this level set has conductance at least $\psi$, and \Eqn{ls-recur} follows from \Lem{lstrun}, monotonicity, and the concavity of the Lov\'{a}sz-Simonovits curve (using $\Delta_x \leq x$). {\bf Case 2, $x \in (x_{max}, x_{max} + \Delta_{x_{max}}\psi/2]$:} Let $S = \level{s}{t_i}{x_{max}}$ and let $T = \level{s}{t_i}{x}$. Observe that \begin{align} \Phi(T) = \frac{|E(T, \overline{T})|}{d|T|} \stackrel{({\bf 1})}\geq \frac{|E(S,\overline{S})| - \psi/2 \cdot d|S|}{d|S| + \psi/2 \cdot d|S|} \stackrel{({\bf 2})}\geq \frac{\psi d|S|/2 }{2d|S|} \geq \frac{\psi}{4} \end{align} Here, $({\bf 1})$ follows because $T \setminus S$ contains at most $\psi |S|/2 $ vertices, which can remove at most $\psi d|S|/2$ edges from the cut $(S, \overline{S})$ and increase the volume by at most $\psi d|S|/2$. $({\bf 2})$ follows since $|E(S,\overline{S})| \geq \psi d|S|$ (as $S$ lies inside the support and thus has conductance at least $\psi$) and by upper bounding $\psi$ in the denominator by $1$.
Again the claim in \Eqn{ls-recur} follows by concavity of the Lov\'{a}zs-Simonovits curve. \\ {\bf Case 3, $x > x_{max} + \Delta_{x_{max}}\psi/2$:} Now let $x_r = x_{max} + \Delta_{x_{max}} \psi/2$. Write $x = x_{max} + \Delta_{x_{max}} \psi/2 + s$. Recall $\Delta_x = \min(x-w, y-x)$. We claim that $x - \Delta_x \psi/4 \geq x_{max}$. First let us see how to establish \Eqn{ls-recur} assuming this claim holds. Assuming this claim, we have $$\ls{s}{t_i}(x - \Delta_x \psi/4) = \ls{s}{t_i}(x_{max}) = \ls{s}{t_i}(x + \Delta_x \psi/4) = \|\widehat{M}^{t_i} \vec{s}\|_1.$$ And therefore, \begin{align*} \ls{s}{t_i}(x) &= \frac{1}{2} \cdot \left[\ls{s}{t_i}(x - \Delta_x \psi/4) + \ls{s}{t_i}(x + \Delta_x \psi/4) \right] \\ &\leq \frac{1}{2} \cdot \left[\ls{s}{t_{i-1}}(x - \Delta_x \psi/4) + \ls{s}{t_{i-1}}(x + \Delta_x \psi/4)\right] \end{align*} Now, all that remains to establish \Eqn{ls-recur} is to show $x - \Delta_x \psi/4 \geq x_{max}$. For simplicity, write $\Delta_m = \Delta_{x_{max}}$. Now consider two cases depending on the value of $\Delta_m$ \begin{enumerate} \item {\bf Case 1} $\Delta_m = x_{max} - w$. In this case note that \begin{align*} x - \Delta_x \psi/4 &= x_{max} + \Delta_m \psi/2 + s - (x - w) \psi/4 \\ &\geq x_{max} + \Delta_m \psi/2 + s - (x_{max} + \Delta_m \psi/2 + s - w) \psi/4 \\ &\geq x_{max} + \Delta_m \psi/4 - \Delta_m \psi^2/8 + s - s \psi/4 \\ &\geq x_{max} + \Delta_m \psi/8 + s(1 - \psi/4) \geq x_{max} \end{align*} which establishes the claim above as desired. \item {\bf Case 2} $\Delta_m = y - x_{max}$. In this case note that \begin{align*} x - \Delta_x \psi/4 &= x_{max} + \Delta_m \psi/2 + s - (y - x) \psi/4 \\ &\geq x_{max} + \Delta_m \psi/2 + s - (y - x_{max} - \Delta_m \psi/2 - s) \psi/4 \\ &\geq x_{max} + \Delta_m \psi/4 + \Delta_m \psi^2/8 + s + s \psi/4 \\ &\geq x_{max} \end{align*} \end{enumerate} Thus, in both cases, the claim from above holds. This means that \Eqn{ls-recur} holds as long as the premise holds for the $t_i$-th step. \end{proof} \subsection{From leaking timesteps to the dropping of the LS curve} \langlebel{sec:leak} We fix a source vertex $s$, and consider the evolution of $\widehat{M}^t\vec{s}$. Therefore, we drop the dependence of $s$ from much of the notation. We use $\prw{t}$ to denote $\widehat{M}^t\vec{s}$. We begin with a few definitions. \begin{definition} \langlebel{def:leaking} A timestep $t$ is called \emph{leaking for source $s$} if, for all $k \leq \rho^{-1}$: if $\level{s}{t}{k} \subseteq \supp(\widehat{M}^{t}\vec{s})$ and $|\level{s}{t}{k} \cap F| \geq \alpha^2 k/400$, then $\Phi(\level{s}{t}{k}) \geq 1/d \ell^{1/3}$. If timestep $t$ is not leaking for $s$, there exists $k \leq \rho^{-1}$ such that $\level{s}{t}{k} \subseteq \supp(\widehat{M}^{t}\vec{s})$, $|\level{s}{t}{k} \cap F| \geq \alpha^2 k/400$, and $\phi(\level{s}{t}{k}) < 1/d \ell^{1/3}$. Such a $k$ is denoted as an \emph{$(s,t)$-certificate of non-leakiness}. \end{definition} We set $\alpha = \hbox{\bf var}epsilon^{4/3}/300,000$. Following the construction of the LS curve $\ls{s}{t}$, we will order each vector $\prw{t}$ in decreasing order, breaking ties by id. The \emph{rank} of a vertex is its position in (the sorted version of) $\prw{t}$. \begin{definition} \langlebel{def:bucket} Let the \emph{bucket} $\boldsymbol{u}cket{t}{r}$ denote the set of vertices whose rank in $\prw{t}$ is in the range $[2^r, 2^{r+1})$. A bucket $\boldsymbol{u}cket{t}{r}$ is called \emph{heavy} if $\sum_{v \in \boldsymbol{u}cket{t}{r} \cap F} \prw{t}(v) \geq \alpha$. 
(The bucket restricted to $F$ has large probability.) \end{definition} The following lemma says that if there are many leaking timesteps, then the LS curve drops at heavy buckets. \begin{lemma} \langlebel{lem:drop} Fix $r \geq 0$. Suppose for some $s \in F$, there exist $\ell' \geq \beta^3 \ell/8$ leaking timesteps $t_0 < t_1 < \ldots < t_{\ell'}$ such that for all $0 \leq i \leq \ell'$, $\boldsymbol{u}cket{t_i}{r}$ is heavy. Then, $\ls{s}{t_{\ell'}}(2^{r+1}) < \ls{s}{t_0}(2^{r+1}) - \alpha/4$. \end{lemma} The main tool used in our proof is our adaptation of Lov\'{a}sz-Simonovits lemma done in \Lem{flatten}. We first make a definition. \begin{definition} \langlebel{def:balanced:split} Fix $r \geq 0$, a source $s$ and a timestep $t$. A vertex $w \in [2^r, 2^{r+1}]$ is called a balanced split for $t$ if $|\lev{t}{w} \cap F| \geq \alpha 2^r/3$ and $\sum_{v \in \boldsymbol{u}cket{t}{r} \setminus \lev{t}{w}} \prw{t}(v) \geq \alpha/3$. \end{definition} We will first prove the following claim which essentially follows by averaging arguments. \begin{claim} \langlebel{clm:split} Fix $r \geq 0$ and suppose for some source vertex $s \in F$, there exist $\ell'$ leaking timesteps $t_0 < t_1 < \ldots < t_{\ell'}$ such that for all $0 \leq i \leq \ell'$, $\boldsymbol{u}cket{t_i}{r}$ is heavy. Then, there exists a vertex $w$ that is a balanced split for at least an $\alpha/3$-fraction of timesteps in $T = \{t_0, t_1, \ldots t_{\ell'} \}$. \end{claim} \begin{proof} Since $\boldsymbol{u}cket{t_0}{r}$ is heavy, $\ls{s}{t_0}(2^r) < 1$. Since the support of $\prw{t}$ is at most $\rho^{-1}$, this implies that $2^r < \rho^{-1}$ and $r \leq -\lg \rho$ (and this holds by the choice of parameters). For all $v \in \boldsymbol{u}cket{t}{r}$, $\prw{t}(v) \leq 1/2^r$. Since $\sum_{v \in \boldsymbol{u}cket{t}{r} \cap F} \prw{t}(v) \geq \alpha$, $|\boldsymbol{u}cket{t}{r} \cap F| \geq \alpha 2^r$. For convenince, let $T = \{t_0, t_1, \ldots t_{\ell'}\}$. Pick $w$ uar in $[2^r,2^{r+1})$. Let $X_i$ be the indicator for $w$ being a balanced split for $t_i$. Recall that $|\boldsymbol{u}cket{t_i}{r} \cap F| \geq \alpha 2^r$. Sort the vertices of $\boldsymbol{u}cket{t_i}{r} \cap F$ by increasing rank and consider the vertices in positions $\alpha 2^r/3$ and $2\alpha 2^r/3]$. Let the rank corresponding to these vertices by $u_1$ and $u_2$. We first argue that any rank $w \in [u_1, u_2]$ is a balanced split. We have $|\lev{t}{w} \cap F| \geq \alpha 2^r/3$ because $w \geq u_1$. For all $v \in \boldsymbol{u}cket{t_i}{r}$, $\prw{t_i}(v) \leq 1/2^r$. Thus, $\sum_{v \in \lev{t_i}{u_2} \cap \boldsymbol{u}cket{t_i}{r}} \prw{t_i}(v) \leq (1/2^r)(2\alpha 2^r/3) = 2\alpha/3$. Note that $\sum_{v \in \boldsymbol{u}cket{t_i}{r}} \prw{t}(v) \geq \alpha$, since the bucket is heavy Hence, for any $w \leq u_2$, $\sum_{v \in \boldsymbol{u}cket{t}{r} \setminus \lev{t}{w}} \prw{t}(v) \geq \alpha - 2\alpha/3 = \alpha/3$. As a consequence, for any $t_i$, there are at least $\alpha 2^r/3$ values of $w$ that are balanced splits. In other words, $\hbox{\bf E}[X_i] \geq \alpha/3$. By linearity of expectation, $\hbox{\bf E}[\sum_{i \leq \ell'}X_i] \geq \alpha {\ell'}/3$. Thus, there must exist some $w \in [2^r, 2^{r+1})$ that is a balanced split for at least $\alpha \ell'/3$ timesteps. \end{proof} Next, we show the following claim which essentially uses leakiness of a timestep $t \in T$ and the balanced split vertex $w$ promised by \Clm{split} to spell out a set with enough free vertices with large conductance. 
\begin{claim}\langlebel{clm:large:conductance} Fix $r \geq 0$ and let $w \in [2^r, 2^{r+1})$ be a split vertex as promised by \Clm{split} and let $t_{i_1} < t_{i_2} < \ldots < t_{i_{\alpha \ell'/3}}$ denote the timesteps for which $w$ is a balanced split. Let $y = \min(2^{r+6+\lceil \lg(1/\alpha)\rceil}, \rho^{-1})$. Then, for all $x \in [w,y]$ and for all $t \in \{t_{i_1}, t_{i_2}, \cdots, t_{i_{\alpha \ell'/3}} \}$, whenever $\lev{t}{x} \subseteq \supp(\widehat{M}^t \vec{s})$, then $\Phi(\lev{t}{x}) \geq 1/d \ell^{1/3}$. \end{claim} \begin{proof} Take $x \in [w,y]$ and a leaking timestep $t \in \{t_{i_1}, t_{i_2}, \cdots, t_{i_{\alpha \ell'/3}} \}.$ Note that $x \leq y \leq \rho^{-1}$ clearly holds. Now, to establish the lower bound on conductance claimed, we first unpack what it means for $t$ to be a leaking timestep \Def{leaking}. It says: If $\lev{t}{x} \subseteq \supp(\widehat{M}^t \vec{s})$ and $|\lev{t}{x} \cap F| \geq \alpha^2 k/400$, then it better hold that $\phi(\lev{t}{x}) \geq 1/d \ell^{1/3}$. Note that $y \leq 2^{r+6+\lceil \lg(1/\alpha)\rceil} \in [2^r (64/\alpha), 2^{r+1}(64/\alpha)]$. Since $r \leq -\lg \rho$, $y \leq 128(\rho\alpha)^{-1}$. Note that for all $t \in \{t_{i_1}, t_{i_2}, \cdots, t_{i_{\alpha \ell'/3}} \}$ and $x \in [w,y]$, $\lev{t}{x}$ contains at least $\alpha 2^r/3$ vertices of $F$. Thus, at least a $(\alpha 2^r/3)/(2^{r+1} \cdot 64/\alpha) \geq \alpha^2/400$-fraction of $\lev{t}{x}$ is in $F$. Now note that since $t$ is leaking, we see that one of the following will hold. Either \begin{asparaitem} \item $\lev{t}{x} \subseteq \supp(\widehat{M}^t \vec{s})$ and $\Phi(\lev{t}{x}) \geq 1/d \ell^{1/3}$, Or \item $\lev{t}{x} \not\subseteq \supp(\widehat{M}^t \vec{s})$. \end{asparaitem} And this establishes the claim. \end{proof} Now, we have all the ingredients to prove \Lem{drop}. The key step which remains is an application of \Lem{flatten}. \begin{proof} (Of \Lem{drop}) Suppose $w \in [2^r, 2^{r+1})$ is a balanced split at $\alpha \ell'/3$ timesteps as promised by \Clm{split}. Let $y = \min(2^{r+6+\lceil \lg(1/\alpha) \rceil}, \rho^{-1})$ and as observed in \Clm{large:conductance}, note that for $x \in [w,y]$ if $\lev{t}{x} \subseteq \supp(\widehat{M}^t \vec{s})$, it holds that $\phi(\lev{t}{x}) \geq 1/d \ell^{1/3}$. Now, we apply \Lem{flatten}. For all $x \in [w,y]$, we have $\ls{s}{t_{\ell'}}(x) \leq \ls{s}{t_{i_{\alpha \ell'/3}}}(x) \leq \lin{t_{i_1-1}}{w}{y}(x) + \sqrt{x} (1-1/128 d^2\ell^{2/3})^{\alpha \ell'/3}$. By the premise, $\ell' \geq \beta^3 \ell/8$ and therefore we have $$(1 - 1/128 d^2 \ell^{2/3})^{\alpha \ell'/3} \leq (1 - 1/128 d^2\ell^{2/3})^{\alpha \beta^3 \ell/3} = \ (1 - 1/128 d^2 \ell^{2/3})^{128 d^2\ell^{2/3} \cdot \frac{\alpha \beta^3 \ell^{1/3}}{3 \cdot 128d^2}} \leq \exp(-1/\alpha)$$ which holds because, for sufficiently small $\hbox{\bf var}epsilon > 0$, we have $$\ell^{1/3} = \frac{d^2}{\hbox{\bf var}epsilon^{10}} \geq \frac{d^2 \cdot 10^{20}}{\hbox{\bf var}epsilon^7} \geq \frac{d^2}{\alpha^3 \beta^3}.$$ Further, by the monotonicity of LS curves, $\ls{s}{t_{\ell'}}(x) \leq \lin{t_{i_1-1}}{w}{y}(x) + \exp(-1/\alpha)$ $\leq \lin{t_{i_0}}{w}{y}(x) + \exp(-1/\alpha)$. Specifically, we get \begin{equation} \langlebel{eq:lhs-lem-drop} \ls{s}{t_{\ell'}}(2^{r+1}) \leq \lin{t_{i_0}}{w}{y}(2^{r+1}) + \exp(-1/\alpha). \end{equation} Since $w$ is a good split, $\ls{s}{t_{i_0}}(2^{r+1}) \geq \ls{s}{t_{i_0}}(w) + \alpha/3$. 
Note that \begin{eqnarray} \lin{t_{i_0}}{w}{y}(2^{r+1}) & = & \ls{s}{t_{i_0}}(w) + (2^{r+1}-w)\left(\frac{\ls{s}{t_{i_0}}(y) - \ls{s}{t_{i_0}}(w)}{y-w}\right) \nonumber \\ & \leq & \ls{s}{t_{i_0}}(w) + 2^{r+1}/(y/2) \\ &\leq& \ls{s}{t_{i_0}}(w) + 2^{r+1} \times \left(\frac{2 \alpha}{2^r \cdot 64} \right) = \ls{s}{t_{i_0}}(w) + \alpha/16 \end{eqnarray} The first inequality above follows by upper bounding $\ls{s}{t_{i_0}}(y) - \ls{s}{t_{i_0}}(w)$ by $1$, dropping the negative term and noting that $y-w \geq y/2$ for a sufficiently small $\alpha$. Together with \Eqn{lhs-lem-drop}, we get \begin{eqnarray} \ls{s}{t_{\ell'}}(2^{r+1}) \leq \lin{t_{i_0}}{w}{y}(2^{r+1}) + \exp(-1/\alpha) & \leq & \ls{s}{t_{i_0}}(w) + \alpha/16 + \exp(-1/\alpha) \nonumber \\ &\leq & \ls{s}{t_{i_0}}(2^{r+1}) - \alpha/3 +\alpha/16 + \exp(-1/\alpha) \end{eqnarray} By monotonicity of the LS curve, $\ls{s}{t_{\ell'}}(2^{r+1}) < \ls{s}{t_0}(2^{r+1}) - \alpha/4$. \end{proof} Now, we state a key lemma. It says that a fixed bucket (parameterized by $r$) satisfies the following at most timesteps: (i) either it does not contain enough free vertices, or (ii) if it contains many free vertices at a particular timestep, then most of the corresponding timesteps are not leaky. \begin{lemma} \langlebel{lem:heavy} Fix $r \geq 0$ and take any $s \in F$. There are at most $\beta^3 \ell/\alpha$ leaking timesteps $t$ (with respect to $s$) where $\boldsymbol{u}cket{t}{r}$ is heavy. \end{lemma} \begin{proof} We prove by contradiction. Suppose there are more than $\beta^3 \ell/\alpha$ leaking timesteps $t$ where $\boldsymbol{u}cket{t}{r}$ is heavy. We break these up into $4/\alpha$ contiguous blocks of $\beta^3 \ell / 4$ leaking timesteps. By \Lem{drop}, after every such block of timesteps, $\ls{s}{t}(2^{r+1})$ reduces by more than $\alpha/4$. Note that $\ls{s}{0}(2^{r+1}) \leq 1$, and thus, after $4/\alpha$ blocks, $\ls{s}{t}(2^{r+1})$ becomes negative. Contradiction to the non-negativity of $\ls{s}{t}(2^{r+1})$. \end{proof} \subsection{Proof of \Thm{restrict-cut}} \langlebel{sec:relevant} We finally prove \Thm{restrict-cut}. In particular, recall that this theorem claims that for an arbitrary set $F \subseteq V$ with $|F| \geq \beta n$, there exists a size threshold $k$ such that one can find enough source vertices $s \in F$ such that $\ell$-step diffusions from $s$ contain enough non-leaky timesteps. Moreover, these non-leaky timesteps can be used to obtain a low conductance cut restricted to $F$. We begin by showing that indeed many sources $s \in F$ have the desired behavior. \begin{lemma} \langlebel{lem:relevant} There are at least $\beta^2n/8$ vertices $s \in F$, such that: there are at least $\beta\ell/16$ timesteps $t$ in $[\ell]$ that are not leaking for $s$. \end{lemma} \begin{proof} We fix any vertex $s$ satisfying the conditions of \Thm{goodseed}. Let us recall what this means. This means that for at least $\beta \ell/8$ timesteps $t$, it holds that $\widehat{M}^t \vec{s}(F) \geq \beta/16$. We will show that conclusion in \Lem{relevant} above holds for $s$ which will establish the lemma. We prove by contradiction. To this end, let us suppose for any vertex $s$ satisfying the conditions of \Thm{goodseed}, there are at most $\beta\ell/16$ non-leaky timesteps. There are at least $\beta\ell/8-\beta\ell/16 = \beta\ell/16$ timesteps $t$ that are leaking for $s$, such that $\widehat{M}^t\vec{s}(F) \geq \beta/16$. Fix any such timestep $t$ and consider the buckets $\boldsymbol{u}cket{t}{r}$. 
There are at most $-\lg \rho$ buckets with non-zero probability mass, and by averaging, there exists $r \leq -\lg\rho$ such that $$\sum_{v \in F \cap \boldsymbol{u}cket{t}{r}} \prw{t}(v) \geq \beta/(-16\lg \rho) = \frac{\hbox{\bf var}epsilon}{160 \cdot 3000 \lg(1/\hbox{\bf var}epsilon)} \geq \frac{\hbox{\bf var}epsilon^{4/3}}{300,000} = \alpha$$ where the last step holds for sufficiently small $\hbox{\bf var}epsilon$ and therefore, $\boldsymbol{u}cket{t}{r}$ is heavy. Thus, for each of the $\beta\ell/16$ leaking timesteps $t$ above, there exists some $r \leq -\lg\rho$ such that $\boldsymbol{u}cket{t}{r}$ is heavy. By averaging, there exists some $r \leq -\lg\rho$ such that for $\beta\ell/(-16\lg \rho)$ leaking timesteps $t$, $\boldsymbol{u}cket{t}{r}$ is heavy. However, for sufficiently small $\hbox{\bf var}epsilon$ ($\hbox{\bf var}epsilon < 2^{-30}$), we have $$\frac{\beta\ell}{-16 \lg \rho} = \frac{\hbox{\bf var}epsilon \cdot \ell}{160 \cdot 3000 \log(1/\hbox{\bf var}epsilon)} \geq 1000 \hbox{\bf var}epsilon^{3-4/3} \ell \geq \frac{\beta^3 \ell}{\alpha}$$ which contradicts \Lem{heavy}. \end{proof} \begin{lemma} \langlebel{lem:goodr} Let $|F| \geq \beta n$. There exists a $r \leq \lg(1/\rho)$ such that for $\geq \beta^2 n/(8 \lg^2(\rho^{-1}))$ vertices $s \in F$, the following holds. For at least $\beta \ell/(\lg^2(\rho^{-1}))$ timesteps $t$, there exists $k \in [2^r, 2^{r+1}]$ that is an $(s,t)$-certificate of non-leakiness. \end{lemma} \begin{proof} This is an averaging argument. Apply \Lem{relevant}. For each of the $\beta^2n/8$ vertices $s \in F$, there are at least $\beta\ell/16$ timesteps $t$ that are not leaking for $s$. Thus, for every such $(s,t)$ pair, there exists $k_{s,t} \leq \rho^{-1}$ that is an $(s,t)$-certificate of non-leakiness. We basically bin the logarithm of the certificates. Thus, to every pair $(s,t)$ (of the above form), we associate $r_{s,t} = \lfloor \lg k_{s,t} \rfloor$. By averaging, for each relevant $s$, there is a value $r_s$ such that for at least $\beta\ell/(16\lg(\rho^{-1}))$ timesteps $t$, there is an $(s,t)$-certificate in $[2^{r_s}, 2^{r_s+1}]$. Again, by averaging there exists $r \leq \lg(\rho{-1})$ such that there are at least $\beta^2 n/(8 \lg(\rho^{-1})) \geq \beta^2 n/(\lg^2(\rho^{-1}))$ vertices $s \in F$ for which there exist at least $\beta\ell/(16 \lg(\rho^{-1})) \geq \beta\ell/\lg^2(\rho^{-1})$ timesteps $t$, such that there is an $(s,t)$-certificate for non-leakiness in $[2^r, 2^{r+1}]$. \end{proof} \Thm{restrict-cut} follows as a corollary of \Lem{goodr}. We now present the proof. \begin{proof} (Of \Thm{restrict-cut}) As seen from \Lem{goodr}, there exists some $r \leq -\lg(\rho)$ such that there are at least $\Omega(\beta^2/\lg(\beta^{-1})) \cdot n$ vertices $s \in F$ each of which in turn has $(s,t)$-certificates of non-leakiness for at least $\Omega(\beta/16 \lg^2(\beta^{-1})) \cdot \ell$ different values of $t$. We simply choose $k = 2^r$. Let $S \subseteq F$ denote the collection of these relevant sources. And for $s \in S$, define $$C_s = \{t \leq \ell : \text{ there exists a } (s,t)-\text{ certificate of non-leakiness} \}.$$ Take $s \in S$, $t \in C_s$. We will show that there exists $k' = k'(s,t) \in [k,2k]$ such that the level set $\level{s}{t}{k'}$ satisfies the following. \begin{asparaitem} \item $\level{s}{t}{k} \subseteq \supp(\widehat{M}^t\vec{s})$. \item $\phi(\level{s}{t}{k'} \cup \{s\}) \leq 1/ \ell^{1/3}$. \item $|\level{s}{t}{k'} \cap F| \geq \alpha^2 k'/400 \geq \beta^3 k$. 
\end{asparaitem} The first item above follows from the conclusion of \Lem{goodr}, \Def{leaking} and taking contrapositive in \Lem{flatten}. Unpacking, this means that since $t \in C_s$ is a non-leaking timestep for $s$, it follows that there exists $k' = k'(s,t) \in [k,2k]$ for which $\level{s}{t}{k'} \subseteq \supp(\widehat{M}^t\vec{s})$. The last item above holds for this choice of $k'$ from the conclusion of \Lem{goodr}. For item 2 above, again note that our choice of $k'$ and \Lem{goodr} imply that $$\phi(\level{s}{t}{k'}) \leq 1/d \ell^{1/3} = 1/d \cdot \frac{\hbox{\bf var}epsilon^{10}}{d^2} = \hbox{\bf var}epsilon^{10}/d^3 = \phi/d^3$$ and therefore $\phi(\level{s}{t}{k'} \cup \{s\}) \leq \phi$ also follows as by (possibly) including a single vertex in the set, the number of cut-edges can only increase by $d$. \end{proof} \section{Proofs of applications} \langlebel{sec:appl} The proofs here are quite straightforward and appear (in some form) in previous work. We sketch the proofs, and do not give out the specifics of the Chernoff bound calculations. Specifically, we mention Theorem 9.28 and its proof in ~\cite{G17-book}, which contains these calculations. \begin{proof} (of \Thm{testers}) Given input graph $G$, we set up the partition oracle with proximity parameter $\hbox{\bf var}epsilon/8$. Therefore, with probability at least $2/3$ over the random seed $\boldsymbol{R}$, the number of cut edges is at most $\hbox{\bf var}epsilon dn/8$. The tester repeats the following $O(1)$ times. For a random $\boldsymbol{R}$, we first estimate the number of edges cut by random sampling. The tester samples $\Theta(1/\hbox{\bf var}epsilon)$ uar vertices $u$, picks a uar neighbor $v$ of $u$, and calls the partition oracle on $u$ and $v$. If these lie in different components, the edge $(u,v)$ is cut. If more that an $\hbox{\bf var}epsilon/4$ fraction of edges are cut, then repeat with a new $\boldsymbol{R}$. Otherwise, we fix the seed $\boldsymbol{R}$ and proceed to the second phase of the tester. (If no such $\boldsymbol{R}$ is found, the tester rejects.) In the second phase, we sample a multiset $S \subseteq V$ of $O(\hbox{\bf var}epsilon^{-1})$ uar vertices, and query the subgraph induced by the component $C(v)$ (of the partition given by the oracle) that each $v \in S$ belongs to. For each ($\mathrm{poly}(\hbox{\bf var}epsilon^{-1})$-sized) component $C(v)$, we directly determine if it belongs to $\mathcal{Q}$. (If there is an efficient algorithm, we can run that algorithm.) If any of these components does not belong to $\mathcal{Q}$, the tester rejects, otherwise it accepts. Now, let us argue that this is a bonafide tester for $\mathcal{Q}$. Recall $\mathcal{Q}$ is both monotone and additive. Suppose $G \in \mathcal{Q}$. Since $\mathcal{Q}$ is a subproperty of a minor-closed property, the first phase of setting the partition oracle succeeds with high probability. Since $\mathcal{Q}$ is monotone and additive, all the subgraphs induced on the connected components $C(v)$ also satisfy $\mathcal{Q}$. So the tester accepts whp. Suppose $G$ is $\hbox{\bf var}epsilon$-far from $\mathcal{Q}$. If the first phase does not succeed, then the tester rejects. So assume that the first phase succeeds. Whp, by a Chernoff bound, the number of cut edges (of the partition) is at most $\hbox{\bf var}epsilon dn/2$. Since $\mathcal{Q}$ is monotone, the graph obtained by removing these cut edges is at least $\hbox{\bf var}epsilon/2$-far from $\mathcal{Q}$. 
Since $\mathcal{Q}$ is additive, at least $\Omega(\varepsilon n)$ vertices participate in connected components that are not in $\mathcal{Q}$. Hence, by a Chernoff bound, the second phase rejects whp. The query complexity incurs at most an $O(d\varepsilon^{-1})$ multiplicative overhead over the complexity of the partition oracle, which is $\mathrm{poly}(d\varepsilon^{-1})$. If $\mathcal{Q}$ can be decided in polynomial time, then the second phase also runs in $\mathrm{poly}(d\varepsilon^{-1})$ time. \end{proof} \begin{proof} (of \Thm{approx}) As with the previous proof, we set up the partition oracle, this time with proximity parameter $\varepsilon/c$, where $c$ is the largest amount by which a single edge addition/deletion changes $f$; thus the number of cut edges is at most $\varepsilon dn/c$. As before, there is a first phase to determine an appropriate setting of $\boldsymbol{R}$ for the partition oracle. We sample $\mathrm{poly}(d\varepsilon^{-1})$ uar vertices and determine the component that each vertex belongs to. For each component, we compute $f$ exactly. We take the sum of $f$-values, and rescale appropriately to get an additive $\varepsilon nd$ estimate for $f$. \end{proof} \end{document}
\begin{document} \newtheorem{prop}{Proposition}[section] \newtheorem{Def}{Definition}[section] \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{Cor}{Corollary}[section] \title[Yang-Mills and Yang-Mills-Higgs]{\bf Local well-posedness for the (n+1)-dimensional Yang-Mills and Yang-Mills-Higgs system in temporal gauge} \author[Hartmut Pecher]{ {\bf Hartmut Pecher}\\ Fakult\"at f\"ur Mathematik und Naturwissenschaften\\ Bergische Universit\"at Wuppertal\\ Gau{\ss}str. 20\\ 42119 Wuppertal\\ Germany\\ e-mail {\tt [email protected]}} \date{} \begin{abstract} The Yang-Mills and Yang-Mills-Higgs equations in temporal gauge are locally well-posed for small and rough initial data, which can be shown using the null structure of the critical bilinear terms. This carries over a similar result by Tao for the Yang-Mills equations in the (3+1)-dimensional case to the more general Yang-Mills-Higgs system and to general dimensions. \end{abstract} \maketitle \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \footnotetext{\hspace{-1.5em}{\it 2010 Mathematics Subject Classification:} 35Q40, 35L70 \\ {\it Key words and phrases:} Yang-Mills, Yang-Mills-Higgs, local well-posedness, temporal gauge} \normalsize \setcounter{section}{0} \section{Introduction and main results} \noindent Let $\mathcal{G}$ be the Lie goup $SO(n,\mathbb{R})$ (the group of orthogonal matrices of determinant 1) or $SU(n,\mathbb{C})$ (the group of unitary matrices of determinant 1) and $g$ its Lie algebra $so(n,\mathbb{R})$ (the algebra of trace-free skew symmetric matrices) or $su(n,\mathbb{C})$ (the algebra of trace-free skew hermitian matrices) with Lie bracket $[X,Y] = XY-YX$ (the matrix commutator). For given $A_{\alpha}: \mathbb{R}^{1+n} \rightarrow g $ we define the curvature by $$ F_{\alpha \beta} = \partial_{\alpha} A_{\beta} - \partial_{\beta} A_{\alpha} + [A_{\alpha},A_{\beta}] \, , $$ where $\alpha,\beta \in \{0,1,...,n\}$ and $D_{\alpha} = \partial_{\alpha} + [A_{\alpha}, \cdot \,]$ . Then the Yang-Mills system is given by \begin{equation} \label{1'} D^{\alpha} F_{\alpha \beta} = 0 \end{equation} in Minkowski space $\mathbb{R}^{1+n} = \mathbb{R}_t \times \mathbb{R}^n_x$ , where $n \ge 3$, with metric $diag(-1,1,...,1)$. Greek indices run over $\{0,1,...,n\}$, Latin indices over $\{1,...,n\}$, and the usual summation convention is used. We use the notation $\partial_{\mu} = \frac{\partial}{\partial x_{\mu}}$, where we write $(x^0,x^1,...,x^n)=(t,x^1,...,x^n)$ and also $\partial_0 = \partial_t$. Setting $\beta =0$ in (\ref{1'}) we obtain the Gauss-law constraint \begin{equation} \nonumber \partial^j F_{j 0} + [A^j,F_{j0} ]=0 \, . \end{equation} The system is gauge invariant. Given a sufficiently smooth function $U: {\mathbb R}^{1+n} \rightarrow \mathcal{G}$ we define the gauge transformation $T$ by $T A_0 = A_0'$ , $T(A_1,...,A_n) = (A_1',...,A_n'),$ where \begin{align*} A_{\alpha} & \longmapsto A_{\alpha}' = U A_{\alpha} U^{-1} - (\partial_{\alpha} U) U^{-1} \, . \end{align*} It is well-known that if $(A_0,...A_n)$ satisfies (\ref{1'}) so does $(A_0',...,A_n')$. Hence we may impose a gauge condition. We exlusively study the temporal gauge $A_0=0$. The Yang-Mills-Higgs system is given by \begin{align} \label{1} D^{\alpha} F_{\alpha \beta} & = [D_{\beta} \phi, \phi ] \\ \label{2} D^{\alpha} D_{\alpha} \phi & = |\phi|^{N-1} \phi \, . 
\end{align} Setting $\beta =0$ in (\ref{1}) we obtain the Gauss-law constraint \begin{equation} \nonumber \partial^j F_{j 0} = -[A^j,F_{j0}] + [D_0 \phi,\phi ] \, \end{equation} where $\phi: \mathbb{R}^{1+n} \rightarrow g $ . This system is also gauge invariant. Similarly as above we define the gauge transformation $T$ by $T A_0 = A_0'$ , $T(A_1,...,A_n) = (A_1',...,A_n')$ , $T\phi = \phi'$ , where \begin{align*} A_{\alpha} & \longmapsto A_{\alpha}' = U A_{\alpha} U^{-1} - (\partial_{\alpha} U) U^{-1} \\ \phi & \longmapsto \phi' = U \phi U^{-1} \, . \end{align*} If $(A_0,...,A_n,\phi)$ satisfies (\ref{1}),(\ref{2}), so does $(A_0',...,A_n',\phi')$. Some historical remarks: Concerning the well-posedness problem for the Yang-Mills equation in three space dimensions Klainerman and Machedon \cite{KM1} proved global well-posedness in energy space in the temporal gauge. Selberg and Tesfahun \cite{ST} proved local well-posedness for finite energy data in Lorenz gauge. This result was improved by Tesfahun \cite{Te} to data without finite energy, namely for $(A(0),(\partial_t A)(0) \in H^s \times H^{s-1}$ with $s > \frac{6}{7}$. Local well-posedness in energy space was given by Oh \cite{O} using a new gauge, namely the Yang-Mills heat flow. He was also able to shows that this solution can be globally extended \cite{O1}. Tao \cite{T1} showed local well-posedness for small data in $H^s \times H^{s-1}$ for $ s > \frac{3}{4}$ in temporal gauge. In space dimension four where the energy space is critical with respect to scaling Klainerman and Tataru \cite{KT} proved small data local well-posedness for a closely related model problem in Coulomb gauge for $s>1$. Very recently this result was significantly improved by Krieger and Tataru \cite{KrT}, who were able to show global well-posedness for data with small energy. Sterbenz \cite{St} considered also the four-dimensional case in Lorenz gauge and proved global well-posedness for small data in Besov space $\dot{B}^{1,1} \times \dot(B)^{0,1}$. In high space dimension $n \ge 6$ (and $n$ even) Krieger and Sterbenz \cite{KrSt} proved global well-posedness for small data in the critical Sobolev space. Concerning the more general Yang-Mills-Higgs system Eardley and Moncrief \cite{EM},\cite{EM1} proved local and global well-posedness for initial data $(A(0),(\partial_t A)(0)$ and $(\phi(0),(\partial_t \phi)(0)$) in $H^s \times H^{s-1}$ and $s \ge 2$. In Coulomb gauge global well-posedness in energy space $H^1 \times L^2$ was shown by Keel \cite{K}. Recently Tesfahun \cite{Te1} considered the problem in Lorenz gauge and obtained local well-posedness in energy space. We now study the Yang-Mills equation and also the Yang-Mills-Higgs system in arbitrary space dimension $n \ge 3$ in temporal gauge for low regularity data, which in three space dimension not necessarily have finite energy and which fulfill a smallness assumption, which reads in the Yang-Mills-Higgs case as follows $$ \|A(0)\|_{H^s} + \|(\partial_t A)(0)\|_{H^{s-1}} + \|\phi(0)\|_{H^s} + \|(\partial_t \phi)(0)\|_{H^{s-1}} < \epsilon $$ with a sufficiently small $\epsilon > 0$ , under the assumption $s> \frac{3}{4}$ for $n=3$ and $s > \frac{n}{2}-\frac{5}{8}-\frac{5}{8(2n-1)}$ in general dimension $n\ge3$. We obtain a solution which satisfies $A,\phi \in C^0([0,1],H^s) \cap C^1([0,1],H^{s-1})$. A corresponding result holds for the Yang-Mills equation. Uniqueness holds in a certain subspace of Bourgain-Klainerman-Machedon type. The basis for our results is Tao's paper \cite{T1}. 
We carry over his three-dimensional result for the Yang-Mills equation to the more general Yang-Mills-Higgs equations and to arbitrary dimensions $n\ge 3$. The result relies on the null structure of all the critical bilinear terms. We review this null structure, which was partly detected already by Klainerman-Machedon in the Yang-Mills case \cite{KM1} and by Tesfahun \cite{Te1} for Yang-Mills-Higgs in the situation of the Lorenz gauge. The necessary estimates for the nonlinear terms in spaces of $X^{s,b}$-type in the (3+1)-dimensional case then reduce essentially to Tao's result \cite{T1}. One of these estimates is responsible for the small data assumption. Because these local well-posedness results (Prop. \ref{Prop'} and Prop. \ref{Prop}) can initially only be shown under the condition that the curl-free part $A^{cf}$ of $A$ (as defined below) vanishes for $t=0$, we have to show that this assumption can be removed by a suitable gauge transformation (Lemma \ref{Lemma}) which preserves the regularity of the solution. This uses an idea of Keel and Tao \cite{T1}. Our main results read as follows: \begin{theorem} \label{Theorem1'} Let $ n \ge 3$ , $s > \frac{n}{2}-\frac{5}{8}-\frac{5}{8(2n-1)}$ . Let $a \in H^s({\mathbb R}^n)$ , $a' \in H^{s-1}({\mathbb R}^n)$ be given, where $a=(a_1,...,a_n)$ , $a' =(a_1',...,a_n')$ , satisfying the compatibility condition $\partial^j a_j' = - [a^j,a_j']$. Assume $\, \|a\|_{H^s} + \|a'\|_{H^{s-1}} \le \epsilon \, , $ where $\epsilon > 0$ is sufficiently small. Then the Yang-Mills equation (\ref{1'}) in temporal gauge $A_0=0$ with initial conditions $ A(0)=a \, , \, (\partial_t A)(0) = a' \, ,$ where $A=(A_1,...,A_n)$, has a unique local solution $A= A_+^{df} + A_-^{df} +A^{cf}$ , where $$ A^{df}_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] \, , \, A^{cf} \in X^{s+\alpha,\frac{1}{2}+}_{\tau=0}[0,1] \, , \, \partial_t A^{cf} \in C^0([0,1],H^{s-1}) \, .$$ These spaces are defined below and $\alpha = \frac{3n+1}{8(2n-1)}$. This solution fulfills $$ A \in C^0([0,1],H^s({\mathbb R}^n)) \cap C^1([0,1],H^{s-1}({\mathbb R}^n)) \, . $$ \end{theorem} {\bf Remark:} In the (3+1)-dimensional case we assume $ s > \frac{3}{4} $ and $\alpha = \frac{1}{4}$ , so that data without finite energy are admissible. This is Tao's result \cite{T1}. \begin{theorem} \label{Theorem1} Let $ n \ge 3$ , $s > \frac{n}{2}-\frac{5}{8}-\frac{5}{8(2n-1)}$ , and $2 \le N < 1+\frac{7}{4(\frac{n}{2}-s)}$ , if $s < \frac{n}{2}$ , and $N< \infty$ , if $s \ge \frac{n}{2}$ . Here $N$ is an odd integer, or $N \in\mathbb{N}$ with $N > s$. Let $a \in H^s({\mathbb R}^n)$ , $a' \in H^{s-1}({\mathbb R}^n)$ , $\phi_0 \in H^s({\mathbb R}^n)$ , $\phi_1 \in H^{s-1}({\mathbb R}^n)$ be given, where $a=(a_1,...,a_n)$ , $a' =(a_1',...,a_n')$ , satisfying the compatibility condition $\partial^j a_j' = -[a^j,a_j']- [\phi_1,\phi_0]$. Assume $$ \|a\|_{H^s} + \|a'\|_{H^{s-1}} + \|\phi_0\|_{H^s} + \|\phi_1\|_{H^{s-1}} \le \epsilon \, , $$ where $\epsilon > 0$ is sufficiently small.
Then the Yang-Mills-Higgs equations (\ref{1}) , (\ref{2}) in temporal gauge $A_0=0$ with initial conditions $$ A(0)=a \, , \, (\partial_t A)(0) = a' \, , \, \phi(0)=\phi_0 \, , \, (\partial_t \phi)(0) = \phi_1 \, ,$$ where $A=(A_1,...,A_n)$, has a unique local solution $A= A_+^{df} + A_-^{df} +A^{cf}$ and $\phi = \phi_+ + \phi_-$ , where $$ A^{df}_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] , A^{cf} \in X^{s+\alpha,\frac{1}{2}+}_{\tau=0}[0,1] , \partial_t A^{cf} \in C^0([0,1],H^{s-1}) , \phi_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] \, , $$ where these spaces are defined below and $\alpha = \frac{3n+1}{8(2n-1)}$. This solution fulfills $$ A \, , \, \phi \in C^0([0,1],H^s({\mathbb R}^n)) \cap C^1([0,1],H^{s-1}({\mathbb R}^n)) \, . $$ \end{theorem} Remark: The assumption $N>s$ or $N$ odd ensures that the function $f(s) = |s|^{N-1} s$ for $s\in \mathbb{R}$ is smooth enough at the origin. We denote the Fourier transform with respect to space and time and with respect to space by $\,\,\widehat{}\,\,$ and ${\mathcal F}$, respectively. The operator $|\nabla|^{\alpha}$ is defined by $({\mathcal F}(|\nabla|^{\alpha} f))$ $(\xi) = |\xi|^{\alpha} ({\mathcal F}f)(\xi)$ and similarly $ \langle \nabla \rangle^{\alpha}$. $\Box = \partial_t^2 - \Delta$ is the d'Alembert operator.\\ $a+ := a + \epsilon$ for a sufficiently small $\epsilon >0$ , so that $a<a+<a++$ , and similarly $a--<a-<a$ , and $\langle \cdot \rangle := (1+|\cdot|^2)^{\frac{1}{2}}$ . The standard spaces $X^{s,b}_{\pm}$ of Bourgain-Klainerman-Machedon type belonging to the half waves are the completion of the Schwarz space $\mathcal{S}({\mathbb R}^4)$ with respect to the norm $$ \|u\|_{X^{s,b}_{\pm}} = \| \langle \xi \rangle^s \langle \tau \mp |\xi| \rangle^b \widehat{u}(\tau,\xi) \|_{L^2_{\tau \xi}} \, . $$ Similarly we define the wave-Sobolev spaces $X^{s,b}_{|\tau|=|\xi|}$ with norm $$ \|u\|_{X^{s,b}_{|\tau|=|\xi|}} = \| \langle \xi \rangle^s \langle |\tau| - |\xi| \rangle^b \widehat{u}(\tau,\xi) \|_{L^2_{\tau \xi}} $$ and also $X^{s,b}_{\tau =0}$ with norm $$\|u\|_{X^{s,b}_{\tau=0}} = \| \langle \xi \rangle^s \langle \tau \rangle^b \widehat{u}(\tau,\xi) \|_{L^2_{\tau \xi}} \, .$$ We also define $X^{s,b}_{\pm}[0,T]$ as the space of the restrictions of functions in $X^{s,b}_{\pm}$ to $[0,T] \times \mathbb{R}^3$ and similarly $X^{s,b}_{|\tau| = |\xi|}[0,T]$ and $X^{s,b}_{\tau =0}[0,T]$. We frequently use the estimates $\|u\|_{X^{s,b}_{\pm}} \le \|u\|_{X^{s,b}_{|\tau|=|\xi|}}$ for $b \le 0$ and the reverse estimate for $b \ge 0$. \\ \section{Reformulation of the problem and null structure} In temporal gauge $A_0=0$ the system (\ref{1'}) is equivalent to \begin{align*} \partial_t \,div \, A & = - [A_i,\partial_t A^i] \\ \Box A_j & = \partial_j \, div A - [div A,A_j] - 2[A^i,\partial_i A_j] + [A^i,\partial_j A_i] - [A^i,[A_i,A_j]] \end{align*} and the Gauss constraint reduces to $$\partial^j \partial_t A_j = - [A_j,\partial_t A^j] \, . $$ Similarly in temporal gauge $A_0=0$ the system (\ref{1}),(\ref{2}) is equivalent to \begin{align*} \partial_t \,div \, A & = -[\partial_t \phi , \phi] - [A_i,\partial_t A^i] \\ \Box A_j & = \partial_j \, div A - [div A,A_j] - 2[A^i,\partial_i A_j] + [A^i,\partial_j A_i] - [\phi,\partial_j \phi] - [A^i,[A_i,A_j]] \\ &\hspace{1em} - [\phi,[A_j,\phi]] \\ \Box \phi & = -[div A,\phi] - 2[A_i,\partial^i \phi] - [A^i,[A_i,\phi]] + |\phi|^{N-1} \phi \end{align*} and the Gauss constraint reduces to $$\partial^j \partial_t A_j = -[A_j,\partial_t A^j]- [\partial_t \phi,\phi] \, . 
$$ We decompose $A$ into its divergence-free part $A^{df}$ and its curl-free part $A^{cf}$ : $$ A = A^{df} + A^{cf} \, , $$ where $$ A_j^{df} = (PA)_j := R^k(R_j A_k - R_k A_j) \quad , \quad A_j^{cf} = - R_j R_k A^k \, . $$ Here $P$ denotes the Leray projection onto the divergence-free part, and $R_j := |\nabla|^{-1} \partial_j$ is the Riesz transform. Then we obtain the following system which is equivalent to (\ref{1'}): \begin{align} \label{3'} \partial_t A^{cf} &= (-\Delta)^{-1} \nabla [A_i,\partial_t A^i] \\ \label{4'} \Box A^{df} & = -P [div \, A^{cf},A] - 2 P[A^i,\partial_i A] + P[A^i,\nabla A_i] - P[A^i,[A_i,A]] \end{align} Similarly the following system is equivalent to (\ref{1}),(\ref{2}): \begin{align} \label{3} \partial_t A^{cf} &= (-\Delta)^{-1} \nabla [\partial_t \phi,\phi] + (-\Delta)^{-1} \nabla [A_i,\partial_t A^i] \\ \nonumber \Box A^{df} & = -P [div \, A^{cf},A] - 2P[A^i,\partial_i A] + P[A^i,\nabla A_i] -P[\phi,\nabla \phi] - P[A^i,[A_i,A]] \\ \label{4} & \hspace{1em}- P[\phi,[A,\phi]] \\ \label{5} \Box \phi & = -[div \, A^{cf},\phi] - 2[A_i,\partial^i \phi] - [A^i,[A_i,\phi]] + |\phi|^{N-1} \phi \, . \end{align} We now show that all the critical terms in (\ref{4'}), (\ref{4}) and (\ref{5}), namely the quadratic terms which contain only $A^{df}$ or $\phi$, have null structure. Those quadratic terms which contain $A^{cf}$ are less critical, because $A^{cf}$ is shown to be more regular than $A^{df}$, and the cubic terms are also less critical, because they contain no derivatives. The only critical term in (\ref{5}) is $[A^{df}_i,\partial^i \phi]$. We easily calculate \begin{align} \nonumber &[A^{df}_i,\partial^i A^{df}] = [R^k(R_i A_k - R_k A_i),\partial^i A^{df}] \\ \nonumber &= \frac{1}{2} \big([R^k(R_i A_k - R_k A_i),\partial^i A^{df}] + [R^i(R_k A_i - R_i A_k),\partial^k A^{df}]\big) \\ \nonumber &=\frac{1}{2} \big([R^k(R_i A_k - R_k A_i),\partial^i A^{df}] - [R^i(R_i A_k - R_k A_i),\partial^k A^{df}]\big) \\ \label{50} &= \frac{1}{2} Q^{ik} [ |\nabla|^{-1}(R_i A_k - R_k A_i),A^{df}] \end{align} where $$ Q_{ij}[u,v] := [\partial_i u,\partial_jv] - [\partial_j u,\partial_i v] = Q_{ij}(u,v) + Q_{ij}(v,u) $$ with the standard null form $$ Q_{ij}(u,v) := \partial_i u \partial_j v - \partial_j u \partial_i v \, . $$ Thus, ignoring $P$, which is a bounded operator, we obtain \begin{equation} \label{N2} P[A_i^{df},\partial^i A^{df}] \sim \sum Q_{ik}[|\nabla|^{-1} A^{df},A^{df}] \, , \end{equation} and similarly \begin{equation} \label{N2'} P[A_i^{df},\partial^i \phi] \sim \sum Q_{ik}[|\nabla|^{-1} A^{df},\phi] \, . \end{equation} Moreover \begin{align*} (\phi \nabla \phi')^{df}_j & = R^k(R_j(\phi \partial_k \phi') - R_k(\phi \partial_j \phi')) \\ & = |\nabla|^{-2} \partial^k(\partial_j (\phi \partial_k \phi') - \partial_k (\phi \partial_j \phi')) \\ & = |\nabla|^{-2} \partial^k(\partial_j \phi \partial_k \phi' - \partial_k \phi \partial_j \phi') \\ & = |\nabla|^{-2} \partial^k Q_{jk}(\phi,\phi') \end{align*} so that \begin{equation} \label{N3} P[\phi,\nabla \phi] \sim \sum |\nabla|^{-1} Q_{jk} [\phi,\phi] \, , \end{equation} and \begin{equation} \label{N3'} P[A^{df}_i,\nabla A^{df}_i] \sim \sum |\nabla|^{-1}Q_{jk} [A^{df},A^{df}] \, . \end{equation} All the other quadratic terms contain at least one factor $A^{cf}$.
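As an informal guide to why these null forms are favorable (this remark is purely illustrative; the precise bilinear estimates we rely on are stated below), note that if $\xi$ and $\eta$ denote the spatial frequencies of the two factors, the symbol of $Q_{ij}$ obeys the elementary bound $$ |\xi_i \eta_j - \xi_j \eta_i| \le |\xi| \, |\eta| \, |\sin \angle(\xi,\eta)| \, , $$ so that it vanishes for parallel frequencies. It is precisely this parallel interaction which is the worst case for generic products of waves, and the angular gain is what makes the improved bilinear estimates possible.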
Defining \begin{align*} \phi_{\pm} = \frac{1}{2}(\phi \mp i \langle \nabla \rangle^{-1} \partial_t \phi)& \Longleftrightarrow \phi=\phi_+ + \phi_- \, , \, \partial_t \phi = i \langle \nabla \rangle (\phi_+ - \phi_-) \\ A^{df}_{\pm} = \frac{1}{2}(A^{df} \mp i \langle \nabla \rangle^{-1} \partial_t A^{df}) & \Longleftrightarrow A^{df} = A^{df}_+ + A_-^{df} \, , \, \partial_t A^{df} = i \langle \nabla \rangle(A^{df}_+ - A^{df}_-) \end{align*} we can rewrite (\ref{3'}),(\ref{4'}) as \begin{align} \label{6'} \partial_t A^{cf} &= (-\Delta)^{-1} \nabla [A_i,\partial_t A^i] \\ \label{7'} (i \partial_t \pm \langle \nabla \rangle)A_{\pm} ^{df} & = \pm 2^{-1} \langle \nabla \rangle^{-1} ( R.H.S. \, of \, (\ref{4'}) + A^{df}) \, . \end{align} with initial data \begin{align} \label{1.15*'} A^{df}_\pm(0) & = \frac{1}{2}(A^{df}(0) \mp i \langle \nabla \rangle^{-1} (\partial_t A^{df})(0)) \, . \end{align} Similarly we can rewrite (\ref{3}),(\ref{4}),(\ref{5}) as \begin{align} \label{6} \partial_t A^{cf} &= (-\Delta)^{-1} \nabla [\partial_t \phi,\phi] + (-\Delta)^{-1} \nabla [A_i,\partial_t A^i] \\ \label{7} (i \partial_t \pm \langle \nabla \rangle)A_{\pm} ^{df} & = \pm 2^{-1} \langle \nabla \rangle^{-1} ( R.H.S. \, of \, (\ref{4}) + A^{df}) \\ \label{8} (i \partial_t \pm \langle \nabla \rangle) \phi_{\pm} &= \pm 2^{-1} \langle \nabla \rangle^{-1}( R.H.S. \, of \, (\ref{5}) + \phi) \, . \end{align} The initial data are transformed as follows: \begin{align} \label{1.14*} \phi_{\pm}(0) &= \frac{1}{2}(\phi(0) \mp i \langle \nabla \rangle^{-1} (\partial_t \phi)(0)) \\ \label{1.15*} A^{df}_\pm(0) & = \frac{1}{2}(A^{df}(0) \mp i \langle \nabla \rangle^{-1} (\partial_t A^{df})(0)) \, . \end{align} \section{The preliminary local well-posedness results} We now state and prove preliminary local well-posedness of (\ref{3'}),(\ref{4'}) as well as (\ref{3}),(\ref{4}),(\ref{5}), for which it is essential to have data for $A$ with vanishing curl-free part. \begin{prop} \label{Prop'} For space dimension $n \ge 3$ assume $s>\frac{n}{2}- \frac{5}{8}-\frac{5}{8(2n-1)}$ and $\alpha = \frac{3n+1}{8(2n-1)}$ . Let $a^{df} = (a_1^{df},...,a_n^{df}) \in H^s$ , $a'^{df}= ({a'}_1^{df},...,{a'}_n^{df}) \in H^{s-1}$ be given with $$ \sum_j \|a_j^{df}\|_{H^s} + \sum_j \|{a'}_j^{df}\|_{H^{s-1}} \le \epsilon_0 \, ,$$ where $\epsilon_0 >0$ is sufficiently small. Then the system (\ref{3'}),(\ref{4'}) with initial conditions $$ A^{df}(0)=a^{df} \, , \, (\partial_t A^{df})(0) = {a'}^{df} \, , \, A^{cf}(0) = 0 \, , $$ has a unique local solution $$ A= A^{df}_+ + A^{df}_- + A^{cf} \, , $$ where $$ A^{df}_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] \, , \, A^{cf} \in X^{s+\alpha,\frac{1}{2}+}_{\tau=0}[0,1] \, , \, \partial_t A^{cf} \in C^0([0,1],H^{s-1}) \, .$$ Uniqueness also holds for not necessarily vanishing initial data $A^{cf}(0) = a^{cf}$. The solution satisfies $$ A \in C^0([0,1],H^s) \cap C^1([0,1],H^{s-1}) \, . $$ \end{prop} \begin{prop} \label{Prop} For space dimension $n \ge 3$ assume $s>\frac{n}{2}- \frac{5}{8}-\frac{5}{8(2n-1)}$ and $\alpha = \frac{3n+1}{8(2n-1)}$ . Assume $2 \le N < 1 + \frac{7}{4(\frac{n}{2}-s)}$ , if $s < \frac{n}{2}$ , and $2 \le N < \infty$, if $ s \ge \frac{n}{2}$. Here $N$ is an odd integer, or $N \in\mathbb{N}$ with $N > s$.
Let $a^{df} = (a_1^{df},...,a_n^{df}) \in H^s$ , $a'^{df}= ({a'}_1^{df},...,{a'}_n^{df}) \in H^{s-1}$ , $\phi_0 \in H^s$, $\phi_1 \in H^{s-1}$ be given with $$ \sum_j \|a_j^{df}\|_{H^s} + \sum_j \|{a'}_j^{df}\|_{H^{s-1}} + \|\phi_0\|_{H^s} + \|\phi_1\|_{H^{s-1}} \le \epsilon_0 \, ,$$ where $\epsilon_0 >0$ is sufficiently small. Then the system (\ref{3}),(\ref{4}),(\ref{5}) with initial conditions $$ \phi(0) = \phi_0 \, , \, (\partial_t \phi)(0) = \phi_1 \, , \, A^{df}(0)=a^{df} \, , \, (\partial_t A^{df})(0) = {a'}^{df} \, , \, A^{cf}(0) = 0 \, , $$ has a unique local solution $$ \phi= \phi_+ + \phi_- \quad , \quad A= A^{df}_+ + A^{df}_- + A^{cf} \, , $$ where $$ \phi_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] , A^{df}_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] , A^{cf} \in X^{s+\alpha,\frac{1}{2}+}_{\tau=0}[0,1] , \partial_t A^{cf} \in C^0([0,1],H^{s-1}) \, . $$ Uniqueness holds (of course) for not necessarily vanishing initial data $A^{cf}(0) = a^{cf}$. The solution satisfies $$ A,\phi \in C^0([0,1],H^s) \cap C^1([0,1],H^{s-1}) \, . $$ \end{prop} Fundamental for their proof are the following estimates. \begin{prop} \label{Prop.2} Let $n \ge 2$. \begin{enumerate} \item For $2 < q \le \infty $ , $ 2 \le r < \infty$ , $ \frac{2}{q} = (n-1)(\frac{1}{2}-\frac{1}{r})$ , $ \mu = n(\frac{1}{2}-\frac{1}{r})-\frac{1}{q}$ the following estimate holds \begin{equation} \label{15} \|u\|_{L^q_t L^r_x} \lesssim \|u\|_{X^{\mu,\frac{1}{2}+}_{|\tau|=|\xi|}} \, . \end{equation} \item For $k \ge 0$ , $ p < \infty$ and $ \frac{n-1}{2(n+1)} \ge \frac{1}{p} \ge \frac{n-1}{2(n+1)} - \frac{k}{n}$ the following estimate holds: \begin{equation} \label{Tao} \|u\|_{ L^p_x L^2_t} \lesssim \|u\|_{X^{k+\frac{n-1}{2(n+1)},\frac{1}{2}+}_{|\tau|=|\xi|}} \, . \end{equation} \end{enumerate} \end{prop} \begin{proof} (\ref{15}) is the Strichartz type estimate, which can be found for e.g. in \cite{GV}, Prop. 2.1, combined with the transfer principle. Concerning (\ref{Tao}) we use \cite{KMBT}, Thm. B.2: $$ \|\mathcal{F}_t u \|_{L^2_{\tau} L_x^{\frac{2(n+1)}{n-1}}} \lesssim \|u_0\|_{\dot{H}^{\frac{n-1}{2(n+1)}}} \, , $$ if $u=e^{it |\nabla|} u_0$ and $\mathcal{F}_t$ denotes the Fourier transform with respect to time. This immediately implies by Plancherel, Minkowski's inequality and Sobolev's embedding theorem $$\|u\|_{L^p_x L^2_t} = \|\mathcal{F}_t u \|_{L^p_x L^2_\tau} \le \|\mathcal{F}_t u \|_{L^2_{\tau} L^p_x} \lesssim \|\mathcal{F}_t u \|_{L^2_{\tau} H^{k,\frac {2(n+1)}{n-1}}_x} \lesssim \|u_0\|_{H^{k+\frac{n-1}{2(n+1)}}} \, . $$ The transfer principle implies (\ref{Tao}). \end{proof} \begin{proof}[Proof of Prop. \ref{Prop} and Prop. \ref{Prop'}] We use the system (\ref{6'}),(\ref{7'}) (instead of (\ref{3'}),(\ref{4'})) and (\ref{6}),(\ref{7}),(\ref{8}) (instead of (\ref{3}),(\ref{4}),(\ref{5})) with initial conditions (\ref{1.15*'}) and (\ref{1.14*}),(\ref{1.15*}). We want to use a contraction argument for $A_{\pm}^{df} \in X_{\pm}^{s,\frac{3}{4}+\epsilon}[0,1] \, , \, A^{cf} \in X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau=0}$ $[0,1]$ , $\partial_t A^{cf} \in C^0([0,1],H^{s-1}$ ), and in the Yang-Mills-Higgs case in addition for $\phi \in X_{\pm}^{s,\frac{3}{4}+ \epsilon}[0,1]$ . Provided that our small data assumption holds this can be reduced by well-known arguments to suitable multilinear estimates of the right hand sides of these equations. For (\ref{7'}) e.g. we make use of the following well-known estimate: $$ \|A^{df}_{\pm}\|_{X^{l,b}_{\pm}[0,1]} \lesssim \|A^{df}_{\pm}(0)\|_{H^l} + \| R.H.S. 
\, of \, (\ref{7'}) \|_{X^{l,b-1}_{\pm}[0,1]} \, , $$ which holds for $l\in{\mathbb R}$ , $\frac{1}{2} < b \le 1$ . Thus the local existence and uniqueness can be reduced to the following estimates. In order to control $A^{cf}$ we need \begin{align} \label{16} \| |\nabla|^{-1} (\phi_1 \partial_t \phi_2)\|_{X^{s+\alpha,-\frac{1}{2}+\epsilon+}_{\tau=0}} &\lesssim \|\phi_1\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \|\phi_2\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \\ \label{17} \| |\nabla|^{-1} (\phi_1 \partial_t \phi_2)\|_{X^{s+\alpha,-\frac{1}{2}+2\epsilon-}_{\tau=0}} &\lesssim \|\phi_1\|_{X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau=0}} \|\phi_2\|_{X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau=0}} \\ \label{18} \| |\nabla|^{-1} (\phi_1 \partial_t \phi_2)\|_{X^{s+\alpha,-\frac{1}{2}+\epsilon}_{\tau=0}} &+ \| |\nabla|^{-1} (\phi_2 \partial_t \phi_1)\|_{X^{s+\alpha,-\frac{1}{2}+\epsilon}_{\tau=0}} \\ \nonumber &\lesssim \|\phi_1\|_{X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau=0}} \|\phi_2\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, . \end{align} In order to control $\partial_t A^{cf}$ we need \begin{align} \label{19} \| |\nabla|^{-1} (A_1 \partial_t A_2)\|_{C^0(H^{s-1})} \lesssim & (\|A_1^{cf}\|_{X^{s+\alpha,\frac{1}{2}+}_{\tau=0}} + \sum_{\pm} \|A^{df}_{1\pm}\|_{X^{s,\frac{1}{2}+}_{\pm}})\\ \nonumber &(\|\partial_t A^{cf}_2\|_{C^0(H^{s-1})} + \sum_{\pm} \|A^{df}_{2\pm}\|_{X^{s,\frac{1}{2}+}_{\pm}}) \, . \end{align} The estimate for $A^{df}$ and $\phi$ by use of (\ref{N2}),(\ref{N2'}),(\ref{N3}),(\ref{N3'}) reduces to \begin{align} \nonumber &\|Q_{ij}(|\nabla|^{-1}\phi_1,\phi_2)\|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} + \|\nabla^{-1}Q_{ij}(\phi_1,\phi_2)\|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} \\ \label{28} &\hspace{1em}\lesssim \|\phi_1\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \|\phi_2\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, . \end{align} For the proof of (\ref{28}) we refer to \cite{T}, Prop. 9.2 (slightly modified), which is given under the assumption $s > \frac{n}{2}-\frac{3}{4}$. This assumption is weaker than our assumption, if $n\ge 4$, and they coincide for $n=3$. Moreover for the terms $P[div \,A^{cf},A]$ , $P[A^i,\partial_i A]$ and $P[A^i,\partial_j A_i]$ we need \begin{equation} \label{29} \| \nabla A^{cf} A^{df} \|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} + \| A^{cf} \nabla A^{df} \|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} \lesssim \|A^{cf}\|_{X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau =0}} \|A^{df}\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \end{equation} and \begin{equation} \label{30} \| \nabla A^{cf} A^{cf} \|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} \lesssim \|A^{cf}\|_{X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau =0}}^2 \, . \end{equation} All the cubic terms are estimated by \begin{equation} \label{31} \| A_1 A_2 A_3 \|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} \lesssim \prod_{i=1}^3 \min(\|A_i\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}},\|A_i\|_{X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau =0}} ) \, . \end{equation} Remark that in (\ref{19}), (\ref{29}) and (\ref{31}) $A^{df}$ may be replaced by $\phi$ . \\ For the Yang-Mills-Higgs system we additionally need \begin{equation} \label{32} \| |\phi|^{N-1} \phi \|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} \lesssim \|\phi\|^N_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, . \end{equation} All these estimates up to (\ref{19}) and (\ref{32}) have been essentially given by Tao \cite{T1} for the Yang-Mills case in space dimension $n=3$. 
We remark that it is especially (\ref{18}) which prevents a large data result, because it seems to be difficult to replace $X^{s+\alpha,-\frac{1}{2}+\epsilon}_{\tau=0}$ by $X^{s+\alpha,-\frac{1}{2}+\epsilon+}_{\tau=0}$ on the left hand side.\\
{\bf Proof of (\ref{17}).} As usual the singularity of $|\nabla|^{-1}$ is harmless in dimension $n\ge 3$ (\cite{T}, Cor. 8.2) and it can be replaced by $\langle \nabla \rangle^{-1}$. Taking care of the time derivative we reduce to \begin{align*} \big|\int \int u_1 u_2 u_3 dx dt\big| \lesssim \|u_1\|_{X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau =0}} \|u_2\|_{X^{s+\alpha,-\frac{1}{2}+\epsilon}_{\tau =0}} \|u_3\|_{X^{1-(\alpha +s),\frac{1}{2}-2\epsilon+}_{\tau =0}} \, , \end{align*} which follows from Sobolev's multiplication rule, because under our assumption on $s$ and the choice of $\alpha$ we obtain $ 2(s+\alpha)+1-(\alpha +s) > \frac{n}{2}$ , as one easily calculates. \\
{\bf Proof of (\ref{18}).} a. If $\widehat{\phi}$ is supported in $ ||\tau|-|\xi|| \gtrsim |\xi| $ , we obtain $$ \|\phi\|_{X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau=0}} \lesssim \|\phi\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \,, $$ where we use that $\alpha \le \frac{1}{4}$ for $n\ge 3$ . Thus (\ref{18}) follows from (\ref{17}).\\
b. It remains to show $$ \big|\int\int (uv_t w + uvw_t) dxdt \big| \lesssim \|u\|_{X^{1-\alpha-s,\frac{1}{2}-\epsilon}_{\tau =0}} \|w\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau| =|\xi|}} \|v\|_{X^{s+\alpha-\epsilon,\frac{1}{2}+\epsilon}_{\tau =0}} \, $$ whenever $\widehat w$ is supported in $||\tau|-|\xi|| \ll |\xi|$. This is equivalent to $$ \int_* m(\xi_1,\xi_2,\xi_3,\tau_1,\tau_2,\tau_3) \prod_{i=1}^3 \widehat{u}_i(\xi_i,\tau_i) d\xi d\tau \lesssim \prod_{i=1}^3 \|u_i\|_{L^2_{xt}} \, $$ where $d\xi = d\xi_1 d\xi_2 d\xi_3$ , $d\tau = d\tau_1 d\tau_2 d\tau_3$ and * denotes integration over $\sum_{i=1}^3 \xi_i = \sum_{i=1}^3 \tau_i = 0$. The Fourier transforms are nonnegative without loss of generality. Here $$ m= \frac{(|\tau_2|+|\tau_3|) \chi_{||\tau_3|-|\xi_3|| \ll |\xi_3|}}{\langle \xi_1 \rangle^{1-\alpha-s} \langle \tau_1 \rangle^{\frac{1}{2}-\epsilon} \langle \xi_2 \rangle^{s+\alpha-\epsilon} \langle \tau_2 \rangle^{\frac{1}{2}+\epsilon} \langle \xi_3 \rangle^s \langle |\tau_3|-|\xi_3|\rangle^{\frac{3}{4}+\epsilon}} \, .
$$ Since $\langle \tau_3 \rangle \sim \langle \xi_3 \rangle$ and $\tau_1+\tau_2+\tau_3=0$ we have \begin{equation} \label{N4'} |\tau_2| + |\tau_3| \lesssim \langle \tau_1 \rangle^{\frac{1}{2}-\epsilon} \langle \tau_2 \rangle^{\frac{1}{2}+\epsilon} +\langle \tau_1 \rangle^{\frac{1}{2}-\epsilon} \langle \xi_3 \rangle^{\frac{1}{2}+\epsilon} +\langle \tau_2 \rangle^{\frac{1}{2}+\epsilon} \langle \xi_3 \rangle^{\frac{1}{2}-\epsilon} , \end{equation} so that concerning the first term on the right hand side of (\ref{N4'}) we have to show $$\big|\int\int uvw dx dt\big| \lesssim \|u\|_{X^{1-\alpha-s,0}_{\tau=0}} \|v\|_{X^{s+\alpha-\epsilon,0}_{\tau=0}} \|w\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \ , $$ which easily follows from Sobolev's multiplication rule, because $s> \frac{n}{2}-1$.\\ Concerning the second term on the right hand side of (\ref{N4'}) we use $\langle \xi_1 \rangle^{s-1+\alpha} \lesssim \langle \xi_2 \rangle^{s-1+\alpha} + \langle \xi_3 \rangle^{s-1+\alpha}$, so that we reduce to \begin{equation} \label{51} \big|\int\int uvw dx dt\big| \lesssim\|u\|_{X^{0,0}_{\tau=0}} \|v\|_{X^{1-\epsilon,\frac{1}{2}+\epsilon}_{\tau=0}} \|w\|_{X^{s-\frac{1}{2}-\epsilon,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \end{equation} and \begin{equation} \label{52} \big|\int\int uvw dx dt\big| \lesssim\|u\|_{X^{0,0}_{\tau=0}} \|v\|_{X^{s+\alpha-\epsilon,\frac{1}{2}+\epsilon}_{\tau=0}} \|w\|_{X^{\frac{1}{2}-\alpha-\epsilon,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \,. \end{equation} To obtain (\ref{51}) in the case $n \ge 4$ we estimate as follows: $$ \big| \int\int uvw dx dt \big| \le \|u\|_{L^2_xL^2_t} \|v\|_{L^{\frac{2n}{n-2+2\epsilon}}_x L^{\infty}_t} \|w\|_{L^{\frac{n}{1-\epsilon}}_xL^2_t} \, . $$ We use (\ref{Tao}) with $p=\frac{n}{1-\epsilon}$ and $k= n(\frac{n-1}{2(n+1)}-\frac{1}{p})$, so that one easily checks that $$ k + \frac{n-1}{2(n+1)} = n(\frac{n-1}{2(n+1)} - \frac{1-\epsilon}{n}) + \frac{n-1}{2(n+1)} < \frac{n}{2} - \frac{5}{4} < s - \frac{1}{2} - \epsilon \, . $$ Thus $$ \|w\|_{L^{\frac{n}{1-\epsilon}}_x L^2_t} \lesssim \|w\|_{X^{k+\frac{n-1}{2(n+1)},\frac{1}{2}+}_{|\tau|=|\xi|}} \le \|w\|_{X^{s-\frac{1}{2}-\epsilon,\frac{1}{2}+}_{|\tau|=|\xi|}}$$ and by Sobolev $$ \|v\|_{L^{\frac{2n}{n-2+2\epsilon}}_x L^{\infty}_t} \lesssim \|v\|_{X^{1-\epsilon,\frac{1}{2}+}_{\tau =0}} \, . $$ In the case $n=3$ we estimate by Sobolev and (\ref{Tao}) $$ \big| \int\int uvw dx dt \big| \le \|u\|_{L^2_xL^2_t} \|v\|_{L^4_x L^{\infty}_t} \|w\|_{L^4_xL^2_t} \lesssim \|u\|_{X^{0,0}_{\tau =0}} \|v\|_{X^{1-\epsilon,\frac{1}{2}+}_{|\tau|=|\xi|}} \|w\|_{X^{\frac{1}{4},\frac{1}{2}+}_{|\tau|=|\xi|}} $$ In order to obtain (\ref{52}) we estimate as follows: $$ \big| \int\int uvw dx dt \big| \le \|u\|_{L^2_x L^2_t} \|v\|_{L^{\tilde{q}}_x L^{\infty}_t} \|w\|_{L^p_x L^2_t}$$ with $\frac{1}{\tilde{q}} = \frac{2(\frac{1}{2}-\alpha-\epsilon)}{n-1}$ and $ \frac{1}{p}= \frac{n-1-4(\frac{1}{2}-\alpha-\epsilon)}{2(n-1)}$. Then we use the embedding $H^{s+\alpha-\epsilon}_x \subset L^{\tilde{q}}_x$. This is true, because one easily checks $\frac{2(\frac{1}{2}-\alpha - \epsilon)}{n-1} \ge \frac{1}{2} - \frac{s+\alpha-\epsilon}{n}$, using $\alpha \le \frac{1}{4}$ and $s> \frac{n}{2}-\frac{3}{4}$. We next show that $$\|w\|_{L^p_x L^2_t} \lesssim \|w\|_{X^{\frac{1}{2}-\alpha-\epsilon,\frac{1}{2}+}_{|\tau|=|\xi|}} \, . 
$$ This follows by interpolation between (\ref{Tao}) (with $k=0$) and the trivial identity $\|w\|_{L^2_x L^2_t} = \|u\|_{X^{0,0}_{|\tau|=|\xi|}} $ with interpolation parameter $\theta$ given by $\theta \frac{n-1}{2(n+1)} = \frac{1}{2}-\alpha-\epsilon$. One checks that $\theta < 1$ and $\frac{1}{p} = \frac{1-\theta}{2} + \theta \frac{n-1}{2(n+1)}$, so that (\ref{52}) follows. Concerning the last term on the right hand side of (\ref{N4'}) we use $\langle \xi_1 \rangle^{s-1+\alpha} \lesssim \langle \xi_2 \rangle^{s-1+\alpha} + \langle \xi_3 \rangle^{s-1+\alpha}$ so that we reduce to \begin{equation} \label{53} \big|\int\int uvw dx dt\big| \lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon}_{\tau=0}} \|v\|_{X^{1-\epsilon,0}_{\tau=0}} \|w\|_{X^{s-\frac{1}{2}+\epsilon,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \end{equation} and \begin{equation} \label{54} \big|\int\int uvw dx dt\big| \lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon}_{\tau=0}} \|v\|_{X^{s+\alpha-\epsilon,0}_{\tau=0}} \|w\|_{X^{\frac{1}{2}-\alpha+\epsilon,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \,. \end{equation} In order to obtain (\ref{53}) in the case $n \ge 4$ we estimate by H\"older's inequality $$ \big| \int\int uvw dx dt \big| \le \|u\|_{L^2_x L^{\frac{1}{\epsilon}}_t} \|v\|_{L^{\frac{2n}{n-2+2\epsilon}}_x L^2_t} \|w\|_{L^{\frac{n}{1-\epsilon}} L^{\frac{2}{1-2\epsilon}}_t} \, . $$ By Sobolev we have $$ \|v\|_{L^{\frac{2n}{n-2+2\epsilon}}_x L^2_t} \lesssim \|v\|_{X^{1-\epsilon,0}_{\tau=0}} \,,$$ and by (\ref{Tao}) we obtain for $\frac{1}{p} = \frac{1}{n}-O(\epsilon)$ : $$\|w\|_{L^p_x L^2_t} \lesssim \|w\|_{X^{k+\frac{n-1}{2(n+1)},\frac{1}{2}+}_{|\tau|=|\xi|}} \, , $$ where $$\frac{k}{n}=\frac{n-1}{2(n+1)}-\frac{1}{n}+ O(\epsilon) \, \Leftrightarrow \, k+\frac{n-1}{2(n+1)} = \frac{n}{2}- \frac{3}{2} + O(\epsilon) < s-\frac{3}{4} \, . $$ Interpolation with the standard Strichartz inequality (\ref{15}) for $q=r= \frac{2(n+1)}{n-1}$: $$ \|w\|_{L_x^{\frac{2(n+1)}{n-1}} L_t^{\frac{2(n+1)}{n-1}}} = \|w\|_{L_t^{\frac{2(n+1)}{n-1}} L_x^{\frac{2(n+1)}{n-1}}} \lesssim \|w\|_{X^{\frac{1}{2},\frac{1}{2}+}_{|\tau|=|\xi|}} $$ and interpolation parameter $\theta = (n+1)\epsilon$ gives $$\|w\|_{L^{\frac{n}{1-\epsilon}}_x L^{\frac{2}{1-2\epsilon}}_t} \lesssim \|w\|_{X^{s-\frac{3}{4},\frac{1}{2}+}_{|\tau|=|\xi|}} \, , $$ which is more than we need.\\ In order to obtain (\ref{53}) in the case $n = 3$ we estimate as follows: \begin{align*} \big| \int\int uvw dx dt \big| &\lesssim \|u\|_{L^2_x L^{\frac{1}{\epsilon}}_t} \|v\|_{L^{4}_x L^2_t} \|w\|_{L^4 L^{\frac{2}{1-2\epsilon}}_t} \\ &\lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon}_{\tau=0}} \|v\|_{X^{1-\epsilon,0}_{\tau=0}} \|w\|_{X^{\frac{1}{4}+\epsilon,\frac{1}{2}+\epsilon}_{|\tau|=|\xi|}} \, , \end{align*} which is sufficient under our assumption $ s > \frac{3}{4} $ . \\ In order to obtain (\ref{54}) we estimate $$ \big| \int\int uvw dx dt \big| \le \|u\|_{L^2_x L^{\frac{1}{\epsilon}}_t} \|v\|_{L^{p}_x L^2_t} \|w\|_{L^{q}_x L^{\frac{2}{1-2\epsilon}}_t} \, , $$ where $\frac{1}{p}= \frac{1}{2}-\frac{s+\alpha-\epsilon}{n}$ and $\frac{1}{q} = \frac{s+\alpha-\epsilon}{n}$, so that by Sobolev $$\|v\|_{L^p_x L^2_t} \lesssim \|v\|_{X^{s+\alpha-\epsilon,0}_{\tau =0}} \, . $$ One easily checks that $\frac{1}{\tilde{q}} := \frac{1}{q}- O(\epsilon) > \frac{n-1}{2(n+1)}$ under our assumptions on $s$ and $\alpha$. 
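Indeed, since $s > \frac{n}{2}-\frac{3}{4}$ and $\alpha \ge \frac{3}{16}$ we have $s+\alpha > \frac{n}{2}-\frac{9}{16}$, whereas
$$ \frac{n}{2}-\frac{n(n-1)}{2(n+1)} = \frac{n}{n+1} \ge \frac{3}{4} \quad (n \ge 3) \, , $$
so that $s+\alpha > \frac{n(n-1)}{2(n+1)}+\frac{3}{16}$ and thus $\frac{1}{\tilde{q}} = \frac{s+\alpha-\epsilon}{n}-O(\epsilon) > \frac{n-1}{2(n+1)}$ for $\epsilon$ sufficiently small.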
By (\ref{Tao}) we obtain $$\|w\|_{L^{\frac{2(n+1)}{n-1}}_x L^2_t} \lesssim \|w\|_{X^{\frac{n-1}{2(n+1)},\frac{1}{2}+}_{|\tau|=|\xi|}} \, , $$ which we interpolate with the trivial identity $\|w\|_{L^2_xL^2_t} = \|w\|_{X^{0,0}_{|\tau|=|\xi|}} $, where the interpolation parameter $\theta$ is chosen such that $$ \frac{1}{\tilde{q}} = \theta \frac{n-1}{2(n+1)} + (1-\theta) \frac{1}{2} \, \Leftrightarrow \, \theta = (n+1)(\frac{1}{2}-\frac{s+\alpha}{n}) + O(\epsilon) \, , $$ we obtain $$\|w\|_{L^{\tilde{q}}_x L^2_t} \lesssim \|w\|_{X^{k,\frac{1}{2}+}_{|\tau|=|\xi|}} $$ with $k=\theta \frac{n-1}{2(n+1)} = \frac{n-1}{2}(\frac{1}{2}-\frac{s+\alpha}{n})+O(\epsilon) $. An easy calculation now shows that $k < \frac{1}{2}-\alpha$, so that another interpolation with Strichartz' inequality $$ \|w\|_{L_x^{\frac{2(n+1)}{n-1}} L_t^{\frac{2(n+1)}{n-1}}} \lesssim \|w\|_{X^{\frac{1}{2},\frac{1}{2}+}_{|\tau|=|\xi|}} $$ and interpolation parameter $\theta = (n+1)\epsilon$ gives $$\|w\|_{L^q_x L^{\frac{2}{1-2\epsilon}}_t} \lesssim \|w\|_{X^{k+O(\epsilon),\frac{1}{2}+}_{|\tau|=|\xi|}} \lesssim \|w\|_{X^{\frac{1}{2}-\alpha+\epsilon,\frac{1}{2}+}_{|\tau|=|\xi|}} \, .$$ This completes the proof of (\ref{18}). \\ {\bf Proof of (\ref{16}).} If $\widehat{\phi}$ is supported in $||\tau|-|\xi|| \gtrsim |\xi|$ we obtain $$\|\phi\|_{X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau =0}} \lesssim \|\phi\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, . $$ which implies that (\ref{16}) follows from (\ref{18}), if $\widehat{\phi}_1$ or $\widehat{\phi}_2$ have this support property. So we may assume that both functions are supported in $||\tau|-|\xi|| \ll |\xi|$. This means that it suffices to show $$ \int_* m(\xi_1,\xi_2,\xi_3,\tau_1,\tau_2,\tau_3) \prod_{i=1}^3 \widehat{u}_i(\xi_i,\tau_i) d\xi d\tau \lesssim \prod_{i=1}^3 \|u_i\|_{L^2_{xt}} \, , $$ where $$m= \frac{|\tau_3|\chi_{||\tau_2|-|\xi_2|| \ll |\xi_2|} \chi_{||\tau_3|-|\xi_3|| \ll |\xi_3|}}{\langle \xi_1 \rangle^{1-\alpha-s} \langle \tau_1 \rangle^{\frac{1}{2}-\epsilon-} \langle \xi_2 \rangle^s \langle |\tau_2|-|\xi_2| \rangle^{\frac{3}{4}+\epsilon} \langle \xi_3 \rangle^s \langle |\tau_3|-|\xi_3|\rangle^{\frac{3}{4}+\epsilon}} \, . $$ Since $\langle \tau_3 \rangle \sim \langle \xi_3 \rangle$ , $\langle \tau_2 \rangle \sim \langle \xi_2 \rangle$ and $\tau_1+\tau_2+\tau_3=0$ we have \begin{equation} |\tau_3| \lesssim \langle \tau_1 \rangle^{\frac{1}{2}-\epsilon-} \langle \xi_3 \rangle^{\frac{1}{2}+\epsilon+} +\langle \xi_2 \rangle^{\frac{1}{2}-\epsilon-} \langle \xi_3 \rangle^{\frac{1}{2}+\epsilon+} , \end{equation} Concerning the first term on the right hand side we have to show $$\big|\int \int uvw dx dt\big| \lesssim \|u\|_{X^{1-\alpha-s,0}_{\tau=0}} \|v\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \|w\|_{X^{s-\frac{1}{2}-\epsilon-,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, .$$ We use \cite{FK} ,Thm. 1.1 , which shows $$\|vw\|_{L^2_t H^{s-\frac{3}{4}}_x} \lesssim \|v\|_{X^{s,\frac{1}{2}+}_{|\tau|=|\xi|}} \|w\|_{X^{s-\frac{1}{2}-\epsilon-,\frac{1}{2}+}_{|\tau|=|\xi|}} $$ under the assumption $s > \frac{n}{2}-\frac{3}{4}$. 
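To see why this bound suffices here, note that $X^{1-\alpha-s,0}_{\tau=0} = L^2_t H^{1-\alpha-s}_x$, so that by duality in $x$ and Cauchy-Schwarz in $t$ it is enough to control $\|vw\|_{L^2_t H^{s+\alpha-1}_x}$, which is dominated by the $L^2_t H^{s-\frac{3}{4}}_x$ norm above as soon as $s+\alpha-1 \le s-\frac{3}{4}$.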
This is enough, because $\alpha \le \frac{1}{4}$.\\ Concerning the second term on the right hand side we use $\langle \xi_1 \rangle^{s-1+\alpha} \lesssim \langle \xi_2 \rangle^{s-1+\alpha} + \langle \xi_3 \rangle^{s-1+\alpha}$ , so that we reduce to $$ \big|\int\int uvw dx dt\big| \lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon-}_{\tau=0}} \|v\|_{X^{\frac{1}{2}-\alpha+\epsilon+,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \|w\|_{X^{s-\frac{1}{2}-\epsilon-,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} $$ and $$ \big|\int\int uvw dx dt\big| \lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon}_{\tau=0}} \|v\|_{X^{s-\frac{1}{2}+\epsilon+,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \|w\|_{X^{\frac{1}{2}-\alpha-\epsilon-,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, . $$ We even show the slightly stronger estimate $$ \big|\int\int uvw dx dt\big| \lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon-}_{\tau=0}} \|v\|_{X^{\frac{1}{2}-\alpha-\epsilon-,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \|w\|_{X^{s-\frac{1}{2}-\epsilon-,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, , $$ which implies both. We start with the estimate \begin{align*} \big|\int \int uvw dx dt\big| \lesssim \|u\|_{L^2_x L^{\frac{1}{\epsilon}-}_t} \|v\|_{L^p_x L^{\frac{2}{1-2\epsilon}+}_t} \|w\|_{L^q_x L^2_t} \, , \end{align*} where $\frac{1}{p} + \frac{1}{q} = \frac{1}{2}$. Interpolating (\ref{Tao}) $$ \|v\|_{L^{\frac{2(n+1)}{n-1}}_x L^2_t} \lesssim \|v\|_{X^{\frac{n-1}{2(n+1)},\frac{1}{2}+}_{|\tau|=|\xi|}} $$ with the trivial identity $\|v\|_{L^2_x L^2_t} = \|v\|_{X^{0,0}_{|\tau|=|\xi|}}$ with interpolation parameter $\theta$ given by $\theta \frac{n-1}{2(n+1)} = \frac{1}{2}-\alpha-2\epsilon$ (where we remark that $\theta < 1$) this gives $$\|v\|_{L^{\tilde{p}}_x L^2_t} \lesssim \|v\|_{X^{\frac{1}{2}-\alpha-O(\epsilon),\frac{1}{2}+}_{|\tau|=|\xi|}} \, , $$ where $$\frac{1}{\tilde{p}} = \frac{n-1}{2(n+1)} \theta + (1-\theta)\frac{1}{2} = \frac{n-3}{2(n-1)} + \frac{2}{n-1}\alpha + O(\epsilon) \, . $$ Interpolating this estimate with Strichartz' estimate just slightly changing the parameters we obtain $$ \|v\|_{L^p_x L^{\frac{2}{1-2\epsilon}+}_x} \lesssim \|v\|_{X^{\frac{1}{2}-\alpha-\epsilon-,\frac{1}{2}+}_{|\tau|=|\xi|}} \, , $$ where $\frac{1}{p}= \frac{1}{\tilde{p}}+ O(\epsilon)$. Thus $\frac{1}{q} = \frac{1}{2}-\frac{1}{p}= \frac{1}{n-1} - \frac{2}{n-1}\alpha - O(\epsilon)$.\\ Next we apply (\ref{Tao}) to obtain $$ \|w\|_{L^q_x L^2_t} \lesssim \|w\|_{X^{k+\frac{n-1}{2(n+1)},\frac{1}{2}+}_{|\tau|=|\xi|}} $$ with $$ \frac{1}{q} = \frac{n-1}{2(n+1)} - \frac{k}{n} \, \Leftrightarrow \, k = n(\frac{n-1}{2(n+1)} - \frac{1}{n-1} + \frac{2}{n-1}\alpha) + O(\epsilon) \, . $$ In order to conclude the desired estimate $$ \|w\|_{L^q_x L^2_t} \lesssim \|w\|_{X^{s-\frac{1}{2}-\epsilon-,\frac{1}{2}+}_{|\tau|=|\xi|}} $$ we need \begin{equation} \label{*****} s \ge k + \frac{n}{n+1} + O(\epsilon) = \frac{n}{2} - \frac{n}{n-1} + \frac{2n}{n-1}\alpha + O(\epsilon) \, . \end{equation} This means that in order to obtain a minimal lower bound for $s$ one should also minimize $\alpha$. On the other hand in the proof of (\ref{30}) below we have to maximize $\alpha$. Comparing condition (\ref{*****}) with (\ref{****}) below we optimize $\alpha$ by choosing \begin{equation} \label{*} \frac{n}{2} - \frac{n}{n-1} + \frac{2n}{n-1}\alpha = \frac{n}{2} - \frac{1}{4} -2\alpha \, \Leftrightarrow \, \alpha = \frac{3n+1}{8(2n-1)} \, , \end{equation} which leads to our choice of $\alpha$. 
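For completeness, the elementary computation behind (\ref{*}) reads
$$ \frac{n}{2}-\frac{n}{n-1}+\frac{2n}{n-1}\alpha = \frac{n}{2}-\frac{1}{4}-2\alpha \, \Leftrightarrow \, \frac{2(2n-1)}{n-1}\,\alpha = \frac{n}{n-1}-\frac{1}{4} = \frac{3n+1}{4(n-1)} \, \Leftrightarrow \, \alpha = \frac{3n+1}{8(2n-1)} \, . $$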
Thus the condition on $s$ reduces to $$ s \ge \frac{n}{2} - \frac{1}{4} -2\alpha +O(\epsilon) = \frac{n}{2} - \frac{5}{8} - \frac{5}{8(2n-1)} + O(\epsilon) \, . $$ This is exactly our assumption on $s$.\\ {\bf Proof of (\ref{19}):} Sobolev's multiplication law shows the estimate $$ \| |\nabla|^{-1} (A_1 \partial_t A_2)\|_{C^0(H^{s-1})} \lesssim \|A_1\|_{C^0(H^s)} \|\partial_t A_2\|_{C^0(H^{s-1})}$$ for $s > \frac{n}{2}-1$. Use now $$ A=A^{cf} + \sum_{\pm} A^{df}_{\pm} \quad , \quad \partial_t A = \partial_t A^{cf} + i \langle \nabla \rangle(A_+^{df} -A_-^{df}) \, $$ from which the estimate (\ref{19}) easily follows. \\ {\bf Proof of (\ref{29}):} This a generalization of the proof given by Tao (\cite{T1}) in dimension $n=3$. We have to show $$ \int_* m(\xi,\tau) \prod_{i=1}^3 \widehat{u}_i(\xi_i,\tau_i) d\xi d\tau \lesssim \prod_{i=1}^3 \|u_i\|_{L^2_{xt}} \, , $$ where $\xi=(\xi_1,\xi_2,\xi_3) \, , \,\tau=(\tau_1,\tau_2,\tau_3)$ , * denotes integration over $ \sum_{i=1}^3 \xi_i = \sum_{i=1}^3 \tau_i = 0$ , and $$ m = \frac{(|\xi_2|+|\xi_3|) \langle \xi_1 \rangle^{s-1} \langle |\tau_1|-|\xi_1|) \rangle^{-\frac{1}{4}+2\epsilon}}{\langle \xi_2 \rangle^s \langle |\tau_2| - |\xi_2|\rangle^{\frac{3}{4}+\epsilon} \langle \xi_3 \rangle^{s+\alpha}\langle \tau_3 \rangle^{\frac{1}{2}+\epsilon}} \, .$$ Case 1: $|\xi_2| \le |\xi_1|$ ($\Rightarrow$ $|\xi_2|+|\xi_3| \lesssim |\xi_1|$). \\ By two applications of the averaging principle (\cite{T}, Prop. 5.1) we may replace $m$ by $$ m' = \frac{ \langle \xi_1 \rangle^s \chi_{||\tau_2|-|\xi_2||\sim 1} \chi_{|\tau_3| \sim 1}}{ \langle \xi_2 \rangle^s \langle \xi_3 \rangle^{s+\alpha}} \, . $$ Let now $\tau_2$ be restricted to the region $\tau_2 =T + O(1)$ for some integer $T$. Then $\tau_1$ is restricted to $\tau_1 = -T + O(1)$, because $\tau_1 + \tau_2 + \tau_3 =0$, and $\xi_2$ is restricted to $|\xi_2| = |T| + O(1)$. The $\tau_1$-regions are essentially disjoint for $T \in {\mathbb Z}$ and similarly the $\tau_2$-regions. Thus by Schur's test (\cite{T}, Lemma 3.11) we only have to show \begin{align*} &\sup_{T \in {\mathbb Z}} \int_* \frac{\langle \xi_1 \rangle^s \chi_{\tau_1=-T+O(1)} \chi_{\tau_2=T+O(1)} \chi_{|\tau_3|\sim 1} \chi_{|\xi_2|=|T|+O(1)}}{\langle \xi_2 \rangle^s \langle \xi_3 \rangle^{s+\alpha}} \prod_{i=1} \widehat{u}_i(\xi_i,\tau_i) d\xi d\tau \\ & \hspace{25em} \lesssim \prod_{i=1}^3 \|u_i\|_{L^2_{xt}} \, . \end{align*} The $\tau$-behaviour of the integral is now trivial, thus we reduce to \begin{equation} \label{55} \sup_{T \in {\mathbb N}} \int_{\sum_{i=1}^3 \xi_i =0} \frac{ \langle \xi_1 \rangle^s \chi_{|\xi_2|=|T|+O(1)}}{ \langle T \rangle^s \langle \xi_3 \rangle^{s+\alpha}} \widehat{f}_1(\xi_1)\widehat{f}_2(\xi_2)\widehat{f}_3(\xi_3)d\xi \lesssim \prod_{i=1}^3 \|f_i\|_{L^2_x} \, . \end{equation} Assuming now $|\xi_3| \le |\xi_1|$ (the other case being simpler) it only remains to consider the following two cases: \\ Case 1.1: $|\xi_1| \sim |\xi_3| \gtrsim T$. We obtain in this case \begin{align*} L.H.S. 
\, of \, (\ref{55}) &\lesssim \sup_{T \in{\mathbb N}} \frac{1}{T^{s+\alpha}} \|f_1\|_{L^2} \|f_3\|_{L^2} \| {\mathcal F}^{-1}(\chi_{|\xi|=T+O(1)} \widehat{f}_2)\|_{L^{\infty}({\mathbb R}^n)} \\ &\lesssim \sup_{T \in{\mathbb N}} \frac{1}{ T^{s+\alpha}} \|f_1\|_{L^2} \|f_3\|_{L^2} \| \chi_{|\xi|=T+O(1)} \widehat{f}_2\|_{L^1({\mathbb R}^n)} \\ &\lesssim \hspace{-0.1em}\sup_{T \in {\mathbb N}} \frac{T^{\frac{n-1}{2}}}{T^{s+\alpha}} \prod_{i=1}^3 \|f_i\|_{L^2} \lesssim\hspace{-0.1em} \prod_{i=1}^3 \|f_i\|_{L^2} \, , \end{align*} because one easily calculates that $2(s+\alpha) > n-1$ under our choice of $s$ and $\alpha$. Case 1.2: $|\xi_1| \sim T \gtrsim |\xi_3|$. An elementary calculation shows that \begin{align*} L.H.S. \, of \, (\ref{55}) \lesssim \sup_{T \in{\mathbb N}} \| \chi_{|\xi|=T+O(1)} \ast \langle \xi \rangle^{-2(s+\alpha)}\|^{\frac{1}{2}}_{L^{\infty}(\mathbb{R}^{n-1})} \prod_{i=1}^3 \|f_i\|_{L^2_x} \lesssim \prod_{i=1}^3 \|f_i\|_{L^2_x} \, , \end{align*} using as in case 1.1 that $2(s+\alpha) > n-1$ , so that the desired estimate follows.\\ Case 2. $|\xi_1| \le |\xi_2|$ ($\Rightarrow$ $|\xi_2|+|\xi_3| \lesssim |\xi_2|$). \\ Exactly as in case 1 we reduce to $$ \sup_{T \in {\mathbb N}} \int_{\sum_{i=1}^3 \xi_i =0} \frac{ \langle \xi_1 \rangle^{s-1} \chi_{|\xi_2|=|T|+O(1)}}{ \langle T \rangle^{s-1} \langle \xi_3 \rangle^{s+\alpha}} \widehat{f}_1(\xi_1)\widehat{f}_2(\xi_2)\widehat{f}_2(\xi_3)d\xi \lesssim \prod_{i=1}^3 \|f_i\|_{L^2_x} \, . $$ This can be treated as in case 1.\\ {\bf Proof of (\ref{30}):} By Sobolev's multiplication law we obtain $$ |\int \int fgh dx dt| \lesssim \|f\|_{X^{s+\alpha,\frac{1}{2}+\epsilon}_{\tau=0}} \|g\|_{X^{s+\alpha -1,\frac{1}{2}+\epsilon}_{\tau=0}} \|h\|_{X^{-s+\frac{5}{4}-2\epsilon,-\frac{1}{2}}_{\tau=0}} \, , $$ where we need that \begin{equation} \label{****} s+2\alpha+\frac{1}{4}-2\epsilon > \frac{n}{2} \, , \end{equation} which holds under our assumptions on $s$ and $\alpha$. Using the elementary estimate $$ \frac{\langle \xi \rangle^{\frac{1}{4}-2\epsilon}}{\langle \tau \rangle^{\frac{1}{4}-2\epsilon}} \lesssim \langle |\tau|-|\xi| \rangle^{\frac{1}{4}-2\epsilon}$$ we obtain $$\|h\|_{X^{-s+\frac{5}{4}-2\epsilon,-\frac{1}{2}}_{\tau=0}} \lesssim \|h\|_{[X^{1-s,\frac{1}{4}-2\epsilon}_{|\tau|=|\xi|}} \,$$ which implies (\ref{30}). \\ {\bf Proof of (\ref{31}):} We use the following consequences of Sobolev's embedding and Strichartz' inequality: \begin{align} \|A\|_{L^{\infty}_t H^{s+\alpha}_x} & \lesssim \|A\|_{X^{s+\alpha,\frac{1}{2}+}_{\tau=0}} \, , \\ \label{56} \|A\|_{L^{4-}_t H^{1-s}_x} & \lesssim \|A\|_{X^{1-s,\frac{1}{4}-}_{|\tau|=|\xi|}} \\ \label{Str} \|A\|_{L^4_t H^{s-\frac{n+1}{4(n-1)},\frac{2(n-1)}{n-2}}_x} & \lesssim \|A\|_{X^{s,\frac{1}{2}+}_{|\tau|=|\xi|}} \, , \end{align} where we applied (\ref{15}) with $q=4$ , $r= \frac{2(n-1)}{n-2}$ , $\mu = \frac{n+1}{4(n-1)}$ and also \begin{equation} \label{57} \|A\|_{L^{4+}_t H^{s-\frac{n+1}{4(n-1)}+,\frac{2(n-1)}{n-2}-}_x} \lesssim \|A\|_{X^{s,\frac{1}{2}+}_{|\tau|=|\xi|}} \, . \end{equation} Assume now $s \ge 1$. Taking the dual of (\ref{56}) we obtain $$ \|A_1 A_2 A_3\|_{X^{s-1,-\frac{1}{4}+}_{|\tau|=|\xi|}} \lesssim \|A_1 A_2 A_3\|_{L^{\frac{4}{3}+}_t H^{s-1}_x} \, . $$ This can be estimated by $$ \|A_1\|_{L^{4+}_t H^{s-1,p-}_x} \|A_2\|_{L^4_t L^{\frac{2n}{1+\alpha}+}_x} \|A_3\|_{L^4_t L^{\frac{2n}{1+\alpha}+}_x} $$ where $\frac{1}{p} = \frac{1}{2} - \frac{1+\alpha}{n}$ , and similar terms with reversed roles of $A_j$ . 
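In the following we use the Sobolev embedding $H^{a,2}_x \subset H^{b,r}_x$, which holds for $2 \le r < \infty$ provided $a \ge b$ and $a-b \ge n(\frac{1}{2}-\frac{1}{r})$; for the first embedding below, with $a=s+\alpha$, $b=s-1$ and $\frac{1}{r}=\frac{1}{p}=\frac{1}{2}-\frac{1+\alpha}{n}$, this condition is fulfilled with equality, and with strict inequality for the slightly smaller exponent $p-$.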
Now by Sobolev we have $ H^{s+\alpha,2}_x \subset H^{s-1,p-}_x$ , so that $$\|A_1\|_{L^{4+}_t H^{s-1,p-}_x} \lesssim \|A_1\|_{X^{s+\alpha,\frac{1}{2}+}_{\tau=0}} \, . $$ Next we obtain $H^{s-\frac{n+1}{4(n-1)}+,\frac{2(n-1)}{n-2}-}_x \subset H^{s-1,p-}_x$ , because $$\frac{1}{p} > \frac{n-2}{2(n-1)} - \frac{1}{n}(1-\frac{n+1}{4(n-1)}) \, \Leftrightarrow \frac{3n+1}{4n(n-1)} > \frac{\alpha}{n} \, , $$ which holds, because $\frac{1}{4} \ge \alpha = \frac{3n+1}{8(2n-1)} \ge \frac{3}{16}$ . This implies by (\ref{57}) $$\|A_1\|_{L^{4+}_t H^{s-1,p-}_x} \lesssim \|A_1\|_{X^{s,\frac{1}{2}+}_{|\tau|=|\xi|}} \, . $$ Next we have $H^{s+\alpha,2}_x \subset L^{\frac{2n}{1+\alpha}+}_x $ , because the inequality $$\frac{1+\alpha}{2n} > \frac{1}{2} -\frac{s+\alpha}{n}$$ holds by $s> \frac{n}{2} - \frac{3}{4}$ and $\alpha \ge \frac{3}{16}$ . This implies $$\|A_j\|_{L^4_t L^{\frac{2n}{1+\alpha}+}_x} \lesssim \|A_j\|_{L^4_t H^{s+\alpha,2}_x }\lesssim \|A_j\|_{X^{s+\alpha,\frac{1}{2}+}_{\tau=0}} \, . $$ Finally by Sobolev $H^{s-\frac{n+1}{4(n-1)},\frac{2(n-1)}{n-2}}_x \subset L^{\frac{2n}{1+\alpha}+}_x$ , because one easily calculates that $ \frac{1+\alpha}{2n} > \frac{n-2}{2(n-1)} - \frac{1}{n}(s-\frac{n+1}{4(n-1)}) $ using $s \ge \frac{n}{2}-\frac{3}{4}$ and $\alpha \ge \frac{3}{16}$. Thus by (\ref{Str}) $$\|A_j\|_{L^4_t L^{\frac{2n}{1+\alpha}+}_x} \lesssim \|A_j\|_{L^4_t H^{s-\frac{n+1}{4(n-1)},\frac{2(n-1)}{n-2}}_x} \lesssim \|A_j\|_{X^{s,\frac{1}{2}+}_{|\tau| = |\xi|}} \, . $$ This completes the proof of (\ref{31}) for $s \ge 1$. It remains to consider the case $1>s> \frac{3}{4}$ in dimension $n=3$ and $\alpha = \frac{1}{4}$. This case is much easier. We only use $$ \|A\|_{L^{4-}_t L^2_x} \lesssim \|A\|_{X^{0,\frac{1}{4}-}_{|\tau|=|\xi|}} \lesssim \|A\|_{X^{1-s,\frac{1}{4}-}_{|\tau|=|\xi|}} \, , $$ so that by duality $$ \|A_1 A_2 A_3\|_{X^{s-1,-\frac{1}{4}+}_{|\tau|=|\xi|}} \lesssim \|A_1 A_2 A_3\|_{L^{\frac{4}{3}+}_t L^2_x} \lesssim \prod_{I=1}^3 \|A_i\|_{L^{4+}_t L^6_x} \, . $$ Now by Sobolev for $s>\frac{3}{4}$ we obtain $$ \|A_i\|_{L^{4+}_t L^6_x} \lesssim \|A_i\|_{L^{4+}_t H^1_x} \lesssim \|A_i\|_{X^{s+\frac{1}{4},\frac{1}{2}+}_{\tau=0}} \, , $$ and using Sobolev's embedding and Strichartz' inequality (\ref{15}) gives $$ \|A_i\|_{L^{4+}_t L^6_x} \lesssim \|A_i\|_{L^{4+}_t H^{\frac{1}{4}+,4-}_x} \lesssim \|A_i\|_{X^{\frac{3}{4}+,\frac{1}{2}+}_{|\tau|=|\xi|}} \lesssim \|A_i\|_{X^{s,\frac{1}{2}+}_{|\tau|=|\xi|}} \, . $$ \\ {\bf Proof of (\ref{32}):} The case $N = 3$ reduces to (\ref{31}). Next we consider the case $N=4$ in dimension $n=3$. We may assume $s\le 1$, because the general case can be reduced to this case easily. This follows from Prop. \ref{Prop.2} as follows: $$ \| |\phi|^3 \phi\|_{X^{s-1,-\frac{1}{4}+}_{|\tau|=|\xi|}} \lesssim \| |\phi|^3 \phi\|_{L^{\frac{4}{3}+}_t H^{s-1}_x} \lesssim \| |\phi|^3 \phi\|_{L^{\frac{4}{3}+}_t L^p_x} \lesssim \|\phi\|_{L^{\frac{16}{3}+}_t L^{4p}_x}^4 \, ,$$ where $\frac{1}{p} = \frac{1}{2}-\frac{s-1}{3}$ . 
We now use Strichartz estimate (\ref{15}) with $q=\frac{16}{3}+$, $r=\frac{16}{5}-$, $\mu=\frac{3}{8}+$ to conclude $$\| |\phi|^3 \phi\|_{X^{s-1,-\frac{1}{4}+}_{|\tau|=|\xi|}} \lesssim \|\phi\|_{L^{\frac{16}{3}+}_t H^{l,\frac{16}{5}-}_x}^4 \lesssim \|\phi\|_{X^{l+\mu,\frac{1}{2}+}_{|\tau|=|\xi|}}^4 \lesssim \|\phi\|_{X^{s,\frac{1}{2}+}_{|\tau|=|\xi|}}^4 \, , $$ provided $H^{l,\frac{16}{5}-}_x \subset L^{4p}_x$, which is fulfilled, if $l=\frac{5+4s}{16}+$, so that $l+\mu \le s$, if $\frac{5+4s}{16} + \frac{3}{8} < s \, \Leftrightarrow \, s > \frac{11}{12}$, which is equivalent to our assumption $N < 1+\frac{7}{4(\frac{n}{2}-s)}$. The case $N=2$ for $n=3$ is much easier handled by the standard Strichartz inequality: $$ \| |\phi| \phi \|_{H^{s-1,-\frac{1}{4}+}_{|\tau|=|\xi|}} \lesssim \|\phi\|_{L^4_t L^4_x}^2 \lesssim \|\phi\|_{X^{\frac{1}{2},\frac{1}{2}+}_{|\tau|=|\xi|}} \, . $$ In all the other cases under our assumptions we have $s\ge 1$. We have $$ \| |\phi|^{N-1} \phi\|_{X^{s-1,-\frac{1}{4}+}_{|\tau|=|\xi|}} \lesssim \| |\phi|^{N-1} \phi\|_{L^{\frac{4}{3}+}_t H^{s-1}_x} \lesssim \|\phi\|_{L^{\frac{4}{3}N+}_t L^{\tilde{q}}_x}^{N-1} \|\phi\|_{L^{\frac{4}{3}N+}_t H^{s-1,p}_x} \, .$$ Here $\frac{1}{p} + \frac{N-1}{\tilde{q}} = \frac{1}{2}$. We obtain $H^{s-1,p} \subset L^{\tilde{q}}$ , if $\frac{1}{\tilde{q}} = \frac{1}{p} - \frac{s-1}{n}$ , so that $$ \frac{1}{p} = \frac{1}{N}\Big(\frac{1}{2}+\frac{(N-1)(s-1)}{n}\Big) \, , $$ thus $$ \| |\phi|^{N-1} \phi\|_{X^{s-1,-\frac{1}{4}+}_{|\tau|=|\xi|}} \lesssim \|\phi\|_{L^{\frac{4}{3}N+}_t H^{s-1,p}_x}^N \, . $$ The case $N=2$ is again easy. In this case we have $ \frac{1}{p} = \frac{s-1}{2n} + \frac{1}{4}$, which implies by Sobolev $H^{s,2} \subset H^{s-1,p}$ under the condition $\frac{1}{p} \ge \frac{1}{2}-\frac{1}{n}$, which is easily seen to be equivalent to $s \ge \frac{n}{2} -1$, which certainly holds, so that we obtain the desired bound $\|\phi\|_{X^{s,\frac{1}{2}+}_{|\tau|=|\xi|}}^2$. It remains to consider $N\ge 4$. We use Strichartz' estimate (\ref{15}) with $q=\frac{4}{3}N+$, $\frac{1}{r} = \frac{1}{2} - \frac{3}{2N(n-1)} +$ , $\mu = n(\frac{1}{2}-\frac{1}{r}) - \frac{1}{q} = \frac{3(n+1)}{4N(n-1)} +$ to conclude \begin{equation} \label{60} \| |\phi|^{N-1} \phi\|_{X^{s-1,-\frac{1}{4}+}_{|\tau|=|\xi|}} \lesssim \|\phi\|_{L^{\frac{4}{3}N+}_t H^{l,r}_x}^N \lesssim \|\phi\|_{X^{l+\mu,\frac{1}{2}+}_{|\tau|=|\xi|}}^N \lesssim \|\phi\|_{X^{s,\frac{1}{2}+}_{|\tau|=|\xi|}}^N \, , \end{equation} if we $H^{l,r} \subset H^{s-1,p}$ and $l+\mu \le s$. By Sobolev we need $$ \frac{1}{r} \ge \frac{1}{p} \ge \frac{1}{r}-\frac{l-s+1}{n} \, . $$ We calculate \begin{align} \nonumber \frac{1}{r} \ge \frac{1}{p} \, &\Leftrightarrow \, \frac{1}{2} - \frac{3}{2N(n-1)} > \frac{1}{N}\Big(\frac{1}{2}+\frac{(N-1)(s-1)}{n}\Big) \\ \label{61} &\Leftrightarrow \, s < \frac{n}{2} + 1 - \frac{3n}{(N-1)2(n-1)} \, . \end{align} In this case we can choose $l=\frac{n}{r}-\frac{n}{p}+s-1$ , so that one easily calculates \begin{align*} &l+\mu \le s \\ &\, \Leftrightarrow n\Big(\frac{1}{2}-\frac{3}{2N(n-1)}\Big) - \frac{n}{N}\Big(\frac{1}{2}+\frac{(N-1)(s-1)}{n}\Big)+s-1 + \frac{3(n+1)}{4N(n-1)} < s \\ &\Leftrightarrow \, s > \frac{n}{2}-\frac{7}{4(N-1)} \, \Leftrightarrow \, \Big( N < 1+\frac{7}{4(\frac{n}{2}-s)} \,\, {\mbox if}\, s < \frac{n}{2} \,\, {\mbox and} \, N < \infty \,\, {\mbox if} \, s \ge \frac{n}{2} \Big) \, . \end{align*} This is exactly our assumption on $s$ and $N$. 
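Spelled out, for $N \ge 2$ and $s < \frac{n}{2}$ the last equivalence is simply
$$ s > \frac{n}{2}-\frac{7}{4(N-1)} \, \Leftrightarrow \, N-1 < \frac{7}{4(\frac{n}{2}-s)} \, \Leftrightarrow \, N < 1+\frac{7}{4(\frac{n}{2}-s)} \, , $$
while for $s \ge \frac{n}{2}$ the left hand side holds for every finite $N$; for $n=3$ and $N=4$ it gives $s > \frac{3}{2}-\frac{7}{12} = \frac{11}{12}$, in accordance with the condition obtained for the quartic term above.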
This lower bound on $s$ and also the lower bound on $s$ in Prop. \ref{Prop} are compatible with the upper bound (\ref{61}) in our case $N \ge 4$ and $n \ge 3$, as an easy calculation shows. As always the desired estimate (\ref{60}) for larger $s$ can be reduced to this case so that (\ref{61}) is redundant. Thus (\ref{60}) is proven. This completes the proof of (\ref{32}) and also the proof of Prop. \ref{Prop} and Prop. \ref{Prop'}. \end{proof}
\section{Removal of the assumption $A^{cf}(0)=0$}
Applying an idea of Keel and Tao \cite{T1} we use the gauge invariance of the Yang-Mills-Higgs system to show that the condition $A^{cf}(0)=0$, which had to be assumed in Prop. \ref{Prop}, can be removed. A completely analogous result holds for the Yang-Mills equation and Prop. \ref{Prop'}.
\begin{lemma} \label{Lemma} Let $ n\ge 3$ , $s>\frac{n}{2}-\frac{3}{4}$ and $0 < \epsilon \ll 1$. Assume $(A,\phi)\in \big( C^0([0,1],H^s) \cap C^1([0,1],H^{s-1}) \big) \times \big( C^0([0,1],H^s) \cap C^1([0,1],H^{s-1}) \big)$ , $A_0 = 0$ and \begin{equation} \label{***} \|A^{df}(0)\|_{H^s} + \|(\partial_t A)^{df}(0)\|_{H^{s-1}} + \|A^{cf}(0)\|_{H^s} + \|\phi(0)\|_{H^s} + \|(\partial_t \phi)(0)\|_{H^{s-1}} \le \epsilon \, . \end{equation} Then there exists a gauge transformation $T$ preserving the temporal gauge such that $(TA)^{cf}(0) = 0$ and \begin{align} \label{T1} \|(TA)^{df}(0)\|_{H^s} + \|(\partial_t TA)^{df}(0)\|_{H^{s-1}} + \|(T \phi)(0)\|_{H^s} + \|(\partial_t T\phi)(0)\|_{H^{s-1}} \lesssim \epsilon \, . \end{align} $T$ also preserves the regularity, i.e. $TA\in C^0([0,1],H^s) \cap C^1([0,1],H^{s-1})$ , $T\phi \in C^0([0,1],H^s) \cap C^1([0,1],H^{s-1})$. If $A \in X^{s,\frac{3}{4}+}_+[0,1] + X^{s,\frac{3}{4}+}_-[0,1] + X^{s+\alpha,\frac{1}{2}+}_{\tau=0}[0,1]$, where $\alpha = \frac{3n+1}{8(2n-1)}$ , $\partial_t A^{cf} \in C^0([0,1],H^{s-1})$ and $\phi\in X^{s,\frac{3}{4}+}_+[0,1] + X^{s,\frac{3}{4}+}_-[0,1]$ , then $TA$ , $T\phi$ belong to the same spaces. Its inverse $T^{-1}$ has the same properties. \end{lemma}
In the proof we frequently use \begin{lemma} Let $ n \ge 3$ , $s > \frac{n}{2}-1$ and define $\|f\|_X := \|\nabla f\|_{H^s}$ . The following estimates hold: \begin{align*} \| fg \|_X &\le c_1 \|f\|_X \|g\|_X \\ \| fg \|_{H^s} &\le c_1 \|f\|_X \|g\|_{H^s} \\ \| fg \|_{H^{s-1}} &\le c_1 \|f\|_X \|g\|_{H^{s-1}} \, . \end{align*} \end{lemma} \begin{proof} This follows essentially by Sobolev's multiplication law, where we remark that the singularity of $|\nabla|^{-1}$ is harmless in dimension $n \ge 3$. \end{proof}
\begin{proof}[Proof of Lemma \ref{Lemma}] This is achieved by an iteration argument. Assume that one has besides (\ref{***}): \begin{equation} \label{42} \|A^{cf}(0)\|_{H^s} \le \delta \end{equation} for some $0<\delta\le \epsilon$. In the first step we set $\delta=\epsilon$, so that the condition is fulfilled; in the next steps $\delta = \epsilon^{\frac{3}{2}}$ , $\delta=\epsilon^2$ etc. We use the Hodge decomposition of $A$: $$A=A^{cf}+A^{df} = -(-\Delta)^{-1} \nabla \,div \,A + A^{df}\, . $$ We define $V_1 := - (-\Delta)^{-1}div \,A(0)$ , so that $\nabla V_1 = A^{cf}(0)$. Thus $$ \|V_1\|_X := \| \nabla V_1\|_{H^s} = \|A^{cf}(0)\|_{H^s} \le \delta \, . $$ We define $U_1 := \exp(V_1)$ and consider the gauge transformation $T_1$ with \begin{align*} A_0 & \longmapsto U_1 A_0 U_1^{-1} - (\partial_t U_1) U_1^{-1} \\ A & \longmapsto U_1 A U_1^{-1} - (\nabla U_1) U_1^{-1} \\ \phi & \longmapsto U_1 \phi U_1^{-1} \, .
\end{align*} Then $T_1$ preserves the temporal gauge, because $U_1$ is independent of $t$, a property, which is true for all the gauge transformations in the sequel as well. Moreover \begin{align} \nonumber (T_1 A)(0) & = \exp V_1 A(0) \exp (-V_1) - \nabla(\exp V_1) \exp(-V_1) \\ \nonumber & = A^{df}(0) + (\exp V_1 A^{df}(0) \exp(-V_1) - A^{df}(0)) \\ \label{40} & \hspace{1em} + (\exp V_1 A^{cf}(0) - \nabla(\exp V_1)) \exp(-V_1) \end{align} and thus \begin{align} \nonumber (T_1 A)^{cf}(0) & =-(-\Delta)^{-1} \nabla div(\exp V_1 A^{df}(0) \exp(-V_1) - A^{df}(0)) \\ \label{53'} & \hspace{1em}-(-\Delta)^{-1} \nabla div ((\exp V_1 A^{cf}(0) - \nabla(\exp V_1)) \exp(-V_1)) \, . \end{align} Using a Taylor expansion and Lemma \ref{Lemma} we obtain \begin{align*} & \| \exp V_1 A^{df}(0) \exp(-V_1) - A^{df}(0)\|_{H^s} \\ &\lesssim \|(\exp V_1-I) A^{df}(0) (\exp(-V_1)-I)\|_{H^s} +\|A^{df}(0) (\exp(-V_1)-I)\|_{H^s} \\ & \hspace{1em} + \|(\exp V_1-I)A^{df}(0) \|_{H^s} \\ & \lesssim (\|\exp V_1-I\|_X +1)\|A^{df}(0)\|_{H^s} \|\exp(-V_1)-I\|_X \\ & \hspace{1em}+ \|\exp V_1-I\|_X \|A^{df}(0)\|_{H^s} \\ & \lesssim(1+\delta) \epsilon \delta \\ & \le \frac{c_0}{2} \epsilon \delta \, . \end{align*} We used the estimate \begin{align*} \| \exp V_1 - I\|_X & \le \sum_{k=1}^{\infty} \frac{\|V_1^k\|_X}{k !} \le \sum_{k=1}^{\infty} \frac{(c_1 \|V_1\|_X)^k}{c_1 k !} = c_1^{-1}(\exp(c_1 \|V_1\|_X) - 1) \\ &\le c_1^{-1}(\exp(c_1 \delta) -1) \lesssim \delta \, . \end{align*} Furthermore we obtain \begin{align*} &\|\exp V_1 A^{cf}(0) - \nabla(\exp V_1)) \|_{H^s} = \| \sum_{k=0}^{\infty} \frac{V_1^k}{k !} \nabla V_1 - \sum_{k=1}^{\infty} \frac{\nabla(V_1^k)}{k!} \|_{H^s} \\ & =\| \sum_{k=1}^{\infty} \frac{V_1^k}{k!}\nabla V_1 - \sum_{k=2}^{\infty} \frac{\nabla(V_1^k)}{k!} \|_{H^s} \lesssim \sum_{k=1}^{\infty} \frac{\|V_1^k\|_X}{k!} \|\nabla V_1\|_{H^s} + \sum_{k=2}^{\infty} \frac{\|\nabla(V_1^k)\|_{H^s}}{k!} \\ & \lesssim \sum_{k=1}^{\infty}\frac{c_1^k \|V_1\|_X^k}{ k!} \|\nabla V_1\|_{H^s} + \sum_{k=2}^{\infty} \frac{c_1^k \|V_1\|_X^k}{k!} \\ & \lesssim (\exp(c_1 \|V_1\|_X) - 1) \|\nabla V_1\|_{H^s}+( \exp (c_1 \|V_1\|_X) - 1 - c_1 \|V_1\|_X) \\ & \le \frac{c_0}{2} \delta^2 \, . \end{align*} These estimates imply bv (\ref{53'}) in the case $\delta = \epsilon \ll 1$ : \begin{equation} \label{54'} \|(T_1 A)^{cf}(0)\|_{H^s} \lesssim c_0 \epsilon \delta = c_0 \epsilon^2 \le \frac{1}{2} \epsilon^{\frac{3}{2}} \, . \end{equation} Moreover by (\ref{40}) \begin{equation} \label{**} \|(T_1 A)(0)\|_{H^s} \le \|A^{df}(0)\|_{H^s} + c_0 \epsilon \delta \le \epsilon + \frac{1}{2} \epsilon^{\frac{3}{2}} \le 2 \epsilon \, , \end{equation} and combining this with (\ref{54'}) : \begin{equation} \label{56'} \|(T_1 A)^{df}(0)\|_{H^s} \le \epsilon + \epsilon^{\frac{3}{2}} \le 2 \epsilon \, . 
\end{equation} Similarly we also obtain by Lemma \ref{Lemma} \begin{align*} \|\partial_t(T_1 A)^{cf}(0)\|_{H^{s-1}} &\lesssim c_0 \epsilon \delta = c_0 \epsilon^2 \le \frac{1}{2} \epsilon^{\frac{3}{2}} \\ \|\partial_t (T_1 A)(0)\|_{H^{s-1}} & \le \epsilon + \frac{1}{2}\epsilon^{\frac{3}{2}} \\ \|\partial_t(T_1 A)^{df}(0)\|_{H^{s-1}} &\le \epsilon + \epsilon^{\frac{3}{2}} \le 2 \epsilon \, , \end{align*} and $$\|\partial_t(T_1 \phi)(0)\|_{H^s} + \|(\partial_t T_1 \phi)(0)\|_{H^{s-1}} \le \epsilon + \frac{1}{2} \epsilon^{\frac{3}{2}} \, .$$ We have now shown that (\ref{***}) with $\epsilon$ replaced by $\epsilon+\frac{1}{2}\epsilon^{\frac{3}{2}}$ and (\ref{42}) with $\delta = \frac{1}{2}\epsilon^{\frac{3}{2}}$ are fulfilled with $A$ and $\phi$ replaced by $T_1 A$ and $T_1 \phi$. In a next step we define $V_2 := -(-\Delta)^{-1} div (T_1 A)(0)$ so that $\nabla V_2 = (T_1A)^{cf}(0)$ and thus by (\ref{54'}) \begin{equation} \label{41} \|V_2\|_X = \|\nabla V_2\|_{H^s} \le \epsilon^{\frac{3}{2}} \,. \end{equation} We define the next gauge transform $T_2$ by \begin{align*} A & \longmapsto U_2 T_1A U_2^{-1} - \nabla U_2 U_2^{-1} \\ \phi & \longmapsto U_2 T_1 \phi U_2^{-1} \end{align*} with $U_2 = \exp V_2$. \\ Calculating as above we obtain \begin{align*} (T_2A)(0) & = (T_1 A)^{df}(0) + (\exp V_2 (T_1A)^{df}(0) \exp(-V_2) - (T_1A)^{df}(0)) \\ & \hspace{1em} +( (\exp V_2 \nabla V_2 - \nabla(\exp V_2)) \exp(-V_2)) \end{align*} where we used $\nabla V_2 = (T_1A)^{cf}(0)$. This implies : \begin{align*} &\|(T_2 A)^{cf}(0)\|_{H^s} \\& \le c_2 (\|\exp V_2 (T_1 A)^{df}(0) (\exp(-V_2) - I)\|_{H^s} + \|(\exp V_2 - I) (T_1 A)^{df}(0)\|_{H^s} \\ & \hspace{1em} + \| ( (\exp V_2 \nabla V_2 - \nabla(\exp V_2)) \exp(-V_2))\|_{H^s})\, . \end{align*} The first two terms on the right hand side are bounded by (\ref{41}) by $$c_2((\exp(c_1 \|V_2\|_X )-1) + 1) 2\epsilon (\exp(c_1\|V_2\|_X) -1) \lesssim (\epsilon^{\frac{3}{2}} + 1) \epsilon \epsilon^{\frac{3}{2}} \lesssim \epsilon^{\frac{5}{2}} \le \frac{1}{4} \epsilon^2 \, , $$ where we used (\ref{56'}), whereas the last term on the right hand side can be handled similarly as in the first iteration step : $$ c_2\| ( (\exp V_2 \nabla V_2 - \nabla(\exp V_2)) \exp(-V_2))\|_{H^s} \lesssim \epsilon^3 \le \frac{1}{4} \epsilon^2 \, . $$ This implies $$ \|(T_2A)^{cf}(0)\|_{H^s} \le \frac{1}{2} \epsilon^2$$ and also $$ \|(T_2A)(0)\|_{H^s} \le \|(T_1A)^{df}(0)\|_{H^s} + \frac{1}{2} \epsilon^2 \le \epsilon + \epsilon^{\frac{3}{2}}+ \frac{1}{2}\epsilon^2 \le 2\epsilon \, , $$ thus $$\|(T_2A)^{df}(0)\|_{H^s} \le \epsilon + \epsilon^{\frac{3}{2}} + \epsilon^2 \le 2\epsilon \, . $$ Similar estimates are also obtained for $\|\partial_t(T_2A)^{cf}(0)\|_{H^{s-1}}$ , $ \|\partial_t(T_2A)(0)\|_{H^{s-1}}$ and $\|\partial_t (T_2A)^{df}(0)\|_{H^s}$ We also obtain $$ \|(T_2 \phi)(0)\|_{H^s} + \|(\partial_t T_2 \phi)(0)\|_{H^{s-1}} \le \epsilon + \epsilon^{\frac{3}{2}} + \epsilon^2 \, . $$ We have now shown that (\ref{***}) with $\epsilon$ replaced by $\epsilon+\epsilon^{\frac{3}{2}}+\epsilon^2$ and (\ref{42}) with $\delta = \frac{1}{2}\epsilon^2$ are fulfilled with $A$ and $\phi$ replaced by $T_2 A$ and $T_2 \phi$ . By iteration we obtain a sequence of gauge transforms $T_k$ defined by \begin{align*} A &\longmapsto \prod_{l=k}^1 \exp V_l A \prod_{l=1}^k \exp(- V_l) - \nabla (\prod_{l=k}^1\exp V_l) \prod_{l=1}^k \exp(- V_l) \\ \phi &\longmapsto \prod_{l=k}^1 \exp V_l \, \phi \prod_{l=1}^k \exp(- V_l) \end{align*} with $$ V_l := -(-\Delta)^{-1} div \,(T_{l-1}A)(0) \, $$ where $T_0 := id$. 
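Throughout this iteration we repeatedly use the elementary bound
$$ \sum_{l=1}^{\infty} \epsilon^{\frac{l+1}{2}} = \frac{\epsilon}{1-\sqrt{\epsilon}} \le 2\epsilon \quad , \quad 0 < \epsilon \le \frac{1}{4} \, , $$
which in particular bounds all partial sums $\epsilon+\epsilon^{\frac{3}{2}}+\dots+\epsilon^{\frac{k+1}{2}}$ by $2\epsilon$.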
We remark that $\nabla V_{k}= (T_{k-1} A)^{cf}(0)$. We now make the assumption that for some $k\ge 2$ we know that $$\|(T_{k-1}A)^{df} (0)\|_{H^s} \le \epsilon + \epsilon^{\frac{3}{2}} + ... + \epsilon^{\frac{k+1}{2}} \le 2 \epsilon$$ and \begin{equation} \label{58'} \|V_k\|_X = \|(T_{k-1}A)^{cf} (0)\|_{H^s} \le \frac{1}{2} \epsilon^{\frac{k+1}{2}} \, . \end{equation} This holds for the case $k=2$ as shown before. Exactly as in the first two steps we obtain the estimate (with implicit constants independent of $k$ from now on) : \begin{align*} &\|V_{k+1}\|_X = \|\nabla V_{k+1}\|_{H^s} = \|(T_k A)^{cf}(0)\|_{H^s} \\ &\lesssim ((\exp(c_1 \|V_k\|_X)-1) + 1) \|(T_{k-1}A)^{df}(0)\|_{H^s} (\exp(c_1 \|V_k\|_X -1) \\ &\hspace{1em} + (\exp(c_1\|V_k\|_X) -1)\|\nabla V_k\|_{H^s} + (\exp(c_1 \|V_k\|_X) -1 - c_1 \|V_k\|_X) \\ &\lesssim (\|(T_{k-1}A)^{cf}(0)\|_{H^s} + 1) \|(T_{k-1}A)^{df}(0)\|_{H^s} \|(T_{k-1}A)^{cf}(0)\|_{H^s} \\ &\hspace{1em}+ \|(T_{k-1}A)^{cf}(0)\|_{H^s} \|(T_{k-1}A)^{df}(0)\|_{H^s} + \|(T_{k-1}A)^{cf}(0)\|_{H^s}^2 \\ &\lesssim (\epsilon^{\frac{k+1}{2}}+1) 2\epsilon \epsilon^{\frac{k+1}{2}} + \epsilon^{\frac{k+1}{2}}\epsilon^{\frac{k+1}{2}}+\epsilon^{k+1} \lesssim \epsilon^{\frac{k+3}{2}} + \epsilon^{k+1} \le \frac{1}{2} \epsilon^{\frac{k}{2}+1} \end{align*} and $$ \|(T_kA)(0)\|_{H^s} \le \|(T_{k-1}A)^{df}(0)\|_{H^s} + \frac{1}{2} \epsilon^{\frac{k}{2} +1} \le \epsilon + \epsilon^{\frac{3}{2}} + ... + \epsilon^{\frac{k+1}{2}} + \frac{1}{2} \epsilon^{\frac{k}{2}+1} \le 2\epsilon \, ,$$ thus \begin{equation} \label{CC} \|(T_kA)^{df}(0)\|_{H^s} \le \epsilon+\epsilon^{\frac{3}{2}} + ... + \epsilon^{\frac{k}{2}+1} \le 2 \epsilon \, . \end{equation} Thus these estimates hold for any $k \ge 2$. Similarly one can show that \begin{align} \nonumber &\|(\partial_t T_kA)(0)\|_{H^{s-1}}+ \|(\partial_t T_kA)^{df}(0)\|_{H^{s-1}} +\|(T_k \phi)(0)\|_{H^s} + \|(\partial_t T_k \phi)(0)\|_{H^{s-1}} \\ \label{CCC} & \hspace{1em} \lesssim \epsilon \, . \end{align} Next we estimate \begin{align*} \|T_k A\|_{H^s} &\le \|(\prod_{l=k}^1 \exp V_l) A \prod_{l=1}^k \exp(-V_l)\|_{H^s} + \|\nabla(\prod_{l=k}^1 \exp V_l) \prod_{l=1}^k \exp(-V_l) \|_{H^s} \\ & = I + II \, . \end{align*} We further estimate \begin{align*} I &\le \|A\|_{H^s} + \|((\prod_{l=k}^1 \exp V_l)- I) A ((\prod_{l=1}^k \exp(-V_l)-I)\|_{H^s} \\ &\hspace{1em} + \| A ((\prod_{l=1}^k \exp(-V_l)-I)\|_{H^s} + \|((\prod_{l=k}^1 \exp V_l)- I) A \|_{H^s} \\ &= \|A\|_{H^s}+ I_1 + I_2 + I_3 \end{align*} In order to control $I_1$ we consider first \begin{align} \nonumber &\| \prod_{l=k}^1 \exp V_l - I \|_X = \|\prod_{l=k}^1 \sum_{n=0}^{\infty} \frac{V_l^n}{n!} - I \|_X = \| \sum_{m=1}^{\infty} \sum _{n_1+...+n_k = m} \prod_{l=k}^1 \frac{V_l^{n_l}}{n_l !} \|_X \\ \label{C} &\lesssim \sum_{m=1}^{\infty} \sum _{n_1+...+n_k = m} \prod_{l=k}^1 \frac{(c_1 \|V_l\|_X)^{n_l}}{n_l !} = \prod_{l=k}^1 \exp(c_1 \|V_l\|_X) - 1 \\ \nonumber &=\exp(\sum_{l=1}^k c_1 \|V_l\|_X) - 1 \lesssim \exp(\sum_{l=1}^k c_1 \epsilon^{\frac{l+1}{2}}) - 1 \lesssim \exp(2c_1 \epsilon) - 1 \lesssim \epsilon \end{align} independently of $k$ where we used (\ref{58'}). Consequently $$ I_1 \lesssim \|(\prod_{l=k}^1 \exp V_l)- I\|_X \| A\|_{H^s} \| (\prod_{l=1}^k \exp(-V_l)-I\|_X \lesssim \|A\|_{H^s} \epsilon^2\, . 
$$ Estimating $I_2$ and $I_3$ similarly we obtain $$ I \lesssim \|A\|_{H^s} (1+ \epsilon^2 + \epsilon) \, .$$ Moreover \begin{align*} II & \le \| \nabla(\prod_{l=k}^1 \exp V_l - I)(\prod_{l=1}^k \exp(-V_l) - I ) \|_{H^s} + \|\nabla (\prod_{l=k}^1 \exp V_l - I)\|_{H^s} \\ & \lesssim \|\prod_{l=k}^1 \exp V_l - I\|_X \|(\prod_{l=1}^k \exp(-V_l) - I ) \|_X + \| (\prod_{l=k}^1 \exp V_l - I)\|_X \\ & \lesssim \epsilon^2 + \epsilon \, , \end{align*} Summarizing we obtain with implicit constants which are independent of $k$ : $$ \|T_k A\|_{H^s} \ \lesssim \|A\|_{H^s} + \epsilon $$ Similarly we also obtain $$ \|\partial_t( T_k A )\|_{H^{s-1}} \lesssim \|\partial_t A\|_{H^{s-1}} $$ and $$ \|T_k \phi\|_{H^s} \lesssim \|\phi\|_{H^s} \quad , \quad \|\partial_t (T_k \phi)\|_{H^{s-1}} \lesssim \|\partial_t \phi\|_{H^{s-1}} \, . $$ We want to consider the mapping $T$ defined by $TA = \lim_{k\to \infty} T_k A$ and $T\phi = \lim_{k\to \infty} T_k \phi$ , where the limit is taken in $C^0([0,1],H^s) \cap C^1([0,1],H^{s-1})$. This would imply by (\ref{58'}): $\|(TA)^{cf}(0)\|_{H^s} = \lim_{k \to \infty} \|(T_kA)^{cf}\|_{H^s} = 0$ , thus the desired property $$(TA)^{cf}(0)=0 \, .$$ Now define $$ SA := \prod_{l=\infty}^1(\exp V_l) A \prod_{l=1}^{\infty} \exp(-V_l)- \nabla (\prod_{l=\infty}^1 \exp V_l) \prod_{1=1}^{\infty} \exp(-V_l) = UAU^{-1} - \nabla U U^{-1}\, ,$$ with $U:=\prod_{l=\infty}^1 \exp V_l$, where the limit is taken with respect to $\| \cdot \|_X$ . \\ This limit in fact exists, because by the calculations in (\ref{C}) we obtain for $N > k$ the estimate $$ \| \prod_{l=N}^1 \exp V_l - \prod_{l=k}^1 \exp V_l \|_X \lesssim \|\prod_{l=N}^{k+1} \exp V_l - I \|_X (\|\prod_{l=k}^1 \exp V_l - I\|_X + 1)\lesssim \epsilon^{\frac{k}{2}+1} (\epsilon +1) \, . $$ We also obtain $U^{-1}=\prod_{l=1}^{\infty} \exp(-V_l)$, which is defined in the same way. \\ In order to prove $S=T$ we estimate as follows : \begin{align*} &\|SA-T_kA\|_{H^s} \\ & \le \|(\prod_{l=\infty}^1\exp V_l-\prod_{l=k}^1 \exp V_l) A \prod_{l=1}^{\infty}\exp(-V_l)\|_{H^s} \\ & + \|\prod_{l=k}^1\exp V_l A(\prod_{l=1}^{\infty}\exp(- V_l)-\prod_{l=1}^k \exp(- V_l))\|_{H^s} \\ & + \|\nabla(\prod_{l=\infty}^1(\exp V_l)-\prod_{l=k}^1 \exp V_l) (\prod_{l=1}^{\infty} \exp(- V_l)\|_{H^s} \\ & + \|\nabla(\prod_{l=k}^1\exp V_l(\prod_{l=1}^{\infty} \exp(- V_l) - \prod_{l=1}^k \exp(- V_l))\|_{H^s} \\ & = I + II + III + IV \end{align*} Now \begin{align*} I & =\| (\prod_{l=\infty}^{k+1} \exp V_l - I) \prod_{l=k}^1 \exp V_l A \prod_{l=1}^{\infty} \exp(-V_l) \|_{H^s} \\ & \lesssim \|\prod_{l=\infty}^{k+1} \exp V_l - I \|_X (\| \prod_{l=k}^1 \exp V_l -I\|_X + 1) \| A \prod_{l=1}^{\infty} \exp(-V_l) \|_{H^s}\, . \end{align*} Now by (\ref{C}) we obtain $$\| \prod_{l=k}^1 \exp V_l -I\|_X \lesssim \epsilon \, , $$ and $$ \|A \prod_{l=1}^{\infty} \exp (-V_l) \|_{H^s} \lesssim \|A\|_{H^s} (1+\| \prod_{l=1}^{\infty} \exp (-V_l) -I\|_X) \lesssim \|A\|_{H^s} (1+\epsilon) $$ and also similarly as in (\ref{C}) $$ \| \prod_{l=\infty}^{k+1} \exp(- V_l) -I\|_X \le \exp(\sum_{l=k+1}^{\infty} c_1 \epsilon^{\frac{k}{2}+1}) - 1 \lesssim \exp(c \epsilon^{\frac{k}{2}+1}) -1\lesssim \epsilon^{\frac{k}{2} + 1} \, $$ so that $$ I + II \lesssim \epsilon^{\frac{k}{2}+1} \|A\|_{H^s} \, . 
$$ Next we estimate \begin{align*} III & \le \|\nabla(\prod_{l=\infty}^1(\exp V_l)-\prod_{l=k}^1 \exp V_l) ((\prod_{l=1}^{\infty} \exp(- V_l)- I)\|_{H^s} \\ &\hspace{1em}+\|\nabla(\prod_{l=\infty}^1(\exp V_l)-\prod_{l=k}^1 \exp V_l)\|_{H^s} \\ & \lesssim \|I - \prod_{l=\infty}^{k+1} \exp V_l \|_X (\|\prod_{l=k}^1 \exp V_l - I\|_X + 1) (\|\prod_{l=1}^{\infty} \exp(-V_l) - I\|_X +1) \\ & \lesssim \epsilon^{\frac{k}{2}+1} (\epsilon +1)(\epsilon +1) \lesssim \epsilon^{\frac{k}{2}+1} \, . \end{align*} Finally \begin{align*} IV &\le \|\nabla(\prod_{l=k}^1 \exp V_l- I) \prod_{l=1}^k \exp(- V_l) (I - \prod_{l=k+1}^{\infty} \exp(- V_l))\|_{H^s} \\ & \lesssim \|\prod_{l=k}^1 \exp V_l- I\|_X (\prod_{l=1}^k \exp(- V_l) -I\|_X + 1) \|I - \prod_{l=k+1}^{\infty} \exp(- V_l)\|_X \\ & \lesssim \epsilon (\epsilon +1) \epsilon^{\frac{k}{2}+1} \lesssim \epsilon^{\frac{k}{2}+2} \, , \end{align*} so that we obtain $$ \|SA - T_kA\|_{H^s} \lesssim \epsilon^{\frac{k}{2}+1} (\|A\|_{H^s} + 1)\, \rightarrow \, 0 \quad (k \to \infty) \, , $$ thus $T_kA \to SA $ in $C^0([0,1],H^s)$ and similarly $\partial_t T_k A \to \partial_t SA$ in $C^0([0,1],H^{s-1})$ as well as $T_k \phi \to S\phi$ in $C^0([0,1],H^s)$ and $\partial_t T_k \phi \to \partial_t S\phi$ in $C^0([0,1],H^{s-1}) \,.$ We have shown that $T=S$ is a gauge transformation which besides fulfilling the temporal gauge has the property $(TA)^{cf}(0)=0$ and preserves the regularity $A,\phi \in C^0([0,1],H^s)\cap C^1([0,1],H^{s-1})$. From the properties (\ref{CC}) and (\ref{CCC}) of $T_k$ we also deduce $$ \|(TA)^{df}(0)\|_{H^s} + \|(\partial_t TA)^{df}(0)\|_{H^{s-1}} + \|(T\phi)(0)\|_{H^s} + \|(\partial_t T\phi)(0)\|_{H^{s-1}} \lesssim \epsilon \, . $$ Assume now that $ A = A_- + A_+ + A' $ , where $A_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1]$ , $A' \in X^{s+\alpha,\frac{1}{2}+}_{\tau=0}[0,1]$ and $\partial_t A' \in C^0([0,1],H^{s-1})$ . Let $$TA=U A U^{-1} -\nabla U U^{-1} \, ,$$ where $U = \prod_{l=\infty}^1 \exp V_l$ , is defined as above. We want to show that $TA$ has the same regularity. Let $\psi=\psi(t)$ be a smooth function with $\psi(t)=1$ for $0 \le t \le 1$ and $\psi(t)=0$ for $t\ge 2$. Then we obtain by Lemma \ref{Lemma1} below and (\ref{C}) : \begin{align*} \|U A_{\pm} \psi\|_{X^{s,\frac{3}{4}+}_{\pm}} & \lesssim \|\nabla U \psi\|_{X^{s,1}_{\pm}} \|A_{\pm}\|_{X^{s,\frac{3}{4}+}_{\pm}} \lesssim \|\nabla U\|_{H^s} \|A_{\pm}\|_{X^{s,\frac{3}{4}+}_{\pm}} \\ &\lesssim \| U -I\|_X \|A_{\pm}\|_{X^{s,\frac{3}{4}+}_{\pm}} \lesssim \epsilon \|A_{\pm}\|_{X^{s,\frac{3}{4}+}_{\pm}} \, , \end{align*} thus $$ \|U A_{\pm}\|_{X^{s,\frac{3}{4}+}_{\pm}[0,1]} \lesssim \epsilon \|A_{\pm}\|_{X^{s,\frac{3}{4}+}_{\pm}[0,1]} \, . $$ Similarly we obtain $$ \|U A_{\pm} U^{-1}\|_{X^{s,\frac{3}{4}+}_{\pm}[0,1]} \lesssim \epsilon \|U A_{\pm}\|_{X^{s,\frac{3}{4}+}_{\pm}} \lesssim \epsilon^2\|A_{\pm}\|_{X^{s,\frac{3}{4}+}_{\pm}[0,1]} < \infty \, . $$ We also have \begin{align*} &\|(\nabla U)\psi U^{-1}\psi \|_{X^{s,\frac{3}{4}+}_{\pm}} \lesssim \|\nabla U \psi\|_{X^{s,\frac{3}{4}+}_{\pm}} \|\nabla ( U^{-1}) \psi\|_{X^{s,1}_{\pm}} \lesssim \|\nabla U\|_{H^s} \|\nabla(U^{-1})\|_{H^s} \, , \end{align*} thus \begin{align*} &\|(\nabla U) U^{-1}\|_{X^{s,\frac{3}{4}+}_{\pm}[0,1]} \lesssim \|\nabla U\|_{H^s} \|\nabla(U^{-1})\|_{H^s} \, . 
\end{align*} Moreover by Sobolev we obtain \begin{align*} \|U A' \psi\|_{X^{s+\alpha,\frac{1}{2}+}_{\tau=0}} & \lesssim \| \nabla(U) \psi\|_{X^{s,1}_{\tau=0}} \|A'\|_{X^{s+\alpha,\frac{1}{2}+}_{\tau=0}} \\ & \lesssim \| \nabla U \|_{H^s} \|A'\|_{X^{s+\alpha,\frac{1}{2}+}_{\tau=0}} \lesssim \epsilon \|A'\|_{X^{s+\alpha,\frac{1}{2}+}_{\tau=0}} \, . \end{align*} Similarly as before this implies $$ \|U A' U^{-1}\|_{X^{s+\alpha,\frac{1}{2}+}_{\tau=0}[0,1]} \lesssim \epsilon^2\|A'\|_{X^{s+\alpha,\frac{1}{2}+}_{\tau=0}[0,1]} < \infty \, . $$ By Sobolev's muliplication law we also obtain $$ \|U \partial_t A'\|_{C^0([0,1],H^{s-1})} \lesssim \|\nabla U\|_{H^s} \|\partial_t A'\|_{C^0([0,1],H^{s-1})} \lesssim \epsilon \|\partial_t A'\|_{C^0([0,1],H^{s-1})} \, . $$ As before this implies $$\|U \partial_tA' U^{-1}\|_{C^0([0,1],H^{s-1})} \lesssim \epsilon^2 \|\partial_t A'\|_{C^0([0,1],H^{s-1})} < \infty \, .$$ We have thus shown that $TA$ has the same regularity as $A$. The same estimates also show that $$ \|U \phi_{\pm} U^{-1}\|_{X^{s,\frac{3}{4}+}_{\pm}[0,1]} \lesssim \epsilon^2\|\phi_{\pm}\|_{X^{s,\frac{3}{4}+}_{\pm}[0,1]} < \infty \, , $$ so that $T\phi = U \phi U^{-1}$ maps $X^{s+\frac{1}{4},\frac{1}{2}+}_+ + X^{s+\frac{1}{4},\frac{1}{2}+}_-$ into itself. The same properties also hold for its inverse $T^{-1}$ which is given by \begin{align*} B &\longmapsto U^{-1} B U + U^{-1} \nabla U \\ \phi' &\longmapsto U \phi' U^{-1} \, . \end{align*} \end{proof} In the last proof we used the following \begin{lemma} \label{Lemma1} The following estimate holds for $s>\frac{n}{2}-\frac{3}{4}$ and $\epsilon >0$ sufficiently small: $$ \|uv\|_{X^{s,\frac{3}{4}+\epsilon}_{\pm}} \lesssim \|\nabla u\|_{X^{s,1}_{\pm}} \|v\|_{X^{s,\frac{3}{4}+\epsilon}_{\pm}} \, . $$ \end{lemma} \begin{proof} By Tao \cite{T}, Cor. 8.2 we may replace $\nabla$ by $\langle \nabla \rangle$ so that it suffices to prove $$ \|uv\|_{X^{s,\frac{3}{4}+\epsilon}_{\pm}} \lesssim \| u\|_{X^{s+1,1}_{\pm}} \|v\|_{X^{s,\frac{3}{4}+\epsilon}_{\pm}} \, . $$ We start with the elementary estimate $$|(\tau_1 + \tau_2)\mp |\xi_1+\xi_2|| \le |\tau_1 \mp |\xi_1|| + |\tau_2 \mp |\xi_2|| + |\xi_1| + |\xi_2| - |\xi_1+\xi_2| \, . $$ Assume now w.l.o.g. $|\xi_2|\ge|\xi_1|$. We have $$|\xi_1|+|\xi_2|-|\xi_1+\xi_2| \le |\xi_1|+|\xi_2| + |\xi_1| - |\xi_2| = 2|\xi_1| \, ,$$ so that $$|(\tau_1 + \tau_2)\mp |\xi_1+\xi_2|| \le |\tau_1 \mp\xi_1| + |\tau_2 \mp |\xi_2|| + 2\min(|\xi_1|,|\xi_2|) \, . $$ Using Fourier transforms by standard arguments it thus suffices to show the following three estimates: \begin{align*} \|uv\|_{X_{\pm}^{s,0}} & \lesssim \|u\|_{X^{s+1,\frac{1}{4}-\epsilon}_{\pm}} \|v\|_{X^{s,\frac{3}{4}+\epsilon}_{\pm}} \\ \|uv\|_{X_{\pm}^{s,0}} & \lesssim \|u\|_{X^{s+1,1}_{\pm}} \|v\|_{X^{s,0}_{\pm}} \\ \|uv\|_{X_{\pm}^{s,0}} & \lesssim \|u\|_{X^{s+\frac{1}{4}-\epsilon,1}_{\pm}} \|v\|_{X^{s,\frac{3}{4}+\epsilon}_{\pm}} \end{align*} The first and second estimate easily follow from Sobolev, whereas the last one is implied by \cite{FK} , Thm. 1.1. \end{proof} \section{Proof of Theorem \ref{Theorem1} and Theorem \ref{Theorem1'}} \begin{proof} We only prove Theorem \ref{Theorem1}. 
It suffices to construct a unique local solution of (\ref{3}),(\ref{4}),(\ref{5}) with initial conditions $$ A^{df}(0) = a^{df} \, , \, (\partial_t A^{df})(0) = {a'}^{df} \, , \, A^{cf}(0) = a^{cf} \, , \,\phi(0)=\phi_0 \, , \, (\partial_t \phi)(0) = \phi_1 \, ,$$ which fulfill $$ \|A^{df}(0)\|_{H^s} + \|(\partial_t A)^{df}(0)\|_{H^{s-1}} + \|A^{cf}(0)\|_{H^s} + \|\phi(0)\|_{H^s} + \|(\partial_t \phi)(0)\|_{H^{s-1}} \le \epsilon $$ for a sufficiently small $\epsilon > 0$. By Lemma \ref{Lemma} there exists a gauge transformation $T$ which fulfills (\ref{T1}) and $(TA)^{cf}(0) =0$. We use Prop. \ref{Prop} to construct a unique solution $(\tilde{A},\tilde{\phi})$ of (\ref{3}),(\ref{4}),(\ref{5}) , where $\tilde{A}=\tilde{A}_+^{df} + \tilde{A}_-^{df} +\tilde{A}^{cf}$ and $\tilde{\phi} = \tilde{\phi}_+ + \tilde{\phi}_-$ , with data $$\tilde{A}^{df}(0)= (TA)^{df}(0) \, , \, (\partial_t \tilde{A})^{df}(0) = (\partial_t (TA)^{df})(0) \, , \, \tilde{A}^{cf}(0)= (TA)^{cf}(0)=0 \, ,$$ $$ \, \tilde{\phi}(0) = (T\phi)(0) \, , \, (\partial_t \tilde{\phi})(0) = (\partial_t T \phi)(0) $$ with the regularity $$ \tilde{A}^{df}_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] , \tilde{A}^{cf} \in X^{s+\alpha,\frac{1}{2}+}_{\tau=0}[0,1] , \partial_t \tilde{A}^{cf} \in C^0([0,1],H^{s-1}) , \tilde{\phi}_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] \, . $$ This solution also satisfies $\tilde{A},\tilde{\phi} \in C^0([0,1],H^s) \cap C^1([0,1],H^{s-1})$. Applying the inverse gauge transformation $T^{-1}$ according to Lemma \ref{Lemma} we obtain a unique solution of (\ref{3}),(\ref{4}),(\ref{5}) with the required initial data and also the same regularity. The proof of Theorem \ref{Theorem1'} is completely analogous by use of Prop. \ref{Prop'}. \end{proof} \end{document}
math
80,403
\begin{document} \title{Mixed state geometric phases, entangled systems, and local unitary transformations} \author{Marie Ericsson} \affiliation{Department of Quantum Chemistry, Uppsala University, Box 518, Se-751 20 Sweden} \author{Arun K. Pati} \affiliation{Institute of Physics, Bhubaneswar-751005, Orissa, India} \author{Erik Sj\"{o}qvist} \affiliation{Department of Quantum Chemistry, Uppsala University, Box 518, Se-751 20 Sweden} \author{Johan Br\"{a}nnlund} \affiliation{SCFAB, Department of Physics, Stockholm University, Se-106 91 Stockholm, Sweden} \author{Daniel. K. L. Oi} \affiliation{Centre for Quantum Computation, Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, UK} \begin{abstract} The geometric phase for a pure quantal state undergoing an arbitrary evolution is a ``memory'' of the geometry of the path in the projective Hilbert space of the system. We find that Uhlmann's geometric phase for a mixed quantal state undergoing unitary evolution not only depends on the geometry of the path of the system alone but also on a constrained bi-local unitary evolution of the purified entangled state. We analyze this in general, illustrate it for the qubit case, and propose an experiment to test this effect. We also show that the mixed state geometric phase proposed recently in the context of interferometry requires uni-local transformations and is therefore essentially a property of the system alone. \end{abstract} \pacs{03.65.Vf, 42.50.Dv} \maketitle Pancharatnam \cite{pancharatnam56} was first to introduce the concept of geometric phase in his study of interference of light in distinct states of polarization. Its quantal counterpart was discovered by Berry \cite{berry84}, who proved the existence of geometric phases in cyclic adiabatic evolutions. This was generalized to the case of nonadiabatic \cite{aharonov87} and noncyclic \cite{samuel88} evolutions. The geometric phase was also derived on the basis of purely kinematic considerations \cite{aitchison92}. In a general context, the geometric phase was defined for nonunitary and non-Schr\"{o}dinger \cite{pati95} evolutions. Since the geometric phase for a pure state is a nonintegrable quantity and depends only on the geometry of the path traced in the projective Hilbert space, it acts as a memory of a quantum system. Another important development in this field was initiated by Uhlmann \cite{uhlmann86} (see also \cite{uhlmann89}), who introduced a notion of geometric phase for mixed quantal states. More recently, using ideas of interferometry, another definition of mixed state phase was introduced in \cite{sjoqvist00a} (see also \cite{bhandari01}) and experimentally verified in \cite{du03}. A renewed interest in geometric phases for mixed states is due to its potential relevance to geometric quantum computation \cite{pachos99}. Mixed states naturally arise when we ignore the ancilla subsystem of a composite object (system+ancilla) that is described by a pure entangled state. In this Letter we wish to consider the mixed state geometric phases in \cite{uhlmann86,sjoqvist00a} in terms of such purifications, and to investigate whether they should be regarded as properties of the system alone or not. More precisely, we would like to address the following question: do mixed state geometric phases depend only on the evolution of the system of interest, or do they also depend on the evolution of the ancilla part with which the system is entangled? 
By examining, in detail, the case of mixed states undergoing local unitary evolutions, we find that the Uhlmann phase \cite{uhlmann86} indeed contains a memory of the ancilla part, while the mixed state phase proposed in \cite{sjoqvist00a} does not. In particular, we propose an experiment to test the Uhlmann phase using a Franson set up \cite{franson89} with polarization entangled photons \cite{hessmo00,white99} that would verify this new memory effect. More importantly, we show that the phase holonomies given in \cite{uhlmann86} and in \cite{sjoqvist00a} are generically different. Consider first the unitary path $\eta : t \in [0,\tau ] \mapsto |\psi_{t} \rangle \langle \psi_{t}|$ of normalized pure state projectors with $\langle \psi_{0} | \psi_{\tau} \rangle \neq 0$. The geometric phase associated with $\eta$ is defined as \begin{equation} \beta =\arg \lim_{N \rightarrow \infty} \Big( \langle \psi_{0} | \psi_{\tau} \rangle \ \langle \psi_{\tau} | \psi_{[(N-1)\tau /N]} \rangle \times \ ... \ \times \langle \psi_{[\tau /N]} | \psi_{0} \rangle \Big) \label{eq:puregp} \end{equation} $\beta$ is a property only of the path $\eta$ as it is independent of the lift $\eta \longrightarrow \tilde{\eta} : t \in [0,\tau ] \mapsto |\psi_{t}\rangle$. A parallel lift is defined by requiring that each $\langle\psi_{[(j+1)\tau/N]}|\psi_{[j\tau /N]}\rangle$ be real and positive (i.e. $\inprod{\psi}{\dot{\psi}}=0$ when $N\rightarrow\infty$), so that $\beta$ takes the form \begin{equation} \beta=\arg\inprod{\psi_{0}}{\psi_{\tau}}. \label{eq:pureparallel} \end{equation} One may measure $\beta$ in interferometry as a relative phase shift in the interference pattern characterized by $\nu e^{i\beta}=\inprod{\psi_{0}}{\psi_{\tau}}$, where $\nu = |\langle \psi_{0}|\psi_{\tau}\rangle|$ is the visibility~\cite{sjoqvist01}. To generalize the above to mixed states, consider the path $\zeta : t \in [ 0,\tau ] \longrightarrow \rho_{t}$ of density operators $\rho_{t}$. A standard purification (lift) of $\zeta$ is a path $\tilde{\zeta} : t \in [ 0,\tau ] \longrightarrow w_{t}$ in the Hilbert space of Hilbert-Schmidt operators with scalar product $\langle w_{t} , w_{t'} \rangle = {\text{Tr}} ( w_{t}^{\dagger} w_{t'})$ such that $w_{t} w^{\dagger}_{t} = \rho_{t}$. Note that $w_{t} = \rho^{1/2}_{t} x_{t}$ is a purification of $\rho_{t}$ for any unitary $x_{t}$. For a purification where each $| \langle w_{t} , w_{t'} \rangle |$ is constrained to its maximum $d[\rho_{t},\rho_{t'}]_{Bures}=\text{Tr}\left[\sqrt{\sqrt{\rho_{t}} \rho_{t'} \sqrt{\rho_{t}}}\right]$ \cite{uhlmann76}, Uhlmann \cite{uhlmann86} defines the geometric phase associated with $\zeta$ as \begin{eqnarray} \phi_{g} & = & \arg \lim_{N \rightarrow \infty} \Big( \langle w_{0} , w_{\tau} \rangle \ \langle w_{\tau} , w_{[(N-1)\tau /N]} \rangle \nonumber \\ & & \times ... \times \langle w_{[\tau /N]} , w_{0} \rangle \Big). \end{eqnarray} The Uhlmann phase $\phi_{g}$ is independent of the purification $\zeta \longrightarrow \tilde{\zeta}$ as long as it obeys the maximality constraint, thus $\phi_{g}$ is a property of the path $\zeta$. For pure states $\rho_{t} = |\psi_{t} \rangle \langle \psi_{t} |$ the constrained purification is characterized by $\langle w_{t} , w_{t'} \rangle = \langle \psi_{t} |\psi_{t'} \rangle$ up to an arbitrary phase factor so that $\phi_{g}$ reduces to the pure state geometric phase $\beta$. A parallel purification is introduced by requiring that each $w_{[(j+1)\tau /N]}^{\dagger} w_{[j\tau /N]}$ be hermitian and positive for all $j=0,...,N$. 
Infinitesimally, this entails that \begin{equation} w^{\dagger}_{t} \dot{w}_{t} = \dot{w}^{\dagger}_{t}w_{t} . \label{eq:uhlmannpc} \end{equation} For such a parallel purification, the geometric phase becomes \begin{equation} \phi_{g} = \arg \langle w_{0} , w_{\tau} \rangle , \label{eq:uhlmannholonomy} \end{equation} which reduces to $\beta$ for pure states. We show below that $\phi_{g}$ could be verified in interferometry as a relative phase shift in the interference pattern characterized by the visibility $|\langle w_{0} , w_{\tau} \rangle |$. To elucidate the above purification approach, consider the unitary case $\rho_{0} \longrightarrow \rho_{t} = u_{t} \rho_{0} u_{t}^{\dagger}$. We introduce a set of eigenvectors $\{ |k\rangle \}$, $k=1,...,N$ with $N$ the (finite) dimension of Hilbert space, with eigenvalues $\{ \lambda_k \}$ of $\rho_{0}$ so that \begin{eqnarray} w_{0} &=& \rho_{0}^{1/2} = \sum_{k} \sqrt{ \lambda_k} |k \rangle \langle k| \nonumber\\ \longrightarrow w_{t} &=& u_{t} \rho_{0}^{1/2} v_{t} = \sum_{k} \sqrt{ \lambda_k} u_{t}|k \rangle \langle k| v_{t} \end{eqnarray} with the unitarity $v_{t} = u_{t}^{\dagger}x_{t}$. With $u_{t}$ and $v_{t}$ related via the parallel transport condition Eq.~(\ref{eq:uhlmannpc}), we obtain the geometric phase from Eq.~(\ref{eq:uhlmannholonomy}) as \begin{equation} \phi_{g} = \arg \sum_{k,l} \sqrt{ \lambda_k \lambda_l} \langle l |u_{\tau}| k \rangle \langle k |v_{\tau} | l \rangle . \label{eq:wholonomy} \end{equation} The standard purification used by Uhlmann is equivalent to considering a pure state of the system+ancilla, $w_0\longleftrightarrow \ket{\Psi_0}\in{\cal H}_s\otimes{\cal H}_a$ evolving under a bi-local operator $u_t\otimes y_t$, in Schmidt form, \begin{equation} w_t\longleftrightarrow|\Psi_{t} \rangle = \sum_{k} \sqrt{ \lambda_k} (u_{t}|k \rangle) \otimes(y_{t}|k \rangle), \label{eq:purification} \end{equation} where the ancilla unitary $y_t=v_t^T$ (transpose with respect to the instantaneous eigenbasis of $\rho_t$) obeys the same parallel condition as before. In this view the geometric phase is given by \begin{equation} \phi_{g} = \arg \langle \Psi_{0} | \Psi_{\tau} \rangle . \label{eq:uhlpurification} \end{equation} Let us now consider the case where the composite system undergoes uni-local unitary transformations so that only the `system' part is affected, i.e. unitarities of the form $u_{t} \otimes {\mathbf 1}$. The purified state now evolves to \begin{equation} |\Psi_{t} \rangle = \sum_{k} \sqrt{ \lambda_k} (u_{\tau}|k \rangle) \otimes |k \rangle \label{eq:uniphasediff} \end{equation} and the phase difference between the initial and final state reads \begin{equation} \arg \langle \Psi_{0} | \Psi_{\tau} \rangle = \arg \sum_{k} \lambda_k \langle k |u_{\tau}|k \rangle =\arg \text{Tr}[\rho_0 u_{\tau}] . \label{eq:erik} \end{equation} If we require $u_{t}$ to transport each pure state component $|k\rangle$ of the density matrix in a parallel manner, then \begin{equation} \Phi_{g} = \arg \sum_{k} \lambda_k \nu_{k} e^{i\beta_{k}}, \label{eq:interfergp} \end{equation} where $\langle k |u_{\tau}|k \rangle = \nu_{k} e^{i\beta_{k}}$ and $\beta_{k}$ is the pure state (noncyclic) geometric phase for $|k\rangle$. $\Phi_{g}$ is the mixed state geometric phase proposed in \cite{sjoqvist00a}. It is natural to ask when the two mixed state geometric phases match. To see this, let us write $u_{t}=\exp(-itH)$ and $v_{t}=\exp(it\tilde{H})$, $H$ and $\tilde{H}$ being the Hamiltonian of system and ancilla, respectively (we set $\hbar = 1$). 
The Hamiltonians $H$ and $\tilde{H}$ are both assumed to be time-independent. To determine $\tilde{H}$ from the parallel transport condition Eq.~(\ref{eq:uhlmannpc}), we write $\rho_{0}$ in its diagonal basis yielding \cite{uhlmann93} \begin{equation} \tilde{H} = \sum_{k,l} \frac{2\sqrt{\lambda_{k}\lambda_{l}}} {\lambda_{k}+\lambda_{l}} |k \rangle \langle l| \langle k|H|l \rangle . \label{eq:htilde} \end{equation} Now, $v_{t}={\mathbf 1}$ iff $\tilde{H}=0$, which implies that $H=0$ when all $\lambda_{k}$ are nonvanishing. That is, when all $\lambda_{k}\neq 0$ the two geometric phases can match only in the trivial case where neither the system nor ancilla evolve. Thus, in generic cases the two phases are distinct and one cannot obtain one from the other. However, if $\rho$ is not of full rank, $\tilde{H}=0$ does not imply $H=0$ in order to match the two geometric phases. Only in the extreme case of $\rho$ being pure, the two geometric phases are identical and equal to the standard geometric phase of the system. It can be seen that Uhlmann's geometric phase is in general a property of a composite system in a pure entangled state that undergoes a certain bi-local unitary transformation. Hence, this geometric phase depends on the history of the system as well as on the history of its entangled counterpart. On the other hand the geometric phase proposed in \cite{sjoqvist00a} requires that the entangled composite system undergoes a uni-local unitary transformation, i.e. the evolution of the ancilla is independent of the evolution of the system. Thus, this geometric phase is essentially a property of the system alone; the role of the ancilla is just to make the reduced state of the system mixed. It should be noted that the above memory effects are not equivalent to that of the standard geometric phase acquired by the purified state, as computed in Ref. \cite{sjoqvist00b}. In fact, the parallelity condition $\langle \Psi_{t} | \dot{\Psi}_{t} \rangle = 0$ on the purified state is a much weaker constraint on the bi-local transformation than Eq.~(\ref{eq:uhlmannpc}). Indeed, by writing $|\Psi_{t}\rangle=u_{t}\otimes y_{t}|\Psi_{0}\rangle$ the parallel transport constitutes a single condition \begin{equation} \langle \Psi_{0} | u_{t}^{\dagger} \dot{u}_{t} \otimes {\mathbf 1} |\Psi_{0} \rangle + \langle \Psi_{0} | {\mathbf 1} \otimes y_{t}^{\dagger} \dot{y}_{t} |\Psi_{0} \rangle = 0, \label{eq:bistandardpc} \end{equation} and there are infinitely many $y_{t}$ that fulfill Eq.~(\ref{eq:bistandardpc}) but not Eq.~(\ref{eq:uhlmannpc}). For uni-local transformations Eq.~(\ref{eq:bistandardpc}) reduces to \begin{equation} \langle \Psi_{0} | u_{t}^{\dagger} \dot{u}_{t} \otimes {\mathbf 1} |\Psi_{0} \rangle = 0, \label{eq:unistandardpc} \end{equation} which is also a weaker condition than that for $\Phi_{g}$. In fact, $\Phi_{g}$ requires that each $\langle k| u_{t}^{\dagger} \dot{u}_{t} |k\rangle$ associated with nonvanishing $\lambda_{k}$ does vanish, while in Eq.~(\ref{eq:unistandardpc}) only their sum vanishes. Only for $|\Psi \rangle$ being a product state, corresponding to a pure state of the system, the new memory effects match with the standard geometric phase. Let us now compute Uhlmann's geometric phase in the noncyclic case for a qubit (two-level system) undergoing unitary precession. We assume that the qubit's Bloch vector initially points in the $z$ direction and has length $r$ so that $\rho_{0}$ has eigenvalues $\frac{1}{2} (1\pm r)$. 
Furthermore, assume that the Hamiltonian of the system is $H= \frac{1}{2} \vec{n}\cdot {\vec{\sigma}} = \frac{1}{2}(n_{x}\sigma_{x}+n_{z}\sigma_{z})$, $|\vec{n}|^{2} = n_{x}^{2}+n_{z}^{2}=1$. This determines the Hamiltonian $\tilde{H}$ of the ancilla via Eq.~(\ref{eq:htilde}) as $\tilde{H} = \frac{1}{2} (\sqrt{1-r^{2}} n_{x} \sigma_{x} + n_{z} \sigma_{z})$. By introducing the unit vector $\tilde{\vec{n}} = (\tilde{n}_{x},0,\tilde{n}_{z})$ with the components $\tilde{n}_{x}= \sqrt{1-r^{2}}n_{x}/\sqrt{1-r^{2} n_{x}^{2}}, \tilde{n}_{z}= n_{z}/\sqrt{1-r^{2} n_{x}^{2}}$, and the parameter $\tilde{\tau} = \tau \sqrt{1-r^{2} n_{x}^{2}}$ we obtain the noncyclic Uhlmann phase as \begin{eqnarray} \phi_{g} & = & -\arctan \Big( \big( rn_{z} \tan \frac{\tau}{2} - r\tilde{n}_{z} \tan \frac{\tilde{\tau}}{2} \big) \Big/ \big( 1 + (n_{z} \tilde{n}_{z} \nonumber \\ & & + \sqrt{1-r^{2}} n_{x} \tilde{n}_{x}) \tan \frac{\tau}{2} \tan \frac{\tilde{\tau}}{2} \big) \Big) . \label{eq:uhlnoncyclic} \end{eqnarray} Let us consider some important special cases. Firstly, the cyclic Uhlmann phase is obtained by inserting $\tau = 2\pi$ and using $-\tan x = \tan (\pi -x)$ yielding \begin{equation} \phi_{g}=\arctan\left(\frac{rn_{z}}{\sqrt{1-r^{2}n_{x}^{2}}} \tan\left(\pi\sqrt{1-r^{2}n_{x}^{2}}\right)\right). \label{eq:uhlcyclic} \end{equation} Secondly, in the noncyclic pure state case ($r=1$), we have $\tilde{\vec{n}}=(0,0,1)$ and $\sqrt{1- n_{x}^{2}} = |n_{z}|$, which yields \begin{eqnarray} \phi_{g}=-\arctan\left(n_{z}\tan(\tau/2) \right) + \frac{\tau}{2} n_{z} \mod{2\pi} . \label{eq:noncyclicpure} \end{eqnarray} This equals minus one-half of the geodesically closed solid angle of the open path on the Bloch sphere and is consistent with known expression for the geometric phase in the case of a pure qubit undergoing noncyclic precession (see, e.g., Ref. \cite{klyshko89}). Finally, in the case of the maximally mixed state ($r=0$), $\tilde{\vec{n}} = \vec{n}$ and $\rho_{0}^{1/2} = {\mathbf 1}/\sqrt{2}$, which yields $w_{0} w_{\tau}^{\dagger} = \rho_{0}$ so that the geometric phase vanishes, i.e. $\phi_{g} = \arg {\text{Tr}} \rho_{0} = 0$. Let us now compare the above results with the mixed state geometric phase in \cite{sjoqvist00a}. In the diagonal basis $\{ |0\rangle , |1\rangle \}$ of $\rho_{0}$ we have $\nu_{0} = \nu_{1}$ and $\beta_{0}=-\beta_{1}=-\frac{1}{2} \Omega$, where $\Omega$ is the geodesically closed solid angle on the Bloch sphere. For $r\neq 0$, we obtain $\Phi_{g}=-\arctan\left(r\tan(\Omega/2)\right)$. These expressions become identical to those of the Uhlmann approach only for pure states and in the trivial case $\vec{n}=(0,0,1)$, where neither system nor ancilla evolve. In the maximally mixed case $\Phi_{g}$ is even indeterminate as the parallel transport conditions $\langle 0| u_{t}^{\dagger} \dot{u}_{t}| 0\rangle = \langle 1| u_{t}^{\dagger} \dot{u}_{t}| 1\rangle=0$ do not specify a unique $u_{t}$ for a degenerate density operator, making $\Phi_{g} = \arg \text{Tr} [ \rho_{0} u_{\tau}] = \arg \text{Tr} [\frac{1}{2} u_{\tau}]$ undefined. As is clear from Eq.~(\ref{eq:purification}), Uhlmann's geometric phase retains a memory of the evolution of both system and ancilla due to the parallelity condition Eq.~(\ref{eq:uhlmannpc}). Using the above purification scheme $w_{t} \longrightarrow |\Psi_{t} \rangle$, the memory effect associated with $\phi_{g}$ could be tested experimentally in polarization entangled two-photon interferometry, as now shall be demonstrated. 
A detailed description of the relevant set up shown in Fig.~(1) may be found in Ref.~\cite{hessmo00}. A photon pair (system and ancilla photon) is produced in a polarization entangled pure state that takes the Schmidt form in the horizontal-vertical $(H-V)$ basis: \begin{equation} |\Psi_{0} \rangle = \sqrt{\frac{1+r}{2}} |H \rangle \otimes|H \rangle + \sqrt{\frac{1-r}{2}} |V \rangle\otimes |V \rangle . \label{eq:puremultiphoton} \end{equation} This source is described in Ref.~\cite{white99}, and is used as input in a Franson interferometer~\cite{franson89}. Note that $\rho_{0} = {\text{Tr}}_{a} |\Psi_{0} \rangle \langle \Psi_{0} | = \frac{1}{2} (1+r\sigma_{z})$ in the $H-V$ basis, and that $|\Psi_{0} \rangle$ is isomorphic to $w_{0} = \sqrt{\frac{1+r}{2}} |H\rangle \langle H| + \sqrt{\frac{1-r}{2}} |V\rangle \langle V| $. \begin{figure} \caption{Two-photon interferometry set up to test the Uhlmann phase.} \label{fig:figuhl1} \end{figure} The two unitary operators $u_{\tau}$ and $y_{\tau}$ are applied to the two longer arms. Thus, $u_{\tau}$ is applied to the system photon, say, and $y_{\tau}$ is applied to the ancilla photon. In one of the shorter arms a $U(1)$ shift $\chi$ is applied. To observe interference of $|\Psi_{0} \rangle$ and $|\Psi_{\tau}\rangle = u_{\tau} \otimes y_{\tau} |\Psi_{0}\rangle$ we require that the source produces photon pairs randomly \cite{franson89}, as is the case with the present type of source. If the photons arrive at the detector pair simultaneously, they either both took the shorter path ($\Psi_{0}$) or both took the longer path ($\Psi_{\tau}$). The state detected in coincidence is the desired superposition $|\Psi \rangle \sim e^{i\chi} |\Psi_{0}\rangle + |\Psi_{\tau} \rangle$. The measured coincidence intensity is proportional to $\langle \Psi | \Psi \rangle \propto 1 + \nu \cos(\chi - \phi_{g})$, where the visibility is $\nu = |\langle \Psi_{0}|\Psi_{\tau} \rangle |$. Thus, by varying $\chi$ the Uhlmann phase $\phi_{g}$ could be tested using this two-photon set up. An explicit realization of the operators $u_{\tau}$ and $y_{\tau}$ could be constructed in terms of an appropriate pair of $\lambda-$plates as follows. The $SU(2)$ part of the action in the $H-V$ basis of a $\lambda-$plate making an angle $\theta$ with the vertical ($V$) axis is given by $u(\alpha ,\theta ) = \exp( -i\frac{\alpha}{2} \vec{n}_{\theta} \cdot \vec{\sigma})$ with $\vec{n}_{\theta} = (\sin [2\theta] ,0, \cos [2\theta] )$. The precession angle $\alpha$ is proportional to the thickness of the $\lambda-$plate (e.g., $\alpha = \frac{\pi}{2}$ for a $\frac{\lambda}{4}-$plate). Now, the Uhlmann phase is obtained by taking $u_{\tau} = u(\alpha ,\theta )$ and $y_{\tau} = u^{\dagger}(\tilde{\alpha} ,\tilde{\theta} )$, where the thickness and orientation of the two $\lambda-$plates are related as $\tilde{\alpha} / \alpha = \sqrt{1-r^{2} \sin^{2}(2\theta)}$ and $\tan(2\tilde{\theta})=\sqrt{1-r^{2}}\tan(2\theta)$. In the cyclic case, $\alpha = 2\pi$ and the visibility of the interference pattern is reduced by the geometric factor \begin{eqnarray} \nu&=&\Bigg(\cos^{2}\left(\pi\sqrt{1-r^{2}\sin^{2}(2\theta)}\right) \nonumber\\ &+&\frac{r^{2}\cos^{2}(2\theta)}{1-r^{2}\sin^{2}(2\theta)} \sin^{2}\left(\pi\sqrt{1-r^{2}\sin^{2}(2\theta)}\right)\Bigg)^{1/2} . \end{eqnarray} Thus, the visibility is reduced by the entanglement of the purified state.
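These closed-form expressions are simple to cross-check numerically. The short sketch below (Python/NumPy; our own helper names, offered only as an illustration under the conventions $u_{\tau}=e^{-i\tau H}$ and $v_{\tau}=e^{i\tau\tilde{H}}$ stated earlier) evaluates $\langle w_{0},w_{\tau}\rangle=\text{Tr}\,[\rho_{0}^{1/2}u_{\tau}\rho_{0}^{1/2}v_{\tau}]$ in the cyclic case $\tau=2\pi$, checks the parallel transport condition Eq.~(\ref{eq:uhlmannpc}) by finite differences, and compares the resulting phase and visibility with Eq.~(\ref{eq:uhlcyclic}) and the geometric factor above (the phases agree modulo the branch of the arctangent):
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rot(angle, n):
    # exp(-i angle/2 n.sigma) for a unit vector n = (nx, 0, nz)
    return np.cos(angle / 2) * I2 - 1j * np.sin(angle / 2) * (n[0] * sx + n[2] * sz)

r, theta = 0.6, 0.4                    # Bloch vector length and plate angle (test values)
nx, nz = np.sin(2 * theta), np.cos(2 * theta)
s = np.sqrt(1 - r**2 * nx**2)
n = np.array([nx, 0.0, nz])
ntil = np.array([np.sqrt(1 - r**2) * nx, 0.0, nz]) / s

sq = np.diag([np.sqrt((1 + r) / 2), np.sqrt((1 - r) / 2)])  # sqrt(rho_0) in the z-basis

def w(t):
    # parallel purification w_t = u_t rho_0^{1/2} v_t, with v_t = exp(+i t H~)
    return rot(t, n) @ sq @ rot(-t * s, ntil)

# parallel transport check: w^dag dw/dt should be Hermitian
t0, dt = 1.3, 1e-6
M = w(t0).conj().T @ (w(t0 + dt) - w(t0 - dt)) / (2 * dt)
print(np.allclose(M, M.conj().T, atol=1e-5))

T = np.trace(w(0).conj().T @ w(2 * np.pi))        # <w_0, w_tau> at tau = 2 pi
vis = np.sqrt(np.cos(np.pi * s)**2 + (r * nz / s)**2 * np.sin(np.pi * s)**2)
print(np.isclose(abs(T), vis))                    # visibility matches the geometric factor
# cyclic Uhlmann phase, compared with Eq. (uhlcyclic) up to the arctan branch
print(np.isclose(np.tan(np.angle(T)), (r * nz / s) * np.tan(np.pi * s)))
\end{verbatim}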
For maximally mixed states, corresponding to maximally entangled $\Psi_0$ \cite{strekalov97}, $\tilde{\alpha} = \alpha$ and $\tilde{\theta} = \theta$ so that $y_{\tau} = u^{\dagger}_{\tau}(\alpha,\theta )$. Thus one should choose the same thickness of the two $\lambda-$plates and their half axes being perpendicular. The scalar product $\langle \Psi_{0} |u_{\tau} \otimes y_{\tau} |\Psi_{0} \rangle = \langle \Psi_{0} |u_{\tau} \otimes u^{\dagger}_{\tau} |\Psi_{0} \rangle$ becomes real-valued and hence $\phi_{g} = 0$. The absence of phase shift could, e.g., be tested by varying the common angle $\theta$. For pure states, $\tilde{\alpha} = \alpha \cos 2\theta$ and $\tilde{\theta}=0\mod{\frac{\pi}{2}}$. This yields the pure state geometric phase $\phi_{g} = -\frac{1}{2} \Omega$, which also could be tested in single-photon interferometry \cite{sjoqvist01}. The mixed state geometric phase in \cite{sjoqvist00a} could be tested by canceling the accumulation of local phase changes for each pure state component in each beam of a single-photon interferometer. Thus, if one of the beams is exposed to the unitarity $u_{t}$, the other beam should be exposed to the unitarity $\tilde{u}_{t}$ fulfilling $\langle 0| \tilde{u}_{t}^{\dagger} \dot{\tilde{u}}_{t} |0\rangle = - \langle 0| u_{t}^{\dagger} \dot{u}_{t} |0\rangle$ and $\langle 1| \tilde{u}_{t}^{\dagger} \dot{\tilde{u}}_{t} |1\rangle = - \langle 1| u_{t}^{\dagger} \dot{u}_{t} |1\rangle$ \cite{sjoqvist01}. To conclude, we have shown that the mixed state geometric phases proposed in \cite{uhlmann86} and \cite{sjoqvist00a} can be interpreted as two types of generically distinct phase holonomy effects for entangled systems undergoing certain local unitary transformations. We have shown that these phase effects are different from the standard geometric phase of the purified state. In the unitary case, the Uhlmann phase depends on the path of the system as well as on the ancilla undergoing a constrained bi-local unitary operation. This is a new type of memory effect that is present only for mixed state phase holonomy. We have proposed an experiment using polarization entangled photons to test this effect. The geometric phase in \cite{sjoqvist00a} depends on a certain uni-local transformation in which the ancilla part does not evolve. Thus, this geometric phase is essentially a property of the system part alone and is testable in one-particle interferometry. We hope that the mixed state phases would have applications in many areas of physics and future experiments would test these memory effects. We would like to thank Artur Ekert for useful suggestions. The work by E.S. was financed by the Swedish Research Council. D.K.L.O acknowledges the support of CESG (UK) and QAIP grant IST-1999-11234. \end{document}
\begin{document} \baselineskip24pt \title{State-independent Uncertainty Relations and Entanglement Detection} \begin{abstract}\doublespacing The uncertainty relation is one of the key ingredients of quantum theory. Despite the great efforts devoted to this subject, most of the variance-based uncertainty relations are state-dependent and suffer from the triviality problem of zero lower bounds. Here we develop a method to obtain uncertainty relations with state-independent lower bounds. The method works by exploring the eigenvalues of a Hermitian matrix composed of Bloch vectors of incompatible observables and is applicable to both pure and mixed states and to an arbitrary number of $N$-dimensional observables. The uncertainty relation for incompatible observables can be explained by geometric relations related to the parallel postulate and the inequalities in Horn's conjecture on Hermitian matrix sums. Practical entanglement criteria are also presented based on the derived uncertainty relations. \end{abstract} \section{Introduction} The uncertainty relation is one of the distinguishing features of quantum theory and plays important roles in quantum information sciences \cite{PBusch, HHofmann, OGuhne, CAFuchs}. The original form, $p_1q_1\sim h$, was introduced by Heisenberg in explaining the impossibility of simultaneous precision measurements of the position $q$ and the momentum $p$ of a microscopic particle, where $h$ is Planck's constant and $p_1$ and $q_1$ are the precisions of measuring $p$ and $q$ \cite{heis}. Soon it was cast into the following form by Kennard \cite{Kennard} \begin{eqnarray} \Delta x\Delta p \geq \frac{\hbar}{2}\; . \label{heis} \end{eqnarray} Here $\Delta x$ and $\Delta p$ are the standard deviations in measuring the canonical observables $x$ and $p$. The most well-known formulation, however, was that by Robertson \cite{Robertson} \begin{eqnarray}\label{Robertson} \Delta A^2 \Delta B^2\geq \left|\frac{1}{2}\langle [A,B] \rangle\right|^2\; , \end{eqnarray} where $\Delta X^2 = \langle X^2\rangle - \langle X\rangle^2$ denotes the variance (square of the standard deviation) of an arbitrary observable $X$ (not just the canonical observables $x$ and $p$), and $[A,B]\equiv AB-BA$ is the commutator. Soon afterward Schr\"odinger presented an improvement, $ \Delta A^2\Delta B^2 \geq \left|\frac 12 \langle[A,B]\rangle \right|^2 +\left|\frac{1}{2} \langle\{A,B\}\rangle - \langle A\rangle\langle B\rangle\right|^2$ \cite{schrodinger}, with the anti-commutator defined as $\{A, B\} \equiv AB+BA$. These variance-based uncertainty relations share a common trait: a state-dependent lower bound. The optimal lower bounds of the right hand sides may be trivially zero, which blurs the trade-off between $\Delta A$ and $\Delta B$ for different quantum states. A recent work of Maccone and Pati \cite{mp} presented new improvements to the uncertainty relation, with a typical form \begin{eqnarray} \Delta A^2 + \Delta B^2 & \geq & \pm i\langle\psi|[A,B]|\psi\rangle+|\langle\psi|A \pm iB|\psi^\perp\rangle|^2 \; . \label{Maccone} \end{eqnarray} Here $|\psi^{\perp}\rangle$ is defined by $\langle \psi|\psi^{\perp}\rangle =0$, and the lower bound remains state-dependent. Since then, great efforts have been devoted to improving the lower bound of the variance-based uncertainty relation \cite{PU, F1, S1, S2, WUR, S3, Vur, WYI, MBP, Mprod, Uplow}. In those new developments, variances generally appear on both sides of the uncertainty relations, and thus the state-dependence remains.
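The triviality problem is already visible for a single qubit. As a purely illustrative sketch (Python/NumPy; not part of the original derivations), take $A=\sigma_x$, $B=\sigma_y$ and the $\sigma_x$ eigenstate $|+\rangle$: the Robertson bound (\ref{Robertson}) collapses to $0\geq 0$ and says nothing about $\Delta B$, even though the sum of the two variances stays finite:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def variance(op, psi):
    # <op^2> - <op>^2 for a normalized state vector psi
    mean = np.vdot(psi, op @ psi).real
    return np.vdot(psi, op @ op @ psi).real - mean**2

psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # |+>, eigenstate of sigma_x

rhs = abs(0.5 * np.vdot(psi, (sx @ sy - sy @ sx) @ psi))**2  # |<[A,B]>/2|^2
print(variance(sx, psi) * variance(sy, psi), ">=", rhs)      # ~0.0 >= 0.0: trivially true
print(variance(sx, psi) + variance(sy, psi))                 # the variance sum is still 1.0
\end{verbatim}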
Moreover, those new uncertainty relations are mostly formulated for pure states and are hence not very suitable for mixed states \cite{mixed-un}. Generally, the infimum over all states of the right-hand sides of these uncertainty relations, e.g., see equations (\ref{Robertson}) and (\ref{Maccone}), will not give the true infimum of the left-hand sides. To obtain state-independent lower bounds, a Bloch vector method was introduced in \cite{NewUR}, which may in principle yield an exact uncertainty relation among an arbitrary number of observables \cite{TUR}. However, since the uncertainty relations obtained by means of Bloch vectors involve complicated functions of the variances of different observables \cite{NewUR, TUR}, the trade-offs among incompatible observables may not be manifest. Numerical methods are also helpful in analyzing the lower limits of the sum of observables' variances \cite{Neumerical-1}, e.g., the variances of angular momenta \cite{Neumerical-2}. Despite the ever increasing number of uncertainty relations, the fundamental question remains open: how to obtain an explicit uncertainty relation with a state-independent lower bound. In this work, we present a method to derive state-independent uncertainty relations for the sum of variances. The upper and lower bounds of the sum are obtained by exploring the eigenvalues of a Hermitian matrix composed of Bloch vectors of observables, which is applicable to both pure and mixed states and to an arbitrary number of $N$-dimensional observables. In this sense, the quantum uncertainty relation stems from geometric relations pertaining to the parallel postulate of Euclidean geometry and from Horn's inequalities for the spectrum of Hermitian matrix sums \cite{Horn-conjecture} (the conjecture was proved around 2000 \cite{Horn-inequalities}). We also present a practical uncertainty-relation-based entanglement criterion for bipartite mixed states, which is shown to be superior to the Bloch representation criterion in detecting entanglement. \section{The state-independent uncertainty relation} An arbitrary quantum state (pure or mixed) may be represented by a density matrix. The density matrix $\rho$ is a positive semidefinite Hermitian matrix with trace one and may be expressed as \cite{N-vector} \begin{eqnarray} \rho = \frac{1}{N} \mathds{1} + \frac{1}{2} \sum_{\mu=1}^{N^2-1} r_{\mu}\lambda_{\mu} = \frac{1}{N} \mathds{1} + \frac{1}{2}\vec{r} \cdot \vec{\lambda}\; , \label{stat and obs} \end{eqnarray} where $\lambda_{\mu}$ are the $N^2-1$ SU($N$) generators with $\mathrm{Tr}[\lambda_{\mu}\lambda_{\nu}] = 2\delta_{\mu\nu}$, and $r_{\mu} = \mathrm{Tr}[\rho \lambda_{\mu}]$ are the components of an $(N^2-1)$-dimensional real vector $\vec{r}$ called the Bloch vector of the density matrix. The Bloch vector $\vec{r}$ is subject to a series of constraints that ensure the normalization and positive semidefiniteness of the density matrix \cite{N-bloch, N-Bloch-Positivity}. In quantum mechanics, physical observables are represented by Hermitian matrices. Because adding (subtracting) a constant to (from) an observable does not change its variance, we can always take the observable to be traceless and write $A = \sum_{\mu=1}^{N^2-1} a_{\mu}\lambda_{\mu} = \vec{a}\cdot \vec{\lambda}$, where $\vec{a}$ is called the Bloch vector of $A$.
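The decomposition in equation (\ref{stat and obs}) and the Bloch vector of an observable are straightforward to construct explicitly. The following sketch (Python/NumPy; our own generator construction with the normalization $\mathrm{Tr}[\lambda_{\mu}\lambda_{\nu}]=2\delta_{\mu\nu}$, given only for illustration) builds the SU($N$) generators, extracts $\vec{r}$ and $\vec{a}$, and verifies the reconstruction of $\rho$ and $A$:
\begin{verbatim}
import numpy as np

def su_generators(N):
    # Generalized Gell-Mann matrices with Tr[lam_mu lam_nu] = 2 delta_{mu nu}
    gens = []
    for j in range(N):
        for k in range(j + 1, N):
            S = np.zeros((N, N), complex); S[j, k] = S[k, j] = 1.0
            A = np.zeros((N, N), complex); A[j, k] = -1j; A[k, j] = 1j
            gens += [S, A]
    for l in range(1, N):
        D = np.zeros((N, N), complex)
        D[:l, :l] = np.eye(l); D[l, l] = -l
        gens.append(np.sqrt(2.0 / (l * (l + 1))) * D)
    return gens

N = 3
lam = su_generators(N)
rng = np.random.default_rng(1)

G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = G @ G.conj().T; rho /= np.trace(rho).real              # random mixed state
r = np.array([np.trace(rho @ g).real for g in lam])          # Bloch vector of rho
print(np.allclose(rho, np.eye(N) / N + 0.5 * sum(ri * gi for ri, gi in zip(r, lam))))

H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = H + H.conj().T; A -= np.trace(A).real / N * np.eye(N)    # random traceless observable
a = np.array([np.trace(A @ g).real for g in lam]) / 2        # Bloch vector of A
print(np.allclose(A, sum(ai * gi for ai, gi in zip(a, lam))))
\end{verbatim}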
The variance of any observable $A$ in quantum state $\rho$ now can be written as \cite{NewUR} \begin{eqnarray} \Delta A^2 = \mathrm{Tr}[A^2\rho] - \mathrm{Tr}[A\rho]^2 = \frac{2}{N} |\vec{a}|^2 + (\vec{a}*\vec{a}) \cdot \vec{r} -(\vec{a}\cdot\vec{r}\,)^2\; . \label{Var-Bloch-N} \end{eqnarray} Here $(\vec{a}*\vec{a})_k = \sum_{\mu,\nu=1}^{N^2-1} a_{\mu} a_{\nu} d_{\mu\nu k}$ with $d_{\mu\nu k}$ being the symmetric structure constant of SU($N$) group. The variance of a physical observable now is expressed in terms of geometric relations between the Bloch vectors of the observable and the quantum state and varies with the states. For $M$ observables $A_i=\vec{a}_i\cdot \vec{\lambda}$ in $N$-dimensional Hilbert space, we may construct a real symmetric matrix $\mathcal{A} = \sum_{i=1}^M \vec{a}_i \vec{a}_i^{\, \mathrm{T}}$. The Bloch vectors of $\{A_i\}$ span a space $\mathcal{S}_1 \equiv \mathrm{span}\{\vec{a}_i|i=1,\ldots, M\}$, where the whole $(N^2-1)$-dimensional Bloch vector space is constructed by $\mathcal{S}= \mathcal{S}_1 \cup \mathcal{S}_0$ with $\mathcal{S}_0 \equiv \overline{\mathcal{S}_1}$. The dimension $m$ of $\mathcal{S}_1$ lies in $1\leq m \leq \min\{M,N^2-1\}$. Then any Bloch vector can be decomposed accordingly as: \begin{equation} \vec{\alpha} = \sum_i^M \vec{a}_i * \vec{a}_i = \vec{\alpha}_1 + \vec{\alpha}_0\; , \; \vec{r} = \vec{r}_1 + \vec{r}_0 \; , \label{Space-Decom} \end{equation} where $\vec{\alpha}_1, \vec{r}_1 \in \mathcal{S}_1$ and $\vec{\alpha}_0, \vec{r}_0 \in \mathcal{S}_0$. We have the following theorem. \begin{theorem} For $M$ observables $A_i$, $i\in \{1,\ldots, M\}$, we have the following uncertainty relation \begin{align} \sum_{i=1}^M \Delta A_i^2 & \geq \frac{2}{N} \mathrm{Tr}[\mathcal{A}] + \mathcal{C}_0 - \mathcal{C}_{1} \; , \label{N-upper} \\ \sum_{i=1}^M \Delta A_i^2 & \leq \frac{2}{N} \mathrm{Tr}[\mathcal{A}] + \mathcal{C}_0 - \mathcal{C}_{2} \; . \label{N-low} \end{align} Here $\mathcal{C}_0 = \frac{1}{4}\vec{\alpha_1}^{\mathrm{T}} \mathcal{A}^{-1} \vec{\alpha}_1$ is state independent, and \begin{align} \mathcal{C}_1 & = \max_{\theta \in [0,\pi/2]} \{(|\vec{r}\,| \sin\theta + \frac{1}{2} |\mathcal{A}^{-1} \vec{\alpha}_1|)^2\sigma_{1}(\mathcal{A}) + |\vec{\alpha}_0||\vec{r}\,|\cos\theta \} , \\ \mathcal{C}_2 & = \min_{\theta \in [0,\pi/2]} \{(|\vec{r}\,| \sin\theta - \frac{1}{2} |\mathcal{A}^{-1}\vec{\alpha}_1|)^2 \sigma_{m}(\mathcal{A}) - |\vec{\alpha}_0||\vec{r}\,| \cos\theta \} , \end{align} where $\sigma_i(\cdot)$ are eigenvalues in descending order, $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ depend only on the norm of Bloch vector $|\vec{r}\,|$. \label{Theorem-N} \end{theorem} \noindent{\bf Proof:} According to equations (\ref{Var-Bloch-N}) and (\ref{Space-Decom}), we may write \begin{align} \sum_{i=1}^M \Delta A_i^2 = \frac{2}{N} \mathrm{Tr}[\mathcal{A}] + \frac{1}{4} \vec{\alpha_1}^{\mathrm{T}} \mathcal{A}^{-1} \vec{\alpha}_1 - (\vec{r}_1 - \frac{1}{2} \mathcal{A}^{-1} \vec{\alpha}_1)^{\mathrm{T}} \mathcal{A} (\vec{r}_1 - \frac{1}{2} \mathcal{A}^{-1} \vec{\alpha}_1) + \vec{\alpha}_0 \cdot \vec{r}_0 \; . \label{sum-Ai=} \end{align} Because $\mathcal{A} = \sum_{i=1}^M \vec{a}_i \vec{a}_i^{\, \mathrm{T}}$, it is invertible within $\mathcal{S}_1$. 
Equation (\ref{sum-Ai=}) has the lower bound \begin{equation} \sum_{i=1}^M \Delta A_i^2 \geq \frac{2}{N} \mathrm{Tr}[\mathcal{A}] + \frac{1}{4} \vec{\alpha_1}^{\mathrm{T}} \mathcal{A}^{-1} \vec{\alpha}_1 - (|\vec{r}_1| + \frac{1}{2} |\mathcal{A}^{-1} \vec{\alpha}_1|)^{2} \sigma_1(\mathcal{A}) - |\vec{\alpha}_0||\vec{r}_0| \; , \label{sum-Ai-lower} \end{equation} where $\sigma_1$ is the largest eigenvalue of $\mathcal{A}$. As $|\vec{r}\,|^2 = |\vec{r}_1|^2 + |\vec{r}_0|^2$, we have $|\vec{r}_1| = |\vec{r}\,|\sin\theta$ and $|\vec{r}_0| = |\vec{r}\,|\cos\theta$, $\theta \in [0,\pi/2]$, and therefore equation (\ref{sum-Ai-lower}) leads to equation (\ref{N-upper}). Equation (\ref{N-low}) is obtained analogously, where $\sigma_m$ denotes the smallest eigenvalue of $\mathcal{A}$. Q.E.D. As qubit systems have the widest applications in quantum information sciences, we present several important corollaries of Theorem \ref{Theorem-N} for qubits. As $d_{\mu\nu k}=0$ for a qubit, the variance in equation (\ref{Var-Bloch-N}) becomes $\Delta A^2 = \vec{a}\cdot \vec{a}- (\vec{a} \cdot \vec{r}\,)^2$. We have the following Corollary \begin{corollary} For $M$ observables $A_i = \vec{a}_i \cdot \vec{\lambda}$ in a qubit system, we have \begin{eqnarray} \sum_{i=1}^{M} \Delta A_{i}^2 & \geq & (1-|\vec{r}\,|^2) \sigma_{1} + \sigma_{2} + \sigma_{3} \; , \label{l3} \\ \sum_{i=1}^{M} \Delta A_{i}^2 & \leq & \sigma_{1} + \sigma_{2} + (1-|\vec{r}\,|^2) \sigma_{3} \; . \label{l2} \end{eqnarray} Here $\sigma_i$ are eigenvalues of $\mathcal{A} = \sum_{i=1}^M \vec{a}_i\vec{a}_i^{\,\mathrm{T}}$ with $\sigma_1\geq \sigma_2 \geq \sigma_3\geq 0$. \label{Theorem-singular} \end{corollary} \noindent{\bf Proof:} As $\vec{\alpha} = 0$ and $\Delta A^2 = \vec{a}\cdot \vec{a}- (\vec{a} \cdot \vec{r}\,)^2$, it is easy to get the following result: \begin{eqnarray} \sum_{i=1}^{M} \Delta A_{i}^2 = \mathrm{Tr}[\mathcal{A}] - \vec{r}^{\,\mathrm{T}} \mathcal{A} \, \vec{r} \; . \label{Var-norm-expression} \end{eqnarray} Because $\mathcal{A}$ is a positive semi-definite real symmetric matrix with eigenvalues $\{\sigma_1, \sigma_2, \sigma_3\}$, we have $|\vec{r}\,|^2 \sigma_3 \leq \vec{r}^{\,\mathrm{T}} \mathcal{A} \, \vec{r} \leq |\vec{r}\,|^2\sigma_1$ which directly leads to equations (\ref{l3}, \ref{l2}). Q.E.D. Corollary \ref{Theorem-singular} gives both the upper and lower bounds for the sum of the variances of $A_i$, which rely only on the norm of the Bloch vector of the density matrix, i.e., on $|\vec{r}\,|^2$. For the special case of a pure qubit state, where $|\vec{r}\,|=1$, we have \begin{equation} \sigma_2 + \sigma_3 \leq \sum_{i=1}^{M} \Delta A_{i}^2 \leq \sigma_1 + \sigma_2 \; . \end{equation} It is noticed that the inequalities in Theorem \ref{Theorem-singular} actually arise from Horn's inequalities for the sum of Hermitian matrices, as will become clear from the following Corollary: \begin{corollary} For two independent observables $A_1$ and $A_2$ in a qubit system, i.e., two observables that are not proportional, $A_1 \neq \kappa A_2$, with $ \Delta A_1^2 \geq c_1$ and $\Delta A_2^2 \geq c_2$ where $c_{1}$ and $c_2$ depend only on the Bloch vector norm of the state, the following holds \begin{equation} \Delta A_1^2 + \Delta A_2^2 > c_1+c_2 \; . \end{equation} That is, the lower bound of the sum of their variances is greater than the sum of their variances' lower bounds for all the states with the same Bloch vector norm.
\end{corollary} \noindent{\bf Proof:} According to Theorem \ref{Theorem-singular}, we have \begin{align} \Delta A_1^2 = |\vec{a}_1|^2 - \vec{r}^{\mathrm{T}} \mathcal{A}_1 \vec{r} \geq |\vec{a}_1|^2 - |\vec{r}\,|^2\sigma_1(\mathcal{A}_1) \equiv c_1\; , \\ \Delta A_2^2 = |\vec{a}_2|^2 - \vec{r}^{\mathrm{T}} \mathcal{A}_2 \vec{r} \geq |\vec{a}_2|^2 - |\vec{r}\,|^2\sigma_1(\mathcal{A}_2) \equiv c_2 \; , \end{align} where $\vec{a}_i$ are the Bloch vectors of $A_i$; $\mathcal{A}_i = \vec{a}_i \vec{a}_i^{\,\mathrm{T}}$, $i=1,2$ are real symmetric (Hermitian) matrices; and $\sigma_1(\cdot)$ denotes the largest eigenvalue of a matrix. Meanwhile, the sum of the two variances is \begin{equation} \Delta A_{1}^2 + \Delta A_2^2 = \vec{a}_1 \cdot \vec{a}_1 + \vec{a}_2 \cdot \vec{a}_2- \vec{r}^{\,\mathrm{T}}( \mathcal{A}_1 + \mathcal{A}_2 ) \vec{r} \; . \label{sum-lower-bound} \end{equation} The lower bound of equation (\ref{sum-lower-bound}) is $|\vec{a}_1|^2 + |\vec{a}_2|^2 - |\vec{r}\,|^2 \sigma_1(\mathcal{A}_1 + \mathcal{A}_2)$. However, Horn's inequalities \cite{Horn-inequalities} tell us that $\sigma_1(\mathcal{A}_1 + \mathcal{A}_2) < \sigma_1(\mathcal{A}_1) + \sigma_1(\mathcal{A}_2)$ for the present configuration of $\mathcal{A}_1$ and $\mathcal{A}_2$, i.e. $\vec{a}_1 \neq \kappa \vec{a}_2$. Q.E.D. Two physical quantities $A_1$ and $A_2$ may be regarded as linearly independent in the sense that $xA_1+yA_2 =0 \Leftrightarrow x=y=0$. In classical probability theory, the linear independence leads to the following: If there are probability distributions for $A_1$ and $A_2$ where $\Delta A_{1,2}^2$ could reach the values $c_{1,2}$ respectively, then there always exists a joint probability distribution for which $\Delta A_1^2 + \Delta A_2^2 = c_1+c_2$; e.g., the product of the two distributions will simply do the job. However, quantum theory predicts differently: there is no state (joint probability distribution for $A_1$ and $A_2$ in the statistical interpretation of quantum mechanics) in which the variances of $A_1$ and $A_2$ reach the individual minimum values $c_1$ and $c_2$ simultaneously, i.e., $\Delta A_1^2 + \Delta A_2^2$ cannot reach $c_1+c_2$, due to Horn's inequalities on the matrix sum in equation (\ref{sum-lower-bound}). A similar situation to that of equation (\ref{sum-lower-bound}) occurs in $N$-dimensional systems, i.e. \begin{equation} \Delta A_1^2 + \Delta A_2^2 = \frac{2}{N}\left(|\vec{a}_1|^2 + |\vec{a}_2|^2 \right) + (\vec{a}_1' + \vec{a}_2')\cdot \vec{r} - \vec{r}^{\, \mathrm{T}}(\mathcal{A}_1 + \mathcal{A}_2) \vec{r} \; , \end{equation} where $\vec{a}_i' = \vec{a}_i*\vec{a}_i$ and $\mathcal{A}_i = \vec{a}_i \vec{a}_i^{\mathrm{T}}$. If $|\vec{a}_i'| \gg \sigma_1(\mathcal{A}_{i})$, then $\Delta A_1^2 + \Delta A_2^2 >c_1+c_2$ because $\vec{r}$ cannot be anti-parallel (or parallel) to two non-parallel vectors $\vec{a}_1'$ and $\vec{a}_2'$ simultaneously, which originates from the parallel postulate of Euclidean geometry. If instead $|\vec{a}_i'| \ll \sigma_1(\mathcal{A}_{i})$, then $\Delta A_1^2 + \Delta A_2^2 >c_1+c_2$ still holds, due to Horn's inequalities for the matrix sum (see equation (\ref{sum-lower-bound})). The actual case of $\frac{|\vec{a}_i'|}{\sigma_1(\mathcal{A}_i)} \in [0, \left(\frac{2(N-1)}{N}\right)^{\frac{1}{2}}]$ \cite{NewUR} may be more complex because of the possible interference between the terms $\vec{a}*\vec{a}$ and $\vec{a} \vec{a}^{\,\mathrm{T}}$.
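Corollary \ref{Theorem-singular} and the strict inequality above are easy to test by direct sampling; a minimal sketch (Python/NumPy, purely illustrative and with our own variable names) for random qubit states and observables reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = 4
a_vecs = rng.normal(size=(M, 3))                 # Bloch vectors a_i of A_i = a_i . sigma
r = rng.normal(size=3)
r *= rng.uniform(0.1, 1.0) / np.linalg.norm(r)   # qubit Bloch vector, 0 < |r| <= 1

# For a qubit, Delta A_i^2 = |a_i|^2 - (a_i . r)^2
var_sum = sum(a @ a - (a @ r)**2 for a in a_vecs)

Amat = sum(np.outer(a, a) for a in a_vecs)       # calligraphic A = sum_i a_i a_i^T
s3, s2, s1 = np.linalg.eigvalsh(Amat)            # ascending, so s1 is the largest
lower = (1 - r @ r) * s1 + s2 + s3               # Eq. (l3)
upper = s1 + s2 + (1 - r @ r) * s3               # Eq. (l2)
print(lower <= var_sum <= upper)                 # True

# Two non-proportional observables: the variance sum exceeds the sum of the
# individual minima c_i = |a_i|^2 (1 - |r|^2) taken at fixed Bloch-vector norm
a1, a2 = a_vecs[0], a_vecs[1]
c1, c2 = (a1 @ a1) * (1 - r @ r), (a2 @ a2) * (1 - r @ r)
print((a1 @ a1 - (a1 @ r)**2) + (a2 @ a2 - (a2 @ r)**2) > c1 + c2)   # True (generically)
\end{verbatim}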
However for a complete set of orthogonal observables $\{A_i\}$, where $\mathrm{Tr}[A_iA_j] = 2|\vec{a}\,|^2 \delta_{ij}$, $i,j \in \{1,\ldots, N^2-1\}$, there exists a concise result for the variance-based uncertainty relation. We express a complete set of orthogonal observables $\{A_i\}$ in the form of \begin{equation} A_{i} = |\vec{a}| \sum_{j =1}^{N^2-1} O_{ij} \lambda_{j} \; , \label{orth-A} \end{equation} where $O\in$ SO($N^2-1$), and we have \begin{corollary} For the complete set of orthogonal observables $\{A_i\}$, there exists the following relation for the state-dependent variances \begin{eqnarray} \sum_{i=1}^{N^2-1} \Delta A_i ^2 = |\vec{a}|^2 \left(\frac{2(N^2-1)}{N} - |\vec{r}\,|^2 \right) \geq 2|\vec{a}|^2 (N-1) \; . \end{eqnarray} Here $\vec{r}$ is the Bloch vector of the quantum state. \label{Theorem-ortho-complete} \end{corollary} \noindent{\bf Proof:} Taking equation (\ref{orth-A}) into equation (\ref{Var-Bloch-N}) we have \begin{equation} \Delta A_i^2 = \frac{2}{N} |\vec{a}|^2 + |\vec{a}|^2 \sum_{\mu,\nu, k} O_{i\mu}O_{i\nu}d_{\mu\nu k} r_k - |\vec{a}|^2(\sum_{\mu=1}^{N^2-1} O_{i\mu}r_{\mu})^2 \; . \end{equation} Summing over $i$, we have \begin{align} \sum_{i=1}^{N^2-1} \Delta A_i^2 & = |\vec{a}|^2 \frac{2(N^2-1)}{N} + |\vec{a}|^2 \sum_{\mu,k=1}^{N^2-1} d_{\mu\mu k} r_{k} - |\vec{a}|^2 |\vec{r}\,|^2 \nonumber \\ & = |\vec{a}|^2 \left(\frac{2(N^2-1)}{N} - |\vec{r}\,|^2 \right) \; , \end{align} where the condition $\sum_{\mu=1}^{N^2-1} d_{\mu\mu k} = 0$, $\forall k \in \{1,\ldots, N^2-1\}$ is employed. As the Bloch vectors $|\vec{r}\,|^2 \leq \frac{2(N-1)}{N}$, the theorem is then sound. Q.E.D. Corollary \ref{Theorem-ortho-complete} states that when a complete set of orthogonal observables is considered, the sum of their variances appears to be an identity. For Pauli matrices in SU(2), Corollary \ref{Theorem-ortho-complete} reduces to $\Delta \sigma_1^2 + \Delta \sigma_2^2 + \Delta \sigma_3^2 = 3- |\vec{r}\,|^2$ which agrees with the result of \cite{NewUR}. \section{The detection of entanglement via uncertainty relations} Uncertainty relations can also be used to characterize quantum entanglement \cite{Quant}. We consider the following $N\times N$ quantum state in Bloch representation \cite{Separability-Horn} \begin{align} \rho_{AB} & = \frac{1}{N^2} \mathds{1} \otimes \mathds{1} + \frac{1}{2N} \vec{r}\cdot \vec{\lambda}\otimes \mathds{1} + \frac{1}{2N} \mathds{1} \otimes \vec{s} \cdot \vec{\lambda} + \frac{1}{4} \sum_{\mu=1}^{N^2-1} \sum_{\nu=1}^{N^2-1} \mathcal{T}_{\mu\nu} \, \lambda_{\mu} \otimes \lambda_{\nu} \; , \label{rho-Bloch} \end{align} where $r_{\mu} = \mathrm{Tr}[\rho_{AB}(\lambda_{\mu} \otimes \mathds{1})]$, $s_{\nu} = \mathrm{Tr}[\rho_{AB} (\mathds{1} \otimes \lambda_{\nu})]$, and $\mathcal{T}_{\mu\nu} = \mathrm{Tr}[\rho_{AB} (\lambda_{\mu} \otimes \lambda_{\nu})]$ is called the correlation matrix. The reduce density matrices are $\rho_A = \mathrm{Tr}_B[\rho_{AB}] = \frac{1}{N}\mathds{1} + \frac{1}{2}\vec{r} \cdot \vec{\lambda}$, $\rho_B = \mathrm{Tr}_A[\rho_{AB}] = \frac{1}{N}\mathds{1} + \frac{1}{2}\vec{s} \cdot \vec{\lambda}$, and the quantum state $\rho_{AB}$ is separable when \begin{equation} \vec{r} = \sum_k p_k \vec{r}_k\; , \vec{s} = \sum_k p_k \vec{s}_k\; , \; \mathcal{T} = \sum_{k} p_k \vec{r}_k \vec{s}_k^{\, \mathrm{T}}\; . 
\end{equation} Here, $\rho_{AB} = \sum_k p_k \rho^{(A)}_k \otimes \rho^{(B)}_{k}$, $\{p_i\}$ is probability distribution, and $\vec{r}_k$ and $\vec{s}_k$ denote the Bloch vectors of $\rho_k^{(A)}$ and $\rho_k^{(B)}$ respectively. We call a set of local observables $M_i = A_i\otimes \mathds{1} + \mathds{1} \otimes B_i$ to be complete and orthonormal if $\mathrm{Tr}[A_iA_j] = \mathrm{Tr}[B_iB_j] = 2\delta_{ij}$, $\forall i,j \in \{1,\ldots, N^2-1\}$, and the following Corollary exists: \begin{corollary} If an $N\times N$ state $\rho_{AB}$ is separable, then the following relation exists for arbitrary complete orthonormal local observables $\{M_i=A_i\otimes \mathds{1} + \mathds{1} \otimes B_i\}$ \begin{align} \sum_{i=1}^{N^2-1} \Delta M_i^2 & \geq 4(N-1) \; . \label{uncertainty-ent} \end{align} Equation (\ref{uncertainty-ent}) directly tells that if $\rho_{AB}$ is separable, then \begin{align} ||\mathcal{T}||_{\mathrm{KF}} & \leq \frac{2(N-1)}{N} - \frac{1}{2} \left( |\vec{r}\,| - |\vec{s}\,| \right)^2 \; , \label{uncertainty-BR} \end{align} where $||\mathcal{T}||_{\mathrm{KF}} \equiv \sum_i \sigma_{i}(\mathcal{T})$ is the Ky Fan norm of a matrix, $\vec{r}$ and $\vec{s}$ are the Bloch vectors of the reduce density matrices of particles $A$ and $B$. \label{Coro-separable} \end{corollary} \noindent{\bf Proof:} Taking equation (\ref{rho-Bloch}) into $\Delta M_i^2 = \mathrm{Tr}[M_i^2 \rho_{AB}] - \mathrm{Tr}[M_i\rho_{AB}]^2$, we have \begin{align} \sum_{i=1}^{N^2-1} \Delta M_i^2 & = \frac{4}{N}(N^2-1) + 2 \sum_i \vec{a}_i^{\,\mathrm{T}} \mathcal{T} \vec{b}_i - \sum_i (\vec{r} \cdot \vec{a}_i + \vec{s} \cdot \vec{b}_i)^2\nonumber \\ & \leq \frac{4}{N}(N^2-1) + 2 \sum_i \vec{a}_i^{\,\mathrm{T}} \mathcal{T} \vec{b}_i - \left( |\vec{r}\,| - |\vec{s}\,| \right)^2 \; . \label{DeltaMi-NS} \end{align} While taking equation (\ref{rho-Bloch}) into $\Delta M_i^2 = \mathrm{Tr}[M_i^2\rho_{AB}] - \mathrm{Tr}[M_i\rho_{AB}]^2$ with $\mathcal{T} = \sum_{k} p_k \vec{r}_k \vec{s}_k^{\mathrm{T}}$, we have \begin{align} \sum_{i=1}^{N^2-1} \Delta M_i^2 & = \frac{4(N^2-1)}{N} + 2 \sum_{i,k} p_k r_{ki}s_{ki} - \sum_i \left(\sum_{ k}p_kr_{ki}+ p_ks_{ki} \right)^2 \nonumber \\ & \geq \frac{4(N^2-1)}{N} - \sum_{k}p_k (|\vec{r}_k|^2+|\vec{s}_k|^2) \nonumber \\ & \geq 4(N-1) \; . \label{DeltaMi-S} \end{align} Here, $r_{ki}=\vec{r}_k\cdot \vec{a}_i$, $s_{ki}=\vec{s}_k\cdot \vec{b}_i$, and $|\vec{r}\,|^2, |\vec{s}\,|^2 \leq \frac{2(N-1)}{N}$; the relation $2\sum_{k} p_k r_{ki}s_{ki} = \sum_{k} p_k \left[(r_{ki}+s_{ki})^2 - r_{ki}^2 - s_{ki}^2 \right]$ is used. Then equations (\ref{DeltaMi-NS}, \ref{DeltaMi-S}) give \begin{equation} \sum_{i=1}^{N^2-1} \vec{a}_i^{\,\mathrm{T}} \mathcal{T} \vec{b}_i \geq - \frac{2(N-1)}{N} +\frac{1}{2} \left( |\vec{r}\,| - |\vec{s}\,| \right)^2\; , \label{pre-result} \end{equation} which is satisfied by all possible bases $\vec{a}_i$ and $\vec{b}_i$. By choosing $\vec{a}_i = \vec{u}_i$ and $\vec{b}_i = -\vec{v}_i$, we have \begin{equation} ||\mathcal{T}||_{\mathrm{KF}} \leq \frac{2(N-1)}{N} - \frac{1}{2} \left( |\vec{r}\,| - |\vec{s}\,| \right)^2 \; . \end{equation} Here $\vec{u}_i$ and $\vec{v}_i$ are the left and right singular vectors of $\mathcal{T}$. Q.E.D. Corollary \ref{Coro-separable} represents an uncertainty-relation-based entanglement criterion for bipartite mixed states. Equation (\ref{uncertainty-BR}) provides a better upper bound than Theorem 1 of Ref. \cite{Separable-BR}. When the subsystems of an $N\times N$ quantum state are completely mixed, i.e. 
$|\vec{r}\,|=|\vec{s}\,|=0$, the computable cross-norm or realignment (CCNR) criterion \cite{norm-Rudolph, norm-realign}, Bloch representation criterion \cite{Separable-BR}, the covariance matrix criterion \cite{covariance-matrix} and the local uncertainty relation criterion of equation (\ref{uncertainty-BR}) all converge to the same relation: $||\mathcal{T}||_{\mathrm{KF}} \leq \frac{2(N-1)}{N}$. Most importantly, our method also provides a way to construct the optimal observable set $\{M_i\}$ to detect the entanglement, which is generally a difficult task for the uncertainty-relation-based entanglement criteria. That is, the Bloch vectors should be properly chosen according to the left and right singular vectors of the correlation matrix $\mathcal{T}$. \section{Conclusion} In this work we have proposed a state-independent variance-based uncertainty relation by virtue of the observables' Bloch vectors. By exploring the eigenvalues of the Hermitian matrix composed of the Bloch vectors, the upper and lower bounds for the sum of variances are obtained. It is found that the incompatibility of observables may be attributed to geometric relations related to the parallel postulate of Euclidean geometry and the Horn's conjecture on the Hermitian matrix sum, which provides an alternative interpretation for the variance-based uncertainty relation. Also, our method leads to a practical entanglement criterion for bipartite mixed states. Considering the important roles it plays in the separability problem \cite{Separability-Horn, Separable-decomp}, we believe the Bloch representation can be as a useful tool for further quantitative study and deeper understanding of the fundamental concepts in quantum theory, e.g. the uncertainty relation, quantum entanglement, and quantum steering \cite{Steering}. \end{document}
\begin{document} \title{Tetrapartite entanglement features of W-Class state in uniform acceleration} \author{Qian Dong$^{1}$} \email{E-mail address: [email protected] (Q. Dong). } \author{Ariadna J. Torres-Arenas$^{1}$} \email{E-mail address: [email protected] (Ariadna J. Torres-Arenas). } \author{Guo-Hua Sun$^{2}$} \email{E-mail address: [email protected] (G. H. Sun). } \author{Shi-Hai Dong$^1$} \email[Corresponding author:]{E-mail address: [email protected] (S. H. Dong), Tel: 52-55-57296000 ext. 52522. } \affiliation{$^1$ Laboratorio de Informaci\'{o}n Cu\'{a}ntica, CIDETEC, Instituto Polit\'{e}cnico Nacional, UPALM, CDMX 07700, Mexico} \affiliation{$^2$ Catedr\'{a}tica CONACyT, Centro de Investigaci\'{o}n en Computaci\'{o}n, Instituto Polit\'{e}cnico Nacional, UPALM, CDMX 07738, Mexico} \pacs{03.67.-a, 03.67.Mn, 03.65.Ud, 04.70.Dy} \keywords{Tetrapartite, W-Class state, entanglement, Dirac field, noninertial frames} \begin{abstract} Using the single-mode approximation, we first calculate entanglement measures such as negativity ($1-3$ and $1-1$ tangles) and von Neumann entropy for a tetrapartite W-Class system in a noninertial frame and then analyze the whole-entanglement measures, the residual tangle $\pi_{4}$ and the geometric average $\Pi_{4}$ of tangles. Notice that the difference between $\pi_{4}$ and $\Pi_{4}$ is very small or disappears with an increasing number of accelerated observers. The entanglement properties are compared among the different cases from one accelerated observer to four accelerated observers. The results show that there still exists entanglement for the complete system even in the limit of infinite acceleration. The degree of entanglement vanishes for the $1-1$ tangle case when the acceleration parameter $r > 0.472473$. We reexamine the Unruh effect in noninertial frames. It is shown that the entangled system in which only one qubit is accelerated is more robust than those entangled systems in which two, three or four qubits are accelerated. It is also found that the von Neumann entropy $S$ of the total system always increases with the number of accelerated observers, and that $S_{\kappa\xi}$ and $S_{\kappa\zeta\delta}$ with two and three noninertial qubits involved first {\it increase} and then {\it decrease} with the acceleration parameter $r$, while they are equal to the constants $1$ and $0.811278$ respectively when no noninertial qubit is involved. \end{abstract} \maketitle \section{Introduction} One of the most studied notions of quantum correlations is entanglement, due to its important role in quantum information theory. The study of entanglement began with Einstein, Podolsky and Rosen \cite{Einstein}, and Schr\"odinger \cite{Schrodinger1, Schrodinger2, Schrodinger3} in the 1930s. Now, entanglement is regarded as a key resource in quantum technology and it is often intertwined with quantum non-locality \cite{Werner, Horodecki, Guhne, Bell}. To quantify entanglement, a well justified and mathematically tractable measure is required. Negativity is one of the most common measures used to quantify entanglement \cite{Peres, Zyczkowski} as well as whole entanglement \cite{Yazhou}. Other useful measures are the von Neumann entropy and the relative entropy \cite{Vedral1, Vedral2, Vedral3}. Up to now, most works have treated bipartite systems, with only a few addressing multipartite systems \cite{Murao, Dur, Bennet3}, since entanglement shared among two or more parties \cite{Horodecki, Modi, Alsing, Montero, Shamirzaie, Metwally} exhibits novel features.
Collections of shared entangled qubits allow one to perform a number of quantum communication protocols, such as quantum dense coding and quantum teleportation \cite{Bennet1, Bennet2, Bouwmeester}, and they play a significant role in efficient quantum communication \cite{Gisin, Terhal, Sen, yu16, yu17, yu18} and computational tasks \cite{Raussendorf, Briegel}. In this work, we will investigate the tetrapartite entanglement of Dirac fields and consider the implementation of quantum information tasks between observers in uniform acceleration for a tetrapartite state that is initially entangled in a W-Class state. This is because quantum information in noninertial frames, which combines general relativity, quantum field theory and quantum information theory, has been a focus of research in recent years. Its main aim is to incorporate relativistic effects to improve quantum-information tasks and to understand how such protocols behave in curved space-times. Since the tripartite entangled state was worked out \cite{Alsing} and the Unruh effect was studied, most papers have focused on two main families of states, i.e., the Greenberger-Horne-Zeilinger (GHZ) state, the W state and related states, with the W state studied less due to the complexity of its calculations \cite{PRA_83_(2011)_012111, wang, ou, horn, seba, park16}. It should be pointed out that the computation of entanglement for the tripartite pure or mixed state in an accelerated frame is much more complicated because the density matrix cannot be written in the form of an X matrix. Nevertheless, it has been recognized that the entanglement of the W-Class state is more robust than that of the GHZ and related states \cite{dong18,peng10}. This is another reason why we study the W-Class entangled pure states even though the relevant calculations are rather complicated in comparison with other entangled states. Among recent studies of the Unruh effect in quantum information, it has been found that in the fermionic case the degree to which entanglement is degraded depends on the choice of Unruh modes. As done before, we also make use of the Rindler coordinates which define two disconnected regions I and II \cite{Takagi, Martin, Martin1}. For the tetrapartite W-Class state shared by Alice, Bob, Charlie and David, we will consider all cases from one accelerated observer to four accelerated observers and calculate their negativities and the whole-entanglement $\pi_{4}$-tangle and $\Pi_{4}$-tangle, restricting ourselves to the single-mode approximation. This work is organized as follows. In Section II we describe the tetrapartite entanglement of the W-Class state for the various cases, from one accelerated observer to four accelerated observers. We obtain their density matrices and calculate their negativities ($1-1$ tangle and $1-3$ tangle) and whole-entanglement measures. The von Neumann entropy will be studied in Section III. Finally, some discussions and concluding remarks are given in Section IV. \section{Tetrapartite entanglement from one to four accelerated observers} A generalization for $N$ qubits of the W-Class entangled state which we are going to consider in this work has the form \cite{Dur2}: \begin{equation}\label{w-class} \left|W\right\rangle_{N}=\frac{1}{\sqrt{N}}\left|N-1, 1\right\rangle, \end{equation} where $\left|N-1, 1\right\rangle$ is the symmetric state involving one "1" and $(N-1)$ "0"s.
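Equation (\ref{w-class}) is simple to realize numerically; the sketch below (Python/NumPy; the qubit-ordering convention is our own choice) builds the $n$-qubit W-Class state as a state vector and checks it for $N=4$:
\begin{verbatim}
import numpy as np

def w_state(n):
    # |W>_n = (1/sqrt(n)) * (sum of the n basis states with a single "1")
    psi = np.zeros(2**n)
    for k in range(n):
        psi[1 << k] = 1.0      # basis index with qubit k excited (our ordering convention)
    return psi / np.sqrt(n)

w4 = w_state(4)                # the tetrapartite case of Eq. (w-class)
print(np.isclose(np.linalg.norm(w4), 1.0))   # normalized
print(np.nonzero(w4)[0])                     # [1 2 4 8]: one excitation per term
\end{verbatim}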
For the tetrapartite system $N=4$, the W-Class entangled state can be written as follows \begin{equation}\label{w} \begin{array}{l} |W\rangle=\frac{1}{2}\Big[\left|1_{\hat{A}}0_{\hat{B}}0_{\hat{C}}0_{\hat{D}}\right\rangle+\left|0_{\hat{A}}1_{\hat{B}}0_{\hat{C}}0_{\hat{D}}\right\rangle\\[2mm] ~~~~~~~~+\left|0_{\hat{A}}0_{\hat{B}}1_{\hat{C}}0_{\hat{D}}\right\rangle +\left|0_{\hat{A}}0_{\hat{B}}0_{\hat{C}}1_{\hat{D}}\right\rangle\Big], \end{array} \end{equation}where the subscripts $A, B, C$ and $D$ denote the observers and the Minkowski mode labels $M$ are omitted for observers $A, B, C$ and $D$. To describe this entangled W-Class state in a noninertial frame, it is conventional to use Rindler coordinates. The Rindler coordinates describe a family of observers with uniform acceleration and divide Minkowski space-time into two mutually inaccessible Regions I and II. The rightward accelerating observers are located in Region I and are causally disconnected from their analogous counterparts in Region II \cite{Socolovsky, Mikio}. We first give a brief review of the connection between the vacuum and excitation states in Minkowski coordinates and those in Rindler coordinates. Our setting consists of two observers: Alice and Bob. We let Alice stay stationary, while Bob moves with uniform acceleration. Consider Bob to be accelerated uniformly in the $(t, z)$ plane. Rindler coordinates $(\tau, \xi)$ are appropriate for describing the viewpoint of an observer moving with uniform acceleration. Two different sets of the Rindler coordinates, which differ from each other by an overall change in sign, are necessary for covering Minkowski space. These sets of coordinates define two Rindler regions that are disconnected from each other \cite{Birrel and Davies, Alsing} \begin{equation}\label{} \begin{array}{l} t=a^{-1}e^{a\xi}\sinh(a\tau), ~z=a^{-1}e^{a\xi}\cosh(a\tau), ~{\rm Region~ I}\\[2mm] t=-a^{-1}e^{a\xi}\sinh(a\tau), ~z=-a^{-1}e^{a\xi}\cosh(a\tau), ~{\rm Region~ II}. \end{array} \end{equation} A free Dirac field in $(3 + 1)$ dimensional Minkowski space satisfies the Dirac equation \begin{equation}\label{} i\gamma^{\mu}\partial_{\mu}\psi-m\psi=0, \end{equation} where $m$ is the particle mass, $\gamma^{\mu}$ are the Dirac gamma matrices, and $\psi$ is a spinor wave function, which can be expanded in the complete orthogonal set of fermion modes $\psi_{k}^{+}$ and antifermion modes $\psi_{k}^{-}$ as \begin{equation}\label{} \psi=\int(a_{k}\psi_{k}^{+}+b_{k}^{\dagger}\psi_{k}^{-})dk, \end{equation}where $a_{k}^{\dagger}(b_{k}^{\dagger})$ and $a_{k}(b_{k})$ are the creation and annihilation operators for fermions (antifermions) of momentum $k$, respectively. They satisfy the anticommutation relation $\{a_{i}, a_{j}^{\dagger}\}=\{b_{i}, b_{j}^{\dagger}\}=\delta_{ij}$. The quantum field theory for a Rindler observer can be constructed by expanding the spinor field in terms of a complete set of fermion and antifermion modes in Regions I and II as \begin{equation}\label{} \psi=\int\sum_{\tau}(c_{k}^{\tau}\psi_{k}^{\tau+}+d_{k}^{\tau\dagger}\psi_{k}^{\tau-})dk, ~~~~\tau\in \{\rm I, \rm II\}. \end{equation} Similarly, $c_{k}^{\tau\dagger}(d_{k}^{\tau\dagger})$ and $c_{k}^{\tau}(d_{k}^{\tau})$ are the creation and annihilation operators for fermions (antifermions), respectively, acting on Region I~(II) for $\tau=\rm I~(\rm II)$ and satisfying anticommutation relations similar to those above.
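As a quick consistency check on the coordinates above (a SymPy sketch of our own, not taken from the cited references), one can verify that constant-$\xi$ worldlines in Region I are hyperbolae $z^{2}-t^{2}=a^{-2}e^{2a\xi}$ and that the induced line element takes the Rindler form $dt^{2}-dz^{2}=e^{2a\xi}(d\tau^{2}-d\xi^{2})$:
\begin{verbatim}
import sympy as sp

a = sp.symbols('a', positive=True)
xi, tau, dtau, dxi = sp.symbols('xi tau dtau dxi', real=True)
t = sp.exp(a * xi) / a * sp.sinh(a * tau)     # Region I coordinates
z = sp.exp(a * xi) / a * sp.cosh(a * tau)

print(sp.simplify(z**2 - t**2))               # exp(2*a*xi)/a**2: a hyperbola for fixed xi

dt = sp.diff(t, tau) * dtau + sp.diff(t, xi) * dxi
dz = sp.diff(z, tau) * dtau + sp.diff(z, xi) * dxi
print(sp.factor(sp.simplify(sp.expand(dt**2 - dz**2))))
# simplifies to exp(2*a*xi)*(dtau**2 - dxi**2) (the printed factorization may vary)
\end{verbatim}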
The relation between creation and annihilation operators in Minkowski and Rindler space times can be found by using a Bogoliubov transformation \begin{equation}\label{} a_{k}=\cos(r)\, c_{k}^{\rm I}-\sin(r)\, d_{-k}^{\rm II\dagger}, b_{k}=\cos(r)\, d_{k}^{\rm I}-\sin(r)\, c_{-k}^{\rm II\dagger}, \end{equation} where $\cos(r)=1/\sqrt{1+e^{-2\pi\omega_{k} c/a}}$ with $\omega_{k}=\sqrt{|\rm {\bf k}|^2+m^2}$ and $r$ is Bob's acceleration parameter with the range $r\in[0, \pi/4]$ for $a\in[0, \infty)$. It is seen from this equation and its adjoint that the Bogoliubov transformation mixes fermions in Region I and antifermions in Region II. As a result, it is assumed that the Minkowski particle vacuum state for mode $k$ based on Rindler Fock states is given by \begin{equation}\label{0state} |0_{k}\rangle_{M}=\sum_{n=0}^{1}A_{n}|n_{k}\rangle_{I}^{+}|n_{-k}\rangle_{II}^{-}, \end{equation}where the Rindler Region I or II Fock states carry a subscript I or II, respectively, on the kets, while the Minkowski Fock states are indicated by the subscript $M$ on the kets. In what follows, we are only interested in the {\it single mode approximation} \cite{Alsing, PRA_86_2012_012306, wang, arXiv1, qiangplb, qian18, annals2011}, i.e., $w_{A,B,C,D}=w$ and uniform acceleration $a_{A,B,C,D}=a$ ($a_{w,M}\approx a_{w,U}$ is used to relate Minkowski and Unruh modes) for simplicity, and we will drop all labels $(k,-k)$ on the states. Even though the single mode approximation is invalid for general states, it holds for a family of peaked Minkowski wave packets provided that constraints imposed by an appropriate Fourier transform are satisfied \cite{bruschi10}. Using the single mode approximation, Bob's vacuum state $|0_B\rangle$ and one-particle state $|1_B\rangle$ in Minkowski space are transformed into Rindler space. By applying the creation and annihilation operators to equation (\ref{0state}) above and using the normalization condition, we obtain \cite{Alsing, PRA_86_2012_012306, wang, arXiv1, qiangplb, qian18, annals2011} \begin{equation}\label{01} \begin{array}{l} |0\rangle_{M}=\cos (r) |0_\Rmnum{1} 0_{\Rmnum{2}}\rangle+\sin (r) |1_\Rmnum{1} 1_{\Rmnum{2}}\rangle, \\[2mm] |1\rangle_{M} =|1_\Rmnum{1} 0_{\Rmnum{2}}\rangle, \end{array} \end{equation} where $|n_{B_\Rmnum{1}}\rangle$ and $|n_{B_{\Rmnum{2}}}\rangle$ ($n=0, 1$) are the mode decompositions of $|n_B\rangle$ into the two causally disconnected Regions I and II in Rindler space. It should be pointed out that Bruschi {\it et al.} discussed the Unruh effect {\it beyond} the {\it single mode approximation} \cite{bruschi10}, in which two complex numbers $q_{R}$ and $q_{L}$ (the subindices $R$ and $L$ corresponding to the right and left Rindler wedges, i.e., Regions I and II) are used to construct the one-particle state, i.e., $|1\rangle=q_{R}|1_{R}0_{L}\rangle+q_{L}|0_{R}1_{L}\rangle$. In the present single-mode-approximation case one has $q_{R}=1, q_{L}=0$, consistent with the normalization condition $|q_{R}|^2+|q_{L}|^2=1$. It is also worth noting that a Minkowski mode that defines the Minkowski vacuum is related to a highly nonmonochromatic Rindler mode rather than a single mode with the same frequency (see Refs. \cite{Martin,Martin1,bruschi10} for details). Other relevant contributions \cite{eduardo12,bruschi13,bruschiclass,eduardo13, mann13} have also been made. Since the accelerated observers are confined to Region I, we have to trace out the antiparticle modes in Region II.
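The bookkeeping in the four cases treated below amounts to substituting Eq.~(\ref{01}) for each accelerated mode and then tracing out the Region II partners. A minimal sketch for a single accelerated observer (Python/NumPy; the ordering conventions, helper names and the test value $r=0.4$ are ours, given only as an illustration) is:
\begin{verbatim}
import numpy as np

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*vs):
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

def rindler(bit, r):
    # Eq. (01) for the accelerated mode: Minkowski |0>,|1> -> Rindler modes (I, II)
    if bit == 0:
        return np.cos(r) * kron(zero, zero) + np.sin(r) * kron(one, one)
    return kron(one, zero)

def w_d(r):
    # Eq. (01) applied to David's qubit of the W state; order (A, B, C, D_I, D_II)
    terms = [(one, zero, zero, 0), (zero, one, zero, 0),
             (zero, zero, one, 0), (zero, zero, zero, 1)]
    return 0.5 * sum(kron(a, b, c, rindler(d, r)) for a, b, c, d in terms)

def negativity(rho, dims, sys):
    # Negativity for the cut sys | rest: (sum |eigenvalues of rho^{T_sys}| - 1) / 2
    n = len(dims)
    pt = np.moveaxis(rho.reshape(dims + dims), [sys, n + sys], [n + sys, sys])
    pt = pt.reshape(int(np.prod(dims)), -1)
    return (np.abs(np.linalg.eigvalsh(pt)).sum() - 1) / 2

r = 0.4
psi = w_d(r)
rho = np.outer(psi, psi.conj()).reshape([2] * 5 + [2] * 5)
rho_ABCDI = np.trace(rho, axis1=4, axis2=9).reshape(16, 16)  # trace out Region II
print(np.isclose(np.trace(rho_ABCDI), 1.0))                  # still normalized
print(negativity(rho_ABCDI, [2, 2, 2, 2], 0))                # negativity, A | B C D_I cut
\end{verbatim}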
Let us apply Eq. (\ref{01}) to the $\left|W\right\rangle$ state (\ref{w}). We study this entanglement system in four different cases. First, we study the case when David is accelerated, \begin{equation} \begin{array}{l} \left|W_{D}\right\rangle=\displaystyle\frac{1}{2}\Big[\sin r (\left|0_{\hat{A}} 0_{\hat{B}} 1_{\hat{C}} 1_{\hat{\text{DI}}}\right\rangle + \left|0_{\hat{A}} 1_{\hat{B}} 0_{\hat{C}} 1_{\hat{\text{DI}}}\right\rangle\\[2mm] ~~~~~~~~~+\left|1_{\hat{A}} 0_{\hat{B}}0_{\hat{C}}1_{\hat{\text{DI}}}\right\rangle)+\cos r (\left|0_{\hat{A}}0_{\hat{B}}1_{\hat{C}}0_{\hat{\text{DI}}}\right\rangle \\[2mm] ~~~~~~~~~+ \left|0_{\hat{A}}1_{\hat{B}}0_{\hat{C}}0_{\hat{\text{DI}}}\right\rangle +\left|1_{\hat{A}}0_{\hat{B}}0_{\hat{C}}0_{\hat{\text{DI}}}\right\rangle )+\left|0_{\hat{A}}0_{\hat{B}}0_{\hat{C}}1_{\hat{\text{DI}}}\right\rangle\Big]. \end{array} \end{equation} Second, we consider the case when Charlie and David are accelerated, \begin{equation} \begin{array}{l} \left|W_{CD}\right\rangle=\displaystyle\frac{1}{2}\Big[ \sin ^2 r (\left|0_{\hat{A}}1_{\hat{B}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle +\left|1_{\hat{A}}0_{\hat{B}}1_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle)\\[2mm] ~~~~~~~~~~~~~~~+ \cos ^2 r (\left|0_{\hat{A}}1_{\hat{B}}0_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle + \left|1_{\hat{A}}0_{\hat{B}}0_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle)\\[2mm] ~~~~~~~~~~~~~~~+\cos r \sin r (\left|0_{\hat{A}}1_{\hat{B}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle+\left|0_{\hat{A}}1_{\hat{B}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle\\ ~~~~~~~~~~~~~~~+ \left|1_{\hat{A}}0_{\hat{B}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle+\left|1_{\hat{A}}0_{\hat{B}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle)\\[2mm] ~~~~~~~~~~~~~~~+ \sin r (\left|0_{\hat{A}}0_{\hat{B}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle+ \left|0_{\hat{A}}0_{\hat{B}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle)\\ ~~~~~~~~~~~~~~~+ \cos r (\left|0_{\hat{A}}0_{\hat{B}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle+ \left|0_{\hat{A}}0_{\hat{B}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle)\Big]. 
\end{array} \end{equation} Third, we consider the case when Bob, Charlie and David are accelerated, \begin{widetext} \begin{equation} \begin{array}{l} \left|W_{BCD}\right\rangle=\displaystyle\frac{1}{2}\Big(\sin ^3 r \left|1_{\hat{A}} 1_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle + \cos ^3 r \left|1_{\hat{A}} 0_{\hat{\text{BI}}} 0_{\hat{\text{CI}}} 0_{\hat{\text{DI}}}\right\rangle + \cos ^2 r \sin r \left|1_{\hat{A}} 0_{\hat{\text{BI}}} 0_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle +\cos ^2 r \sin r \left|1_{\hat{A}} 0_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 0_{\hat{\text{DI}}}\right\rangle\\[2mm] ~~~~~~~~~~~~~~~+\cos r \sin ^2 r \left|1_{\hat{A}} 0_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle + \sin r \cos ^2 r \left|1_{\hat{A}} 1_{\hat{\text{BI}}} 0_{\hat{\text{CI}}} 0_{\hat{\text{DI}}}\right\rangle +\sin ^2 r \cos r \left|1_{\hat{A}} 1_{\hat{\text{BI}}} 0_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle + \sin ^2 r \cos r \left|1_{\hat{A}} 1_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 0_{\hat{\text{DI}}}\right\rangle\\[2mm] ~~~~~~~~~~~~~~~+\sin ^2 r \left|0_{\hat{A}} 1_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle +\cos ^2 r \left|0_{\hat{A}} 0_{\hat{\text{BI}}} 0_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle+ \cos r \sin r \left|0_{\hat{A}} 0_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle + \sin r \cos r \left|0_{\hat{A}} 1_{\hat{\text{BI}}} 0_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle\\[2mm] ~~~~~~~~~~~~~~~+\sin r \cos r \left|0_{\hat{A}} 1_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 0_{\hat{\text{DI}}}\right\rangle + \sin ^2 r \left|0_{\hat{A}} 1_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle + \cos ^2 r \left|0_{\hat{A}} 1_{\hat{\text{BI}}} 0_{\hat{\text{CI}}} 0_{\hat{\text{DI}}}\right\rangle+\sin ^2 r \left|0_{\hat{A}} 1_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle \\[2mm] ~~~~~~~~~~~~~~~+\cos ^2 r \left|0_{\hat{A}} 0_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 0_{\hat{\text{DI}}}\right\rangle + \cos r \sin r \left|0_{\hat{A}} 0_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle+\cos r \sin r\left|0_{\hat{A}} 1_{\hat{\text{BI}}} 0_{\hat{\text{CI}}} 1_{\hat{\text{DI}}}\right\rangle + \sin r \cos r \left|0_{\hat{A}} 1_{\hat{\text{BI}}} 1_{\hat{\text{CI}}} 0_{\hat{\text{DI}}}\right\rangle\Big). 
\end{array} \end{equation} \end{widetext} Fourth, we will study the case when Alice, Bob, Charlie and David are accelerated, \begin{widetext} \begin{equation} \begin{array}{l} \left|W_{ABCD}\right\rangle=\displaystyle\frac{1}{2}\Big(\cos ^3 r \left|0_{\hat{\text{AI}}}0_{\hat{\text{BI}}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle + \cos ^3 r \left|0_{\hat{\text{AI}}}0_{\hat{\text{BI}}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle + \cos ^2 \sin r \left|0_{\hat{\text{AI}}}0_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle+\cos ^2 r \sin r \left|0_{\hat{\text{AI}}}1_{\hat{\text{BI}}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle \\[2mm] ~~~~~~~~~~~~~~+\cos ^2 r \sin r \left|0_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle + \cos ^2 r \sin r \left|0_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle +\cos r \sin ^2 r \left|0_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle + \cos r \sin ^2 r \left|0_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle \\[2mm] ~~~~~~~~~~~~~~+\cos r \sin ^2 r \left|0_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle +\sin r \cos^2 r \left|1_{\hat{\text{AI}}}0_{\hat{\text{BI}}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle + \cos^2 r \sin r \left|1_{\hat{\text{AI}}}0_{\hat{\text{BI}}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle + \sin^2 r \cos r \left|1_{\hat{\text{AI}}}0_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle \\[2mm] ~~~~~~~~~~~~~~+\sin^2 r \cos r \left|1_{\hat{\text{AI}}}0_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle + \cos r \sin^2 r \left|1_{\hat{\text{AI}}}0_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle + \sin r \cos^2 r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}0_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle +\sin r \cos^2 r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}0_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle \\[2mm] ~~~~~~~~~~~~~+\sin^2 r \cos r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle + \sin^2 r \cos r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle +\sin^2 r \cos r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle + \sin^2 r \cos r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle \\[2mm] ~~~~~~~~~~~~~+\sin^2 r \cos r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle +\cos ^3 r \left|1_{\hat{\text{AI}}}0_{\hat{\text{BI}}}0_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle + \sin r \cos ^2 r \left|1_{\hat{\text{AI}}}0_{\hat{\text{BI}}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle + \cos^2 r \sin r \left|1_{\hat{\text{AI}}}0_{\hat{\text{BI}}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle\\[2mm] ~~~~~~~~~~~~~~+\cos ^2 r \sin r \left|0_{\hat{\text{AI}}}0_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle + \cos ^3 r \left|0_{\hat{\text{AI}}}1_{\hat{\text{BI}}}0_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle + \cos ^2 r \sin r \left|0_{\hat{\text{AI}}}1_{\hat{\text{BI}}}0_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle +\sin^2 r \cos r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}0_{\hat{\text{DI}}}\right\rangle\\[2mm] ~~~~~~~~~~~~~~+\sin^3 r 
\left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle + \sin^3 r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle +\sin^3 r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle + \sin^3 r \left|1_{\hat{\text{AI}}}1_{\hat{\text{BI}}}1_{\hat{\text{CI}}}1_{\hat{\text{DI}}}\right\rangle\Big). \end{array} \end{equation} \end{widetext} In what follows we study these four cases through their negativities and von Neumann entropies, in order to show how their entanglement properties depend on the acceleration parameter $r$. \subsection{Negativity} As a quantitative entanglement measure, the negativity has been computed for many entangled systems. A state $\rho$ is entangled whenever the partial transpose of its density matrix has at least one negative eigenvalue, and the negativity quantifies the amount of this entanglement. The negativity for a tetrapartite state is defined as \cite{Yazhou} \begin{equation}\label{} N_{\kappa (\xi \o \zeta)}=||\rho_{\kappa(\xi \o \zeta)}^{T_{\kappa}}||-1, ~~~~N_{\kappa \xi}=||\rho_{\kappa \xi}^{T_{\kappa}}||-1, \end{equation} which describe the $1-3$ tangle and the $1-1$ tangle, respectively. Here $||\rho_{\kappa(\xi \o \zeta)}^{T_{\kappa}}||$ and $||\rho_{\kappa \xi}^{T_{\kappa}}||$ denote the trace norms of the corresponding partially transposed matrices. Alternatively, since $||O||={\rm tr} \sqrt{O^{\dagger} O}$ for any Hermitian operator $O$ \cite{Williams}, one can write \begin{equation}\label{neg} ||M||-1=2\sum_{i=1}^{N}\big|\lambda_{M}^{(-),\, i}\big|, \end{equation} where $\lambda_{M}^{(-),\, i}$ ($i=1,\dots,N$) are the negative eigenvalues of the matrix $M$. After obtaining the density matrix of each system and tracing out the antiparticle modes in Region II, we find the negative eigenvalues of each density matrix in order to evaluate equation (\ref{neg}). This yields the $1-3$ tangle negativities $N_{A(BCD)}, N_{B(ACD)}, N_{C(ABD)}, N_{D(ABC)}$ as the number of accelerated qubits is varied (one, two, three, or all four). The analytical expressions of these negativities are not written out because of their complexity; instead, we illustrate them in FIG. 1. Owing to the symmetry of the entangled system, we have $N_{A(BCD_{I})}=N_{B(ACD_{I})}=N_{C(ABD_{I})}$, $N_{C_I(ABD_I)}=N_{D_I(ABC_I)}$, $N_{B_I(AC_ID_I)}=N_{C_I(AB_ID_I)}=N_{D_I(AB_IC_I)}$ and $N_{A_I(B_IC_ID_I)}=N_{B_I(A_IC_ID_I)}=N_{C_I(A_IB_ID_I)}$. We notice that the degree of entanglement decreases as the number of accelerated observers increases. This means that the entangled system in which only one qubit is accelerated is more robust than those in which two, three or four qubits are accelerated. It should be recognized, however, that the degree of entanglement of each system never vanishes, even in the limit of infinite acceleration. \begin{figure} \caption{(Color online) The $1-3$ tangle negativities as functions of the acceleration parameter $r$.} \label{1-3 tangle} \end{figure} On the other hand, it is also important to find the $1-1$ tangle, which is required to compute the whole entanglement measures. Following a procedure similar to that used for the $1-3$ tangle, we trace out the necessary qubits and generate bipartite subsystems for all possible pairs of qubits; the symmetry among the pairs again reduces the number of independent negativities that must be computed. The corresponding results are plotted in FIG. 2.
There are 24 analytical results for the $1-1$ tangle, which take the following possible values, \begin{equation} \begin{array}{l} N_{\kappa \xi}=\displaystyle\frac{1}{2} (\sqrt{2}-1)=0.2071, \\[2mm] N_{\kappa_{I} \xi}=\displaystyle\frac{1}{16} \Big[-2 \cos (2 r)-6\\[2mm] ~~~~~~~+\sqrt{2} \sqrt{28 \cos (2 r) +9 \cos (4 r)+27}\Big], \\[2mm] N_{\kappa_{I} \xi_{I}}=\displaystyle\frac{1}{8} \Big[2 \cos (2 r)- \cos (4 r)-5\\[2mm] ~~~~~~~+2 \sqrt{5 \cos (4 r) -4 \cos (2 r)+7}\Big], \end{array} \end{equation} where $N_{\kappa \xi} > N_{\kappa_{I} \xi} > N_{\kappa_{I} \xi_{I}}$, and $N_{\kappa \xi}$, $N_{\kappa_{I} \xi}$ and $N_{\kappa_{I} \xi_{I}}$, with $\kappa,\xi\in\{A, B, C, D\}$, correspond to the subsystem combinations with two inertial qubits, one inertial qubit and no inertial qubit, respectively. It is interesting to see that the degree of entanglement vanishes for acceleration parameters $r>0.472473$ in the case where the four qubits are accelerated simultaneously. \begin{figure} \caption{(Color online) The $1-1$ tangle negativities as functions of the acceleration parameter $r$.} \label{1-1 tangle} \end{figure} \begin{figure} \caption{(Color online) The residual entanglement as a function of the acceleration parameter $r$.} \label{Residual entanglement} \end{figure} \subsection{Whole entanglement measures} Another quantification of multipartite entanglement is the residual tangle $\pi_4$. The residual tangles, which measure the entanglement among the four components, can be calculated as \cite{Oliveira} (see FIG. 3) \begin{eqnarray}\label{eq24-27} \pi_{\kappa}=N_{\kappa(\xi \o \zeta)}^{2}-N_{\kappa \xi}^{2}-N_{\kappa \o}^{2}-N_{\kappa \zeta}^2,\\ \pi_{\xi}=N_{\xi(\kappa \o \zeta)}^{2}-N_{\xi\kappa}^{2}-N_{\xi \o}^{2}-N_{\xi \zeta}^2,\\ \pi_{\o}=N_{\o(\kappa \xi \zeta)}^{2}-N_{\o\kappa}^{2}-N_{\o\xi}^{2}-N_{\o \zeta}^2,\\ \pi_{\zeta}=N_{\zeta(\kappa \xi \o)}^{2}-N_{\zeta\kappa}^{2}-N_{\zeta\xi}^{2}-N_{\zeta \o}^2, \end{eqnarray} from which the $\pi_4$-tangle is obtained as \begin{eqnarray}\label{eq28} \pi_{4}=\frac{1}{4}\left(\pi_{\kappa}+\pi_{\xi}+\pi_{\o}+\pi_{\zeta}\right). \end{eqnarray} \begin{figure} \caption{(Color online) The whole entanglement measures $\pi_{4}$ and $\Pi_{4}$ as functions of the acceleration parameter $r$.} \label{Pi-tangle} \end{figure} Moreover, we may use another whole entanglement measure, defined as the geometric mean \cite{Sabin}, to describe the entanglement properties of this tetrapartite system, \begin{eqnarray}\label{eq29} \Pi_4=\left(\pi_{\kappa} \pi_{\xi} \pi_{\o} \pi_{\zeta}\right)^{\frac{1}{4}}. \end{eqnarray} Likewise, we omit the analytical results because of the size of the polynomials and show the corresponding plots in FIG. 4. In FIG. 5 we show a comparison between $\pi_{4}$ and $\Pi_{4}$. Their difference implies that the entangled system becomes more robust only when a single qubit, say Alice in our case, is accelerated while the other observers remain stationary. Nevertheless, either the $\pi_{4}$-tangle or $\Pi_4$ can be used to describe the entanglement properties of this system, because the difference between them is small when three qubits (Bob, Charlie and David) or four qubits (Alice, Bob, Charlie and David) undergo uniform acceleration.
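As an independent cross-check of the inertial value $N_{\kappa\xi}=(\sqrt{2}-1)/2\approx 0.2071$ quoted above, the short script below (a numerical illustration added here, not part of the original analysis) computes the $1-1$ tangle of the unaccelerated $\left|W\right\rangle$ state directly from the negative eigenvalues of the partial transpose, following Eq. (\ref{neg}):
\begin{verbatim}
import numpy as np

# Inertial four-qubit W state |W> = (|0001>+|0010>+|0100>+|1000>)/2,
# basis index = 8a+4b+2c+d for qubits A,B,C,D.
W = np.zeros(16)
W[[0b0001, 0b0010, 0b0100, 0b1000]] = 0.5
rho = np.outer(W, W)

# Reduced state of the pair (A,B): trace out qubits C and D.
rho_AB = np.einsum('abkcdk->abcd', rho.reshape(2, 2, 4, 2, 2, 4)).reshape(4, 4)

# Partial transpose on qubit A, then N_{AB} = ||rho^{T_A}|| - 1
#                                            = 2 * sum of |negative eigenvalues|.
rho_pt = rho_AB.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
eigs = np.linalg.eigvalsh(rho_pt)
print(2 * np.abs(eigs[eigs < 0]).sum())   # 0.2071067... = (sqrt(2)-1)/2
\end{verbatim}
In principle, the same routine applied to the reduced density matrices of the accelerated cases reproduces the $r$-dependent curves shown in FIG. 2.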
\begin{figure} \caption{(Color online) Comparison between the whole entanglement measures $\pi_{4}$ and $\Pi_{4}$.} \label{pi-tangle} \end{figure} \section{von Neumann entropy} In order to quantify the information content of an entangled quantum system it is necessary to study the von Neumann entropy, defined as \cite{VoN} \begin{eqnarray} S=-\mathrm{Tr}(\rho \log_{2} \rho)=-\sum_{i=1}^{n} \lambda^{(i)} \log_{2}{\lambda^{(i)}}, \end{eqnarray} where $\lambda^{(i)}$ denotes the $i$-th eigenvalue of the density matrix $\rho$. It should be pointed out that here the density matrix itself, and not its partial transpose, is used. On this basis we are able to measure the degree of stability of the quantum state under study. We show the behaviour of the von Neumann entropy in FIG. 6. As expected, the von Neumann entropy of the whole tetrapartite system increases with increasing acceleration. It is more interesting to see that the von Neumann entropy also grows with the number of accelerated observers, as shown in panel (a) of FIG. 6. \begin{figure} \caption{(Color online) The von Neumann entropies $S$, $S_{\kappa\zeta\delta}$ and $S_{\kappa\xi}$ as functions of the acceleration parameter $r$.} \label{CA1} \end{figure} On the other hand, for the bipartite subsystem entropies only three distinct values are possible. When no qubit is accelerated, the entropy of the subsystem is $S_{\kappa \xi}=1$. When the subsystem contains only one accelerated qubit we have the following eigenvalues: \begin{equation} \begin{array}{l} \lambda_{\kappa_{I} \xi}^{(1)}=\frac{1}{2}\cos ^2 r, ~~~\lambda_{\kappa_{I} \xi}^{(2)}=\frac{1}{2}\sin ^2 r, \\[2mm] \lambda_{\kappa_{I} \xi}^{(3, 4)}=\frac{1}{32} (10-2 \cos (2 r)\\[2mm] ~~~~~~~~\mp\sqrt{2} \sqrt{-20 \cos (2 r)+9 \cos (4 r)+43}), \end{array} \end{equation} where the upper and lower signs of ``$\mp$'' correspond to $\lambda_{\kappa_{I} \xi}^{(3)}$ and $\lambda_{\kappa_{I} \xi}^{(4)}$, respectively. For the bipartite subsystem in which both qubits are accelerated, the eigenvalues are \begin{equation} \begin{array}{l} \lambda_{\kappa_{I} \xi_{I}}^{(1)}=\frac{1}{2}\cos ^4 r,\\[2mm] \lambda_{\kappa_{I} \xi_{I}}^{(2)}=\frac{1}{16} [1-\cos (4 r)], \\[2mm] \lambda_{\kappa_{I} \xi_{I}}^{(3)}=\frac{1}{16} [4 \cos (2 r)-\cos (4 r)+5], \\[2mm] \lambda_{\kappa_{I} \xi_{I}}^{(4)}=-\frac{1}{4} \sin ^2(r) [\cos (2 r)-3]. \end{array} \end{equation} The von Neumann entropies for the cases in which one or two observers undergo uniform acceleration are illustrated in panel (b) of FIG. 6. Finally, let us consider the tripartite subsystems, covering all possible combinations, i.e. with zero, one, two or three accelerated observers. When no observer is accelerated, the eigenvalues are $3/4$ and $1/4$, so that $S_{\kappa\zeta\delta}=0.811278$. When the tripartite subsystem has only one accelerated observer, the eigenvalues are given by \begin{equation}\label{} \begin{array}{l} \lambda_{\kappa_{I} \zeta\delta}^{(1)}=\frac{1}{4}\cos ^2(r),\\[2mm] \lambda_{\kappa_{I} \zeta\delta}^{(2)}=\frac{1}{4} [1-\cos (2 r)],\\[2mm] \lambda_{\kappa_{I} \zeta\delta}^{(3,4)}=\frac{1}{32} [2 \cos (2 r)\\[2mm] ~~~~~\mp\sqrt{2} \sqrt{20 \cos (2 r)+9 \cos (4 r)+43}+10], \end{array} \end{equation} where the upper and lower signs of ``$\mp$'' correspond to $\lambda_{\kappa_{I} \zeta\delta}^{(3)}$ and $\lambda_{\kappa_{I} \zeta\delta}^{(4)}$, respectively.
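As a quick numerical sanity check of the inertial entropy values quoted above (an illustrative sketch added here, using the stated eigenvalues rather than the full density matrices):
\begin{verbatim}
import numpy as np

def von_neumann_entropy(eigenvalues):
    """S = -sum_i lambda_i log2(lambda_i), ignoring vanishing eigenvalues."""
    lam = np.asarray(eigenvalues, dtype=float)
    lam = lam[lam > 0]
    return float(-(lam * np.log2(lam)).sum())

# Inertial bipartite reduction of |W>: nonzero eigenvalues {1/2, 1/2}
print(von_neumann_entropy([0.5, 0.5]))     # 1.0  =  S_{kappa xi}
# Inertial tripartite reduction of |W>: eigenvalues {3/4, 1/4}
print(von_neumann_entropy([0.75, 0.25]))   # 0.8112781...  =  S_{kappa zeta delta}
\end{verbatim}
The same helper, fed with the $r$-dependent eigenvalues listed in this section, can be used to reproduce the curves in panels (b) and (c) of FIG. 6.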
When the tripartite subsystem has two accelerated observers, the eigenvalues are given by \begin{equation}\label{} \begin{array}{l} \lambda_{\kappa_{I} \zeta_{I}\delta}^{(1)}=\frac{1}{4}\cos ^4(r),\\[2mm] \lambda_{\kappa_{I} \zeta_{I}\delta}^{(2)}=\lambda_{\kappa_{I} \zeta_{I}\delta}^{(3)}=\frac{1}{32} (1-\cos (4 r)),\\[2mm] \lambda_{\kappa_{I} \zeta_{I}\delta}^{(4,5)}=\frac{1}{16} \Big[3 \cos (2 r)+3\\[2mm] ~~~~~~~\mp\sqrt{2} \sqrt{\cos (4 r) \cos ^4(r)+17 \cos ^4(r)}\Big],\\[2mm] \lambda_{\kappa_{I} \zeta_{I}\delta}^{(6,7)}=\frac{1}{16} \Big[3-3 \cos (2 r)\\[2mm] ~~~~~~~\mp\sqrt{2} \sqrt{17 \sin ^4(r)+\sin ^4(r) \cos (4 r)}\Big],\\[2mm] \lambda_{\kappa_{I} \zeta_{I}\delta}^{(8)}=\frac{1}{4}\sin ^4(r). \end{array} \end{equation} When the tripartite subsystem has three accelerated observers, the eigenvalues are given by \begin{equation}\label{} \begin{array}{l} \lambda_{\kappa_{I} \zeta_{I}\delta_{I}}^{(1)}=\frac{1}{4}\cos ^6(r),\\[2mm] \lambda_{\kappa_{I} \zeta_{I}\delta_{I}}^{(2)}=\lambda_{\kappa_{I} \zeta_{I}\delta_{I}}^{(3)}=\frac{1}{128} [\cos (2 r)-2 \cos (4 r)-\cos (6 r)+2],\\[2mm] \lambda_{\kappa_{I} \zeta_{I}\delta_{I}}^{(4)}=\frac{1}{128} [49 \cos (2 r)+10 \cos (4 r)-\cos (6 r)+38],\\[2mm] \lambda_{\kappa_{I} \zeta_{I}\delta_{I}}^{(5)}=\frac{1}{128} [-\cos (2 r)-18 \cos (4 r)+\cos (6 r)+18],\\[2mm] \lambda_{\kappa_{I} \zeta_{I}\delta_{I}}^{(6,7)}=\frac{1}{128} [-\cos (2 r)-6 \cos (4 r)+\cos (6 r)+6],\\[2mm] \lambda_{\kappa_{I} \zeta_{I}\delta_{I}}^{(8)}=-\frac{1}{8} \sin ^4(r) [\cos (2 r)-7]. \end{array} \end{equation} Their von Neumann entropies are shown in panel (c) of FIG. 6. The solid blue, dot-dashed grey, dotted green and dashed red lines represent the von Neumann entropies for the cases with zero, one, two and three accelerated observers, respectively. \section{Discussions and concluding remarks} In this work we first computed the negativity of the entangled W-class tetrapartite state. We found that disentanglement, i.e. entanglement sudden death, occurs in the $1-1$ tangle for $r>0.472473$, and only when the four observers are accelerated at the same time. The other $1-1$ tangle cases and all $1-3$ tangle cases, however, remain entangled for all accelerations. We have also reconfirmed that entanglement is an observer-dependent quantity in noninertial frames. Comparing the whole entanglement measures, namely the arithmetic average $\pi_{4}$ and the geometric average $\Pi_{4}$, we see a significant difference when the system has one or two accelerated qubits, with $\pi_{4}$ larger than $\Pi_{4}$. When three or four qubits are accelerated, however, their difference is almost zero. This implies that either $\pi_{4}$ or $\Pi_{4}$ can be used to describe this entangled system. For the von Neumann entropy we have observed that the entropy increases as the number of accelerated qubits in the system increases. Moreover, the entropies of the bipartite and tripartite subsystems, $S_{\kappa_{I}\xi_{I}}$ and $S_{\kappa_{I} \zeta_{I}\delta_{I}}$, reach a maximum and then begin to decrease. This implies that the subsystems $\rho_{\kappa_{I}\xi_{I}}$ and $\rho_{\kappa_{I} \zeta_{I}\delta_{I}}$ first become more disordered, and that the disorder is then reduced as the acceleration increases further. In addition, we note that the von Neumann entropies take the values $S_{\kappa\xi\delta\eta}=0$, $S_{\kappa\zeta\delta}=0.811278$ and $S_{\kappa\xi}=1$, corresponding to the tetrapartite, tripartite and bipartite cases without any accelerated observer. This indicates that the more observers remain stationary, the more stable the system is. Before ending this work, we give a useful remark on the limiting acceleration value $r$. As noted above, the disentanglement phenomenon occurs beyond the acceleration $r \approx 0.472473$ only when the $1-1$ tangle of the W-class tetrapartite state involves two accelerated observers. This value differs from that of the GHZ tetrapartite system, for which the corresponding value is $r\approx 0.417$. This implies that the present W-class tetrapartite system is more robust than the GHZ tetrapartite system, since $r_{\rm W-Class} \approx 0.472473> r_{\rm GHZ} \approx 0.417$. {\color{red}Finally, it should be mentioned that we intend to investigate whether and how the present study extends to the thermodynamic properties treated in \cite{hass1, hass2}.} \end{document}
\begin{document} \title[Eigenvalue asymptotic of Robin Laplace operators] {Eigenvalue asymptotic of Robin Laplace operators on two-dimensional domains with cusps} \author {Hynek Kova\v{r}\'{\i}k} \address { Dipartimento di Matematica, Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129 Torino, ITALY } \email {[email protected]} \date {\today} \begin {abstract} We consider Robin Laplace operators on a class of two-dimensional domains with cusps. Our main results include the formula for the asymptotic distribution of the eigenvalues of such operators. In particular, we show how the eigenvalue asymptotic depends on the geometry of the cusp and on the boundary conditions. \end{abstract} \maketitle {\bf AMS 2000 Mathematics Subject Classification:} 35P20, 35J20\\ {\bf Keywords:} Eigenvalue asymptotic, Laplace operators, Schr\"odinger operator \\ \section{Introduction} Let $\Omega\subset\mathbb{R}^2$ be an open domain such that the spectrum of the Dirichlet Laplacian $-\Delta_\Omega^D$ on $\Omega$ is discrete. Denote by $N_\lambda(-\Delta_\Omega^D)$ the counting function of $-\Delta_\Omega^D$, i.e. the number of eigenvalues of $-\Delta_\Omega^D$ less than $\lambda$. The classical result by H.~Weyl, \cite{we}, states that if $\Omega$ is bounded, then \begin{equation} \label{weyl-cl} N_\lambda(-\Delta_\Omega^D) \, = \, \frac{ \lambda}{4 \pi}\, |\Omega| +o(\lambda) \qquad \lambda\to\infty, \end{equation} where $|\Omega|$ denotes the volume of $\Omega$. The proof of \eqref{weyl-cl} for unbounded domains with finite volume is due to M.~Birman, M.~Solomyak and B.~Boyarski, see e.g.~\cite{bs}. The situation is different for the Neumann Laplacian $-\Delta_\Omega^N$. In this case equation \eqref{weyl-cl}, with $N_\lambda(-\Delta_\Omega^N)$ in place of $N_\lambda(-\Delta_\Omega^D)$, holds whenever $\Omega$ is bounded and has sufficiently regular boundary, see e.g.~\cite{iv1,net,ns} for the estimates on the rest term in \eqref{weyl-cl}. However, the Neumann Laplacian might not satisfy \eqref{weyl-cl} (its spectrum might even not be discrete) if $\Omega$ has rough boundary or if $\Omega$ is unbounded, \cite{ber,ds,hss,jms,ns,sol1}. Here we will focus on unbounded domains with regular boundary and we will consider two-dimensional domains of the form \begin{equation} \label{region-def} \Omega = \{(x,y)\in\mathbb{R}^2\, : \, x>1,\, |y|<f(x)\}\, , \end{equation} where $f:(1,\infty)\to\mathbb{R}$ is a positive function such that $f(x)\to 0$ as $x\to\infty$. Then the counting function $N_\lambda(-\Delta_\Omega^D)$ of the Dirichlet Laplacian satisfies \eqref{weyl-cl} as long as $f$ is integrable. If $f$ decays too slowly, so that $|\Omega|=\infty$, then the spectrum of $-\Delta_\Omega^D$ is still discrete, but $N_\lambda(-\Delta_\Omega^D)$ grows super-linearly in $\lambda$, see \cite{be,da,ro,si}. On the other hand, the spectrum of the Neumann Laplacian $-\Delta_\Omega^N$ is discrete {\it if and only if} \begin{equation} \label{compact-N} \lim_{x\to\infty}\, \left( \int_1^x \frac{dt}{f(t)}\right) \left( \int_x^\infty f(t)\, dt\right)=0. \end{equation} This remarkable fact was proved in \cite{eh}, see also \cite{mz}. Asymptotic behaviour of $N_\lambda(-\Delta_\Omega^N)$ on domains of this type was studied in \cite{be,ivrii,jms,net,sol1}. We would like to point out that $f$ must decay faster than any power function for \eqref{compact-N} to hold.
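To make the last remark concrete, here is a quick check of condition \eqref{compact-N} on some sample profiles (an illustration added for the reader; it is not needed for the results below). For $f(t)=t^{-\alpha}$ with $\alpha>1$ one has $$ \left( \int_1^x t^{\alpha}\, dt\right) \left( \int_x^\infty t^{-\alpha}\, dt\right) \, =\, \frac{x^{\alpha+1}-1}{\alpha+1}\cdot\frac{x^{1-\alpha}}{\alpha-1} \, \sim\, \frac{x^{2}}{(\alpha+1)(\alpha-1)}\,\to\,\infty , $$ so \eqref{compact-N} indeed fails for every power-like cusp. Even exponential decay is not sufficient: for $f(t)=e^{-t}$ the product equals $(e^{x}-e)\, e^{-x}\to 1\neq 0$. Gaussian decay, on the other hand, works: for $f(t)=e^{-t^{2}}$ one has $\int_1^x e^{t^{2}}dt\sim e^{x^{2}}/(2x)$ and $\int_x^\infty e^{-t^{2}}dt\sim e^{-x^{2}}/(2x)$, so the product behaves like $1/(4x^{2})\to 0$ and the Neumann Laplacian has discrete spectrum.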
We thus notice a huge difference between the spectral properties of $-\Delta_\Omega^D$ and $-\Delta_\Omega^N$ on such domains. Motivated by this discrepancy, we want to study the gap between Dirichlet and Neumann Laplacians. To do so we consider a family of Laplace operators on $\Omega$ which formally correspond to the so-called Robin boundary conditions \begin{equation} \label{robin-bc} \frac{\partial u}{\partial n}\, (x,y) +h(x)\, u(x,y)=0, \qquad x>1, \, \, y= \pm f(x), \end{equation} where $\frac{\partial u}{\partial n}$ denotes the normal derivative of $u$ and $h:(1,\infty)\to\mathbb{R}_+$ is a sufficiently smooth bounded function. The extreme cases $h\equiv 0$ and $h\equiv \infty$ correspond to Neumann and Dirichlet Laplacians respectively. The first question that arises is under what conditions on $h$ and $f$ the spectrum of the associated Robin Laplacian is discrete. Next we would like to know how the coefficient $h(x)$ of the boundary conditions affects the asymptotic distribution of eigenvalues of the Robin Laplacian. The paper is organised as follows. In section \ref{asympt:main} we formulate our main results, see Theorems \ref{principle} and \ref{main-thm}. As in \cite{ber,jms}, we show that the leading term of the eigenvalue asymptotic has two contributions, one of which results from an auxiliary one-dimensional Schr\"odinger operator. The boundary conditions affect the eigenvalue asymptotic through the term $h(x)\sqrt{1+f'(x)^2}/f(x)$ which enters into the potential of this operator, see equations \eqref{potentials} and \eqref{H}. For some particular choices of $h$ and $f$ this contribution can be calculated explicitly; the corresponding results are given in section \ref{examples}. The proofs of the main results are given in section \ref{proof}. Our strategy is to treat separately the contribution to the counting function from a finite part of $\Omega$ and from the tail. In section \ref{step1} it is shown that the contribution from the finite part satisfies the Weyl law \eqref{weyl-cl}. The key point of the proof is to transform, in the remaining part of $\Omega$, the problem into a Neumann Laplacian plus a positive potential that reflects the boundary term, see section \ref{step 2}. To this end we employ the technique known as ground state representation, which has been recently used e.g. in \cite{fsw} to derive eigenvalue estimates for Schr\"odinger operators with regular ground states, see also \cite{fls}. Once this transformation is done, we show, by rather standard arguments, that one part of the eigenvalue distribution of such a Neumann Laplacian with an additional potential is asymptotically (i.e. for $\lambda\to\infty$) equivalent to the eigenvalue distribution of a direct sum of certain one-dimensional Schr\"odinger operators, see section \ref{step 3}. This enables us to prove Theorem \ref{principle}. Finally, in the closing section \ref{gen} we discuss some generalisations for Robin Laplacians with non-symmetric boundary conditions. \section{Preliminaries and notation} \label{prelim} Given a self-adjoint operator $T$ with a purely discrete spectrum we denote by $N_\lambda(T)$ the number of its eigenvalues, counted with multiplicities, less than $\lambda$. We will write $A \, \simeq \, B$ if the operators $A$ and $B$ are unitarily equivalent and we will use the notation $$ f_1( \lambda) \sim f_2( \lambda) \quad \lambda\to\infty \quad \Longleftrightarrow \quad \lim_{ \lambda\to\infty} \frac{f_1( \lambda)}{f_2( \lambda) }=1.
$$ We will consider the eigenvalue behaviour of the Robin boundary value problem in a weak sense. Therefore the main object of our interest is the self-adjoint operator $A_\sigma$ in $L^2(\Omega)$ associated with the closure of the quadratic form \begin{equation} \label{q-form} Q_\sigma[u]= \int_\Omega |\nabla u|^2\, dxdy + \int_1^\infty\! \sigma(x)\left(|u(x,f(x))|^2+|u(x,-f(x))|^2\right)\, dx \end{equation} on $C^2_{0}(\bar\Omega)$. Here $C^2_{0}(\bar\Omega)$ denotes the restriction to $\Omega$ of functions from $C^2(\mathbb{R}^2)$ such that for each $y$ the support of $u(\cdot, y)$ is a compact subset of $(1,\infty)$. The operator $A_\sigma$ formally corresponds to the Laplace operator on $\Omega$ with Dirichlet boundary condition at $\{x=1\}$ and mixed boundary conditions \eqref{robin-bc} at the rest of the boundary, if we chose $\sigma$ such that \begin{equation*} \sigma(x) = h(x) \, \sqrt{1+f'(x)^2}\, . \end{equation*} \begin{remark} Since we work under the assumption that $f'(x)\to 0$ as $x\to \infty$, see below, and since the asymptotic of $N_\lambda(A_\sigma)$ depends only on the behaviour of $\sigma$ at infinity, from now on we will work with the function $\sigma$ instead of $h$. \end{remark} \noindent We will also need the following auxiliary potentials: \begin{equation} \label{potentials} V(x) = \frac 14\, \left(\frac{f'}{f}\right)^2+\frac 12\, \left(\frac{f'}{f}\right)', \qquad W_\sigma(x)= V(x) + \frac{\sigma(x)}{f(x)}. \end{equation} \noindent Throughout the whole paper we will suppose that $f$ satisfies \begin{assumption} \label{ass-f1} $f\in C^\infty(1,\infty)$ is positive and such that $f'(x)\leq 0$ for all $x$ large enough. Moreover, \begin{equation} \label{f} \lim_{x\to\infty}\, f(x) = \lim_{x\to\infty}\, f''(x) = 0. \end{equation} \end{assumption} \noindent Note that \eqref{f} implies $f'(x)\to 0$ as $x\to\infty$, see Lemma \ref{landau} below. \begin{lemma} \label{landau} Let $f\in C^2(1,\infty)$ be a nonnegative function. Assume that $f$ and $|f''|$ are bounded on $(1,\infty)$. For a given $x>1$ define $M_x= \sup_{s\geq x} f(s)$ and $M''_x= \sup_{s\geq x} |f''(s)|$. Then \begin{equation} \label{taylor-new} (f'(x))^2 \leq 2\, M_x \, M''_x . \end{equation} \end{lemma} \begin{proof} Let $s>x$. The Taylor expansion of $f$ at the point $x$ gives \begin{equation} \label{tayl-2} f(s)- f(x) = t f'(x) +\frac{t^2}{2}\, f''(y), \qquad y\in [x,s], \quad t= s-x. \end{equation} On the other hand, $f\geq 0$ ensures that $|f(x)-f(s)| \leq M_x$ for all $s>x$. This together with \eqref{tayl-2} implies that the inequality $$ |f'(x)| \leq \, \frac{M_x}{t}\, + \frac{t\, M_x''}{2} $$ holds for all $t>0$. Optimization with respect to $t$ then gives the result. \end{proof} \begin{remark} Note that if we leave out the assumption $f\geq 0$, then the above proof still works with the modification that now $|f(x)-f(s)| \leq 2 M_x$. This results into the Landau inequality, i.e.~inequality \eqref{taylor-new} with the factor $2$ on the right hand side replaced by $4$. \end{remark} \noindent The hypothesis on $\sigma$ are the following: \begin{assumption} \label{ass-h} The function $\sigma\in C^2(1,\infty)$ is non negative. Moreover, $\sigma, \, \sigma'$ and $\sigma''$ are bounded and \begin{equation} \label{compact-h} \lim_{x\to\infty}\, W_\sigma(x)\, =\infty. 
\end{equation} \end{assumption} \noindent In order to formulate our next assumption, we introduce the operator \begin{equation} \label{H} \mathcal{H}_\sigma= -\frac{d^2}{dx^2}\, + W_\sigma(x) \qquad \text{in\, \, } L^2(1,\infty) \end{equation} with Dirichlet boundary condition at $x=1$. More precisely, $\mathcal{H}_\sigma$ is the operator generated by the closure of the quadratic form $$ \int_1^\infty\left(|\psi'|^2+W_\sigma\, \psi^2 \right)\, dx, \qquad \psi \in C_0^2(1,\infty). $$ Alongside $\mathcal{H}_\sigma$ we will also consider the auxiliary operator \begin{equation} \mathcal{B} = -\partial_x^2 -\frac{1}{f^2(x)}\, \, \partial_y^2 \qquad \text{in\, \, } L^2((1,\infty)\times(-1,1)) \end{equation} with Dirichlet boundary conditions. \begin{assumption} \label{number} For $0< \varepsilon <1 $ we have \begin{align} N_\lambda((1\pm\varepsilon)\, \mathcal{H}_\sigma) & = N_\lambda(\mathcal{H}_\sigma)(1+\mathcal{O}(\varepsilon)), \label{eps} \\ N_\lambda((1\pm\varepsilon)\, \mathcal{B}) & = N_\lambda(\mathcal{B})(1+\mathcal{O}(\varepsilon)). \label{eps-B} \end{align} \end{assumption} \begin{remark} A similar assumption was made in \cite{jms}. Although this assumption is essential for the approach used in the proof of Theorem \ref{principle} below, it is natural to believe that the statement holds under more general conditions. Note also that for domains with finite volume \eqref{eps-B} holds automatically. \end{remark} \section{Main results} \label{asympt:main} \begin{theorem} \label{disc} If assumptions \ref{ass-f1} and \ref{ass-h} are satisfied, then the spectrum of $A_\sigma$ is discrete. \end{theorem} \begin{remark} Contrary to the case of the Neumann Laplacian, the spectrum of $A_\sigma$ can be discrete also if the volume of $\Omega$ is infinite. For example, if $\sigma$ is constant, then \eqref{compact-h} is automatically satisfied in view of the fact that $f(x) V(x)\to 0$ as $x\to\infty$, see equation \eqref{taylor}. On the other hand, condition \eqref{compact-h} is, unlike \eqref{compact-N}, only sufficient. \end{remark} \begin{theorem} \label{principle} Suppose that assumptions \ref{ass-f1},\, \ref{ass-h} and \ref{number} are satisfied. Then \begin{equation} \label{principle-eq} N_\lambda(A_\sigma) \, \sim \, N_\lambda(-\Delta_\Omega^D) + N_\lambda(\mathcal{H}_\sigma) \qquad \lambda\to\infty. \end{equation} \end{theorem} \begin{remark} The second term in \eqref{principle-eq} is a contribution from the eigenvalues of the operator $A_\sigma$ restricted to the space of functions which depend only on $x$. This is analogous to the case of the Neumann Laplacian, \cite{ds,jms, sol1}. On the other hand, the presence of the boundary term $\sigma(x)$ enables us to apply \eqref{principle-eq} also in the situation in which the Neumann Laplacian does not have purely discrete spectrum. \end{remark} \begin{remark} \label{general} Theorem \ref{principle} allows a straightforward generalisation to Robin Laplacians with different boundary conditions on the upper and lower boundary of $\Omega$, say given through functions $\sigma_1(x)$ and $\sigma_2(x)$. In that case we only have to replace $\sigma(x)$ in \eqref{potentials} by $(\sigma_1(x)+\sigma_2(x))/2$, see section \ref{non-sym} for details. \end{remark} \noindent For domains with finite volume Theorem \ref{principle} and the Weyl formula \eqref{weyl-cl} give \begin{theorem} \label{main-thm} Let $|\Omega| < \infty$ and suppose that assumptions \ref{ass-f1},\, \ref{ass-h} and \eqref{eps} are satisfied.
Then \begin{equation} \label{main} N_\lambda(A_\sigma) \, \sim \, \frac {\lambda}{4\pi}\, |\Omega| + N_\lambda(\mathcal{H}_\sigma) \qquad \lambda\to\infty. \end{equation} \end{theorem} \begin{remark} Note that if $\sigma\equiv 0$, then the condition $|\Omega|<\infty$ is necessary for the spectrum of $A_0=-\Delta_\Omega^N$ to be discrete, see \eqref{compact-N}. Hence in that case there is no difference between Theorems \ref{principle} and \ref{main-thm} and the resulting eigenvalue asymptotic agrees with the one obtained in \cite{ber, jms}. \end{remark} \begin{corollary} \label{robin-w} Let $|\Omega|<\infty$ and let $\sigma(x) = \sigma$ be constant. Assume that $f$ satisfies \ref{ass-f1}. Then \begin{align} \label{robin-weyl} \limsup_{x\to\infty}\, x^2 f(x)\, = 0 & \quad \Longrightarrow \quad N_\lambda(A_\sigma)\, \sim\, \frac{|\Omega|}{4\pi}\, \, \lambda & \lambda\to\infty \\ \label{robin-linear} \lim_{x\to\infty} x^2 f(x)\, = a^2 & \quad \Longrightarrow \quad N_\lambda(A_\sigma)\, \sim \left(\frac{|\Omega|}{4\pi}+ \frac{|a|}{4\sqrt{\sigma}}\, \right) \lambda & \lambda\to\infty. \end{align} \end{corollary} \begin{remark} \noindent Equation \eqref{robin-weyl} provides a sufficient condition on the decay of $f$ for Weyl's law to hold in the case of constant $\sigma$. Notice that the borderline decay behaviour is $f(x) \sim x^{-2}$, which is in contrast to $f(x) \sim x^{-1}$ in the case of the Dirichlet Laplacian. The reason behind this is that the principal eigenvalues of the Robin and Dirichlet Laplacians on an interval of width $2f(x)$ scale in a different way as $f(x)\to 0$. Observe also that \eqref{robin-linear} turns into \eqref{robin-weyl} when $\sigma\to\infty$, as expected. \end{remark} If the volume of $\Omega$ is infinite, then we confine ourselves to situations when $f$ is a power function. The asymptotic distribution of the Dirichlet Laplacian on such regions is known, see \cite{ro}, \cite{si}. These results together with Theorem \ref{principle} yield \begin{corollary} \label{vol-infty} Let $f(x)=x^{-\alpha},\, 0<\alpha\leq 1$. If $\mathcal{H}_\sigma$ satisfies \eqref{eps}, then as $ \lambda\to\infty$ we have \begin{eqnarray*} N_\lambda(A_\sigma) & \sim & \frac{1}{\pi}\, \left(\frac{2}{\pi}\right)^{\frac{1}{\alpha}}\zeta\left(\frac{1} {\alpha}\right)\, B\left(1+\frac{1}{2\alpha},\, \frac 12\right)\, \lambda^{\frac 12+\frac{1}{2\alpha}} + N_\lambda(\mathcal{H}_\sigma) \quad \alpha <1, \\ N_\lambda(A_\sigma) & \sim & \frac{1}{\pi}\, \, \lambda \log \lambda + N_\lambda(\mathcal{H}_\sigma) \quad \, \, \alpha =1, \end{eqnarray*} where $\zeta(\cdot)$ and $B(\cdot\, , \cdot)$ denote the Riemann zeta and the Euler beta function respectively. \end{corollary} \subsection{Examples} \label{examples} We give the asymptotic of $N_\lambda(A_\sigma)$ for some concrete choices of $f$ and $\sigma$. \subsubsection{$f(x) = x^{-\alpha},\, \alpha >1, \, \, \sigma(x) = \sigma= \mbox{const}$.} Here $$ W_\sigma(x) = \left(\frac{\alpha^2}{4}\, + \frac{\alpha}{2}\right) x^{-2} + \sigma x^\alpha $$ is convex and increasing at infinity so that assumption \ref{number} is satisfied, see \cite[Chap. 7]{ti}. Theorem \ref{main-thm} in combination with Theorem \ref{tit}, see Section \ref{auxiliary}, gives \begin{equation} \label{h-const} N_\lambda(A_\sigma) \, \sim \, \frac{|\Omega|}{4 \pi}\, \lambda\, +\, \frac{1}{\alpha\pi}\, \sigma^{-\frac{1}{\alpha}}\, B\left(\frac{1}{\alpha}\, , \, \frac 32\right)\, \lambda^{\frac 12+\frac{1}{\alpha}}, \qquad \lambda\to\infty.
\end{equation} Note that, in agreement with Corollary \ref{robin-w}, $N_\lambda(A_\sigma)$ obeys Weyl's law as long as $\alpha >2$ and for $\alpha=2$ the order of $ \lambda$ is linear, but the coefficient is different from the one in the Weyl asymptotic. When $\alpha < 2$, then the behaviour of $N_\lambda(A_\sigma)$ for $ \lambda\to\infty$ is fully determined by the second term on the right hand side of \eqref{h-const}. \subsubsection{$f(x) = x^{-\alpha},\, 0<\alpha \leq 1, \, \, \sigma(x) = \sigma\, x^{-\beta}$.} Assumptions \ref{ass-h} is satisfied if and only if $0 \leq \beta < \alpha. $ For these values of $\beta$ Corollary \ref{vol-infty} and Theorem \ref{tit} give $$ N_\lambda(A_\sigma)\, \sim\, \frac{\sigma^{-\frac{1}{\alpha-\beta}}}{(\alpha-\beta)\pi}\, \, B\left(\frac{1}{\alpha-\beta}\, , \, \frac 32\right)\, \lambda^{\frac 12+\frac{1}{\alpha-\beta}}, \qquad \lambda\to \infty. $$ \section{Auxiliary material} \label{auxiliary} \noindent In this section we collect some auxiliary material, which will be used in the proof of the main results. First we fix some necessary notation. Given a continuous function $q:(1,\infty)\to \mathbb{R}$ such that $q(x)\to \infty$ as $x\to\infty$, we denote by $T^{D,D}_{(a,b)}$ the operator in $L^2(a,b)$ acting as $$ T^{D,D}_{(a,b)} = -\frac{d^2}{dx^2} +q(x), \qquad 1\leq a < b < \infty $$ with Dirichlet boundary conditions at $x=a$ and $x=b$. Operators $T^{D,N}_{(a,b)}, \, T^{N,N}_{(a,b)}$ and $T^{N,D}_{(a,b)}$ are defined accordingly. For $b=\infty$ we use the simplified notation $T^{D}_{(a,\infty)}$ etc. to indicate the corresponding boundary condition at $x=a$. It is well known that imposing Dirichlet boundary condition at $x=a$ is a rank one perturbation. Variational principle thus implies that \begin{equation} \label{rank1} 0\leq N_\lambda\big(T^{N}_{(a,\infty)}\big)-N_\lambda\big(T^{D}_{(a,\infty)}\big)\leq 1\quad \forall\, a. \end{equation} \begin{lemma} \label{1-dim} Suppose that $q(x)$ is a continuous function such that $q(x)\to \infty$ as $x\to\infty$. Then for any $s >1$ it holds \begin{equation} \label{equiv} N_\lambda \big( T^{N}_{(1,\infty)} \big) \, \sim \, N_\lambda\big( T^D_{(1,\infty)} \big)\, \sim\, N_\lambda\big( T^{D}_{(s,\infty)} \big) \, \sim \, N_\lambda\big( T^{N}_{(s,\infty)} \big) \qquad \lambda\to\infty. \end{equation} \end{lemma} \begin{proof} In view of \eqref{rank1} it suffices to consider the Dirichlet operator only. Let $I_ \lambda:=\{x>s:\, q(x) < \lambda/2\}$. Then $$ N_\lambda\big(T^{D}_{(s,\infty)}\big) \, \geq\, N_{\frac{ \lambda}{2}}\big(-\frac{d^2}{dx^2}\big)^{Dir}_{L^2(I_ \lambda)} \, \geq \frac{\sqrt{ \lambda}}{\pi\sqrt{2}}\, |I_ \lambda|\, (1+o(1))\quad \lambda\to\infty, $$ where the superscript $Dir$ indicates Dirichlet boundary conditions at the end points of $I_\lambda$. Since $|I_ \lambda|\to\infty$ as $ \lambda\to\infty$, this shows that $\liminf_{ \lambda\to\infty}\, \lambda^{-1/2}\, N_\lambda (T^{D}_{(s,\infty)} ) = \infty$. In view of the equation $$ N_\lambda\big( T^{D,N}_{(1,s)}\big)\, \sim \, N_\lambda\big( T^{D,D}_{(1,s)}\big) = \, \mathcal{O}(\sqrt{ \lambda}) \, \qquad \lambda\to\infty \quad \forall\, s>1, $$ the result follows from the Dirichlet-Neumann bracketing (by putting additional boundary conditions at $x=s$), see e.g. \cite[Chap.13]{rs}. \end{proof} \noindent Under certain additional assumptions one can recover the eigenvalue distribution of such operators from the potential $q$. The following theorems are due to \cite[Chap. 
7]{ti}: \begin{theorem}[Titchmarsh] \label{tit} Suppose that $q(x)$ is continuous increasing unbounded function, that $q'(x)$ is continuous and $ x^3 q'(x) \to \infty$ as $x\to\infty$. Then \begin{equation} \label{cl} N_\lambda\left(T^{D}_{(s,\infty)}\right) \, \sim \, \frac{1}{\pi}\, \int_{s}^\infty \left( \lambda-q(x)\right)^{\frac 12}_+\, dx,\qquad \lambda\to\infty. \end{equation} \end{theorem} \begin{theorem}[Titchmarsh] \label{tit-2} Suppose that $q(x)$ is continuous increasing and convex at infinity. Then \eqref{cl} holds true. \end{theorem} \noindent A simple combination of the above results gives \begin{lemma} \label{enough} Assume that there exists some $x_c$ such that $q$ satisfies the hypothesis of Theorem \ref{tit} or \ref{tit-2} for all $x>x_c$. Then for any $s \geq x_c$ we have \begin{equation} \label{egal} N_\lambda\left(T^{D}_{(1,\infty)}\right) \, \sim \, \frac{1}{\pi}\, \int_{1}^\infty \left( \lambda-q(x)\right)^{\frac 12}_+\, dx \, \sim \, \frac{1}{\pi}\, \int_{s}^\infty \left( \lambda-q(x)\right)^{\frac 12}_+\, dx \qquad \lambda\to\infty. \end{equation} \end{lemma} \noindent Next we consider the operators $$ \mathcal{B}_n^{N/D} = -\partial_x^2 -\frac{1}{f^2(x)}\, \, \partial_y^2 \quad \text{in\, \, } L^2((n,\infty)\times(-1,1)) $$ subject to Dirichlet boundary conditions on $(n,\infty)\times (\{1\}\cup \{-1\})$ and Neumann/Dirichlet boundary condition on $\{n\}\times(-1,1)$ respectively. We have \begin{lemma} \label{B-op} For any $n\in\mathbb{N}$ it holds \begin{align} N_\lambda\left( \mathcal{B}_n^{N}\right) & \sim \, \, N_\lambda\left(\mathcal{B}_n^{D}\right) \, \sim \, \frac{ \lambda}{2\pi}\, \, \int_n^\infty f(x)\, dx \qquad \lambda\to\infty \quad \text{if \, \,} |\Omega| < \infty \label{B-2}, \\ N_\lambda\left( \mathcal{B}_n^{N}\right) & \sim \, \, N_\lambda\left(\mathcal{B}_n^{D}\right)\, \sim\, N_\lambda(\mathcal{B}) \qquad \qquad \qquad \, \lambda\to\infty \quad \text{if\, \, } |\Omega|=\infty. \label{B-1} \end{align} \end{lemma} \begin{proof} Equation \eqref{B-2} for $\mathcal{B}_n^D$ follows directly from \cite[Thm. 1.2.1]{sv}. Hence it remains to prove \eqref{B-2} for $\mathcal{B}_n^N$ and \eqref{B-1}. Note that $$ N_\lambda(\mathcal{B}_n^D) = \sum_{k=1}^\infty N_\lambda(L^D_{k,n})\, , \quad N_\lambda(\mathcal{B}_n^N) = \sum_{k=1}^\infty N_\lambda(L^N_{k,n}) , \quad \mbox{where} \quad L^{N/D}_{k,n} = -\frac{d^2}{dx^2} +\frac{\pi^2 k^2}{4 f(x)^2}\ $$ are one-dimensional operators acting in $L^2(n,\infty)$ with Neumann/Dirichlet boundary conditions on $x=n$. Obviously there exists a positive constant $c$ such that for any $n$ and any $k$ the operator inequality $L^D_{k,n} \geq L^N_{k,n} \geq c\, k^2$ holds. This means that there exists some $K( \lambda)$ with $K( \lambda) = \mathcal{O}(\sqrt{ \lambda})$ as $ \lambda\to\infty$ and such that $$ \sum_{k\geq 1} N_\lambda(L^N_{k,n}) = \sum_{k\geq 1}^{K( \lambda)} N_\lambda(L^N_{k,n}), \quad \sum_{k\geq 1} N_\lambda(L^D_{k,n}) = \sum_{k\geq 1}^{K( \lambda)} N_\lambda(L^D_{k,n}) $$ Moreover, since $ 0 \leq N_\lambda(L^N_{k,n}) - N_\lambda(L^D_{k,n}) \leq 1$ holds for all $n\in\mathbb{N}$ and for all $k\geq 1$, see \eqref{rank1}, \begin{equation} \label{sqrt} \sum_{k\geq 1} N_\lambda(L^D_{k,n}) = \sum_{k\geq 1}^{K( \lambda)} N_\lambda(L^D_{k,n}) \, \leq \, \sum_{k\geq 1}^{K( \lambda)} N_\lambda(L^N_{k,n}) \, \leq \, \sum_{k\geq 1} N_\lambda(L^D_{k,n}) + \mathcal{O}(\sqrt{ \lambda}). 
\end{equation} The latter implies \eqref{B-2} since $N_\lambda(\mathcal{B}_n^D)$ grows linearly in $ \lambda$ when $|\Omega| <\infty$ as mentioned above. To prove \eqref{B-1} we consider the operators $\mathcal{B}_{n,m}^D$ obtained from $\mathcal{B}_n^D$ by putting additional Dirichlet boundary condition at $\{x=m\},\, m>n$. From \cite[Thm. 1.2.1]{sv} we get $$ \liminf_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda(\mathcal{B}_n^D) \geq \liminf_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda(\mathcal{B}_{n,m}^D) = \frac{1}{2\pi}\, \int_n^m\, f(x)\, dx\qquad \forall\, m>n, $$ which implies, by letting $m\to\infty$, that $\liminf_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda(\mathcal{B}_n^D) = \infty$. In view of \eqref{sqrt} we obtain $N_\lambda\left( \mathcal{B}_n^{N}\right) \sim N_\lambda\left(\mathcal{B}_n^{D}\right)$. Finally, from the Dirichlet-Neumann bracketing we deduce that $N_\lambda\left( \mathcal{B}\right) \sim N_\lambda\left(\mathcal{B}_n^{D}\right)$. \end{proof} \section{Proofs of the main results} \label{proof} \noindent As mentioned in the introduction, the idea of the proof is to split $N_\lambda(A_\sigma)$ into two parts corresponding to the contribution from a finite part of $\Omega$ and from the tail. \subsection{Step 1} \label{step1} Here we show that the contribution from the part of $\Omega$ where $x<n$ obeys the Weyl asymptotic irrespectively of the boundary conditions. Let us define \begin{align*} \Omega_{n} & := \left\{(x,y) \in\Omega\, : \, 1<x< n \right\}, \quad E_n := \Omega\setminus\Omega_n. \end{align*} We denote by $Q^N_{n,l}$ and $Q^N_{n,r}$ the quadratic forms defined by the reduction of $Q_\sigma$ on $\Omega_{n}$ and $E_n$ and acting on the functions from $C^2(\overline{\Omega}_{n})$ and $C_0^2(\overline{E_n})$ respectively. Moreover, let $T^N_n$ and $S^N_n$ be the operators associated with the closures of the forms $Q^N_{n,l}$ and $Q^N_{n,r}$. Similarly we denote by $Q^D_{n,l}$ and $Q^D_{n,r}$ the respective quadratic forms which are defined in the same way as $Q^N_{n,l}$ and $Q^N_{n,r}$ but with the additional Dirichlet boundary condition at $\{x=n\}$. We then denote by $T^D_n$ and $S^D_n$ the operators associated with the closures of the forms $Q^D_{n,l}$ and $Q^D_{n,r}$. From the Dirichlet-Neumann bracketing we obtain the operator inequality \begin{equation} \label{two-sided-operators} T^N_n \oplus S^N_n \leq A_\sigma \leq T^D_n \oplus S^D_n, \quad n\in\mathbb{N}, \end{equation} which implies that \begin{equation} \label{two-sided} N_\lambda(T^D_n) + N_\lambda(S^D_n) \leq N_\lambda(A_\sigma) \leq N_\lambda(T^N_n) + N_\lambda(S^N_n), \quad n\in\mathbb{N},\, \, \lambda>0. \end{equation} \begin{lemma} \label{finite-part} For any $n\in\mathbb{N}$ it holds \begin{equation} \label{weyl-bis} \lim_{ \lambda\to\infty} \, \lambda^{-1}\, N_\lambda(T^D_n) = \lim_{ \lambda\to\infty} \, \lambda^{-1}\, N_\lambda(T^N_n) \, = \frac{1}{2\pi}\, \int_1^n f(x)\, dx. \end{equation} \end{lemma} \begin{proof} Fix $n\in\mathbb{N}$. Since $\sigma$ is bounded and $H^1(-f(x),f(x))$ is for every $x\in (1,n)$ continuously embedded into $L^\infty(-f(x),f(x))$, it follows that there exists a constant $c_n$ such that $$ \|\nabla u\|^2_{L^2(\Omega_n)}+\|u\|^2_{L^2(\Omega_n)}\, \leq\, Q^N_{n,l}[u] +\|u\|^2_{L^2(\Omega_n)}\, \leq \, c_n \left(\, \|\nabla u\|^2_{L^2(\Omega_n)} +\|u\|^2_{L^2(\Omega_n)}\right) $$ holds for all $u\in C^2(\overline{\Omega_n})$. Hence the domain of the closure of the quadratic form $Q^N_{n,l}$ is a subset of $H^1(\Omega_n)$. 
The same reasoning shows that the domain of the closure of $Q^D_{n,l}$ contains the space $H_0^1(\Omega_n)$. From the fact that $\sigma \geq 0$ and from the variational principle we thus conclude that \begin{equation} \label{dn-bracket} N_\lambda(-\Delta^D_{\Omega_n}) \leq N_\lambda(T^D_n) \leq N_\lambda(T^N_n) \leq N_\lambda(-\Delta^N_{\Omega_n}), \end{equation} where $-\Delta^D_{\Omega_n}$ and $-\Delta^N_{\Omega_n}$ denote the Dirichlet and Neumann Laplacians on $\Omega_n$, respectively. Since $\Omega_n$ has the $H^1$-extension property, the Weyl formula $$ \lim_{ \lambda\to\infty} \, \lambda^{-1}\, N_\lambda(-\Delta^D_{\Omega_n})= \lim_{ \lambda\to\infty} \, \lambda^{-1}\, N_\lambda(-\Delta^N_{\Omega_n}) = \frac{|\Omega_n|}{4\pi} $$ holds for both $-\Delta^D_{\Omega_n}$ and $-\Delta^N_{\Omega_n}$, see \cite{me,bs}, \cite{ns}. In view of \eqref{dn-bracket}, this completes the proof. \end{proof} \subsection{Step 2} \label{step 2} Next we will treat the contribution to the counting function $N_\lambda(A_\sigma)$ from the tail of $\Omega$. Our first aim is to transform the boundary term in \eqref{q-form} into an effective additional potential. To this end we use a ground state representation for the test functions $\psi$. Let $\mu(x)$ be the first eigenvalue of the one-dimensional problem \begin{align} \label{robin} -\partial_y^2\, v(x,y) & = \mu(x)\, v(x,y), \\ \partial_y v(x,-f(x)) = \sigma(x)\, v(x,-f(x)),\ \ & \ \ \partial_y v(x,f(x)) = -\sigma(x)\, v(x,f(x)) \nonumber \end{align} with the corresponding eigenfunction $v$. By Lemma \ref{implicit}, $0<v\leq 1$ and $v(x,y)\to 1$ as $x\to\infty$ uniformly in $y$. Moreover, $v\in C^2(\overline{E}_{n})$. Thus every function $\psi\in D(\mathcal{Q}^N_n)$ can be written as \begin{equation} \label{factor-N} \psi(x,y) = v(x,y)\, \varphi(x,y), \quad \varphi\in C_0^2(\overline{E}_{n})\, . \end{equation} Similarly, for every function $\psi\in D(\mathcal{Q}^D_n)$ we have \begin{equation} \label{factor-D} \psi(x,y) = v(x,y)\, \varphi(x,y), \quad \varphi\in C_0^2(\overline{E}_{n})\cap \left\{\varphi :\, \varphi(n, \cdot) =0 \right\}. \end{equation} In view of \eqref{factor-N} and \eqref{factor-D} we can thus identify $Q^N_n[\psi]$ and $Q^D_n[\psi]$ with quadratic forms $\mathcal{Q}^N_n[\varphi]$ and $\mathcal{Q}^D_n[\varphi]$ given by $$ \mathcal{Q}^{N / D}_n[\varphi] = Q^{N /D}_n[v\, \varphi] $$ and acting in the weighted space $L^2(E_{n}, v^2 dxdy)$. The forms $\mathcal{Q}^N_n[\varphi]$ and $\mathcal{Q}^D_n[\varphi]$ are defined on $ D(\mathcal{Q}^N_n)=C_0^2(\overline{E}_{n})$ and $D(\mathcal{Q}^D_n) = C_0^2(\overline{E}_{n})\cap \left\{\varphi :\, \varphi(n, \cdot) = 0 \right\}$ respectively. A straightforward calculation based on integration by parts in $y$ then gives \begin{align} \label{gr-state} \mathcal{Q}^{N,D}_n[\varphi] & = \int_{E_{n}} \left( |\partial_x (v \varphi)|^2 + \mu(x)\, v^2\, |\varphi|^2+ v^2\, |\partial_y\varphi|^2 \right )\, dxdy. \end{align} \noindent Since $v\to 1$ and $\mu(x)\sim \sigma(x)/f(x)$ as $x\to\infty$, see appendix, it is natural to compare $\mathcal{Q}^{N,D}_n$ with the quadratic form $$ q_n[\varphi]= \int_{E_n} \! \big( |\partial_x\varphi|^2 +|\partial_y\varphi|^2\, + \frac{\sigma(x)}{f(x)}\, |\varphi|^2\big)\, dxdy . $$ Let $\mathfrak{S}_n^N$ and $\mathfrak{S}_n^D$ be the operators in $L^2(E_n)$ generated by the closures of the quadratic form $q_n[u]$ on $D(\mathcal{Q}^N_n)$ and $D(\mathcal{Q}^D_n)$ respectively.
\begin{lemma} \label{lem-intermed} Suppose that assumptions \ref{ass-f1} and \ref{ass-h} are satisfied. For any $\varepsilon$ there exists an $N_\varepsilon$ such that for all $n>N_\varepsilon$ \begin{equation} \label{intermed} N_\lambda(S^N_n) \leq N_\lambda((1-\varepsilon)\mathfrak{S}_n^N -\varepsilon), \quad N_\lambda(S^D_n) \geq N_\lambda((1+\varepsilon)\mathfrak{S}_n^D +\varepsilon) \quad \lambda>0 . \end{equation} \end{lemma} \begin{proof} Let $\epsilon>0$ and let $\varphi$ belong to the domain of the quadratic form $\mathcal{Q}^N_{n}$ (respectively $\mathcal{Q}^D_{n}$). From the fact that $$ \lim_{x\to\infty}\, v(x,y) =1,\quad \lim_{x\to\infty}\, \partial_x v(x,y) =0 \, \, \, (\text{uniformly\, in\, }\, y), \quad \lim_{x\to\infty}\, \frac{\mu(x) f(x)}{ \sigma(x)} = 1 \, , $$ see Lemma \ref{implicit}, and from the estimate $$ | 2v\partial_x v \, \varphi\, \partial_x \varphi| \leq \, \epsilon\, |\partial_x \varphi|^2v^2 +\epsilon^{-1}\, |\varphi|^2 |\partial_x v|^2 $$ we conclude that for $n$ large enough \begin{align} \mathcal{Q}^N_{n}[\varphi] & \geq (1-\epsilon) \int_{E_n} \left( |\partial_x\varphi|^2 +|\partial_y\varphi|^2\, + \frac{\sigma(x)}{f(x)}\, |\varphi|^2\right)\, dxdy -\epsilon\, \|\varphi\|^2_{L^2(E_n)}\nonumber \\ \mathcal{Q}^D_{n}[\varphi] & \leq (1+\epsilon) \int_{E_n} \left( |\partial_x\varphi|^2 +|\partial_y\varphi|^2\, + \frac{\sigma(x)}{f(x)}\, |\varphi|^2\right)\, dxdy + \epsilon\, \|\varphi\|^2_{L^2(E_n)}\, . \label{equiv-norm} \end{align} Moreover, by \eqref{eq-v} we also have $|v|\leq 1$ so that (still for $n$ large enough) $$ (1-\epsilon) \|\varphi\|^2_{L^2(E_n)} \leq \int_{E_n}\, |\varphi|^2 v^2\, dxdy \, \leq \, \|\varphi\|^2_{L^2(E_n)}. $$ Equation \eqref{intermed} then follows from the variational principle by choosing $\epsilon$ in an appropriate way (depending on $\varepsilon$). \end{proof} \subsection{Step 3} \label{step 3} We transform the problem of studying the Laplace operator on $E_n$ to the problem of studying a modified operator on the simpler domain $$ D_n = (n,\infty)\times (-1,1) . $$ To this end we introduce the transformation $U:L^2(E_n) \to L^2(D_n)$ defined by $$ (U \varphi)(x,t) = \sqrt{f(x)}\, \, \varphi(x,f(x)\, t),\quad (x,t)\in D_n. $$ Let $\mathcal{A}_n^{N}$ and $\mathcal{A}_n^{D}$ be the operators associated with the closure of the form \begin{equation} \label{new-form} \widehat{Q}_n[u] := q_n[U^{-1} u], \end{equation} on $C_0^2(\overline{D}_{n})$ and $C_0^2(\overline{D}_{n})\cap \left\{u :\, u(n, \cdot) = 0 \right\}$ respectively. Since $U$ maps $L^2(E_n)$ unitarily onto $L^2(D_n)$ and $U\, C_0^2(\overline{E}_{n})=C_0^2(\overline{D}_{n})$, the variational principle gives \begin{equation} \label{n-equal} N_\lambda(\mathcal{A}_n^N) = N_\lambda(\mathfrak{S}_n^N),\quad N_\lambda(\mathcal{A}_n^D) = N_\lambda(\mathfrak{S}_n^D). \end{equation} By a direct calculation \begin{equation*} \label{bordel} \widehat{Q}_n[u] = \int_{D_n} \! \big( |\partial_x u|^2+W_\sigma\, u^2 -2t\, \frac{f'}{f}\, \partial_x u\partial_t u + \frac{f'^2}{f^2}( t\, u\partial_t u+t^2|\partial_t u|^2) + \frac{1}{f^2}\, |\partial_t u|^2\big)\, dxdt . \end{equation*} \noindent Now let $\eta \in (0,1)$ be arbitrary.
Since $|t| \leq 1$ we get \begin{align*} \big |2t\, \frac{f'}{f}\, \partial_x u\, \partial_t u\big| & \, \leq \, \eta \, |\partial_x u|^2 + \eta^{-1}\, \frac{f'^2}{f^2}\, |\partial_t u|^2, \quad \frac{f'^2}{f^2}\, |t\, u\, \partial_t u| \, \leq \, \eta^{-1} \, \frac{f'^4}{f^2}\, \, |u|^2+ \frac{\eta}{f^2}\, \, |\partial_t u|^2 . \end{align*} Moreover, from \eqref{taylor-new} and from the fact $f$ is decreasing at infinity, by assumption \ref{ass-f1}, it follows that \begin{equation} \label{taylor} f'(x)^2 \leq 2 f(x)\, \sup_{s\geq x}\, |f''(s)|, \end{equation} for all $x$ large enough. Since $f''\to 0$ as $x\to\infty$, for any $\eta\in (0,1)$ there clearly exists an $N_\eta$ such that for any $n>N_\eta$ it holds \begin{align} \label{pm-estim} \widehat{Q}_n[u] & \lessgtr \int_{D_n}\! \big( (1\pm \eta) |\partial_x u|^2+ W_\sigma\, u^2 + \frac{1\pm 2\eta}{f^2}\, |\partial_t u|^2 \pm \eta u^2 \big )\, dxdt . \end{align} \noindent We denote by $H^N_n$ and $H^{D}_n$ the operators acting in $L^2(\Omega_{n,r})$ associated with the closures of the quadratic form $$ \int_{D_n} \big( |\partial_x u|^2 +\frac{|\partial_t u|^2}{f^2(x)}\, + W_\sigma(x)\, u^2\big)\, dxdt $$ defined on $C_0^2(\overline{D}_{n})$ and $C_0^2(\overline{D}_{n})\cap \left\{u :\, u(n, \cdot) = 0 \right\}$ respectively. \begin{lemma} \label{lem6.3} Suppose that assumptions \ref{ass-f1} and \ref{ass-h} are satisfied. For any $\varepsilon$ there exists an $N_\varepsilon$ such that for all $n>N_\varepsilon$ and any $ \lambda>0$ it holds \begin{equation} \label{2-side-eps} N_\lambda(\mathcal{A}^N_n) \leq N_\lambda((1-\varepsilon) H^N_n), \quad N_\lambda(\mathcal{A}^D_n) \geq N_\lambda((1+\varepsilon) H^D_n). \end{equation} \end{lemma} \begin{proof} In view of the fact that $W_\sigma(x)\to\infty$ the statement follows from \eqref{pm-estim}. \end{proof} \noindent Next we observe that since $W_\sigma$ depends only on $x$, the matrix representations of the operators $H^N_n$ and $H^D_n$ in the basis of (normalised) eigenfunctions of the operator $-f(x)^{-2}\, \frac{d^2}{dt^2}$ on the interval $(-1, 1)$ with Neumann boundary conditions are diagonal. We thus have the following unitary equivalence: \begin{equation} \label{ort-sum} H^N_n \simeq \bigoplus_{k=0}^\infty \mathcal{H}^N_{k,n}\, , \quad H^D_n \simeq \bigoplus_{k=0}^\infty \mathcal{H}^D_{k,n}, \qquad \mathcal{H}^{N/D}_{k,n}= -\frac{d^2}{dx^2}\, + W_\sigma(x) + \frac{k^2\pi^2}{4 f(x)^2}\, , \end{equation} where $\mathcal{H}^{N/D}_{k,n}$ are operators in $L^2(n,\infty)$ with Neumann/Dirichlet boundary condition at $x=n$. We denote $$ \mathcal{H}_{0,n}^N = \mathcal{H}^N_{n}, \quad \mathcal{H}_{0,n}^D = \mathcal{H}^D_{n}\, . $$ \noindent As a consequence of \eqref{ort-sum} we get \begin{proof}[Proof of Theorem \ref{disc}] We make use of inequality \eqref{two-sided-operators} for some fixed $n$ and show that the operator on the left hand side of \eqref{two-sided-operators} has purely discrete spectrum. By general arguments of the spectral theory this will imply the statement. Since the spectrum of $T^N_n$ is discrete, it suffices to show that the same is true for $S^N_n$. In view of Lemma \ref{lem-intermed} and equations \eqref{n-equal}, \eqref{2-side-eps} it is enough to prove the discreteness of the spectrum of $H^{N}_n$. By \eqref{ort-sum} we have $$ \mbox{spect}(H^{N}_n) = \cup_{k=0}^\infty\, \mbox{spect}(\mathcal{H}^N_{k,n}), $$ First we notice that $\mbox{spect}(\mathcal{H}^N_{k,n})$ is purely discrete for each $k$ and $n$. 
Indeed, a sufficient condition for the spectrum of $\mathcal{H}^N_{k,n}$ to be purely discrete is that \begin{equation} \label{infinity} W_\sigma(x) + \frac{k^2\pi^2}{4 f(x)^2} \to \infty \quad \text{as\, \, } x\to\infty\, , \end{equation} see e.g. \cite[Thm. 13.67]{rs}; condition \eqref{infinity} is a direct consequence of assumption \ref{ass-h}. Hence the spectrum of $H^N_n$ is pure point, i.e. consists only of eigenvalues. Moreover, since $f^2(x) W_\sigma(x) \to 0$ as $x\to\infty$ by \eqref{taylor} and the boundedness of $\sigma$, it is easy to see that $$ \forall\, n \quad \inf\, \mbox{spect}(\mathcal{H}^N_{k,n}) \to \infty\qquad \mbox{as}\quad k\to\infty. $$ Hence all the eigenvalues in the spectrum of $H^N_n$ have finite multiplicity and $\mbox{spect}(H^N_n)$ contains no finite point of accumulation. This means that $\mbox{spect}(H^N_n)$ is discrete. \end{proof} \begin{proof}[Proof of Theorem \ref{principle}] Case $|\Omega|<\infty$. By Lemma \ref{1-dim} the asymptotic behaviour of $N_\lambda(\mathcal{H}^{N,D}_n)$ does not depend on the boundary condition at $x=n$, nor on $n$ itself: \begin{equation} \label{asymp-equal} N_\lambda(\mathcal{H}_\sigma) \sim N_\lambda(\mathcal{H}^N_{n}) \sim N_\lambda(\mathcal{H}^D_{n}) \quad \lambda\to\infty, \, \, \, \forall\, n\in\mathbb{N}\, . \end{equation} Now fix an $\varepsilon>0$. From Lemma \ref{lem-intermed}, \eqref{n-equal} and \eqref{2-side-eps} we see that for all $n$ large enough it holds \begin{equation} \label{bounds} N_\lambda(S^N_n) \, \leq \, N_\lambda((1-\varepsilon)\, H_n^N), \qquad N_\lambda(S^D_n) \, \geq \, N_\lambda((1+\varepsilon)\, H_n^D). \end{equation} Moreover, $f^2(x) W_\sigma(x) \to 0$ at infinity so that \begin{equation} \label{w-eps} (1-\varepsilon)\, \frac{k^2\pi^2}{4 f(x)^2} \leq W_\sigma(x) + \frac{k^2\pi^2}{4 f(x)^2} \leq (1+\varepsilon)\, \frac{k^2\pi^2}{4 f(x)^2}\qquad \forall\, k\geq 1 \end{equation} for all $x$ large enough uniformly in $k$. Now observe that the sequence $\left\{k^2\pi^2/4 f(x)^2\right\}_{k\geq 1}$ comprises {\it all} the eigenvalues of the operator $-f(x)^{-2}\, \frac{d^2}{dt^2}$ on the interval $(-1, 1)$ with Dirichlet boundary conditions. Hence it follows from \eqref{ort-sum} and \eqref{w-eps} that for $n$ large enough \begin{align} N_\lambda((1+\varepsilon)\, H_n^D) & \geq \, N_\lambda((1+\varepsilon)^2\, \mathcal{B}_n^D)+ N_\lambda((1+\varepsilon)\, \mathcal{H}_n^D) \nonumber \\ N_\lambda((1-\varepsilon)\, H_n^N) & \leq \, N_\lambda((1-\varepsilon)^2\, \mathcal{B}_n^N)+ N_\lambda((1-\varepsilon)\, \mathcal{H}_n^N) , \label{up} \end{align} where $\mathcal{B}_n^{N/D}$ are the operators defined in Section \ref{auxiliary}. Note that $\mathcal{B}_n^{N/D}$ and $\mathcal{H}_n^{N/D}$ satisfy assumption \eqref{eps-B} by Lemma \ref{B-op} and equation \eqref{asymp-equal}.
In view of \eqref{two-sided} we then conclude that for $n$ large enough \begin{align} N_\lambda(A_\sigma) & \leq \, (1+\mathcal{O}(\varepsilon))\, \left(N_\lambda(T_n^N)+N_\lambda(\mathcal{B}_n^N) +N_\lambda(\mathcal{H}_n^N)\right) \label{almost-}\\ N_\lambda(A_\sigma) & \geq \, (1+\mathcal{O}(\varepsilon))\, \left(N_\lambda(T_n^D)+N_\lambda(\mathcal{B}_n^D) +N_\lambda(\mathcal{H}_n^D)\right). \label{almost+} \end{align} If the volume of $\Omega$ is finite then it follows from Lemmas \ref{B-op}, \ref{finite-part} and equations \eqref{asymp-equal}, \eqref{almost-}, \eqref{almost+} that for any $\varepsilon>0$ \begin{equation} \label{finite-vol} 1+\mathcal{O}(\varepsilon) \, \leq\, \liminf_{\lambda\to\infty}\, \frac{N_\lambda(A_\sigma)}{\frac{\lambda}{4\pi}\, |\Omega| +N_\lambda(\mathcal{H}_\sigma)}\, \leq \, \limsup_{\lambda\to\infty}\, \frac{N_\lambda(A_\sigma)}{\frac{\lambda}{4\pi}\, |\Omega| +N_\lambda(\mathcal{H}_\sigma)}\, \leq 1+\mathcal{O}(\varepsilon). \end{equation} By letting $\varepsilon\to 0$ we arrive at \eqref{principle-eq}. \noindent Case $|\Omega|=\infty$. If the volume of $\Omega$ is infinite, then Lemma \ref{B-op} gives \begin{equation} \label{aux-inf} N_\lambda(T_n^N)+N_\lambda(\mathcal{B}_n^N) \, \sim \, N_\lambda(\mathcal{B}_n^N) \, \sim\, N_\lambda(\mathcal{B}_n^D)\, \sim\, N_\lambda(T_n^D)+N_\lambda(\mathcal{B}_n^D)\, \sim\, N_\lambda(\mathcal{B}) \end{equation} as $ \lambda\to\infty$. Moreover, mimicking all the above estimates for the Dirichlet Laplacian $-\Delta_\Omega^D$ instead of $A_\sigma$ it is straightforward to verify that for any $\varepsilon>0$ and $n$ large enough, depending on $\varepsilon$, it holds $$ N_\lambda((1-\varepsilon)\, \mathcal{B}_n^N)\, \leq \, N_\lambda(-\Delta_\Omega^D)\, \leq \, N_\lambda((1+\varepsilon)\, \mathcal{B}_n^D). $$ This together with \eqref{eps-B} and \eqref{aux-inf} implies that $N_\lambda(\mathcal{B}) \, \sim \, N_\lambda(-\Delta_\Omega^D)$ as $ \lambda\to\infty $. Equation \eqref{principle-eq} thus follows again from \eqref{almost-} and \eqref{almost+}. \end{proof} \begin{proof}[Proof of Corollary \ref{robin-w}] Note that the assumption \ref{ass-h} is fulfilled. Indeed, equation \eqref{taylor} shows that $f(x) V(x)\to 0$. Consequently \eqref{compact-h} holds true since $f\to 0$ and \begin{equation} \label{pot-asymp} W_\sigma(x) \, \sim\, \frac{\sigma}{f(x)}\qquad x\to \infty. \end{equation} To prove \eqref{robin-weyl} we recall that $$ \liminf_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda(A_\sigma) \geq \liminf_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda(-\Delta_\Omega^D) = \frac{|\Omega|}{4\pi}\, . $$ On the other hand, if $\limsup_{x\to\infty} x^2 f(x) =0$, then \eqref{pot-asymp} implies that for any $\varepsilon>0$ there exists an $x_\varepsilon$ such that $W_\sigma(x) \geq \frac{x^2}{\varepsilon^2}$ holds for all $x\geq x_\varepsilon$. Lemma \ref{enough} gives $$ \limsup_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda(\mathcal{H}_\sigma) \leq \limsup_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda\left( -\frac{d^2}{dx^2}\, +\frac{x^2}{\varepsilon^2}\right)_{L^2(x_\varepsilon,\infty)} = \frac{\varepsilon}{4}\, . 
$$ From Lemma \ref{B-op} and the proof of Theorem \ref{principle} (see equations \eqref{two-sided}, \eqref{finite-vol}, \eqref{bounds} and \eqref{up}) we then get $$ \limsup_{ \lambda\to\infty}\, \frac{N_\lambda(A_\sigma)}{\lambda} \leq \, (1+\mathcal{O}(\varepsilon))\, \frac{|\Omega|}{4\pi} + \limsup_{ \lambda\to\infty}\, \frac{N_\lambda((1-\varepsilon)\, \mathcal{H}_\sigma)}{ \lambda} \leq (1+\mathcal{O}(\varepsilon))\, \frac{|\Omega|}{4\pi} +\mathcal{O}(\varepsilon)\, . $$ Equation \eqref{robin-weyl} now follows by letting $\varepsilon\to 0$. In order to prove \eqref{robin-linear} we note that $W_\sigma(x) \sim \sigma\, a^{-2}\, x^2$ as $x\to\infty$, see \eqref{pot-asymp}. From Lemma \ref{enough} we thus deduce that $$ \lim_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda(\mathcal{H}_\sigma) \, = \, \frac{|a|}{4\sqrt{\sigma}}, $$ so that \eqref{eps} is satisfied and \eqref{robin-linear} follows from Theorem \ref{main-thm}. \end{proof} \section{Generalisations} \label{gen} \subsection{Non symmetric boundary conditions} \label{non-sym} As mentioned in Remark \ref{general}, the above approach can also be applied to Robin Laplacians with different boundary conditions on the upper and lower boundaries of $\Omega$. More precisely, to operators $A_{\sigma_1,\sigma_2}$ generated by the closure of the form \begin{equation} \label{form-gen} Q_{\sigma_1,\sigma_2}[u]= \int_\Omega |\nabla u|^2\, dxdy + \int_1^\infty\! \left(\sigma_1(x)\, u(x,f(x))^2+\sigma_2(x)\, u(x,-f(x))^2 \right)\, dx \end{equation} on $C_0^2(\overline{\Omega})$. We can proceed in the same way as in Section \ref{proof}, replacing the function $v(x,y)$ in Step 2 by the function $w(x,y)$, which solves the eigenvalue problem \begin{align} \label{new-bc} -\partial_y^2\, w(x,y) & = \bar\mu(x)\, w(x,y), \\ \partial_y w(x,-f(x)) = \sigma_1(x)\, w(x,-f(x)),\ \ & \ \ \partial_y w(x,f(x)) = -\sigma_2(x)\, w(x,f(x)) \nonumber, \end{align} with $\bar\mu(x)$ being the principal eigenvalue. From equation \eqref{new-limit} (see the Appendix) we then obtain a generalisation of Theorem \ref{principle}. \begin{proposition} Suppose that assumptions \ref{ass-f1},\, \ref{ass-h} and \ref{number} for $\sigma_1, \, \sigma_2$ are satisfied. Then \begin{equation} \label{principle-gen} N_\lambda(A_{\sigma_1,\sigma_2}) \, \sim \, N_\lambda(-\Delta_\Omega^D) + N_\lambda(\mathcal{H}_{\bar\sigma}) \qquad \lambda\to\infty, \qquad \bar\sigma(x) = \frac{\sigma_1(x)+\sigma_2(x)}{2} . \end{equation} \end{proposition} \subsection{Dirichlet-Neumann Laplacian} \label{d-n} Our second remark concerns the case in which we impose a Dirichlet boundary condition on one of the boundaries of $\Omega$ and a Neumann boundary condition on the other. We denote the resulting operator by $A_{0,\infty}$. \begin{proposition} \label{d_n} Let $|\Omega| < \infty$ and assume that $f$ is decreasing at infinity. Then \begin{equation} \label{d-n-eq} N_\lambda(A_{0,\infty}) \, \sim \, \frac{|\Omega|}{4\pi}\, \, \lambda, \qquad \lambda\to\infty. \end{equation} \end{proposition} \begin{proof} First we observe that, by the variational principle, \begin{equation} \label{lowerb-d} \liminf_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda(A_{0,\infty}) \geq \liminf_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda(-\Delta_\Omega^D) = \frac{|\Omega|}{4\pi}\, . 
\end{equation} Assume that $f$ is decreasing on $(a,\infty)$ and that $\lambda$ is large enough so that there exists a unique point $x_\lambda>a$ such that $f(x_\lambda) = \pi/(4 \sqrt{\lambda})$. We impose an additional Neumann boundary condition at $\{x= x_\lambda\}$, thus dividing $\Omega$ into the finite part $\Omega_\lambda: =\{(x,y)\in\Omega\, :\, x<x_\lambda\}$ and its complement $\Omega_\lambda^c$. It is then easy to see that the quadratic form of the corresponding operator acting on $\Omega_\lambda^c$ is bounded from below by $$ \int_{x_\lambda}^\infty \int_{-f(x)}^{f(x)}\, \Big (\frac{\pi^2}{16\, f^2(x)}\, \, u^2 +|\partial_x u|^2\Big )\, dy\, dx\, \geq \, \lambda\, \int_{x_\lambda}^\infty \int_{-f(x)}^{f(x)}\, u^2\, dy\, dx $$ for all functions $u$ from its domain. Consequently, this operator does not have any eigenvalues below $\lambda$. To estimate the number of eigenvalues of the operator acting on $\Omega_\lambda$, we cover $\Omega_\lambda$ with a finite collection of disjoint cubes of size $L = 1/(\varepsilon\sqrt{\lambda})$ with $\varepsilon>0$. Since $\Omega_\lambda$ has the extension property, the standard technique of Neumann bracketing gives \begin{align} \lambda^{-1} N_\lambda(A_{0,\infty}) & \leq \, \lambda^{-1} N_\lambda(-\Delta_{\Omega_\lambda}^N) \leq \, \frac{|\Omega_\lambda|}{4\pi}\, (1+\mathcal{O}(\varepsilon)) + c\, \, \frac{|\partial\Omega_\lambda|}{\sqrt{\lambda}} \, (1+\varepsilon^{-1}) \nonumber \\ & \leq \, \frac{|\Omega_\lambda|}{4\pi}\, (1+\mathcal{O}(\varepsilon)) + \tilde c\, \, \frac{x_\lambda}{\sqrt{\lambda}} \, (1+\varepsilon^{-1}), \label{upperb-n} \end{align} where $\tilde c$ is independent of $\lambda$. However, since $f$ is integrable and decreasing at infinity it is easily seen that $x f(x) \to 0$ as $x\to \infty$. Hence $$ \limsup_{ \lambda\to\infty}\, \frac{x_\lambda}{\sqrt{\lambda}} = \frac{4}{\pi}\, \limsup_{ \lambda\to\infty}\, x_\lambda\, f(x_\lambda)= 0. $$ Letting first $\lambda\to\infty$ and then $\varepsilon\to 0$ in \eqref{upperb-n} we obtain $\limsup_{ \lambda\to\infty}\, \lambda^{-1} N_\lambda(A_{0,\infty}) \leq |\Omega|/4\pi$, which together with \eqref{lowerb-d} implies the statement. \end{proof} \appendix \section{} \label{impl} \noindent \begin{lemma} \label{mu-limit} Let $\mu(x)$ be the function defined by the problem \eqref{robin}. Then \begin{equation} \label{mu-lim} \mu(x) \, \leq \, \frac{\sigma(x)}{f(x)} \qquad \forall\, x > 1. \end{equation} \end{lemma} \begin{proof} For each fixed $x\in(1,\infty)$ we define the quadratic form \begin{equation} \label{a-form} a_x[u] = \int_{-f(x)}^{f(x)}\, |u'(y)|^2\, dy + \sigma(x)\left(|u(f(x))|^2+|u(-f(x))|^2\right), \quad u\in D(a_x), \end{equation} where $D(a_x) = H^1(-f(x),f(x))$. The variational definition of $\mu$ says that $$ \mu(x) = \inf_{u\in D(a_x)}\, \frac{a_x[u]}{\|u\|^2_{L^2(-f(x),f(x))}} \leq \frac{a_x[1]}{\|1\|^2_{L^2(-f(x),f(x))}}\, = \frac{\sigma(x)}{f(x)}\, . $$ \end{proof} \noindent In the next lemma we use the notation $\kappa(x) := \sqrt{\mu(x)}$. \begin{lemma} \label{implicit} Let assumption \ref{ass-h} be satisfied. Then the eigenfunction $v(x,y)$ of the problem \eqref{robin} associated with the eigenvalue $\mu(x)$ is twice continuously differentiable in $x$. 
Moreover, we have \begin{align} \lim_{x\to\infty}\, \frac{f(x)\, \mu(x)}{\sigma(x)} & = 1 \label{mu} \\ \lim_{x\to\infty} v(x,y) & = 1 \, \, \, \qquad \text{uniformly\, in \,} y , \label{v}\\ \lim_{x\to\infty} \partial_x v(x,y) & = 0 \, \, \, \qquad \text{uniformly\, in\, } y \label{v'}. \end{align} \end{lemma} \begin{proof} It is easy to see that \begin{equation} \label{eq-v} v(x,y) = \cos(\kappa(x) y), \end{equation} where $\kappa(x)$ is the first positive solution to the implicit equation \begin{equation} \label{kappa} F(x,\kappa): =\kappa\, \tan(\kappa f(x))-\sigma(x) =0. \end{equation} Since $f(x) \kappa(x)\to 0$ as $x\to\infty$ by Lemma \ref{mu-limit} (recalling that $\sigma(x)f(x)\to 0$), we easily deduce from \eqref{kappa} that \begin{equation} \label{g} \lim_{x\to\infty} \, \frac{f(x) \kappa^2(x)}{\sigma(x)} = 1, \end{equation} which proves \eqref{mu}. Equation \eqref{v} thus follows directly from \eqref{eq-v} and the fact that $f(x)\kappa(x)\to 0$. Next we note that \eqref{kappa} implies $$ 0 \, < \kappa(x) \, < \frac{\pi}{2\, f(x)} \qquad \forall\, x >1, $$ and hence \begin{equation} \label{Fk} \partial_\kappa F(x,\kappa) = \tan(f(x)\kappa) +\frac{f(x)\, \kappa}{\cos^2(f(x)\kappa)} >0. \end{equation} Since $\sigma\in C^2(1,\infty)$, the implicit function theorem shows that $\kappa$ is of class $C^2$, and in view of \eqref{eq-v} we see that $v$ is twice continuously differentiable in $x$. In order to prove \eqref{v'} we need some information about the behaviour of $\kappa'$ for large $x$. From the positivity of $f$ and $\sigma$ and from Taylor's theorem we conclude that $\sigma'/\sqrt{\sigma}$ is bounded and that $f'/\sqrt{f}\to 0$, see equation \eqref{taylor}. Equations \eqref{Fk} and \eqref{g} then give \begin{equation} \label{kappa'} \kappa'(x) = -\frac{\partial_x F}{\partial_\kappa F}\, \sim \, \frac{\sqrt{\sigma(x)}}{2\sqrt{f(x)}}\, \, (f'(x)-\sigma'(x)) \quad x\to\infty. \end{equation} On the other hand, a direct calculation shows that $$ |\partial_x v(x,y)| \, \leq \, |\kappa'(x)|\, f^{3/2}(x)\, \sqrt{\sigma(x)}\, \qquad \forall\, x>1. $$ This implies \eqref{v'}. \end{proof} \noindent Notice that if we replace the eigenvalue problem \eqref{robin} by \eqref{new-bc}, then a straightforward analysis of the associated implicit equation shows that \begin{equation} \label{new-limit} \lim_{x\to\infty}\, \frac{f(x)\, \bar\mu(x)}{\bar\sigma(x)} = 1, \qquad \bar\sigma(x) = \frac{\sigma_1(x)+\sigma_2(x)}{2}\, . \end{equation} \end{document}
\begin{document} \title{Domino cooling of a coupled mechanical-resonator chain via cold-damping feedback} \author{Deng-Gao Lai} \affiliation{Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Key Laboratory for Matter Microstructure and Function of Hunan Province, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \author{Jian Huang} \affiliation{Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Key Laboratory for Matter Microstructure and Function of Hunan Province, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China} \author{Bang-Pin Hou} \affiliation{College of Physics and Electronic Engineering, Institute of Solid State Physics, Sichuan Normal University, Chengdu 610068, China} \author{Franco Nori} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \affiliation{Physics Department, The University of Michigan, Ann Arbor, Michigan 48109-1040, USA} \author{Jie-Qiao Liao} \email{[email protected]} \affiliation{Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Key Laboratory for Matter Microstructure and Function of Hunan Province, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China} \begin{abstract} We propose a domino-cooling method to realize simultaneous ground-state cooling of a coupled mechanical-resonator chain through an optomechanical cavity working in the unresolved-sideband regime. This domino-effect cooling is realized by combining the cold-damping feedback on the first mechanical resonator with nearest-neighbor couplings between other neighboring mechanical resonators. We obtain analytical results for the effective susceptibilities, noise spectra, final mean phonon numbers, and cooling rates of these mechanical resonators, and find the optimal-cooling condition for these resonators. Particularly, we analyze a two-mechanical-resonator case and find that by appropriately engineering either the laser power or the feedback, a flexible switch between symmetric and asymmetric ground-state cooling can be achieved. This could be used for preparing symmetric quantum states in mechanical systems. We also simulate the cooling performance of a coupled $N$-mechanical-resonator chain and confirm that these resonators can be simultaneously cooled to their quantum ground states in the unresolved-sideband regime. Under proper parameter conditions, the cooling of the mechanical-resonator chain shows a temperature gradient along the chain. This study opens a route to quantum manipulation of multiple mechanical resonators in the bad-cavity regime. 
\end{abstract} \maketitle \section{Introduction\label{sec1}} Cavity optomechanical systems~\cite{Kippenberg2008Science,Meystre2013AP,Aspelmeyer2014RMP}, addressing the radiation-pressure coupling between mechanical motion of mesoscopic or even macroscopic objects and electromagnetic degrees of freedom, provide a promising platform for manipulating cavity-field statistics by mechanically changing the cavity boundary or controlling the mechanical properties through optical means ~\cite{Rabl2011PRL,Nunnenkamp2011,Liao2012PRA,Liao2013PRA,Liao2013,Wang2013PRL,Liu2013PRL,Vitali2007PRL,Agarwal2010PRA,Genes2011PRA,Cirio2017PRL,Xu12015PRA,Hou2015PRA,Wu2018PRApplied,Qin2018PRL,Zippilli2018PRA}. Optomechanical cooling~\cite{Wilson-Rae2007PRL,Marquardt2007PRL,Genes2008NJP,Xia2009PRL,Liu2013PRL1,Xu2017PRL,Teufel2011Nature,Clarkl2017Nature,MXu2020PRL,Qiu2020PRL,Mancini1998PRL,Genes2008PRA,Steixner2005PRA,Bushev2006PRL,Rossi2017PRL,Rossi2018Nature,Conangla2019PRL,Tebbenjohanns2019PRL,Sommer2019PRL,Guo2019PRL,Sommer2020PRR}, as a prominent application closely relevant to this platform, has become an important research topic in this field. This is because a prerequisite for observing the signature of quantum mechanical effects is to cool the systems to their quantum ground states, such that thermal noise can be suppressed. So far, the ground-state cooling of a single mechanical resonator based on optomechanical platforms has been mainly achieved by two cooling mechanisms: (i) resolved-sideband cooling~\cite{Wilson-Rae2007PRL,Marquardt2007PRL,Genes2008NJP,Xia2009PRL,Liu2013PRL1,Xu2017PRL,Teufel2011Nature,Clarkl2017Nature,MXu2020PRL,Qiu2020PRL}, which is preferable in the good-cavity regime; and (ii) feedback-aided cooling~\cite{Mancini1998PRL,Genes2008PRA,Steixner2005PRA,Bushev2006PRL,Rossi2017PRL,Rossi2018Nature,Conangla2019PRL,Tebbenjohanns2019PRL,Sommer2019PRL,Guo2019PRL,Sommer2020PRR}, which is more efficient in the bad-cavity regime. Alternatively, cooling can also be achieved in superconducting quantum circuits~\cite{Grajcar2008PRB,Zhang2009PRA,Liberato2011PRA,Xue2007PRB,You2008PRL,Nori2008NP,Xiang2013RMP}. Note that the ground-state cooling of the mechanical resonators means that the final average occupancies in these resonators are well below unity~\cite{Wilson-Rae2007PRL,Marquardt2007PRL}. In recent years, much attention has been paid to the multimode optomechanical systems involving multiple mechanical resonators~\cite{Shkarin2014PRL,Malz2018PRL,Shen2016NP,Shen2018NC,Fang2017NP,Xu2019Nature,Mathew2018arXiv,Yang2020NC,Massel2012Nc,Mari2013PRL,Matheny2014PRL,Zhang2015PRL,Riedinger2018Nature,Ockeloen-Korppi2018,Stefano2019PRL,Li2020PRA,Pelka2020PRR,Xuereb2012PRL,Xuereb2014PRL,Heinrich2011PRL,Ludwig2013PRL,Xuereb2015NJP,Mahmoodian2018PRL,Xu2016Nature,SanavioPRB2020}. The motivations for exploring these systems include the study of macroscopic mechanical coherence in multimode mechanical systems~\cite{Shkarin2014PRL,Massel2012Nc,Mari2013PRL,Matheny2014PRL,Zhang2015PRL,Riedinger2018Nature,Riedinger2018Nature,Ockeloen-Korppi2018,Stefano2019PRL,Li2020PRA,Pelka2020PRR}, the engineering of complex long-range interactions among the mechanical components~\cite{Xuereb2012PRL,Xuereb2014PRL}, the investigation of quantum many-body phenomena~\cite{Heinrich2011PRL,Ludwig2013PRL,Xuereb2015NJP,Mahmoodian2018PRL}, and the implementation of nonreciprocal photon or phonon transport~\cite{Malz2018PRL,Shen2016NP,Shen2018NC,Fang2017NP,Xu2019Nature,Mathew2018arXiv,Yang2020NC,Xu2016Nature,SanavioPRB2020}. 
However, these applications are fundamentally limited by thermal noise. To suppress these thermal effects, the simultaneous ground-state refrigeration of these mechanical resonators becomes an urgent and important task. Although some schemes for cooling multiple mechanical resonators in the good-cavity regime have been proposed using the cavity resolved-sideband-cooling mechanism~\cite{Ockeloen-Korppi2019PRA,Lai2018PRA,Lai2020PRARC}, it remains unclear whether the feedback-cooling technique can be used to simultaneously cool these mechanical resonators to their quantum ground states. \begin{figure*} \caption{(a) Schematic of the cascade optomechanical system. A cavity field with resonance frequency $\omega_{c}$ \dots} \label{Figmodel} \end{figure*} In this paper, we demonstrate that an array of $N$ mechanical resonators coupled in series can be simultaneously cooled to their quantum ground states with cold-damping feedback. Here, the feedback technique is applied to the optomechanical cavity via a feedback loop, which is used to exert a direct force on the first resonator. This leads to the freezing of the thermal fluctuations of the first mechanical resonator (cold-damping effect). The neighboring mechanical resonators are connected to each other via position-position (nearest-neighbor) interactions. Physically, the feedback loop applied to the first mechanical resonator acts as its cooling channel. Each resonator then provides a cooling channel for the next one via the nearest-neighbor coupling, forming a cascade-cooling process that acts like a domino-effect or chain-reaction cooling through the system. By deriving analytical results for the effective susceptibilities, noise spectra, final mean phonon numbers, and cooling rates of these mechanical resonators, we obtain the optimal-cooling condition for this coupled mechanical-resonator chain. Our proposal allows both degenerate and non-degenerate mechanical resonators to reach simultaneous ground-state cooling in the unresolved-sideband regime. We also find that a flexible switch between asymmetric and symmetric ground-state cooling can be achieved by appropriately engineering either the laser power or the feedback parameters (e.g., feedback gain and feedback bandwidth) applied to the first mechanical resonator. Note that asymmetric (symmetric) cooling means that the final mean phonon numbers of the two mechanical resonators are different (the same). The symmetric-cooling case is helpful for the creation of symmetric quantum states in the two mechanical resonators, because their initial states are then almost the same. Additionally, we extend this domino-cooling method to the simultaneous cooling of $N$ mechanical resonators. The results show that, when the mechanical coupling strength is much smaller than the mechanical frequency, the cooling efficiency is higher for the mechanical resonator that is closer to the cavity. Physically, the feedback loop extracts the thermal excitations from the first resonator through the feedback cooling channel, and then the feedback-cooled resonator extracts the thermal excitations from the next one via the mechanical cooling channel. In this case, the feedback cooling rate should be much larger than the mechanical cooling rates, which leads to the highest cooling efficiency for the feedback-cooled resonator. 
However, by increasing the mechanical coupling, an anomalous cooling occurs, i.e., the feedback-cooled resonator is not the coldest. This is because the counter-rotating-wave (CRW) interaction terms create more and more phonon excitations as the mechanical coupling strength increases, and the cooling of the first resonator is then suppressed. This study will pave the way toward quantum manipulation of multimode mechanical systems in the bad-cavity regime. The rest of this paper is organized as follows. In Sec.~\ref{sec2}, we introduce the physical model and the Hamiltonians. In Sec.~\ref{sec3}, we derive the Langevin equations and the final mean phonon numbers. In Secs.~\ref{sec4} and~\ref{sec5}, we study the cooling of two and $N$ coupled mechanical resonators, respectively. Finally, we provide a brief conclusion in Sec.~\ref{sec6}. An Appendix is presented to display the detailed calculation of the final mean phonon numbers in the two-mechanical-resonator case. \section{Model and Hamiltonian\label{sec2}} We consider a multimode optomechanical system in which a single-mode cavity field couples to an array of $N$ mechanical resonators coupled in series, as illustrated in Fig.~\ref{Figmodel}(a). The first mechanical resonator is coupled to the cavity field via the radiation-pressure coupling, and these nearest-neighboring mechanical resonators are coupled to each other through position-position couplings (forming a cascade configuration). A strong driving field (the driving amplitude $\Omega$ is much larger than the cavity-field decay rate $\kappa$) is applied to the optical cavity for manipulating the optical and mechanical degrees of freedom. The Hamiltonian of the system reads ($\hbar =1$)~\cite{Lai2018PRA} \begin{eqnarray} H &=&\omega _{c}a^{\dagger }a+\sum_{j=1}^{N}\left(\frac{p_{x,j}^{2}}{2m_{j}}+\frac{m_{j}\tilde{\omega}_{j}^{2}x_{j}^{2}}{2}\right)-\lambda a^{\dagger}ax_{1} \notag \\ &&+\sum_{j=1}^{N-1}\eta_{j}(x_{j}-x_{j+1})^{2}+\Omega(a^{\dagger }e^{-i\omega _{L}t}+ae^{i\omega _{L}t}),\label{eq1iniH} \end{eqnarray} where $a$ ($a^{\dagger }$) is the annihilation (creation) operator of the cavity-field mode with the resonance frequency $\omega_{c}$. The momentum and position operators $p_{x,j}$ and $x_{j}$ describe the $j$th mechanical resonator with resonance frequency $\tilde{\omega}_{j}$ and mass $m_{j}$. The $\lambda$ term in Eq.~(\ref{eq1iniH}) denotes the optomechanical interaction between the cavity field and the first mechanical resonator, where $\lambda=\omega_{c}/L$ is the radiation-pressure force of a single photon, with $L$ being the rest length of the optical cavity. The nearest-neighbor interactions between these neighboring mechanical resonators are depicted by these $\eta_{j}$ terms. The last term in Eq.~(\ref{eq1iniH}) describes the input laser driving with the driving frequency $\omega_{L}$ and amplitude $\Omega=\sqrt{2P_{L}\kappa/\omega_{L}}$, where $P_{L}$ and $\kappa$ are, respectively, the driving power and cavity-field decay rate. For convenience, we introduce the dimensionless coordinate and momentum operators $q_{j}=\sqrt{m_{j}\omega_{j}}x_{j}$ and $p_{j}=\sqrt{1/(m_{j}\omega_{j})}p_{x,j}$ ($[q_{j},p_{j}]=i$) for $j\in[1,N]$, and the normalized resonance frequencies $\omega _{1(N)}=\sqrt{\tilde{\omega}_{1(N)}^{2}+2\eta_{1(N-1)}/m_{1(N)}}$ and $\omega_{j\in[2,N-1]}=\sqrt{\tilde{\omega}_{j}^{2}+2(\eta _{j-1}+\eta _{j}) /m_{j}}$ for these resonators. 
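As a quick numerical illustration of these definitions, the normalized resonance frequencies $\omega_{j}$, together with the dimensionless couplings $\tilde{\eta}_{j}=\eta_{j}/\sqrt{m_{j}m_{j+1}\omega_{j}\omega_{j+1}}$ that appear in the rotating-frame Hamiltonian below, can be tabulated with a few lines of Python; the parameter values in this sketch are illustrative placeholders (natural units with $\hbar=1$) and are not taken from the rest of the paper.
\begin{verbatim}
import numpy as np

# Illustrative placeholder parameters (natural units, hbar = 1); they are
# not taken from the text and serve only to exercise the definitions.
N = 4
m = np.ones(N)                  # masses m_j
omega_tilde = np.ones(N)        # bare frequencies ~omega_j
eta = 0.02 * np.ones(N - 1)     # spring couplings eta_j in the Hamiltonian H

# Normalized resonance frequencies: the end resonators acquire one spring
# term, the inner ones acquire two.
omega = np.empty(N)
omega[0] = np.sqrt(omega_tilde[0] ** 2 + 2 * eta[0] / m[0])
omega[-1] = np.sqrt(omega_tilde[-1] ** 2 + 2 * eta[-1] / m[-1])
for j in range(1, N - 1):
    omega[j] = np.sqrt(omega_tilde[j] ** 2 + 2 * (eta[j - 1] + eta[j]) / m[j])

# Dimensionless nearest-neighbor couplings used in the rotating-frame Hamiltonian.
eta_tilde = eta / np.sqrt(m[:-1] * m[1:] * omega[:-1] * omega[1:])

print("omega_j     =", np.round(omega, 4))
print("eta_tilde_j =", np.round(eta_tilde, 4))
\end{verbatim}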
In a rotating frame defined by the unitary transformation operator $\exp(-i\omega_{L}ta^{\dagger}a)$, Hamiltonian~(\ref{eq1iniH}) becomes \begin{eqnarray} H_{I} &=&\Delta _{c}a^{\dagger }a+\sum_{j=1}^{N}\frac{\omega _{j}}{2}\left(p_{j}^{2}+q_{j}^{2}\right) -\tilde{\lambda}a^{\dagger }aq_{1} \notag \\ &&-\sum_{j=1}^{N-1}2\tilde{\eta}_{j}q_{j}q_{j+1}+\Omega (a^{\dagger}+a),\label{Hamlt2dimless} \end{eqnarray} where $\Delta_{c}=\omega_{c}-\omega_{L}$ is the driving detuning of the cavity field, and $\tilde{\lambda}=\lambda \sqrt{1/(m_{1}\omega _{1})}$ and $\tilde{\eta}_{j\in[1,N-1]}=\eta _{j}\sqrt{1/(m_{j}m_{j+1}\omega _{j}\omega _{j+1})}$ are, respectively, the strengths of the optomechanical coupling and the mechanical interaction expressed with the dimensionless coordinate and momentum operators. \section{Langevin equations and final mean phonon numbers \label{sec3}} In this section, we derive the quantum Langevin equations of the system, analyze the cold-damping feedback scheme, and obtain the final mean phonon numbers of the $N$-mechanical-resonator chain. \subsection{Langevin equations\label{sec3A}} To include the damping and noise effects in this system, we consider the case where the optical mode is coupled to a vacuum bath and the $N$ mechanical modes are subjected to quantum Brownian forces. In this case, the evolution of the system can be described by the quantum Langevin equations \begin{subequations} \label{Langevineqorig} \begin{align} \dot{a}=&-[\kappa +i(\Delta_{c}-\tilde{\lambda} q_{1})]a-i\Omega +\sqrt{2\kappa }a_{\text{in}}, \\ \dot{q}_{j\in[1,N]}=&\omega _{j}p_{j}, \\ \dot{p}_{1}=&-\omega _{1}q_{1}+\tilde{\lambda}a^{\dagger }a+2\tilde{\eta}_{1}q_{2}-\gamma _{1}p_{1}+\xi_{1}, \\ \dot{p}_{j\in[2,N-1]}=&-\omega _{j}q_{j}+2\tilde{\eta}_{j-1}q_{j-1}+2\tilde{\eta}_{j}q_{j+1}-\gamma_{j}p_{j}+\xi _{j}, \\ \dot{p}_{N}=&-\omega _{N}q_{N}+2\tilde{\eta}_{N-1}q_{N-1}-\gamma_{N}p_{N}+\xi_{N}, \end{align} \end{subequations} where $\kappa$ and $\gamma_{j\in[1,N]}$ are, respectively, the decay rates of the cavity mode and the $j$th mechanical resonator. The operators $a_{\textrm{in}}$ $(a^{\dagger}_{\textrm{in}})$ and $\xi_{j\in[1,N]}$ denote the noise operators of the cavity field and the Brownian force acting on the $j$th mechanical resonator, respectively. These noise operators have zero mean values and the following correlation functions, \begin{subequations} \label{correlationfun} \begin{align} \langle a_{\textrm{in}}(t) a_{\textrm{in}}^{\dagger}(t^{\prime})\rangle=&\delta(t-t^{\prime}), \hspace{0.5 cm} \langle a_{\textrm{in}}^{\dagger}(t) a_{\textrm{in}}(t^{\prime})\rangle =0, \\ \langle \xi_{j}(t)\xi_{j}(t^{\prime})\rangle=&\frac{\gamma_{j}}{\omega_{j}}\int \frac{d\omega }{2\pi}e^{-i\omega(t-t^{\prime})}\omega \left[\coth\left(\frac{\omega}{2k_{B}T_{j}}\right) +1\right], \end{align} \end{subequations} where $k_{B}$ is the Boltzmann constant, and $T_{j\in[1,N]}$ is the temperature of the thermal reservoir associated with the $j$th mechanical resonator. For cooling these mechanical resonators, the strong-driving regime of the cavity is considered, so that the average photon number in the cavity is sufficiently large and we can simplify this physical model by a linearization procedure. To this end, we write the operators in Eq.~(\ref{Langevineqorig}) as sums of averages plus fluctuations: $o=\left\langle o\right\rangle_{\textrm{ss}} +\delta o$ for operators $a$, $a^{\dagger}$, $q_{j\in[1,N]}$, and $p_{j\in[1,N]}$. 
By separating the classical motion and quantum fluctuations, the linearized quantum Langevin equations become \begin{subequations} \label{fluceq} \begin{align} \delta \dot{X}=&-\kappa \delta X+\Delta \delta Y+\sqrt{2\kappa }X_{\text{in}}, \\ \delta \dot{Y}=&-\kappa \delta Y-\Delta \delta X+G\delta q_{1}+\sqrt{2\kappa }Y_{\text{in}}, \\ \delta \dot{q}_{j\in[1,N]}=&\omega _{j}\delta p_{j}, \\ \delta \dot{p}_{1}=&-\omega _{1}\delta q_{1}+G\delta X+2\tilde{\eta}_{1}\delta q_{2}-\gamma _{1}\delta p_{1}+\xi _{1}, \\ \delta \dot{p}_{j\in[2,N-1]}=&-\omega _{j}\delta q_{j}+2\tilde{\eta}_{j-1}\delta q_{j-1}+2\tilde{\eta}_{j}\delta q_{j+1}-\gamma _{j}\delta p_{j}+\xi _{j}, \\ \delta \dot{p}_{N}=&-\omega _{N}\delta q_{N}+2\tilde{\eta}_{N-1}\delta q_{N-1}-\gamma _{N}\delta p_{N}+\xi _{N}, \end{align} \end{subequations} where $\delta X=(\delta a^{\dagger}+\delta a)/\sqrt{2}$ and $\delta Y=i(\delta a^{\dagger}-\delta a)/\sqrt{2}$ are the quadratures of the cavity field, and $X_{\text{in}}$ and $Y_{\text{in}}$ denote the corresponding Hermitian input noise quadratures. Note that we have chosen the phase reference of the cavity field such that $\left\langle a\right\rangle_{\textrm{ss}}$ is real and positive. We have also defined the normalized driving detuning $\Delta=\Delta_{c}-\tilde{\lambda}\langle q_{1}\rangle_{\textrm{ss}}$ and the effective optomechanical coupling $G=\sqrt{2}\tilde{\lambda}\langle a\rangle_{\textrm{ss}}$ with $\langle a\rangle_{\textrm{ss}}=-i\Omega/(\kappa +i\Delta)$. \subsection{Cold-damping feedback\label{sec3B}} To realize the cold-damping feedback, we consider the case of $\Delta=0$, which provides the highest sensitivity for position measurements of the mechanical resonator~\cite{Genes2008PRA,Sommer2019PRL}. Owing to the application of a negative derivative feedback, this cold-damping feedback technique can significantly increase the effective decay rate of the mechanical resonator without increasing the thermal noise~\cite{Courty2001EPJD,Vitali20024PRA}. The position of the first mechanical resonator is measured through a phase-sensitive detection of the cavity output field, and then the readout of the cavity output field is fed back onto the first mechanical resonator by applying a feedback force. The intensity of the feedback force is proportional to the time derivative of the output signal, and therefore to the velocity of the first mechanical resonator~\cite{Courty2001EPJD,Vitali20024PRA,Genes2008PRA,Sommer2019PRL,Sommer2020PRR}. Then, the linearized quantum Langevin equations become \begin{subequations} \label{fluceqcd} \begin{align} \delta \dot{X}=&-\kappa \delta X+\sqrt{2\kappa }X_{\text{in}}, \\ \delta \dot{Y}=&-\kappa \delta Y+G\delta q_{1}+\sqrt{2\kappa }Y_{\text{in}}, \\ \delta \dot{q}_{j\in[1,N]}=&\omega_{j}\delta p_{j}, \\ \delta \dot{p}_{1}=&-\omega_{1}\delta q_{1}+G\delta X+2\tilde{\eta}_{1}\delta q_{2}-\gamma_{1}\delta p_{1}+\xi_{1}\notag \\ &-\int_{-\infty }^{t}g(t-s)\delta Y^{\text{est}}(s)ds, \\ \delta \dot{p}_{j\in[2,N-1]}=&-\omega _{j}\delta q_{j}+2\tilde{\eta}_{j-1}\delta q_{j-1}+2\tilde{\eta}_{j}\delta q_{j+1}-\gamma _{j}\delta p_{j}+\xi_{j}, \\ \delta \dot{p}_{N}=&-\omega_{N}\delta q_{N}+2\tilde{\eta}_{N-1}\delta q_{N-1}-\gamma_{N}\delta p_{N}+\xi_{N}. \end{align} \end{subequations} In Eq.~(\ref{fluceqcd}d), the convolution term $\int_{-\infty }^{t}g(t-s)\delta Y^{\text{est}}(s)ds$ denotes the feedback force acting on the first mechanical resonator. 
This force depends on the past dynamics of the detected quadrature $\delta Y$, which is driven by the position fluctuations of the first mechanical resonator. The causal kernel is defined by~\cite{Genes2008PRA,Sommer2019PRL,Sommer2020PRR} \begin{align} &g(t)=g_{\text{cd}}\frac{d}{dt}[\theta(t)\omega_{\text{fb}}e^{-\omega_{\text{fb}}t}], \end{align} where $g_{\text{cd}}$ and $\omega_{\text{fb}}$ are the dimensionless feedback gain and the feedback bandwidth, respectively. The estimated intracavity phase quadrature $\delta Y^{\text{est}}$ results from the measurement of the output quadrature $\delta Y^{\text{out}}(t)$, which satisfies the usual input-output relation $\delta Y^{\text{out}}(t)=\sqrt{2\kappa}\delta Y(t)-Y_{\text{in}}(t)$. This relation is generalized to the case of a nonunit detection efficiency by modeling a detector with quantum efficiency $\zeta$ as an ideal detector preceded by a beam splitter (with transmissivity $\sqrt{\zeta}$), which mixes the incident field with an uncorrelated vacuum field $Y^{\upsilon}(t)$. Then, the estimated phase quadrature $\delta Y^{\text{est}}(t)$ is obtained as~\cite{Genes2008PRA,Sommer2019PRL,Sommer2020PRR} \begin{align} &\delta Y^{\text{est}}(t)=\delta Y(t)-\frac{Y_{\text{in}}(t)+\sqrt{\zeta^{-1}-1}Y^{\upsilon}(t)}{\sqrt{2\kappa}}. \end{align} Below, we seek the steady-state solution of Eq.~(\ref{fluceqcd}) by solving for the variables in the frequency domain with the Fourier transformation. We define the Fourier transform for an operator $r(t)=(1/2\pi)^{1/2}\int_{-\infty }^{\infty }e^{-i\omega t}\tilde{r}(\omega) d\omega$ ($r=\delta X$, $\delta Y$, $\delta q_{j}$, $\delta p_{j}$, $\xi_{j}$, $X_{\text{in}}$, $Y_{\text{in}}$), and the quantum Langevin equations~(\ref{fluceqcd}) with the cold-damping feedback can then be solved in the frequency domain. Based on the steady-state solution, we can calculate the spectra of the position and momentum operators for the $N$ mechanical resonators, and then the final mean phonon numbers in these resonators can be obtained by integrating the corresponding fluctuation spectra. \subsection{Final mean phonon numbers \label{sec3C}} Mathematically, the final mean phonon numbers in the $N$ mechanical resonators can be obtained by the relation~\cite{Genes2008PRA,Lai2018PRA} \begin{equation} n^{f}_{j\in[1,N]}=\frac{1}{2}[\langle\delta q_{j}^{2}\rangle +\langle\delta p_{j}^{2}\rangle-1],\label{finalphonumber} \end{equation} where $\langle\delta q_{j}^{2}\rangle$ and $\langle\delta p_{j}^{2}\rangle$ are, respectively, the variances of the position and momentum operators. These variances can be obtained by solving Eq.~(\ref{fluceqcd}) in the frequency domain and integrating the corresponding fluctuation spectra, \begin{subequations} \label{specintegral} \begin{align} \langle\delta q_{j\in[1,N]}^{2}\rangle=&\frac{1}{2\pi}\int_{-\infty}^{\infty}S_{q_{j}}(\omega)d\omega,\\ \langle\delta p_{j\in[1,N]}^{2}\rangle=&\frac{1}{2\pi\omega^{2}_{j}}\int_{-\infty}^{\infty}\omega^{2}S_{q_{j}}(\omega)d\omega. \end{align} \end{subequations} Here, the fluctuation spectra of the position and momentum operators for the corresponding resonators are defined by \begin{equation} S_{o}(\omega)=\int_{-\infty}^{\infty}e^{-i\omega\tau}\langle \delta o(t+\tau) \delta o(t)\rangle_{\textrm{ss}}d\tau,\hspace{0.5 cm}(o=q_{j},p_{j}), \label{spectrumtimedomain} \end{equation} where $\langle \cdot\rangle_{\textrm{ss}}$ denotes the steady-state average of the system. 
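As a simple consistency check of the integration recipe in Eqs.~(\ref{finalphonumber}) and (\ref{specintegral}), one can apply it to an assumed toy spectrum rather than to the full expressions derived below: for a high-$Q$ thermal resonator whose position spectrum consists of two narrow Lorentzians of total weight $2\pi(\bar{n}+1/2)$, the recipe should return $n^{f}\approx\bar{n}$. A minimal Python sketch (all parameter values are illustrative) is:
\begin{verbatim}
import numpy as np

# Toy check: two narrow Lorentzians carrying total weight 2*pi*(nbar + 1/2)
# should give <dq^2> ~ <dp^2> ~ nbar + 1/2, i.e. n_f ~ nbar.
omega_m, gamma_m, nbar = 1.0, 1e-3, 100.0   # illustrative, units with omega_m = 1

omega = np.linspace(-8.0, 8.0, 2_000_001)
dw = omega[1] - omega[0]

def lorentzian(center):
    # each peak integrates to pi over the real line
    return (gamma_m / 2) / ((omega - center) ** 2 + (gamma_m / 2) ** 2)

S_q = (nbar + 0.5) * (lorentzian(omega_m) + lorentzian(-omega_m))

dq2 = np.sum(S_q) * dw / (2 * np.pi)                          # position variance
dp2 = np.sum(omega**2 * S_q) * dw / (2 * np.pi * omega_m**2)  # momentum variance
n_f = 0.5 * (dq2 + dp2 - 1.0)
print(f"<dq^2> = {dq2:.2f}, <dp^2> = {dp2:.2f}, n_f = {n_f:.2f} (expected ~ {nbar})")
\end{verbatim}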
The fluctuation spectrum can also be expressed in the frequency domain as \begin{equation} \langle\delta\tilde{o}(\omega)\delta\tilde{o}(\omega')\rangle_{\textrm{ss}}=S_{o}(\omega) \delta(\omega+\omega'), \hspace{0.5 cm}(o=q_{j},p_{j}).\label{spectrumfdomain} \end{equation} Below, we will solve this system in the frequency domain. \section{Cooling of a two-mechanical-resonator chain\label{sec4}} In this section, we study the cooling of a two-mechanical-resonator chain by analyzing the effective susceptibilities and noise spectra. We also find the laser-cooling rates of the two mechanical resonators. \subsection{Analytical results of the effective susceptibilities, cooling rates, and noise spectra \label{sec4A}} \begin{figure*} \caption{(a) Cooling mechanism of a two-mechanical-resonator chain. Here $\gamma_{1,\text{C}}$ \dots} \label{weffreff} \end{figure*} In the two-mechanical-resonator case, the position fluctuation spectra of the two mechanical resonators can be obtained as \begin{subequations} \label{spectra12} \begin{eqnarray} S_{q_{1}}(\omega )&=&|\chi _{1,\text{eff}}(\omega )|^{2}\Big[S_{\text{fb},1}(\omega )+S_{\text{rp},1}(\omega )\notag \\ &&+S_{\text{th},1}(\omega )+S_{\text{me},1}(\omega )\Big], \\ S_{q_{2}}(\omega )&=&|\chi _{2,\text{eff}}(\omega )|^{2}\Big[S_{\text{th},2}(\omega )+S_{\text{me},2}(\omega )\Big]. \end{eqnarray} \end{subequations} Here we introduce the \emph{effective susceptibility} of the $j$th ($j=1,2$) mechanical resonator as \begin{equation} \chi_{j,\text{eff}}(\omega )=\omega _{j}[\Omega _{j,\text{eff}}^{2}(\omega )-\omega ^{2}-i\omega \Gamma _{j,\text{eff}}(\omega )]^{-1},\label{susceptibility} \end{equation} where $\Omega _{j,\text{eff}}(\omega )$ and $\Gamma _{j,\text{eff}}(\omega )$ are, respectively, the \emph{effective resonance frequency and damping rate} of the $j$th mechanical resonator, defined as \begin{subequations} \label{effective} \begin{eqnarray} \Omega _{1,\text{eff}}(\omega ) &=&\Bigg[\omega _{1}^{2}+\frac{Gg_{\text{cd} }\omega ^{2}\omega _{\text{fb}}\omega _{1}(\kappa +\omega _{\text{fb}})}{ (\kappa ^{2}+\omega ^{2})(\omega ^{2}+\omega _{\text{fb}}^{2})} \notag \\ &&+\frac{4\tilde{\eta}_{1}^{2}\omega _{1}\omega _{2}(\omega ^{2}-\omega _{2}^{2})}{\gamma _{2}^{2}\omega ^{2}+( \omega ^{2}-\omega _{2}^{2}) ^{2}}\Bigg]^{1/2}, \\ \Omega _{2,\text{eff}}(\omega ) &=&\Bigg[\omega _{2}^{2}+\frac{4\tilde{\eta} _{1}^{2}A(\omega )}{C(\omega )}\Bigg]^{1/2}, \\ \Gamma _{j,\text{eff}}(\omega ) &=&\gamma _{j}+\gamma_{j,\text{C}}(\omega). 
\end{eqnarray} \end{subequations} In Eq.~(\ref{effective}c), the \emph{cooling rates} of the first and second mechanical resonators are defined as \begin{subequations} \label{coolingrate0} \begin{eqnarray} \gamma_{1,\text{C}}(\omega)&=&\frac{Gg_{\text{cd}}\omega _{ \text{fb}}\omega _{1}(\kappa \omega _{\text{fb}}-\omega ^{2})}{(\kappa ^{2}+\omega ^{2})(\omega ^{2}+\omega _{\text{fb}}^{2})}\notag \\ &&+\frac{4\tilde{\eta}_{1}^{2}\omega _{1}\omega _{2}\gamma _{2}}{\gamma _{2}^{2}\omega ^{2}+( \omega ^{2}-\omega _{2}^{2}) ^{2}}, \\ \gamma_{2,\text{C}}(\omega)&=&\frac{4\tilde{\eta} _{1}^{2}B(\omega )}{C(\omega )}, \end{eqnarray} \end{subequations} with \begin{subequations} \label{coeff0} \begin{align} A(\omega ) =&\omega _{1}\omega _{2}[\omega ^{6}-Gg_{\text{cd}}\kappa \omega ^{2}\omega _{1}\omega _{\text{fb}}-\omega ^{2}\omega _{1}(Gg_{\text{cd}}+\omega _{1})\omega _{\text{fb}}^{2}\notag \\ &+\kappa ^{2}(\omega ^{2}-\omega _{1}^{2})(\omega ^{2}+\omega _{\text{fb}}^{2})+\omega ^{4}(\omega _{\text{fb}}^{2}-\omega _{1}^{2})], \\ B(\omega ) =&\omega _{1}\omega _{2}\{Gg_{\text{cd}}\kappa \omega _{1}\omega_{\text{fb}}^{2}+\kappa ^{2}\gamma _{1}(\omega ^{2}+\omega _{\text{fb}}^{2})\notag \\ &+\omega ^{2}[-Gg_{\text{cd}}\omega _{1}\omega _{\text{fb}}+\gamma _{1}(\omega ^{2}+\omega _{\text{fb}}^{2})]\}, \\ C(\omega ) =&\{\omega ^{2}(-\kappa \gamma _{1}+\omega ^{2}-\omega_{1}^{2})-[(\kappa +\gamma _{1})\omega ^{2}-\kappa \omega _{1}^{2}]\omega _{\text{fb}}\}^{2}\notag \\ &+\{\omega \lbrack \gamma _{1}\omega ^{2}+\big( \omega^{2}-\omega _{1}(Gg_{\text{cd}}+\omega _{1})\big) \omega _{\text{fb}}\notag \\ &+\kappa (\omega ^{2}-\omega _{1}^{2}-\gamma _{1}\omega _{\text{fb}})]\}^{2}. \end{align} \end{subequations} In Eq.~(\ref{spectra12}), we introduce the feedback-induced noise spectrum $S_{\text{fb},1}(\omega )$ and the radiation-pressure noise spectrum $S_{\text{rp},1}(\omega )$ for the first mechanical resonator, and the mechanical-coupling-induced noise spectrum $S_{\text{me},j}(\omega )$ and the thermal noise spectrum $S_{\text{th},j}(\omega )$ for the $j$th ($j=1,2$) mechanical resonator, \begin{eqnarray} S_{\text{fb},1}(\omega ) &=&\frac{g_{\text{cd}}^{2}\omega _{\text{fb} }^{2}\omega ^{2}}{4\kappa \zeta (\omega ^{2}+\omega _{\text{fb}}^{2})}, \label{Spectra0}\\ S_{\text{rp},1}(\omega ) &=&\frac{G^{2}\kappa }{\kappa ^{2}+\omega ^{2}}, \\ S_{\text{th},j}(\omega ) &=&\frac{\gamma _{j}\omega }{\omega _{j}}\coth \left( \frac{\hbar \omega }{2k_{B}T_{j}}\right), \\ S_{\text{me},1}(\omega ) &=&\frac{4\tilde{\eta}_{1}^{2}\omega _{2}^{2}}{ \gamma _{2}^{2}\omega ^{2}+\left( \omega ^{2}-\omega _{2}^{2}\right) ^{2}} \frac{\gamma _{2}\omega }{\omega _{2}}\coth \left( \frac{\hbar \omega }{ 2k_{B}T_{2}}\right), \\ S_{\text{me},2}(\omega ) &=&\frac{\tilde{\eta}_{1}^{2}E(\omega )}{\left\vert D(\omega )\right\vert ^{2}},\label{Spectra1} \end{eqnarray} where we introduce \begin{subequations} \label{coeff} \begin{align} D(\omega ) =&(\kappa -i\omega )(-i\gamma _{1}\omega -\omega ^{2}+\omega_{1}^{2})\omega +[(\kappa -i\omega )(\gamma _{1}-i\omega )\omega\notag \\ &+Gg_{\text{cd}}\omega \omega _{1}+(i\kappa +\omega )\omega _{1}^{2}]\omega _{\text{fb}},\\ E(\omega ) =&4(\omega ^{2}+\omega _{\text{fb}}^{2})\left[G^{2}\kappa +\frac{\gamma _{1}\omega }{\omega _{1}}\coth \left( \frac{\hbar \omega }{2k_{B}T_{1}}\right) (\kappa ^{2}+\omega ^{2})\right]\omega _{1}^{2}\notag \\ &+\frac{g_{\text{cd}}^{2}\omega ^{2}\omega _{1}^{2}\omega _{\text{fb}}^{2}}{\kappa \zeta }(\kappa ^{2}+\omega ^{2}). 
\end{align} \end{subequations} We note that the \emph{exact analytical results of the final mean phonon numbers} are obtained based on Eqs.~(\ref{finalphonumber}), (\ref{specintegral}), and (\ref{spectra12}), and these results are presented in the Appendix. \subsection{Analyses of the effective susceptibilities, laser-cooling rates, and noise spectra\label{sec4AB}} In the above subsection, we have derived the effective mechanical resonance frequency $\Omega _{j,\text{eff}}$ and damping rate $\Gamma_{j,\text{eff}}$ of the $j$th mechanical resonator [see Eq.~(\ref{effective})]. We have also found the analytical expressions of the final thermal excitations in these mechanical resonators [see Eq.~(\ref{exactcoolresult})]. Now, we study how the feedback loop affects the cooling performance by analyzing the dependence of the mechanical resonance frequency $\Omega _{j,\text{eff}}$ and decay rate $\Gamma_{j,\text{eff}}$ on the parameters of the feedback loop. Concretely, Figure~\ref{weffreff} plots the effective mechanical resonance frequencies $\Omega _{j,\text{eff}}$ and decay rates $\Gamma_{j,\text{eff}}$ as functions of the frequency $\omega$, optomechanical coupling $G$, feedback gain $g_{\text{cd}}$, and nearest-neighbor interaction $\tilde{\eta}_{1}$. We can see from Figs.~\ref{weffreff}(b) and \ref{weffreff}(c) that at resonance $\omega=0$, the mechanical frequencies change slightly [$\Omega _{j,\text{eff}}(0)\approx0.98\omega_{m}$], while the effective mechanical dampings are significantly increased [$\Gamma_{1,\text{eff}}(0)\approx3.5\times10^{4}\gamma_{m}$, $\Gamma_{2,\text{eff}}(0)\approx1.5\times10^{3}\gamma_{m}$]. This giant enhancement of the mechanical damping plays an important role in the cooling process for the two mechanical resonators. We see from Figs.~\ref{weffreff}(e,g) and Eqs.~(\ref{effective},\ref{coolingrate0}) that, when we turn off the optomechanical coupling ($G=0$) or the feedback ($g_{\text{cd}}=0$), these mechanical resonators are uncooled ($\Gamma_{j,\text{eff}}/\gamma_{j}\approx1$, i.e., $\gamma_{j,\text{C}}\ll\gamma_{j}$); that is, breaking the feedback loop ($G=0$ or $g_{\text{cd}}=0$) leads to no actual cooling of the mechanical-resonator chain. This is because the feedback loop applied on the first mechanical resonator acts as a cooling impetus of this mechanical-resonator chain. Moreover, by increasing the optomechanical coupling $G$ or the feedback gain $g_{\text{cd}}$, the effective mechanical decay rates $\Gamma_{j,\text{eff}}$ are greatly increased [see Figs.~\ref{weffreff}(e) and \ref{weffreff}(g)] while the effective mechanical frequencies $\Omega _{j,\text{eff}}$ are nearly unchanged [see Figs.~\ref{weffreff}(d) and \ref{weffreff}(f)]. For example, the effective mechanical damping of the first mechanical resonator is increased from $\Gamma_{1,\text{eff}}/\gamma_{1}=1$ to values larger than $10^{4}$, and that of the second one is increased from $\Gamma_{2,\text{eff}}/\gamma_{2}=1$ to values larger than $10^{3}$. Physically, increasing the optomechanical coupling $G$ or the feedback gain $g_{\text{cd}}$ enhances the feedback loop, and then substantially enhances the cooling efficiencies of these mechanical resonators. 
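This behavior can be reproduced directly from the first cooling rate in Eq.~(\ref{coolingrate0}). The following Python sketch evaluates $\Gamma_{1,\text{eff}}(\omega)=\gamma_{1}+\gamma_{1,\text{C}}(\omega)$ with the feedback loop closed and broken; the parameter values are illustrative placeholders and are not those used in the figures.
\begin{verbatim}
import numpy as np

# Illustrative placeholder parameters in units of omega_1 (not the figure values).
omega1 = omega2 = 1.0
gamma1 = gamma2 = 1e-5
kappa, omega_fb, eta1 = 3.0, 3.0, 0.03

def gamma_1C(w, G, g_cd):
    """gamma_{1,C}(omega): feedback part plus mechanical part, as quoted above."""
    fb = G * g_cd * omega_fb * omega1 * (kappa * omega_fb - w**2) / \
         ((kappa**2 + w**2) * (w**2 + omega_fb**2))
    me = 4.0 * eta1**2 * omega1 * omega2 * gamma2 / \
         (gamma2**2 * w**2 + (w**2 - omega2**2)**2)
    return fb + me

for w in (0.0, 0.5, 1.5):
    on = 1.0 + gamma_1C(w, G=0.3, g_cd=1.0) / gamma1   # feedback loop closed
    off = 1.0 + gamma_1C(w, G=0.0, g_cd=1.0) / gamma1  # loop broken (G = 0)
    print(f"omega = {w:.1f}: Gamma_1,eff/gamma_1 = {on:.3e} (on), {off:.3e} (off)")
\end{verbatim}
With the loop broken ($G=0$), $\Gamma_{1,\text{eff}}/\gamma_{1}$ stays close to unity away from the mechanical resonance, consistent with the absence of cooling discussed above.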
In the absence of the nearest-neighbor coupling ($\tilde{\eta}_{1}=0$) between the two mechanical resonators, the first mechanical resonator is substantially modulated ($\Gamma_{1,\text{eff}}\approx3.5\times10^{4}\gamma_{1}$) by the feedback loop, while the second one becomes a dissipative harmonic resonator ($\Omega _{2,\text{eff}}=\omega_{2}$, $\Gamma_{2,\text{eff}}=\gamma_{2}$), as shown in Figs.~\ref{weffreff}(h) and~\ref{weffreff}(i). This means that the cooling is feasible for the first mechanical resonator but not for the second one due to a vanishing cooling rate, i.e., $\gamma_{2,\text{C}}=0$ [see Eq.~(\ref{coolingrate0}b)]. With increasing nearest-neighbor coupling $\tilde{\eta}_{1}$, the effective mechanical frequencies $\Omega _{j,\text{eff}}$ decrease, and the effective mechanical damping of the second mechanical resonator significantly increases from $\Gamma_{2,\text{eff}}/\gamma_{2}=1$ to $3.5\times10^{4}$ [see Figs.~\ref{weffreff}(h) and~\ref{weffreff}(i)]. \begin{figure} \caption{The noise spectra of (a) the first and (b) the second mechanical resonators are plotted as functions of the frequency $\omega$. Other parameters are the same as those used in Fig.~\ref{weffreff}.} \label{spectra} \end{figure} To analyze the cooling rates of the two mechanical resonators, we consider the case $\omega=0$ and reexpress Eq.~(\ref{effective}) as \begin{subequations} \label{effe} \begin{eqnarray} \Omega_{1,\text{eff}} &=&\sqrt{\omega _{1}^{2}-\frac{4\tilde{\eta}_{1}^{2}\omega _{1}}{\omega _{2}}}, \\ \Omega_{2,\text{eff}} &=&\sqrt{\omega _{2}^{2}-\frac{4\tilde{\eta} _{1}^{2}\omega _{2}}{\omega _{1}}}, \\ \Gamma_{j,\text{eff}} &=&\gamma _{j}+\gamma _{j,\text{C}}, \end{eqnarray} \end{subequations} where $\gamma_{j,\text{C}}$ denotes the cooling rate of the $j$th mechanical resonator, defined as \begin{subequations} \label{coolingrate} \begin{eqnarray} \gamma_{1,\text{C}} &=&\frac{Gg_{\text{cd}}\omega _{1}}{\kappa }+\frac{4\tilde{\eta}_{1}^{2}\omega _{1}\gamma _{2}}{\omega _{2}^{3}}, \\ \gamma_{2,\text{C}} &=&\frac{4\tilde{\eta}_{1}^{2}\omega _{2}(Gg_{\text{cd}}\omega _{1}+\kappa \gamma _{1})}{\omega _{1}^{3}\kappa }. \end{eqnarray} \end{subequations} We can see from Eqs.~(\ref{effe}a) and (\ref{effe}b) that the effective mechanical frequencies $\Omega_{1,\text{eff}}$ and $\Omega_{2,\text{eff}}$ are modulated only by the nearest-neighbor coupling $\tilde{\eta}_{1}$ between the adjacent resonators. This feature explains why the effective mechanical frequencies $\Omega_{j,\text{eff}}$ are \emph{independent of the feedback loop} ($G$ and $g_{\text{cd}}$) but are \emph{sensitive to the nearest-neighbor coupling} $\tilde{\eta}_{1}$ [see Figs.~\ref{weffreff}(d), \ref{weffreff}(f), and \ref{weffreff}(h)]. The parameters $\gamma_{1,\text{C}}$ and $\gamma_{2,\text{C}}$ defined in Eqs.~(\ref{coolingrate}a) and (\ref{coolingrate}b) are, respectively, the feedback-loop and mechanical cooling rates. Here, the feedback-loop cooling rate $\gamma_{1,\text{C}}$ is mainly governed by the radiation pressure $G$ and feedback $g_{\text{cd}}$, while the mechanical cooling rate $\gamma_{2,\text{C}}$ is determined by the mechanical coupling $\tilde{\eta}_{1}$. 
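For concreteness, the resonant expressions in Eqs.~(\ref{effe}) and (\ref{coolingrate}) can be evaluated with a short Python sketch; the parameter values below are illustrative placeholders (in units of $\omega_{1}$, not the values used in the figures) and serve only to display the rate hierarchy invoked in the next paragraph.
\begin{verbatim}
import numpy as np

# Illustrative placeholder parameters, in units where omega_1 = 1.
omega1 = omega2 = 1.0
gamma1 = gamma2 = 1e-5
kappa, G, g_cd, eta1 = 3.0, 0.3, 1.0, 0.03

# Resonant (omega = 0) expressions quoted above.
Omega1_eff = np.sqrt(omega1**2 - 4 * eta1**2 * omega1 / omega2)
Omega2_eff = np.sqrt(omega2**2 - 4 * eta1**2 * omega2 / omega1)
gamma1_C = G * g_cd * omega1 / kappa + 4 * eta1**2 * omega1 * gamma2 / omega2**3
gamma2_C = 4 * eta1**2 * omega2 * (G * g_cd * omega1 + kappa * gamma1) / (omega1**3 * kappa)

print(f"Omega_1,eff = {Omega1_eff:.4f},  Omega_2,eff = {Omega2_eff:.4f}")
print(f"gamma_1,C / gamma_1 = {gamma1_C / gamma1:.2e}")
print(f"gamma_2,C / gamma_2 = {gamma2_C / gamma2:.2e}")
# With these numbers gamma_1,C >> gamma_2,C >> gamma_j, i.e. the hierarchy
# required for the domino (cascade) cooling discussed in the text.
\end{verbatim}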
When we turn off the feedback loop (i.e., $G=0$ or $g_{\text{cd}}=0$), the cooling rates of the two mechanical resonators shown in Eq.~(\ref{coolingrate}) become \begin{subequations} \label{coolingrate2} \begin{eqnarray} \gamma_{1,\text{C}} &=&\frac{4\tilde{\eta}_{1}^{2}\omega_{1}\gamma _{2}}{\omega _{2}^{3}}, \\ \gamma_{2,\text{C}} &=&\frac{4\tilde{\eta}_{1}^{2}\omega_{2}\gamma _{1}}{\omega _{1}^{3}}. \end{eqnarray} \end{subequations} We can see from Eqs.~(\ref{coolingrate2}a) and (\ref{coolingrate2}b) that, when the feedback loop is broken ($G=0$ or $g_{\text{cd}}=0$), the cooling rates of the two mechanical resonators are strongly suppressed ($\gamma_{j,\text{C}}\ll \gamma_{j}$), as shown in Figs.~\ref{weffreff}(e) and~\ref{weffreff}(g). Physically, to realize the ground-state cooling of this mechanical-resonator chain, the cooling rate $\gamma_{j,\text{C}}$ should be much larger than the thermal-reservoir coupling rate $\gamma_{j}$ (i.e., $\gamma_{j,\text{C}}\gg\gamma_{j}$), and the cooling rate of the first resonator should be much larger than that of the second one ($\gamma_{1,\text{C}}\gg\gamma_{2,\text{C}}$). These results coincide with those shown in Figs.~\ref{weffreff}(c), \ref{weffreff}(e), \ref{weffreff}(g), and \ref{weffreff}(i). Thus, the thermal excitations stored in the second mechanical resonator can be extracted into the first one by the cascade cooling channel $\gamma_{2,\text{C}}$. In fact, the cooling of the two mechanical resonators can be explained in terms of their noise spectra [see Eqs.~(\ref{Spectra0})--(\ref{Spectra1})]. In Fig.~\ref{spectra}, we plot the noise spectra of the two mechanical resonators as functions of the frequency $\omega$. For the first mechanical resonator, we find that at $\omega=0$, the contribution from the feedback noise $S_{\text{fb},1}(\omega )$ is much smaller than those from the thermal noise $S_{\text{th},1}(\omega )$, the radiation-pressure noise $S_{\text{rp},1}(\omega )$, and the mechanical-coupling noise $S_{\text{me},1}(\omega )$, as shown in Fig.~\ref{spectra}(a). For the second mechanical resonator, the mechanical-coupling-noise contribution is much less than that of the thermal noise when $\omega=0$, i.e., $S_{\text{me},2}(0)\ll S_{\text{th},2}(0)$ [see Fig.~\ref{spectra}(b)]. Therefore, the efficient cooling of the two-mechanical-resonator chain can be achieved because the thermal noise stored in these resonators is significantly suppressed by the cold-damping feedback. \subsection{Ground-state cooling\label{sec4B}} To investigate the cooling behavior of this cascade optomechanical system, we first consider the cooling of a two-mechanical-resonator system. Physically, the first mechanical resonator undergoing cold damping can be directly cooled to its quantum ground state by the feedback-loop cooling channel ($\gamma_{1,\text{C}}$), and the second one can also experience a cooling process via the cascade-cooling channel ($\gamma_{2,\text{C}}$) between the adjacent mechanical resonators [see Fig.~\ref{weffreff}(a)]. Below, we show in detail the dependence of the cooling performance of the two mechanical resonators on the system parameters. \begin{figure} \caption{The final average phonon numbers (a) $n_{1}$ \dots} \label{Pk} \end{figure} In Fig.~\ref{Pk}, we plot the final mean phonon numbers $n_{1}^{f}$ and $n_{2}^{f}$ as functions of the laser power $P$ and the cavity-field decay rate $\kappa$. 
It is shown that the two mechanical resonators can be cooled efficiently ($n^{f}_{1},n^{f}_{2}<1$) in the unresolved-sideband regime $\kappa/\omega_{m}>1$. This indicates that the simultaneous ground-state cooling of the two mechanical resonators is achievable via the cold-damping feedback. In addition, the optimal cooling performances of the two mechanical resonators are $n^{f}_{1}\approx0.5$ and $n^{f}_{2}\approx0.55$ when $P=100$ mW and $\kappa/\omega_{1}=3.5$. To further elucidate this aspect, in Fig.~\ref{Pk}(c) we show the dependence of the cooling efficiencies of the two mechanical resonators on the laser power $P$. We find that for $P<100$ mW the cooling becomes less efficient as the laser power $P$ decreases. These results indicate that the feedback loop plays the role of a cooling impetus, and that these mechanical resonators cannot be cooled because the feedback loop is broken when $P\rightarrow0$ [see Fig.~\ref{Figmodel}(b)]. In particular, Fig.~\ref{Pk}(c) shows one switch point (SP), i.e., the symmetric-cooling point $n_{1}^{f}=n_{2}^{f}$. This means that a flexible asymmetric-to-symmetric (or inverse) cooling switch can be achieved by appropriately engineering the laser power $P$. Furthermore, we can see from Fig.~\ref{Pk}(d) that the optimal cooling of the two resonators is achieved around $\kappa/\omega_{1}=3$. This differs from the sideband-cooling method, in which the optimal cooling is reached in the resolved-sideband regime~\cite{Wilson-Rae2007PRL,Marquardt2007PRL,Teufel2011Nature}. \begin{figure} \caption{The final average phonon numbers (a) $n_{1}$ \dots} \label{wfbgcd} \end{figure} In Fig.~\ref{wfbgcd}, we investigate the dependence of the cooling efficiencies of the two mechanical resonators on the feedback gain $g_{\text{cd}}$ and the feedback bandwidth $\omega_{\text{fb}}$. We find that the optimal cooling can be achieved for the parameters $g_{\text{cd}}>0.5$ and $\omega_{\text{fb}}/\omega_{m}>2$. However, when $g_{\text{cd}}\rightarrow0$, the two mechanical resonators are uncooled due to the breaking of the feedback loop, as shown in Fig.~\ref{wfbgcd}(c). In the absence of the feedback loop (i.e., $G=0$ or $g_{\text{cd}}=0$), we can see from Eqs.~(\ref{coolingrate}a) and (\ref{coolingrate}b) that the cooling rates of these resonators are strongly suppressed ($\gamma_{j,\text{C}}\ll \gamma_{j}$). When the feedback bandwidth $\omega_{\text{fb}}\rightarrow0$, the two mechanical resonators cannot be cooled, as shown in Fig.~\ref{wfbgcd}(d). This is because a lower feedback bandwidth implies a longer time delay of the feedback loop, which leads to a lower cooling efficiency in this system. In addition, there is one SP in each of Figs.~\ref{wfbgcd}(c) and \ref{wfbgcd}(d). These results indicate that by appropriately engineering the laser power $P_{L}$ or the feedback parameters $\omega_{\text{fb}}$ and $g_{\text{cd}}$, a flexible switch between symmetric and asymmetric ground-state cooling of these mechanical resonators can be realized. The feedback loop provides a direct cooling channel ($\gamma_{1,\text{C}}$) to extract the thermal excitations in the first mechanical resonator, and then the second resonator can be cooled by the mechanical cooling channel ($\gamma_{2,\text{C}}$) between the two mechanical resonators [see Fig.~\ref{weffreff}(a)]. Consequently, the optimal cooling of the first mechanical resonator plays a key role in that of the second one. 
This is because the cooling efficiency of the second mechanical resonator depends on the rotating-wave coupling between the two mechanical resonators. This coupling is determined by both the resonance frequencies of the two mechanical resonators and the coupling strength between them. \begin{figure} \caption{The final average phonon numbers (a) $n_{1}^{f}$ and (b) $n_{2}^{f}$ as functions of the mechanical coupling strength $\tilde{\eta}_{1}$ and the frequency ratio $\omega_{2}/\omega_{1}$.} \label{etaW2} \end{figure} To further elucidate this effect, the final mean phonon numbers $n_{1}^{f}$ and $n_{2}^{f}$ are plotted as functions of the mechanical coupling strength $\tilde{\eta}_{1}$ and the frequency ratio $\omega_{2}/\omega_{1}$, as shown in Figs.~\ref{etaW2}(a) and~\ref{etaW2}(b). We find that the two mechanical resonators can be simultaneously cooled to their quantum ground states within a large mechanical frequency bandwidth, and that the optimal cooling is located at $\omega_{2}/\omega_{1}\approx1$. The mechanical coupling between the two resonators provides a mechanical cooling channel ($\gamma_{2,\text{C}}$) for the second resonator. This point is confirmed by the absence of cooling of the second mechanical resonator when $\tilde{\eta}_{1}=0$ [see Fig.~\ref{etaW2}(c)]. In the weak-coupling region $\tilde{\eta}_{1}/\omega_{m}<0.06$, the cooling performance of the first mechanical resonator becomes worse while that of the second one becomes better with increasing $\tilde{\eta}_{1}$, i.e., $n_{1}^{f}<n_{2}^{f}$. The reason for this phenomenon is that the cooling channel of the second resonator is directly provided by the first resonator, which is cooled by the feedback loop, while the second resonator degrades the cooling efficiency of the first one. In the region $0.06<\tilde{\eta}_{1}/\omega_{m}<0.45$, the cooling performance of the two resonators is reversed (i.e., $n_{1}^{f}>n_{2}^{f}$) in comparison with that in the region $\tilde{\eta}_{1}/\omega_{m}<0.06$. Physically, with the increase of the mechanical coupling strength, the CRW interaction terms, which simultaneously create phonon excitations in the two resonators, become increasingly important, and the cooling of the first resonator is then strongly suppressed. Moreover, the symmetric cooling ($n_{1}^{f}=n_{2}^{f}$) of the two mechanical resonators can be achieved when the mechanical coupling strength is $\tilde{\eta}_{1}/\omega_{m}=0.06$. These results mean that, when the nearest-neighbor coupling strength $\tilde{\eta}_{1}$ takes a moderate value ($\tilde{\eta}_{1}/\omega_{m}<0.06$), the cooling efficiency is higher for the mechanical oscillator which is closer to the cavity. Additionally, we can see from Fig.~\ref{etaW2}(d) that the optimal cooling efficiency of the two mechanical resonators emerges when the two resonators are resonant or nearly resonant ($\omega_{2}\approx\omega_{1}$). Physically, the efficiency of energy extraction from the second resonator decreases with increasing frequency detuning between the two resonators, and the CRW interaction becomes important when this detuning becomes comparable to the mechanical frequencies. When $\omega_{2}/\omega_{1}>2$, the cooling channel of the second resonator is almost turned off (i.e., $\gamma_{2,\text{C}}\approx0$), owing to the approximately negligible mechanical interaction under the condition $\tilde{\eta}_{1}/\vert\omega_{1}-\omega_{2}\vert\ll1$. In this case, the second mechanical resonator will be thermalized by its thermal bath, and the system then reduces to a typical optomechanical system consisting of an optical cavity and a single mechanical resonator. 
These results provide the possibility to reach simultaneous ground-state cooling of both degenerate and nondegenerate mechanical resonators in the unresolved-sideband regime. \begin{figure} \caption{The final average phonon numbers $n^{f}_{j}$ in the mechanical resonators as functions of the mechanical coupling strength $\tilde{\eta}$ for (a) $N=3$ and (b) $N=4$.} \label{n34} \end{figure} \section{Cooling of a coupled $N$-resonator chain\label{sec5}} We now extend our cold-damping-feedback cooling scheme to the case of an $N$-mechanical-resonator chain. We consider an optical cavity coupled to an array of $N$ mechanical resonators coupled in series, as shown in Figs.~\ref{Figmodel}(a) and~\ref{Figmodel}(b). The feedback loop is applied to the first mechanical resonator, and the nearest-neighboring mechanical resonators are coupled to each other through the mechanical interactions. The first mechanical resonator can be cooled by the feedback loop, and then the thermal excitations in each subsequent mechanical resonator are extracted by the preceding one via the mechanical cooling channel. As a result, the physical mechanism behind this cooling scheme can be understood as a cascade-cooling process, akin to a domino effect or chain reaction. Without loss of generality, we consider the identical-resonator case where all the mechanical resonators have the same resonant frequencies $\omega_{j}=\omega_{m}$, decay rates $\gamma_{j}=\gamma_{m}$, thermal phonon numbers $\bar{n}_{j}=\bar{n}$, and mechanical coupling strengths $\tilde{\eta}_{j}=\tilde{\eta}$. Here, we consider the cases of three and four mechanical resonators (i.e., $N=3,4$) in our simulations. In Fig.~\ref{n34}, we plot the final mean phonon numbers in these mechanical resonators as a function of the mechanical coupling $\tilde{\eta}$ for the cases of (a) $N = 3$ and (b) $N = 4$. We can see that the final mean phonon numbers in these mechanical resonators can be effectively decreased from $10^{3}$ to below $1$. This indicates that the simultaneous cooling of these mechanical resonators can be achieved by using the cold-damping feedback scheme. Figure~\ref{n34} shows that, when $\tilde{\eta}\ll\omega_{m}$, the final average phonon numbers successively increase from $n^{f}_{1}$ to $n^{f}_{N}$ (see the shaded areas), i.e., the closer the resonator is to the optomechanical cavity, the smaller the final average phonon number in this resonator. Physically, the thermal excitations in the first resonator are extracted via the feedback cooling channel, and, successively, the thermal phonons stored in each subsequent resonator are extracted by the preceding one via the mechanical cooling channel. In this case, the feedback cooling rate should be much larger than the mechanical cooling rates, and thus the cooling efficiency is higher for the mechanical oscillator which is closer to the cavity. In addition, with the increase of $\tilde{\eta}$, we find anomalous cooling, in which the feedback-cooled resonator is not the coldest (see the blank areas). This phenomenon can also be explained by the increase of excitations caused by the CRW terms. \section{Discussion and Conclusion\label{sec6}} Finally, we discuss the cooling of our system in the mechanical normal-mode representation. In a two-resonator optomechanical system, a cavity-field mode couples to the first mechanical resonator via the radiation-pressure coupling, and the two mechanical resonators are coupled to each other through the mechanical interaction. 
After diagonalizing the coupled mechanical resonators, the model can be described by a multi-mode system in which the cavity-field mode couples to two mechanical normal modes. However, we should point out that the frequency difference between the two normal modes depends on the coupling strength between the two resonators. Depending on the relation between the frequency difference and the width of the cooling window, there are two different cases~\cite{Lai2020PRARC,Sommer2019PRL}. (i) When the frequency difference between the two mechanical normal modes is larger than the effective mechanical linewidth, the simultaneous cooling of these mechanical normal modes is accessible because there is no dark mode in this system~\cite{Lai2020PRARC,Sommer2019PRL}. (ii) When the frequency difference is smaller than the effective mechanical linewidth, the cooling of the two mechanical normal modes is suppressed, because the dark-mode effect works in the near-degenerate-resonator case. The cooling of these normal modes is less efficient and depends on the number of normal modes. In this case, the final average phonon numbers in these mechanical normal modes are $\bar{n}(N-1)/N$ with $\bar{n}_{j}=\bar{n}$~\cite{Lai2020PRARC,Sommer2019PRL}. In conclusion, we have studied how to realize the simultaneous ground-state cooling of a mechanical-resonator chain coupled to an optomechanical cavity via a standard cold-damping feedback technique. We have found that the entire chain is cooled via a domino effect or chain reaction through the system. We have obtained analytical results for the effective susceptibilities, noise spectra, final mean phonon numbers, and cooling rates of these mechanical resonators. We have also found the optimal-cooling condition for these resonators. In addition, we have found that, by appropriately engineering the laser power or the feedback applied to the first mechanical resonator, a flexible switch between symmetric and asymmetric ground-state cooling can be achieved. This could potentially be used to prepare symmetric quantum states in coupled mechanical systems. Our cascade-cooling proposal works for both degenerate and nondegenerate mechanical resonators in the unresolved-sideband regime. This work will pave the way for studying and observing quantum coherence effects involving multiple mechanical modes. \begin{acknowledgments} D.-G.L. thanks Dr. Ken Funo and Li Yuan for their useful comments on the manuscript. J.-Q.L. is supported in part by National Natural Science Foundation of China (Grants No. 11822501, No. 11774087, and No. 11935006), Hunan Science and Technology Plan Project (Grant No. 2017XK2018), and the Science and Technology Innovation Program of Hunan Province (Grant No. 2020RC4047). B.-P.H. is supported in part by National Natural Science Foundation of China (Grant No.~11974009). F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), the Moonshot R\&D Grant Number JPMJMS2061, and the Centers of Research Excellence in Science and Technology (CREST) Grant No. JPMJCR1676], the Japan Society for the Promotion of Science (JSPS) [via the Grants-in-Aid for Scientific Research (KAKENHI) Grant No. JP20H00134 and the JSPS-RFBR Grant No. JPJSBP120194828], the Army Research Office (ARO) (Grant No. W911NF-18-1-0358), the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. 
FA2386-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06. \end{acknowledgments} \appendix* \section{Calculation of the final mean phonon numbers\label{appendixa}} In this appendix, we present the exact analytical results of the final mean phonon numbers in the two-mechanical-resonator case. As shown in Sec.~\ref{sec3C}, by calculating the integral in Eq.~(\ref{specintegral}) for the position and momentum fluctuation spectra, the exact final phonon numbers in the two mechanical resonators can be obtained as~\cite{Genes2008PRA,Sommer2019PRL} \begin{equation} \label{exactcoolresult} n_{l=1,2}^{f}=\frac{1}{2}\left( \frac{iD_{6}^{(l)}}{2\Delta _{6}}+\frac{ iM_{6}^{(l)}}{2\Delta _{6}}-1\right) . \end{equation} Here, we introduce the variables \begin{eqnarray} \Delta _{6} &=&a_{5} \{a_{4}(-a_{1}a_{2}a_{3}+a_{3}^{2}+a_{1}^{2}a_{4})+[-a_{2}a_{3}+a_{1}(a_{2}^{2}-2a_{4})]a_{5} \notag \\ &&+a_{5}^{2}\}-[a_{3}^{3}-a_{1}a_{3}(a_{2}a_{3}+3a_{5}) \notag \\ &&+a_{1}^{2}(a_{3}a_{4}+2a_{2}a_{5})]a_{6}+a_{1}^{3}a_{6}^{2}, \end{eqnarray} \begin{eqnarray} D_{6}^{(l=1,2)} &=&[-a_{3}a_{4}a_{5}+a_{3}^{2}a_{6}+a_{5}(a_{2}a_{5}-a_{1}a_{6})]b_{1}^{(l)}+(a_{1}a_{4}a_{5} \notag \\ &&-a_{5}^{2}-a_{1}a_{3}a_{6})b_{2}^{(l)}+(-a_{1}a_{2}a_{5}+a_{3}a_{5}+a_{1}^{2}a_{6})b_{3}^{(l)} \notag \\ &&+[-a_{3}^{2}-a_{1}^{2}a_{4}+a_{1}(a_{2}a_{3}+a_{5})]b_{4}^{(l)} \notag \\ &&+\frac{1}{a_{6}} [a_{3}^{2}a_{4}-a_{2}a_{3}a_{5}+a_{5}^{2}+a_{1}^{2}(a_{4}^{2}-a_{2}a_{6}) \notag \\ &&+a_{1}(-a_{2}a_{3}a_{4}+a_{2}^{2}a_{5}-2a_{4}a_{5}+a_{3}a_{6})]b_{5}^{(l)}, \end{eqnarray} and \begin{eqnarray} M_{6}^{(l=1,2)} &=&\frac{1}{\omega _{l}^{2}}\{-[a_{5}\left( -a_{2}a_{3}a_{4}+a_{2}^{2}a_{5}+a_{4}(a_{1}a_{4}-a_{0}a_{5})\right) \notag \\ &&+\left( -a_{1}a_{3}a_{4}+a_{0}a_{3}a_{5}+a_{2}(a_{3}^{2}-2a_{1}a_{5})\right) a_{6}+a_{1}^{2}a_{6}^{2}]b_{1}^{(l)} \notag \\ &&+[-a_{3}a_{4}a_{5}+a_{3}^{2}a_{6}+a_{5}(a_{2}a_{5}-a_{1}a_{6})]b_{2}^{(l)} \notag \\ &&+(a_{1}a_{4}a_{5}-a_{5}^{2}-a_{1}a_{3}a_{6})b_{3}^{(l)}+(-a_{1}a_{2}a_{5}+a_{3}a_{5} \notag \\ &&+a_{1}^{2}a_{6})b_{4}^{(l)}+[-a_{3}^{2}-a_{1}^{2}a_{4}+a_{1}(a_{2}a_{3}+a_{5})]b_{5}^{(l)}\}, \end{eqnarray} where the coefficients in the two-mechanical-resonator case are defined by \begin{widetext} \begin{eqnarray} a_{0} &=&i, \notag \\ a_{1} &=&\kappa +\gamma _{1}+\gamma _{2}+\omega _{\text{fb}}, \notag \\ a_{2} &=&-i[\omega _{1}^{2}+\omega _{2}^{2}+\gamma _{2}\omega _{\text{fb} }+\gamma _{1}(\gamma _{2}+\omega _{\text{fb}})+\kappa (\gamma _{1}+\gamma _{2}+\omega _{\text{fb}})], \notag \\ a_{3} &=&-\omega _{1}\omega _{\text{fb}}(Gg_{\text{cd}}+\omega _{1})-\omega _{2}^{2}(\gamma _{1}+\omega _{\text{fb}})-\gamma _{2}(\omega _{1}^{2}+\gamma _{1}\omega _{\text{fb}})-\kappa \lbrack \omega _{1}^{2}+\omega _{2}^{2}+\gamma _{2}\omega _{\text{ fb}}+\gamma _{1}(\gamma _{2}+\omega _{\text{fb}})], \notag \\ a_{4} &=&i\{\gamma _{1}\omega _{2}^{2}\omega _{\text{fb}}+\omega _{1}^{2}\left( \omega _{2}^{2}+\gamma _{2}\omega _{\text{fb}}\right) +\kappa \lbrack \omega _{1}^{2}\omega _{\text{fb}}+\omega _{2}^{2}(\gamma _{1}+\omega _{\text{fb}})+\gamma _{2}(\omega _{1}^{2}+\gamma _{1}\omega _{\text{fb}})]+\omega _{1}(Gg_{\text{cd}}\gamma _{2}\omega _{\text{fb}}-4\omega _{2}\tilde{\eta} _{1}^{2})\}, \notag \\ a_{5} &=&\omega _{1}\omega _{2}\omega _{\text{fb}}(Gg_{\text{cd}}\omega _{2}+\omega _{1}\omega _{2}-4\tilde{\eta}_{1}^{2})+\kappa \lbrack \gamma _{1}\omega _{2}^{2}\omega _{\text{fb}}+\omega _{1}^{2}(\omega _{2}^{2}+\gamma _{2}\omega _{\text{fb}})-4\omega _{1}\omega 
_{2}\tilde{\eta}_{1}^{2}], \notag \\ a_{6} &=&-i\kappa \omega _{1}\omega _{2}\omega _{\text{fb}}(\omega _{1}\omega _{2}-4\tilde{\eta}_{1}^{2}), \end{eqnarray} \begin{eqnarray} b_{0}^{\left( 1\right) } &=&0, \notag \\ b_{1}^{\left( 1\right) } &=&-\frac{\omega _{1}^{2}}{4\kappa \zeta }[g_{\text{cd}}^{2}\omega _{\text{fb}}^{2}+4\kappa \gamma _{1}\zeta (1+2\bar{n}_{1})], \notag \\ b_{2}^{\left( 1\right) } &=&-\frac{\omega _{1}^{2}}{4\kappa \zeta }\{g_{\text{cd}}^{2}\omega _{\text{fb}}^{2}\left( \kappa ^{2}+\gamma _{2}^{2}-2\omega _{2}^{2}\right) +4\kappa \lbrack G^{2}\kappa +\gamma _{1}(1+2\bar{n}_{1})(\kappa ^{2}+\gamma _{2}^{2}-2\omega _{2}^{2}+\omega _{\text{fb} }^{2})]\zeta \}, \notag \\ b_{3}^{\left( 1\right) } &=&-\frac{\omega _{1}^{2}}{4\kappa \zeta }\{g_{\text{cd}}^{2}\omega _{\text{fb}}^{2}\left[ \omega _{2}^{4}+\kappa^{2}(\gamma _{2}^{2}-2\omega _{2}^{2})\right] +4\kappa \lbrack G^{2}\kappa (\gamma _{2}^{2}-2\omega _{2}^{2}+\omega _{\text{fb}}^{2})\notag \\ &&+(1+2\bar{n}_{1})\gamma _{1}\left( \kappa ^{2}\gamma _{2}^{2}-2\kappa^{2}\omega _{2}^{2}+\omega _{2}^{4}+\left( \kappa ^{2}+\gamma _{2}^{2}-2\omega _{2}^{2}\right) \omega _{\text{fb}}^{2}\right)+4(1+2\bar{n}_{2})\gamma _{2}\omega _{2}^{2}\tilde{\eta}_{1}^{2}]\zeta \}, \notag \\ b_{4}^{\left( 1\right) } &=&-\frac{\omega _{1}^{2}}{4\zeta }\{g_{\text{cd}}^{2}\kappa \omega _{2}^{4}\omega _{\text{fb}}^{2}+4[G^{2}\kappa (\omega _{2}^{4}+\gamma _{2}^{2}\omega _{\text{fb}}^{2}-2\omega _{2}^{2}\omega_{\text{fb}}^{2})+(1+2\bar{n}_{1})\gamma _{1}\left( \omega _{2}^{4}\omega _{\text{fb}}^{2}+\kappa ^{2}(\omega _{2}^{4}+\gamma _{2}^{2}\omega_{\text{fb}}^{2}-2\omega _{2}^{2}\omega _{\text{fb}}^{2})\right) \notag \\ &&+4(1+2\bar{n}_{2})\gamma _{2}\omega _{2}^{2}(\kappa ^{2}+\omega _{\text{fb}}^{2})\tilde{\eta}_{1}^{2}]\zeta \}, \notag \\ b_{5}^{( 1) } &=&-\kappa \omega _{1}^{2}\omega _{2}^{2}\omega _{ \text{fb}}^{2}\{[G^{2}+\kappa \gamma _{1}( 1+2\bar{n}_{1}) ]\omega _{2}^{2}+4\kappa (1+2\bar{n}_{2})\gamma _{2}\tilde{\eta}_{1}^{2}\}, \end{eqnarray} and \begin{eqnarray} b_{0}^{\left( 2\right) } &=&0, \notag \\ b_{1}^{\left( 2\right) } &=&-(1+2\bar{n}_{2})\gamma _{2}\omega _{2}^{2}, \notag \\ b_{2}^{\left( 2\right) } &=&-(1+2\bar{n}_{2})\gamma _{2}\omega _{2}^{2}(\kappa ^{2}+\gamma _{1}^{2}-2\omega _{1}^{2}+\omega _{\text{fb}}^{2}), \notag \\ b_{3}^{\left( 2\right) } &=&-\frac{\omega _{2}^{2}}{\kappa \zeta }\{g_{\text{cd}}^{2}\omega _{1}^{2}\omega _{\text{fb}}^{2}\tilde{\eta}_{1}^{2}-2Gg_{\text{cd}}\kappa (1+2\bar{n}_{2})\gamma _{2}\omega _{1}\omega _{\text{fb}}(\kappa +\gamma _{1}+\omega _{\text{fb}})\zeta \notag \\ &&+\kappa \lbrack \left( 1+2\bar{n}_{2}\right) \gamma _{2}\left( \omega _{1}^{4}+\gamma _{1}^{2}\omega _{\text{fb}}^{2}-2\omega _{1}^{2}\omega _{\text{fb}}^{2}+\kappa ^{2}(\gamma _{1}^{2}-2\omega _{1}^{2}+\omega _{\text{fb}}^{2})\right)+4\left( 1+2\bar{n}_{1}\right) \gamma _{1}\omega _{1}^{2}\tilde{\eta}_{1}^{2}]\zeta \}, \notag \\ b_{4}^{\left( 2\right) } &=&-\frac{\omega _{2}^{2}}{\zeta }\{2Gg_{\text{cd} }(1+2\bar{n}_{2})\gamma _{2}\omega _{1}\omega _{\text{fb}}[\omega _{1}^{2}\omega _{\text{fb}}+\kappa (\omega _{1}^{2}+\gamma _{1}\omega _{\text{fb}})]\zeta+[(1+2\bar{n}_{2})\gamma _{2}\left( \omega _{1}^{4}\omega _{\text{fb} }^{2}+\kappa ^{2}(\omega _{1}^{4}+\gamma _{1}^{2}\omega _{\text{fb}}^{2}-2\omega _{1}^{2}\omega _{\text{fb}}^{2})\right) \notag \\ &&+4\omega _{1}^{2}\left( G^{2}\kappa +(1+2\bar{n}_{1})\gamma _{1}(\kappa ^{2}+\omega _{\text{fb}}^{2})\right) \tilde{\eta}_{1}^{2}]\zeta+g_{\text{cd}}^{2}\omega _{1}^{2}\omega 
_{\text{fb}}^{2}[\kappa \tilde{\eta }_{1}^{2}+G^{2}(1+2\bar{n}_{2})\gamma _{2}\zeta ]\}, \notag \\ b_{5}^{\left( 2\right) } &=&-\kappa \omega _{1}^{2}\omega _{2}^{2}\omega_{\text{fb}}^{2}\{\kappa (1+2\bar{n}_{2})\gamma _{2}\omega _{1}^{2}+4[G^{2}+\kappa (1+2\bar{n}_{1})\gamma _{1}]\tilde{\eta}_{1}^{2}\}. \end{eqnarray} \end{widetext} \end{document}
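As a minimal numerical sketch (with purely illustrative, assumed parameter values, all expressed in units of $\omega_{1}$; none of these numbers are taken from the paper), the suppressed cooling rates of Eqs.~(\ref{coolingrate2}) in the broken-feedback limit can be evaluated by simply plugging numbers into the analytical expressions above and comparing $\gamma_{j,\text{C}}$ with $\gamma_{j}$:
\begin{verbatim}
import numpy as np

# Assumed, illustrative parameters (units of omega_1); not values from the paper.
omega1, omega2 = 1.0, 1.05      # mechanical frequencies
gamma1, gamma2 = 1e-5, 1e-5     # mechanical damping rates
eta1 = 0.05                     # nearest-neighbor coupling  \tilde{\eta}_1

# Cooling rates with the feedback loop off, Eqs. (coolingrate2)
gamma1_C = 4 * eta1**2 * omega1 * gamma2 / omega2**3
gamma2_C = 4 * eta1**2 * omega2 * gamma1 / omega1**3

print("gamma_{1,C}/gamma_1 =", gamma1_C / gamma1)   # ~ 9e-3
print("gamma_{2,C}/gamma_2 =", gamma2_C / gamma2)   # ~ 1e-2
# Both ratios are far below 1, i.e. gamma_{j,C} << gamma_j, so the ground-state
# cooling condition gamma_{j,C} >> gamma_j cannot be met without the feedback loop.
\end{verbatim}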
math
61,268
\begin{document} \title{Complex Laplacians and Applications in Multi-Agent Systems \thanks{This work is supported by the National Natural Science Foundation of China under grant~11301114 and Hong Kong Research Grants Council under grant~618511.}} \author{Jiu-Gang~Dong, and~Li~Qiu,~\IEEEmembership{Fellow,~IEEE} \thanks{J.-G. Dong is with the Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China and is also with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China (e-mail: [email protected]).} \thanks{L. Qiu is with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China (e-mail: [email protected]).} } \maketitle \begin{abstract} Complex-valued Laplacians have been shown to be powerful tools in the study of distributed coordination of multi-agent systems in the plane including formation shape control problems and set surrounding control problems. In this paper, we first provide some characterizations of complex Laplacians. As an application, we then establish some necessary and sufficient conditions to ensure that the agents interacting on complex-weighted networks converge to consensus in some sense. These general consensus results are used to discuss some multi-agent coordination problems in the plane. \end{abstract} \begin{IEEEkeywords} Complex Laplacians, multi-agent systems, complex consensus. \end{IEEEkeywords} \section{Introduction} In the past decade there has been increasing interest in studying the distributed coordination and control of multi-agent systems, which appear in diverse situations including consensus problems, flocking and formation control~\cite{JLM03,OSM04,OSFM07,OT11}. As a natural tool, Laplacian matrices of a weighted graph (modeling the interaction among agents) are extensively used in the study of the distributed coordination problems of multi-agent systems. Most results are based on real Laplacians, see, e.g., the agreement~\cite{BS03,OSM04,OSFM07,RB05}, generalized consensus~\cite{CLHY11,Morbidi13} and bipartite consensus on signed graphs~\cite{Altafini13,MSJCH14}. Very recently, complex Laplacians have been applied to multi-agent systems~\cite{LDYYG13,LWHF14,LH14}. In particular, formation shape control problems in the plane with complex Laplacians were discussed in~\cite{LDYYG13,LWHF14}, while based on complex Laplacians, new methods were developed in~\cite{LH14} for the distributed set surrounding design, which contains consensus on complex-valued networks as a special case. It has been shown that complex Laplacians are powerful tools for multi-agent systems and can significantly simplify the analysis once the state space is a plane. From this point, it is worth investigating complex Laplacians independently. The main goal of this paper is to study the properties of complex Laplacians. More precisely, for a complex-weighted graph, we provide a necessary and sufficient condition ensuring that the complex Laplacian has a simple eigenvalue at zero with a specified eigenvector. The condition is in terms of connectivity of graphs and features of weights. It is shown that the notion of {\em structural balance} for complex-weighted graphs plays a critical role for establishing the condition. To demonstrate the importance of the obtained condition, we apply the condition to consensus problems on complex-weighted graphs. 
A general notion of consensus, called {\em complex consensus}, is introduced, which means that all limiting values of the agents have the same modulus. Some necessary and sufficient conditions for complex consensus are obtained. These complex consensus results extend and complement some existing ones including the standard consensus results~\cite{RB05} and bipartite consensus results~\cite{Altafini13}. This paper makes the following contributions. 1) We extend the known results on complex Laplacians (see~\cite{Reff12}) to a general setting. 2) We establish general consensus results, which are shown to be useful in the study of distributed coordination of multi-agent systems in the plane such as circular formation and set surrounding control. In particular, our results supplement the bipartite consensus results in~\cite{Altafini13}. The remainder of this paper is organized as follows. Section~\ref{section: complex Laplacians} discusses the properties of the complex Laplacian. Some multi-agent coordination control problems, based on the complex Laplacian, are investigated in Section~\ref{section: application}. Section~\ref{section: examples} presents some examples to illustrate our results. This paper is concluded in Section~\ref{section: conclusion}. The notation used in the paper is quite standard. Let $\mathbb R$ be the field of real numbers and $\mathbb C$ the field of complex numbers. For a complex matrix $A\in\mathbb C^{n\times n}$, $A^*$ denotes the conjugate transpose of $A$. We use $\bar{z}$ to denote the complex conjugate of a complex number $z$. The modulus of $z$ is denoted by $|z|$. Let $\mathbf{1}\in\mathbb R^n$ be the $n$-dimensional column vector of ones. For $x=[x_1,\ldots,x_n]^T\in\mathbb C^n$, let $\|x\|_1$ be its $1$-norm, i.e., $\|x\|_1=\sum_{i=1}^n|x_i|$. Denote by $\mathbb T$ the unit circle, i.e., $\mathbb T=\{z\in\mathbb C:\ |z|=1\}$. It is easy to see that $\mathbb T$ is an abelian group under multiplication. For $\zeta=[\zeta_1,\ldots,\zeta_n]^T\in\mathbb T^n$, let $D_\zeta:=\mathrm{diag}(\zeta)$ denote the diagonal matrix with $i$th diagonal entry $\zeta_i$. Finally, ${\rm j}=\sqrt{-1}$ denotes the imaginary unit. \section{Complex-weighted graphs}\label{section: complex Laplacians} In this section we present some interesting results on complex-weighted graphs. We believe that these results are also of independent interest from the graph-theory point of view. Before proceeding, we introduce some basic concepts of complex-weighted graphs. \subsection{Preliminaries} The digraph associated with a complex matrix $A=[a_{ij}]_{n\times n}$ is denoted by $\mathcal G(A)=(\mathcal V,\mathcal E)$, where $\mathcal V=\{1,\ldots,n\}$ is the vertex set and $\mathcal E\subset\mathcal V\times \mathcal V$ is the edge set. There is an edge $(j,i)\in\mathcal E$, i.e., an edge from $j$ to $i$, if and only if $a_{ij}\neq0$. The matrix $A$ is usually called the adjacency matrix of the digraph $\mathcal G(A)$. Moreover, we assume that $a_{ii}=0$, for $i=1,\ldots,n$, i.e., $\mathcal G(A)$ has no self-loops. For easy reference, we say $\mathcal G(A)$ is complex, real and nonnegative if $A$ is complex, real and (real) nonnegative, respectively. Let $\mathcal N_i$ be the neighbor set of agent $i$, defined as $\mathcal N_i=\{j:\ a_{ij}\neq0\}$. A directed path in $\mathcal G(A)$ from $i_1$ to $i_k$ is a sequence of distinct vertices $i_1,\ldots,i_k$ such that $(i_l,i_{l+1})\in\mathcal E$ for $l=1,\ldots,k-1$. A cycle is a path such that the origin and terminus are the same. 
The {\em weight} of a cycle is defined as the product of weights on all its edges. A cycle is said to be {\em positive} if it has a positive weight. The following definitions are used throughout this paper. \begin{itemize} \item[$\cdot$] A digraph is said to be {\em (structurally) balanced} if all cycles are positive. \item[$\cdot$] A digraph has a directed spanning tree if there exists at least one vertex (called a root) which has a directed path to all other vertices. \item[$\cdot$] A digraph is strongly connected if for any two distinct vertices $i$ and $j$, there exists a directed path from $i$ to $j$. \end{itemize} For a strongly connected graph, it is clear that all vertices can serve as roots. We can see that being strongly connected is stronger than having a directed spanning tree and they are equivalent when $A$ is Hermitian. For a complex digraph $\mathcal G(A)$, the complex Laplacian matrix $L=[l_{ij}]_{n\times n}$ of $\mathcal G(A)$ is defined by $L=D-A$ where $D=\mathrm{diag}(d_1,\ldots,d_n)$ is the modulus degree matrix of $\mathcal G(A)$ with $d_i=\sum_{j\in\mathcal N_i}|a_{ij}|$. This definition appears in the literature on gain graphs (see, e.g., \cite{Reff12}), which can be thought as a generalization of standard Laplacian matrix of nonnegative graphs. We need the following definition on {\em switching equivalence} \cite{Reff12, Z89}. \begin{defn}\label{defn: switching equivalent} {\rm Two graphs $\mathcal G(A_1)$ and $\mathcal G(A_2)$ are said to be {\em switching equivalent}, written as $\mathcal G(A_1)\sim\mathcal G(A_2)$, if there exists a vector $\zeta=[\zeta_1\ldots,\zeta_n]^T\in\mathbb T^n$ such that $A_2=D_\zeta^{-1} A_1D_\zeta$.} \end{defn} It is not difficult to see that the switching equivalence is an equivalence relation. We can see that switching equivalence preserves connectivity and balancedness. We next investigate the properties of eigenvalues of complex Laplacian $L$. \subsection{Properties of the complex Laplacian} For brevity, we say $A$ is {\em essentially nonnegative} if $\mathcal G(A)$ is switching equivalent to a graph with a nonnegative adjacency matrix. By definition, it is easy to see that $A$ is essentially nonnegative if and only if there exists a diagonal matrix $D_\zeta$ such that $D_\zeta^{-1} AD_\zeta$ is nonnegative. By the Ger\v{s}gorin disk theorem \cite[Theorem 6.1.1]{HJ87}, we see that all the eigenvalues of the Laplacian matrix $L$ of $A$ have nonnegative real parts and zero is the only possible eigenvalue with zero real part. We next further discuss the properties of eigenvalues of $L$ in terms of $\mathcal G(A)$. \begin{lem}\label{lem: 1} Zero is an eigenvalue of $L$ with an eigenvector $\zeta\in\mathbb T^n$ if and only if $A$ is essentially nonnegative. \end{lem} \begin{proof} (Sufficiency) Assume that $A$ is essentially nonnegative. That is, there exists a diagonal matrix $D_\zeta$ such that $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative. Let $L_1$ be the Laplacian matrix of the nonnegative matrix $A_1$ and thus $L_1\mathbf{1}=0$. A simple observation shows that these two Laplacian matrices are similar, i.e., $L_1=D_\zeta^{-1}LD_\zeta$. Therefore, $L\zeta=0$. (Necessity) Let $L\zeta=0$ with $\zeta\in\mathbb T^n$. Then we have $LD_\zeta\mathbf{1}=0$ and so $D_\zeta^{-1}LD_\zeta\mathbf{1}=0$. Expanding the equation $D_\zeta^{-1}LD_\zeta\mathbf{1}=0$ in component form, we can verify that $D_\zeta^{-1}LD_\zeta\in\mathbb R^{n\times n}$ has nonpositive off-diagonal entries. 
This implies that $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative and thus $A$ is essentially nonnegative. \end{proof} If we take the connectedness into account, then we can derive a stronger result. \begin{prop}\label{prop: 2} Zero is a simple eigenvalue of $L$ with an eigenvector $\xi\in\mathbb T^n$ if and only if $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. \end{prop} \begin{proof} The proof follows from a sequence of equivalences: $$ (1)\Leftrightarrow(2)\Leftrightarrow(3)\Leftrightarrow(4). $$ Conditions (1)-(4) are given in the following. \begin{itemize} \item[$(1)$] $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. \item[$(2)$] There exists a diagonal matrix $D_\zeta$ such that $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative and $\mathcal G(A_1)$ has a spanning tree. \item[$(3)$] There exists a diagonal matrix $D_\zeta$ such that $L_1=D_\zeta^{-1}LD_\zeta$ has a simple zero eigenvalue with an eigenvector being $\mathbf{1}$. \item[$(4)$] $L$ has a simple zero eigenvalue with an eigenvector $\zeta\in\mathbb T^n$. \end{itemize} Here, the second one is from \cite[Lemma 3.1]{Ren07} and the last one follows from the similarity. \end{proof} Here a key issue is how to verify the essential nonnegativity of $A$. Thanks to the concept of balancedness of digraphs, we can derive a necessary and sufficient condition for $A$ to be essentially nonnegative. To this end, for a complex matrix $A$, we denote by $A_H=(A+A^*)/2$ the Hermitian part of $A$. Clearly, we have $A=A_H$ when $A$ is Hermitian. \begin{prop}\label{prop: 1} The complex matrix $A=[a_{ij}]_{n\times n}$ is essentially nonnegative if and only if $\mathcal G(A_H)$ is balanced and $a_{ij}a_{ji}\geq0$ for all $1\leq i,j\leq n$. \end{prop} \begin{proof} Since $A_H$ is Hermitian, it follows from~\cite{Z89} that $\mathcal G(A_H)$ is balanced if and only if $A_H$ is essentially nonnegative. Therefore, to complete the proof, we next show that $A$ is essentially nonnegative if and only if $A_H$ is essentially nonnegative and $a_{ij}a_{ji}\geq0$ for all $1\leq i,j\leq n$. {\em Sufficiency:} By the condition that $a_{ij}a_{ji}\geq0$, we have that $|a_{ij}a_{ji}|=\bar{a}_{ij} \bar{a}_{ji}$. Multiplying both sides by $a_{ij}$, we obtain that $|a_{ji}|a_{ij}=|a_{ij}|\bar{a}_{ji}$. Consequently, for a diagonal matrix $D_\zeta$ with $\zeta=[\zeta_1,\ldots,\zeta_n]^T\in\mathbb T^n$, we have for $a_{ij}\neq0$ \begin{equation}\label{eq: relation} \zeta_i^{-1}\frac{a_{ij}+\bar{a}_{ji}}{2}\zeta_j =\frac{1+\frac{|a_{ji}|}{|a_{ij}|}}{2}\zeta_i^{-1}a_{ij}\zeta_j. \end{equation} It thus follows that $D_\zeta^{-1}A_HD_\zeta$ being nonnegative implies $D_\zeta^{-1}AD_\zeta$ being nonnegative, which proves the sufficiency. {\em Necessity:} Now assume that $A$ is essentially nonnegative. That is, there exists a diagonal matrix $D_\zeta$ such that $D_\zeta^{-1}AD_\zeta$ is nonnegative. Then we have $$ a_{ij}a_{ji}=(\zeta_i^{-1}a_{ij}\zeta_j)(\zeta_j^{-1}a_{ji}\zeta_i)\geq0 $$ from which we know that relation~\eqref{eq: relation} follows. This implies that $D_\zeta^{-1}A_HD_\zeta$ is nonnegative. This concludes the proof. \end{proof} The above proposition deals with the balancedness of $\mathcal G(A_H)$, instead of $\mathcal G(A)$ itself. The reason is that $\mathcal G(A)$ being balanced is not a sufficient condition for $A$ being essentially nonnegative, as shown in the following example. 
\begin{exa} {\rm Consider the complex matrix $A$ given by $$ A=\begin{bmatrix} 0 & 2 & 0 \\ 1 & 0 & 0 \\ -\mathrm{j} & \mathrm{j} & 0 \\ \end{bmatrix}. $$ It is straightforward that $\mathcal G(A)$ only has a positive cycle of length two and thus is balanced. However, we can check that $A$ is not essentially nonnegative.} \end{exa} The following theorem is a combination of Propositions~\ref{prop: 2} and \ref{prop: 1}. \begin{thm} Zero is a simple eigenvalue of $L$ with an eigenvector $\xi\in\mathbb T^n$ if and only if $\mathcal G(A)$ has a spanning tree, $\mathcal G(A_H)$ is balanced and $a_{ij}a_{ji}\geq0$ for all $1\leq i,j\leq n$. \end{thm} We next turn our attention to the case that $A$ is not essentially nonnegative. When $\mathcal G(A)$ has a spanning tree and $A$ is not essentially nonnegative, what we can only obtain from Proposition~\ref{prop: 2} is that either zero is not an eigenvalue of $L$, or zero is an eigenvalue of $L$ with no associated eigenvector in $\mathbb T^n$. To provide further understanding, we here consider the special case that $A$ is Hermitian. In this case, $L$ is also Hermitian. Then all eigenvalues of $L$ are real. Let $\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_n$ be the eigenvalues of $L$. The positive semidefiniteness of $L$, i.e., the fact that $\lambda_1\geq0$, can be obtained by the following observation. For $z=[z_1,\ldots,z_n]^T\in\mathbb C^n$, we have \begin{equation}\label{eq: positivity of L} \begin{split} z^*Lz&=\sum_{i=1}^n\bar{z}_i\left(\sum_{j\in\mathcal N_i}|a_{ij}|z_i-\sum_{j\in\mathcal N_i}a_{ij}z_j\right)\\ &=\frac{1}{2}\sum_{(j,i)\in\mathcal E}\left(|a_{ij}||z_i|^2+|a_{ij}||z_j|^2-2a_{ij}\bar{z}_iz_j\right)\\ &=\frac{1}{2}\sum_{(j,i)\in\mathcal E}|a_{ij}|\left|z_i-\varphi(a_{ij})z_j\right|^2\\ \end{split} \end{equation} where $\varphi:\ \mathbb C\backslash\{0\}\rightarrow\mathbb T$ is defined by $\varphi(a_{ij})=\frac{a_{ij}}{|a_{ij}|}$. Based on \eqref{eq: positivity of L}, we have the following lemma. \begin{lem}\label{lem: 2} Let $A$ be Hermitian. Assume that $\mathcal G(A)$ has a spanning tree. Then $L$ is positive definite, i.e., $\lambda_1>0$, if and only if $A$ is not essentially nonnegative. \end{lem} \begin{proof} We only show the sufficiency since the necessity follows directly from Proposition~\ref{prop: 2}. Assume the contrary. Then there exists a nonzero vector $y=[y_1,\ldots,y_n]^T\in\mathbb C^n$ such that $Ly=0$. By \eqref{eq: positivity of L}, \begin{equation*} \begin{split} y^*Ly=\frac{1}{2}\sum_{(j,i)\in\mathcal E}|a_{ij}|\left|y_i-\frac{a_{ij}}{|a_{ij}|}y_j\right|^2=0. \end{split} \end{equation*} This implies that $y_i=\frac{a_{ij}}{|a_{ij}|}y_j$ for $(j,i)\in\mathcal E$ and so $|y_i|=|y_j|$ for $(j,i)\in\mathcal E$. Note that for $\mathcal G(A)$ with $A$ being Hermitian, having a spanning tree is equivalent to the strong connectivity. Then we conclude that $|y_i|=|y_j|$ for all $i,j=1,\ldots,n$. Without loss of generality, we assume that $y\in\mathbb T^n$. It follows from Lemma~\ref{lem: 1} that $A$ is essentially nonnegative, a contradiction. \end{proof} On the other hand, for the general case that $A$ is not Hermitian, we cannot conclude that $L$ has no zero eigenvalue when $\mathcal G(A)$ has a spanning tree and $A$ is not essentially nonnegative. Example~\ref{exa: 1} in Section~\ref{section: examples} provides such an example. \section{Applications}\label{section: application} In this section, we study the distributed coordination problems with the results established in Section~\ref{section: complex Laplacians}. 
We first consider the consensus problems on complex-weighted digraphs. \subsection{Complex consensus} For a group of $n$ agents, we consider the continuous-time (CT) consensus protocol over the complex field \begin{equation}\label{eq: Algorithm} \dot{z}_i(t)=u_i(t),\ t\geq0 \end{equation} where $z_i(t)\in\mathbb C$ and $u_i(t)\in\mathbb C$ are the state and input of agent $i$, respectively. We also consider the corresponding discrete-time (DT) protocol over the complex field \begin{equation}\label{eq: discrete algorithm} z_i(k+1)=z_i(k)+u_i(k),\ k=0,1,\ldots. \end{equation} The communication between agents is modeled by a complex graph $\mathcal G(A)$. The control input $u_i$ is designed, in a distributed way, as $$ u_i=-\kappa\sum_{j\in\mathcal N_i}(|a_{ij}|z_i-a_{ij}z_j), $$ where $\kappa>0$ is a fixed control gain. Then we have the following two systems described as \begin{equation*} \dot{z}_i(t)=-\kappa\sum_{j\in\mathcal N_i}(|a_{ij}|z_i-a_{ij}z_j) \end{equation*} and \begin{equation*} z_i(k+1)=z_i(k)-\kappa\sum_{j\in\mathcal N_i}(|a_{ij}|z_i-a_{ij}z_j). \end{equation*} Denote by $z=(z_1,\ldots,z_n)^T\in\mathbb C^n$ the aggregate position vector of the $n$ agents. With the Laplacian matrix $L$ of $\mathcal G(A)$, these two systems can be rewritten in more compact forms: \begin{equation}\label{eq: Algorithm with complex L} \dot{z}(t)=-\kappa Lz(t) \end{equation} in the CT case and \begin{equation}\label{eq: discrete algorithm with complex L} z(k+1)=z(k)-\kappa Lz(k) \end{equation} in the DT case. Inspired by the consensus in real-weighted networks \cite{Altafini13, OSM04, OSFM07}, we introduce the following definition. \begin{defn}\label{defn: modulus consensus} {\rm We say that the CT system \eqref{eq: Algorithm with complex L} (or the DT system~\eqref{eq: discrete algorithm with complex L}) reaches {\em complex consensus} if $\lim_{t\rightarrow\infty}|z_i(t)|=a>0$ (or $\lim_{k\rightarrow\infty}|z_i(k)|=a>0$) for $i=1,\ldots,n$.} \end{defn} The following observation is useful in simplifying the statement of the complex consensus results. Let $A$ be an essentially nonnegative complex matrix. If $\mathcal G(A)$ has a spanning tree, then it follows from Proposition~\ref{prop: 2} that $L$ has a simple eigenvalue at zero with an associated eigenvector $\zeta\in\mathbb T^n$. Thus, $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative and $D_\zeta^{-1}LD_\zeta$ has a simple eigenvalue at zero with an associated eigenvector $\mathbf{1}$. In the standard consensus theory~\cite{RBA07}, it is well known that $D_\zeta^{-1}LD_\zeta$ has a nonnegative left eigenvector $\nu=[\nu_1,\ldots,\nu_n]^T$ corresponding to the eigenvalue zero, i.e., $\nu^T(D_\zeta^{-1}LD_\zeta)=0$ and $\nu_i\geq0$ for $i=1,\ldots,n$. We assume that $\|\nu\|_1=1$. Letting $\eta=D_\zeta^{-1}\nu=[\eta_1,\ldots,\eta_n]^T$, we have $\|\eta\|_1=1$ and $\eta^TL=0$. We first state a necessary and sufficient condition for complex consensus of the CT system~\eqref{eq: Algorithm with complex L}. \begin{thm}\label{thm: 2} The CT system \eqref{eq: Algorithm with complex L} reaches complex consensus if and only if $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. In this case, we have $$ \lim_{t\rightarrow\infty}z(t)=(\eta^Tz(0))\zeta. $$ \end{thm} \begin{proof} Assume that $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. By Proposition~\ref{prop: 2}, $L$ has a simple eigenvalue at zero with an associated eigenvector $\zeta\in\mathbb T^n$. 
Thus, we conclude that $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative and $D_\zeta^{-1}LD_\zeta$ has a simple eigenvalue at zero with an eigenvector $\mathbf{1}$. Let $z=D_\zeta x$. By system \eqref{eq: Algorithm with complex L}, we can see that $x$ satisfies the system \begin{equation*} \dot{x}=-\kappa D_\zeta^{-1}LD_\zeta x. \end{equation*} Note that this is the standard consensus problem. From~\cite{RBA07}, it follows that $$ \lim_{t\rightarrow\infty}x(t)=\nu^Tx(0)\mathbf{1}=\nu^TD_\zeta^{-1}z(0)\mathbf{1}. $$ This is equivalent to $$ \lim_{t\rightarrow\infty}z(t)=(\nu^TD_\zeta^{-1}z(0))D_\zeta\mathbf{1}= (\eta^Tz(0))\zeta. $$ To show the other direction, we now assume that the system \eqref{eq: Algorithm with complex L} reaches complex consensus but $\mathcal G(A)$ does not have a spanning tree. Let $T_1$ be a maximal subtree of $\mathcal G(A)$. Note that $T_1$ is a spanning tree of a subgraph $\mathcal G_1$ of $\mathcal G(A)$. Denote by $\mathcal G_2$ the subgraph induced by the vertices not belonging to $\mathcal G_1$. There is no edge from $\mathcal G_1$ to $\mathcal G_2$, since otherwise $T_1$ would not be a maximal subtree. All possible edges between $\mathcal G_1$ and $\mathcal G_2$ are from $\mathcal G_2$ to $\mathcal G_1$, and, again because $T_1$ is a maximal subtree, there is no directed path from a vertex in $\mathcal G_2$ to the root of $T_1$. Therefore, complex consensus cannot be reached between the root of $T_1$ and the vertices of $\mathcal G_2$. This implies that the system \eqref{eq: Algorithm with complex L} cannot reach complex consensus, a contradiction. Hence $\mathcal G(A)$ has a spanning tree. On the other hand, since the system \eqref{eq: Algorithm with complex L} reaches complex consensus, we can see that the solutions $y=[y_1,\ldots,y_n]^T$ of the equation $Ly=0$ always have the property $|y_i|=|y_j|$ for all $i,j=1,\ldots,n$. Namely, zero is an eigenvalue of $L$ with an eigenvector $\zeta\in\mathbb T^n$. It thus follows from Lemma~\ref{lem: 1} that $A$ is essentially nonnegative. This completes the proof of Theorem \ref{thm: 2}. \end{proof} For $\mathcal G(A)$, define the maximum modulus degree $\Delta$ by $\Delta=\max_{1\leq i\leq n}d_i$. We are now in a position to state the complex consensus result for the DT system~\eqref{eq: discrete algorithm with complex L}. \begin{thm}\label{thm: discrete version of 2} Assume that the control gain $\kappa$ is such that $0<\kappa<1/\Delta$. Then the DT system \eqref{eq: discrete algorithm with complex L} reaches complex consensus if and only if $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. In this case, we have $$ \lim_{k\rightarrow\infty}z(k)=(\eta^Tz(0))\zeta. $$ \end{thm} \begin{proof} Assume that $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. By Proposition~\ref{prop: 2}, $L$ has a simple eigenvalue at zero with an associated eigenvector $\zeta\in\mathbb T^n$. Thus, we conclude that $A_1=D_\zeta^{-1}AD_\zeta$ is nonnegative and $D_\zeta^{-1}LD_\zeta$ has a simple eigenvalue at zero with an associated eigenvector $\mathbf{1}$. Let $z=D_\zeta x$. By system \eqref{eq: discrete algorithm with complex L}, we can see that $x$ satisfies the system \begin{equation*} x(k+1)=(I-\kappa D_\zeta^{-1}LD_\zeta)x(k). \end{equation*} Note that this is the standard consensus problem. From~\cite{RBA07}, it follows that $$ \lim_{k\rightarrow\infty}x(k)=\nu^Tx(0)\mathbf{1}=\nu^TD_\zeta^{-1}z(0)\mathbf{1}. 
$$ This is equivalent to $$ \lim_{k\rightarrow\infty}z(k)=(\nu^TD_\zeta^{-1}z(0))D_\zeta\mathbf{1}=(\eta^Tz(0))\zeta. $$ To show the other direction, we now assume that the system \eqref{eq: discrete algorithm with complex L} reaches complex consensus. Using the same arguments as for the CT system \eqref{eq: Algorithm with complex L} above, we can see that $\mathcal G(A)$ has a spanning tree. On the other hand, based on the Ger\v{s}gorin disk theorem \cite[Theorem 6.1.1]{HJ87}, all the eigenvalues of $-\kappa L$ are located in the union of the following $n$ disks: $$ \left\{z\in\mathbb C: \left|z+\kappa\sum_{j\in\mathcal N_i}|a_{ij}|\right|\leq\kappa\sum_{j\in\mathcal N_i}|a_{ij}|\right\}, \ i=1,\ldots,n. $$ Clearly, all these $n$ disks are contained in the largest disk defined by $$ \left\{z\in\mathbb C: \left|z+\kappa\Delta\right|\leq\kappa\Delta\right\}. $$ Noting that $0<\kappa<1/\Delta$, we can see that the largest disk is contained in the region $\left\{z\in\mathbb C: \left|z+1\right|<1\right\}\cup\{0\}$. By translation, all the eigenvalues of $I-\kappa L$ are located in the following region: $$ \left\{z\in\mathbb C: |z|<1\right\}\cup\{1\}. $$ Since the system \eqref{eq: discrete algorithm with complex L} reaches complex consensus, we can see that $1$ must be an eigenvalue of $I-\kappa L$. All other eigenvalues of $I-\kappa L$ have modulus strictly smaller than $1$. Moreover, if $y=[y_1,\ldots,y_n]^T$ is an eigenvector of $I-\kappa L$ corresponding to the eigenvalue $1$, then $|y_i|=|y_j|>0$ for $i,j=1,\ldots,n$. That is, zero is an eigenvalue of $L$ with an eigenvector $\zeta\in\mathbb T^n$. It thus follows from Lemma~\ref{lem: 1} that $A$ is essentially nonnegative. This completes the proof of Theorem \ref{thm: discrete version of 2}. \end{proof} \begin{rem}\label{rem: Hermitian} {\rm \noindent \begin{itemize} \item[1)] In Theorems \ref{thm: 2} and \ref{thm: discrete version of 2}, the key point is to check the condition that $A$ is essentially nonnegative, which, by Proposition~\ref{prop: 1}, can be done by examining whether $\mathcal G(A_H)$ is balanced and $a_{ij}a_{ji}\geq0$ for all $1\leq i,j\leq n$. \item[2)] For the special case when $A$ is Hermitian, Theorems \ref{thm: 2} and \ref{thm: discrete version of 2} take a simpler form. As an example, we consider the CT system \eqref{eq: Algorithm with complex L} with $A$ being Hermitian. In this case, it follows from Proposition~\ref{prop: 1} that $A$ is essentially nonnegative if and only if $\mathcal G(A)$ is balanced. Then we have that the CT system \eqref{eq: Algorithm with complex L} reaches complex consensus if and only if $\mathcal G(A)$ has a spanning tree and is balanced. In this case, $$ \lim_{t\rightarrow\infty}z(t)=\frac{1}{n}(\zeta^*z(0))\zeta. $$ In addition, in view of Lemma~\ref{lem: 2}, it follows that $\lim_{t\rightarrow\infty}z(t)=0$ when $\mathcal G(A)$ has a spanning tree and is unbalanced. \item[3)] By the standard consensus results in \cite{RB05}, Theorems~\ref{thm: 2} and \ref{thm: discrete version of 2} can be generalized to the case of switching topology. We omit the details to avoid repetition. \end{itemize} } \end{rem} \begin{rem} {\rm \noindent \begin{itemize} \item[1)] Theorems~\ref{thm: 2} and \ref{thm: discrete version of 2} actually give an equivalent condition ensuring that all the agents converge to a common circle centered at the origin. 
Motivated by this observation, we can modify the two systems~\eqref{eq: Algorithm with complex L} and \eqref{eq: discrete algorithm with complex L} accordingly to study circular formation problems. Similar to Theorems~\ref{thm: 2} and \ref{thm: discrete version of 2}, we can establish a necessary and sufficient condition to ensure that all the agents converge to a common circle centered at a given point and are distributed along the circle in a desired pattern, expressed by the prespecified angle separations and ordering among agents. We omit the details due to space limitations. \item[2)] Part of Theorem~\ref{thm: 2} has been obtained in the literature, see~\cite[Theorems III.5 and III.6]{LH14}. As potential applications, the results in Section~\ref{section: complex Laplacians} can be used to study the set surrounding control problems~\cite{LH14}. A detailed analysis is beyond the scope of this paper. \end{itemize} } \end{rem} \subsection{Bipartite consensus revisited} As an application, we now revisit some bipartite consensus results from Theorems~\ref{thm: 2} and \ref{thm: discrete version of 2}. We will see that these bipartite consensus results improve the existing results in the literature. Let $\mathcal G(A)$ be a signed graph, i.e., $A=[a_{ij}]_{n\times n}\in\mathbb R^{n\times n}$ and $a_{ij}$ can be negative. By bipartite consensus we mean that, on a signed graph, all agents converge to values with the same absolute value but possibly different signs. The state, now denoted by $x$, is restricted to the field of real numbers $\mathbb R$. Then the two systems \eqref{eq: Algorithm with complex L} and \eqref{eq: discrete algorithm with complex L} reduce to the standard consensus systems: \begin{equation}\label{eq: standard CT system} \dot{x}(t)=-\kappa Lx(t) \end{equation} and \begin{equation}\label{eq: standard DT system} x(k+1)=x(k)-\kappa Lx(k). \end{equation} With the above two systems and based on Theorems~\ref{thm: 2} and \ref{thm: discrete version of 2}, we can derive the bipartite consensus results on signed graphs. \begin{cor}\label{thm: signed 2} Let $\mathcal G(A)$ be a signed digraph. Then the CT system \eqref{eq: standard CT system} achieves bipartite consensus asymptotically if and only if $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. In this case, for any initial state $x(0)\in\mathbb R^n$, we have $$ \lim_{t\rightarrow\infty}x(t)=(\eta^Tx(0))\sigma $$ where $\sigma=[\sigma_1,\ldots,\sigma_n]^T\in\{\pm1\}^n$ is such that $D_{\sigma}AD_{\sigma}$ is a nonnegative matrix and $\eta^TL=0$ with $\eta=[\eta_1,\dots,\eta_n]^T\in\mathbb R^n$ and $\|\eta\|_1=1$. \end{cor} \begin{cor}\label{thm: signed D version of 2} Let $\mathcal G(A)$ be a signed digraph. Then the DT system \eqref{eq: standard DT system} with $0<\kappa<1/\Delta$ achieves bipartite consensus asymptotically if and only if $A$ is essentially nonnegative and $\mathcal G(A)$ has a spanning tree. In this case, for any initial state $x(0)\in\mathbb R^n$, we have $$ \lim_{k\rightarrow\infty}x(k)=(\eta^Tx(0))\sigma $$ where $\eta$ and $\sigma$ are defined as in Corollary~\ref{thm: signed 2}. \end{cor} \begin{rem} {\rm Corollary \ref{thm: signed 2} indicates that bipartite consensus can be achieved under a condition weaker than that given in Theorem~2 in~\cite{Altafini13}. 
In addition, we also obtain a similar necessary and sufficient condition for bipartite consensus of the DT system~\eqref{eq: standard DT system}.} \end{rem} \section{Examples}\label{section: examples} In this section we present some examples to illustrate our results. \begin{exa} {\rm Consider the complex graph $\mathcal G(A)$ illustrated in Figure \ref{fig: balanced} with adjacency matrix $$ A=\begin{bmatrix} 0 & 0 & -\mathrm{j} & 0 \\ 1 & 0 & 0 & 0 \\ 0 & \mathrm{j} & 0 & 0 \\ 0 & 1+\mathrm{j} & 0 & 0 \\ \end{bmatrix}. $$ It is trivial that $\mathcal G(A)$ has a spanning tree. Since $\mathcal G(A_H)$ is balanced, Proposition~\ref{prop: 1} implies that $A$ is essentially nonnegative. Furthermore, defining $\zeta=[1,1,\mathrm{j},e^{\mathrm{j}\frac{\pi}{4}}]^T\in\mathbb T^4$, we have $$ A_1=D_\zeta^{-1}AD_\zeta=\begin{bmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & \sqrt{2} & 0 & 0 \\ \end{bmatrix}. $$ The set of eigenvalues of the complex Laplacian $L$ is $\{0, \sqrt{2}, 3/2+\sqrt{3}\mathrm{j}/2, 3/2-\sqrt{3}\mathrm{j}/2\}$. The vector $\zeta$ is an eigenvector associated with eigenvalue zero. A simulation under system \eqref{eq: Algorithm with complex L} is given in Figure \ref{fig: Modolus consensus}, which shows that the complex consensus is reached asymptotically. This confirms the analytical results of Theorems \ref{thm: 2} and \ref{thm: discrete version of 2}. \begin{figure} \caption{Balanced graph.} \label{fig: balanced} \end{figure} \begin{figure} \caption{Complex consensus process of the agents.} \label{fig: Modolus consensus} \end{figure}} \end{exa} \begin{exa}\label{exa: 1} {\rm Consider the complex graph $\mathcal G(A)$ illustrated in Figure \ref{fig: unbalanced1} with adjacency matrix $$ A=\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1-\mathrm{j} & 0 & 0\\ 0 & \mathrm{j} & 0 & 0 & 0 & 0\\ \mathrm{j}& 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -\mathrm{j} & 0\\ \end{bmatrix}. $$ We can see that $\mathcal G(A)$ has a spanning tree and $A$ is not essentially nonnegative since $\mathcal G(A_H)$ is unbalanced. We can verify that zero is an eigenvalue of $L$. The simulation in Figure \ref{fig: no MC} shows that the complex consensus cannot be reached. \begin{figure} \caption{Unbalanced graph.} \label{fig: unbalanced1} \end{figure} \begin{figure} \caption{Trajectories of the agents which mean that complex consensus cannot be reached.} \label{fig: no MC} \end{figure}} \end{exa} \section{Conclusion}\label{section: conclusion} Motivated by the study of bipartite consensus problems, we discuss the consensus problems in complex-weighted graphs. To this end, we first establish some key properties of the complex Laplacian. We emphasize that these properties can be examined by checking the properties of the corresponding digraph. Then we give some necessary and sufficient conditions to ensure the convergence of complex consensus. It is shown that these general consensus results can be used to study some distributed coordination control problems of multi-agent systems in a plane. In particular, these results cover the bipartite consensus results on signed digraphs. We believe that the properties of the complex Laplacian obtained in this paper are useful in other multi-agent coordination problems in a plane. \end{document}
math
34,702
\begin{document} \maketitle \begin{abstract} We study a weighted eigenvalue problem with anisotropic diffusion in bounded Lipschitz domains $\Omega\subset \mathbb{R}^{N}$, $N\ge1$, under Robin boundary conditions, proving the existence of two positive eigenvalues $\lambda^{\pm}$ respectively associated with a positive and a negative eigenfunction. Next, we analyze the minimization of $\lambda^{\pm}$ with respect to the sign-changing weight, showing that the optimal eigenvalues $\Lambda^{\pm}$ are equal and the optimal weights are of bang-bang type, namely piecewise constant functions, each one taking only two values. As a consequence, the problem is equivalent to the minimization with respect to the subsets of $\Omega$ satisfying a volume constraint. Then, we completely solve the optimization problem in one dimension, in the case of homogeneous Dirichlet or Neumann conditions, showing new phenomena induced by the presence of the anisotropic diffusion. The optimization problem for $\lambda^{+}$ naturally arises in the study of the optimal spatial arrangement of resources for a species to survive in a heterogeneous habitat. \end{abstract} \noindent {\footnotesize \textbf{AMS-Subject Classification}}. {\footnotesize 49K15, 49K20, 35J92, 35J70.}\\ {\footnotesize \textbf{Keywords}}. {\footnotesize Weighted eigenvalues, population dynamics, survival threshold, symmetrization.} \section{Introduction} This paper is focused on the spectral optimization problem associated with the following eigenvalue problem \begin{equation}\label{equazione} \begin{cases} -{\rm div}\left((H(\nabla u))^{p-1}H_{\xi}(\nabla u)\right) = \lambda m(x) |u|^{p-2}u & \text{ in } \Omega\\ H^{p-1}(\nabla u) H_\xi(\nabla u)\cdot n +\mathtt k|u|^{p-2}u=0 &\text{ on } \partial \Omega, \end{cases}\end{equation} where $\Omega\subset \mathbb{R}^N$ is a Lipschitz bounded domain, $N \ge 1$, $\lambda \in \mathbb{R}$, $p >1$, $n$ is the outward unit normal on $\partial \Omega$ and $\tau\cdot\eta$ denotes the scalar product between two vectors $\tau $ and $\eta$. The constant $\mathtt k$ ranges in the set $[0, +\infty]$, so we are considering homogeneous Robin boundary conditions, which reduce to homogeneous Neumann boundary conditions if $\mathtt k=0$, while we will refer to the homogeneous Dirichlet case for $\mathtt k=+\infty$ (see the beginning of Section \ref{sec:principal eigenvalues} for more details). The function $m\in L^{\infty}(\Omega)$ is a sign-changing weight, belonging to the class \begin{equation}\label{defM} \mathcal{M}=\left \{ -\beta \leqslant m(x) \leqslant 1, \; \left|\Omega^{+}_{m}\right|> 0,\; \int_{\Omega} m(x)\leqslant -m_0|\Omega| \right \}, \end{equation} where $\beta>0$ is a constant, $m_{0}\in (-1, \beta )$ if $\mathtt k>0$, $m_{0}\in (0,\beta)$ if $\mathtt k=0$ and $\Omega^{+}_{m}:=\{x \in \Omega : m(x)>0 \}$. We will assume that the function $H: \mathbb{R}^N \to \mathbb{R}$, belonging to $C^2(\mathbb{R}^N \setminus \{ 0\})$, is such that \begin{align} \label{h:norma} &\text{$H \geq 0$ and $H(\xi)=0$ if and only if $\xi=0$} \\ \label{h:posom} &H(t\xi)= t H(\xi), \quad \text{ for any $t \ge 0$, $\xi\in \mathbb{R}^{N}$} \\ \label{h:convex} &\text{$\{\xi \in \mathbb{R}^{N} : H(\xi) < 1\}$ is uniformly convex, } \end{align} where by uniform convexity we mean that the principal curvatures of the boundary are positive and bounded away from zero. 
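For instance, a simple admissible choice of $H$ which is not even (a purely illustrative example, where $v$ denotes an arbitrary fixed vector) is \[ H(\xi)=|\xi|+v\cdot\xi, \qquad v\in\mathbb{R}^{N},\ |v|<1 , \] which satisfies \eqref{h:norma} since $H(\xi)\geq(1-|v|)|\xi|$, satisfies \eqref{h:posom} by construction, and satisfies \eqref{h:convex} because $\{H<1\}=\{|\xi|<1-v\cdot\xi\}$ is an ellipsoid; for $N=1$ and $v\in(-1,1)$ it reduces to $H(x)=(1+v)x$ for $x\geq0$ and $H(x)=-(1-v)x$ for $x<0$. 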
We will be interested in the minimization with respect to $m\in \mathcal{M}$ of the positive principal eigenvalues of \eqref{equazione}, namely the positive eigenvalues associated with an eigenfunction of constant sign. In the case \begin{equation}\label{h:modulo} H(\xi)=|\xi| \end{equation} problem \eqref{equazione} corresponds to the linearization of the nonlinear elliptic logistic problem \begin{equation}\label{prob:nonlineariso} \begin{cases} -\Delta u= \lambda |u|^{p-2}u(m(x)-|u|^{q}) & \text{ in } \Omega\\ \nabla u\cdot n +\mathtt k|u|^{p-2}u=0 &\text{ on } \partial \Omega, \end{cases} \end{equation} with $q>0$. Positive solutions are the stationary states of the associated reaction-diffusion equation. This model, introduced in \cite{fis, kpp}, describes the dispersal of a population, with density $u$, in a heterogeneous environment $\Omega$, driven by a Brownian motion law, so that each individual moves in every direction with the same probability. The heterogeneity of the habitat is modelled by representing $\Omega$ as a union of patches, favourable and hostile zones, corresponding respectively to the positivity and negativity sets of the weight $m$, so that $\Omega^{+}_{m}$ can be interpreted as the favourable zone of $\Omega$ (see \cite{bhr}). In this context, a positive principal eigenvalue $\lambda$ with eigenfunction $\varphi$, which, in view of \eqref{h:modulo}, can be chosen positive, turns out to be a threshold for the survival of the population. Thus, minimizing $\lambda$, with respect to the weight or to other features of the model, enhances the chances of survival. Several contributions can be found in the literature, and we refer to the recent papers \cite{mazari2022,berecoville,dpv, mapeve1, mapeve2, feve} and references therein for interesting phenomena such as fragmentation effects, nonlocal aspects or asymptotic analysis. Let us also mention that similar optimization problems have been addressed in other related contexts, such as in the framework of composite membranes (see \cite{chanillo, henrot} and the references therein). The study of the optimization of $\lambda=\lambda(m)$ with respect to the weight $m$ goes back to the contribution by Cantrell and Cosner in \cite{CC89}, and it is known that the minimum $\od$ is achieved by an optimal weight of bang-bang type, namely a piecewise constant function $m=\ind{D}-\beta\ind{D^{c}}$, where $\ind{D}$ denotes the characteristic function of the set $D$ and $D\subset \Omega$ turns out to be a super-level set of the associated positive eigenfunction (see \cite{LLNP, DG, CC91,louyan}). Then, natural questions concerning the qualitative properties of the ``optimal set'' $D$ arise. This is a rather hard task, mostly open in general, and the analysis is complete only for $p=2$ and in dimension one. This situation was first investigated in \cite{CC91, louyan} for homogeneous Dirichlet or Neumann boundary conditions, and the study was completed in \cite{LLNP, hintermuller}, where it is proved that $D$ is connected, so that it is an interval, and that there exists a constant $\overline{\mathtt k}$ such that for every $\mathtt k>\overline{\mathtt k}$, $D$ is centred in the middle of the interval $\Omega$, while for $\mathtt k<\overline{\mathtt k}$, $D$ sticks to the boundary. For $p\neq2$, the same analysis has been performed for homogeneous Neumann boundary conditions in \cite{DG}. When the population adopts different diffusion strategies, one is naturally led to consider different differential operators in the model. 
For instance, fractional diffusion operators have been investigated in \cite{cadiva, dpv, peve} (see also the references therein). In particular, for the spectral fractional Laplacian under homogeneous Neumann boundary conditions the optimal weight is of bang-bang type (\cite{peve}), while the shape and localization of the optimal set $D$ are still unknown even in dimension one. Here, we focus on anisotropic diffusions, thinking of a population dispersing in the habitat with different probabilities depending on the direction (see \cite{Bouin} for a related model), so that the diffusion operator is given by the so-called anisotropic $p$-Laplace operator \[ \Delta_{H, p}u :={\rm div}\left((H(\nabla u))^{p-1}H_{\xi}(\nabla u)\right) . \] The properties of the eigenvalues when $m\equiv 1$ have been widely studied under various boundary conditions, assuming that $H(t\xi)=|t|H(\xi)$ for every $t\in \mathbb{R}$, in place of \eqref{h:posom} (see e.g. \cite{BFK, dellapietraDBG, GavitoneTrani} and the references therein). From this perspective, we tackle the case of an indefinite eigenvalue problem under general Robin boundary conditions. As a first result, we establish the existence of a positive principal eigenvalue for every fixed $m\in {\mathcal M}$ by minimizing a suitable Rayleigh quotient restricted to the cone of positive functions (see Proposition \ref{prop:lambda}). Even at this stage a novelty arises: as we want to include the study of an anisotropic one-dimensional diffusion operator, we assume $H(t\xi)=tH(\xi)$ for every $\xi\in \mathbb{R}^{N}$, but just for $t\geq 0$, so that $H$ is not assumed to be a norm, as it may not be even. This has significant consequences. For instance, if one minimizes the associated Rayleigh quotient in the whole Sobolev space, it is not possible to deduce the sign of the associated eigenfunction a posteriori (see Remark \ref{rem:sign}). This is the reason why we restrict the minimization problem to the cone of non-negative functions. As a matter of fact, there exist two positive eigenvalues $\lambda^{\pm}$ with associated eigenfunctions of constant sign (see Section \ref{sec:principal eigenvalues}). One can be obtained through minimization on the cone of positive functions, the other on the cone of negative ones (see Proposition \ref{prop:exlameno}). This phenomenon, due to the fact that $H$ is not supposed to be even, resembles what occurs in the context of fully nonlinear operators (see \cite{BD}, \cite{QS} and references therein). In analogy with what happens for isotropic diffusions, we prove that $\lambda^{+}(m) $ is a threshold for the existence of positive solutions of the nonlinear logistic elliptic problem (see Theorem \ref{soglia}). As a consequence, minimizing $\lambda^{+}(m)$ with respect to $m\in {\mathcal M}$ consists in finding the best spatial arrangement of resources in order to enhance the chances of survival of a population living in $\Omega$. In this direction we will first prove the following result. \begin{theorem}\label{thm:superlevel set} Assume that $H\in C^2(\mathbb{R}^N \setminus \{ 0\})$ satisfies hypotheses \eqref{h:norma}, \eqref{h:posom} and \eqref{h:convex}. The minimization problem \begin{equation}\label{Lambda} \od^+:=\inf_{m \in \mathcal{M}} \lambda^+(m) \end{equation} has a solution given by $m(x)=\ind{D}-\beta\ind{D^{c}} $, $(D^{c}=\Omega\setminus D)$.
If $\varphi=\varphi(m)$ is a positive eigenfunction associated with $\Lambda^+=\lambda^+(m)$, then the set $D$ is a super-level set of $\varphi$, i.e. for a suitable $t>0$ \begin{equation}\label{Dset} D=\{ x \in \Omega \;:\; \varphi(x) >t\} \end{equation} and every level set of $\varphi$ has zero measure. \end{theorem} Theorem \ref{thm:superlevel set} proves that, as shown in several related contexts (see for instance \cite{CC91, louyan, LLNP, MazNadPri2020, MazNadPri2023, mazari2023}), also in the anisotropic case optimizers for $\Lambda^+$ are of bang-bang type. In particular, the minimization problem \eqref{Lambda} is equivalent to the minimization with respect to the subsets of $\Omega$ satisfying a volume constraint (see Remark \ref{rem:min set}), and one is naturally led to study qualitative properties of optimal sets. With this goal in mind, we consider homogeneous Dirichlet or Neumann boundary conditions and we restrict ourselves to dimension one, where any $H$ satisfying \eqref{h:norma}, \eqref{h:posom} and \eqref{h:convex} necessarily has the expression \begin{equation}\label{h:1dim} H(x)= \begin{cases} ax & \text{ if } x \ge 0\\ -bx & \text{ if } x < 0, \end{cases} \end{equation} with $a, b>0$ and $a\neq b$, since otherwise no anisotropy occurs. We will show the following result. \begin{theorem}\label{thm:localization} Let $N=1$, $\Omega=(0,1)$ and assume $H$ is of the form \eqref{h:1dim}. \\ Then, the super-level set $D$ is an interval. In addition \begin{enumerate} \item If $\mathtt k=0$, $D=(0, |D|)$ if $a>b$, and $D=(1-|D|, 1)$ if $b >a$. If $a=b$, $(0, |D|)$ and $(1-|D|, 1)$ are both optimal sets. \item If $\mathtt k=+\infty$, then $D$ is given by \begin{equation}\label{localization D} D=\left( \frac{(1-|D|)a}{a+b}, \frac{|D|b+a}{a+b} \right). \end{equation} \end{enumerate} \end{theorem} \begin{figure} \caption{A representation of the optimal weight $m$ for $\Lambda^+$ in dimension one with Neumann boundary conditions ($\mathtt k=0$); left panel: $a>b$, right panel: $a<b$ (see Theorem \ref{thm:localization}).} \label{fig:loc} \end{figure} \begin{figure} \caption{A representation of the optimal weight $m$ for $\Lambda^+$ in dimension one with Dirichlet boundary conditions ($\mathtt k=+\infty$), as found in Theorem \ref{thm:localization}; left panel: anisotropic case $a>b$, right panel: isotropic case $a=b$.} \label{fig:loc2} \end{figure} This result highlights the effect of the anisotropy on the location of the optimal interval $D$. Indeed, in the case of homogeneous Dirichlet boundary conditions the anisotropy produces a shift of $D$, which turns out to be the one-dimensional anisotropic ball centered at $1/2$, namely the interval given in \eqref{localization D}. In the case of homogeneous Neumann boundary conditions the anisotropy decides to which endpoint of the interval $\Omega$ the set $D$ sticks, see Figures \ref{fig:loc} and \ref{fig:loc2}. In order to prove Theorem \ref{thm:localization} we will first perform a suitable monotone rearrangement, in the spirit of \cite{LLNP}, to show that the eigenfunction $\varphi$ has a unique maximum point, so that the super-level set $D$ is an interval. Then, the analysis is completed in the case of homogeneous Neumann and Dirichlet boundary conditions, $\mathtt k=0$ and $\mathtt k=\infty$ respectively. Let us observe that, when $H(\xi)=|\xi|$ and Robin boundary conditions are imposed, the location of $D$ is detected by directly computing and comparing the eigenvalues associated with the possible optimal sets (see \cite{hintermuller}).
Here, this comparison appears particularly involved, due to the presence of the anisotropy. However, when $\mathtt k=0$, the monotone rearrangement argument immediately implies that $\varphi$ is monotone in the whole of $(0,1)$. Then, the conclusion follows by a direct comparison of the Rayleigh quotients. In the case of homogeneous Dirichlet boundary conditions, the uniqueness of the maximum point of the eigenfunction $\varphi$ allows us to handle the equality case in the Polya inequality for anisotropic symmetrizations (see Proposition \ref{prop: polya anis}), yielding the conclusion. We expect that a suitable version of Theorem \ref{thm:localization} should also hold in the general case of Robin boundary conditions, as for the isotropic case. We believe that the case of homogeneous Dirichlet boundary conditions could be handled directly by exploiting anisotropic symmetrization arguments and isoperimetric inequalities, without passing through monotone rearrangements, although this would require a careful adaptation in order to handle the case of non-even anisotropy (see Remark \ref{rem:coarea}). On the other hand, since monotone rearrangements are used in order to get that $D$ is an interval in the general case of Robin boundary conditions, as a byproduct we have that the eigenfunction $\varphi$ has a unique maximum point. This allows us to use an elementary one-dimensional rigidity result for the equality case in the Polya inequality (see Proposition \ref{prop: polya anis}) to obtain the localization of $D$. Starting the analysis in the cone of negative functions, it is possible to show the existence of a positive eigenvalue $\lambda^-(m)$, with a unique negative normalized eigenfunction. Then, one can perform the optimization of $\lambda^-(m)$ with respect to the weight $m$, namely study the problem \[ \od^-:=\inf_{m \in \mathcal{M}} \lambda^-(m). \] It is possible to give the analogous version of Theorem \ref{thm:localization} for the optimal set associated with $\Lambda^{-}$. Moreover, we show that, under a suitable symmetry assumption on the domain $\Omega$, the following result holds. \begin{theorem}\label{thm:Lambda+=Lambda-} We assume that $\Omega$ has a centre of symmetry, i.e. there exists $x_{0}\in \mathbb{R}^{N}$ such that $2x_{0}-x \in\Omega$ for every $x\in \Omega$. Then, \[ \Lambda^+=\Lambda^-. \] \end{theorem} We stress that our proof of Theorem \ref{thm:Lambda+=Lambda-} heavily uses the monotonicity properties of the eigenfunctions associated with the optimal eigenvalues. We think that the equality $\lambda^{+}(m)=\lambda^{-}(m)$ does not hold for a general weight $m$, as the monotonicity properties are not expected to hold. The paper is organized as follows. In the next section we discuss the emergence of $\lambda^{+}(m)$ as a threshold for the existence of positive solutions of the associated nonlinear elliptic problem. In Section \ref{sec:principal eigenvalues} we set up the eigenvalue problem for a fixed weight $m$, we show the existence of the principal eigenvalues $\lambda^{+}(m)$ and $\lambda^{-}(m)$ in Propositions \ref{prop:lambda} and \ref{prop:exlameno}, and then we give the proof of Theorem \ref{thm:superlevel set} as well as its counterpart for $\lambda^{-}(m)$. The proofs of Theorems \ref{thm:localization} and \ref{thm:Lambda+=Lambda-} are given in Section \ref{localization}. Finally, in Section \ref{rearrangements} we prove some technical tools concerning one-dimensional anisotropic rearrangement inequalities.
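Before entering into the details, it may help to record two elementary observations concerning the one-dimensional anisotropy \eqref{h:1dim}; both follow from direct computations and are included only for the reader's convenience. First, since $H^{p-1}(s)H'(s)=a^{p}s^{p-1}$ for $s>0$ and $H^{p-1}(s)H'(s)=-b^{p}(-s)^{p-1}$ for $s<0$, the anisotropic operator takes the explicit form
\[
\Delta_{H,p}u=\Big(a^{p}\big((u')^{+}\big)^{p-1}-b^{p}\big((u')^{-}\big)^{p-1}\Big)',
\]
so that, for $a\neq b$, the two one-sided diffusivities differ, which is the source of the phenomena described in Theorem \ref{thm:localization}. Second, as a consistency check, in the isotropic case $a=b$ formula \eqref{localization D} reduces to
\[
D=\left(\frac{1-|D|}{2},\,\frac{1+|D|}{2}\right),
\]
namely the interval of length $|D|$ centred at $1/2$, in accordance with the isotropic results recalled above.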
\section{The nonlinear Problem }\label{nonlinear} In the whole paper we will assume that $H$ satisfies \eqref{h:norma}, \eqref{h:posom}. This type of functions is usually referred to as {\it positively homogeneous Minkowski norm}. It can be easily seen that $H$ satisfies the following growth conditions \begin{equation}\label{h:growth} \underline \alpha \abs{\xi} \leqslant H(\xi) \leqslant \overline \alpha \abs{\xi}, \qquad \text{with $0<\underline \alpha<\overline \alpha$.} \end{equation} Under the assumption \eqref{h:convex}, it can also be proved that for any $p>1$ the function $H^p$ is strictly convex and there exist positive constants $\gamma, \Gamma$ such that \begin{equation} \label{H-convexity} (\text{Hess}(H^p)(\xi))_{ij} \zeta_i \zeta_j \ge \gamma |\xi|^{p-2} |\zeta|^2, \qquad \sum_{i, j=1}^N \abs{(\text{Hess}(H^p)(\xi))_{ij}} \leqslant \Gamma |\xi|^{p-2} \end{equation} for any $\xi \in \mathtt mathbb{R}^N \setminus\{0\}$ and $\zeta\in \mathtt mathbb{R}^N$, see \cite[Proposition 3.1]{CFV14_1} for dimension $N \ge 2$, whereas for $N=1$ these estimates follow by direct computations. Let us start by giving the following definition. \begin{definition}\label{def:lambda} Let $m\in {\mathtt mathcal M}$. We set \[ \lambda^+(m)=\inf_{u\in {\mathtt mathcal S_{\mathtt k, m}^+}}{\mathtt mathcal R_{\mathtt k,m}}(u) \] where the Rayleigh quotient ${\mathtt mathcal R_{\mathtt k,m}}$ and the set ${\mathtt mathcal S_{\mathtt k, m}^+}$ are defined in a different way depending on $\mathtt k$. If $\mathtt k<+\infty $ they are given by \[ {\mathtt mathcal R_{\mathtt k,m} }(u) := \dfrac{\int_\Omega H(\nabla u)^pdx+\mathtt k\int_{\partial \Omega}|u|^{p}d\sigma}{\int_\Omega m(x)|u|^{p}dx} \] \[ {\mathtt mathcal S_{\mathtt k, m}^+} :=\left\{u\in W^{1, p}(\Omega),\ u \ge 0, \, \int_\Omega m(x)|u|^p\,dx>0\right\}, \] and for $\mathtt k=+\infty$, \[ {\mathtt mathcal R_{\infty,m}}(u):= \dfrac{\int_\Omega H(\nabla u)^p}{\int_\Omega m(x)|u|^{p}dx} \qquad {\mathtt mathcal S_{\infty, m}^+}:=\left\{ u\in W^{1, p}_{0}(\Omega),\ u \ge 0, \, \int_\Omega m(x)|u|^p\,dx>0\right\}. \] \end{definition} A similar definition can be given choosing ${\mathtt mathcal S}^{-}_{\mathtt k,m}$ (see Definition \ref{def:lambda-}). In the next theorem we show that, as in the isotropic case, $\lambda^{+}(m)$ naturally arises as a threshold for the existence of positive solutions of the following logistic type nonlinear problem \begin{equation}\label{eq:nonlinear} \begin{cases} - \mathfrak Delta_{H, p}u = \lambda |u|^{p-2} u (m - |u|^q) &\text{ in } \Omega,\\ H(\nabla u)^{p-1} H_\xi(\nabla u)\cdot n +\mathtt k|u|^{p-2}u=0 &\text{ on } \partial \Omega, \end{cases}\end{equation} with $q>0$, that is the counterpart in the anisotropic case of problem \eqref{prob:nonlineariso}. \begin{theorem}\label{soglia} There exists a nonnegative nontrivial bounded solution $u$ to problem \eqref{eq:nonlinear} if and only if $\lambda > \lambda^+(m)$. Moreover, $u\in C^{1, \alpha}(\Omega)$ is the unique nonnegative nontrivial solution. \end{theorem} This result can be proved by means of different approaches, such as minimizing a suitable action functional. Here, we will obtain the existence of a bounded solution via sub-, super-solution arguments, which can be also exploited to show the existence of solutions of the associated parabolic equations. 
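As a simple illustration of the sub- and super-solution notions recalled below (and anticipating a choice made in the proof of Theorem \ref{soglia}), observe that any constant function $\overline u\equiv c$ with $c\geqslant \|m^{+}\|_{\infty}^{1/q}$ is a bounded super-solution of \eqref{eq:nonlinear}. Indeed, $\nabla \overline u=0$, so that the flux $H^{p-1}(\nabla \overline u)H_{\xi}(\nabla \overline u)$ vanishes, and for every admissible nonnegative test function $\phi$ one has
\[
-\lambda\int_\Omega m(x)\,c^{p-1}\phi+\lambda\int_\Omega c^{p+q-1}\phi+\mathtt k\int_{\partial\Omega}c^{p-1}\phi
=\lambda\,c^{p-1}\int_\Omega\big(c^{q}-m(x)\big)\phi+\mathtt k\,c^{p-1}\int_{\partial\Omega}\phi\geqslant 0,
\]
since $m\leqslant \|m^{+}\|_{\infty}\leqslant c^{q}$ almost everywhere in $\Omega$ (for $\mathtt k=\infty$ the boundary term is simply absent).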
We recall that $u\in W^{1,p}(\Omega)$ is a super-solution for \eqref{eq:nonlinear} if \begin{equation}\label{eq:supersolution} \begin{split} D(u,\phi): = & \int_\Omega H(\nabla u)^{p-1} H_\xi(\nabla u) \nabla \phi - \lambda \int_\Omega m(x) |u|^{p-2} u \phi \\ & + \lambda\int_\Omega |u|^{p+q-2} u \phi + \mathtt k \int_{\partial \Omega} |u|^{p-2} u \phi \geqslant 0. \end{split} \end{equation} is satisfied for any nonnegative $\phi \in C^{\infty}(\Omega)$ if $\mathtt k\in (0,\infty)$ or $\phi \in C_{c}^{\infty}(\Omega)$ if $\mathtt k = \infty$. Analogously, but with the opposite inequality, one can give the definition of a sub-solution for \eqref{eq:nonlinear}. We explicitly note that if $u\in L^{(p^{*})'/(p+q-1)}(\Omega)$ then $D(u,\cdot)$ is sequentially continuous with respect to $\| \cdot \|_{W^{1,p}(\Omega)}$, therefore, for bounded super-solutions, \eqref{eq:supersolution} holds for any $\phi \in W^{1,p}(\Omega)$ if $\mathtt k\in (0,\infty)$ or $\phi \in W^{1,p}_{0}(\Omega)$ if $\mathtt k = \infty$. We start by proving an existence result. \begin{proposition}\label{prop:subsuper} Suppose that there exist a sub-solution $\underline u$ and a super-solution $\overline u$ to \eqref{eq:nonlinear}, and assume that for some constants $\underline c$, $\overline c$ one has $ -\infty < \underline c \leqslant \underline u \leqslant \overline u \leqslant \overline c < \infty$ almost everywhere in $\Omega$. Then there exists a weak solution $u$ of \eqref{eq:nonlinear} such that $\underline u \leqslant u\leqslant \overline u$ a.e. in $\Omega$. \end{proposition} \begin{proof} The proof follows the classical argument in \cite[Theorem 2.4]{Struwe}. We will only highlight some different points which appear when considering Robin boundary conditions, i.e. $\mathtt k <\infty$. The approach is based on the minimization of the functional \[ E(u)= \int_\Omega H(\nabla u)^p + \mathtt k \int_{\partial \Omega} |u|^p- \lambda \int_\Omega m |u|^p + \lambda \int_\Omega |u|^{p+q} \] on the closed convex subset of $W^{1, p}(\Omega)$ \[ \mathtt mathcal{M}=\left\{ u \in W^{1, p}(\Omega): \underline u \leqslant u \leqslant \overline u \, \text{ a.e.} \right\}. \] We first observe that, being $\mathtt mathcal{M}$ bounded in $L^{\infty}(\Omega)$, using the hypotheses on $H$, it is not hard to verify that the functional $E$ restricted to $\mathtt mathcal{M}$, endowed with the norm induced by the $W^{1, p}(\Omega)$ norm, is coercive and weakly lower semicontinuous. It easily follows that $E$ attains its infimum in $\mathtt mathcal M$. Let $u\in \mathtt mathcal{M}$ be the minimum point of $E$ on $\mathtt mathcal{M}$. Since $p>1$ implies that $H^{p}$ is positively $p$-homogeneous (therefore with first derivatives positively $(p-1)$-homogeneous and continuous in the origin), the Lagrangian function of the bulk component of the functional $E$, namely \[ f(x,u,\xi)=H^{p}(\xi)-\lambda m(x) |u|^p+\lambda |u|^{p+q}, \] satisfies the so called {\it natural growth conditions} (cf. \cite[Condition 3.34]{daco-2008}). Then one may argue as in \cite[Theorem 3.7]{daco-2008}, to prove that the right-Gateaux derivative of the functional $E$ at the minimum point $u$ in any direction $\phi=v-u$ with $v\in \mathtt mathcal{M}$\footnote{Note that, since $\mathtt mathcal{M}$ in convex, $u+\varepsilon(v-u)$ is an admissible variation for the functional $E$ restricted to $\mathtt mathcal{M}$. 
Again for the lower order terms, as well as for the boundary integral, we are heavily using the fact that $\mathtt mathcal{M}$ is bounded in $L^{\infty}(\Omega)$ to apply the dominated convergence theorem.} exists and it is equal to $D(u,\phi)$. Moreover, being $u$ a minimizer we also have $D(u,\phi)\geqslant 0$. Given $\varphi \in C^\infty( \Omega)$, $\varepsilon>0$ sufficiently small, and having defined \[ \varphi^\varepsilon:= \mathtt max \{ 0, u+ \varepsilon \varphi - \overline u \} \ge 0, \quad \varphi_\varepsilon:= \mathtt max \{ 0, \underline u - (u+\varepsilon \varphi) \} \ge 0, \] we have that $v_\varepsilon:=u + \varepsilon \varphi - \varphi^\varepsilon + \varphi_\varepsilon \in \mathtt mathcal{M}$, therefore the variational inequality $D(u,v_{\varepsilon}-u) \geqslant 0$ holds true. By linearity it results \[ D(u,\varphi) \geqslant \frac{1}{\varepsilon} \big(D(u,\varphi^\varepsilon)- D(u,\varphi_\varepsilon) \big). \] Let us show that the right hand side goes to zero when $\varepsilon$ vanishes. In the following, we will use the notation \begin{align*} g(x, u)&=\lambda m(x) |u|^{p-2} u - \lambda |u|^{p+q-2} u, \quad \Omega_\varepsilon:= \left\{ x \in \Omega: u(x)+\varepsilon \varphi(x) \ge \overline u(x) > u(x) \right\}. \\ K_\varepsilon &=\left\{ x\in \partial \Omega: Tr \varphi^\varepsilon(x) \ne 0\right\}, \quad \text{ where } Tr: W^{1, p}(\Omega) \rightarrow L^p(\partial \Omega) \text{ is the trace operator.} \end{align*} Taking into account that $\overline u$ is a super-solution, that $u+\varepsilon\varphi-\overline{u}\leqslant \varepsilon \varphi$ in $\Omega_{\varepsilon}$, and by convexity of $H(\cdot)^{p}$ and $|\cdot|^{p}$, one has \[ \begin{split} D(u,\varphi^\varepsilon) \geqslant & D(u,\varphi^\varepsilon) - D(\overline u,\varphi^\varepsilon) \\ \geqslant & \; \varepsilon \int_{\Omega_\varepsilon} \left( H(\nabla u)^{p-1} H_\xi(\nabla u) - H(\nabla \overline u)^{p-1} H_\xi(\nabla \overline u) \right) \nabla \varphi \\ & + \varepsilon \, \mathtt k \int_{K_\varepsilon} (|\overline u|^{p-2} \overline u- |u|^{p-2} u) \varphi - \varepsilon \int_{\Omega_\varepsilon} \abs{g(x, u)- g(x, \overline u)} |\varphi|. \end{split} \] Note that $\varphi^{\varepsilon}\to 0$ in $W^{1,p}(\Omega)$, so that $Tr\varphi^{\varepsilon}\to 0$ in $L^{p}(\partial\Omega)$, thanks to the continuity of the trace operator; as a consequence, $|\Omega_\varepsilon| \to 0$, and $\mathtt mathcal{H}^{n-1}(K_\varepsilon) \to 0$. Then \[ D(u,\varphi^\varepsilon) \geqslant o(\varepsilon). \] and the conclusion follows arguing as in the proof of \cite[Theorem 2.4]{Struwe}. \end{proof} In view of Proposition \ref{prop:subsuper}, in order to prove the existence of a bounded weak solution to \eqref{eq:nonlinear}, it is sufficient to find $\underline{u}\leqslant \overline{u}$ bounded sub- and super-solution. To this aim, it is convenient to introduce the following eigenvalue for any $\lambda>0$ fixed \[ \mathtt mu^+(\lambda, m) =\inf_{\substack{v \in W^{1, p} \\ v \ge 0, \not \equiv 0}} \frac{\int_\Omega H(\nabla v)^p + \mathtt k \int_{\partial \Omega} |v|^p - \lambda\int_\Omega m |v|^p}{\int_\Omega |v|^p}. \] Arguing as in Proposition \ref{prop:lambda} it is possible to show that $\mathtt mu^+(\lambda, m)$ is attained by a positive eigenfunction which we denote by $\Phi\in {\mathtt mathcal S}^{+}_{\mathtt k,m}$. 
The eigenvalue $\mu^+(\lambda, m)$ is the anisotropic counterpart of the eigenvalue introduced in \cite{bhr,rh} and, as in the case of isotropic diffusion, we are going to show that $\mu^+(\lambda, m)<0$ is a necessary and sufficient condition for the existence of a positive solution of \eqref{eq:nonlinear}. \begin{proof}[Proof of Theorem \ref{soglia}] We only consider the case $\mathtt k< \infty$; the case of Dirichlet boundary conditions ($\mathtt k=\infty$) can be treated analogously. \textit{First step: existence of a non-negative nontrivial solution above $\lambda^+(m)$.} Let us fix $\lambda > \lambda^+(m)$. Note that $\mu^+(\lambda, m)<0$, since for any $v \in {\mathcal S}^{+}_{\mathtt k,m}$ we have \begin{align*} \dfrac{\int_\Omega H^p(\nabla v) + \mathtt k \int_{\partial \Omega} |v|^p - \lambda\int_\Omega m |v|^p}{\int_\Omega |v|^p} \cdot \frac{\int_\Omega |v|^p}{\int_\Omega m |v|^p} &= \frac{\int_\Omega H^p(\nabla v) + \mathtt k \int_{\partial \Omega} |v|^p - \lambda\int_\Omega m |v|^p}{\int_\Omega m |v|^p} \\ &= \frac{\int_\Omega H^p(\nabla v) + \mathtt k \int_{\partial \Omega} |v|^p}{\int_\Omega m |v|^p} - \lambda, \end{align*} and, choosing $v$ as the positive eigenfunction associated with $\lambda^+(m)$, the right hand side equals $\lambda^+(m)-\lambda<0$; since $\int_\Omega m |v|^p>0$, the quotient defining $\mu^+(\lambda, m)$, evaluated at such a $v$, is negative. Having fixed $\varepsilon >0$ such that \[ \varepsilon < \min\left\{\left( - \frac{\mu^+(\lambda, m)}{\lambda} \right)^{\frac 1 q}, \|m^{+}\|_{\infty}\right\}, \] let $\Phi$ be the positive eigenfunction associated with $\mu^+(\lambda, m)$ satisfying $\|\Phi\|_{\infty}=1$. It is immediate to see that $\underline u:=\varepsilon \Phi >0$ is a sub-solution; in addition any constant function $\overline u\equiv c \geqslant (\norm{m^{+}}_\infty)^{\frac 1 q}$ is a super-solution, and $\underline u<\overline u$ thanks to the choice of $\varepsilon$. Then, Proposition \ref{prop:subsuper} yields the existence of a solution $u$ such that $0 <\underline u \leqslant u \leqslant \overline u$. As a consequence, $0<u \leqslant \|m^{+}\|^{\frac 1 q}_{\infty}$. \textit{Second step: non-existence of a nontrivial nonnegative solution below $\lambda^+(m)$.} Assume $0 < \lambda \leqslant \lambda^+(m)$, and let, by contradiction, $u$ be a nonnegative, nontrivial, bounded solution to \eqref{eq:nonlinear}. Then, by testing \eqref{eq:nonlinear} with $u$, one immediately sees that $\int_\Omega m u^p >0$, so that Definition \ref{def:lambda} yields \begin{align*} \lambda^+(m) \int_\Omega m u^p & \leqslant \int_\Omega H(\nabla u)^p + \mathtt k \int_{\partial \Omega} u^p = \lambda \int_\Omega m u^{p} -\lambda \int_\Omega u^{p+q} < \lambda^+(m) \int_\Omega m u^{p}, \end{align*} which is a contradiction. \textit{Third step: uniqueness and regularity.} Classical elliptic regularity theory (see for instance \cite{Tolksdorf}) and the Harnack inequality proved in \cite{Trudinger.1967} (see also \cite{dellapietragavitone}) ensure that $u\in C^{1, \alpha}(\Omega)$ and that $u$ is positive. The uniqueness can be obtained arguing by contradiction. Assume that $u, v$ are two distinct positive solutions to \eqref{eq:nonlinear} for $\lambda>\lambda^{+}(m)>0$. Let $\varepsilon>0$ and take $ \frac{u^p}{(v+\varepsilon)^{p-1} }$ as a test function in the equation satisfied by $v$.
By applying a suitable Picone identity (\cite[Lemma 2.2]{Jaros}), one has \[ \begin{split} \mathtt k \int_{\partial \Omega} u^{p} & \left( 1- \frac{v^{p-1} }{(v+\varepsilon)^{p-1}} \right) - \lambda \int_{\Omega} m u^{p} \left( 1- \frac{v^{p-1}}{(v+\varepsilon)^{p-1}} \right) + \lambda \int_\Omega u^{p }\left( u^{q}- \frac{v^{p+q-1}}{(v+\varepsilon)^{p-1}} \right) \\ = & \int_\Omega H(\nabla v)^{p-1} H_\xi (\nabla v) \nabla \left ( \frac{u^p}{(v+\varepsilon)^{p-1}} \right)- \int_\Omega H^p(\nabla u) \leqslant 0. \end{split} \] Writing the analogous inequality obtained by taking $ \frac{v^p}{(u+\varepsilon)^{p-1} } $ as a test function in the equation satisfied by $u$, summing up and letting $\varepsilon \to 0$, we obtain \begin{align*} 0 & \ge \lambda \int_\Omega (u^p-v^p)(u^q-v^q) \\ &=\lambda \left( \int_{\Omega \cap \{ u \ge v\}} (u^p-v^p)(u^q-v^q) + \int_{\Omega \cap \{ u \leqslant v\}}(u^p-v^p)(u^q-v^q) \right) \ge 0, \end{align*} a contradiction. \end{proof} \section{The principal eigenvalues $\lambda^\pm(m)$}\label{sec:principal eigenvalues} We start this section by focusing our attention on the properties of $\lambda^+(m)$ (cf. Definition \ref{def:lambda}). Let us first observe that an easy adaptation of Proposition 2 in \cite{LLNP} allows us to refer to the case $\mathtt k=+\infty$ as the one in which homogeneous Dirichlet boundary conditions are imposed. The existence of $\lambda^{+}(m)$ (see Definition \ref{def:lambda}) will be proved via a constrained minimization argument, as shown in the next proposition. \begin{proposition}\label{prop:lambda} Assume \eqref{h:norma}, \eqref{h:posom}, \eqref{h:convex}. Let $m\in {\mathcal M} $. Then, the following conclusions hold. \begin{enumerate} \item $\lambda^{+}(m)$ is attained by a positive $\varphi\in C^{1,\alpha}(\Omega)$, which is unique up to multiplication by a positive constant. \item $\lambda^+(m)$ satisfies the lower bound \[ \lambda^+(m)\geq \frac{\underline \alpha c}{\max\{1,\beta\}} \] where $c=c(\Omega)>0$ is a positive constant, $\beta$ is given in \eqref{defM}, and $\underline \alpha$ is given in \eqref{h:growth}. \item $\lambda^{+}(m) $ is the unique positive principal eigenvalue with a positive eigenfunction. \end{enumerate} \end{proposition} \begin{remark} Let us observe that in the case $H(\xi)=|\xi|$, $\mathtt k>0$, if the condition $\int_{\Omega} m(x)|u|^p>0$ is not assumed, there exist two principal eigenvalues $\lambda_-<0<\lambda_+$ with associated positive eigenfunctions $\varphi_-,\,\varphi_+$ such that, respectively, $\int_{\Omega} m(x)|\varphi_{\pm}|^p\gtrless 0$. Whereas, for $\mathtt k=0$, $\lambda_-=0$ is a principal eigenvalue associated with a constant eigenfunction and $\lambda_+>0$ if and only if $\int_\Omega m(x)<0$. In this case $\int_{\Omega} m(x)|\varphi_+|^p>0$. Here, we focus on positive principal eigenvalues; this is why we impose $\int_{\Omega} m(x)|u|^p>0$. \end{remark} \begin{remark} Notice that, if $\Omega$ is a $C^{1, \alpha}$ domain, then the eigenfunction $\varphi \in C^{1, \alpha}(\overline \Omega)$ by \cite[Theorem 2]{Lieberman}. \end{remark} \begin{proof}[Proof of Proposition \ref{prop:lambda}] Let us start by considering the case $\mathtt k<+\infty$. The invariance of ${\mathcal R_{\mathtt k,m}}(u)$ under positive scalings of $u$ ensures that we can compute $\lambda^{+}(m)$ by solving a constrained minimization problem, i.e.
\begin{equation}\label{minpb} \lambda_{\mathtt k}^{+}(m)=\inf \left \{ \int_{\Omega} H^p(\nabla v) + \mathtt k \int_{\partial \Omega} |v|^p : v \in W^{1, p}(\Omega), \, v \ge 0, \, \int_{\Omega} m(x) \abs{v}^p =1 \right \}, \end{equation} and note that $\lambda^{+}(m)=\lambda_{\mathtt k}^{+}(m)$. Notice that the set where we minimize is not empty: indeed, it suffices to take $\psi \in C^{\infty}_{c}(\Omega)$, $\psi \ge 0$, approximating the characteristic function of the set $\Omega^{+}_{m}$ in $L^{p}(\Omega)$, and then normalize it, in order to have a function satisfying the constraint. As a consequence, $\lambda^+(m)$ is finite and the direct methods of calculus of variations easily imply the existence of a minimizer. For it, let us take a minimizing sequence $\{u_n\}$ with energy bounded by a positive constant $C$, so that $u_{n}$ satisfies \[ \int_{\Omega} (H(\nabla u_n))^p + \mathtt k \int_{\partial \Omega} |u_n|^p \leqslant C. \] First we observe that there exists a constant $c>0$ such that for any $m\in \mathtt mathcal M$ and for any $u\in W^{1,p}$ with $\int_{\Omega} m(x) \abs{v}^p >0$ the following Poincar\'e type inequality holds \begin{equation}\label{poincare-type} \int_\Omega |u|^p \leqslant c \left( \int_{\Omega} (H(\nabla u))^p + \mathtt k \int_{\partial \Omega} |u|^p \right). \end{equation} Indeed, equation \eqref{poincare-type} reduces to \cite[Lemma 3.1]{DG} when $\mathtt k=0$ and can be easily proved when $\mathtt k>0$ arguing by contradiction. Using \eqref{poincare-type} we deduce that the sequence $\{u_n\}$ is bounded in $W^{1, p}(\Omega)$. Thus, there exists $u\in W^{1, p}(\Omega)$, $u\geq 0$, such that, up to a subsequence, $u_n \rightharpoonup u$ in $W^{1, p}(\Omega)$ and $u_{n}\to u$ strongly in $L^{p}(\Omega)$, so that $u$ satisfies the constraints. Finally, taking into account that $H$ is continuous and convex and that the embedding $W^{1, p}(\Omega) \hookrightarrow L^p(\partial \Omega)$ is compact, one obtains that $\lambda_{\mathtt k}^+(m)$ is attained by $u$ since \[ \lambda_{\mathtt k}^+(m) \leqslant \int_{\Omega}(H(\nabla u))^p+\mathtt k \int_{\partial \Omega} |u|^p \leqslant \liminf_{n\to \infty} \left( \int_{\Omega}(H(\nabla u_n))^p+\mathtt k \int_{\partial \Omega} |u_n|^p \right)=\lambda_\mathtt k^+(m). \] Being $u$ a minimizer for the problem \eqref{minpb}, it is a weak solution of the associate Euler-Lagrange equation \eqref{equazione}. We can therefore apply the classical elliptic regularity theory (see for instance \cite{Tolksdorf}) and the Harnack inequality proved in \cite{Trudinger.1967} (see also \cite{dellapietragavitone}) to ensure that $u\in C^{1,\alpha}(\Omega)$ and positive. The Dirichlet case, corresponding to $\mathtt k=+\infty$, can be addressed in an analogous way, by observing that, \begin{equation}\label{minpbdir} \lambda_\infty^+ (m)=\inf \left \{ \int_{\Omega} (H(\nabla v))^p : v \in W_{0}^{1, p}(\Omega), \, v \geqslant 0, \, \int_{\Omega} m(x) \abs{v}^p =1 \right \}. \end{equation} and the existence of a minimum point follows as before by exploiting the classical Poincar\'e inequality. We now prove the uniqueness of the minimizer, arguing by contradiction and exploiting a convexity argument as done in \cite{BFK}. Take two positive minimizers $u$ and $U$. For $t \in (0, 1)$ we set $\eta=tu^p + (1-t)U^p$, and $u_t=\eta^{1/p}$. 
Then, \eqref{h:posom}, \eqref{h:convex} and the convexity of $t \mathtt mapsto t^{p}$ imply that \begin{align} \nonumber H^p(\nabla u_t) &= \eta H^p \left( \frac{t u^p }{\eta} \frac{ \nabla u}{u} + \frac{(1-t)U^p }{\eta} \frac{\nabla U}{U} \right) \\ \label{inequality:uniq1} & \leqslant \eta \left[ \frac{tu^p}{\eta} H\left( \frac{\nabla u}{u} \right) + (1-t)\frac{U^p}{\eta} H \left( \frac{\nabla U}{U} \right) \right]^p \\ \label{inequality:uniq2} & \leqslant \eta \left[ \frac{tu^p}{\eta} H^p\left( \frac{\nabla u}{u} \right) + (1-t)\frac{U^p}{\eta} H^p \left( \frac{\nabla U}{U} \right) \right] \\ \nonumber& = t H^p(\nabla u) + (1-t)H^p(\nabla U). \end{align} Furthermore \[ \int_{\partial \Omega} u_t^p= t \int_{\partial \Omega} u^p + (1-t) \int_{\partial \Omega} U^p ,\qquad \int_{\Omega}m(x)u_{t}^{p}=1. \] The above expressions imply that also $u_t$ is a minimizer, which means that \eqref{inequality:uniq1} and \eqref{inequality:uniq2} are actually equalities. Then, as $p>1$, $H^{p}$ is strictly convex and one obtains that $\nabla u/u=\nabla U/U$. This immediately yields that $u/U$ is constant almost everywhere, and in view of the constraint condition in \eqref{minpb}, we deduce that $u=U$. This finishes the proof of the first conclusion. In order to obtain conclusion (2), one may use again the Poincaré type inequality \eqref{poincare-type} or the classical Poincaré inequality, and taking into account \eqref{h:growth}, \eqref{defM}, one obtains \[ 1=\int_{\Omega}m(x)|u|^{p}\leqslant \|m\|_{\infty}\|u\|_{p}^{p}\leqslant \frac{\|m\|_{\infty}}{\underline \alpha c}\lambda^+(m) \leqslant \frac{\mathtt max\{1,\beta\}}{\underline \alpha c}\lambda^+(m). \] To conclude the proof it is left to show that, if there exists an eigenvalue $\lambda>0$ such that the corresponding eigenfunction is non-negative, then $\lambda=\lambda^+(m)$. Let us assume that $v \geqslant 0$ is an eigenfunction with eigenvalue $\lambda \ne \lambda^+(m)$, $\lambda>0$, and let us first consider the case $\mathtt k < +\infty$. Take $u=t \bar u$, where $t >0$, and $\bar u$ is the positive eigenfunction for $\lambda^+(m)$ normalized such that $\int_\Omega m \bar u^p=1$. Notice that by \eqref{h:posom} $u$ is again an eigenfunction for \eqref{equazione} with eigenvalue $\lambda^+(m)$. Taking as test function $ u^p(v+\varepsilon)^{1-p} $ in the equation satisfied by $v$, one obtains \begin{align*} \int_\Omega H(\nabla v)^{p-1} H_\xi (\nabla v) \nabla \left ( \frac{u^p}{(v+\varepsilon)^{p-1}} \right) &+ \mathtt k \int_{\partial \Omega} \frac{u^pv^{p-1} }{(v+\varepsilon)^{p-1}} = \lambda \int m \frac{u^pv^{p-1} }{(v+\varepsilon)^{p-1}} \\ =& \lambda \int m \frac{u^pv^{p-1}}{(v+\varepsilon)^{p-1}} - \lambda^+(m) \int_\Omega m u^p \\ &+ \int_\Omega H^p(\nabla u) + \mathtt k \int_{\partial \Omega} u^p . \end{align*} Choosing as test function $ v^p (u+\varepsilon)^{1-p}$ in the equation satisfied by $u$ yields \begin{align*} \int_\Omega H(\nabla u)^{p-1} H_\xi (\nabla u) \nabla \left ( \frac{v^p}{(u+\varepsilon)^{p-1}} \right) &+ \mathtt k \int_{\partial \Omega}\frac{v^p u^{p-1}}{(u+\varepsilon)^{p-1}} = \lambda^+(m) \int_\Omega m \frac{v^p u^{p-1}}{(u+\varepsilon)^{p-1}} \\ & -\lambda \int_\Omega m v^p + \int_\Omega H^p(\nabla v) + \mathtt k \int_{\partial \Omega} v^p. 
\end{align*} Summing up these two identities and using the Picone identity \cite{Jaros}, we deduce that \begin{align*} \mathtt k \int_{\partial \Omega} \frac{u^pv^{p-1} }{(v+\varepsilon)^{p-1}} + \mathtt k \int_{\partial \Omega}\frac{v^p u^{p-1}}{(u+\varepsilon)^{p-1}} \geqslant &\ \lambda \int_\Omega m \frac{u^pv^{p-1}}{(v+\varepsilon)^{p-1}} - \lambda^+(m) \int_\Omega m u^p \\ & + \lambda^+(m) \int_\Omega m \frac{v^p u^{p-1}}{(u+\varepsilon)^{p-1}} -\lambda \int_\Omega m v^p \\ &+ \mathtt k \int_{\partial \Omega} u^p + \mathtt k \int_{\partial \Omega} v^p. \end{align*} Letting $\varepsilon \to 0$, one gets \[ (\lambda-\lambda^+(m)) \int_\Omega m (u^p-v^p) \leqslant 0. \] Since $v\in {\mathcal S^{+}_{\mathtt k,m}}$ (indeed, testing the equation satisfied by $v$ with $v$ itself gives $\int_\Omega m v^p>0$), we have $\lambda={\mathcal R_{\mathtt k,m}}(v)\geqslant \lambda^{+}(m)$ and, being $\lambda\neq\lambda^{+}(m)$, actually $\lambda>\lambda^{+}(m)$, so that we conclude \[ t^p=\int_\Omega m u^p \leqslant \int_\Omega m v^p. \] Since $t>0$ is arbitrary, we get a contradiction, hence $\lambda=\lambda^+(m)$. \end{proof} \begin{remark}\label{rem:sign} Assuming $H$ to be even, one has \[ \lambda^{+}(m)=\lambda(m)=\min\left\{\int_{\Omega}H^{p}(\nabla u)+\mathtt k\int_{\partial \Omega}|u|^{p}\; :\; u\in W^{1,p}(\Omega),\; \int_{\Omega}m(x)|u|^{p}=1 \right\}. \] Indeed, arguing as in Proposition \ref{prop:lambda} one can show that $\lambda(m)$ is achieved by a function $u$. In order to show that $u$ has constant sign, we can argue by contradiction, as for example in \cite[Theorem 1.13]{deFig}. Suppose that $u$ changes sign, so that $u=u^+-u^-$ with both $u^{\pm}\not \equiv 0$, $u^{\pm}\geq 0$. Then, \begin{align}\label{segnocostante} \nonumber\lambda(m)=\frac{\int_\Omega H^p(\nabla u^+)+ \mathtt k \int_{\partial \Omega} (u^+)^p +\int_\Omega H^p(-\nabla u^- )+ \mathtt k \int_{\partial \Omega} (u^-)^p}{\int_\Omega m (u^+)^p+\int_\Omega m (u^-)^p} \\ \geq \min\left\{ \frac{\int_\Omega H^p(\nabla u^+)+ \mathtt k \int_{\partial \Omega} (u^+)^p }{\int_\Omega m (u^+)^p}, \frac{\int_\Omega H^{p}(-\nabla u^- )+ \mathtt k \int_{\partial \Omega} (u^-)^p}{\int_\Omega m (u^-)^p} \right\}. \end{align} Now, if the two quotients on the right hand side are equal, both $u^+$ and $-u^-$ are eigenfunctions and the strong maximum principle yields that $u^+ >0$ and $u^->0$ a.e. on $\Omega$, which is impossible. Otherwise, the above quotients are different and in this case $u=u^{+}$ or $u=-u^{-}$. Finally, as $H$ is even, $H(\nabla u^{-}) =H(-\nabla u^{-})$, so that we can always suppose that $u>0$, yielding $\lambda^{+}(m)=\lambda(m)$. On the other hand, since we are just assuming \eqref{h:posom}, we cannot choose the sign of $\varphi$ a posteriori. This is the reason why we minimize in ${\mathcal S}^{+}_{\mathtt k,m}$. \end{remark} In the rest of the paper we will denote by $\varphi$ the positive eigenfunction associated with the principal eigenvalue $\lambda^{+}(m)$, normalized to be in the unit sphere of $L^{p}(\Omega)$. \begin{proof}[Proof of Theorem \ref{thm:superlevel set}] Let us first observe that $\od^+$ defined in \eqref{Lambda} is achieved: this follows by a minimization argument, taking into account \eqref{defM}, \eqref{h:convex}, and exploiting the Poincar\'e inequality in \cite[Lemma 3.1]{DG}.
In addition, it is possible to exploit the so called \emph{bathtub principle}, (see e.g.~\cite[Theorem~1.14]{lilo} or ~\cite[Lemma 3.3]{DG}), to obtain that the minimizing weight is bang-bang, namely \[ m= \ind{D} -\beta\ind{D^c}, \] where $\ind{D}: \Omega \mathtt mapsto \{0,1\}$ is the characteristic function of the set $D\subset \Omega$ such that \begin{equation}\label{dinclusion} \{ \varphi >t\} \subseteq D \subseteq \{ \varphi\geqslant t\} \end{equation} for some $t>0$ and \begin{equation}\label{misura intervallo} |D| = \frac{(\beta-m_0)|\Omega|}{1+\beta}. \end{equation} For what it concerns the last conclusion, we can apply Corollary 1.7 in \cite{ACF} to get that $D=\{ \varphi >t\} $ up to a set of zero measure. \end{proof} \begin{remark}\label{rem:hc2} Corollary 1.7 in \cite{ACF} is proved only for dimension $N \ge 2$ and even $H$. Although their proof can be adapted to the one dimensional case and without the hypothesis of symmetry of $H$, we will also give a different proof in dimension 1 by showing that $\varphi$ satisfies a monotonicity property (see Theorem \ref{D:intervallo}). \end{remark} \begin{remark}\label{rem:min set} As a consequence of Theorem \ref{thm:superlevel set}, we deduce that $\Lambda^{+}$ can be equivalently obtained as \begin{equation}\label{eq:Lambda+nuova} \Lambda^{+}:=\mathtt min \left\{\lambda^{+}(E), \, E\subset \Omega, \text{$E$ is measurable and }\; 0<|E|\leqslant \frac{(\beta-m_{0})|\Omega|}{1+\beta}\right\} \end{equation} where, with a slight abuse of notation, we denote $\lambda^{+}(E)= \lambda^{+}(\ind{E}-\beta\ind{E^{c}}) $, and \begin{equation} \Lambda^{+}=\lambda^{+}(D)=\lambda^{+}(\ind{D}-\beta\ind{D^{c}}), \quad D=\{ \varphi >t\} \end{equation} where $D$ satisfies \eqref{misura intervallo} and $\varphi$ is the eigenfunction associated with $\lambda^{+}(D)$. We will refer to any set $D$ which solves \eqref{eq:Lambda+nuova} as the \textit{optimal set}. \end{remark} In analogy to what is known in the context of fully non-linear operators, such as Pucci operators where principal half-eigenvalues are studied (see e.g. \cite{BD,QS}), we can define another principal eigenvalue, $\lambda^-(m)$, as follows \begin{definition}\label{def:lambda-} We set \begin{equation} \label{minpbmeno} \lambda^-(m)=\inf_{u\in {\mathtt mathcal S_{\mathtt k, m}^-}}{\mathtt mathcal R_{\mathtt k,m}}(u) \end{equation} where \begin{align*} \label{skmeno}{\mathtt mathcal S_{\mathtt k, m}^-} :=\left\{u\in W^{1, p}(\Omega),\ u \leqslant 0, \, \int_\Omega m(x)|u|^p>0\right\}, \end{align*} for $\mathtt k<+\infty $ and \[ \mathtt mathcal S_{\infty, m}^-:=\left\{ u\in W^{1, p}_{0}(\Omega),\ u \leqslant 0, \, \int_\Omega m(x)|u|^p>0\right\}. \] in the case $\mathtt k=+\infty$. \end{definition} The counterpart of Proposition \ref{prop:lambda} and Theorem \ref{thm:superlevel set} is contained in the following result. \begin{proposition}\label{prop:exlameno} For any fixed $m \in \mathtt mathcal{M}$, $\lambda^-(m)$ is achieved by a negative eigenfunction. Moreover, all the other conclusions of Proposition \ref{prop:lambda} hold with obvious changes. In addition, the minimization problem \[ \Lambda^-:=\inf_{m \in \mathtt mathcal{M}} \lambda^-(m) \] has a bang-bang solution $m_{-}=\ind{D_{-}}-\beta\ind{D_{-}^{c}}$. If $\varphi_{-}$ is an associated negative eigenfunction there exists $t>0$ with $D_{-}=\{\varphi_{-}<-t\}$. 
\end{proposition} \begin{proof} As a preliminary observation, notice that $u <0$ is a minimizer for \eqref{minpbmeno} if and only if \[ \widetilde u(x)=-u(x)>0 \] solves \[ \inf \left \{ \int_{\Omega} (\widetilde H(\nabla v))^p + \mathtt k \int_{\partial \Omega} |v|^p : v \in W^{1, p}(\Omega), \, v \geq 0,\, \int_{\Omega} m(x) \abs{v}^p =1 \right \} \] where $\widetilde{H}(\xi)= H(-\xi)$. Since $\widetilde{H}$ satisfies \eqref{h:norma}, \eqref{h:posom}, \eqref{h:convex}, we can apply Proposition \ref{prop:lambda} and Theorem \ref{thm:superlevel set} to $\widetilde{H}$, obtaining the conclusion. \end{proof} If $H$ is even, then $\lambda^+(m)=\lambda^-(m)$, and the corresponding eigenfunctions $\varphi_+>0$, $\varphi_-<0$ satisfy $\varphi_+=-\varphi_-$, so that it is always possible to choose a positive eigenfunction generating the whole eigenspace. In particular, this is true for instance in the case of the $p$-Laplace operator, and $\lambda^+(m)=\lambda^-(m)$ coincides with the usual notion of principal eigenvalue. Under our assumptions, $H$ is not even in general, for instance, in dimension one, $\widetilde H=H$ if and only if $a=b$, see \eqref{h:1dim}. \section{Proofs of the main results}\label{localization} In this section we will provide the proofs of Theorems \ref{thm:localization} and \ref{thm:Lambda+=Lambda-}. First we study the position of the optimal set in the one-dimensional case. Let $H$ be given in \eqref{h:1dim} and consider the following problem \begin{equation}\label{prob:N=1} \begin{cases} -\left((H( u'))^{p-1} H'( u')\right)' = \lambda m(x) |u|^{p-2}u & \text{ in } (0,1)\\ H^{p-1}( u'(1)) H'( u'(1))+\mathtt k|u(1)|^{p-2}u(1)=0& \\ -H^{p-1}( u'(0)) H'( u'(0))+\mathtt k|u(0)|^{p-2}u(0)=0 & \end{cases} \end{equation} In what follows we denote by $m=\ind{D}-\beta\ind{D^{c}}$ the minimizer for \eqref{Lambda}, and $\varphi$ the positive eigenfunction corresponding to $\lambda^+(m)$, normalized with respect to the $L^{p}$-norm. Recall that $D$ is a minimizer of \eqref{eq:Lambda+nuova}, so that we will refer to it as \textit{optimal}. \begin{theorem}\label{D:intervallo} The following conclusions hold. \begin{enumerate} \item If $\mathtt k \in (0, +\infty]$, then $\varphi$ attains its maximum in $\alpha \in (0, 1)$, and $\varphi$ is strictly increasing in $(0, \alpha)$, and strictly decreasing in $(\alpha, 1)$. \item If $\mathtt k=0$, then $\varphi$ is monotone. \item The optimal set $D=\{x\in (0,1) \,:\; \varphi(x)>t\}$ is an interval. \item The set $\{x\in (0,1) : \varphi'(x)=0 \}$ is finite. \end{enumerate} \end{theorem} \begin{proof} We will follow the argument of \cite[Proposition 4]{LLNP} and we start proving conclusion \textit{$(1)$.} Notice that, by elliptic regularity, $\varphi'\in C([0,1])$, so that, if $\mathtt k>0$ the boundary conditions and \eqref{h:1dim} immediately imply that $\varphi$ achieves its maximum in $(0,1)$. Denoting with $\alpha$ the first maximum point of $\varphi$, one can use the monotone rearrangements (see Section \ref{rearrangements}) and define \[ \varphi^{R}=\begin{cases} \varphi^{\ast} &x\in (0,\alpha) \\ \varphi _{\ast} &x\in (\alpha,1) \end{cases}\,, \qquad m^{R}=\begin{cases} m^{\ast}&x\in (0,\alpha) \\ m_{\ast} &x\in (\alpha,1) \end{cases} \] where, in view of Remark \ref{rem:I}, $\varphi^{\ast}$, ($\varphi_{\ast}$) stands for the monotone increasing (decreasing) rearrangement of the restriction of the function $\varphi$ to the interval $(0,\alpha)$ ($(\alpha,1)$) and analogously for $m$. 
As $\alpha$ is a maximum point, $\varphi^{R} \in H^{1}(0,1)$. The Hardy--Littlewood inequality (see \cite{Kawohl}) implies \begin{align*} \int_0^\alpha m \varphi^p &= \int_0^\alpha (m+\beta) \varphi^p - \beta \int_0^\alpha \varphi^p \leqslant \int_0^\alpha (m+\beta)^* (\varphi^*)^p - \beta \int_0^\alpha (\varphi^*)^p = \int_0^\alpha m^* (\varphi^*)^p \end{align*} and an analogous inequality holds for $\varphi_{*}$ and $m_{*}$. Moreover, \[ (\varphi^*)^p(0)=\mathtt min_{[0, \alpha]} \varphi^p \leqslant \varphi^p(0), \quad (\varphi_*)^p(1)=\mathtt min_{[ \alpha, 1]} \varphi^p \leqslant \varphi^p(1). \] Note that $m^{R}$ and $\varphi^{R}$ are admissible competitors for $\Lambda^{+}$, and Proposition \ref{polya monotona} yields \begin{align*} \Lambda^+ \leqslant \mathtt mathcal {R}_{\mathtt k,m^{R}}(\varphi^{R}) &= \dfrac{\int_0^\alpha H^p((\varphi^*)') +\int_\alpha ^1 H^p((\varphi_*)')+ \mathtt k (\varphi^*)^p(0) + \mathtt k (\varphi_*)^p(1)}{ \int_0^\alpha m^{*} (x) (\varphi^*)^p+ \int_\alpha^1 m_{*} (x) (\varphi_*)^p} \\ &\leqslant \frac{\int_0^1 H(\varphi')^p +\mathtt k \varphi^p(0) + \mathtt k \varphi^p(1)}{ \int_0^1 m(x) \varphi^p} =\Lambda^+, \end{align*} which implies \[ \frac{\int_0^\alpha H^p((\varphi^*)') +\int_\alpha ^1 H((\varphi_*)')^p+ \mathtt k (\varphi^*)^p(0) + \mathtt k (\varphi_*)^p(1)}{ \int_0^\alpha m^{*}(x) (\varphi^*)^p+ \int_\alpha^1 m_{*}(x) (\varphi_*)^p} =\frac{\int_0^1 H^p(\varphi') +\mathtt k \varphi^p(0) + \mathtt k \varphi^p(1)}{ \int_0^1 m(x) \varphi^p}. \] As a consequence, \[ \int_{0}^{\alpha}H^{p}(\varphi')=\int_{0}^{\alpha}H^{p}((\varphi^{*})'), \qquad\text{and}\qquad \int_{\alpha}^{1}H^{p}(\varphi')=\int_{\alpha}^{1}H^{p}((\varphi_{*})'). \] Then, Proposition \ref{equality polya} implies that $\varphi=\varphi^{R}$ yielding that $\varphi$ increases up to its maximum and then decreases. Let us now show conclusion \textit{$(2)$.} Consider the decreasing rearrangements $\varphi_*$ and $m_*$. Then, arguing as before, \[ \Lambda^+= \frac{\int_0^1 H(\varphi')^p }{ \int_0^1 m(x) \varphi^p} \ge\frac{\int_0^1 H((\varphi_*)')^p }{ \int_0^1 m_*(x) \varphi_*^p} \geqslant \Lambda^+. \] Therefore, equality holds, and applying Proposition \ref{equality polya} one obtains that $\varphi$ is monotone. Conclusion \textit{$(3)$} directly follows from the previous ones. Let us now prove conclusion $(4)$. Notice that, for any $x<y \in D$, integrating the equation in \eqref{prob:N=1} in $(x,y)$, one has \[ (H^{p})'(\varphi'(x)) - (H^{p})'(\varphi'(y)) = \Lambda^+ \int_x^y \varphi^{p-1} >0. \] Taking into account that the function $H^{p}$ is strictly convex, we have that $\varphi'(x) > \varphi'(y)$ for any $x<y \in D$, namely $\varphi'$ is strictly decreasing in $D$. Similarly, one proves that $\varphi'$ is strictly increasing on every connected component of $D^c$. This shows that $\varphi$ has a finite number of critical points since $D$ is an interval. As an immediate consequence, we also get that the monotonicity of $\varphi$ is strict in the intervals $(0,\alpha)$, $(\alpha,1)$. \end{proof} We are now ready to prove Theorem \ref{thm:localization}. \begin{proof}[Proof of Theorem \ref{thm:localization}] The fact that $D$ is an interval has already been proved in Theorem \ref{D:intervallo}. Let us deal with the Neumann case first, namely $\mathtt k=0$. 
Due to Theorem \ref{thm:superlevel set} and Theorem \ref{D:intervallo}, we know that the optimal eigenfunction $\varphi$ is strictly increasing or decreasing and recalling that the optimal interval $D$ is the positivity set of the optimal weight $m$, one has the following alternative \[ m_{(0, c)}:= \ind{(0, c)} - \beta \ind{(c, 1)} \quad \text{ or } \quad m_{(1-c, 1)} := \ind{(1-c, 1)} - \beta \ind{(0, 1- c)}, \] where $c:=|D|$. We denote $\lambda^{+}(E)=\lambda^{+}(\ind{E}-\beta\ind{E^{c}}) $ (see Remark \ref{rem:min set}). Let us first deal with the case $a>b$ and suppose by contradiction that $m_{(1-c, 1)} := \ind{(1-c, 1)} - \beta \ind{(0, 1- c)}$ is the optimal weight. Observe that Theorem \ref{D:intervallo} yields that the optimal eigenfunction $\varphi$ is monotone and the contradiction hypothesis readily implies that $\varphi$ has to be increasing. Defining $\psi(x)=\varphi(1-x)$ and taking into account \eqref{h:1dim}, one has \[ \int_0^1H^p(\psi') = \int_0^1 H^p(-\varphi'(1-x))= b^p \int_0^1 (\varphi'(1-x))^p =b^p \int_0^1 (\varphi')^p=\frac{b^p}{a^p} \int_0^1 H^p(\varphi'). \] Also \[ \int_0^1 m_{(0, c)} \psi^p= \int_0^1 m_{(1-c, 1)} \varphi^p. \] Hence \[ \lambda^+((0,c)) \leqslant {\mathtt mathcal R_{0,m_{(0,c)}}}(\psi) =\frac{b^p}{a^p} \lambda^+((1-c, 1))= \frac{b^p}{a^p} \Lambda^{+} <\Lambda^{+} \] that contradicts the minimality of $\Lambda^{+}$. The case $a<b$ follows analogously. We now take into account the Dirichlet case $\mathtt k=\infty$. Applying Proposition \ref{prop: polya anis} and \cite[Proposition 2.28]{VanSchaft} one deduces that \[ \Lambda^+ \leqslant \frac{\int_{I} \tau_{l}\left( H((\varphi^\star)')^p\right)}{\int_{I} \tau_l(m^\star |\varphi^\star|^p)} \leqslant \frac{\int_{I^\star} H((\varphi^\star)')^p}{\int_{I^\star} m^\star |\varphi^\star|^p} \leqslant \frac{\int_I H(\varphi')^p}{\int_I m |\varphi|^p}= \Lambda^+, \] where $\tau_l$ denotes the translation operator in the direction $l=-\frac{a}{a+b}$ and $\varphi^\star$ is the anisotropic rearrangement of $\varphi$ with respect to $H_0$, see Section \ref{sec:anissym}. Exploiting the equality case in Proposition \ref{prop: polya anis}, we get that $\varphi(x+\frac{a}{a+b})=\varphi^\star(x)$. Then, \[ \left\{\varphi\left(x+\frac{a}{a+b}\right)>t\right\}=\left(\frac{-a|D|}{a+b}, \frac{b|D|}{a+b}\right), \] completing the proof. \end{proof} \begin{remark} Notice that the presence of the anisotropy forces the position of the optimal interval $D$. Indeed, in the Neumann case $D$ lies on the left or on the right of $(0,1)$ if $a>b$ or $b>a$ and in the Dirichlet case the centre of the optimal interval given in Theorem \ref{thm:localization} is \[ x_{0}=\frac{|D|(b-a)+2a}{2(a+b)}. \] and $x_{0}<1/2$ if $b >a$, while $x_{0}>1/2$ if $b<a$. In both cases, if $a=b$ we recover the known results of \cite{CC89,louyan,DG}. \end{remark} Now we prove Theorem \ref{thm:Lambda+=Lambda-}. \begin{proof}[Proof of Theorem \ref{thm:Lambda+=Lambda-}] Without loss of generality we may assume that the origin is the center of symmetry for $\Omega$. Let $\varphi_+$ the positive eigenfunction corresponding to $\lambda^+(m_+)=\Lambda^+$, hence \[ \lambda^+(m_+)=\frac{\int_\Omega H^p(\nabla \varphi_+) + \mathtt k \int_{\partial \Omega} \varphi_+^p}{\int_\Omega m_+ \varphi_+^p}. \] We now define \[ v(x)=-\varphi_+(-x) <0, \] thus $\nabla v(x)=(\nabla \varphi_+)(-x)$. 
Therefore, \begin{align*} \Lambda^+=\lambda^+(m_+)& =\frac{\int_{\Omega} H^p(\nabla v)(-x) + \mathtt k \int_{\partial \Omega} |v|^p(-x)}{\int_\Omega m_+(x)|v|^p(-x)} =\frac{\int_{\Omega} H^p(\nabla v)(x) + \mathtt k \int_{\partial \Omega} |v|^p(x)}{\int_\Omega m_+(-x)|v|^p(x)} \\ &\ge \lambda^-(m_+(-x)) \ge \Lambda^-, \end{align*} where we have used that $m_+(-\cdot) \in \mathcal{M}$ and $v\in \mathcal{S}_{\mathtt k, m_+(-\cdot)}^-$. Analogously, let $\varphi_-$ be the eigenfunction such that $\lambda^-(m_-)=\Lambda^-$. Then \[ \Lambda^-=\lambda^-(m_-) \ge \lambda^+(m_-(-x)) \ge \Lambda^+, \] from which we immediately deduce $\Lambda^+=\Lambda^-$. \end{proof} \begin{remark} The argument of Theorem \ref{thm:Lambda+=Lambda-} actually shows that if $\Omega=-\Omega$ and $m(x)=m(-x)$, then $\lambda^{+}(m)=\lambda^{-}(m)$, with associated eigenfunctions satisfying $\varphi_{+}(x)=-\varphi_{-}(-x)$. Indeed, let $m$ satisfy $m(x)=m(-x)$. If we set $v(x):=-\varphi_+(-x)$ we get \[ \lambda^+(m)= \frac{\int_\Omega H^p(\nabla \varphi_+) + \mathtt k\int_{\partial \Omega} \varphi_+^p}{\int_\Omega m(x) \varphi_+^p } = \frac{\int_\Omega H^p(\nabla v) + \mathtt k\int_{\partial \Omega} |v|^p}{\int_\Omega m(x) |v|^p } \ge \lambda^-(m). \] Similarly, we prove the opposite inequality. Thus $\lambda^+(m)=\lambda^-(m)$, and $-\varphi_+(-x)$ is a minimizer for $\lambda^-(m)$, from which we deduce $\varphi_+(x)=-\varphi_-(-x)$ by uniqueness of the positive eigenfunction (see Proposition \ref{prop:lambda}). \end{remark} From the proof above we also deduce that if $u, m$ is a minimizer for $\Lambda^+=\Lambda^-$, then $-u(-x), m(-x)$ is still a minimizer. We now consider the case $N=1$ and $\Omega=(0,1)$. Notice that the reflection above in this case reads as $x \mapsto 1-x$. Therefore, what we have proved so far can be stated as follows: if $u, m$ is a minimizer for $\Lambda^+=\Lambda^-$, then $-u(1-x), m(1-x)$ is still a minimizer. Due to Theorem \ref{thm:localization}, we can actually say something more, namely that these are the only minimizers for $\Lambda^+=\Lambda^-$, at least if we take Dirichlet or Neumann boundary conditions. \begin{proposition}\label{prop:relaz m+m-} Let $N=1$, and consider $\mathtt k=+\infty$ (Dirichlet), or $\mathtt k=0$ (Neumann) with $a \ne b$. Let us denote by $m_+$ a weight such that $\Lambda^+=\lambda^+(m_+)$ and by $\varphi_+>0$ the associated eigenfunction, and similarly by $m_-$ a weight such that $\Lambda^-=\lambda^-(m_-)$, with eigenfunction $\varphi_-<0$. Then \[ m_+(x)=m_-(1-x) \] and \[\varphi_+(x)=-\varphi_-(1-x). \] \end{proposition} We preliminarily notice that, setting $\widetilde H(\xi):=H(-\xi)$, the polar function of $\widetilde H$ (see for instance \cite{AlvinoFLT}) is \begin{equation}\label{eq:H-zero-tilde} \widetilde H_0(x):=\sup_{t \in \mathbb{R}\setminus\{0\}} \frac{ \langle x, t \rangle}{\widetilde H(t)} =H_0(-x)= \begin{cases} \frac x b & \text{ if } x \ge 0\\ - \frac x a & \text{ if } x < 0. \end{cases} \end{equation} \begin{proof} \textit{Dirichlet case. } Let $\varphi_+$ be the positive eigenfunction corresponding to $\lambda^+(m_+)=\Lambda^+$, so that the associated optimal set $D_+$ satisfies \eqref{localization D}. Let us now consider $\varphi_-$, the negative eigenfunction corresponding to $\lambda^-(m_-)=\Lambda^-$ (see Proposition \ref{prop:exlameno}).
As a consequence of Corollary \ref{lem:anis polya 2}, reasoning as in Theorem \ref{thm:localization}, we get $m_-=\ind{D_-} - \beta \ind{D_-^c}$, where $D_-$ is defined by \[ D_-= \left( \frac{(1-c)b}{a+b}, \frac{ca +b}{a+b} \right), \quad \text{ with $c:=|D_-|$ given by \eqref{misura intervallo},} \] which, in view of \eqref{localization D}, gives \[ m_+(x)=m_-(1-x). \] Now, observe that \[ -\mathfrak Delta_{H,p} \varphi_+=\Lambda^+ m_+ \varphi_+^{p-1}, \] and \[ - \mathfrak Delta_{H, p} v=\Lambda^+ m_+ v^{p-1}, \quad \text{ where $v(x):=-\varphi_-(1-x)$. } \] Thus, using Proposition \ref{prop:lambda} conclusion (1), we get \[ \varphi_+(x)=-\varphi_-(1-x). \] \textit{Neumann case.} Let us consider the case $a > b$. We know by Theorem \ref{thm:localization} that $\varphi_+$ is decreasing and $m_+=\ind{D_+} - \beta \ind{D_+^c}$, where $D_+$ is $(0, |D_+|)$. Following exactly the same argument, one shows that also $\varphi_-$ is decreasing, and $m_-=\ind{D_-} - \beta \ind{D_-^c}$, where $D_-=(1-|D_-|, 1)$. This immediately implies $m_+(x)=m_-(1-x)$. As above, we get $\varphi_+(x)=-\varphi_-(1-x)$. If $a < b$ one can argue analogously. \end{proof} \section{Anisotropic rearrangement inequalities in $\mathtt mathbb{R}$.}\label{rearrangements} In this section we prove all the rearrangement inequalities that have been exploited to obtain the qualitative properties of the optimal set $D$ in the one dimensional case. The next subsection deals with the monotone rearrangements while Subsection \ref{sec:anissym} treats anisotropic symmetrizations. \subsection{Monotone rearrangements}\label{monotonere} Given a function $u:[0, 1] \to \mathtt mathbb{R}^+$, we define the monotone decreasing rearrangement of $u$ as the function $u_*: [0, 1] \to \mathtt mathbb{R}^+$ such that \[ u_*(x)= \begin{cases} \sup u & \text{ if } x=0\\ \inf \left\{ t : \, \mathtt mu_u(t) < x \right\} &\text{ if } x \in (0, 1], \end{cases} \] where \begin{equation}\label{distribution function} \mathtt mu_u(t):=\left| \left \{ x \in [0, 1]: \, u(x) >t \right\} \right| \end{equation} is the distribution function of $u$. The monotone increasing rearrangement $u^*$ is defined analogously. Our first aim is to show a Polya type inequality as stated in the following result. \begin{proposition}\label{polya monotona} Let $H$ be defined as in \eqref{h:1dim} and $u \in W^{1,p}(0, 1)$. Then \begin{equation}\label{polya:ineq} \int_0^1 H^p(u') \ge \int_0^1 H^p((u_*)'). \end{equation} \end{proposition} \begin{proof} The proof closely follows the arguments of \cite[Section II.3]{Kawohl} (see in particular Lemma 2.4, Lemma 2.6) , so that we will simply enlighten the differences in our situation. In view of \cite[ Remark 2.20]{Kawohl}, we can assume that $u > 0$, piecewise affine and with maximum at the origin. Let us consider the set of the values of $u$ at the non-differentiability points, together with $u(0)$ and $u(1)$, denote them by $\{a_1 \leqslant \dots \leqslant a_k\}$ and let \[ D_i= \left\{ x \in [0, 1]: a_i < u(x) < a_{i+1} \right\} \;, \;\; E_i= \left\{ x \in[0, 1]: a_i < u_*(x) < a_{i+1} \right\}. \] Note that $E_{i}$ are always connected, while $D_{i}$ may be not. Arguing as in \cite[Lemma 2.4]{Kawohl}, for every $i=1,\dots, k$, it results that \begin{equation}\label{eq:decomposition-Di} D_{i}=\bigcup_{j=1}^{N(i)}Y_{ij}, \end{equation} where $u$ is differentiable and non-constant in each $Y_{ij}$. We assume, without loss of generality, that for each $i$, $Y_{ij}$ are ordered according to their distance from the origin. 
Moreover, since $H(0)=0$, it is enough to prove that \begin{equation}\label{eq:goal1} \int_{D_i} H^p(u') \ge \int_{E_i} H^p\left((u_*)'\right), \qquad \text{for $i=1,\dots, k.$} \end{equation} Since $u$ is injective in every $Y_{ij}$, for each $\lambda \in (a_i, a_{i+1})$ the equation $u(x)=\lambda$ has a unique solution, so that we can define the differentiable function $\rho_{j}:(a_{i},a_{i+1})\mapsto Y_{ij}$ such that \begin{equation} \label{eq:infoRhoj} u(x)=\lambda, \quad\text{ if and only if }\quad x=\rho_{j}(\lambda)\;,\;\; u'(\rho_j(\lambda)) = \left( \rho'_j (\lambda) \right)^{-1}. \end{equation} Hence \begin{equation}\label{eq:deco1} \int_{D_i} H^p(u'(x))dx= \sum_{j=1}^{N(i)} \int_{a_i}^{a_{i+1}} H^p\left[ \left( \rho'_j (\lambda) \right)^{-1} \right] \abs{\rho'_j (\lambda)} \, d\lambda, \end{equation} where we have taken into account that the sign of $\rho'_{j}$ allows us to obtain the integral from $a_{i}$ up to $a_{i+1}$. At the same time, as $u_*$ is monotone we can define $\rho_{*}: (a_{i},a_{i+1})\mapsto E_{i}$ such that \begin{equation}\label{eq:ustar} u_*(x)=\lambda \quad\text{ if and only if }\quad x=\rho_{*}(\lambda) \quad \text{and}\quad (u_*)'(\rho_*(\lambda)) = \left[(\rho_*)'(\lambda) \right]^{-1}. \end{equation} Thus, also using the hypothesis that $u$ has its maximum at the origin, it results that \[ \operatorname{sign} \rho'_{j}(\lambda)= \operatorname{sign} u'(x) = (-1)^j, \text{ in } Y_{ij}, \quad \text{for $j=1,\dots, N(i)$ and $i=1,\dots, k$,} \] so that \begin{equation}\label{eq:3} \abs{\sum_{j=1}^{N(i)} (-1)^{j+1} \rho'_j(\lambda)} = \abs{\sum_{j=1}^{N(i)} \left(-\left| \rho'_j(\lambda) \right| \right)} = \sum_{j=1}^{N(i)}\abs{\rho'_j (\lambda)}. \end{equation} Furthermore, using the definition of $u_{*}$, it results that \begin{equation}\label{eq:rhostar} \rho_*(\lambda)= \begin{cases} \displaystyle \sum_{j=1}^{N(i)} (-1)^{j+1} \rho_j(\lambda) & \text{ if $N(i)$ is odd}, \\ \displaystyle \sum_{j=1}^{N(i)} (-1)^{j+1} \rho_j(\lambda)+1 & \text{ if $N(i)$ is even}. \end{cases} \end{equation} As a consequence, performing the change of variable $x=\rho_{*}(\lambda)$ and using \eqref{eq:ustar}, \begin{equation*}\label{eq:deco2} \begin{split} \int_{E_i} H^p((u_*(x))')dx &= \int_{a_i}^{a_{i+1}} H^p\left[ \left( \sum_{j=1}^{N(i)} (-1)^{j+1} \rho'_j (\lambda) \right)^{-1} \right] \abs{\sum_{j=1}^{N(i)} (-1)^{j+1} \rho'_j (\lambda)} \, d\lambda \\ &= \int_{a_i}^{a_{i+1}} H^p\left[ \left( \sum_{j=1}^{N(i)} (-1)^{j+1} \rho'_j (\lambda) \right)^{-1} \right] \sum_{j=1}^{N(i)}\abs{\rho'_j (\lambda)} d\lambda. \end{split} \end{equation*} Then, recalling \eqref{eq:deco1}, in order to obtain \eqref{eq:goal1} it is enough to show that \[ \sum_{j=1}^{N(i)} \int_{a_i}^{a_{i+1}} H^p\left[\left( \rho'_j (\lambda) \right)^{-1} \right] \abs{ \rho'_j (\lambda)} d\lambda \ge \int_{a_i}^{a_{i+1}} H^p\left[ \left( \sum_{j=1}^{N(i)} (-1)^{j+1} \rho'_j (\lambda) \right)^{-1} \right] \sum_{k=1}^{N(i)} \abs{\rho'_k (\lambda)}d\lambda. \] In turn, it is sufficient to show that the following point-wise inequality holds: \begin{equation}\label{disug} \sum_{j=1}^{N(i)} \alpha_j H^p\left[ \left( \rho'_j (\lambda) \right)^{-1} \right] \ge H^p \left[ \left( \sum_{j=1}^{N(i)} (-1)^{j+1} \rho'_j (\lambda) \right)^{-1} \right], \end{equation} where \[ \alpha_j= \abs{ \rho'_j (\lambda)} \left( \sum_{k=1}^{N(i)} \abs{ \rho'_k (\lambda)} \right)^{-1}.
\] In order to prove \eqref{disug}, first notice that, as $u(0)=\max_{[0, 1]}u$, the expression \eqref{h:1dim} yields \begin{align*} \sum_{j=1}^{N(i)} \alpha_j H^p \left[ \left( \rho'_j (\lambda) \right)^{-1} \right] &= a^p \sum_{j \text{ even}} \alpha_j \abs{ \rho'_j (\lambda) }^{-p} + b^p \sum_{j \text{ odd}} \alpha_j \abs{\rho'_j (\lambda) }^{-p} \\ & = \left( \sum_{k=1}^{N(i)} \abs{\rho'_k(\lambda)} \right)^{-1} \left[ a^p \sum_{j \text{ even}} \abs{\rho'_j (\lambda)}^{1-p} + b^p \sum_{j \text{ odd}} \abs{ \rho'_j (\lambda)}^{1-p} \right] . \end{align*} On the other hand, in view of \eqref{eq:3} we have \[ H^p\left[ \left( \sum_{j=1}^{N(i)} (-1)^{j+1} \rho'_j (\lambda) \right)^{-1} \right] = H^p \left[- \left( \sum_{j=1}^{N(i)} \abs{\rho'_j (\lambda)} \right)^{-1} \right] = b^p \left(\sum_{j=1}^{N(i)} \abs{ \rho'_j (\lambda)} \right)^{-p} . \] Then, \eqref{disug} holds if \begin{equation}\label{disug2} a^p \sum_{j \text{ even}} \abs{\rho'_j (\lambda)}^{1-p} + b^p \sum_{j \text{ odd}} \abs{ \rho'_j (\lambda)}^{1-p} \ge b^p \left( \sum_{j=1}^{N(i)} \abs{ \rho'_j (\lambda)} \right)^{1-p}. \end{equation} Notice that since $p >1$, for any $j=1,\dots, N(i) $ and $i=1,\dots,k$ it holds \begin{equation*}\label{ineq} \abs{ \rho'_j (\lambda)}^{1-p} \ge \left( \sum_{j=1}^{N(i)} \abs{\rho'_j(\lambda)} \right)^{1-p}. \end{equation*} Thus, \eqref{disug2} is proved if \[ a^p \sum_{j \text{ even}} \abs{\rho'_j (\lambda)}^{1-p} + b^p \sum_{j \text{ odd}} \abs{ \rho'_j (\lambda)}^{1-p} \ge b^p \abs{ \rho'_1 (\lambda)}^{1-p}, \] which is evidently true. \end{proof} In the following proposition we analyze the equality case in \eqref{polya:ineq}. \begin{proposition}\label{equality polya} Assume \eqref{h:1dim}. If $u\in W^{1,p}(0,1)$ is such that \begin{equation}\label{polya:eq} \int_0^1 H^p(u') = \int_0^1 H^p((u_*)'), \end{equation} then $u$ is monotone. \end{proposition} Let us notice that if $u$ is piecewise affine, the result follows from an inspection of the proof above. Indeed, if we argue by contradiction and assume that $u$ is not monotone, then there exists an index $i$ such that $N(i) \ge 2$. Thus, the inequality \eqref{disug2} holds with strict inequality as $p >1$. Going backwards up to the beginning of the proof of Proposition \ref{polya monotona}, one realizes that the inequality \eqref{polya:ineq} would be strict too, which contradicts the hypothesis. As a consequence, the equality forces $N(i)=1$ for any $i=1,\dots,k$, namely that $u$ is decreasing. We now consider the case $u \in W^{1,p}(0, 1)$, following an argument analogous to the one used in the isotropic case (see for example \cite{BLR, louyan}). \begin{proof}[Proof of Proposition \ref{equality polya}] First notice that, as $u\in W^{1,p}(0, 1)$, $u$ is continuous. Let us argue by contradiction and assume that there exist $t_1\neq t_2\in (0, 1)$ such that $u(t_1)=u(t_2)$ and $t_{1}$, $t_{2}$ are neither local maximum nor local minimum points. Then \begin{equation}\label{eq:max} \min_{(0,k)} u < \max_{(k,1)} u, \quad \text{and}\quad \max_{(0, k)} u > \min_{(k, 1)} u\quad \text{for all $k \in (t_1, t_2)$.} \end{equation} Let us define $u_{1}:(0,k)\mapsto \mathbb{R}$ by $u_1(t)=u(t)$ and $u_{2}:(0, 1-k)\mapsto \mathbb{R}$ by $u_2(t)=u(k+t)$. Then we can consider \[ v(t)= \begin{cases} (u_1)_*(t) & t \in (0, k) \\ (u_2)_*(t-k) & t \in (k, 1). \end{cases} \] Observe that $v_*=u_*$.
Hence, hypothesis \eqref{polya:eq} and Proposition \ref{polya monotona} yield \begin{align*} \int_0^1 H^p(u') &= \int_0^1 H^p((u_*)') = \int_0^1 H^p((v_*)') \leqslant \int_0^1 H^p(v') \\ &= \int_0^k H^p(((u_1)_*)') + \int_0^{1-k} H^p(((u_2)_*)') \leqslant \int_0^k H^p(u_1') + \int_0^{1-k} H^p(u_2') \\ & = \int_0^k H^p(u'(t))dt + \int_0^{1-k} H^p(u'(k+t))dt =\int_0^1 H^p(u'). \end{align*} In particular, \begin{equation}\label{eq:A} \begin{split} \int_0^1 H^p((u_*)') & -\int_0^k H^p(((u_1)_*)')- \int_0^{1-k} H^p(((u_2)_*)') \\ & = \int_0^1 H^p((v_*)') - \int_0^1 H^p(v')=0. \end{split} \end{equation} Define $\sigma=\min v_*$, $\Sigma=\max v_*$, and similarly $\sigma_i, \Sigma_i$ for $(u_i)_*$. Then \eqref{eq:max} yields \[ \sigma_1 < \Sigma_2, \quad \sigma_2<\Sigma_1. \] Let us denote by $\mu$ the distribution function of $v$, see \eqref{distribution function}, so that $(v_*)'(t)=1/\mu'(v_*(t))$, and let $\mathcal{E}\subset (0,1)$ be such that $(v_*)'\equiv 0$ on $\mu( \mathcal{E})$. Analogously define $\mu_{i}$ and $\mathcal{E}_{i}$ for $u_{i}$. Notice that $\mathcal{E}=\mathcal{E}_1 \cup \mathcal{E}_2$. As $v_{*}$ is non-increasing, performing a change of variable, we obtain \begin{equation}\label{change var} \int_0^1 H^p((v_*)')= - \int_{(\sigma, \Sigma) \setminus \mathcal{E}} H^p\left(\frac{1}{\mu'(s)}\right) \mu'(s) \, ds. \end{equation} Thus \eqref{eq:A} becomes \begin{align*} \int_{(\sigma_1, \Sigma_1) \setminus \mathcal{E}_1} H^p\left(\frac{1}{\mu_1'(s)}\right) \mu_1'(s) + \int_{(\sigma_2, \Sigma_2) \setminus \mathcal{E}_2} H^p\left(\frac{1}{\mu_2'(s)}\right) \mu_2'(s) - \int_{(\sigma, \Sigma) \setminus \mathcal{E}} H^p\left(\frac{1}{\mu'(s)}\right) \mu'(s) =0. \end{align*} Notice that $\sigma=\min\{ \sigma_1, \sigma_2 \}$ and $\Sigma=\max\{\Sigma_1, \Sigma_2 \}$. Also, setting $\overline \sigma= \max\{\sigma_1, \sigma_2\}$ and $\underline \Sigma=\min \{\Sigma_1, \Sigma_2 \}$, we have $\sigma \leqslant \overline \sigma < \underline \Sigma \leqslant \Sigma$. On $(\sigma, \overline \sigma) \cup (\underline \Sigma, \Sigma)$ one of the $\mu_{i}$ is constant; if for example this occurs for $\mu_{1}$, then $\mu'=\mu_{2}'$. In the interval $(\overline \sigma, \underline \Sigma)$ it holds that $\mu'=\mu_1'+\mu_2'$. Thus the above equality becomes \begin{equation}\label{eq:A=0} \int_{(\overline \sigma, \underline \Sigma)\setminus (\mathcal{E}_1 \cup \mathcal{E}_2)} \left[ H^p\left(\frac{1}{\mu_1'}\right )\mu_1'+H^p\left(\frac{1}{\mu_2'}\right )\mu_2' -H^p\left(\frac{1}{\mu_1'+\mu_2'}\right )(\mu_1'+\mu_2') \right] =0. \end{equation} We now use the positive homogeneity of $H$ and the fact that $\mu_i' < 0$ to observe that the integrand is equal to \[ H^p(-1) \left( -\abs{\mu_1'}^{1-p} -\abs{\mu_2'}^{1-p} +\abs{\mu_1'+\mu_2'}^{1-p} \right) < 0. \] Then, by \eqref{eq:A=0}, and also recalling that $\mathcal{E}=\mathcal{E}_1 \cup \mathcal{E}_2$, we conclude that \[ \abs{(\overline \sigma, \underline \Sigma)\setminus (\mathcal{E}_1 \cup \mathcal{E}_2)}= \abs{(\overline \sigma, \underline \Sigma)\setminus \mathcal{E}}=0.
\] Now, the same change of variable we performed in \eqref{change var} yields \[ \int_{\{t: \, \overline \sigma < v_*(t) < \underline \Sigma\}} H^p((v_*)') = - \int_{(\overline \sigma, \underline \Sigma) \setminus \mathcal{E}} H^p\left(\frac{1}{\mu'(s)}\right) \mu'(s)=0. \] Since by our contradiction hypothesis $\overline \sigma < \underline \Sigma$, and since $H(t)=0$ if and only if $t=0$, we conclude that $v_*$ is constant, thus $\sigma=\Sigma$, a contradiction. \end{proof} \begin{remark}\label{rem:I} Let us observe that all the results of this section can be easily adapted to the case of a function $u$ defined in an arbitrary interval $I\subset \mathbb{R}$, with $u_*$ defined in the same interval. \end{remark} \subsection{Anisotropic Symmetrization}\label{sec:anissym} We again consider $H$ of the form \eqref{h:1dim} and, recalling that the polar function $H_0$ of $H$ is defined as \[ H_0(x)=\sup_{t \in \mathbb{R}} \frac{ t x}{H(t)} , \] we have \[ H_0(x)= \begin{cases} \frac x a & \text{ if } x \ge 0\\ - \frac x b & \text{ if } x < 0. \end{cases} \] Let $I:=[0,1]$ and $u: I \to [0,+\infty)$; we define \[ I^\star= \left \{ x \in \mathbb{R} : H_0(-x) < \frac{1}{a+b} \right \} = \left( -\frac{a}{a+b}, \frac{b}{a+b}\right) \] and $u^\star: I^\star \to [0, +\infty)$ as \[ \begin{split} u^\star(x) &= \sup \{ t \in \overline{\mathbb{R}}: \abs{ \{ y: u(y) >t \} } > H_0(-x)(a+b) \} \\ & = \begin{cases} \sup \{ t : \abs{ \{ y: u(y) >t \} } > \frac{a+b}{b} x \} & \text{ if } x \ge 0 \\\\ \sup \{ t : \abs{ \{ y: u(y) >t \} } > -\frac{a+b}{a} x \} & \text{ if } x < 0, \end{cases} \end{split} \] and we will call $u^\star$ the anisotropic rearrangement of $u$ with respect to $H_0$. \begin{remark} For any set $E\subset \mathbb{R}$, $E^{\star}$ is the interval $(\omega_{1},\omega_{2})$ with the same measure as $E$ and such that $\omega_1= -\frac a b \omega_2$; namely, the sub-level set of $H_{0}(-\cdot)$ with the same measure as $E$. Any interval satisfying $E=E^\star$ will be called an anisotropic ball. \end{remark} We now introduce the anisotropic Polya inequality useful in our context. \begin{proposition}\label{prop: polya anis} Let $u \in W_0^{1, p}(I,[0,+\infty))$. Then \begin{equation}\label{polya anis} \int_I H^p(u')\ge \int_{I^{\star}} H^p((u^\star)'). \end{equation} Moreover, assume that $u \in C^1(I, [0,+\infty))$, and that the set $\{ u'(x)=0 \}$ is finite. Then, equality in \eqref{polya anis} holds if and only if $u \!\left(x+\frac{a}{a+b}\right)=u^\star(x)$. \end{proposition} \begin{remark} Anisotropic Polya inequalities, together with the study of the equality case, were first proved in \cite{AlvinoFLT} (see also \cite{FV}) for every dimension assuming $H(t \xi)=|t|H(\xi)$. Unfortunately, the function $H$ given by \eqref{h:1dim} does not enjoy this property. Generalizations of \eqref{polya anis} are provided in \cite{VanSchaft}, except for the study of the equality case. As we treat a quite simple situation, we provide a direct approach to show both the inequality and the characterization of the equality case suitable to our situation. \end{remark} \begin{proof} Let us first prove the inequality \eqref{polya anis}. We will assume $u$ is piecewise affine; the $W^{1,p}$ case follows by density. We will adapt \cite[Theorem 2.9]{Kawohl}.
Let $\left\{ a_1 \leqslant \dots \leqslant a_k\right\}$ be the values of $u$ at the non-differentiability points, set $a_0=0$ and let \[ \begin{split} D_i &= \left\{ x \in [0, 1]: a_i < u(x) < a_{i+1} \right\} \quad\qquad E_i = \left\{ x \in I^{\star}: a_i < u^{\star}(x) < a_{i+1} \right\}=E_{i}^{-}\cup E^{+}_{i} \\ E_i^{-} &= \left\{ x \in\left[-\frac{a}{a+b}, 0\right]: a_i < u^{\star}(x) < a_{i+1} \right\},\quad E_i^{+} = \left\{ x \in\left[0,\frac{b}{a+b}\right]: a_i < u^{\star}(x) < a_{i+1} \right\}. \end{split}\] Taking into account that $H(0)=0$, it is enough to prove that \begin{equation}\label{eq:goal} \int_{D_i} H^p(u') \ge \int_{E_i} H^p\left((u^\star)'\right), \qquad \text{for $i=0,\dots, k-1.$} \end{equation} Recalling the decomposition \eqref{eq:decomposition-Di}, we can consider the monotone and differentiable functions $\rho_{j}:(a_{i},a_{i+1})\mapsto Y_{ij}$ as in the proof of Proposition \ref{polya monotona} such that \eqref{eq:infoRhoj} holds, and we notice that, since $u(0)=u(1)=0$, \[ \operatorname{sign}\, u'(\rho_j)=(-1)^{j+1}. \] By definition, $u^{\star }$ is strictly increasing in $E_i^-$ and strictly decreasing in $E_i^+$, so that we can define $\rho_\pm^\star :(a_i, a_{i+1}) \to E_i^\pm$ such that $\rho_-^\star(\lambda)$ is the unique negative value such that $u^\star(\rho^\star_-)=\lambda$, and $\rho_+^\star(\lambda)$ the unique positive value such that $u^\star(\rho^\star_+)=\lambda$. In addition we have \[ (u^\star)'\left(\rho_-^\star(\lambda)\right) = \left((\rho^\star_-)'(\lambda) \right)^{-1} \text{ in } E_i^- \;,\;\; (u^\star)'\left(\rho_+^\star(\lambda)\right) = \left((\rho^\star_+)' (\lambda) \right)^{-1} \text{ in } E_i^+. \] As a consequence, it results that \begin{equation}\label{eq:signrho} \begin{split} \rho_-^\star(\lambda)= \frac{a}{a+b} \sum_{j=1}^N (-1)^{j+1} \rho_j(\lambda) \quad&\implies\quad (\rho^\star_-)'(\lambda) = \frac{a}{a+b} \sum_{j=1}^N \abs{ \rho_j' (\lambda)} \\ \rho_+^\star(\lambda)= \frac{b}{a+b} \sum_{j=1}^N (-1)^j \rho_j(\lambda) \quad\quad&\implies\quad ( \rho^\star_+)'(\lambda)= - \frac{b}{a+b} \sum_{j=1}^N \abs{ \rho_j' (\lambda) }. \end{split}\end{equation} Then, since showing \eqref{eq:goal} is equivalent to proving \[ \sum_{j=1}^{N(i)} \int_{Y_{ij} } H^p(u') \ge \int_{E_i^+} H^p((u^\star)') + \int_{E_i^-} H^p((u^\star)'), \] we can exploit a change of variable and obtain the following inequality \[ \sum_{j=1}^{N(i)} \int_{a_i}^{a_{i+1}} H^p\left[\left( \rho'_j \right)^{-1} \right]\abs{ \rho'_j} \ge \int_{a_i}^{a_{i+1}} H^p\left[\left( (\rho_-^\star)' \right)^{-1} \right] \abs{ \left(\rho_-^\star\right)'}+ \int_{a_i}^{a_{i+1}} H^p\left[ \left( ( \rho_+^\star)' \right)^{-1} \right] \abs{ ( \rho_+^\star)'}, \] which, in view of \eqref{eq:signrho}, is equivalent to \begin{align*} \sum_{j=1}^{N(i)} \int_{a_i}^{a_{i+1}}\alpha_{j} H^p\left[\left( \rho'_j \right)^{-1} \right] \ge & \int_{a_i}^{a_{i+1}} \frac{a}{a+b} H^p\left[ \left( \frac{a}{a+b} \sum_{j=1}^{N(i)} \abs{ \rho'_j} \right)^{-1}\right] \\ &+ \int_{a_i}^{a_{i+1}} \frac{b}{a+b} H^p\left[ \left(-\frac{b}{a+b} \sum_{j=1}^{N(i)} \abs{ \rho'_j} \right)^{-1} \right] , \end{align*} where \begin{equation}\label{eq:alphaj} \alpha_j= \abs{ \rho'_j} \left( \sum_{j=1}^{N(i)} \abs{ \rho'_j} \right)^{-1}\hskip-6pt, \quad \text{so that } \sum_{j=1}^{N(i)} \alpha_{j}=1.
\end{equation} Then, it is sufficient to prove that \begin{equation}\label{eq:integrand} \begin{split} \sum_{j=1}^{N(i)} \alpha_j H^p\left[ \left( \rho_j' \right)^{-1} \right] \ge& \frac{a}{a+b} H^p\left[ \left(\frac{a}{a+b}\sum_{j=1}^{N(i)} \abs{ \rho'_j }\right)^{-1} \right] \\ &+ \frac{b}{a+b} H^p\left[\left(-\frac{b}{a+b}\sum_{j=1}^{N(i)} \abs{ \rho_j'} \right)^{-1} \right], \end{split}\end{equation} for $\lambda \in (a_i, a_{i+1})$. Keeping in mind \eqref{eq:alphaj}, the convexity of the real function $t^p$ and \eqref{h:1dim}, we obtain \begin{equation}\label{convex polya} \begin{split} \sum_{j=1}^{N(i)} \alpha_j H^p\left[ \left( \rho'_j\right)^{-1} \right] & \geqslant \left\{\sum_{j=1}^{N(i)} \alpha_j H\left[ \left(\rho'_j \right)^{-1} \right] \right\}^p = \left[ \sum_{j \text{ even}}-\alpha_j b\left( \rho'_j\right)^{-1} + \sum_{j \text{ odd}} \alpha_j a\left( \rho'_j \right)^{-1} \right]^p \\ & = \left[ \frac {N(i)}2 (a+b) \right]^p \left( \sum_{j=1}^{N(i)} \abs{\rho'_j} \right)^{-p}, \end{split} \end{equation} where we have also taken into consideration that $N(i)$ is even. On the other hand, on the right hand side of \eqref{eq:integrand} we have \[ \begin{split} H^p\left[\left(-\frac{b}{a+b}\sum_{j=1}^{N(i)} \abs{ \rho_j'} \right)^{-1} \right] &= b^p \left( \frac{b}{a+b} \right)^{-p} \left( \sum_{j=1}^{N(i)} \abs{ \rho'_j} \right)^{-p}, \\ H^p\left[\left(\frac{a}{a+b} \sum_j \abs{ \rho'_j} \right)^{-1} \right] &= a^p \left( \frac{a}{a+b} \right)^{-p} \left( \sum_{j=1}^{N(i)} \abs{ \rho'_j} \right)^{-p}. \end{split} \] This, together with \eqref{convex polya}, shows that \eqref{eq:integrand} holds provided \begin{equation}\label{final ineq polya} \left[ \frac {N(i)}2 (a+b) \right]^p \ge (a+b)^p, \end{equation} which is satisfied as $N(i) \ge 2$. Let us now study the equality case. We preliminarily observe that the arguments above also work for a function $u \in C^1(I)$ such that the set $\{ u'(x)=0 \}$ is finite, once we choose $\{ a_{0}\leqslant a_1 \leqslant \dots \leqslant a_k \}$ to be the values of $u$ at these points. With this choice, the functions $\rho_j:(a_i, a_{i+1}) \mapsto Y_{ij}$ are well defined and differentiable, and, by continuity, we can also define $\rho_j(a_i)$ for any $j=1, \dots, N(i)$ and any $i=0,\dots, k-1$. In particular, \eqref{convex polya} and \eqref{final ineq polya} hold. As a consequence, the strict convexity of the real function $t^{p}$ implies that equality holds in \eqref{polya anis} if and only if $N(i)=2$ for any $i$ and equality holds in \eqref{convex polya}. Then, \eqref{h:1dim} yields \[ a \left( \rho'_1 \right)^{-1}= H\left(\left( \rho'_1 \right)^{-1} \right)=H\left[\left( \rho'_2 \right)^{-1} \right]=-b \left( \rho'_2 \right)^{-1} \qquad \text{ in $(a_i, a_{i+1})$. } \] Namely, \begin{equation}\label{eq rho} b \rho'_1(\lambda)=-a \rho'_2(\lambda) \quad \text{ for all } \lambda \in (a_i, a_{i+1}), \, i=0, \dots, k-1. \end{equation} Integrating this expression over $(0, \lambda)$, for $\lambda \leqslant a_1$, we get \[ \rho_2(\lambda)= -\frac{b}a \rho_1(\lambda) + \frac b a \rho_1(0) +\rho_2(0)= -\frac{b}a \rho_1(\lambda)+1, \] and in particular \[ \rho_2(a_1)=-\frac{b}a \rho_1(a_1)+1. \] Repeating the argument on each interval $(a_i, a_{i+1})$, we conclude \[ \rho_2(\lambda)=-\frac{b}a \rho_1(\lambda)+1,\qquad \text{for any $\lambda\in [0, \norm{u}_\infty)$.} \] Since $\rho_1(\norm{u}_\infty)=\rho_2(\norm{u}_\infty)$, we also deduce that $u$ attains its maximum for $x=\frac{a}{a+b}$.
We claim that superlevel sets of $u \!\left(x+\frac{a}{a+b}\right)$ are anisotropic ball. Indeed, fix $t \in (0, \norm{u}_\infty)$, and let $x_1 \in [-\frac{a}{a+b}, 0]$ such that $u(x_1+\frac{a}{a+b})=t$, and $x_2 \in[0, \frac{b}{a+b}]$ such that $u(x_2+\frac{a}{a+b})=t$. Thus, \[ x_1+\frac{a}{a+b} = \rho_1(\lambda)= -\frac a b \rho_2(\lambda) + \frac a b = -\frac a b\left (x_2 + \frac{a}{a+b}\right) + \frac a b, \] which immediately implies $ x_1= -\frac a b x_2$. Namely, the superlevel set of $u \!\left(x+\frac{a}{a+b}\right)$ at $t$ is an anisotropic ball. Hence $u \!\left(x+\frac{a}{a+b}\right)=u^\star(x)$. \end{proof} \begin{remark}\label{rem:coarea} We believe that the proof of the rigidity result in Proposition \ref{prop: polya anis} may be addressed as done in \cite{FV, ET}, paying attention to the effect produced by the loss of evenness on the anisotropy. Indeed, one may argue as in \cite{FV} to prove that equality in \eqref{polya anis} implies that the super-level sets of $u$ are intervals; then, under the additional assumption $|\{u'(x)=0\}|=0$, one may show, following \cite{ET}, that they are centered in the same point. Here, we have given a elementary proof suitable for our context. \end{remark} We end this section with the following analog of Proposition \ref{prop: polya anis} for negative functions. Recall that $\widetilde H(x):=H(-x)$, and that its polar function is given in \eqref{eq:H-zero-tilde}. \begin{corollary}\label{lem:anis polya 2} Let $v: I:=[0, 1] \to (-\infty,0]$ such that $v \in W_{0}^{1,p}(0,1)$. Define $v^\# :=-(-v)^\star:\widetilde{I}^\star \to (-\infty,0]$, where $(\cdot)^\star$ is the anisotropic rearrangement with respect to $\widetilde H_0(x)$, namely \[ (-v)^\star(x)= \begin{cases} \sup \{ t : \abs{ \{ y: -v(y) >t \} } \ge \frac{a+b}{a} x \} & \text{ if } x \ge 0 \\ \sup \{ t : \abs{ \{ y: -v(y) >t \} } \ge -\frac{a+b}{b} x \} & \text{ if } x < 0, \end{cases} \] with $\widetilde{I}^\star=(-\frac{b}{a+b}, \frac{a}{a+b})$. Then \begin{equation}\label{eq:polya neg} \int_I H^p(v') \ge \int_{\widetilde{I}^\star} H^p((v^\#)'). \end{equation} Moreover, if $v \in C^1(I)$ and the set $\{x : v'(x)=0 \}$ is finite, then equality holds in \eqref{eq:polya neg} if and only if $v\left(x+\frac{b}{a+b} \right)= v^\#(x)$. \end{corollary} \begin{proof} Just notice that by Proposition \ref{prop: polya anis} \[ \int_I H^p(v') = \int_I \widetilde H^p(-v') \geqslant \int_{\widetilde{I}^\star} \widetilde H^p((-v^\star)') = \int_{\widetilde{I}^\star} H^p((v^\#)'). \] The statement about the equality case follows again by Proposition \ref{prop: polya anis}. \end{proof} \section*{Acknowledgments} Work partially supported by PRIN-2017-JPCAPN Grant: ``Equazioni differenziali alle derivate parziali non lineari'', by project Vain-Hopes within the program VALERE: VAnviteLli pEr la RicErca , by the INdAM-GNAMPA group, by the Portuguese government through FCT - Funda\c c\~ao para a Ci\^encia e a Tecnologia, I.P., under the projects UID/MAT/04459/2020 and PTDC/MAT-PUR/1788/2020 and, when eligible, by COMPETE 2020 FEDER funds, under the Scientific Employment Stimulus - Individual Call (CEEC Individual) -\\ 2020.02540.CEECIND/CP1587/CT0008. \end{document}
math
89,087
\begin{document} \title{Quantum coherence on selectivity and transport of ion channels} \author{Mina Seifi} \affiliation{Research Group on Foundations of Quantum Theory and Information, Department of Chemistry, Sharif University of Technology P.O.Box 11365-9516, Tehran, Iran} \author{Ali Soltanmanesh} \affiliation{Research Group on Foundations of Quantum Theory and Information, Department of Chemistry, Sharif University of Technology P.O.Box 11365-9516, Tehran, Iran} \affiliation{Sharif Quantum Center, Sharif University of Technology, Tehran, Iran} \author{Afshin Shafiee*} \affiliation{Research Group on Foundations of Quantum Theory and Information, Department of Chemistry, Sharif University of Technology P.O.Box 11365-9516, Tehran, Iran} \begin{abstract} Recently, it has been suggested that ion channel selectivity filter may exhibit quantum coherence, which may be appropriate to explain ion selection and conduction processes. Potassium channels play a vital role in many physiological processes. One of their main physiological functions is the efficient and highly selective transfer of \text{K}$^{+}$ ions through the membranes into the cells. To do this, ion channels must be highly selective, allowing only certain ions to pass through the membrane, while preventing the others. The present research is an attempt to investigate the relationship between hopping rate and maintaining coherence in ion channels. Using the Lindblad equation to describe a three-level system, the results in different quantum regimes are examined. We studied the distillable coherence and the second order coherence function of the system. The oscillation of distillable coherence from zero, after the decoherence time, and also the behavior of the coherence function clearly show the point that the system is coherent in ion channels with high throughput rates. \end{abstract} \maketitle \section{Introduction} Quantum biology is a relatively new field of study in quantum mechanics which can use quantum theory in some aspects of biology that classical physics cannot describe precisely. In the beginning, it was believed that quantum phenomena such as tunneling or quantum entanglement do not exist in living environments since these environments are inherently warm, humid, and noisy \cite{Moh,Tus,Bne}. The newfound evidence, in recent years, reveals that quantum principles play a critical role in explaining various biological phenomena such as photosynthesis, quantum effects in the brain, and spin and electromagnetic routing of the migratory birds \cite{ghas,Mar,arash,Sch,kim,Lam}. According to this evidence, the role of quantum phenomena, such as tunneling and quantum coherence, has been widely accepted in the crucial activities of living cells \cite{ghasem}. Recently, it has been proposed that quantum coherence may play a role in the selectivity of ions and their transport through ion channels \cite{Gan,sal,vaz}. Due to the energy scale and transport phenomena, ion channels can be considered as a distinct protein system that the quantum effects may have a functional role within them so that their activities can be comprehended via quantum mechanics \cite{Gan}. These channels are a collection of proteins embedded in the cell membrane that tune the flux of specific ions across the membrane and regulate interactions between the cell and its environment \cite{Cor}. 
Structurally, ion channels are protein complexes comprised of several subunits whose cyclic arrangement forms sub-nanometer pores for ions to enter or leave the cell \cite{Moh}. Ion channels share common properties, the most important of which is the presence of a gate that can be activated by such factors as chemicals, voltage, light, and mechanical pressure. Another common feature is having a selectivity filter responsible for passing only one specific type of ion. The structure of this filter is well studied in the Streptomyces lividans (KcsA) bacterial channel \cite{doyl}. The 3.4 nm long KcsA channel is comprised of a 1.2 nm long selectivity filter that is composed of four P-loop monomers. Each P-loop is composed of five amino acids: (Theronine (Thr75), Valine (Val76), Glycine (Gly77), Tyrosine (Tyr78), Glycine (Gly79)) linked by peptide units (H-N-C=O). The selectivity filter width is only a few angstroms (3A). The ions must move in a single file without their hydration shell in this filter. The process of gating, i.e., the mechanism that controls the closing and opening of the pores, is different in the variety of potassium channels, but the sequence of amino acids forming the selection filter is the same in all potassium channels \cite{doyl}. The selectivity filter is capable of selecting potassium ions over sodium ions in a ratio of $10^{4}$ : 1. Ion channels allow ions to enter or leave cells in a very selective and rapid manner. FIG. \ref{ion} shows the structure of the KcsA channel and its selectivity filter. Potassium channels conduct $\text{K}^{+}$ ions at a rate of $10^{6}$-$10^{8}$ $\text{s}^{-1}$ throughout the cell membrane \cite{tri,mor}. An important question is how a flexible structure, such as a selectivity filter, can be selected at high speed. This rapid and high selectivity is critical for the physiology of living creatures. Various ion channels are involved in several biological processes, including nerve signaling, muscular contraction, cellular homeostasis, and epithelial fluid transport \cite{wes,ogr,par}. The selectivity of ion channels refers to the fact that each ion channel is individually for passing the specific ions. For example, potassium channels only allow the potassium ions to pass through the membrane while rejecting the other ions (e.g., sodium ions) \cite{wes}. The ion radii of sodium and potassium ions only differ by 0.38A. Despite this slight difference in ion radii, the selectivity of the potassium ions is more than a thousand times higher than that of sodium ions \cite{mac}. Hence, the high selectivity of the ion channels cannot be simply explained by the physical obstruction, and the other factors may be involved \cite{Ash}. There is a large body of literature and numerous hypotheses about ion selectivity in biomolecular and genetic disciplines. Many researchers have contributed to developing the current views and concepts by experimentation and simulation \cite{tho,kas,dud}. Qi \textit{et al.} examined the importance of channel size in ion transport selectivity in molecular detail \cite{Qi}. To investigate the selectivity and other properties of the ion channel, Allen \textit{et al.} performed molecular dynamic calculations on the entire experimental protein structure determined for the KcsA potassium channel from Streptomyces lividans \cite{allen}. Allen and Chung used three-dimensional Brownian dynamics simulations to study the conductivity of the KcsA potassium channel using a known crystallographic structure \cite{chung}. 
Salari \textit{et al.} investigated the possibility of quantum ion interference through ion channels to understand the role of quantum interference in selectivity in ion channels \cite{sal}. Summhammer \textit{et al.} also analyzed the solutions of the Schr\"odinger equation for the bacterial KcsA ion channel. They claimed that quantum mechanical calculations are needed to explain basic biological properties such as ion selection in transmembrane ion channels \cite{summ}. Despite the extensive experimental and theoretical research, most biomolecular methods cannot fully explain ion selection at the nanoscale, and numerous issues remain unresolved. After the three-dimensional structure of the bacterial KcsA channel was determined with atomic resolution using x-ray crystallography, doubts were raised about classical explanations such as the snug-fit model. It seems that such a mechanism cannot fully explain ion selectivity \cite{nos,jia,rou}. Considering the inadequacy of classical mechanics in explaining ion selectivity, as well as the experimental evidence demonstrating the existence of quantum effects in biological cells, it seems that quantum mechanics should be used to resolve this problem. To comprehend ion selection as well as ion transport, accurate atomic models which are able to describe the microscopic interactions well are required. At present, the functional role of coherence in ion channels can only be conjectured. In other words, coherence may play a role in ion selectivity. Vaziri \textit{et al.} suggested that the ion channel selectivity filter shows quantum coherence, which could possibly explain the ion selection process \cite{vaz}.\\In the present work, the relationship between the hopping rate and maintaining coherence in ion channels is investigated. The Lindblad master equation is used to describe the system, and the coherence of the system is examined at different hopping rates. Selectivity and high rates in ion channels seem to be two contradictory features. We show that these two features are not contradictory; rather, a high rate is an essential requisite for selectivity. High selectivity occurs only at high hopping rates. Despite the occurrence of decoherence, the high rate of hopping causes the system to always maintain its coherence in an oscillating manner.\\This paper is organized as follows. In Sec. II, we briefly review the Lindblad master equation and decoherence in open quantum systems. In Sec. III, the quantum mechanical model for ion transition through the ion channel is presented, and quantum transport equations in terms of Lindblad operators are introduced. Then, the results are discussed and the effect of hopping rates on coherence is investigated in detail in Sec. IV. Finally, the paper is concluded in Sec. V. \begin{figure} \caption{(Left) A representation of the KcsA ion channel after PDB 1K4C. (Right) Two P-loop monomers in the selectivity filter.} \label{ion} \end{figure} \section{Decoherence and Lindblad master equation} All realistic systems inevitably interact with their environments. For quantum systems at the nanoscale and for quantum biological systems, these interactions are not negligible; therefore, they should be considered as open systems \cite{Zha}. In recent years, understanding the dynamics of open systems has been one of the most complex challenges in quantum physics. The environment has an important role in the quantum realm.
The interaction of a system with its environment can result in entanglement between the system and the environment, so that the nature of the system may change in an essential way \cite{sch}. In this interaction, the environment has infinitely many degrees of freedom, and the system cannot be characterized by a wave function or by specific pure states. Therefore, a density matrix should be assigned to the central system, and it is the time evolution of this matrix that matters \cite{sch}. The terms associated with quantum behavior appear as the off-diagonal elements of the density matrix operator. Due to the system-environment interaction, quantum coherence, which is characterized by these off-diagonal elements, decays, and quasi-classical features emerge in the system. Within the decoherence framework, taking the interaction of a quantum system with its environment into account yields a realistic picture of a given model \cite{tir}. The complexity of the transmission dynamics in ion channels is attributed to the strong interactions between ions along with the large number of degrees of freedom of the environment. The selectivity filter is responsible for the passage of certain ions in ion channels. There are three sites for this filter, as shown in FIG. \ref{ion}. Jumping from one site to another is accompanied by an energy barrier. The jump rate between these sites is called the hopping rate. The rate of hopping between sites should be equal to the rate of transmission ($10^{6}$-$10^{8}$ \text{s}$^{-1}$) through the channel. To maintain the quantum state of the ions during the passage through the ion channel, the decoherence time must be greater than the time interval of the ion passing through the channel (10-20 nanoseconds) \cite{vaz}. Unlike that of a closed system, the temporal evolution of an open system is non-unitary due to the dissipative terms in the master equation. The Lindblad master equation is of special importance because it is the most general generator of Markovian dynamics in quantum systems \cite{man,dub}. Here, the Lindblad master equation can be written in its generic form as follows \cite{qin,solta,nae}: \begin{equation} \label{lindblad} \frac{d}{dt}\hat{\rho}(t)=-i[\hat{H}(t),\hat{\rho}(t)]+\sum_i\gamma(\nu_i\rho_t\nu^{\dagger}_i-\frac{1}{2}\nu^{\dagger}_i\nu_i\rho_t-\frac{1}{2}\rho_t\nu^{\dagger}_i\nu_i) \end{equation} The first term on the right side of the above equation is the Liouvillian part, which represents the unitary evolution of the density matrix. The second term is the Lindbladian, which describes the system's interaction with its environment and represents decoherence effects \cite{ale}. The Lindblad master equation is local in time and Markovian. The $\nu_i$ operators are not necessarily Hermitian and are known as Kraus operators, satisfying ${\sum_L\hat{k}_{L}^\dagger \hat{k}_{L}}=1$. A simple application of this equation in quantum optics is photon emission from a two-level atom in free space. In this situation, the density matrix reduces to a $2\times 2$ matrix, and the $\nu$ operators are reduced to Pauli lowering and raising operators. Consequently, the Lindblad equation is simply transformed into four first-order linear differential equations \cite{che}. Equation \eqref{lindblad} can be symbolically expressed as follows: \begin{equation} \label{symblindblad} \frac{d}{dt}\hat{\rho}(t)=L^{\dagger}_i\hat{\rho}(t) \end{equation} where $L$ is a Lindbladian super-operator that acts on the density matrix and generates its temporal evolution.
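To make the structure of the Lindblad equation above concrete, the following minimal Python sketch integrates the two-level example just mentioned with a fourth-order Runge--Kutta step and compares the excited-state population with the expected exponential decay $e^{-\gamma t}$. The numerical values of the transition frequency, the decay rate, the time step and the initial state are illustrative choices of ours and are not taken from the references.
\begin{verbatim}
import numpy as np

# Two-level atom: |0> = ground, |1> = excited (illustrative parameters).
omega0 = 1.0        # transition frequency
gamma  = 0.1        # spontaneous-emission rate
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_- = |0><1|
sp = sm.conj().T                                 # sigma_+
H  = 0.5 * omega0 * np.array([[-1, 0], [0, 1]], dtype=complex)

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad equation with a single jump operator."""
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))
    return comm + diss

def rk4_step(rho, dt):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Start in the equal superposition (|0> + |1>)/sqrt(2).
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

dt, steps = 0.01, 2000
for _ in range(steps):
    rho = rk4_step(rho, dt)

t = dt * steps
print("excited-state population:", rho[1, 1].real)   # ~ 0.5 * exp(-gamma * t)
print("expected value          :", 0.5 * np.exp(-gamma * t))
\end{verbatim}
With these parameters the population follows the analytic decay $\rho_{11}(t)=\tfrac{1}{2}e^{-\gamma t}$, while the off-diagonal element decays at the rate $\gamma/2$, which is the behavior encoded in the four linear differential equations referred to above.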
Since $L^{\dagger}_i$ is a linear map in the operator space, it is referred to as a super-operator. In the present work, the evolution of the reduced density matrix is investigated using the Lindblad master equation. In the following, the employed model is described in detail. \section{Model and Methods} \begin{figure} \caption{Schematic illustration of the ion channel states corresponding to the (a) $\vert 0\rangle$, (b) $\vert 1\rangle$ and (c) $\vert 2\rangle$ states.} \label{filter} \end{figure} Herein, we present our model for discussing the high speed of selectivity in ion channels using the Lindblad master equation. The mechanism of ion permeation through the selectivity filter in ion channels has been investigated using diverse experimental techniques such as radiotracer flux assays, single-channel electrophysiological measurements, x-ray crystallography, and molecular dynamics (MD) simulations. These experiments support two mechanisms commonly referred to as the ``knock-on'' (water molecules may or may not be present in this mechanism) and ``hard-knock'' (water molecules are ignored in it) permeation models. To determine whether these experimental features can be explained by either of the two permeation models, Huong T. Kratochvil \textit{et al.} conducted molecular dynamics (MD) simulations and computed 2D IR spectra for all relevant ion configurations \cite{kra}. They showed that the knock-on model with water molecules is in best agreement with experiment. Accordingly, we considered the presence of water molecules in our model. We examined our model on a relatively short time scale. Due to the large fluctuations of the ions in the ion channel, there is probably no strong Coulombic repulsion on this time scale. To this end, an ion channel with four sites is considered \cite{kra} (FIG. \ref{filter}). In this model we consider a three-state system: 1) potassium ions in the first and third sites and water molecules in the second and fourth sites; 2) potassium ions in the second and fourth sites with water molecules in the first and third sites; 3) a potassium ion is released to the environment from the previous state. The jump from site 4 to the environment is considered as a 3 to 1 transition. Jumping between these sites is associated with an energy barrier. Due to the high speed of passing, there is always a state transition from site 1 to 2, 2 to 3, or 3 to 1. The system can be represented in a three-dimensional Hilbert space with the three states $\vert 0\rangle$, $\vert 1\rangle$ and $\vert 2\rangle$, similar to a spin-1 system. The evolution of the density matrix is obtained using the Lindblad master equation as below ($\hslash$ is assumed to be one in the rest of the paper): \begin{equation} \label{lindblad} \frac{d}{dt}\hat{\rho}(t)=-i[\hat{H}(t),\hat{\rho}(t)]+L[\hat{\rho}(t)] \end{equation} Also, the time-dependent Hamiltonian is considered as: \begin{equation} \label{totalH} \hat{H}=\hat{H}_0+\hat{H}_1 \end{equation} \begin{align} \label{H0} \hat{H}_0=\omega_0\hat{S}_Z \end{align} \begin{equation} \label{H1} \hat{H}_1=c(\vert 1\rangle\langle 0\vert +\vert 2\rangle\langle 1\vert +\vert 0\rangle\langle 2\vert) \end{equation} where $\hat{H}_0$ is the system Hamiltonian with transition frequency $\omega_0$, $\hat{S}_Z$ is the $z$-component spin-1 operator, and $\hat{H}_1$ is associated with the coupling of the system to its environment.
Here, $c$ is the hopping rate coefficient, and $L$ is a super-operator: \begin{equation} \label{superoperator} L[\hat{\rho}]=\gamma(\hat{S}_-\hat{\rho}\hat{S}_+ -\frac{1}{2}\hat{S}_+\hat{S}_-\hat{\rho} -\frac{1}{2}\hat{\rho}\hat{S}_+\hat{S}_-) \end{equation} where $\hat{S}_\pm=\dfrac{1}{\sqrt{2}}(\hat{S}_x \pm i\hat{S}_y )$ are the spin-1 ladder operators. \\The Hamiltonian $\hat{H}_1$ describes the transition between states through the hopping of particles from one site to the next. To derive the corresponding equations in the Dirac picture, the system density matrix and the interaction Hamiltonian are rewritten as follows: \begin{equation} \label{density} \hat{\rho}^{D}(t)=e^{it\hat{H}_0}\hat{\rho}(t)e^{-it\hat{H}_0} \end{equation} \begin{equation} \label{interaction} \hat{H}^{D}_{1} (t)=e^{it\hat{H}_0}\hat{H}_1e^{-it\hat{H}_0} \end{equation} Using Equations \eqref{density} and \eqref{interaction} and going to the Dirac picture, the Lindblad equation becomes: \begin{equation} \label{dirac} \frac{d}{dt}\hat{\rho}^{D}(t)=-i[\hat{H}^{D}_{1}(t),\hat{\rho}^{D}(t)]+L^{D}[\hat{\rho}^{D}(t)] \end{equation} where \begin{equation} \label{superoperatordirac} L^{D}[\hat{\rho}^{D}]=\gamma(\hat{S}_-\hat{\rho}^{D}\hat{S}_+ -\frac{1}{2}\hat{S}_+\hat{S}_-\hat{\rho}^{D} -\frac{1}{2}\hat{\rho}^{D}\hat{S}_+\hat{S}_-) \end{equation} According to the above discussion, equation \eqref{dirac} can be rewritten in the basis of the eigenstates of the $\hat{S}_z$ operator as follows: \begin{align} \label{num} \nonumber \frac{d}{dt} \hat{\rho}^{D_{00}}&=-ic(\hat{\rho}^{D_{20}}-\hat{\rho}^{D_{01}})-2\gamma\hat{\rho}^{D_{00}}\\ \nonumber \frac{d}{dt}\hat{\rho}^{D_{01}}&=e^{i\omega_{0}t}(ic\hat{\rho}^{D_{21}}+ic\hat{\rho}^{D_{02}}-2\gamma\hat{\rho}^{D_{01}})\\ \nonumber \frac{d}{dt}\hat{\rho}^{D_{02}}&=e^{2i\omega_{0}t}(-ic\hat{\rho}^{D_{22}}+ic\hat{\rho}^{D_{00}}-\gamma\hat{\rho}^{D_{02}})\\ \nonumber \frac{d}{dt}\hat{\rho}^{D_{10}}&=e^{-i\omega_{0}t}(-ic\hat{\rho}^{D_{00}}+ic\hat{\rho}^{D_{11}}-2\gamma\hat{\rho}^{D_{10}})\\ \frac{d}{dt}\hat{\rho}^{D_{11}}&=-ic(\hat{\rho}^{D_{01}}-\hat{\rho}^{D_{12}})+2\gamma(\hat{\rho}^{D_{00}}-\hat{\rho}^{D_{11}})\\ \nonumber \frac{d}{dt}\hat{\rho}^{D_{12}}&=-ice^{i\omega_{0}t}(\hat{\rho}^{D_{02}}-\hat{\rho}^{D_{10}})+\gamma e^{i\omega_{0}t}(2\hat{\rho}^{D_{01}}-\hat{\rho}^{D_{12}})\\ \nonumber \frac{d}{dt}\hat{\rho}^{D_{20}}&=-ice^{-2i\omega_{0}t}(\hat{\rho}^{D_{10}}-\hat{\rho}^{D_{21}})-\gamma e^{-2i\omega_{0}t}\hat{\rho}^{D_{20}}\\ \nonumber \frac{d}{dt}\hat{\rho}^{D_{21}}&=-ice^{-i\omega_{0}t}(\hat{\rho}^{D_{11}}-\hat{\rho}^{D_{22}})+\gamma e^{-i\omega_{0}t}(2\hat{\rho}^{D_{10}}-\hat{\rho}^{D_{21}})\\ \nonumber\frac{d}{dt}\hat{\rho}^{D_{22}}&=-ic(\hat{\rho}^{D_{12}}-\hat{\rho}^{D_{21}})+2\gamma\hat{\rho}^{D_{11}} \end{align} \section{Results and Discussion} \begin{center} \begin{figure*} \caption{The probability of finding the system in the $|0\rangle$, $|1\rangle$ and $|2\rangle$ states versus time, with $\omega=1\times 10^8\,s^{-1}$.} \label{prob1} \label{prob2} \end{figure*} \end{center} The mechanism behind the high throughput rate in $\text{K}^+$ channels and its apparent contradiction with their high selectivity is still an open problem. However, it has been suggested that quantum coherent hopping is involved in the process \cite{Gan,vaz,sala,bhat}. Nevertheless, the system is coupled to a biological environment, which causes the system to go through a decoherence process. This decoherence is a major obstacle to understanding the mechanism in the quantum regime.
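For reference, the model dynamics can also be integrated numerically in the Schr\"odinger picture rather than through the component equations \eqref{num}. The following minimal Python sketch does so using the standard spin-1 ladder operators (whose nonzero matrix elements are $\sqrt{2}$), which reproduces the damping coefficients appearing in \eqref{num}; the parameter values, the time step and the total integration time are illustrative choices only.
\begin{verbatim}
import numpy as np

# Basis ordering |0>, |1>, |2>  (S_z eigenvalues +1, 0, -1), hbar = 1.
# All numerical values below are illustrative, not fitted to experiment.
omega0 = 1.0e8      # transition frequency (1/s)
c      = 1.0e8      # hopping rate coefficient (1/s)
gamma  = 0.5e7      # relaxation rate (1/s)

Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sm = np.sqrt(2) * np.array([[0, 0, 0],
                            [1, 0, 0],
                            [0, 1, 0]], dtype=complex)   # spin-1 lowering operator
Sp = Sm.conj().T

ket = np.eye(3, dtype=complex)
H0 = omega0 * Sz
H1 = c * (np.outer(ket[1], ket[0]) + np.outer(ket[2], ket[1]) + np.outer(ket[0], ket[2]))
H  = H0 + H1

def rhs(rho):
    """Schroedinger-picture Lindblad right-hand side of the three-level model."""
    unitary = -1j * (H @ rho - rho @ H)
    dissip  = gamma * (Sm @ rho @ Sp - 0.5 * (Sp @ Sm @ rho + rho @ Sp @ Sm))
    return unitary + dissip

def rk4_step(rho, dt):
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Initial state: equal superposition of the three states.
psi = (ket[0] + ket[1] + ket[2]) / np.sqrt(3)
rho = np.outer(psi, psi.conj())

dt, total_time = 1.0e-11, 2.0e-7      # seconds
for _ in range(int(total_time / dt)):
    rho = rk4_step(rho, dt)

# Populations after the decoherence time; the weight concentrates in |2>.
print("populations:", np.real(np.diag(rho)))
\end{verbatim}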
To study the problem, we first solved the system of equations \eqref{num} with the initial state of the system being the superposition \begin{align} \label{is} \vert\psi_i\rangle=\frac{1}{\sqrt{3}}(|0\rangle+|1\rangle+|2\rangle). \end{align} In the absence of the environment (i.e., with $\gamma=0$) the probabilities of the states oscillate with time, as we expect. However, the probability of finding the system in the $|1\rangle$ state tends to zero as time passes (see Fig.~\ref{prob1}). This shows us that $|1\rangle$ is just an intermediary state with a short lifetime. The use of the spin-1 matrix in the system Hamiltonian means that these states are not degenerate. The difference in energy of these states causes them to behave differently. In the spin model, the middle level is treated as the intermediary state, and we chose its energy accordingly. It is expected that the system is more likely to be found in the states where a potassium ion enters the channel from the environment or enters the environment from the channel ($|0\rangle$ and $|2\rangle$, respectively), and less likely to be found in the intermediary state. This expectation is consistent with our results. On the other hand, when decoherence is present (with $\gamma>0$) the probabilities oscillate and tend to a finite value after the decoherence time, as shown in Fig.~\ref{prob2}. As the value of $\gamma$ increases, the system equilibrates faster. The appropriate $\gamma$ for our system varies between $\gamma=0.5\times10^{7}\,s^{-1}$ and $2\times10^{8}\,s^{-1}$ \cite{vaz}. The larger the value in this range, the more fluctuations we see. After the decoherence time the system is almost always found in the lowest-energy state, $|2\rangle$. In other words, the system prefers to cross and release the particles to the environment quickly. Also, Figs. \ref{real} and \ref{imag} show the evolution of the off-diagonal elements of the density matrix of the system. Surprisingly, although the off-diagonal elements tend to zero as time passes, the fast hopping of the particles seems to prevent them from becoming exactly zero, and some oscillation can be observed even after the decoherence time. This behavior encourages us to study the coherence of the system. If the selectivity process is to be quantum mechanical, the system is required to remain coherent. \begin{center} \begin{figure*} \caption{The evolution of the (a) real parts and (b) imaginary parts of the off-diagonal elements of the density matrix versus time, with $\gamma=0.5\times10^7\,s^{-1}$.} \label{real} \label{imag} \end{figure*} \end{center} \begin{table}[!t] \centering \caption{Requirements for a coherence measure $C(\hat{\varrho})$. $S$ is the von Neumann entropy and $\Delta$ represents the dephasing operator. \label{table1}} \begin{tabular}{p{18ex} p{32ex}} \hline Postulate & Definition \\ \hline Nonnegativity & $C(\hat{\varrho})\geq 0$, in general.
\\[1ex] Monotonicity & {\small $C$ does not increase under the action of incoherent operations.} \\[1ex] Strong monotonicity & {\small $C$ does not increase on average under selective incoherent operations.} \\[1ex] Convexity & {\small $C$ is a convex function of the state.} \\[1ex] Uniqueness & {\small For any pure states $\vert\psi\rangle$, $C$ takes the form: \linebreak $C(\vert\psi\rangle\langle\psi\vert)=S(\Delta[\vert\psi\rangle\langle\psi\vert])$.} \\[1ex] Additivity & {\small $C$ is additive under tensor products.} \\[1ex] \hline \end{tabular} \end{table} Now, let us estimate the order of magnitude of the decoherence time for our problem. Using the thermal de Broglie wavelength $\lambda_{dB}=1/\sqrt{2mk_BT}$, we can define a corresponding decoherence time as \begin{align} \label{dtime} \tau_D=\frac{\Delta X^2}{\gamma \lambda_{dB}^2}, \end{align} where $\Delta X$ is the dispersion in position space and \begin{align} \label{gamma} \gamma=\gamma_0\omega\bar{n}\frac{r^2}{1+r^2}, \end{align} with $r=\Lambda/\omega$ and $\bar{n}=(e^{\omega/k_BT}-1)^{-1}$. $\Lambda$ represents the cutoff frequency and $\bar{n}$ denotes the mean population of the environment at temperature $T$. In our case, we introduced states consisting of two $K^{+}$ ions and two water molecules, each interacting with the channel wall with its own dephasing rate $\gamma$. In this case, one can show that the system evolution can be written as a single Lindblad equation whose dephasing rate is the sum of all the individual dephasing rates. The value of the decoherence time in the decoherence formalism depends strongly on the value of this dephasing rate (equation \eqref{dtime}). We evaluated the decoherence time at body temperature, inserting the values of the mass, $\gamma$, and the dispersion in position space into Equations \eqref{dtime} and \eqref{gamma}. We used the reduced mass of the potassium ion and the water molecule. The value of the dispersion greatly depends on the frequency of the particles and varies between $1\times 10^{-12}$ and $1\times 10^{-10}\,m$. Changes in the value of the dephasing rate drastically change the value of the decoherence time. In this regard, we plotted the effect of the dephasing rate and of the particle frequencies. \begin{figure} \caption{Changes of the decoherence time as the dephasing rate increases, for a variety of system frequencies.} \label{dectime} \end{figure} As shown in Fig. \ref{dectime}, an increase in the dephasing rate decreases the decoherence time; on the other hand, an increase in the system frequency increases the decoherence time. Accordingly, we considered the dephasing rate in the range of $\gamma=1\times10^6\,s^{-1}$ to $\gamma=1\times10^8\,s^{-1}$, as reported in \cite{vaz}. In this range of the dephasing rate, and with the system frequency in the range of $1\times10^9\,s^{-1}$ to $1\times10^{12}\,s^{-1}$, the decoherence time varies between $1\times10^{-10}\,s$ and $1\times10^{-7}\,s$. If we consider other mechanisms, in the absence of water molecules, our model can still predict the overall behavior of the system. Due to the change in frequency and reduced mass, the overall behavior of the system does not change, but the decoherence time coefficients and the slope of the graph may change. This is because we reduced the whole mechanism to a three-level process, so that our model treats the states as a three-step process.
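As a numerical cross-check of the decoherence-time formula \eqref{dtime}, the short Python sketch below evaluates $\tau_D$ at body temperature with SI units restored ($\hbar$ written explicitly), using the reduced mass of a potassium ion and a water molecule; the sample values chosen for the dispersion $\Delta X$ and the dephasing rate $\gamma$ are only points taken from the ranges quoted above.
\begin{verbatim}
import numpy as np

# Order-of-magnitude estimate of tau_D = DeltaX^2 / (gamma * lambda_dB^2).
# The paper sets hbar = 1; SI units are restored here.
hbar = 1.054571817e-34      # J s
kB   = 1.380649e-23         # J / K
u    = 1.66053906660e-27    # atomic mass unit, kg

T     = 310.0                          # body temperature, K
m_K   = 39.0983 * u                    # potassium ion mass
m_H2O = 18.0153 * u                    # water molecule mass
m_red = m_K * m_H2O / (m_K + m_H2O)    # reduced mass used in the text

lambda_dB = hbar / np.sqrt(2.0 * m_red * kB * T)   # thermal de Broglie wavelength

def tau_D(delta_x, gamma):
    """Decoherence time for dispersion delta_x (m) and dephasing rate gamma (1/s)."""
    return delta_x ** 2 / (gamma * lambda_dB ** 2)

# Sample points from the ranges discussed in the text (illustrative only).
for delta_x in (1e-12, 1e-11, 1e-10):
    for gamma in (1e6, 1e7, 1e8):
        print(f"dX = {delta_x:.0e} m, gamma = {gamma:.0e} 1/s "
              f"-> tau_D = {tau_D(delta_x, gamma):.2e} s")
\end{verbatim}
Larger dispersions and smaller dephasing rates give correspondingly longer decoherence times, in line with the trends shown in Fig.~\ref{dectime}.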
The decoherence time is not too short for a particle to cross the channel, but it is by no means guaranteed that the system remains coherent during the whole process. Therefore, we need a careful study of the coherence of the system. So far, many approaches have been proposed to quantify coherence. One of the most useful of them is the framework developed by Baumgratz \textit{et al.} \cite{lio,str}. Following this framework, a number of coherence quantifiers have been found. These include the relative entropy of coherence, the distillable coherence, the robustness of coherence, the coherence concurrence, and the coherence of formation \cite{bau,nap}. If a coherence quantifier satisfies all the postulates mentioned in TABLE \ref{table1}, it is called a coherence measure \cite{sol}. It has been proven that the distillable coherence and the relative entropy of coherence are equivalent and satisfy all the necessary conditions. Hence, they are considered as two basic coherence measures \cite{lio,hu}. Using these measures is mainly justified on the basis of physical intuition \cite{bau}. The distillable coherence is the optimal number of maximally coherent states which can be obtained from a state $\hat{\varrho}$ via incoherent operations \cite{str}. A simple expression for the distillable coherence was given by Winter and Yang as follows \cite{yan}: \begin{equation} \label{DistillableCoherence} C_d(\hat{\varrho})=S(\Delta[\hat{\varrho}])-S(\hat{\varrho}) \end{equation} \\where $\Delta[\hat{\varrho}]=\sum_{i=0}^{d-1}\vert i\rangle\langle i\vert\hat{\varrho}\vert i\rangle\langle i\vert$ is the dephasing operator and $S(\hat{\varrho})=-\mathrm{Tr}[\hat{\varrho} \log_{2}\hat{\varrho}]$ is the von Neumann entropy. Studying the changes of the distillable coherence, we find that, as expected, it tends to zero as time passes. In a hypothetical case with no hopping ($c=0$), after the decoherence time the distillable coherence reaches zero, $C_d(\rho)=0$ (see the black line in Fig. \ref{dis}). On the other hand, we plotted the distillable coherence against time for different hopping rates in Fig. \ref{dis}. The distillable coherence still tends to zero, but interestingly, increasing the hopping rate causes the distillable coherence to oscillate. Therefore, the system remains coherent, even after the decoherence time. Moreover, a faster hopping rate increases the amplitude of the oscillations and thus the coherence. \begin{figure} \caption{The changes of the distillable coherence $C_d(\rho)$ versus time, with $\omega=1\times10^8\,s^{-1}$.} \label{dis} \end{figure} To support our findings, we calculate the second-order quantum coherence function $g^{(2)}$ (or the degree of coherence). At a fixed position, $g^{(2)}$ depends only on the time difference $\tau$ and is defined as \begin{align} \label{dc} g^{(2)}(\tau)=\frac{\langle E^{(-)}(t)E^{(-)}(t+\tau)E^{(+)}(t+\tau)E^{(+)}(t)\rangle}{\langle E^{(-)}(t)E^{(+)}(t)\rangle\langle E^{(-)}(t+\tau)E^{(+)}(t+\tau)\rangle} \end{align} where \begin{align} E^{(+)}=i(\frac{\hbar\omega_0}{2\epsilon_0V})^{1/2}\hat{S}_{-}e^{i(\bf{k.r}-\omega_0 t)} \end{align} is the positive-frequency part of the field operator with wave vector ${\bf k}$, and $E^{(-)}$ is its conjugate. For a coherent state $\vert\alpha\rangle$ the degree of coherence is $g^{(2)}(\tau)=1$. Also, for a field in a single-mode thermal state it can be shown that $g^{(2)}(0)=2=1+\vert g^{(1)}(0)\vert^2$, and $g^{(2)}(\tau)$ lies in the range $1\leq g^{(2)}(\tau)\leq2$, just as in the classical case.
On the other hand, in nonclassical systems such as number states the degree of coherence is in the range $0<g^{(2)}(\tau)\leq1$. In our case, for lower hopping rates $g^{(2)}(\tau)$ tends to $1/2$ after the decoherence time. By increasing the hopping rate $c$, not only do oscillations appear, but $g^{(2)}(\tau)$ also increases at high hopping rates, as shown in Fig. \ref{g2f}. This observation confirms that a high throughput rate is a necessary requirement for an ion channel to stay coherent. \begin{figure} \caption{The second-order degree of coherence versus time for different hopping rates, with $\omega=1\times10^8\,s^{-1}$.} \label{g2f} \end{figure} \section{Conclusion} The objective of this paper is to give a clear answer to the paradox of the coexistence of a very high throughput rate of $K^+$ ions in KcsA channels and their very high selectivity. In recent years, it has been suggested that quantum coherence is involved in the process. However, the biological temperature and the channel's coupling with its environment cause the system to lose coherence. In this regard, to investigate the problem, we studied a model that assumes the system is in a superposition of three states: i) potassium ions in the first and third sites and water molecules in the second and fourth sites; ii) potassium ions in the second and fourth sites with water molecules in the first and third sites; iii) a potassium ion is released to the environment from the previous state. As the system interacts with the environment, it goes through a decoherence process. We studied the process using the Lindblad master equation and solved the equations numerically. The results show that, measuring the system after the decoherence time, it can almost always be found in the state in which a potassium ion is released to the environment. This observation is in accordance with the high throughput rate, but it does not by itself necessitate quantum coherence in the system. Also, we discussed the decoherence time in the formalism of quantum decoherence for a variety of system frequencies and dephasing rates. Increasing the dephasing rate decreases the decoherence time; on the other hand, an increase in the system frequency results in a longer decoherence time. To study the system coherence directly, we used the distillable coherence as a coherence measure and also calculated the second-order coherence function. As we expect, the distillable coherence tends to zero after the decoherence time. However, at high throughput rates it oscillates away from zero, so that the system remains coherent. Moreover, increasing the hopping rate increases the amplitude of the oscillations. The coherence function of the system tends to $1/2$, and by increasing the throughput rate the value of the coherence function after the decoherence time increases but remains less than $1$. This observation clearly shows that the system is coherent in ion channels with a high throughput rate, since systems that behave classically have second-order coherence functions greater than or equal to $1$. In this paper we have shown that not only is the high throughput rate not in conflict with high selectivity, but it is also necessary for the system to remain coherent and act in a quantum manner. In future work we will concentrate on how quantum coherence can act in the selectivity process. \section*{Additional information} Correspondence should be addressed to Afshin Shafiee [email: [email protected]]. \end{document}
\begin{document} \begin{titlepage} \noindent {\bf INTRODUCTION TO DIALECTICAL NETS} \vspace*{\baselineskip} \\ \indent ROBERT E. KENT \\ \indent Department of Electrical Engineering and Computer Science \\ \indent University of Illinois at Chicago, Chicago, Illinois \vspace*{\baselineskip} \\ \hspace*{\fill} ABSTRACT \hspace*{\fill} \vspace*{\baselineskip} \\ This paper initiates the dialectical approach to net theory. This approach views nets as special, but very important and natural, dialectical systems. By following this approach, a suitably generalized version of nets, called {\em dialectical nets}, can be defined in terms of the ``fundamental contradiction'' inherent in the structure of {\em closed preorders}. Dialectical nets are the least conceptual upper bound subsuming the notions of Petri nets, Kan quantification and transition systems. The nature of dialectical nets is that of logical dynamics, and is succinctly defined and summarized in the statement that ``dialectical nets are transition systems relativized to closed preorders, and hence are general predicate transformers''. \vspace*{\baselineskip} \\ \hspace*{\fill} INTRODUCTION \hspace*{\fill} \vspace*{\baselineskip} \\ Nets are extensively used to model system phenomena such as concurrency, conflict, synchronization, information flow etc. In order to model a variety of systems, nets come in a variety of forms including: condition/event nets using Boolean values, consumption/production nets using natural numbers, predicate/transition nets using colored tokens and formal polynomials, etc. In order to make mathematical sense out of this multiplicity of net models, and in order to be able to extend the net concept to first order and higher order logic, we stress the need for a proper mathematical base. The first step in the development of this base is recognition of the fact that values in nets (be they booleans, numbers, colored tokens, subsets, etc.) form a well-known mathematical structure called a {\em closed preorder}. This is the established and accepted structure for predicates in first order and higher order logic and should also be used in net theory. The second step in the development of this base is recognition of the fact that transitions in nets (be they precondition/postcondition, consumption/production, conjunction/implication, existential-quantification/substitution, etc.) are generalized inverse or dialectical activities and form a well-known mathematical structure called an {\em adjunction}. After development of a proper mathematical base for nets we introduce a generalized model for such system phenomena called {\em dialectical nets}. Dialectical nets are special, but very important, forms of dialectical systems based upon the internal contradiction in closed preorders. 
The theory of dialectical (motion in) systems has already been applied to four important areas of computer science: \begin{itemize} \item concurrent systems \cite{Kent87a}, where it distinguishes the notions of observational equivalence and dialectical motion of transition systems; \item OBJ-like functional programming, where it generalizes the notion of {\em institutions}, and is based upon the doctrinal diagram associated with algebraic theories; \item generalized Petri net theory [this paper], where it unites notions of {\em nets} with the notion of {\em predicate transformers}, and is based upon the notions of bimodules, Kan quantification and normed categories; and \item first order logic \cite{Kent87b}, where it unifies the semantics of Horn clause logic with that of relational databases, and is based upon the notion of model doctrines. \end{itemize} The theory of dialectical systems was originally developed out of a desire to understand mathematically the obvious structural similarities between the "parallel composition" of concurrent systems and the "natural join" of database relations. The dialectical view of nature \cite{Bernow&Raskin} is ancient: it was discussed in the earliest history of ideas by various Presocratic Greek philosophers, most notably Heraclitus. Dialectical systems contain the following essential aspects: 1. based upon contradictions or "opposing tendencies"; 2. interacting objects or entities; 3. movement, motion or development; and 4. reproduction or renewal of entities. All of these aspects are present in the parallel composition of concurrent interacting systems and the natural join of database relations. They are also present as basic concepts in the theory of nets. F.W. Lawvere gave the theory of dialectical systems its most succinct expression: {\sc Category Theory $\equiv$ Objective Dialectics}. Indeed dialectics invests the dynamical view of systems theory with the fundamental ideas of category theory, such as adjunctions, limits, tensors and Kan extensions; but in turn, it gives these categorical notions that dynamical view. In short, the theory of dialectics studies both the ``motion (development, or growth) of structure'' and the ``structure of motion''. \vspace*{\baselineskip} \\ \end{titlepage} \hspace*{\fill} ENRICHED NETS \hspace*{\fill} \vspace*{\baselineskip} \\ In the theory of dialectics inverse activities such as consumption and production are fundamental structural units called opposing tendencies, or contradictions. A mathematical formulation of dialectics exists, and is called category theory: {\sc category theory $\equiv$ objective dialectics}. In category theory dialectical contradictions are represented by adjunctions. Given two monotonic functions ${\cal B} \stackrel{f}{\longrightarrow} {\cal A}$ and ${\cal B} \stackrel{u}{\longleftarrow} {\cal A}$ flowing in opposite directions between two preorders ${\cal A} = \pair{A}{\preceq_{A}}$ and ${\cal B} = \pair{B}{\preceq_{B}}$, the pair $\pair{f}{u}$ is called an {\em adjunction} (or an adjoint pair, or a Galois connection, or generalized inverses, or opposing tendencies, or a dialectical contradiction), when they satisfy the equivalence axiom: \( f(b) \preceq_{A} a \mbox{ iff } b \preceq_{B} u(a) \). In this case, $f$ is called the left adjoint (or left aspect) and $u$ is called the right adjoint (or right aspect). The equivalence axiom can be interpreted as the ``dialectical tension'' which exists between the left and right aspects within the complementary pair. 
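As a small computational illustration of the equivalence axiom (a brute-force sketch of ours, with an arbitrarily chosen finite function; nothing beyond the Python standard library is assumed), the direct-image and inverse-image maps of a function form such an adjoint pair between powerset preorders ordered by inclusion: $f(B) \preceq_{A} a$ here reads ``the direct image of $B$ is contained in $A$'', and $b \preceq_{B} u(a)$ reads ``$B$ is contained in the inverse image of $A$''.

\begin{verbatim}
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
                         for c in combinations(xs, r)]

# An arbitrary illustrative function g : X -> Y.
X, Y = {0, 1, 2}, {'a', 'b'}
g = {0: 'a', 1: 'a', 2: 'b'}

def direct_image(B):            # left aspect  f : P(X) -> P(Y)
    return frozenset(g[x] for x in B)

def inverse_image(A):           # right aspect u : P(Y) -> P(X)
    return frozenset(x for x in X if g[x] in A)

# Equivalence axiom: f(B) below A  iff  B below u(A), with "below" = inclusion.
assert all((direct_image(B) <= A) == (B <= inverse_image(A))
           for B in powerset(X) for A in powerset(Y))
print("direct image -| inverse image verified on all subsets")
\end{verbatim}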
We symbolize this adjunction by the functional notation $(f \dashv u) \! : \! {\cal B} \longrightarrow {\cal A}$ with ${\cal B}$ (arbitrarily) the source of the adjunction and ${\cal A}$ the target. The following two conditions are equivalent: (1) $(f \dashv u) \! : \! {\cal B} \longrightarrow {\cal A}$; (2) unit axiom $\mbox{Id}_{B} \preceq f \cdot u$; that is, $b \preceq_{B} u(f(b))$ for all $b \in B$; and counit axiom $u \cdot f \preceq \mbox{Id}_{A}$; that is, $f(u(a)) \preceq_{A} a$ for all $a \in A$. Either of these equivalent conditions implies the condition: (3) $u \cdot f \cdot u \equiv u$; and $f \cdot u \cdot f \equiv f$. Ordinary inverses are generalized inverses (contradictions): two monotonic functions ${\cal B} \stackrel{f}{\longrightarrow} {\cal A}$ and ${\cal B} \stackrel{f^{-1}}{\longleftarrow} {\cal A}$ which are inverse to each other, $f \cdot f^{-1} = {\rm Id}_B$ and $f^{-1} \cdot f = {\rm Id}_A$, form an adjunction $(f \dashv f^{-1}) \! : \! {\cal B} \longrightarrow {\cal A}$. The notion of adjoint pairs can be generalized from the realm of preorders and monotonic functions to the realm of categories and functors. The fundamental algebraic structure used to define the dynamics of consumption/production Petri nets is that of the natural numbers ${\bf N}$. Natural numbers represent quantities of various resources in systems, which are distributed over, and indexed by, places. Certain properties of natural numbers are essential in the definition of the structure and behavior of nets. These properties form a coherent and very important mathematical structure called a closed preorder. A {\em closed preorder} \cite{Lawvere73} ${\bf V} = \quintuple{V}{\preceq}{\oplus}{\Rightarrow}{e}$ consists of the following data and axioms: (1) $\quadruple{V}{\preceq}{\oplus}{e}$ is a monoidal preorder, or ordered monoid, with $\pair{V}{\preceq}$ a preorder and $\triple{V}{\oplus}{e}$ a monoid, where the binary operation $\oplus \! : \! \product{V}{V} \longrightarrow V$, called {\bf V}-composition, is monotonic: if both $u \preceq u'$ and $v \preceq v'$ then $(u \oplus v) \preceq (u' \oplus v')$; (2) $\oplus$ is symmetric, or commutative; that is, $a \oplus b = b \oplus a$ for all elements $a,b \in V$; and (3) {\bf V} satisfies the closure axiom: the monotonic {\bf V}-composition function $(\:) \oplus b \! : \! V \longrightarrow V$ has a specified right adjoint $b \!\Rightarrow\! (\:) \! : \! V \longrightarrow V$ for each element $b \in V$, called {\bf V}-implication, or symbolically $\left( (\:) \oplus b \right) \dashv \left( b \!\Rightarrow\! (\:) \right) \! : \! V \longrightarrow V$; that is, $a \oplus b \preceq c$ iff $a \preceq b \!\Rightarrow\! c$ for any triple of elements $a,b,c \in V$. The adjunction in the closure axiom is what we referred to as the fundamental contradiction inherent in the mathematical structure of the closed preorder {\bf V}. The counit axiom for the closure adjunction is generalized {\em modus ponens}: $((b \!\Rightarrow\! a) \oplus b) \preceq a$ for all elements $a,b \! \in \! V$. When the unit axiom of the closure adjunction is equivalence, $(b \!\Rightarrow\! (a \oplus b)) \equiv a$ for all elements $a,b \! \in \! V$, the closed preorder {\bf V} is said to be {\em coreflective}. The commutative, associative and unital binary operation $\oplus$ is sometimes called a tensor product. 
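The closure axiom can be checked by brute force in small cases. The following is a sketch of ours for the natural-number structure listed later among the examples (downward order $\geq$ playing the role of $\preceq$, composition $+$, implication truncated subtraction), over an arbitrarily chosen finite window of naturals:

\begin{verbatim}
# Closed preorder N = (naturals, >=, +, truncated subtraction, 0):
# "a below b" is a >= b, composition is +, and implication b => c is
# c minus b truncated at 0 (the specified right adjoint of (.) + b).

def below(a, b):            # a "precedes" b in the downward order
    return a >= b

def tensor(a, b):           # V-composition
    return a + b

def implies(b, c):          # V-implication  b => c
    return max(c - b, 0)

N = range(0, 20)            # a finite illustrative window of naturals

# Closure axiom:  (a tensor b) below c   iff   a below (b => c)
assert all(below(tensor(a, b), c) == below(a, implies(b, c))
           for a in N for b in N for c in N)
print("closure axiom verified on 0..19")
\end{verbatim}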
We usually also assume that our closed preorders are bicomplete; that is, the supremum $\bigvee B$ and the infimum $\bigwedge B$ exist (and are unique up to equivalence $\equiv$) for all subsets $B \subseteq V$. When the tensor product $\oplus$ is the binary infimum or meet $\wedge$ and the unit $e$ is the top element $\top_V$, the closed preorder ${\bf V} = \quintuple{V}{\preceq}{\wedge}{\Rightarrow}{\top_V}$ is called a {\em cartesian closed preorder} (or a bicomplete Heyting prealgebra, or a locale). The context of cartesian closed preorders is the context of traditional logic. A characteristic property of cartesian closed preorders is idempotency: $v \oplus v = v \wedge v = v$ for all elements $v \! \in \! V$. In a cartesian closed preorder, and even in an arbitrary closed preorder, we regard $V$ as being a set of generalized truth values. A closed preorder is normal when the unit is the top element $e = \top_V$ and {\bf V}-implication is directed-continuous: $b \!\Rightarrow\!(\bigvee_{d \in D}d) \equiv \bigvee_{d \in D}(b \!\Rightarrow\! d)$ for all directed subsets $D \subseteq V$. For normal closed preorders $a \oplus b \preceq a \wedge b$ for all elements $a,b \! \in \! V$. Cartesian closed preorders are normal. A pair ${\cal X} = \pair{X}{d_X}$ consisting of a set $X$ and a function $d_X \! : \! \product{X}{X} \longrightarrow V$ is called a {\em quasi} {\bf V}-{\em space} when it satisfies the triangle (or transitivity) axiom $d_X(x_{1},x_{2}) \oplus d_X(x_{2},x_{3}) \preceq d_X(x_{1},x_{3})$ for all triples of elements $x_{1},x_{2},x_{3} \in X$; and the zero (or reflexivity) axiom $e \preceq d_X(x,x)$ for all elements $x \in X$. The quasi {\bf V}-space ${\cal X} = \pair{X}{d_X}$ is a {\bf V}-space when it satisfies the additional condition: if $e \preceq d_X(x_{1},x_{2})$ and $e \preceq d_X(x_{2},x_{1})$ then $x_{1} = x_{2}$. The function $d_X$ is called a metric. We interpret $d_X$ to be either a generalized distance function or a fuzzy preorder. In general our metrics are asymmetrical: $d_X(x_{1},x_{2}) \neq d_X(x_{2},x_{1})$. Any quasi {\bf V}-space ${\cal X} = \pair{X}{d_X}$ can be symmetrized by defining $d_X^{\rm sym}(x_{1},x_{2}) = d_X(x_{1},x_{2}) \oplus d_X^{\rm op}(x_{1},x_{2})$ where $d_X^{\rm op}(x_{1},x_{2}) = d_X(x_{2},x_{1})$ is the dual or opposite metric. The set of truth values ${\cal V} = \pair{V}{d_V}$, where $d_V(v_1,v_2) = v_1 \!\Rightarrow\! v_2$, is a quasi {\bf V}-space. Any set $X$ can be viewed as a discrete {\bf V}-space $X = \pair{X}{d_X} = X^{\rm op}$, where $d_X(x,x') = e \mbox{ if } x \!=\! x', = \bot_V \mbox{ if } x \!\not=\! x'$. Associated with every quasi {\bf V}-space ${\cal X} = \pair{X}{d_X}$ is an underlying preorder $\Box{\cal X} = \pair{X}{\preceq_X}$ where $x \preceq_X x'$ when $e \preceq d_X(x,x')$, and $x$ and $x'$ are unrelated when $e \not\preceq d_X(x,x')$. So the characteristic monotonic function for the order $\preceq_X$ is $\kappa_{\preceq_X} = d_X \cdot \Box_V \! : \! \product{\pair{X}{\preceq_X}^{\rm op}}{\pair{X}{\preceq_X}} \rightarrow \pair{V}{\preceq} \rightarrow \pair{2}{\leq} = \{0\leq1\}$, where $\Box_V = e \!\preceq\!(\:)$ is the usual characteristic function for the principal filter $\uparrow_V\!(e) \subseteq V$; that is, $\Box_V(v) = 1$ if $e \preceq v$, and $\Box_V(v) = 0$ otherwise. Note that $\Box({\cal X}^{\rm sym}) = (\Box{\cal X})^{\rm sym} = \pair{X}{\equiv_X}$ and that $\Box({\cal X}^{\rm op}) = (\Box{\cal X})^{\rm op} = \pair{X}{\succeq_X}$. For a {\bf V}-space the underlying preorder is a partial order. 
For a symmetric quasimetric {\bf V}-space the underlying preorder is an equivalence relation. For the space of generalized truth values ${\cal V} = \pair{V}{d_V}$, since $e \preceq d_V(v_{1},v_{2})$ iff $v_{1} \preceq v_{2}$, the underlying preorder is the given order on ${\bf V}$. A {\bf V}-{\em morphism} $f \! : \! {\cal X} \longrightarrow {\cal Y}$ between two quasi {\bf V}-spaces ${\cal X} = \pair{X}{d_X}$ and ${\cal Y} = \pair{Y}{d_Y}$ is a function $f \! : \! X \longrightarrow Y$ which satisfies the condition $d_X(x,x') \preceq d_Y(f(x),f(x'))$ for all $x,x' \in X$. {\bf V}-spaces and {\bf V}-morphisms form the category ${\bf Space}_V$. By modus ponens, $(\:) \oplus v \! : \! {\cal V} \longrightarrow {\cal V}$ is a {\bf V}-morphism for all elements $v \! \in \! V$. By transitivity of $d_V$, $v \!\Rightarrow\! (\:) \! : \! {\cal V} \longrightarrow {\cal V}$ is a {\bf V}-morphism for all elements $v \! \in \! V$. Given any two quasi {\bf V}-spaces ${\cal X}$ and ${\cal Y}$ the set of all {\bf V}-morphisms from ${\cal X}$ to ${\cal Y}$ is a quasi {\bf V}-space ${\cal Y}^{\cal X}$, called the exponential quasi {\bf V}-space of $\cal X$ and ${\cal Y}$, whose metric $d$, called the pointwise inf metric, is defined by $d(f,g) = \bigwedge_{x \in X} d_Y(f(x),g(x))$. Notice that the metric $d_X$ is not used to define $d$. The metric $d_X$ is only used to restrict admission to the underlying set of ${\cal Y}^{\cal X}$. In particular, the exponential space ${\cal V}^{{\cal X}}$ of all {\bf V}-valued {\bf V}-morphisms on ${\cal X}$ is a quasi {\bf V}-space with the inf metric $d(\phi,\psi) = \bigwedge_{x \in X} d_V(\phi(x),\psi(x)) = \bigwedge_{x \in X} [\phi(x) \!\Rightarrow\! \psi(x)]$. We interpret an element of ${\cal V}^{{\cal X}}$, a {\bf V}-morphism $\mu \! : \! {\cal X} \longrightarrow {\cal V}$, to be an $X$-indexed marking $\mu \! : \! X \longrightarrow {\cal V}$ which satisfies the internal pointwise metric constraint $d_X$: $d_X(x,x') \preceq d_V(\mu(x),\mu(x')) = \mu(x) \!\Rightarrow\! \mu(x')$ for all $x,x' \in X$; or equivalently, by the $\oplus$-$\Rightarrow$ adjunction, $\mu(x) \oplus d_X(x,x') \preceq \mu(x')$ for all $x,x' \! \in \! X$. Such a marking $\mu \! : \! {\cal X} \rightarrow {\bf V}$ which is constrained by the metric $d_X$ is called a {\bf V}-({\em valued}) {\em predicate} over ${\cal X}$: ``predicates $\equiv$ metric-constrained markings''. The specification $\bot_V = d_X(x,x')$ is no constraint at all, while the specification $e \preceq d_X(x,x')$ is precisely the order-theoretic constraint $x \preceq x'$ requiring that $\mu$ satisfy $\mu(x) \preceq \mu(x')$. For the exponential space ${\cal V}^{{\cal X}}$ of {\bf V}-predicates over ${\cal X}$, since $e \preceq d(\phi,\psi)$ iff $e \preceq \bigwedge_{x \in X} d_V(\phi(x),\psi(x))$ iff $e \preceq d_V(\phi(x),\psi(x))$ for all $x \! \in \! X$ iff $\phi(x) \preceq \psi(x)$ for all $x \! \in \! X$, the underlying preorder is the usual {\em entailment order} on {\bf V}-predicates over ${\cal X}$. We relativize the notion of a consumption/production net by using as our fundamental domain of values an arbitrary closed preorder {\bf V} in place of the natural numbers {\bf N}. A {\bf V}-{\em net} {\sf N} is a quadruple ${\sf N} = \quadruple{T}{{\cal P}}{\iota}{o}$ consisting of: a set (of transition symbols) $T$, a quasi {\bf V}-space (of places) ${\cal P}$, an input weighting function $\iota \! : \! T \longrightarrow {\bf V}^{\cal P}$, and an output weighting function $o \! : \! T \longrightarrow {\bf V}^{\cal P}$. 
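As a concrete illustration of metric-constrained markings, here is a sketch of ours over ${\bf V} = {\bf N}$ with an arbitrarily chosen three-point quasi {\bf N}-space; with the downward order, the constraint $\mu(x) \oplus d_X(x,x') \preceq \mu(x')$ reads $\mu(x) + d_X(x,x') \geq \mu(x')$.

\begin{verbatim}
INF = float('inf')   # N includes infinity

# A small quasi N-space (X, d): d(x,x) = 0 and d satisfies the triangle
# axiom d(x,y) + d(y,z) >= d(x,z) (downward order plays "below").
X = ['p', 'q', 'r']
d = {('p','p'): 0, ('p','q'): 2,   ('p','r'): 5,
     ('q','p'): INF, ('q','q'): 0, ('q','r'): 3,
     ('r','p'): INF, ('r','q'): INF, ('r','r'): 0}

def is_quasi_space(X, d):
    zero = all(d[(x, x)] == 0 for x in X)
    triangle = all(d[(x, y)] + d[(y, z)] >= d[(x, z)]
                   for x in X for y in X for z in X)
    return zero and triangle

def is_predicate(mu):
    # metric-constrained marking:  mu(x) + d(x,x') >= mu(x')
    return all(mu[x] + d[(x, y)] >= mu[y] for x in X for y in X)

assert is_quasi_space(X, d)
print(is_predicate({'p': 1, 'q': 3, 'r': 6}))  # True: increments bounded by d
print(is_predicate({'p': 0, 'q': 4, 'r': 0}))  # False: jump p->q exceeds d(p,q)=2
\end{verbatim}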
Markings are given two interpretations: (1) a place $p$ is a site associated with a value (of a resource) $\mu(p)$ and a marking $\mu$ is a distribution of places (hence resources); or (2) a marking $\mu$ is a fuzzy $P$-subset with $\mu(p)$ indicating the degree-of-membership of ``$p \! \in \! \mu$''. A transition $t \! \in \! T$ in a {\bf V}-net $\sf N$ is enabled by marking $\mu$ when $\mu \preceq \iota(t)$; that is, when $\mu(p) \preceq \iota(t,p)$ for all places $p \! \in \! P$. A transition $t \! \in \! T$ fires by $\Rightarrow$-ing, or consuming, $\iota(t,p)$ tokens from place $p \! \in \! P$, and then $\oplus$-ing, or producing, $o(t,p)$ tokens to place $p \! \in \! P$. The result of the firing of a transition $t \! \in \! T$ is expressed by the equation ${\sf N}_t(\mu)(p) = [ \iota(t,p) \!\Rightarrow\! \mu(p) ] \oplus o(t,p)$ for all places $p \! \in \! P$. We regard a {\bf V}-net to be a transformer of constrained {\bf V}-markings; that is, a {\bf V}-predicate transformer. The semantics of a {\bf V}-net {\sf N} can be defined as either external or internal behaviors. External behaviors include: (1) unfoldment-tree, and (2) regular-set behavior. Internal behaviors include: (1) reachable predicates (markings), and (2) cumulative fixpoint behavior. We list some important closed preorders on which nets and transition systems can be based: \\ {\bf booleans} [cartesian closed] \\ {\bf 2} = $\quintuple{2=\{0,1\}}{\leq}{\wedge}{\rightarrow}{1}$, where 0 is {\bf false}, 1 is {\bf true}, $\leq$ is the usual order on truth-values, $\wedge$ is the truth-table for {\bf and}, and $\rightarrow$ is the truth-table for {\bf implies}. Here quasi {\bf 2}-spaces ${\cal X} = \pair{X}{d}$ are preorders ${\cal X} = \pair{X}{\preceq}$ where $x_1 \preceq x_2$ when $d(x_1,x_2) = 1$, {\bf 2}-spaces are posets, and {\bf 2}-morphisms are monotonic functions. \\ ${\bf 2'}$ = $\quintuple{2=\{1,0\}}{\geq}{\vee}{\setminus}{0}$, where 1 is {\bf true}, 0 is {\bf false}, $\geq$ is the usual downward order on truth-values, $\vee$ is the truth-table for {\bf or}, and $\setminus$ is the truth-table for {\bf difference}: $b_1 \setminus b_2$ is true iff $b_1$ is true and $b_2$ is false. Here, quasi ${\bf 2'}$-spaces are preorders where $x_1 \preceq x_2$ when $d(x_1,x_2) = 0$, ${\bf 2'}$-spaces are posets, and ${\bf 2'}$-morphisms are monotonic functions. ${\bf 2'}$ defines the correct context for condition/event nets. \\ {\bf natural numbers} \\ {\bf N} = $\quintuple{N}{\geq}{+}{\: \dot{-} \:}{0}$, where $N$ is the set of natural numbers $N = \{0,1,\ldots,n,\ldots,\infty\}$ with infinity, $\geq$ is the usual downward ordering on natural numbers $N$, $+$ is sum, and $\: \dot{-} \:$ is difference defined by $m \: \dot{-} \: n = m - n \mbox{ if } m \geq n, = 0 \mbox{ if } m < n$. \\ {\bf reals} \\ {\bf R} = $\quintuple{R=[\infty,0]}{\geq}{+}{\: \dot{-} \:}{0}$. \\ The quantitative closed preorders of reals {\bf R} and natural numbers {\bf N} are coreflective and normal. They define the correct context for consumption/production nets. \\ {\bf markings} \\ If {\bf V} is a closed preorder and $I$ is any indexing set, then the marking space ${\cal V}^I$ is a closed preorder ${\bf V}^I = \quintuple{V^I}{\preceq}{\oplus}{\Rightarrow}{e}$ where $\preceq,\oplus,\Rightarrow \mbox{ and } e$ have obvious pointwise definitions. $I$ might denote places, colors, some combination of these, etc., in nets. 
If ${\bf V} = \quintuple{V}{\preceq}{\wedge}{\Rightarrow}{\top_V}$ is a cartesian closed preorder and ${\cal X}$ is any {\bf V}-space, then the predicate space ${\bf V}^{{\cal X}}$, the restriction of ${\bf V}^X$ to {\bf V}-morphisms, is a [cartesian closed] subpreorder of ${\bf V}^X$. \\ {\bf subsets} [cartesian closed] \\ Let $A$ be any set and let $P(A)$ be the set $P(A)=\{B \mid B \subseteq A\}$ of all subsets of $A$. \\ ${\bf P}(A) = \quintuple{P(A)}{\subseteq}{\cap}{\rightarrow}{A}$, where $\cap$ is {\bf set intersection}, and $\rightarrow$ is {\bf set implication}: $B_1 \rightarrow B_2 = \{a \! \in \! A \mid a \! \in \! B_1 \mbox{ implies } a \! \in \! B_2\} = -B_1 \cup B_2$. ${\bf P}(A)$ is essentially the marking space closed preorder ${\bf P}(A) \cong {\bf 2}^A$ defining the most basic markings-as-fuzzy-subsets interpretation for nets. \\ ${\bf P}'(A) = \quintuple{P(A)}{\supseteq}{\cup}{\setminus}{\emptyset}$, where $\cup$ is {\bf set union}, and $\setminus$ is {\bf set difference}: $B_1 \setminus B_2 = \{a \! \in \! A \mid a \! \in \! B_1 \mbox{ but not } a \! \in \! B_2\} = B_1 \cap -B_2$. ${\bf P}'(A)$ is essentially the marking space closed preorder ${\bf P}'(A) \cong {\bf 2'}^A$, the marking space for condition/event nets. \\ {\bf propositional logic} [cartesian closed] \\ Let $A$ be any fixed denumerable set of propositional variables and let ${\bf \Phi}(A)$ be the recursively defined set of all sentences. ${\bf \Phi}(A) = \quintuple{{\bf \Phi}(A)}{\models}{\wedge}{\rightarrow}{\top}$, where $\models$ is semantically defined logical entailment, and $\wedge$ and $\rightarrow$ are the syntactic binary operations on ${\bf \Phi}(A)$ defined by $\wedge(\alpha,\beta) = (\alpha \wedge \beta)$ and $\rightarrow(\alpha,\beta) = (\alpha \rightarrow \beta)$. Here, quasi ${\bf \Phi}(A)$-spaces are sentence-valued sets. A ${\bf \Phi}(A)$-marking $\mu$ assigns a sentence $\mu(p)$ to each place $p \! \in \! P$, hence is a $P$-indexed collection of sentences. The metric $d_P$, in the quasi ${\bf \Phi}(A)$-space of places ${\cal P} = \pair{P}{d_P}$, specifies generalized laws of modus ponens, since if $d_P(p_1,p_2) = \alpha$ then $\alpha \models \mu(p_1) \rightarrow \mu(p_2)$; that is, $\alpha \wedge \mu(p_1) \models \mu(p_2)$. So for all ordered pairs $(p_1,p_2)$ of places $d_P$ specifies an assumption, or context, in which $\mu(p_1)$ the sentence indexed at place $p_1$ is required to logically entail $\mu(p_2)$ the sentence indexed at place $p_2$. In particular, $d_P(p_1,p_2)$ = ``$\mu(p_1) \rightarrow \mu(p_2)$'' specifies ordinary modus ponens, $(\mu(p_1) \rightarrow \mu(p_2)) \wedge \mu(p_1) \models \mu(p_2)$, the weakest assumption for logical entailment between $\mu(p_1)$ and $\mu(p_2)$. There is a standard method for transforming between enriched contexts. A closed monotonic function $h \! : \! {\bf V} \longrightarrow {\bf W}$ between two closed preorders ${\bf V}$ and ${\scriptbf{W}}$ is a monotonic function $h \! : \! {\cal V} \longrightarrow {\cal W}$ which preserves monoidal unit and composition: $e_{\scriptbf{W}} \preceq_{\scriptbf{W}} h(e_{\scriptbf{V}})$ and $h(v) \oplus_{\scriptbf{W}} h(v') \preceq_{\scriptbf{W}} h(v \oplus_{\scriptbf{V}} v')$ for all elements $v,v' \! \in \! V$. Closed monotonic functions determine transformations of spaces, predicates, relations, dialectical moves, dialectical nets, etc. For example, any closed preorder {\bf V} has a canonical closed monotonic function $\Box_V \! : \! 
{\bf V} \longrightarrow {\bf 2}$, defined above, which determines a functor $\Box_V \! : \! {\bf Space}_V \longrightarrow {\bf Space}_2\!=\!{\bf PO}$, where $\Box_V({\cal X}) = \pair{X}{\preceq_X}$ the underlying preorder of ${\cal X}$. Some simple net oriented examples: (1) the two monotonic functions, inclusion ${\rm Inc} \! : \! {\bf N} \longrightarrow {\bf Z}$ and saturation $[\:] \! : \! {\bf Z} \longrightarrow {\bf N}$, between natural numbers {\bf N} and integers {\bf Z}, where $[n] = n \mbox{ if } n \geq 0, = 0 \mbox{ if } n < 0$, are adjoint $({\rm Inc} \dashv [\:])$ and closed (allowing the use of linear algebraic techniques in net theory); (2) the two monotonic functions, floor $\lfloor\:\rfloor \! : \! {\bf R} \longrightarrow {\bf N}$ and inclusion ${\rm Inc} \! : \! {\bf N} \longrightarrow {\bf R}$, between (nonnegative) reals {\bf R} and natural numbers {\bf N}, are adjoint $(\lfloor\:\rfloor \dashv {\rm Inc})$ and closed (encouraging the use of analysis techniques in net theory). If ${\bf V} = \quintuple{V}{\preceq}{\oplus}{\Rightarrow}{e}$ is not a cartesian closed preorder, such as {\bf N}, then ${\bf V}^{{\cal X}}$ is not necessarily a closed preorder since it may not be closed under the pointwise operations $\oplus$ and $\Rightarrow$. For a counterexample, let ${\cal X}$ be the symmetric two point {\bf N}-space ${\cal X} = \pair{X}{d}$ where $X=\{a,b\}$ and $d(a,b) = 1 = d(b,a)$, and let $\phi \! : \! {\cal X} \longrightarrow {\cal N}$ be defined by $\phi(a)=1$ and $\phi(b)=2$. If $\theta \! : \! {\cal X} \longrightarrow {\cal N}$ is defined by $\theta(a)=1$ and $\theta(b)=0$, then $\phi \: \dot{-} \: \theta$, where $(\phi \: \dot{-} \: \theta)(a)=0$ and $(\phi \: \dot{-} \: \theta)(b)=2$, is not an {\bf N}-morphism ${\cal X} \longrightarrow {\cal N}$ since $d(b,a) \not\geq (\phi \: \dot{-} \: \theta)(b) \: \dot{-} \: (\phi \: \dot{-} \: \theta)(a)$. In the other dialectical direction, if $\theta \! : \! {\cal X} \longrightarrow {\cal N}$ is defined to be $\phi$ above, then $\phi+\theta = \phi+\phi = 2\phi$, where $(\phi+\theta)(a)=2\phi(a)=2$ and $(\phi+\theta)(b)=2\phi(b)=4$, is not an {\bf N}-morphism ${\cal X} \longrightarrow {\cal N}$ since $d(b,a) \not\geq (\phi + \theta)(b) \!\Rightarrow\! (\phi + \theta)(a)$. In one sense the problem with the pointwise and discrete operations of implication $\theta \!\Rightarrow\! (\:)$ and composition $(\:) \oplus \theta$ is that they are ``isolated'' notions without ``collective'' influence between points, whereas the metric $d$ makes $X$ into a nondiscrete structure ${\cal X} = \pair{X}{d}$ with points which are not isolated from one another in the sense that they have collective constraints between themselves. One general solution to this problem is the use of enriched relations to model the dialectical movement of consumption and production. Relations allow for collective influence between points. \vspace*{\baselineskip} \\ \hspace*{\fill} DIALECTICAL NETS \hspace*{\fill} \vspace*{\baselineskip} \\ Each element $x \! \in \! X$ of a quasi {\bf V}-space ${\cal X} = \pair{X}{d}$ can be represented as the {\bf V}-predicate ${\rm y}(x) = d(x,-)$ over ${\cal X}$ where ${\rm y}(x)(x') = d(x,x')$ for each element $x' \! \in \! X$. The map ${\rm y} \! : \! X \longrightarrow V^X$, which is called the {\em Yoneda embedding}, is a {\bf V}-isometry ${\rm y}_{{\cal X}} \! : \! {\cal X}^{\rm op} \longrightarrow {\bf V}^{\cal X}$. 
Composition on the right with the Yoneda embedding allows us to consider the concept of a {\bf V}-morphism ${\cal Y}^{\rm op} \longrightarrow {\bf V}^{\cal X}$ to be a generalization of the concept of a {\bf V}-morphism ${\cal Y} \longrightarrow {\cal X}$. Such a generalized {\bf V}-morphism is equivalent to a {\bf V}-morphism $\product{{\cal Y}^{\rm op}}{{\cal X}} \stackrel{\tau}{\longrightarrow} {\bf V}$ and may be regarded to be a {\bf V}-({\em valued}) {\em relation} (also called a {\bf V}-{\em bimodule}) from {\cal Y} to {\cal X}, denoted by ${\cal Y} \stackrel{\tau}{\rightharpoondown} {\cal X}$, with $\tau(y,x)$ a element of {\bf V} being interpreted as the ``truth-value of the $\tau$-relatedness of $y$ to $x$'' \cite{Lawvere73}. We can regard a {\bf V}-relation to be a $\product{|{\cal Y}|}{|{\cal X}|}$-matrix whose $(y,x)$-th entry is $\tau(y,x)$. As mentioned above every {\bf V}-morphism ${\cal Y} \stackrel{f}{\longrightarrow} {\cal X}$ determines a {\bf V}-relation ${\cal Y} \stackrel{f^\triangleleft}{\rightharpoondown} {\cal X}$ defined by $f^\triangleleft = f^{\rm op} \cdot {\rm y}_{\cal X}$; that is, $f^\triangleleft(y,x) = d_X(f(y),x)$. Dually every {\bf V}-morphism ${\cal Y} \stackrel{f}{\rightarrow} {\cal X}$ also determines a {\bf V}-relation ${\cal X} \stackrel{f_\triangleleft}{\rightharpoondown} {\cal Y}$ defined by $f_\triangleleft = {\rm y}_{\cal X} \cdot {\bf V}^f$; that is, $f_\triangleleft(x,y) = d_X(x,f(y))$. A pair of {\bf V}-relations ${\cal Z} \stackrel{\sigma}{\rightharpoondown} {\cal Y}$ and ${\cal Y} \stackrel{\rho}{\rightharpoondown} {\cal X}$ can be composed, yielding the {\bf V}-relation ${\cal Z} \stackrel{\sigma\circ\rho}{\rightharpoondown} {\cal X}$ defined to be the categorical {\em coend} $\sigma\!\circ\!\rho(z,x) = \int^{y \in {\cal Y}} [\sigma(z,y) \oplus \rho(y,x)] = \bigvee_{y \in {\cal Y}} [\sigma(z,y) \oplus \rho(y,x)]$ where $\bigvee$ denotes supremum, which is colimit, in ${\bf V} = \pair{V}{\preceq}$. Relational composition can be viewed as a matrix product. One can verify that relational composition is associative $(\tau\circ\sigma)\circ\rho = \tau\circ(\sigma\circ\rho)$, and that metrics (as {\bf V}-relations) are identities $d_Y \circ \tau = \tau = \tau \circ d_X$. So {\bf V}-spaces and {\bf V}-relations form a category ${\bf rel}_{\scriptbf{V}}$. One can also verify that $ (g \cdot f)^\triangleleft = g^\triangleleft \circ f^\triangleleft $ for any two composable {\bf V}-morphisms ${\cal Z} \stackrel{g}{\rightarrow} {\cal Y} \stackrel{f}{\rightarrow} {\cal X}$, and that $(\mbox{Id}_{\cal X})^\triangleleft = d_X$ the identity {\bf V}-relation at ${\cal X}$. So the Yoneda embedding determines a functor $(\:)^\triangleleft \! : \! {\bf Space}_{\scriptbf{V}} \longrightarrow {\bf rel}_{\scriptbf{V}}$ which makes concrete the concept generalization discussed at the beginning of this section. There is a concept orthogonal to relations-as-morphisms. Given any two {\bf V}-relations ${\cal Y}_1 \stackrel{\tau_1}{\rightharpoondown} {\cal X}_1$ and ${\cal Y}_2 \stackrel{\tau_2}{\rightharpoondown} {\cal X}_2$ a ({\em vertical}) {\em morphism} of {\bf V}-relations $\pair{g}{f} \! : \! \triple{{\cal Y}_1}{\tau_1}{{\cal X}_1} \Rightarrow \triple{{\cal Y}_2}{\tau_2}{{\cal X}_2}$ consists of two {\bf V}-morphisms, a source morphism $g \! : \! {\cal Y}_1 \longrightarrow {\cal Y}_2$ and a target morphism $f \! : \! 
{\cal X}_1 \longrightarrow {\cal X}_2$, which satisfy the inequality condition $\tau_1 \preceq (\product{g^{\rm op}}{f}) \cdot \tau_2$ as {\bf V}-predicates over $\product{{\cal Y}_1^{\rm op}}{{\cal X}_1}$; or more abstractly, in terms of relational composition, $g_\triangleleft \circ \tau_1 \preceq \tau_2 \circ f_\triangleleft$. {\bf V}-relations and their vertical morphisms form a category ${\bf Rel}_{\scriptbf{V}}$. This is actually the vertical category of a double category, which we also denote by ${\bf Rel}_{\scriptbf{V}}$, whose underlying horizontal category is ${\bf rel}_{\scriptbf{V}}$. If we change the definition of a vertical morphism $\pair{f}{g} \! : \! \triple{{\cal Y}_1}{\tau_1}{{\cal X}_1} \Rightarrow \triple{{\cal Y}_2}{\tau_2}{{\cal X}_2}$ to the inequality condition $g^\triangleleft \circ \tau_2 \preceq \tau_1 \circ f^\triangleleft$, we can define a dual (vertical) category, which we denote by ${\bf Rel}^\bullet_{\scriptbf{V}}$. For any category ${\bf C}$ the category of parallel pairs of {\bf C}-morphisms, denoted by ${\bf C}^\times\!$, has the same objects as ${\bf C}$, $|{\bf C}^\times\!| = |{\bf C}|$, and has parallel pairs of ${\bf C}$-morphism as its morphisms, ${\bf C}^\times\![c,c'] = ({\bf C}[c,c'])^2$. ${\bf C}^\times\!$ is a kind of 2nd power (square) of ${\bf C}$. In particular, ${\bf rel}_{\scriptbf{V}}^\times$ is the category of parallel relation pairs $\tau = \parpair{{\cal Y}}{o}{\iota}{{\cal X}}$ with (horizontal) ${\bf rel}_{\scriptbf{V}}$-composition. Then ${\bf rel}_{\scriptbf{V}}^\times$, with the vertical morphisms $\pair{g}{f} \! : \! \triple{{\cal Y}_1}{\tau_1}{{\cal X}_1} \Rightarrow \triple{{\cal Y}_2}{\tau_2}{{\cal X}_2}$ when $\pair{g}{f} \! : \! \triple{{\cal Y}_1}{o_1}{{\cal X}_1} \Rightarrow \triple{{\cal Y}_2}{o_2}{{\cal X}_2}$ is a vertical morphism in ${\bf Rel}_{\scriptbf{V}}$ and $\pair{f}{g} \! : \! \triple{{\cal Y}_1}{\iota_1}{{\cal X}_1} \Rightarrow \triple{{\cal Y}_2}{\iota_2}{{\cal X}_2}$ is a vertical morphism in ${\bf Rel}_{\scriptbf{V}}^\bullet$, is a vertical category denoted ${\bf Rel}_{\scriptbf{V}}^\times$. Any element $v \! \in \! V$ is a {\bf V}-relation ${\bf 1} \stackrel{v}{\rightharpoondown} {\bf 1}$. In fact, ${\bf rel}_{\scriptbf{V}}[{\bf 1},{\bf 1}] = {\bf V}^{\product{{\bf 1}^{\rm op}}{{\bf 1}}} \cong {\bf V}$. Any {\bf V}-relation ${\bf 1} \stackrel{\psi}{\rightharpoondown} {\cal Y}$, a {\em generalized element} of ${\cal Y}$, is a {\bf V}-morphism $\product{{\bf 1}^{\rm op}}{{\cal Y}} \stackrel{\psi}{\longrightarrow} {\bf V}$, and hence is the same thing as a predicate over ${\cal Y}$: ``${\cal Y}$-predicates $\equiv$ generalized ${\cal Y}$-elements''. In fact, ${\bf rel}_{\scriptbf{V}}[{\bf 1},{\cal Y}] = {\bf V}^{\product{{\bf 1}^{\rm op}}{{\cal Y}}} \cong {\bf V}^{{\cal Y}}$. So any {\bf V}-relation ${\cal Y} \stackrel{\tau}{\rightharpoondown} {\cal X}$ defines by composition a {\bf V}-morphism ${\bf V}^{{\cal Y}} \! \stackrel{\directflow_\tau}{\longrightarrow} {\bf V}^{{\cal X}}$ called {\em direct flow} (or {\em yang}), which maps {\bf V}-predicates over ${\cal Y}$ to {\bf V}-predicates over ${\cal X}$, and is defined by $\directflow_\tau\!(\psi) = \psi \circ \tau$. The direct flow $\directflow_\tau$ corresponds to the direct image map $PY \stackrel{R_P}{\longrightarrow} PX$ of a ordinary binary relation $R \subseteq \product{Y}{X}$. 
Explicitly, $\directflow_\tau$ is defined to be the coend \[ \directflow_\tau(\psi)(x) = \int^{y \in {\cal Y}} [\psi(y) \oplus \tau(y,x)] = \bigvee_{y \in {\cal Y}} [\psi(y) \oplus \tau(y,x)] \] for any predicate $\psi \! \in \! {\bf V}^{{\cal Y}}$ over ${\cal Y}$ and any point $x \! \in \! {\cal X}$. Note that $\directflow_{\sigma \circ \rho} = \directflow_\sigma \cdot \directflow_\rho$ for any pair of composable {\bf V}-relations ${\cal Z} \stackrel{\sigma}{\rightharpoondown} {\cal Y}$ and ${\cal Y} \stackrel{\rho}{\rightharpoondown} {\cal X}$, and that $\tau_1 \preceq \tau_2$ implies $\directflow_{\tau_1} \preceq \directflow_{\tau_2}$ as {\bf V}-morphisms for all pairs $\parpair{{\cal Y}}{\tau_1}{\tau_2}{{\cal X}}$. For the special case ${\cal Y} = {\bf 1} = {\cal X}$, when {\bf V}-elements are viewed as {\bf V}-relations ${\bf 1} \stackrel{v}{\rightharpoondown} {\bf 1}$, direct flow is $\directflow_v = (\:) \oplus v \! : \! {\bf V}\!=\!{\bf V}^{\bf 1} \longrightarrow {\bf V}^{\bf 1}\!=\!{\bf V}$ since $\directflow_v(u) = \bigvee_{{\bf 1}} (u \oplus v) = u \oplus v$. For any {\bf V}-morphism ${\cal Y} \stackrel{f}{\longrightarrow} {\cal X}$ with associated {\bf V}-relation ${\cal X} \stackrel{f_\triangleleft}{\rightharpoondown} {\cal Y}$, note that $\directflow_{f_\triangleleft}(\phi) = f \cdot \phi$ for any {\bf V}-predicate $\phi \! \in \! {\bf V}^{{\cal X}}$ over ${\cal X}$; that is, $\directflow_{f_\triangleleft}$ is just composition on the right $\directflow_{f_\triangleleft} = f \cdot (\:) = {\bf V}^f$. On the other hand, for any {\bf V}-morphism ${\cal Y} \stackrel{f}{\longrightarrow} {\cal X}$ with associated {\bf V}-relation ${\cal Y} \stackrel{f^\triangleleft}{\rightharpoondown} {\cal X}$, the {\bf V}-morphism $\exists_f = \directflow_{f^\triangleleft} \! : \! {\bf V}^{{\cal Y}} \! \longrightarrow {\bf V}^{{\cal X}}$ is called {\em existential Kan quantification} along $f$; in detail, $\exists_f(\psi)(x) = \bigvee_{y \in {\cal Y}} [\psi(y) \oplus d_X(f(y),x)]$ for any predicate $\psi \! \in \! {\bf V}^{{\cal Y}}$ over ${\cal Y}$ and any point $x \! \in \! {\cal X}$. Applying the directflow operator to the inequality condition for vertical morphisms in ${\bf Rel}_{\scriptbf{V}}$ gives ${\bf V}^g \cdot \directflow_{\tau_1} \preceq \directflow_{\tau_2} \cdot {\bf V}^f$. The {\bf V}-morphism $\directflow_\tau$ has as a right adjoint the {\bf V}-morphism ${\bf V}^{{\cal Y}} \! \stackrel{\inverseflow_\tau}{\longleftarrow} {\bf V}^{{\cal X}}$ called {\em inverse flow} (or {\em yin}), which maps {\bf V}-predicates over ${\cal X}$ to {\bf V}-predicates over ${\cal Y}$, and is defined to be the categorical {\em end} \[ \inverseflow_\tau(\phi)(y) = \int_{x \in {\cal X}} [\tau(y,x) \!\Rightarrow\! \phi(x)] = \bigwedge_{x \in {\cal X}} [\tau(y,x) \!\Rightarrow\! \phi(x)] = \bigwedge_{x \in {\cal X}} d_{\scriptbf{V}}(\tau(y,x),\phi(x)) \] for any predicate $\phi \! \in \! {\bf V}^{{\cal X}}$ over ${\cal X}$ and any point $y \! \in \! {\cal Y}$, where $\bigwedge$ denotes infimum, which is limit, in ${\bf V} = \pair{V}{\preceq}$. Note that $\inverseflow_{\sigma \circ \rho} = \inverseflow_\rho \cdot \inverseflow_\sigma$ for any pair of composable {\bf V}-relations ${\cal Z} \stackrel{\sigma}{\rightharpoondown} {\cal Y}$ and ${\cal Y} \stackrel{\rho}{\rightharpoondown} {\cal X}$, and that $\tau_1 \preceq \tau_2$ implies $\inverseflow_{\tau_2} \preceq \inverseflow_{\tau_1}$ as {\bf V}-morphisms for all pairs $\parpair{{\cal Y}}{\tau_1}{\tau_2}{{\cal X}}$. 
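Both flows, and the adjunction between them recorded as a Fact below, can be checked by brute force in the simplest enriched context ${\bf V} = {\bf 2}$, where direct flow is the direct-image map of a relation and inverse flow is a universally quantified preimage. The following sketch of ours uses an arbitrarily chosen boolean relation matrix:

\begin{verbatim}
from itertools import product

# V = 2 (booleans): suprema are "or", tensor is "and", implication is "->".
Y, X = ['y0', 'y1'], ['x0', 'x1', 'x2']

# An illustrative 2-relation tau : Y -|-> X, given as a boolean matrix.
tau = {('y0','x0'): True,  ('y0','x1'): True,  ('y0','x2'): False,
       ('y1','x0'): False, ('y1','x1'): False, ('y1','x2'): True}

def direct_flow(psi):    # (direct flow)(psi)(x) = OR_y  psi(y) and tau(y,x)
    return {x: any(psi[y] and tau[(y, x)] for y in Y) for x in X}

def inverse_flow(phi):   # (inverse flow)(phi)(y) = AND_x tau(y,x) -> phi(x)
    return {y: all((not tau[(y, x)]) or phi[x] for x in X) for y in Y}

def entails(p, q):       # pointwise entailment order on predicates
    return all((not p[k]) or q[k] for k in p)

# Flow adjunction: direct_flow(psi) entails phi  iff  psi entails inverse_flow(phi)
preds_Y = [dict(zip(Y, bits)) for bits in product([False, True], repeat=len(Y))]
preds_X = [dict(zip(X, bits)) for bits in product([False, True], repeat=len(X))]
assert all(entails(direct_flow(psi), phi) == entails(psi, inverse_flow(phi))
           for psi in preds_Y for phi in preds_X)
print("direct flow -| inverse flow verified over V = 2")
\end{verbatim}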
For the special case ${\cal Y} = {\bf 1} = {\cal X}$, when {\bf V}-elements are viewed as {\bf V}-relations ${\bf 1} \stackrel{v}{\rightharpoondown} {\bf 1}$, inverse flow is $\inverseflow_v = v \!\Rightarrow\! (\:) \! : \! {\bf V}\!=\!{\bf V}^{\bf 1} \longrightarrow {\bf V}^{\bf 1}\!=\!{\bf V}$ since $\inverseflow_v(u) = \bigwedge_{{\bf 1}} (v \!\Rightarrow\! u) = v \!\Rightarrow\! u$. For any {\bf V}-morphism ${\cal Y} \stackrel{f}{\longrightarrow} {\cal X}$ with associated {\bf V}-relation ${\cal X} \stackrel{f^\triangleleft}{\rightharpoondown} {\cal Y}$, note that $\inverseflow_{f^\triangleleft}(\phi) = f \cdot \phi$ for any {\bf V}-predicate $\phi \! \in \! {\bf V}^{{\cal X}}$ over ${\cal X}$; that is, $\inverseflow_{f^\triangleleft}$ is just composition on the right $\inverseflow_{f^\triangleleft} = f \cdot (\:) = {\bf V}^f = \directflow_{f_\triangleleft}$. On the other hand for any {\bf V}-morphism ${\cal Y} \stackrel{f}{\longrightarrow} {\cal X}$ with associated {\bf V}-relation ${\cal X} \stackrel{f_\triangleleft}{\rightharpoondown} {\cal Y}$, the {\bf V}-morphism $\forall_f = \inverseflow_{f_\triangleleft} \! : \! {\bf V}^{{\cal Y}} \! \longrightarrow {\bf V}^{{\cal X}}$ is called {\em universal Kan quantification} along $f$; in detail, $\forall_f(\psi)(x) = \bigwedge_{y \in {\cal Y}} [d_X(x,f(y)) \!\Rightarrow\! \psi(y)]$ for any predicate $\psi \! \in \! {\bf V}^{{\cal Y}}$ over ${\cal Y}$ and any point $x \! \in \! {\cal X}$. Ordinary quantification in predicate calculus is a very special case of Kan quantifiction. Kan quantification is quantification relativized to an arbitrary closed preorder {\bf V}. Applying the inverseflow operator to the inequality condition for vertical morphisms in ${\bf Rel}^\bullet_{\scriptbf{V}}$ gives ${\bf V}^f \cdot \inverseflow_{\tau_1} \preceq \inverseflow_{\tau_2} \cdot {\bf V}^g$. The following fact appears in \cite{Lawvere73}, showing that (parallel pairs of) relations specify ``the dialectical flow of predicates''. \begin{Fact} Direct flow is left adjoint to inverse flow $\left( \directflow_\tau \dashv \inverseflow_\tau \right)$ for any {\bf V}-relation ${\cal Y} \stackrel{\tau}{\rightharpoondown} {\cal X}$. \end{Fact} Dialectical flow is the alternation-composition of direct and inverse flow. A parallel pair $\tau = \parpair{{\cal Y}}{o}{\iota}{{\cal X}}$ of {\bf V}-relations is called a {\em dialectical flow specifier}. Given a flow specifier $\tau$, the {\em dialectical flow} (or {\em yinyang}) specified by $\tau$ is the composition \[ \yiya{8}_\tau = {\bf V}^{\cal X} \stackrel{\inverseflow_\iota}{\longrightarrow} {\bf V}^{\cal Y} \stackrel{\directflow_o}{\longrightarrow} {\bf V}^{\cal X} .\] The symbol $\yiya{8}$ denotes dialectical motion. This is a stylized version of the {\em yin-yang} symbol of the ancient Chinese {\em naturalist} philosophers, where the two parts {\em yin} and {\em yang} represent contradictory elements or opposing tendencies forming a complementary union out of which all things develop. For us, the complementary union is the flow adjunction and the circular shape represents the cyclical or spiral shape of dialectical motion. Dialectical flow is functorial: if $\pair{g}{f} \! : \! \triple{{\cal Y}_1}{\tau_1}{{\cal X}_1} \Rightarrow \triple{{\cal Y}_2}{\tau_2}{{\cal X}_2}$ is a vertical morphism in ${\bf Rel}_{\scriptbf{V}}^\times$, then ${\bf V}^f \cdot \yiya{8}_{\tau_1} \preceq \yiya{8}_{\tau_2} \cdot {\bf V}^f$. There is a four-fold duality in the definition of dialectical flow. 
The {\em dialectical opflow} (or {\em yangyin}) specified by $\tau$ is the composition $\yayi{8}_\tau = \directflow_o \cdot \inverseflow_\iota$. The \mbox{{\em dialectical}} {\em coflow} of $\tau$ is the dialectical flow of $\tau^{\rm op}$; namely, $\yiya{8}_{\tau^{\rm op}} = \inverseflow_o \cdot \directflow_\iota$. The {\em dialectical coopflow} of $\tau$ is the dialectical opflow of $\tau^{\rm op}$; namely, $\yayi{8}_{\tau^{\rm op}} = \directflow_\iota \cdot \inverseflow_o$. The modern theory of dialectics incorporates the notion of the {\em reproduction} of entities which are in dialectical motion. Reproduction can be modelled mathematically as the recursive specification of entities with respect to dialectical motion. Reproduction (renewal, recursion) is the internal semantics of dialectical motion. Dialectical entities in the context of this paper are the enriched predicates (constrained markings). For any flow specifier $\tau$, as above, we say that $\tau$ \mbox{{\em reproduces}} the {\bf V}-predicate $\phi \! \in \! {\bf V}^{\cal X}$ when $\phi$ is a fixpoint solution of the recursive equation \[ \chi \equiv \yiya{8}_\tau(\chi) ;\] that is, when $\phi \equiv \yiya{8}_\tau(\phi) = \directflow_o(\inverseflow_\iota(\phi))$. A fixpoint solution $\phi$ is an internal behavior of the dialectical motion (specified by) $\tau$. A measure, or index, of reproduction is the value $|\phi|_\tau = \yiya{8}_\tau^\triangleleft(\phi) = d^{\rm sym}(\phi,\yiya{8}_\tau(\phi))$. So the dialectical flow $\tau$ reproduces the entity ({\bf V}-predicate) $\phi$ iff $e \preceq |\phi|_\tau$. The increasing sequence $\mbox{$\bot_{\cal X} \preceq \yiya{8}_\tau(\bot_{\cal X}) \preceq \cdots \preceq \yiya{8}_\tau^n(\bot_{\cal X}) \preceq \cdots$}$ of {\bf V}-predicates over quasi {\bf V}-space ${\cal X}$, is called the least fixpoint approximation sequence of $\tau$. The {\bf V}-predicate $\tau^\ast = \bigvee_{n \in \omega}\yiya{8}_\tau^n(\bot_{\cal X})$, the {\em least fixpoint} solution of the recursive equation above, exists for directed-continuous inverse flow (yin), and in particular, for normal {\bf V}. The decreasing sequence $\mbox{$\cdots \preceq \yiya{8}_\tau^n(\top_{\cal X}) \preceq \cdots \preceq \yiya{8}_\tau(\top_{\cal X}) \preceq \top_{\cal X}$}$ of {\bf V}-predicates over quasi {\bf V}-space ${\cal X}$, is called the greatest fixpoint approximation sequence of $\tau$. The {\bf V}-predicate $\tau^\infty = \bigwedge_{n \in \omega}\yiya{8}_\tau^n(\top_{\cal X})$, is the {\em greatest fixpoint} solution of the recursive equation above. The least and greatest fixpoints are two canonical internal behaviors of the dialectical motion $\tau$. [A philosophical note: The notion of complementary union (two working together in one) is not that of ``synthesis''. Neither of the opposites is ``transformed''. Indeed, with synthesis, dialectical motion would cease! The notion of ``reproduction'' is one of equilibrium of motion, not lack of motion.] Just as in the ordinary context of sets a subset $A \subseteq X$ can be viewed as a binary relation $A \subseteq \product{X}{X}$ ($ yAx \mbox{ iff } x \! \in \! A$, or $R_A = \mbox{pr}_2 \cdot \kappa_A$), so also in the context of {\bf V}-spaces a marking $\theta \! : \! X \longrightarrow V$ can be viewed as a {\bf V}-relation ${\cal X} \stackrel{\tau_\theta}{\rightharpoondown} \tilde{\cal X}$ for any two quasi {\bf V}-spaces ${\cal X} = \pair{X}{d}$ and $\tilde{\cal X} = \pair{X}{\tilde{d}}$. 
In this spirit and following the places-as-sites interpretation of nets, for any two quasi {\bf V}-spaces ${\cal X}$ and $\tilde{\cal X}$, and any {\bf V}-relation ${\cal X} \stackrel{\tau}{\rightharpoondown} \tilde{\cal X}$, we view inverse dialectical flow $\inverseflow_\tau$ as a generalization of the concept of consumption, and direct dialectical flow $\directflow_\tau$ as a generalization of the concept of production. \begin{Proposition} Consumption is a special case of inverse dialectical flow. In particular, if the map $\tau_{\theta} = (\mbox{pr}_2 \cdot \theta) \oplus d^{\rm sym} \oplus \tilde{d}$ associated with any marking $\theta$ over $X$ is a {\bf V}-relation, then ordinary consumption by $\theta$ is inverse dialectical flow along $\tau_{\theta}$; that is, $\theta \!\Rightarrow\! (\:) =\; \inverseflow_{\tau_\theta}$. Production is a special case of direct dialectical flow. In particular, if the map $\tau_{\theta} = (\mbox{pr}_2 \cdot \theta) \oplus \tilde{d}$ associated with any marking $\theta$ over $X$ is a {\bf V}-relation, then ordinary production by $\theta$ is direct dialectical flow along $\tau_{\theta}$; that is, $(\:) \oplus \theta =\; \directflow_{\tau_\theta}$. \end{Proposition} For the special case {\bf V} = {\bf N} when $\theta$ is finite everywhere the constraint that $\tau_{\theta}$ be a {\bf N}-relation implies that $d$ is either $\infty$ or $0$ everywhere and that $d$ is $\infty$ iff $e$ is $\infty$. A {\bf V}-{\em transition system} ${\sf A}$ is a triple ${\sf A} = \triple{T}{{\cal Q}}{\delta}$ consisting of: a set (of transition symbols) $T$, a quasi {\bf V}-space (of internal states) ${\cal Q}$, and a transition map (transition relation assignment) $\delta \! : \! T \longrightarrow {\bf rel}_{\scriptbf{V}}[{\cal Q},{\cal Q}]$. A state pair $(q,q') \! \in \! \product{{\cal Q}^{\rm op}}{{\cal Q}}$ is regarded as an $a$-transition from current state $q$ to next state $q'$ with weighting (probability, believability, etc.) the generalized truth-value $\delta_a(q,q') \! \in \! V$ for each transition symbol $a \! \in \! T$. The case ${\bf V} = {\bf R}$ of (nonnegative) real values includes probabilistic and fuzzy transition systems as special cases. The transition map $\delta$ can be recursively extended to a ``run map'' monoid morphism $\delta^\ast \! : \! T^\ast \longrightarrow {\bf rel}_{\scriptbf{V}}[{\cal Q},{\cal Q}]$: $\delta^\ast_\varepsilon = {\rm y}_{\cal Q}$, the identity {\bf V}-relation at ${\cal Q}$; and $\delta^\ast_{xa} = \delta^\ast_x \circ \delta_a$ for all strings of transition symbols $x \! \in \! T^\ast$ and single transition symbols $a \! \in \! T$. Composing syntactic run map dynamics $\delta^\ast$ with flow $\yiya{8}$ defines the transition system dynamics $\delta^\ast \cdot \yiya{8} \! : \! T^\ast \longrightarrow {\bf Space}_{\scriptbf{V}}[{\bf V}^{\cal Q},{\bf V}^{\cal Q}]$. There are three main motivators for the concept of dialectical nets: Petri nets, transition systems and Kan quantification theory of first order predicate logic. The two motivators Petri nets and Kan quatification correspond to the structural concepts of enriched predicates (generalized subobjects) and enriched functions, respectively. The concept of dialectical nets, which generalizes transition systems, corresponds to the structural concept of enriched relations. 
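To make the Proposition above concrete before passing to the general definitions, here is a sketch of ours over ${\bf V} = {\bf N}$ of ordinary consumption/production transitioning, ${\sf N}_t(\mu)(p) = [\iota(t,p) \!\Rightarrow\! \mu(p)] \oplus o(t,p)$, together with the enabling test $\mu \preceq \iota(t)$; the net data are illustrative only:

\begin{verbatim}
# V = N with the downward order: "mu below iota(t)" means mu(p) >= iota(t,p);
# implication is truncated subtraction, composition is addition.

places = ['p1', 'p2', 'p3']

# An illustrative N-net with a single transition t:
iota = {'p1': 1, 'p2': 1, 'p3': 0}      # input weights  (consumption)
out  = {'p1': 0, 'p2': 0, 'p3': 2}      # output weights (production)

def enabled(mu):
    # mu below iota(t): mu(p) >= iota(t,p) for every place p
    return all(mu[p] >= iota[p] for p in places)

def fire(mu):
    # N_t(mu)(p) = [iota(t,p) => mu(p)] (+) o(t,p)
    #            = (mu(p) truncated-minus iota(t,p)) + o(t,p)
    return {p: max(mu[p] - iota[p], 0) + out[p] for p in places}

mu = {'p1': 2, 'p2': 1, 'p3': 0}
if enabled(mu):
    print(fire(mu))    # {'p1': 1, 'p2': 0, 'p3': 2}
\end{verbatim}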
An {\em elementary dialectical {\bf V}-net} (or {\em elementary dialectical {\bf V}-transition system}) {\sf N} is a quadruple ${\sf N} = \quadruple{T}{{\cal S}}{\iota}{o}$ consisting of: a set (of transition symbols) $T$, two quasi {\bf V}-spaces (of sites) ${\cal S} \!=\! \pair{{\cal S}_0}{{\cal S}_1}$, an inverse flow assignment $\iota \! : \! T \longrightarrow {\bf rel}_{\scriptbf{V}}[{\cal S}_0,{\cal S}_1]$, and a direct flow assignment $o \! : \! T \longrightarrow {\bf rel}_{\scriptbf{V}}[{\cal S}_0,{\cal S}_1]$, where ${\bf rel}_{\scriptbf{V}}[{\cal S}_0,{\cal S}_1]$ is the collection (quasi {\bf V}-space) of all {\bf V}-relations between the quasi {\bf V}-spaces of sites ${\cal S}_0$ and ${\cal S}_1$. So for each transition symbol $a \! \in \! T$ the net assigns a relation pair $\tau_a = \parpair{{\cal S}_0}{o_a}{\iota_a}{{\cal S}_1}$ consisting of a direct flow specifier ${\cal S}_0 \stackrel{o_a}{\rightharpoondown} {\cal S}_1$, and an inverse flow specifier ${\cal S}_0 \stackrel{\iota_a}{\rightharpoondown} {\cal S}_1$ which specify the dialectical flow $\yiya{8}_a^{\smallsf{N}} = \yiya{8}_{\tau_a} = \inverseflow_{\iota_a} \cdot \directflow_{o_a} \! : \! {\bf V}^{{\cal S}_1} \longrightarrow {\bf V}^{{\cal S}_1}$ for any transition symbol $a \! \in \! T$. Dialectical flow $\yiya{8}^{\smallsf{N}}$ can be recursively extended to a ``run flow'' monoid morphism $\yiya{8}^{\smallsf{N}} \! : \! T^\ast \longrightarrow {\bf Space}_{\scriptbf{V}}[{\bf V}^{{\cal S}_1},{\bf V}^{{\cal S}_1}]$: $\yiya{8}^{\smallsf{N}}_\varepsilon = {\rm Id}_{V^{{\cal S}_1}}$, the identity {\bf V}-morphism at ${\bf V}^{{\cal P}_1}$; and $\yiya{8}^{\smallsf{N}}_{xa} = \yiya{8}^{\smallsf{N}}_x \cdot \yiya{8}^{\smallsf{N}}_a$ for all strings of transition symbols $x \! \in \! T^\ast$ and single transition symbols $a \! \in \! T$. When appropriate pullbacks exist, the flow assignment $\tau$ can be recursively extended to a ``run map'' monoid morphism $\tau^\ast \! : \! T^\ast \longrightarrow {\bf rel}_{\scriptbf{V}}[{\cal S}_1,{\cal S}_1]$. A ``localized Beck condition'' of higher order categorical logic \cite{Lawvere70} should by incorporated into the above definition of run-flow. The elements in the quasi {\bf V}-space ${\cal S}_1$ are the sites where values (bit-values for conditions in condition/event nets, numbers representing resources in consumption/production nets, database relations in predicate/transition nets, etc.) are stored, and local processing of values (of the nature: combination, accumulation of values, suprema-calculation, union) takes place; whereas, the elements in the quasi {\bf V}-space ${\cal S}_0$ are usually sites where transient local processing of values (of the dual nature: interaction, matching of values, infima-calculation, intersection) takes place. In the traditional theory of nets and in the traditional theory of transition systems ${\cal S}_0 = {\cal S}_1$. In the theory of Horn clause logic programming ${\cal S}_1$ is the set of predicate (or relational) names of a logic program, and ${\cal S}_0$ is the set of clause names (or elementary implications) of same. We regard a dialectical net to be a transformer of constrained {\bf V}-markings; that is, a {\bf V}-predicate transformer. The semantics of a dialectical {\bf V}-net {\sf N} can be defined as either external or internal behaviors. External behaviors include: (1) unfoldment-tree, and (2) regular-set behavior. Internal behaviors include: (1) reachable predicates (markings), and (2) cumulative fixpoint behavior. 
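The cumulative (least) fixpoint behavior mentioned above can be computed by iterating the dialectical flow from the bottom predicate, as in the least fixpoint approximation sequence defined earlier. The following is a sketch of ours over ${\bf V} = {\bf 2}$, reading an elementary dialectical 2-net as a small Horn program (clause names as the transient site-space ${\cal S}_0$, predicate names as the resident site-space ${\cal S}_1$; the program itself is illustrative only):

\begin{verbatim}
# Least-fixpoint behaviour of an elementary dialectical 2-net, read as
# forward chaining for a small Horn program: iota = clause bodies, o = heads.

atoms   = ['A', 'B', 'C', 'D']
clauses = {'c1': (set(),        'A'),    # fact:  A.
           'c2': ({'A'},        'B'),    # rule:  B :- A.
           'c3': ({'A', 'B'},   'C')}    # rule:  C :- A, B.

def inverse_flow(chi):      # a clause is enabled iff its whole body lies in chi
    return {c: body <= chi for c, (body, _head) in clauses.items()}

def direct_flow(psi):       # produce the heads of the enabled clauses
    return {head for c, (_body, head) in clauses.items() if psi[c]}

def dialectical_flow(chi):  # yinyang = inverse flow followed by direct flow
    return direct_flow(inverse_flow(chi))

chi = set()                 # bottom predicate over the atoms
while True:                 # least fixpoint approximation sequence
    nxt = dialectical_flow(chi)
    if nxt == chi:
        break
    chi = nxt
print(chi)                  # {'A', 'B', 'C'} -- the atom D is not reproduced
\end{verbatim}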
[There is a formal dialectical approach to net behavior. Define the {\bf V}-relations $v^\triangleleft = ((\:) \oplus v)^\triangleleft \! : \! {\cal V} \rightharpoondown {\cal V}$ and $v_\triangleleft = ((\:) \oplus v)_\triangleleft \! : \! {\cal V} \leftharpoondown {\cal V}$. Note that $\Box v_\triangleleft(y,x)$ iff $y \preceq x \oplus v$, a generalized net enabling condition. Now $v^\triangleleft$ is formally left adjoint to $v_\triangleleft$ (as an arrow in the 2-category ${\bf rel}_{\scriptbf{V}}$), $\left( v^\triangleleft \dashv v_\triangleleft \right)$. Let $\lhd_{\scriptbf{V}}(v)$ denote this adjunction. Then $\lhd_{\scriptbf{V}} \! : \! \bullet_{\scriptbf{V}} \longrightarrow {\bf rel}_{\scriptbf{V}}$ is a formal dialectical base. Let {\sf N} be any {\bf V}-net. Define the dialectical flow $\yiya{8}_t = (\iota_t)_\triangleleft \circ (o_t)^\triangleleft$ for each transition symbol $t \! \in \! T$. For normal {\bf V}, $\Box \yiya{8}_t(z,x)$ iff $z \preceq y \oplus \iota_t \mbox{ and } y \oplus o_t \preceq x \mbox{ some } y \! \in \! V$. Define $\yiya{8}_\varepsilon = {\rm y}_{\cal V}$; $\yiya{8}_{xa} = \yiya{8}_x \circ \yiya{8}_a$ for all $x \! \in \! T^\ast$ and $a \! \in \! T$; and $\yiya{8}_\ast = \bigvee_{w \in T^\ast} \yiya{8}_w$. We say that $x$ is {\em reachable by} $w$ from $y$ when $\Box \yiya{8}_w(y,x)$; that is, when $e \preceq \yiya{8}_w(y,x)$. We say that $x$ is {\em reachable} from $y$ when $\Box \yiya{8}_\ast(y,x)$; that is, when $e \preceq \yiya{8}_\ast(y,x)$. So $\yiya{8}_\ast(y,x)$ gives a measure of reachability.] Boundedness, liveness, synchronic distance and fairness are definable for dialectical nets just as for ordinary nets. The dialectical {\bf V}-net {\sf N} is deterministic when $\iota$ and $o$ factor through ${\rm Space}_{\scriptbf{V}}[{\cal S}_0,{\cal S}_1]$. Then dialectical flow is existential-quantification/substition composition $\yiya{8}^{\smallsf{N}}_a = {\bf V}^{\iota_a} \cdot \exists_{o_a}$. Using the places-as-sites interpretation, any {\bf V}-net ${\sf N} = \quadruple{T}{{\cal P}}{\iota}{o}$ is an elementary dialectical {\bf V}-net. Indeed, for the special case where ${\cal S}_0$ is the terminal-coterminal {\bf V}-space ${\cal S}_0 = {\bf 1}$ and ${\cal S}_1$ is the place-space ${\cal S}_1 = {\cal P}$ (so that ${\bf rel}_{\scriptbf{V}}[{\cal S}_0,{\cal S}_1] = {\bf rel}_{\scriptbf{V}}[{\bf 1},{\cal P}] = {\bf V}^{\cal P}$), elementary dialectical {\bf V}-nets $\equiv$ {\bf V}-nets. Any {\bf V}-transition system ${\sf A} = \triple{T}{{\cal Q}}{\delta}$ is an elementary dialectical {\bf V}-net. Indeed, for the special case where the two site-spaces are the one state-space ${\cal S}_0 = {\cal Q} = {\cal S}_1$, where inverse flow is the trivial identity {\bf V}-relation $\iota_a = {\rm y}_{\cal Q}$ on ${\cal Q}$ for all transition symbols $a \! \in \! T$, and where direct flow is the transition map $o = \delta$, elementary dialectical {\bf V}-nets $\equiv$ {\bf V}-transition systems. Using the markings-as-fuzzy-subsets interpretation, any {\bf V}-net ${\sf N} = \quadruple{T}{P}{\iota}{o}$ is an elementary dialectical ${\bf V}^P$-net ${\sf N} = \quadruple{T}{{\bf 1}}{\iota}{o}$ with only one site ${\cal S}_0 = {\bf 1} = {\cal S}_1$ (so that ${\bf rel}_{V^P}[{\cal S}_0,{\cal S}_1] = {\bf rel}_{V^P}[{\bf 1},{\bf 1}] \cong {\bf V}^P$), where inverse flow $\iota \! : \! T \longrightarrow {\bf V}^P \cong {\bf rel}_{V^P}[{\bf 1},{\bf 1}]$ and direct flow $o \! : \! 
T \longrightarrow {\bf V}^P \cong {\bf rel}_{V^P}[{\bf 1},{\bf 1}]$ assign ${\bf V}^P$-elements as ${\bf V}^P$-relations. That is, elementary dialectical ${\bf V}^P$-nets on one site $\equiv$ {\bf V}-nets. Here $\yiya{8}_t(\mu) = [\iota_t \!\Rightarrow\! \mu] \oplus o_t = {\sf N}_t(\mu)$ for any transition symbol $t \! \in \! T$; that is, dialectical flow is ordinary consumption/production transitioning. These are not transition systems since inverse flow is not identity. Any two {\bf V}-relations in the collection $\{ {\cal S}_0 \stackrel{\iota_a}{\rightharpoondown} {\cal S}_1 \mid a \! \in \! T \}$ of inverse flow assignments are obviously comparable by use of the inf metric on the quasi {\bf V}-space ${\bf V}^{\subsupsize{\product{{\cal S}_0^{\rm op}}{{\cal S}_1}}}$. We can intend, or specify, relationships between the $\iota_a$ by requiring that $T$ be a general (and not just discrete) quasi {\bf V}-space ${\cal T} = \pair{T}{d_T}$, and that inverse flow assignment be a {\bf V}-morphism $\iota \! : \! {\cal T}^{\rm op} \longrightarrow {\bf rel}_{\scriptbf{V}}[{\cal S}_0,{\cal S}_1] \!=\! {\bf V}^{\subsupsize{\product{{\cal S}_0^{\rm op}}{{\cal S}_1}}}$. The same comments apply to the direct flow assignment $o$. A {\em dialectical {\bf V}-net} (or {\em dialectical {\bf V}-transition system}) {\sf N} is a quadruple ${\sf N} = \quadruple{{\cal T}}{{\cal S}}{\iota}{o}$ consisting of: a quasi {\bf V}-space of transition symbols ${\cal T}$, two quasi {\bf V}-spaces of sites ${\cal S} \!=\! \pair{{\cal S}_0}{{\cal S}_1}$, an inverse flow {\bf V}-morphism $\iota \! : \! {\cal T}^{\rm op} \longrightarrow {\bf rel}_{\scriptbf{V}}[{\cal S}_0,{\cal S}_1]$, and a direct flow {\bf V}-morphism $o \! : \! {\cal T}^{\rm op} \longrightarrow {\bf rel}_{\scriptbf{V}}[{\cal S}_0,{\cal S}_1]$. Dialectical nets have the same run-flow dynamics as elementary dialectical nets. It is clear that every dialectical net ${\sf N} = \quadruple{{\cal T}}{{\cal S}}{\iota}{o}$ defines a (single) dialectical flow specifier. Both the input and the output weighting functions are {\bf V}-morphisms $\iota,o \! : \! \product{\product{{\cal T}^{\rm op}}{{\cal S}_0^{\rm op}}}{{\cal S}_1} \longrightarrow {\bf V}$; that is, {\bf V}-relations $\iota,o \! : \! \product{{\cal T}}{{\cal S}_0} \rightharpoondown {\cal S}_1$. This means that the dialectical net {\sf N} is just the (single) relation pair ${\sf N} = \;\bigparpair{\product{{\cal T}}{{\cal S}_0}}{o}{\iota}{{\cal S}_1}$, which can be viewed as an enriched {\em state-space graph} of {\sf N}. For a {\bf V}-transition system ${\sf A} = \triple{T}{{\cal Q}}{\delta}$, $\product{{\cal T}}{{\cal S}_0} = \product{T}{{\cal Q}}$ is the $T$-th copower of ${\cal Q}$. If the transition system {\sf A} is discrete and deterministic, $o = \delta \! : \! \product{T}{Q} \longrightarrow Q$ is the usual determistic state transition function, $\iota = {\rm pr}_Q \! : \! \product{T}{Q} \longrightarrow Q$ is the projection function (identity inverse flow), $\product{T}{Q}$ is the set of edges in the state-space, and the relation pair ${\sf A} = \;\bigparpair{\product{{\cal T}}{{\cal Q}}}{\delta}{{\rm pr}_Q}{{\cal Q}}$ is the ordinary state-space graph for the transition system {\sf A}. If we interpret the state-space graph of a dialectical net {\sf N} to be a dialectical flow specifier we can define an aggregate dialectical flow $\yiya{8}^{\smallsf{N}} \! : \! {\bf V}^{{\cal S}_1} \longrightarrow {\bf V}^{{\cal S}_1}$ on {\bf V}-predicates over the site-space ${\cal S}_1$. 
This aggregate dialectical flow, and its various fixpoints, define a combined external-internal behavior for the dialectical net {\sf N}. For a fixed space of transition symbols ${\cal T}$, given any two dialectical {\bf V}-nets $\pair{{\cal S}_0}{\tau_0} = \triple{{\cal S}_0}{\iota_0}{o_0}$ and $\pair{{\cal S}_1}{\tau_1} = \triple{{\cal S}_1}{\iota_1}{o_1}$, a {\em morphism} of dialectical {\bf V}-nets $\pair{h_0}{h_1} \! : \! \triple{{\cal S}_0}{\iota_0}{o_0} \Rightarrow \triple{{\cal S}_1}{\iota_1}{o_1}$ consists of two {\bf V}-morphisms $h_0 \! : \! {\cal S}_{0,0} \longrightarrow {\cal S}_{1,0}$ and $h_1 \! : \! {\cal S}_{0,1} \longrightarrow {\cal S}_{1,1}$, where $\pair{h_0}{h_1} \! : \! \pair{{\cal S}_0}{\tau_{0,a}} \Rightarrow \pair{{\cal S}_1}{\tau_{1,a}}$ is a vertical morphism in ${\bf Rel}_{\scriptbf{V}}^\times$ for all transition symbols $a \! \in \! T$. For fixed ${\cal T}$, dialectical {\bf V}-nets and their morphisms form a category ${\bf Net}_{\scriptbf{V}}^{\cal T}$, the ${\cal T}$-th fiber of the {\em category of dialectical {\bf V}-nets} ${\bf Net}_{\scriptbf{V}}$. {\bf V}-predicates and flow conditions form a category ${\bf Pred}_{\scriptbf{V}}$, whose objects are pairs $\pair{{\cal X}}{\phi}$ where ${\cal X}$ is a quasi {\bf V}-space and $\phi \! \in \! {\bf V}^{{\cal X}}$ is a {\bf V}-predicate over ${\cal X}$, and whose morphisms $\pair{{\cal Y}}{\psi} \stackrel{\tau}{\longrightarrow} \pair{{\cal X}}{\phi}$ are {\bf V}-relations ${\cal Y} \stackrel{\tau}{\rightharpoondown} {\cal X}$ satisfying the direct flow condition $\directflow_\tau\!(\psi) \preceq_{\cal X} \phi$ or the equivalent inverse flow condition $\psi \preceq_{\cal Y}\; \inverseflow_\tau\!(\phi)$. There is an underlying {\bf V}-space functor $P_{\scriptbf{V}} \! : \! {\bf Pred}_{\scriptbf{V}} \longrightarrow {\bf rel}_{\scriptbf{V}}$ where $P_{\scriptbf{V}}(\pair{{\cal X}}{\phi}) = {\cal X}$ and $P_{\scriptbf{V}}(\tau) = \tau$. A {\em dialectical base} is a 01-fibration $P \! : \! {\bf E} \longrightarrow {\bf \Omega}$ whose fibers ${\bf E}_w = P^{-1}(w)$ are bicomplete. \begin{Proposition} {\bf V}-predicates and flow conditions form a dialectical base over {\bf V}-spaces and {\bf V}-relations; that is, the functor $P_{\scriptbf{V}} \! : \! {\bf Pred}_{\scriptbf{V}} \longrightarrow {\bf rel}_{\scriptbf{V}}$ is a dialectical base. \end{Proposition} This dialectical base erects a second external level of dialectical structure over the first internal level of dialectical structure represented by the category ${\bf rel}_{\scriptbf{V}}$. It will prove useful in the definition of the type theory of {\bf V}-nets, and in the recursive specification of {\bf V}-nets. Recall that in order to motivate the notion of a quasi {\bf V}-space, we showed how to translate external marking constraints such as net transition enabling conditions or the more general form $\mu_1 \preceq \mu_0$ into internal marking constraints (or metrics) $d$. However, with the introduction of the category of {\bf V}-predicates ${\bf Pred}_{\scriptbf{V}}$ we have incorporated both internal and external constraints into our dialectical approach. The internal constraints on markings $\phi \! : \! X \rightarrow {\bf V}$ are still specified by metrics $d \! : \! \product{X}{X} \rightarrow {\bf V}$ on $X$, and markings which satisfy internal constraints $d$ are called {\bf V}-predicates $\phi \! \in \! {\bf V}^{{\cal X}}$ over ${\cal X} \!=\! \pair{X}{d}$. So internal constraints are embedded into the objects of ${\bf Pred}_{\scriptbf{V}}$.
We identify external constraints with the morphisms of ${\bf Pred}_{\scriptbf{V}}$: a morphism $\pair{{\cal Y}}{\psi} \stackrel{\tau}{\longrightarrow} \pair{{\cal X}}{\phi}$ imposes the external dialectical constraint $\directflow_\tau(\psi) \preceq_{\cal X} \phi$ or equivalently $\psi \preceq_{\cal Y} \inverseflow_\tau(\phi)$ on markings (now called {\bf V}-predicates). These external constraints include the original external constraints. In fact the original external constraints are specified precisely by the identities: the identity {\bf V}-relation $\tau = d_X = \mbox{Id}_{\cal X}$ on ${\cal X}$ gives the external dialectical constraint $\psi \preceq_{\cal X} \phi$. But, in general, the external constraints are no longer necessarily pointwise constraints. In our interpretation of dialectical flow the quasi {\bf V}-space ${\cal S}_0$ is a site of transient entities (predicates), whereas the quasi {\bf V}-space ${\cal S}_1$ is the site where predicates actually reside. So we can allow the transient site-space ${\cal S}_0$ to vary as transition symbols $a \! \in \! T$ vary. A {\em generalized dialectical {\bf V}-net} {\sf N} is a pair ${\sf N} = \pair{{\bf T}}{\tau}$ consisting of: a graph (or category) of transition symbols ${\bf T}$, and a cocone $\tau \! : \! S_0 \Longrightarrow {\cal S}_1$ of relation pairs whose base is a diagram (or functor) $S_0 \! : \! {\bf T}^{\rm op} \longrightarrow {\bf Flow}_{\scriptbf{V}}$ of transient site-spaces and whose vertex is a quasi {\bf V}-space of sites ${\cal S}_1$. Let $\tilde{{\cal S}_0} = {\rm Colim}(S_0) = \prod_{a \in |\scriptbf{T}|}S_{0,a}$ be the colimit of $S_0$ in ${\bf Flow}_{\scriptbf{V}}$. Then any generalized dialectical net {\sf N} is equivalent to the unique flow specifier $\tilde{\tau} = \parpair{\tilde{{\cal S}_0}}{o}{\iota}{{\cal S}_1}$ determined by the cocone $\tau$. If $S_0$ is constant on objects (transition symbols), $S_{0,a} = {\cal S}_0$ for all $a \! \in \! |{\bf T}|$, then $\tilde{{\cal S}_0} = \product{|{\bf T}|}{{\cal S}_0}$. \vspace*{\baselineskip} \\ \hspace*{\fill} DIALECTICAL SYSTEMS \hspace*{\fill} \vspace*{\baselineskip} \\ In this section we show that each dialectical net determines a special kind of dialectical system. Let ${\bf V} = \quintuple{V}{\preceq}{\oplus}{\Rightarrow}{e}$ be any closed preorder. The monoid $\triple{V}{\oplus}{e}$ can be regarded as a category, denoted by $\bullet_V$. Let us give the monotonic functions of {\bf V}-composition and {\bf V}-implication the more function-like notation $V^v(\:) = (\:) \oplus v \! : \! V \longrightarrow V$ and $V_v(\:) = v \!\Rightarrow\! (\:) \! : \! V \longrightarrow V$ for each {\bf V}-element $v \! \in \! V$. Then the closure axiom becomes the adjunction statement $(V^v \dashv V_v) \! : \! V \longrightarrow V$ for each {\bf V}-element $v \! \in \! V$. Let ${\bf V}(v)$ denote this adjunction. In objective dialectics, since dialectical contradictions are represented by adjunctions, systems of dialectical contradictions are represented by diagrams (pseudofunctors) in the category {\bf adj} whose objects are bicomplete preorders and whose morphisms are adjoint pairs of monotonic functions. We call such a (pseudo)functor ${\bf E} \! : \! {\bf \Omega} \longrightarrow {\bf adj}$ a {\em dialectical base} of preorders, and use the notation ${\bf E}(w_1 \stackrel{t}{\rightarrow} w_2) = ({\bf E}^t \dashv {\bf E}_t) \! : \! {\bf E}_{w_1} \rightarrow {\bf E}_{w_2}$.
Objects of ${\bf \Omega}$ are called {\em types} and arrows of ${\bf \Omega}$ are called {\em terms}. Dialectical systems are the ``motors of nature'' specifying the dialectical motion of structured entities, and a dialectical base provides the ``motive power'' for this motion. The entire set of axioms for the closed preorder ${\bf V} = \quintuple{V}{\preceq}{\oplus}{\Rightarrow}{e}$ is equivalent to the following single statement. \begin{Fact} The operator ${\bf V}(\:)$ is a dialectical base ${\bf V} \! : \! \bullet_V \longrightarrow {\bf adj}$. \end{Fact} We now develop an equivalent fibrational approach for formalizing the dialectical structure of the {\bf V}-elements. Any quasi {\bf V}-space ${\cal X} = \pair{X}{d_X}$ determines a category ${\bf X}$, whose objectset is ${\rm Obj}({\bf X}) = X$, whose arrowset is ${\rm Ar}({\bf X}) = \{(x,v,x') \mid v \preceq d_X(x,x')\}$ with homsets being the principal ideals ${\bf X}[x,x'] \cong\; \downarrow_V\!\!d_X(x,x')$, whose source and target functions are the projections ${\rm pr}_1,{\rm pr}_3 \! : \! {\rm Ar}({\bf X}) \longrightarrow X$, whose identities are ${\rm Id}_x = (x,e,x)$ for each $x \! \in \! X$, and whose composition is $(x,v,x') \circ (x',v',x'') = (x,v \oplus v',x'')$. Clearly the projection function ${\rm pr}_2 \! : \! {\rm Ar}({\bf X}) \longrightarrow V$ defines a functor $|\:|_{\cal X} = {\rm pr}_2 \! : \! {\bf X} \longrightarrow \bullet_V$. Since $(x,e,x')$ is an ${\bf X}$-arrow iff $e \preceq d_X(x,x')$ iff $x \preceq_X x'$, the fiber of $|\:|_{\cal X}$ over the (only) $\bullet_V$-object $e$ is $|\:|_{{\cal X},e} = |\:|_{\cal X}^{-1}(e) = \{(x,e,x') \mid e \preceq d_X(x,x')\} \cong \{(x,x') \mid x \preceq_X x'\}$; that is, the fiber $|\:|_{{\cal X},e}$ is essentially the preorder $\pair{X}{\preceq_X}$ viewed as a subcategory of {\bf X}. A {\em distributed} {\bf V}-{\em type} (or a {\bf V}-{\em normed category}) ${\cal C}$ is a pair ${\cal C} = \pair{{\bf C}}{|\:|_{\cal C}}$ where ${\bf C}$ is a category and $|\:|_{\cal C} \! : \! {\bf C} \longrightarrow \bullet_V$ is a functor called a {\bf V}-{\em norm} (or a {\bf V}-{\em typing}). So every quasi {\bf V}-space ${\cal X} = \pair{X}{d_X}$ determines a distributed {\bf V}-type ${\cal X} = \pair{{\bf X}}{|\:|_{\cal X}}$. Any category {\bf C} is a distributed {\bf V}-type ${\cal C} = \pair{{\bf C}}{e}$, where $e \! : \! {\bf C} \rightarrow \bullet_V$ is the constant identity functor. The category {\bf V} of {\bf V}-{\em inequality conditions} is the category part of $\pair{{\bf V}}{|\:|_{\cal V}}$, the distributed {\bf V}-type determined by the quasi {\bf V}-space ${\cal V}$ of generalized truth-values. The functor part $|\:|_{\cal V}$ has special properties, and so we give it a special notation: $P_V = |\:|_{\cal V}$. By the above fact, $P_V$ is a $01$-fibration. The only fiber of $P_V$ is $P_{V,e} \cong {\cal V}=\pair{V}{\preceq}$ which is bicomplete (a cpo). \begin{Proposition} {\bf V}-inequality conditions form a dialectical base over {\bf V}-truth-values; that is, the {\bf V}-norm $P_V \! : \! {\bf V} \longrightarrow \bullet_V$ is a dialectical base. \end{Proposition} We can extend this result from ${\bf V}=\triple{{\cal V}}{\oplus}{\Rightarrow}$ itself to any tensored-cotensored quasi {\bf V}-space.
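Before developing the type theory further, it may help to keep a concrete dialectical base in mind (the direct-image/inverse-image base ${\bf P} \! : \! {\bf Set} \longrightarrow {\bf adj}$ invoked below in connection with Horn clause logic programs). It sends each function $f \! : \! X \rightarrow Y$ to the adjoint pair ${\bf P}(X \stackrel{f}{\rightarrow} Y) = (\exists_f \dashv f^{-1}) \! : \! {\bf 2}^X \rightarrow {\bf 2}^Y$, where $\exists_f(S) = f(S)$ is direct image and $f^{-1}(T)$ is inverse image; the fibers ${\bf 2}^X$ are complete lattices, hence bicomplete, and adjointness is just the familiar condition \[ f(S) \subseteq T \;\;\mbox{iff}\;\; S \subseteq f^{-1}(T) . \] The closed-preorder base ${\bf V} \! : \! \bullet_V \longrightarrow {\bf adj}$ of the Fact above may be thought of as a one-object analogue of this picture.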
Any {\bf V}-morphism ${\cal X} \stackrel{f}{\longrightarrow} {\cal Y}$ of quasi {\bf V}-spaces ${\cal X} = \pair{X}{d_X}$ and ${\cal Y} = \pair{Y}{d_Y}$ determines a functor ${\bf X} \stackrel{H_f}{\longrightarrow} {\bf Y}$ where $H_f(x) = f(x)$ on {\bf X}-objects and $H_f((x,v,x')) = (f(x),v,f(x'))$ on {\bf X}-arrows. Clearly $H_f$ commutes with the projection functions: $H_f \cdot |\:|_{\cal Y} = |\:|_{\cal X}$. Let ${\rm Type}_V(f) = H_f$ denote this construction. A {\em morphism} of distributed {\bf V}-types $H \! : \! \pair{{\bf C}}{|\:|_{\cal C}} \longrightarrow \pair{{\bf D}}{|\:|_{\cal D}}$ is a functor $H \! : \! {\bf C} \longrightarrow {\bf D}$ which commutes with the {\bf V}-norms: $H \cdot |\:|_{\cal D} = |\:|_{\cal C}$. So every quasi {\bf V}-space morphism is a distributed {\bf V}-type morphism. A {\bf V}-predicate $\phi \! \in \! {\bf V}^{\cal X}$ over a quasi {\bf V}-space ${\cal X}$ is a {\bf V}-morphism $\phi \! : \! {\cal X} \longrightarrow {\bf V}$, and hence determines a morphism of distributed {\bf V}-types $H_\phi \! : \! {\rm Type}_V({\cal X}) \longrightarrow {\rm Type}_V({\cal V})$; that is, a functor $H_\phi \! : \! {\bf X} \longrightarrow {\bf V}$ satisfying $H_\phi \cdot P_V = |\:|_{\cal X}$. Given a distributed {\bf V}-type ${\cal C} = \pair{{\bf C}}{|\:|_{\cal C}}$, a {\em distributed} {\bf V}-{\em entity} (or {\bf V}-{\em predicate}) $\Phi$ of type ${\cal C}$ is a functor $\Phi \! : \! {\bf C} \longrightarrow {\bf V}$ satisfying $\Phi \cdot P_V = |\:|_{\cal C}$. Let ${\bf V}^{\cal C}$ denote the collection (bicomplete quasi {\bf V}-space) of all distributed {\bf V}-entities of type ${\cal C}$. Let ${\bf Type}_V$ denote the category of distributed {\bf V}-types and their morphisms. ${\bf Type}_V$ is the comma category ${\bf Type}_V = {\bf Cat}\!\downarrow\!\bullet_V$. Then ${\rm Type}_V$ is a functor ${\rm Type}_V \! : \! {\bf Space}_V \longrightarrow {\bf Type}_V$ from spaces to types. In the opposite direction, any distributed {\bf V}-type ${\cal C} = \pair{{\bf C}}{|\:|_{\cal C}}$ determines a quasi {\bf V}-space ${\cal C} = \pair{C}{d_C}$, where $C$ is the objectset $C = {\rm Obj}({\bf C})$ of {\bf C} and the metric $d_C \! : \! \product{C}{C} \longrightarrow V$ is defined to be the homset supremum $d_C(c,c') = \bigvee_{c \stackrel{g}{\rightarrow} c'}|g|_{\cal C}$. Let ${\rm Space}_V(\pair{{\bf C}}{|\:|_{\cal C}}) = \pair{C}{d_C}$ denote this construction. Any morphism of distributed {\bf V}-types $H \! : \! {\cal C} \longrightarrow {\cal D}$, where ${\cal C} = \pair{{\bf C}}{|\:|_{\cal C}}$ and ${\cal D} = \pair{{\bf D}}{|\:|_{\cal D}}$, determines a morphism of quasi {\bf V}-spaces $f_H \! : \! {\rm Space}_V({\cal C}) \longrightarrow {\rm Space}_V({\cal D})$, where $f_H(c) = H(c)$ for all objects $c \! \in \! {\bf C}$. Let ${\rm Space}_V(H) = f_H$ denote this construction. Then ${\rm Space}_V$ is a functor ${\rm Space}_V \! : \! {\bf Type}_V \longrightarrow {\bf Space}_V$ from types to spaces. For any distributed {\bf V}-type ${\cal C} = \pair{{\bf C}}{|\:|_{\cal C}}$ there is a canonical morphism of {\bf V}-types $\eta_{{\cal C}} \! : \! {\cal C} \longrightarrow {\rm Type}_V({\rm Space}_V({\cal C}))$, where $\eta_{\cal C}$ is the identity map on objects and $\eta_{\cal C}(c \stackrel{g}{\rightarrow} c') = c \stackrel{(c,|g|_{\cal C},c')}{\longrightarrow} c'$ on arrows. Moreover, ${\rm Space}_V({\rm Type}_V({\cal X})) = {\cal X}$ for any quasi {\bf V}-space ${\cal X}$.
\begin{Proposition} The {\bf V}-space functor is left adjoint $({\rm Space}_V \dashv {\rm Type}_V)$ to the {\bf V}-type functor. \end{Proposition} The unit of this adjunction $\eta \! : \! {\rm Id} \Longrightarrow {\rm Space}_V \cdot {\rm Type}_V$ has the canonical morphism of {\bf V}-types $\eta_{\cal C}$ as its ${\cal C}$-th component, and the counit of this adjunction is the identity natural transformation ${\rm Id} \! : \! {\rm Type}_V \cdot {\rm Space}_V \Longrightarrow {\rm Id}$. So this adjunction is a reflection with ${\rm Type}_V \! : \! {\bf Space}_V \longrightarrow {\bf Type}_V$ embedding ${\bf Space}_V$ as a subcategory of ${\bf Type}_V$, and ${\rm Space}_V \! : \! {\bf Type}_V \longrightarrow {\bf Space}_V$ reflecting ${\bf Type}_V$ into its ``subcategory'' ${\bf Space}_V$. Given two categories {\bf B} and {\bf A}, a {\em distributor} ${\bf R}$ from category {\bf B} to category {\bf A}, denoted by ${\bf B} \stackrel{\scriptbf{R}}{\rightharpoondown} {\bf A}$, is a triple ${\bf R} = \triple{\circ_0}{R}{\circ_1}$, where: $R$ is a span $R = \quintuple{{\rm Obj}({\bf B})}{\partial_0}{{\rm Ar}({\bf R})}{\partial_1}{{\rm Obj}({\bf A})}$ with arrowset ${\rm Ar}({\bf R})$, source function $\partial_0 \! : \! {\rm Ar}({\bf R}) \longrightarrow {\rm Obj}({\bf B})$, and target function $\partial_1 \! : \! {\rm Ar}({\bf R}) \longrightarrow {\rm Obj}({\bf A})$; $\circ_0$ is a left action with respect to {\bf B}, so that ${\rm Id}_b \circ_0 e = e$ and $(g' \circ_B g) \circ_0 e = g' \circ_0 (g \circ_0 e)$ for all $R$-arrows $b \stackrel{e}{\rightarrow} a$ and all {\bf B}-arrows $b'' \stackrel{g'}{\rightarrow} b'$ and $b' \stackrel{g}{\rightarrow} b$; $\circ_1$ is a right action with respect to {\bf A}, so that $e \circ_1 {\rm Id}_a = e$ and $e \circ_1 (f \circ_A f') = (e \circ_1 f) \circ_1 f'$ for all $R$-arrows $b \stackrel{e}{\rightarrow} a$ and all {\bf A}-arrows $a \stackrel{f}{\rightarrow} a'$ and $a' \stackrel{f'}{\rightarrow} a''$; and the mixed associative law $g \circ_0 (e \circ_1 f) = (g \circ_0 e) \circ_1 f$ holds. A category ${\bf C}$ is a distributor ${\bf C} \stackrel{\scriptbf{C}}{\rightharpoondown} {\bf C}$, where ${\bf C} = \triple{\circ_C}{C}{\circ_C}$ and $C = \quintuple{{\rm Obj}({\bf C})}{\partial_0^C}{{\rm Ar}({\bf C})}{\partial_1^C}{{\rm Obj}({\bf C})}$. Categories and distributors form a category {\bf dist}, which includes as a subcategory (via Yoneda) the category {\bf Cat} of categories and functors. A {\em morphism of distributors} ${\bf R}_1 \stackrel{\scriptbf{H}}{\Longrightarrow} {\bf R}_2$ from distributor ${\bf B}_1 \stackrel{\scriptbf{R}_1}{\rightharpoondown} {\bf A}_1$ to distributor ${\bf B}_2 \stackrel{\scriptbf{R}_2}{\rightharpoondown} {\bf A}_2$, is a triple ${\bf H} = \triple{G}{H}{F}$ where $G \! : \! {\bf B}_1 \longrightarrow {\bf B}_2$ and $F \! : \! {\bf A}_1 \longrightarrow {\bf A}_2$ are functors, and $\triple{G}{H}{F} \! : \! R_1 \longrightarrow R_2$ is a morphism of spans which preserves actions: $H$ preserves source, $\partial_0(H(e)) = G(\partial_0(e))$; $H$ preserves target, $\partial_1(H(e)) = F(\partial_1(e))$; $H$ preserves source (left) action, $H(g \circ_0 e) = G(g) \circ_0 H(e)$; and $H$ preserves target (right) action, $H(e \circ_1 f) = H(e) \circ_1 F(f)$; for all $b' \stackrel{g}{\rightarrow} b$ in ${\rm Ar}({\bf B})$, $b \stackrel{e}{\rightarrow} a$ in ${\rm Ar}({\bf R})$ and $a \stackrel{f}{\rightarrow} a'$ in ${\rm Ar}({\bf A})$. Distributors and their vertical morphisms form the category {\bf Dist}.
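As a degenerate orienting example (not needed in what follows), if {\bf B} and {\bf A} are discrete categories, i.e., sets with only identity arrows, then both actions are forced to be trivial, and a distributor ${\bf B} \stackrel{\scriptbf{R}}{\rightharpoondown} {\bf A}$ amounts to nothing more than its span $R$, a multirelation from ${\rm Obj}({\bf B})$ to ${\rm Obj}({\bf A})$; in particular, every ordinary relation between two sets is a distributor between the corresponding discrete categories.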
A functor ${\bf A}_1 \stackrel{F}{\longrightarrow} {\bf A}_2$ is a morphism of distributors, ${\bf A}_1 \stackrel{\scriptbf{F}}{\Longrightarrow} {\bf A}_2$ with categories ${\bf A}_1$ and ${\bf A}_2$ regarded as distributors, where ${\bf F} = \triple{F \! : \! {\rm Obj}({\bf A}_1) \rightarrow {\rm Obj}({\bf A}_2)} {F \! : \! {\rm Ar}({\bf A}_1) \rightarrow {\rm Ar}({\bf A}_2)} {F \! : \! {\rm Obj}({\bf A}_1) \rightarrow {\rm Obj}({\bf A}_2)}$. This defines a vertical embedding of {\bf Cat} into {\bf Dist}. Any {\bf V}-relation ${\cal Y} \stackrel{\tau}{\rightharpoondown} {\cal X}$ determines a distributor ${\bf Y} \stackrel{\scriptbf{T}}{\rightharpoondown} {\bf X}$ whose arrowset is ${\rm Ar}({\bf T}) = \{(y,v,x) \mid y \! \in \! Y, x \! \in \! X, v \preceq \tau(y,x)\}$ with homsets being the principal ideals ${\bf T}[y,x] \cong\: \downarrow_V\! \tau(y,x)$, whose source and target functions are the projections $\partial_0^T = {\rm pr}_1 \! : \! {\rm Ar}({\bf T}) \longrightarrow {\rm Obj}({\bf Y})$ and $\partial_1^T = {\rm pr}_3 \! : \! {\rm Ar}({\bf T}) \longrightarrow {\rm Obj}({\bf X})$, and whose biaction consists of the left action $(y',v,y) \circ_0 (y,w,x) = (y',v \oplus w,x)$ and the right action $(y,w,x) \circ_1 (x,u,x') = (y,w \oplus u,x')$ (essentially the {\bf V}-composition $\oplus$ distributed over $Y$ and $X$). Clearly the projection function ${\rm pr}_2 \! : \! {\rm Ar}({\bf T}) \longrightarrow V$ defines a morphism of spans $|\:|_{\cal T} = {\rm pr}_2 \! : \! {\bf T} \longrightarrow \bullet_V$ which preserves left and right actions: $|(y',v,y) \circ_0 (y,w,x)|_{\cal T} = v \oplus w = |(y',v,y)|_{\cal Y} \oplus |(y,w,x)|_{\cal T}$. So ${\bf T} \stackrel{|\:|_{\cal T}}{\Longrightarrow} \bullet_V$ is a morphism of distributors, where $|\:|_{\cal T} = \triple{|\:|_{\cal Y}}{|\:|_{\cal T}}{|\:|_{\cal X}}$. Since $(y,e,x)$ is a ${\bf T}$-arrow iff $e \preceq \tau(y,x)$ iff $y \preceq_\tau x$, the fiber of $|\:|_{\cal T}$ over the (only) $\bullet_V$-object $e$ is $|\:|_{{\cal T},e} = |\:|_{\cal T}^{-1}(e) = \{(y,e,x) \mid e \preceq \tau(y,x)\} \cong \{(y,x) \mid y \preceq_\tau x\}$; that is, the fiber $|\:|_{{\cal T},e}$ is essentially the {\bf 2}-relation ${\cal Y} \stackrel{\preceq_\tau}{\rightharpoondown} {\cal X}$ viewed as a subdistributor of ${\bf Y} \stackrel{\scriptbf{T}}{\rightharpoondown} {\bf X}$. Given two distributed types ${\cal B}$ and ${\cal A}$, a {\em distributed} {\bf V}-{\em term} (or a {\bf V}-{\em normed distributor}) ${\cal R}$ from distributed {\bf V}-type ${\cal B}$ to distributed {\bf V}-type ${\cal A}$, denoted by ${\cal B} \stackrel{\scriptcal{R}}{\rightharpoondown} {\cal A}$, is a pair ${\cal R} = \pair{{\bf R}}{|\:|_{\cal R}}$ where ${\bf B} \stackrel{\scriptbf{R}}{\rightharpoondown} {\bf A}$ is a distributor and ${\bf R} \stackrel{|\:|_{\cal R}}{\Longrightarrow} \bullet_V$ is a morphism of distributors, called a {\bf V}-{\em norm} (or a {\bf V}-{\em terming}), where $|\:|_{\cal R} = \triple{|\:|_{\cal B}}{|\:|_{\cal R}}{|\:|_{\cal A}}$. So every {\bf V}-relation ${\cal Y} \stackrel{\tau}{\rightharpoondown} {\cal X}$ determines a distributed {\bf V}-term ${\cal T} = \pair{{\bf T}}{|\:|_{\cal T}}$. Let ${\rm Term}_V(\tau) = \pair{{\bf T}}{|\:|_{\cal T}}$ denote this construction. Also, a distributed {\bf V}-type ${\cal C}$ is a distributed {\bf V}-term ${\cal C} \stackrel{{\cal C}}{\rightharpoondown} {\cal C}$ where ${\cal C} = \pair{{\bf C}}{|\:|_{\cal C}}$.
Any distributor ${\bf B} \stackrel{\scriptbf{R}}{\rightharpoondown} {\bf A}$ is a distributed {\bf V}-term ${\cal B} \stackrel{\scriptcal{R}}{\rightharpoondown} {\cal A}$ with constant identity {\bf V}-norm ${\bf R} \stackrel{e}{\Longrightarrow} \bullet_V$. So {\bf Dist} is embeddable into ${\bf Term}_V$. So distributed {\bf V}-types and distributed {\bf V}-terms form a category ${\bf term}_V$. There is a concept orthogonal to distributed-terms-as-morphisms. Any morphism $\tau_1 \stackrel{\mbox{\footnotesize$\pair{g}{f}$\normalsize}}{\longrightarrow} \tau_2$ of {\bf V}-relations ${\cal Y}_1 \stackrel{\tau_1}{\rightharpoondown} {\cal X}_1$ and ${\cal Y}_2 \stackrel{\tau_2}{\rightharpoondown} {\cal X}_2$ determines a morphism of distributors ${\bf T}_1 \stackrel{\scriptbf{H}_{g,f}}{\longrightarrow} {\bf T}_2$ where ${\bf H}_{g,f} = \triple{|\:|_{\cal Y}}{H_{g,f}}{|\:|_{\cal X}}$ with $H_{g,f}((y,v,x)) = (g(y),v,f(x))$ on ${\bf T}_1$-arrows. Clearly ${\bf H}_{g,f}$ commutes with the {\bf V}-norms: ${\bf H}_{g,f} \cdot |\:|_{{\cal T}_2} = |\:|_{{\cal T}_1}$. Let ${\rm Term}_V({\bf A}) = {\rm Space}({\bf A})$ and ${\rm Term}_V(\pair{g}{f}) = {\bf H}_{g,f}$ denote this construction. Given two distributed {\bf V}-terms ${\cal R}_1 = \pair{{\bf R}_1}{|\:|_{{\cal R}_1}}$ and ${\cal R}_2 = \pair{{\bf R}_2}{|\:|_{{\cal R}_2}}$, a {\em morphism of distributed {\bf V}-terms} ${\cal R}_1 \stackrel{\scriptbf{H}}{\Longrightarrow} {\cal R}_2$ is a morphism of distributors ${\bf R}_1 \stackrel{\scriptbf{H}}{\Longrightarrow} {\bf R}_2$, say ${\bf H} = \triple{G}{H}{F}$, which commutes with the {\bf V}-norms: ${\bf H} \cdot |\:|_{{\cal R}_2} = |\:|_{{\cal R}_1}$. So every morphism of {\bf V}-relations is a morphism of distributed {\bf V}-terms. Let ${\bf Term}_V$ denote the category of distributed {\bf V}-terms and their morphisms. ${\bf Term}_V$ is the comma category ${\bf Term}_V = {\bf Dist}\!\Downarrow\!\bullet_V$. This is the vertical category of a double category, which we also denote by ${\bf Term}_V$, whose underlying horizontal category is ${\bf term}_V$. Then ${\rm Term}_V$ is a functor ${\rm Term}_V \! : \! {\bf Rel}_V \longrightarrow {\bf Term}_V$ from relations to terms. In the opposite direction, any distributed {\bf V}-term ${\cal T} = \pair{({\bf B} \stackrel{\scriptbf{T}}{\rightharpoondown} {\bf A})}{|\:|_{\cal T}}$ determines a {\bf V}-relation ${\rm Space}({\bf B}) \stackrel{\tau_T}{\rightharpoondown} {\rm Space}({\bf A})$, where the {\bf V}-morphism $\tau_T \! : \! \product{B}{A} \longrightarrow V$ is defined to be the homset supremum $\tau_T(b,a) = \bigvee_{b \stackrel{e}{\rightarrow} a}|e|_{\cal T}$. Let ${\rm Rel}_V(\pair{{\bf T}}{|\:|_{\cal T}}) = \tau_T$ denote this construction. Any morphism of distributed {\bf V}-terms $F\!=\!\triple{g}{h}{f} \! : \! {\cal T}_1 \longrightarrow {\cal T}_2$, where ${\cal T}_1 = \pair{{\bf T}_1}{|\:|_{{\cal T}_1}}$ and ${\cal T}_2 = \pair{{\bf T}_2}{|\:|_{{\cal T}_2}}$, determines a morphism of {\bf V}-relations $\pair{g}{f} \! : \! {\rm Rel}_V({\cal T}_1) \longrightarrow {\rm Rel}_V({\cal T}_2)$. Let ${\rm Rel}_V(F) = \pair{g}{f}$ denote this construction. Then ${\rm Rel}_V$ is a functor ${\rm Rel}_V \! : \! {\bf Term}_V \longrightarrow {\bf Rel}_V$ from terms to relations. For any distributed {\bf V}-term ${\cal T} = \pair{{\bf T}}{|\:|_{\cal T}}$ there is a canonical morphism of {\bf V}-terms $\eta_{{\cal T}} \! : \!
{\cal T} \longrightarrow {\rm Term}_V({\rm Rel}_V({\cal T}))$, where $\eta_{\cal T} = \triple{{\rm Id}_B}{h}{{\rm Id}_A}$ and $h(b \stackrel{e}{\rightarrow} a) = b \stackrel{(b,|e|_{\cal T},a)}{\longrightarrow} a$ on arrows. Moreover, ${\rm Rel}_V({\rm Term}_V(\tau)) = \tau$ for any {\bf V}-relation $\tau$. \begin{Proposition} The {\bf V}-relation functor is left adjoint $({\rm Rel}_V \dashv {\rm Term}_V)$ to the {\bf V}-term functor. \end{Proposition} The unit of this adjunction $\eta \! : \! {\rm Id} \Longrightarrow {\rm Rel}_V \cdot {\rm Term}_V$ has the canonical morphism of {\bf V}-terms $\eta_{\cal T}$ as its ${\cal T}$-th component, and the counit of this adjunction is the identity natural transformation ${\rm Id} \! : \! {\rm Term}_V \cdot {\rm Rel}_V \Longrightarrow {\rm Id}$. So this adjunction is a reflection with ${\rm Term}_V \! : \! {\bf Rel}_V \longrightarrow {\bf Term}_V$ embedding ${\bf Rel}_V$ as a subcategory of ${\bf Term}_V$, and ${\rm Rel}_V \! : \! {\bf Term}_V \longrightarrow {\bf Rel}_V$ reflecting ${\bf Term}_V$ into its ``subcategory'' ${\bf Rel}_V$. Let ${\bf E} \! : \! {\bf \Omega} \longrightarrow {\bf adj}$, or equivalently, $P_{\scriptbf{E}} \! : \! {\bf E} \longrightarrow {\bf \Omega}$ be any dialectical base. A {\em distributed} {\bf E}-{\em type} ${\cal C}$ is a pair ${\cal C} = \pair{{\bf C}}{|\:|_{\cal C}}$ where ${\bf C}$ is a category and $|\:|_{\cal C} \! : \! {\bf C} \longrightarrow {\bf \Omega}$ is a functor called an {\bf E}-{\em typing}. Given a distributed {\bf E}-type ${\cal C} = \pair{{\bf C}}{|\:|_{\cal C}}$, a {\em distributed} {\bf E}-{\em entity} $\Phi$ of type ${\cal C}$ is a functor $\Phi \! : \! {\bf C} \longrightarrow {\bf E}$ satisfying $\Phi \cdot P_{\scriptbf{E}} = |\:|_{\cal C}$. Let ${\bf E}^{\cal C}$ denote the collection (bicomplete preorder) of all distributed {\bf E}-entities of type ${\cal C}$. A {\em distributed} {\bf E}-{\em term} (or an {\bf E}-{\em termed distributor}) ${\cal R}$ is a pair ${\cal R} = \pair{{\bf R}}{|\:|_{\cal R}}$ where ${\bf B} \stackrel{\scriptbf{R}}{\rightharpoondown} {\bf A}$ is a distributor and ${\bf R} \stackrel{|\:|_{\cal R}}{\Longrightarrow} {\bf \Omega}$ is a morphism of distributors called an {\bf E}-{\em terming}. Let ${\bf Term}_E$ denote the category of distributed {\bf E}-terms and their morphisms. Any distributed {\bf E}-term ${\cal R} = \pair{{\bf R}}{|\:|_{\cal R}}$ with distributor ${\bf B} \stackrel{\scriptbf{R}}{\rightharpoondown} {\bf A}$ and {\bf E}-{\em terming} $|\:|_{\cal R} \! : \! {\bf R} \longrightarrow {\bf \Omega}$ defines by composition the {\bf E}-morphism ${\bf E}^{{\cal B}} \! \stackrel{\directflow_{\cal R}}{\longrightarrow} {\bf E}^{{\cal A}}$ called {\em direct flow} (or {\em yang}), where $\directflow_{\cal R}$ is defined by \[ \directflow_{\cal R}\!(\Psi)(A) = \bigvee_{B \in |\scriptbf{B}|,e \in \scriptbf{R}[B,A]} {\bf E}^{|e|_{\cal R}}(\Psi(B)) \] for any distributed {\bf E}-entity $\Psi \! \in \! {\bf E}^{{\cal B}}$ of distributed {\bf E}-type ${\cal B} = \pair{{\bf B}}{|\:|_{\cal B}}$ and any object $A \! \in \! {\rm Obj}({\bf A})$, where $\bigvee$ denotes supremum, or colimit, in ${\bf E}(|A|_{\cal A})$. The {\bf E}-morphism $\directflow_{\cal R}$ has as a right adjoint the {\bf E}-morphism ${\bf E}^{{\cal B}} \! 
\stackrel{\inverseflow_{\cal R}}{\longleftarrow} {\bf E}^{{\cal A}}$ called {\em inverse flow} (or {\em yin}), and defined by \[ \inverseflow_{\cal R}\!(\Phi)(B) = \bigwedge_{A \in |\scriptbf{A}|,e \in \scriptbf{R}[B,A]} {\bf E}_{|e|_{\cal R}}(\Phi(A)) \] for any distributed {\bf E}-entity $\Phi \! \in \! {\bf E}^{{\cal A}}$ of distributed {\bf E}-type ${\cal A} = \pair{{\bf A}}{|\:|_{\cal A}}$ and any object $B \! \in \! {\rm Obj}({\bf B})$, where $\bigwedge$ denotes infimum, or limit, in ${\bf E}(|B|_{\cal B})$. Parallel pairs of distributed {\bf E}-terms specify ``the dialectical motion (flow, development) of entities''. \begin{Fact} Direct flow is left adjoint to inverse flow $(\directflow_{\cal R} \dashv \inverseflow_{\cal R})$ for any distributed {\bf E}-term ${\cal R}$. \end{Fact} A parallel pair $\tau = \parpair{{\cal B}}{\footnotecal{O}}{\footnotecal{I}}{{\cal A}}$ of distributed {\bf E}-terms is called a {\em dialectical flow specifier}. Given a flow specifier $\tau$, the {\em dialectical flow} (or {\em yinyang}) specified by $\tau$ is the composition \[ \yiya{8}_\tau = {\bf E}^{\cal A} \stackrel{\inverseflow_{\cal I}}{\longrightarrow} {\bf E}^{\cal B} \stackrel{\directflow_{\cal O}}{\longrightarrow} {\bf E}^{\cal A} .\] We say that $\tau$ {\em reproduces} the distributed {\bf E}-entity $\Phi \! \in \! {\bf E}^{\cal A}$ when $\Phi$ is a fixpoint solution of the recursive equation $\chi \equiv \yiya{8}_\tau(\chi)$. An {\em elementary dialectical {\bf E}-system} {\sf S} is a quadruple ${\sf S} = \quadruple{T}{{\cal S}}{{\cal I}}{{\cal O}}$ consisting of: a set (of transition symbols) $T$, two distributed {\bf E}-types (of sites) ${\cal S} \!=\! \pair{{\cal S}_0}{{\cal S}_1}$, an inverse flow assignment ${\cal I} \! : \! T \longrightarrow {\bf Term}_E[{\cal S}_0,{\cal S}_1]$, and a direct flow assignment ${\cal O} \! : \! T \longrightarrow {\bf Term}_E[{\cal S}_0,{\cal S}_1]$, where ${\bf Term}_E[{\cal S}_0,{\cal S}_1]$ is the collection of all distributed {\bf E}-terms between the distributed {\bf E}-types of sites ${\cal S}_0$ and ${\cal S}_1$. For a fixed category of transition symbols {\bf T}, we can define morphisms of dialectical {\bf E}-systems, analogous to those for nets. Then, for fixed ${\bf T}$, dialectical {\bf E}-systems and their morphisms form a category ${\bf kosmos}_{\scriptbf E}^{\scriptbf T}$, the ${\bf T}$-th fiber of the {\em category of dialectical {\bf E}-systems} ${\bf kosmos}_{\scriptbf E}$ (the {\bf E}-th kosmos). If {\bf V} is any closed preorder with associated dialectical base ${\bf V} \! : \! \bullet_V \longrightarrow {\bf adj}$, or equivalently $P_{\scriptbf{V}} \! : \! {\bf V} \longrightarrow \bullet_V$, then dialectical {\bf V}-systems (as shown above) are (or more precisely, can be reflected into) dialectical {\bf V}-nets. If $D \cdot {\bf P} \! : \! {\bf T}_\Sigma^{\rm op} \rightarrow {\bf Set} \rightarrow {\bf adj}$ is the dialectical base, where $\Sigma$ is an algebraic signature with term category ${\bf T}_\Sigma$, $D$ is a $\Sigma$-algebra in functorial form, and ${\bf P} \! : \! {\bf Set} \longrightarrow {\bf adj}$ is the direct-image/inverse-image dialectical base, then dialectical $D \cdot {\bf P}$-systems are Horn clause logic programs. Horn clause logic programs can be enriched by replacing the pseudofunctor ${\bf P} \! : \! {\bf Set} \longrightarrow {\bf adj}$ with the existential-Kan-quantification/substitution pseudofunctor ${\bf P}_{\scriptbf{V}} \! : \!
{\bf Space}_V \longrightarrow {\bf adj}$, where ${\bf P}_{\scriptbf{V}}({\cal X} \stackrel{f}{\rightarrow} {\cal Y}) = \parpair{{\bf V}^{{\cal X}}}{\subsupsize{\exists_f}}{\subsupsize{{\bf V}^f}}{{\bf V}^{{\cal Y}}}$, and defining the dialectical base to be $D \cdot {\bf P}_{\scriptbf{V}} \! : \! {\bf T}_\Sigma^{\rm op} \rightarrow {\bf Space}_V \rightarrow {\bf adj}$, where $D$ is any {\bf V}-enriched $\Sigma$-algebra (see \cite{Kent87b} for further development of this case). In particular, the special case of natural numbers ${\bf V} = {\bf N}$ enriches Horn clause logic programs with multiplicities, and gives a proper formulation for ``predicate/transition nets'', which are not just nets, but full-fledged dialectical $D \cdot {\bf P}_{\scriptbf{N}}$-systems. \vspace*{\baselineskip} \\ \end{document}
\begin{document} \author{DongSeon Hwang} \address{Department of Mathematics, Ajou University, Suwon, Korea} \email{[email protected]} \author{Jinhyung Park} \address{Department of Mathematical Sciences, KAIST, Daejeon, Korea} \email{[email protected]} \subjclass[2010]{Primary 14J26; Secondary 14J17} \date{\today} \keywords{redundant blow-up, normal surface singularity, rational surface with big anticanonical divisor, Zariski decomposition.} \begin{abstract} We completely classify redundant blow-ups appearing in the theory of rational surfaces with big anticanonical divisor due to Sakai. In particular, we construct a rational surface with big anticanonical divisor which is not a minimal resolution of a del Pezzo surface with only rational singularities, which gives a negative answer to a question raised in a paper by Testa, V\'{a}rilly-Alvarado, and Velasco. \end{abstract} \maketitle \tableofcontents \section{Introduction} Throughout the paper, we work over an algebraically closed field $k$ of arbitrary characteristic. Sakai (\cite[Proposition 4.1 and Theorem 4.3]{Sak84}) proved that the anticanonical morphism $f \colon S \to \bar{S}$ of a big anticanonical rational surface, i.e., a smooth projective rational surface with big anticanonical divisor, factors through the minimal resolution of a del Pezzo surface with only rational singularities followed by a sequence of \emph{redundant blow-ups}, which will be defined below. However, the existence of redundant points was not known before. In the present paper, we show the existence of redundant points (Theorem \ref{reddisc}) by providing a systematic way of finding redundant points (Theorem \ref{redpt}). Here, we briefly introduce the notion of redundant blow-up in general, inspired by Sakai's work. Let $S$ be a smooth projective surface. Assume that $-K_S$ is pseudo-effective so that we have the Zariski decomposition $-K_S = P+N$. A point $p$ in $S$ is called a \emph{redundant point} if $\operatorname{mult}_p N \geq 1$. The blow-up $f \colon \widetilde{S} \to S$ at a redundant point $p$ is called a \emph{redundant blow-up}. For more detail, see Section \ref{setupsec}. To classify redundant blow-ups, we only need to know the positions of the redundant points on the surface $S$. In many natural situations, $S$ is a minimal resolution of a normal projective surface $\bar{S}$ with nef anticanonical divisor. It turns out that the position of redundant points on $S$ can be read off from the information of singularities on $\bar{S}$. \begin{theorem}\label{redpt} Let $\bar{S}$ be a normal projective rational surface with nef anticanonical divisor, and let $\widetilde{S}$ be a surface obtained by a sequence of redundant blow-ups from the minimal resolution $S$ of $\bar{S}$. Then, we have the following. \begin{enumerate} \item The number of surfaces obtained by a sequence of redundant blow-ups from $S$ is finite if and only if $\bar{S}$ contains at worst log terminal singularities. \item If $\bar{S}$ contains at worst log terminal singularities, then every redundant point on $\widetilde{S}$ is an intersection point of two curves contracted by the morphism $h \colon \widetilde{S} \rightarrow \bar{S}$. \item If $\bar{S}$ contains a non-log terminal singularity, then there is a curve in $\widetilde{S}$ contracted by the morphism $h \colon \widetilde{S} \rightarrow \bar{S}$ such that every point lying on this curve is a redundant point.
\end{enumerate} \end{theorem} Moreover, the existence of the redundant points can also be determined from the singularity types of $\bar{S}$. \begin{theorem}\label{reddisc} Let $\bar{S}$ be a normal projective rational surface with nef anticanonical divisor, and let $g \colon S \rightarrow \bar{S}$ be its minimal resolution. Then, $S$ has no redundant point if and only if $\bar{S}$ contains at worst canonical singularities or log terminal singularities whose dual graphs are as follows: \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-4,2.8) rectangle (9,4.2); \draw (-3.5,3.96)-- (-2.8,3.96); \draw (-1.57,3.96)-- (-0.87,3.96); \draw (-4,3.82) node[anchor=north west] {$-\underbrace{2\text{ }-2 \text{ }\text{ }\text{ }\text{ }\text{ } -2}_{\text{ } \text{ } \text{ } \text{ } \text{ } \alpha \text{ } (\alpha \geq 1)}$}; \draw (-1.32, 3.82) node[anchor=north west] {$-3$}; \draw (-2.5,4.12) node[anchor=north west] {$\cdots$}; \draw (0.2,3.96)-- (2.3,3.96); \draw (-0.25,3.82) node[anchor=north west] {$-2$}; \draw (0.45,3.82) node[anchor=north west] {$-2$}; \draw (1.15,3.82) node[anchor=north west] {$-3$}; \draw (1.85,3.82) node[anchor=north west] {$-2$}; \draw (3.37,3.96)-- (4.77,3.96); \draw (2.92,3.82) node[anchor=north west] {$-2$}; \draw (3.62,3.82) node[anchor=north west] {$-3$}; \draw (4.32,3.82) node[anchor=north west] {$-2$}; \draw (5.84,3.96)-- (6.54,3.96); \draw (5.39,3.82) node[anchor=north west] {$-2$}; \draw (6.09,3.82) node[anchor=north west] {$-4$}; \draw (7.16,3.86) node[anchor=north west] {$-n$}; \draw (6.9,3.5) node[anchor=north west] {$(n \geq 3)$}; \begin{scriptsize} \fill [color=black] (-3.5,3.96) circle (2.5pt); \fill [color=black] (-2.8,3.96) circle (2.5pt); \fill [color=black] (-1.57,3.96) circle (2.5pt); \fill [color=black] (-0.87,3.96) circle (2.5pt); \fill [color=black] (0.2,3.96) circle (2.5pt); \fill [color=black] (0.9,3.96) circle (2.5pt); \fill [color=black] (1.6,3.96) circle (2.5pt); \fill [color=black] (2.3,3.96) circle (2.5pt); \fill [color=black] (3.37,3.96) circle (2.5pt); \fill [color=black] (4.07,3.96) circle (2.5pt); \fill [color=black] (4.77,3.96) circle (2.5pt); \fill [color=black] (5.84,3.96) circle (2.5pt); \fill [color=black] (6.54,3.96) circle (2.5pt); \fill [color=black] (7.61,3.96) circle (2.5pt); \end{scriptsize} \end{tikzpicture} \end{theorem} It was shown that every big anticanonical rational surface has a finitely generated Cox ring in \cite[Theorem 1]{TVV10}, \cite[Theorem 3]{CS08}, and the following question was raised in \cite{TVV10}. \begin{question}[{\cite[Remark 3]{TVV10}}]\label{tvvq} Is every big anticanonical rational surface the minimal resolution of a del Pezzo surface with only rational singularities? \end{question} We give a negative answer to this question by explicitly constructing examples of redundant blow-ups (see Subsection \ref{exredsubsec}). \begin{theorem}\label{nomin} For each $n \geq 10$, there exists a big anticanonical rational surface of Picard number $n$ which is not a minimal resolution of a del Pezzo surface with only rational singularities. \end{theorem} The remainder of this paper is organized as follows. We first define redundant blow-ups in Section \ref{setupsec}. Then, Section \ref{redptsec} is devoted to the investigation of redundant points with respect to a given singularity, which leads us to the proof of Theorem \ref{redpt}. In Section \ref{discsec}, we prove Theorem \ref{reddisc} by calculating discrepancies. 
Finally, in Section \ref{exredsec}, we construct examples of redundant blow-ups. In particular, we prove Theorem \ref{nomin} in Subsection \ref{exredsubsec}. \section{Basic setup}\label{setupsec} In this section, we define redundant blow-ups. Let $S$ be a smooth projective surface, and let $D$ be a $\mathbb{Q}$-divisor. The \emph{Iitaka dimension} of $D$ is given by $$\kappa(D) := \max \{\dim \varphi_{|nD|}(S) : n \in \mathbb{N}\},$$ whose value is one of $2,1,0,$ and $-\infty$. We will frequently use the notion of the \emph{Zariski decomposition} of a pseudo-effective $\mathbb{Q}$-divisor $D$ (see \cite[Section 2]{Sak84} for details): $D$ can be written uniquely as $P+N$, where $P$ is a nef $\mathbb{Q}$-divisor, $N$ is an effective $\mathbb{Q}$-divisor, $P.N=0$, and the intersection matrix of the irreducible components of $N$ is negative definite if $N \neq 0$. Write an effective $\mathbb{Q}$-divisor $D=\sum_{i=1}^{n} \alpha_i E_i$, where $E_i$ is a prime divisor for all $1 \leq i \leq n$. The \emph{multiplicity} of $D$ at a point $p$ in $S$ is defined by $\operatorname{mult}_{p} D := \sum_{i=1}^n \alpha_i \operatorname{mult}_{p} E_i$, where $\operatorname{mult}_{p} E_i$ denotes the usual multiplicity of $E_i$ at $p$. In the remainder of this subsection, we use the following notations. Let $S$ be a smooth projective rational surface with $\kappa(-K_S) \geq 0$, and let $-K_S = P + N$ be the Zariski decomposition. Let $f \colon \widetilde{S} \to S$ be a blow-up at a point $p$ in $S$ with the exceptional divisor $E$. \begin{definition}\label{reddef} A point $p$ is called \emph{redundant} if $\operatorname{mult}_p N \geq 1$. The blow-up $f \colon \widetilde{S} \rightarrow S$ at a redundant point $p$ is called a \emph{redundant blow-up}, and the exceptional curve $E$ is called a \emph{redundant curve}. \end{definition} Note that we always have $\kappa(-K_{\widetilde{S}}) \leq \kappa(-K_S)$ in general. If $f$ is a redundant blow-up, then $\kappa(-K_{\widetilde{S}}) \geq 0$ by \cite[Lemma 6.9]{Sak84}. Now, we reformulate Sakai's result on redundant blow-ups. This will play a key role throughout the paper. \begin{lemma}[{\cite[Corollary 6.7]{Sak84}}]\label{redlem} Assume that $\kappa(-K_{\widetilde{S}}) \geq 0$ so that we have the Zariski decomposition $-K_{\widetilde{S}}=\widetilde{P}+\widetilde{N}$. Then, the following are equivalent: \begin{enumerate} \item $f$ is a redundant blow-up. \item $\widetilde{P}= f^{*}P$ and $\widetilde{N}= f^{*}N - E$. \end{enumerate} \end{lemma} \begin{remark}\label{redrem} If $-K_{\widetilde{S}}$ is big, then $\widetilde{P}.E=0$ if and only if $f$ is a redundant blow-up by Lemma \ref{redlem}. Thus, our definition of redundant curves coincides with that of Sakai (\cite[Definition 4.1]{Sak84}). \end{remark} \section{Finding redundant points}\label{redptsec} In this section, we focus on redundant points, and we prove Theorem \ref{redpt} at the end. In what follows, $\bar{S}$ denotes a normal projective rational surface such that $-K_{\bar{S}}$ is nef, and $g \colon S \rightarrow \bar{S}$ denotes the minimal resolution. First, we recall some basic facts concerning normal singularities on surfaces. Let $(\bar{S}, s)$ be a germ of a normal surface singularity, and let $g \colon S \rightarrow \bar{S}$ be the minimal resolution. Denote the exceptional set by $A=g^{-1}(s)=E_1 \cup \cdots \cup E_l$, where each $E_i$ is an irreducible component. Note that $E_i^2 = -n_i \leq -1$, where each $n_i$ is an integer for $i=1,\ldots,l$.
Now, we have $$-K_S = g^{*}(-K_{\bar{S}}) + \sum_{i=1}^{l} a_i E_i ,$$ where each $a_i$ is a nonnegative rational number for all $i=1,\ldots,l$. Here, we call $a_i$ the \emph{discrepancy} of $E_i$ with respect to $\bar{S}$ for convenience in computation, even though $-a_i$ is called the discrepancy in the literature. By the adjunction formula, this number can be obtained by the simultaneous linear equations: $$\sum_{j=1}^{l}a_j E_j E_i = -K_{S}.E_i = -n_i + 2 \hbox{ for } i=1,\ldots,l.$$ Thus, each discrepancy $a_i$ can be calculated by the following matrix equation \begin{displaymath} \left( \begin{array}{cccc} -n_1 & E_2 E_1 & \cdots & E_l E_1 \\ E_2 E_1 &-n_2 & \cdots & E_l E_2 \\ \vdots & \vdots & \ddots & \vdots \\ E_l E_1 & E_l E_2 & \cdots & -n_l \end{array} \right) \left( \begin{array}{c} a_1 \\ a_2 \\\vdots \\ a_l \end{array} \right) = -\left( \begin{array}{c} n_1 - 2 \\ n_2 - 2 \\\vdots \\ n_l - 2 \end{array} \right). \end{displaymath} If an algebraic surface contains at worst rational singularities, then its singularities are isolated and the surface is $\mathbb{Q}$-factorial (\cite[Theorem 4.6]{B01}), and the exceptional set $A$ consists of smooth rational curves with a simple normal crossing (\emph{snc} for short) support. Throughout this paper, we adopt the following standard terminologies. \begin{definition}\label{sing} (1) A singularity $s$ is called \emph{canonical} if $a_i = 0 $ for all $i=1,\ldots,l$.\\ (2) A singularity $s$ is called \emph{log terminal} if $0 \leq a_i < 1$ for all $i=1,\ldots,l$. \end{definition} We note that every log terminal singularity is rational (\cite[Theorem 4.12]{KM98}). When $\operatorname{char}(k)=0$, a log terminal singularity is nothing but a quotient singularity (\cite[Proposition 4.18]{KM98}). We can get the Zariski decomposition of $-K_S$ immediately by the assumption that $-K_{\bar{S}}$ is nef. \begin{lemma}\label{zar} Let $P=g^{*}(-K_{\bar{S}})$ and $N=\sum_{i=1}^{l} a_i E_i$, where each $E_i$ is a $g$-exceptional curve and each $a_i$ is the discrepancy of $E_i$ with respect to $\bar{S}$ for $1 \leq i \leq l$. Then, $-K_S = P+N$ is the Zariski decomposition. \end{lemma} \begin{proof} The pull-back of a nef divisor is again nef. Clearly $P.N=0$ and the intersection matrix of irreducible components of $N$ is negative definite. \end{proof} Suppose that $S$ contains a redundant point $p$. Let $f \colon \widetilde{S} \rightarrow S$ be the redundant blow-up at $p$ with the exceptional divisor $E$, and let $-K_S = P+N$ and $-K_{\widetilde{S}} = \widetilde{P} + \widetilde{N}$ be the Zariski decompositions. Then, we obtain $$\widetilde{N} = \sum_{i=1}^{l} a_i \widetilde{E}_i + (\operatorname{mult}_p N-1) E= \sum_{i=1}^{l} a_i \widetilde{E}_i + \left( \sum_{p \in E_j} a_j -1\right) E,$$ where $\widetilde{E}_i$ is the strict transform of $E_i$ for each $1 \leq i \leq l$. We note that $\sum_{p \in E_j} a_j \geq 1$, since $\operatorname{mult}_p N \geq 1$. \begin{remark} If $N$ has an snc support, then so does $\widetilde{N}$. Thus, the negative part of the Zariski decomposition of the anticanonical divisor of a big anticanonical rational surface also has an snc support. This is no longer true for rational surfaces with anticanonical Iitaka dimension $1$ or $0$ (see Example \ref{0redex}). \end{remark} In the remainder of this section, we will completely determine the location of redundant points on $S$ in terms of the singularities of $\bar{S}$. 
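To fix ideas, here is a small worked instance of the discrepancy computation above; it also previews Table \ref{A.A}. Suppose $\bar{S}$ has a single log terminal singular point whose exceptional set in the minimal resolution is a chain of three smooth rational curves $E_1, E_2, E_3$ with $E_1^2 = E_3^2 = -2$ and $E_2^2 = -3$. The matrix equation reads \begin{displaymath} \left( \begin{array}{ccc} -2 & 1 & 0 \\ 1 & -3 & 1 \\ 0 & 1 & -2 \end{array} \right) \left( \begin{array}{c} a_1 \\ a_2 \\ a_3 \end{array} \right) = -\left( \begin{array}{c} 0 \\ 1 \\ 0 \end{array} \right), \end{displaymath} whose solution is $(a_1,a_2,a_3)=\left(\frac{1}{4},\frac{1}{2},\frac{1}{4}\right)$. Hence $N = \frac{1}{4}E_1 + \frac{1}{2}E_2 + \frac{1}{4}E_3$, and at each of the two intersection points we have $\operatorname{mult}_p N = \frac{1}{4}+\frac{1}{2} = \frac{3}{4} < 1$, so this $S$ has no redundant point. This singularity reappears as $[2,3,2]$ in Proposition \ref{A}.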
\subsection{$\bar{S}$ contains at worst canonical singularities} \begin{lemma}\label{can} If $\bar{S}$ has at worst canonical singularities, then $S$ has no redundant points. \end{lemma} \begin{proof} By Definition \ref{sing}, $N = 0$; thus, the lemma follows. \end{proof} \subsection{$\bar{S}$ contains at worst log terminal singularities} \begin{lemma}\label{lt} If $\bar{S}$ contains at worst log terminal singularities, then any surface $\widetilde{S}'$ obtained by a sequence of redundant blow-ups from $S$ has finitely many redundant points, and every redundant point is an intersection point of two curves contracted by the morphism $\widetilde{S}' \rightarrow \bar{S}$. \end{lemma} \begin{proof} By Definition \ref{sing}, $0 \leq a_i < 1$ for each $1 \leq i \leq l$. Finding a point $p$ in $S$ satisfying $\operatorname{mult}_p N \geq 1$ is equivalent to finding an intersection point of $E_j$ and $E_k$ such that $a_j + a_k \geq 1$ for some $1 \leq j \leq l$ and $1 \leq k \leq l$. Since the number of $E_i$'s is finite, the number of redundant points is also finite. In this case, we have $$\widetilde{N}=\sum_{i=1}^{l}a_i \widetilde{E}_i + (a_j + a_k -1)E.$$ Observe that $a_i < 1$ for each $1 \leq i \leq l$ and $a_j+a_k-1 < 1$. Thus, after performing redundant blow-ups, the number of redundant points is still finite. \end{proof} \begin{lemma}\label{length} The number of surfaces obtained by a sequence of redundant blow-ups from $S$ is finite. \end{lemma} \begin{proof} Let $\widetilde{p}:= \widetilde{E}_j \cap E$ be the intersection point. Then, we have $$\operatorname{mult}_{\widetilde{p}} \widetilde{N}=a_j + (\operatorname{mult}_p N-1)=\operatorname{mult}_p N-(1-a_j)<\operatorname{mult}_p N,$$ i.e., the multiplicity strictly decreases after redundant blow-ups. Suppose that $\operatorname{mult}_{\widetilde{p}}\widetilde{N} \geq 1$. Let $f' \colon \widetilde{S}' \rightarrow \widetilde{S}$ be a redundant blow-up at $\widetilde{p}$ with the redundant curve $F$, and let $\widetilde{E}_j', \widetilde{E}_k'$ and $E'$ be the strict transforms of $\widetilde{E}_j, \widetilde{E}_k$ and $E$, respectively. Let $\widetilde{p}':=\widetilde{E}_j' \cap F$ and $q:=E' \cap F$ be the intersection points.
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.8cm,y=0.8cm] \clip(-7.4,0.9) rectangle (8,4.7); \draw (5.74,4)-- (3.74,2); \draw (5,4)-- (7,2); \draw (0.06,4.2)-- (-0.86,1.84); \draw (1.16,4.2)-- (2.16,1.84); \draw (1.16,4.2)-- (2.16,1.84); \draw [dash pattern=on 3pt off 3pt] (-0.7,3.54)-- (2.04,3.54); \draw (2.96,3.6) node[anchor=north west] {$\xrightarrow{f}$}; \draw (3.45,2.00) node[anchor=north west] {$E_j$}; \draw (6.7,2.00) node[anchor=north west] {$E_k$}; \draw (5.15,3.52) node[anchor=north west] {$p$}; \draw (-0.3,3.6) node[anchor=north west] {$\widetilde{p}$}; \draw (-1.25,1.84) node[anchor=north west] {$\widetilde{E}_j$}; \draw (1.8,1.84) node[anchor=north west] {$\widetilde{E}_k$}; \draw (-5.9,4.22)-- (-6.82,1.86); \draw (-4.12,4.18)-- (-3.12,1.82); \draw (-2.3,3.6) node[anchor=north west] {$\xrightarrow{f'}$}; \draw [dotted] (-6.74,3.14)-- (-4.52,4.28); \draw [dash pattern=on 3pt off 3pt] (-5.54,4.26)-- (-3.22,3.22); \draw (-7.2,1.88) node[anchor=north west] {$\widetilde{E}_j'$}; \draw (-3.5,1.84) node[anchor=north west] {$\widetilde{E}_k'$}; \draw (-6.3,3.5) node[anchor=north west] {$\widetilde{p}'$}; \draw (-5.25,3.96) node[anchor=north west] {$q$}; \draw (-1.2,4.05) node[anchor=north west] {$E$}; \draw (-3.3,3.56) node[anchor=north west] {$E'$}; \draw (-7.3,3.48) node[anchor=north west] {$F$}; \begin{scriptsize} \fill (5.37,3.63) circle (2.0pt); \fill (-0.21,3.55) circle (2.0pt); \fill (-5.03,4.03) circle (2.0pt); \fill (-6.25,3.39) circle (2.0pt); \end{scriptsize} \end{tikzpicture} \\ Let $-K_{\widetilde{S}'} = \widetilde{P}' + \widetilde{N}'$ be the Zariski decomposition. Then, we have \begin{displaymath} \begin{array}{lll} \operatorname{mult}_{\widetilde{p}} \widetilde{N}-\operatorname{mult}_{\widetilde{p}'} \widetilde{N}'&=&\operatorname{mult}_{\widetilde{p}}\widetilde{N}-\{\operatorname{mult}_{\widetilde{p}}\widetilde{N}-(1-a_j)\} = 1-a_j,\\ \operatorname{mult}_{\widetilde{p}}\widetilde{N}-\operatorname{mult}_{q}\widetilde{N}'&=&\operatorname{mult}_{\widetilde{p}}\widetilde{N}-\{(\operatorname{mult}_{\widetilde{p}}\widetilde{N}-1) + (\operatorname{mult}_pN -1)\}\\ &=&2-\operatorname{mult}_pN\\ &=& 2-a_j - a_k. \end{array} \end{displaymath} Note that $\operatorname{mult}_p N-\operatorname{mult}_{\widetilde{p}}\widetilde{N}=1-a_j < 2-a_j-a_k,$ and hence, we obtain $$\operatorname{mult}_p N-\operatorname{mult}_{\widetilde{p}}\widetilde{N} \leq \operatorname{mult}_{\widetilde{p}}\widetilde{N}-\operatorname{mult}_{\widetilde{p}'}\widetilde{N}',$$ and $$\operatorname{mult}_pN-\operatorname{mult}_{\widetilde{p}}\widetilde{N} \leq \operatorname{mult}_{\widetilde{p}}\widetilde{N}-\operatorname{mult}_{q}\widetilde{N}', $$ i.e., the difference of multiplicities increases after redundant blow-ups. Thus, the assertion immediately follows. \end{proof} There exist natural numbers $M_j$ and $M_k$ such that \[ \begin{array}{l} \operatorname{mult}_{p}N- M_j (1-a_j) < 1 \hbox{ and} \operatorname{mult}_{p}N-(M_j -1) (1-a_j) \geq 1, \end{array}\] and \[ \begin{array}{l} \operatorname{mult}_{p}N- M_k (1-a_k) < 1 \hbox{ and} \operatorname{mult}_{p}N-(M_k -1) (1-a_k) \geq 1. \end{array}\] Since $\max \{M_j, M_k \}$ depends only on $p$, we denote it by $M(p)$. \begin{corollary}\label{bdm} The maximal length of sequences of redundant blow-ups from $S$ is equal to $$\max_{p \in R} \{M(p)\},$$ where $R$ is the set of all redundant points on $S$. 
\end{corollary} \begin{remark} Corollary \ref{bdm} shows that there is a bound of the length of sequences of redundant blow-ups for a given surface $S$. However, there is no global bound for $M(p)$ (see Example \ref{ltex}; we have $M_1 > m-2 - \frac{2m}{n}$ and $M_2 > n-2 - \frac{2n}{m}$, and hence, $M(p)$ can be arbitrarily large as $m$ and $n$ go to infinity). \end{remark} \subsection{$\bar{S}$ contains worse than log terminal singularities} \begin{lemma}\label{rtsing} If $\bar{S}$ contains a singularity that is not a log terminal singularity, then any surface $\widetilde{S}'$ obtained by a sequence of redundant blow-ups from $S$ contains a curve $C$ contracted by the morphism $\widetilde{S}' \rightarrow \bar{S}$ such that every point in $C$ is a redundant point. In particular, $\widetilde{S}'$ has infinitely many redundant points. \end{lemma} \begin{proof} By Definition \ref{sing}, we can choose an integer $k$ with $1 \leq k \leq l$ such that $a_k \geq 1$, and hence, every point in $E_k$ is a redundant point. After a redundant blow-up $\widetilde{S} \rightarrow S$ at a point $p$ in $E_k$, every point in the proper transform of $E_k$ is again a redundant point in $\widetilde{S}$. In this way, the lemma easily follows. \end{proof} We are ready to prove Theorem \ref{redpt}. \begin{proof}[Proof of Theorem \ref{redpt}] By Lemmas \ref{lt}, \ref{length}, and \ref{rtsing}, the theorem holds. \end{proof} \section{Existence of redundant points}\label{discsec} In this section, we focus on the existence of redundant points, and we prove Theorem \ref{reddisc} at the end. As in Section \ref{redptsec}, we use the following notations throughout this section: $\bar{S}$ denotes a normal projective rational surface such that $-K_{\bar{S}}$ is nef, and $g \colon S \rightarrow \bar{S}$ denotes the minimal resolution. To prove Theorem \ref{reddisc}, by Lemmas \ref{can} and \ref{rtsing}, we only need to consider the case when $\bar{S}$ has at worst log terminal singularities. Recall that in this case, $p \in S$ is a redundant point if and only if there are two intersecting irreducible exceptional curves $E_j$ and $E_k$ in the minimal resolution such that the sum of discrepancies $a_j + a_k \geq 1$. Thus, it suffices to consider the problem locally near each singular point $s$ in $\bar{S}$, and hence, we may assume throughout that $\bar{S}$ contains only one log terminal singular point $s$. In the case of characteristic zero, Brieskorn completely classified finite subgroups of $GL(2, k)$ without quasi-reflections, i.e., he classified all the dual graphs of quotient singularities of surfaces (\cite{Bri68}). It turns out that the complete list of the dual graphs of the log terminal surface singularities remains the same in arbitrary characteristic (see \cite{Ale91}). For the reader's convenience, we will give a detailed description of all possible types ($A_{q,q_1}, D_{q,q_1}, T_m, O_m$ and $I_m$) of dual graphs and discrepancies of log terminal singularities in the following subsections.
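Before the case-by-case analysis, we illustrate the criterion $a_j + a_k \geq 1$ with a chain that does produce a redundant point. Let $s \in \bar{S}$ be a singular point whose exceptional set in the minimal resolution is a chain of two smooth rational curves $E_1, E_2$ with $E_1^2 = -2$ and $E_2^2 = -5$ (the singularity $[2,5]$ in the notation of the next subsection). The matrix equation gives $a_1 = \frac{1}{3}$ and $a_2 = \frac{2}{3}$, so $s$ is log terminal and $a_1 + a_2 = 1$; hence the intersection point $E_1 \cap E_2$ is a redundant point of $S$. Accordingly, $[2,5]$ does not appear in the list of Proposition \ref{A}, whereas $[2,4]$, with discrepancies $\left(\frac{2}{7},\frac{4}{7}\right)$, does.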
\subsection{$A_{q,q_1}$-type}\label{ltAsss} The dual graph of a log terminal singularity $s \in \bar{S}$ of type $A_{q,q_1}$ is \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-4.3,2.2) rectangle (7.28,4.0); \draw (-1,3)-- (0,3); \draw (0,3)-- (1,3); \draw (2.58,3)-- (3.58,3); \draw (-1.24,3.66) node[anchor=north west] {$E_1$}; \draw (-0.24,3.66) node[anchor=north west] {$E_2$}; \draw (0.78,3.66) node[anchor=north west] {$E_3$}; \draw (1.45,3.16) node[anchor=north west] {$\cdots$}; \draw (2.28,3.66) node[anchor=north west] {$E_{l-1}$}; \draw (3.36,3.66) node[anchor=north west] {$E_l$}; \draw (-1.3,2.84) node[anchor=north west] {$-n_1$}; \draw (-0.3,2.84) node[anchor=north west] {$-n_2$}; \draw (0.68,2.84) node[anchor=north west] {$-n_3$}; \draw (2.1,2.84) node[anchor=north west] {$-n_{l-1}$}; \draw (3.36,2.84) node[anchor=north west] {$-n_l$}; \begin{scriptsize} \fill [color=black] (-1,3) circle (2.5pt); \fill [color=black] (0,3) circle (2.5pt); \fill [color=black] (1,3) circle (2.5pt); \fill [color=black] (2.58,3) circle (2.5pt); \fill [color=black] (3.58,3) circle (2.5pt); \end{scriptsize} \end{tikzpicture}\\ where each $n_i \geq 2$ is an integer for all i. This singularity can be characterized by the so-called \emph{Hirzebruch-Jung continued fraction} \[ \frac{q}{q_1}=[n_1, n_2, ..., n_l]= n_1 - \dfrac{1}{n_2-\dfrac{1}{\ddots - \dfrac{1}{n_l}}}. \] The intersection matrix is \begin{displaymath} M(-n_1, \ldots, -n_l) := \left( \begin{array}{cccccc} -n_1 & 1 & 0 & \cdots & \cdots &0 \\ 1 & -n_2 & 1 & \cdots & \cdots& 0 \\ 0 & 1 & -n_3 & \cdots & \cdots& 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0& 0 & \cdots & -n_{l-1}&1 \\ 0 & 0& 0 & \cdots &1 & -n_l \\ \end{array} \right). \end{displaymath} For simplicity, we use the notation $[n_1, \ldots, n_l]$ to refer to a log terminal singularity of $A_{q,q_1}$-type. Note that $[n_1, \ldots, n_l]$ and $[n_l, \ldots, n_1]$ denote the same singularity. We will use the following notation for convenience (cf. \cite{HK09}). \begin{enumerate} \item $q_{b_1, b_2, \ldots, b_m} := |\det(M')|$ where $M'$ is the $(l-m)\times(l-m)$ matrix obtained by deleting $-n_{b_1}, -n_{b_2}, \ldots, -n_{b_m}$ from $M(-n_1, \ldots, -n_l)$. For convenience, we also define $q_{1, \ldots, l}=|\det(M(\emptyset))|=1$. \item $u_s := q_{s, \ldots, l} = |\det(M(-n_1, \ldots, -n_{s-1}))|$ $(2 \leq s \leq l)$, $u_0=0, u_1 = 1$. \item $v_s := q_{1, \ldots, s} = |\det(M(-n_{s+1}, \ldots, -n_l))|$ $(1 \leq s \leq l-1)$, $v_l=1, v_{l+1}=0$. \item $q = |\det(M(-n_1, \ldots, -n_l))|=u_{l+1}=v_0$. \item $|[n_1, \ldots, n_l]|:=|\det (M(-n_1, \ldots, -n_l))|$. \end{enumerate} It follows that $ q_1 = |\det(M(-n_2,\ldots, -n_l)|=v_1$. The following properties of Hirzebruch-Jung continued fractions will be used. \begin{lemma}\label{uv} For $1 \leq i \leq l$, we have the following. \begin{enumerate} \item $a_i = 1 - \frac{u_i + v_i}{q}$. \item $u_{i+1} = n_i u_{i} - u_{i-1}$, $v_{i-1} = n_i v_i -v_{i+1}$. \item $v_i u_{i+1}- v_{i+1}u_{i} =v_{i-1} u_{i} - v_i u_{i-1}= q. $ \item $|[n_1,\ldots, n_{i-1}, n_i +1, n_{i+1}, \ldots, n_l]|=v_i u_{i}+|[n_1, n_2, \ldots, n_l]|>q.$ \item If $l \geq 2$ and $n_1 \geq 3$, then $v_1 + v_2 < q$. \end{enumerate} \end{lemma} \begin{proof} (1)-(4) are well-known facts (for (1), see \cite[Lemma 2.2]{HK11}, and for (2)-(4), see \cite[Lemma 2.4]{HK09}). It is straightforward to check (5) as follows: $q = v_1u_2-v_2u_1 = v_1 \cdot n_1 - v_2 = (n_1-1)v_1 +(v_1-v_2) > 2v_1 > v_1 + v_2$. 
\end{proof} Now, we give a criterion for checking $a_k + a_{k+1} \geq 1$ for some $k$. Recall that the intersection point of $E_K$ and $E_{k+1}$ is a redundant point in $S$. \begin{lemma}\label{TFAE} The following are equivalent. \begin{enumerate} \item $a_k + a_{k+1} \geq 1$ for some $1 \leq k \leq l-1$. \item $q \geq u_k + u_{k+1} + v_k + v_{k+1}$ for some $1 \leq k \leq l-1$. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{uv} (1), we have $$a_k + a_{k+1} = \left(1 - \frac{u_k + v_k}{q} \right) + \left(1 - \frac{u_{k+1} + v_{k+1}}{q} \right) = 2 - \frac{ u_k + u_{k+1} + v_k + v_{k+1} }{q},$$ and hence, the equivalence follows. \end{proof} The singularity $[\underbrace{2,\ldots,2}_{l}]$ is a canonical singularity of type $A_l$, and hence, $q=l+1$ and $a_i=0$ for all $1 \leq i \leq l$. On the other hand, the minimal resolution $S$ of the singularity $[\underbrace{2,\ldots,2}_{l-1},3]$ does not contain any redundant point. Indeed, $q=2l+1$ and $u_i=i$, $v_i=2l-2i+1$ for $1 \leq i \leq l$ by using Lemma \ref{uv}. Then, for $1 \leq i \leq l-1$, $$u_i + u_{i+1} + v_i + v_{i+1} = 4l - 2i + 1 > 2l+1=q,$$ and hence, by Lemma \ref{TFAE}, there is no redundant point on $S$. \begin{proposition}\label{A} Let $s \in \bar{S}$ be a log terminal singularity of type $A_{q,q_1}$. Then, $S$ does not contain a redundant point if and only if $s$ is of type $[\underbrace{2,\ldots,2}_{\alpha}] (\alpha \geq 1)$, $[\underbrace{2,\ldots,2}_{\alpha},3] (\alpha \geq 1)$, $[2,2,3,2]$, $[2,3,2]$, $[2,4]$, or $[n]$ for $n \geq 3$. \end{proposition} To prove the proposition, we need two more lemmas. \begin{lemma}\label{A1} The following hold. \begin{enumerate} \item If $s$ is the singularity $[\underbrace{2,\ldots,2}_{\alpha},3,\underbrace{2,\ldots,2}_{\beta}]$ $(\alpha \geq \beta \geq 1)$, then $q=\alpha \beta + 2 \alpha + 2 \beta + 3$ and the intersection point of $E_{\alpha}$ and $E_{\alpha+1}$ is a redundant point, except for $[2,2,3,2]$ and $[2,3,2]$. \item If $s$ is the singularity $[\underbrace{2,\ldots,2}_{\alpha},3,\underbrace{2,\ldots,2}_{\beta}, 3, \underbrace{2,\ldots,2}_{\gamma}]$ $(\alpha \geq 0$, $ \beta \geq 1$ and $ \gamma \geq 0)$, then the intersection point of $E_{\alpha+1}$ and $E_{\alpha+2}$ is a redundant point. \item If $s$ is the singularity $[\underbrace{2,\ldots,2}_{\alpha},4,\underbrace{2,\ldots,2}_{\beta}]$ $(\alpha \geq \beta \geq 0)$, then the intersection point of $E_{\alpha}$ and $E_{\alpha+1}$ is a redundant point, except for $[2,4]$ and $[4]$. \end{enumerate} \end{lemma} \begin{proof} The strategy is as follows. First, we compute $u_{\alpha}, u_{\alpha+1}, u_{\alpha+2}, v_{\alpha}, v_{\alpha+1}, v_{\alpha+2}$ and $q$ using Lemma \ref{uv}. Second, we determine whether $$q \geq u_{\alpha}+u_{\alpha+1}+v_{\alpha}+v_{\alpha+1} \quad \text{ or } \quad q \geq u_{\alpha+1}+u_{\alpha+2}+v_{\alpha+1}+v_{\alpha+2}$$ holds or not. Finally, by applying Lemma \ref{TFAE}, we can find a redundant point. (1) We have \begin{displaymath} \begin{array}{l} u_{\alpha} = |[\underbrace{2,\ldots,2}_{\alpha-1}]| = \alpha, v_{\alpha} = |[3,\underbrace{2,\ldots,2}_{\beta}]| = 2\beta + 3\\ u_{\alpha+1} = |[\underbrace{2,\ldots,2}_{\alpha}]| = \alpha+1, v_{\alpha+1}= |[\underbrace{2,\ldots,2}_{\beta}]| = \beta + 1.\\ \end{array} \end{displaymath} Then, $u_{\alpha}+u_{\alpha+1}+v_{\alpha}+v_{\alpha+1}=2 \alpha + 3\beta + 5$. 
Using Lemma \ref{uv} (3), we obtain $$q= v_{\alpha} u_{\alpha+1} - v_{\alpha+1}u_{\alpha} = (\alpha+1)(2\beta+3)-(\beta+1)\alpha=\alpha \beta + 2\alpha + 2\beta + 3.$$ Then, we have $q -(u_{\alpha}+u_{\alpha+1}+v_{\alpha}+v_{\alpha+1}) = \alpha \beta - \beta - 2 = (\alpha - 1) \beta -2$. If $\alpha \beta - \beta -2 <0$, then $(\alpha, \beta)=(1,1)$ or $(2,1)$. These cases are exactly $[2,2,3,2]$ and $[2,3,2]$, and we can easily check that there is no redundant point on $S$ (see Table \ref{A.A}). Otherwise, we have $q \geq (u_{\alpha}+u_{\alpha+1}+v_{\alpha}+v_{\alpha+1})$. (2) We have \begin{displaymath} \begin{array}{l} u_{\alpha+1} =|[\underbrace{2,\ldots,2}_{\alpha}]| = \alpha+1, v_{\alpha+1}=|[\underbrace{2,\ldots,2}_{\beta},3,\underbrace{2,\ldots,2}_{\gamma}]|=\beta\gamma + 2\beta + 2\gamma + 3\\ u_{\alpha+2} =|[\underbrace{2,\ldots,2}_{\alpha},3]| = 2\alpha+3, v_{\alpha+2}=|[\underbrace{2,\ldots,2}_{\beta-1},3,\underbrace{2,\ldots,2}_{\gamma}]|=\beta\gamma + 2\beta + \gamma + 1. \end{array} \end{displaymath} Then, $u_{\alpha+1}+u_{\alpha+2}+v_{\alpha+1}+v_{\alpha+2} = 2\beta \gamma + 3\alpha + 4\beta + 3 \gamma + 8$. Using Lemma \ref{uv} (4), we obtain \begin{eqnarray*} q &=& |[\underbrace{2,\ldots,2}_{\alpha+\beta+1},3,\underbrace{2\ldots,2}_{\gamma}]| + u_{\alpha+1}v_{\alpha+1}\\ &=& (\alpha+\beta+1)\gamma + 2 (\alpha+\beta+1) + 2 \gamma + 3 + (\alpha+1)(\beta \gamma + 2 \beta + 2 \gamma + 3)\\ & =& \alpha \beta \gamma + 2\alpha \beta + 3 \alpha \gamma + 2 \beta\gamma + 5\alpha + 4\beta + 5\gamma + 8. \end{eqnarray*} It immediately follows that $q \geq u_{\alpha+1}+u_{\alpha+2}+v_{\alpha+1}+v_{\alpha+2}$. (3) We have \begin{displaymath} \begin{array}{l} u_{\alpha}=|[\underbrace{2,\ldots,2}_{\alpha-1}]|=\alpha, v_{\alpha}=|[4,\underbrace{2,\ldots,2}_{\beta}]|= 3\beta + 4 \\ u_{\alpha+1} = |[\underbrace{2,\ldots,2}_{\alpha}]|=\alpha+1, v_{\alpha+1}= |[\underbrace{2,\ldots,2}_{\beta}]|=\beta+1. \end{array} \end{displaymath} Then, $u_{\alpha}+u_{\alpha+1}+v_{\alpha}+v_{\alpha+1}=2\alpha + 4\beta + 6$. Using Lemma \ref{uv} (4), we obtain $$q = |[\underbrace{2,\ldots,2}_{\alpha},3,\underbrace{2,\ldots,2}_{\beta}]| + u_{\alpha+1}v_{\alpha+1}=2\alpha \beta + 3\alpha + 3 \beta + 4.$$ Then, we have $q - (u_{\alpha}+u_{\alpha+1}+v_{\alpha}+v_{\alpha+1}) = 2\alpha \beta+\alpha - \beta - 2$. If $2 \alpha \beta + \alpha - \beta -2 <0$, then $( \alpha, \beta )=(0,0), (1,0)$. These cases are exactly $[2,4]$ and $[4]$, and we can easily check that there is no redundant point on $S$ (see Table \ref{A.A}). Otherwise, we have $q \geq (u_{\alpha}+u_{\alpha+1}+v_{\alpha}+v_{\alpha+1})$. \end{proof} \begin{table}[ht] \caption{}\label{A.A} \renewcommand\arraystretch{1.5} \noindent\[ \begin{array}{|l|l|l|l|l|} \hline \text{singularities} & [2,2,3,2] & [2,3,2] & [2,4] & [n] \\ \hline \text{discrepancies} &\left( \frac{2}{11}, \frac{4}{11}, \frac{6}{11}, \frac{3}{11} \right) & \left( \frac{1}{4},\frac{1}{2}, \frac{1}{4} \right) &\left( \frac{2}{7}, \frac{4}{7} \right) & \left( \frac{n-2}{n} \right) (n \geq 2)\\ \hline \end{array} \] \end{table} \begin{lemma}\label{A2} Suppose that for $[n_1, \ldots,n_j,\ldots,n_k,\ldots n_l]$, there are integers $j$ and $k$ with $1 \leq j \leq k \leq l-1$ such that $n_j \geq 3$ and $u_j + u_{j+1} +v_{j}+v_{j+1} \leq q$. Let $$[n_1', \ldots,n_j',\ldots,n'_{k-1},n_k',n'_{k+1},\ldots , n_l']:=[n_1, \ldots,n_j,\ldots,n_{k-1},n_k+1,n_{k+1} , \ldots , n_l].$$ Then, we have $u_j' + u_{j+1}' +v_{j}'+v_{j+1}' \leq q'$. \end{lemma} \begin{proof} By Lemma \ref{uv}, $q'=q+u_kv_k$. 
It suffices to show that $u_j' + u_{j+1}' +v_{j}'+v_{j+1}' \leq q+u_kv_k$. In order to calculate $u_j', u_{j+1}', v_{j}'$, and $v_{j+1}'$, we divide it into three cases.\\[5pt] Case 1: $j+2 \leq k$. We have $u_{j}' = u_j$ and $u_{j+1}'= u_{j+1}$. Moreover, we have \begin{displaymath} \begin{array}{l} v_{j}'=|[n_{j+1},\ldots,n_{k-1},n_{k}+1,n_{k+1},\ldots, n_l]| = v_j + v_k|[n_{j+1},\ldots, n_{k-1}]| \text{ and} \\ v_{j+1}'=|[n_{j+2},\ldots,n_{k-1},n_{k}+1,n_{k+1},\ldots, n_l]| = v_{j+1} + v_k|[n_{j+2},\ldots, n_{k-1}]| \end{array} \end{displaymath} Then, $u_j' + u_{j+1}' +v_{j}'+v_{j+1}' \leq q'$ is equivalent to \begin{displaymath} \begin{array}{l} u_j + u_{j+1} +v_{j}+v_{j+1} +v_k(|[n_{j+1},\ldots, n_{k-1}]|+|[n_{j+2},\ldots, n_{k-1}]|)\\ \leq q + u_k v_k = q+v_k |[n_{j},\ldots, n_{k-1}]|. \\ \end{array} \end{displaymath} The above inequality always holds by the assumption and Lemma \ref{uv} (5).\\[5pt] Case 2: $j+1 = k$. We have $u_{j}' =u_j$ and $u_{j+1}'=u_{j+1}$. Moreover, we have $$v_{j}'=|[n_{j+1}+1,n_{j+2},\ldots, n_l]| = v_{j} + v_{j+1} \text{ and } v_{j+1}'=|[n_{j+2},\ldots, n_l]| = v_{j+1}.$$ Since $v_{j+1} \leq u_{j+1}v_{j+1}$, we obtain $ u_j' + u_{j+1}' +v_{j}'+v_{j+1}' \leq q + u_{j+1}v_{j+1}$.\\[5pt] Case 3: $j=k$. We have $u_{j}'=u_j$ and $u_{j+1}'=|[n_1,\ldots,n_j +1]|=u_{j+1}+u_j$. Moreover, we also have $v_{j}'= v_{j}$ and $v_{j+1}'= v_{j+1}$. As in Case 2, we obtain $ u_j' + u_{j+1}' +v_{j}'+v_{j+1}' \leq q + u_{j+1}v_{j+1}$. \end{proof} \begin{proof}[Proof of Proposition \ref{A}] First, we introduce notations for simplicity. Fix a natural number $l$. For integers $n_1, \ldots, n_l \geq 2$ and $n_1', \ldots, n_l' \geq 2$, we write $[n_1, \ldots, n_l] < [n_1', \ldots, n_l']$ if and only if $n_i \leq n_i'$ for all $1 \leq i \leq l$ and $n_j < n_j'$ for some $1 \leq j \leq l$. Suppose that $s \in \bar{S}$ is the singularity $[n_1,\ldots,n_l]$ other than $[\underbrace{2,\ldots,2}_{\alpha}] (\alpha \geq 1)$, $[\underbrace{2,\ldots,2}_{\alpha},3] (\alpha \geq 1)$, $[2,2,3,2]$, $[2,3,2]$, $[2,4]$, or $[n]$ for $n \geq 3$, in which cases $S$ has no redundant point. Then, $[n_1,\ldots,n_l]$ is greater than or equal to $[\underbrace{2,\ldots,2}_{\alpha},3, \underbrace{2,\ldots,2}_{\beta}] (\alpha$$ \geq \beta \geq 1)$ (not equal to $[2,2,3,2]$ and $[2,3,2]$), $[\underbrace{2,\ldots,2}_{\alpha},3, \underbrace{2,\ldots,2}_{\beta}, 3, \underbrace{2,\ldots,2}_{\gamma}]$ $ (\alpha \geq 0, \beta \geq 1 \text{ and } \gamma \geq 0)$, $[\underbrace{2,\ldots,2}_{\alpha},4, \underbrace{2,\ldots,2}_{\beta}] $ $(\alpha \geq \beta \geq 0)$ (not equal to $[2,4]$ and $[4]$), $[2,2,3,3]$, $[2,3,3,2]$, $[2,3,3]$, $[3,3]$, or $[2,5]$, in which cases $S$ has a redundant point thanks to Lemma \ref{A1} and Table \ref{A.A.A}. Thus, by Lemma \ref{A2}, $[n_1,\ldots,n_l]$ has a redundant point. 
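For instance (a concrete check of this last step), $[2,3,4] > [2,3,3]$, and for $[2,3,4]$ one computes $q=18$, $(u_2,u_3)=(2,5)$ and $(v_2,v_3)=(4,1)$, so that
$$u_2+u_3+v_2+v_3 = 12 \leq 18 = q;$$
by Lemma \ref{TFAE}, the intersection point of $E_2$ and $E_3$ is a redundant point, as guaranteed by Lemma \ref{A2}.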
\end{proof} \begin{table}[ht] \caption{}\label{A.A.A} \renewcommand\arraystretch{1.5} \noindent\[ \begin{array}{|l|l|l|l|l|l|} \hline \text{singularities} & [2,2,3,3] & [2,3,3,2] & [2,3,3] & [3,3] & [2,5] \\ \hline \text{discrepancies} &\left( \frac{2}{9}, \frac{4}{9}, \frac{6}{9}, \frac{5}{9} \right) & \left( \frac{1}{3},\frac{2}{3}, \frac{2}{3}, \frac{1}{3} \right) &\left( \frac{4}{13}, \frac{8}{13}, \frac{7}{13} \right) & \left( \frac{1}{2}, \frac{1}{2} \right) & \left( \frac{1}{3}, \frac{2}{3} \right)\\ \hline \end{array} \] \end{table} \subsection{$D_{q,q_1}$-type} The dual graph of a log terminal singularity $s \in \bar{S}$ of $D_{q,q_1}$-type is \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-4.3,1.1) rectangle (7.28,3.8); \draw (0.22,2.78)-- (0.22,1.78); \draw (-0.78,1.78)-- (0.22,1.78); \draw (0.22,1.78)-- (1.22,1.78); \draw (2.52,1.78)-- (3.52,1.78); \draw (0.22,3.28) node[anchor=north west] {$E_{l+2}$}; \draw (-1.12,2.3) node[anchor=north west] {$E_{l+1}$}; \draw (0.32,2.3) node[anchor=north west] {$E_0$}; \draw (1.12,2.3) node[anchor=north west] {$E_1$}; \draw (2.3,2.3) node[anchor=north west] {$E_{l-1}$}; \draw (3.32,2.3) node[anchor=north west] {$E_l$}; \draw (0.25,2.8) node[anchor=north west] {$-2$}; \draw (-1.04,1.65) node[anchor=north west] {$-2$}; \draw (0,1.65) node[anchor=north west] {$-b$}; \draw (0.96,1.65) node[anchor=north west] {$-n_1$}; \draw (2.2,1.65) node[anchor=north west] {$-n_{l-1}$}; \draw (3.3,1.65) node[anchor=north west] {$-n_l$}; \draw (1.56,1.92) node[anchor=north west] {$\cdots$}; \begin{scriptsize} \fill [color=black] (-0.78,1.78) circle (2.5pt); \fill [color=black] (0.22,1.78) circle (2.5pt); \fill [color=black] (0.22,2.78) circle (2.5pt); \fill [color=black] (1.22,1.78) circle (2.5pt); \fill [color=black] (2.52,1.78) circle (2.5pt); \fill [color=black] (3.52,1.78) circle (2.5pt); \end{scriptsize} \end{tikzpicture}\\ where $\frac{q}{q_1} = [n_1, \ldots, n_l]$, and $b \geq 2$ and $n_i \geq 2$ are integers for all $1 \leq i \leq l$. The matrix equation for each discrepancy $a_i$ is given by \begin{displaymath} \left( \begin{array}{cccccccc} -2 & 0 & 1 & 0 & 0 &0& \cdots & 0 \\ 0 & -2 & 1 & 0 & 0 &0& \cdots & 0 \\ 1 & 1 & -b & 1 & 0 &0& \cdots &0 \\ 0 & 0 & 1 & -n_1 & 1 &0& \cdots & 0 \\ 0 & 0 & 0 & 1 & -n_2 & 1 &\cdots & 0 \\ 0 & 0 & 0 & 0 & 1 & -n_3 &\cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \cdots & 1 & -n_l \\ \end{array} \right) \left( \begin{array}{c} a_{l+1}\\ a_{l+2}\\ a_0\\ a_1\\ a_2\\ a_3\\ \vdots\\ a_l\\ \end{array} \right) =-\left( \begin{array}{c} 0\\ 0\\ b-2\\ n_1-2\\ n_2-2\\ n_3-2\\ \vdots\\ n_l-2\\ \end{array} \right). \end{displaymath} \begin{lemma}[{\cite[Lemma 3.7]{HK11}}]\label{lemD} We have the following. \begin{enumerate} \item $a_0 = 1 - \frac{1}{(b-1)q-q_1}$ and $a_{l+1}=a_{l+2}= \frac{1}{2}a_0$. \item $a_1 = 1 - \frac{b-1}{(b-1)q-q_1}$. \item $a_l = 1 - \frac{(b-1)q_l - q_{1,l}}{(b-1)q - q_1}$ for $l \geq 2$. \end{enumerate} \end{lemma} \begin{proposition}\label{D} Let $s \in \bar{S}$ be a log terminal singularity of $D_{q,q_1}$-type. Then, we have the following four cases. \begin{enumerate} \item If $b \geq 3$, then $a_0 + a_{l+1} \geq 1$. \item If $b = 2$ and $q \geq q_1 + 3$, then $a_0 + a_{l+1} \geq 1$. \item If $b = 2$ and $q = q_1 + 2$, then $a_0=a_1=\frac{1}{2}$. \item If $b = 2$ and $q = q_1 + 1$, then $s$ is a canonical singularity. 
\end{enumerate} In particular, the minimal resolution $S$ of $\bar{S}$ always has a redundant point, unless $s$ is a canonical singularity. \end{proposition} \begin{proof} Using Lemma \ref{lemD}, we get $$a_0 + a_{l+1} = \frac{3}{2}a_0 = \frac{3}{2}\left(1- \frac{1}{(b-1)q-q_1} \right) \geq 1$$ which is equivalent to $$(b-1)q-q_1 \geq 3.$$ If $b \geq 3$, then $$(b-1)q-q_1 \geq 2q-q_1 > q \geq 2.$$ Thus, $(b-1)q-q_1 \geq 3$, that is, $a_0 + a_{l+1} \geq 1$, which proves (1). Suppose that $b=2$. If $q-q_1 \geq 3$, then it is still true that $a_0 + a_{l+1} \geq 1$. This proves (2). Now, we assume that $b=2$ and $q=q_1+2$. The condition $q=q_1+2$ is equivalent to $(n_1 - 1)q_1 = q_{1,2} + 2$. By simple calculation, we conclude that $n_1=2$ and $q_1 = q_{1,2} + 2$. By induction, $$[n_1, \ldots, n_{l-1}, n_l] = [2, \ldots, 2, 3].$$ Using Lemma \ref{lemD}, we have $a_0=a_1=\frac{1}{2}$, which proves (3). Finally, we assume that $b=2$ and $q=q_1 + 1$. As in the proof of (3), we obtain $$[n_1, \ldots, n_l] = [2, \ldots, 2, 2].$$ Thus, $y$ is a canonical singularity of type $D_{l+3}$, which proves (4). \end{proof} \subsection{$T_m$, $O_m$, and $I_m$-types} We use the notation $\langle b;q,q_1 ; q',q_1' \rangle$ to refer to the following dual graph \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-4.3,0.0) rectangle (7.28,4.7); \draw (-0.16,4.02)-- (-0.16,3.02); \draw (-0.16,1.62)-- (-0.16,0.62); \draw (-1.16,0.62)-- (-0.16,0.62); \draw (-0.16,0.62)-- (0.84,0.62); \draw (2.24,0.62)-- (3.24,0.62); \draw (-0.32,2.78) node[anchor=north west] {$\vdots$}; \draw (1.2,0.78) node[anchor=north west] {$\cdots$}; \draw (-1,4.18) node[anchor=north west] {$E_l$}; \draw (-1.1,3.18) node[anchor=north west] {$E_{l-1}$}; \draw (-1.1,1.74) node[anchor=north west] {$E_{k+1}$}; \draw (-1.55,1.2) node[anchor=north west] {$E_1$}; \draw (-1.4,0.55) node[anchor=north west] {$-2$}; \draw (-0.1,1.2) node[anchor=north west] {$E_0$}; \draw (0.68,1.2) node[anchor=north west] {$E_2$}; \draw (2,1.2) node[anchor=north west] {$E_{k-1}$}; \draw (3.12,1.2) node[anchor=north west] {$E_k$}; \draw (-0.3,0.55) node[anchor=north west] {$-b$}; \draw (0.66,0.55) node[anchor=north west] {$-n_2$}; \draw (2,0.55) node[anchor=north west] {$-n_{k-1}$}; \draw (3.16,0.55) node[anchor=north west] {$-n_k$}; \draw (0.0,4.2) node[anchor=north west] {$-n_l$}; \draw (0.0,3.16) node[anchor=north west] {$-n_{l-1}$}; \draw (0.0,1.74) node[anchor=north west] {$-n_{k+1}$}; \begin{scriptsize} \fill [color=black] (-1.16,0.62) circle (2.5pt); \fill [color=black] (-0.16,0.62) circle (2.5pt); \fill [color=black] (0.84,0.62) circle (2.5pt); \fill [color=black] (-0.16,1.62) circle (2.5pt); \fill [color=black] (-0.16,3.02) circle (2.5pt); \fill [color=black] (-0.16,4.02) circle (2.5pt); \fill [color=black] (2.24,0.62) circle (2.5pt); \fill [color=black] (3.24,0.62) circle (2.5pt); \end{scriptsize} \end{tikzpicture}\\ where $b \geq 2$ and $n_i \geq 2$ are integers for all $2 \leq i \leq l$, and $\frac{q}{q_1}=[n_2, \ldots, n_k]$ and $\frac{q'}{q_1'}=[n_{k+1},\ldots,n_l]$ are the Hirzebruch-Jung continued fractions. Each dual graph of a log terminal singularity $s \in \bar{S}$ of $T_m$, $I_m$, or $O_m$-types is one of three items from the top, eight items from the bottom, or the remaining items in Table \ref{T.T}, respectively. For example, consider $\langle b;3,1;4,3\rangle$, which is of $O_m$-type. 
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-4.34,0.4) rectangle (7.24,3.8); \draw (0.18,2.96)-- (0.18,1.96); \draw (0.18,0.96)-- (0.18,1.96); \draw (1.18,1.96)-- (0.18,1.96); \draw (1.18,1.96)-- (2.18,1.96); \draw (2.18,1.96)-- (3.18,1.96); \draw (-0.6,3.16) node[anchor=north west] {$E_1$}; \draw (-0.6,2.1) node[anchor=north west] {$E_0$}; \draw (-0.6,1.2) node[anchor=north west] {$E_2$}; \draw (0.2,3.12) node[anchor=north west] {$-2$}; \draw (0.23,1.9) node[anchor=north west] {$-b$}; \draw (0.2,1.2) node[anchor=north west] {$-3$}; \draw (0.97,1.9) node[anchor=north west] {$-2$}; \draw (1.92,1.9) node[anchor=north west] {$-2$}; \draw (2.89,1.9) node[anchor=north west] {$-2$}; \draw (1,2.5) node[anchor=north west] {$E_3$}; \draw (1.94,2.48) node[anchor=north west] {$E_4$}; \draw (2.92,2.48) node[anchor=north west] {$E_5$}; \begin{scriptsize} \fill [color=black] (0.18,2.96) circle (2.5pt); \fill [color=black] (0.18,1.96) circle (2.5pt); \fill [color=black] (0.18,0.96) circle (2.5pt); \fill [color=black] (1.18,1.96) circle (2.5pt); \fill [color=black] (2.18,1.96) circle (2.5pt); \fill [color=black] (3.18,1.96) circle (2.5pt); \end{scriptsize} \end{tikzpicture}\\ The matrix equation for discrepancies $a_i$ is given by \begin{displaymath} \left( \begin{array}{cccccc} -b & 1 & 1 & 1 & 0 & 0 \\ 1 &-2 & 0 & 0 & 0 & 0 \\ 1 & 0 &-3 & 0 & 0 & 0\\ 1 & 0 & 0 &-2 & 1 & 0 \\ 0 & 0 & 0 & 1 &-2 & 1 \\ 0 & 0 & 0 & 0 & 1 &-2 \end{array} \right) \left( \begin{array}{c} a_0 \\ a_1 \\a_2 \\ a_3 \\a_4 \\a_5 \end{array} \right) = \left( \begin{array}{c} 2-b \\ 0 \\ -1 \\ 0 \\ 0 \\0 \end{array} \right). \end{displaymath} It is easy to see that the solution $(a_0, a_1, a_2, a_3, a_4, a_5)$ is $$\left( \frac{12b-20}{12b-19} , \frac{6b-10}{12b-19} , \frac{8b-13}{12b-19} , \frac{9b-15}{12b-19} , \frac{6b-10}{12b-19} , \frac{3b-5}{12b-19} \right). $$ Since $b \geq 2$, we have $12b-19>0$, and the inequality $$ \frac{12b-20}{12b-19} + \frac{6b-10}{12b-19} = \frac{18b-30}{12b-19} \geq 1$$ is equivalent to $6b \geq 11$, which clearly holds; hence $a_0 + a_1 \geq 1$. Similarly, we calculate all discrepancies for 15 cases. For the reader's convenience, we list discrepancies $(a_0, a_1, \ldots, a_l)$ of all possible cases in Table \ref{T.T}. It is easy to check that $a_0 + a_1 \geq 1$ for all cases provided that $b \geq 2$ and $s$ is not a canonical singularity. Thus, we obtain the following. \begin{proposition}\label{TOI} If $s \in \bar{S}$ is a log terminal singularity of one of $T_m$, $O_m$, and $I_m$-types, and $s$ is not a canonical singularity, then $a_0 + a_1 \geq 1$. 
\end{proposition} \begin{table}[ht] \caption{}\label{T.T} \renewcommand\arraystretch{1.5} \noindent\[ \begin{array}{|l|l|} \hline \hbox{singularities} & \hbox{discrepancies } (a_0, a_1, \ldots, a_l)\\ \hline \langle b;3,1;3,1\rangle & \left( \frac{6b-8}{6b-7} , \frac{3b-4}{6b-7} , \frac{4b-5}{6b-7} , \frac{4b-5}{6b-7} \right)\\ \hline \langle b;3,1;3,2\rangle & \left( \frac{6b-10}{6b-9} , \frac{3b-5}{6b-9}, \frac{12b-19}{18b-27} , \frac{12b-20}{18b-27} , \frac{6b-10)}{18b-27)}\right) \\ \hline \langle b;3,2;3,2\rangle & \left( \frac{6b-12}{6b-11} , \frac{3b-6}{6b-11}, \frac{4b-8}{6b-11} , \frac{2b-4}{6b-11} , \frac{4b-8}{6b-11} , \frac{2b-4}{6b-11} \right)\\ \hline \langle b;3,2;4,3\rangle & \left( \frac{12b-24}{12b-23} , \frac{6b-12}{12b-23} , \frac{8b-16}{12b-23} , \frac{4b-8}{12b-23}, \frac{9b-18}{12b-23}, \frac{6b-12}{12b-23}, \frac{3b-6}{12b-23} \right) \\ \hline \langle b;3,1;4,3\rangle & \left( \frac{12b-20}{12b-19} , \frac{6b-10}{12b-19} , \frac{8b-13}{12b-19} , \frac{9b-15}{12b-19} , \frac{6b-10}{12b-19}, \frac{3b-5}{12b-19} \right) \\ \hline \langle b;3,2;4,1\rangle & \left( \frac{12b-18}{12b-17} , \frac{6b-9}{12b-17} , \frac{8b-12}{12b-17} , \frac{4b-6}{12b-17} , \frac{9b-13}{12b-17} \right) \\ \hline \langle b;3,1;4,1\rangle & \left( \frac{12b-14}{12b-13} , \frac{6b-7}{12b-13} , \frac{8b-9}{12b-13} , \frac{9b-10}{12b-13} \right) \\ \hline \langle b;3,2;5,4\rangle & \left( \frac{30b-60}{30b-59} , \frac{15b-30}{30b-59} , \frac{20b-40}{30b-59} , \frac{10b-20}{30b-59} , \frac{24b-48}{30b-59}, \frac{18b-36}{30b-59} , \frac{12b-24}{30b-59} , \frac{6b-12}{30b-59} \right) \\ \hline \langle b;3,2;5,3\rangle & \left(\frac{30b-54}{30b-53} ,\frac{15b-27}{30b-53} , \frac{20b-36}{30b-53} , \frac{10b-18}{30b-53} , \frac{24b-43}{30b-53} , \frac{18b-32}{30b-53} \right) \\ \hline \langle b;3,1;5,4\rangle & \left( \frac{30b-50}{30b-49} , \frac{15b-25}{30b-49} , \frac{20b-33}{30b-49} , \frac{24b-40}{30b-49} , \frac{18b-30}{30b-49} , \frac{12b-20}{30b-49} , \frac{6b-10}{30b-49} \right) \\ \hline \langle b;3,2;5,2\rangle & \left( \frac{30b-48}{30b-47} , \frac{15b-24}{30b-47} , \frac{20b-32}{30b-47} , \frac{10b-16}{30b-47} , \frac{24b-38}{30b-47} , \frac{12b-19}{30b-47} \right) \\ \hline \langle b;3,1;5,3\rangle & \left(\frac{30b-44}{30b-43} , \frac{15b-22}{30b-43} , \frac{20b-29}{30b-43} , \frac{24b-35}{30b-43} , \frac{18b-26}{30b-43} \right) \\ \hline \langle b;3,2;5,1\rangle & \left( \frac{30b-42}{30b-41} , \frac{15b-21}{30b-41} , \frac{20b-28}{30b-41} , \frac{10b-14}{30b-41} , \frac{24b-33}{30b-41} \right) \\ \hline \langle b;3,1;5,2\rangle & \left( \frac{30b-38}{30b-37} , \frac{15b-19}{30b-37} , \frac{20b-25}{30b-37} , \frac{24b-30}{30b-37} , \frac{12b-15}{30b-37} \right) \\ \hline \langle b;3,1;5,1\rangle & \left( \frac{30b-32}{30b-31} , \frac{15b-16}{30b-31} , \frac{20b-21}{30b-31} , \frac{24b-25}{30b-31} \right) \\ \hline \end{array} \] \end{table} We are ready to prove Theorem \ref{reddisc}. \begin{proof}[Proof of Theorem \ref{reddisc}] It is a consequence of Lemmas \ref{rtsing}, \ref{can}, Propositions \ref{A}, \ref{D}, and \ref{TOI}. \end{proof} \section{Examples of redundant blow-ups}\label{exredsec} In this section, we construct various examples of redundant blow-ups. In particular, we prove Theorem \ref{nomin}, i.e., we answer Question \ref{tvvq}. \subsection{Geometry of big anticanonical surfaces}\label{prelimbigsursubsec} In this subsection, we briefly review some basic properties of big anticanonical surfaces. 
A smooth projective surface $S$ is called a \emph{big anticanonical surface} if the anticanonical divisor $-K_S$ is big. Although not every anticanonical ring $R(-K_S):=\bigoplus_{m \geq 0} H^0 (\mathcal{O}_S(-mK_S))$ of a big anticanonical surface $S$ is finitely generated (\cite[Lemma 14.39]{B01}), we can always define the anticanonical model of $S$ using the Zariski decomposition $-K_S = P+N$. The morphism $f \colon S \rightarrow \bar{S}$ which contracts all curves $C$ such that $C.P=0$ is called the \emph{anticanonical morphism}, and $\bar{S}$ is called the \emph{anticanonical model} of $S$. Then, $\bar{S}$ is a normal projective surface (\cite[14.32]{B01}). Note that the anticanonical ring $R(-K_S)$ of a big anticanonical \emph{rational} surface $S$ is always finitely generated, and the anticanonical model $\bar{S}=\operatorname{Proj} R(-K_S)$ is a del Pezzo surface, i.e., $-K_{\bar{S}}$ is an ample $\mathbb{Q}$-Cartier divisor. In this case, $\bar{S}$ contains rational singularities by \cite[Theorem 4.3]{Sak84}. Furthermore, we have the following. \begin{proposition}[{\cite[Proposition 4.2]{Sak84}}]\label{redbigratsurf} Let $S$ be a big anticanonical rational surface. Then, there is a sequence of redundant blow-ups $f \colon S \rightarrow S_0$, where $S_0$ is the minimal resolution of the anticanonical model of $S$. \end{proposition} \subsection{Redundant blow-ups of big anticanonical rational surfaces}\label{exredsubsec} In this subsection, we give an explicit construction of big anticanonical rational surfaces containing redundant points. Before giving constructions, we explain how these examples give a negative answer to Question \ref{tvvq}. Let $S$ be a big anticanonical rational surface admitting a redundant blow-up $f \colon \widetilde{S} \to S$. By Lemma \ref{redlem}, the anticanonical models $\bar{S}$ of $\widetilde{S}$ and $S$ are the same. Suppose that $\widetilde{S}$ is a minimal resolution of a del Pezzo surface. Then, this minimal resolution is nothing but the anticanonical morphism to $\bar{S}$. However, in view of Proposition \ref{redbigratsurf}, $\widetilde{S}$ cannot be the minimal resolution of $\bar{S}$ because the anticanonical morphism $\widetilde{S} \to \bar{S}$ factors through $S$. Thus, the existence of redundant points on some big anticanonical rational surface gives a negative answer to Question \ref{tvvq}. \begin{remark} Let $S$ be a big anticanonical rational surface, and let $f \colon \widetilde{S} \rightarrow S$ be a blow-up at $p \in S$. Even though $f$ is not a redundant blow-up, $\widetilde{S}$ might be a big anticanonical rational surface. However, in this case, the anticanonical model of $\widetilde{S}$ is different from that of $S$. \end{remark} In the following examples, we construct big anticanonical rational surfaces whose anticanonical models contain only log terminal singularities (Example \ref{ltex}) or contain at least one rational singularity that is not a log terminal singularity (Example \ref{ctex2}). \begin{example}\label{ltex} Let $\pi \colon S(m,n) \rightarrow \mathbb{P}^2$ be the blow-up of $\mathbb{P}^2$ at $p_1^1, \ldots, p_m^1$ on a line $\overline{l}_1$ and $p_1^2, \ldots, p_n^2$ on a different line $\overline{l}_2$ for $m \geq n \geq 4$. Assume that all the chosen points are distinct and away from the intersection point of $\overline{l}_1$ and $\overline{l}_2$. Let $l_i$ be the strict transform of $\overline{l}_i$, $E_j^i$ be the exceptional divisor over $p_j^i$, and $l$ be the pull-back of a general line in $\mathbb{P}^2$. 
Then, the anticanonical divisor of $S(m,n)$ is given by \[-K_{S(m,n)} = \pi^{*}(-K_{\mathbb{P}^2}) - \sum_{i,j}E_j^i = l+l_1 + l_2.\] Let $-K_{S(m,n)} = P+N$ be the Zariski decomposition. Then, we have $$N=\frac{mn-m-2n}{mn-m-n}l_1 + \frac{mn-2m-n}{mn-m-n}l_2.$$ It is easy to see that $S(m,n)$ is a big anticanonical rational surface. By contracting curves $l_1$ and $l_2$ on $S(m,n)$, we obtain the anticanonical model of $S(m,n)$ which contains only log terminal singularities. Now, the intersection point $p$ of $l_1$ and $l_2$ is a redundant point because $$ \operatorname{mult}_p N=\frac{mn-m-2n}{mn-m-n} + \frac{mn-2m-n}{mn-m-n} \geq 1 .$$ Thus, by blowing up at $p$, we obtain a redundant blow-up $f \colon \widetilde{S}(m,n) \rightarrow S(m,n)$. The Picard number of $\widetilde{S}(m,n)$ is $m+n+2$, and hence, it supports Theorem \ref{nomin}. \end{example} Singularities on log del Pezzo surfaces are classified in some cases (see e.g., \cite{AN}, \cite{Na}, \cite{K}). Thus, we can find redundant points on the minimal resolution of log del Pezzo surfaces. \begin{example}\label{ctex2} Let $h \colon \mathbb{F}_n \rightarrow \mathbb{P}^1$ be the Hirzebruch surface with a section $\overline{\sigma}$ of self-intersection $-n \leq -2$. Let $k$ be an integer such that $3 \leq k \leq n+1$ and let $a_1 , \ldots, a_k$ be positive integers such that $\sum_{j=1}^{k} \frac{1}{a_j} < k-2$. Choose $k$ distinct fibers $\overline{F_1}, \ldots, \overline{F_k}$ of $h$, and choose $a_i$ distinct points $p_1^{i}, \ldots, p_{a_i}^i$ on $\overline{F_i} \backslash \overline{\sigma}$. Let $S$ be the blow-up of $\mathbb{F}_n$ at $p_j^i$ where $1 \leq i \leq k, 1 \leq j \leq a_i$. Let $\sigma$ be the strict transform of $\overline{\sigma}$, and let $F_i$ be the strict transform of $\overline{F_i}$. Denote the pull-back of a general fiber of $\mathbb{F}_n$ by $F$. Then, we have the Zariski decomposition of the anticanonical divisor $-K_{S}=P+N$ as follows: \[P=\frac{n+2-k}{n-\sum{\frac{1}{a_j}}} \sigma + (n+2-k)F + \sum_{i=1}^{k}{\frac{n+2-k}{a_i(n-\sum{\frac{1}{a_j}})} F_i} ,\] \[N= \left( 2-\frac{n+2-k}{n-\sum\frac{1}{a_j}} \right)\sigma + \sum_{i=1}^k \left(1- \frac{n+2-k}{a_i (n-\sum \frac{1}{a_j})} \right)F_i.\] Thus, $S$ is a big anticanonical rational surface, and every point $p$ in $\sigma$ is a redundant point because $$\operatorname{mult}_{p}N \geq 2-\frac{n+2-k}{n-\sum\frac{1}{a_j}} >1.$$ The construction of $S$ appeared in Section 3 of \cite{TVV10}. \end{example} \subsection{Redundant blow-ups of other surfaces} Throughout this subsection, we simply assume that $k=\mathbb{C}$. Here, we construct smooth projective surfaces with $\kappa(-K)=2,1,$ or $0$ containing redundant points. \begin{example} Let $C$ be a smooth projective curve of genus $g \geq 1$, and let $A$ be a divisor of degree $e > 2g-2$ on $C$. Consider the ruled surface $X:=\mathbb{P}(\mathcal{O}_C \oplus \mathcal{O}_C(-A))$ with the ruling $\pi \colon X \to C$. Then, $-K_X$ is big. It is easy to see that the negative part of the Zariski decomposition $-K_X = P+N$ is given by $N=\left(1+\frac{2g-2}{e} \right)C_0$, where $C_0$ is a section of $\pi$ corresponding to the canonical projection $\mathcal{O}_C(-A) \oplus \mathcal{O}_C \to \mathcal{O}_C(-A)$. Thus, every point on $C_0$ is redundant. \end{example} \begin{example}\label{x22} Let $S$ be an extremal rational elliptic surface $X_{321}$ in \cite[Theorem 4.1]{MP86}. There is a singular fiber which consists of two smooth rational curves $A$ and $B$ meeting transversally at two points $p$ and $q$. 
Let $\pi \colon \widetilde{S} \rightarrow S$ be the blow-up at $p$ with the exceptional divisor $E$. Since $\operatorname{mult}_p(A+B)=2$, by \cite[Lemma 4.4]{AL11} we obtain the Zariski decomposition $-K_{\widetilde{S}} =P+N$, where $P=\frac{1}{2}\pi^{*}(-K_S)$ and $N = \frac{1}{2}(\pi^{-1}_* A+ \pi^{-1}_* B)$. Then, $\kappa(-K_{\widetilde{S}})=1$, and the intersection point $q$ of $\pi^{-1}_* A$ and $\pi^{-1}_* B$ is redundant. \end{example} \begin{example}\label{0redex} Let $S$ be an extremal rational elliptic surface $X_{22}$ in \cite[Theorem 4.1]{MP86}. The elliptic fibration has a singular fiber $C$ which is a cuspidal rational curve, and it has a unique section $D$. Let $p$ be the intersection point of the fiber $C$ and the section $D$, and let $\pi \colon \widetilde{S} \to S$ be the blow-up at $p$ with the exceptional divisor $E$. Then, we obtain the Zariski decomposition $-K_{\widetilde{S}} = P + N$, where $P=0$ and $N = \pi^{-1}_*C$. Thus, $\kappa(-K_{\widetilde{S}})=0$, and every point on $N$ is redundant. \end{example} \end{document}
math
57,357
\begin{document} \parskip 6pt \pagenumbering{arabic} \begin{center} {\Large\bf Bijections for inversion sequences, ascent sequences and 3-nonnesting set partitions } \vskip 6mm \end{center} \begin{center} {\small Sherry H. F. Yan \\[2mm] Department of Mathematics, Zhejiang Normal University, Jinhua 321004, P.R. China \\[2mm] [email protected] \\[0pt] } \end{center} \noindent {\bf Abstract.} Set partitions avoiding $k$-crossing and $k$-nesting have been extensively studied from the aspects of both combinatorics and mathematical biology. By using the generating tree technique, the obstinate kernel method and Zeilberger's algorithm, Lin confirmed a conjecture due independently to the author and Martinez-Savage that asserts that inversion sequences with no weakly decreasing subsequence of length 3 and enhanced 3-nonnesting partitions have the same cardinality. In this paper, we provide a bijective proof of this conjecture. Our bijection also enables us to provide a new bijective proof of a conjecture posed by Duncan and Steingr\'{\i}msson, which was proved by the author via an intermediate structure of growth diagrams for $01$-fillings of Ferrers shapes. \noindent {\sc Key words}: inversion sequence, ascent sequence, pattern avoiding, 3-nonnesting set partition. \noindent {\sc AMS Mathematical Subject Classifications}: 05A05, 05A19. \section{Introduction} Set partitions avoiding $k$-crossing and $k$-nesting have been extensively studied from the aspects of both combinatorics and mathematical biology; see \cite{chen,chen3,kra} and the references therein. The objective of this paper is to provide a bijective proof of a conjecture due independently to the author \cite{Yan} and Martinez-Savage \cite{Mar}, which was recently confirmed by Lin \cite{Lin} using the generating tree technique, the obstinate kernel method \cite{melon1} and Zeilberger's algorithm \cite{Zei}. Our bijection also enables us to provide a new bijective proof of a conjecture posed by Duncan and Steingr\'{\i}msson \cite{Duncan}, which was proved by the author \cite{Yan} via an intermediate structure of growth diagrams for $01$-fillings of Ferrers shapes \cite{kra} and \cite{van}. Let us first give an overview of the notation and terminology. A sequence $x=x_1 x_2 \cdots x_n $ is said to be an {\em inversion sequence} of length $n$ if it satisfies $0\leq x_i< i$ for all $1\leq i\leq n$. Inversion sequences of length $n$ are in easy bijection with permutations of length $n$. An inversion sequence $x_1x_2\ldots x_n$ can be obtained from any permutation $\pi=\pi_1\pi_2\ldots \pi_n$ by setting $x_i=|\{j\mid j<i \,\,\mbox{ and }\,\, \pi_{j}>\pi_{i}\}|$. Given a sequence of integers $x=x_1x_2\cdots x_n$, we say that the sequence $x$ has an {\em ascent} at position $i$ if $x_i<x_{i+1}$. The number of ascents of $x$ is denoted by $\as(x)$. 
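For example (a small illustration of these definitions), the permutation $\pi=3142$ yields the inversion sequence $x=0102$: indeed, $x_2=1$ since $\pi_1=3>\pi_2=1$, $x_3=0$, and $x_4=2$ since $\pi_1,\pi_3>\pi_4$. This sequence has $\as(x)=2$, with ascents at positions $1$ and $3$.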
A sequence $x=x_1x_2\cdots x_n$ is said to be an {\em ascent sequence of length $n$} if it satisfies $x_1=0$ and $0\leq x_i\leq \as(x_1x_2\cdots x_{i-1})+1$ for all $2\leq i\leq n$. Ascent sequences were introduced by Bousquet-M\'{e}lou et al. \cite{melon} in their study of $(2+2)$-free posets; ascent sequences are closely connected to unlabeled $(2+2)$-free posets, permutations avoiding a certain pattern, and a class of involutions introduced by Stoimenow \cite{Stoi}. We call an ascent sequence with no two consecutive equal entries a {\em primitive} ascent sequence. Pattern-avoiding permutations have been extensively studied over the last decade. For a thorough summary of the current status of research, see B\'{o}na's book \cite{bona} and Kitaev's book \cite{kitaev}. Analogous to pattern avoidance on permutations, Corteel-Martinez-Savage-Weselcouch \cite{Cor} and Mansour-Shattuck \cite{man2} initiated the study of inversion sequences avoiding certain patterns. Pattern-avoiding inversion sequences are closely related to Catalan numbers, large Schr\"{o}der numbers, Euler numbers and Baxter numbers (see \cite{Cor}, \cite{Kim}, \cite{Lin} and \cite{man2}). In their paper \cite{Duncan}, Duncan and Steingr\'{\i}msson studied ascent sequences avoiding certain patterns. Further results on the enumeration of pattern-avoiding ascent sequences can be found in \cite{chen1, man,Yan}. By using the generating tree technique, the obstinate kernel method and Zeilberger's algorithm, Lin \cite{Lin} confirmed the following conjecture proposed by Martinez-Savage \cite{Mar}. \begin{conjecture}{\upshape (Martinez-Savage \cite{Mar})}\label{con} Inversion sequences of length $n$ and with no weakly decreasing subsequence of length $3$ are equinumerous with enhanced 3-nonnesting (3-noncrossing) set partitions of $[n]$. \end{conjecture} As remarked by Lin \cite{Lin}, this conjecture had already been proposed by Yan \cite{Yan} in the course of confirming the following conjecture posed by Duncan and Steingr\'{\i}msson \cite{Duncan}. \begin{conjecture}{\upshape (See \cite{Duncan}, Conjecture 3.3)}\label{Yan} Ascent sequences of length $n$ and with no decreasing subsequence of length $3$ are equinumerous with 3-nonnesting (3-noncrossing) set partitions of $[n]$. \end{conjecture} Recall that a subsequence $x_{i_1}x_{i_2}\ldots x_{i_k} $ of a sequence $x=x_1x_2\ldots x_n$ is said to be {\em decreasing} if $i_1<i_2<\ldots <i_k$ and $x_{i_1}> x_{i_2}>\ldots> x_{i_k}$ and to be {\em weakly decreasing} if $i_1<i_2<\ldots <i_k$ and $x_{i_1}\geq x_{i_2}\geq \ldots\geq x_{i_k}$. Denote by $\mathcal{A}_k(n)$ and $\mathcal{PA}_k(n)$ the sets of ordinary and primitive ascent sequences of length $n$ and with no decreasing subsequence of length $k$, respectively. Let $\mathcal{I}_k(n)$ denote the set of inversion sequences of length $n$ and with no weakly decreasing subsequence of length $k$. A set partition $P$ of $[n] = \{1,2,\cdots, n\}$ can be represented by a diagram with vertices drawn on a horizontal line in increasing order. For a block $B$ of $P$, we write the elements of $B$ in increasing order. Suppose that $B=\{i_1, i_2, \cdots, i_k\}$. Then we draw an arc from $i_1$ to $i_2$, an arc from $i_2$ to $i_3$, and so on. Such a diagram is called the {\em linear representation} of $P$; see Figure \ref{linear} for an example. The {\em enhanced} representation of $P$ is defined to be the union of the linear representation of $P$ and the set of loops $(i,i)$, where $i$ ranges over all the singleton blocks $\{i\}$ of $P$. 
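For instance (a small example of the two representations), for the partition $P=\{\{1,3\},\{2\}\}$ of $[3]$, the linear representation consists of the single arc $(1,3)$, while the enhanced representation consists of the arc $(1,3)$ together with the loop $(2,2)$. (In the terminology introduced next, these two edges form an enhanced $2$-nesting.)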
Then one defines a {\em $k$-crossing} of a set partition to be a subset $\{(i_1, j_1), (i_2, j_2), \cdots, (i_k, j_k)\}$ of its linear representation where $i_1<i_2<\cdots <i_k< j_1<j_2<\cdots<j_k$, and an {\em enhanced $k$-crossing} of a set partition to be a subset $\{(i_1, j_1), (i_2, j_2), \cdots, (i_k, j_k)\}$ of its enhanced representation where $i_1<i_2<\cdots <i_k\leq j_1<j_2<\cdots<j_k$. A partition without any (enhanced) $k$-crossings is said to be {\em (enhanced) $k$-noncrossing}. Similarly, a {\em $k$-nesting} is defined to be a subset $\{(i_1, j_1), (i_2, j_2), \cdots, (i_k, j_k)\}$ of its linear representation where $i_1<i_2<\cdots <i_k<j_k<j_{k-1}<\cdots<j_1$, and an {\em enhanced $k$-nesting} is defined to be a subset $\{(i_1, j_1), (i_2, j_2), \cdots, (i_k, j_k)\}$ of its enhanced representation where $i_1<i_2<\cdots <i_k\leq j_k<j_{k-1}<\cdots<j_1$. A set partition without any (enhanced) $k$-nestings is said to be {\em (enhanced) $k$-nonnesting}. Chen et al. \cite{chen} proved bijectively, using hesitating tableaux as an intermediate object, that (enhanced) $k$-nonnesting set partitions of $[n]$ are equinumerous with (enhanced) $k$-noncrossing set partitions of $[n]$. Denote by $\mathcal{C}_k(n)$ and $\mathcal{E}_k(n)$ the sets of ordinary and enhanced $k$-nonnesting set partitions of $[n]$, respectively. \begin{figure} \caption{ The linear representation of a set partition $\pi=\{\{1,2,3,4, 6,10\} \label{linear} \end{figure} \section{Bijective proof of Conjecture \ref{con}} In this section, we shall provide a bijective proof of Conjecture \ref{con} by showing that inversion sequences of length $n$ and with no weakly decreasing subsequence of length 3 are in bijection with enhanced 3-nonnesting partitions of $[n]$. To this end, we recall some necessary notation and terminology. A {\em triangular shape} of order $n$ is the left-justified array of ${n+1\choose 2}$ squares in which the $i$th row contains exactly $i$ squares. Let $\Delta_n$ be the triangular shape of order $n$. In a triangular shape, we number rows from top to bottom and columns from left to right and identify squares using matrix coordinates. The $i$th row (column) is called row (column) $i$. For example, the square in the second row and first column is numbered $(2, 1)$. A {\em $01$-filling} of a triangular shape $\Delta_n$ is obtained by filling the squares of $\Delta_n$ with $1's$ and $0's$; see Figure \ref{afilling} for an example, where we represent a $1$ by a $\bullet$ and suppress the $0$'s. A $01$-filling of a triangular shape is said to be {\em valid} if every row contains at most one $1$. A row (column) of a $01$-filling is said to be {\em zero} if all the squares in this row (column) are filled with $0's$. A {\em NE-chain} of a $01$-filling is a sequence of $1's$ such that any $1$ is strictly above and weakly to the right of the preceding $1$ in the sequence. For example, in Figure \ref{afilling}, the sequence of $1's$ lying in the squares $(6,3)$, $(5,4)$ and $(4,4)$ forms a NE-chain of length $3$. \begin{figure} \caption{An example of a $01$-filling of a triangular shape of order $6$. } \label{afilling} \end{figure} An inversion sequence $x_1x_2\ldots x_n$ can be encoded by a $01$-filling of $\Delta_n$ in which the square $(i,x_i+1)$ is filled with a $1$ for all $1\leq i\leq n$ and all the other squares are filled with $0's$. It is easily seen that a weakly decreasing subsequence of length $k$ corresponds to a NE-chain of length $k$. 
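For instance (a concrete illustration of this encoding), the inversion sequence $x=0102$ is encoded by placing $1$'s in the squares $(1,1)$, $(2,2)$, $(3,1)$ and $(4,3)$ of $\Delta_4$; the weakly decreasing subsequence $x_2x_3=10$ corresponds to the NE-chain formed by the $1$'s in the squares $(3,1)$ and $(2,2)$, since the latter lies strictly above and weakly to the right of the former.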
Denote by $\mathcal{M}_k(n)$ the set of $01$-fillings of $\Delta_{n}$ with the property that every row contains exactly one $1$ and there is no NE-chain of length $k$. \begin{theorem}\label{th1} There is a one-to-one correspondence between the set $\mathcal{I}_k(n)$ and the set $\mathcal{M}_k(n)$. \end{theorem} In his paper \cite{kra}, Krattenthaler established a bijection between set partitions of $[n]$ and $01$-fillings of $\Delta_{n}$ in which every row and every column contain at most one $1$, and either column $i$ or row $i$ contains at least one $1$ for all $1\leq i\leq n$. For the sake of completeness, we give a brief description of this bijection. Given a set partition $\pi$ of $[n]$, we can get a $01$-filling of $\Delta_{n}$ by putting a $1$ in the square $(j,i)$ if $(i,j)$ is an arc in its enhanced representation, and, in addition, by putting a $1$ in the the square $(i,i)$ if $(i,i)$ is a loop in its enhanced representation. The $01$-filling corresponding to the set partition $\pi=\{\{1, 3,6 \}, \{2, 8\}, \{4\}, \{5,7,9\}\}$ is indicated in Figure \ref{filling}. From the construction of Krattenthaler's bijection, an enhanced $k$-nesting of a set partition corresponds to a NE-chain of length $k$ in its corresponding $01$-filling. Denote by $\mathcal{N}_k(n)$ the set of $01$-fillings of $\Delta_{n}$ satisfying the following properties: \begin{itemize} \item[(a1)] every row and every column contain at most one $1$; \item[(b1)] either column $i$ or row $i$ contains at least one $1$ for all $1\leq i\leq n$; \item[(c1)] there is no NE-chain of length $k$. \end{itemize} From Krattenthaler's bijection, we immediately get the following result. \begin{figure} \caption{ A set partition $\pi=\{\{1, 3,6 \} \label{filling} \end{figure} \begin{theorem}\label{th2} The $01$-fillings of the set $\mathcal{E}_k(n)$ are in bijection with the $01$-fillings of the set $\mathcal{N}_k(n)$. \end{theorem} In view of Theorems \ref{th1} and \ref{th2}, in order to prove Conjecture \ref{con}, it suffices to establish a bijection between the set $\mathcal{M}_k(n)$ and the set $\mathcal{N}_k(n)$. To this end, we define two transformations, which will play an essential role in the construction of the bijection. {\noindent \bf The transformation $\alpha$}\,\, Let $F$ be a valid $01$-filling of $\Delta_n$ without any NE-chain of length $3$. If every column of $F$ contains at most one $1$, we simply define $\alpha(F)=F$. Otherwise, find the leftmost column $i$ which contains at least two $1's$. Suppose the square $(i, j)$ is filled with a $1$ for some $1\leq j\leq i$. Assume that the $1's$ below and weakly to the left of the square $(i,i)$ are positioned at the squares $(r_1, c_1)$, $(r_2, c_2), \ldots, (r_m, c_m)$ with $r_1<r_2<\ldots <r_m$. Let $r_0=i, c_0=j$. Suppose that the topmost $1$ in column $i$ is at row $r_s$. If row $r_s+1$ contains a $1$ which is to the right of the square $(r_s, c_s)$, then define $\alpha (F)$ to be the $01$-filling of $\Delta_n$ obtained from $F$ by the following procedure: \begin{itemize} \item For all $0\leq \ell\leq s$, replace the $1$ at the square $(r_\ell, c_\ell)$ with a $0$; \item For all $0\leq \ell<s $, fill the square $(r_{\ell+1}, c_\ell)$ with a $1 $; \item Leave all the other squares fixed. 
\end{itemize} Otherwise, define $\alpha (F)$ to be the $01$-filling of $\Delta_n$ obtained from $F$ by the following procedure: \begin{itemize} \item For all $0\leq \ell\leq m$, replace the $1$ at the square $(r_\ell, c_\ell)$ with a $0 $; \item For all $0\leq \ell<m $, fill the square $(r_{\ell+1}, c_\ell)$ with a $1$; \item Leave all the other squares fixed. \end{itemize} Now we proceed to show that the transformation $\alpha$ has the following desired properties. \begin{lemma}\label{alpha0} In $\alpha(F)$, each column to the left of column $i$ contains at most one $1$ and column $i$ contains exactly one $1$. \end{lemma} \noindent {\it Proof.} It is obvious from the selection of column $i$ and the construction of the transformation $\alpha$. \rule{4pt}{7pt} \begin{lemma}\label{alpha1} The filling $\alpha(F)$ is a valid $01$-filling of $\Delta_n$ containing no NE-chain of length $3$. \end{lemma} \noindent {\it Proof.} According to the construction of the transformation $\alpha$, it is easily seen that $\alpha(F)$ is a valid $01$-filling of $\Delta_n$. Now we proceed to show that $\alpha(F)$ contains no NE-chain of length $3$. If not, suppose that the $1's$ positioned at the squares $(a_1, b_1)$, $(a_2, b_2)$ and $(a_3, b_3)$ form a NE-chain of length $3$, where $a_1<a_2<a_3$. Since $F$ has no NE-chain of length $3$, the square $(a_1, b_1)$ must be positioned below and to the right of the square $(r_s,c_s)$. Suppose that there exists a $1$ at row $r_s+1$ which is to the right of the square $(r_s, c_s)$. From the construction of $\alpha(F)$, it is easy to check that all the squares below row $r_s$ remain the same as those of $F$. This implies that there is no NE-chain of length $3$ below row $r_s$ in $\alpha(F)$. This contradicts the fact that $(a_1, b_1)$ is below row $r_s$. Hence, row $r_s+1$ does not contain a $1$ which is to the right of the square $( r_s, c_s)$. This implies that the square $(a_1,b_1)$ is below and to the right of the square $(r_{s+1}, c_s)$. Since $F$ has no NE-chain of length $3$, there is no NE-chain of length $2$ below and to the left of the square $(r_{s+1}, c_s)$ in $\alpha(F)$. This yields that both the square $(a_1, b_1 )$ and the square $(a_2, b_2)$ are positioned to the right of the square $(r_s, c_s)$. From the fact that $F$ contains no NE-chain of length $3$, we have $c_s=c_m=i$. Then the $1's$ positioned at the squares $(a_1, b_1)$, $(a_2, b_2)$ and $(r_m, c_m)$ would form a NE-chain of length $3$ in $F$, which contradicts the hypothesis. This completes the proof. \rule{4pt}{7pt} Lemma \ref{alpha0} states that the column $i$ that we find in the transformation $\alpha$ can only go right. Hence, there will be no column containing at least two $1's$ in the resulting filling after finitely many iterations of $\alpha$. Lemma \ref{alpha1} tells us that the resulting filling is a valid $01$-filling of $\Delta_n$ containing no NE-chain of length $3$. Therefore, we will get a $01$-filling in $N_3(n)$ after finitely applying many iterations of $\alpha$ to a $01$-filling $F$ in $\mathcal{M}_3(n)$. Define $\phi(F)$ to be the resulting filling. Figure \ref{filling1} illustrates an example of two iterations of $\alpha$ to a $01$-filling in $\mathcal{M}_3(9)$. {\noindent \bf The transformation $\beta$} \,\, Let $F$ be a valid filling of $\Delta_n$ which verifies property (b1) and contains no NE-chain of length $3$. If every row contains a $1$ in $F$, then we simply define $\beta(F)=F$. Otherwise, find the lowest zero row $i$. 
Suppose that the $1's$ below and weakly to the left of the square $(i,i)$ are positioned at the squares $(r_1, c_1)$, $(r_2, c_2), \ldots, (r_m, c_m)$ with $r_1<r_2<\ldots <r_m$. Assume that $r_0=i$. Suppose that the topmost $1$ at column $i$ is positioned at the square $(r_s,c_s)$. If there is at least one $1$ which is above and to the right of the square $(r_s, c_s)$, then find the topmost square, say $(p,q)$, containing such a $1$. Then we have $p=r_t+1$ for some $0\leq t\leq s-1$. Define $\beta(F)$ to be the $01$-filling of $\Delta_n$ obtained from $F$ by the following procedure: \begin{itemize} \item For all $1\leq \ell \leq t$, replace the $1$ at the square $(r_\ell, c_{\ell})$ with a $0$; \item For all $0\leq \ell\leq t$, fill the square $(r_\ell, c_{\ell+1})$ with a $1 $ with the assumption $c_{t+1}=i$; \item Leave all the other squares fixed. \end{itemize} Otherwise, define $\beta(F)$ to be the $01$-filling of $\Delta_n$ obtained from $F$ by the following procedure: \begin{itemize} \item For all $1\leq \ell \leq m$, replace the $1$ at the square $(r_\ell, c_\ell)$ with a $0 $; \item For all $0\leq \ell\leq m $, fill the square $(r_\ell, c_{\ell+1})$ with a $1$ with the assumption $c_{m+1}=i$; \item Leave all the other squares fixed. \end{itemize} Now we proceed to show that the transformation $\beta$ has the following analogous properties of $\alpha$. \begin{lemma}\label{beta0} In $\beta(F)$, every row below row $i$ (including row $i$) contains exactly one $1$. \end{lemma} \noindent {\it Proof.} It is obvious from the selection of row $i$ and the construction of the transformation $\beta$. \rule{4pt}{7pt} \begin{lemma}\label{beta1} The filling $\beta(F)$ is a valid $01$-filling of $\Delta_n$ which verifies property (b1) and contains no NE-chain of length $3$. \end{lemma} \noindent {\it Proof.} It is obvious that $\beta(F)$ is a valid $01$-filling of $\Delta_n$ which verifies property (b1). Now we proceed to show that $\beta(F)$ contains no NE-chain of length $3$. If not, suppose that the $1's$ positioned at the squares $(a_1, b_1)$, $(a_2, b_2)$ and $(a_3, b_3)$ form a NE-chain of length $3$, where $a_1<a_2<a_3$. Suppose that there is at least one $1$ which is above and to the right of the square $(r_s,c_s)$ in $F$. In this case, all the squares below row $r_t$ in $\beta(F)$ remain the same as those of $F$. Since $F$ contains no NE-chain of length $3$, we must have $(a_1, b_1)=(r_t, i)$. Hence, the $1's$ positioned at the squares $(p,q)$, $(a_2, b_2)$ and $(a_3, b_3)$ form a NE-chain of length $3$, which contradicts the hypothesis. Thus, $F$ does not contain a $1$ which is above and to the right of the square $(r_s,c_s)$. According to the construction of $\beta(F)$, one of $(a_1, b_1)$, $(a_2, b_2)$ and $(a_3, b_3)$ must fall in $(r_m,i)$. Since there is no $1$ which is below row $r_m$ and to the left of column $i$ in $\beta(F)$, we have $(a_3, b_3)=(r_m,i)$. Then the $1's$ positioned at the squares $(a_1, b_1)$, $(a_2, b_2)$ and $(r_m, i)$ would form a NE-chain of length $3$ in $F$, which contradicts the hypothesis. This completes the proof. \rule{4pt}{7pt} Lemma \ref{beta0} states that the row $i$ that we find in the transformation $\beta$ can only go upside. Hence, there will be no zero row after finitely many iterations of $\beta$. Lemma \ref{beta1} tells us that the resulting filling is a valid $01$-filling of $\Delta_n$ containing no NE-chain of length $3$. 
Hence, we will get a $01$-filling in $\mathcal{M}_3(n)$ after finitely applying many iterations of $\beta$ to a $01$-filling $F$ in $\mathcal{N}_3(n)$. Define $\psi(F)$ to be the resulting filling. \begin{theorem}\label{bijection} The maps $\phi$ and $\psi$ induce a bijection between the set $\mathcal{M}_3(n)$ and the set $\mathcal{N}_3(n)$. \end{theorem} \noindent {\it Proof.} It suffices to show that the maps $\phi$ and $\psi$ are inverses of each other. First, we proceed to show that $\phi$ is the inverse of the map $\psi$, that is, $\phi(\psi(F))=F$ for any $01$-filling $F\in \mathcal{N}_3(n)$. To this end, it suffices to show that $\alpha(\beta^k(F))=\beta^{k-1}(F)$. Suppose that at the $k$th application of $\beta$ to $\beta^{k-1}(F)$, the selected row is row $i$. Suppose that the $1's$ below and weakly to the left of the square $(i,i)$ are positioned at the squares $(r_1, c_1)$, $(r_2, c_2), \ldots, (r_m, c_m)$ with $r_1<r_2<\ldots <r_m$. Assume that $r_0=i$. Suppose that the topmost $1$ at column $i$ is positioned at the square $(r_s,c_s)$. We have two cases. If there is at least one 1 which is above and to the right of the square $(r_s,c_s)$ in $\beta^{k-1}(F)$, then find the topmost square $(p,q)$ containing such a 1. Assume that $p=r_t+1$ for some $0\leq t<s$. From the construction of the transformation $\beta$, the square $(r_\ell, c_{\ell+1})$ of $\beta^{k}(F)$ is filled with a $1$ for all $1\leq \ell\leq t $ with the assumption $c_{t+1}=i$ and all the other squares remain the same as those of $\beta^{k-1}(F)$. Clearly, in $\beta^{k}(F)$, all the columns to the left of column $i$ contains at most one $1$, and column $i$ contains exactly two $1's$. Hence, when we apply the transformation $\alpha$ to $\beta^{k}(F)$, the column that we select is just column $i$ and the topmost $1$ at column $i$ is positioned at the square $(r_t, i)$. Moreover, the $1$ positioned at the square $(p,q)$ is to the right of square $(r_t, i)$ and $p=r_t+1$. From the construction of $\alpha$, it is not dificult to check that $\alpha(\beta^{k}(F))=\beta^{k-1}(F)$. If there does not exists any 1 which is above and to the right of the square $(r_s, c_s)$ in $\beta^{k-1}(F)$, From the construction of the transformation $\beta$, the square $(r_\ell, c_{\ell+1})$ of $\beta^{k}(F)$ is filled with a $1$ for all $1\leq \ell\leq m $ with the assumption $r_{m+1}=i$ and all the other squares remain the same as those of $\beta^{k-1}(F)$. Clearly, in $\beta^{k}(F)$, all the columns to the left of column $i$ contains at most one $1$, in which column $i$ contains exactly two $1's$. Hence, when we apply the transformation $\alpha$ to $\beta^{k}(F)$, the column that we select is just column $i$ and the topmost $1$ at column $i$ is positioned at the square $(r_{s-1}, c_s)$. Notice that there does not exist any $1$ which is above and to the right of square $(r_s,c_s)$ in $\beta^{k-1}(F)$. Hence, there is no $1$ at row $r_{s-1}+1$ which is to the right of the square $(r_{s-1}, c_s)$. From the construction of $\alpha$, it is easily seen that $\alpha(\beta^{k}(F))=\beta^{k-1}(F)$. Combining the two above cases, we have deduced that $\alpha(\beta^k(F))=\beta^{k-1}(F)$. By similar arguments, one can verify that $\beta(\alpha^k(F))=\alpha^{k-1}(F)$ for any $01$-filling $F$ of $\mathcal{M}_3(n)$. The details are omitted here. Hence, the maps $\phi$ and $\psi$ are inverses of each other. Thus, the maps $\phi$ and $\psi$ induce a bijection between the set $\mathcal{M}_3(n)$ and the set $\mathcal{N}_3(n)$ as claimed. 
\rule{4pt}{7pt} \begin{figure} \caption{ An example of two iterations of $\alpha$ to a $01$-filling in $\mathcal{M} \label{filling1} \end{figure} Combining Theorems \ref{th1}, \ref{th2} and \ref{bijection}, we are led to a bijective proof of Conjecture \ref{con}. \section{Bijective proof of Conjecture \ref{Yan} } In this section, we shall give a new bijective proof of Conjecture \ref{Yan} relying on the bijection $\phi$. In the following, a $01$-filling in $\mathcal{M}_3(n)$ will be identified with a sequence $\{( 1, a_1),$ $ (2, a_2), $ $ \cdots, (n, a_n)\}$, where $1\leq a_i\leq i$ and $a_i=k$ if and only if there is a $1$ in the $i$th row and $k$th column. In the course of proving Conjecture \ref{Yan}, Yan \cite{Yan} provided a bijection $\gamma$ between the set $\mathcal{PA}_3(n+1)$ and the set $\mathcal{M}_3(n)$. Let $x=x_1x_2\cdots x_{n+1}\in \mathcal{PA}_{3}(n+1)$. Define $\gamma(x)=\{( 1, a_1), (2, a_2), $ $\cdots$ ,$(n, a_n)\}$ where $a_i=i+x_{i+1}-\as(x_1x_2\cdots x_{i+1})$ for all $i=1, 2, \cdots, n$. For example, let $x=012340415\in \mathcal{PA}_{3}(9)$. Then we have $$\gamma(x)=\{(1, 1), (2, 2), (3,3), (4,4), (5,1), (6, 5), (7, 3), (8, 7) \}. $$ The inverse of the map $\gamma$ is defined as follows. Let $F=\{( 1, a_1), (2, a_2), \cdots ,(n, a_n)\}$. Define $\gamma^{-1}(F)=(x_1,x_2,\cdots, x_{n+1})$ inductively as follows: \begin{itemize} \item $x_1=0$ and $x_2=1$; \item if $a_{i-1 }< a_{i }$, then $x_{i+1}=\as(x_1x_2\cdots x_{i})+1+a_{i}-i$ for all $2\leq i\leq n$ ; \item if $a_{i-1}\geq a_{i}$, then $x_{i+1}=\as(x_1x_2\cdots x_{i})+a_{i}-i$ for all $2\leq i\leq n$. \end{itemize} Recall that Krattenthaler \cite{kra} also established a bijection between set partitions of $[n+1]$ and $01$-fillings of $\Delta_{n}$ in which every row and every column contain at most one $1$. Given a set partition $\pi$ of $[n]$, we can get a $01$-filling of $\Delta_{n}$ by putting a $1$ in the square $(j-1,i)$ if $(i,j)$ is an arc in its linear representation. From the construction of Krattenthaler's bijection, a $k$-nesting of a set partition corresponds to a NE-chain of length $k$ in its corresponding $01$-filling. Denote by $\mathcal{P}_k(n)$ the set of $01$-fillings of $\Delta_{n}$ in which every row and every column contain at most one $1$, and there is no NE-chain of length $k$. The following result follows immediately from Krattenthaler's bijection \cite{kra}. \begin{theorem}\label{th4} There is a one-to-one correspondence between the set $\mathcal{C}_k(n+1)$ and the set $\mathcal{P}_k(n)$. \end{theorem} By Theorem \ref{th4}, in order to provide a bijection between $\mathcal{A}_3(n)$ and $C_3(n)$, it suffices to establish a bijection between the set $\mathcal{A}_3(n+1)$ and the set $\mathcal{P}_3(n)$. In a $01$-filling, if both row $i$ and column $i$ are zero, then row (column) $i$ is said to be {\em critical}. \begin{theorem}\label{mainth} There is a bijection between the set $\mathcal{A}_3(n+1)$ and the set $\mathcal{P}_3(n)$. \end{theorem} \noindent {\it Proof.} First we shall describe a map $\delta$ from the set $\mathcal{A}_3(n+1)$ to the set $\mathcal{P}_3(n)$. Let $x\in \mathcal{A}_3(n+1) $. It is apparent that the ascent sequence $x$ can be written as $x_1^{c_1} x_2^{c_2} \cdots x_{k+1}^{c_{k+1}}$, where $x_i\neq x_{i+1}$ and $c_i\geq 1$ for all $i\geq 1$. Let $x'=x_1 x_2\cdots x_{k+1}$. Obviously, $x'$ is a primitive ascent sequence in $\mathcal{PA}_3(k+1)$. Let $F=\gamma(x')$ and $F'=\phi(F)$. Clearly, we have $F\in \mathcal{M}_3(k)$ and $F'\in \mathcal{N}_3(k)$. 
Now we can generate a $01$-filling $F''$ of $\Delta_{n}$ from $F'$ by inserting $c_1-1$ consecutive zero rows immediately above row $1$ and $c_1-1$ consecutive zero columns immediately to the left of column $1$, and inserting $c_i$ consecutive zero rows immediately below row $i$ and $c_i$ consecutive zero columns immediately to the right of column $i$ for all $1\leq i\leq k$. Define $\delta(x)=F''$. It is not difficult to see that the resulting filling $F''$ is an element of $\mathcal{P}_3(n)$. This implies that the map $\delta$ is well defined.

In order to prove that $\delta$ is a bijection, we construct a map $\delta'$ from the set $\mathcal{P}_3(n)$ to the set $\mathcal{A}_3(n+1)$. Given a $01$-filling $F\in \mathcal{P}_3(n)$, we can recover an ascent sequence $\delta'(F)$ as follows. Suppose that there are $k$ non-critical rows in $F$. Let rows $i_1$, $i_2$, $\ldots$, $i_k$ be the non-critical rows of $F$. Assume that there are $c_1$ critical rows immediately above row $i_1$, and $c_{\ell+1}$ critical rows immediately below row $i_\ell$ for all $1\leq \ell\leq k$. Denote by $F'$ the $01$-filling obtained from $F$ by removing all the critical rows and columns from $F$. Moreover, let $F''=\psi(F')$ and $x=x_1x_2\ldots x_{k+1}=\gamma^{-1}(F'')$. It is easily seen that $F'\in \mathcal{N}_3(k)$, $F''\in \mathcal{M}_3(k)$ and $x\in \mathcal{PA}_3(k+1)$. Let $\delta'(F)=x_1^{c_1+1}x_2^{c_{2}}\ldots x_{k+1}^{c_{k+1}}$. It is apparent that we have $\delta'(F)\in \mathcal{A}_{3}(n+1)$. Property $(b1)$ ensures that the inserted rows and columns in the construction of $\delta$ are exactly the removed rows and columns in the construction of $\delta'$. Thus the map $\delta'$ is the inverse of the map $\delta$. This implies that $\delta$ is a bijection. \rule{4pt}{7pt}

For example, let $x=001234345664$ be an ascent sequence in $\mathcal{A}_3(13)$. Then $x$ can be written as $0^2 1^1 2^1 3^1 4^1 3^1 4^1 5^1 6^{2} 4^1$. Let $x'=0123434564$, which is an element of $\mathcal{PA}_3(10)$. By applying the map $\gamma$ to $x'$, we get a $01$-filling $$F=\gamma(x')=\{(1,1), (2,2), (3,3), (4,4), (5,4), (6,5), (7,6), (8,7), (9,6)\}\in \mathcal{M}_3(9) $$ illustrated in Figure \ref{filling2}. Then by applying the map $\phi$ to $F$, we get a $01$-filling $F'$ as shown in Figure \ref{filling2}. Finally, we obtain a $01$-filling $F''\in \mathcal{P}_3(12)$ by adding one zero row immediately above row $1$, one zero column immediately to the left of column $1$, two consecutive zero rows immediately below row $8$, and two consecutive zero columns immediately to the right of column $8$, see Figure \ref{filling2}.

\begin{figure} \caption{A $01$-filling $F\in \mathcal{M}_3(9)$ and the corresponding fillings $F'$ and $F''$.} \label{filling2} \end{figure}

Combining Theorems \ref{th4} and \ref{mainth}, we get a new bijective proof of Conjecture \ref{Yan}.

\noindent{\bf Acknowledgments.} This work was supported by the National Natural Science Foundation of China (11571320 and 11671366) and the Zhejiang Provincial Natural Science Foundation of China (LY15A010008). \end{document}
math
31,559
\begin{document} \title{Comment on: ``On the effects of the Lorentz symmetry violation yielded by a tensor field on the interaction of a scalar particle and a Coulomb-type field'' Ann. Phys. 399 (2018) 117-123} \author{Paolo Amore\thanks{ e--mail: [email protected]} \\ Facultad de Ciencias, CUICBAS, Universidad de Colima,\\ Bernal D\'{\i}az del Castillo 340, Colima, Colima,Mexico \\ and \\ Francisco M. Fern\'andez\thanks{ e--mail: [email protected]} \\ INIFTA, Divisi\'{o}n Qu\'{\i}mica Te\'{o}rica,\\ Blvd. 113 y 64 (S/N), Sucursal 4, Casilla de Correo 16,\\ 1900 La Plata, Argentina} \maketitle \begin{abstract} We analyze the eigenvalues and eigenfunctions stemming from a recent study of the interaction of a scalar particle with a Coulomb potential in the presence of a background of the violation of the Lorentz symmetry established by a tensor field. We show, beyond any doubt, that the physical conclusions drawn by the authors from a truncation of a power series, coming from the application of the Frobenius method, are meaningless and nonsensical. \end{abstract} In a recent paper Vit\'{o}ria et al\cite{VBB18} analyze the interaction of a scalar particle with a Coulomb-type potential in the presence of a background of the violation of the Lorentz symmetry established by a tensor field. The equation proposed by the authors is separable in cylindrical coordinates and the radial part is a solution to an eigenvalue equation with centrifugal-like ($r^{-2}$), Coulomb ($r^{-1}$) and harmonic ($r^{2}$) terms. The application of the Frobenius method leads to a three-term recurrence relation for the expansion coefficients and the authors force a truncation in order to obtain polynomial solutions. In this way they obtain analytical expressions for the energies of the system and conclude that there are \textit{permitted} values of a parameter that characterizes the magnetic field. The purpose of this Comment is the analysis of the effect of the truncation approach on the physical conclusions drawn by the authors. The starting point of present discussion is the eigenvalue equation for the radial part of the Schr\"{o}dinger equation \begin{eqnarray} &&F^{\prime \prime }(x)+\frac{1}{x}F^{\prime }(x)-\frac{\gamma ^{2}}{x^{2}} F(x)+\frac{\theta }{x}F(x)-x^{2}F(x)+WF(x)=0, \nonumber \\ &&W=\frac{\beta }{\tau },\;\beta =\mathcal{E}^{2}-m^{2}-p_{z}^{2},\;\gamma ^{2}=l^{2}-\alpha ^{2},\;\tau ^{2}=\frac{1}{2}gb\chi ^{2}, \nonumber \\ &&\theta =\frac{2\alpha \mathcal{E}}{\sqrt{\tau }} \label{eq:eig_eq} \end{eqnarray} where $l=0,\pm 1,\pm 2,\ldots $ is the rotational quantum number (restricted to $l^{2}\geq \alpha ^{2}$), $m$ the mass of the particle, $\alpha $ the strength of a Coulomb-type potential, $\mathcal{E}$ the energy, $b=-\left( K_{HB}\right) _{zz}>0$, $\chi $ comes from a magnetic field and $g$ is a constant. The constant $-\infty <p_{z}<\infty $ is the quantum number for the free motion along the $z$ direction. The authors simply set $\hbar =1$, $ c=1$ though there are well known procedures for obtaining suitable dimensionless equations in a clearer and more rigorous way\cite{F20}. In what follows we focus on the discrete values of $W$ that one obtains from the bound-state solutions of equation (\ref{eq:eig_eq}) that satisfy \begin{equation} \int_{0}^{\infty }\left| F(x)\right| ^{2}x\,dx<\infty . 
\label{eq:bound_states} \end{equation} Notice that we have bound states for all $-\infty <\theta <\infty $ and that the eigenvalues $W$ satisfy \begin{equation} \frac{\partial W}{\partial \theta }=-\left\langle \frac{1}{x}\right\rangle <0, \label{eq:HFT} \end{equation} according to the Hellmann-Feynman theorem\cite{F39}. The eigenvalue equation (\ref{eq:eig_eq}) is an example of conditionally solvable (or quasi-exactly solvable) problems that have been widely studied by several authors and exhibit a hidden algebraic structure (see, for example, \cite{T16} and references therein). In order to solve the eigenvalue equation (\ref{eq:eig_eq}) the authors proposed the ansatz \begin{equation} F(x)=x^{s}\exp \left( -\frac{x^{2}}{2}\right) P(x),\;P(x)=\sum_{j=0}^{\infty }a_{j}x^{j},\;s=|\gamma |, \label{eq:ansatz} \end{equation} and derived the three-term recurrence relation \begin{eqnarray} a_{j+2} &=&-\frac{\theta }{\left( j+2\right) \left[ j+2\left( s+1\right) \right] }a_{j+1}+\frac{2j+2s-W+2}{\left( j+2\right) \left[ j+2\left( s+1\right) \right] }a_{j},\; \nonumber \\ j &=&-1,0,\ldots ,\;a_{-1}=0,\;a_{0}=1. \label{eq:TTRR} \end{eqnarray} If the truncation condition $a_{n+1}=a_{n+2}=0$ has physically acceptable solutions then one obtains some exact eigenvalues and eigenfunctions. The reason is that $a_{j}=0$ for all $j>n$ and the factor $P(x)$ in equation ( \ref{eq:ansatz}) reduces to a polynomial of degree $n$. This truncation condition is equivalent to $W_{s}^{(n)}=2(n+s+1)$ and $a_{n+1}=0$. The latter equation is a polynomial function of $\theta $ of degree $n+1$ and it can be proved that all the roots $\theta _{s}^{(n,i)}$, $i=1,2,\ldots ,n+1$, $\theta _{s}^{(n,i)}<\theta _{s}^{(n,i+1)}$, are real\cite{CDW00,AF20}. If $ V(\theta ,x)=-\theta /x+x^{2}$ denotes the parameter-dependent potential for the model discussed here, then it is clear that the truncation condition produces an eigenvalue $W_{s}^{(n)}$ that is common to $n+1$ different potential-energy functions $V_{s}^{(n,i)}(x)=V\left( \theta _{s}^{(n,i)},x\right) $. Notice that in this analysis we have deliberately omitted part of the interaction that has been absorbed into $\gamma $ (or $s$ ) because it is not affected by the truncation approach. It is also worth noticing that the truncation condition only yields \textit{some particular} eigenvalues and eigenfunctions because not all the solutions $F(x)$ of (\ref {eq:eig_eq}) satisfying equation (\ref{eq:bound_states}) have polynomial factors $P(x)$. From now on we will refer to them as follows \begin{equation} F_{s}^{(n,i)}(x)=x^{s}P_{s}^{(n,i)}(x)\exp \left( -\frac{x^{2}}{2}\right) ,\;P_{s}^{(n,i)}(x)=\sum_{j=0}^{n}a_{j,s}^{(n,i)}x^{j}. \label{eq:f^(n,i)(y)} \end{equation} We want to stress that the $n+1$ eigenfunctions $F_{s}^{(n,i)}(x)$, $ i=1,2,\ldots ,n+1$ share the \textit{same} eigenvalue $W_{s}^{(n)}$, a point that was not taken into account by Vit\'{o}ria et al\cite{VBB18} and that is of utmost relevance, as shown below. Let us consider the first cases as illustrative examples. When $n=0$ we have $W_{s}^{(0)}=2(s+1)$, $\theta _{s}^{(0)}=0$ and the eigenfunction $ F_{s}^{(0)}(x)$ has no nodes. We may consider this case trivial because the problem reduces to the exactly solvable harmonic oscillator. Probably for this reason it was not explicitly considered by Vit\'{o}ria et al\cite{VBB18} . 
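The roots produced by the truncation condition for small $n$ can be generated mechanically from the recurrence relation (\ref{eq:TTRR}). The following short sketch (ours, written with sympy and not part of the original paper) imposes $W_{s}^{(n)}=2(n+s+1)$ and $a_{n+1}=0$ and solves for $\theta$; its output reproduces the roots discussed next.

\begin{verbatim}
import sympy as sp

s = sp.symbols('s', nonnegative=True)
theta, W = sp.symbols('theta W')

def a_coeffs(n):
    """a_{-1}=0, a_0=1 and the three-term recurrence up to a_{n+1}."""
    a = {-1: sp.Integer(0), 0: sp.Integer(1)}
    for j in range(-1, n):
        d = (j + 2) * (j + 2 * (s + 1))
        a[j + 2] = sp.cancel(-theta / d * a[j + 1]
                             + (2 * j + 2 * s - W + 2) / d * a[j])
    return a

for n in (1, 2):
    a = a_coeffs(n)
    poly = a[n + 1].subs(W, 2 * (n + s + 1))   # truncation: W = 2(n+s+1)
    print(n, sp.solve(sp.numer(sp.together(poly)), theta))
\end{verbatim}

For $n=1$ this yields $\theta=\pm\sqrt{4s+2}$ and for $n=2$ it yields $\theta=0,\,\pm 2\sqrt{4s+3}$.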
When $n=1$ there are two roots $\theta_{s}^{(1,1)}=-\sqrt{4s+2}$ and $\theta_{s}^{(1,2)}=\sqrt{4s+2}$ and the corresponding non-zero coefficients are \begin{equation} a_{1,s}^{(1,1)}=\frac{\sqrt{2}}{\sqrt{2s+1}},\;a_{1,s}^{(1,2)}=-\frac{\sqrt{2}}{\sqrt{2s+1}}, \label{eq:a^(1,i)_1} \end{equation} respectively. We appreciate that the eigenfunction $F_{s}^{(1,1)}(x)$ is nodeless and $F_{s}^{(1,2)}(x)$ has one node and that both correspond to the \textit{same} eigenvalue $W_{s}^{(1)}$. When $n=2$ the results are \begin{eqnarray} \theta _{s}^{(2,1)} &=&-2\sqrt{4s+3},\;a_{1,s}^{(2,1)}=\frac{2\sqrt{4s+3}}{2s+1},\;a_{2,s}^{(2,1)}=\frac{2}{2s+1}, \nonumber \\ \theta _{s}^{(2,2)} &=&0,\;a_{1,s}^{(2,2)}=0,\;a_{2,s}^{(2,2)}=-\frac{1}{s+1}, \nonumber \\ \theta _{s}^{(2,3)} &=&2\sqrt{4s+3},\;a_{1,s}^{(2,3)}=-\frac{2\sqrt{4s+3}}{2s+1},\;a_{2,s}^{(2,3)}=\frac{2}{2s+1}. \label{eq:alpha,a_n=2} \end{eqnarray} Notice that $F_{s}^{(2,1)}(x)$, $F_{s}^{(2,2)}(x)$ and $F_{s}^{(2,3)}(x)$ have zero, one and two nodes, respectively, in the interval $0<x<\infty $ and that the three eigenfunctions correspond to the \textit{same} eigenvalue $W_{s}^{(2)}$.

From the truncation condition the authors derived \begin{equation} \mathcal{E}_{n,l,p_{z}}^{2}=m^{2}+p_{z}^{2}+2\tau \left( n+|\gamma |+1\right) , \label{eq:E^2_VBB} \end{equation} as well as expressions for $\tau _{n,l,p_{z}}$ and $\chi _{n,l,p_{z}}$, $n=1,2$. They concluded that there are \textit{permitted} values of $\chi $ that characterize the magnetic field. Since there are square-integrable solutions to the eigenvalue equation (\ref{eq:eig_eq}) for all values of $\theta $, it is clear that such particular values of $\chi $ are just an artifact of the truncation method that yields particular solutions to the eigenvalue equation with polynomial factors $P_{s}^{(n,i)}(x)$. Besides, the allowed energies associated to the nodes $n=1$ and $n=2$ obtained by the authors have no physical meaning because they stem from different potentials $V_{s}^{(n,i)}(x)$. In what follows we discuss this point in more detail.

In order to make present discussion clear we write the \textit{actual} eigenvalues of equation (\ref{eq:eig_eq}) as $W_{j,s}(\theta )$, $j=0,1,\ldots $, $W_{j,s}<W_{j+1,s}$. Given that there are square-integrable solutions for all $-\infty <\theta <\infty $, as indicated above, each eigenvalue can be considered to be a curve $W_{j,s}(\theta )$ in the $\left( \theta ,W\right) $ plane. Therefore, the correct energies of the system should be \begin{equation} \mathcal{E}_{j,l,p_{z}}^{2}=m^{2}+p_{z}^{2}+\tau W_{j,s}. \label{eq:E^2_present} \end{equation} Since the eigenvalue equation (\ref{eq:eig_eq}) is not exactly solvable, except for some particular values of $\theta $, we should resort to an approximate method in order to obtain the eigenvalues and eigenfunctions that are not given by the truncation condition. Here, we apply the well-known Rayleigh-Ritz variational method, which yields upper bounds to all the eigenvalues, and choose the non-orthogonal basis set $\left\{ x^{s+j}\exp \left( -\frac{x^{2}}{2}\right) ,\;j=0,1,\ldots \right\} $. We arbitrarily choose $s=0$ as a first illustrative example in order to facilitate the calculations.
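In practice the Rayleigh-Ritz calculation amounts to a generalized eigenvalue problem $\mathbf{H}\mathbf{c}=W\,\mathbf{S}\,\mathbf{c}$ in this non-orthogonal basis, with matrix elements taken with respect to the measure $x\,dx$ of equation (\ref{eq:bound_states}). The following minimal sketch (ours, not the authors' code; the basis size $N$ and the use of exact symbolic integrals are our own choices) sets this up for $\gamma=0$ and $\theta=-\sqrt{2}$ and should reproduce the eigenvalues quoted below.

\begin{verbatim}
import sympy as sp
import numpy as np
from scipy.linalg import eigh

x = sp.symbols('x', positive=True)
theta, gam, N = -sp.sqrt(2), 0, 8        # parameters; s = |gam| = 0

def phi(j):                              # basis function x^(s+j) exp(-x^2/2)
    return x**(gam + j) * sp.exp(-x**2 / 2)

def H_op(F):                             # operator of eq. (eq:eig_eq): H_op(F) = W F
    return (-sp.diff(F, x, 2) - sp.diff(F, x)/x + gam**2/x**2*F
            - theta/x*F + x**2*F)

S = np.zeros((N, N)); H = np.zeros((N, N))
for j in range(N):
    for k in range(N):
        S[j, k] = float(sp.integrate(phi(j)*phi(k)*x, (x, 0, sp.oo)))
        H[j, k] = float(sp.integrate(phi(j)*H_op(phi(k))*x, (x, 0, sp.oo)))

print(eigh(H, S, eigvals_only=True)[:4])  # compare with the N = 8 row of the first table
\end{verbatim}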
When $\theta =\theta _{0}^{(1,1)}=-\sqrt{2}$ the first four eigenvalues are $W_{0,0}=W_{0}^{(1)}=4$, $W_{1,0}=7.693978891$, $W_{2,0}=11.50604238$, $W_{3,0}=15.37592718$; on the other hand, when $\theta =\theta _{0}^{(1,2)}=\sqrt{2}$ we have $W_{0,0}=-1.459587134$, $W_{1,0}=W_{0}^{(1)}=4$, $W_{2,0}=8.344349427$, $W_{3,0}=12.53290130$. Notice that the truncation condition yields only the ground state for the former model and the first excited state for the latter, missing all the other eigenvalues for each model potential.

As a second example we choose $s=1$, again to facilitate the calculations. When $\theta =\theta _{1}^{(1,1)}=-\sqrt{6}$ the first four eigenvalues are $W_{0,1}=W_{1}^{(1)}=6$, $W_{1,1}=9.805784090$, $W_{2,1}=13.66928892$, $W_{3,1}=17.56601881$; on the other hand, when $\theta =\theta _{1}^{(1,2)}=\sqrt{6}$ we have $W_{0,1}=1.600357154$, $W_{1,1}=W_{1}^{(1)}=6$, $W_{2,1}=10.21072810$, $W_{3,1}=14.35078474$. Notice that the truncation condition yields only the lowest state for the former model and the second-lowest one for the latter, missing all the other eigenvalues for each model potential.

In order to convince the reader of the accuracy of the variational method, Tables \ref{tab:theta1} and \ref{tab:theta2} show how the approximate eigenvalues given by this approach converge from above towards the exact eigenvalues of equation (\ref{eq:eig_eq}) as the number $N$ of functions in the expansion increases. We appreciate that the variational method yields the exact eigenvalue $W_{0}^{(1)}$ for all $N$ because the corresponding eigenfunction is, in this case, a linear combination of only two basis functions.

From the analysis above one may draw the wrong conclusion that the truncation condition is utterly useless; however, it has been shown that one can extract valuable information about the spectrum of conditionally solvable models if one arranges and connects the roots $W_{s}^{(n)}$ properly \cite{CDW00,AF20}. From the analysis outlined above we conclude that $\left( \theta _{s}^{(n,i)},W_{s}^{(n)}\right) $ is a point on the curve $W_{i-1,s}(\theta )$, $i=1,2,\ldots ,n+1$, so that we can easily construct some parts of such spectral curves. For example, Figure~\ref{Fig:Wn} shows several eigenvalues $W_{0}^{(n)}$ and $W_{1}^{(n)}$ given by the truncation condition (blue points) and red lines representing the variational calculations. Notice that the continuous variational curves $W_{j,s}(\theta )$ already connect the points $W_{s}^{(n)}$ corresponding to the truncation condition. In other words, the variational method yields \textit{all} the eigenvalues $W_{j,s}(\theta )$ for any value of $\theta $ while the truncation results $W_{s}^{(n)}$ are just \textit{some particular} points on the curves. Besides, it is clear that the variational curves $W_{j,s}(\theta )$ have negative slopes, as predicted by the Hellmann-Feynman theorem (\ref{eq:HFT}).

We clearly see that the \textit{allowed} energies reported by Vit\'{o}ria et al\cite{VBB18} have no physical meaning because they correspond to many different problems instead of just one. In addition, the occurrence of discrete \textit{permitted} values of the magnetic field parameter $\chi $ is a mere consequence of selecting particular points $\left( \theta _{s}^{(n,i)},W_{s}^{(n)}\right) $ on the curves $W_{j,s}(\theta )$. It should be clear from present analysis that such points (by themselves) do not exhibit any physical meaning.
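One can also check directly that such a point does correspond to a genuine, if isolated, solution of equation (\ref{eq:eig_eq}). For instance, for $\gamma=0$ and $\theta=-\sqrt{2}$ the polynomial solution is $F(x)=\left(1+\sqrt{2}\,x\right)\mathrm{e}^{-x^{2}/2}$ with $W=4$, the value marked on the lowest curve; a short verification (ours, using sympy):

\begin{verbatim}
import sympy as sp
x = sp.symbols('x', positive=True)
F = (1 + sp.sqrt(2)*x) * sp.exp(-x**2/2)
res = sp.diff(F, x, 2) + sp.diff(F, x)/x - sp.sqrt(2)/x*F - x**2*F + 4*F
print(sp.simplify(res))   # prints 0: F solves the equation with gamma = 0, W = 4
\end{verbatim}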
Notice that the truncation method yields the complete exact solution only in the trivial case $\theta =0$. This fact is already discussed in many textbooks of quantum mechanics, where it is shown that the coefficients of the power series expansions of the solutions to the exactly solvable quantum-mechanical models, like the harmonic oscillator, hydrogen atom, Morse oscillator, etc., satisfy two-term recurrence relations and not three-term ones like the quasi-exactly solvable problems\cite{CDW00,AF20}. In two earlier papers in this journal, Bakke\cite{B14} and Bakke and Furtado \cite{BF15} discussed physical systems with different interactions, arrived at the same eigenvalue equation, applied the same approach and, consequently, drew somewhat similar wrong physical conclusions.

\section*{Addendum}

According to the reviewer: ``Another point to be observed is the dependence of the Rayleigh-Ritz variational method on the choice of the wave function. Despite not being mentioned by the authors of this comment, the wave function used in the Rayleigh-Ritz variational method is obtained from the asymptotic analysis made by Vit\'{o}ria et al. If one uses another wave function that differs from the wave function obtained from the asymptotic analysis made by Vit\'{o}ria et al, therefore, the results will be different. In addition, no mathematical proof has been shown in this comment that clarifies the relation of the approximate solutions to the biconfluent Heun equation.''

This comment is surprising. In order to apply the Ritz variational method it is mandatory that the basis functions satisfy the correct boundary conditions at $x=0$ and $x\rightarrow \infty $. Therefore, we have chosen the simplest basis set that satisfies such boundary conditions. The set of Gaussian functions chosen here is complete and, for this reason, it should give the actual eigenvalues of the problem at hand. This fact is clearly revealed in the convergence of the approximate eigenvalues shown in Tables \ref{tab:theta1} and \ref{tab:theta2}. The Ritz variational method is well known and has been widely used for the study of many quantum-mechanical problems.
\begin{table}[tbp] \caption{Eigenvalues $W_{j,0}$ for $\gamma=0$ and $\theta=-\protect\sqrt{2}$} \label{tab:theta1} \begin{center} \par \begin{tabular}{D{.}{.}{3}D{.}{.}{11}D{.}{.}{11}D{.}{.}{11}D{.}{.}{11}} \hline \multicolumn{1}{c}{$N$}& \multicolumn{1}{c}{$W_{0 0}$} & \multicolumn{1}{c}{$W_{1 0}$} & \multicolumn{1}{c}{$W_{2 0}$} & \multicolumn{1}{c}{$W_{3 0}$}\\ \hline 2 & 4.000000000 & 10.49997602 & & \\ 3 & 4.000000000 & 7.751061995 & 19.88102859 & \\ 4 & 4.000000000 & 7.694010921 & 11.97562584 & 33.92039998 \\ 5 & 4.000000000 & 7.693979367 & 11.51212379 & 17.05520450 \\ 6 & 4.000000000 & 7.693978905 & 11.50604696 & 15.46896992 \\ 7 & 4.000000000 & 7.693978892 & 11.50604243 & 15.37652840 \\ 8 & 4.000000000 & 7.693978891 & 11.50604238 & 15.37592761 \\ 9 & 4.000000000 & 7.693978891 & 11.50604238 & 15.37592718 \\ 10 & 4.000000000 & 7.693978891 & 11.50604238 & 15.37592718 \\ \end{tabular} \par \end{center} \end{table} \begin{table}[tbp] \caption{Eigenvalues $W_{j,0}$ for $\gamma=0$ and $\theta=\protect\sqrt{2}$} \label{tab:theta2} \begin{center} \par \begin{tabular}{D{.}{.}{3}D{.}{.}{11}D{.}{.}{11}D{.}{.}{11}D{.}{.}{11}} \hline \multicolumn{1}{c}{$N$}& \multicolumn{1}{c}{$W_{0 0}$} & \multicolumn{1}{c}{$W_{1 0}$} & \multicolumn{1}{c}{$W_{2 0}$} & \multicolumn{1}{c}{$W_{3 0}$}\\ \hline 2& -1.180391283 & 4.000000000 & & \\ 3& -1.401182256 & 4.000000000 & 9.284143096 & \\ 4& -1.449885589 & 4.000000000 & 8.345259771 & 17.66452696 \\ 5& -1.458156835 & 4.000000000 & 8.344361267 & 12.69095166 \\ 6& -1.459389344 & 4.000000000 & 8.344349784 & 12.53313315 \\ 7& -1.459560848 & 4.000000000 & 8.344349442 & 12.53290257 \\ 8& -1.459583736 & 4.000000000 & 8.344349427 & 12.53290132 \\ 9& -1.459586704 & 4.000000000 & 8.344349427 & 12.53290130 \\ 10& -1.459587081 & 4.000000000 & 8.344349427 & 12.53290130 \\ 11 & -1.459587128 & 4.000000000 & 8.344349427 & 12.53290130 \\ 12 & -1.459587134 & 4.000000000 & 8.344349427 & 12.53290130 \\ 13 & -1.459587134 & 4.000000000 & 8.344349427 & 12.53290130 \\ \end{tabular} \par \end{center} \end{table} \begin{figure} \caption{Eigenvalues $W_{j,0} \label{Fig:Wn} \end{figure} \end{document}
math
17,996
\begin{document} \hyphenation{boun-da-ry mo-no-dro-my sin-gu-la-ri-ty ma-ni-fold ma-ni-folds re-fe-rence se-cond se-ve-ral dia-go-na-lised con-ti-nuous thres-hold re-sul-ting fi-nite-di-men-sio-nal ap-proxi-ma-tion pro-per-ties ri-go-rous mo-dels mo-no-to-ni-ci-ty pe-ri-o-di-ci-ties mi-ni-mi-zer mi-ni-mi-zers know-ledge ap-proxi-mate pro-per-ty poin-ting ge-ne-ra-li-za-tion ge-ne-ral re-pre-sen-ta-tions equi-variance equi-variant Equi-variance Choo-sing to-po-lo-gy brea-king}

\title{Coupled cell networks and their hidden symmetries}

\noindent \abstract{\noindent Dynamical systems with a coupled cell network structure can display synchronous solutions, spectral degeneracies and anomalous bifurcation behavior. We explain these phenomena here for homogeneous networks, by showing that every homogeneous network dynamical system admits a semigroup of hidden symmetries. The synchronous solutions lie in the symmetry spaces of this semigroup and the spectral degeneracies of the network are determined by its indecomposable representations. Under a condition on the semigroup representation, we prove that a one-parameter synchrony breaking steady state bifurcation in a coupled cell network must generically occur along an absolutely indecomposable subrepresentation. We conclude with a classification of generic one-parameter bifurcations in monoid networks with two or three cells.}

\section{Introduction}

Coupled cell networks arise abundantly in the sciences. They vary from discrete particle models, electrical circuits and Josephson junction arrays to the world wide web, power grids, food webs and neuronal networks. Throughout the last decade, an extensive mathematical theory has been developed for the study of dynamical systems with a network structure \cite{field}, \cite{curious}, \cite{golstew}, \cite{stewartnature}, \cite{pivato}. In these network dynamical systems, the evolution of a constituent or ``cell'' is determined by the states of certain particular other cells. In this paper, we shall study the dynamics of homogeneous coupled cell networks. This dynamics is determined by a system of ordinary differential equations of the form \begin{align}\label{diffeqnintro} \dot x_i = f(x_{\sigma_1(i)}, \ldots, x_{\sigma_n(i)}) \ \mbox{for} \ 1\leq i\leq N. \end{align} In these equations of motion, the evolution of the state variable $x_i$ is only determined by the values of $x_{\sigma_1(i)}, \ldots, x_{\sigma_n(i)}$. The functions $$\sigma_1, \ldots, \sigma_n: \{1, \ldots, N\} \to \{1, \ldots, N\}$$ should therefore be thought of as the network that decides which cells influence which cells. A network structure can have a nontrivial impact on the behavior of a dynamical system. For instance, the network architecture of (\ref{diffeqnintro}) may force it to admit synchronous or partially synchronous solutions, cf. \cite{antonelli2}, \cite{romano}, \cite{golstew3}, \cite{torok}, \cite{stewart1}, \cite{pivato}, \cite{wang}. It has also been observed that the network structure of a dynamical system can influence its bifurcations. In fact, bifurcation scenarios that are unheard of in dynamical systems without any special structure can occur generically in certain networks \cite{bifurcations}, \cite{anto4}, \cite{dias}, \cite{elmhirst}, \cite{krupa}, \cite{pivato2}, \cite{claire2}, \cite{synbreak}.
In particular, its network structure can force the linearization of (\ref{diffeqnintro}) at a (partially) synchronous equilibrium to have eigenvalues with high multiplicity \cite{synbreak2}, \cite{leite}, \cite{feedforwardRinkSanders}. This in turn influences the solutions and bifurcations that can occur near such an equilibrium. Attempts to understand this degenerate behaviour of networks have invoked the {\it groupoid formalism} of Golubitsky and Stewart et al. \cite{curious}, \cite{golstew}, \cite{stewartnature}, \cite{pivato} and more recently also the language of category theory \cite{deville}. In this paper, we propose another explanation though, inspired by the remark that invariant subspaces and spectral degeneracies are often found in dynamical systems with symmetry, cf. \cite{field4}, \cite{perspective}, \cite{golschaef2}. It is natural to ask whether symmetries explain the dynamical degeneracies of coupled cell networks and it has been conjectured that in general they do not \cite{golstew}. We nevertheless show in this paper that every homogeneous coupled cell network has hidden symmetries. More precisely, it turns out that equations (\ref{diffeqnintro}) are conjugate to another coupled cell network that admits a semigroup of symmetries. We call this latter network the {\it fundamental network} of (\ref{diffeqnintro}) and it is given by equations of the form \begin{align}\label{fundamentalintro} \dot X_j = f(X_{\widetilde \sigma_1(j)}, \ldots, X_{\widetilde \sigma_{n'}(j)})\ \mbox{for} \ 1\leq j \leq n'. \end{align} We will show how to compute the symmetries of (\ref{fundamentalintro}) from the network maps $\sigma_1, \ldots, \sigma_n$ and note that some of these symmetries may be represented by noninvertible transformations. It will moreover be shown that if the semigroup of symmetries is a monoid (i.e. if it contains a unit), then the fundamental network (\ref{fundamentalintro}) is completely characterized by its symmetries. This means that all the degenerate phenomena that occur in the fundamental network are due to symmetry, including the existence of synchronous solutions and the occurrence of unfamiliar bifurcations. The characterization of the fundamental network as an equivariant dynamical system is of great practical interest, because it allows us to understand the structure of equations (\ref{diffeqnintro}) and (\ref{fundamentalintro}) much better. For example, with the help of representation theory we are able to classify the generic one-parameter synchrony breaking steady state bifurcations that can be found in fundamental networks with two or three cells. We also explain how our knowledge of the fundamental network helps us understanding the behavior of the original network (\ref{diffeqnintro}). The remainder of this paper is organized as follows. In Section \ref{secsemigroup} we introduce the semigroup associated to the equations of motion (\ref{diffeqnintro}) and we recall some results from \cite{CCN} on semigroup coupled cell networks. In Section \ref{sechidden} we relate equations (\ref{diffeqnintro}) to the fundamental network (\ref{fundamentalintro}) and we show that the latter is equivariant under the action of the semigroup introduced in Section \ref{secsemigroup}. In Section \ref{secrepresentations} we present and prove some well-known facts from the representation theory of semigroups. We apply this theory in Sections \ref{seclyapunovschmidt} and \ref{secgeneric}, where we build a framework for the bifurcation theory of fundamental networks. 
More precisely, in Section \ref{seclyapunovschmidt} we introduce a variant of the method of Lyapunov-Schmidt reduction to investigate steady state bifurcations in differential equations with a semigroup of symmetries. In Section \ref{secgeneric} we then prove (under a certain condition on the semigroup) that a generic synchrony breaking steady state bifurcation in a one-parameter family of semigroup symmetric differential equations takes place along an absolutely indecomposable representation of the semigroup. We apply this result in Section \ref{sectwoorthree} to classify the generic co-dimension one synchrony breaking steady state bifurcations that can occur in monoid networks with two or three cells. \section{Semigroup networks}\label{secsemigroup} In this section we make some basic definitions and summarize some results from \cite{CCN}. Dynamical systems with a coupled cell network structure can be determined in various ways \cite{field}, \cite{golstew}, \cite{torok}, \cite{pivato}, but in this paper we describe it by means of a collection of distinct maps $$\Sigma=\{\sigma_1, \ldots, \sigma_n\}\ \mbox{with} \ \sigma_1,\ldots, \sigma_n: \{1, \ldots, N\}\to\{1,\ldots, N\}\, .$$ The collection $\Sigma$ has the interpretation of a network with $1\leq N < \infty$ cells. These cells can be thought of as the vertices of a directed multigraph in which vertex $1\leq i\leq N$ receives inputs from respectively the vertices $\sigma_1(i), \ldots, \sigma_n(i)$. The idea is that the state of cell $1\leq i\leq N$ is determined by a variable $x_i$ that takes values in a vector space $V$ and that the evolution of cell $i$ is determined only by the states of the cells that act as its inputs. With this in mind, we make the following definition: \begin{definition}\label{networkdefinition} Let $\Sigma=\{\sigma_1, \ldots, \sigma_n\}$ be a collection of $n$ distinct maps on $N$ elements, $V$ a finite dimensional real vector space and $f: V^n\to V$ a smooth function. Then we define \begin{align}\label{networkvectorfield} \gamma_f:V^N\to V^N \ \mbox{by}\ (\gamma_f)_i(x):=f(x_{\sigma_1(i)}, \ldots, x_{\sigma_n(i)})\ \mbox{for}\ 1 \leq i \leq N\, . \end{align} Depending on the context, we will say that $\gamma_f$ is a {\it homogeneous coupled cell network map} or a {\it homogeneous coupled cell network vector field} subject to $\Sigma$. \end{definition} Indeed, the coupled cell network vector field $\gamma_f$ defines a dynamical system in which the evolution of the state of cell $i$ is determined by the states of cells $\sigma_1(i), \ldots, \sigma_n(i)$, namely $$\dot x(t) = \gamma_f(x(t))\, .$$ One can also view $\gamma_f$ as a map rather than a vector field and study the discrete dynamics $$x^{(n+1)}=\gamma_f(x^{(n)})\, .$$ \begin{example}\label{running} Our running example of a network dynamical system will consist of $N=3$ cells and $n=3$ inputs per cell. In fact, let us choose $$\sigma_1[123]:=[123], \sigma_2[123]:=[121]\ \mbox{and} \ \sigma_3[123]:=[111]\, .$$ Then the coupled cell network maps subject to $\Sigma:=\{\sigma_1, \sigma_2, \sigma_3\}$ are of the form $$\gamma_f(x_1, x_2, x_3) = \left(f(x_1, x_1, x_1), f(x_2, x_2, x_1), f(x_3, x_1, x_1) \right)\, . $$ The corresponding network differential equations are $$\begin{array}{ll} \dot x_1 =& f(x_1, x_1, x_1) \\ \dot x_2 =& f(x_2, x_2, x_1) \\ \dot x_3 =& f(x_3, x_1, x_1)\end{array} \, .$$ A graphical representation of the networks maps $\sigma_1, \sigma_2, \sigma_3$ is given in Figure \ref{pict1}. 
\begin{figure} \caption{\footnotesize {\rm The collection $\{\sigma_1, \sigma_2, \sigma_3\} \label{pict1} \end{figure} \end{example} A technical problem that occurs when studying network dynamical systems is that the composition $\gamma_f\circ \gamma_g$ (or the infinitesimal composition $[\gamma_f, \gamma_g]$, the Lie bracket) of two coupled cell network maps need not be a coupled cell network map with the same network structure. This problem was addressed in \cite{CCN}, where we formulated a condition on a network that guarantees that this problem does not arise. Let us recall this condition here: \begin{definition} We say that $\Sigma=\{\sigma_1, \ldots, \sigma_n\}$ is a {\it semigroup} if all its elements are distinct and if for all $1\leq j_1, j_2\leq n$ there is a $1\leq j_3\leq n$ such that $\sigma_{j_1}\circ \sigma_{j_2}= \sigma_{j_3}$. \end{definition} \begin{example}\label{comptable} Recall our running Example \ref{running}. The collection $\Sigma=\{\sigma_1, \sigma_2, \sigma_3\}$ forms an abelian semigroup. Indeed, one checks that the composition table of these maps is given by: $$\begin{array}{c|ccc} \circ & \sigma_1 & \sigma_2 & \sigma_3\\ \hline \sigma_1 & \sigma_1 & \sigma_2 & \sigma_3\\ \sigma_2 & \sigma_2 & \sigma_2 & \sigma_3 \\ \sigma_3 & \sigma_3 & \sigma_3 & \sigma_3 \end{array} \, .$$ \end{example} The relevance of semigroup networks is illustrated by the following theorem. It is one of the main results in \cite{CCN} and we omit the proof here. \begin{theorem}\label{closed} When $\Sigma=\{\sigma_1, \ldots, \sigma_n\}$ is a semigroup, then the collection $$\{\gamma_f\, |\, f:V^n\to V\ \mbox{smooth}\}$$ is closed under taking compositions and Lie brackets. \end{theorem} \begin{example} Recall that our running Example \ref{running} is a semigroup network. When \begin{align}\nonumber \gamma_f(x_1, x_2, x_3) & = \left(f(x_1, x_1, x_1), f(x_2, x_2, x_1), f(x_3, x_1, x_1) \right) \ \mbox{and} \\ \nonumber \gamma_g(x_1, x_2, x_3) & = \left(g(x_1, x_1, x_1), g(x_2, x_2, x_1), g(x_3, x_1, x_1) \right) \end{align} are two coupled cell networks subject to $\Sigma$, then one computes that $$(\gamma_f\circ \gamma_g)(x_1, x_2, x_3) = \left( \begin{array}{c} f(g(x_1, x_1, x_1), g(x_1, x_1, x_1), g(x_1, x_1, x_1)) \\ f(g(x_2, x_2, x_1), g(x_2, x_2, x_1), g(x_1, x_1, x_1))\\ f(g(x_3, x_1, x_1), g(x_1, x_1, x_1), g(x_1, x_1, x_1)) \end{array} \right) \, .$$ As anticipated by Theorem \ref{closed}, this shows that $\gamma_f\circ \gamma_g = \gamma_h$ with $$h(X_1, X_2, X_3) = f(g(X_1, X_2, X_3), g(X_2, X_2, X_3), g(X_3, X_3, X_3))\, .$$ A similar computation shows that $[\gamma_f, \gamma_g] := D\gamma_f\cdot \gamma_g-D\gamma_g\cdot \gamma_f= \gamma_{i}$ with \begin{align} i(X_1, X_2, X_3) & = \nonumber D_1f(X_1, X_2, X_3)\cdot g(X_1, X_2, X_3) - D_1g(X_1, X_2, X_3)\cdot f(X_1, X_2, X_3) \\ \nonumber & + D_2f(X_1, X_2, X_3)\cdot g(X_2, X_2, X_3) - D_2g(X_1, X_2, X_3)\cdot f(X_2, X_2, X_3) \\ \nonumber & + D_3f(X_1, X_2, X_3)\cdot g(X_3, X_3, X_3) - D_3g(X_1, X_2, X_3)\cdot f(X_3, X_3, X_3)\, . \end{align} \end{example} Theorem \ref{closed} means that the class of semigroup network dynamical systems is a natural one to work with. It was shown for example in \cite{CCN} that near a dynamical equilibrium, the local normal form of a semigroup network vector field is a network vector field with the very same semigroup network structure. From the point of view of local dynamics and bifurcation theory, semigroup networks are thus very useful. 
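For readers who prefer to see the closure property checked mechanically, the following small sketch (ours, not part of the paper) encodes the maps of the running example as tuples and recomputes the composition table of Example \ref{comptable}.

\begin{verbatim}
# sigma_j is stored as a tuple with sigma_j(i) = sigma[j][i-1]
sigma = {1: (1, 2, 3),    # sigma_1 = identity
         2: (1, 2, 1),    # sigma_2[123] = [121]
         3: (1, 1, 1)}    # sigma_3[123] = [111]

def compose(s, t):
    """(s o t)(i) = s(t(i))."""
    return tuple(s[t[i] - 1] for i in range(3))

index = {v: k for k, v in sigma.items()}
table = {(j1, j2): index[compose(sigma[j1], sigma[j2])]
         for j1 in sigma for j2 in sigma}
print(table)   # every composition is again some sigma_j: Sigma is a semigroup
\end{verbatim}

Running it returns, for instance, the entries \texttt{(2, 3): 3} and \texttt{(3, 2): 3}, in agreement with the table above.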
An arbitrary collection $\Sigma=\{\sigma_1,\ldots,\sigma_n\}$ need of course not be a semigroup, but it does generate a unique smallest semigroup $$\Sigma'=\{\sigma_1, \ldots, \sigma_n, \sigma_{n+1},\ldots, \sigma_{n'}\} \ \mbox{that contains}\ \Sigma\, .$$ In fact, every coupled cell network map $\gamma_f$ subject to $\Sigma$ is also a coupled cell network map subject to the semigroup $\Sigma'$. Indeed, if we define $$f'(X_1, \ldots, X_n, X_{n+1}, \ldots, X_{n'}) := f(X_1, \ldots, X_n)$$ then it obviously holds that $$(\gamma_{f'})_i(x) = f'(x_{\sigma_1(i)}, \ldots, x_{\sigma_n(i)}, x_{\sigma_{n+1}(i)}, \ldots, x_{\sigma_{n'}(i)}) = f(x_{\sigma_1(i)}, \ldots, x_{\sigma_n(i)}) = (\gamma_{f})_i(x)\ .$$ For this reason, we will throughout this paper always augment $\Sigma$ to the semigroup $\Sigma'$ and think of every coupled cell network map subject to $\Sigma$ as a (special case of a) coupled cell network map subject to the semigroup $\Sigma'$. To illustrate that this augmentation is natural, let us finish this section by mentioning a result from \cite{CCN} concerning the synchronous solutions of a network dynamical system. We recall the following well-known definition: \begin{definition} Let $\Sigma=\{\sigma_1, \ldots, \sigma_n\}$ be a collection of maps, not necessarily forming a semigroup, and $P=\{P_1,\ldots, P_r\}$ a partition of $\{1,\dots, N\}$. If the subspace $$ {\rm Syn}_{P} :=\{x\in V^N\, |\ x_{i_1}=x_{i_2} \, \mbox{when} \ i_1 \ \mbox{and}\ i_2\ \mbox{are in the same element of}\ P\, \}$$ is an invariant submanifold for the dynamics of $\gamma_f$ for every $f\in C^{\infty}(V^n,V)$, then we call ${\rm Syn}_{P}$ a {\it (robust) synchrony space} for the network defined by $\Sigma$. \end{definition} Interestingly, the synchrony spaces of networks subject to $\Sigma$ are the same as those for networks subject to $\Sigma'$. This means that the extension from $\Sigma$ to $\Sigma'$ does not have any effect on synchrony. Indeed, let us state the following result from \cite{CCN}. The proof, that we do not give here, is easy and uses the concept of a {\it balanced partition} of the cells \cite{curious}, \cite{golstew3}, \cite{torok}, \cite{CCN}, \cite{stewart1}, \cite{pivato}. \begin{lemma}\label{robustness} Let $\Sigma=\{\sigma_1, \ldots, \sigma_n\}$ be a collection of maps, not necessarily forming a semigroup, and $P=\{P_1,\ldots, P_r\}$ a partition of $\{1,\dots, N\}$. Then ${\rm Syn}_P$ is a synchrony space for $\Sigma$ if and only if it is a synchrony space for the semigroup $\Sigma'$ generated by $\Sigma$. \end{lemma} Lemma \ref{robustness} shows that the extension from $\Sigma$ to $\Sigma'$ is harmless from the point of view of synchrony. \section{Hidden symmetry}\label{sechidden} We will now show that every homogeneous coupled cell network is conjugate to a network that is equivariant under a certain action of a semigroup. The symmetry of the latter network thus acts as a hidden symmetry for the original network. For the remainder of this paper, let us assume that $\Sigma=\{\sigma_1, \ldots, \sigma_n\}$ is a semigroup (i.e. $\Sigma=\Sigma'$ and the necessary extension has taken place). To understand the hidden symmetries of the networks subject to $\Sigma$, one should note that every $\sigma_j\in \Sigma$ induces a map $$\widetilde \sigma_j: \{1,\ldots, n\}\to\{1, \ldots, n\}\ \mbox{via the formula}\ \sigma_{\widetilde \sigma_j(k)} = \sigma_j \circ \sigma_k\, .$$ The map $\widetilde \sigma_j$ encodes the left-multiplicative behavior of $\sigma_j$. 
We shall write $\widetilde \Sigma := \{\widetilde \sigma_1, \ldots, \widetilde \sigma_n\}$. The following result is easy to prove: \begin{proposition} If $\Sigma$ is a semigroup with unit, then so is $\widetilde \Sigma$ and the map $$\sigma_j \mapsto \widetilde \sigma_j\ \mbox{from}\ \Sigma\ \mbox{to}\ \widetilde \Sigma$$ is a homomorphism of semigroups. \end{proposition} \begin{proof} By definition, it holds for all $i,j,k$ that $$\sigma_{\widetilde{ (\sigma_i \circ \sigma_j)}(k)} = (\sigma_i \circ \sigma_j) \circ \sigma_k = \sigma_i\circ (\sigma_j\circ \sigma_k)= \sigma_i\circ \sigma_{\widetilde \sigma_j(k)} = \sigma_{\widetilde \sigma_i ( \widetilde \sigma_j(k))}\, . $$ Because $\Sigma$ is a semigroup (and hence its elements are distinct), this implies that $$\widetilde{\sigma_{i}\circ\sigma_{j}} =\widetilde \sigma_{i}\circ \widetilde \sigma_{j}\, .$$ In other words, $\widetilde \Sigma$ is closed under composition and the map $\sigma_j \mapsto \widetilde \sigma_j$ is a homomorphism. It remains to check that if $\Sigma$ has a unit, then so does $\widetilde \Sigma$ and its elements are distinct. So let us assume that $\sigma_{i^*}$ is the unit of $\Sigma$, i.e. that $\sigma_{i^*}\circ \sigma_{j} = \sigma_j\circ \sigma_{i^*}= \sigma_j$ for all $j$. Then $$\sigma_{\widetilde \sigma_{i^*}(j)} = \sigma_{i^*}\circ \sigma_{j} = \sigma_j\, .$$ This means that $\widetilde \sigma_{i^*}={\rm id}_{\{1, \ldots, n\}}$ and hence that $\widetilde \Sigma$ has a unit. Finally, it also follows for $j\neq k$ that $$\sigma_{\widetilde \sigma_{j}(i^*)} = \sigma_{j}\circ \sigma_{i^*}=\sigma_j \neq \sigma_k = \sigma_{k}\circ \sigma_{i^*}=\sigma_{\widetilde \sigma_{k}(i^*)}$$ and therefore that the elements of $\widetilde \Sigma$ are distinct. \end{proof} \begin{definition} We shall call a semigroup with (left- and right-)unit a {\it monoid}. \end{definition} \begin{example}\label{running5} Recall our running Example \ref{running} in which $$\sigma_1[123]:=[123], \sigma_2[123]:=[121]\ \mbox{and} \ \sigma_3[123]:=[111]\, .$$ From the composition table of $\Sigma$ given in Example \ref{comptable}, we see that $\widetilde \sigma_1, \widetilde \sigma_2, \widetilde \sigma_3$ are given by $$\widetilde \sigma_1[123]=[123], \widetilde \sigma_2[123]=[223]\ \mbox{and}\ \widetilde \sigma_3[123]=[333]\, .$$ These maps are not conjugate to the maps $\sigma_1, \sigma_2, \sigma_3$ by any permutation of the cells $\{1,2,3\}$ but do have the same composition table. A graphical representation of the network maps $\widetilde \sigma_1, \widetilde \sigma_2, \widetilde \sigma_3$ is given in Figure \ref{pict2}. \begin{figure} \caption{\footnotesize {\rm The collection $\{\widetilde \sigma_1, \widetilde \sigma_2, \widetilde \sigma_3\} \label{pict2} \end{figure} \end{example} One can of course also study coupled cell networks subject to the monoid $\widetilde \Sigma$. They give rise to a differential equation on $V^n$ of the form $$\dot X_j = f(X_{\widetilde \sigma_1(j)}, \ldots, X_{\widetilde \sigma_n(j)})\ \mbox{for}\ 1\leq j \leq n\, .$$ These differential equations will turn out important enough to give the corresponding maps and vector fields a special name: \begin{definition} Let $\Sigma=\{\sigma_1, \ldots, \sigma_n\}$ be a monoid and $f:V^n\to V$ a smooth function. Then we call the coupled cell network map/vector field $$\Gamma_f:V^n\to V^n\ \mbox{defined as} \ (\Gamma_{f})_j(X) := f (X_{\widetilde \sigma_1(j)}, \ldots, X_{\widetilde \sigma_n(j)}) \ \mbox{for}\ 1\leq j\leq n $$ the {\it fundamental network} of $\gamma_f$. 
\end{definition}

\begin{example}\label{fundamentalexample} For our running Example \ref{running} the maps $\widetilde \sigma_1, \widetilde \sigma_2, \widetilde \sigma_3$ were computed in Example \ref{running5}. We read off that the equations of motion of the fundamental network are given by $$\begin{array}{ll} \dot X_1 = & f(X_1, X_2, X_3) \\ \dot X_2 = & f(X_2, X_2, X_3) \\ \dot X_3 = & f(X_3, X_3, X_3)\end{array} \, .$$ \end{example}

\begin{remark} If $\Sigma$ is a monoid, then so is $\widetilde \Sigma$ and we can observe that $$\widetilde \sigma_{\widetilde{\widetilde \sigma}_i(j)} = \widetilde \sigma_i \circ \widetilde \sigma_j = \widetilde{\sigma_i \circ \sigma_j} = \widetilde{ \sigma_{\widetilde \sigma_i(j)}} = \widetilde \sigma_{\widetilde \sigma_i(j)}\, . $$ This proves that $\widetilde{\widetilde \sigma}_i = \widetilde \sigma_i$ and thus that $\widetilde{\widetilde \Sigma} = \widetilde \Sigma$. In particular, $\Gamma_f$ is equal to its own fundamental network. In fact, this is the reason we call $\Gamma_f$ ``fundamental''. \end{remark}

Theorem \ref{conjugation} below was proved in \cite{CCN} and demonstrates the relation between $\gamma_f$ and $\Gamma_f$:

\begin{theorem}\label{conjugation} For $1\leq i\leq N$ let us define the map $\pi_i:V^N\to V^n$ by $$\pi_i(x_1, \ldots, x_N):=(x_{\sigma_1(i)}, \ldots, x_{\sigma_n(i)})\, .$$ All the maps $\pi_i$ conjugate $\gamma_f$ to $\Gamma_f$, that is $$\Gamma_f\circ \pi_i = \pi_i\circ \gamma_f\ \ \mbox{for all} \ 1\leq i\leq N \, .$$ \end{theorem}

\begin{proof} We remark that the definition of $\pi_i:V^N\to V^n$ is such that $$(\gamma_f)_i=f\circ \pi_i\, .$$ With this in mind, let us also define for $1\leq j \leq n$ the maps \begin{align}\label{Amapdef} A_{\sigma_j}: V^n\to V^n \ \mbox{by}\ A_{\sigma_j}(X_1, \ldots, X_{n}) :=(X_{\widetilde \sigma_1(j)}, \ldots, X_{\widetilde \sigma_n(j)}) \end{align} for which it holds that $$(\Gamma_f)_j= f\circ A_{\sigma_j}\, .$$ With these definitions, we find that \begin{align} (A_{\sigma_j}\circ \pi_i)(x)& =A_{\sigma_j}(x_{\sigma_1(i)}, \ldots, x_{\sigma_n(i)}) = (x_{\sigma_{\widetilde\sigma_1(j)}(i)}, \ldots, x_{\sigma_{\widetilde\sigma_n(j)}(i)}) \nonumber \\ \nonumber & = (x_{\sigma_1(\sigma_j(i))}, \ldots, x_{\sigma_n(\sigma_j(i))}) = \pi_{\sigma_j(i)}(x)\, . \end{align} In other words, $$A_{\sigma_j}\circ \pi_i =\pi_{\sigma_j(i)}\, .$$ As a consequence, we have for $x\in V^N$ that $$(\Gamma_f\circ\pi_i)_j(x)=f(A_{\sigma_j}(\pi_i(x))) =f(\pi_{\sigma_j(i)}(x)) = (\gamma_f(x))_{\sigma_j(i)} = (\pi_i\circ\gamma_f)_j(x)\, .$$ This proves the theorem. \end{proof}

Theorem \ref{conjugation} says that $\Gamma_f$ is semi-conjugate to $\gamma_f$. In particular, every $\pi_i$ sends integral curves of $\gamma_f$ to integral curves of $\Gamma_f$ (and discrete-time orbits of $\gamma_f$ to those of $\Gamma_f$). The opposite need not be true though, because it may happen that none of the $\pi_i$ is invertible. In addition, the dynamics of $\gamma_f$ can be reconstructed from the dynamics of $\Gamma_f$. More precisely, when $x(t)$ is an integral curve of $\gamma_f$ and $X_{(i)}(t)$ are integral curves of $\Gamma_f$ with $X_{(i)}(0)=\pi_i(x(0))$, then $\pi_i(x(t)) = X_{(i)}(t)$ and thus $$\dot x_i(t) = f(\pi_i(x(t))) = f(X_{(i)}(t)) \ \mbox{for}\ 1\leq i\leq N\, .$$ This means that $x(t)$ can simply be obtained by integration.
In other words, it suffices to study the dynamics of $\Gamma_f$ to understand the dynamics of $\gamma_f$.

\begin{example}\label{conjugateexample} In our running Example \ref{running}, the maps $\pi_1, \pi_2, \pi_3: V^3\to V^3$ are given by $$\begin{array}{l} \pi_1(x_1, x_2, x_3) = (x_1, x_1, x_1)\, , \\ \pi_2(x_1, x_2, x_3) = (x_2, x_2, x_1)\, ,\\ \pi_3(x_1, x_2, x_3) = (x_3, x_1, x_1)\, . \end{array}$$ Although none of these maps is invertible, they indeed send the solutions of $$\begin{array}{ll} \dot x_1 = & f(x_1, x_1, x_1) \\ \dot x_2 = & f(x_2, x_2, x_1) \\ \dot x_3 = & f(x_3, x_1, x_1)\end{array} \, $$ to solutions of the equations $$\begin{array}{ll} \dot X_1 = & f(X_1, X_2, X_3) \\ \dot X_2 = & f(X_2, X_2, X_3) \\ \dot X_3 = & f(X_3, X_3, X_3)\end{array} \, .$$ \end{example}

\begin{remark} One could think of the maps $\pi_i$ as forming ``shadows'' of the dynamics of $\gamma_f$ in the dynamics of $\Gamma_f$, in such a way that the original dynamics can be reproduced from all its shadows. At the same time, the transition from $\gamma_f$ to $\Gamma_f$ is reminiscent of the symmetry reduction of an equivariant dynamical system: the dynamics of $\gamma_f$ descends to the dynamics of $\Gamma_f$ and the dynamics of $\gamma_f$ can be reconstructed from that of $\Gamma_f$ by means of integration. Most importantly, $\Gamma_f$ captures all the dynamics of $\gamma_f$. \end{remark}

\begin{remark}\label{equilibriaremark} When $\gamma_f(x)=0$ then $\Gamma_f(\pi_i(x)) = \pi_i(\gamma_f(x))=0$, that is $\pi_i$ sends equilibria of $\gamma_f$ to equilibria of $\Gamma_f$. On the other hand, when $\Gamma_f(\pi_i(x))=0$, then $$(\gamma_f)_{\sigma_{j}(i)}(x)=f(\pi_{\sigma_j(i)}(x)) = f(A_{\sigma_{j}}(\pi_i(x))) = (\Gamma_f)_j(\pi_i(x)) = 0\, .$$ Applied to $j=i^*$ this gives that $(\gamma_f)_{i}(x)=0$ if $\Gamma_f(\pi_i(x))=0$, that is $x$ is an equilibrium of $\gamma_f$ as soon as all maps $\pi_i$ send it to an equilibrium. We conclude that $x\in V^N$ is an equilibrium point of $\gamma_f$ if and only if all the points $\pi_i(x)\in V^n \ (1\leq i\leq N)$ are equilibria of $\Gamma_f$. With this in mind, we can determine the equilibria of $\gamma_f$ from those of $\Gamma_f$. \end{remark}

\noindent There are two major advantages of studying $\Gamma_f$ instead of $\gamma_f$: \begin{itemize} \item[{\bf 1.}] The network structure of $\Gamma_f$ only depends on the composition/multiplication table (i.e. the semigroup/monoid structure) of $\Sigma$. More precisely: when $\Sigma^{(1)}$ and $\Sigma^{(2)}$ are isomorphic monoids, then the fundamental networks $\Gamma_f^{(1)}$ and $\Gamma_f^{(2)}$ are (bi-)conjugate. This is not true in general for $\gamma_f^{(1)}$ and $\gamma_f^{(2)}$. Thus, every abstract monoid corresponds to precisely one fundamental network. \item[{\bf 2.}] The fundamental network $\Gamma_f$ is fully characterized by symmetry. This is the content of Theorem \ref{equithm} below, and one of the crucial points of this paper. \end{itemize}

\begin{theorem}\label{equithm} Let $\Sigma=\{\sigma_1, \ldots, \sigma_n\}$ be a monoid with unit $\sigma_{i^*}$ and define the maps \begin{align}\label{Amapdef2} A_{\sigma_j}: V^n\to V^n \ \mbox{by}\ A_{\sigma_j}(X_1, \ldots, X_{n}) :=(X_{\widetilde \sigma_1(j)}, \ldots, X_{\widetilde \sigma_n(j)})\ \mbox{for}\ 1\leq j\leq n\, . \end{align} Then the following are true: \begin{itemize} \item The maps $A_{\sigma_j}$ form a representation of $\Sigma$ in $V^n$, i.e.
$A_{\sigma_{i^*}}={\rm id}_{V^n}$ and $$A_{\sigma_j}\circ A_{\sigma_k} = A_{\sigma_j\circ \sigma_k}\ \mbox{for all}\ 1\leq j, k \leq n\, .$$ \item Each fundamental network $\Gamma_f:V^n\to V^n$ is equivariant under this representation: \begin{align}\label{equi} \Gamma_f\circ A_{\sigma_j}=A_{\sigma_j}\circ \Gamma_f \ \mbox{for all}\ 1\leq j \leq n\, . \end{align} \item Conversely, if $\Gamma:V^n\to V^n$ satisfies $$\Gamma\circ A_{\sigma_j}=A_{\sigma_j}\circ \Gamma\ \mbox{for all}\ 1\leq j \leq n\, ,$$ then there exists a function $f:V^n\to V$ such that $$\Gamma_j(X)= (\Gamma_f)_j(X) = f(A_{\sigma_j}X)\ \mbox{for all}\ 1\leq j\leq n\, .$$ \end{itemize} \end{theorem}

\begin{proof} Because $\sigma_{\widetilde \sigma_j(i^*)} = \sigma_j\circ \sigma_{i^*} = \sigma_j$, it holds that $\widetilde \sigma_j(i^*) = j$ and hence that $A_{\sigma_{i^*}}(X) = (X_{\widetilde \sigma_1(i^*)}, \ldots, X_{\widetilde \sigma_n(i^*)})=X$. This proves that $A_{\sigma_{i^*}}={\rm id}_{V^n}$. Furthermore, by definition $$\sigma_{\widetilde \sigma_{\widetilde \sigma_{k}(j_1)}(j_2)} = \sigma_{\widetilde \sigma_{k}(j_1)}\circ \sigma_{j_2}= \sigma_k\circ\sigma_{j_1}\circ\sigma_{j_2} = \sigma_k\circ\sigma_{\widetilde \sigma_{j_1}(j_2)}= \sigma_{\widetilde \sigma_k(\widetilde \sigma_{j_1}(j_2))} \, .$$ This implies that $\widetilde \sigma_{\widetilde \sigma_{k}(j_1)}(j_2) = \widetilde \sigma_k(\widetilde \sigma_{j_1}(j_2))$, which in turn yields that \begin{align}\nonumber (&A_{\sigma_{j_1}}\circ A_{\sigma_{j_2}}) (X_1, \ldots, X_n) = A_{\sigma_{j_1}}(X_{\widetilde \sigma_{1}(j_2)}, \ldots, X_{\widetilde \sigma_{n}(j_2)})= (X_{\widetilde \sigma_{\widetilde \sigma_{1}(j_1)}(j_2)}, \ldots, X_{\widetilde \sigma_{\widetilde \sigma_{n}(j_1)}(j_2)}) \\ \nonumber &= (X_{\widetilde \sigma_{1}(\widetilde \sigma_{j_1}(j_2))}, \ldots, X_{\widetilde \sigma_{n}(\widetilde \sigma_{j_1}(j_2))}) = A_{\sigma_{\widetilde \sigma_{j_1}(j_2)}}(X_1, \ldots, X_n)= A_{\sigma_{j_1}\circ \sigma_{j_2}}(X_1, \ldots, X_n) \ . \end{align} This proves the first claim of the theorem.

The second claim follows from the first claim and from the fact that $(\Gamma_f)_k=f\circ A_{\sigma_k}$: \begin{align} (\Gamma_f\circ A_{\sigma_j})_k(X) &= f(A_{\sigma_k}\circ A_{\sigma_j}X) = f(A_{\sigma_k\circ\sigma_j}X)= \nonumber \\ \nonumber f(A_{\sigma_{\widetilde \sigma_k(j)}}X) &= (\Gamma_f)_{\widetilde \sigma_k(j)}(X) = (A_{\sigma_j}\circ \Gamma_f)_k(X)\, . \end{align}

To prove the third claim, assume that $A_{\sigma_j}\circ \Gamma = \Gamma \circ A_{\sigma_j}$ for all $1\leq j\leq n$. Recalling that $(A_{\sigma_j}\Gamma)_i = \Gamma_{\widetilde \sigma_i(j)}$, this implies that \begin{align}\label{symmetry} \Gamma_{\widetilde \sigma_i(j)}(X) = (A_{\sigma_j}\circ \Gamma)_i(X) = (\Gamma \circ A_{\sigma_j})_i(X) = \Gamma_i(A_{\sigma_j}X)\, . \end{align} When $\sigma_{i^*}$ is the unit of $\Sigma$, then $\sigma_{\widetilde \sigma_{i^*}(j)}=\sigma_{i^*}\circ \sigma_{j}=\sigma_j$, so $\widetilde \sigma_{i^*}={\rm id}_{\{1,\ldots, n\}}$. As a consequence, applied to $i=i^*$, equation (\ref{symmetry}) implies in particular that $$\Gamma_{j}(X) = \Gamma_{i^*}(A_{\sigma_j}X)\ \mbox{for all}\ j=1, \ldots, n\, .$$ If we now choose $f:=\Gamma_{i^*}: V^n\to V$, then $\Gamma=\Gamma_f$ as required. \end{proof}

\noindent Theorem \ref{equithm} says that a vector field $\Gamma: V^n\to V^n$ is a fundamental coupled cell network for the monoid $\Sigma$ if and only if it is equivariant under the action $\sigma_j\mapsto A_{\sigma_j}$ of this monoid.
In particular, all the degeneracies that occur in the dynamics of the fundamental network $\dot X = \Gamma_f(X)$ are due to (monoid-)symmetry. Such degeneracies may include the existence of synchrony spaces and the occurrence of double eigenvalues.

\begin{remark} We shall be referring to the transformations $A_{\sigma_j}$ as ``symmetries'' of $\Gamma_f$, even if these transformations may not be invertible. As is the case for groups of (invertible) symmetries, our semigroup of symmetries can force the existence of synchronous solutions. For instance, the fixed point set of any of the maps $A_{\sigma_j}$ is flow-invariant for any $\Sigma$-equivariant vector field. Because the $A_{\sigma_j}$ may not be invertible, there can be many more invariant subsets though. For example, the image ${\rm im}\, A_{\sigma_j}$ of a symmetry is flow-invariant and the inverse image $A_{\sigma_j}^{-1}(W)$ of a flow-invariant subspace $W$ is flow-invariant. We conclude that synchrony spaces can arise as symmetry spaces in many different ways. Also, recall from Theorem \ref{conjugation} that the maps $\pi_i: V^N\to V^n$ send orbits of $\gamma_f$ to orbits of $\Gamma_f$. As a consequence, ${\rm im}\, \pi_i \subset V^n$ is invariant under the flow of $\Gamma_f$. In fact, it was proved in \cite{CCN} that this image is a robust synchrony space corresponding to a balanced partition of the cells of the fundamental network. \end{remark}

\begin{example} The fundamental network of our running Example \ref{running} was given by \begin{align}\label{fundamentalexampleformula} \begin{array}{ll} \dot X_1 = & f(X_1, X_2, X_3) \\ \dot X_2 = & f(X_2, X_2, X_3) \\ \dot X_3 = & f(X_3, X_3, X_3)\end{array} \, . \end{align} In other words, the representation of the monoid $\{\sigma_1, \sigma_2, \sigma_3\}$ is given by \begin{align}\nonumber A_{\sigma_1}(X_1, X_2, X_3) = (X_1, X_2, X_3)\, , \\ \nonumber A_{\sigma_2}(X_1, X_2, X_3) = (X_2, X_2, X_3)\, , \\ \nonumber A_{\sigma_3}(X_1, X_2, X_3) = (X_3, X_3, X_3)\, . \end{align} One checks that this representation indeed consists of symmetries of (\ref{fundamentalexampleformula}). In this example, the nontrivial balanced partitions of the fundamental network are $$\{1,2,3\}, \{1,2\} \cup \{3\} \ \mbox{and}\ \{1\}\cup\{2,3\} \, .$$ These respectively correspond to the robust synchrony spaces $$\{X_1=X_2=X_3\}, \{X_1=X_2\} \ \mbox{and}\ \{X_2=X_3\}\, .$$ These synchrony spaces can be characterized both in terms of the conjugacies $\pi_i$ and in terms of the symmetries $A_{\sigma_j}$, namely \begin{align}\nonumber & \{X_1=X_2=X_3\}={\rm im}\, \pi_1 = {\rm Fix}\, A_{\sigma_3}= {\rm im}\, A_{\sigma_3}\, , \\ \nonumber & \{X_1=X_2\}={\rm im}\, \pi_2 = {\rm Fix}\, A_{\sigma_2} = {\rm im} \, A_{\sigma_2} \ \mbox{and}\\ \nonumber & \{X_2=X_3\}={\rm im}\, \pi_3 = A_{\sigma_2}^{-1}({\rm im}\, A_{\sigma_3}) \, . \end{align} \end{example}

\section{Representations of semigroups}\label{secrepresentations}

In this section we present some rather well-known facts from the representation theory of semigroups. We choose to explain and prove these results in great detail, as our readers may not be so familiar with them. One of the main goals of this section is to explain how semigroup symmetry can lead to degeneracies in the spectrum of the linearization of an equivariant vector field at a symmetric equilibrium.
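As a concrete preview, the following numerical sketch (ours, not part of the original exposition) instantiates the running example with $V=\mathbb{R}$ and an arbitrary choice of $f$ with $f(0,0,0)=0$, checks the equivariance of $\Gamma_f$, and verifies that the Jacobian at the fully synchronous equilibrium $X=0$ commutes with every $A_{\sigma_j}$, as the discussion below explains in general.

\begin{verbatim}
import numpy as np

# the monoid symmetries of the fundamental network as 0-1 matrices on V^3, V = R
A = {1: np.eye(3),
     2: np.array([[0., 1., 0.], [0., 1., 0.], [0., 0., 1.]]),  # X -> (X2, X2, X3)
     3: np.array([[0., 0., 1.], [0., 0., 1.], [0., 0., 1.]])}  # X -> (X3, X3, X3)

def f(u, v, w):                 # arbitrary smooth response function, f(0,0,0) = 0
    return -u + np.tanh(v) - 0.5 * np.sin(w)

def Gamma(X):                   # the fundamental network vector field Gamma_f
    X1, X2, X3 = X
    return np.array([f(X1, X2, X3), f(X2, X2, X3), f(X3, X3, X3)])

X = np.random.randn(3)          # equivariance: Gamma(A X) = A Gamma(X)
for Aj in A.values():
    assert np.allclose(Gamma(Aj @ X), Aj @ Gamma(X))

eps = 1e-6                      # Jacobian L0 at the symmetric equilibrium X = 0
L0 = np.column_stack([(Gamma(eps*e) - Gamma(-eps*e)) / (2*eps) for e in np.eye(3)])
for Aj in A.values():           # L0 commutes with the whole representation
    assert np.allclose(L0 @ Aj, Aj @ L0, atol=1e-6)
print(np.round(L0, 6))
\end{verbatim}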
Although the theory in this section has strong similarities with the representation theory of compact groups, we would like to warn the reader in advance that the situation is slightly more delicate for semigroups. Firstly, when $\Sigma$ is a semigroup and $W$ a finite dimensional real vector space, then a map $$A: \Sigma \to \mathfrak{gl}(W)\ \mbox{for which}\ A_{\sigma_j}\circ A_{\sigma_k} = A_{\sigma_j\circ \sigma_k}\ \mbox{for all} \ \sigma_j, \sigma_k\in\Sigma$$ will be called a representation of the semigroup $\Sigma$ in $W$. A subspace $W_1\subset W$ is called a subrepresentation of $W$ if it is stable under the action the semigroup, that is if $A_{\sigma_j}(W_1)\subset W_1$ for all $\sigma_j\in \Sigma$. \begin{definition} A representation of $\Sigma$ in $W$ is called {\it indecomposable} if $W$ is not a direct sum $W=W_1\oplus W_2$ with $W_1$ and $W_2$ both nonzero subrepresentations of $W$. A representation of $\Sigma$ in $W$ is called {\it irreducible} if $W$ does not contain a subrepresentation $W_1\subset W$ with $W_1 \neq 0$ and $W_1\neq W$. \end{definition} Clearly, an irreducible representation is indecomposable, but the converse is not true. By definition, every representation is a direct sum of indecomposable representations. In general, this statement is not true for irreducible representations either (unless for example $\Sigma$ is a compact group). Also, the decomposition of a representation into indecomposable representations is unique up to isomorphism. This is the content of the Krull-Schmidt theorem that we will formulate and prove below. When $A:\Sigma \to \mathfrak{gl}(W)$ and $A':\Sigma\to \mathfrak{gl}(W')$ are semigroup representations and $L: W\to W'$ is a linear map so that $$L\circ A_{\sigma_j} = A'_{\sigma_j}\circ L \ \mbox{for all}\ \sigma_j\in \Sigma \, ,$$ then we call $L$ a {\it homomorphism of representations} and write $L\in {\rm Hom}(W, W')$ - note that the dependence on the semigroup $\Sigma$ is not expressed by our notation. We shall call an invertible homomorphism an {\it isomorphism}. The following result will be useful later: \begin{proposition}\label{twoisos} Let $W, W'$ be indecomposable representations, $L_1\in {\rm Hom}(W, W')$ and $L_2\in {\rm Hom}(W', W)$. If $L_2\circ L_1$ is invertible, then both $L_1$ and $L_2$ are isomorphisms. \end{proposition} \begin{proof} Note first of all that when $L_1\in {\rm Hom}(W, W')$ and $Y=L_1(X)$, then $A_{\sigma_j}'(Y) = A_{\sigma_j}'(L_1(X))=L_1(A_{\sigma_j}(X))$ and thus ${\rm im}\, L_1$ is a subrepresentation of $W'$. Similarly, when $L_2\in {\rm Hom}(W', W)$ and $L_2(X)=0$, then $L_2(A'_{\sigma_j}(X)) = A_{\sigma_j}(L_2(X))= 0$ and hence also $\ker L_2$ is a subrepresentation of $W'$. Because $L_2\circ L_1$ is invertible, it must hold that $L_1$ is injective, $L_2$ is surjective and $ {\rm im}\, L_1\cap \ker L_2=\{0\}$. Now, let $Z\in W'$. Then $Z= X+Y$ with $Y=L_1\circ (L_2\circ L_1)^{-1}\circ L_2 (Z)$ and $X=Z-Y$. It is clear that $X \in \ker L_2$ and that $Y \in {\rm im}\, L_1$ and we conclude that $W'={\rm im}\, L_1 \oplus \ker L_2$. But both ${\rm im}\, L_1$ and $\ker L_2$ are subrepresentations of $W'$ and $W'$ was assumed indecomposable. We conclude that $\ker L_2=0$ and ${\rm im} \, L_1 = W'$ and hence that $L_1$ is surjective and $L_2$ is injective. Thus, $L_1$ and $L_2$ are isomorphisms. \end{proof} If $W$ is a semigroup representation, then we call an element of ${\rm Hom}(W,W)$ an {\it endomorphism} of $W$ and we shall write ${\rm End}(W):={\rm Hom}(W,W)$. 
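To give a feeling for what ${\rm End}(W)$ can look like, one can compute it by brute force for the (decomposable) representation of the running example on $W=V^3$ with $V=\mathbb{R}$, i.e. find all matrices that commute with $A_{\sigma_2}$ and $A_{\sigma_3}$. The following sketch (ours) does exactly that and finds a three-dimensional space of endomorphisms.

\begin{verbatim}
import numpy as np

A2 = np.array([[0., 1., 0.], [0., 1., 0.], [0., 0., 1.]])
A3 = np.array([[0., 0., 1.], [0., 0., 1.], [0., 0., 1.]])

# the linear map L -> (L A2 - A2 L, L A3 - A3 L), applied to the elementary matrices
cols = []
for k in range(9):
    E = np.zeros((3, 3)); E[divmod(k, 3)] = 1.0
    cols.append(np.concatenate([(E @ A2 - A2 @ E).ravel(),
                                (E @ A3 - A3 @ E).ravel()]))
M = np.array(cols).T                  # 18 x 9 coefficient matrix
print(9 - np.linalg.matrix_rank(M))   # dimension of End(W); prints 3
\end{verbatim}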
\begin{remark}\label{linearization} Assume that $\Gamma: W\to W$ is a vector field and $X_0\in W$ is a point such that \begin{itemize} \item[i)] $X_0$ is an equilibrium point of $\Gamma$, i.e. $\Gamma(X_0)=0$, \item[ii)] $X_0$ is $\Sigma$-symmetric, i.e. $A_{\sigma_j}(X_0)=X_0$ for all $1\leq j \leq n$ and \item[iii)] $\Gamma$ is $\Sigma$-equivariant, i.e. $\Gamma \circ A_{\sigma_j}=A_{\sigma_j} \circ \Gamma$ for all $1\leq j\leq n$. \end{itemize} Then differentiation of $\Gamma(A_{\sigma_j}(X))=A_{\sigma_j}(\Gamma(X))$ at $X=X_0=A_{\sigma_j}(X_0)$ yields that $$L_0\circ A_{\sigma_j} = A_{\sigma_j}\circ L_0\ \mbox{if we set} \ L_0:= D_X\Gamma(X_0)\, .$$ This explains why we are interested in the endomorphisms of a representation of a semigroup: the linearization of an equivariant vector field at a symmetric equilibrium is an example of such an endomorphism. \end{remark} The following proposition states that the endomorphisms of an indecomposable representation fall into two classes. \begin{proposition}\label{oneev} Let $W$ be an indecomposable representation and $L\in {\rm End}(W)$. Then $L$ is either invertible or nilpotent. \end{proposition} \begin{proof} Let $n$ be large enough that $W=\ker L^n \oplus {\rm im}\, L^n$ (such an $n$ exists by Fitting's lemma, because the increasing sequence of subspaces $\ker L^m$ stabilizes). Because $\ker L^n$ and ${\rm im}\, L^n$ are subrepresentations of $W$ and $W$ is indecomposable it follows that either $W=\ker L^n$ and $L$ is nilpotent, or $W={\rm im}\, L^n$ and $L$ is invertible. \end{proof} As a consequence of Proposition \ref{oneev}, we find that the endomorphisms of indecomposable representations have spectral degeneracies as follows. \begin{corollary} Let $W$ be an indecomposable representation and $L\in {\rm End}(W)$. Then either \begin{itemize} \item[i)] $L$ has one real eigenvalue, or \item[ii)] $L$ has one pair of complex conjugate eigenvalues. \end{itemize} \end{corollary} \begin{proof} First, assume that $\lambda$ is a real eigenvalue of $L$. Then $L-\lambda I$ is not invertible and hence nilpotent, i.e. $(L-\lambda I)^n=0$ for some $n$. It follows that every element of $W$ is a generalized eigenvector for the eigenvalue $\lambda$. The argument is similar in case that $\lambda=\alpha\pm i\beta\notin \R$ is a complex conjugate pair of eigenvalues of $L$. Then $(L-\lambda I)(L-\overline \lambda I) = L^2-2\alpha L +(\alpha^2+\beta^2)I$ is not invertible and thus nilpotent. \end{proof} Schur's lemma gives a more precise characterization of ${\rm End}(W)$. We will formulate this characterization as Lemma \ref{schur} below. It follows from a few preparatory results. The first says that the endomorphisms of an indecomposable representation form a ``local ring''. \begin{proposition}\label{sumnilpotent} Let $W$ be an indecomposable representation and assume that $L_1, L_2\in {\rm End}(W)$ are both nilpotent. Then also $L_1+L_2$ is nilpotent. \end{proposition} \begin{proof} Assume that $L_1+L_2$ is not nilpotent. Then it is invertible by Proposition \ref{oneev}. Multiplying $$(L_1+L_2)\circ A_{\sigma_j} = A_{\sigma_j}\circ (L_1+L_2)$$ left and right by $(L_1+L_2)^{-1}$ gives that $(L_1+L_2)^{-1}\in {\rm End}(W)$. Hence, so are $$M_1:= (L_1+L_2)^{-1}L_1\ \mbox{and} \ M_2:=(L_1+L_2)^{-1}L_2\, .$$ Clearly, $M_1$ and $M_2$ can not be invertible, because they contain a nilpotent factor. So they are both nilpotent. In particular, $I-M_2$ is invertible. But $M_1=I-M_2$. This is a contradiction. \end{proof} \begin{corollary}\label{nilpotentideal} Let $W$ be an indecomposable representation.
Then the collection $${\rm End}^{\rm Nil}(W)=\{L\in {\rm End}(W)\, |\, L \ \mbox{\rm is nilpotent}\, \! \}$$ is an ideal in ${\rm End}(W)$. \end{corollary} \begin{proof} Obvious from Proposition \ref{oneev} and Proposition \ref{sumnilpotent}. \end{proof} We can now formulate the following refinement of Proposition \ref{oneev}: \begin{lemma}[Schur's Lemma]\label{schur} Let $W$ be an indecomposable representation. The quotient $${\rm End}(W)/{\rm End}^{\rm Nil}(W) \ \mbox{is a division algebra}\, .$$ \end{lemma} \begin{proof} By Corollary \ref{nilpotentideal}, the quotient ring ${\rm End}(W)/{\rm End}^{\rm Nil}(W)$ is well-defined. By Proposition \ref{oneev}, an element of this quotient is invertible if and only if it is nonzero. Thus, the quotient is a division algebra. \end{proof} We recall that any finite dimensional real associative division algebra is isomorphic to either $$\R, \C\ \mbox{or}\ \H\, .$$ In particular, this implies that ${\rm End}(W)/{\rm End}^{\rm Nil}(W)$ can only have dimension $1$, $2$ or $4$ if $W$ is indecomposable. We also note that one can represent the equivalence class $[L]\in {\rm End}(W)/{\rm End}^{\rm Nil}(W)$ of an endomorphism $L\in {\rm End}(W)$ by the semisimple part $L^S$ of $L$. It holds that $L^S\in {\rm End}(W)$, because $L^S$ is a polynomial expression in $L$, and thus Lemma \ref{schur} can also be thought of as a restriction on the semisimple parts of the endomorphisms of an indecomposable representation. In case ${\rm End}(W)/{\rm End}^{\rm Nil}(W)\cong \R$, we call $W$ a representation of {\it real type} or an {\it absolutely indecomposable} representation. This terminology is due to the fact that even the complexification of $W$ can not be decomposed further. Otherwise we call $W$ a non-absolutely indecomposable representation, of {\it complex type} or of {\it quaternionic type} respectively. Finally, let us for completeness prove the Krull-Schmidt theorem here. \begin{theorem}[Krull-Schmidt] The decomposition $$W=W_1\oplus \ldots \oplus W_m$$ of $W$ into indecomposable representations is unique up to isomorphism. \end{theorem} \begin{proof} We will prove a slightly more general fact. Let us assume that $W$ and $W'$ are two isomorphic representations of $\Sigma$. This means that there is an isomorphism of representations $h:W\to W'$. Let us assume furthermore that $$W=W_1\oplus \ldots \oplus W_m \ \mbox{and}\ W'= W_1'\oplus \ldots \oplus W_{m'}'$$ are decompositions of $W$ and $W'$ into indecomposable representations. We claim that $m=m'$ and that it holds after renumbering the factors that $W_j$ is isomorphic to $W_j'$ for every $1\leq j\leq m$. The theorem then follows by choosing $W=W'$ (with $h$ the identity). To prove our claim, let $$i_{j}:W_j\to W, p_j:W\to W_j, i_{k}':W_k'\to W'\ \mbox{and}\ p_k':W'\to W_k'$$ be the embeddings and projections associated to the above decompositions. These maps are homomorphisms of the corresponding representations. Indeed, because $A_{\sigma_i}(W_l)\subset W_l$, \begin{align}\nonumber &p_{j}\circ A_{\sigma_i} = p_j\circ A_{\sigma_i} \circ( i_1\circ p_1+\ldots +i_m\circ p_m ) = (p_j\circ A_{\sigma_i}\circ i_j) \circ p_j \, , \\ \nonumber & A_{\sigma_i}\circ i_j = (i_1\circ p_1+\ldots +i_m\circ p_m) \circ (A_{\sigma_i}\circ i_j) = i_j\circ (p_j \circ A_{\sigma_i}\circ i_j)\, , \end{align} and similarly for $i_k'$ and $p_k'$.
It now holds for $j=1, \ldots, m$ that $$\sum_{k=1}^{m'} (p_j \circ h^{-1} \circ i_k' \circ p_k' \circ h \circ i_j) = p_j \circ h^{-1} \left(\sum_{k=1}^{m'} i_k' \circ p_k' \right)\circ h \circ i_j = p_j\circ i_j= I_{W_j} \, .$$ Because $I_{W_j}$ is invertible, Proposition \ref{sumnilpotent} implies that there is at least one $1\leq k\leq m'$ so that $p_j \circ h^{-1}\circ i_k' \circ p_k'\circ h \circ i_j$ is an isomorphism. The latter is the composition of the homomorphisms $$p_k'\circ h \circ i_j:W_j\to W_{k}'\ \mbox{and} \ p_j \circ h^{-1}\circ i_k': W_k'\to W_j \, .$$ Because $W_j$ and $W_k'$ are indecomposable, it follows from Proposition \ref{twoisos} above that both these maps are isomorphisms, i.e. $W_j$ is isomorphic to $W_k'$. The theorem now follows easily by induction. \end{proof} \section{Equivariant Lyapunov-Schmidt reduction}\label{seclyapunovschmidt} We say that a parameter dependent differential equation $$\dot X = \Gamma(X;\lambda)\ \mbox{for}\ X\in W\ \mbox{and}\ \lambda\in \R^p$$ undergoes a steady state bifurcation at $(X_0; \lambda_0)$ if $\Gamma(X_0; \lambda_0)=0$ and the linearization $$L_0:=D_X\Gamma(X_0;\lambda_0): W\to W$$ is not invertible. When this happens, the collection of steady states of $\Gamma$ close to $X_0$ may topologically change as $\lambda$ varies near $\lambda_0$. To study such a bifurcation in detail, it is customary to use the method of Lyapunov-Schmidt reduction, cf. \cite{duisbif}, \cite{constrainedLS}, \cite{golschaef1}, \cite{golschaef2}. We will now describe a variant of this method that applies in case that $\Gamma$ is equivariant under the action of a semigroup. This section serves as a preparation for Sections \ref{secgeneric} and \ref{sectwoorthree} below. As a start, let us denote by $$L_0=L^S_0+L^N_0$$ the decomposition of $L_0$ in semisimple and nilpotent part. We shall split $W$ as a direct sum $$W= {\rm im}\, L^S_0 \oplus \ker L^S_0\, .$$ The projections that correspond to this splitting shall be denoted $$P_{\rm im}:W\to {\rm im}\, L_0^S\ \mbox{and}\ P_{\ker}: W\to \ker L_0^S\, .$$ One can now decompose every element $X\in W$ as $X=X_{\rm im}+X_{\ker}$ with $X_{\rm im}=P_{\rm im}(X)\in {\rm im}\, L_0^S$ and $X_{\rm ker}=P_{\ker}(X)\in \ker L_0^S$. We also observe that $\Gamma(X;\lambda)=0$ if and only if $$ \Gamma_{\rm im}(X;\lambda):=P_{\rm im}(\Gamma(X;\lambda)) =0 \ \mbox{and}\ \Gamma_{\rm ker}(X;\lambda):=P_{\rm ker}(\Gamma(X;\lambda)) =0\, .$$ The idea of Lyapunov-Schmidt reduction is to solve these equations consecutively. Thus, one first considers the equation $$ \Gamma_{\rm im}(X_{\rm im}+X_{\ker};\lambda) =0 \ \mbox{for}\ \Gamma_{\rm im}:{\rm im}\, L_0^S \oplus \ker L_0^S \times \R^p \to {\rm im}\, L_0^S\, .$$ By construction, the derivative of $\Gamma_{\rm im}$ in the direction of ${\rm im}\, L_0^S$ is given by $$D_{X_{\rm im}}\Gamma_{\rm im}(X_0;\lambda_0) = P_{\rm im}\circ D_{X_{\rm im}}\Gamma(X_0;\lambda_0) = P_{\rm im} \circ \left(\left. L_0\right|_{{\rm im}\, L_0^S}\right)\, .$$ This derivative is clearly invertible. 
As a consequence, by the implicit function theorem there exists a smooth function $X_{\rm im}=X_{\rm im}(X_{\ker}, \lambda)$, defined for $X_{\ker}$ near $(X_0)_{\ker}$ and $\lambda$ near $\lambda_0$ so that $X_{\rm im}(X_{\ker}, \lambda)$ is the unique solution near $(X_{0})_{\rm im}$ of the equation $$\Gamma_{\rm im}(X_{\rm im}(X_{\ker}, \lambda)+X_{\ker}; \lambda)=0\, .$$ Hence, what remains is to solve the bifurcation equation $$r(X_{\ker}; \lambda):= \Gamma_{\ker}(X_{\rm im}(X_{\ker}, \lambda)+X_{\ker}; \lambda)=0\ \mbox{with}\ r: \ker L_0^S\times\Lambda \to \ker L_0^S$$ and $\Lambda\subset \R^p$ an open neighborhood of $\lambda_0$. The process described here, of reducing the equation $\Gamma(X;\lambda)=0$ for $X\in W$ to the lower-dimensional equation $r(X_{\ker}; \lambda)=0$ for $X_{\ker}\in \ker L_0^S$ is called {\it Lyapunov-Schmidt reduction}. The following lemma contains the simple but important observation that at a symmetric equilibrium, the reduced equation inherits the symmetries of the original equation. The proof is standard: \begin{lemma}[Equivariant Lyapunov-Schmidt reduction] Assume that $W$ is a representation of a semigroup $\Sigma$ and that $\Gamma: W\times \R^p \to W$ is $\Sigma$-equivariant, i.e. that $$\Gamma(A_{\sigma_j}(X);\lambda)=A_{\sigma_j}(\Gamma(X;\lambda))\ \mbox{for all}\ \sigma_j\in \Sigma\, .$$ Assume furthermore that $X_0\in W$ is a symmetric equilibrium at the parameter value $\lambda_0$, i.e. that $\Gamma(X_0; \lambda_0)=0$ and $A_{\sigma_j}(X_0)=X_0$ for all $1\leq j\leq n$. Then also the reduced vector field $r: \ker L_0^S\times \Lambda \to \ker L_0^S$ is $\Sigma$-equivariant: $$r(A_{\sigma_j}(X_{\ker}); \lambda) = A_{\sigma_j}(r(X_{\ker}; \lambda))\ \mbox{for all}\ \sigma_j\in \Sigma\, ,$$ where we denoted by $A_{\sigma_j}: \ker L_0^S\to \ker L_0^S$ the restriction of $A_{\sigma_j}$ to $\ker L_0^S$. \end{lemma} \begin{proof} First of all, because $\Gamma$ is $\Sigma$-equivariant and $X_0$ is $\Sigma$-symmetric, we know from Remark \ref{linearization} that $$L_0\in {\rm End}(W)\, .$$ Moreover, $L^S_0\in {\rm End}(W)$ as well, because $L_0^S=p(L_0)=a_0I+a_1L_0+\ldots+a_{r-1}L_0^{r-1}+a_r L_0^r$ for certain $a_0, \ldots, a_r\in \R$. As a consequence, $\ker L_0^S$ and ${\rm im}\, L_0^S$ are subrepresentations of $W$ and the projections $P_{\ker}: W\to \ker L_0^S$ and $P_{\rm im}: W\to{\rm im}\, L_0^S$ are homomorphisms. Recall that $X_{\rm im}(X_{\ker}, \lambda)$ is the unique solution to the equation $\Gamma_{\rm im}(X_{\rm im}+X_{\ker}; \lambda)=0$. In particular it holds that $\Gamma_{\rm im}(X_{\rm im}(A_{\sigma_j}(X_{\ker}), \lambda) + A_{\sigma_j}(X_{\ker}); \lambda)=0$. But also \begin{align} \Gamma_{\rm im}(A_{\sigma_j}(X_{\rm im}(X_{\ker}, \lambda)) + A_{\sigma_j}(X_{\ker}); \lambda) & =\Gamma_{\rm im}(A_{\sigma_j}(X_{\rm im}(X_{\ker}, \lambda)+X_{\ker});\lambda) \nonumber \\ & = A_{\sigma_j}(\Gamma_{\rm im}(X_{\rm im}(X_{\ker}, \lambda)+X_{\ker}; \lambda)) = 0\, . \nonumber \end{align} Here, the second equality holds because $\Gamma_{\rm im}=P_{\rm im}\circ \Gamma$ is the composition of $\Sigma$-equivariant maps. By uniqueness of $X_{\rm im}(A_{\sigma_j}(X_{\ker}), \lambda)$, this proves that $$A_{\sigma_j}(X_{\rm im}(X_{\ker}, \lambda))=X_{\rm im}(A_{\sigma_j}(X_{\rm ker}), \lambda)\, .$$ In other words, the map $X_{\rm im}: \ker L_0^S \times \Lambda \to {\rm im}\, L_0^S$ is $\Sigma$-equivariant.
It now follows easily that $r$ is $\Sigma$-equivariant: \begin{align} r(A_{\sigma_j}(X_{\ker}); \lambda) & = \Gamma_{\ker}(X_{\rm im}(A_{\sigma_j}(X_{\ker}), \lambda)+A_{\sigma_{j}}(X_{\ker}); \lambda) = \Gamma_{\ker}(A_{\sigma_j}(X_{\rm im}(X_{\ker}, \lambda)+X_{\ker}); \lambda) \nonumber \\ & = A_{\sigma_j}(\Gamma_{\ker}(X_{\rm im}(X_{\ker}, \lambda)+X_{\ker};\lambda)) = A_{\sigma_j}(r(X_{\ker}; \lambda))\, . \nonumber \end{align} Here, the second equality holds because $X_{\rm im}$ is $\Sigma$-equivariant and the third because $\Gamma_{\rm ker}=P_{\ker}\circ \Gamma$ is the composition of $\Sigma$-equivariant maps. \end{proof} \begin{remark} The above construction is only slightly unusual. Indeed, in bifurcation theory, it is perhaps more common to reduce the steady state equation $\Gamma(X;\lambda)=0$ to a bifurcation equation on the kernel $\ker L_0$ of $L_0=D_X\Gamma(X_0; \lambda_0)$. This kernel is a subrepresentation of $W$, but the problem is that it may not be complemented by another subrepresentation. As a result, the reduced equation $r(X_{\ker}; \lambda)=0$ need not be equivariant under this construction. This explains our choice to reduce to an equation on the ``generalized kernel'' $\ker L_0^S$: this kernel is nicely complemented by the subrepresentation ${\rm im}\, L_0^S$, the ``reduced image'' of $L_0$. \end{remark} \section{Generic steady state bifurcations}\label{secgeneric} In this section, we investigate the structure of a generic semigroup equivariant steady state bifurcation in more detail. To this end, assume that $\lambda_0 < \lambda_1$ are real numbers and that $$L:(\lambda_0, \lambda_1)\to {\rm End}(W)$$ is a continuously differentiable one-parameter family of endomorphisms. The collection of such curves of endomorphisms is given the $C^1$-topology. \begin{definition} We say that a one-parameter family $\lambda\mapsto L(\lambda)$ is {\it in general position} if for each $\lambda\in (\lambda_0, \lambda_1)$ either $L(\lambda)$ is invertible or the generalized kernel of $L(\lambda)$ is absolutely indecomposable (i.e. of real type). \end{definition} We would like to prove that a ``generic'' one-parameter curve of endomorphisms is in general position, because this would imply that steady state bifurcations generically occur along precisely one absolutely indecomposable representation. Instead, we will only prove this result under a special condition on the representation. It is currently unclear to us if the result is true in more generality. The main result of this section is the following: \begin{theorem}\label{genericcodimone} Assume that the representation $W$ of a semigroup splits as a sum of mutually non-isomorphic indecomposable representations. Then the collection of one-parameter families of endomorphisms in general position is open and dense in the $C^1$-topology. \end{theorem} \begin{proof}[Sketch] Under the prescribed condition on the representation, we will prove below that the set of endomorphisms $$\{L\in {\rm End}(W) \, |\, L \ \mbox{is not invertible and}\ \ker L^S \ \mbox{is not absolutely indecomposable}\}$$ is contained in a finite union of submanifolds of ${\rm End}(W)$, each of which has co-dimension at least $2$. Theorem \ref{genericcodimone} therefore follows from the Thom transversality theorem (that implies that every smooth curve can smoothly be perturbed into a curve that does not intersect any given manifold of co-dimension $2$ or higher). \end{proof} The full proof of Theorem \ref{genericcodimone} will be given below in a number of steps, starting with the following preparatory lemma.
\begin{lemma}\label{preplemma} Let $L_0\in {\rm End}(W)$ and denote by $$W=\ker L^S_0 \oplus {\rm im}\, L^S_0$$ the decomposition into generalized kernel and reduced image of $L_0$, with respect to which $$L_0=\left(\begin{array}{cc} L_0^{11} & 0 \\ 0 & L^{22}_0\end{array}\right)\ \mbox{with}\ L_0^{11}\ \mbox{nilpotent and}\ L_0^{22}\ \mbox{invertible}.$$ Then there is an open neighbourhood $U \subset {\rm End}(W)$ of the zero endomorphism and smooth maps $\phi^{11}: U \to {\rm End}(\ker L^S_0)$ and $\phi^{22}: U \to {\rm End}({\rm im}\, L_0^S)$ so that for every $L \in U$, $$ L_0+ L \ \mbox{is conjugate to}\ \left(\begin{array}{cc} \phi^{11}(L) & 0 \\ 0 & \phi^{22}(L) \end{array}\right)\, .$$ It holds that $\phi^{11}(L)= L_0^{11}+L^{11} + \mathcal{O}(||L||^2)$ and $\phi^{22}(L)=L^{22}_0+ L^{22} + \mathcal{O}(||L||^2)$. \end{lemma} \begin{proof} Let us define the smooth map $$\Phi: {\rm End}(W)\times \{M\in {\rm End}(W) \, |\, I+M\ \mbox{is invertible}\} \to {\rm End}(W)$$ by $$\Phi(L, M) = (I+M) \circ (L_0 + L) \circ (I+M)^{-1}\, .$$ This map admits the Taylor expansion \begin{align}\nonumber \Phi(L, M)& = (I+M) \circ (L_0+L)\circ (I-M+\mathcal{O}(||M||^2)) \\ \nonumber & = L_0 + L + [M, L_0] + \mathcal{O}(||L||^2 + ||M||^2) \, . \end{align} Let us now write $L$ and $M$ in terms of the decomposition $W=\ker L_0^S \oplus {\rm im}\, L_0^S$, i.e. for the appropriate homomorphisms $L^{ij}, M^{ij}$ ($1\leq i,j\leq 2$) we write $$L= \left(\begin{array}{cc} L^{11} & L^{12} \\ L^{21} & L^{22}\end{array}\right)\ \mbox{and} \ M= \left(\begin{array}{cc} M^{11} & M^{12} \\ M^{21} & M^{22}\end{array}\right) \, .$$ In the same way we denote $$ \Phi(L,M)= \left(\begin{array}{cc} \Phi^{11}(L,M) & \Phi^{12}(L,M) \\ \Phi^{21}(L,M) & \Phi^{22}(L,M) \end{array}\right)\, .$$ It is easy to check that \begin{align} \nonumber L+[M,L_0] = \nonumber \left(\begin{array}{ll} L^{11} + M^{11} L_0^{11}-L_0^{11}M^{11} & L^{12} +M^{12}L_0^{22} - L_0^{11}M^{12}\\ L^{21} + M^{21}L_0^{11}-L_0^{22}M^{21}& L^{22} + M^{22} L_0^{22} -L_0^{22}M^{22}\end{array}\right)\, . \end{align} In other words, \begin{align} &D\Phi^{11}(0, 0)\cdot (L,M) = L^{11} + M^{11} L_0^{11} - L_0^{11} M^{11} \, ,\nonumber \\ &D\Phi^{12}(0, 0)\cdot (L,M) = L^{12} + M^{12}L_0^{22}-L_0^{11}M^{12}\, , \nonumber \\ &D\Phi^{21}(0, 0)\cdot (L,M) = L^{21} + M^{21}L_0^{11}-L_0^{22}M^{21}\, , \nonumber \\ &D\Phi^{22}(0, 0)\cdot (L,M) = L^{22} +M^{22} L_0^{22} - L_0^{22} M^{22} \, .\nonumber \end{align} We claim that the operator $$D_{M^{12}}\Phi^{12}(0,0): M^{12}\mapsto M^{12}L_0^{22}-L_0^{11}M^{12} \ \mbox{from} \ {\rm Hom}({\rm im} \, L_0^S, \ker L_0^S) \ \mbox{to itself} $$ is invertible. Indeed, the homological equation $$M^{12}L_0^{22}-L_0^{11}M^{12} = -L^{12}$$ can be solved for $M^{12}$ to give, for any $N\geq 0$, that \begin{align} M^{12} & = -L^{12}(L_0^{22})^{-1} + L_0^{11}M^{12}(L_0^{22})^{-1} \nonumber \\ \nonumber & = -L^{12}(L_0^{22})^{-1} - L_0^{11} L^{12}(L_0^{22})^{-2} + (L_0^{11})^{2}M^{12}(L_0^{22})^{-2} \nonumber \\ \nonumber & = \ldots = -\sum_{n=0}^{N} (L_0^{11})^n L^{12} (L_0^{22})^{-(n+1)} + (L_0^{11})^{N+1}M^{12}(L_0^{22})^{-(N+1)}\, . \end{align} Because $L_0^{11}$ is nilpotent, $(L_0^{11})^{N+1}=0$ for large enough $N$. This proves our claim that $D_{M^{12}}\Phi^{12}(0,0)$ is invertible.
In the same way, $D_{M^{21}}\Phi^{21}(0,0)$ is invertible, and as a consequence so is the map $$D_{(M^{12}, M^{21})}(\Phi^{12}, \Phi^{21})(0,0): (M^{12}, M^{21})\mapsto ( M^{12}L_0^{22}-L_0^{11}M^{12}, M^{21}L_0^{11}-L_0^{22}M^{21} )\, . $$ This proves, by the implicit function theorem, that there are smooth functions $$M^{12}=M^{12}(L, M^{11}, M^{22})\ \mbox{and} \ M^{21}=M^{21}(L, M^{11}, M^{22})\, ,$$ defined for $L, M^{11}$ and $M^{22}$ near zero, such that $$\Phi^{12}(L, M(L, M^{11}, M^{22}))=0\ \mbox{and}\ \Phi^{21}(L, M(L, M^{11}, M^{22}))=0\, .$$ If we choose $M^{11}=0$ and $M^{22}=0$, then it actually follows that \begin{align}\nonumber & \phi^{11}(L):=\Phi^{11}(L, M(L, 0,0)) = L_0^{11}+L^{11} + \mathcal{O}(||L||^2) \ \mbox{and}\\ \nonumber & \phi^{22}(L):=\Phi^{22}(L, M(L,0,0))=L^{22}_0+ L^{22}+\mathcal{O}(||L||^2)\, . \end{align} This proves the lemma. \end{proof} \noindent Let us assume now that the representation $W$ splits as a sum of indecomposables $$W = W_1\oplus \ldots \oplus W_m\, .$$ When $L\in {\rm End}(W)$ is an arbitrary endomorphism, then also $\ker L^S$ is a sum of indecomposable representations. In fact, by the Krull-Schmidt theorem, $$\ker L^S\cong W_{i_1}\oplus \ldots \oplus W_{i_k}\ \mbox{with}\ k\leq m \ \mbox{and}\ 1\leq i_1 < i_2 < \ldots < i_k \leq m\, .$$ Thus, we can classify the endomorphisms of $W$ by the isomorphism type of their generalized kernels by defining for all $1\leq i_1 < i_2 < \ldots < i_k \leq m$ the collection $${\rm Iso}(W_{i_1}\oplus \ldots \oplus W_{i_k}):=\{L\in {\rm End}(W)\, |\, \ker L^S \ \mbox{is isomorphic to} \ W_{i_1}\oplus \ldots \oplus W_{i_k}\}\, .$$ \noindent The following proposition gives a description of ${\rm Iso}(W_{i_1}\oplus \ldots \oplus W_{i_k})$ in case $W_{i_1}, \ldots, W_{i_k}$ are mutually nonisomorphic: \begin{proposition}\label{isoprop} Suppose $W$ splits as the sum of indecomposables $W_1\oplus \ldots \oplus W_m$ and let $${\rm ind}\, W_i := \dim {\rm End}(W_i)/{\rm End}^{\rm Nil}(W_i) \ (= 1,2 \ \mbox{or}\ 4)\ \mbox{for} \ 1\leq i \leq m\, .$$ Choose $1\leq i_1 < i_2 < \ldots < i_k \leq m$ and assume that the $W_{i_j}$ are mutually nonisomorphic. Then ${\rm Iso}(W_{i_1}\oplus \ldots \oplus W_{i_k})$ is contained in a submanifold of ${\rm End}(W)$ of co-dimension $${\rm ind}\, W_{i_1} + \ldots + {\rm ind}\, W_{i_k}\, .$$ \end{proposition} \begin{proof} Choose an arbitrary element $L_0\in {\rm Iso}(W_{i_1}\oplus \ldots \oplus W_{i_k})$ and recall from Lemma \ref{preplemma} that an endomorphism $L_0+L$ close to $L_0$ is conjugate to $$\left(\begin{array}{cc} \phi^{11}(L) & 0 \\ 0 & \phi^{22}(L) \end{array}\right)\, .$$ It is clear that $\phi^{22}(L)$ is invertible, because $\phi^{22}(L)=L_0^{22}+\mathcal{O}(||L||)$. The generalized kernels of $L_0$ and $L_0+L$ are therefore isomorphic if and only if $\phi^{11}(L)$ is nilpotent. Moreover, $\phi^{11}$ is a submersion at $L=0$ because $\phi^{11}(L)=L_0^{11}+L^{11}+\mathcal{O}(||L||^2)$. By the submersion theorem it thus suffices to check that $${\rm End}^{\rm Nil}(W_{i_1} \oplus \ldots \oplus W_{i_k})=\{B\in {\rm End}(W_{i_1}\oplus \ldots \oplus W_{i_k}) \, |\, B\ \mbox{is nilpotent}\, \}$$ is contained in a submanifold of ${\rm End}(W_{i_1}\oplus \ldots \oplus W_{i_k})$ of the prescribed co-dimension.
To check this fact, let us decompose an arbitrary $B\in {\rm End}(W_{i_1} \oplus \ldots \oplus W_{i_k})$ as $$B= \left( \begin{array}{cccc} B^{11} & B^{12} & \cdots & B^{1k} \\ B^{21} & B^{22} & \cdots & B^{2k}\\ \vdots & \vdots & \ddots & \vdots \\ B^{k1} & B^{k2} & \cdots & B^{kk} \end{array} \right) $$ with $B^{jl}\in {\rm Hom}(W_{i_l}, W_{i_j})$. Then it holds for any $n\geq 1$ that $$(B^n)^{jl} = \!\!\! \sum_{1\leq k_1, \ldots, k_{n-1} \leq k} \!\!\! B^{j k_1} B^{k_1k_2} \cdots B^{k_{n-1} l}\, .$$ We now remark that any composition $$B^{j k_1} B^{k_1k_2} \cdots B^{k_{n-1} j} \in {\rm End}(W_{i_j})$$ is nilpotent as soon as there exists an $r$ with $k_r\neq j$: otherwise, by Proposition \ref{oneev}, it would have been an isomorphism, and then, by Proposition \ref{twoisos}, $W_{i_j}$ would have been isomorphic to $W_{i_{k_r}}$, which contradicts our assumptions. It therefore follows from Proposition \ref{sumnilpotent} that $$(B^n)^{jj} = (B^{jj})^n + ``{\rm nilpotent}''\, .$$ Assume now that $B$ is nilpotent. Then there is an $n$ so that $B^n=0$. For this $n$ it then holds that $(B^{jj})^n$ is nilpotent and hence that $B^{jj}$ is nilpotent. This finishes the proof that whenever $W_{i_1}, \ldots, W_{i_k}$ are mutually nonisomorphic indecomposable representations, then $${\rm End}^{\rm Nil}(W_{i_1} \oplus \ldots \oplus W_{i_k}) \subset \{B\in {\rm End}(W_{i_1} \oplus \ldots \oplus W_{i_k})\, |\, B^{jj}\in {\rm End}^{\rm Nil}(W_{i_j})\ \mbox{for all}\ 1\leq j\leq k\, \}\, .$$ The latter is a subspace (and in particular a submanifold) of ${\rm End}(W_{i_1}\oplus \ldots \oplus W_{i_k})$ of co-dimension ${\rm ind}\, W_{i_1}+ \ldots + {\rm ind}\, W_{i_k}$. This proves the proposition. \end{proof} \begin{remark} If $W_i$ is one of the indecomposable factors of $W$, then ${\rm End}^{\rm Nil}(W_i)\subset {\rm End}(W_i)$ is a linear subspace (it is not just contained in one). This implies that ${\rm Iso}(W_i)$ is a true submanifold of ${\rm End}(W)$ (and not just contained in one). The co-dimension of this submanifold is $1$ if $W_i$ is of real type, $2$ if $W_i$ is of complex type and $4$ if $W_i$ is of quaternionic type. \end{remark} \noindent We are now ready to finish the proof of Theorem \ref{genericcodimone}. \begin{proofof}\! \!\!\! [of Theorem \ref{genericcodimone}]: Recall from Proposition \ref{isoprop} that ${\rm Iso}(W_{i_1}\oplus \ldots \oplus W_{i_k})$ is contained in a submanifold of ${\rm End}(W)$ of co-dimension $${\rm ind} \, W_{i_1} + \ldots + {\rm ind}\, W_{i_k}\, .$$ In particular, this co-dimension is equal to zero if and only if $k=0$ and is equal to one if and only if $k=1$ and $W_{i_1}$ is absolutely indecomposable. This proves that the collection $$\{L\in {\rm End}(W) \, |\, L \ \mbox{is not invertible and}\ \ker L^S \ \mbox{is not absolutely indecomposable}\}$$ is contained in the union of finitely many submanifolds of ${\rm End}(W)$, each of which has co-dimension $2$ or higher. The Thom transversality theorem finishes the proof. \end{proofof} \noindent For completeness, let us state here as an obvious corollary of Theorem \ref{genericcodimone} that a generic co-dimension one synchrony breaking steady state bifurcation must occur along an absolutely indecomposable representation: \begin{corollary} Let $W$ be a representation of a semigroup $\Sigma$ and assume that it splits as a sum of mutually non-isomorphic indecomposable representations. Moreover, let $X_0\in W$ be $\Sigma$-symmetric, i.e. $A_{\sigma_j}(X_0)=X_0$ for all $1\leq j\leq n$.
We define the set of curves of equivariant vector fields admitting $X_0$ as an equilibrium by $$E:=\{ \Gamma: W\times (\lambda_0, \lambda_1) \to W\, | \ \Gamma\ \mbox{is smooth, commutes with}\ \Sigma\ \mbox{and}\ \Gamma(X_0; \lambda)=0 \, \}$$ and the subset of those curves that are in general position as $$E_{\rm gp}:=\left\{\, \Gamma\in E\, |\, \mbox{the generalized kernel of}\ D_{X}\Gamma(X_0; \lambda)\ \mbox{is either trivial or absolutely indecomposable for all}\ \lambda\, \right\}\, .$$ Then it holds that $E_{\rm gp}$ is open and dense in $E$ in the $C^1$-topology. \end{corollary} \begin{proof} Obvious from Theorem \ref{genericcodimone}. \end{proof} \section{Monoid networks with two or three cells}\label{sectwoorthree} In this section, we investigate the steady state bifurcations that can occur in fundamental monoid networks with two or three cells, where for simplicity we let $V$ be one-dimensional. It turns out that for all these networks, the corresponding semigroup representations split as the sum of mutually nonisomorphic indecomposable representations. Thus, we are able to classify all possible generic co-dimension one steady state bifurcations in these networks. \subsection{Monoid networks with two cells} It is clear that up to isomorphism there are precisely two monoids with two elements, say $\Sigma_1$ and $\Sigma_2$, with multiplication tables $$ \begin{array}{c|cc}\Sigma_1 & \sigma_1 & \sigma_2\\ \hline \sigma_1 & \sigma_1 & \sigma_2 \\ \sigma_2 & \sigma_2 & \sigma_1 \end{array} \ \mbox{and} \ \begin{array}{c|cc}\Sigma_2 & \sigma_1 & \sigma_2\\ \hline \sigma_1 & \sigma_1 & \sigma_2 \\ \sigma_2 & \sigma_2 & \sigma_2 \end{array}\, . $$ Below, we shall investigate their fundamental networks separately. \subsubsection*{Bifurcations for $\Sigma_1$} The monoid $\Sigma_1$ is the group $\mathbb{Z}_2$ and the corresponding semigroup representation is given by \begin{align}\nonumber A_{\sigma_1}(X_1, X_2)=(X_1, X_2)\, , \\ \nonumber A_{\sigma_2}(X_1, X_2)=(X_2, X_1)\, . \end{align} In particular, the fundamental network is given by the differential equations \begin{align}\nonumber \begin{array}{rl} \dot X_1 &= f(X_1, X_2) \, ,\\ \dot X_2 &= f(X_2, X_1)\, . \end{array} \end{align} The bifurcation theory of such equivariant networks is of course well-known, but we summarize it here for completeness. First of all, the indecomposable decomposition of the phase space is a unique decomposition into mutually nonisomorphic irreducible representations of $\Sigma_1$, given by $$\{X_1=X_2\} \oplus \{X_1+X_2=0\}\, .$$ The subrepresentation $\{X_1=X_2\}$ is trivial in the sense that $A_{\sigma_2}$ acts upon it as the identity. Thus, if we use $X_1$ as a coordinate on this representation, the resulting bifurcation equation after Lyapunov-Schmidt reduction must be of the form $r(X_1;\lambda)=0$ for a function $r(X_1; \lambda)$ satisfying $r(0;0)=0$ and $D_{X_1}r(0;0)=0$, i.e. $$r(X_1;\lambda)=a \lambda + b X_1^2 + \mathcal{O}(|\lambda|^2 + |\lambda|\cdot |X_1| + |X_1|^3)\, .$$ Under the generic conditions that $a, b\neq 0$, the solutions of the bifurcation equation are of the form $$X_1=X_2=\pm\sqrt{-(a/b)\lambda}+\mathcal{O}(\lambda)\, .$$ We conclude that, generically, a synchronous saddle-node bifurcation takes place along the trivial subrepresentation. The subrepresentation $\{X_1+X_2=0\}$ is acted upon by $A_{\sigma_2}$ as minus identity. Choosing again $X_1$ as a coordinate, this yields an equivariant bifurcation equation of the form $r(X_1; \lambda)=0$ with $r(-X_1;\lambda)=-r(X_1; \lambda)$, i.e.
$$r(X_1; \lambda) = a \lambda X_1 + b X_1^3 + \mathcal{O}(|\lambda|^2\cdot |X_1| + |\lambda|\cdot |X_1|^3 + |X_1|^5)\, .$$ Under the generic conditions that $a, b\neq 0$, this yields a pitchfork bifurcation, i.e. solutions are of the form $$X_1=X_2=0\ \mbox{or}\ X_1=-X_2 = \pm \sqrt{-(a/b)\lambda} + \mathcal{O}(\lambda)\, .$$ \subsubsection*{Bifurcations for $\Sigma_2$.} We first of all remark that the monoid $\Sigma_2$ is not a group. Its representation is given by \begin{align}\nonumber A_{\sigma_1}(X_1, X_2)=(X_1, X_2)\, , \\ \nonumber A_{\sigma_2}(X_1, X_2)=(X_2, X_2)\, , \end{align} and the corresponding differential equations read $$ \begin{array}{rl} \dot X_1 &= f(X_1, X_2) \, ,\\ \dot X_2 &= f(X_2, X_2) \, . \end{array} $$ Again, the indecomposable decomposition of the representation is a unique decomposition into mutually nonisomorphic irreducible representations, now given by $$\{X_1=X_2\}\oplus \{X_2=0\}\, .$$ The subrepresentation $\{X_1=X_2\}$ is trivial so that once more only a synchronous saddle-node bifurcation is expected along this representation. The subrepresentation $\{X_2=0\}$, on the other hand, is acted upon by $A_{\sigma_2}$ as the zero map. Equivariance of the reduced bifurcation equation $r(X_1; \lambda)=0$ under the map $X_1\mapsto 0$ just means that $r(0;\lambda)=0$ and thus that $$r(X_1; \lambda) = a \lambda X_1 + b X_1^2 + \mathcal{O}(|\lambda|^2\cdot |X_1| + |\lambda|\cdot |X_1|^2 + |X_1|^3)\, .$$ Under the generic conditions that $a, b \neq 0$, this produces a transcritical bifurcation with solution branches $$X_1=X_2=0 \ \mbox{and}\ X_2=0,\ X_1 = -(a/b)\lambda + \mathcal{O}(\lambda^2)\, .$$ Interestingly, the transcritical bifurcation arises here as a generic co-dimension one equivariant bifurcation. \subsection{Monoid networks with three cells} Up to isomorphism, there are precisely $7$ monoids with three elements. Their multiplication tables are the following: \begin{align}\nonumber & \begin{array}{c|ccc}\Sigma_1 & \sigma_1 & \sigma_2 & \sigma_3\\ \hline \sigma_1 & \sigma_1 & \sigma_2 & \sigma_3\\ \sigma_2 & \sigma_2 & \sigma_2 & \sigma_2 \\ \sigma_3 & \sigma_3 & \sigma_2 & \sigma_2 \end{array} \ \ \begin{array}{c|ccc}\Sigma_2 & \sigma_1 & \sigma_2 & \sigma_3\\ \hline \sigma_1 & \sigma_1 & \sigma_2 & \sigma_3\\ \sigma_2 & \sigma_2 & \sigma_2 & \sigma_3 \\ \sigma_3 & \sigma_3 & \sigma_3 & \sigma_2 \end{array} \ \ \begin{array}{c|ccc}\Sigma_3 & \sigma_1 & \sigma_2 & \sigma_3\\ \hline \sigma_1 & \sigma_1 & \sigma_2 & \sigma_3\\ \sigma_2 & \sigma_2 & \sigma_2 & \sigma_3 \\ \sigma_3 & \sigma_3 & \sigma_3 & \sigma_3 \end{array} \ \ \begin{array}{c|ccc}\Sigma_4 & \sigma_1 & \sigma_2 & \sigma_3\\ \hline \sigma_1 & \sigma_1 & \sigma_2 & \sigma_3\\ \sigma_2 & \sigma_2 & \sigma_2 & \sigma_2 \\ \sigma_3 & \sigma_3 & \sigma_3 & \sigma_3 \end{array} \\ \nonumber & \begin{array}{c|ccc}\Sigma_5 & \sigma_1 & \sigma_2 & \sigma_3\\ \hline \sigma_1 & \sigma_1 & \sigma_2 & \sigma_3\\ \sigma_2 & \sigma_2 & \sigma_2 & \sigma_3 \\ \sigma_3 & \sigma_3 & \sigma_2 & \sigma_3 \end{array} \ \ \begin{array}{c|ccc}\Sigma_6 & \sigma_1 & \sigma_2 & \sigma_3\\ \hline \sigma_1 & \sigma_1 & \sigma_2 & \sigma_3\\ \sigma_2 & \sigma_2 & \sigma_3 & \sigma_1 \\ \sigma_3 & \sigma_3 & \sigma_1 & \sigma_2 \end{array} \ \ \begin{array}{c|ccc}\Sigma_7 & \sigma_1 & \sigma_2 & \sigma_3\\ \hline \sigma_1 & \sigma_1 & \sigma_2 & \sigma_3\\ \sigma_2 & \sigma_2 & \sigma_1 & \sigma_3 \\ \sigma_3 & \sigma_3 & \sigma_3 & \sigma_3 \end{array} \, .
\end{align} Below, we shall investigate the steady state bifurcations in the corresponding fundamental networks separately: \subsubsection*{Bifurcations for $\Sigma_1$} We have graphically depicted $\Sigma_1$ in Figure \ref{pict3} below. \begin{figure} \caption{\footnotesize {\rm The collection $\Sigma_1$ depicted as a directed multigraph.}} \label{pict3} \end{figure} The representation of $\Sigma_1$ is given by \begin{align}\nonumber \begin{array}{rl} A_{\sigma_1}(X) &= (X_1, X_2, X_3) \, ,\\ A_{\sigma_2}(X) &= (X_2, X_2, X_2) \, ,\\ A_{\sigma_3}(X) &= (X_3, X_2, X_2)\, . \end{array} \end{align} This representation uniquely splits as a sum of mutually nonisomorphic indecomposables $$ \{X_1=X_2=X_3\} \oplus \{X_2=0\}\, .$$ The action of $\Sigma_1$ on the subrepresentation $\{X_1=X_2=X_3\}$ is trivial, so only a saddle-node bifurcation can generically occur along this irreducible representation. On the indecomposable subrepresentation $\{X_2=0\}$, let us choose coordinates $(X_1, X_3)$. Then the action of $A_{\sigma_2}$ and $A_{\sigma_3}$ on this subrepresentation is given by \begin{align}\nonumber \begin{array}{rl} A_{\sigma_2}(X_1, X_3) &= (0, 0) \, ,\\ A_{\sigma_3}(X_1, X_3) &= (X_3, 0)\, . \end{array} \end{align} This confirms that $\{X_2=0\}$ is indecomposable, but not irreducible, because it contains the one-dimensional subrepresentation $\{X_2=X_3=0\}$. Moreover, one computes that $${\rm End}(\{X_2=0\})=\{(X_1, X_3)\mapsto (\alpha X_1+\beta X_3, \alpha X_3)\, |\, \alpha, \beta \in \R\}\, . $$ This shows that $\{X_2=0\}$ is absolutely indecomposable (i.e. of real type), and that there exist nontrivial nilpotent endomorphisms, namely of the form $(X_1, X_3)\mapsto (\beta X_3, 0)$. The bifurcation equation $r(X_1, X_3; \lambda) = (r_1(X_1, X_3; \lambda), r_3(X_1, X_3; \lambda)) = (0,0)$ is equivariant precisely when $$r_{1}(0, 0; \lambda)=0, r_{3}(0, 0; \lambda)=0, r_1(X_3,0;\lambda) = r_{3}(X_1, X_3; \lambda) \ \mbox{and}\ r_3(X_3,0;\lambda) = 0 \, .$$ This implies that \begin{align}\nonumber r_1(X_1, X_3; \lambda)& = a \lambda X_1 + bX_3 + cX_1^2 + \mathcal{O}(|\lambda|^2\cdot |X_1|+|\lambda|\cdot |X_1|^2 + |X_1|^3+|\lambda|\cdot |X_3|)\, , \\ \nonumber r_3(X_1, X_3; \lambda)& = a \lambda X_3 + cX_3^2 + \mathcal{O}(|\lambda|^2\cdot |X_3|+ |\lambda|\cdot |X_3|^2+|X_3|^3) \, . \end{align} Under the generic conditions that $a, b, c\neq 0$, this gives three solution branches: \begin{align}\nonumber \begin{array}{lll} X_1=0, & X_2=0, & X_3=0,\\ X_1= -(a/c)\lambda + \mathcal{O}(\lambda^2), & X_2= 0,& X_3= 0, \\ \nonumber X_1=\pm\sqrt{(ab/c^2)\lambda} + \mathcal{O}(\lambda),& X_2 = 0, & X_3= -(a/c)\lambda + \mathcal{O}(\lambda^2). \end{array} \end{align} This means that a fully synchronous trivial branch, a partially synchronous transcritical branch and a fully nonsynchronous saddle-node branch coalesce in this bifurcation. We note that this phenomenon was observed before for this network in \cite{feedforwardRinkSanders} and \cite{CCN}. A diagram of this bifurcation is given in Figure \ref{figbif}. \begin{figure} \caption{\footnotesize {\rm Bifurcation diagram of a co-dimension one steady state bifurcation in the fundamental network of $\Sigma_1$. This figure depicts the nontrivial solution branches in case $a, b>0$ and $c<0$.}} \label{figbif} \end{figure} \subsubsection*{Bifurcations for $\Sigma_2$} We have graphically depicted $\Sigma_2$ in Figure \ref{pict4} below.
\begin{figure} \caption{\footnotesize {\rm The collection $\Sigma_2$ depicted as a directed multigraph.}} \label{pict4} \end{figure} The representation of $\Sigma_2$ is given by \begin{align}\nonumber \begin{array}{rl} A_{\sigma_1}(X) &= (X_1, X_2, X_3) \, ,\\ A_{\sigma_2}(X) &= (X_2, X_2, X_3) \, ,\\ A_{\sigma_3}(X) &= (X_3, X_3, X_2)\, . \end{array} \end{align} This representation uniquely splits as a sum of mutually nonisomorphic one-dimensional irreducible representations $$ \{X_1=X_2=X_3\}\oplus \{X_2=X_3=0\}\oplus \{X_1=X_2=-X_3\}\, .$$ The action of $\Sigma_2$ on the subrepresentation $\{X_1=X_2=X_3\}$ is trivial, so only a saddle-node bifurcation can generically occur along this irreducible representation. On the subrepresentation $\{X_2=X_3=0\}$, both $A_{\sigma_2}$ and $A_{\sigma_3}$ act as the zero map. Thus one expects a transcritical bifurcation to occur along this irreducible representation. On the subrepresentation $\{X_1=X_2=-X_3\}$, the map $A_{\sigma_2}$ acts as identity, while $A_{\sigma_3}$ acts as minus identity. This means that a generic steady state bifurcation along this irreducible representation must be a pitchfork bifurcation. \subsubsection*{Bifurcations for $\Sigma_3$} We have graphically depicted $\Sigma_3$ in Figure \ref{pict5} below. \begin{figure} \caption{\footnotesize {\rm The collection $\Sigma_3$ depicted as a directed multigraph.}} \label{pict5} \end{figure} \noindent The representation of $\Sigma_3$ is given by \begin{align}\nonumber \begin{array}{rl} A_{\sigma_1}(X) &= (X_1, X_2, X_3) \, ,\\ A_{\sigma_2}(X) &= (X_2, X_2, X_3) \, ,\\ A_{\sigma_3}(X) &= (X_3, X_3, X_3)\, . \end{array} \end{align} This representation uniquely splits as a sum of mutually nonisomorphic one-dimensional irreducible representations $$ \{X_1=X_2=X_3\} \oplus \{X_2=X_3=0\} \oplus \{X_1=X_2, X_3=0\} \, .$$ The action of $\Sigma_3$ on the subrepresentation $\{X_1=X_2=X_3\}$ is trivial, so only a saddle-node bifurcation can generically occur along this irreducible representation. On the subrepresentation $\{X_2=X_3=0\}$, both $A_{\sigma_2}$ and $A_{\sigma_3}$ act as the zero map. Thus one expects a transcritical bifurcation to occur along this irreducible representation. On the subrepresentation $\{X_1=X_2, X_3=0\}$, the map $A_{\sigma_2}$ acts as identity, while $A_{\sigma_3}$ acts as the zero map. Hence, a transcritical bifurcation must occur generically along this irreducible representation as well. \subsubsection*{Bifurcations for $\Sigma_4$} We have graphically depicted $\Sigma_4$ in Figure \ref{pict6} below. \begin{figure} \caption{\footnotesize {\rm The collection $\Sigma_4$ depicted as a directed multigraph.}} \label{pict6} \end{figure} \noindent The representation of $\Sigma_4$ is given by \begin{align}\nonumber \begin{array}{rl} A_{\sigma_1}(X) &= (X_1, X_2, X_3) \, ,\\ A_{\sigma_2}(X) &= (X_2, X_2, X_3) \, ,\\ A_{\sigma_3}(X) &= (X_3, X_2, X_3)\, . \end{array} \end{align} This representation nonuniquely splits as a sum of mutually nonisomorphic indecomposables $$\{X_1=X_2=X_3\} \oplus \{(1+a) X_2 + (1-a) X_3=0\}\ \mbox{for} \ a\in \R \, .$$ The action of $\Sigma_4$ on the subrepresentation $\{X_1=X_2=X_3\}$ is trivial, so only a saddle-node bifurcation can generically occur along this irreducible representation. Because the two-dimensional indecomposable representations $\{(1+a) X_2 + (1-a) X_3=0\}$ are all isomorphic by the Krull-Schmidt theorem, let us consider only $\{X_3=0\}$ and choose coordinates $(X_1, X_2)$.
The action of $A_{\sigma_2}$ and $A_{\sigma_3}$ on this subrepresentation then reads \begin{align}\nonumber \begin{array}{rl} A_{\sigma_2}(X_1, X_2) &= (X_2, X_2) \, ,\\ A_{\sigma_3}(X_1, X_2) &= (0, X_2)\, . \end{array} \end{align} This confirms that $\{X_3=0\}$ is indecomposable, but not irreducible, because it contains the one-dimensional subrepresentation $\{X_2=X_3=0\}$. Moreover, one computes that $${\rm End}(\{X_3=0\})=\{ \alpha I \, |\, \alpha \in \R\}\, . $$ This shows that $\{X_3=0\}$ is absolutely indecomposable (i.e. of real type), and that there do not exist nontrivial nilpotent endomorphisms. The bifurcation equation $r(X_1, X_2; \lambda) = (r_1(X_1, X_2; \lambda), r_2(X_1, X_2; \lambda)) = (0,0)$ is equivariant precisely when \begin{align}\nonumber &r_{1}(X_2, X_2; \lambda)= r_{2}(X_2, X_2; \lambda)=r_2(X_1, X_2; \lambda), \\ \nonumber &r_1(0,X_2;\lambda) = 0 \ \mbox{and}\ r_2(0, X_2;\lambda) = r_2(X_1, X_2;\lambda) \, . \end{align} These conditions imply that \begin{align}\nonumber r_1(X_1, X_2; \lambda)& = a \lambda X_1 + bX_1X_2 + cX_1^2 \\ \nonumber&+ \mathcal{O}\left(|X_1|\cdot( |\lambda|^2 + |\lambda|\cdot|X_1| + |\lambda|\cdot|X_2|+ |X_1|^2+ |X_2|^2)\right)\, , \\ \nonumber r_2(X_1, X_2; \lambda)& = a \lambda X_2 + (b+c) X_2^2 + \mathcal{O}(|\lambda|^2\cdot|X_2|+|\lambda|\cdot |X_2|^2+|X_2|^3) \, . \end{align} Under the generic conditions that $a, b+c, c\neq 0$, this gives four solution branches: \begin{align}\nonumber \begin{array}{lll} X_1=0, & X_2=0, & X_3=0,\\ X_1=0, & X_2= -\frac{a}{b+c}\lambda + \mathcal{O}(\lambda^2), & X_3=0,\\ X_1= -\frac{a}{c}\lambda + \mathcal{O}(\lambda^2), & X_2= 0,& X_3= 0, \\ \nonumber X_1= -\frac{a}{b+c}\lambda + \mathcal{O}(\lambda^2), & X_2 = -\frac{a}{b+c}\lambda + \mathcal{O}(\lambda^2), & X_3=0. \end{array} \end{align} This means that in this bifurcation a fully synchronous branch and three partially synchronous transcritical branches come together. A diagram of this bifurcation is given in Figure \ref{figbif2}. \begin{figure} \caption{\footnotesize {\rm Bifurcation diagram of a co-dimension one steady state bifurcation in the fundamental network of $\Sigma_4$. This figure depicts the nontrivial solution branches in case $a<0$ and $b, c>0$.}} \label{figbif2} \end{figure} \subsubsection*{Bifurcations for $\Sigma_5$} We have graphically depicted $\Sigma_5$ in Figure \ref{pict7} below. \begin{figure} \caption{\footnotesize {\rm The collection $\Sigma_5$ depicted as a directed multigraph.}} \label{pict7} \end{figure} \noindent The representation of $\Sigma_5$ is given by \begin{align}\nonumber \begin{array}{rl} A_{\sigma_1}(X) &= (X_1, X_2, X_3) \, ,\\ A_{\sigma_2}(X) &= (X_2, X_2, X_2) \, ,\\ A_{\sigma_3}(X) &= (X_3, X_3, X_3)\, . \end{array} \end{align} This representation nonuniquely splits as a sum of mutually nonisomorphic indecomposables $$\{X_2=X_3=0\} \oplus \{X_1 + aX_2 =(1+a)X_3\} \ \mbox{for} \ a\in \R\, .$$ The maps $A_{\sigma_2}$ and $A_{\sigma_3}$ both act on the subrepresentation $\{X_2=X_3=0\}$ as the zero map, so only a transcritical bifurcation can generically occur along this irreducible representation. Because the two-dimensional indecomposable subrepresentations $\{X_1 + aX_2 =(1+a)X_3\}$ are all isomorphic by the Krull-Schmidt theorem, let us consider only $\{X_1=X_2\}$ and choose coordinates $(X_1, X_3)$. The action of $A_{\sigma_2}$ and $A_{\sigma_3}$ on this subrepresentation reads \begin{align}\nonumber \begin{array}{rl} A_{\sigma_2}(X_1, X_3) &= (X_1, X_1) \, ,\\ A_{\sigma_3}(X_1, X_3) &= (X_3, X_3)\, .
\end{array} \end{align} This confirms that $\{X_1=X_2\}$ is indecomposable, but not irreducible, because it contains the one-dimensional subrepresentation $\{X_1=X_2=X_3\}$. Moreover, one computes that $${\rm End}(\{X_1=X_2\})=\{ \alpha I \, |\, \alpha \in \R\}\, . $$ This shows that $\{X_1=X_2\}$ is absolutely indecomposable (i.e. of real type), and that there do not exist nontrivial nilpotent endomorphisms. The bifurcation equation $r(X_1, X_3; \lambda) = (r_1(X_1, X_3; \lambda), r_3(X_1, X_3; \lambda)) = (0,0)$ is equivariant precisely when \begin{align}\nonumber &r_{1}(X_1, X_1; \lambda)= r_{3}(X_1, X_1; \lambda)=r_1(X_1, X_3; \lambda), \\ \nonumber &r_1(X_3, X_3;\lambda) = r_3(X_3, X_3;\lambda) = r_3(X_1, X_3;\lambda) \, . \end{align} These conditions imply that \begin{align}\nonumber r_1(X_1, X_3; \lambda)& = a \lambda + b X_1^2 + \mathcal{O}(|\lambda|\cdot |X_1|+ |X_1|^3)\, , \\ \nonumber r_3(X_1, X_3; \lambda)& = a \lambda + b X_3^2 + \mathcal{O}(|\lambda|\cdot |X_3|+|X_3|^3) \, . \end{align} Under the generic conditions that $a, b \neq 0$, this gives two solution branches: \begin{align}\nonumber \begin{array}{l} X_1 = X_2= X_3= \pm \sqrt{-(a/b)\lambda} + \mathcal{O}(\lambda)\, ,\\ X_1 = X_2= -X_3= \pm \sqrt{-(a/b)\lambda} + \mathcal{O}(\lambda)\, . \end{array} \end{align} In this bifurcation a fully synchronous saddle-node branch and a partially synchronous saddle-node branch meet. A diagram of this bifurcation is given in Figure \ref{figbif3}. \begin{figure} \caption{\footnotesize {\rm Bifurcation diagram of a co-dimension one steady state bifurcation in the fundamental network of $\Sigma_5$. This figure depicts the solution branches in case $a>0$ and $b<0$.}} \label{figbif3} \end{figure} \subsubsection*{Bifurcations for $\Sigma_6$} We have graphically depicted $\Sigma_6$ in Figure \ref{pict8} below. \begin{figure} \caption{\footnotesize {\rm The collection $\Sigma_6$ depicted as a directed multigraph.}} \label{pict8} \end{figure} The semigroup $\Sigma_6$ is the group $\mathbb{Z}_3$. Only for completeness, we shall now recall some well-known facts from the bifurcation theory of $\mathbb{Z}_3$-equivariant differential equations. The representation of $\Sigma_6$ is given by \begin{align}\nonumber \begin{array}{rl} A_{\sigma_1}(X) &= (X_1, X_2, X_3) \, ,\\ A_{\sigma_2}(X) &= (X_2, X_3, X_1) \, ,\\ A_{\sigma_3}(X) &= (X_3, X_1, X_2)\, . \end{array} \end{align} This representation uniquely splits as a sum of mutually nonisomorphic irreducibles $$ \{X_1=X_2=X_3\}\oplus \{ X_1+X_2+X_3=0\} \, .$$ The action of $\Sigma_6$ on the subrepresentation $\{X_1=X_2=X_3\}$ is trivial, so only a saddle-node bifurcation can generically occur along this irreducible representation. On the subrepresentation $\{X_1+X_2+X_3=0\}$, let us choose coordinates $$Y_1:=\sqrt{3}(X_1+X_2)\, , \, Y_2:=X_1-X_2 \, .$$ Then the action of $A_{\sigma_2}$ and $A_{\sigma_3}$ on this subrepresentation is given by the rotations \begin{align}\nonumber \begin{array}{rl} A_{\sigma_2}(Y_1, Y_2) &= ( \cos(2\pi/3)Y_1 - \sin(2\pi/3)Y_2, \sin(2\pi/3)Y_1 + \cos(2\pi/3)Y_2) \, ,\\ A_{\sigma_3}(Y_1, Y_2) &= ( \cos(4\pi/3)Y_1 - \sin(4\pi/3)Y_2, \sin(4\pi/3)Y_1 + \cos(4\pi/3)Y_2) \, . \end{array} \end{align} This confirms that $\{X_1+X_2+X_3=0\}$ is irreducible (over the real numbers).
Moreover, one computes that $${\rm End}(\{X_1+X_2+X_3=0\})=\left\{ \left(\begin{array}{c} Y_1 \\ Y_2 \end{array} \right) \mapsto \left(\begin{array}{cc} \alpha & \beta \\ -\beta & \alpha \end{array} \right) \left(\begin{array}{c} Y_1 \\ Y_2 \end{array} \right) \, |\, \alpha, \beta \in \R \right\}\, . $$ This shows that $\{X_1+X_2+X_3=0\}$ is nonabsolutely irreducible (in fact of complex type), and that there do not exist nontrivial nilpotent endomorphisms. Generically, co-dimension one steady state bifurcations do not take place along an irreducible representation of complex type, so our bifurcation analysis of $\Sigma_6$ ends here. \subsubsection*{Bifurcations for $\Sigma_7$} We have graphically depicted $\Sigma_7$ in Figure \ref{pict9} below. \begin{figure} \caption{\footnotesize {\rm The collection $\Sigma_7$ depicted as a directed multigraph.}} \label{pict9} \end{figure} \noindent The representation of $\Sigma_7$ is given by \begin{align}\nonumber \begin{array}{rl} A_{\sigma_1}(X) &= (X_1, X_2, X_3) \, ,\\ A_{\sigma_2}(X) &= (X_2, X_1, X_3) \, ,\\ A_{\sigma_3}(X) &= (X_3, X_3, X_3)\, . \end{array} \end{align} This representation uniquely splits as a sum of mutually nonisomorphic one-dimensional irreducible representations $$\{X_1=X_2=X_3\}\oplus \{X_1=X_2, X_3=0\}\oplus \{X_1+X_2=0, X_3=0\}\, .$$ The action of $\Sigma_7$ on the subrepresentation $\{X_1=X_2=X_3\}$ is trivial, so only a saddle-node bifurcation can generically occur along this irreducible representation. On the subrepresentation $\{X_1=X_2, X_3=0\}$, the map $A_{\sigma_2}$ acts as identity, while $A_{\sigma_3}$ acts as the zero map. Thus one expects a transcritical bifurcation to occur along this irreducible representation. On the subrepresentation $\{X_1+X_2=0, X_3=0\}$, the map $A_{\sigma_2}$ acts as minus identity, while $A_{\sigma_3}$ acts as the zero map. This means that a generic steady state bifurcation along this irreducible representation must be a pitchfork bifurcation. \subsubsection*{Our running example revisited} For our running Example \ref{running}, the composition table of $\Sigma=\{\sigma_1, \sigma_2, \sigma_3\}$ was found in Example \ref{comptable}. It turns out that this composition table is identical to that of $\Sigma_3$. As a consequence, the fundamental networks for $\Sigma$ and $\Sigma_3$ must be the same. Indeed, we computed the fundamental network of our example in Example \ref{fundamentalexample} and it coincides with that of $\Sigma_3$. Let us assume now that the response function $f:V^3\times (\lambda_0, \lambda_1)\to V$ depends on a parameter. Then the differential equations of our running example become \begin{align}\label{1} \begin{array}{ll} \dot x_1 = & f(x_1, x_1, x_1; \lambda) \\ \dot x_2 = & f(x_2, x_2, x_1; \lambda) \\ \dot x_3 = & f(x_3, x_1, x_1; \lambda)\end{array} \, . \end{align} The corresponding fundamental network reads \begin{align}\label{2} \begin{array}{ll} \dot X_1 = & f(X_1, X_2, X_3; \lambda) \\ \dot X_2 = & f(X_2, X_2, X_3; \lambda) \\ \dot X_3 = & f(X_3, X_3, X_3; \lambda)\end{array} \, . \end{align} Recall that, when $V=\R$, our analysis of the fundamental network of $\Sigma_3$ predicts three possible generic co-dimension one steady state bifurcations: \begin{itemize} \item[i)] A fully synchronous saddle-node bifurcation inside $\{X_1=X_2=X_3\}$. \item[ii)] A partially synchronous transcritical bifurcation inside $\{X_2=X_3\}$. \item[iii)] A partially synchronous transcritical bifurcation inside $\{X_1=X_2\}$.
\end{itemize} To understand how these scenarios impact the original network (\ref{1}), let us recall from Example \ref{conjugateexample} and Remark \ref{equilibriaremark} that $(x_1, x_2, x_3)$ is an equilibrium point of (\ref{1}) if and only if it is mapped to an equilibrium point of (\ref{2}) by all the maps $\pi_1, \pi_2, \pi_3: V^3\to V^3$ given by $$\begin{array}{l} \pi_1(x_1, x_2, x_3) = (x_1, x_1, x_1)\, , \\ \pi_2(x_1, x_2, x_3) = (x_2, x_2, x_1)\, ,\\ \pi_3(x_1, x_2, x_3) = (x_3, x_1, x_1)\, . \end{array}$$ As a consequence, we find the following: \begin{itemize} \item[i)] Assume that the fundamental network undergoes a fully synchronous saddle-node bifurcation. Then all its local equilibria lie inside the diagonal $\{X_1=X_2=X_3\}$. Now one can remark that $\pi_1$ always sends the point $(x_1, x_2, x_3)$ to the diagonal, but $\pi_2$ does so only if $x_1=x_2$ and $\pi_3$ only if $x_1=x_3$. Thus, the point $(x_1,x_2, x_3)$ can only be an equilibrium if $x_1=x_2=x_3$. In other words, if the fundamental network undergoes a fully synchronous saddle-node bifurcation, then so does the original network. \item[ii)] It is clear that $\pi_1$ and $\pi_3$ always map $(x_1, x_2, x_3)$ inside $\{X_2=X_3\}$ but $\pi_2$ only does so if $x_1=x_2$. Thus, if the fundamental network undergoes a partially synchronous transcritical bifurcation inside $\{X_2=X_3\}$, then the original network undergoes a partially synchronous transcritical bifurcation inside $\{x_1=x_2\}$. \item[iii)] Similarly, $\pi_1$ and $\pi_2$ always map $(x_1, x_2, x_3)$ inside $\{X_1=X_2\}$ but $\pi_3$ only does so if $x_1=x_3$. Thus, if the fundamental network undergoes a partially synchronous transcritical bifurcation inside $\{X_1=X_2\}$, then the original network undergoes a partially synchronous transcritical bifurcation inside $\{x_1=x_3\}$. \end{itemize} Our message is that the monoid structure of $\Sigma$ both explains and predicts these bifurcation scenarios. Nevertheless, let us for completeness also show how they can be found from direct calculations of the steady states of (\ref{1}): \begin{itemize} \item[i)] Assume that $f(0,0,0; 0)=0$ and let us Taylor expand $$f(X, X, X; \lambda) = a \lambda + bX + c X^2 + \mathcal{O}(|\lambda|^2 +|\lambda| \cdot |X| + |X|^3)\, .$$ When $b=0$ and $a, c \neq 0$, we find as solutions of (\ref{1}) \begin{align}\nonumber x_1=x_2=x_3 =\pm \sqrt{-(a/c)\lambda} + \mathcal{O}(|\lambda|)\, . \end{align} \item[ii)] Assume that $f(0,0,0;\lambda)=0$ and let us Taylor expand \begin{align}\nonumber f(X_1, X_2, X_3; \lambda) & = (a + b\lambda) X_1 +c X_2 + d X_3 + e X_1^2 \\ \nonumber & + \mathcal{O}(|\lambda|^2\cdot |X_1| + |\lambda|\cdot |X_2|+ |\lambda|\cdot |X_3| + |X_2|^2+|X_3|^2+|X_1|^3) \, . \end{align} When $a=0$ and $b, c, c+d, e \neq 0$, we find as solutions of (\ref{1}) \begin{align}\nonumber x_1=x_2=x_3=0 \ \mbox{and}\ x_1=x_2=0, x_3 = - (b/e)\lambda + \mathcal{O}(|\lambda|^2)\, . \end{align} \item[iii)] Assume that $f(0,0,0;\lambda)=0$ and let us Taylor expand \begin{align}\nonumber f(X, X, Y; \lambda) = (a + b\lambda) X + cY + d X^2 + \mathcal{O}(|\lambda|^2\cdot |X| + |\lambda|\cdot |Y|+ |Y|^2+|X|^3) \, . \end{align} When $a=0$ and $a+c, b, d \neq 0$, we find as solutions of (\ref{1}) \begin{align}\nonumber x_1=x_2=x_3=0 \ \mbox{and}\ x_1 = x_3 = 0, x_2= - (b/d)\lambda + \mathcal{O}(|\lambda|^2)\, . \end{align} \end{itemize} \end{document}
\begin{document} \title{Applying Brownian motion to the study of birth-death chains.} \author{ \begin{tabular}{c} \textit{Greg Markowsky} \\ [email protected] \\ (054)279-5828 \\ Pohang Mathematics Institute \\ POSTECH \\ Pohang, 790-784 \\ Republic of Korea \end{tabular}} \begin{abstract} Basic properties of Brownian motion are used to derive two results concerning birth-death chains. First, the probability of extinction is calculated. Second, sufficient conditions on the transition probabilities of a birth-death chain are given to ensure that the expected value of the chain converges to a limit. The theory of Brownian motion local time figures prominently in the proof of the second result. \end{abstract} \begin{keyword}Birth-death chain, Markov chain, Brownian motion, local time. \end{keyword} \maketitle \section{Introduction} Let $X_m$ be a Markov chain taking values on the nonnegative integers with the following transition probabilities for $n \neq 0$ \be p_{nj} = \left \{ \begin{array}{ll} r_{n} & \qquad \mbox{if } j=n+1 \\ l_n & \qquad \mbox{if } j=n-1 \\ 0 & \qquad \mbox{if } |n-j| \neq 1\;. \end{array} \right. \ee Implicit here is the fact that $r_n+l_n=1$. We suppose further for simplicity that $X_0 = k$ almost surely, for some $k \in \mathbb{N}$. $X_m$ is essentially a random walk on the nonnegative integers, moving to the right from state $n$ with probability $r_n$ and to the left with probability $l_n$. We refer to such a Markov chain as a {\it birth-death chain}. This name comes from considering $X_m$ as the number of members in a population, where at each step either a new member is born or an old member dies, causing the process to increase or decrease by 1. We can assume $p_{00}=1$ and $p_{0j}=0$ for any $j \neq 0$, as when the population reaches 0 it is considered to have gone extinct with no possibility of regeneration. The purpose of this paper is to introduce a method of using properties of Brownian motion to deduce two fundamental theorems concerning birth-death chains. The first theorem, presented in the next section, gives the probability that a birth-death chain goes extinct at some finite time. The second theorem, presented in Section 3, gives sufficient conditions for $E[X_m]$ to converge as $m \longrightarrow \ff$. The properties of Brownian motion which will be utilized are standard and can be found in many references on Brownian motion, such as \cite{rosmarc} or \cite{revyor}. \vski We will now introduce the basic setup. Let $t_0 := 1$ and \be \label{pred} t_n := \frac{l_1 l_2 \ldots l_n}{r_1 r_2 \ldots r_n} \ee for $n>0$. Define a sequence $\{x_n\}_{n=0}^\ff$ recursively by setting $x_0=0$, and having defined $x_n$ let $x_{n+1}=x_n+t_n$. Since the sequence $\{x_n\}$ is increasing it converges to a limit $x_\ff$, possibly infinite, as $n \longrightarrow \ff$. Let $B_t$ be a Brownian motion starting at $x_k$ and stopped at the first time $T_{\Delta}$ it hits $0$ or $x_\ff$. The recurrence properties of Brownian motion imply that $T_{\De} < \ff$ almost surely. We define a sequence of stopping times $T_m$ which are, roughly speaking, the successive hitting times of $\cal{A} :=$ $ \{ x_n \}_{n=0}^\ff$. More rigorously, $T_m$ is defined recursively by setting $T_0=0$, and having defined $T_m$ we let $T_{m+1}=\inf_{t>T_m}\{B_t \in \cal{A},$ $B_t \neq B_{T_m}\}$. We see that the variables $B_{T_0},B_{T_1},B_{T_2}, \ldots $ form a random process taking values in $\cal{A}$. 
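The following short numerical sketch is our own illustration and is not part of the original argument; the particular transition probabilities $r_n$ below are an arbitrary choice. It tabulates the quantities $t_n$ and the scale points $x_n$ for a sample birth-death chain and confirms the gambler's-ruin identity $(x_n-x_{n-1})/(x_{n+1}-x_{n-1})=r_n$ that is derived in the next display.
\begin{verbatim}
import numpy as np

# Hypothetical birth-death chain: r_1,...,r_N chosen arbitrarily.
N = 10
r = np.array([0.5 + 0.3 * np.sin(n) for n in range(1, N + 1)])
l = 1.0 - r

# t_0 = 1, t_n = (l_1...l_n)/(r_1...r_n); x_0 = 0, x_{n+1} = x_n + t_n.
t = np.concatenate(([1.0], np.cumprod(l / r)))
x = np.concatenate(([0.0], np.cumsum(t)))

# Exit probability of Brownian motion from (x_{n-1}, x_{n+1}) started at x_n:
# P(hit x_{n+1} first) = (x_n - x_{n-1}) / (x_{n+1} - x_{n-1}), which equals r_n.
for n in range(1, N):
    p_up = (x[n] - x[n - 1]) / (x[n + 1] - x[n - 1])
    assert np.isclose(p_up, r[n - 1])
print("embedded transition probabilities match r_n for n = 1,...,%d" % (N - 1))
\end{verbatim}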
The strong Markov property of Brownian motion, together with the standard exit distribution of Brownian motion from an interval, imply that \bea \label{} \nn && P(B_{T_{m+1}} = x_{n+1} | B_{T_{m}} = x_{n}) = \frac{x_{n}-x_{n-1}}{x_{n+1}-x_{n-1}} = \frac{t_{n-1}}{t_{n-1}+t_n} = \frac{1}{1+l_n/r_n} = r_n \eea and, likewise, \bea \label{} \nn && P(B_{T_{m+1}} = x_{n-1} | B_{T_{m}} = x_{n}) = l_n. \eea If we define $\phi$ on $\{x_n\}_{n=0}^\ff$ by $\phi(x_n)=n$, we see that $\phi(B_{T_0}),\phi(B_{T_1}),\phi(B_{T_2}), \ldots $ is a realization of our original birth-death chain. The picture below gives an example, where we have oriented the time axis vertically and the space axis horizontally. \vski \hspace{.8cm} \includegraphics[width=110mm,height=80mm]{bdchainpic1.pdf} {\small Figure 1: The Brownian path pictured realizes the birth-death path $k,k+1,k,k+1,k,k-1,k,k-1,k-2, \ldots$} \vski Given this framework, we are ready to prove several theorems. In the sequel, any reference to $X, B, x_n, T_{\Delta},\phi,$ etc. will refer to the definitions presented in this section. \section{The extinction probability of a birth-death chain} Perhaps the most fundamental question one can ask regarding a birth-death chain is whether the population must go extinct or not, that is, whether $P(X_m = 0$ for some $m)=1$ or $P(\lim_{m \longrightarrow \ff} X_m = +\ff)>0$. Let $P_k$ be the probability that the birth-death chain eventually hits 0 (recall $X_0=k$ a.s.). We then have the following. \bt \label{surf} \be \label{mass} P_k = \frac{\sum_{j=k}^\ff t_j}{\sum_{j=0}^\ff t_j} \ee where this quotient is interpreted as being equal to 1 if the sums diverge. \et This elegant theorem has a straightforward proof using recurrence relations; see \cite{norr} or \cite{sysk}. A potentially pleasing aspect of the proof below, however, lies in giving a clear, visual intuition for the sums in \rrr{mass}. \vski {\bf Proof of Theorem \ref{surf}:} Recall that $x_\ff = \lim_{n \longrightarrow \ff} x_n$ is given by \be \label{} x_\ff=\sum_{j=0}^\ff t_j \ee If $x_\ff=\ff$, so that both sums in \rrr{mass} diverge, then $B_{T_{\DD}}=0$ almost surely. This implies that the population dies out with probability $1$. On the other hand, if $x_\ff<\ff$ then $P(B_{T_{\De}}=0)$ is given by \be \label{} \frac{x_\ff-x_k}{x_\ff-0} = \frac{\sum_{j=k}^\ff t_j}{\sum_{j=0}^\ff t_j} \ee However, as in the first case, $P(B_{T_\De}=0)$ is precisely $P_k$, the probability of extinction. This is because the Brownian motion hitting $x_\ff$ before $0$ implies $B_{T_m}\longrightarrow x_\ff$, hence $\phi(B_{T_m}) \longrightarrow \ff$, whereas hitting $0$ before $x_\ff$ implies $\phi(B_{T_m}) =0$ for some $m$. The two cases ($x_\ff=\ff$ and $x_\ff < \ff$) are illustrated in the following figure. \includegraphics[width=140mm,height=110mm]{bdchain_compare.pdf} {\small Figure 2: The left panel illustrates the situation in which $\sum_{j=0}^\ff t_j$ diverges. Eventually, $B_t$ hits $0$ and the population goes extinct. The right panel illustrates the other scenario, in which $\sum_{j=0}^\ff t_j = x_\ff < \ff$. In this case, there is a positive probability that $B_t$ hits $x_\ff$ before $0$, in which case the population never goes extinct.} \vski This completes the proof of Theorem 1. { $\square$ } \section{The long-term average of a birth-death chain} Recall that $B$ and $X$ are stopped upon reaching $0$. It will therefore be convenient to let $X_m$ be defined to be $0$ for all $m>m_0$, where $m_0$ is the smallest integer, if it exists, for which $X_{m_0}=0$. 
Similarly, for convenience let $T_m=T_\DD$ for all $m > m_0$, where $m_0$ is the smallest integer, if it exists, for which $B_{T_{m_0}}=0$. In the case $r_i=l_i=\frac{1}{2}$ for all $i$, it is well known that $X_m$ is a martingale, and therefore $E[X_m]=E[X_0]=k$ for all $m$. This occurs despite the fact that $P(X_m = 0) \longrightarrow 1$ as $m \longrightarrow \ff$, as the average value of $X_m$ on $\{X_m \neq 0\}$ grows at exactly the right speed to balance the set of large probability upon which $X_m=0$. Such behavior certainly does not hold for the general case, since we no longer have the martingale property, but we will see that the Brownian motion model presented above can shed light on the behavior of $E[X_m]$ as $m \longrightarrow \ff$. \vski Recall that $\cal{A}$ $=\{x_n\}_{n=0}^\ff$. Let $\phi: \cal{A}$ $\longrightarrow \mathbb{R}^+$ be extended to a continuous function from $\mathbb{R}^+$ to $\mathbb{R}^+$ by defining $\phi$ to be linear on each interval $(x_{n-1},x_n)$. Alternatively, we may think of $x_n=x(n)$ as a function from $\mathbb{N}$ to $\mathbb{R}$ which can be extended by linear interpolation to an increasing function from $\mathbb{R}^+$ to $\mathbb{R}^+$. In this case, $\phi$ is simply $x^{-1}$. $\phi$ is therefore a piecewise linear function, and $\phi'$ exists on $\mathbb{R}^+ - \cal{A}$. Let $\phi'_n$ be the value of $\phi'$ on $(x_{n-1},x_n)$. We will prove the following theorem. \bt \label{bigguyii} If $\phi'_\ff = \lim_{n \longrightarrow \ff} \phi'_n$ exists, then \be \label{yokoneg1} \lim_{m \longrightarrow \ff} E[X_m] = x_k \phi'_\ff \ee \et Note that we are allowing $\phi'_\ff = +\ff$ or $0$. Writing $\phi'_\ff$ and $x_k$ in terms of the $l_n$'s and $r_n$'s shows that the following statement is equivalent to Theorem \ref{bigguyii}. \vski {\it If $t_\ff := \lim_{n \longrightarrow \ff} \frac{l_1\ldots l_{n}}{r_1\ldots r_{n}}$ exists then $\lim_{m \longrightarrow \ff}E[X_m]$ exists, and } \be \label{} \lim_{m \longrightarrow \ff}E[X_m] = \frac{1+ \frac{l_1}{r_1} + \ldots + \frac{l_1\ldots l_{k-1}}{r_1\ldots r_{k-1}}}{t_\ff} \ee \vski The bulk of the rest of this section is devoted to the proof of this theorem. We will simplify initially by assuming $\sum_{n=1}^{\ff} |\phi'_{n+1}-\phi'_n| < \ff$; this condition will be removed at the end of the proof. For the case in which there is a positive probability that the population never goes extinct, it is easy to see that $E[X_m] \longrightarrow \ff$ as $m \longrightarrow \ff$, and that $\phi'_\ff$ exists and is equal to $+\ff$, so that \rrr{yokoneg1} is valid. We will therefore assume that $P(X_m = 0$ for some $m) = 1$. Note that $\phi'_{n+1} = \frac{1}{t_n}$, and $x_{n+1}-x_n = t_n$, so that $\phi_{n+1}'(x_{n+1}-x_n) = 1$. Note also that $x_1 \phi'_1 = 1$. This allows us to perform the following manipulations to obtain an expression which will be more convenient for the purposes of the proof. \bea \label{yokoa} && x_k \phi'_\ff = x_k (\phi'_\ff - \phi'_{k}) + x_k \phi'_k + k - x_1\phi'_1 - \sum_{n=1}^{k-1} \phi'_{n+1}(x_{n+1}-x_n) \\ \nn && \hspace{1.1cm}= k + x_k (\phi'_\ff - \phi'_{k}) + \sum_{n=1}^{k-1} (\phi'_{n+1}-\phi'_n) x_n \eea The last equality uses summation by parts; see \cite{lang}. We see that the conclusion of the theorem is equivalent to showing \be \label{yoko} \lim_{m \longrightarrow \ff} E[X_m] = k + x_k (\phi'_\ff - \phi'_{k}) + \sum_{n=1}^{k-1} (\phi'_{n+1}-\phi'_n) x_n \ee This is what we will prove. 
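As a quick sanity check of \rrr{yoko} (this observation is not needed for the proof), consider again the symmetric case $r_n = l_n = \frac{1}{2}$. Then $t_n=1$, $x_n=n$ and $\phi'_n = 1$ for all $n$, so that $\phi'_\ff = 1$ and the right-hand side of \rrr{yoko} equals \be k + x_k(\phi'_\ff - \phi'_k) + \sum_{n=1}^{k-1} (\phi'_{n+1}-\phi'_n) x_n = k + 0 + 0 = k, \ee in agreement with the martingale identity $E[X_m]=E[X_0]=k$ noted above.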
We will proceed through several lemmas, and will need properties of Brownian motion {\it local time}, which is the density of the occupation measure of Brownian motion with respect to Lebesgue measure. That is, the local time $L_t^x$ satisfies \be \label{} L_t^x dx = \int_{0}^{t} 1_{B_s \in dx} ds \ee It is well known that $L_t^x$ exists and that \be \label{} L_t^x = \lim_{\varepsilon \longrightarrow 0} \frac{1}{2\varepsilon} \int_{0}^{t} 1_{|B_s-x|<\varepsilon}ds \ee almost surely. See \cite{rosmarc} for a comprehensive treatment of local time, or the more general reference \cite{revyor}. The following is an extension of Tanaka's formula, Theorem VI.1.2 in \cite{revyor}. \bl \label{45} Almost surely, for any stopping time $T$, \be \label{maz} \phi(B_T) = k + \int_{0}^{T} \phi'(B_s) dB_s + \sum_{n=1}^\ff \frac{(\phi'_{n+1}-\phi'_n)}{2} L^{x_n}_T \ee where $L^{x_n}_T$ denotes the local time of $B_t$ at $x_n$ at time $T$. \el {\bf Proof:} Note that $\phi''(x) = \sum_{n=1}^\ff (\phi'_{n+1}-\phi'_n) \delta_{x_n}(x)$ in the sense of distributions, where $\delta_{x_n}(x) = \delta_0(x-x_n)$ denotes the Dirac delta function at point $x_n$. Lemma \ref{45} is therefore seen to be a special case of the It\^{o}-Tanaka formula, Theorem VI.1.5 of \cite{revyor}, provided that $\phi$ can be realized as a difference of two convex functions. However, any piecewise-linear function can be realized as the difference of two convex functions, provided that the points of nondifferentiability do not accumulate. We may argue as follows. $\phi'$ is a piecewise constant function, which is therefore of bounded variation on bounded intervals, and as such we may write $\phi' = f-g$ where $f$ and $g$ are nondecreasing. Let $F$ and $G$ be antiderivatives of $f$ and $g$ chosen so that $\phi = F-G$. Then $F$ and $G$ are convex, and the result follows. { $\square$ } Applying this lemma to the stopping time $T_m$, we immediately obtain \be \label{kok} E[X_m] = k + \sum_{n=1}^\ff \frac{(\phi'_{n+1}-\phi'_n)}{2} E[L^{x_n}_{T_m}], \ee Note that the convergence of the sum at this point is not an issue, since $L^{x_n}_{T_m} = 0$ for $n>m+k$. Using the identity \rrr{kok} does not seem to be an effective way to calculate $E[X_m]$, due to the difficulty of obtaining information about $T_m$. Nonetheless, we do know that $T_m \nearrow T_\DD$ as $m \longrightarrow \ff$, and this implies \be \label{oko} \lim_{m \longrightarrow \ff} E[X_m] = k + \sum_{n=1}^\ff \frac{(\phi'_{n+1}-\phi'_n)}{2} E[L^{x_n}_\ff], \ee provided that $E[L^{x_n}_\ff]$ can be bounded uniformly, which we will show soon to be the case. We should mention that it was in obtaining \rrr{oko} that we used the assumption that $\sum_{n=1}^{\ff} |\phi'_{n+1}-\phi'_n| < \ff$. This is because a priori the quantities $E[L^{x_n}_{T_m}]$ may be growing in some strange way that causes problems if $\sum_{n=1}^{\ff} |\phi'_{n+1}-\phi'_n| = \ff$. We will return to this point at the end of the proof. In light of \rrr{oko}, we must compute $E[L^{x_n}_\ff]$. \bl \label{sbtm} \be \label{a3} E[L^{x_n}_\ff] = 2 \min (x_k,x_n) \ee \el {\bf Proof:} One may derive this through standard calculations involving the probability density function of $B_t$, but the following is a quicker and easier proof. Let us suppose first that $n=k$. From Tanaka's formula, $E[L^{x_k}_\ff] = \lim_{t \longrightarrow \ff} E[|B_t-x_k|]$. Furthermore, $B_t$ is a martingale, so $E[(B_t-x_k)]=0$ for all $t$. 
It follows from this that \bea \label{} && E[L^{x_k}_\ff]=2\lim_{t \longrightarrow \ff} E[\max(-(B_t-x_k),0)] \\ \nn && \hspace{1.3cm} = 2 \lim_{t \longrightarrow \ff} \Big( x_k P(B_t=0) + \int_{0}^{x_k} (x_k-x) P(B_t \in dx) \Big) \eea As $\lim_{t \longrightarrow \ff} P(B_t=0) = 1$ and the remaining integral is bounded by $x_k P(0<B_t<x_k) \longrightarrow 0$, we can conclude that $E[L^{x_k}_\ff]=2x_k$. Now suppose $n \neq k$. Note that, if we let $T_{x_n} = \inf \{t: B_t=x_n\}$, then \be \label{runy} L^{x_n}_\ff = L^{x_n}_{T_{x_n}} + L^{x_n}_\ff(B \circ \theta_{T_{x_n}})1_{T_{x_n}<T_\DD} = L^{x_n}_\ff(B \circ \theta_{T_{x_n}})1_{T_{x_n}<T_\DD} \ee where $\theta$ denotes the standard shift operator and $L^{x_n}_t(B \circ \theta_{T_{x_n}})$ is the local time at $x_n$ of the shifted process $B \circ \theta_{T_{x_n}}$. Let $E_{x_j}$ denote expectation with respect to a Brownian motion $W$ which starts at $x_j$ and is stopped upon hitting $0$. The prior calculation together with \rrr{runy} and the strong Markov property of Brownian motion imply that \bea \label{} && E[L^{x_n}_\ff] = P(T_{x_n}<T_\DD) E_{x_n} [L_\ff^{x_n}] \\ \nn && \hspace{1.32cm} = P(T_{x_n}<T_\DD) 2x_n \eea The general result follows from noting that $P(T_{x_n}<T_\DD)$ is $1$ if $x_n < x_k$ and $\frac{x_k}{x_n}$ if $x_n > x_k$. { $\square$ } Combining \rrr{oko} and Lemma \ref{sbtm} gives \be \label{yoko2} \lim_{m \longrightarrow \ff} E[X_m] = k + \sum_{n=1}^{k-1} (\phi'_{n+1}-\phi'_n) x_n + x_k \sum_{n=k}^{\ff}(\phi'_{n+1} - \phi'_n) \ee Since $\sum_{n=k}^{\ff}(\phi'_{n+1} - \phi'_n) = (\phi'_\ff-\phi'_{k})$, we are done in this case. It remains only to remove the restriction that $\sum_{n=1}^{\ff} |\phi'_{n+1}-\phi'_n| < \ff$. The following lemma is key. \bl \label{} For any $m$, and any $n \geq k$, $E[L^{x_n}_{T_m}] \geq E[L^{x_{n+1}}_{T_m}]$. \el {\bf Proof:} In fact, we may prove somewhat more, namely that if $T$ is any stopping time with $B_T \in \cal{A}$ almost surely, then $E[L^{x_n}_{T}] \geq E[L^{x_{n+1}}_{T}]$. Let $E_{x_j}$ and $W$ be as in the proof of Lemma \ref{sbtm}. Using Lemma \ref{sbtm} and the strong Markov property of Brownian motion, we obtain \bea \label{} && E[L^{x_n}_{T}] = E[L^{x_n}_{\ff}] - E\Big[\lim_{\varepsilon \longrightarrow 0}\frac{1}{2\varepsilon}\int_{T}^{\ff} 1_{(-\varepsilon,\varepsilon)}(B_s - x_n) ds \Big] \\ \nn && \hspace{1.33cm} = 2 x_k - \sum_{j=1}^{\ff} P(B_T = x_j)E_{x_j} \Big[ \lim_{\varepsilon \longrightarrow 0} \frac{1}{2\varepsilon} \int_{0}^{\ff} 1_{(-\varepsilon,\varepsilon)}(W_s- x_n) ds \Big] \\ \nn && \hspace{1.33cm} = 2 x_k - \sum_{j=1}^{n-1} P(B_T = x_j)\,2x_j - \sum_{j=n}^{\ff} P(B_T = x_j)\,2x_n \eea Similarly, \be \label{} E[L^{x_{n+1}}_{T}] = 2 x_k - \sum_{j=1}^{n} P(B_T = x_j)\,2x_j - \sum_{j=n+1}^{\ff} P(B_T = x_j)\,2x_{n+1} \ee The conclusion of the lemma now follows from the fact that $x_{n+1} > x_n$. { $\square$ } We may now complete the proof of the theorem. Recall \rrr{kok}, and observe that $E[L^{x_n}_{T_m}] = 0$ for $n>k+m$, since $X_m \leq m+k$. This means that \rrr{kok} is in fact a finite sum. \bea \label{kok2} && E[X_m] = k + \sum_{n=1}^{m+k} \frac{(\phi'_{n+1}-\phi'_n)}{2} E[L^{x_n}_{T_m}] \\ \nn && \hspace{1.3cm} = k + \sum_{n=1}^{k-1} \frac{(\phi'_{n+1}-\phi'_n)}{2} E[L^{x_n}_{T_m}] + \sum_{n=k}^{k+m} \frac{(\phi'_{n+1}-\phi'_n)}{2} E[L^{x_n}_{T_m}] \eea The indices of the first sum in the final expression of \rrr{kok2} are independent of $m$. This implies that the sum converges as $m\longrightarrow \ff$, since $E[L^{x_n}_{T_m}] \longrightarrow 2x_n$ as $m \longrightarrow \ff$ for $n \leq k$. We must show that the second sum converges as $m\longrightarrow \ff$. 
We use summation by parts again, which gives \bea \label{} && \sum_{n=k}^{k+m} \frac{(\phi'_{n+1}-\phi'_n)}{2} E[L^{x_n}_{T_m}] = \frac{1}{2}\Big( \phi'_{k+m+1}E[L^{x_{k+m+1}}_{T_m}] - \phi'_{k}E[L^{x_{k}}_{T_m}] \\ \nn && \hspace{1cm} - \sum_{n=k}^{k+m} \phi'_{n+1} (E[L^{x_{n+1}}_{T_m}] - E[L^{x_{n}}_{T_m}]) \Big) \eea Let us assume that $\phi'_\ff < \ff$, and let $\varepsilon>0$ be given. We may choose $N>k$ such that $\phi'_n \in (\phi'_\ff-\varepsilon,\phi'_\ff+\varepsilon)$ for all $n \geq N$. Having chosen this, we may choose $M > N-k$ such that $2x_k \geq E[L^{x_{n}}_{T_m}] > 2x_k - \varepsilon$ for all $n \in [k,N+1], m \geq M$. Using the fact that $E[L^{x_{k+m+1}}_{T_m}]=0$, and setting $\overline{\phi'} = \sup_{j > 0} \phi'_j$, we see that for $m>M$ \bea \label{} && \sum_{n=k}^{k+m} \frac{(\phi'_{n+1}-\phi'_n)}{2} E[L^{x_n}_{T_m}] \\ \nn && \hspace{1cm} \leq \frac{1}{2}\Big( -\phi'_{k} (2x_k - \varepsilon) + \sum_{n=k}^{N} \phi'_{n+1} (E[L^{x_{n}}_{T_m}] - E[L^{x_{n+1}}_{T_m}]) \\ \nn && \hspace{2cm} + \sum_{n=N+1}^{k+m} \phi'_{n+1} (E[L^{x_{n}}_{T_m}] - E[L^{x_{n+1}}_{T_m}]) \Big) \\ \nn && \hspace{1cm} \leq \frac{1}{2}\Big( -\phi'_{k} (2x_k - \varepsilon) + \overline{\phi'} (E[L^{x_{k}}_{T_m}] - E[L^{x_{N+1}}_{T_m}]) \\ \nn && \hspace{2cm} + (\phi'_\ff + \varepsilon) (E[L^{x_{N+1}}_{T_m}] - E[L^{x_{k+m+1}}_{T_m}]) \Big) \\ \nn && \hspace{1cm} \leq \frac{1}{2}\Big( -\phi'_{k} (2x_k - \varepsilon) + \overline{\phi'} \varepsilon + (\phi'_\ff + \varepsilon) 2x_k \Big) \eea This shows that \be \label{} \limsup_{m \longrightarrow \ff} \sum_{n=k}^{k+m} \frac{(\phi'_{n+1}-\phi'_n)}{2} E[L^{x_n}_{T_m}] \leq x_k(\phi'_\ff - \phi'_{k}) \ee Proceeding similarly, we can obtain \be \label{} \liminf_{m \longrightarrow \ff} \sum_{n=k}^{k+m} \frac{(\phi'_{n+1}-\phi'_n)}{2} E[L^{x_n}_{T_m}] \geq x_k(\phi'_\ff - \phi'_{k}) \ee Together, these prove the desired convergence. The case $\phi'_\ff = +\ff$ is similar but easier and is omitted. This completes the proof of Theorem \ref{bigguyii}. { $\square$ } We conclude with a simple but counterintuitive example. Let $l_n=\frac{n}{2n+1}, r_n = \frac{n+1}{2n+1}$ for $n \geq 1$. Then $t_n = \frac{1}{n+1}$, so that $t_\ff = 0$. On the other hand, $x_\ff = 1 + \sum_{n=1}^{\ff}t_n = \ff$. We see that the birth-death chain $X_m$ built upon these transition probabilities has an extinction probability of 1, but $E[X_m] \longrightarrow \ff$ as $m \longrightarrow \ff$. \end{document}
\begin{document} \DeclarePairedDelimiterX\MeijerM[3]{\lparen}{\rparen}{\begin{matrix}#1 \\ #2\end{matrix}\delimsize\vert\,#3} \title{The local universality of Muttalib-Borodin ensembles when the parameter $\theta$ is the reciprocal of an integer} \author{L. D. Molag} \maketitle \begin{center} KU Leuven, Department of Mathematics,\\ Celestijnenlaan 200B box 2400, BE-3001 Leuven, Belgium.\\ E-mail: [email protected] \begin{abstract} The Muttalib-Borodin ensemble is a probability density function for $n$ particles on the positive real axis that depends on a parameter $\theta$ and a weight $w$. We consider a varying exponential weight that depends on an external field $V$. In a recent article, the large $n$ behavior of the associated correlation kernel at the hard edge was found for $\theta=\frac{1}{2}$, where only a few restrictions are imposed on $V$. In the current article we generalize the techniques and results of that article to obtain analogous results for $\theta=\frac{1}{r}$, where $r$ is a positive integer. The approach is to relate the ensemble to a type II multiple orthogonal polynomial ensemble with $r$ weights, which can then be related to an $(r+1)\times (r+1)$ Riemann-Hilbert problem. The local parametrix around the origin is constructed using Meijer G-functions. We match the local parametrix around the origin with the global parametrix by means of a double matching, a technique that was recently introduced. \end{abstract} \end{center} \tableofcontents \section{Introduction and main result} \subsection{Introduction} The Muttalib-Borodin ensemble (MBE) with parameter $\theta>0$ and weight $w$ is defined by the following joint probability density function for particles on the positive half-line. \begin{align} \label{ch4:eq:defMBE} & \frac{1}{Z_n} \prod_{1\leq i<j\leq n} (x_i-x_j)(x_i^\theta-x_j^\theta) \prod_{j=1}^n w(x_j), & x_1,\ldots,x_n> 0. \end{align} Here $Z_n>0$ is a normalization constant. We consider an $n$-dependent weight \begin{align} \label{ch4:eq:defMBEw} w(x)=w_\alpha(x)= x^\alpha e^{-n V(x)}, \end{align} where $\alpha>-1$ and $V:[0,\infty)\to \mathbb R$ is an external field that has enough increase at infinity. The latter is imposed to ensure that \eqref{ch4:eq:defMBE} is integrable and thus normalizable. A sufficient condition would be to have $V(x)\geq \frac{1+\theta}{2} \log(x)$ for $x$ big enough. We put $V(0)=0$ without loss of generality. In 1995 Muttalib introduced the model as a simplified model for disordered conductors in the metallic regime \cite{Mu}. This type of disordered conductor was not accurately described by the existing random matrix models. A few years later Borodin obtained interesting results for several specific choices of the weight $w_\alpha$ \cite{Bo}, most notably for the Laguerre case, i.e., when $V$ is linear. For linear external fields he found a new scaling limit, to which we turn shortly. The model has seen a revival of interest, as it became clear in recent years that the MBE is connected to several random matrix models \cite{FoWa, Ch, BeGeSz, AkIpKi}, where it describes either the eigenvalue density or the density of the squared singular values. We also mention the recent results on the corresponding large gap probabilities of the MBE \cite{ClGiSt,ChLeMa}. See \cite{YaAlMuWa} for a recent attempt by Yadav, Muttalib et al.\ to model certain physical systems with a generalization of the MBE. The MBE is a determinantal point process and thus it has an associated correlation kernel $K_{V,n}^{\alpha,\theta}$. 
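For orientation, it may help to keep in mind the special case $\theta=1$, which we only use as a reference point here: the interaction in \eqref{ch4:eq:defMBE} then reduces to the squared Vandermonde determinant, \begin{align*} \prod_{1\leq i<j\leq n} (x_i-x_j)(x_i^\theta-x_j^\theta)\Big|_{\theta=1} = \prod_{1\leq i<j\leq n} (x_i-x_j)^2, \end{align*} so that for $\theta=1$ the MBE with weight $w_\alpha$ is a classical orthogonal polynomial ensemble. The parameter $\theta$ thus measures the deviation from this classical situation.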
In fact, the MBE is a biorthogonal ensemble \cite{Bo}, and this implies that we may take \begin{align} \label{ch4:defK} K_{V,n}^{\alpha,\theta}(x,y) = w_\alpha(y) \sum_{j=0}^{n-1} p_j(x) q_j(y^\theta), \end{align} where $p_j$ and $q_j$ are polynomials of degree $j$ that satisfy \begin{align} \label{ch4:defpnqn} \int_0^\infty p_j(x) q_k(x^\theta) w_\alpha(x) dx &= \delta_{jk}, & j,k=0,1,\ldots \end{align} In the large $n$ limit the particles, corresponding to the weight \eqref{ch4:eq:defMBEw}, behave almost surely according to a limiting empirical measure $\mu_{V,\theta}^*$ that minimizes a corresponding equilibrium problem, as was shown by Claeys and Romano in \cite{ClRo}. Namely, $\mu_{V,\theta}^*$ minimizes the functional \begin{align} \label{ch4:defmuVtheta} \frac{1}{2} \iint \log \frac{1}{|x-y|} d\mu(x) d\mu(y) + \frac{1}{2} \iint \log \frac{1}{|x^\theta-y^\theta|} d\mu(x) d\mu(y) + \int V(x) d\mu(x). \end{align} The Euler-Lagrange variational conditions (see \cite{ClRo}) corresponding to this minimization problem take the form \begin{align} \label{ch4:eq:varCon} \int\log|x-s| d \mu_{V,\theta}^* + \int\log|x^\theta -s^\theta| d \mu_{V,\theta}^* \left\{\begin{array}{ll} = V(x) + \ell, & x\in\operatorname{supp}(\mu_{V,\theta}^*),\\ \leq V(x) + \ell, & x\in [0,\infty), \end{array}\right. \end{align} where $\ell$ is a real constant. For specific choices of $V$ we know how the correlation kernel behaves around the origin in the large $n$ limit. In particular, Borodin \cite{Bo} calculated the hard edge scaling limit of \eqref{ch4:defK} for the Laguerre case, i.e., where $V(x)=x$. His result translates to \begin{align} \label{ch4:eq:scalingLimitK} \lim_{n\to\infty} \frac{1}{n^{1+\frac{1}{\theta}}} K_{V,n}^{\alpha,\theta}\left(\frac{x}{n^{1+\frac{1}{\theta}}},\frac{y}{n^{1+\frac{1}{\theta}}}\right) = \mathbb{K}^{(\alpha,\theta)}(x,y), \end{align} where \begin{align} \label{ch4:eq:scalingLimitIK} \mathbb{K}^{(\alpha,\theta)}(x,y) = \theta y^\alpha \int_0^1 J_{\frac{\alpha+1}{\theta},\frac{1}{\theta}}(ux) J_{\alpha+1,\theta}\left((uy)^\theta\right) u^\alpha du, \end{align} and $J_{a,b}$ is Wright's generalized Bessel function. The scaling limit \eqref{ch4:eq:scalingLimitK} is valid for any fixed $\theta>0$. When either $\theta$ or $1/\theta$ is a positive integer the limiting kernel coincides (up to rescaling and a gauge factor) with the so-called Meijer G-kernel \cite{AkIpKi, KuSt}. In \cite{KuMo} we conjectured that one would obtain the scaling limit \eqref{ch4:eq:scalingLimitK} for a much larger class of external fields, for any fixed $\theta>0$. Such universality was well-known for $\theta=1$, where one obtains the Bessel kernel. Indeed, the limiting kernel \eqref{ch4:eq:scalingLimitIK} coincides with the Bessel kernel when $\theta=1$. In \cite{KuMo}, the conjecture was proved for $\theta=\frac{1}{2}$. \subsection{Statement of results} In this paper we will go a step further and prove the conjecture for all $\theta=\frac{1}{r}$ with $r$ a positive integer. There are several advantages when we restrict to such $\theta$. First of all, it is then known that the biorthogonal ensemble can be related to a multiple orthogonal polynomial ensemble (MOP) with $r$ weights $w_\alpha, w_{\alpha+\frac{1}{r}}, \ldots, w_{\alpha+\frac{r-1}{r}}$ (see \cite{KuMo}, Lemma 2.1). 
That is, we can take $p_n$, as in \eqref{ch4:defpnqn}, to be the unique monic polynomial that satisfies \begin{align} \label{ch4:defMOP} \int_0^\infty p_n(x) x^k w_{\alpha+\frac{j-1}{r}}(x) dx &= 0, & j=1,2,\ldots,r, \quad k = 0,1,\ldots,\left\lfloor \frac{n-j}{r} \right\rfloor. \end{align} Secondly, it was shown by Kuijlaars \cite{Ku} that there is, besides the equilibrium problem as in \eqref{ch4:defmuVtheta}, also a corresponding \textit{vector equilibrium problem} consisting of $r$ measures. We describe this vector equilibrium problem in Section \ref{ch4:sec:normalization2}. It is interesting that such a vector equilibrium problem also exists when $\theta$ is assumed to be rational (although it is unclear which multiple orthogonal polynomials, if any, would correspond to that situation). Our result will be valid under a generic restriction (see \cite{ClRo}, Theorem 1.8) on the external field. As in \cite{KuMo}, we call $V$ one-cut $\theta$-regular when the equilibrium measure $\mu_{V,\theta}^*$ is supported on one interval $[0,q]$, for some $q>0$, has a density that is positive on $(0,q)$ and that behaves near the endpoints as \begin{align} \label{ch4:eq:onecutthetabehav} \frac{d \mu_{V,\theta}^*(s)}{ds} = \left\{\begin{array}{ll} c_{0,V} (1+o(1)) s^{-\frac{1}{\theta+1}}, & s\downarrow 0,\\ c_{1,V} (1+o(1)) (q-s)^\frac{1}{2}, & s\uparrow q \end{array}\right. \end{align} for some positive constants $c_{0,V}$ and $c_{1,V}$, and in addition we demand that the inequality in \eqref{ch4:eq:varCon} is strict for $x>q$. This last condition is not essential, but it will make our derivation cleaner. The one-cut condition, added for convenience as well, is also not absolutely necessary. The main result holds as long as the support of the equilibrium measure contains a closed interval with left endpoint $0$ (and \eqref{ch4:eq:onecutthetabehav} is satisfied). A sufficient condition for $V$ to be one-cut $\frac{1}{r}$-regular is that it is twice differentiable on $[0,\infty)$ and that $x V'(x)$ is increasing for $x>0$. A proof can be found in Proposition 2.4 of \cite{KuMo}; the proof there is for $\theta=\frac{1}{2}$, but with a mild modification it also works for all rational $\theta>0$. Notice, in particular, that linear external fields are one-cut $\frac{1}{r}$-regular. The main result of this paper is the following. \begin{theorem} \label{ch4:mainThm} Let $\alpha>-1$ and let $\theta=\frac{1}{r}$, where $r$ is a positive integer. Let $V:[0,\infty)\to\mathbb{R}$ be a one-cut $\theta$-regular external field which is real analytic on $[0,\infty)$. Then for $x,y\in (0,\infty)$ we have \begin{align} \label{ch4:eq:mainResult} \lim_{n\to\infty} \frac{1}{(c n)^{r+1}} K_{V,n}^{\alpha,\frac{1}{r}}\left(\frac{x}{(c n)^{r+1}},\frac{y}{(c n)^{r+1}}\right) & = \mathbb K^{(\alpha,\frac{1}{r})}(x,y), \end{align} uniformly on compact sets, where $c = \pi c_{0,V}/\sin\frac{\pi}{r+1}$ with $c_{0,V}$ as in \eqref{ch4:eq:onecutthetabehav}. \end{theorem} We remark that the substitution $x\to x^{\frac{1}{\theta}}$ changes the MBE to one with a different input. Namely, we should then substitute $\theta, \alpha$ and $V(x)$ by $1/\theta, (1+\alpha)/\theta -1$ and $V(x^\frac{1}{\theta})$ respectively. This means that the main result is also true when we replace $r$ by $\frac{1}{r}$ everywhere in its formulation, but with the altered condition that $V(x^\frac{1}{r})$ should be one-cut $\frac{1}{r}$-regular. 
Then $V(x)$ should be given by a power series in $x^r$, and this severely restricts what type of external fields can be treated. There does not appear to be a simple way to relax this restriction, although we believe that the main result should hold without it. Our approach is conceptually the same as in \cite{KuMo}. The main difference is that the calculations become more technical and involved. The MOP ensemble \eqref{ch4:defMOP} is related to an $(r+1)\times (r+1)$ Riemann-Hilbert problem (RHP) which we will present in the next section. We analyze this RHP using the Deift-Zhou method of nonlinear steepest descent and this will allow us to prove Theorem \ref{ch4:mainThm}. To make our derivation cleaner we will assume that $n$ is divisible by $r$, but we explain in Appendix \ref{ch:appendixA} how the case where $n$ is not divisible by $r$ is treated. Notice that $\left\lfloor \frac{n-j}{r} \right\rfloor=\frac{n}{r}-1$ for all $j=1,2,\ldots,r$ in the case that $n$ is divisible by $r$. The local parametrix at the hard edge is constructed with the help of Meijer G-functions. This was to be expected: Zhang showed in \cite{Zh} that the limiting correlation kernel in \eqref{ch4:eq:scalingLimitIK} can be expressed with the help of Meijer G-functions when either $\theta$ or $1/\theta$ is a positive integer. The local parametrix that we find shows great similarity with the bare Meijer G-parametrix from \cite{BeBo}, although there does not appear to be a simple transformation that relates these two local parametrix problems. As in other larger size RHPs (e.g., see \cite{BeBo} and \cite{KuMFWi}), we will not be able to match the local parametrix with the global parametrix in the usual way. In order to match the global and local parametrix we are going to use a double matching. This double matching procedure was introduced in \cite{KuMo} and was later applied in \cite{SiZh}. In \cite{Mo}, the double matching procedure was refined and a general framework was put forward. The current paper will be the first instance where this general framework for the double matching procedure is utilized. As a convenience to the reader, we repeat the main result concerning the double matching of \cite{Mo} in Section \ref{ch4:sec:matching} (see \text{Theorem \ref{lem:matching}}). Having done the RHP analysis, one can also calculate the scaling limits of the correlation kernel in the bulk and at the soft edge $q$ to be the sine and Airy kernel respectively. We omit the details. We do not believe it to be reasonable to expect that our method can be generalized to all $\theta>0$, but it might be possible to adapt it to obtain the same results for rational $\theta$. In particular, it was shown in \cite{Ku} that there exists an underlying vector equilibrium problem when $\theta$ is rational. The measures of its solution might be used to construct $g$-functions for a corresponding RHP, although, at the moment, it is not clear to us what this RHP would look like. To prove the conjecture for irrational $\theta$, we suspect that an entirely new approach has to be invented, although a density or continuity argument might do the trick once the conjecture is proved for rational $\theta$. Another question for further research is whether the real analyticity of $V$ can be relaxed. Our current approach cannot deal with the situation where $V$ is not real analytic. To be specific, equation \eqref{ch4:eq:varphijcrelation} would not necessarily be valid anymore for $j=0$. 
This means that our Szeg\H{o} function as defined in \eqref{ch4:eq:defD0} does not actually lead to a local parametrix problem with constant jumps (see Section \ref{ch4:sec:szegoconstantjumps}). Furthermore, the map $f$ as defined in \eqref{ch4:eq:conformalf} will not be analytic, hence not conformal. It has been suggested to us that the $\overline{\partial}$-method as introduced by McLaughlin and Miller \cite{McMi}, adapted to larger size RHPs, might be able to deal with more general external fields.\\ The following three expressions will be used repeatedly throughout this paper. \begin{align} \label{ch4:defab} \beta = \alpha + \frac{r-1}{2 r}, \quad\quad\Omega = e^\frac{2\pi i}{r}, \quad\quad\omega = e^\frac{2\pi i}{r+1}. \end{align} We will use these notations without reference henceforth. Throughout this paper we use principal branches for fractional powers and logarithms, i.e., we pick the argument of $z$ between $-\pi$ and $\pi$. In the few cases where we have no choice but to deviate from this convention, it will be explicitly mentioned what branch we take. \section{The Riemann-Hilbert problem} \subsection{Introduction of the Riemann-Hilbert problem} \label{ch4:sec:theRHP} As mentioned in the introduction we will assume that $n$ is divisible by $r$. This choice is made because the intuition behind some of the formulae that we will encounter might be obscured if we include the $n$ that are not divisible by $r$. Most of the RH analysis is identical for such $n$, though; see Appendix \ref{ch:appendixA}. The MOP ensemble defined in \eqref{ch4:defMOP} is related to an $(r+1)\times (r+1)$ RHP \cite{VAGeKu}, which takes the form \begin{rhproblem} \label{ch4:RHPforY} \ \begin{description} \item[RH-Y1] $Y : \mathbb{C}\setminus [0,\infty)\to \mathbb{C}^{(r+1)\times (r+1)}$ is analytic. \item[RH-Y2] $Y$ has boundary values for $x\in (0,\infty)$, denoted by $Y_{+}(x)$ (from the upper half plane) and $Y_{-}(x)$ (from the lower half plane), and for such $x$ we have the jump condition \begin{align} \label{ch4:RHY2} Y_{+}(x) = Y_{-}(x) \begin{pmatrix} 1 & w_\alpha(x) & w_{\alpha+\frac{1}{r}}(x) & \ldots &w_{\alpha+\frac{r-1}{r}}(x) \\ 0 & 1 & 0 & \ldots & 0\\ 0 & 0 & 1 & \ldots & 0\\ \vdots & & & & \vdots\\ 0 & 0 & 0 & \ldots & 1 \end{pmatrix}. \end{align} \item[RH-Y3] As $|z|\to\infty$ \begin{align} \label{ch4:RHY3} Y(z) = \left(\mathbb{I}+\mathcal{O}\left(\frac{1}{z}\right)\right) \begin{pmatrix} z^{n} & 0 & 0 & \ldots & 0\\ 0 & z^{-\frac{n}{r}} & 0 & \ldots & 0\\ 0 & 0 & z^{-\frac{n}{r}} & \ldots & 0\\ \vdots & & & & \vdots\\ 0 & 0 & 0 & \ldots & z^{-\frac{n}{r}} \end{pmatrix}. \end{align} \item[RH-Y4] As $z\to 0$ \begin{align} \label{ch4:RHY4} Y(z) = \mathcal{O}\begin{pmatrix} 1 & h_{\alpha}(z) & h_{\alpha+\frac{1}{r}}(z) & \ldots & h_{\alpha+\frac{r-1}{r}}(z)\\ 1 & h_{\alpha}(z) & h_{\alpha+\frac{1}{r}}(z) & \ldots & h_{\alpha+\frac{r-1}{r}}(z)\\ \vdots & & & & \vdots\\ 1 & h_{\alpha}(z) & h_{\alpha+\frac{1}{r}}(z) & \ldots & h_{\alpha+\frac{r-1}{r}}(z)\end{pmatrix} \text{ with } \, h_{\alpha}(z) = \begin{cases} |z|^{\alpha}, & \text{if } \alpha < 0, \\ \log{|z|}, & \text{if } \alpha = 0,\\ 1, & \text{if } \alpha > 0. \end{cases} \end{align} \end{description} The $\mathcal{O}$ condition in \eqref{ch4:RHY3} and \eqref{ch4:RHY4} is to be taken entry-wise. \end{rhproblem} It will be convenient to use the following convention for our RH analysis, which is indeed seen to be in agreement with RH-Y2. 
\begin{convention} \label{ch4:con:orientation} Any jump curve that touches the origin is oriented away from the origin. \end{convention} Notice that this means, perhaps contrary to intuition, that the $+$ and $-$ signs are in the lower and upper half-plane respectively, when we consider jumps on the negative real axis. We will never deviate from Convention \ref{ch4:con:orientation}. \\ RH-Y has a unique solution $Y(z)$, which is related to the multiple orthogonal polynomials in the following way. The first row of $Y(z)$ can be expressed as \begin{multline*} \hspace{-0.4cm}\left( p_n(z) \quad \displaystyle\frac{1}{2\pi i} \int_0^\infty \frac{p_n(x) w_\alpha(x)}{x-z} dx \quad \displaystyle\frac{1}{2\pi i} \int_0^\infty \frac{p_n(x) w_{\alpha+\frac{1}{r}}(x)}{x-z} dx \quad \hdots\right. \\ \left. \hdots\quad \displaystyle\frac{1}{2\pi i} \int_0^\infty \frac{p_n(x) w_{\alpha+\frac{r-1}{r}}(x)}{x-z} dx\right). \end{multline*} The other rows are similar, but are expressed with different though similar multiple orthogonal polynomials (see \cite[Theorem 3.1]{VAGeKu}). It is known \cite{DaKu} that the correlation kernel \eqref{ch4:defK} can be conveniently expressed in terms of $Y$ by \begin{align} \label{ch4:eq:KinY} K_{V,n}^{\alpha,\frac{1}{r}}(x,y) &= \frac{1}{2\pi i(x-y)} \begin{pmatrix} 0 & w_\alpha(y) & w_{\alpha+\frac{1}{r}}(y) & \hdots & w_{\alpha+\frac{r-1}{r}}(y) \end{pmatrix} Y_+^{-1}(y) Y_+(x) \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}\\ \nonumber &= \frac{w_0(y)}{2\pi i(x-y)} \begin{pmatrix} 0 & y^\alpha & y^{\alpha+\frac{1}{r}} & \hdots & y^{\alpha+\frac{r-1}{r}} \end{pmatrix} Y_+^{-1}(y) Y_+(x) \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \end{align} for $x,y>0$. Hence, to obtain the large $n$ behavior of the correlation kernel it will suffice to determine the large $n$ behavior of $Y$. \subsection{First transformation $Y\mapsto X$} \label{ch4:sec:firstTransformation} In order to be able to normalize the RHP properly we will need a first transformation that will turn the jumps into a direct sum of $1\times 1$ and $2\times 2$ jumps. Here and in the rest of this paper we shall often use a block form notation, similarly as in \cite{BeBo}. As in \cite{BeBo}, we also often write a diagonal matrix as a direct sum of $1\times 1$ blocks, in cases where formulae tend to become big. We mention that (matrix) multiplication has a higher precedence than the direct sum in the order of operations. We remind the reader that $\Omega=e^\frac{2\pi i}{r}$. We introduce the $r\times r$ matrices \begin{align} \label{ch4:defU+} U^+ &= \frac{1}{\sqrt r} \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & \hdots \\ 1 & \Omega & \Omega^{-1} & \Omega^2 & \Omega^{-2} & \Omega^3 & \hdots \\ 1 & \Omega^2 & \Omega^{-2} & \Omega^4 & \Omega^{-4} & \Omega^6 & \hdots \\ \vdots & & & & & & \vdots\\ 1 & \Omega^{r-1} & \Omega^{-(r-1)} & \Omega^{2(r-1)} & \Omega^{-2(r-1)} & \Omega^{3(r-1)} & \hdots \end{pmatrix}\\ \label{ch4:defU-} U^- &= \overline{U^+}. \end{align} Here $\overline{U^+}$ denotes the complex conjugate of $U^+$. Since $U^+$ is unitary we have that $\overline{U^+}$ can also be written as $(U^+)^{-t}$ where $t$ denotes transposition. 
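To make the structure of these matrices concrete, we record the smallest cases (these explicit examples are given only for illustration). For $r=2$ one has $\Omega=-1$ and \begin{align*} U^+ = \frac{1}{\sqrt 2}\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} = U^-, \end{align*} while for $r=3$, with $\Omega = e^{\frac{2\pi i}{3}}$, \begin{align*} U^+ = \frac{1}{\sqrt 3}\begin{pmatrix} 1 & 1 & 1\\ 1 & \Omega & \Omega^{-1}\\ 1 & \Omega^2 & \Omega^{-2} \end{pmatrix}, \qquad U^- = \overline{U^+} = \frac{1}{\sqrt 3}\begin{pmatrix} 1 & 1 & 1\\ 1 & \Omega^{-1} & \Omega\\ 1 & \Omega^{-2} & \Omega^{2} \end{pmatrix}. \end{align*}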
We also define the $r\times r$ diagonal matrices \begin{align} \label{ch4:eq:defDpm} D^+ &= \operatorname{diag}\left(1,\Omega^{\frac{1}{2}},-\Omega^{-\frac{1}{2}}, -\Omega^{\frac{2}{2}}, \Omega^{-\frac{2}{2}}, \Omega^{\frac{3}{2}}, -\Omega^{-\frac{3}{2}}, \ldots\right),\\ D^- &= \operatorname{diag}\left(1,-\Omega^{-\frac{1}{2}},-\Omega^{\frac{1}{2}}, \Omega^{-\frac{2}{2}}, \Omega^{\frac{2}{2}},-\Omega^{-\frac{3}{2}}, -\Omega^{\frac{3}{2}},\ldots\right). \end{align} \begin{definition} \label{ch4:def:X} We define $X:\mathbb C\setminus \mathbb R\to\mathbb C^{(r+1)\times (r+1)}$ by the transformation \begin{align} \label{ch4:defX} X(z) = Y(z) \left(r^\frac{r}{2r+2} \oplus r^{-\frac{1}{2r+2}} z^{\frac{r-1}{2 r}} \bigoplus_{j=1}^r z^{-\frac{j-1}{r}} \right) \times \left\{\begin{array}{ll} 1 \oplus U^+ D^+, & \operatorname{Im}(z) >0,\\ 1 \oplus U^- D^-, & \operatorname{Im}(z) <0. \end{array} \right. \end{align} \end{definition} For clarity, we emphasize that $1\oplus U^\pm D^\pm$ is the direct sum of the $1\times 1$ block with component $1$ and the $r\times r$ block $U^\pm D^\pm$. Now $X$ satisfies \begin{rhproblem} \label{ch4:RHPforX} \ \begin{description} \item[RH-X1] $X$ is analytic on $\mathbb C\setminus \mathbb R$. \item[RH-X2] $X$ has boundary values for $x\in (-\infty, 0)\cup (0,\infty)$. \begin{align} \nonumber X_+(x) &= X_-(x) \\ \nonumber &\hspace{-0.5cm}\times \left\{\begin{array}{ll} \begin{pmatrix} 1 & w_{\beta}(x)\\ 0 & 1 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r}{2}-1} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 0 \mod 2,\\ \begin{pmatrix} 1 & w_{\beta}(x)\\ 0 & 1 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} , & r \equiv 1 \mod 2, \end{array} \right. \\ \label{ch4:RHX2} &\hspace{6.5cm} x > 0,\\ \nonumber X_+(x) &= X_-(x)\\ \nonumber &\hspace{-0.5cm}\times \left\{\begin{array}{ll} 1\oplus \bigoplus_{j=1}^{\frac{r}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, & r \equiv 0 \mod 2,\\ 1\oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 1 \mod 2,\\ \end{array} \right. \\ \label{ch4:RHX2b} &\hspace{6.5cm} x < 0. \end{align} \item[RH-X3] As $|z|\to\infty$ \begin{align} \label{ch4:RHX3} X(z) = \left(\mathbb{I}+\mathcal{O}\left(\frac{1}{z}\right)\right) \left(1 \oplus z^{\frac{r-1}{2 r}} \bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) \times \left\{\begin{array}{ll} z^n \oplus z^{-\frac{n}{r}} U^+ D^+ , & \operatorname{Im}(z) >0,\\ z^n \oplus z^{-\frac{n}{r}} U^- D^- , & \operatorname{Im}(z) <0. \end{array} \right. \end{align} \item[RH-X4] As $z\to 0$ \begin{align} \label{ch4:RHX4} X(z) = \mathcal{O}\begin{pmatrix} 1 & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z) & \ldots & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z)\\ \vdots & & & \vdots\\ 1 & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z) & \ldots & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z)\end{pmatrix}. \end{align} \end{description} \end{rhproblem} \begin{proof} RH-X2 requires verification. As an intermediate step we define \begin{align} \label{ch4:defZ} Z(z) = Y(z) \left(r^\frac{r}{2r+2} \oplus r^{-\frac{1}{2r+2}} z^{\frac{r-1}{2 r}} \bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right). \end{align} Then we have \begin{align} \label{ch4:eq:defXasZ} X(z) = Z(z) \times \left\{\begin{array}{ll} 1 \oplus U^+ D^+, & \operatorname{Im}(z) >0\\ 1 \oplus U^- D^-, & \operatorname{Im}(z) <0. \end{array} \right. \end{align} $Z$ has a jump on $(0,\infty)$ and $(-\infty,0)$ which we denote by $J$. 
One easily checks that \begin{align} \label{ch4:eq:jumpZ} J = \left\{\begin{array}{ll} \begin{pmatrix} 1 & \frac{1}{\sqrt r} w_{\beta}(x) & \frac{1}{\sqrt r} w_{\beta}(x) & \hdots & \frac{1}{\sqrt r} w_{\beta}(x)\\ 0 & 1 & 0 & \hdots & 0\\ 0 & 0 & 1 & \hdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \hdots & 1 \end{pmatrix}, & x >0,\\ \begin{pmatrix} 1 & 0 & 0 & \hdots & 0\\ 0 & -\Omega^\frac{1}{2} & 0 & \hdots & 0\\ 0 & 0 & -\Omega^\frac{1}{2} \Omega & \hdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \hdots & -\Omega^\frac{1}{2} \Omega^{r-1} \end{pmatrix}, & x <0. \end{array} \right. \end{align} We remind the reader, once more, that we use the convention that jump curves that touch the origin are oriented away from the origin. Let us now prove the jumps for $X$. In what follows we let the $(r+1)\times (r+1)$ matrices have indices ranging from $0$ to $r$; we make this choice because then we can label the entries of the $r\times r$ matrices $U^\pm$ and the $r\times r$ diagonal matrices $D^\pm$ with indices ranging from $1$ to $r$. Notice that the entries of $U^+$ can be written explicitly as follows. \begin{align} \label{ch4:eq:U+-evenandodd} \begin{array}{lll} U^+_{i,2j} &= \frac{1}{\sqrt r}\Omega^{(i-1)j}, & i=1,2,\ldots,r; j=1,2,\ldots, \left\lfloor \frac{r}{2}\right\rfloor\\ U^+_{i,2j-1} &= \frac{1}{\sqrt r} \Omega^{-(i-1)(j-1)}, & i=1,2,\ldots,r; j=1,2,\ldots, \left\lfloor \frac{r+1}{2} \right\rfloor. \end{array} \end{align} Let us check the jump of $X$ for $x>0$. By using the block form we notice, using \eqref{ch4:eq:defXasZ} and \eqref{ch4:eq:jumpZ}, that \begin{align*} \left(X_-(x)^{-1} X_+(x)\right)_{00} &= \sum_{k,l=0}^r (1\oplus U^- D^-)^{-1}_{0k} J_{kl} (1\oplus U^+ D^+)_{l0} = J_{00} = 1,\\ \left(X_-(x)^{-1} X_+(x)\right)_{01} &= \sum_{k,l=0}^r (1\oplus U^- D^-)^{-1}_{0k} J_{kl} (1\oplus U^+ D^+)_{l1} = \sum_{l=1}^r J_{0l} U^+_{l1}= w_{\beta}(x). \end{align*} Here we used in the last line that $J_{0l}=\frac{1}{\sqrt r} w_{\beta}(x)$ and that $U^+_{l1}=\frac{1}{\sqrt r}$ for all $l=1,2,\ldots,r$. We have also used the diagonal form of the matrices $D^\pm$ in both lines. On the other hand, we have for $i=1,2,\ldots,r$ and $j=2,\ldots,r$ that \begin{align*} \left(X_-(x)^{-1} X_+(x)\right)_{i0} &= \sum_{k,l=0}^r (1\oplus U^- D^-)^{-1}_{ik} J_{kl} (1\oplus U^+ D^+)_{l0}\\ &= (D^-)^{-1}_{ii} (1\oplus U^-)^{-1}_{i0} J_{00} = 0,\\ \left(X_-(x)^{-1} X_+(x)\right)_{0j} &= \sum_{k,l=0}^r (1\oplus U^- D^-)^{-1}_{0k} J_{kl} (1\oplus U^+ D^+)_{lj}\\ &= D^+_{jj} \sum_{l=0}^r J_{0l} U^+_{lj} = D_{jj}^+ \frac{w_{\beta}(x)}{\sqrt r} \sum_{l=1}^r U^+_{lj} = 0. \end{align*} For indices $i,j=1,2,\ldots,r$ we have that \begin{align*} \left(X_-(x)^{-1} X_+(x)\right)_{ij} &= \sum_{k,l=0}^r (1\oplus U^- D^-)^{-1}_{ik} J_{kl} (1\oplus U^+ D^+)_{lj}\\ &= \sum_{k,l=1}^r (1\oplus U^- D^-)^{-1}_{ik} J_{kl} (1\oplus U^+ D^+)_{lj}\\ &= (D^-)^{-1}_{ii} D^+_{jj} \sum_{k=1}^r (U^+)^t_{ik} U^+_{kj}\\ &= (D^-)^{-1}_{ii} D^+_{jj} \sum_{k=1}^r U^+_{ki} U^+_{kj}. \end{align*} Here we have used that $(U^-)^{-1} = (U^+)^t$. Now we have four cases depending on the parity of $i$ and $j$. We will use \eqref{ch4:eq:U+-evenandodd} for all of them. Suppose that $i$ and $j$ are both odd. Then we can write $i=2A-1$ and $j=2B-1$. Thus we find \begin{align*} \sum_{k=1}^r U^+_{ki} U^+_{kj} = \frac{1}{r} \sum_{k=1}^r \Omega^{-(k-1)(A+B-2)} = \left\{ \begin{array}{ll} 1, & i=j=1\\ 0, & \text{otherwise.} \end{array} \right. \end{align*} When $i=j=1$ we indeed have $(D^-)^{-1}_{ii}D^+_{jj} = 1$. 
Now let us suppose that $i=2A$ and $j=2B$. Then we have \begin{align*} \sum_{k=1}^r U^+_{ki} U^+_{kj} = \frac{1}{r} \sum_{k=1}^r \Omega^{(k-1)(A+B)} = \left\{ \begin{array}{ll} 1, & i=j=r\text{ and }r\equiv 0\mod 2\\ 0, & \text{otherwise.} \end{array} \right. \end{align*} When we are in the situation of $i=j=r$ and $r$ is even we obtain $(D^-)^{-1}_{ii}D^+_{jj} = (-1)^\frac{r}{2} \Omega^{\frac{r}{4}} (-1)^{\frac{r}{2}-1}\Omega^{\frac{r}{4}} = -\Omega^{\frac{r}{2}}=1$, as we should have. Lastly, but perhaps most importantly, we check the case where $i$ and $j$ have a different parity. Let us write $i=2A-1$ and $j=2B$. Then we have \begin{align*} \sum_{k=1}^r U^+_{ki} U^+_{kj} = \frac{1}{r} \sum_{k=1}^r \Omega^{(k-1)(B-A+1)} = \left\{ \begin{array}{ll} 1, & i=j+1\\ 0, & \text{otherwise.} \end{array} \right. \end{align*} When $i=j+1$ we have $$(D^-)^{-1}_{ii} D^+_{jj} = (-1)^{A-1} \Omega^{-\frac{A-1}{2}} (-1)^{B-1} \Omega^{\frac{B}{2}} = -\Omega^{\frac{j+1-i}{4}} = -1.$$ For $i=2A$ and $j=2B-1$ we get \begin{align*} \sum_{k=1}^r U^+_{ki} U^+_{kj} = \frac{1}{r} \sum_{k=1}^r \Omega^{(k-1)(A-B-1)} = \left\{ \begin{array}{ll} 1, & i=j-1\\ 0, & \text{otherwise.} \end{array} \right. \end{align*} When $i=j-1$ we have $$(D^-)^{-1}_{ii} D^+_{jj} = (-1)^A \Omega^{\frac{A}{2}} (-1)^{B-1} \Omega^{-\frac{B-1}{2}} = \Omega^{\frac{i-j+1}{4}} = 1.$$ Having found all the components of the jump matrix for $x>0$, we conclude that \eqref{ch4:RHX2} holds.\\ Let us now turn to the jump for $x<0$. Here we may ignore the $0$ indices altogether due to the particular block form of $J$. For $i,j=1,2,\ldots,r$ we have \begin{align*} \left(X_-(x)^{-1} X_+(x)\right)_{ij} &= (D^+)^{-1}_{ii} D^-_{jj}\sum_{k,l=1}^r (U^+)^{-1}_{ik} J_{kl} U^-_{lj} = - (D^+)^{-1}_{ii} D^-_{jj} \Omega^\frac{1}{2} \sum_{k=1}^r U^-_{ki} \Omega^{k-1} U^-_{kj}. \end{align*} Again we treat the four cases for $i$ and $j$ depending on the parity. Let $i=2A-1$ and $j=2B-1$, then \begin{align*} \sum_{k=1}^r U^-_{ki} \Omega^{k-1} U^-_{kj} &= \frac{1}{r} \sum_{k=1}^r \Omega^{(k-1)(A+B-1)} = \left\{ \begin{array}{ll} 1, & i=j=r\text{ and }r=1\mod 2\\ 0, & \text{otherwise.} \end{array} \right. \end{align*} In the case that $i=j=r$ and $r$ odd we indeed have $$(D^+)^{-1}_{ii} D^-_{jj} = (-1)^\frac{r-1}{2} \Omega^{\frac{r-1}{4}} (-1)^\frac{r-1}{2} \Omega^{\frac{r-1}{4}} = -\Omega^{-\frac{1}{2}},$$ as it should be (it should exactly cancel the $-\Omega^\frac{1}{2}$ factor).\\ Let $i=2A$ and $j=2B$, then \begin{align*} \sum_{k=1}^r U^-_{ki} \Omega^{k-1} U^-_{kj} = \frac{1}{r} \sum_{k=1}^r \Omega^{-(k-1)(A+B-1)} =0. \end{align*} Let $i=2A-1$ and $j=2B$, then \begin{align*} \sum_{k=1}^r U^-_{ki} \Omega^{k-1} U^-_{kj} = \frac{1}{r} \sum_{k=1}^r \Omega^{(k-1)(A-B)} = \left\{ \begin{array}{ll} 1, & i=j-1\\ 0, & \text{otherwise.} \end{array} \right. \end{align*} In the case that $i=j-1$ we have $$(D^+)^{-1}_{ii} D^-_{jj} = (-1)^{A-1} \Omega^{\frac{A-1}{2}} (-1)^B \Omega^{-\frac{B}{2}} = -\Omega^\frac{i-j+1}{4} = -\Omega^{-\frac{1}{2}}.$$ Let $i=2A$ and $j=2B-1$, then \begin{align*} \sum_{k=1}^r U^-_{ki} \Omega^{k-1} U^-_{kj} = \frac{1}{r} \sum_{k=1}^r \Omega^{(k-1)(B-A)} = \left\{ \begin{array}{ll} 1, & i=j+1\\ 0, & \text{otherwise.} \end{array} \right. \end{align*} In the case that $i=j+1$ we have $$(D^+)^{-1}_{ii} D^-_{jj} = (-1)^{A-1} \Omega^{-\frac{A}{2}} (-1)^{B-1} \Omega^{\frac{B-1}{2}} = \Omega^\frac{j-i-1}{4} = \Omega^{-\frac{1}{2}}.$$ We conclude that we get the jump on $x<0$ as in \eqref{ch4:RHX2b}. 
RH-X4 follows from the observation that as $z\to 0$ \begin{align} \label{ch4:eq:RH-X4proof} z^{-\frac{j}{r}} h_{\alpha+\frac{j}{r}}(z) = \mathcal O\left(z^{-\frac{r-1}{r}} h_{\alpha+\frac{r-1}{r}}(z)\right), \hspace{2cm}j=0,1,\ldots,r-1. \end{align} Indeed, we have as $z\to 0$ that \begin{align*} z^{-\alpha-\frac{j}{r}} h_{\alpha+\frac{j}{r}}(z) = h_{-\alpha-\frac{j}{r}}(z) &= \mathcal O\left(h_{-\alpha-\frac{r-1}{r}}(z)\right) = \mathcal O\left(z^{-\alpha-\frac{r-1}{r}} h_{\alpha+\frac{r-1}{r}}(z)\right) \end{align*} and this leads to \eqref{ch4:eq:RH-X4proof} after multiplication by $z^\alpha$. \end{proof} \section{Normalization and opening of the lens} \label{ch4:sec:normalization} Our next task is to normalize the RHP. That is, we should eliminate the $z^n$ behavior in RH-X3, without making the jumps too cumbersome. As usual, we will use $g$-functions to perform this normalization. \subsection{Vector equilibrium problem and definition of the $g$-functions} \label{ch4:sec:normalization2} In order to construct the $g$-functions we need a vector equilibrium problem corresponding to the MBE. Fortunately, this has been studied by Kuijlaars in detail in \cite{Ku}. According to \cite{Ku} we have a vector of measures $(\mu_0,\mu_1,\ldots,\mu_{r-1})$ that minimizes the energy functional \begin{align} \label{ch4:eq:eqProblem} \sum_{j=0}^{r-1} \iint \log \frac{1}{|x-y|} d\mu_j(x) d\mu_j(y) - \sum_{j=0}^{r-2} \iint \log \frac{1}{|x-y|} d\mu_j(x) d\mu_{j+1}(y) + \int V(x) d\mu_0(x) \end{align} under the condition that $\mu_j$ has support in $[0,\infty)$ for even $j$ and $(-\infty,0]$ for odd $j$, and the condition that the total mass of the measures is given by \begin{align} \label{ch4:eq:totalMass} \mu_j(\operatorname{supp}(\mu_j)) = \frac{r-j}{r}. \end{align} The main result of \cite{Ku} is then that $\mu_0$ coincides with the equilibrium measure $\mu_{V,\theta}^*$ from \eqref{ch4:defmuVtheta}. Furthermore, we have $\operatorname{supp}(\mu_j)=\Delta_{j}$, where \begin{align} \label{ch4:eq:defDeltaj} \Delta_j = \left\{\begin{array}{rl} [0,q] \text{ for some }q>0, & j=0,\\ (-\infty,0], & j \equiv 1 \mod 2,\\ {}[0,\infty), & j \equiv 0 \mod 2 \text{ and }j\neq 0. \end{array}\right. \end{align} That the support for $\mu_0$ is an interval $[0,q]$ is dictated by the one-cut $\frac{1}{r}$-regularity assumption on $V$. For convenience we also define $\mu_r=0$. In addition, the measures satisfy the following variational conditions. \begin{align} \label{ch4:varCon1} & 2\int \log \frac{1}{|s-x|} d\mu_0(s) - \int \log \frac{1}{|s-x|} d\mu_1(s) + V(x) \left\{\begin{array}{ll} = -\ell, & x\in [0,q]\\ \geq -\ell, & x>q \end{array}\right.\\ \label{ch4:varCon2} &- \int \log \frac{1}{|s-x|} d\mu_{j-1}(s)+2\int \log \frac{1}{|s-x|} d\mu_j(s) - \int \log \frac{1}{|s-x|} d\mu_{j+1}(s) = 0, \quad j=1,\ldots,r-1. \end{align} Here $\ell$ is some real constant (that does not necessarily equal the one in \eqref{ch4:eq:varCon}). The derivation for all the properties in this section up to this point can be found in \cite{Ku}. As stated in the introduction, we make an additional assumption on the solution to the variational equations. Namely, we assume that the inequality in \eqref{ch4:eq:varCon} is strict, which is equivalent to the following assumption. \begin{assumption} \label{ch4:assump:varStrict} The inequality in \eqref{ch4:varCon1} for $x>q$ is strict. \end{assumption} We now define the $g$-functions by \begin{align} \label{ch4:eq:defgfunctions} g_j(z) = \int_{\Delta_{j}} \log(z-s) d\mu_j(s), \quad j=0,1,\ldots,r. 
\end{align} For convenience we also put $g_{-1}=g_r=0$. We remind the reader that the logarithm is taken with the principle branch, as always. It follows immediately from the definition \eqref{ch4:eq:defgfunctions} that for real $x$ \begin{align} \label{ch4:g0jump} g_{0,+}(x) - g_{0,-}(x) &= 2\pi i \int_x^q d\mu_0(s), & x\in [0,q], \end{align} and \begin{align} \label{ch4:g0jump2} g_{0,+}(x) - g_{0,-}(x) &\equiv 0 \mod 2\pi i, & x\in \mathbb R\setminus [0,q],\\ \label{ch4:g1jump} g_{1,+}(x) - g_{1,-}(x) &= 0, & x\in \mathbb R\setminus \Delta_1,\\ \label{ch4:grjump2} g_{r-1,+}(x) - g_{r-1,-}(x) &= ((-1)^r-1) \frac{\pi i}{r}, & x\in \mathbb R\setminus \Delta_{r-1}. \end{align} Similar formulae hold for the other $g$-functions but we will not need these. One can also deduce from the variational conditions that \begin{align} \label{ch4:g1g2jump} & g_{0,-}(x)+g_{0,+}(x) - g_{1,+}(x) - V(x) \left\{\begin{array}{ll} = \ell, & x\in [0,q]\\ < \ell, & x>q \end{array}\right.\\ \label{ch4:g1g2jumpb} &- g_{j-1,-}(x)+g_{j,-}(x)+g_{j,+}(x) - g_{j+1,+} = ((-1)^j-1) \frac{\pi i}{r}, \qquad j=1,\ldots,r-1; x\in\Delta_{j}. \end{align} In particular, the equations on the negative real axis, i.e., for odd $j$, yield an equality with $-\frac{2\pi i}{r}$. \subsection{Asymptotic behavior of the $g$-functions} In general one uses the matrix \begin{align} \label{ch4:defG} G(z) = \bigoplus_{j=0}^{r} e^{n (g_{j-1}(z)-g_j(z))} \end{align} to normalize the RHP. Then we need to understand the asymptotics of the $g$-functions as $z\to\infty$. The following two propositions will provide these. \begin{proposition} \label{ch4:asympgfunctions} For $a>0$ let \begin{align} \label{ch4:eq:defma} m_a = \int_0^q s^a d\mu_{V,\theta}^*(s). \end{align} As $z\to \infty$ we have \begin{align} \label{ch4:eq:asympg0infty} g_0(z) &= \log(z) + O\left(\frac{1}{z}\right), \end{align} and for $j=1,\ldots,r$ and $\pm \operatorname{Im}(z)>0$, we have \begin{align} \label{ch4:eq:asympgjinfty} g_{j-1}(z)-g_{j}(z) &= \frac{1}{r} \log(z) - \sum_{k=1}^{r-1} m_{\frac{k}{r}} \Omega^{\pm (-1)^{j}\lfloor \frac{j}{2}\rfloor k} z^{-\frac{k}{r}} + O\left(\frac{1}{z}\right). \end{align} \end{proposition} \begin{proof} For the asymptotics of $g_0$ we can use the compactness of the support of $d\mu_{V,\theta}^*$ to immediately conclude that \[ g_0(z) = \int_{0}^q \left(\log(z) + \mathcal O(1/z))\right) d\mu_{V,\theta}^*(s) = \log(z) + \mathcal O(1/z), \] as $z\to\infty$. For the asymptotics of $g_j$ with $j=1,\ldots,r-1$ we will have to look at the specific construction of the measures as presented in \cite{Ku}. For a fixed $a>0$ one considers the rational function \begin{align*} \Psi^a(w) = \frac{1}{r \Omega^{r-1} (w-a^\frac{1}{r})}, \quad\quad z=w^r, \end{align*} on an $r$-sheeted Riemann surface, that has cuts $\Delta_j$ (as defined in \eqref{ch4:eq:defDeltaj}, but excluding $\Delta_0$), which connect the $j$-th sheet to the $(j+1)$-st sheet, for $j=1,\ldots,r-1$. This is done in such a way that $w=z^\frac{1}{r}$ is taken with the principle branch on the first sheet. This uniquely determines the branches that we should take on the other sheets. Explicitly, we have for $j=1,\ldots,r$ \begin{align} \label{ch4:eq:defPsijExplicit} \Psi^a_{j}(z) = \left\{\begin{array}{ll} \displaystyle\frac{1}{r z^{1-\frac{1}{r}}(z^\frac{1}{r}- \Omega^{(-1)^{j}\lfloor \frac{j}{2}\rfloor} a^\frac{1}{r})}, & \operatorname{Im}(z)>0,\\ \displaystyle\frac{1}{r z^{1-\frac{1}{r}}(z^\frac{1}{r}- \Omega^{(-1)^{j-1}\lfloor \frac{j}{2}\rfloor} a^\frac{1}{r})}, & \operatorname{Im}(z)<0. 
\end{array} \right. \end{align} We remind the reader that $z^{1-\frac{1}{r}}$ and $z^\frac{1}{r}$ are taken with the principle branch, as usual. Now, following \cite{Ku}, we construct some auxiliary measures $\nu^a_1, \ldots, \nu^a_{r-1}$ out of these, namely \begin{align} \label{ch4:eq:defAuxMeas} d\nu^a_{j}(s) = \frac{\Psi^a_{j+}(s)-\Psi^a_{j-}(s)}{2\pi i} ds, \quad\quad s\in\Delta_j. \end{align} It is a known fact from \cite{Ku} that the $d\nu^a_{j}$ are bonafide positive measures. By formula (3.7) of \cite{Ku} and the first formula in the proof of Proposition 3.2 of \cite{Ku} we then have for $j=1,\ldots,r-1$ that \begin{align} \label{ch4:explicitMeasures} d\mu_j(s) =\int_0^q d\nu^a_j(s) d\mu_{V,\frac{1}{r}}^*(a). \end{align} We remind the reader that $(\mu_0,\mu_1,\ldots,\mu_{r})$ solves our vector equilibrium problem mentioned in the beginning of this section. We denote the Stieltjes transforms of the auxiliary measures by \begin{align*} F^a_j(z) = \int_{\Delta_j} \frac{d\nu^a_{j}(s)}{z-s}, \quad\quad j=1,\ldots,r-1. \end{align*} Then according to \cite{Ku} we have \begin{align*} F^a_1(z) &= \frac{1}{z-a} - \Psi^a_{1}(z)\\ F^a_{r-1}(z) &= \Psi^a_{r}(z) \end{align*} and, in particular, for $j=2,\ldots,r-1$ \begin{align} \label{ch4:eq:StieltjesIdj} F^a_{j-1}(z) - F^a_j(z) = \Psi^a_j(z). \end{align} Using \eqref{ch4:explicitMeasures} and \eqref{ch4:eq:StieltjesIdj}, we see that for $j=2,\ldots,r-1$ \begin{align*} \int_{\Delta_{j-1}} \frac{d\mu_{j-1}(s)}{z-s} - \int_{\Delta_{j}} \frac{d\mu_{j}(s)}{z-s} &= \int_0^q \left(F^a_{j-1}(z) - F^a_{j}(z)\right) d\mu_{V,\frac{1}{r}}^*(a) = \int_0^q \Psi^a_{j}(z) d\mu_{V,\frac{1}{r}}^*(a). \end{align*} Integrating this with respect to $z$, using the explicit formula in \eqref{ch4:eq:defPsijExplicit}, then yields for $\pm\operatorname{Im}(z)>0$ \begin{align*} g_{j-1}(z) - g_{j}(z) &= \int_0^q \log(z^\frac{1}{r}- \Omega^{\pm (-1)^{j}\lfloor \frac{j}{2}\rfloor} a^\frac{1}{r})) d\mu_{V,\theta}^*(a)\\ &= \frac{1}{r}\log(z) - \sum_{k=1}^\infty \frac{1}{k} \Omega^{\pm (-1)^{j}\lfloor \frac{j}{2}\rfloor k} z^{-\frac{k}{r}} \int_0^q a^\frac{k}{r} d\mu_{V,\theta}^*(a)\\ &= \frac{1}{r}\log(z) - \sum_{k=1}^{r-1} \frac{1}{k} \Omega^{\pm (-1)^{j}\lfloor \frac{j}{2}\rfloor k} z^{-\frac{k}{r}} m_{\frac{k}{r}} + \mathcal O\left(\frac{1}{z}\right) \end{align*} as $z\to\infty$. A similar reasoning will prove the case for $j=r$. \end{proof} \begin{definition} \label{ch4:def:Cn} We define the $r\times r$ upper-triangular matrix \begin{align} \label{ch4:eq:defCn} C_n = \begin{pmatrix} 1 & a_1 & a_2 & a_3 & \cdots & a_{r-1}\\ 0 & 1 & a_1 & a_2 & \cdots & a_{r-2}\\ 0 & 0 & 1 & a_1 & \cdots & a_{r-3}\\ \vdots & & & \ddots & \ddots\\ 0 & 0 & 0 & 0 & 1 & a_1\\ 0 & 0 & 0 & 0 & \cdots & 1 \end{pmatrix}, \end{align} where, with $m_\frac{1}{r}, m_\frac{2}{r}, \ldots, m_\frac{r-1}{r}$ as in \eqref{ch4:eq:defma}, we take \begin{align} a_j = \sum_{l=1}^j \frac{(-n)^l}{l!} \sum_{k_1+\ldots+k_l=j} m_\frac{k_1}{r} \cdots m_\frac{k_l}{r}. \end{align} \end{definition} \begin{lemma} \label{ch4:prop:Cn} With $C_n$ as in Definition \ref{ch4:def:Cn} we have \begin{align} \label{ch4:eq:prop:Cn} z^{-\frac{n}{r}} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm D^\pm \bigoplus_{j=1}^{r} e^{n(g_{j-1}(z)-g_j(z))} = \left(C_n +\mathcal O(1/z)\right) \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm D^\pm \end{align} as $z\to\infty$, for $\pm \operatorname{Im}(z)>0$. \end{lemma} \begin{proof} So let $\pm\operatorname{Im}(z)>0$. 
Notice that we may omit the $D^\pm$ factors in what follows, due to their diagonal form. It follows from \eqref{ch4:eq:asympgjinfty}, and some straightforward algebra, that \begin{align} z^{-\frac{n}{r}} e^{n(g_0(z)-g_1(z))} = a_0 + a_1 z^{-\frac{1}{r}} + \ldots + a_{r-1} z^{-\frac{r-1}{r}}+ \mathcal O\left(\frac{1}{z}\right) \end{align} as $z\to\infty$, where $a_0=1$ and $a_1,\ldots,a_{r-1}$ are as in Definition \ref{ch4:def:Cn}. The components of $G$ further down the diagonal have a similar expansion but with (effectively) the properly chosen branch of the $z^{-\frac{1}{r}}$ term, namely \begin{align}\label{ch4:eq:Gdiagm} z^{-\frac{n}{r}} \bigoplus_{j=1}^{r} e^{n(g_{j-1}(z)-g_j(z))} = \sum_{m=0}^{r-1} a_m z^{-\frac{m}{r}} \left(\bigoplus_{j=1}^r \Omega^{\pm (-1)^{j} \lfloor \frac{j}{2}\rfloor} \right)^m +\mathcal O\left(\frac{1}{z}\right) \end{align} as $z\to\infty$. Again, this follows from \eqref{ch4:eq:asympgjinfty}. Let us look at $m=1$ first. A simple calculation shows that \begin{align} \label{ch4:eq:U+-omegaU} U^\pm \left(\bigoplus_{j=1}^r \Omega^{\pm (-1)^{j} \lfloor \frac{j}{2}\rfloor} \right) (U^\pm)^{-1} = \begin{pmatrix} 0 & 1 & 0 & \hdots & 0\\ 0 & 0 & 1 & \hdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \hdots & 1\\ 1 & 0 & 0 & \hdots & 0 \end{pmatrix}. \end{align} For the other powers $m$ in \eqref{ch4:eq:Gdiagm} we would have to take a power of this cyclic permutation matrix. Now from \eqref{ch4:eq:U+-omegaU} we arrive at \begin{multline*} z^{-\frac{1}{r}} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm \left(\bigoplus_{j=1}^r \Omega^{\pm (-1)^{j} \lfloor \frac{j}{2}\rfloor} \right) (U^\pm)^{-1} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right)^{-1}\\ = \begin{pmatrix} 0 & 1 & 0 & \hdots & 0\\ 0 & 0 & 1 & \hdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \hdots & 1\\ z^{-1} & 0 & 0 & \hdots & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & \hdots & 0\\ 0 & 0 & 1 & \hdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \hdots & 1\\ 0 & 0 & 0 & \hdots & 0 \end{pmatrix} +\mathcal O\left(\frac{1}{z}\right) \end{multline*} as $z\to\infty$. An analogous argument works for the other powers $m$ in \eqref{ch4:eq:Gdiagm} and we obtain that \begin{multline*} z^{-\frac{n}{r}} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm \bigoplus_{j=1}^{r} e^{n(g_{j-1}(z)-g_j(z))} (U^\pm)^{-1} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right)^{-1}\\ = \begin{pmatrix} 1 & a_1 & a_2 & a_3 & \cdots & a_{r-1}\\ 0 & 1 & a_1 & a_2 & \cdots & a_{r-2}\\ 0 & 0 & 1 & a_1 & \cdots & a_{r-3}\\ \vdots & & & \ddots & \ddots\\ 0 & 0 & 0 & 0 & 1 & a_1\\ 0 & 0 & 0 & 0 & \cdots & 1 \end{pmatrix} +\mathcal O\left(\frac{1}{z}\right) \end{multline*} as $z\to\infty$ and, after rearranging the factors and reinserting the factors $D^\pm$, we obtain \eqref{ch4:eq:prop:Cn} with $C_n$ as in \eqref{ch4:eq:defCn} from Definition \ref{ch4:def:Cn}. \end{proof} We will also need the asymptotics of the $g$-functions as $z\to 0$. \begin{proposition} \label{ch4:prop:gfunctionsbounded0} The $g$-functions are bounded near the origin. \end{proposition} \begin{proof} We claim that for all $j=0,1,\ldots,r$ \begin{align} \label{ch4:eq:behavmujs} \frac{d\mu_j(s)}{ds} &= \mathcal O\left(s^{-\frac{r}{r+1}}\right), & s\to 0. \end{align} For $j=0$ this is nothing other than what the one-cut $\frac{1}{r}$-regularity of $V$ prescribes. So let us consider $j=1,\ldots,r-1$.
From the previous proof we recall (see \eqref{ch4:eq:defPsijExplicit}, \eqref{ch4:eq:defAuxMeas} and \eqref{ch4:explicitMeasures}) that \begin{align} \label{ch4:eq:intintint} \frac{d\mu_j(s)}{ds} &= \int_0^q \frac{\Psi_{j+}^a(s)-\Psi_{j-}^a(s)}{2\pi i} d\mu_{V,\frac{1}{r}}^*(a). \end{align} It follows from \eqref{ch4:eq:behavmujs} for $j=0$ that there exists a $0<\delta<\min(1,q)$ and a $c>0$ such that for $s\in(0,\delta]$ \begin{align} \label{ch4:eq:dmuVbehav0} \frac{d\mu_{V,\frac{1}{r}}^*(s)}{ds} \leq c s^{-\frac{r}{r+1}}. \end{align} Now we rewrite \eqref{ch4:eq:intintint} as \begin{align} \label{ch4:eq:intintint2} \frac{d\mu_j(s)}{ds}= \int_0^{\delta/s} \frac{\Psi_{j+}^{s a}(s)-\Psi_{j-}^{s a}(s)}{2\pi i} d\mu_{V,\frac{1}{r}}^*(s a) + \int_\delta^q \frac{\Psi_{j+}^a(s)-\Psi_{j-}^a(s)}{2\pi i} d\mu_{V,\frac{1}{r}}^*(a). \end{align} By some straightforward algebra we find for $a\in (0,\delta]$ and $s>0$ the estimate \begin{align} \label{ch4:eq:Psiestimate} \left|\frac{\Psi_{j+}^{s a}(s)-\Psi_{j-}^{s a}(s)}{2\pi i}\right| &= \frac{|\operatorname{Im}(\Omega^{(-1)^{j}\lfloor \frac{j}{2}\rfloor})| a^\frac{1}{r}}{\pi s} \left|1-\Omega^{(-1)^{j}\lfloor \frac{j}{2}\rfloor} a^\frac{1}{r}\right|^{-2} \leq \frac{a^{\frac{1}{r}}}{(1-\delta^\frac{1}{r})^2 \pi s}. \end{align} Then it follows from \eqref{ch4:eq:dmuVbehav0} and \eqref{ch4:eq:Psiestimate} that \begin{align*} \left|\int_0^{\delta/s} \frac{\Psi_{j+}^{s a}(s)-\Psi_{j-}^{s a}(s)}{2\pi i} d\mu_{V,\frac{1}{r}}^*(s a)\right| &\leq \int_0^{\delta/s} \frac{a^{\frac{1}{r}}}{(1-\delta^\frac{1}{r})^2 \pi s} c s^{-\frac{r}{r+1}} a^{-\frac{r}{r+1}} s da\\ &= \frac{r^2+r}{2r+1}\frac{\delta^{\frac{1}{r}+\frac{1}{r+1}} c}{(1-\delta^\frac{1}{r})^2 \pi } s^{\frac{1}{r}+\frac{1}{r+1}-\frac{r}{r+1}}. \end{align*} Using the one-cut $\frac{1}{r}$-regularity of $V$, but now for the behavior around $q$, we can show that the remaining integral in the right-hand side of \eqref{ch4:eq:intintint2} is bounded. We conclude that \begin{align} \label{ch4:eq:behavmujs2} \frac{d\mu_j(s)}{ds} = \mathcal O\left(s^{\frac{1}{r}+\frac{1}{r+1}-\frac{r}{r+1}}\right) + \mathcal O(1) \end{align} as $s\to 0$, and this is even better than \eqref{ch4:eq:behavmujs}. Notice that we cannot ignore the $\mathcal O(1)$ term in the case $r=2$. The claim is proved. Plugging \eqref{ch4:eq:behavmujs} in the definition \eqref{ch4:eq:defgfunctions} of the $g$-functions, we find with standard arguments that the $g$-functions are bounded near the origin. \end{proof} \subsection{Normalization $X\mapsto T$} For the next two transformations of our RHP it will turn out to be convenient to define the following function $\varphi$. \begin{align} \label{ch4:eq:defvarphi} \varphi(z) = -g_0(z)+\frac{1}{2} g_{1}(z) + \frac{1}{2}\left(V(z)+\ell\right). \end{align} Due to our assumption that $V$ is real analytic on $[0,\infty)$ we know that there exists an open neighborhood $O_V$ of $[0,\infty)$ to which $V$ has an analytic continuation. Thus $\varphi$ is analytic on $O_V\setminus (-\infty,q]$. By \eqref{ch4:g1g2jump} we have for $x>q$ that \begin{align} \label{ch4:eq:varphiqinfty} \varphi(x) = -\frac{1}{2}\left(g_{0,+}(x)+g_{0,-}(x)-g_{1,+}(x)- V(x)-\ell\right) >0 \end{align} and also by \eqref{ch4:g1g2jump} we have for $x\in(0,q)$ that \begin{align} \label{ch4:eq:varphi0q} \varphi_\pm(x) = \mp \pi i \int_x^q d\mu_0(s). 
\end{align} \begin{definition} \label{ch4:def:T} With $G$ as in \eqref{ch4:defG} and $C_n$ as in Definition \ref{ch4:def:Cn}, we define \begin{align} \label{ch4:defT} T(z) = L^{-1} (1\oplus C_n^{-1}) X(z) G(z) L, \end{align} where \begin{align} \label{ch4:eq:defL} L = \operatorname{diag}(e^{n \frac{r\ell}{r+1}}, e^{-n \frac{\ell}{r+1}},\ldots, e^{-n \frac{\ell}{r+1}}). \end{align} \end{definition} Then $T$ satisfies the following RHP. \begin{rhproblem} \label{ch4:RHPforT} \ \begin{description} \item[RH-T1] $T$ is analytic on $\mathbb C\setminus \mathbb R$. \item[RH-T2] $T$ has boundary values for $x\in (-\infty,0)\cup (0,q) \cup (q,\infty)$. \begin{align} \nonumber T_+(x) &= T_-(x)\\ \nonumber &\hspace{-1.7cm}\times \left\{\begin{array}{ll} \begin{pmatrix} e^{2n\varphi_+(x)} & x^{\beta}\\ 0 & e^{2n\varphi_-(x)} \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r}{2}-1} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 0 \mod 2,\\ \begin{pmatrix} e^{2n\varphi_+(x)} & x^{\beta}\\ 0 & e^{2n\varphi_-(x)} \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} , & r \equiv 1 \mod 2, \end{array} \right. \\ &\hspace{6cm} x\in(0,q),\\ \nonumber T_+(x) &= T_-(x)\\ &\hspace{-1cm}\times \left\{\begin{array}{ll} \begin{pmatrix} 1 & x^{\beta} e^{-2n\varphi(x)}\\ 0 & 1 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r}{2}-1} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 0 \mod 2,\\ \begin{pmatrix} 1 & x^{\beta} e^{-2n\varphi(x)}\\ 0 & 1 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} , & r \equiv 1 \mod 2, \end{array} \right. \\ \nonumber &\hspace{6cm} x>q,\\ \nonumber T_+(x) &= T_-(x) \hspace{0.1cm}\times\hspace{0.1cm} \left\{\begin{array}{ll} 1 \oplus \bigoplus_{j=1}^{\frac{r}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, & r \equiv 0 \mod 2,\\ 1\oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 1 \mod 2,\\ \end{array} \right. \\ &\hspace{6cm} x<0. \end{align} \item[RH-T3] As $|z|\to\infty$ \begin{align} \label{ch4:RHT3} T(z) = \left(\mathbb{I}+\mathcal{O}\left(\frac{1}{z}\right)\right) \left(1 \oplus z^{\frac{r-1}{2 r}} \bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) \times\left\{\begin{array}{ll} 1 \oplus U^+ D^+ , & \operatorname{Im}(z) >0,\\ 1 \oplus U^- D^- , & \operatorname{Im}(z) <0. \end{array} \right. \end{align} \item[RH-T4] As $z\to 0$ \begin{align} \label{ch4:RHT4} T(z) = \mathcal{O}\begin{pmatrix} 1 & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z) & \ldots & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z)\\ \vdots & & & \vdots\\ 1 & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z) & \ldots & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z)\end{pmatrix}. \end{align} \end{description} \end{rhproblem} \begin{proof} We prove RH-T2 for $r\equiv 1\mod 2$.
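Since the factor $1\oplus C_n^{-1}$ in Definition \ref{ch4:def:T} is constant, the jump of $T$ is given by $T_-^{-1} T_+ = L^{-1} G_-^{-1} \left(X_-^{-1} X_+\right) G_+ L$. Conjugating by $G$ thus multiplies the $(j,k)$ entry of the jump of $X$ (rows and columns indexed by $j,k=0,\ldots,r$) by $e^{n(g_{j,-}(x)-g_{j-1,-}(x))}\, e^{n(g_{k-1,+}(x)-g_{k,+}(x))}$, while conjugating by $L$ contributes a factor $e^{-n\ell}$ to the off-diagonal entries in the first row and a factor $e^{n\ell}$ to those in the first column.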
For $x>0$ we see, using RH-X2, that \begin{multline*} T_-(x)^{-1} T_+(x)\\ = L^{-1} \bigoplus_{j=0}^r e^{n (g_{j,-}(x)-g_{j-1,-}(x))} \left(\begin{pmatrix} 1 & w_{\beta}(x)\\ 0 & 1 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}\right)\\ \bigoplus_{j=0}^r e^{n (g_{j-1,+}(x)-g_{j,+}(x))} L\\ = \begin{pmatrix} e^{n(g_{0,-}(x)-g_{0,+}(x))} & x^\beta e^{n (-V(x) + g_{0,-}(x) + g_{0,+}(x) - g_{1,+}(x) - \ell)}\\ 0 & e^{n(-g_{0,-}(x) + g_{0,+}(x)+g_{1,-}(x)-g_{1,+}(x))} \end{pmatrix}\\ \oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0\hspace{1.7cm} e^{n(g_{2j,-}(x) - g_{2j-1,-}(x) + g_{2j,+}(x) - g_{2j+1,+}(x))}\\ -e^{n(g_{2j+1,-}(x)-g_{2j,-}(x)+g_{2j-1,+}(x)-g_{2j,+}(x))} \hspace{1.7cm}0 \end{pmatrix} \end{multline*} Now RH-T2, both for $x\in(0,q)$ and for $x>q$, is a consequence of \eqref{ch4:eq:varphi0q}, \eqref{ch4:eq:defvarphi}, \eqref{ch4:g0jump}, \eqref{ch4:g0jump2}, \eqref{ch4:g1jump}, \eqref{ch4:g1g2jumpb} and the complex conjugated version of the latter. Let us now look at the jump for $x<0$. Similarly as before, we have \begin{multline*} T_-(x)^{-1} T_+(x) = e^{n(g_{0,-}(x)-g_{0,+}(x))}\\ \oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 \hspace{1.7cm} e^{n(g_{2j-1,-}(x) - g_{2j-2,-}(x) + g_{2j-1,+}(x) - g_{2j,+}(x))}\\ -e^{n(g_{2j,-}(x)-g_{2j-1,-}(x)+g_{2j-2,+}(x)-g_{2j-1,+}(x))} \hspace{1.7cm} 0 \end{pmatrix}\\ \oplus e^{n(g_{r-1,-}(x)-g_{r-1,+}(x))} \end{multline*} Now RH-T2 follows from \eqref{ch4:g0jump2}, \eqref{ch4:grjump2}, \eqref{ch4:g1g2jumpb} and the complex conjugated version of the latter. Here we are using that $n$ is divisible by $r$, and thus $e^{-\frac{2\pi i}{r} n}=1$. The case where $r\equiv 0\mod 2$ is analogous. Let us now prove RH-T3. Using \eqref{ch4:eq:asympg0infty}, Lemma \ref{ch4:prop:Cn} and RH-X3 we have as $z\to\infty$ that \begin{align*} T(z) &= L^{-1} \left(1\oplus C_n^{-1}\right) \left(\mathbb I + \mathcal O\left(\frac{1}{z}\right)\right) \\ &\hspace{1cm}\left((e^{-n\frac{r\ell}{r+1}}+\mathcal O(1/z))\oplus (C_n+\mathcal O(1/z)) z^\frac{r-1}{2r} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm D^\pm e^{n\frac{\ell}{r+1}}\right)\\ &= L^{-1} \left(1\oplus C_n^{-1}\right) L \left(\mathbb I + \mathcal O\left(\frac{1}{z}\right)\right) \left(1\oplus C_n z^\frac{r-1}{2r} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm D^\pm\right)\\ &=\left(\mathbb I + \mathcal O\left(\frac{1}{z}\right)\right) \left(1\oplus C_n^{-1}\right) \left(1\oplus C_n z^\frac{r-1}{2r} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm D^\pm\right) \end{align*} for $\pm\operatorname{Im}(z)>0$ and we obtain RH-T3. RH-T4 follows from the fact that the $g$-functions are bounded around $z=0$ (see Proposition \ref{ch4:prop:gfunctionsbounded0}). \end{proof} When $n$ is not divisible by $r$ we can nevertheless arrive at the same RHP by using an additional transformation, see Appendix \ref{ch:appendixA}. From this point onwards the RHPs do not depend on the particular modulo class $r$ that $n$ is in. \subsection{Opening of the lens $T\mapsto S$} We will open a lens from $0$ to $q$ with the $\varphi$-function as defined in \eqref{ch4:eq:defvarphi}, as usual. We denote the upper and lower lips of this lens by $\Delta_0^+$ and $\Delta_0^-$ respectively. The direction of the lips of the lens near the origin is perpendicular to the real line for now (see Figure \ref{ch4:FigS}), but later on, in Section \ref{ch4:sec:defLocalP}, we will slightly deform the lips. 
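The opening of the lens rests on the standard factorization of the oscillatory jump on $(0,q)$: since $\varphi_+(x)+\varphi_-(x)=0$ there by \eqref{ch4:eq:varphi0q}, the upper-left $2\times 2$ block of the jump in RH-T2 factors as \begin{align*} \begin{pmatrix} e^{2n\varphi_+(x)} & x^{\beta}\\ 0 & e^{2n\varphi_-(x)} \end{pmatrix} = \begin{pmatrix} 1 & 0\\ x^{-\beta} e^{2n\varphi_-(x)} & 1 \end{pmatrix} \begin{pmatrix} 0 & x^{\beta}\\ -x^{-\beta} & 0 \end{pmatrix} \begin{pmatrix} 1 & 0\\ x^{-\beta} e^{2n\varphi_+(x)} & 1 \end{pmatrix}. \end{align*} The two triangular factors are moved onto the lips of the lens, which leads to the following definition.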
\begin{figure} \caption{Contour $\Sigma_S$.} \label{ch4:FigS} \end{figure} \begin{definition} \label{ch4:def:S} $S$ is defined by \begin{align*} S(z) &= T(z) \left(\begin{pmatrix} 1 & 0\\ -z^{-\beta} e^{2 n \varphi(z)} & 1\end{pmatrix} \oplus \mathbb I_{(r-1)\times (r-1)}\right), &z \text{ in the upper part of the lens}\\ S(z) &= T(z) \left(\begin{pmatrix} 1 & 0\\ z^{-\beta} e^{2 n \varphi(z)} & 1\end{pmatrix} \oplus \mathbb I_{(r-1)\times (r-1)}\right), &z \text{ in the lower part of the lens}\\ S(z) &= T(z), &\text{elsewhere.} \end{align*} \end{definition} Then $S$ satisfies the following RHP. \begin{rhproblem} \label{ch4:RHPforS} \ \begin{description} \item[RH-S1] $S$ is analytic on $\mathbb C\setminus \Sigma_S$. \item[RH-S2] $S$ has boundary values for $x\in \Sigma_S$. \begin{align} \nonumber S_+(x) &= S_-(x)\\ \nonumber &\hspace{-0.5cm}\times \left\{\begin{array}{ll} \begin{pmatrix} 0 & x^{\beta}\\ -x^{-\beta} & 0 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r}{2}-1} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 0 \mod 2,\\ \begin{pmatrix} 0 & x^{\beta}\\ -x^{-\beta} & 0 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} , & r \equiv 1 \mod 2, \end{array} \right. \\ &\hspace{6cm} x \in (0,q),\\ \nonumber S_+(x) &= S_-(x)\\ \nonumber &\hspace{-1.1cm}\times \left\{\begin{array}{ll} \begin{pmatrix} 1 & x^{\beta} e^{-2n\varphi(x)}\\ 0 & 1 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r}{2}-1} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 0 \mod 2,\\ \begin{pmatrix} 1 & x^{\beta} e^{-2n\varphi(x)}\\ 0 & 1 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} , & r \equiv 1 \mod 2, \end{array} \right. \\ &\hspace{6cm} x>q,\\ \nonumber S_+(x) &= S_-(x) \hspace{0.1cm}\times \left\{\begin{array}{ll} 1 \oplus \bigoplus_{j=1}^{\frac{r}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, & r \equiv 0 \mod 2,\\ 1\oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 1 \mod 2,\\ \end{array} \right. \\ &\hspace{6cm} x < 0,\\ \nonumber S_+(z) &= S_-(z) \begin{pmatrix} 1 & 0\\ z^{-\beta} e^{2 n \varphi(z)} & 1 \end{pmatrix} \oplus \mathbb I_{(r-1)\times (r-1)}\\ &\hspace{6cm} z \in \Delta_0^\pm. \end{align} \item[RH-S3] As $|z|\to\infty$ \begin{align} \label{ch4:RHS3} S(z) = \left(\mathbb{I}+\mathcal{O}\left(\frac{1}{z}\right)\right) \left(1 \oplus z^{\frac{r-1}{2 r}} \bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) \times\left\{\begin{array}{ll} 1 \oplus U^+ D^+ , & \operatorname{Im}(z) >0,\\ 1 \oplus U^- D^- , & \operatorname{Im}(z) <0. \end{array} \right.
\end{align} \item[RH-S4] As $z\to 0$ \begin{align} \nonumber S(z) &= \mathcal{O}\begin{pmatrix} z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z) & \ldots & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z)\\ \vdots & & \vdots\\ z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z) & \ldots & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z)\end{pmatrix}\\ \label{ch4:RHS4a} &\hspace{4cm} \text{for $z$ to the right of }\Delta_0^\pm\\ \nonumber S(z) &= \mathcal{O}\begin{pmatrix} 1 & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z) & \ldots & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z)\\ \vdots & & & \vdots\\ 1 & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z) & \ldots & z^{-\frac{r-1}{2r}} h_{\alpha+\frac{r-1}{r}}(z)\end{pmatrix}\\ \label{ch4:RHS4b} &\hspace{4cm} \text{for $z$ to the left of }\Delta_0^\pm \end{align} \end{description} \end{rhproblem} We omit a proof, since the procedure is standard. \section{Global parametrix} The jump matrix on $\Delta_0^\pm$ and $(q,\infty)$ will tend to the unit matrix as $n\to\infty$. For $z\in\Delta_0^\pm$ we have that $\operatorname{Re} \varphi(z)<0$ if the distance of the lips of the lens to $(0,q)$ is small enough (we have the freedom to make this distance as small as we want); this can be argued with the Cauchy-Riemann equations and \eqref{ch4:eq:varphi0q}, in the usual way. Thus there are no jumps on $\Delta_0^\pm$ in the limit that $n\to\infty$. On $(q,\infty)$ one may use \eqref{ch4:eq:varphiqinfty} to see that the upper-left $2\times 2$ block tends to the unit matrix as $n\to\infty$. The global parametrix problem is the problem where we replace the jumps on these contours by their large $n$ limit. It should be a good approximation of $S$ away from the end points $0$ and $q$ for large $n$. \subsection{The global parametrix problem} \begin{figure} \caption{Contour $\Sigma_N$.} \label{ch4:FigN} \end{figure} The global parametrix problem takes the following form. \begin{rhproblem} \label{ch4:RHPforN} \ \begin{description} \item[RH-N1] $N$ is analytic on $\mathbb C\setminus \mathbb R$. \item[RH-N2] $N$ has boundary values for $x\in \Sigma_N$ (see Figure \ref{ch4:FigN}), and we have the jumps \begin{align} \nonumber N_+(x) &= N_-(x)\\ \nonumber &\hspace{-0.5cm}\times \left\{\begin{array}{ll} \begin{pmatrix} 0 & x^{\beta}\\ -x^{-\beta} & 0 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r}{2}-1} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 0 \mod 2,\\ \begin{pmatrix} 0 & x^{\beta}\\ -x^{-\beta} & 0 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} , & r \equiv 1 \mod 2, \end{array} \right. \\ &\hspace{6cm} x \in (0,q),\\ \nonumber N_+(x) &= N_-(x)\\ \nonumber &\hspace{0.4cm}\times \left\{\begin{array}{ll} \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r}{2}-1} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 0 \mod 2,\\ \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} , & r \equiv 1 \mod 2, \end{array} \right. \\ &\hspace{6cm} x > q,\\ \nonumber N_+(x) &= N_-(x) \\ \nonumber &\hspace{1.5cm}\times \left\{\begin{array}{ll} 1 \oplus \bigoplus_{j=1}^{\frac{r}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, & r \equiv 0 \mod 2,\\ 1\oplus \bigoplus_{j=1}^{\frac{r-1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 1 \mod 2,\\ \end{array} \right. \\ &\hspace{6cm} x < 0.
\end{align} \item[RH-N3] As $|z|\to\infty$ \begin{align} \label{ch4:RHN3} N(z) = \left(\mathbb{I}+\mathcal{O}\left(\frac{1}{z}\right)\right) \left(1 \oplus z^{\frac{r-1}{2 r}} \bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) \times \left\{\begin{array}{ll} 1 \oplus U^+ D^+ , & \operatorname{Im}(z) >0,\\ 1 \oplus U^- D^- , & \operatorname{Im}(z) <0, \end{array} \right. \end{align} \end{description} \end{rhproblem} We have some freedom in our choice of the behavior of $N$ as $z\to 0$ and $z\to q$.\\ In \cite{KuMo} we were lucky in that we could use a minor modification of the global parametrix from \cite{KuMFWi}. For $r>2$, we really have to solve the global parametrix problem. \\ \begin{remark} In what follows, we opt to find a solution to RH-N with an appropriate algebraic equation and corresponding Riemann surface (e.g., see \cite{KuMFWi}). There is an alternative method though. After making the jumps constant with Szeg\H{o} functions, one can use the method with differentials introduced by Kuijlaars and Mo in \cite{KuMo} (see \cite{KuLo} for the larger size case, and in particular for how to obtain the correct asymptotic behavior as $z\to\infty$). The latter has the advantage that technical calculations can mostly be avoided. There is a trade-off, however: the formulae come out more explicit in the first method. In the end, however, all that really matters to us is how $N$ behaves near $0$ and $q$ (which we describe in RH-N4 later). The choice for the first method is one of personal preference. \end{remark} \subsection{An $(r+1)$-sheeted Riemann surface} First we will solve RH-N for $\beta=0$. We need an $(r+1)$-sheeted Riemann surface. To find out what Riemann surface will help us, we let the associated vector equilibrium problem for linear external fields guide us. Let $\mu_0',\mu_1',\ldots,\mu_{r-1}'$ be the equilibrium measures of \eqref{ch4:eq:eqProblem} corresponding to a linear external field $x$ (see \cite[Proposition 5.1]{Ku}). Using \cite[Theorem 1.8]{ClRo} for $\theta=r$ and external field $x^r$ we see after a little calculation that $\mu_0'$ has support $[0,\frac{2r}{r+1}]$. For this we used the duality between $\theta=r$ and $\theta=\frac{1}{r}$ of the MBE. We consider the Stieltjes transforms \begin{align*} F_j(z) &= \displaystyle \int_{\Delta_j'} \frac{d\mu_j'(s)}{z-s}, & j=1,2,\ldots,r, \end{align*} where $\Delta_0'=[0,\frac{2r}{r+1}]$ and $\Delta_j'=\Delta_j$ as in \eqref{ch4:eq:defDeltaj} for $j=1,\ldots,r$. From these we construct a function on a Riemann surface as follows \begin{align*} \zeta(z) = \left\{\begin{array}{ll} \zeta_0(z) = 1 - F_1(z), & z\in \mathfrak R_0\\ \zeta_j(z) = F_j(z) - F_{j+1}(z), & z\in \mathfrak R_j, j=1,\ldots,r-1,\\ \zeta_r(z) = F_r(z), & z\in \mathfrak R_r. \end{array}\right. \end{align*} It is known from \cite[Proposition 5.1]{Ku} that $\zeta$ defines a meromorphic function from the $(r+1)$-sheeted Riemann surface with cut $\Delta_j'$ between sheet $\mathfrak R_j$ and sheet $\mathfrak R_{j+1}$, where $j=0,1,\ldots,r-1$, to the extended complex plane. We can get the asymptotic behavior as $z\to\infty$ immediately from \text{Proposition \ref{ch4:asympgfunctions}}, by taking the derivative (linear external fields are one-cut $\frac{1}{r}$-regular).
Namely, we have as $z\to \infty$ \begin{align} \label{ch4:eq:zeta0AsympInfty} \zeta_0(z) &= 1-\frac{1}{z} + O\left(\frac{1}{z^2}\right), \end{align} and for $j=1,\ldots,r$ and $\pm \operatorname{Im}(z)>0$, we have \begin{align} \label{ch4:eq:zetajAsympInfty} \zeta_{j}(z) &= \frac{1}{r z} \left(1+\sum_{k=1}^{r-1} k m'_{\frac{k}{r}} \Omega^{\pm (-1)^{j}\lfloor \frac{j}{2}\rfloor k} z^{-\frac{k}{r}} + O\left(\frac{1}{z}\right)\right), \end{align} where $m_\frac{1}{r}',\ldots,m_\frac{r-1}{r}'$ correspond to the external field $x$; the prime is added to avoid confusion with $m_\frac{1}{r},\ldots,m_\frac{r-1}{r}$ from Proposition \ref{ch4:asympgfunctions} corresponding to our general external field $V$.\\ \begin{proposition} \label{ch4:prop:zetaAsympInfty} $\zeta$ satisfies the algebraic equation \begin{align} \label{ch4:eq:AlgeqforZeta} \zeta^{r+1} = \left(\zeta-\frac{1}{r z}\right)^r. \end{align} \end{proposition} \begin{proof} We can read off from the asymptotic behaviors \eqref{ch4:eq:zeta0AsympInfty} and \eqref{ch4:eq:zetajAsympInfty} that $\zeta$ has a zero of order $r$ at infinity. The equilibrium measure $\mu_0'$ behaves as $d_1 s^{-\frac{r}{r+1}}\left(1+\mathcal O\left(s^\frac{1}{r+1}\right)\right)$ as $s\to 0^+$ for some constant $d_1>0$. To see this, one combines Theorem 1.8 and Remark 1.9 from \cite{ClRo} with the duality between $\theta=r$ and $\theta=\frac{1}{r}$ of the MBE. It follows from the behavior of the equilibrium measure that $\zeta_0(z) \sim - (r+1) d_1 z^{-\frac{r}{r+1}}$ as $z\to 0$. This implies that $\zeta$ has a pole of order $r$ at $z=0$, since $z=0$ is a branch point of order $r$ of the Riemann surface. We conclude that there cannot be any other poles. From the asymptotic behaviors \eqref{ch4:eq:zeta0AsympInfty} and \eqref{ch4:eq:zetajAsympInfty} we also read off that as $z\to\infty$ \begin{align} \label{ch4:zzzzz} \zeta_0(z)+\zeta_1(z)+\ldots+\zeta_r(z) &= 1+\mathcal O\left(\frac{1}{z^{2}}\right)\\ \zeta_0(z)\zeta_1(z) + \zeta_0(z)\zeta_2(z) + \ldots + \zeta_{r-1}(z)\zeta_r(z) &= \frac{1}{z} + \mathcal O\left(\frac{1}{z^{2}}\right)\\ \zeta_0(z)\zeta_1(z)\zeta_2(z)+\zeta_0(z)\zeta_1(z)\zeta_3(z) + \ldots + \zeta_{r-2}(z)\zeta_{r-1}(z)\zeta_r(z) &= \frac{1}{r^2} \binom{r}{2} \frac{1}{z^2} + \mathcal O\left(\frac{1}{z^{3}}\right)\\ \nonumber &\vdots\\ \label{ch4:zzzzz2} \zeta_0(z)\zeta_1(z)\cdots \zeta_r(z) &= \frac{1}{r^{r}} \binom{r}{r} \frac{1}{z^r} + \mathcal O\left(\frac{1}{z^{r+1}}\right). \end{align} No calculation is needed to argue that there are only integer powers of $z$. Each formula in \eqref{ch4:zzzzz}-\eqref{ch4:zzzzz2} represents an elementary symmetric polynomial, and is thus invariant under any permutation of $\zeta_0,\ldots,\zeta_r$. Then the expressions in \eqref{ch4:zzzzz}-\eqref{ch4:zzzzz2} do not have jumps, and the asymptotic behaviors can only contain integer powers of $z$. Additionally, it implies that \eqref{ch4:zzzzz}-\eqref{ch4:zzzzz2} represent meromorphic functions in the full complex plane with only a possible pole at $z=0$. It follows from \eqref{ch4:eq:behavmujs}, with external field $x$, that we have for $j=1,\ldots,r$ that \begin{align*} \zeta_j(z) = \mathcal O\left(z^{-\frac{r-1}{r}}\right) \end{align*} as $z\to 0$. Then it actually follows that the $\mathcal O$ terms in \eqref{ch4:zzzzz}-\eqref{ch4:zzzzz2} vanish.
We conclude that \begin{align*} \prod_{j=0}^r (\zeta - \zeta_j(z)) &= \zeta^{r+1}+\sum_{j=0}^{r} \zeta^{j} \frac{(-1)^{r+1-j}}{r^{r-j}} \binom{r}{j} \frac{1}{z^{r-j}} = \zeta^{r+1} - \left(\zeta-\frac{1}{r z}\right)^r \end{align*} and we arrive at \eqref{ch4:eq:AlgeqforZeta}. \end{proof} We mention that \eqref{ch4:eq:AlgeqforZeta} is in agreement with (2.15) from \cite{KuMo} when $r=2$, in which case it reads $\zeta^3 = \zeta^2 - \frac{\zeta}{z} + \frac{1}{4 z^2}$. To solve the global parametrix problem, it is convenient to modify $\zeta$. Namely, we define \begin{align} \label{ch4:eq:defXi} \xi_j(z) &= 1 - \frac{1}{r z \zeta_j\left(\displaystyle\frac{c_q}{r} z\right)}, & c_q = \frac{r+1}{2 q}, \hspace{1cm} j=0,1,\ldots,r. \end{align} Then $\xi$ is a meromorphic function on the $(r+1)$-sheeted Riemann surface with cuts $\Delta_0,\ldots,\Delta_r$ as in \eqref{ch4:eq:defDeltaj}. Proposition \ref{ch4:prop:zetaAsympInfty} immediately implies the following algebraic equation for $\xi$. \begin{corollary} $\xi$ satisfies \begin{align} \label{ch4:eq:xiAlg} c_q z = \frac{1}{(1-\xi) \xi^r}. \end{align} \end{corollary} It is a direct consequence of the asymptotic behaviors of $\zeta_0,\ldots,\zeta_r$ in \eqref{ch4:eq:zeta0AsympInfty} and \eqref{ch4:eq:zetajAsympInfty} that as $z\to\infty$ \begin{align} \label{ch4:eq:behavInftyXi1} \xi_0(z) &= 1 - \frac{r}{c_q z}+\mathcal O\left(\frac{1}{z^2}\right),\\ \label{ch4:eq:behavInftyXi2} \xi_j(z) &= m'_{\frac{1}{r}} \displaystyle\left(\frac{r}{c_q}\right)^\frac{1}{r} \Omega^{\pm (-1)^{j}\lfloor \frac{j}{2}\rfloor} z^{-\frac{1}{r}} + \mathcal O\left(z^{-\frac{2}{r}}\right) \end{align} for $\pm\operatorname{Im}(z)>0$. In principle, we could write down the expansion of $\xi_j(z)$ completely in powers of $z^{-1/r}$ but we will not need it. The equation \eqref{ch4:eq:behavInftyXi2} illuminates why we chose the particular modification $\zeta\to \xi$ as in \eqref{ch4:eq:defXi}: we want $\xi_1,\ldots,\xi_r$ to correspond intuitively to (a multiple of) $z^{-\frac{1}{r}}$ for large $z$. This will be key to getting the asymptotics of the global parametrix as $z\to\infty$ right, as shall be clear shortly. From \eqref{ch4:eq:xiAlg} we may also deduce the asymptotic behavior as $z\to 0$. Namely, for $j=0,1,\ldots,r$, we have as $z\to 0$ that \begin{align} \label{ch4:eq:behav0Xi} \xi_j(z) = \mathcal O\left(z^{-\frac{1}{r+1}}\right). \end{align} We shall be more precise about the behavior around the origin in a moment. \subsection{Properties of $\xi$} We prove some properties of $\xi$ that will be useful later on. \begin{lemma} \label{ch4:prop:propXi} For $j=0,1,\ldots,r$ we have \begin{itemize} \item[(i)] $\xi_{j,\pm}(z) = \xi_{j+1,\mp}(z)$ for all $z\in \Delta_j$ (and $j<r$). \item[(ii)] $\xi_j(\overline{z}) = \overline{\xi_j(z)}$ for all $z\in\mathbb C\setminus \Delta_j$. \item[(iii)] $\operatorname{sgn}\operatorname{Im}(\xi_j(z)) = (-1)^j\operatorname{sgn}\operatorname{Im}(z)$ for all $z\in \mathbb C\setminus \mathbb R$. \end{itemize} Furthermore, we have \begin{align*} \xi_0((q,\infty))=\left(\frac{r}{r+1},1\right)\quad \text{and}\quad \xi_0((-\infty,0))=(1,\infty). \end{align*} \end{lemma} \begin{proof} $\xi$ inherits its jumps from $\zeta$ (though with a rescaled cut on the first and second sheet), hence (i) holds trivially. Let us prove (ii). By complex conjugating both \eqref{ch4:eq:xiAlg} and $z$ we find that \begin{align*} c_q z = \frac{1}{(1-\overline{\xi(\overline{z})}) \overline{\xi(\overline{z})}^r}.
\end{align*} Then on any small neighborhood in, say, the upper half-plane we find a number $\sigma(j)\in\{0,1,\ldots,r\}$ such that \begin{align*} \overline{\xi_j(\overline{z})} = \xi_{\sigma(j)}(z). \end{align*} By analytic continuation this must hold on the entire upper half-plane. Of course, a similar argument works in the lower half-plane. Then it follows from either \eqref{ch4:eq:behavInftyXi1} or \eqref{ch4:eq:behavInftyXi2} that this can only be true if $\sigma(j)=j$ for all $j=0,1,\ldots,r$. The equality extends to $\mathbb C\setminus \Delta_j$ by continuity. Let us prove (iii). First we prove that the sign of $\operatorname{Im}(\xi_j(z))$ is fixed in the upper half-plane and the lower half-plane. Suppose that $z_1$ and $z_2$ are two points in the upper half-plane such that $\operatorname{Im}(\xi_j(z_1))\neq\operatorname{Im}(\xi_j(z_2))$. Then, by continuity, there must exist a $z_3$ in the segment $[z_1,z_2]$, and in the upper half-plane in particular, such that $\operatorname{Im}(\xi_j(z_3))=0$. By (ii) this means that $\xi_j(z_3) = \xi_j(\overline{z_3})$. This is a contradiction since it would mean that $\xi$ is not an isomorphism (it has to be, since $\zeta$ is). We conclude that the sign of $\operatorname{Im}(\xi_j(z))$ is fixed in the upper half-plane. Of course a similar argument applies to the lower half-plane. Let us now view the case $j=0$. By \eqref{ch4:eq:behavInftyXi1} it should hold that $\operatorname{Im}(\xi_0(z))$ and $\operatorname{Im}(z)$ have the same sign for large $z$. Since the sign is fixed in the upper half-plane and the lower half-plane, as we just proved, this actually holds for all $z\in \mathbb C\setminus\mathbb R$. Thus we proved (iii) for $j=0$. The cases $j>0$ now simply follow from the cuts in (i). For the remaining part of the proposition we first prove that $\xi_0(q)=\frac{r}{r+1}$. It follows from (ii) and the bijectivity of $\xi$ that $\xi_0$ and $\xi_r$ are the only functions amongst $\xi_0,\ldots,\xi_r$ that can attain real values. Let us look at the function in the right-hand side of \eqref{ch4:eq:xiAlg}. It has a local minimum at $\xi = \frac{r}{r+1}$ and this is its only extremum. Since our map $\xi$ is bijective this local minimum must correspond to an endpoint of either $\Delta_0$ or $\Delta_r$. It follows from the asymptotic behaviors \eqref{ch4:eq:behavInftyXi1}, \eqref{ch4:eq:behavInftyXi2} and \eqref{ch4:eq:behav0Xi} that it cannot correspond to either $\infty$ or $0$. We must conclude that $\xi_0(q) = \frac{r}{r+1}$. Now using this, \eqref{ch4:eq:behavInftyXi1} and the fact that $\xi_0((q,\infty))\subset \mathbb R$ we infer that $\xi_0((q,\infty)) = (\frac{r}{r+1},1)$. Then we cannot have $\xi_0((-\infty,0))=(-\infty,1)$ and we must conclude that $\xi_0((-\infty,0))=(1,\infty)$. For this, we also used \eqref{ch4:eq:behav0Xi}. \end{proof} \begin{corollary} \label{ch4:cor:behavxi0} Let $j=0,1,\ldots,r$. As $z\to 0$ we have \begin{align} \label{ch4:eq:behav0Xi2} \xi_j(z) = \left\{\begin{array}{ll} \displaystyle\left(\frac{r}{c_q}\right)^\frac{1}{r+1} \omega^{(-1)^{j}(\lfloor \frac{j+1}{2}\rfloor + \frac{1}{2})}z^{-\frac{1}{r+1}} + \mathcal O\left(z^{-\frac{2}{r+1}}\right), & \operatorname{Im}(z)>0,\\ \displaystyle\left(\frac{r}{c_q}\right)^\frac{1}{r+1} \omega^{(-1)^{j-1}(\lfloor \frac{j+1}{2}\rfloor + \frac{1}{2})} z^{-\frac{1}{r+1}} + \mathcal O\left(z^{-\frac{2}{r+1}}\right), & \operatorname{Im}(z)<0. \end{array} \right. \end{align} \end{corollary} We remind the reader that $\omega$ is as in \eqref{ch4:defab}. 
\begin{proof} Since $\xi_0((-\infty,0))=(1,\infty)$ we must have, using \eqref{ch4:eq:xiAlg}, that $$\xi_0(x) \sim \left(\frac{r}{c_q}\right)^\frac{1}{r+1}(-x)^{-\frac{1}{r+1}}$$ as $x\to 0^-$. Thus we have $\xi_0(z) = \left(\frac{r}{c_q}\right)^\frac{1}{r+1}\omega^{\pm \frac{1}{2}} z^{-\frac{1}{r+1}} + \mathcal O\left(z^{-\frac{2}{r+1}}\right)$ as $z\to 0$ for $\pm\operatorname{Im}(z)>0$. The behaviors for $\xi_1,\ldots,\xi_r$ follow from Lemma \ref{ch4:prop:propXi}(i). \end{proof} \subsection{Solution of the global parametrix for $\beta=0$} We first find a solution to the global parametrix problem for $\beta=0$. \begin{theorem} \label{ch4:thm:globalParam0} For $\beta=0$ the global parametrix problem is solved by \begin{align} \label{ch4:eq:defN0} N_0(z) = \begin{pmatrix} p_0(\xi_0(z)) F(\xi_0(z)) & p_0(\xi_1(z)) F(\xi_1(z)) & \cdots & p_0(\xi_r(z)) F(\xi_r(z))\\ p_1(\xi_0(z)) F(\xi_0(z)) & p_1(\xi_1(z)) F(\xi_1(z)) & \cdots & p_1(\xi_r(z)) F(\xi_r(z))\\ \vdots & & & \vdots\\ p_r(\xi_0(z)) F(\xi_0(z)) & p_r(\xi_1(z)) F(\xi_1(z)) & \cdots & p_r(\xi_r(z)) F(\xi_r(z)) \end{pmatrix}, \end{align} where $p_0,\ldots,p_r$ are certain polynomials of degree at most $r$, uniquely determined in the proof below, and where \begin{align} \label{ch4:eq:defF} F(\xi) = \frac{1}{\sqrt{(r+1)\xi^r-r \xi^{r-1}}}. \end{align} In the definition of $F$ the square root is taken to have the $r$ cuts $\xi_{0,+}(\Delta_0), \xi_{1,+}(\Delta_1), \ldots, \xi_{r-1,+}(\Delta_{r-1})$ and it is positive for large positive values of $\xi$. \end{theorem} \begin{proof} We start by proving that RH-N3 is satisfied. It turns out to be convenient to write the asymptotics in terms of $\xi_j(z)$ rather than $z$, for $j=1,\ldots,r$. Namely, it follows from the algebraic equation \eqref{ch4:eq:xiAlg} and the asymptotics \eqref{ch4:eq:behavInftyXi2} that \begin{align} \label{ch4:eq:xitoz1} c_q^\frac{k}{r} \xi_j^k (1-\xi_j)^\frac{k}{r} = \left\{\begin{array}{rl} \Omega^{(-1)^{j}\lfloor \frac{j}{2}\rfloor k}z^{-\frac{k}{r}}, & \operatorname{Im}(z)>0,\\ \Omega^{(-1)^{j-1}\lfloor \frac{j}{2}\rfloor k}z^{-\frac{k}{r}}, & \operatorname{Im}(z)<0, \end{array}\right. \end{align} for large enough $z$. By large enough $z$ we mean that we should have $|\xi_j(z)|<1$ (see \eqref{ch4:eq:behavInftyXi2}), so that $(1-\xi_j)^\frac{k}{r}$ is well-defined. Then by \eqref{ch4:eq:xitoz1} we have for $\pm\operatorname{Im}(z)>0$ that \begin{align} \nonumber \Omega^{\pm (-1)^{j}\lfloor \frac{j}{2}\rfloor k} z^{-\frac{k}{r}} &= c_q^\frac{k}{r} \xi^k \sum_{m=0}^\infty \binom{\frac{k}{r}}{m} (-1)^m \xi^m\\ \label{ch4:eq:zktoxi} &= c_q^\frac{k}{r} \sum_{m=k}^{r-1} \binom{\frac{k}{r}}{m-k} (-1)^{m-k} \xi^m + \mathcal O(\xi^r). \end{align} as $z\to \infty$. We will use this expansion in a moment. We know that $\xi_0(q)=\frac{r}{r+1}$ (see Lemma \ref{ch4:prop:propXi}). We may decompose $F$ as \begin{align} \label{ch4:eq:decompF} F(\xi) = \frac{1}{\sqrt{r+1}} \frac{1}{\sqrt{\xi-\xi_0(q)}} (-1)^{\sigma(\xi)} \xi^\frac{1-r}{2}, \end{align} where the square root function has the cut $\xi_{0,+}(\Delta_0)$, is positive for large positive values of $\xi$, and, as usual, $\xi^\frac{1-r}{2}$ is taken with the principal branch (when $r\equiv 0\mod 2$). $\sigma(\xi)$ counts the number of cuts among $\xi_{1,+}(\Delta_1), \ldots, \xi_{r-1,+}(\Delta_{r-1})$ that are crossed when we go along a circular arc from $|\xi|$ to $\xi$, where the arc is taken in the upper half-plane if $\operatorname{Im}(\xi)>0$ and in the lower half-plane if $\operatorname{Im}(\xi)<0$.
We then have $\sigma(\xi)=0$ for $\xi$ in the lower half-plane, because $\xi_{1,+}(\Delta_1), \ldots, \xi_{r-1,+}(\Delta_{r-1})$ are all in the upper half-plane by Lemma \ref{ch4:prop:propXi}(iii). We find that for $j=1,\ldots,r$ \begin{align} \label{ch4:eq:1-1jsigma} \sigma(\xi_j(z)) = \left\{\begin{array}{rl} \frac{1+(-1)^j}{2} (j-1), & \operatorname{Im}(z)>0,\\ \frac{1-(-1)^j}{2} (j-1), & \operatorname{Im}(z)<0. \end{array}\right. \end{align} Now using \eqref{ch4:eq:behavInftyXi2} and \eqref{ch4:eq:1-1jsigma} we find for $j=1,\ldots,r$ that \begin{align*} F(\xi_j(z)) = i\frac{r^\frac{r-1}{2r} c_q^{-\frac{r-1}{2r}}}{\sqrt r (m_\frac{1}{r}')^\frac{r-1}{2}}\frac{1}{\sqrt{1-\frac{r+1}{r}\xi_j(z)}} (-1)^{\sigma(j)+\left\lfloor\frac{j}{2}\right\rfloor} \Omega^{\pm (-1)^{j} \frac{1}{2}\left\lfloor\frac{j}{2}\right\rfloor} z^{-\frac{r-1}{2 r}} \left(1 + \mathcal O\left(z^{-\frac{1}{r}}\right)\right) \end{align*} for $\pm\operatorname{Im}(z)>0$ as $z\to\infty$. The expression between brackets on the far right can be written as \begin{align*} f\left(\Omega^{\pm (-1)^{j}\left\lfloor\frac{j}{2}\right\rfloor} z^{-\frac{1}{r}}\right), \end{align*} for some function $f$ with $f(0)=1$ that is analytic around $0$, and does not depend on $j$, which is clear from \eqref{ch4:eq:zetajAsympInfty} and \eqref{ch4:eq:defXi}. Then there exists a function $h$ with $h(0)\neq 0$ that is analytic around $0$, and does not depend on $j$, such that for $j=1,\ldots,r$ \begin{align} \label{ch4:eq:Fxiasymph} F(\xi_j(z)) = (-1)^{\sigma(j)+\left\lfloor\frac{j}{2}\right\rfloor} \Omega^{\pm (-1)^{j} \frac{1}{2}\left\lfloor\frac{j}{2}\right\rfloor} z^{-\frac{r-1}{2 r}} h(\xi_j(z)) \end{align} for $\pm\operatorname{Im}(z)>0$ as $z\to\infty$. One can verify that the power of $-1$ in \eqref{ch4:eq:Fxiasymph} follows exactly the pattern of the matrix $D^\pm$ as defined in \eqref{ch4:eq:defDpm}. We conclude that \begin{align} \label{ch4:eq:asympFxi} F(\xi_j(z)) = D^\pm_{jj} z^{\frac{r-1}{2 r}} h(\xi_j(z)) \end{align} as $z\to\infty$, for $j=1,\ldots,r$. Now, considering only the lower-right $r\times r$ block, RH-N3 demands that \begin{multline*} \begin{pmatrix} p_1(\xi_1(z)) & p_1(\xi_2(z)) & \hdots & p_1(\xi_r(z))\\ p_2(\xi_1(z)) & p_2(\xi_2(z)) & \hdots & p_2(\xi_r(z))\\ \vdots & & & \vdots\\ p_r(\xi_1(z)) & p_r(\xi_2(z)) & \hdots & p_r(\xi_r(z)) \end{pmatrix} \bigoplus_{j=1}^r F(\xi_j(z))\\ = \left(\mathbb I +\mathcal O\left(\frac{1}{z}\right)\right) z^{\frac{r-1}{2 r}} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm D^\pm \end{multline*} as $z\to\infty$ for $\pm\operatorname{Im}(z)>0$. Using \eqref{ch4:eq:asympFxi} we see that this implies that we should have \begin{multline*} \left(\mathbb I +\mathcal O\left(\frac{1}{z}\right)\right) \begin{pmatrix} p_1(\xi_1(z)) & p_1(\xi_2(z)) & \hdots & p_1(\xi_r(z))\\ p_2(\xi_1(z)) & p_2(\xi_2(z)) & \hdots & p_2(\xi_r(z))\\ \vdots & & & \vdots\\ p_r(\xi_1(z)) & p_r(\xi_2(z)) & \hdots & p_r(\xi_r(z)) \end{pmatrix}\\ = i \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm \left(\bigoplus_{j=1}^r h(\xi_j(z))\right)^{-1} \end{multline*} as $z\to\infty$ for $\pm\operatorname{Im}(z)>0$. Looking in the $k$-th column and the $j$-th row this means that $p_k$ should satisfy \begin{align} \label{ch4:eq:asympforpk} p_k(\xi_j(z)) + \mathcal O\left(\frac{1}{z}\right) = i z^{-\frac{k}{r}} \Omega^{\pm (-1)^{j} \lfloor \frac{j}{2}\rfloor k} h(\xi_j(z))^{-1} \end{align} as $z\to\infty$.
Now by \eqref{ch4:eq:zktoxi} and the fact that $h(0)\neq 0$, we infer that there exist coefficients $a_0^{[k]}, a_1^{[k]}, \ldots$, not depending on $j$, such that \begin{align*} i z^{-\frac{k}{r}} \Omega^{\pm (-1)^{j} \lfloor \frac{j}{2}\rfloor k} h(\xi_j(z))^{-1} = a_0^{[k]} + a_1^{[k]} \xi_j(z) + a_2^{[k]} \xi_j(z)^2 + \ldots + a_{r-1}^{[k]} \xi_{j}(z)^{r-1} + \mathcal O\left(\xi_j(z)^r\right) \end{align*} as $z\to\infty$. In principle these coefficients can be determined explicitly with the help of a Taylor series, but we shall not need an explicit description. Looking at \eqref{ch4:eq:asympforpk}, we infer that we obtain RH-N3 in the lower-right $r\times r$ block if we define \begin{align} \label{ch4:eq:defpkbk} p_k(\xi) = a_0^{[k]} + a_1^{[k]} \xi + a_2^{[k]} \xi^2 + \ldots + a_{r-1}^{[k]} \xi^{r-1} + b_k \xi^r, \end{align} where we still have some freedom in choosing $b_k$. Let us now focus on getting the correct asymptotics in RH-N3 in the first column. By \eqref{ch4:eq:decompF} and \eqref{ch4:eq:behavInftyXi1} we have as $z\to\infty$ that \begin{align} \label{ch4:eq:Fxi0asymp} F(\xi_0(z)) = 1+ \mathcal O\left(\frac{1}{z}\right). \end{align} We should have for all $k=1,\ldots,r$ that \begin{align*} p_k(\xi_0(z)) F(\xi_0(z)) = \mathcal O\left(\frac{1}{z}\right) \end{align*} as $z\to\infty$. Using the asymptotics of $\xi_0$ from \eqref{ch4:eq:behavInftyXi1} and \eqref{ch4:eq:Fxi0asymp} this means that we should have $p_k(1) = 0$. This can easily be achieved, simply by choosing $b_k = -a_0^{[k]} - a_1^{[k]} - a_2^{[k]} - \ldots - a_{r-1}^{[k]}$ in \eqref{ch4:eq:defpkbk}. This fixes the definition of $p_1,\ldots,p_r$ and we have RH-N3 for all rows except the first row. We should have \begin{align*} p_0(\xi_0(z))F(\xi_0(z)) &= 1+\mathcal O\left(\frac{1}{z}\right),\\ p_0(\xi_j(z)) F(\xi_j(z)) &=\mathcal O\left(\frac{1}{z}\right), \end{align*} as $z\to\infty$, for all $j=1,\ldots,r$. In view of \eqref{ch4:eq:Fxi0asymp}, and the asymptotics \eqref{ch4:eq:behavInftyXi1} and \eqref{ch4:eq:behavInftyXi2}, this is achieved if we define \begin{align*} p_0(\xi) = \xi^r. \end{align*} We conclude that RH-N3 is satisfied with our particular choice of polynomials $p_0, p_1, \ldots, p_r$. It is clear that, when we impose that $p_0, p_1, \ldots, p_r$ have degree at most $r$, we have no other choice for their coefficients, and the uniqueness follows. It remains to prove that RH-N2 is satisfied. These are more or less immediate due to the cuts of the Riemann surface associated with $\xi$ (see Lemma \ref{ch4:prop:propXi}(i)), except that we should argue that the minus signs are in the correct place, i.e., in the lower-left component of each $2\times 2$ block. For $x>0$ a particular $2\times 2$ block in the jump of $N_0$ takes the form \begin{align} \begin{pmatrix} \label{ch4:eq:2by2blockwithFF} 0 & F(\xi_{2j+1})_+(x)/F(\xi_{2j})_-(x)\\ F(\xi_{2j})_+(x)/F(\xi_{2j+1})_-(x) & 0 \end{pmatrix} \end{align} where $j=0,1,\ldots,\lfloor \frac{r-1}{2} \rfloor$. We know that $\xi_{2j+1,+}((0,\infty))$ does not intersect with any of the cuts of $F$, hence we will not get a minus sign. However, we know that $\xi_{2j,+}([0,\infty))= \xi_{2j,+}(\Delta_{2j})$ is one of the cuts, hence we get a factor $-1$ in the lower left component of \eqref{ch4:eq:2by2blockwithFF}. In the case that $r$ is even, the last block is a $1\times 1$ block. Indeed we have \begin{align*} F(\xi_r)_+(x)/F(\xi_r)_-(x) = 1 \end{align*} because $\xi_{r}((0,\infty))$ does not intersect with any of the cuts of $F$.
An analogous argument works for the jump with $x<0$. \end{proof} \subsection{Definition of the global parametrix for general $\beta$} With the global parametrix $N_0$ for $\beta=0$, and the functions $\xi_0,\ldots,\xi_r$ at our disposal, we are ready to define the global parametrix for general $\beta$ (or $\alpha$ equivalently). \begin{definition} \label{ch4:def:N} We define the global parametrix by \begin{align} \label{ch4:eq:defN} N(z) = C_\beta N_0(z) (z^{-\beta} \oplus \mathbb I_{r\times r}) \bigoplus_{j=0}^r e^{-\beta \log(1-\xi_j(z))}, \end{align} where $C_\beta$ is the matrix given by \begin{multline} \label{ch4:eq:defGammabeta} r \operatorname{diag}\left(1,c_q, c_q^2, \ldots, c_q^r\right) \begin{pmatrix} r^{\beta-1} c_q^{-\beta} & 0 & 0 & 0 & \hdots & 0\\ 0 & 1 & \binom{\beta+\frac{1}{r}}{1} & \binom{\beta+\frac{2}{r}}{2} & \hdots & \binom{\beta+\frac{r-1}{r}}{r-1}\\ 0 & 0 & 1 & \binom{\beta+\frac{1}{r}}{1} & \hdots & \binom{\beta+\frac{r-2}{r}}{r-2}\\ 0 & 0 & 0 & 1 & \hdots & \binom{\beta+\frac{r-3}{r}}{r-3}\\ \vdots & & & & \ddots & \vdots\\ 0 & 0 & 0 & 0 & \hdots & 1 \end{pmatrix} \operatorname{diag}\left(1,c_q, c_q^2, \ldots, c_q^r\right)^{-1}. \end{multline} \end{definition} \begin{theorem} The global parametrix problem, for general $\alpha>-1$, is solved by $N$ as in Definition \ref{ch4:def:N}. \end{theorem} \begin{proof} We know from Lemma \ref{ch4:prop:propXi} that $\xi_0((-\infty,0))=(1,\infty)$ and $\xi_0((q,\infty))=(\frac{r}{r+1},1)$. Then the values $(\frac{r}{r+1},\infty)$ are not attained by the other $\xi_j$. Hence $1-\xi_j(\mathbb C\setminus \Delta_j)$ does not intersect with $(-\infty,0)$ when $j=1,\ldots,r$. Thus of all the components of the diagonal matrix in \eqref{ch4:eq:defN} only $z^{-\beta} e^{-\beta \log(1-\xi_0(z))}$ could possibly be a case where the cut of the logarithm is intersected. Using Lemma \ref{ch4:prop:propXi}(iii) we infer that for $x<0$ we get the jump \begin{align*} \left(x^{-\beta} e^{-\beta \log(1-\xi_0(x))}\right)_\pm &= e^{\pm \pi i \beta} |x|^{-\beta} e^{\mp \pi i\beta} |1-\xi_0(x)|^{-\beta} = |x|^{-\beta} |1-\xi_0(x)|^{-\beta}. \end{align*} We conclude that $z^{-\beta} e^{-\beta \log(1-\xi_0(z))}$ does not have a jump on $(-\infty,0)$. Combining this with Lemma \ref{ch4:prop:propXi}(i), we infer that $N$ satisfies RH-N2. Notice that the factor $z^{-\beta}$ in \eqref{ch4:eq:defN} does indeed yield the correct powers of $z$ in the upper-left $2\times 2$ block of the jump for $x\in (0,q)$. It remains to show that RH-N3 is satisfied if we choose $C_\beta$ correctly. It follows from \eqref{ch4:eq:behavInftyXi1} that \begin{align} \label{ch4:eq:rbetaelogxi0} r^\beta c_q^{-\beta} e^{-\beta \log(1-\xi_0(z))} = 1 + \mathcal O\left(\frac{1}{z}\right) \end{align} as $z\to\infty$. For $j=1,\ldots,r$ we may use \eqref{ch4:eq:behavInftyXi2} to conclude that \begin{align} \label{ch4:eq:rbetaelogxij} e^{-\beta \log(1-\xi_j(z))} = (1-\xi_j(z))^{-\beta} \end{align} for $z$ large enough. We are going to find $C_\beta$ in the form $C_\beta = 1 \oplus \Gamma_\beta$, where $\Gamma_\beta$ is an $r\times r$ matrix that only depends on $\beta$. 
Then, in view of \eqref{ch4:eq:rbetaelogxi0} and \eqref{ch4:eq:rbetaelogxij}, we obtain RH-N3 if for $\pm\operatorname{Im}(z)>0$ we have as $z\to\infty$ that \begin{align*} \Gamma_\beta z^\frac{r-1}{2r} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm D^\pm \left(\bigoplus_{j=1}^r 1-\xi_j(z)\right)^{-\beta} = \left(\mathbb I + \mathcal O\left(\frac{1}{z}\right)\right) z^\frac{r-1}{2r} \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm D^\pm. \end{align*} Thus we should have that \begin{align} \label{ch4:eq:Gammabetaimplicit} \Gamma_\beta +\mathcal O\left(\frac{1}{z}\right) = \left(\bigoplus_{j=1}^r z^{-\frac{j-1}{r}}\right) U^\pm \left(\bigoplus_{j=1}^r 1-\xi_j(z)\right)^{\beta} (U^\pm)^{-1} \bigoplus_{j=1}^r z^{\frac{j-1}{r}}. \end{align} for $\pm\operatorname{Im}(z)>0$ as $z\to\infty$. Using \eqref{ch4:eq:xitoz1} we can write the component in the $k$-th column and $l$-th row of the right-hand side of \eqref{ch4:eq:Gammabetaimplicit} as \begin{align*} c_q^\frac{k-l}{r} \sum_{m=1}^r \xi_m(z)^{k-l} (1-\xi_m(z))^{\frac{k-l}{r}+\beta}. \end{align*} Then we must have \begin{align} \label{ch4:eq:GammabetaxiO} \Gamma_{\beta,kl} &= c_q^\frac{k-l}{r} \sum_{m=1}^r \xi_m(z)^{k-l} (1-\xi_m(z))^{\frac{k-l}{r}+\beta} + \mathcal O\left(\frac{1}{z}\right) \end{align} as $z\to\infty$. The expression on the right-hand side of \eqref{ch4:eq:GammabetaxiO} (without the $\mathcal O$ term) is invariant under permutations of $\xi_1,\ldots,\xi_r$. Then it cannot have a jump for large enough $z$ ($\xi_0$ does not enter the equation when $|z|>q$) hence it must have a Laurent series expansion (of integer powers of $z$) around $z=0$. All these powers have to be non-positive due to \eqref{ch4:eq:behavInftyXi2}. Indeed, we always have $\xi_m(z)^{k-l}=\mathcal O\left(z^{-\frac{r}{r-1}}\right)$ thus there is no positive (integer) power of $z$ in the Laurent series. Hence we deduce from \eqref{ch4:eq:GammabetaxiO} that we obtain RH-N3 if we take \begin{align*} C_{\beta,kl} &= c_q^\frac{k-l}{r} \lim_{R\to \infty} \oint_{|z|=R} \sum_{m=1}^r \xi_m(z)^{k-l} (1-\xi_m(z))^{\frac{k-l}{r}+\beta} \frac{dz}{z}\\ &= \left\{\begin{array}{ll} r \binom{\frac{k-l}{r}+\beta}{l-k} c_q^{\frac{k-l}{r}}, & k\leq l\\ 0, & k>l.\end{array}\right. \end{align*} for $k,l=1,\ldots,r$, and this is in agreement with \eqref{ch4:eq:defGammabeta}. In the last step we have again used the argument of invariance under permutations of $\xi_1,\ldots,\xi_r$ to argue that certain expressions are analytic and thus vanish when integrated over. \end{proof} \subsection{Behavior of the global parametrix near the hard and soft edge} \begin{proposition} The global parametrix $N$ has the following behavior near the branch points. \begin{align} \label{ch4:eq:behavNasz0} N(z) & \operatorname{diag}\left(z^\frac{r\beta}{r+1}, z^{-\frac{\beta}{r+1}},\ldots,z^{-\frac{\beta}{r+1}}\right) = \mathcal O\left(z^{-\frac{r}{2(r+1)}}\right), & \text{as } z\to 0.\\ \label{ch4:eq:behavNaszq} N(z) &= \mathcal O\begin{pmatrix}(z-q)^{-\frac{1}{4}} & (z-q)^{-\frac{1}{4}} & 1 & \hdots & 1\\ \vdots & & & & \vdots\\ (z-q)^{-\frac{1}{4}} & (z-q)^{-\frac{1}{4}} & 1 & \hdots & 1 \end{pmatrix}, & \text{as } z\to q. \end{align} \end{proposition} \begin{proof} It follows from \eqref{ch4:eq:behav0Xi} that for all $k,j=0,1,\ldots,r$ \begin{align} \label{ch4:eq:pkxijbehavat0} p_k(\xi_j(z))=\mathcal O\left(z^{-\frac{r}{r+1}}\right) \end{align} as $z\to 0$, where $p_0,\ldots,p_r$ are the polynomials of degree $r$ in the definition of $N_0$ in Theorem \ref{ch4:thm:globalParam0}. 
On the other hand, we have for $j=0,1,\ldots,r$ that \begin{align} \label{ch4:eq:Fxijbehavat0} F(\xi_j(z)) = \mathcal O\left(z^{\frac{r}{2(r+1)}}\right) \end{align} as $z\to 0$, where $F$ is as in Theorem \ref{ch4:thm:globalParam0}. Then, plugging \eqref{ch4:eq:pkxijbehavat0} and \eqref{ch4:eq:Fxijbehavat0} in \eqref{ch4:eq:defN}, we have as $z\to 0$ \begin{align} \label{ch4:eq:CbetaN0behav0} C_\beta N_0(z) =\mathcal O\left(z^{-\frac{r}{2(r+1)}}\right). \end{align} Using Corollary \ref{ch4:cor:behavxi0} we infer that for $j=0,1,\ldots,r$ as $z\to 0$ \begin{align*} e^{-\beta \log(1-\xi_j(z))} = \mathcal O\left(z^{\frac{\beta}{r+1}}\right). \end{align*} From this equation it follows that \begin{align*} (z^{-\beta} \oplus \mathbb I_{r\times r}) \bigoplus_{j=0}^r e^{-\beta \log(1-\xi_j(z))} \operatorname{diag}\left(z^\frac{r\beta}{r+1}, z^{-\frac{\beta}{r+1}},\ldots,z^{-\frac{\beta}{r+1}}\right) = \mathcal O\left(1\right) \end{align*} as $z\to 0$, and if we combine this with \eqref{ch4:eq:CbetaN0behav0} and \eqref{ch4:eq:defN} then we arrive at \eqref{ch4:eq:behavNasz0}. To prove \eqref{ch4:eq:behavNaszq} we first notice that $F$, as defined in Theorem \ref{ch4:thm:globalParam0}, satisfies \begin{align} \label{ch4:eq:behavFxiq} F(\xi) = \mathcal O\left(\left(\xi-\frac{r}{r+1}\right)^{-\frac{1}{2}}\right) \end{align} as $\xi\to \frac{r}{r+1}$. We know from Lemma \ref{ch4:prop:propXi} that $\xi_0(q) = \frac{r}{r+1}$ and consequently also $\xi_1(q) = \frac{r}{r+1}$. The point $q$ corresponding to the first two sheets of the Riemann surface associated with \eqref{ch4:eq:xiAlg} is a regular point of the Riemann surface, and only $\xi_0$ and $\xi_1$ can approach $\frac{r}{r+1}$. Then we must conclude that there is a square root branch at $q$, i.e., we have \begin{align*} \xi_j(z)-\frac{r}{r+1} = \mathcal O\left(\sqrt{z-q}\right) \end{align*} as $z\to q$, for $j=0$ and $j=1$. Plugging this in \eqref{ch4:eq:behavFxiq} we find that \begin{align*} F(\xi_j(z)) = \mathcal O\left(\left(z-q\right)^{-\frac{1}{4}}\right) \end{align*} as $z\to q$, for $j=0$ and $j=1$. For $j=2,\ldots,r-1$ the functions $\xi_j$ are bounded around $q$ and their limiting values are not $\frac{r}{r+1}$ or $0$, hence $F(\xi_j(z))$ is bounded around $z=q$. It is a simple task to verify that all the other expressions in the definition of $N$ are bounded, and \eqref{ch4:eq:behavNaszq} follows. \end{proof} \section{Local parametrices} Close to the end points $0$ and $q$ the global parametrix cannot be a good approximation. This means that we have to consider local parametrix problems around $z=0$ and $z=q$. The local parametrix problem around $z=q$ is standard and we omit the details, but the local parametrix problem around $z=0$ is new (when $r>2$) and we work it out in detail. It shows similarities with the bare Meijer-G parametrix from \cite{BeBo}, although I do not believe that there is a (simple) way to map the local parametrix problems to each other. \subsection{The local parametrix problem around the hard edge $z=0$} \label{ch4:sec:localParamSetUp} The assumption that $V$ is real analytic on $[0,\infty)$ implies that there is an open neighborhood $O_V$ of $[0,\infty)$ on which $V$ can be analytically continued. We now consider a disk $D(0,r_0)\subset O_V$ around the origin of radius $r_0$. Here $r_0$ is a positive number that we shall eventually fix (see Section \ref{ch4:sec:conformalf}).
It is assumed that $r_0$ is sufficiently small, such that the lips of the lens inside $D(0,r_0)$ are on the imaginary axis (see Figure \ref{ch4:FigS} also). We shall orient the boundary circle of any disk positively. The \textit{initial local parametrix problem} (we explain this terminology in a moment) is as follows. \begin{rhproblem} \label{ch4:RHPforMathringP} \ \begin{description} \item[RH-$\mathring{\text{P}}$1] $\mathring P$ is analytic on $D(0,r_0) \setminus \Sigma_S$. \item[RH-$\mathring{\text{P}}$2] $\mathring P$ has the same jumps as $S$ has on $D(0,r_0) \setminus \Sigma_S$. \item[RH-$\mathring{\text{P}}$3] $\mathring P$ has the same asymptotics as $S$ has near the origin. \end{description} \end{rhproblem} The matching condition is usually given as RH-$\mathring{\text{P4}}$. In larger size RHPs obtaining the matching is often a major technical issue, and ours is no exception. We will therefore use a double matching (see \cite{Mo}) instead of an ordinary matching. Then there is also a jump on a shrinking circle inside $D(0,r_0)$, in our case this circle turns out to be $\partial D(0,r_n)$ where \begin{align*} r_n &=n^{-\frac{r+1}{2}}, & n=1,2,\ldots \end{align*} On the other hand, in the annulus $r_n<|z|<r_0$, denoted $A(0;r_n,r_0)$, the local parametrix will not have a jump on the lips of the lens anymore. Hence the actual local parametrix $P$, i.e., the one that we will use in the final transformation, satisfies an altered version of RH-$\mathring{\text{P}}$ (and RH-$\mathring{\text{P}}$2 in particular). Indeed, this is why we called the local parametrix problem for $\mathring P$ the initial local parametrix problem. In general, we will use the same notations and terminology as in \cite{Mo} as much as possible. We will first find a solution $\mathring P$ to RH-$\mathring{\text{P}}$ and then, using the double matching approach, we will construct $P$ as \begin{align*} P(z) = \left\{\begin{array}{ll} E_n^0(z) \mathring P(z), & z\in D(0,r_n),\\ E_n^\infty(z) N(z), & z\in A(0;r_n,r_0), \end{array}\right. \end{align*} where $E_n^0$ and $E_n^\infty$ are analytic prefactors that we shall obtain from Theorem 1.2 in \cite{Mo}. Then it will turn out that $P$ satisfies a double matching of the form \begin{align*} P_+(z) N(z)^{-1} &= \mathbb I + \mathcal O\left(\frac{1}{n}\right), & \text{uniformly for }z\in\partial D(0,r_0),\\ P_+(z) P_-(z)^{-1} &= \mathbb I + \mathcal O\left(\frac{1}{n^{r+2}}\right), & \text{uniformly for }z\in\partial D(0,r_n), \end{align*} as $n\to\infty$. For convenience to the reader, we repeat Theorem 1.2 of \cite{Mo} in Section \ref{ch4:sec:matching}, as \text{Theorem \ref{lem:matching}}. Much of our approach concerning the local parametrix is parallel to the approach in \cite{KuMo}, where the case $r=2$ was treated. \subsection{Reduction to constant jumps} The first step to solving a local parametrix problem is generally to transform it to a problem with constant jumps. To that end we define $\varphi$-functions as follows. 
\subsubsection{Definition of the $\varphi$-functions} \begin{definition} For $z\in O_V \setminus \mathbb R$ with $\pm\operatorname{Im}(z)>0$ we define \begin{align} \label{ch4:eq:defvarphi0} \varphi_0(z) &= - g_0(z) + \frac{1}{2} g_1(z) + \frac{1}{2}(V(z)+\ell) \pm \pi i,\\ \nonumber \varphi_j(z) &= \frac{1}{2} g_{j-1}(z) - g_j(z) + \frac{1}{2} g_{j+1}(z) \pm (-1)^j \frac{r-j}{r} \pi i,\\ \label{ch4:eq:defvarphij} &\hspace{4cm} j=1,\ldots,r-2,\\ \label{ch4:eq:defvarphir-1} \varphi_{r-1}(z) &= \frac{1}{2} g_{r-2}(z) - g_{r-1}(z) \pm (-1)^{r-1} \frac{\pi i}{r}. \end{align} Here $g_0, g_1,\ldots, g_{r-1}$ are the $g$-functions as in \eqref{ch4:eq:defgfunctions}. For convenience, we also define $\varphi_{-1}=\varphi_r=0$. \end{definition} Notice that we have $\varphi_0(z) = \varphi(z) \pm \pi i$ according to \eqref{ch4:eq:defvarphi}. The explicit form of the $\varphi$-functions is dictated by the variational equations \eqref{ch4:varCon1} and \eqref{ch4:varCon2}. Notice that the definition of $\varphi_2, \ldots, \varphi_{r-1}$ makes sense on $\mathbb C\setminus \mathbb R$; for our purposes it will suffice to let them have domain $O_V\setminus \mathbb R$, though. \begin{lemma} \label{ch4:prop:phiRelations} For all $j=0,1,\ldots,r-1$ we have for $x\in \Delta_j\cap O_V$ that \begin{align} \label{ch4:eq:varphij+=varphij-} \varphi_{j,+}(x) &= -\varphi_{j,-}(x). \end{align} Furthermore, for all $j=0,\ldots,r-1$ we have for $x\in (\mathbb R\cap O_V)\setminus \Delta_j$ that \begin{align} \label{ch4:eq:varphijcrelation} \varphi_{j+}(x) &= \varphi_{j-1,-}(x)+\varphi_{j-}(x)+\varphi_{j+1,-}(x). \end{align} \end{lemma} \begin{proof} First we prove the relation \eqref{ch4:eq:varphij+=varphij-}. For $x\in (0,q) \cap O_V$ and $j=0$ we have \begin{align} \nonumber \varphi_{0,\pm}(x) &= - \int_0^q \log|x-s| d\mu_0(s) \mp \pi i \int_x^q d\mu_{0}(s) + \frac{1}{2} \int_{-\infty}^0 \log|x-s| d\mu_1(s) + \frac{1}{2}(V(x)+\ell) \pm \pi i\\ \label{ch4:eq:varphieven0} &= \pm\pi i\,\mu_0([0,x]), \end{align} where we have used the variational conditions \eqref{ch4:varCon1}, and the fact that $\mu_0$ has total mass $1$. Similarly, we have by \eqref{ch4:varCon2} for even $j>0$ and $x>0$ that \begin{align} \nonumber \varphi_{j,\pm}(x) &= \frac{1}{2} \int_{-\infty}^0 \log|x-s| d\mu_{j-1}(s) - \int_0^\infty \log|x-s| d\mu_j(s) \\ \nonumber &\hspace{0.5cm}\mp \pi i \int_x^\infty d\mu_j(s)+ \frac{1}{2} \int_{-\infty}^0 \log|x-s| d\mu_{j+1}(s) \pm \frac{r-j}{r} \pi i\\ \nonumber &= \mp \pi i \mu_j([x,\infty)) \pm \frac{r-j}{r} \pi i\\ \label{ch4:eq:varphievenj} &= \pm \pi i \mu_j([0,x]). \end{align} Here we have used that $\mu_j$ has total mass $\frac{r-j}{r}$, see \eqref{ch4:eq:totalMass}. For odd $j$ we have for $x<0$ that \begin{align} \nonumber \varphi_{j,\pm}(x) &= \frac{1}{2} \int_{0}^\infty \log|x-s| d\mu_{j-1}(s) \mp \frac{\pi i}{2} \int_0^\infty d\mu_{j-1}(s)\\ \nonumber &\quad - \int_{-\infty}^0 \log|x-s| d\mu_j(s) \pm \pi i \int_x^0 d\mu_j(s)\\ \nonumber &\quad + \frac{1}{2} \int_{0}^\infty \log|x-s| d\mu_{j+1}(s) \mp \frac{\pi i}{2} \int_0^\infty d\mu_{j+1}(s) \pm \frac{r-j}{r} \pi i\\ \nonumber &= \mp \frac{\pi i}{2} \left(\frac{r-j+1}{r}+\frac{r-j-1}{r}\right) \pm \pi i\mu_j([x,0]) \pm \frac{r-j}{r} \pi i\\ \label{ch4:eq:varphioddj} &= \pm \pi i\,\mu_j([x,0]), \end{align} where we used that $\mu_{j-1}$ and $\mu_{j+1}$ have total mass $\frac{r-j+1}{r}$ and $\frac{r-j-1}{r}$ respectively. We conclude that $\varphi_{j+}(x) = -\varphi_{j-}(x)$ for $x\in \Delta_j \cap O_V$ for all $j=0,1,\ldots,r-1$. Now we prove \eqref{ch4:eq:varphijcrelation}.
For $x<0$ we have \begin{align*} \varphi_{0,+}(x) - \varphi_{0,-}(x) &= - (g_{0+}(x)-g_{0-}(x)) + \frac{1}{2} (g_{1+}(x)-g_{1-}(x)) - 2\pi i\\ &= 2\pi i \int_0^q d\mu_0(s) - \pi i \int_x^0 d\mu_1(s) - 2\pi i\\ &= -\pi i \mu_{1}([x,0])\\ &= \varphi_{1,-}(x), \end{align*} where we have used that $V$ is analytic and that $\mu_0$ has total mass $1$, and \eqref{ch4:eq:varphioddj} with $j=1$ in the last line. Similarly we have for even $j>0$ and $x<0$ that \begin{align*} &\varphi_{j,+}(x) - \varphi_{j,-}(x)\\ &= \frac{1}{2} (g_{j-1,+}(x) - g_{j-1,-}(x)) - (g_{j,+}(x) - g_{j,-}(x)) + \frac{1}{2} (g_{j+1,+}(x) - g_{j+1,-}(x)) - 2\pi i \frac{r-j}{r}\\ &= -\pi i\int_x^0 d\mu_{j-1}(s) + 2\pi i\int_0^\infty d\mu_j(s) - \pi i\int_x^0 d\mu_{j+1}(s) - 2\pi i \frac{r-j}{r}\\ &= -\pi i\mu_{j-1}([x,0]) + 2\pi i \frac{r-j}{r} - \pi i\mu_{j+1}([x,0]) - 2\pi i \frac{r-j}{r}\\ &= \varphi_{j-1,-}(x) + \varphi_{j+1,-}(x), \end{align*} where we used that $\mu_j$ has total mass $\frac{r-j}{r}$, and we used \eqref{ch4:eq:varphioddj} in the last line. For odd $j$ and $x>0$ we find \begin{align*} \varphi_{j,+}(x) - \varphi_{j,-}(x) &= \pi i\int_x^\infty d\mu_{j-1}(s) + \pi i\int_x^\infty d\mu_{j+1}(s) - 2\pi i \frac{r-j}{r}\\ &= \pi i\mu_{j-1}([x,\infty)) + \pi i\mu_{j+1}([x,\infty)) - 2\pi i \frac{r-j}{r}\\ &= - \pi i\mu_{j-1}([0,x]) - \pi i\mu_{j+1}([0,x])\\ &= \varphi_{j-1,-}(x) + \varphi_{j+1,-}(x), \end{align*} where we used \eqref{ch4:eq:varphievenj}, and \eqref{ch4:eq:varphieven0} for $j=1$, in the last line. We conclude that $$\varphi_{j,+}(x)=\varphi_{j-1,-}(x) + \varphi_{j,-}(x)+ \varphi_{j+1,-}(x)$$ for $x\in (\mathbb R\cap O_V)\setminus \Delta_j$ for all $j=0,1,\ldots,r-1$. \end{proof} \subsubsection{Analytic functions constructed out of the $\varphi$-functions} From the $\varphi$-functions we construct functions $f_1,\ldots,f_r$ that we will eventually use to reduce the jumps of the local parametrix problem to constant jumps. These functions are the generalization of the analytic functions $f_1$ and $f_2$ from Section 5.3 in \cite{KuMo}. \begin{definition} \label{ch4:def:fm} Let $m=1,2,\ldots,r$. We define for $z\in D(0,q) \cap O_V$ \begin{align} \label{ch4:eq:deffm} f_m(z) = -z^{-\frac{m}{r+1}} \left\{\begin{array}{ll} \displaystyle\sum_{j=0}^{r-1} \left(\sum_{k=0}^j \omega^{m ((-1)^{k} \lfloor \frac{k+1}{2}\rfloor+\frac{1}{2})} \right) \varphi_j(z), & \operatorname{Im}(z)>0,\\ \displaystyle\sum_{j=0}^{r-1} \left(\sum_{k=0}^j \omega^{m ((-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor - \frac{1}{2})} \right) \varphi_j(z), & \operatorname{Im}(z)<0. \end{array} \right. \end{align} \end{definition} \begin{proposition} \label{ch4:prop:fmanalytic} $f_m$ defines an analytic function for every $m=1,2,\ldots,r$. \end{proposition} \begin{proof} We only prove it for the case $r\equiv 0\mod{2}$; the case $r\equiv 1\mod{2}$ is analogous.
For $x\in (0,q) \cap O_V$ we have by \eqref{ch4:eq:varphij+=varphij-} and \eqref{ch4:eq:varphijcrelation} that \begin{align*} -\omega^{-\frac{m}{2}} x^\frac{m}{r+1} f_{m+}(x) &= \sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} \right) \varphi_{2j+}(x) +\sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j+1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} \right) \varphi_{2j+1,+}(x)\\ &= - \sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} \right) \varphi_{2j-}(x)\\ & \quad + \sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j+1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} \right) (\varphi_{2j,-}(x)+\varphi_{2j+1,-}(x)+\varphi_{2j+2,-}(x)) \end{align*} Notice that we have multiplied $f_m$ with an appropriate factor, $-\omega^{-\frac{m}{2}} x^\frac{m}{r+1}$. This is simply a pragmatic choice that makes our equations look nicer. When we shift the summation index for $\varphi_{2j+2,-}(x)$ we can write this as \begin{align*} \nonumber & (-1 + 1 + \omega^{-m}) \varphi_{0,-}(x)\\ \nonumber &+ \sum_{j=1}^{\frac{r}{2}-1} \left(-\left(\sum_{k=0}^{2j} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} \right) + \left(\sum_{k=0}^{2j+1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} \right)\right. \left. +\left(\sum_{k=0}^{2j-1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} \right)\right) \varphi_{2j-}(x)\\ &\hspace{4.7cm} + \sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j+1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} \right) \varphi_{2j+1,-}(x)\\ &= \sum_{j=0}^{\frac{r}{2}-1} \left(\omega^{-m (j+1)}+\sum_{k=0}^{2j-1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor}\right) \varphi_{2j,-}(x)\\ &\hspace{4.7cm} + \sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j+1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} \right) \varphi_{2j+1,-}(x). \end{align*} To prove that $f_m$ has no jump for $x\in (0,q)\cap O_V$ it then suffices to show the following two identities, \begin{align} \label{ch4:eq:firstWeirdOmegaIdentity} \sum_{k=0}^{2j+1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} = \sum_{k=0}^{2j+1} \omega^{m ((-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor - 1)} \end{align} and \begin{align} \label{ch4:eq:secondWeirdOmegaIdentity} \omega^{-m (j+1)}+\sum_{k=0}^{2j-1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} =\sum_{k=0}^{2j} \omega^{m ((-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor - 1)}, \end{align} because then we would obtain the coefficients as in \eqref{ch4:eq:deffm} for $\operatorname{Im}(z)<0$. To prove the first identity we notice that \begin{align*} \sum_{k=0}^{2j+1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor} &= \sum_{k=0}^j \omega^{2 m k} + \sum_{k=0}^j \omega^{-m (2k+1)}\\ &= \frac{1-\omega^{2m (j+1)}}{1-\omega^{2m}} + \omega^{-m} \frac{1-\omega^{-2m (j+1)}}{1-\omega^{-2m}}\\ &= \frac{1-\omega^m - \omega^{m(2j+2)}+\omega^{-m(2j+1)}}{1-\omega^{2m}}. \end{align*} Indeed, then, complex conjugating and multiplying by $\omega^{-m}$, we have \begin{align} \nonumber \sum_{k=0}^{2j+1} \omega^{m ((-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor - 1)} &= \omega^{-m} \frac{1-\omega^{-m} - \omega^{-m(2j+2)}+\omega^{m(2j+1)}}{1-\omega^{-2m}}\\ \nonumber &= \frac{-\omega^m (1-\omega^{-m} - \omega^{-m(2j+2)}+\omega^{m(2j+1)})}{1-\omega^{2m}}\\ \label{ch4:eq:OmegaSums2} &= \sum_{k=0}^{2j+1} \omega^{m (-1)^{k} \lfloor \frac{k+1}{2}\rfloor}. \end{align} The first identity \eqref{ch4:eq:firstWeirdOmegaIdentity} is proved.
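As a quick sanity check (not needed for the argument), consider \eqref{ch4:eq:firstWeirdOmegaIdentity} for $j=0$: the left-hand side equals $\omega^{0}+\omega^{-m}=1+\omega^{-m}$, while the right-hand side equals $\omega^{-m}+\omega^{m(1-1)}=\omega^{-m}+1$, so both sides indeed agree.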
To prove the second identity \eqref{ch4:eq:secondWeirdOmegaIdentity} we use \eqref{ch4:eq:OmegaSums2} to see that \begin{align*} \sum_{k=0}^{2j-1} & \omega^{m ((-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor -1)}+ \omega^{-m (j+1)}\\ &= \sum_{k=0}^{2j-1} \omega^{m ((-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor -1)} + \omega^{m (-\lfloor \frac{2j+1}{2}\rfloor -1)}\\ &= \sum_{k=0}^{2j} \omega^{m ((-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor -1)}. \end{align*} Then \eqref{ch4:eq:secondWeirdOmegaIdentity} is also proved. We conclude that $f_m$ does not have a jump on $(0,q)\cap O_V$. Now we will prove that it also does not have a jump on $(-q,0)\cap O_V$. For $x<0$, we have, again using \eqref{ch4:eq:varphij+=varphij-} and \eqref{ch4:eq:varphijcrelation}, that \begin{align*} \nonumber - |x|^\frac{m}{r+1} f_{m+}(x) &= \sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} \right) (\varphi_{2j-1,-}(x)+\varphi_{2j,-}(x)+\varphi_{2j+1,-}(x))\\ \nonumber & \quad - \sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j+1} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} \right) \varphi_{2j+1,-}(x)\\ \nonumber &= \sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} \right) \varphi_{2j,-}(x)\\ \nonumber & \quad + \sum_{j=0}^{\frac{r}{2}-1} \left( \left(\sum_{k=0}^{2j+2} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} \right) -\left(\sum_{k=0}^{2j+1} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} \right) \right. \\ &\hspace{4cm}\left. +\left(\sum_{k=0}^{2j} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} \right) \right) \varphi_{2j+1,-}(x)\\ \nonumber &= \sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} \right) \varphi_{2j,-}(x)\\ &\hspace{2cm}+ \sum_{j=0}^{\frac{r}{2}-1} \left(\sum_{k=0}^{2j} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} + \omega^{-m (j+1) } \right) \varphi_{2j+1,-}(x) \end{align*} Here we have used for the second equality that \begin{align*} \sum_{k=0}^{2(\frac{r}{2}-1)+2} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} = \sum_{j=0}^r \omega^{j m} = 0. \end{align*} On the other hand, we have \begin{align*} - |x|^\frac{m}{r+1} f_{m-}(x) = \sum_{j=0}^{r-1} \left(\sum_{k=0}^j \omega^{m ((-1)^{k} \lfloor \frac{k+1}{2}\rfloor + 1)} \right) \varphi_{j-}(x). \end{align*} To see that $f_m$ does not have a jump on $(-q,0)\cap O_V$ it then suffices to verify the two identities \begin{align*} \sum_{k=0}^{2j} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} = \sum_{k=0}^{2j} \omega^{m ((-1)^{k} \lfloor \frac{k+1}{2}\rfloor+1)} \end{align*} and \begin{align*} \sum_{k=0}^{2j+1} \omega^{m (-1)^{k-1} \lfloor \frac{k+1}{2}\rfloor} = \sum_{k=0}^{2j} \omega^{m ((-1)^{k} \lfloor \frac{k+1}{2}\rfloor+1)} + \omega^{-m (j+1) }. \end{align*} These identities are simply the complex conjugates of the identities \eqref{ch4:eq:firstWeirdOmegaIdentity} and \eqref{ch4:eq:secondWeirdOmegaIdentity} that we found before. We conclude that $f_m$ has no jumps in $D(0,q)\cap O_V$. Since the $g$-functions (see Proposition \ref{ch4:prop:gfunctionsbounded0}) and $V$ are bounded on $D(0,q)$, we conclude that $f_m$ is analytic. \end{proof} \begin{proposition} \label{ch4:prop:sumfm} Let $l=0,1,\ldots,r$. We have for $z\in D(0,q)\cap O_V$ and $\pm\operatorname{Im}(z)>0$ that \begin{align} \label{ch4:eq:sumfm} \sum_{m=1}^r \omega^{\pm (-1)^{l-1} (\frac{1}{2}+\lfloor\frac{l}{2}\rfloor) m} z^\frac{m}{r+1} f_m(z) = \sum_{j=0}^{r-1} (j+1) \varphi_j(z) - (r+1) \sum_{j=l}^{r-1} \varphi_j(z).
\end{align} \end{proposition} \begin{proof} One may verify, by considering the different cases of parity, that \begin{align*} (-1)^{l-1} \left(\frac{1}{2}+\left\lfloor\frac{l}{2}\right\rfloor\right) + (-1)^k \left\lfloor\frac{k+1}{2}\right\rfloor + \frac{1}{2} \equiv 0 \mod (r+1) \end{align*} only has $k=l$ as a solution (under the assumption that $0\leq k\leq r$). Combining this with Definition \ref{ch4:def:fm}, we get for $\pm\operatorname{Im}(z)>0$ that \begin{align*} -&\sum_{m=1}^r \omega^{\pm (-1)^{l-1} (\frac{1}{2}+\lfloor\frac{l}{2}\rfloor) m} z^\frac{m}{r+1} f_m(z)\\ &= \sum_{j=0}^{r-1} \sum_{k=0}^{j} \sum_{m=1}^r \omega^{\pm m ((-1)^{l-1} (\frac{1}{2}+\lfloor\frac{l}{2}\rfloor)+(-1)^k \lfloor \frac{k+1}{2}\rfloor)} \varphi_j(z)\\ &= \sum_{j=0}^{l-1} \sum_{k=0}^j (-1) \varphi_j(z) + \sum_{j=l}^{r-1} \left(r+\sum_{k=0, k\neq l}^{j} (-1)\right) \varphi_j(z)\\ &= -\sum_{j=0}^{l-1} (j+1) \varphi_j(z) + \sum_{j=l}^{r-1} (r-j) \varphi_j(z)\\ &= -\sum_{j=0}^{r-1} (j+1) \varphi_j(z) + (r+1) \sum_{j=l}^{r-1} \varphi_j(z). \end{align*} \end{proof} \subsubsection{A local parametrix problem with constant jumps} \label{ch4:sec:szegoconstantjumps} We define the following function. \begin{definition} \label{ch4:def:D0} We define the $(r+1)\times (r+1)$ diagonal matrix \begin{align} \label{ch4:eq:defD0} D_0(z) &= \exp\left(\frac{2}{r+1} \sum_{j=0}^{r-1} (j+1) \varphi_j(z)\right) \bigoplus_{l=0}^{r} \exp\left(-2\sum_{j=l}^{r-1} \varphi_j(z)\right). \end{align} \end{definition} This function is the equivalent of (5.15) in \cite{KuMo}. The function $D_0(z)$ relates RH-$\mathring{\text{P}}$ to a RHP with constant jumps, in the following way. \begin{proposition} \label{ch4:prop:RHPfromPtoQ} Suppose that \begin{align*} \widetilde P(z) = \mathring P(z) \operatorname{diag}(1,z^{-\beta},\ldots,z^{-\beta}) D_0(z)^n. \end{align*} Then $\mathring P$ satisfies RHP-$\mathring{\text{P}}$ if and only if $\widetilde P$ satisfies RH-$\widetilde{\text{P}}$, as defined below. \end{proposition} \begin{rhproblem} \label{ch4:RHPforQ} \ \begin{description} \item[RH-$\widetilde{\text{P}}$1] $\widetilde P$ is analytic on $D(0,r_0) \setminus \Sigma_S$. \item[RH-$\widetilde{\text{P}}$2] $\widetilde P$ has boundary values for $x\in D(0,r_0) \cap \Sigma_S$ \begin{align} \nonumber \widetilde P_+(x) &= \widetilde P_-(x)\\ \nonumber &\hspace{2cm}\times \left\{\begin{array}{ll} \bigoplus_{j=1}^{\frac{r}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 0 \mod 2,\\ \bigoplus_{j=1}^{\frac{r+1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} , & r \equiv 1 \mod 2, \end{array} \right. \\ \label{ch4:RHS2} &\hspace{5.5cm} x \in (0,r_0),\\ \nonumber \widetilde P_+(x) &= \widetilde P_-(x)\\ \nonumber &\times \left\{\begin{array}{ll} 1 \oplus \bigoplus_{j=1}^{\frac{r}{2}} e^{2\pi i\beta} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, & r \equiv 0 \mod 2,\\ 1\oplus \bigoplus_{j=1}^{\frac{r-1}{2}} e^{2\pi i\beta} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus e^{2\pi i\beta}, & r \equiv 1 \mod 2,\\ \end{array} \right. \\ &\hspace{5.5cm} x \in (-r_0,0),\\ \nonumber \widetilde P_+(z) &= \widetilde P_-(z) \begin{pmatrix} 1 & 0\\ 1 & 1 \end{pmatrix} \oplus \mathbb I_{(r-1)\times (r-1)}\\ &\hspace{4cm} z \in \Delta_0^\pm \cap D(0,r_0). 
\end{align} \item[RH-$\widetilde{\text{P}}$3] As $z\to 0$ \begin{align} \nonumber \widetilde P(z) &= \mathcal{O}\begin{pmatrix} h_{-\alpha-\frac{r-1}{r}}(z) & h_{-\alpha-\frac{r-1}{r}}(z) & \ldots & h_{-\alpha-\frac{r-1}{r}}(z)\\ \vdots & & & \vdots\\ h_{-\alpha-\frac{r-1}{r}}(z) & h_{-\alpha-\frac{r-1}{r}}(z) & \ldots & h_{-\alpha-\frac{r-1}{r}}(z)\end{pmatrix}\\ \label{ch4:RHQ3a} &\hspace{4cm} \text{for $z$ to the right of }\Delta_0^\pm,\\ \nonumber \label{ch4:RHQ3b} \widetilde P(z) &= \mathcal{O}\begin{pmatrix} 1 & h_{-\alpha-\frac{r-1}{r}}(z) & \ldots & h_{-\alpha-\frac{r-1}{r}}(z)\\ \vdots & & & \vdots\\ 1 & h_{-\alpha-\frac{r-1}{r}}(z) & \ldots & h_{-\alpha-\frac{r-1}{r}}(z)\end{pmatrix}\\ &\hspace{4cm} \text{for $z$ to the left of }\Delta_0^\pm. \end{align} \end{description} \end{rhproblem} \begin{proof} Suppose that $\mathring P$ satisfies RH-$\mathring{\text{P}}$. For the positive and negative real axis the jumps of $\widetilde P$ follow from Proposition \ref{ch4:prop:sumfm}. For example, for $x<0$ the upper-right component of the $l$-th $2\times 2$ block of the jump matrix equals \begin{align*} & e^{\pi i\beta} |x|^\beta \exp\left(\sum_{m=1}^r \omega^{(-1)^{2l-2} (\frac{1}{2}+\lfloor\frac{2l-1}{2}\rfloor) m} \omega^\frac{m}{2} |x|^\frac{m}{r+1} f_m(x)\right)\\ &\quad e^{\pi i\beta} |x|^{-\beta} \exp\left(\sum_{m=1}^r \omega^{-(-1)^{2l-1} (\frac{1}{2}+\lfloor\frac{2l}{2}\rfloor) m} \omega^{-\frac{m}{2}} |x|^\frac{m}{r+1} f_m(x)\right)\\ &= e^{2\pi i\beta} \exp \left(\sum_{m=1}^r \omega^{(\frac{1}{2}+l-1) m} \omega^\frac{m}{2} |x|^\frac{m}{r+1} f_m(x) - \sum_{m=1}^r \omega^{(\frac{1}{2}+l) m} \omega^{-\frac{m}{2}} |x|^\frac{m}{r+1} f_m(x)\right)\\ &= e^{2\pi i\beta}. \end{align*} The other components, both for the jump for $x<0$ and $x>0$, follow with an analogous argument. For the jump on the lips of the lens we notice that \begin{align} \nonumber \widetilde P_-(z)^{-1} \widetilde P_+(z) &= \operatorname{diag}(1,z^\beta,\ldots,z^\beta) \\ \nonumber & \qquad D_0(z)^{-n} \left(\begin{pmatrix} 1 & 0\\ z^{-\beta} e^{2 n \varphi(z)} & 1 \end{pmatrix} \oplus \mathbb I_{(r-1)\times (r-1)}\right) D_0(z)^n\\ \nonumber & \hspace{5cm} \operatorname{diag}(1,z^{-\beta},\ldots,z^{-\beta})\\ \label{ch4:eq:jumpstildeliplens} &= \begin{pmatrix} 1 & 0\\ D_{0,11}(z)^{-n} e^{2 n \varphi(z)} D_{0,00}(z)^n & 1 \end{pmatrix} \oplus \mathbb I_{(r-1)\times (r-1)} \end{align} We see that \begin{align*} D_{0,11}(z)^{-n} e^{2 n \varphi(z)} D_{0,00}(z)^n &= \exp 2n\left(\varphi(z) + \sum_{j=1}^{r-1} \varphi_j(z) - \sum_{j=0}^{r-1} \varphi_j(z)\right)\\ &= \exp 2n(\varphi(z) - \varphi_0(z)) = 1. \end{align*} Plugging this in \eqref{ch4:eq:jumpstildeliplens}, we conclude that $\widetilde P$ satisfies RH-$\widetilde{\text{P}}$2. To see that it satisfies RH-$\widetilde{\text{P}}$3 we first notice that the components of $D_0(z)$ are bounded around $z=0$, which follows, for instance, from Proposition \ref{ch4:prop:fmanalytic}. Now RH-$\widetilde{\text{P}}$3 follows from the observation that \begin{align*} z^{-\frac{r-1}{r}} h_{\alpha+\frac{r-1}{r}}(z) z^\beta = z^{-\alpha-\frac{r-1}{r}} h_{\alpha+\frac{r-1}{r}}(z) = h_{-\alpha-\frac{r-1}{r}}(z). \end{align*} We conclude that $\widetilde P$ satisfies RH-$\widetilde{\text{P}}$. The converse implication is analogous. \end{proof} We will eventually find a solution to RH-$\widetilde{\text{P}}$ in the form \begin{align*} \widetilde P(z) = \Psi\left(n^{r+1} f(z)\right).
\end{align*} Here $\Psi$ is a solution to a so-called bare parametrix problem, which we define explicitly in Section \ref{ch4:sec:bareParametrix}. This problem has the same behavior as in RH-$\widetilde{\text{P}}$, except that the jump contours are extended to infinity, i.e., we have jumps on the positive and negative real axis and on the positive and negative imaginary axis. We denote these by $\mathbb R^\pm$ and $i\mathbb R^\pm$. Additionally, $\Psi$ has a specific asymptotic behavior at $\infty$. The function $f$ is a conformal map with $f(0)=0$ that maps positive numbers to positive numbers. Without loss of generality, by slightly deforming the lips of the lens, we may assume that $f$ maps the jump contours of RH-$\widetilde{\text{P}}$2 into the jump contours of $\Psi$. \subsubsection{The conformal map $f$} \label{ch4:sec:conformalf} The conformal map that we use in the construction of the local parametrix is defined as follows. \begin{definition} \label{ch4:def:conformalf} We define the map \begin{align} \label{ch4:eq:conformalf} f(z) = z \left(\frac{2 f_1(z)}{(r+1)^2}\right)^{r+1}. \end{align} \end{definition} We will be more precise about the domain of $f$ in a moment. \begin{proposition} \label{ch4:prop:conformalf} In a small enough neighborhood of $0$ the function $f$, as defined in \eqref{ch4:eq:conformalf}, is a conformal map with $f(0)=0$ that maps positive numbers to positive numbers. Furthermore, with $c_{0,V}$ as in \eqref{ch4:eq:onecutthetabehav}, we have \begin{align} \label{ch4:eq:constf0} f'(0) = \left(\frac{\pi c_{0,V}}{\sin\left(\frac{\pi}{r+1}\right)}\right)^{r+1}. \end{align} \end{proposition} \begin{proof} By Proposition \ref{ch4:prop:fmanalytic} we already know that $f_1$ is analytic on $D(0,q)\cap O_V$; hence $f$ is analytic on $D(0,q)\cap O_V$ as well. It suffices to prove that $f_1(0)>0$. If we subtract the formula in Proposition \ref{ch4:prop:sumfm} for $l=0$ from the one for $l=1$, then we obtain for $\operatorname{Im}(z)>0$ that \begin{align} \label{ch4:eq:behavfr+1phi0} 2i \sum_{m=1}^r \sin\left(\frac{\pi m}{r+1}\right) z^\frac{m}{r+1} f_m(z) = (r+1) \varphi_0(z). \end{align} By \eqref{ch4:eq:varphieven0} and one-cut $\frac{1}{r}$-regularity, we have for $x>0$ that \begin{align*} \varphi_{0,+}(x) = \pi i\,\mu_0([0,x]) = (r+1)\pi i c_{0,V} x^\frac{1}{r+1} (1+o(1)) \end{align*} as $x\to 0$. Then it follows for $x>0$ that \begin{align*} 2 i \sin\left(\frac{\pi}{r+1}\right) x^\frac{1}{r+1} f_1(x) + \mathcal O\left(x^{\frac{2}{r+1}}\right) = (r+1)^2 \pi i c_{0,V} x^\frac{1}{r+1} (1+o(1)) \end{align*} as $x\to 0$. This can only be true if \begin{align*} f_1(0) = (r+1)^2 \frac{\pi c_{0,V}}{2\sin\left(\frac{\pi}{r+1}\right)}. \end{align*} Then we conclude that $f_1(0)>0$, and furthermore \begin{align*} f'(0) = \left(\frac{\pi c_{0,V}}{\sin\left(\frac{\pi}{r+1}\right)}\right)^{r+1}. \end{align*} \end{proof} We infer that there exists an $r_0>0$ such that $f$ is a conformal map if we take it to have domain $D(0,r_0)$. We fix such an $r_0$ and we will use it as the radius of the disk on which our local parametrix is defined. \subsection{Bare local parametrix problem} \label{ch4:sec:bareParametrix} As mentioned before, the bare parametrix problem has the same behavior as in RH-$\widetilde{\text{P}}$, but with the jump contours extended to infinity, and with prescribed behavior at $\infty$. It is the generalization of the RHP in Section 3 in \cite{KuMo}. The bare parametrix problem takes the following form.
\begin{rhproblem} \label{ch4:RHPforQ} \ \begin{description} \item[RH-$\Psi$1] $\Psi$ is analytic on $\mathbb C\setminus (\mathbb R \cup i\mathbb R)$. \item[RH-$\Psi$2] $\Psi$ has boundary values for $x\in (\mathbb R \cup i\mathbb R)\setminus \{0\}$ \begin{align} \nonumber \Psi_+(x) &= \Psi_-(x) \times \left\{\begin{array}{ll} \bigoplus_{j=1}^{\frac{r}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus 1, & r \equiv 0 \mod 2,\\ \bigoplus_{j=1}^{\frac{r+1}{2}} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} , & r \equiv 1 \mod 2, \end{array} \right. \\ \label{ch4:RHPsi2} &\hspace{6cm} x \in \mathbb R^+,\\ \nonumber \Psi_+(x) &= \Psi_-(x) \\ \nonumber &\hspace{-0.6cm}\times \left\{\begin{array}{ll} 1 \oplus \bigoplus_{j=1}^{\frac{r}{2}} e^{2\pi i\beta} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, & r \equiv 0 \mod 2,\\ 1\oplus \bigoplus_{j=1}^{\frac{r-1}{2}} e^{2\pi i\beta} \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \oplus e^{2\pi i\beta}, & r \equiv 1 \mod 2,\\ \end{array} \right. \\ &\hspace{6cm} x \in \mathbb R^{-},\\ \nonumber \Psi_+(x) &= \Psi_-(x) \begin{pmatrix} 1 & 0\\ 1 & 1 \end{pmatrix} \oplus \mathbb I_{(r-1)\times (r-1)}\\ &\hspace{6cm} x \in i\mathbb R^{\pm}. \end{align} \item[RH-$\Psi$3] As $z\to\infty$ we have for $\pm\operatorname{Im}(z)>0$ \begin{multline} \label{ch4:RHPsi3} \Psi_\alpha(z) = \left(\mathbb I + \mathcal O\left(\frac{1}{z}\right)\right) L_\alpha(z)\\ \left\{ \begin{array}{l} \displaystyle\bigoplus_{j=1}^\frac{r}{2} \begin{pmatrix} e^{-(r+1) \omega^{\pm (\frac{r}{2}-j)}z^\frac{1}{r+1}} & 0\\ 0 & e^{-(r+1) \omega^{\mp (\frac{r}{2}-j)}z^\frac{1}{r+1}} \end{pmatrix} \oplus e^{-(r+1)z^\frac{1}{r+1}}, \\ \hspace{7cm}r\equiv 0\mod 2,\\ \displaystyle\bigoplus_{j=1}^\frac{r+1}{2} \begin{pmatrix} e^{-(r+1) \omega^{\pm (\frac{r}{2}-j)}z^\frac{1}{r+1}} & 0\\ 0 & e^{-(r+1) \omega^{\mp (\frac{r}{2}-j)}z^\frac{1}{r+1}} \end{pmatrix}, \\ \hspace{7cm}r\equiv 1\mod 2, \end{array}\right. \end{multline} where $L_\alpha(z)$ is as in Definition \ref{ch4:def:Lalpha} below. \item[RH-$\Psi$4] As $z\to 0$ \begin{align} \label{ch4:RHPsi4a} \Psi(z) &= \mathcal{O}\begin{pmatrix} h_{-\alpha-\frac{r-1}{r}}(z) & \ldots & h_{-\alpha-\frac{r-1}{r}}(z)\\ \vdots & & \vdots\\ h_{-\alpha-\frac{r-1}{r}}(z) & \ldots & h_{-\alpha-\frac{r-1}{r}}(z)\end{pmatrix}, & \operatorname{Re}(z)>0,\\ \label{ch4:RHPsi4b} \Psi(z) &= \mathcal{O}\begin{pmatrix} 1 & h_{-\alpha-\frac{r-1}{r}}(z) & \ldots & h_{-\alpha-\frac{r-1}{r}}(z)\\ \vdots & & & \vdots\\ 1 & h_{-\alpha-\frac{r-1}{r}}(z) & \ldots & h_{-\alpha-\frac{r-1}{r}}(z)\end{pmatrix}, & \operatorname{Re}(z) < 0. \end{align} \end{description} \end{rhproblem} It will be practical to choose a notation for the consecutive powers of $\omega$ in RH-$\Psi$3. We set \begin{align} \label{ch4:eq:defkj} k_j &= (-1)^{j-1} \left(\frac{r}{2} - \left\lfloor \frac{j-1}{2}\right\rfloor\right), & j=1,\ldots,r+1. \end{align} With this definition we can rewrite RH-$\Psi$3 as \begin{align} \label{ch4:RHPsi3rewritten} \Psi(z) = \left(\mathbb I +\mathcal O\left(\frac{1}{z}\right)\right) L_\alpha(z) \bigoplus_{j=1}^{r+1} e^{-(r+1) \omega^{\pm k_j} z^\frac{1}{r+1}} \end{align} as $z\to\infty$, for $\pm\operatorname{Im}(z)>0$. \begin{definition} \label{ch4:def:Lalpha} We define \begin{align} \label{ch4:eq:defLalpha} L_\alpha(z) = \frac{(2\pi)^\frac{r}{2}}{\sqrt{r+1}} z^{-\frac{r}{r+1}\beta} \bigoplus_{j=0}^r z^{-\frac{r}{2(r+1)}+\frac{j}{r+1}} \left\{\begin{array}{ll} M_\alpha^+, & \operatorname{Im}(z)>0,\\ M_\alpha^-, & \operatorname{Im}(z)<0, \end{array} \right. 
\end{align} where $M_\alpha^+$ and $M_\alpha^-$ are $(r+1)\times (r+1)$ matrices given by \begin{align} \label{ch4:eq:defMalpha+} \hspace{-0.5cm}M_\alpha^+ &= \operatorname{diag}(1,-1,1,\ldots) \begin{pmatrix} 1 & 1 & \cdots & 1\\ \omega^{k_1} & \omega^{k_2} & \cdots & \omega^{k_{r+1}}\\ \omega^{2 k_1} & \omega^{2 k_2} & \cdots & \omega^{2 k_{r+1}}\\ \vdots & \vdots & & \vdots\\ \omega^{r k_1} & \omega^{r k_2} & \cdots & \omega^{r k_{r+1}} \end{pmatrix} \left(\bigoplus_{j=1}^{r+1} e^{2\pi i(\beta+\eta) k_j}\right) \times \left(\operatorname{diag}\left(1, -1, 1,\ldots\right)\right)^r \end{align} \begin{align} \nonumber \hspace{-0.5cm}M_\alpha^- &= \operatorname{diag}(1,-1,1,\ldots) \begin{pmatrix} 1 & 1 & \cdots & 1\\ \omega^{k_1} & \omega^{k_2} & \cdots & \omega^{k_{r+1}}\\ \omega^{2 k_1} & \omega^{2 k_2} & \cdots & \omega^{2 k_{r+1}}\\ \vdots & \vdots & & \vdots\\ \omega^{r k_1} & \omega^{r k_2} & \cdots & \omega^{r k_{r+1}} \end{pmatrix} \left(\bigoplus_{j=1}^{r+1} e^{2\pi i(\beta+\eta) k_j}\right) \left(\operatorname{diag}\left(1, -1, 1,\ldots\right)\right)^r\\ \label{ch4:eq:defMalpha-} & \hspace{2cm}\times\left\{\begin{array}{ll} \displaystyle\bigoplus_{j=1}^{\frac{r}{2}} \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix} \oplus 1, & r \equiv 0 \mod 2,\\ \displaystyle\bigoplus_{j=1}^{\frac{r+1}{2}} \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix} , & r \equiv 1 \mod 2, \end{array} \right. \end{align} where \begin{align} \label{ch4:eq:defTheta} \eta = -\frac{r}{r+1} \left(\beta+\frac{1}{2}\right). \end{align} \end{definition} We will eventually prove that RH-$\Psi$ has a unique solution. In the next section we construct the solution. It follows by standard arguments from Riemann-Hilbert theory that $\det \Psi(z)$ is a constant multiple of $z^{-r\beta}$. \begin{remark} RH-$\Psi$ shows a great similarity to the bare Meijer-G parametrix for the $p$-chain from \cite[p.~34]{BeBo} (in our case $p=r$). Its solution is constructed with Meijer G-functions, and our solution will also be constructed with Meijer G-functions, as we shall see in the next section. It is natural to wonder if these bare parametrix problems are somehow related. In \cite{BeBo} the bare parametrix problem was obtained after an RH analysis where two lenses, rather than one lens, had to be opened. Another difference is that there the jumps on the lenses are not simply a direct sum of a $2\times 2$ block and an $(r-1)\times (r-1)$ identity matrix, as in our case. Initially, we suspected that it would be possible to map RH-$\Psi$ to the bare Meijer-G parametrix problem by artificially adding jumps, but this does not appear to work. Perhaps the two problems are actually inherently different. \end{remark} \subsubsection{Definition of $\Psi$ with Meijer G-functions} We will find a solution of RH-$\Psi$ in terms of Meijer G-functions. See \cite{BeSm} for an introduction to Meijer G-functions. One may also consult Appendix \ref{ch:appendixB} for general information on the Meijer G-function. We will use the particular Meijer G-function defined by \begin{align} \label{ch4:eq:defMeijerG} G_{0,r+1}^{r+1,0}\left(\left. \begin{array}{c} -\\ 0,-\alpha,-\alpha-\frac{1}{r},\ldots,-\alpha-\frac{r-1}{r}\end{array}\right| z\right) = \frac{1}{2\pi i} \int_L \Gamma\left(s\right) \prod_{j=0}^{r-1} \Gamma\left(s-\alpha-\frac{j}{r}\right) z^{-s} ds, \end{align} where $L$ encircles the interval $(-\infty,\max(0,\alpha+\frac{r-1}{r})]$ in the complex $s$-plane.
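For concreteness (this special case is meant purely as an illustration), for $r=2$ the definition \eqref{ch4:eq:defMeijerG} reads \begin{align*} G_{0,3}^{3,0}\left(\left. \begin{array}{c} -\\ 0,-\alpha,-\alpha-\frac{1}{2}\end{array}\right| z\right) = \frac{1}{2\pi i} \int_L \Gamma(s)\, \Gamma\left(s-\alpha\right) \Gamma\left(s-\alpha-\tfrac{1}{2}\right) z^{-s}\, ds, \end{align*} with $L$ encircling the interval $(-\infty,\max(0,\alpha+\frac{1}{2})]$.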
We shall simply abbreviate the left-hand side of \eqref{ch4:eq:defMeijerG} with the notation $G_{0,r+1}^{r+1,0}(z)$, suppressing the parameters. $G_{0,r+1}^{r+1,0}(z)$ can be viewed as a multi-valued function. By \cite[p. 144]{Lu} the definition \eqref{ch4:eq:defMeijerG} makes sense for $z$ with argument between $-\frac{r+1}{2}\pi$ and $\frac{r+1}{2}\pi$. We will consider $G_{0,r+1}^{r+1}(z)$ as a function where $z$ has argument between $-\pi$ and $\pi$ though, to preserve our convention that (non-integer) powers of $z$ have a cut at $(-\infty,0]$, and that the argument should lie between $-\pi$ and $\pi$. If we want to consider values of $G_{0,r+1}^{r+1,0}$ for $z$ with argument outside this range, then we use a notation that we introduce in a moment. The function in \eqref{ch4:eq:defMeijerG} is a solution to the $(r+1)$-th order linear differential equation \begin{align} \label{ch4:eq:MeijerGdvgl} \vartheta \prod_{j=0}^{r-1} \left(\vartheta+\alpha+\frac{j}{r}\right) \psi(z) + (-1)^r z \psi(z) &= 0, & \vartheta = z\frac{d}{dz}. \end{align} Around $z=0$ a basis of solutions can be expressed in terms of generalized hypergeometric functions. A particular basis of solutions, practical for our purposes, to \eqref{ch4:eq:MeijerGdvgl} is given by \begin{align} \label{ch4:eq:defpsij} \psi_j(z) &= \gamma_j G_{0,r+1}^{r+1,0}\left(z e^{2\pi i k_j}\right), & j=1,2,\ldots, r+1, \end{align} where \begin{align} \nonumber k_j &= (-1)^{j-1} \left(\frac{r}{2} - \left\lfloor \frac{j-1}{2}\right\rfloor\right),\\ \label{ch4:eq:defgammaj} \gamma_j &= (-1)^{r (j-1)} e^{2\pi i\beta k_j}. \end{align} We have repeated the definition for $k_j$ (see \eqref{ch4:eq:defkj}) as a convenience to the reader. The notation $z e^{2\pi i k_j}$ means that we have analytically continued $G_{0,r+1}^{r+1,0}$ along a circle a total of $|k_j|$ times, in positive direction if $k_j>0$ and in negative direction if $k_j<0$. Notice that $k_j$ may be a half-integer. We remark that the functions $\psi_j$ have a cut at $(-\infty,0]$. It will in some cases be convenient to notice that we can write $k_j$ alternatively as \begin{align} \label{ch4:eq:defkjAlternative} k_j = \frac{(-1)^{j-1}\left(r+1 - 2\left\lfloor\frac{j}{2}\right\rfloor\right)-1}{2}. \end{align} Additionally, we define \begin{align} \label{ch4:eq:defpsi0} \psi_0(z) &= \psi_1(z) + \psi_2(z) = e^{\pi i r\beta} G_{0,r+1}^{r+1,0}(z e^{\pi i r}) + (-1)^r e^{-\pi i r\beta} G_{0,r+1}^{r+1,0}(z e^{-\pi i r}). \end{align} \begin{lemma} \label{ch4:prop:psi0entire} $\psi_0$ is an entire function. \end{lemma} \begin{proof} Using \eqref{ch4:eq:defpsi0} and \eqref{ch4:eq:defMeijerG} we find that \begin{align} \nonumber \psi_0(z) &= \frac{1}{2 \pi i} \int_L \Gamma(s) \prod_{j=0}^{r-1} \Gamma\left(s-\alpha-\frac{j}{r}\right) \left(e^{\pi i r\beta} e^{-\pi i r s} z^{-s} + (-1)^r e^{-\pi i r\beta} e^{\pi i r s} z^{-s}\right) ds\\ \label{ch4:eq:residueCalc} &=-\frac{i^{r-1}}{\pi} \int_L \Gamma(s) \prod_{j=0}^{r-1} \Gamma\left(s-\alpha-\frac{j}{r}\right) \sin\left(\pi r (s-\alpha)\right) z^{-s} ds. \end{align} The sine factor removes all the poles of the Gamma functions to the right of the product symbol. That means that only the poles of $\Gamma(s)$ survive, and these are located at $\ldots, -2, -1, 0$. 
Then, using Euler's reflection formula for the Gamma function and the well-known trigonometric identity \begin{align*} \sin(r x) = 2^{r-1} \prod_{j=0}^{r-1} \sin\left(x+\frac{j \pi}{r}\right), \end{align*} we find with a residue calculation of \eqref{ch4:eq:residueCalc} that \begin{align*} \psi_0(z) &= -(2\pi i)^r \sum_{m=0}^\infty \frac{(-1)^{m(r+1)}}{m!} \left(\prod_{j=0}^{r-1} \frac{1}{\Gamma\left(1+m+\alpha+\frac{j}{r}\right)}\right) z^m\\ &= (-1)^{r+1} (2\pi i)^r \prod_{j=0}^{r-1} \frac{1}{\Gamma\left(1+\alpha+\frac{j}{r}\right)} {}_{0}F_{r}\left(\left. \begin{array}{c} -\\ 1+\alpha,1+\alpha+\frac{1}{r},\ldots,1+\alpha+\frac{r-1}{r}\end{array}\right| (-1)^{r+1} z\right), \end{align*} where the function ${}_{0}F_{r}$ is a generalized hypergeometric function. Since hypergeometric functions of this type are entire, we conclude that $\psi_0$ is an entire function. \end{proof} \begin{definition} \label{ch4:def:Psi} We define, with $\psi_0,\psi_1,\ldots,\psi_{r+1}$ as above, \begin{align} \label{ch4:eq:defPsialpha} \Psi_\alpha(z) = \left\{\begin{array}{ll} \begin{pmatrix} \psi_1(z) & \psi_2(z) & \psi_3(z) & \psi_4(z) & \ldots\\ \vartheta\psi_1(z) & \vartheta\psi_2(z) & \vartheta\psi_3(z) & \vartheta\psi_4(z) & \ldots\\ \vdots & & & & \vdots\\ \vartheta^r\psi_1(z) & \vartheta^r\psi_2(z) & \vartheta^r\psi_3(z) & \vartheta^r\psi_4(z) & \ldots \end{pmatrix}, & 0<\operatorname{arg}(z) < \frac{\pi}{2},\\ \begin{pmatrix} \psi_0(z) & \psi_2(z) & \psi_3(z) & \psi_4(z) & \ldots\\ \vartheta\psi_0(z) & \vartheta\psi_2(z) & \vartheta\psi_3(z) & \vartheta\psi_4(z) & \ldots\\ \vdots & & & & \vdots\\ \vartheta^r\psi_0(z) & \vartheta^r\psi_2(z) & \vartheta^r\psi_3(z) & \vartheta^r\psi_4(z) & \ldots\\ \end{pmatrix}, & \frac{\pi}{2} <\operatorname{arg}(z) < \pi,\\ \begin{pmatrix} \psi_2(z) & -\psi_1(z) & \psi_{4}(z) & -\psi_3(z) & \ldots\\ \vartheta\psi_2(z) & -\vartheta\psi_1(z) &\vartheta \psi_4(z) & -\vartheta \psi_3(z) & \ldots\\ \vdots & & & & \vdots\\ \vartheta^r\psi_2(z) & -\vartheta^r\psi_1(z) &\vartheta^r \psi_4(z) & -\vartheta^r \psi_3(z) & \ldots \end{pmatrix}, & -\frac{\pi}{2} <\operatorname{arg}(z) < 0,\\ \begin{pmatrix} \psi_0(z) & -\psi_1(z) & \psi_{4}(z) & -\psi_3(z) & \ldots\\ \vartheta\psi_0(z) & -\vartheta\psi_1(z) &\vartheta \psi_4(z) & -\vartheta \psi_3(z) & \ldots\\ \vdots & & & & \vdots\\ \vartheta^r\psi_0(z) & -\vartheta^r\psi_1(z) &\vartheta^r \psi_4(z) & -\vartheta^r \psi_3(z) & \ldots \end{pmatrix} & -\pi<\operatorname{arg}(z) < -\frac{\pi}{2}. \end{array}\right. \end{align} \end{definition} We will eventually prove that $\Psi_\alpha$ is the unique solution to RH-$\Psi$ (see Theorem \ref{ch4:thm:PsiaexistUnique}). \begin{lemma} \label{ch4:prop:PsiandQ} $\Psi_\alpha$ satisfies RHP-$\Psi$1, RH-$\Psi$2 and RH-$\Psi$4. \end{lemma} \begin{proof} For RHP-$\Psi$2 we only have to check that the jump on the negative axis is satisfied. The other jumps are satisfied by construction. By Lemma \ref{ch4:prop:psi0entire} we know that $\psi_0$ does not have a jump on the negative axis. Hence there is no jump in the $1\times 1$ block in the upper-left corner, i.e., the corresponding component equals $1$. The last block, i.e., in the lower-right corner, is $2\times 2$ when $r\equiv 0\mod 2$ and $1\times 1$ when $r\equiv 1\mod 2$. Let us verify the jumps for the $2\times 2$ blocks, excluding the lower-right $2\times 2$ block when $r\equiv 0\mod 2$. 
In the next few arguments we use that $G_{0,r+1}^{r+1,0}$ can be viewed as a multi-valued function (with argument between $-\frac{r+1}{2}\pi$ and $\frac{r+1}{2}\pi$). Now let $x<0$. For every even $2\leq j\leq r$ we have \begin{align} \nonumber -\psi_{j-1,+}(x) &= - \gamma_{j-1} G_{0,r+1}^{r+1,0}(|x| e^{-\pi i +2\pi i k_{j-1}}) = -\frac{\gamma_{j-1}}{\gamma_{j+1}} \gamma_{j+1} G_{0,r+1}^{r+1,0}(|x| e^{\pi i +2\pi i k_{j+1}})\\ \label{ch4:eq:psij-1+psij+1} &= -\frac{\gamma_{j-1}}{\gamma_{j+1}} \psi_{j+1,-}(x), \end{align} and for every even $2\leq j\leq r-1$ we have \begin{align} \nonumber \psi_{j+2,+}(x) &= \gamma_{j+2} G_{0,r+1}^{r+1,0}(|x| e^{-\pi i+ 2\pi i k_{j+2}}) = \frac{\gamma_{j+2}}{\gamma_j} \gamma_j G_{0,r+1}^{r+1,0}(|x| e^{\pi i +2\pi i k_{j}})\\ \label{ch4:eq:psij-1+psij+1b} &= \frac{\gamma_{j+2}}{\gamma_j} \psi_{j,-}(x). \end{align} Here we have used for the first equality that $k_{j+1}+1 = k_{j-1}$ and for the second equality that $k_{j+2}=k_{j}+1$, which is clear from \eqref{ch4:eq:defkj}. Indeed, using these two identities for the $k_j$, and \eqref{ch4:eq:defgammaj}, we also have \begin{align} \label{ch4:eq:gammaj+1etc} \frac{\gamma_{j-1}}{\gamma_{j+1}} = \frac{\gamma_{j+2}}{\gamma_j} = e^{2\pi i\beta}. \end{align} Combining \eqref{ch4:eq:psij-1+psij+1}, \eqref{ch4:eq:psij-1+psij+1b} and \eqref{ch4:eq:gammaj+1etc}, we conclude that we get the correct jump in the $2\times 2$ blocks, i.e., for every $1\leq j\leq r-1$ \begin{align*} \begin{pmatrix} -\psi_{j-1,+}(x) & \psi_{j+2,+}(x) \end{pmatrix} = \begin{pmatrix} \psi_{j,-}(x) & \psi_{j+1,-}(x) \end{pmatrix} \begin{pmatrix} 0 & e^{2\pi i\beta}\\ -e^{2\pi i\beta} & 0 \end{pmatrix} \end{align*} Let us move on to the lower-right block. Let us first consider the case where $r\equiv 0\mod 2$. Then the last block is a $2\times 2$ block. We should have \begin{align} \label{ch4:eq:psilast2x2block} \begin{pmatrix} -\psi_{r-1,+}(x) & \psi_{r+1,+}(x) \end{pmatrix} = \begin{pmatrix} \psi_{r,-}(x) & \psi_{r+1,-}(x) \end{pmatrix} \begin{pmatrix} 0 & e^{2\pi i\beta}\\ -e^{2\pi i\beta} & 0 \end{pmatrix} \end{align} Then it suffices (because the other case has already been treated in \eqref{ch4:eq:psij-1+psij+1}) to show that \begin{align*} \psi_{r+1,+}(x) = e^{2\pi i\beta} \psi_{r,-}(x). \end{align*} Indeed, we have \begin{align*} \psi_{r+1,+}(x) &= \gamma_{r+1} G_{0,r+1}^{r+1,0}(|x| e^{-\pi i+2\pi i k_{r+1}}) = \frac{\gamma_{r+1}}{\gamma_r} \gamma_r G_{0,r+1}^{r+1,0}(|x| e^{\pi i+2\pi i k_{r}})\\ &= \frac{\gamma_{r+1}}{\gamma_r} \psi_{r,-}(x), \end{align*} where we have used that $k_{r+1}=0=k_r + 1$ according to \eqref{ch4:eq:defkj}. Indeed, using \eqref{ch4:eq:defgammaj}, we also have \begin{align*} \frac{\gamma_{r+1}}{\gamma_r} = \frac{1}{e^{-2\pi i\beta}} = e^{2\pi i\beta}, \end{align*} as it should be, and we obtain \eqref{ch4:eq:psilast2x2block}. Now we consider the case where $r\equiv 1\mod 2$. Then the last block is a $1\times 1$ block. Indeed, we have \begin{align*} -\psi_{r,+}(x) &= -\gamma_{r} G_{0,r+1}^{r+1,0}(|x| e^{-\pi i+2\pi i k_{r}}) = -\frac{\gamma_{r}}{\gamma_{r+1}} \gamma_{r+1} G_{0,r+1}^{r+1,0}(|x| e^{\pi i+2\pi i k_{r+1}})\\ &= -\frac{\gamma_{r}}{\gamma_{r+1}} \psi_{r+1,-}(x). \end{align*} Here we have used that $k_{r+1}=-\frac{1}{2}$ and $k_r=\frac{1}{2}$, as follows from \eqref{ch4:eq:defkj}. Indeed, by \eqref{ch4:eq:defgammaj}, we have that \begin{align*} \frac{\gamma_{r}}{\gamma_{r+1}} = \frac{e^{\pi i\beta}}{-e^{-\pi i\beta}} = -e^{2\pi i\beta}. \end{align*} This proves that RH-$\Psi$2 is satisfied.
We should still prove that RH-$\Psi$4 is satisfied. The $\psi_j$ are solutions to the linear differential equation \eqref{ch4:eq:MeijerGdvgl} of order $r+1$. We can write them in the corresponding Frobenius basis. Since the indicial equation has roots $0,-\alpha, -\alpha-\frac{1}{r},\ldots,-\alpha-\frac{r-1}{r}$, we infer that all solutions behave as $\mathcal O(z^{-\alpha-\frac{r-1}{r}})$ as $z\to 0$. It remains to show that the behavior in the first column for $\operatorname{Re}(z)<0$ is only $\mathcal O(1)$ as $z\to 0$. This is a direct consequence of Lemma \ref{ch4:prop:psi0entire} however. \end{proof} \subsubsection{Asymptotics of $\Psi$ as $z\to\infty$} We also investigate the asymptotics of $\Psi_\alpha$ as $z\to\infty$. To this end we first collect some properties of $L_\alpha$ as defined in Definition \ref{ch4:def:Lalpha}. \begin{proposition} \label{ch4:eq:sameJumpsLPsi} $L_\alpha$ has the same jumps as $\Psi_\alpha$ has on the positive and negative real axis. \end{proposition} \begin{proof} The jump on the positive real axis is satisfied by construction, so let us focus on the jump on the negative ray. Set $x<0$. We have that \begin{align} \nonumber L_{\alpha,-}(x)^{-1} L_{\alpha,+}(x) &= (M_\alpha^+)^{-1} e^{2\pi i\frac{r}{r+1}\beta} \left(\bigoplus_{j=0}^r e^{\pi i \frac{r}{r+1}-\frac{2\pi i j}{r+1}}\right) M_\alpha^-\\ \label{ch4:eq:LadirectprodLa} &= e^{2\pi i \frac{r}{r+1}(\beta+\frac{1}{2})} (M_\alpha^+)^{-1} \operatorname{diag}\left(1, \omega^{-1}, \omega^{-2}, \ldots\right) M_\alpha^-. \end{align} The factor $\operatorname{diag}(1,-1,1,\ldots)$ in the left of \eqref{ch4:eq:defMalpha+} and \eqref{ch4:eq:defMalpha-} has no effect on the jump. We notice that \begin{align*} \begin{pmatrix} 1 & 1 & \cdots & 1\\ \omega^{k_1} & \omega^{k_2} & \cdots & \omega^{k_{r+1}}\\ \omega^{2 k_1} & \omega^{2 k_2} & \cdots & \omega^{2 k_{r+1}}\\ \vdots & \vdots & & \vdots\\ \omega^{r k_1} & \omega^{r k_2} & \cdots & \omega^{r k_{r+1}} \end{pmatrix}^{-1} = \frac{1}{r+1} \begin{pmatrix} 1 & \omega^{-k_1} & \omega^{-2 k_1} & \cdots & \omega^{-r k_1}\\ 1 & \omega^{-k_2} & \omega^{-2 k_2} & \cdots & \omega^{-r k_2}\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & \omega^{-k_{r+1}} & \omega^{-2 k_{r+1}} & \cdots & \omega^{-r k_{r+1}} \end{pmatrix}. 
\end{align*} Then we see that \begin{multline} \begin{pmatrix} 1 & 1 & \cdots & 1\\ \omega^{k_1} & \omega^{k_2} & \cdots & \omega^{k_{r+1}}\\ \omega^{2 k_1} & \omega^{2 k_2} & \cdots & \omega^{2 k_{r+1}}\\ \vdots & \vdots & & \vdots\\ \omega^{r k_1} & \omega^{r k_2} & \cdots & \omega^{r k_{r+1}} \end{pmatrix}^{-1} \operatorname{diag}\left(1, \omega^{-1}, \omega^{-2}, \ldots\right) \begin{pmatrix} 1 & 1 & \cdots & 1\\ \omega^{k_1} & \omega^{k_2} & \cdots & \omega^{k_{r+1}}\\ \omega^{2 k_1} & \omega^{2 k_2} & \cdots & \omega^{2 k_{r+1}}\\ \vdots & \vdots & & \vdots\\ \omega^{r k_1} & \omega^{r k_2} & \cdots & \omega^{r k_{r+1}} \end{pmatrix}\\ \label{ch4:eq:weirdPermutationMatrix} = \frac{1}{r+1} \begin{pmatrix} 1 & \omega^{-k_1} & \omega^{-2 k_1} & \cdots & \omega^{-r k_1}\\ 1 & \omega^{-k_2} & \omega^{-2 k_2} & \cdots & \omega^{-r k_2}\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & \omega^{-k_{r+1}} & \omega^{-2 k_{r+1}} & \cdots & \omega^{-r k_{r+1}} \end{pmatrix} \begin{pmatrix} 1 & 1 & \cdots & 1\\ \omega^{k_1-1} & \omega^{k_2-1} & \cdots & \omega^{k_{r+1}-1}\\ \omega^{2(k_1-1)} & \omega^{2 (k_2-1)} & \cdots & \omega^{2 (k_{r+1}-1)}\\ \vdots & \vdots & & \vdots\\ \omega^{r(k_1-1)} & \omega^{r (k_2-1)} & \cdots & \omega^{r (k_{r+1}-1)} \end{pmatrix} \end{multline} Looking at \eqref{ch4:eq:LadirectprodLa} and the definitions \eqref{ch4:eq:defMalpha+} and \eqref{ch4:eq:defMalpha-} of $M_{\alpha}^\pm$, it is important to investigate the matrix in \eqref{ch4:eq:weirdPermutationMatrix}. The $(j,l)$ component of the matrix in \eqref{ch4:eq:weirdPermutationMatrix} is given by \begin{align*} \frac{1}{r+1}\sum_{m=0}^{r} \omega^{-m k_j} \omega^{m (k_l-1)} = \frac{1}{r+1} \sum_{m=0}^{r} \omega^{(k_l-k_j-1) m}. \end{align*} This sum is $1$ when $k_l-k_j-1\equiv 0\mod (r+1)$ and $0$ otherwise. For every $1\leq j\leq r+1$ there is exactly one $1\leq l\leq r+1$ such that $k_l-k_j-1\equiv 0\mod (r+1)$ is satisfied. This is because $\{k_1,k_2,\ldots,k_{r+1}\}=\{-\frac{r}{2},-\frac{r}{2}+1,\ldots,\frac{r}{2}\}$, i.e., there cannot be two different solutions $l$, since the corresponding $k_l$'s would have to differ by a multiple of $r+1$, while any two of the $k_j$'s differ by at most $r$ in absolute value. Then we infer that \eqref{ch4:eq:weirdPermutationMatrix} is a permutation matrix. In fact, we claim that \eqref{ch4:eq:weirdPermutationMatrix} equals the permutation matrix \begin{align} \label{ch4:eq:weirdPermutationMatrix1} \left(1 \oplus \bigoplus_{j=1}^\frac{r}{2} \begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\right) \left(\bigoplus_{j=1}^\frac{r}{2} \begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix} \oplus 1\right) \end{align} when $r \equiv 0\mod 2$ and \begin{align} \label{ch4:eq:weirdPermutationMatrix2} \left(1 \oplus \bigoplus_{j=1}^\frac{r-1}{2} \begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix} \oplus 1\right) \left(\bigoplus_{j=1}^\frac{r+1}{2} \begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\right) \end{align} when $r \equiv 1\mod 2$. To prove this we remark that $k_l-k_j-1\equiv 0\mod (r+1)$ happens exactly when \begin{align} \label{ch4:eq:weirdRelation} (-1)^l \left\lfloor\frac{l}{2}\right\rfloor - (-1)^j \left\lfloor\frac{j}{2}\right\rfloor \equiv 1 \mod (r+1). \end{align} We have used \eqref{ch4:eq:defkjAlternative} to arrive at \eqref{ch4:eq:weirdRelation}. We first consider the case that $r \equiv 0\mod 2$ and argue as follows. Since both \eqref{ch4:eq:weirdPermutationMatrix} and \eqref{ch4:eq:weirdPermutationMatrix1} are permutation matrices, it suffices to show that their non-zero elements, i.e., the ones, are in the same position.
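For instance (purely as an illustration), for $r=2$ one has $(k_1,k_2,k_3)=(1,-1,0)$, and both \eqref{ch4:eq:weirdPermutationMatrix} and \eqref{ch4:eq:weirdPermutationMatrix1} equal the permutation matrix with ones at the positions $(1,2)$, $(2,3)$ and $(3,1)$.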
So let us consider a non-zero entry of \eqref{ch4:eq:weirdPermutationMatrix1}. Let us assume that it is in row $j$, with $j$ ranging from $1$ to $r+1$. If $j$ is even and $j<r$, then our non-zero entry is made by the $(j,j+1)$ component of the left matrix in \eqref{ch4:eq:weirdPermutationMatrix1}. Then it has to be multiplied by a component in the $(j+1)$-th row of the right matrix in \eqref{ch4:eq:weirdPermutationMatrix1}. The only non-zero component there is at position $(j+1,j+2)$. So we have $l=j+2$. Hence we see that \eqref{ch4:eq:weirdRelation} is satisfied. If $j=r$, then we should have that $l=r+1$, and then \eqref{ch4:eq:weirdRelation} is also satisfied. Let us now consider the case where $j$ is odd and $j\geq 3$. Then the non-zero component in the $j$-th row of the left matrix in \eqref{ch4:eq:weirdPermutationMatrix1} is at position $(j,j-1)$. Then this component is multiplied by the non-zero component in the $(j-1)$-th row of the right matrix in \eqref{ch4:eq:weirdPermutationMatrix1}, which is at position $(j-1,j-2)$. Thus we infer that $l=j-2$ and again we see that \eqref{ch4:eq:weirdRelation} is satisfied. In the case that $j=1$ we find that $l=2$ and then \eqref{ch4:eq:weirdRelation} is also satisfied. The claim is proved for $r\equiv 0\mod 2$. Replacing \eqref{ch4:eq:weirdPermutationMatrix} by \eqref{ch4:eq:weirdPermutationMatrix1}, and then inserting it into \eqref{ch4:eq:LadirectprodLa} with the help of \eqref{ch4:eq:defMalpha+} and \eqref{ch4:eq:defMalpha-}, we must conclude that \begin{align} \nonumber & L_{\alpha,-}(x)^{-1} L_{\alpha,+}(x)\\ \nonumber &= e^{2\pi i \frac{r}{r+1}(\beta+\frac{1}{2})} \left(\bigoplus_{j=1}^{r+1} e^{2\pi i(\beta+\eta) k_j}\right)^{-1} \left(1 \oplus \bigoplus_{j=1}^\frac{r}{2} \begin{pmatrix} 0 & 1\\ -1 & 0\end{pmatrix}\right)\\ \nonumber & \hspace{2.2cm}\left(\bigoplus_{j=1}^\frac{r}{2} \begin{pmatrix} e^{2\pi i (\beta+\eta) k_{2j}} & 0\\ 0 & e^{2\pi i (\beta+\eta) k_{2j-1}}\end{pmatrix} \oplus e^{2\pi i (\beta+\eta) k_{r+1}}\right)\\ \label{ch4:eq:jumpwithkjs} &= e^{2\pi i \frac{r}{r+1}(\beta+\frac{1}{2})} \left(e^{2\pi i(\beta+\eta) (k_2-k_1)} \oplus \bigoplus_{j=1}^\frac{r}{2} \begin{pmatrix} 0 & e^{2\pi i(\beta+\eta) (k_{2j+2}-k_{2j})}\\ -e^{2\pi i(\beta+\eta) (k_{2j-1}-k_{2j+1})} & 0\end{pmatrix}\right) \end{align} We have $k_2-k_1 = -\frac{r}{2}-\frac{r}{2} = -r$ and, as one can check with \eqref{ch4:eq:defkj}, we have for all $j=1,\ldots,r-1$ \begin{align*} (-1)^{j} ( k_{j+2}-k_j) = 1. \end{align*} Indeed, using the definition of $\eta$ in \eqref{ch4:eq:defTheta} we have \begin{align*} \frac{r}{r+1} (\beta+\frac{1}{2}) - r (\beta+ \eta) = \frac{r}{2}, \end{align*} and we have \begin{align*} \frac{r}{r+1} (\beta+\frac{1}{2}) + \beta+ \eta = \beta. \end{align*} Thus, plugging these into \eqref{ch4:eq:jumpwithkjs}, we obtain the correct jump \eqref{ch4:RHPsi2} when $r \equiv 0 \mod 2$. A similar argument works for $r \equiv 1\mod 2$; we omit the details. \end{proof} \begin{lemma} \label{ch4:prop:behavInftyPsi} For $\pm\operatorname{Im}(z)>0$ we have as $z\to\infty$ the asymptotic expansion \begin{align} \label{ch4:eq:prop:behavInftyPsi} \Psi_\alpha(z) \sim L_\alpha(z) \sum_{m=0}^\infty A_{\alpha,m} z^{-\frac{m}{r+1}} \bigoplus_{j=1}^{r+1} e^{-(r+1) \omega^{\pm k_j} z^\frac{1}{r+1}} \end{align} for $(r+1)\times (r+1)$ matrices $A_{\alpha,0}=\mathbb I$ and $A_{\alpha,1}, A_{\alpha,2}, \ldots$ that depend only on $\alpha$. \end{lemma} \begin{proof} Due to \cite{Lu} (Theorem 5 on p.
179) we have as $z\to\infty$ that \begin{align} \label{ch4:eq:MeijerGasympLuke} G_{0,r+1}^{r+1,0}(z) \sim \frac{(2\pi)^\frac{r}{2}}{\sqrt{r+1}} e^{-(r+1) z^\frac{1}{r+1}} z^\eta \sum_{m=0}^\infty M_m z^{-\frac{m}{r+1}}, \end{align} where $\eta$ is as in \eqref{ch4:eq:defTheta} and where $M_m$ are some coefficients that depend on $\alpha$, and $M_0=1$. The $M_m$ can be calculated with a cumbersome procedure, in principle. The asymptotic expansion \eqref{ch4:eq:MeijerGasympLuke} allows us to find the asymptotic behavior of the $\psi_j$. Namely, we have as $z\to\infty$ \begin{align} \label{ch4:eq:notvarthetapsij} \psi_j(z) \sim (-1)^{r(j-1)} e^{2\pi i(\beta+\eta) k_j} \frac{(2\pi)^\frac{r}{2}}{\sqrt{r+1}} e^{-(r+1) \omega^{k_j} z^\frac{1}{r+1}} z^\eta \sum_{m=0}^\infty M_m \omega^{-k_j m} z^{-\frac{m}{r+1}}. \end{align} These asymptotics follow simply by analytic continuation of \eqref{ch4:eq:MeijerGasympLuke}. Let us now look at the derivatives. Applying powers of the operator $\vartheta$ to \eqref{ch4:eq:notvarthetapsij}, we infer that there exist coefficients $M^{[l]}_m$ such that \begin{align} \label{ch4:eq:varthetapsij} \vartheta^l\psi_j(z) \sim (-1)^{r(j-1)} e^{2\pi i(\beta+\eta) k_j} \frac{(2\pi)^\frac{r}{2}}{\sqrt{r+1}} e^{-(r+1) \omega^{k_j} z^\frac{1}{r+1}} \omega^{k_j l} z^{\eta+\frac{l}{r+1}} \sum_{m=0}^\infty M_m^{[l]} \omega^{-k_j m} z^{-\frac{m}{r+1}}. \end{align} In fact, it is not hard to see that we should have $M_0^{[l]}=(-1)^l$. A closed form expression for the $M^{[l]}_m$ with $m>0$ seems hard to find, but we will not need it anyway. Plugging \eqref{ch4:eq:varthetapsij} in the definition of $\Psi_\alpha$ of the first quadrant, we have as $z\to\infty$ \begin{multline*} \Psi_\alpha(z) \sim \frac{(2\pi)^\frac{r}{2}}{\sqrt{r+1}} z^\eta\\ \hspace{-0.25cm}\begin{pmatrix} \sum_{m=0}^\infty M^{[0]}_m \omega^{-k_1 m} z^{-\frac{m}{r+1}} & \sum_{m=0}^\infty M^{[0]}_m \omega^{-k_2 m} z^{-\frac{m}{r+1}} & \cdots\\ z^\frac{1}{r+1}\sum_{m=0}^\infty M^{[1]}_m \omega^{k_1 (1-m)} z^{-\frac{m}{r+1}} & z^\frac{1}{r+1}\sum_{m=0}^\infty M^{[1]}_m \omega^{k_2 (1-m)} z^{-\frac{m}{r+1}} & \cdots\\ z^\frac{2}{r+1}\sum_{m=0}^\infty M^{[2]}_m \omega^{k_1 (2-m)} z^{-\frac{m}{r+1}} & z^\frac{2}{r+1}\sum_{m=0}^\infty M^{[2]}_m \omega^{k_2 (2-m)} z^{-\frac{m}{r+1}} & \cdots\\ \vdots & \vdots & \\ z^\frac{2}{r+1}\sum_{m=0}^\infty M^{[r]}_m \omega^{k_1 (r-m)} z^{-\frac{m}{r+1}} & z^\frac{2}{r+1}\sum_{m=0}^\infty M^{[r]}_m \omega^{k_2 (r-m)} z^{-\frac{m}{r+1}} & \cdots \end{pmatrix}\\ \left(\operatorname{diag}\left(1, -1, 1,\ldots, (-1)^r\right)\right)^r \left(\bigoplus_{j=1}^{r+1} e^{2\pi i(\beta+\eta) k_j}\right) \bigoplus_{j=1}^{r+1} e^{-(r+1) \omega^{k_j} z^\frac{1}{r+1}}. 
\end{multline*} We can rewrite this as \begin{multline} \label{ch4:eq:PsialphaMatrix} \Psi_\alpha(z) \sim \frac{(2\pi)^\frac{r}{2}}{\sqrt{r+1}} z^{\eta+\frac{r}{2(r+1)}} \left(\bigoplus_{j=0}^r z^{-\frac{r}{2(r+1)}+\frac{j}{r+1}}\right)\\ \begin{pmatrix} \sum_{m=0}^\infty M^{[0]}_m \omega^{-k_1 m} z^{-\frac{m}{r+1}} & \sum_{m=0}^\infty M^{[0]}_m \omega^{-k_2 m} z^{-\frac{m}{r+1}} & \cdots & \sum_{m=0}^\infty M^{[0]}_m \omega^{-k_{r+1} m} z^{-\frac{m}{r+1}}\\ \sum_{m=0}^\infty M^{[1]}_m \omega^{k_1 (1-m)} z^{-\frac{m}{r+1}} & \sum_{m=0}^\infty M^{[1]}_m \omega^{k_2 (1-m)} z^{-\frac{m}{r+1}} & \cdots & \sum_{m=0}^\infty M^{[1]}_m \omega^{k_{r+1} (1-m)} z^{-\frac{m}{r+1}}\\ \sum_{m=0}^\infty M^{[2]}_m \omega^{k_1 (2-m)} z^{-\frac{m}{r+1}} & \sum_{m=0}^\infty M^{[2]}_m \omega^{k_2 (2-m)} z^{-\frac{m}{r+1}} & \cdots & \sum_{m=0}^\infty M^{[2]}_m \omega^{k_{r+1} (2-m)} z^{-\frac{m}{r+1}}\\ \vdots & \vdots & & \vdots\\ \sum_{m=0}^\infty M^{[r]}_m \omega^{k_1 (r-m)} z^{-\frac{m}{r+1}} & \sum_{m=0}^\infty M^{[r]}_m \omega^{k_2 (r-m)} z^{-\frac{m}{r+1}} & \cdots & \sum_{m=0}^\infty M^{[r]}_m \omega^{k_{r+1} (r-m)} z^{-\frac{m}{r+1}} \end{pmatrix}\\ \left(\operatorname{diag}\left(1, -1, 1,\ldots, (-1)^r\right)\right)^r \left(\bigoplus_{j=1}^{r+1} e^{2\pi i(\beta+\eta) k_j}\right) \bigoplus_{j=1}^{r+1} e^{-(r+1) \omega^{k_j} z^\frac{1}{r+1}}. \end{multline} Using $M_0^{[l]}=(-1)^l$ we see that the matrix in the second line of \eqref{ch4:eq:PsialphaMatrix} equals \begin{multline*} \operatorname{diag}(1,-1,1,\ldots, (-1)^r) \begin{pmatrix} 1 & 1 & \cdots & 1\\ \omega^{k_1} & \omega^{k_2} & \cdots & \omega^{k_{r+1}}\\ \omega^{2 k_1} & \omega^{2 k_2} & \cdots & \omega^{2 k_{r+1}}\\ \vdots & \vdots & & \vdots\\ \omega^{r k_1} & \omega^{r k_2} & \cdots & \omega^{r k_{r+1}} \end{pmatrix}\\ + \operatorname{diag}(M^{[0]}_1,M^{[1]}_1,\ldots) \begin{pmatrix} \omega^{-k_1} & \omega^{-k_2} & \cdots & \omega^{-k_{r+1}}\\ 1 & 1 & \cdots & 1\\ \omega^{k_1} & \omega^{k_2} & \cdots & \omega^{k_{r+1}}\\ \vdots & \vdots & & \vdots\\ \omega^{(r-1) k_1} & \omega^{(r-1) k_2} & \cdots & \omega^{(r-1) k_{r+1}} \end{pmatrix} z^{-\frac{1}{r+1}} + \ldots \end{multline*} Then there exist coefficients $\mathring A_{\alpha,0}=\mathbb I$ and $\mathring A_{\alpha,1}, \mathring A_{\alpha,2},\ldots$ such that the matrix in the second line of \eqref{ch4:eq:PsialphaMatrix} equals as a formal series \begin{align*} \operatorname{diag}(1,-1,1,\ldots, (-1)^r) \begin{pmatrix} 1 & 1 & \cdots & 1\\ \omega^{k_1} & \omega^{k_2} & \cdots & \omega^{k_{r+1}}\\ \omega^{2 k_1} & \omega^{2 k_2} & \cdots & \omega^{2 k_{r+1}}\\ \vdots & \vdots & & \vdots\\ \omega^{r k_1} & \omega^{r k_2} & \cdots & \omega^{r k_{r+1}} \end{pmatrix} \sum_{m=0}^\infty \mathring A_{\alpha,m} z^{-\frac{m}{r+1}}. \end{align*} Using that $\eta+\frac{r}{2(r+1)} = -\frac{r}{r+1}\beta$ (see \eqref{ch4:eq:defTheta}) and comparing with Definition \ref{ch4:def:Lalpha}, we see now that \eqref{ch4:eq:PsialphaMatrix} turns into \begin{align*} \Psi_\alpha(z) \sim L_\alpha(z) \sum_{m=0}^\infty A_{\alpha,m} z^{-\frac{m}{r+1}} \bigoplus_{j=1}^{r+1} e^{-(r+1) \omega^{k_j} z^\frac{1}{r+1}}, \end{align*} as $z\to\infty$, where \begin{multline*} A_{\alpha,m} = \left(\operatorname{diag}\left(1, -1, 1,\ldots, (-1)^r\right)\right)^r \left(\bigoplus_{j=1}^{r+1} e^{2\pi i(\beta+\eta) k_j}\right)^{-1} \mathring A_{\alpha,m}\\ \left(\bigoplus_{j=1}^{r+1} e^{2\pi i(\beta+\eta) k_j}\right) \left(\operatorname{diag}\left(1, -1, 1,\ldots, (-1)^r\right)\right)^r. 
\end{multline*} As one can verify, we get a similar expression in the three other quadrants, with the same $A_{\alpha,m}$. \end{proof} In fact, Lemma \ref{ch4:prop:behavInftyPsi} can be improved upon. The next lemma, combined with Lemma \ref{ch4:prop:PsiandQ}, shows that $\Psi_\alpha$ solves RH-$\Psi$. \begin{lemma} \label{ch4:prop:behavInftyPsi2} For $\pm\operatorname{Im}(z)>0$ we have as $z\to\infty$ the asymptotic expansion \begin{align} \label{ch4:eq:propbehavInftyPsi2} \Psi_\alpha(z) \sim \sum_{m=0}^\infty \frac{C_{\alpha,m}}{z^m} L_\alpha(z) \bigoplus_{j=1}^{r+1} e^{-(r+1) \omega^{\pm k_j} z^\frac{1}{r+1}} \end{align} for $(r+1)\times (r+1)$ matrices $C_{\alpha,0}=\mathbb I$ and $C_{\alpha,1}, C_{\alpha,2}, \ldots$ that depend only on $\alpha$. \end{lemma} \begin{proof} It is a simple exercise to verify that \begin{align*} \Psi_\alpha(z) &\bigoplus_{j=1}^{r+1} e^{(r+1) \omega^{\pm k_j} z^\frac{1}{r+1}}, & \pm\operatorname{Im}(z)>0, \end{align*} has the same jumps as $\Psi_\alpha(z)$ has on the positive and negative real axis. Then it follows from \text{Proposition \ref{ch4:eq:sameJumpsLPsi}} that \begin{align*} L_\alpha(z)^{-1} & \Psi_\alpha(z) \bigoplus_{j=1}^{r+1} e^{(r+1) \omega^{\pm k_j} z^\frac{1}{r+1}}, & \pm\operatorname{Im}(z)>0, \end{align*} has no jumps on the positive and negative real axis. This can only be true if the expansion in \eqref{ch4:eq:prop:behavInftyPsi} consists solely of integer powers of $z$, i.e., we have for $\pm\operatorname{Im}(z)>0$ \begin{align*} \Psi_\alpha(z) \sim L_\alpha(z) \sum_{m=0}^\infty \frac{A_{\alpha,(r+1) m}}{z^{m}} \bigoplus_{j=1}^{r+1} e^{-(r+1) \omega^{\pm k_j} z^\frac{1}{r+1}} \end{align*} as $z\to\infty$. We can rewrite this as \begin{align} \nonumber \Psi_\alpha(z) \sim& L_\alpha(z) \sum_{m=0}^\infty \frac{A_{\alpha,(r+1) m}}{z^{m}} L_\alpha(z)^{-1}\\ \label{ch4:eq:PsialphaSimintegerpowers} & \hspace{2cm} L_\alpha(z) \bigoplus_{j=1}^{r+1} e^{-(r+1) \omega^{\pm k_j} z^\frac{1}{r+1}} \end{align} as $z\to\infty$, for $\pm\operatorname{Im}(z)>0$. For any positive integer $k$ we know that $L_\alpha(z)^{-1}$ and \begin{align*} \left(\mathbb I + \frac{A_{\alpha,r+1}}{z}+ \frac{A_{\alpha,2(r+1)}}{z^2} +\ldots+\frac{A_{\alpha,k(r+1)}}{z^k}\right) L_\alpha(z)^{-1} \end{align*} have the same jumps on the positive and negative real axis. Thus it follows that \begin{align*} L_\alpha(z) \left(\mathbb I + \frac{A_{\alpha,r+1}}{z}+ \frac{A_{\alpha,2(r+1)}}{z^2} +\ldots+\frac{A_{\alpha,k(r+1)}}{z^k}\right) L_\alpha(z)^{-1} \end{align*} has no jumps. This means that there exist $C_{\alpha,0}=\mathbb I$ and $C_{\alpha,1}, C_{\alpha,2},\ldots$ such that \begin{align*} L_\alpha(z) \sum_{m=0}^\infty \frac{A_{\alpha,(r+1) m}}{z^{m}} L_\alpha(z)^{-1} = \sum_{m=0}^\infty \frac{C_{\alpha,m}}{z^m} \end{align*} as formal series, and, inserting this in \eqref{ch4:eq:PsialphaSimintegerpowers}, we are done. \end{proof} \subsubsection{Asymptotic behavior of $\Psi^{-1}$ as $z\to 0$} \label{ch4:sec:inversePsi} In principle, we may use the approach from \cite{KuMo} to find an expression for the inverse of $\Psi_\alpha$. The explicit construction then uses \begin{align*} G_{0,r+1}^{r+1,0}\left(\left. \begin{array}{c} -\\ 0,\alpha,\alpha+\frac{1}{r},\ldots,\alpha+\frac{r-1}{r}\end{array}\right| -z\right) \end{align*} and its analytic continuations along circular arcs. In \cite{KuMo} we used the explicit form of $\Psi_\alpha(z)^{-1}$ to eventually show that the scaling limit for the correlation kernel coincides with \eqref{ch4:eq:scalingLimitIK}.
This was technically not necessary (although it provided a nice verification), as there is a faster way to show this. Since the associated formulae become rather unwieldy for $r>2$, we opt to omit the explicit form of $\Psi_\alpha(z)^{-1}$. All that really turns out to be important to us about the inverse of $\Psi_\alpha$ is its asymptotic behavior as $z\to 0$ in the left half-plane. \begin{lemma} For $\operatorname{Re}(z)<0$ we have as $z\to 0$ that \begin{align} \label{ch4:eq:behavPsiinv0} \Psi_\alpha(z)^{-1} = \left\{\begin{array}{ll} \mathcal O \begin{pmatrix} 1 & 1 & \hdots & 1\\ z^\alpha & z^\alpha & \hdots & z^\alpha\\ \vdots & & & \vdots\\ z^\alpha & z^\alpha & \hdots & z^\alpha \end{pmatrix}, & \alpha\neq 0,\\ \mathcal O \begin{pmatrix} 1 & 1 & \hdots & 1\\ \log z & \log z & \hdots & \log z\\ \vdots & & & \vdots\\ \log z & \log z & \hdots & \log z \end{pmatrix}, & \alpha= 0. \end{array}\right. \end{align} \end{lemma} \begin{proof} We only prove it for $\alpha\neq 0$. In the $j$-th quadrant we have a connection matrix $\Gamma_j$ such that \begin{align} \label{ch4:eq:behavPsiinv0pre} \Psi_\alpha(z) \Gamma_j = M_j(z) \operatorname{diag}(1,z^{-\alpha},z^{-\alpha-\frac{1}{r}},\ldots,z^{-\alpha-\frac{r-1}{r}}) \end{align} for some non-singular analytic function $M_j(z)$. This follows from the fact that the indicial equation associated to \eqref{ch4:eq:MeijerGdvgl} has solutions $0, -\alpha, -\alpha-\frac{1}{r},\ldots,-\alpha-\frac{r-1}{r}$ at $z=0$. In fact, it follows from the asymptotics of $\Psi_\alpha(z)$ as $z\to \infty$, the fact that its jumps all have determinant $1$, and an application of Liouville's theorem that $M_j$ has constant determinant. For $z$ in the left half-plane, i.e., for $j=2$ and $j=3$, we have by Lemma \ref{ch4:prop:psi0entire} that \begin{align*} \Gamma_j = 1 \oplus \frac{1}{r+1} U, \end{align*} where $U$ is some $r\times r$ matrix consisting of powers of $\Omega$ and powers of $e^{2\pi i\alpha}$, which we could determine explicitly in principle. We conclude that as $z\to 0$ we have \begin{align*} \Psi_\alpha(z)^{-1} &= \Gamma_j \operatorname{diag}(1,z^{\alpha},z^{\alpha+\frac{1}{r}},\ldots,z^{\alpha+\frac{r-1}{r}}) M_j(z)^{-1}\\ &= \left(1 \oplus \frac{1}{r+1} U\right) \mathcal O\begin{pmatrix} 1 & 1 & \hdots & 1\\ z^\alpha & z^\alpha & \hdots & z^\alpha\\ z^{\alpha+\frac{1}{r}} & z^{\alpha+\frac{1}{r}} & \hdots & z^{\alpha+\frac{1}{r}}\\ \vdots & & & \vdots\\ z^{\alpha+\frac{r-1}{r}} & z^{\alpha+\frac{r-1}{r}} & \hdots & z^{\alpha+\frac{r-1}{r}} \end{pmatrix}\\ &= \mathcal O\begin{pmatrix} 1 & 1 & \hdots & 1\\ z^\alpha & z^\alpha & \hdots & z^\alpha\\ \vdots & & & \vdots\\ z^\alpha & z^\alpha & \hdots & z^\alpha \end{pmatrix}. \end{align*} \end{proof} \subsubsection{Uniqueness of $\Psi_\alpha$} In this section we prove that $\Psi_\alpha$ is the only solution to RH-$\Psi$. \begin{theorem} \label{ch4:thm:PsiaexistUnique} $\Psi_\alpha$ is the unique solution to RH-$\Psi$. \end{theorem} \begin{proof} It follows from Lemma \ref{ch4:prop:PsiandQ} and Lemma \ref{ch4:prop:behavInftyPsi2} (see \eqref{ch4:RHPsi3rewritten} also) that $\Psi_\alpha$ does indeed solve RH-$\Psi$. To prove uniqueness, suppose that $\Psi(z)$ is a solution to RH-$\Psi$. Then $\Psi(z)\Psi_\alpha(z)^{-1}$ has no jumps and it behaves like $\mathbb I+\mathcal O(1/z)$ as $z\to \infty$.
For $z$ in the left half-plane we have by \eqref{ch4:eq:behavPsiinv0} and RH-$\Psi$4 that \begin{align*} \Psi(z)\Psi_\alpha(z)^{-1} = \left\{\begin{array}{lr} \mathcal O(z^\alpha), & -1<\alpha<-1+\frac{1}{r},\\ \mathcal O(z^{-1+\frac{1}{r}} \log z), & \alpha = -1+\frac{1}{r},\\ \mathcal O(z^{-1+\frac{1}{r}}), & \alpha>-1+\frac{1}{r}, \alpha\neq 0,\\ \mathcal O(z^{-1+\frac{1}{r}}\log z), & \alpha=0, \end{array}\right. \end{align*} as $z\to 0$. Either way, we conclude that the singularity at $z=0$ is removable. Then Liouville's theorem shows that $\Psi(z)\Psi_\alpha(z)^{-1}=\mathbb I$, and we are done. \end{proof} \subsection{Definition of the local parametrix at the hard edge} \label{ch4:sec:defLocalP} \subsubsection{Definition of the initial local parametrix $\mathring P$} In what follows, we will assume that the lips of the lens are slightly deformed around $z=0$, such that, in $D(0,r_0)$, $f$ maps the lips of the lens into the positive and negative imaginary axis. Notice that we indeed have the freedom to do this. Our initial local parametrix is defined as follows. \begin{definition} \label{ch4:def:mathringP} With $\Psi, f(z)$ and $D_0$ as in Definition \ref{ch4:def:Psi}, Definition \ref{ch4:def:conformalf} and Definition \ref{ch4:def:D0}, we define the initial local parametrix by \begin{align} \label{ch4:eq:defMathringP} \mathring P(z) &= \mathring E(z) \Psi\left(n^{r+1} f(z)\right) \operatorname{diag}(1,z^\beta,\ldots,z^\beta) D_0(z)^{-n}, & z\in D(0,r_0), \end{align} where we take \begin{align} \label{ch4:def:mathringE} \mathring E(z) = n^{-r\beta} \left(\frac{f(z)}{z}\right)^{-\frac{r\beta}{r+1}}. \end{align} \end{definition} \begin{proposition} $\mathring P$, as in Definition \ref{ch4:def:mathringP}, satisfies RH-$\mathring{\text{P}}$. \end{proposition} \begin{proof} By Proposition \ref{ch4:prop:conformalf} we know that composing $\Psi$ with $f$ does not change its jump matrices or asymptotic behavior as $z\to 0$. Then by Lemma \ref{ch4:prop:PsiandQ} and Proposition \ref{ch4:prop:RHPfromPtoQ} we know that \begin{align*} \Psi_\alpha(n^b f(z)) \operatorname{diag}(1,z^\beta,\ldots,z^\beta) D_0(z)^{-n} \end{align*} satisfies RH-$\mathring{\text{P}}$. By Proposition \ref{ch4:prop:conformalf} we also know that $\mathring E(z)$ is analytic on $D(0,r_0)$. Then multiplication on the left with $\mathring E(z)$ does not change any of the conditions in RH-$\mathring{\text{P}}$. \end{proof} \subsubsection{The double matching} \label{ch4:sec:matching} As indicated in Section \ref{ch4:sec:localParamSetUp} we will apply the double matching procedure from \cite{Mo}. In this section we will show that the conditions of \cite[Theorem 1.2]{Mo} can be met, we repeat this theorem for convenience. \begin{theorem}\label{lem:matching} Let $\mathring P$ and $N$ be defined in a neighborhood of $\overline{D(0,\rho)}$ for some $\rho>0$. These are matrix-valued functions of size $m\times m$ that may vary with $n$. Let $a, b, c, d, e \geq 0$ satisfy \begin{align} \label{eq:assumpabcde} a\leq e < b\quad\quad\text{ and }\quad\quad d<\min(b,c). 
\end{align} Suppose that uniformly for $z\in\partial D(0,n^{-a})$ as $n\to\infty$ \begin{align}\label{eq:almostMatching} \mathring P(z) N(z)^{-1} E(z) &= \mathbb I + \frac{C(z)}{n^b z} + \mathcal O\left(n^{-c}\right), \end{align} where $C$ and $E$ are $m\times m$ functions in a neighborhood of $\overline{D(0,\rho)}$ that may vary with $n$, and \begin{itemize} \item[(i)] $C$ is meromorphic with only a possible pole at $z=0$, whose order is bounded by some non-negative integer $p$ for all $n$, and $C$ is uniformly bounded for $z\in \partial D(0,n^{-a})$ as $n\to\infty$, \item[(ii)] $E$ is non-singular, analytic, and uniformly for $z,w\in \partial D(0,n^{-a})$ we have as $n\to\infty$ \begin{align} \label{eq:assumpE} E(z) = \mathcal O(n^\frac{d}{2}), \quad\quad E(z)^{-1} = \mathcal O(n^{\frac{d}{2}}),\quad\quad \text{ and }\quad E(z)^{-1} E(w) &= \mathbb I+\mathcal O(n^e (z-w)). \end{align} \end{itemize} Then there are non-singular analytic functions ${E_n^0:\overline{D(0,n^{-a})}\to\mathbb C^{m\times m}}$, ${E_n^\infty:\overline{A(0;n^{-a},\infty)}\to\mathbb C^{m\times m}}$ such that as $n\to\infty$ \begin{align*} E_n^0(z) \mathring P(z) &= \left(\mathbb I + \mathcal O(n^{d-c})\right) E_n^\infty(z) N(z), &\text{uniformly for }z\in \partial D(0,n^{-a}),\\ E_n^\infty(z) &= \mathbb I + \mathcal O(n^{d-b}), &\text{uniformly for }z\in \partial D(0,\rho). \end{align*} \end{theorem} Obviously, our present situation requires that one takes $m=r+1$. The main objective of the current section is to show that the assumptions of Theorem \ref{lem:matching}, and in particular the estimates in (ii), hold for some choice of the constants $a, b, c, d$ and $e$. We shall determine the explicit values of the constants $a, b, c, d$ and $e$ as we go along. As indicated before in \text{Section \ref{ch4:sec:localParamSetUp}}, we want the double matching on the circle with radius $\rho = r_0$ and the circle of radius $r_n$. We remind the reader that \begin{align*} r_n &= n^{-\frac{r+1}{2}}, & n=1,2,\ldots \end{align*} As mentioned in Section \ref{ch4:sec:localParamSetUp}, we can use the analytic prefactors $E_n^0$ and $E_n^\infty$ from Theorem \ref{lem:matching} and construct the local parametrix $P$ as \begin{align*} P(z) = \left\{\begin{array}{ll} E_n^0(z) \mathring P(z), & z\in D(0,r_n),\\ E_n^\infty(z) N(z), & z\in A(0;r_n,r_0), \end{array}\right. \end{align*} and $P$ will then satisfy a matching condition on both the inner and the outer circle.\\ We know that $|n^{r+1} f(z)|\to \infty$ uniformly for $z\in \partial D(0,r_n)$ as $n\to\infty$. Then, by Lemma \ref{ch4:prop:behavInftyPsi2} and \eqref{ch4:eq:defMathringP}, we have uniformly for $z\in\partial D(0,r_n)$ that \begin{align} \label{ch4:eq:almostMatching} \mathring P(z) N(z)^{-1} \sim \left(\mathbb I + \frac{C_{\alpha,1}}{n^{r+1} f(z)} +\frac{C_{\alpha,2}}{(n^{r+1}f(z))^2} + \ldots\right) E(z)^{-1}, \end{align} as $n\to\infty$, where $E$ is defined as follows. \begin{definition} \label{ch4:def:E} For $z\in D(0,r_0)$, we define the function \begin{align} \label{ch4:eq:defE} E(z) = \mathring E(z)^{-1} N(z) \operatorname{diag}(1,z^{-\beta},\ldots,z^{-\beta}) D_0(z)^n \left(\bigoplus_{j=1}^{r+1} e^{n (r+1) \omega^{k_j} f(z)^\frac{1}{r+1}}\right) L_\alpha\left(n^{r+1}f(z)\right)^{-1}. \end{align} \end{definition} Notice that $E$ depends on $n$. 
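The choice of radii in the double matching can be sanity-checked with a small exponent computation. The following minimal Python sketch (an illustration added for the reader, not part of the analysis; it only uses $f(z)\sim f'(0)z$ as $z\to 0$) confirms that on the inner circle $|z|=r_n=n^{-\frac{r+1}{2}}$ the argument $n^{r+1}f(z)$ of $\Psi$ grows like $n^{\frac{r+1}{2}}$, so that the expansion of Lemma \ref{ch4:prop:behavInftyPsi2} is applicable there, and that each further term in \eqref{ch4:eq:almostMatching} is smaller by a factor of order $n^{-\frac{r+1}{2}}$.
\begin{verbatim}
# Exponent bookkeeping for the inner circle |z| = r_n (illustration only).
import sympy as sp

n, r = sp.symbols('n r', positive=True)

r_n = n**(-(r + 1)/2)                      # radius of the inner circle
arg_order = sp.powsimp(n**(r + 1) * r_n)   # size of n^{r+1} f(z) on |z| = r_n
gain = sp.powsimp(1/arg_order)             # decay gained per term of the expansion

print(arg_order)   # n**(r/2 + 1/2): of order n^{(r+1)/2}, which tends to infinity
print(gain)        # n**(-r/2 - 1/2): a factor n^{-(r+1)/2} per term
\end{verbatim}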
When $r=1$, i.e., when we have a $2\times 2$ RH analysis, the factor \begin{align*} D_0(z)^n \left(\bigoplus_{j=1}^{r+1} e^{n (r+1) \omega^{k_j} f(z)^\frac{1}{r+1}}\right) \end{align*} in \eqref{ch4:eq:defE} equals the unit matrix and, as it turns out, $E$ can then be used as an analytic prefactor (for an ordinary matching, that is). When the size of the RHP is larger, $E$ cannot serve as an analytic prefactor, unfortunately. See Section 1.3 in \cite{Mo} for more on where the difficulty of the matching comes from in larger size RHPs. The function $E$ is a central ingredient for the double matching procedure though. We prove some properties of $E$ in the next proposition and the two lemmas that follow. These statements together with their proofs are straightforward generalizations of their counterparts for $r=1$ in \cite{KuMo} (see Lemma 5.10(c) and \text{Lemma 5.13}). \begin{proposition} \label{ch4:prop:Eanalytic} $E(z)$ is an analytic non-singular function. \end{proposition} \begin{proof} It is clear that all the factors in the right-hand side of \eqref{ch4:eq:defE} are non-singular, hence $E$ is non-singular as well. As one can easily verify, $L_\alpha\left(n^{r+1}f(z)\right)$ and \begin{align*} L_\alpha(n^{r+1}f(z)) \bigoplus_{j=1}^{r+1} e^{-n (r+1) \omega^{k_j} f(z)^\frac{1}{r+1}} \end{align*} have the same jumps, namely those in RH-$\widetilde{\text{P}}$2 according to Proposition \ref{ch4:eq:sameJumpsLPsi}. On the other hand, we also know that \begin{align*} N(z) \operatorname{diag}(1,z^{-\beta},\ldots,z^{-\beta}) D_0(z)^n \end{align*} must have the same jumps on the positive and negative real axis as in RH-$\widetilde{\text{P}}$2. This is due to Proposition \ref{ch4:prop:RHPfromPtoQ} and the fact that $N$ has the same jumps as $\mathring P$ has on the positive and negative real axis. With these two insights, it follows that \begin{align*} N(z) \operatorname{diag}(1,z^{-\beta},\ldots,z^{-\beta}) D_0(z)^n \left(\bigoplus_{j=1}^{r+1} e^{n (r+1) \omega^{k_j} f(z)^\frac{1}{r+1}}\right) L_\alpha\left(n^{r+1}f(z)\right)^{-1} \end{align*} does not have any jumps. Then $E$ also does not have any jumps, by \eqref{ch4:eq:defE} and the analyticity of $\mathring E(z)^{-1}$ (see Proposition \ref{ch4:prop:conformalf}). This implies that $E$ has a Laurent series around $z=0$. It is clear from Definition \ref{ch4:def:Lalpha} that as $z\to 0$ \begin{align} \label{ch4:eq:behavzLalphaO} z^{-\frac{r}{r+1}\beta} L_\alpha(z)^{-1} = \mathcal O\left(z^{-\frac{r}{2(r+1)}}\right). \end{align} The factors $D_0(z)^n$ and \begin{align*} \bigoplus_{j=1}^{r+1} e^{n (r+1) \omega^{k_j} f(z)^\frac{1}{r+1}} \end{align*} are bounded as $z\to 0$, due to Proposition \ref{ch4:prop:fmanalytic}, Proposition \ref{ch4:prop:sumfm} and Definition \ref{ch4:def:conformalf}. Combining this with \eqref{ch4:eq:behavzLalphaO} and the asymptotic behavior of $N$ in \eqref{ch4:eq:behavNasz0}, we infer that \begin{align*} E(z) = \mathcal O\left(z^{-\frac{r}{2(r+1)}} z^{-\frac{r}{2(r+1)}}\right) = \mathcal O\left(z^{-1+\frac{1}{r+1}}\right) \end{align*} as $z\to 0$. Hence $E$ has a removable singularity at $z=0$ and the proposition follows. \end{proof} \begin{lemma} \label{ch4:eq:estimateEd} Uniformly for $z\in \overline{D(0,r_n)}$ we have as $n\to\infty$ \begin{align} \label{ch4:eq:eq:estimateEd} E(z) = \mathcal O(n^\frac{r}{2}) \quad \text{and} \quad E(z)^{-1} = \mathcal O(n^\frac{r}{2}). \end{align} \end{lemma} \begin{proof} We define the auxiliary function \begin{align} \label{ch4:eq:defwidetildeLalpha} \widetilde L_\alpha(z) = z^{\frac{r}{r+1}\beta} L_\alpha(z). 
\end{align} Then we have \begin{align} \label{ch4:eq:identitywidetildeLalpha3} L_\alpha(n^{r+1} f(z))^{-1} = n^{r\beta} \left(\frac{f(z)}{z}\right)^{\frac{r}{r+1}\beta} z^{\frac{r}{r+1}\beta} \widetilde L_\alpha(n^{r+1} f(z))^{-1}. \end{align} Furthermore, $\widetilde L_\alpha$ satisfies the identities \begin{align} \label{ch4:eq:identitywidetildeLalpha1} \widetilde L_\alpha\left(n^\frac{r+1}{2} z\right) &= \left(\bigoplus_{j=0}^r n^{-\frac{r}{4}+\frac{j}{2}}\right) \widetilde L_\alpha(z),\\ \label{ch4:eq:identitywidetildeLalpha2} \widetilde L_\alpha\left(n^{r+1} f(z)\right) &= \left(\bigoplus_{j=0}^r n^{-\frac{r}{4}+\frac{j}{2}}\right) \widetilde L_\alpha\left(n^\frac{r+1}{2} f(z)\right) \end{align} which is clear from Definition \ref{ch4:def:Lalpha}. Plugging \eqref{ch4:eq:identitywidetildeLalpha3}, \eqref{ch4:eq:identitywidetildeLalpha1} and \eqref{ch4:eq:identitywidetildeLalpha1} into the definition \eqref{ch4:eq:defE} of $E$, we can express $E$ as \begin{multline} \label{ch4:eq:rewriteE} E(z)=M_\alpha(z) \left(\bigoplus_{j=0}^r n^{-\frac{r}{4}+\frac{j}{2}}\right) \widetilde L_\alpha\left(n^\frac{r+1}{2} z\right) D_0(z)^n \\ \left(\bigoplus_{j=1}^{r+1} e^{n (r+1) \omega^{k_j} f(z)^\frac{1}{r+1}}\right) \widetilde L_\alpha\left(n^\frac{r+1}{2} f(z)\right)^{-1} \left(\bigoplus_{j=0}^r n^{\frac{r}{4}-\frac{j}{2}}\right), \end{multline} where \begin{align} \label{ch4:eq:defanaMalpha} M_\alpha(z) = N(z) \operatorname{diag}\left(z^\frac{r\beta}{r+1},z^{-\frac{\beta}{r+1}}, \ldots, z^{-\frac{\beta}{r+1}}\right) \widetilde L_\alpha(z)^{-1}. \end{align} Notice that $M_\alpha$ is an analytic function that does not depend on $n$. Indeed, this is because $N(z)$ and \begin{align*} \widetilde L_\alpha(z) \operatorname{diag}\left(z^{-\frac{r\beta}{r+1}},z^{\frac{\beta}{r+1}}, \ldots,z^{\frac{\beta}{r+1}}\right) = L_\alpha(z) \operatorname{diag}\left(1,z^\beta,\ldots,z^\beta\right) \end{align*} have the same jumps, as one may verify. Then it is trivial that $M_\alpha$ is $\mathcal O(1)$ uniformly for $z\in \partial D(0,r_n)$ as $n\to\infty$. Using \eqref{ch4:eq:defkj} it follows after some straightforward algebra that \begin{align*} \omega^{k_{l+1}} = -\omega^{(-1)^{l-1} (\frac{1}{2} + \lfloor \frac{l}{2}\rfloor)} \end{align*} for all $l=0,1,\ldots, r$. Then it follows from Proposition \ref{ch4:prop:fmanalytic}, Proposition \ref{ch4:prop:sumfm} and Definition \ref{ch4:def:D0} that \begin{align*} D_0(z)^n \left(\bigoplus_{j=1}^{r+1} e^{n (r+1) \omega^{k_j} f(z)^\frac{1}{r+1}}\right) &= \bigoplus_{j=0}^r \exp \left(n \sum_{m=2}^r \omega^{\pm (-1)^{l-1} (\frac{1}{2}+\lfloor\frac{l}{2}\rfloor) m} z^\frac{m}{r+1} f_m(z)\right)\\ &= \exp{\mathcal O\left(n z^\frac{2}{r+1}\right)} = \mathcal O(1) \end{align*} uniformly for $z\in\partial D(0,r_n)$ as $z\to 0$. We conclude that the factor \begin{align} \label{ch4:eq:defmathcalLalpha} \mathcal L_\alpha(z) = \widetilde L_\alpha\left(n^\frac{r+1}{2} z\right) D_0(z)^n \left(\bigoplus_{j=1}^{r+1} e^{n (r+1) \omega^{k_j} f(z)^\frac{1}{r+1}}\right) \widetilde L_\alpha\left(n^\frac{r+1}{2} f(z)\right)^{-1} \end{align} in \eqref{ch4:eq:rewriteE} is uniformly bounded for $z\in\partial D(0,r_n)$, which follows from Definition \ref{ch4:def:Lalpha} and the fact that both $n^\frac{r+1}{2} z$ and $n^\frac{r+1}{2} f(z)$ are of order $1$ on $\partial D(0,r_n)$. 
Then in view of \eqref{ch4:eq:rewriteE} and the boundedness of $M_\alpha(z)$ we have that \begin{align*} E(z) = \mathcal O\left(1\cdot n^\frac{r}{4} \cdot 1 \cdot n^\frac{r}{4}\right) = \mathcal O\left(n^\frac{r}{2}\right) \end{align*} uniformly for $z\in\partial D(0,r_n)$ as $z\to\infty$. By the maximum modulus principle the estimate also holds on $D(0,r_n)$. Hence we have the first estimate in \eqref{ch4:eq:eq:estimateEd}. The estimate for the inverse of $E$ follows in similar fashion. \end{proof} \begin{lemma} \label{ch4:eq:estimateEe} Uniformly for $z,w\in \overline{D(0,r_n)}$ we have as $n\to\infty$ \begin{align} \label{ch4:eq:eq:estimateEe} E(z)^{-1} E(w) = \mathbb I + \mathcal O\left(n^{r+\frac{1}{2}}(z-w)\right). \end{align} \end{lemma} \begin{proof} Using \eqref{ch4:eq:rewriteE} we find that \begin{multline} \label{ch4:eq:EzEwnn} E(z)^{-1} E(w) = \left(\bigoplus_{j=0}^r n^{-\frac{r}{4}+\frac{j}{2}}\right) \mathcal L_\alpha(z)^{-1} \left(\bigoplus_{j=0}^r n^{\frac{r}{4}-\frac{j}{2}}\right)\\ M_\alpha(z)^{-1} M_\alpha(w) \left(\bigoplus_{j=0}^r n^{-\frac{r}{4}+\frac{j}{2}}\right) \mathcal L_\alpha(w) \left(\bigoplus_{j=0}^r n^{\frac{r}{4}-\frac{j}{2}}\right), \end{multline} with $\mathcal L_\alpha$ as in \eqref{ch4:eq:defmathcalLalpha} and $M_\alpha$ as in \eqref{ch4:eq:defanaMalpha} above. Due to the analyticity of $M_\alpha$ we have the estimate \begin{align} \label{ch4:eq:nMMnIO} \left(\bigoplus_{j=0}^r n^{\frac{r}{4}-\frac{j}{2}}\right) M_\alpha(z)^{-1} M_\alpha(w) \left(\bigoplus_{j=0}^r n^{-\frac{r}{4}+\frac{j}{2}}\right) =\mathbb I + \mathcal O\left(n^\frac{r}{2}(z-w)\right), \end{align} uniformly for $z,w \in\partial D(0,r_n)$ as $n\to\infty$. As we proved before, $\mathcal L_\alpha$ is uniformly bounded on $\partial D(0,r_n)$. Then we have that \begin{align} \nonumber \mathcal L_\alpha(z)^{-1} &\left(\mathbb I + \mathcal O(n^\frac{r}{2}(z-w))\right) \mathcal L_\alpha(w)\\ &= \mathcal L_\alpha(z)^{-1} \mathcal L_\alpha(w) + \mathcal L_\alpha(z)^{-1} \mathcal O(n^\frac{r}{2}(z-w)) \mathcal L_\alpha(w)\\ \nonumber &= \mathbb I + \mathcal O\left(n^\frac{r+1}{2}(z-w)\right) + \mathcal O\left(n^\frac{r}{2}(z-w)\right)\\ \label{ch4:eq:LIOL} &= \mathbb I + \mathcal O\left(n^\frac{r+1}{2}(z-w)\right) \end{align} uniformly for $z,w\in\partial D(0,r_n)$ as $n\to\infty$. Here we have used a standard argument using Cauchy's integral formula to estimate $\mathcal L_\alpha(z)^{-1} \mathcal L_\alpha(w)$. Plugging \eqref{ch4:eq:nMMnIO} and \eqref{ch4:eq:LIOL} into \eqref{ch4:eq:EzEwnn}, we conclude that \begin{align*} E(z)^{-1} E(w) &= \left(\bigoplus_{j=0}^r n^{-\frac{r}{4}+\frac{j}{2}}\right) \left(\mathbb I + \mathcal O\left(n^\frac{r+1}{2}(z-w)\right)\right) \left(\bigoplus_{j=0}^r n^{\frac{r}{4}-\frac{j}{2}}\right)\\ \nonumber &= \mathbb I + \mathcal O\left(n^\frac{r}{4} n^\frac{r+1}{2}(z-w) n^\frac{r}{4}\right)\\ \nonumber &= \mathbb I + \mathcal O(n^{r+\frac{1}{2}}(z-w)) \end{align*} uniformly for $z,w\in\partial D\left(0,r_n\right)$ as $n\to\infty$, and we have arrived at \eqref{ch4:eq:eq:estimateEe}. By a double application of the maximum principle, applied to the analytic function $$(z,w)\mapsto \frac{E(z)^{-1} E(w) - \mathbb I}{z-w},$$ the estimate holds for $z,w\in\overline{D(0,r_n)}$. 
\end{proof} In line with Theorem \ref{lem:matching} we define the constants \begin{align} \label{ch4:eq:abcde} a= \frac{r+1}{2}, \quad b = r+1, \quad d = r, \quad \text{ and }\quad e = r + \frac{1}{2}, \end{align} and the meromorphic function \begin{align} \label{ch4:eq:defmeromorphicC} C(z) = \frac{z}{f(z)} \left(C_{\alpha,1} + \frac{C_{\alpha,2}}{(n^b f(z))} + \frac{C_{\alpha,3}}{(n^b f(z))^2}\right). \end{align} When $r=1$ or $r=2$ we are allowed to take fewer terms in the expansion in \eqref{ch4:eq:defmeromorphicC}. We will nevertheless fix the definition of $C$ with three terms as in \eqref{ch4:eq:defmeromorphicC}. Then by \eqref{ch4:eq:almostMatching} we have uniformly for $z\in\partial D(0,r_n)$ that \begin{align*} \mathring P(z) N(z)^{-1} = \left(\mathbb I + \frac{C(z)}{n^{r+1} z} + \mathcal O\left(n^{-c}\right)\right) E(z)^{-1} \end{align*} as $n\to\infty$, where \begin{align} \label{ch4:eq:defabc} c=2 (r+1). \end{align} Now all the requirements for Theorem \ref{lem:matching} are met. Hence we obtain analytic prefactors ${E_n^0 : D(0,r_n)\to \mathbb C}$ and $E_n^\infty : A(0;r_n,r_0)\to \mathbb C$ such that \begin{align} \label{ch4:eq:doubleMatching0} E_n^0(z) \mathring P(z) = \left(\mathbb I + \mathcal O\left(\frac{1}{n^{r+2}}\right)\right) E_n^\infty(z) N(z) \end{align} uniformly for $z\in \partial D(0,r_n)$ as $n\to\infty$, and \begin{align} \label{ch4:eq:doubleMatchingInfty} E_n^\infty(z) = \mathbb I + \mathcal O\left(\frac{1}{n}\right) \end{align} uniformly for $z\in \partial D(0,r_0)$ as $n\to\infty$. Analytic prefactors with the properties as in Theorem \ref{lem:matching} are not unique, but we fix them to be defined as in Section 2.1 in \cite{Mo}. The reason for this particular choice, is that it will allow us to apply Theorem 3.1 of \cite{Mo} later on when we calculate the scaling limit of the correlation kernel in Section \ref{sec:proofOfMainThm}. We omit the explicit formulae for $E_n^0(z)$ and $E_n^\infty(z)$ though, since such formulae are not insightful in my opinion, and they will not be relevant to us. \subsubsection{Definition of the local parametrix $P$ at the hard edge} We are now ready to fix the definition of the local parametrix $P$ at the origin. \begin{definition} \label{ch4:def:P} We define the local parametrix at the hard edge $z=0$ by \begin{align} \label{ch4:eq:defP} P(z) = \left\{\begin{array}{ll} E_n^0(z) \mathring P(z), & z\in D(0,r_n),\\ E_n^\infty(z) N(z), & z\in A(0;r_n,r_0). \end{array}\right. \end{align} \end{definition} Considering our discussion in Section \ref{ch4:sec:matching}, and \eqref{ch4:eq:doubleMatching0} and \eqref{ch4:eq:doubleMatchingInfty} in particular, we have the following corollary. \begin{corollary} \label{ch4:cor:doubleMatching} $P$, as defined in Definition \ref{ch4:def:P}, satisfies a double matching of the form: \begin{align*} P(z) N(z)^{-1} &= \mathbb I + \mathcal O\left(\frac{1}{n}\right), & \text{uniformly for }z\in \partial D(0,r_0),\\ P_+(z) P_-(z)^{-1} &= \mathbb I +\mathcal O\left(\frac{1}{n^{r+2}}\right), &\text{uniformly for }z\in \partial D(0,r_n), \end{align*} as $n\to\infty$. \end{corollary} \subsection{The local parametrix $Q$ at the soft edge $z=q$} The local parametrix problem around $q$ is defined on a disk around $q$. Without loss of generality we may assume that its radius is $r_0$, as before. \begin{rhproblem} \label{ch4:RHPfortildeP} \ \begin{description} \item[RH-Q1] $Q$ is analytic on $D(q,r_0) \setminus \Sigma_S$. \item[RH-Q2] $Q$ has the same jumps as $S$ has on $D(q,r_0) \setminus \Sigma_S$. 
\item[RH-Q3] $Q$ is bounded around $q$. \item[RH-Q4] $Q$ satisfies the matching condition: \begin{align} \label{ch4:eq:matchingQ} Q(z) N(z)^{-1} = \mathbb I + \mathcal O\left(\frac{1}{n}\right) \end{align} uniformly for $z\in\partial D(q,r_0)$ as $n\to \infty$. \end{description} \end{rhproblem} The construction of the local parametrix, with Airy functions, is standard and we omit the details. See \cite{Ku2} for an example where this is done for size $3\times 3$; it easily generalizes to size $(r+1)\times (r+1)$. \section{Final transformation and proof of the main theorem} \subsection{The final transformation $S\mapsto R$} With the global and local parametrices as before, we define the final transformation as \begin{align} \label{ch4:eq:defR} R(z) = \left\{\begin{array}{ll} S(z) P(z)^{-1}, & z\in D(0,r_0),\\ S(z) Q(z)^{-1}, & z\in D(q,r_0),\\ S(z) N(z)^{-1}, & \text{elsewhere}. \end{array}\right. \end{align} We remark that $R$ has a jump on $\partial D(0,r_n)$ and on the lips of the lens inside $A(0;r_n,r_0)$. This differs from the usual case, where an ordinary matching is used. Notice that there are no jumps inside $D(0,r_n)$ because the jumps of $S$ and $\mathring P$, and thus $P$, are the same there. Similarly, there are no jumps on $(-r_0,-r_n)$ and $(r_n,r_0)$ because the jumps of $S$ and $N$, and thus $P$, are the same there. See Figure \ref{ch4:FigR} for the corresponding jump contour $\Sigma_R$. \begin{figure} \caption{The jump contour $\Sigma_{R}$ for $R$.} \label{ch4:FigR} \end{figure} \begin{lemma} The singularity of $R$ at $z=0$ is removable. \end{lemma} \begin{proof} In the left half-plane we have, using \eqref{ch4:eq:defP} and \eqref{ch4:eq:defMathringP}, that \begin{align} \label{ch4:eq:Rformula} R(z) = S(z) \operatorname{diag}(1,z^{-\beta},\ldots,z^{-\beta}) D_0(z)^{-n} \Psi_\alpha(z)^{-1} E_n^0(z)^{-1}. \end{align} Also, using Proposition \ref{ch4:prop:sumfm} and the asymptotics in RH-S4, we have as $z\to 0$ \begin{align*} S(z) \operatorname{diag}(1,z^{-\beta},\ldots,z^{-\beta}) D_0(z)^{-n} = \mathcal O\begin{pmatrix} 1 & h_{-\alpha-\frac{r-1}{r}}(z) & h_{-\alpha-\frac{r-1}{r}}(z)\\ \vdots & & \vdots\\ 1 & h_{-\alpha-\frac{r-1}{r}}(z) & h_{-\alpha-\frac{r-1}{r}}(z)\end{pmatrix}. \end{align*} We remind the reader that $h_\alpha$ is defined in RH-Y4. Combining the above with \eqref{ch4:eq:Rformula} and \eqref{ch4:eq:behavPsiinv0} we infer, for $\alpha\neq 0$, that as $z\to 0$ \begin{align*} R(z) &= \mathcal O\begin{pmatrix} 1 & h_{-\alpha-\frac{r-1}{r}}(z) & h_{-\alpha-\frac{r-1}{r}}(z)\\ \vdots & & \vdots\\ 1 & h_{-\alpha-\frac{r-1}{r}}(z) & h_{-\alpha-\frac{r-1}{r}}(z)\end{pmatrix} \mathcal O\begin{pmatrix} h_\alpha(z) & \hdots & h_\alpha(z)\\ z^\alpha & \hdots & z^\alpha\\ \vdots & & \vdots\\ z^\alpha & \hdots & z^\alpha \end{pmatrix}\\ &= \mathcal O(h_\alpha(z)+z^\alpha h_{-\alpha-\frac{r-1}{r}}(z))\\ &= \mathcal O(h_\alpha(z) + z^{-\frac{r-1}{r}} h_{\alpha+\frac{r-1}{r}}(z)). \end{align*} A slightly different behavior holds for $\alpha=0$. By considering the cases separately, one finds that as $z\to 0$ \begin{align*} R(z) = \left\{\begin{array}{lr} \mathcal O(z^\alpha), & -1<\alpha<-1+\frac{1}{r},\\ \mathcal O(z^{-1+\frac{1}{r}} \log z), & \alpha = -1+\frac{1}{r},\\ \mathcal O(z^{-1+\frac{1}{r}}), & \alpha>-1+\frac{1}{r}, \alpha\neq 0,\\ \mathcal O(z^{-1+\frac{1}{r}}\log z), & \alpha=0. \end{array}\right. \end{align*} Either way, we must conclude that the singularity at $z=0$ is removable. \end{proof} The same is true for the singularity at $z=q$, although we omit the details. 
We conclude that $R$ is analytic on $\mathbb C\setminus \Sigma_R$. \begin{theorem} \label{ch4:thm:RtoI} (a) As $n\to\infty$ we have uniformly on the indicated contours that \begin{align} \label{ch4:eq:estimatesJumpsR1} R_+(z) &= R_-(z) \left(\mathbb I+\mathcal O\left(\frac{1}{n}\right)\right), \hspace{0.1cm}z\in \partial D(0,r_0) \cup \partial D(0,r_n) \cup \partial D(q,r_0),\\ \label{ch4:eq:estimatesJumpsR2} R_+(z) &= R_-(z) \left(\mathbb I + \mathcal O(e^{-c_1 \sqrt n})\right), \hspace{2.2cm} z\in \Delta_0^\pm \cap A(0;r_n,r_0),\\ \label{ch4:eq:estimatesJumpsR3} R_+(z) &= R_-(z) \left(\mathbb I + \mathcal O(e^{-c_2 n})\right), \hspace{2cm} z\text{ in the remaining parts,} \end{align} where $c_1, c_2$ are positive constants (see Figure \ref{ch4:FigR} also).\\ (b) We have as $n\to\infty$ that \begin{align} \label{ch4:eq:asympRn} R(z) = \mathbb I + \mathcal O\left(\frac{1}{n}\right) \end{align} uniformly for $z\in\mathbb C\setminus \Sigma_R$. \end{theorem} \begin{proof} (a) The estimates \eqref{ch4:eq:estimatesJumpsR1} for the jumps on the circles follow from the double matching around $z=0$ and the matching around $z=q$; see Corollary \ref{ch4:cor:doubleMatching} and \eqref{ch4:eq:matchingQ}. The jump on the lens inside $A(0;r_n,r_0)$ is, according to Definition \ref{ch4:def:P} and RH-S2, given by \begin{align*} \nonumber R_-(z)^{-1} R_+(z) &= E_n^\infty(z) N(z) S_-(z)^{-1} S_+(z) N(z)^{-1} E_n^\infty(z)^{-1}\\ &= \mathbb I + E_n^\infty(z) N(z) z^{-\beta} e^{2n\varphi(z)} E_{21} N(z)^{-1} E_n^\infty(z)^{-1}, \end{align*} where $E_{21}$ is the matrix whose only non-zero component is a $1$ in the second row and first column. The factors $E_n^\infty(z), N(z)$ and their inverses depend polynomially on $n$. To get the correct behavior \eqref{ch4:eq:estimatesJumpsR2}, it then suffices to show that there exists a $c>0$ such that \begin{align} \label{ch4:eq:nRephic} n \operatorname{Re}(\varphi(z)) \leq -c \sqrt n \end{align} uniformly for $z\in \Delta_0^\pm\cap A(0;r_n,r_0)$ as $n\to \infty$. To prove this, we remember from \eqref{ch4:eq:behavfr+1phi0} that for $\pm\operatorname{Im}(z)>0$ \begin{align*} \pm 2i \sum_{m=1}^r \sin\left(\frac{\pi m}{r+1}\right) z^\frac{m}{r+1} f_m(z) = (r+1) \varphi_0(z). \end{align*} Actually, \eqref{ch4:eq:behavfr+1phi0} was only stated for $z$ in the upper half-plane, but it is easy to see how to extend it. Then, using $\varphi_0(z) = \varphi(z)\pm \pi i$, we get uniformly for $z\in \Delta_0^\pm\cap A(0;r_n,r_0)$ that \begin{align*} (r+1) \operatorname{Re}(\varphi(z)) &= \pm 2\sin\left(\frac{\pi}{r+1}\right) \operatorname{Re}\left(i z^\frac{1}{r+1}\right) + \mathcal O\left(|z|^\frac{2}{r+1}\right)\\ &= - 2 \sin\left(\frac{\pi}{r+1}\right) \sin\left(\frac{\pi}{2(r+1)}\right) |z|^\frac{1}{r+1} + \mathcal O\left(|z|^\frac{2}{r+1}\right)\\ &\leq - \sin\left(\frac{\pi}{r+1}\right) \sin\left(\frac{\pi}{2(r+1)}\right) \frac{1}{\sqrt n} \end{align*} as $n\to\infty$. Hence we get \eqref{ch4:eq:nRephic} for a particular choice of $c$, and we conclude that the estimate \eqref{ch4:eq:estimatesJumpsR2} for the jump on $\Delta_0^\pm\cap A(0;r_n,r_0)$ holds for some $c_1>0$. The estimate \eqref{ch4:eq:estimatesJumpsR3} for the jump on $(q,\infty)$ follows from the variational equations and Assumption \ref{ch4:assump:varStrict}, and for the estimate on the remaining parts of the lips of the lens one uses a standard argument with the Cauchy-Riemann equations.\\ \noindent (b) This follows from (a) with standard arguments from Riemann-Hilbert theory. 
One may use arguments similar to those from Appendix A in \cite{BlKu2}. \end{proof} Theorem \ref{ch4:thm:RtoI}(b) is usually sufficient to obtain the scaling limit of the correlation kernel. In our case it will not be enough though, as we shall see in the next section. We present a stronger result in \text{Section \ref{sec:proofOfMainThm}}, that will allow us to calculate the scaling limit. This result comes from Theorem 3.1 of \cite{Mo}. \subsection{Rewriting of the correlation kernel} In this section we invert all the transformations of our RH analysis, with the goal of finding a relation between the correlation kernel and the bare parametrix $\Psi$ in particular. \begin{lemma} \label{lem:corKerRewrite} For $x,y\in (0,r_n)$ the correlation kernel can be written as \begin{multline} \label{eq:justbeforexnyn} K_{V,n}^{\alpha,\frac{1}{r}}(x,y) = \frac{e^{\frac{r n}{r+1} (V(x)-V(y))}}{2\pi i(x-y)} \\ \begin{pmatrix} - 1 & 1 & 0 & \cdots & 0\end{pmatrix} \Psi_{\alpha,+}\left(n^{r+1} f(y)\right)^{-1} E_n^0(y)^{-1} R(y)^{-1} R(x) E_n^0(x) \Psi_{\alpha,+}\left(n^{r+1} f(x)\right) \begin{pmatrix} 1 \\ 1 \\ 0 \\ \vdots \\ 0\end{pmatrix}. \end{multline} \end{lemma} \begin{proof} A simple calculation, where we invert the transformations $Y\mapsto X\mapsto T \mapsto S$, shows that for $z$ in $D(0,r_n)$ in the first quadrant \begin{align*} Y(z) \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0\end{pmatrix} = e^{-n\frac{r\ell}{r+1}} r^{-\frac{r}{2r+2}} e^{n g_{0}(z)} (1\oplus C_n) L S(z) \begin{pmatrix} 1 \\ z^{-\beta} e^{2 n\varphi(z)} \\ 0 \\ \vdots \\ 0\end{pmatrix} \end{align*} and \begin{multline*} \begin{pmatrix} 0 & w_\alpha(z) & w_{\alpha+\frac{1}{r}}(z) & \cdots \end{pmatrix} Y(z)^{-1} = \\ r^\frac{r}{2r+2} e^{-n\frac{\ell}{r+1}} w_\alpha(z) z^\frac{r-1}{2 r} e^{n (g_0(z)-g_1(z))} \begin{pmatrix} -z^{-\beta} e^{2n\varphi(z)} & 1 & 0 & \cdots & 0\end{pmatrix} S(z)^{-1} L^{-1} (1\oplus C_n^{-1}). \end{multline*} Then with the help of \eqref{ch4:eq:KinY} we can write the correlation kernel for $x,y\in (0,r_n)$ as \begin{multline*} K_{V,n}^{\alpha,\frac{1}{r}}(x,y) = \frac{1}{2\pi i(x-y)} |y|^\frac{r-1}{2r} w_\alpha(y) e^{-n \ell} e^{n (g_{0+}(x)+g_{0+}(y) - g_{1+}(y))} \\ \begin{pmatrix} -|y|^{-\beta} e^{2n\varphi_+(y)} & 1 & 0 & \cdots & 0\end{pmatrix} P_+(y)^{-1} R(y)^{-1} R(x) P_+(x) \begin{pmatrix} 1 \\ |x|^{-\beta} e^{2 n\varphi_+(x)} \\ 0 \\ \vdots \\ 0\end{pmatrix} \end{multline*} (This formula is also valid when $n$ is not divisible by $r$, see Proposition \ref{prop:corKerWidetildeY} in Appendix \ref{ch:appendixA}). Using \eqref{ch4:eq:defP} and \eqref{ch4:eq:defMathringP}, we can express this as \begin{multline*} K_{V,n}^{\alpha,\frac{1}{r}}(x,y) = \frac{1}{2\pi i(x-y)} |y|^{-\alpha} w_\alpha(y) e^{-n \ell} e^{n (g_{0+}(x)+g_{0+}(y) - g_{1+}(y))} \\ \begin{pmatrix} - D_{0+,00}(y)^n e^{2n\varphi_+(y)} & D_{0+,11}(y)^n & 0 & \cdots & 0\end{pmatrix} \Psi_{\alpha,+}\left(n^{r+1} f(y)\right)^{-1} E_n^0(y)^{-1} R(y)^{-1} R(x) E_n^0(x)\\ \Psi_{\alpha,+}\left(n^{r+1} f(x)\right) \begin{pmatrix} D_{0+,00}(x)^{-n} \\ D_{0+,11}(x)^{-n} e^{2 n\varphi_+(x)} \\ 0 \\ \vdots \\ 0\end{pmatrix}. \end{multline*} Now we use that $D_{0+,11}(z)=D_{0+,00}(z) e^{2 \varphi_{0,+}(z)}=D_{0+,00}(z) e^{2 \varphi_+(z)}$, which follows from Definition \ref{ch4:def:D0} and the relation $\varphi_0(z) = \varphi(z) \pm \pi i$. 
Then we obtain \begin{multline*} K_{V,n}^{\alpha,\frac{1}{r}}(x,y) = \frac{1}{2\pi i(x-y)} \frac{D_{0+,00}(y)^n}{D_{0+,00}(x)^n} w_0(y) e^{-n \ell} e^{n (g_{0+}(x)+g_{0+}(y) - g_{1+}(y)+2\varphi_+(y))} \\ \begin{pmatrix} - 1 & 1 & 0 & \cdots & 0\end{pmatrix} \Psi_{\alpha,+}\left(n^{r+1} f(y)\right)^{-1} E_n^0(y)^{-1} R(y)^{-1} R(x) E_n^0(x) \Psi_{\alpha,+}\left(n^{r+1} f(x)\right) \begin{pmatrix} 1 \\ 1 \\ 0 \\ \vdots \\ 0\end{pmatrix}. \end{multline*} By \eqref{ch4:eq:defgfunctions} and \eqref{ch4:eq:defvarphi0} we have that \begin{align*} g_{0+}(y) - g_{1+}(y) + 2\varphi_{0,+}(y) = -g_{0+}(y) + V(y) + \ell. \end{align*} Hence we have \begin{multline*} K_{V,n}^{\alpha,\frac{1}{r}}(x,y) = \frac{1}{2\pi i(x-y)} \frac{D_{0+,00}(y)^n}{D_{0+,00}(x)^n} e^{n (g_{0+}(x)-g_{0+}(y))} \\ \begin{pmatrix} - 1 & 1 & 0 & \cdots & 0\end{pmatrix} \Psi_{\alpha,+}\left(n^{r+1} f(y)\right)^{-1} E_n^0(y)^{-1} R(y)^{-1} R(x) E_n^0(x) \Psi_{\alpha,+}\left(n^{r+1} f(x)\right) \begin{pmatrix} 1 \\ 1 \\ 0 \\ \vdots \\ 0\end{pmatrix}. \end{multline*} Now the Lemma follows, if we can show that \begin{align} \label{eq:D0+gVl} D_{0+,00}(x) = g_{0,+}(x) - \frac{r}{r+1} (V(x)+\ell). \end{align} Indeed, using \eqref{ch4:eq:defvarphi0}, \eqref{ch4:eq:defvarphij} and \eqref{ch4:eq:defvarphir-1} we find that \begin{align*} -2 \sum_{j=0}^{r-1} (r-j) \varphi_j(z) =& -2 r\varphi_0(z) + \sum_{j=1}^{r-1} (r-j) (-g_{j-1}(z) + 2 g_j(z) - g_{j+1}(z))\\ =& -2 r\varphi_0(z) - (r-1) g_0(z) +r g_1(z)\\ &+ \sum_{j=1}^{r-1} \left(-(r-j-1) + 2(r-j) - (r-j+1)\right) g_j(z)\\ =& - 2r \varphi_0(z) - (r-1) g_0(z) + r g_1(z)\\ =& (r+1) g_0(z) - r (V(z) + \ell) \end{align*} for any $z\in O_V$. Now using Definition \ref{ch4:def:D0}, we get \eqref{eq:D0+gVl}, and we are done. \end{proof} \subsection{Proof of the main theorem} \label{sec:proofOfMainThm} To obtain the scaling limit of the correlation kernel at the hard edge $z=0$ it will be convenient to introduce, for any $x,y>0$, the notation \begin{align} \label{eq:defxnyn} x_n = \frac{x}{f'(0) n^{r+1}}\quad\quad\text{and}\quad\quad y_n = \frac{y}{f'(0) n^{r+1}}. \end{align} Remember (see Proposition \ref{ch4:prop:conformalf}) that \begin{align*} f'(0) = \left(\frac{\pi c_{0,V}}{\sin\left(\frac{\pi}{r+1}\right)}\right)^{r+1}. \end{align*} Hence \eqref{eq:defxnyn} can also be written as \begin{align} \label{eq:defxnyn2} x_n = \frac{x}{(c n)^{r+1}}\quad\quad\text{and}\quad\quad y_n = \frac{y}{(c n)^{r+1}}, \hspace{2cm} c = \frac{\pi c_{0,V}}{\sin\left(\frac{\pi}{r+1}\right)}. \end{align} Notice that $x_n, y_n$ are in $(0,r_n)$ for $n$ big enough. In view of \eqref{eq:justbeforexnyn} we would like \begin{align*} E_n^0(y_n)^{-1} R(y_n)^{-1} R(x_n) E_n^0(x_n) \end{align*} to be close to the identity matrix. In \cite{KuMo} we used a method to show this when $r=2$. Following this method, we find using standard arguments with Cauchy's formula (see \cite[Lemma 6.5]{KuMo}) that \begin{align*} R(y_n)^{-1} R(x_n) = \mathbb I + \mathcal O\left(n^{-\frac{r+3}{2}}(x-y)\right) \end{align*} uniformly for $x,y$ in compact sets as $n\to\infty$. Then we find, using Lemma \ref{ch4:eq:estimateEd} and Lemma \ref{ch4:eq:estimateEe}, that uniformly for $x,y$ in compact sets \begin{align*} E_n^0(y_n)^{-1} R(y_n)^{-1} R(x_n) E_n^0(x_n) &= \mathbb I + \mathcal O\left(n^{r+\frac{1}{2}}(x_n-y_n)\right) + \mathcal O\left(n^\frac{r}{2} n^{-\frac{r+3}{2}} (x-y) n^\frac{r}{2}\right)\\ &= \mathbb I + \mathcal O\left(\frac{x-y}{\sqrt n}\right) + \mathcal O\left(n^\frac{r-3}{2}(x-y)\right) \end{align*} as $n\to\infty$. 
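As a quick sanity check of the exponent bookkeeping in the last display (again only an illustration, not part of the argument), the following Python snippet verifies that, after substituting $x_n-y_n=(x-y)/(f'(0)n^{r+1})$, the first error term is of order $n^{-\frac12}(x-y)$ while the second is of order $n^{\frac{r-3}{2}}(x-y)$, which no longer tends to zero once $r\geq 3$.
\begin{verbatim}
# Check of the two error terms in the display above (illustration only).
import sympy as sp

n, r = sp.symbols('n r', positive=True)

first = sp.powsimp(n**(r + sp.Rational(1, 2)) * n**(-(r + 1)))   # n^{r+1/2}(x_n - y_n)
second = sp.powsimp(n**(r/2) * n**(-(r + 3)/2) * n**(r/2))       # n^{r/2} n^{-(r+3)/2} n^{r/2}

print(first)    # n**(-1/2)
print(second)   # n**(r/2 - 3/2), i.e. n^{(r-3)/2}
print([second.subs(r, k) for k in (1, 2, 3, 4)])   # [1/n, n**(-1/2), 1, sqrt(n)]
\end{verbatim}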
For $r=1$ and $r=2$, this is enough, but for $r\geq 3$ we run into a problem. We conclude that we cannot use the approach from \cite{KuMo}, unfortunately. Instead we will use \cite[Theorem 3.1]{Mo}, which is specifically designed for scaling limits of correlation kernels. \begin{lemma} Uniformly for $x,y>0$ in compact sets we have as $n\to\infty$ that \begin{align} \label{eq:estimateERRE} E_n^0(y_n)^{-1} R(y_n)^{-1} R(x_n) E_n^0(x_n) = \mathbb I + \mathcal O\left(\frac{x-y}{\sqrt n}\right). \end{align} \end{lemma} \begin{proof} We argue that the conditions of the theorem are met. With $a, b, c, d, e$ as in \eqref{ch4:eq:abcde} and \eqref{ch4:eq:defabc}, we should have the inequality \begin{align*} 2(r+1) = c \geq \min\left(\frac{3}{2}a+d,\frac{3}{2}a+2d-e\right) = \frac{7}{4}r+\frac{1}{4}, \end{align*} which indeed holds. It is clear that $C$ is uniformly bounded on $\partial D(0,n^{-a})$. Another condition for \cite[Theorem 3.1]{Mo} to hold is that the jumps of $R$ should satisfy specific estimates, and that $R\to \mathbb I$ uniformly as $n\to\infty$. Indeed, these conditions are provided by Theorem \ref{ch4:thm:RtoI}, the estimates on the jumps by \eqref{ch4:eq:estimatesJumpsR1}-\eqref{ch4:eq:estimatesJumpsR3} and the large $n$ behavior of $R$ by \eqref{ch4:eq:asympRn}. That the inversion $s\mapsto s^{-1}$ is bounded in $L^2\left(\Sigma_R\setminus D(0,r_0)\right)$ as $n\to\infty$ is obvious. The remaining conditions for \cite[Theorem 3.1]{Mo} follow from standard Riemann-Hilbert theory (see Appendix A from \cite{BlKu2} for example). Hence we may apply \cite[Theorem 3.1]{Mo} and the lemma follows. \end{proof} We are ready to give the proof of the main result.\\ \noindent\textit{Proof of Theorem \ref{ch4:mainThm}.} By standard analysis arguments we have that \begin{align} \label{eq:err+1VV} e^{\frac{r n}{r+1} (V(x_n)-V(y_n))} = 1 + \mathcal O\left(\frac{x-y}{n^r}\right) \end{align} and \begin{align} \label{Psir+1fPsi} \Psi_{\alpha,+}\left(n^{r+1} f(y_n)\right)^{-1} \Psi_{\alpha,+}\left(n^{r+1} f(x_n)\right) = \Psi_{\alpha,+}(y)^{-1} \Psi_{\alpha,+}(x) + \mathcal O\left(\frac{x-y}{n^{r+1}}\right) \end{align} uniformly for $x,y$ in compact sets as $n\to\infty$. Plugging \eqref{eq:estimateERRE}, \eqref{eq:err+1VV} and \eqref{Psir+1fPsi} into \eqref{eq:justbeforexnyn} for $x=x_n$ and $y=y_n$, we get \begin{multline*} \frac{1}{f'(0) n^{r+1}}K_{V,n}^{\alpha,\frac{1}{r}}(x_n,y_n) = \frac{1}{2\pi i(x-y)} \begin{pmatrix} - 1 & 1 & 0 & \cdots & 0\end{pmatrix} \Psi_{\alpha,+}(y)^{-1} \Psi_{\alpha,+}(x) \begin{pmatrix} 1 \\ 1 \\ 0 \\ \vdots \\ 0\end{pmatrix}+\mathcal O\left(\frac{1}{\sqrt n}\right). \end{multline*} We know the value of $f'(0)$ from Proposition \ref{ch4:prop:conformalf} and we arrive at \begin{multline} \label{eq:scalingLimEnd} \lim_{n\to\infty} \frac{1}{(c n)^{r+1}}K_{V,n}^{\alpha,\frac{1}{r}}\left(\frac{x}{(c n)^{r+1}},\frac{y}{(c n)^{r+1}}\right) = \frac{1}{2\pi i(x-y)} \begin{pmatrix} - 1 & 1 & 0 & \cdots & 0\end{pmatrix} \Psi_{\alpha,+}(y)^{-1} \Psi_{\alpha,+}(x) \begin{pmatrix} 1 \\ 1 \\ 0 \\ \vdots \\ 0\end{pmatrix} \end{multline} uniformly for $x,y>0$ in compact sets, where $c$ is as in \eqref{eq:defxnyn2} (and as in Theorem \ref{ch4:mainThm}). We know that this scaling limit must coincide with the one for $V(x)=x$, a case which has already been treated by Borodin \cite{Bo} (for general $\theta>0$ actually). 
Since the limit, i.e., the right-hand side of \eqref{eq:scalingLimEnd}, is independent of $V$, the limit must hold for all one-cut $\frac{1}{r}$-regular external fields $V$. Theorem \ref{ch4:mainThm} is proved. \qed \phantomsection \cleardoublepage \appendix \section{Removal of the restriction that $r$ divides $n$}\label{ch:appendixA} We will show that the restriction that $n$ is divisible by $r$ can be removed. Let us fix an integer $p\in\{1,2,\ldots,r-1\}$. We consider all $n = r m+p$, where $m$ runs over the natural numbers. Then the corresponding RHP is the same as RH-Y from Section \ref{ch4:sec:theRHP}, but with RH-Y3 replaced by \begin{itemize} \item[RH-Y3] As $|z|\to\infty$ \begin{align} \label{RHY2widetilde} Y(z) = \left(\mathbb{I}+\mathcal{O}\left(\frac{1}{z}\right)\right) \left(1\oplus z^{-\frac{n+r-p}{r}} \mathbb{I}_{p\times p} \oplus z^{-\frac{n-p}{r}} \mathbb{I}_{(r-p)\times (r-p)}\right). \end{align} \end{itemize} Instead of defining the first transformation as in Section \ref{ch4:sec:firstTransformation} directly, we first apply an intermediate transformation $Y\mapsto \widetilde Y$. \begin{definition} We define \begin{align} \label{eq:defWidetildeY} \widetilde Y(x) = (1\oplus \sigma) Y(z) \left(1\oplus z^{\frac{r-p}{r}} \mathbb{I}_{p\times p} \oplus z^{-\frac{p}{r}} \mathbb{I}_{(r-p)\times (r-p)}\right) (1\oplus \sigma) \end{align} where $\sigma$ is the cyclic permutation matrix whose components are given by \begin{align} \sigma_{kj} = \left\{\begin{array}{ll} 1, & \text{if }k \equiv j+p \mod r,\\ 0, & \text{otherwise,} \end{array}\right. \end{align} where the indices range from $1$ to $r$. \end{definition} \begin{proposition} $\widetilde Y$ satisfies RH-Y from Section \ref{ch4:sec:theRHP}, but with an additional jump for $x<0$, given by \begin{align*} \widetilde Y_+(x) &= \widetilde Y_-(x) \left(1\oplus \Omega^p \mathbb{I}_{r\times r}\right). \end{align*} \end{proposition} \begin{proof} RH-Y3 is clear. Let $x>0$. We let the indices of the matrices range from $0$ to $1$. Then for $j\geq 1$ we have \begin{align} \nonumber &\left(\widetilde Y_-(x)^{-1} \widetilde Y_+(x)\right)_{0j}\\ &= \sum_{k=1}^{p} \left(Y_-(x)^{-1} Y_+(x)\right)_{0k} z^\frac{r-p}{r} \sigma_{kj} + \sum_{k=p+1}^r \left(Y_-(x)^{-1} Y_+(x)\right)_{0k} z^{-\frac{p}{r}}\sigma_{kj}\\ \label{eq:YsigmaJump0} &= \sum_{k=1}^{p} w_{\alpha+\frac{k-1+r-p}{r}}(x) \sigma_{kj} + \sum_{k=p+1}^r w_{\alpha+\frac{k-1-p}{r}}(x) \sigma_{kj}. \end{align} The term $w_{\alpha+\frac{j-1}{r}}(x)$ corresponds to either $k=j+p-r$ or $k=j+p$ depending on which one of the two is among $1,\ldots,r$. Then by the definition of $\sigma$ we infer that \eqref{eq:YsigmaJump0} equals $w_{\alpha+\frac{j-1}{r}}(x)$. It is clear that other non-zero components of the jump for $x>0$ must be on the diagonal. Then we have for $j=1,\ldots,r$ \begin{align*} \left(\widetilde Y_-(x)^{-1} \widetilde Y_+(x)\right)_{jj} &= \sum_{k=1}^{r} (\sigma^{-1})_{jk} \left(Y_-(x)^{-1} Y_+(x)\right)_{kl} \sigma_{kj}\\ &= \sum_{k=1}^{r} (\sigma^{-1})_{jk} \sigma_{kj} = 1. \end{align*} where we have ignored the factors $z^\frac{r-p}{r}$ and $z^{-\frac{p}{r}}$ from the beginning, because they must come from the same block and thus cancel each other. One can also verify that the upper-left component of the jump equals $1$. We conclude that we get the jump from RH-Y2 for $x>0$. Let us now look at $x<0$. 
Then we get \begin{align*} \widetilde Y_-(&x)^{-1} \widetilde Y_+(x)\\ &= (1\oplus \sigma)^{-1} \left(1\oplus e^{\frac{r-p}{r} \pi i}|x|^\frac{r-p}{r} \mathbb I_{p\times p} \oplus e^{-\frac{p}{r}\pi i} |x|^{-\frac{p}{r}}\mathbb I_{(r-p)\times (r-p)}\right)^{-1}\\ &\hspace{1cm}\left(1\oplus e^{-\frac{r-p}{r} \pi i}|x|^\frac{r-p}{r} \mathbb I_{p\times p} \oplus e^{\frac{p}{r}\pi i} |x|^{-\frac{p}{r}} \mathbb I_{(r-p)\times (r-p)}\right) (1\oplus \sigma)\\ &= (1\oplus \sigma)^{-1} \left(1\oplus \Omega^{p-r} \mathbb I_{p\times p}\oplus \Omega^p \mathbb I_{(r-p)\times (r-p)}\right) (1\oplus \sigma)\\ &= 1\oplus \Omega^p \sigma^{-1} \sigma\\ &= 1 \oplus \Omega^p \mathbb I_{r\times r}. \end{align*} \end{proof} \begin{proposition} \label{prop:corKerWidetildeY} The correlation kernel can be expressed as \begin{align*} K_{V,n}^{\alpha,\theta}(x,y)=\frac{1}{2\pi i(x-y)} \begin{pmatrix} 0 & w_\alpha(y) & w_{\alpha+\frac{1}{r}}(y) & \hdots & w_{\alpha+\frac{r-1}{r}}(y) \end{pmatrix} \widetilde Y_+^{-1}(y) \widetilde Y_+(x) \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \end{align*} \end{proposition} \begin{proof} We should show that the formula coincides with \eqref{ch4:eq:KinY}. It is clear that \begin{align} \label{eq:secondPartCorKer} \widetilde Y_+(x) \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} = Y_+(x) \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \end{align} Let us look at \begin{align*} \begin{pmatrix} 0 & w_\alpha(y) & w_{\alpha+\frac{1}{r}}(y) & \hdots & w_{\alpha+\frac{r-1}{r}}(y) \end{pmatrix} (1\oplus \sigma)^{-1} \left(1\oplus y^{\frac{r-p}{r}} \mathbb{I}_{p\times p} \oplus y^{-\frac{p}{r}} \mathbb{I}_{(r-p)\times (r-p)}\right)^{-1}. \end{align*} We label its components with indices from $0$ to $r$. Then its component with index $j$, when $j>0$, is given by \begin{align*} \sum_{k=1}^r w_{\alpha+\frac{k-1}{r}}(y) (\sigma^{-1})_{kj} \times \left\{\begin{array}{rl} y^\frac{p-r}{r}, & j\leq p,\\ y^\frac{p}{r}, & j>p \end{array}\right. = \left\{\begin{array}{rl} \sum_{k=1}^r w_{\alpha+\frac{k-1+p-r}{r}}(y) \sigma_{jk}, & j\leq p,\\ \sum_{k=1}^r w_{\alpha+\frac{k-1+p}{r}}(y) \sigma_{jk}, & j>p. \end{array}\right. \end{align*} Now, considering both cases $j\leq p$ and $j>p$, and using the definition of $\sigma$, we find that the component equals $w_{\alpha+\frac{j-1}{r}}(y)$. We conclude that \begin{multline} \label{eq:firstPartCorKer} \begin{pmatrix} 0 & w_\alpha(y) & w_{\alpha+\frac{1}{r}}(y) & \hdots & w_{\alpha+\frac{r-1}{r}}(y) \end{pmatrix} \widetilde Y_+^{-1}(y)\\ =\begin{pmatrix} 0 & w_\alpha(y) & w_{\alpha+\frac{1}{r}}(y) & \hdots & w_{\alpha+\frac{r-1}{r}}(y) \end{pmatrix} Y_+^{-1}(y) \end{multline} and the proposition now follows by inserting \eqref{eq:firstPartCorKer} and \eqref{eq:secondPartCorKer} in \eqref{ch4:eq:KinY}. \end{proof} From this point onwards the transformations of the RHP are carried out in the same way as before, i.e., we apply the transformation $\widetilde Y \mapsto X$ as in Definition \ref{ch4:def:X}, but with $Y$ replaced by $\widetilde Y$. After that we apply the transformation $X\mapsto T$, as in Definition \ref{ch4:def:T}. The factor $\Omega^p$ for the jump on the negative real axis will then drop out due to \eqref{ch4:g1g2jump}. Namely, for $x<0$, i.e., for odd $j$, we have \begin{align*} n(- g_{j-1,-}(x)+g_{j,-}(x)+g_{j,+}(x) - g_{j+1,+}(x)) &= -\frac{2\pi i n}{r} \equiv -\frac{2\pi i p}{r} \mod 2\pi i \end{align*} when $r \equiv 0 \mod 2$. When $r \equiv 1 \mod 2$ one also has to use \eqref{ch4:grjump2}. 
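As a small self-contained consistency check of the relabelling computations above (an illustration only; the helper function \texttt{check} is ours and not part of the text), the following Python snippet builds the cyclic permutation matrix $\sigma$ with $\sigma_{kj}=1$ iff $k\equiv j+p \bmod r$ and verifies that, for every column $j$, exactly one $k$ contributes and the weight index it picks up is $\frac{j-1}{r}$, as used in the proofs.
\begin{verbatim}
# Consistency check of the relabelling by the cyclic permutation sigma
# (illustration only; indices are 1-based as in the text).
from fractions import Fraction

def check(r, p):
    # sigma_{kj} = 1 iff k = j + p (mod r)
    sigma = [[1 if (k - (j + p)) % r == 0 else 0 for j in range(1, r + 1)]
             for k in range(1, r + 1)]
    for j in range(1, r + 1):
        hits = [k for k in range(1, r + 1) if sigma[k - 1][j - 1] == 1]
        if len(hits) != 1:
            return False
        k = hits[0]
        # weight index shift picked up in column j: (k-1+r-p)/r if k <= p,
        # and (k-1-p)/r if k > p, as in the jump computation for x > 0
        shift = Fraction(k - 1 + r - p, r) if k <= p else Fraction(k - 1 - p, r)
        if shift != Fraction(j - 1, r):
            return False
    return True

print(all(check(r, p) for r in range(2, 8) for p in range(1, r)))   # expect True
\end{verbatim}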
From this point onwards there are no differences with the case where $n$ is divisible by $r$, and the analysis of the RHP is identical. \phantomsection \cleardoublepage \section{The Meijer G-function}\label{ch:appendixB} The Meijer G-function is defined by the following contour integral: \begin{align} \label{appendixA:MeijerG} \MeijerG{m}{n}{p}{q}{a_1, \ldots, a_p}{b_1, \ldots, b_q}{z} = \frac{1}{2\pi i} \int_{L} \frac{\prod_{j=1}^{m} \Gamma(b_j +s) \prod_{j=1}^{n} \Gamma(1-a_j -s)}{\prod_{j=m+1}^{q} \Gamma(1-b_j -s) \prod_{j=n+1}^{p} \Gamma(a_j +s)} z^{-s} ds, \end{align} where $\Gamma$ denotes the gamma function and empty products in \eqref{appendixA:MeijerG} should be interpreted as $1$, as usual. The expressions involved satisfy the following conditions (see, e.g., \cite[Section 5.2]{Lu}). \begin{itemize} \item $m, n, p$, and $q$ are integers with $0\leq m\leq q$ and $0\leq n \leq p$. \item $a_i - b_j$ is not a positive integer, for all $i=1,\ldots, p$ and $j=1,\ldots,q$. \item There are three possible options for the path of integration $L$. \begin{itemize} \item[(i)] $L$ is a path from $+i\infty$ to $-i\infty$ so that all the poles of $\Gamma(b_j+s)$ lie to the left of the path, and all poles of $\Gamma(1-a_i-s)$ lie to the right of the path. This option works when $\delta= m+n-\frac{1}{2}(p+q)>0$ for $|\operatorname{arg}(z)|<\delta \pi$. \item[(ii)] $L$ is a loop, starting and ending at $-\infty$, and encircling all the poles of $\Gamma(b_j+s)$ in the negative direction, but no poles of $\Gamma(1-a_i-s)$. This option works when $q\geq 1$ and either $p<q$, or $p=q$ and $|z|<1$. \item[(iii)] $L$ is a loop, starting and ending at $+\infty$, and encircling all the poles of $\Gamma(1-a_i-s)$ in the positive direction, but no poles of $\Gamma(b_j+s)$. This option works when $p\geq 1$ and either $p>q$, or $p=q$ and $|z|>1$. \end{itemize} \end{itemize} Variations on these three choices of $L$ are possible. In this paper we have $p=n=0$ and $q=m=r+1$ for a positive integer $r$. Then we are in the situation of options (i) and (ii). Because $p=0$, we may actually consider the contour of option (ii) for all $|\operatorname{arg}(z)|<\frac{r+1}{2} \pi$. The Meijer G-function satisfies the following higher order linear differential equation, known as the \textit{generalized hypergeometric equation}. \begin{align*} \left[\vartheta \prod_{j=1}^{q-1} \left(\vartheta + b_j -1\right) - (-1)^{p-m-n} z \prod_{i=1}^p \left(\vartheta+a_i\right)\right] \psi(z) = 0, \quad\quad \vartheta = z \frac{d}{dz}. \end{align*} Here, an empty product is to be read as $1$. The Meijer G-function is related to the generalized hypergeometric function via \begin{multline*} \MeijerG{m}{n}{p}{q}{a_1, \ldots, a_p}{b_1, \ldots, b_q}{z} = \sum_{h=1}^m \frac{\prod_{j=1}^m \Gamma(b_j - b_h)^* \prod_{i=1}^n \Gamma(1+b_h-a_i)}{\prod_{j=m+1}^q \Gamma(1+b_h-b_j) \prod_{j=n+1}^p \Gamma(a_j-b_h)}\\ z^{b_h} {_{p} F_{q-1}}\left( {1+b_h - a_1, \ldots, 1+b_h-a_p \atop 1+b_h - b_1, \ldots, 1+b_h - b_h^*, \ldots, 1+b_h - b_q} ; (-1)^{p-m-n} z\right) \end{multline*} when all $b_j$ are pairwise distinct ($\log$ terms will enter if this is not the case) \cite{Lu}. Here the asterisk $^*$ denotes that the factor with $j=h$ should be suppressed in the product in the first line, and similarly for the parameters of the generalized hypergeometric functions in the last line. 
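The expansion of the Meijer G-function into generalized hypergeometric functions quoted above is easy to test numerically. The following sketch (an illustration only; the parameters $b_j$ and the evaluation point are arbitrary choices of ours) uses mpmath to compare its built-in Meijer G-function with the hypergeometric sum for $p=n=0$, $q=m=3$, the case corresponding to $r=2$.
\begin{verbatim}
# Numerical test of the Meijer G <-> hypergeometric expansion for p = n = 0,
# q = m = 3 (illustration only; the parameters below are arbitrary).
from mpmath import mp, meijerg, hyper, gamma, power

mp.dps = 30
b = [mp.mpf('0'), mp.mpf('0.3'), mp.mpf('0.7')]   # pairwise distinct b_j
z = mp.mpf('0.45')

lhs = meijerg([[], []], [b, []], z)               # G^{3,0}_{0,3}(- ; b | z)

rhs = mp.mpf('0')
for h in range(3):
    pref = mp.mpf('1')
    lower = []
    for j in range(3):
        if j != h:
            pref *= gamma(b[j] - b[h])            # the starred product, j != h
            lower.append(1 + b[h] - b[j])
    # (-1)^{p-m-n} z = -z here, and the series is a 0F2
    rhs += pref * power(z, b[h]) * hyper([], lower, -z)

print(lhs)
print(rhs)   # the two values should agree to high precision
\end{verbatim}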
Indeed, these generalized hypergeometric functions, multiplied by their factors $z^{b_h}$, form a basis of solutions of the generalized hypergeometric equation around $z=0$. In particular, the indicial equation at $z=0$ has solutions $b_1,\ldots,b_q$. \end{document}
\begin{document} \author{D. S. Lubinsky} \address{School of Mathematics\\ Georgia Institute of Technology\\ Atlanta, GA 30332-0160\\ USA.\\ [email protected]} \title[Universality Limits]{A New Approach to Universality Limits involving Orthogonal Polynomials} \date{January 10, 2007} \begin{abstract} We show how localization and smoothing techniques can be used to establish universality in the bulk of the spectrum for a fixed positive measure $\mu$ on $\left[ -1,1\right]$. Assume that $\mu$ is a regular measure, and is absolutely continuous in an open interval containing some closed subinterval $J$ of $\left( -1,1\right)$. Assume that in $J$, the absolutely continuous component $\mu^{\prime}$ is positive and continuous. Then universality in $J$ for $\mu$ follows from universality for the classical Legendre weight. We also establish universality in an $L_{p}$ sense under weaker assumptions on $\mu$. \end{abstract} \maketitle \section{Introduction and Results\protect\footnote{Research supported by NSF grant DMS0400446 and US-Israel BSF grant 2004353}} Let $\mu$ be a finite positive Borel measure on $\left( -1,1\right)$. Then we may define orthonormal polynomials \begin{equation*} p_{n}\left( x\right) =\gamma_{n}x^{n}+...,\qquad \gamma_{n}>0, \end{equation*} $n=0,1,2,...$, satisfying the orthonormality conditions \begin{equation*} \int_{-1}^{1}p_{n}p_{m}\,d\mu =\delta_{mn}. \end{equation*} These orthonormal polynomials satisfy a recurrence relation of the form \begin{equation} xp_{n}\left( x\right) =a_{n+1}p_{n+1}\left( x\right) +b_{n}p_{n}\left( x\right) +a_{n}p_{n-1}\left( x\right), \end{equation} where \begin{equation*} a_{n}=\frac{\gamma_{n-1}}{\gamma_{n}}>0\text{ and }b_{n}\in \mathbb{R}\text{, }n\geq 1, \end{equation*} and we use the convention $p_{-1}=0$. Throughout we use \begin{equation*} w=\frac{d\mu}{dx} \end{equation*} to denote the absolutely continuous part of $\mu$. A classic result of E.A. Rakhmanov \cite{Simon2005} asserts that if $w>0$ a.e.\ in $\left[ -1,1\right]$, then $\mu$ belongs to the Nevai-Blumenthal class $\mathcal{M}$, that is, \begin{equation} \lim_{n\rightarrow \infty }a_{n}=\frac{1}{2}\text{ and }\lim_{n\rightarrow \infty }b_{n}=0. \end{equation} We note that there are pure jump and pure singularly continuous measures in $\mathcal{M}$, despite the fact that one tends to associate it with weights that are positive a.e. A class of measures that contains $\mathcal{M}$ is the class of \textit{regular measures} on $\left[ -1,1\right]$ \cite{StahlTotik1992}, defined by the condition \begin{equation*} \lim_{n\rightarrow \infty }\gamma_{n}^{1/n}=2. \end{equation*} Orthogonal polynomials play an important role in random matrix theory \cite{Deift1999}, \cite{Mehta1991}. One of the key limits there involves the reproducing kernel \begin{equation} K_{n}\left( x,y\right) =\sum_{k=0}^{n-1}p_{k}\left( x\right) p_{k}\left( y\right). \end{equation} Because of the Christoffel-Darboux formula, it may also be expressed as \begin{equation} K_{n}\left( x,y\right) =a_{n}\frac{p_{n}\left( x\right) p_{n-1}\left( y\right) -p_{n-1}\left( x\right) p_{n}\left( y\right) }{x-y}. 
\end{equation} Define the normalized kernel \begin{equation} \widetilde{K}_{n}\left( x,y\right) =w\left( x\right) ^{1/2}w\left( y\right) ^{1/2}K_{n}\left( x,y\right). \end{equation} The simplest case of the universality law is the limit \begin{equation} \lim_{n\rightarrow \infty }\frac{\widetilde{K}_{n}\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+\frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right) }{\widetilde{K}_{n}\left( x,x\right) }=\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }. \end{equation} Typically this holds uniformly for $x$ in a compact subinterval of $\left( -1,1\right)$ and $a,b$ in compact subsets of the real line. Of course, when $a=b$, we interpret $\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }$ as $1$. We cannot hope to survey the vast body of results on universality limits here; the reader may consult \cite{Baiketal2006}, \cite{Deift1999}, \cite{Deiftetal1999}, \cite{Mehta1991} and the forthcoming proceedings of the conference devoted to the 60th birthday of Percy Deift. Our goal here is to present what we believe is a new approach, based on localization and smoothing. Our main result is:\newline \newline \textbf{Theorem 1.1}\newline \textit{Let $\mu$ be a finite positive Borel measure on $\left( -1,1\right)$ that is regular. Let $I$ be a closed subinterval of $\left( -1,1\right)$ such that $\mu$ is absolutely continuous in an open interval containing $I$. Assume that $w$ is positive and continuous in $I$. Then uniformly for $x\in I$ and $a,b$ in compact subsets of the real line, we have} \begin{equation} \lim_{n\rightarrow \infty }\frac{\widetilde{K}_{n}\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+\frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right) }{\widetilde{K}_{n}\left( x,x\right) }=\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }. \end{equation} Note that we allow the case where $I$ consists of just a single point.\newline \newline \textbf{Corollary 1.2}\newline \textit{Let $m\geq 1$ and} \begin{equation*} R_{m}\left( y_{1},y_{2},...,y_{m}\right) =\det \left( \widetilde{K}_{n}\left( y_{i},y_{j}\right) \right) _{i,j=1}^{m} \end{equation*} \textit{denote the $m$-point correlation function.}
\textit{Uniformly for $x\in I$, and for given $\left\{ \xi_{j}\right\} _{j=1}^{m}$, we have} \begin{eqnarray*} &&\lim_{n\rightarrow \infty }\frac{1}{\widetilde{K}_{n}\left( x,x\right) ^{m}}R_{m}\left( x+\frac{\xi_{1}}{\widetilde{K}_{n}\left( x,x\right) },x+\frac{\xi_{2}}{\widetilde{K}_{n}\left( x,x\right) },...,x+\frac{\xi_{m}}{\widetilde{K}_{n}\left( x,x\right) }\right) \\ &=&\det \left( \frac{\sin \pi \left( \xi_{i}-\xi_{j}\right) }{\pi \left( \xi_{i}-\xi_{j}\right) }\right) _{i,j=1}^{m}. \end{eqnarray*} \newline \newline \textbf{Corollary 1.3}\newline \textit{Let $r,s$ be non-negative integers and} \begin{equation} K_{n}^{\left( r,s\right) }\left( x,x\right) =\sum_{k=0}^{n-1}p_{k}^{\left( r\right) }\left( x\right) p_{k}^{\left( s\right) }\left( x\right). \end{equation} \textit{Let} \begin{equation} \tau_{r,s}=\left\{ \begin{array}{rr} 0, & r+s\text{ odd} \\ \frac{\left( -1\right) ^{\left( r+s\right) /2}}{r+s+1}, & r+s\text{ even} \end{array}\right.. \end{equation} \textit{Let $I^{\prime}$ be a closed subinterval of $I^{0}$. Then uniformly for $x\in I^{\prime}$,} \begin{equation} \lim_{n\rightarrow \infty }\frac{1}{n^{r+s+1}}K_{n}^{\left( r,s\right) }\left( x,x\right) =\frac{1}{\pi w\left( x\right) \left( 1-x^{2}\right) ^{\left( r+s+1\right) /2}}\tau_{r,s}. \end{equation} \newline \textbf{Remarks}\newline (a) We believe that the hypotheses above are the weakest imposed so far guaranteeing universality for a fixed weight on $\left( -1,1\right)$. Most hypotheses imposed so far involve analyticity, for example in \cite{KuijlaarsVanlessen2002}.\newline (b) The only reason for restricting $a,b$ to be real in (1.7), is that $\widetilde{K}_{n}\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+\frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right)$ involves the weight evaluated at arguments involving $a$ and $b$. If we consider instead $K_{n}\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+\frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right)$, then the limits hold uniformly for $a,b$ in compact subsets of the plane. We also present $L_{p}$ results, assuming less about $w$:\newline \newline \textbf{Theorem 1.4}\newline \textit{Let $\mu$ be a finite positive Borel measure on $\left( -1,1\right)$ that is regular.}
We also present $L_{p}$ results, assuming less about $w$:\newline\newline
\textbf{Theorem 1.4}\newline
\textit{Let $\mu $ be a finite positive Borel measure on $\left( -1,1\right) $ that is regular. Let $p>0$. Let $I$ be a closed subinterval of $\left( -1,1\right) $ in which $\mu $ is absolutely continuous, and $w$ is bounded above and below by positive constants, and moreover, $w$ is Riemann integrable in $I$. Then if $I^{\prime }$ is a closed subinterval of $I^{0}$,}
\begin{equation}
\lim_{n\rightarrow \infty }\int_{I^{\prime }}\left\vert \frac{\widetilde{K}_{n}\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+\frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right) }{\widetilde{K}_{n}\left( x,x\right) }-\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }\right\vert ^{p}dx=0,
\end{equation}
\textit{uniformly for $a,b$ in compact subsets of the real line.}\newline
The restriction of Riemann integrability of $w$ arises in showing that $w\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) }\right) /w\left( x\right) \rightarrow 1$ as $n\rightarrow \infty $, in a suitable sense. If we do not assume that $w$ is Riemann integrable in $I$, we can prove:\newline\newline
\textbf{Theorem 1.5}\newline
\textit{Let $\mu $ be a finite positive Borel measure on $\left( -1,1\right) $ that is regular. Let $p>0$. Let $I$ be a closed subinterval of $\left( -1,1\right) $ in which $\mu $ is absolutely continuous, and $w$ is bounded above and below by positive constants. Then if $I^{\prime }$ is a closed subinterval of $I^{0}$, uniformly for $a,b$ in compact subsets of the plane,}
\begin{equation}
\lim_{n\rightarrow \infty }\int_{I^{\prime }}\left\vert \frac{K_{n}\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+\frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right) }{K_{n}\left( x,x\right) }-\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }\right\vert ^{p}dx=0.
\end{equation}
When we assume only that $w$ is bounded below, and do not assume absolute continuity of $\mu $, we can still prove an $L_{1}$ form of universality:\newline\newline
\textbf{Theorem 1.6}\newline
\textit{Let $\mu $ be a finite positive Borel measure on $\left( -1,1\right) $ that is regular.}
\textit{Let $I$ be a closed subinterval of $\left( -1,1\right) $ in which $w$ is bounded below by a positive constant. Then if $I^{\prime }$ is a closed subinterval of $I^{0}$, uniformly for $a,b$ in compact subsets of the plane,}
\begin{equation}
\lim_{n\rightarrow \infty }\int_{I^{\prime }}\left\vert \frac{1}{n}K_{n}\left( x+\frac{\pi a\sqrt{1-x^{2}}}{n},x+\frac{\pi b\sqrt{1-x^{2}}}{n}\right) -\frac{1}{\pi w\left( x\right) \sqrt{1-x^{2}}}\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }\right\vert dx=0.
\end{equation}
In the sequel, $C,C_{1},C_{2},...$ denote constants independent of $n,x,y,s,t$. The same symbol does not necessarily denote the same constant in different occurrences. We shall write $C=C\left( \alpha \right) $ or $C\neq C\left( \alpha \right) $ to respectively denote dependence on, or independence of, the parameter $\alpha $. Given measures $\mu ^{\ast }$, $\mu ^{\#}$, we use $K_{n}^{\ast },K_{n}^{\#}$ and $p_{n}^{\ast },p_{n}^{\#}$ to denote respectively their reproducing kernels and orthonormal polynomials. Similarly, superscripts $\ast ,\#$ are used to distinguish other quantities associated with them. The superscript $L$ denotes quantities associated with the Legendre weight $1$ on $\left[ -1,1\right] $. For $x\in \mathbb{R}$ and $\delta >0$, we set
\begin{equation*}
I\left( x,\delta \right) =\left[ x-\delta ,x+\delta \right] .
\end{equation*}
Recall that the $n$th Christoffel function for a measure $\mu $ is
\begin{equation*}
\lambda _{n}\left( x\right) =1/K_{n}\left( x,x\right) =\min_{\deg \left( P\right) \leq n-1}\left( \int_{-1}^{1}P^{2}d\mu \right) /P^{2}\left( x\right) .
\end{equation*}
The most important new idea in this paper is a localization principle for universality. We use it repeatedly in various forms, but the following basic inequality is typical. Suppose that $\mu ,\mu ^{\ast }$ are measures with $\mu \leq \mu ^{\ast }$ in $\left[ -1,1\right] $. Then for $x,y\in \left[ -1,1\right] $,
\begin{eqnarray*}
&&\left\vert K_{n}\left( x,y\right) -K_{n}^{\ast }\left( x,y\right) \right\vert /K_{n}\left( x,x\right)  \\
&\leq &\left( \frac{K_{n}\left( y,y\right) }{K_{n}\left( x,x\right) }\right) ^{1/2}\left[ 1-\frac{K_{n}^{\ast }\left( x,x\right) }{K_{n}\left( x,x\right) }\right] ^{1/2} \\
&=&\left( \frac{\lambda _{n}\left( x\right) }{\lambda _{n}\left( y\right) }\right) ^{1/2}\left[ 1-\frac{\lambda _{n}\left( x\right) }{\lambda _{n}^{\ast }\left( x\right) }\right] ^{1/2}.
\end{eqnarray*}
Observe that on the right-hand side, we have only Christoffel functions, and their asymptotics are very well understood.
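We also record one elementary consequence of the extremal characterization of $\lambda _{n}$, used several times in the sequel: if $\mu \leq \mu ^{\ast }$ as measures on $\left[ -1,1\right] $, then for every $x$,
\begin{equation*}
\lambda _{n}\left( x\right) =\min_{\deg \left( P\right) \leq n-1}\frac{\int_{-1}^{1}P^{2}d\mu }{P^{2}\left( x\right) }\leq \min_{\deg \left( P\right) \leq n-1}\frac{\int_{-1}^{1}P^{2}d\mu ^{\ast }}{P^{2}\left( x\right) }=\lambda _{n}^{\ast }\left( x\right) ,
\end{equation*}
equivalently $K_{n}\left( x,x\right) \geq K_{n}^{\ast }\left( x,x\right) $.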
The paper is organised as follows. In Section 2, we present some asymptotics for Christoffel functions. In Section 3, we prove our localization principle, including the above inequality. In Section 4, we approximate the measure $\mu $ of Theorem 1.1 locally by a scaled Jacobi weight and then prove Theorem 1.1. In Section 5, we prove the $L_{1}$ result Theorem 1.6, and in Section 6, we prove the $L_{p}$ results Theorems 1.4 and 1.5. In Section 7, we prove Corollaries 1.2 and 1.3.\newline\newline
\textbf{Acknowledgement}\newline
This research was stimulated by the wonderful conference in honor of Percy Deift's 60th birthday, held at the Courant Institute in June 2006. In the present form, it was also inspired by a visit to Peter Sarnak at Princeton University, and discussions with Eli Levin during our collaboration on \cite{LevinLubinsky2007}.

\section{Christoffel functions}

We use $\lambda _{n}^{L}$ to denote the $n$th Christoffel function for the Legendre weight on $\left[ -1,1\right] $. The methods used to prove the following result are very well known, but I could not find this theorem as stated in the literature. The issue is that known asymptotics for Christoffel functions do not include the increment $a/n$. We could use existing results in \cite{Mateetal1991}, \cite{Nevai1979}, \cite{Nevai1986}, \cite{Totik2000} to treat the case where $x+a/n\in J$, and add a proof for the case where this fails, but the amount of effort seems almost the same.\newline\newline
\textbf{Theorem 2.1}\newline
\textit{Let $\mu $ be a regular measure on $\left[ -1,1\right] $. Assume that $\mu $ is absolutely continuous in an open interval containing $J=\left[ c,d\right] $, and that in $J$, $w=\mu ^{\prime }$ is positive and continuous. Let $A>0$. Then uniformly for $a\in \left[ -A,A\right] $ and $x\in J$,}
\begin{equation}
\lim_{n\rightarrow \infty }\lambda _{n}\left( x+\frac{a}{n}\right) /\lambda _{n}^{L}\left( x+\frac{a}{n}\right) =w\left( x\right) .
\end{equation}
\textit{Moreover, uniformly for $n\geq n_{0}\left( A\right) $, $x\in J$, and $a\in \left[ -A,A\right] $,}
\begin{equation}
\lambda _{n}\left( x+\frac{a}{n}\right) \sim \frac{1}{n}.
\end{equation}
\textit{The constants implicit in $\sim $ do not depend on $x$, $a$, or $n$.}\newline
\textbf{Remarks}\newline
(a) The notation $\sim $ means that the ratio of the two Christoffel functions is bounded above and below by positive constants independent of $n$ and $a$.\newline
(b) We emphasize that we are assuming that $w$ is continuous in $\left[ c,d\right] $ when regarded as a function defined on $\left( -1,1\right) $.\newline
(c) Using asymptotics for $\lambda _{n}^{L}$, we can rewrite (2.1) as
\begin{equation*}
\lim_{n\rightarrow \infty }n\lambda _{n}\left( x+\frac{a}{n}\right) =\pi \sqrt{1-x^{2}}w\left( x\right) .
\end{equation*}
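Before turning to the proof, we illustrate the normalization in (2.1) and Remark (c) with a weight for which everything is explicit; this standard computation is included only for orientation and is not used in the sequel. For the Chebyshev weight $w\left( x\right) =\left( 1-x^{2}\right) ^{-1/2}$, the orthonormal polynomials are $p_{0}=\pi ^{-1/2}$ and $p_{k}=\left( 2/\pi \right) ^{1/2}T_{k}$ for $k\geq 1$, where $T_{k}\left( \cos \theta \right) =\cos k\theta $. Hence, for $x=\cos \theta \in \left( -1,1\right) $,
\begin{equation*}
K_{n}\left( x,x\right) =\frac{1}{\pi }+\frac{2}{\pi }\sum_{k=1}^{n-1}\cos ^{2}k\theta =\frac{n}{\pi }+\frac{1}{\pi }\sum_{k=1}^{n-1}\cos 2k\theta =\frac{n}{\pi }+O\left( \frac{1}{\left\vert \sin \theta \right\vert }\right) ,
\end{equation*}
so that $n\lambda _{n}\left( x\right) \rightarrow \pi =\pi \sqrt{1-x^{2}}w\left( x\right) $, in accordance with Remark (c).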
\textbf{Proof}\newline
Let $\varepsilon >0$ and choose $\delta >0$ such that $\mu $ is absolutely continuous in
\begin{equation*}
I=\left[ c-\delta ,d+\delta \right] \subset \left( -1,1\right)
\end{equation*}
and such that
\begin{equation}
\left( 1+\varepsilon \right) ^{-1}\leq \frac{w\left( x\right) }{w\left( y\right) }\leq 1+\varepsilon ,\text{ \ }x\in \left[ c-\delta ,d+\delta \right] \text{ with }\left\vert x-y\right\vert \leq \delta .
\end{equation}
(Of course, this is possible because of uniform continuity and positivity of $w$.) Let us fix $x_{0}\in J$, let
\begin{equation*}
I\left( x_{0},\delta \right) =\left[ x_{0}-\delta ,x_{0}+\delta \right]
\end{equation*}
and define a measure $\mu ^{\ast }$ with
\begin{equation*}
\mu ^{\ast }=\mu \text{ in }\left[ -1,1\right] \backslash I\left( x_{0},\delta \right)
\end{equation*}
and in $I\left( x_{0},\delta \right) $, let $\mu ^{\ast }$ be absolutely continuous, with absolutely continuous component $w^{\ast }$ satisfying
\begin{equation}
w^{\ast }=w\left( x_{0}\right) \left( 1+\varepsilon \right) \text{ in }I\left( x_{0},\delta \right) .
\end{equation}
Because of (2.3), $d\mu \leq d\mu ^{\ast }$ in $\left[ -1,1\right] $, so that if $\lambda _{n}^{\ast }$ is the $n$th Christoffel function for $\mu ^{\ast }$, we have for all $x$,
\begin{equation}
\lambda _{n}\left( x\right) \leq \lambda _{n}^{\ast }\left( x\right) .
\end{equation}
We now find an upper bound for $\lambda _{n}^{\ast }\left( x\right) $ for $x\in I\left( x_{0},\delta /2\right) $. There exists $r\in \left( 0,1\right) $ depending only on $\delta $ such that
\begin{equation}
0\leq 1-\left( \frac{t-x}{2}\right) ^{2}\leq r\text{ for }x\in I\left( x_{0},\delta /2\right) \text{ and }t\in \left[ -1,1\right] \backslash I\left( x_{0},\delta \right) .
\end{equation}
(In fact, we may take $r=1-\left( \frac{\delta }{4}\right) ^{2}$.) Let $\eta \in \left( 0,\frac{1}{2}\right) $ and choose $\sigma >1$ so close to $1$ that
\begin{equation}
\sigma ^{1-\eta }<r^{-\eta /4}.
\end{equation}
Let $m=m\left( n\right) =n-2\left[ \eta n/2\right] $. Fix $x\in I\left( x_{0},\delta /2\right) $ and choose a polynomial $P_{m}$ of degree $\leq m-1$ such that
\begin{equation*}
\lambda _{m}^{L}\left( x\right) =\int_{-1}^{1}P_{m}^{2}\text{ \ and \ }P_{m}^{2}\left( x\right) =1.
\end{equation*}
Thus $P_{m}$ is the minimizing polynomial in the Christoffel function for the Legendre weight at $x$. Let
\begin{equation*}
S_{n}\left( t\right) =P_{m}\left( t\right) \left( 1-\left( \frac{t-x}{2}\right) ^{2}\right) ^{\left[ \eta n/2\right] },
\end{equation*}
a polynomial of degree $\leq m-1+2\left[ \eta n/2\right] \leq n-1$ with $S_{n}\left( x\right) =1$.
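We record, for the reader's convenience, the elementary arithmetic behind the geometric factor appearing below: since $m=n-2\left[ \eta n/2\right] \leq n\left( 1-\eta \right) +2$ and $\left[ \eta n/2\right] \geq \eta n/2-1$, we have, for $\sigma >1$ and $0<r<1$,
\begin{equation*}
\sigma ^{m}r^{\left[ \eta n/2\right] }\leq \frac{\sigma ^{2}}{r}\left[ \sigma ^{1-\eta }r^{\eta /2}\right] ^{n},\qquad \text{while}\qquad \sigma ^{1-\eta }r^{\eta /2}<r^{\eta /4}<1
\end{equation*}
by (2.7), so this factor decays geometrically in $n$.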
Then using (2.4) and (2.6),
\begin{eqnarray*}
\lambda _{n}^{\ast }\left( x\right)  &\leq &\int_{-1}^{1}S_{n}^{2}d\mu ^{\ast } \\
&\leq &w\left( x_{0}\right) \left( 1+\varepsilon \right) \int_{I\left( x_{0},\delta \right) }P_{m}^{2}+\left\Vert P_{m}\right\Vert _{L_{\infty }\left( \left[ -1,1\right] \backslash I\left( x_{0},\delta \right) \right) }^{2}r^{\left[ \eta n/2\right] }\int_{\left[ -1,1\right] \backslash I\left( x_{0},\delta \right) }d\mu ^{\ast } \\
&\leq &w\left( x_{0}\right) \left( 1+\varepsilon \right) \lambda _{m}^{L}\left( x\right) +\left\Vert P_{m}\right\Vert _{L_{\infty }\left[ -1,1\right] }^{2}r^{\left[ \eta n/2\right] }\int_{-1}^{1}d\mu ^{\ast }.
\end{eqnarray*}
Now we use the key idea from \cite[Lemma 9, p. 450]{Mateetal1991}. For $m\geq m_{0}\left( \sigma \right) $, we have
\begin{eqnarray*}
\left\Vert P_{m}\right\Vert _{L_{\infty }\left[ -1,1\right] }^{2} &\leq &\sigma ^{m}\int_{-1}^{1}P_{m}^{2} \\
&=&\sigma ^{m}\lambda _{m}^{L}\left( x\right) .
\end{eqnarray*}
(This holds more generally for any polynomial $P$ of degree $\leq m-1$, and is a consequence of the regularity of the Legendre weight. Alternatively, we could use classical bounds for the Christoffel functions for the Legendre weight.) Then from (2.7), uniformly for $x\in I\left( x_{0},\delta /2\right) $,
\begin{eqnarray*}
\lambda _{n}^{\ast }\left( x\right)  &\leq &w\left( x_{0}\right) \left( 1+\varepsilon \right) \lambda _{m}^{L}\left( x\right) \left\{ 1+C\left[ \sigma ^{1-\eta }r^{\eta /2}\right] ^{n}\right\}  \\
&\leq &w\left( x_{0}\right) \left( 1+\varepsilon \right) \lambda _{m}^{L}\left( x\right) \left\{ 1+o\left( 1\right) \right\} ,
\end{eqnarray*}
so as $\lambda _{n}\leq \lambda _{n}^{\ast }$,
\begin{eqnarray}
&&\sup_{x\in I\left( x_{0},\delta /2\right) }\lambda _{n}\left( x\right) /\lambda _{n}^{L}\left( x\right)  \notag \\
&\leq &w\left( x_{0}\right) \left( 1+\varepsilon \right) \left\{ 1+o\left( 1\right) \right\} \sup_{x\in I\left( x_{0},\delta \right) }\lambda _{m}^{L}\left( x\right) /\lambda _{n}^{L}\left( x\right) .
\end{eqnarray}
The $o\left( 1\right) $ term is independent of $x_{0}$. Now for large enough $n$, and some $C$ independent of $\eta ,m,n,x_{0}$,
\begin{equation}
\sup_{x\in \left[ -1,1\right] }\lambda _{m}^{L}\left( x\right) /\lambda _{n}^{L}\left( x\right) \leq 1+C\eta .
\end{equation}
Indeed, if $\left\{ p_{k}^{L}\right\} $ denote the orthonormal Legendre polynomials, they admit the bound \cite[p. 170]{Nevai1979}
\begin{equation*}
\left\vert p_{k}^{L}\left( x\right) \right\vert \leq C\left( 1-x^{2}+\frac{1}{k^{2}}\right) ^{-1/4},\text{ \ }x\in \left[ -1,1\right] .
\end{equation*}
Then uniformly for $x\in \left[ -1,1\right] $,
\begin{eqnarray*}
0 &\leq &1-\frac{\lambda _{n}^{L}\left( x\right) }{\lambda _{m}^{L}\left( x\right) }=\lambda _{n}^{L}\left( x\right) \sum_{k=m}^{n-1}\left( p_{k}^{L}\left( x\right) \right) ^{2} \\
&\leq &C\lambda _{n}^{L}\left( x\right) \left( n-m\right) \max_{\frac{n}{2}\leq k\leq n}\left( 1-x^{2}+\frac{1}{k^{2}}\right) ^{-1/2} \\
&\leq &C\eta n\lambda _{n}^{L}\left( x\right) \left( 1-x^{2}+\frac{1}{n^{2}}\right) ^{-1/2} \\
&\leq &C\eta ,
\end{eqnarray*}
by classical bounds for Christoffel functions \cite[p. 108, Lemma 5]{Nevai1979}. So we have (2.9), and then (2.8) and (2.3) give, for $n\geq n_{0}=n_{0}\left( x_{0},\delta \right) $,
\begin{equation*}
\sup_{x\in I\left( x_{0},\delta /2\right) }\lambda _{n}\left( x\right) /\left( \lambda _{n}^{L}\left( x\right) w\left( x\right) \right) \leq \left( 1+\varepsilon \right) ^{2}\left( 1+C\eta \right) .
\end{equation*}
By covering $J$ with finitely many such intervals $I\left( x_{0},\delta /2\right) $, we obtain, for some maximal threshold $n_{1}$, that for $n\geq n_{1}=n_{1}\left( \varepsilon ,\delta ,J\right) $,
\begin{equation*}
\sup_{x\in \left[ c-\delta /2,d+\delta /2\right] }\lambda _{n}\left( x\right) /\left( \lambda _{n}^{L}\left( x\right) w\left( x\right) \right) \leq \left( 1+\varepsilon \right) ^{2}\left( 1+C\eta \right) .
\end{equation*}
It is essential here that $C$ is independent of $\varepsilon ,\eta $. Now let $A>0$ and $\left\vert a\right\vert \leq A$. There exists $n_{2}=n_{2}\left( A\right) $ such that for $n\geq n_{2}$ and all $\left\vert a\right\vert \leq A$ and all $x\in J$, we have $x+\frac{a}{n}\in \left[ c-\delta /2,d+\delta /2\right] $. We deduce that
\begin{equation*}
\limsup_{n\rightarrow \infty }\sup_{a\in \left[ -A,A\right] ,x\in J}\frac{\lambda _{n}\left( x+\frac{a}{n}\right) }{\lambda _{n}^{L}\left( x+\frac{a}{n}\right) w\left( x\right) }\leq \left( 1+\varepsilon \right) ^{2}\left( 1+C\eta \right) .
\end{equation*}
As the left-hand side is independent of the parameters $\varepsilon ,\eta $, we deduce that
\begin{equation}
\limsup_{n\rightarrow \infty }\left( \sup_{a\in \left[ -A,A\right] ,x\in J}\frac{\lambda _{n}\left( x+\frac{a}{n}\right) }{\lambda _{n}^{L}\left( x+\frac{a}{n}\right) w\left( x\right) }\right) \leq 1.
\end{equation}
In a similar way, we can establish the converse bound
\begin{equation}
\limsup_{n\rightarrow \infty }\left( \sup_{a\in \left[ -A,A\right] ,x\in J}\frac{\lambda _{n}^{L}\left( x+\frac{a}{n}\right) w\left( x\right) }{\lambda _{n}\left( x+\frac{a}{n}\right) }\right) \leq 1.
\end{equation}
Indeed, with $m,x$ and $\eta $ as above, let us choose a polynomial $P_{m}$ of degree $\leq m-1$ such that
\begin{equation*}
\lambda _{m}\left( x\right) =\int_{-1}^{1}P_{m}^{2}\left( t\right) d\mu \left( t\right) \text{ \ and \ }P_{m}^{2}\left( x\right) =1.
\end{equation*}
Then with $S_{n}$ as above, and proceeding as above,
\begin{equation*}
\lambda _{n}^{L}\left( x\right) \leq \int_{-1}^{1}S_{n}^{2}
\end{equation*}
\begin{eqnarray*}
&\leq &\left[ w\left( x_{0}\right) ^{-1}\left( 1+\varepsilon \right) \right] \int_{I\left( x_{0},\delta \right) }P_{m}^{2}d\mu +\left\Vert P_{m}\right\Vert _{L_{\infty }\left( \left[ -1,1\right] \backslash I\left( x_{0},\delta \right) \right) }^{2}r^{\left[ \eta n/2\right] }\int_{\left[ -1,1\right] \backslash I\left( x_{0},\delta \right) }1 \\
&\leq &\left[ w\left( x_{0}\right) ^{-1}\left( 1+\varepsilon \right) \right] \lambda _{m}\left( x\right) \left\{ 1+C\left[ \sigma ^{1-\eta }r^{\eta /2}\right] ^{n}\right\} ,
\end{eqnarray*}
and so, as above,
\begin{eqnarray*}
&&\sup_{x\in I\left( x_{0},\delta /2\right) }\lambda _{m}^{L}\left( x\right) /\lambda _{m}\left( x\right)  \\
&\leq &\left[ w\left( x_{0}\right) ^{-1}\left( 1+\varepsilon \right) \left( 1+o\left( 1\right) \right) \right] \sup_{x\in I\left( x_{0},\delta /2\right) }\lambda _{m}^{L}\left( x\right) /\lambda _{n}^{L}\left( x\right)  \\
&\leq &\left[ w\left( x_{0}\right) ^{-1}\left( 1+\varepsilon \right) \right] \left\{ 1+o\left( 1\right) \right\} \left( 1+C\eta \right) .
\end{eqnarray*}
Then (2.11) follows after a scale change $m\rightarrow n$ and using monotonicity of $\lambda _{n}$ in $n$, much as above. Together, (2.10) and (2.11) give (2.1). Finally, (2.2) follows from standard bounds for the Christoffel function for the Legendre weight. $\blacksquare $

\section{Localization}

\textbf{Theorem 3.1}\newline
\textit{Assume that $\mu ,\mu ^{\ast }$ are regular measures on $\left[ -1,1\right] $ that are absolutely continuous in an open interval containing $J=\left[ c,d\right] $. Assume that $w=\mu ^{\prime }$ is positive and continuous in $J$ and}
\begin{equation*}
d\mu =d\mu ^{\ast }\text{ in }J.
\end{equation*}
\textit{Let $A>0$. Then as $n\rightarrow \infty $,}
\begin{equation}
\sup_{a,b\in \left[ -A,A\right] ,x\in J}\left\vert \left( K_{n}-K_{n}^{\ast }\right) \left( x+\frac{a}{n},x+\frac{b}{n}\right) \right\vert /n=o\left( 1\right) .
\end{equation}
\textbf{Proof}\newline
We initially assume that
\begin{equation}
d\mu \leq d\mu ^{\ast }\text{ in }\left( -1,1\right) .
\end{equation}
The idea is to estimate the $L_{2}$ norm of $K_{n}\left( x,t\right) -K_{n}^{\ast }\left( x,t\right) $ over $\left[ -1,1\right] $, and then to use Christoffel function estimates.
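For the reader's convenience, we recall the two standard facts about reproducing kernels used in the computation below and again in later sections: for every polynomial $P$ of degree $\leq n-1$,
\begin{equation*}
\int_{-1}^{1}K_{n}\left( x,t\right) P\left( t\right) d\mu \left( t\right) =P\left( x\right) ,
\end{equation*}
and hence, by the Cauchy--Schwarz inequality and $\int_{-1}^{1}K_{n}\left( y,t\right) ^{2}d\mu \left( t\right) =K_{n}\left( y,y\right) $,
\begin{equation*}
\left\vert P\left( y\right) \right\vert \leq K_{n}\left( y,y\right) ^{1/2}\left( \int_{-1}^{1}P^{2}d\mu \right) ^{1/2},
\end{equation*}
which is the estimate recorded as (3.4) below.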
Now
\begin{eqnarray*}
&&\int_{-1}^{1}\left( K_{n}\left( x,t\right) -K_{n}^{\ast }\left( x,t\right) \right) ^{2}d\mu \left( t\right)  \\
&=&\int_{-1}^{1}K_{n}^{2}\left( x,t\right) d\mu \left( t\right) -2\int_{-1}^{1}K_{n}\left( x,t\right) K_{n}^{\ast }\left( x,t\right) d\mu \left( t\right) +\int_{-1}^{1}K_{n}^{\ast 2}\left( x,t\right) d\mu \left( t\right)  \\
&=&K_{n}\left( x,x\right) -2K_{n}^{\ast }\left( x,x\right) +\int_{-1}^{1}K_{n}^{\ast 2}\left( x,t\right) d\mu \left( t\right) ,
\end{eqnarray*}
by the reproducing kernel property. As $d\mu \leq d\mu ^{\ast }$, we also have
\begin{equation*}
\int_{-1}^{1}K_{n}^{\ast 2}\left( x,t\right) d\mu \left( t\right) \leq \int_{-1}^{1}K_{n}^{\ast 2}\left( x,t\right) d\mu ^{\ast }\left( t\right) =K_{n}^{\ast }\left( x,x\right) .
\end{equation*}
So
\begin{eqnarray}
&&\int_{-1}^{1}\left( K_{n}\left( x,t\right) -K_{n}^{\ast }\left( x,t\right) \right) ^{2}d\mu \left( t\right)  \notag \\
&\leq &K_{n}\left( x,x\right) -K_{n}^{\ast }\left( x,x\right) .
\end{eqnarray}
Next, for any polynomial $P$ of degree $\leq n-1$, we have the Christoffel function estimate
\begin{equation}
\left\vert P\left( y\right) \right\vert \leq K_{n}\left( y,y\right) ^{1/2}\left( \int_{-1}^{1}P^{2}d\mu \right) ^{1/2}.
\end{equation}
Applying this to $P\left( t\right) =K_{n}\left( x,t\right) -K_{n}^{\ast }\left( x,t\right) $ and using (3.3) gives, for all $x,y\in \left[ -1,1\right] $,
\begin{eqnarray*}
&&\left\vert K_{n}\left( x,y\right) -K_{n}^{\ast }\left( x,y\right) \right\vert  \\
&\leq &K_{n}\left( y,y\right) ^{1/2}\left[ K_{n}\left( x,x\right) -K_{n}^{\ast }\left( x,x\right) \right] ^{1/2},
\end{eqnarray*}
so
\begin{eqnarray}
&&\left\vert K_{n}\left( x,y\right) -K_{n}^{\ast }\left( x,y\right) \right\vert /K_{n}\left( x,x\right)  \notag \\
&\leq &\left( \frac{K_{n}\left( y,y\right) }{K_{n}\left( x,x\right) }\right) ^{1/2}\left[ 1-\frac{K_{n}^{\ast }\left( x,x\right) }{K_{n}\left( x,x\right) }\right] ^{1/2}.
\end{eqnarray}
Now we set $x=x_{0}+\frac{a}{n}$ and $y=x_{0}+\frac{b}{n}$, where $a,b\in \left[ -A,A\right] $ and $x_{0}\in J$. By Theorem 2.1, uniformly for such $x$, $\frac{K_{n}^{\ast }\left( x,x\right) }{K_{n}\left( x,x\right) }=1+o\left( 1\right) $, for both kernels have the same asymptotics, determined by the weight $w$ near $J$. Moreover, uniformly for $a,b\in \left[ -A,A\right] $,
\begin{equation*}
K_{n}\left( x_{0}+\frac{b}{n},x_{0}+\frac{b}{n}\right) \sim K_{n}\left( x_{0}+\frac{a}{n},x_{0}+\frac{a}{n}\right) \sim n,
\end{equation*}
so
\begin{equation*}
\sup_{a,b\in \left[ -A,A\right] ,x_{0}\in J}\left\vert \left( K_{n}-K_{n}^{\ast }\right) \left( x_{0}+\frac{a}{n},x_{0}+\frac{b}{n}\right) \right\vert /n=o\left( 1\right) .
\end{equation*}
Now we drop the extra hypothesis (3.2). Define a measure $\nu $ by $\nu =\mu =\mu ^{\ast }$ in $J$; and in $\left[ -1,1\right] \backslash J$, let
\begin{equation*}
d\nu \left( x\right) =\max \left\{ \left\vert x-c\right\vert \left\vert x-d\right\vert ,w\left( x\right) ,w^{\ast }\left( x\right) \right\} dx+d\mu _{s}\left( x\right) +d\mu _{s}^{\ast }\left( x\right) ,
\end{equation*}
where $w,w^{\ast }$ and $\mu _{s},\mu _{s}^{\ast }$ are respectively the absolutely continuous and singular components of $\mu ,\mu ^{\ast }$.
Then $d\mu \leq d\nu $ and $d\mu ^{\ast }\leq d\nu $, and $\nu $ is regular, as its absolutely continuous component is positive in $\left( -1,1\right) $, so that it lies in the even smaller class $\mathcal{M}$. Moreover, $\nu $ is absolutely continuous in an open interval containing $J$, and $\nu ^{\prime }=w$ in $J$. The case above shows that the reproducing kernels for $\mu $ and $\mu ^{\ast }$ have the same asymptotics as that for $\nu $, in the sense of (3.1), and hence the same asymptotics as each other. $\blacksquare $

\section{Smoothing}

In this section, we approximate the measure $\mu $ of Theorem 1.1 by a scaled Legendre measure $\mu ^{\#}$ and then prove Theorem 1.1. Recall that $\widetilde{K}_{n}$ is the normalized kernel, given by (1.5). Our smoothing result (which may also be viewed as localization) is:\newline\newline
\textbf{Theorem 4.1}\newline
\textit{Let $\mu $ be as in Theorem 1.1. Let $A>0$, $\varepsilon \in \left( 0,\frac{1}{2}\right) $, and choose $\delta >0$ such that (2.3) holds. Let $x_{0}\in J$. Then there exist $C$ and $n_{0}$ such that for $n\geq n_{0}$,}
\begin{equation}
\sup_{a,b\in \left[ -A,A\right] ,x\in I\left( x_{0},\frac{\delta }{2}\right) \cap J}\left\vert \left( \widetilde{K}_{n}-K_{n}^{L}\right) \left( x+\frac{a}{n},x+\frac{b}{n}\right) \right\vert /n\leq C\varepsilon ^{1/2},
\end{equation}
\textit{where $C$ is independent of $\varepsilon ,\delta ,n,x_{0}$.}\newline
\textbf{Proof}\newline
Fix $x_{0}\in J$ and let $w^{\#}$ be the scaled Legendre weight
\begin{equation*}
w^{\#}=w\left( x_{0}\right) \text{ in }\left( -1,1\right) .
\end{equation*}
Note that
\begin{equation}
K_{n}^{\#}\left( x,y\right) =\frac{1}{w\left( x_{0}\right) }K_{n}^{L}\left( x,y\right) .
\end{equation}
(Recall that the superscript $L$ indicates the Legendre weight on $\left[ -1,1\right] $.) Because of our localization result Theorem 3.1, we may replace $d\mu $ by $w^{\ast }\left( x\right) dx$, where
\begin{equation*}
w^{\ast }=w\text{ in }I\left( x_{0},\delta \right)
\end{equation*}
and
\begin{equation*}
w^{\ast }=w\left( x_{0}\right) \text{ in }\left[ -1,1\right] \backslash I\left( x_{0},\delta \right) ,
\end{equation*}
without affecting the asymptotics for $K_{n}\left( x+\frac{a}{n},x+\frac{b}{n}\right) $ in the interval $I\left( x_{0},\frac{\delta }{2}\right) $. (Note that $\varepsilon $ and $\delta $ play no role in Theorem 3.1.) So in the sequel, we assume that $w=w\left( x_{0}\right) =w^{\#}$ in $\left[ -1,1\right] \backslash I\left( x_{0},\delta \right) $, while not changing $w$ in $I\left( x_{0},\delta \right) $.
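We pause to record why (4.2) holds; this is immediate and is stated only for completeness. The orthonormal polynomials for the constant weight $w\left( x_{0}\right) $ on $\left( -1,1\right) $ are $p_{k}^{\#}=p_{k}^{L}/w\left( x_{0}\right) ^{1/2}$, so
\begin{equation*}
K_{n}^{\#}\left( x,y\right) =\sum_{k=0}^{n-1}p_{k}^{\#}\left( x\right) p_{k}^{\#}\left( y\right) =\frac{1}{w\left( x_{0}\right) }\sum_{k=0}^{n-1}p_{k}^{L}\left( x\right) p_{k}^{L}\left( y\right) =\frac{1}{w\left( x_{0}\right) }K_{n}^{L}\left( x,y\right) .
\end{equation*}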
Observe that (2.3) implies that
\begin{equation}
\left( 1+\varepsilon \right) ^{-1}\leq \frac{w}{w^{\#}}\leq 1+\varepsilon \text{ \ in }\left[ -1,1\right] .
\end{equation}
Then, much as in the previous section,
\begin{eqnarray*}
&&\int_{-1}^{1}\left( K_{n}\left( x,t\right) -K_{n}^{\#}\left( x,t\right) \right) ^{2}w^{\#}\left( t\right) dt \\
&=&\int_{-1}^{1}K_{n}^{2}\left( x,t\right) w^{\#}\left( t\right) dt-2\int_{-1}^{1}K_{n}\left( x,t\right) K_{n}^{\#}\left( x,t\right) w^{\#}\left( t\right) dt+\int_{-1}^{1}K_{n}^{\#2}\left( x,t\right) w^{\#}\left( t\right) dt \\
&=&\int_{-1}^{1}K_{n}^{2}\left( x,t\right) w\left( t\right) dt+\int_{I\left( x_{0},\delta \right) }K_{n}^{2}\left( x,t\right) \left( w^{\#}-w\right) \left( t\right) dt-2K_{n}\left( x,x\right) +K_{n}^{\#}\left( x,x\right)  \\
&=&K_{n}^{\#}\left( x,x\right) -K_{n}\left( x,x\right) +\int_{I\left( x_{0},\delta \right) }K_{n}^{2}\left( x,t\right) \left( w^{\#}-w\right) \left( t\right) dt,
\end{eqnarray*}
since $w=w^{\#}$ in $\left[ -1,1\right] \backslash I\left( x_{0},\delta \right) $. By (4.3),
\begin{equation*}
\int_{I\left( x_{0},\delta \right) }K_{n}^{2}\left( x,t\right) \left( w^{\#}-w\right) \left( t\right) dt\leq \varepsilon \int_{I\left( x_{0},\delta \right) }K_{n}^{2}\left( x,t\right) w\left( t\right) dt\leq \varepsilon K_{n}\left( x,x\right) .
\end{equation*}
So
\begin{equation}
\int_{-1}^{1}\left( K_{n}\left( x,t\right) -K_{n}^{\#}\left( x,t\right) \right) ^{2}w^{\#}\left( t\right) dt\leq K_{n}^{\#}\left( x,x\right) -\left( 1-\varepsilon \right) K_{n}\left( x,x\right) .
\end{equation}
Applying an obvious analogue of (3.4) to $P\left( t\right) =K_{n}\left( x,t\right) -K_{n}^{\#}\left( x,t\right) $ and using (4.4) gives, for $y\in \left[ -1,1\right] $,
\begin{eqnarray*}
&&\left\vert K_{n}\left( x,y\right) -K_{n}^{\#}\left( x,y\right) \right\vert  \\
&\leq &K_{n}^{\#}\left( y,y\right) ^{1/2}\left[ K_{n}^{\#}\left( x,x\right) -\left( 1-\varepsilon \right) K_{n}\left( x,x\right) \right] ^{1/2},
\end{eqnarray*}
so
\begin{eqnarray*}
&&\left\vert K_{n}\left( x,y\right) -K_{n}^{\#}\left( x,y\right) \right\vert /K_{n}^{\#}\left( x,x\right)  \\
&\leq &\left( \frac{K_{n}^{\#}\left( y,y\right) }{K_{n}^{\#}\left( x,x\right) }\right) ^{1/2}\left[ 1-\left( 1-\varepsilon \right) \frac{K_{n}\left( x,x\right) }{K_{n}^{\#}\left( x,x\right) }\right] ^{1/2}.
\end{eqnarray*}
In view of (4.3), we also have
\begin{equation*}
\frac{K_{n}\left( x,x\right) }{K_{n}^{\#}\left( x,x\right) }=\frac{\lambda _{n}^{\#}\left( x\right) }{\lambda _{n}\left( x\right) }\geq \frac{1}{1+\varepsilon },
\end{equation*}
so for all $y\in \left[ -1,1\right] $,
\begin{eqnarray*}
&&\left\vert K_{n}\left( x,y\right) -K_{n}^{\#}\left( x,y\right) \right\vert /K_{n}^{\#}\left( x,x\right)  \\
&\leq &\left( \frac{K_{n}^{\#}\left( y,y\right) }{K_{n}^{\#}\left( x,x\right) }\right) ^{1/2}\left[ 1-\frac{1-\varepsilon }{1+\varepsilon }\right] ^{1/2} \\
&\leq &\sqrt{2\varepsilon }\left( \frac{K_{n}^{\#}\left( y,y\right) }{K_{n}^{\#}\left( x,x\right) }\right) ^{1/2} \\
&=&\sqrt{2\varepsilon }\left( \frac{K_{n}^{L}\left( y,y\right) }{K_{n}^{L}\left( x,x\right) }\right) ^{1/2}=\sqrt{2\varepsilon }\left( \frac{\lambda _{n}^{L}\left( x\right) }{\lambda _{n}^{L}\left( y\right) }\right) ^{1/2}.
\end{eqnarray*}
Here we have used (4.2).
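In the last chain of inequalities we also used the elementary identity, recorded here only to avoid any ambiguity,
\begin{equation*}
1-\frac{1-\varepsilon }{1+\varepsilon }=\frac{2\varepsilon }{1+\varepsilon }\leq 2\varepsilon .
\end{equation*}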
Now we set $x=x_{1}+\frac{a}{n}$ and $y=x_{1}+\frac{b}{n}$, where $x_{1}\in I\left( x_{0},\frac{\delta }{2}\right) $ and $a,b\in \left[ -A,A\right] $. By classical estimates for Christoffel functions for the Legendre weight (or even Theorem 2.1), uniformly for $a,b\in \left[ -A,A\right] $ and $x_{1}\in J$,
\begin{equation*}
\lambda _{n}^{L}\left( x_{1}+\frac{b}{n}\right) \sim \lambda _{n}^{L}\left( x_{1}+\frac{a}{n}\right) \sim n^{-1},
\end{equation*}
and the constants implicit in $\sim $ are independent of $\varepsilon ,\delta $ and $x_{1}$ (this is crucial!). Thus for some $C$ and $n_{0}$ depending only on $A$ and $J$, we have for $n\geq n_{0}$,
\begin{equation*}
\sup_{a,b\in \left[ -A,A\right] ,x_{1}\in I\left( x_{0},\frac{\delta }{2}\right) \cap J}\left\vert \left( K_{n}-K_{n}^{\#}\right) \left( x_{1}+\frac{a}{n},x_{1}+\frac{b}{n}\right) \right\vert /n\leq C\sqrt{\varepsilon }.
\end{equation*}
Finally, recall from (4.3) that
\begin{equation*}
\left( 1+\varepsilon \right) ^{-1}\leq \frac{w\left( x_{1}\right) }{w\left( x_{0}\right) }\leq 1+\varepsilon \text{ \ for all }x_{1}\in I\left( x_{0},\delta \right) ,
\end{equation*}
and that $w$ is continuous at the endpoints of $I\left( x_{0},\frac{\delta }{2}\right) \cap J$. $\blacksquare $\newline\newline
\textbf{Proof of Theorem 1.1}\newline
Let $A,\varepsilon _{1}>0$. Choose $\varepsilon >0$ so small that the right-hand side $C\varepsilon ^{1/2}$ of (4.1) is less than $\varepsilon _{1}$. Choose $\delta >0$ such that (2.3) holds. Now divide $J$ into, say, $M$ intervals $I\left( x_{j},\frac{\delta }{2}\right) $, $1\leq j\leq M$, each of length $\delta $. For each $j$, there exists a threshold $n_{0}=n_{0}\left( j\right) $ for which (4.1) holds for $n\geq n_{0}\left( j\right) $, with $I\left( x_{0},\frac{\delta }{2}\right) $ replaced by $I\left( x_{j},\frac{\delta }{2}\right) $. Let $n_{1}$ denote the largest of these. Then we obtain, for $n\geq n_{1}$,
\begin{equation*}
\sup_{a,b\in \left[ -A,A\right] ,x\in J}\left\vert \left( \widetilde{K}_{n}-K_{n}^{L}\right) \left( x+\frac{a}{n},x+\frac{b}{n}\right) \right\vert /n\leq \varepsilon _{1}.
\end{equation*}
It follows that
\begin{equation}
\lim_{n\rightarrow \infty }\left( \sup_{a,b\in \left[ -A,A\right] ,x\in J}\left\vert \left( \widetilde{K}_{n}-K_{n}^{L}\right) \left( x+\frac{a}{n},x+\frac{b}{n}\right) \right\vert /n\right) =0.
\end{equation}
Finally, the universality limit for the Legendre weight (see for example \cite{KuijlaarsVanlessen2002}) gives, as $n\rightarrow \infty $,
\begin{eqnarray}
&&\frac{\pi \sqrt{1-x^{2}}}{n}K_{n}^{L}\left( x+\frac{u\pi \sqrt{1-x^{2}}}{n},x+\frac{v\pi \sqrt{1-x^{2}}}{n}\right)  \notag \\
&\rightarrow &\frac{\sin \pi \left( u-v\right) }{\pi \left( u-v\right) },
\end{eqnarray}
uniformly for $u,v$ in compact subsets of the real line, and $x$ in compact subsets of $\left( -1,1\right) $.
Setting
\begin{equation*}
a=u\pi \sqrt{1-x^{2}}\text{ \ and \ }b=v\pi \sqrt{1-x^{2}}
\end{equation*}
in (4.5), we obtain as $n\rightarrow \infty $, uniformly for $x\in J$ and $u,v$ in compact subsets of the real line,
\begin{eqnarray}
&&\lim_{n\rightarrow \infty }\frac{\pi \sqrt{1-x^{2}}}{n}\widetilde{K}_{n}\left( x+\frac{u\pi \sqrt{1-x^{2}}}{n},x+\frac{v\pi \sqrt{1-x^{2}}}{n}\right)  \notag \\
&=&\frac{\sin \pi \left( u-v\right) }{\pi \left( u-v\right) }.
\end{eqnarray}
Since, uniformly for $x\in J$, by Theorem 2.1,
\begin{eqnarray*}
\widetilde{K}_{n}\left( x,x\right) ^{-1} &=&K_{n}^{L}\left( x,x\right) ^{-1}\left( 1+o\left( 1\right) \right)  \\
&=&\frac{\pi \sqrt{1-x^{2}}}{n}\left( 1+o\left( 1\right) \right) ,
\end{eqnarray*}
we then also obtain the conclusion of Theorem 1.1. $\blacksquare $

For future use, we record also that
\begin{equation}
\lim_{n\rightarrow \infty }\frac{1}{n}\widetilde{K}_{n}\left( x+\frac{a}{n},x+\frac{b}{n}\right) =\frac{\sin \left( \left( a-b\right) /\sqrt{1-x^{2}}\right) }{\pi \left( a-b\right) },
\end{equation}
uniformly for $x\in J$ and $a,b\in \left[ -A,A\right] $.

\section{Universality in $L_{1}$}

In this section, we prove Theorem 1.6. We assume that
\begin{equation}
w\geq C_{0}\text{ in }I.
\end{equation}
Let $\Delta >0$ and define a measure $\mu ^{\#}$ by
\begin{equation*}
\mu ^{\#}=\mu \text{ in }\left[ -1,1\right] \backslash I,
\end{equation*}
while in $I$, we define $d\mu ^{\#}\left( x\right) =w^{\#}\left( x\right) dx$, where
\begin{equation}
w^{\#}\left( x\right) =\frac{1}{2\Delta }\int_{x-\Delta }^{x+\Delta }w=\frac{1}{2}\int_{-1}^{1}w\left( x+s\Delta \right) ds.
\end{equation}
\newline
\textbf{Lemma 5.1}\newline
\textit{Let $I^{\prime }$ be a closed subinterval of $I^{0}$.}\newline
\textit{(a) $w^{\#}$ is continuous in $I^{0}$, and $w^{\#}\geq \frac{1}{2}C_{0}$ in $I^{0}$.}\newline
\textit{(b) $\mu ^{\#}$ is regular on $\left[ -1,1\right] $.}\newline
\textit{(c) There exists $C_{1}>0$, independent of $\Delta $, such that for $n\geq 1$,}
\begin{equation}
\sup_{t\in I^{\prime }}\frac{1}{n}K_{n}\left( t,t\right) \leq C_{1}\text{ \ and \ }\sup_{t\in I^{\prime }}\frac{1}{n}K_{n}^{\#}\left( t,t\right) \leq C_{1}.
\end{equation}
\textit{(d)}
\begin{equation}
\lim_{n\rightarrow \infty }\frac{1}{n}\int_{I^{\prime }}\left\vert K_{n}-K_{n}^{\#}\right\vert \left( t,t\right) dt=\frac{1}{\pi }\int_{I^{\prime }}\left\vert \frac{1}{w\left( t\right) }-\frac{1}{w^{\#}\left( t\right) }\right\vert \frac{dt}{\sqrt{1-t^{2}}}.
\end{equation}
\textit{(e) For some $C_{2}>0$ independent of $\Delta $,}
\begin{eqnarray}
&&\int_{I^{\prime }}\frac{1}{\sqrt{1-t^{2}}}\left\vert \frac{1}{w\left( t\right) }-\frac{1}{w^{\#}\left( t\right) }\right\vert dt  \notag \\
&\leq &C_{2}\sup_{\left\vert u\right\vert \leq \Delta }\int_{I}\left\vert w\left( t+u\right) -w\left( t\right) \right\vert dt.
\end{eqnarray}
\textbf{Proof}\newline
(a) This is immediate.\newline
(b) This follows from Theorem 5.3.3 in \cite[p. 148]{StahlTotik1992}. As $\mu $ is regular, that theorem shows that the restriction of $\mu $ to $\left[ -1,1\right] \backslash I$ is regular. Hence the restriction of $\mu ^{\#}$ to $\left[ -1,1\right] \backslash I$ is trivially regular. The restriction of $\mu ^{\#}$ to $I$ is regular, as its absolutely continuous component $w^{\#}$ is positive there. Then Theorem 5.3.3 in \cite[p. 148]{StahlTotik1992} shows that $\mu ^{\#}$ is regular as a measure on all of $\left[ -1,1\right] $.\newline
(c) In view of (5.1), we have for $x\in I^{\prime }$,
\begin{eqnarray*}
\lambda _{n}\left( x\right)  &\geq &C_{0}\inf_{\deg \left( P\right) \leq n-1}\int_{I}P^{2}/P^{2}\left( x\right)  \\
&\geq &C_{0}C_{1}/n.
\end{eqnarray*}
Here we are using classical bounds for the Legendre weight translated to the interval $I$, and the constant $C_{1}$ depends only on the intervals $I^{\prime }$ and $I$. Then the first bound in (5.3) follows, and that for $\lambda _{n}^{\#}$ is similar. Since the lower bound on $\mu ^{\#}$ in $I$ is independent of $\Delta $, it follows that the constants we obtain in (5.3) are also independent of $\Delta $.\newline
(d) Since $\mu $ is regular, and $\mu ^{\prime }=w$ is bounded below by a positive constant in $I$, we have, a.e.\ in $I$,
\begin{equation*}
\lim_{n\rightarrow \infty }K_{n}\left( x,x\right) /n=\frac{1}{\pi w\left( x\right) \sqrt{1-x^{2}}}.
\end{equation*}
See for example \cite[p. 449, Thm. 8]{Mateetal1991} or \cite[Theorem 1]{Totik2000}. A similar limit holds for $K_{n}^{\#}/n$. We also have the uniform bound in (c). Then Lebesgue's Dominated Convergence Theorem gives the result.\newline
(e) Recall that $I$ is a positive distance from $\pm 1$, while $w,w^{\#}$ are bounded below in $I$ by $C_{0}/2$. Then
\begin{eqnarray*}
&&\int_{I^{\prime }}\frac{1}{\sqrt{1-t^{2}}}\left\vert \frac{1}{w\left( t\right) }-\frac{1}{w^{\#}\left( t\right) }\right\vert dt \\
&\leq &C\int_{I^{\prime }}\left\vert w^{\#}\left( t\right) -w\left( t\right) \right\vert dt \\
&\leq &C\int_{I^{\prime }}\int_{-1}^{1}\left\vert w\left( t+s\Delta \right) -w\left( t\right) \right\vert ds\,dt \\
&=&C\int_{-1}^{1}\int_{I^{\prime }}\left\vert w\left( t+s\Delta \right) -w\left( t\right) \right\vert dt\,ds \\
&\leq &C\sup_{\left\vert u\right\vert \leq \Delta }\int_{I^{\prime }}\left\vert w\left( t+u\right) -w\left( t\right) \right\vert dt.
\end{eqnarray*}
$\blacksquare $
\newline\newline
\textbf{Proof of Theorem 1.6}\newline
As usual,
\begin{eqnarray*}
&&\int_{-1}^{1}\left( K_{n}-K_{n}^{\#}\right) ^{2}\left( x,t\right) d\mu ^{\#}\left( t\right)  \\
&=&\int_{-1}^{1}K_{n}^{\#2}\left( x,t\right) d\mu ^{\#}\left( t\right) -2\int_{-1}^{1}K_{n}^{\#}\left( x,t\right) K_{n}\left( x,t\right) d\mu ^{\#}\left( t\right) +\int_{-1}^{1}K_{n}^{2}\left( x,t\right) d\mu \left( t\right)  \\
&&+\int_{I}K_{n}^{2}\left( x,t\right) d\left( \mu ^{\#}-\mu \right) \left( t\right)  \\
&=&K_{n}^{\#}\left( x,x\right) -K_{n}\left( x,x\right) +\int_{I}K_{n}^{2}\left( x,t\right) d\left( \mu ^{\#}-\mu \right) \left( t\right)  \\
&\leq &K_{n}^{\#}\left( x,x\right) -K_{n}\left( x,x\right) +\int_{I}K_{n}^{2}\left( x,t\right) \left( w^{\#}-w\right) \left( t\right) dt,
\end{eqnarray*}
recalling that $\mu =\mu ^{\#}$ outside $I$ and that $\mu ^{\#}$ is absolutely continuous in $I$. Then the Christoffel function estimate (3.4) gives, for $y\in \left[ -1,1\right] $,
\begin{eqnarray}
&&\left\vert K_{n}-K_{n}^{\#}\right\vert \left( x,y\right)   \notag \\
&\leq &K_{n}^{\#}\left( y,y\right) ^{1/2}\left( K_{n}^{\#}\left( x,x\right) -K_{n}\left( x,x\right) +\int_{I}K_{n}^{2}\left( x,t\right) \left( w^{\#}-w\right) \left( t\right) dt\right) ^{1/2}.
\end{eqnarray}
We now replace $x$ by $x+\frac{a\pi \sqrt{1-x^{2}}}{n}$ and $y$ by $x+\frac{b\pi \sqrt{1-x^{2}}}{n}$, integrate over $I^{\prime }$, and then use the Cauchy--Schwarz inequality. We obtain
\begin{eqnarray}
&&\int_{I^{\prime }}\left\vert K_{n}-K_{n}^{\#}\right\vert \left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right) dx  \notag \\
&\leq &T_{1}^{1/2}T_{2}^{1/2},
\end{eqnarray}
where
\begin{equation*}
T_{1}=\int_{I^{\prime }}K_{n}^{\#}\left( x+\frac{b\pi \sqrt{1-x^{2}}}{n},x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right) dx
\end{equation*}
and
\begin{eqnarray}
T_{2} &=&\int_{I^{\prime }}\left( K_{n}^{\#}-K_{n}\right) \left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{a\pi \sqrt{1-x^{2}}}{n}\right) dx  \notag \\
&&+\int_{I^{\prime }}\left[ \int_{I}K_{n}^{2}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},t\right) \left( w^{\#}-w\right) \left( t\right) dt\right] dx  \notag \\
&=:&T_{21}+T_{22}.
\end{eqnarray}
Now let $A>0$ and $a,b\in \left[ -A,A\right] $. Choose a subinterval $I^{\prime \prime }$ of $I^{0}$ such that $I^{\prime }\subset \left( I^{\prime \prime }\right) ^{0}$. Observe that for some $n_{0}$ depending only on $A$ and $I^{\prime },I^{\prime \prime }$, we have
\begin{equation}
x+\frac{b\pi \sqrt{1-x^{2}}}{n}\in I^{\prime \prime }\text{ \ for }x\in I^{\prime },\;b\in \left[ -A,A\right] ,\;n\geq n_{0}.
\end{equation}
Then (c) of Lemma 5.1 shows that for $n\geq n_{0}$,
\begin{equation}
T_{1}\leq C_{2}n,
\end{equation}
where $C_{2}$ is independent of $n$ and $b\in \left[ -A,A\right] $. Next, we make the substitution $s=x+\frac{a\pi \sqrt{1-x^{2}}}{n}$ in $T_{21}$.
Observe that
\begin{equation*}
\frac{ds}{dx}=1-\frac{a\pi x}{n\sqrt{1-x^{2}}}\in \left[ \frac{1}{2},2\right]
\end{equation*}
for $n\geq n_{1}$, where $n_{1}$ depends only on $A$ and $I$. We can also assume that (5.9) holds for $n\geq n_{1}$. Hence for $n\geq \max \left\{ n_{0},n_{1}\right\} $ and all $a\in \left[ -A,A\right] $,
\begin{eqnarray*}
\left\vert T_{21}\right\vert  &\leq &\int_{I^{\prime }}\left\vert K_{n}^{\#}-K_{n}\right\vert \left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{a\pi \sqrt{1-x^{2}}}{n}\right) dx \\
&\leq &2\int_{I^{\prime \prime }}\left\vert K_{n}^{\#}-K_{n}\right\vert \left( s,s\right) \,ds,
\end{eqnarray*}
so, using (d), (e) of the above lemma,
\begin{equation*}
\limsup_{n\rightarrow \infty }\frac{1}{n}T_{21}\leq C\sup_{\left\vert u\right\vert \leq \Delta }\int_{I^{\prime \prime }}\left\vert w\left( t+u\right) -w\left( t\right) \right\vert dt,
\end{equation*}
where $C$ does not depend on $\Delta $. Next,
\begin{equation*}
\left\vert T_{22}\right\vert \leq \int_{I}\left\vert w-w^{\#}\right\vert \left( t\right) \left[ \int_{I^{\prime }}K_{n}^{2}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},t\right) dx\right] dt.
\end{equation*}
Here, for $n\geq \max \left\{ n_{0},n_{1}\right\} $,
\begin{eqnarray*}
&&\int_{I^{\prime }}K_{n}^{2}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},t\right) dx \\
&\leq &\frac{1}{C_{0}}\int_{I^{\prime }}K_{n}^{2}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},t\right) w\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n}\right) dx \\
&\leq &\frac{2}{C_{0}}\int_{I^{\prime \prime }}K_{n}^{2}\left( s,t\right) w\left( s\right) ds\leq \frac{2}{C_{0}}K_{n}\left( t,t\right) .
\end{eqnarray*}
Then, using (c), (e) of the previous lemma, we obtain
\begin{eqnarray*}
\left\vert T_{22}\right\vert  &\leq &Cn\int_{I}\left\vert w-w^{\#}\right\vert \left( t\right) dt \\
&\leq &Cn\sup_{\left\vert u\right\vert \leq \Delta }\int_{I^{\prime \prime }}\left\vert w\left( t+u\right) -w\left( t\right) \right\vert dt.
\end{eqnarray*}
Substituting all the above estimates in (5.7), we obtain
\begin{eqnarray*}
&&\limsup_{n\rightarrow \infty }\frac{1}{n}\int_{I^{\prime }}\left\vert K_{n}-K_{n}^{\#}\right\vert \left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right) dx \\
&\leq &C\left( \sup_{\left\vert u\right\vert \leq \Delta }\int_{I^{\prime \prime }}\left\vert w\left( t+u\right) -w\left( t\right) \right\vert dt\right) ^{1/2},
\end{eqnarray*}
uniformly for $a,b\in \left[ -A,A\right] $, where $C$ is independent of $\Delta $. Now, as $\mu ^{\#}$ is regular, is absolutely continuous in $I$, and $w^{\#}$ is continuous in $I^{0}$, Theorem 1.1 shows that
\begin{eqnarray*}
&&\lim_{n\rightarrow \infty }\frac{1}{n}K_{n}^{\#}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right)  \\
&=&\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }\frac{1}{\pi \sqrt{1-x^{2}}w^{\#}\left( x\right) },
\end{eqnarray*}
uniformly for $x\in I^{\prime }$ and $a,b\in \left[ -A,A\right] $.
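To make the passage to the next estimate transparent, we note the decomposition behind it, with all kernels evaluated at $\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right) $:
\begin{eqnarray*}
&&\frac{1}{n}K_{n}-\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }\frac{1}{\pi \sqrt{1-x^{2}}w\left( x\right) } \\
&=&\frac{1}{n}\left( K_{n}-K_{n}^{\#}\right) +\left[ \frac{1}{n}K_{n}^{\#}-\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }\frac{1}{\pi \sqrt{1-x^{2}}w^{\#}\left( x\right) }\right]  \\
&&+\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }\frac{1}{\pi \sqrt{1-x^{2}}}\left( \frac{1}{w^{\#}\left( x\right) }-\frac{1}{w\left( x\right) }\right) ;
\end{eqnarray*}
the first term is handled by the estimate just obtained, the middle bracket tends to $0$ uniformly on $I^{\prime }$ by the last display, and the third term is integrated directly.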
It follows that
\begin{eqnarray*}
&&\limsup_{n\rightarrow \infty }\int_{I^{\prime }}\left\vert \frac{1}{n}K_{n}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right) -\frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }\frac{1}{\pi \sqrt{1-x^{2}}w\left( x\right) }\right\vert dx \\
&\leq &\left\vert \frac{\sin \pi \left( a-b\right) }{\pi \left( a-b\right) }\right\vert \int_{I^{\prime }}\frac{1}{\pi \sqrt{1-x^{2}}}\left\vert \frac{1}{w^{\#}\left( x\right) }-\frac{1}{w\left( x\right) }\right\vert dx \\
&&+C\left( \sup_{\left\vert u\right\vert \leq \Delta }\int_{I^{\prime \prime }}\left\vert w\left( t+u\right) -w\left( t\right) \right\vert dt\right) ^{1/2},
\end{eqnarray*}
uniformly for $a,b\in \left[ -A,A\right] $, where $C$ is independent of $\Delta $. Since the left-hand side is independent of $\Delta $, we may apply (e) of the previous lemma, and then let $\Delta \rightarrow 0+$ to get the result. Of course, as $w$ is integrable, we have as $\Delta \rightarrow 0+$,
\begin{equation*}
\sup_{\left\vert u\right\vert \leq \Delta }\int_{I^{\prime \prime }}\left\vert w\left( t+u\right) -w\left( t\right) \right\vert dt\rightarrow 0.
\end{equation*}
$\blacksquare $

\section{Universality in $L_{p}$}

The case $p=1$ of Theorem 1.5 is an immediate consequence of Theorem 1.6 and the following lemma:\newline\newline
\textbf{Lemma 6.1}\newline
\textit{Let $A>0$ and let $I^{\prime }$ be a closed subinterval of $I^{0}$. As $n\rightarrow \infty $, uniformly for $a,b\in \left[ -A,A\right] $,}
\begin{eqnarray}
&&\frac{1}{n}\int_{I^{\prime }}\left\vert K_{n}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right) -K_{n}\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+\frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right) \right\vert dx  \notag \\
&\rightarrow &0.
\end{eqnarray}
\textbf{Proof}\newline
Choose a subinterval $I^{\prime \prime }$ of $I^{0}$ such that $I^{\prime }\subset \left( I^{\prime \prime }\right) ^{0}$. Define $r_{n}\left( x\right) $ by
\begin{equation*}
\frac{1}{\widetilde{K}_{n}\left( x,x\right) }=\frac{\pi \sqrt{1-x^{2}}}{n}r_{n}\left( x\right) .
\end{equation*}
Then the integrand in (6.1) may be written as
\begin{eqnarray*}
&&\left\vert
\begin{array}{c}
K_{n}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right)  \\
-K_{n}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n}r_{n}\left( x\right) ,x+\frac{b\pi \sqrt{1-x^{2}}}{n}r_{n}\left( x\right) \right)
\end{array}
\right\vert  \\
&\leq &\left\vert \frac{\partial }{\partial s}K_{n}\left( s,x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right) \right\vert _{s=\xi }\frac{\left\vert a\right\vert \pi \sqrt{1-x^{2}}}{n}\left\vert 1-r_{n}\left( x\right) \right\vert  \\
&&+\left\vert \frac{\partial }{\partial t}K_{n}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n}r_{n}\left( x\right) ,t\right) \right\vert _{t=\zeta }\frac{\left\vert b\right\vert \pi \sqrt{1-x^{2}}}{n}\left\vert 1-r_{n}\left( x\right) \right\vert ,
\end{eqnarray*}
where $\xi $ lies between $x+\frac{a\pi \sqrt{1-x^{2}}}{n}$ and $x+\frac{a\pi \sqrt{1-x^{2}}}{n}r_{n}\left( x\right) $, with a similar restriction on $\zeta $. Now, by Lemma 5.1(c) and Cauchy--Schwarz,
\begin{equation*}
\sup_{s,t\in I}\left\vert K_{n}\left( s,t\right) \right\vert \leq Cn.
\end{equation*}
By Bernstein's inequality \cite[p. 98, Corollary 1.2]{DeVoreLorentz1993},
\begin{equation*}
\sup_{s\in I^{\prime \prime },t\in I}\left\vert \frac{\partial }{\partial s}K_{n}\left( s,t\right) \right\vert \leq C_{1}Cn^{2},
\end{equation*}
with a similar bound for $\frac{\partial }{\partial t}K_{n}$. Here $C_{1}$ depends only on $I$ and $I^{\prime \prime }$. Then for some $C_{2}$ independent of $a,b,n,x$,
\begin{eqnarray*}
&&\frac{1}{n}\left\vert
\begin{array}{c}
K_{n}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right)  \\
-K_{n}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n}r_{n}\left( x\right) ,x+\frac{b\pi \sqrt{1-x^{2}}}{n}r_{n}\left( x\right) \right)
\end{array}
\right\vert  \\
&\leq &C_{2}\left\vert 1-r_{n}\left( x\right) \right\vert .
\end{eqnarray*}
Hence the integral in the left-hand side of (6.1) is bounded above by
\begin{equation*}
C\int_{I^{\prime }}\left\vert 1-r_{n}\left( x\right) \right\vert dx.
\end{equation*}
Of course, $C$ is independent of $n$. Next, by \cite[p. 449, Thm. 8]{Mateetal1991} (cf.\ Theorem 2.1),
\begin{equation}
r_{n}\left( x\right) =\frac{n}{K_{n}\left( x,x\right) w\left( x\right) \pi \sqrt{1-x^{2}}}\rightarrow 1\text{ \ a.e.\ in }I.
\end{equation}
We shall shortly show that
\begin{equation}
r_{n}\left( x\right) \leq C\text{ \ for }x\in I^{\prime }\text{ and }n\geq n_{0}.
\end{equation}
Then Lebesgue's Dominated Convergence Theorem shows that
\begin{equation*}
\lim_{n\rightarrow \infty }\int_{I^{\prime }}\left\vert 1-r_{n}\left( x\right) \right\vert dx=0.
\end{equation*}
To prove (6.3), choose $M>0$ such that $w\leq M$ in $I$. Define a measure $\mu ^{\ast }$ by
\begin{eqnarray*}
d\mu  &=&d\mu ^{\ast }\text{ \ in }\left[ -1,1\right] \backslash I; \\
d\mu ^{\ast }\left( x\right)  &=&Mdx\text{ \ in }I.
\end{eqnarray*}
\end{eqnarray*} Then $d{\Greekmath 0116} \leq d{\Greekmath 0116} ^{\ast }$ in $\left[ -1,1\right] $ so ${\Greekmath 0115} _{n}\leq {\Greekmath 0115} _{n}^{\ast }$ in $\left[ -1,1\right] $. As the absolutely continuous component of ${\Greekmath 0116} ^{\ast }$ is positive and continuous in $I$, Theorem 2.1 shows that for some $C>0$, \begin{equation*} {\Greekmath 0115} _{n}^{\ast }\left( x\right) \leq \frac{C}{n}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ for }x\in I^{\prime }\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ and }n\geq 1, \end{equation*} and then \begin{equation} \frac{n}{K_{n}\left( x,x\right) }=n{\Greekmath 0115} _{n}\left( x\right) \leq C\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ for }x\in I^{\prime }\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ and }n\geq 1. \end{equation} The definition (6.2) of $r_{n}$, the fact that $w$ is bounded below in $I$, and this last inequality, give (6.3). $\blacksquare $\newline \newline \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Proof of Theorem 1.5}\newline As $w$ is bounded above and below in $I$, the lemma and Theorem 1.6 give uniformly for $a,b\in \left[ -A,A\right] ,$ \begin{equation*} \lim_{n\rightarrow \infty }\int_{I^{\prime }}\left\vert K_{n}\left( x+\frac{a }{\tilde{K}_{n}\left( x,x\right) },x+\frac{b}{\tilde{K}_{n}\left( x,x\right) }\right) \frac{w\left( x\right) {\Greekmath 0119} \sqrt{1-x^{2}}}{n}-\frac{\sin {\Greekmath 0119} \left( a-b\right) }{{\Greekmath 0119} \left( a-b\right) }\right\vert dx=0. \end{equation*} Now a.e. in $I,$ \begin{equation*} \frac{1}{K_{n}\left( x,x\right) }=\frac{w\left( x\right) {\Greekmath 0119} \sqrt{1-x^{2}}}{ n}\left( 1+o\left( 1\right) \right) . \end{equation*} Moreover, by (6.4), Lemma 5.1(c), and Cauchy-Schwarz, both $\frac{1}{n} K_{n}\left( x+\frac{a}{\tilde{K}_{n}\left( x,x\right) },x+\frac{b}{\tilde{K} _{n}\left( x,x\right) }\right) $ and $K_{n}\left( x+\frac{a}{\tilde{K} _{n}\left( x,x\right) },x+\frac{b}{\tilde{K}_{n}\left( x,x\right) }\right) /K_{n}\left( x,x\right) $ are bounded above uniformly for $a,b\in \left[ -A,A \right] ,$ $x\in I^{\prime }$, and $n\geq n_{0}$. We deduce that \begin{equation*} \lim_{n\rightarrow \infty }\int_{I^{\prime }}\left\vert K_{n}\left( x+\frac{a }{\tilde{K}_{n}\left( x,x\right) },x+\frac{b}{\tilde{K}_{n}\left( x,x\right) }\right) /K_{n}\left( x,x\right) -\frac{\sin {\Greekmath 0119} \left( a-b\right) }{{\Greekmath 0119} \left( a-b\right) }\right\vert dx=0. \end{equation*} Finally, as we have just noted, the integrand in the last integral is bounded above uniformly for $a,b\in \left[ -A,A\right] ,$ $x\in I^{\prime }$ , and $n\geq n_{0}$, so we may replace the first power by the $p$th power, for any $p>1$. For $p<1$, we can use H\"{o}lder's inequality. $\blacksquare $ \newline \newline In proving Theorem 1.4, our last step is to replace $\frac{K_{n}\left( x+ \frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+\frac{b}{\widetilde{K} _{n}\left( x,x\right) }\right) }{K_{n}\left( x,x\right) }$ by $\frac{ \widetilde{K}_{n}\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+ \frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right) }{\widetilde{K} _{n}\left( x,x\right) }$. This is more difficult than one might expect - it is only here that we need Riemann integrability of $w$ in $I$. 
For general Lebesgue measurable $w$, it seems difficult to deal with the factor $ \widetilde{K}_{n}\left( x,x\right) =w\left( x\right) K_{n}\left( x,x\right) $ below.\newline \newline \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Lemma 6.2} \newline \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{Assume that }$w$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{\ is Riemann integrable and bounded below by a positive constant in }$I$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{. Let }$I^{\prime }$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{\ be a compact subinterval of }$I$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{. Let }$p,A>0$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{. Then uniformly for }$a,b\in \left[ -A,A\right] $\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{, we have} \begin{equation*} \lim_{n\rightarrow \infty }\int_{I^{\prime }}\left\vert \sqrt{w\left( x+ \frac{a}{\widetilde{K}_{n}\left( x,x\right) }\right) w\left( x+\frac{b}{ \widetilde{K}_{n}\left( x,x\right) }\right) }/w\left( x\right) -1\right\vert ^{p}dx=0. \end{equation*} \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Proof}\newline Let $a,b\in \left[ -A,A\right] $. From (6.4), for a suitable integer $n_{0}$ and some $L>0$, we have \begin{equation*} \left\vert \frac{a}{\widetilde{K}_{n}\left( x,x\right) }\right\vert \leq \frac{L}{n}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ and }\left\vert \frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right\vert \leq \frac{L}{n}, \end{equation*} uniformly for $x\in I^{\prime }$, $a,b\in \left[ -A,A\right] $, and $n\geq n_{0}$. Next, as $w$ is Riemann integrable in $I$, it is continuous a.e. in $ I$ \cite[p. 23]{RieszNagy1990}. For $x\in I$ and $n\geq 1,$ let \begin{equation*} \Omega _{n}\left( x\right) =\sup \left\{ \left\vert w\left( x+s\right) -w\left( x\right) \right\vert :\left\vert s\right\vert \leq \frac{L}{n} \right\} . \end{equation*} Note that for $x\in I^{\prime },n\geq n_{0}$ and $a,b\in \left[ -A,A\right] $ , \begin{equation*} \left\vert w\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) }\right) -w\left( x\right) \right\vert \leq \Omega _{n}\left( x\right) . \end{equation*} We have at every point of continuity of $w$ and in particular for a.e. $x\in I$, \begin{equation*} \lim_{n\rightarrow \infty }\Omega _{n}\left( x\right) =0. \end{equation*} Moreover, as $w$ is Riemann integrable, $\Omega _{n}$ is bounded above in $I$ , uniformly in $n$. Then Lebesgue's Dominated Convergence Theorem gives uniformly for $a\in \left[ -A,A\right] ,$ \begin{eqnarray*} &&\int_{I^{\prime }}\left\vert w\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) }\right) -w\left( x\right) \right\vert ^{p}dx \\ &\leq &\int_{I^{\prime }}\Omega _{n}\left( x\right) ^{p}dx\rightarrow 0\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ , }n\rightarrow \infty . \end{eqnarray*} This, the fact that $w$ is bounded above and below, and some elementary manipulations, give the result. 
$\blacksquare $\newline \newline \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Proof of Theorem 1.4}\newline Since $\frac{K_{n}\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+ \frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right) }{K_{n}\left( x,x\right) }$ is bounded uniformly in $n,x,a,b$ (over the relevant ranges) and \begin{eqnarray*} &&\frac{\widetilde{K}_{n}\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) },x+\frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right) }{ \widetilde{K}_{n}\left( x,x\right) }/\frac{K_{n}\left( x+\frac{a}{\widetilde{ K}_{n}\left( x,x\right) },x+\frac{b}{\widetilde{K}_{n}\left( x,x\right) } \right) }{K_{n}\left( x,x\right) } \\ &=&\sqrt{w\left( x+\frac{a}{\widetilde{K}_{n}\left( x,x\right) }\right) w\left( x+\frac{b}{\widetilde{K}_{n}\left( x,x\right) }\right) }/w\left( x\right) , \end{eqnarray*} this follows directly from the lemma above and Theorem 1.5. $\blacksquare $ \section{Proof of Corollaries 1.2 and 1.3} \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Proof of Corollary 1.2}\newline This follows directly by substituting (1.6) into the determinant defining $ R_{m}$. $\blacksquare $\newline \newline In proving Corollary 1.3, we need \newline \newline \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Lemma 7.1}\newline \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{Let }$w\geq C$ \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{in} $I$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{\ \ and }$I^{\prime },I^{\prime \prime }$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{\ be closed subintervals of }$I^{0}$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{\ such that }$I^{\prime }$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{\ is contained in the interior of} $ I^{\prime \prime }$. \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{Let }$A>0$. \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{There exists }$C_{2}$ \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{\ such that for }$n\geq 1,x\in I^{\prime },$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{\ and all }$ {\Greekmath 010B} ,{\Greekmath 010C} \in \mathbb{C}$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{\ with }$\left\vert {\Greekmath 010B} \right\vert ,\left\vert {\Greekmath 010C} \right\vert \leq A$\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{, } \begin{equation} \left\vert \frac{1}{n}K_{n}\left( x+\frac{{\Greekmath 010B} }{n},x+\frac{{\Greekmath 010C} }{n} \right) \right\vert \leq C_{2}. \end{equation} \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{\newline }\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Proof}\newline Recall that $\frac{1}{n}K_{n}\left( x,x\right) $ is uniformly bounded above for $x\in I^{\prime }$ by Lemma 5.1(c). Applying Cauchy-Schwarz, we obtain for $x,y\in I^{\prime \prime },$ \begin{equation} \frac{1}{n}\left\vert K_{n}\left( x,y\right) \right\vert \leq \sqrt{\frac{1}{ n}K_{n}\left( x,x\right) }\sqrt{\frac{1}{n}K_{n}\left( y,y\right) }\leq C_{1}. \end{equation} Next we note Bernstein's growth lemma for polynomials in the plane \cite[ Theorem 2.2, p. 101]{DeVoreLorentz1993}: if $P$ is a polynomial of degree $ \leq n$, we have for $z\notin \left[ -1,1\right] $, \begin{equation*} \left\vert P\left( z\right) \right\vert \leq \left\vert z+\sqrt{z^{2}-1} \right\vert ^{n}\left\Vert P\right\Vert _{L_{\infty }\left[ -1,1\right] }. 
\end{equation*} From this we deduce that given $L>0$, and $0<{\Greekmath 010E} <1$, there exists $ C_{2}\neq C_{2}\left( n,P,z\right) $ such that for $\left\vert \func{Re} \left( z\right) \right\vert \leq {\Greekmath 010E} $, and $\left\vert \func{Im} z\right\vert \leq \frac{L}{n}$ \begin{equation*} \left\vert P\left( z\right) \right\vert \leq C_{2}\left\Vert P\right\Vert _{L_{\infty }\left[ -1,1\right] }. \end{equation*} Mapping this to $I$ by a linear transformation, we deduce that for $\func{Re} z\in I^{\prime }$ and $\left\vert \func{Im}z\right\vert \leq \frac{L}{n},$ \begin{equation*} \left\vert P\left( z\right) \right\vert \leq C_{3}\left\Vert P\right\Vert _{L_{\infty }\left( I^{\prime \prime }\right) } \end{equation*} where $C_{3}\neq C_{3}\left( n,P,z\right) $. We now apply this to $\frac{1}{n }K_{n}\left( x,y\right) $, separately in each variable, obtaining the stated result. $\blacksquare $\newline \newline \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Proof of Corollary 1.3}\newline Note first from the lemma, $\left\{ \frac{1}{n}K_{n}\left( x+\frac{{\Greekmath 010B} }{n },x+\frac{{\Greekmath 010C} }{n}\right) \right\} _{n=1}^{\infty }$ is analytic in $ {\Greekmath 010B} ,{\Greekmath 010C} $ and uniformly bounded for ${\Greekmath 010B} ,{\Greekmath 010C} $ in compact subsets of the plane. Moreover, from (4.8), and continuity of $w$, \begin{equation*} \lim_{n\rightarrow \infty }\frac{1}{n}w\left( x\right) K_{n}\left( x+\frac{ {\Greekmath 010B} }{n},x+\frac{{\Greekmath 010C} }{n}\right) =\frac{\sin \left( \left( {\Greekmath 010B} -{\Greekmath 010C} \right) /\sqrt{1-x^{2}}\right) }{{\Greekmath 0119} \left( {\Greekmath 010B} -{\Greekmath 010C} \right) } \end{equation*} uniformly for $x\in I^{\prime }$ and ${\Greekmath 010B} ,{\Greekmath 010C} $ in compact subsets of $ I^{\prime }$. By convergence continuation theorems, this last limit then holds uniformly for ${\Greekmath 010B} ,{\Greekmath 010C} $ in compact subsets of the plane. Next, expanding $p_{k}\left( x+\frac{{\Greekmath 010B} }{n}\right) $ and $p_{k}\left( x+\frac{ {\Greekmath 010C} }{n}\right) $ in Taylor series about $x,$ \begin{eqnarray*} \frac{1}{n}K_{n}\left( x+\frac{{\Greekmath 010B} }{n},x+\frac{{\Greekmath 010C} }{n}\right) &=& \frac{1}{n}\sum_{k=0}^{n-1}p_{k}\left( x+\frac{{\Greekmath 010B} }{n}\right) p_{k}\left( x+\frac{{\Greekmath 010C} }{n}\right) \\ &=&\frac{1}{n}\sum_{r,s=0}^{\infty }\frac{\left( \frac{{\Greekmath 010B} }{n}\right) ^{r}}{r!}\frac{\left( \frac{{\Greekmath 010C} }{n}\right) ^{s}}{s!} \sum_{k=0}^{n-1}p_{k}^{\left( r\right) }\left( x\right) p_{k}^{\left( s\right) }\left( x\right) \\ &=&\sum_{r,s=0}^{\infty }\frac{{\Greekmath 010B} ^{r}}{r!}\frac{{\Greekmath 010C} ^{s}}{s!}\frac{1}{ n^{r+s+1}}K_{n}^{\left( r,s\right) }\left( x,x\right) , \end{eqnarray*} with the notation (1.9). Since the series terminates, the interchanges are valid. By using the Maclaurin series of $\sin $ and the binomial theorem, we see that \begin{equation*} \frac{\sin \left( {\Greekmath 010B} -{\Greekmath 010C} \right) }{{\Greekmath 010B} -{\Greekmath 010C} }=\sum_{r,s=0}^{ \infty }\frac{{\Greekmath 010B} ^{r}}{r!}\frac{{\Greekmath 010C} ^{s}}{s!}{\Greekmath 011C} _{r,s}, \end{equation*} where ${\Greekmath 011C} _{r,s}$ is given by (1.10). 
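For the reader's convenience, we record the routine computation behind this expansion (an added remark; the identity itself is exactly the one stated above):
\begin{equation*}
\frac{\sin \left( \alpha -\beta \right) }{\alpha -\beta }=\sum_{m=0}^{\infty }\frac{\left( -1\right) ^{m}\left( \alpha -\beta \right) ^{2m}}{\left( 2m+1\right) !}=\sum_{m=0}^{\infty }\frac{\left( -1\right) ^{m}}{\left( 2m+1\right) !}\sum_{r+s=2m}\binom{2m}{r}\alpha ^{r}\left( -\beta \right) ^{s},
\end{equation*}
so the coefficient of $\frac{\alpha ^{r}}{r!}\frac{\beta ^{s}}{s!}$ equals $\frac{\left( -1\right) ^{\left( r+s\right) /2+s}}{r+s+1}$ when $r+s$ is even and $0$ when $r+s$ is odd; by uniqueness of the coefficients of a double power series, this is the quantity denoted $\tau _{r,s}$ in (1.10).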
Since uniformly convergent sequences of analytic functions have Taylor series that also converge, we see that for $x\in I$ and each $r,s\geq 0$,
\begin{equation*}
\lim_{n\rightarrow \infty }\frac{1}{n^{r+s+1}}w\left( x\right) K_{n}^{\left( r,s\right) }\left( x,x\right) =\frac{\tau _{r,s}}{\pi }\left( 1-x^{2}\right) ^{-\left( r+s\right) /2}.
\end{equation*}
This establishes the limit (1.11), but we must still prove uniformity. Let $A,\varepsilon >0$. By the uniform convergence in Theorem 1.1, there exists $n_{0}$ such that for $n\geq n_{0}$,
\begin{eqnarray}
&&\left\vert
\begin{array}{c}
\frac{w\left( x\right) \sqrt{1-x^{2}}}{n}K_{n}\left( x+\frac{a\pi \sqrt{1-x^{2}}}{n},x+\frac{b\pi \sqrt{1-x^{2}}}{n}\right) \\
-\frac{w\left( y\right) \sqrt{1-y^{2}}}{n}K_{n}\left( y+\frac{a\pi \sqrt{1-y^{2}}}{n},y+\frac{b\pi \sqrt{1-y^{2}}}{n}\right)
\end{array}
\right\vert \notag \\
&\leq &\varepsilon ,
\end{eqnarray}
uniformly for $x,y\in I$, $a,b\in \left[ -A,A\right] $ and $n\geq n_{0}$. Using Bernstein's growth inequality as in the lemma above, applied to the polynomial in the left-hand side of (7.3), we obtain that this inequality persists for complex $\alpha ,\beta $ with $\left\vert \alpha \right\vert ,\left\vert \beta \right\vert \leq A$, except that we must replace $\varepsilon $ by $C\varepsilon $, where $C$ depends only on $A$, not on $n,x,a,b,\varepsilon $. We can now use Cauchy's inequalities to bound the Taylor series coefficients of the double series in $a,b$ implicit in the left-hand side of (7.3). This leads to bounds on
\begin{equation*}
\left\vert \frac{1}{n^{r+s+1}}w\left( x\right) K_{n}^{\left( r,s\right) }\left( x,x\right) -\frac{1}{n^{r+s+1}}w\left( y\right) K_{n}^{\left( r,s\right) }\left( y,y\right) \right\vert
\end{equation*}
that are uniform in $x,y$. $\blacksquare $ \end{document}
math
82,046
\begin{document}
\title{Complete manifolds with nonnegative curvature operator}
\author{Lei Ni}\thanks{The first author was supported in part by NSF Grants and an Alfred P. Sloan Fellowship, USA}
\address{Department of Mathematics, University of California at San Diego, La Jolla, CA 92093}
\email{[email protected]}
\author{Baoqiang Wu}
\address{Department of Mathematics, Xuzhou Normal University, Xuzhou, Jiangsu, China}
\email{[email protected]}
\date{June 2006}
\begin{abstract} In this short note, as a simple application of the strong result proved recently by B\"ohm and Wilking, we give a classification of closed manifolds with $2$-nonnegative curvature operator. Moreover, by the new invariant cone constructions of B\"ohm and Wilking, we show that any complete Riemannian manifold (with dimension $\ge 3$) whose curvature operator is bounded and satisfies the pinching condition $R\ge \delta R_{I}>0$, for some $\delta>0$, must be compact. This provides an intrinsic analogue of a result of Hamilton on convex hypersurfaces.
\end{abstract}
\keywords{}
\maketitle

\section{Introduction}

Let $(M, g)$ be a Riemannian manifold. The curvature operator of $(M, g)$ lies in the subspace $S_B^2(\wedge^2TM)$ of $S^2(\wedge^2TM)$ cut out by the Bianchi identity. The decomposition $S_B^2(\wedge^2TM)=\langle \operatorname{I}\rangle \oplus \langle \operatorname{Ric}_0\rangle \oplus \langle \operatorname{W}\rangle$ splits the space of algebraic curvature operators into $O(n)$-invariant orthogonal irreducible subspaces. For an orthonormal basis $\phi_{\alpha}$ (say $\phi_\alpha=e_i\wedge e_j$) of $\wedge^2TM$ (which can be identified with $so(n)$), the Lie bracket is given in terms of
$$
[\phi_\alpha, \phi_\beta]=c_{\alpha \beta \gamma} \phi_\gamma.
$$
It is easy to check, by simple linear algebra, that $\langle [\phi, \psi],\omega\rangle =-\langle [\omega, \psi], \phi\rangle$. Here $\langle A, B\rangle=-\frac{1}{2}\operatorname{tr}(AB)$. This immediately implies that $c_{\alpha \beta\gamma}$ is anti-symmetric. If $A, B\in S^2(\wedge^2TM)$ one can define
$$
(A\#B)_{\alpha \beta}=\frac{1}{2}c_{\alpha \gamma \eta}c_{\beta \delta \theta}A_{\gamma \delta}B_{\eta \theta}.
$$
It is easy to see that $A\#B$ is symmetric too. Also, from the anti-symmetry of $c_{\alpha\beta\gamma}$, $A\#B=B\#A$. In \cite{BW}, a remarkable algebraic identity was proved on how a linear transformation of $S_B^2(\wedge^2 TM)$ changes the quadratic form $Q(R)=R^2+R^\#$. B\"ohm and Wilking then constructed a continuous {\it pinching family of invariant closed convex cones}. Using this construction they confirmed a conjecture of Hamilton stating that {\it on a compact manifold the normalized Ricci flow evolves a Riemannian metric with $2$-positive curvature operator to a limit metric with constant sectional curvature}. Hence it gives a complete topological classification of compact manifolds with $2$-positive curvature operator. In this short note, based on the strong result and the techniques of \cite{BW}, we give a classification of manifolds with $2$-nonnegative curvature operator and an application of their invariant cone constructions to the compactness of Riemannian manifolds with pinched curvature operator.
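As a quick verification of the anti-symmetry claim made above (a routine check added here for convenience; it is the ``simple linear algebra'' referred to): since the basis $\phi_\alpha$ is orthonormal, $c_{\alpha\beta\gamma}=\langle[\phi_\alpha,\phi_\beta],\phi_\gamma\rangle$. Anti-symmetry in the first two indices is immediate from $[\phi_\alpha,\phi_\beta]=-[\phi_\beta,\phi_\alpha]$, while the identity above gives
$$
c_{\alpha\beta\gamma}=\langle[\phi_\alpha,\phi_\beta],\phi_\gamma\rangle=-\langle[\phi_\gamma,\phi_\beta],\phi_\alpha\rangle=-c_{\gamma\beta\alpha},
$$
i.e., anti-symmetry under exchanging the first and third indices; together these two symmetries generate anti-symmetry in all indices.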
\section{A strong maximum principle}

Let $(M, g(t))$ be a complete solution to the Ricci flow such that there exists a constant $A$ for which the curvature tensor of $g(t)$ satisfies $|R_{ijkl}|^2(x,t) \le A$ for all $(x, t)\in M\times[0, T]$. In \cite{H86}, Hamilton proved that under the evolving normal frame the curvature tensor satisfies the following evolution equation.
\begin{proposition}[Hamilton]
\begin{equation}\label{ham861}
\left( \frac{\partial}{\partial t}-\Delta\right) R=2\left( R^{2}+R^{\#}\right)
\end{equation}
where $R^{\#}=R\#R$.
\end{proposition}
The following was observed for compact manifolds in \cite{Chen, H90}. We spell out the argument for the noncompact case for the sake of completeness.
\begin{proposition}\label{chen1} The convex cone of $2$-nonnegative curvature operators is preserved under the Ricci flow.
\end{proposition}
\begin{proof}
Let $\operatorname{I}$ be the identity of $S_B^2(\wedge^2TM)$, which can be identified with the induced metric on $\wedge^2TM$ (as a section of $\wedge^2TM\otimes \wedge^2TM$). We also denote the identity map of $TM$ by $\operatorname{id}$. With respect to the evolving normal frame we have that $\nabla \operatorname{I}=0$ and $\frac{\partial}{\partial t}\operatorname{I}=0$. Let $\psi(x,t)> 0$ be the fast-growing function constructed in Lemma 1.1 of \cite{NT1} satisfying $\frac{\partial}{\partial t}\psi-\Delta \psi \ge C_1 \psi$. Here $C_1$ can be chosen arbitrarily large. We shall consider $\tilde R=R+\epsilon \psi \operatorname{I}$ and show that $\tilde R$ is $2$-positive for every (sufficiently small) $\epsilon$. If not, then by the boundedness of $R$ and the growth of $\psi$, the failure can only occur at a finite point. Assume that $t_0$ is the first time $\tilde R$ fails to be $2$-positive and that this happens at some point $x_0$. Choose an orthonormal basis $\omega_\alpha$ (which need not be of the form $e_i\wedge e_j$, unlike $\phi_\alpha$) such that $\tilde R$ is diagonal (so is $R$) with eigenvalues $\mu_1\le \mu_2\le\cdots \le \mu_N$, where $N=\frac{n(n-1)}{2}$. Parallel translate $\omega_\alpha$ to a neighborhood of $(x_0, t_0)$, and let $\tilde R_{\alpha\alpha}=\langle \tilde R(\omega_\alpha), \omega_\alpha\rangle$; then at $(x_0, t_0)$ we have, by the maximum principle, that
\begin{eqnarray*}
0&\ge& \left(\frac{\partial}{\partial t} -\Delta \right)\left(\tilde R_{11}+\tilde R_{22}\right)\\
&\ge& (R^2+R^\#)_{11}+(R^2+R^\#)_{22}+2\epsilon C_1\psi\\
&=& (\tilde R^2+\tilde R^\#)_{11}+(\tilde R^2+\tilde R^\#)_{22}+2\epsilon C_1\psi\\
&\quad &+\left(R^2+R^\#-\tilde R^2-\tilde R^\#\right)_{11}+\left(R^2+R^\#-\tilde R^2-\tilde R^\#\right)_{22} \\
&=&\mu_1^2 +\mu_2^2+\sum (c^2_{1\beta\gamma}+c^2_{2\beta\gamma})\mu_\beta\mu_\gamma+2\epsilon C_1\psi \\
&\quad & -\epsilon \psi\left(\left(2\operatorname{Ric}\wedge \operatorname{id} +(n-1)\epsilon \psi \operatorname{I}\right)_{11}+\left(2\operatorname{Ric}\wedge \operatorname{id} +(n-1)\epsilon \psi \operatorname{I}\right)_{22}\right).
\end{eqnarray*}
Here in the last equation above we have used Lemma 2.1 of \cite{BW}, which asserts that $R+R\#\operatorname{I}=\operatorname{Ric}\wedge \operatorname{id}$ (the use is not really necessary). Since $\mu_1+\mu_2\ge 0$ and $\mu_\gamma \ge 0$ for all $\gamma\ge 2$,
$$
\sum (c^2_{1\beta\gamma}+c^2_{2\beta\gamma})\mu_\beta\mu_\gamma=2\sum_{\gamma \ge 3}(c^2_{12\gamma}+c^2_{21\gamma})(\mu_1+\mu_2)\mu_\gamma+\sum_{\beta, \gamma \ge 3}(c^2_{1\beta\gamma}+c^2_{2\beta\gamma})\mu_\beta\mu_\gamma\ge 0.
$$
Notice also that at $(x_0, t_0)$ we have $\mu_{1}+\mu_{2}=0$, which implies that $R_{11}+R_{22}=-2\epsilon \psi$, so that $2\epsilon \psi(x_0, t_0)\le 2A$. Hence at $(x_0, t_0)$ we have that
\begin{eqnarray}\label{str1}
0&\ge& \left(\frac{\partial}{\partial t} -\Delta \right)\left(\tilde R_{11}+\tilde R_{22}\right)\nonumber\\
&\ge &\mu_1^2 +\mu_2^2+2\epsilon C_1\psi-200nA\epsilon \psi.
\end{eqnarray}
This is a contradiction if we choose $C_1>100nA$.
\end{proof}
By choosing the barrier function more carefully as in \cite{NT2, N04} (see for example Theorem 2.1 of \cite{N04}), we obtain the following strong maximum principle.
\begin{corollary}\label{strong1} Assume that $R(g(0))$ is $2$-nonnegative and $2$-positive somewhere. Then there exists $f(x,t)>0$ for $t>0$ and $f(x,0)\ge \frac{1}{2}(\mu_1+\mu_2)$, such that
$$\left(\mu_{1}+\mu_{2}\right)(x,t)\ge f(x, t).$$
In particular, if $R(g(0))$ is $2$-nonnegative and $(\mu_1+\mu_2)(x_0, t_0)=0$ for some $t_0\ge 0$, then $(\mu_1+\mu_2)(x, t)\equiv 0$ for all $(x, t)$ with $t\le t_0$. Moreover, $\mu_1(x, t)=\mu_2(x,t)=0$ for all $(x, t)$ with $t\le t_0$ and
$$
\mathcal{N}_2(x, t)=\mbox{span}\{\omega_1, \omega_2\}
$$
is a distribution on $M$ which is invariant under parallel translation.
\end{corollary}
The above result together with (\ref{str1}) implies the following classification of closed $2$-nonnegative manifolds.
\begin{corollary}\label{topo} Assume that $R(g(0))$ is $2$-nonnegative. Then for $t>0$, either the curvature operator $R(g(t))$ is $2$-positive, or $R(g(t))\ge 0$. Hence, suppose $\left( M^{n},g_{0}\right) $ is a closed Riemannian manifold with $2$-nonnegative curvature operator. Let $\tilde{g}\left( t\right) $ be the lift to the universal cover $\tilde{M}$ of the solution $g\left( t\right) $ to the Ricci flow with $g\left( 0\right) =g_{0}.$ Then for any $t>0$, either $\left( \tilde{M}^{n},\tilde{g}\left( t\right) \right) $ is a closed manifold with $2$-positive curvature operator or it is isometric to a product of the following:
\begin{enumerate}
\item Euclidean space,
\item closed symmetric space,
\item closed Riemannian manifold with positive curvature operator,
\item closed K\"{a}hler manifold with positive curvature operator on real $\left( 1,1\right) $-forms.
\end{enumerate}
\end{corollary}
\begin{proof}
It follows from the above corollary and Hamilton's classification result on the solutions with nonnegative curvature operator. See for example \cite{CLN}, Theorem 7.34.
\end{proof}
Topologically, it is now known, by \cite{BW}, that simply-connected $2$-positive manifolds are spheres, and the K\"ahler manifold in the last case is biholomorphic to the complex projective space by the earlier result of Mori--Siu--Yau. The fact that the curvature operator of the evolving metrics becomes either $2$-positive or nonnegative has been observed in \cite{Chen}. However, in \cite{Chen} there is no clear statement of the strong maximum principle, namely Corollary \ref{strong1}, on which the observation relies. Invoking Theorem 2.3 of \cite{N04}, the splitting result for solutions of the Ricci flow on complete Riemannian manifolds with nonnegative curvature operator, we can write a similar statement even when $M$ is not assumed to be compact. However, in this case the Euclidean factor is only topological (not isometric). Also, we do not know whether a complete noncompact $2$-positive Riemannian manifold is diffeomorphic to $\Bbb R^n$ or not.

\section{Manifolds with pinched curvature}

In \cite{H91} Hamilton proved that any convex hypersurface (with dimension $\ge 3$) in Euclidean space with second fundamental form $h_{ij}\ge \delta \frac{\operatorname{tr}(h)}{n}\operatorname{id}$ must be compact. In \cite{CZ}, using the pre-established estimates of \cite{Hu} and \cite{Sh2}, Chen and Zhu proved the following weaker version of the above-mentioned result of Hamilton in terms of curvature operators. Namely, they proved that {\it if a complete Riemannian manifold $(M^n, g)$ (with $n\ge 3$) has bounded and $(\epsilon, \delta_n)$-pinched curvature operator in the sense that
$$
|R_{\operatorname{W}}|^2+|R_{\operatorname{Ric}_0}|^2\le \delta_n (1-\epsilon)^2|R_I|^2=\delta_n (1-\epsilon)^2\frac{2}{n(n-1)}{\operatorname{Scal} (R)}^2
$$
for some $\epsilon>0$, where $\delta_3>0$, $\delta_4=\frac{1}{5}$, $\delta_5=\frac{1}{10}$ and $\delta_n =\frac{2}{(n-2)(n+1)}$, and where $R_{\operatorname{W}}$, $R_{\operatorname{Ric}_0}$ and $R_{\operatorname{I}}$ denote the Weyl curvature part, the traceless Ricci part and the scalar curvature part, then $M$ must be compact.} The strong pinching condition was the one originally assumed in \cite{Hu} to obtain various estimates and the smooth convergence result. It was also shown in \cite{Hu} that it implies that $R\ge \epsilon R_{I}$. In \cite{N05} the first author showed that the above result of Chen-Zhu can also be obtained by the blow-up analysis of \cite{H90} and some non-existence results on gradient steady and expanding solitons obtained in \cite{N05}. (The detailed proofs of these non-existence results were submitted to the 2004 ICCM proceedings a while ago. See also the forthcoming book \cite{CLN}.) With the help of a family of invariant cones constructed in \cite{BW}, we can now prove the following general result.
\begin{theorem}\label{ham} Let $(M^n, g_0)$ be a complete Riemannian manifold with $n\ge 3$. Assume that the curvature operator of $M$ is uniformly bounded ($|R_{ijkl}|(x)\le A$) and satisfies
\begin{equation}\label{pin1}
R\ge \delta R_{\operatorname{I}}>0
\end{equation}
for some $\delta>0$. Then $(M, g)$ must be compact.
\end{theorem}
Recall that $R_{\operatorname{I}}=\frac{1}{n(n-1)}\operatorname{Scal}(R)\operatorname{I}$, where $\operatorname{I}$ is the identity of $S_B^2(so(n))$. The above result is a natural analogue of Hamilton's result for hypersurfaces.
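To unpack the pinching condition (an added observational remark, not part of the original argument): since $R_{\operatorname{I}}=\frac{1}{n(n-1)}\operatorname{Scal}(R)\operatorname{I}$, the assumption (\ref{pin1}) says precisely that every eigenvalue of the curvature operator is at least $\frac{\delta}{n(n-1)}\operatorname{Scal}(R)>0$; in particular the curvature operator is positive definite and its smallest eigenvalue is bounded below by a fixed fraction, depending only on $\delta$ and $n$, of the scalar curvature.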
\begin{proof}
Let $g(t)$ be the solution to the Ricci flow with initial metric $g_0$ constructed in \cite{Sh1}. First we show that if $M$ is noncompact, $g(t)$ can be extended to a long-time solution defined on $M\times [0, \infty)$. In order to do that, we first show that for sufficiently small $b>0$, $R(g_0)$ lies inside the invariant cone constructed in Lemma 3.4 of \cite{BW}. Recall from \cite{BW} the linear transformation
$$
\ell_{a, b}: R\to R+2(n-1)aR_{\operatorname{I}}+(n-2)b R_{\operatorname{Ric}_0}.
$$
More precisely,
\begin{eqnarray*}
\ell_{a, b}(R)&=&R+2a \bar{\lambda}\operatorname{I} +2b\operatorname{id} \wedge \operatorname{Ric}_0(R)\\
&=& (1+2(n-1)a)R_{\operatorname{I}}+(1+(n-2)b)R_{\operatorname{Ric}_0}+R_{\operatorname{W}}.
\end{eqnarray*}
It is easy to see that $\ell_{a,b}(S_B^2(so(n)))\subset S_B^2(so(n))$ and that $\ell_{a,b}$ is invertible if $a\ne -\frac{1}{2(n-1)}$ and $b\ne -\frac{1}{n-2}$. Using this linear map and Theorem 2 of \cite{BW}, a pinching family of invariant convex cones is constructed. In particular, as one step of the construction, it was shown that
\begin{lemma}[B\"ohm-Wilking]\label{keylemma1} For $b\in [0, \frac{1}{2}]$, let
$$
a=\frac{(n-2)b^2+2b}{2+2(n-2)b^2}\, \mbox{ and } p=\frac{(n-2)b^2}{1+(n-2)b^2}.
$$
Then the set $\ell_{a, b}(C(b))$, where
$$
C(b)=\left\{R\in S_B^2(so(n))\, |\, R\ge 0,\, \operatorname{Ric} \ge p(b)\frac{\operatorname{tr}(\operatorname{Ric})}{n}\right\},
$$
is invariant under the vector field $Q(R)$. In fact, for $b\in(0, \frac{1}{2}]$ it is transverse to the boundary of the set at all boundary points $R\ne 0$.
\end{lemma}
We claim that there exists $b>0$ so small that $R(g_0)\in \ell_{a,b}(C(b))$, which is equivalent to $\ell_{a, b}^{-1}(R(g_0))\in C(b)$. For simplicity let $\tilde R=R(g_0)$, $\bar{\lambda}(\tilde R)=\frac{\operatorname{Scal}(\tilde R)}{n}$ and $\ell=\ell_{a, b}$. Direct computation shows that
$$
R:=\ell^{-1}(\tilde R)=\tilde{R}_{\operatorname{W}}+\frac{1}{1+2(n-1)a}\tilde{R}_{\operatorname{I}}+\frac{1}{1+(n-2)b}\tilde{R}_{\operatorname{Ric}_0},
$$
which implies that
$$
\operatorname{Ric}(\ell^{-1}(\tilde R))=\frac{\bar{\lambda}(\tilde {R})}{1+2(n-1)a}\operatorname{id}+\frac{1}{1+(n-2)b}\operatorname{Ric}_0(\tilde R)
$$
and
$$
\bar{\lambda}(R):=\frac{\operatorname{tr}(\ell^{-1}(\tilde R))}{n}=\bar{\lambda}(\tilde R)\left(1-\frac{2(n-1) a}{1+2(n-1)a}\right).
$$
Let $\tilde{\lambda}_i$ be the eigenvalues of $\operatorname{Ric}_0(\tilde R)$. Then by the assumption (\ref{pin1}) we have that
\begin{equation}\label{pin2}
\tilde{\lambda}_i+\bar{\lambda}(\tilde R)\ge \delta \bar{\lambda}(\tilde R).
\end{equation}
Clearly we also have that
\begin{equation}\label{pin3}
\tilde{\lambda}_i+\bar{\lambda}(\tilde R)\le n\bar{\lambda}(\tilde R).
\end{equation}
We first check that $R$ satisfies the Ricci pinching condition. In fact, if $\lambda_i$ are the eigenvalues of $\operatorname{Ric}_0(R)$, from the above formulae we have that
\begin{eqnarray*}
-\lambda_i&=&-\frac{1}{1+(n-2)b}\tilde{\lambda}_i\\
&\le& \frac{1-\delta}{1+(n-2)b}\bar{\lambda}(\tilde R)\\
&=& (1-\delta)\frac{1+2(n-1)a}{1+(n-2)b}\bar{\lambda}( R).
\end{eqnarray*}
Then there exist $\delta_1>0$ and $b_0$ such that for all $b\in [0, b_0]$, $-\lambda_i\le (1-\delta_1)\bar{\lambda}( R)$. Then we can find $b_1\le b_0$ such that for any $b\in [0, b_1]$, $p(b)\le \delta_1$. Hence $R=\ell_{a, b}^{-1}(\tilde R)$ satisfies the pinching condition of $C(b)$. Now we check that $R=\ell^{-1}_{a, b}(\tilde R)\ge 0$. Rewrite
$$
R=\tilde{R}-\frac{2(n-1)a}{1+2(n-1)a}\tilde{R}_{\operatorname{I}}-\frac{(n-2)b}{1+(n-2)b}\tilde{R}_{\operatorname{Ric}_0}.
$$
Noticing that $a\to 0$ as $b\to 0$, we can find $b_2$ such that for any $b\in [0, b_2]$ we have that
$$
R\ge \frac{\delta}{2}\tilde R_{\operatorname{I}}-\frac{(n-2)b}{1+(n-2)b}\tilde{R}_{\operatorname{Ric}_0}.
$$
But the eigenvalue (with respect to $e_i\wedge e_j$, where $\{e_i\}$ is a basis of $TM$ consisting of eigenvectors of $\operatorname{Ric}_0(\tilde R)$) of the right hand side operator can be computed as
$$
\frac{\delta}{2}\frac{\bar{\lambda}(\tilde R)}{n-1}-\frac{b}{1+(n-2)b}\left(\tilde{\lambda}_i+\tilde{\lambda}_j\right).
$$
Using (\ref{pin3}), the above can be bounded from below by
$$
\bar{\lambda}(\tilde R)\left(\frac{\delta}{2(n-1)}-\frac{2(n-1)b}{1+(n-2)b}\right)>0
$$
if $b$ is close to $0$. This shows that there exists $b_3>0$ such that for any $b\in (0, b_3]$, $R(g_0)\in \ell_{a, b}(C(b))$. Now, by virtue of the proof of Theorem 5.1 in \cite{BW}, along with the short time existence result of \cite{Sh1}, the Ricci flow has a long-time solution. Otherwise, by Theorem 16.2 of \cite{H90}, we would end up with a blow-up solution which is nonflat and noncompact, but whose curvature operator satisfies $R=R_{\operatorname{I}}$. In view of Schur's theorem, this is a contradiction. Note that $R(g_0)\in \ell_{a, b}(C(b))$ allows us to apply the generalized pinching set construction (Theorem 4.1) from \cite{BW}, and since the evolving metric has positive curvature operator and the manifold is assumed to be noncompact, the injectivity radius always has a lower bound in terms of the size of the curvature. All these ingredients allow us to perform Hamilton's blow-up analysis \cite{H90} (Theorem 16.2). We continue to show that the extra assumption that $M$ is noncompact will lead us to a contradiction by performing the singularity analysis of \cite{H90} as $t\to \infty$. Notice that for all $t$, $R(g(t))$ stays in the cone $\ell_{a, b}(C(b))$ for some fixed (but sufficiently small) $b$, by the tensor maximum principle, which can be verified in the same way as Proposition \ref{chen1}. Now we claim that the curvature of $g(t)$ satisfies
\begin{equation}\label{ric-pin}
\operatorname{Ric} \ge p\frac{\operatorname{tr}(\operatorname{Ric})}{n}\operatorname{id}
\end{equation}
for some $p>0$. Let $R^*=R(g(t))$. First, by Lemma \ref{keylemma1} we know that $R(g(t))\in \ell_{a,b}(C(b))$ for some fixed small $b$. Thus we can find $R\in C(b)$ such that $\ell_{a, b}(R)=R^*$. Now let $\bar{\lambda}=\frac{\operatorname{tr}(\operatorname{Ric}(R))}{n}$ and let $\lambda_i$ be the eigenvalues of $\operatorname{Ric}_0(R)$. By the assumption we have that $-\lambda_i\le (1-p)\bar{\lambda}$. Now we compute the Ricci curvature and its trace for $R^*$. By the definition of $\ell_{a, b}$ we have that
$$
\operatorname{Ric}(R^*)=\operatorname{Ric} +2(n-1)a \bar{\lambda}\operatorname{id} +(n-2)b \operatorname{Ric}_0
$$
and
$$
\bar{\lambda}^*:=\frac{\operatorname{tr}(\operatorname{Ric}(R^*))}{n}=\bar{\lambda}(1+2(n-1)a).
$$
Letting $\lambda^*_i$ be the eigenvalues of $\operatorname{Ric}_0(R^*)$, we have that $\bar{\lambda}^*+\lambda_i^* =(1+2(n-1)a)\bar{\lambda}+(1+(n-2)b)\lambda_i$. Therefore
\begin{eqnarray*}
-\lambda_i^*&=&-(1+(n-2)b)\lambda_i\\
&\le &(1-p)(1+(n-2)b)\bar{\lambda}\\
&=& (1-p)\frac{1+(n-2)b}{1+2(n-1)a}\bar{\lambda}^*\\
&\le &(1-p)\bar{\lambda}^*.
\end{eqnarray*}
Here we have used the fact that $1+2(n-1)a=1+(n-1)\frac{(n-2)b^2+2b}{1+(n-2)b^2}>1+(n-2)b$. This completes the proof of the claim (\ref{ric-pin}). Since the Ricci curvature of every $g(t)$ satisfies (\ref{ric-pin}), the same holds for the blow-down/blow-up solutions, which, after passing to the universal cover, are either a nonflat gradient steady soliton or a nonflat gradient expanding soliton with nonnegative curvature operator, by results from \cite{H90} (Theorem 16.2, Corollary 16.4) (see also \cite{N02}, Theorem 4.2 and \cite{CZ}). This contradicts Corollary 3.1 of \cite{N05}.
\end{proof}

\section{Discussions}

In \cite{W}, the topology of so-called $p$-positive manifolds was studied. In view of the result of B\"ohm-Wilking, it is reasonable to speculate that any noncompact complete Riemannian manifold with $2$-positive curvature operator must be diffeomorphic to $\Bbb R^n$. In \cite{N05} we speculated that any complete Riemannian manifold with positively pinched Ricci curvature must be compact. Theorem \ref{ham} confirms this under a stronger assumption on the curvature operator. The problem in full generality still remains open.

{\it Acknowledgement}. Part of this paper was completed during the first author's visit to ETH, Z\"urich. He would like to thank ETH, and especially Tom Ilmanen, for providing a stimulating environment and various discussions. He also held informal discussions on \cite{BW} with Ben Chow and Nolan Wallach.

\begin{thebibliography}{9999}
\bibitem[BW]{BW} C. B\"ohm and B. Wilking, \textit{Manifolds with positive curvature operators are space forms}, preprint.
\bibitem[Chen]{Chen} H. Chen, \textit{Pointwise $\frac14$-pinched $4$-manifolds}, Ann. Global Anal. Geom. \textbf{9} (1991), no. 2, 161--176.
\bibitem[CZ]{CZ} B.-L. Chen and X.-P. Zhu, \textit{Complete Riemannian manifolds with pointwise pinched curvature}, Invent. Math. \textbf{140} (2000), no. 2, 423--452.
\bibitem[CLN]{CLN} B. Chow, P. Lu and L. Ni, \textit{Hamilton's Ricci flow}, Graduate Studies in Mathematics, AMS Press, to appear.
\bibitem[H1]{H86} R. Hamilton, \textit{Four-manifolds with positive curvature operator}, J. Differential Geom. \textbf{24} (1986), 153--179.
\bibitem[H2]{H91} R. S. Hamilton, \textit{Convex hypersurfaces with pinched second fundamental form}, Comm. Anal. Geom. \textbf{2} (1994), no. 1, 167--172.
\bibitem[H3]{H90} R. S. Hamilton, \textit{Formation of singularities in the Ricci flow}, Surveys in Differential Geom. \textbf{2} (1995), 7--136.
\bibitem[Hu]{Hu} G. Huisken, \textit{Ricci deformation of the metric on a Riemannian manifold}, J. Differential Geom. \textbf{21} (1985), no. 1, 47--62.
\bibitem[N1]{N04} L. Ni, \textit{Ricci flow and nonnegativity of sectional curvature}, Math. Res. Lett. \textbf{11} (2004), no. 5-6, 883--904.
\bibitem[N2]{N02} L. Ni, \textit{Monotonicity and K\"ahler-Ricci flow}, Geometric evolution equations, 149--165, Contemp. Math., \textbf{367}, Amer. Math. Soc., Providence, RI, 2005.
\bibitem[N3]{N05} L. Ni, \textit{Ancient solutions to K\"ahler-Ricci flow}, Math. Res. Lett. \textbf{12} (2005), no. 5-6, 633--653.
\bibitem[NT1]{NT1} L. Ni and L.-F. Tam, \textit{Plurisubharmonic functions and the K\"ahler-Ricci flow}, Amer. J. Math. \textbf{125} (2003), no. 3, 623--654.
\bibitem[NT2]{NT2} L. Ni and L.-F. Tam, \textit{Plurisubharmonic functions and the structure of complete K\"ahler manifolds with nonnegative curvature}, J. Differential Geom. \textbf{64} (2003), no. 3, 457--524.
\bibitem[Sh1]{Sh1} W. X. Shi, \textit{Deforming the metric on complete Riemannian manifolds}, J. Differential Geom. \textbf{30} (1989), 223--301.
\bibitem[Sh2]{Sh2} W. X. Shi, \textit{Ricci deformation of the metric on complete noncompact Riemannian manifolds}, J. Differential Geom. \textbf{30} (1989), 303--394.
\bibitem[W]{W} H. Wu, \textit{Manifolds of partially positive curvature}, Indiana Univ. Math. J. \textbf{36} (1987), no. 3, 525--548.
\end{thebibliography}
\end{document}
math
26,343
\begin{document} \title{Counting Small Induced Subgraphs exorpdfstring{\} \begin{abstract} Given a graph property $\Phi$, the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ asks, on input a graph~$G$ and a positive integer $k$, to compute the number $\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{G}$ of induced subgraphs of size $k$ in $G$ that satisfy $\Phi$. The search for \emph{explicit} criteria on $\Phi$ ensuring that $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is hard was initiated by Jerrum and Meeks~[J.\ Comput.\ Syst.\ Sci.\ 15] and is part of the major line of research on counting small patterns in graphs. However, apart from an implicit result due to Curticapean, Dell and Marx [STOC 17] proving that a full classification into ``easy'' and ``hard'' properties is possible and some partial results on edge-monotone properties due to Meeks [Discret.\ Appl.\ Math.\ 16] and Dörfler et al.\ [MFCS~19], not much is known. In this work, we fully answer and explicitly classify the case of monotone, that is subgraph-closed, properties: We show that for any non-trivial monotone property $\Phi$, the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ cannot be solved in time $f(k)\cdot |V(G)|^{o(k/ {\log^{1/2}(k)})}$ for any function $f$, unless the Exponential Time Hypothesis fails. By this, we establish that any significant improvement over the brute-force approach is unlikely; in the language of parameterized complexity, we also obtain a $\#\W{1}$-completeness result. To prove our result, we use that for fixed $\Phi$ and $k$, we can express the function $G \mapsto \#\ensuremath{\mathsf{IndSub}}s{\Phi, k}{G}$ as a finite linear-combination of homomorphism counts from graphs $H_i$ to $G$. The coefficient vectors of these homomorphism counts in the linear combination are called the \emph{homomorphism vectors} associated to $\Phi$; by the Complexity Monotonicity framework of Curticapean, Dell and Marx [STOC 17], the positions of non-zero entries of these vectors are known to determine the complexity of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$. Our main technical result lifts the notion of $f$-polynomials from simplicial complexes to graph properties and relates the derivatives of the $f$-polynomial of~$\Phi$ to its homomorphism vector. We then apply results from the theory of Hermite-Birkhoff interpolation to the $f$-polynomial to establish sufficient conditions on $\Phi$ which ensure that certain entries in the homomorphism vector do not vanish---which in turn implies hardness. For monotone graph properties, non-triviality then turns out to be a sufficient condition. Using the same method, we also prove a conjecture by Jerrum and Meeks [TOCT\,15, Combinatorica\,19]: $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete if $\Phi$ is a non-trivial graph property only depending on the number of edges of the graph. \end{abstract} \section{Introduction} Detection, enumeration and counting of patterns in graphs are among the most well-studied computational problems in theoretical computer science with a plethora of applications in diverse disciplines, including biology~\cite{SchreiberS05,GrochovK07}, statistical physics~\cite{TemperleyF61,Kasteleyn61,Kasteleyn63}, neural and social networks~\cite{Miloetal02} and database theory~\cite{GroheSS01}, to name but a few. 
At the same time, those problems subsume in their unrestricted forms some of the most infamous $\mathrm{NP}$-hard problems such as Hamiltonicity, the clique problem, or, more generally, the subgraph isomorphism problem~\cite{Cook71,Ullmann76}. In the modern-day era of ``big data'', where even quadratic-time algorithms may count as inefficient, it is hence crucial to find relaxations of hard computational problems that allow for tractable instances. A very successful approach for a more fine-grained understanding of hard computational problems is a multivariate analysis of the complexity of the problem: Instead of establishing upper and (conditional) lower bounds only depending on the input size, we aim to find additional parameters that, in the best case, are small in real-world instances and allow for efficient algorithms if assumed to be bounded. In case of detection and counting of patterns in graphs, it turns out that the size of the pattern is often significantly smaller than the size of the graph: Consider as an example the evaluation of database queries. While a classical analysis of this problem requires considering instances where the size of the query is as large as the database, a multivariate analysis allows us to impose the restriction of the query being much smaller than the database, which is the case for real-world instances. More concretely, suppose we are given a query $\varphi$ of size $k$ and a database $B$ of size $n$, and we wish to evaluate the query $\varphi$ on $B$. Assume further, that we are given two algorithms for the problem: One has a running time of $O(n^k)$, and the other one has a running time of $O(2^k \cdot n)$. While, classically, both algorithms are inefficient in the sense that their running times are not bounded by a polynomial in the input size $n+k$, the second algorithm is significantly better than the first one for real-world instances and can even be considered efficient. In this work, we focus on \emph{counting} of small patterns in large graphs. The field of counting complexity was founded by Valiant's seminal result on the complexity of computing the permanent~\cite{Valiant79,Valiant79b}, where it was shown that computing the number of perfect matchings in a graph is $\#\mathrm{P}$-complete, and thus harder than every problem in the polynomial-time hierarchy $\mathrm{PH}$~\cite{Toda91}. This is in sharp contrast to the fact that \emph{finding} a perfect matching in a graph can be done in polynomial-time~\cite{Edmonds65}. Hence, a perfect matching is a pattern that allows for efficient detection but is unlikely to admit efficient counting. Initiated by Valiant, computational counting evolved into a well-studied subfield of theoretical computer science. In particular, it turns out that counting problems are closely related to computing partition functions in statistical physics~\cite{TemperleyF61,Kasteleyn61,Kasteleyn63,GoldbergJ19,Chenetal19,Backensetal20}. Indeed, one of the first algorithmic result in the field of computational counting is the famous FKT-Algorithm by the statistical physicists Fisher, Kasteleyn and Temperley~\cite{TemperleyF61,Kasteleyn61,Kasteleyn63} that computes the partition function of the so-called dimer model on planar structures, which is essentially equivalent to computing the number of perfect matchings in a planar graph. 
The FKT-Algorithm is the foundation of the framework of holographic algorithms, which, among others, have been used to identify the tractable cases of a variety of complexity classifications for counting constraint satisfaction problems~\cite{Valiant08,CaiL11,CaiFGW15,CaiHL12,HuangL16,CaiLX17,CaiLX18,Backens18}. Unfortunately, the intractable cases of those classifications indicate that, except for rare examples, counting is incredibly hard (from a complexity theory point of view). In particular, many efficiently solvable combinatorial decision problems turn out to be intractable in their counting versions, such as counting of satisfying assignments of monotone $2$-CNFs~\cite{Valiant79b}, counting of independent sets in bipartite graphs~\cite{ProvanB83} or counting of $s$-$t$-paths~\cite{Valiant79b}, to name but a few. For this reason, we follow the multivariate approach as outlined previously and restrict ourselves in this work on counting of \emph{small} patterns in large graphs. Among others, problems of this kind find applications in neural and social networks~\cite{Miloetal02}, computational biology~\cite{Nogaetat08,Schilleretal15}, and database theory~\cite{DurandM15,ChenM15,ChenM16,DellRW19icalp}. \noindent Formally, we follow the approach of Jerrum and Meeks~\cite{JerrumM15} and study the family of problems $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$: Given a graph property $\Phi$, the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ asks, on input a graph $G$ and a positive integer $k$, to compute the number of induced subgraphs of size $k$ in $G$ that satisfy $\Phi$.\footnote{Note that $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is identical to $p$-$\#\textsc{UnlabelledInducedSubgraphWithProperty}(\Phi)$ as defined in~\cite{JerrumM15}.} As observed by Jerrum and Meeks, the generality of the definition allows to express counting of almost arbitrary patterns of size $k$ in a graph, subsuming counting of $k$-cliques and $k$-independent sets as very special cases. Assuming that $\Phi$ is computable, we note that the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ can be solved by brute-force in time $O(f(k)\cdot |V(G)|^k)$ for some function $f$ only depending on $\Phi$. The corresponding algorithm enumerates all subsets of $k$ vertices of $G$ and counts how many of those subsets satisfy $\Phi$. As we consider $k$ to be significantly smaller than $|V(G)|$, we are interested in the dependence of the exponent on~$k$. More precisely, the goal is to find the best possible $g(k)$ such that $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ can be solved in time \begin{equation}\label{eq:intro_goal} O(f(k)\cdot |V(G)|^{g(k)}) \end{equation} for some function $f$ such that $f$ and $g$ only depend on $\Phi$. Readers familiar with parameterized complexity theory will identify the case of $g(k)\in O(1)$ as fixed-parameter tractability (FPT) results. We first provide some background and elaborate on the existing results on $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ before we present the contributions of this paper. \ensuremath{\mathsf{Sub}}section{Prior Work} So far, the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ has been investigated using primarily the framework of parameterized complexity theory. 
As indicated before, $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is in $\mathrm{FPT}$ if the function $g$ in \cref{eq:intro_goal} is bounded by a constant (independent of $k$), and the problem is $\#\W{1}$-complete, if it is at least as hard as the parameterized clique problem; here $\#\W{1}$ should be considered a parameterized counting equivalent of $\mathrm{NP}$ and we provide the formal details in \cref{sec:prelims}. In particular, the so-called Exponential Time Hypothesis (ETH) implies that $\#\W{1}$-complete problems are not in $\mathrm{FPT}$; again, this is made formal in \cref{sec:prelims}. The problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ was first studied by Jerrum and Meeks~\cite{JerrumM15}. They introduced the problem and proved that $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete if $\Phi$ is the property of being connected. Implicitly, their proof also rules out the function $g$ of \cref{eq:intro_goal} being in $o(k)$, unless ETH fails, which establishes a tight conditional lower bound. In a subsequent line of research~\cite{JerrumM15density,Meeks16,JerrumM17}, Jerrum and Meeks proved $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ to be $\#\W{1}$-complete if at least one of the following is true: \begin{enumerate}[(1)] \item $\Phi$ has \emph{low edge-densities}; this is true for instance for all sparse properties such as planarity and made formal in \cref{sec:low_edge_dens}. \item $\Phi$ holds for a graph $H$ if and only if the number of edges of $H$ is even/odd. \item $\Phi$ is closed under the addition of edges, and the minimal elements have large treewidth. \end{enumerate} Unfortunately, none of the previous results establishes a conditional lower bound that comes close to the upper bound given by the brute-force algorithm. This is particularly true due to the application of Ramsey's Theorem in the proofs of many of the prior results: Ramsey's Theorem states that there is a function $R(k) = 2^{\Theta(k)}$ such that every graph with at least $R(k)$ vertices contains either a $k$-independent set or a $k$-clique~\cite{Ramsey30,Spencer75,Erdos87}. Relying on this result for a reduction from finding or counting $k$-independent sets or $k$-cliques, the best implicit conditional lower bounds achieved only rule out an algorithm running in time $f(k)\cdot |V(G)|^{o(\log k)}$ for any function $f$. Moreover, the previous results only apply to a very specific set of properties. In particular, Jerrum and Meeks posed the following open problem concerning a generalization of the second result (2); we say that~$\Phi$ is $k$-trivial, if it is either true or false for all graphs with $k$ vertices. \begin{conjecture}[\cite{JerrumM15density,JerrumM17}]\label{conj:JM} Let $\Phi$ be a graph property that only depends on the number of edges of a graph. If for infinitely many $k$ the property $\Phi$ is not $k$-trivial, then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete.\lipicsEnd \end{conjecture} Note that the condition of $\Phi$ not being $k$-trivial for infinitely many $k$ is necessary for hardness, as otherwise, the problem becomes trivial if $k$ exceeds a constant depending only on $\Phi$. The first major breakthrough towards a complete understanding of the complexity of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is the following implicit classification due to Curticapean, Dell and Marx~\cite{CurticapeanDM17}: \begin{theorem}[\cite{CurticapeanDM17}] Let $\Phi$ denote a graph property. 
Then the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is either $\mathrm{FPT}$ or $\#\W{1}$-complete.\lipicsEnd \end{theorem} While the previous classification provides a very strong result for the structural complexity of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$, it leaves open the question of the precise bound on the function $g$. Furthermore, it is implicit in the sense that it does not reveal the complexity of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ if a concrete property $\Phi$ is given. Nevertheless, the technique introduced by Curticapean, Dell and Marx, which is now called \emph{Complexity Monotonicity}, turned out to be the right approach for the treatment of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$. In particular, the subsequent results on $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$, including the classification in this work, have been obtained by strong refinements of \emph{Complexity Monotonicity}; we provide a brief introduction when we discuss the techniques used in this paper. More concretely, a superset of the authors established the following classifications for edge-monotone properties in recent years~\cite{RothS18,DorflerRSW19}: The problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and, assuming ETH, cannot be solved in time $f(k)\cdot |V(G)|^{o(k)}$ for any function $f$, if at least one of the following is true:\footnote{We provide simplified statements; the formal and more general results can be found in~\cite{RothS18,DorflerRSW19}.} \begin{itemize} \item $\Phi$ is non-trivial, closed under the removal of edges and false on odd cycles. \item $\Phi$ is non-trivial on bipartite graphs and closed under the removal of edges. \end{itemize} While the second result completely answers the case of edge-monotone properties on bipartite graphs, a general classification of edge-monotone properties is still unknown. \ensuremath{\mathsf{Sub}}section{Our Results} We begin with monotone properties, that is, properties that are closed under taking subgraphs. We classify those properties completely and explicitly; the following theorem establishes hardness and an almost tight conditional lower bound. \begin{mtheorem}\label{thm:monotone_refined_intro} Let $\Phi$ denote a monotone graph property. Suppose that for infinitely many~$k$ the property $\Phi$ is not $k$-trivial. Then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time $f(k)\cdot |V(G)|^{o(k/\sqrt{\log k})}$ for any function $f$, unless ETH fails. \lipicsEnd \end{mtheorem} In fact, we obtain a tight bound, that is, we can drop the factor of $1/\sqrt{\log k}$ in the exponent, assuming the conjecture that ``you cannot beat treewidth''~\cite{Marx10}. The latter is an important open problem in parameterized and fine-grained complexity theory asking for a tight conditional lower bound for the problem of \emph{finding} a homomorphism from a small graph $H$ to a large graph $G$: The best known algorithm for that problem runs in time $\mathsf{poly}(|V(H)|) \cdot |V(G)|^{O(\mathsf{tw}(H))}$, where $\mathsf{tw}(H)$ is the treewidth\footnote{We will only rely on treewidth in a black-box manner in this paper and thus refer the reader for instance to \cite[Chapter~7]{CyganFKLMPPS15} for a detailed treatment.} of $H$ (see for instance \cite{DiazST02,Marx10,CurticapeanDM17}), and the question is whether this running time is essentially optimal; we discuss the details later in the paper. 
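To make the object $\#\ensuremath{\mathsf{Hom}}s{H}{G}$ concrete, the following is a naive counter (an illustration added here, not part of the original text; the function and variable names and the adjacency-set representation are ours). It runs in time roughly $|V(G)|^{|V(H)|}$, in contrast to the $|V(G)|^{O(\mathsf{tw}(H))}$ dynamic program referred to above.
\begin{verbatim}
from itertools import product

def count_homs(H_vertices, H_edges, G_adj):
    """Count homomorphisms from H to G by brute force.

    H_vertices: list of vertices of H.
    H_edges:    list of edges (u, v) of H.
    G_adj:      dict mapping each vertex of G to the set of its neighbours.
    A map f is a homomorphism iff every edge of H is mapped to an edge of G.
    """
    total = 0
    for image in product(list(G_adj), repeat=len(H_vertices)):
        f = dict(zip(H_vertices, image))
        if all(f[v] in G_adj[f[u]] for (u, v) in H_edges):
            total += 1
    return total

# Example: H = K_2 (a single edge), G = a triangle; the count is 6 = 2*|E(G)|.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert count_homs([0, 1], [(0, 1)], triangle) == 6
\end{verbatim}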
\noindent As a concrete example of a property that is classified by \cref{thm:monotone_refined_intro}, but that was not classified before, consider the (monotone) property of being $3$-colorable: Recall that a $3$-coloring of a graph is a function mapping each vertex to one of three colors such that no two adjacent vertices are mapped to the same color. Clearly any subgraph of a graph $G$ admits a $3$-coloring if $G$ does. Note that in \cref{thm:monotone_refined_intro}, the assumption of $\Phi$ not being $k$-trivial for infinitely many $k$ is necessary, as otherwise the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ becomes trivial for all $k$ that are greater than a constant only depending on $\Phi$. Note further that $\#\W{1}$-completeness in \cref{thm:monotone_refined_intro} is not surprising, as the decision version of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ was implicitly shown to be $\W{1}$-complete by Khot and Raman~\cite{KhotR02}; $\W{1}$ is the decision version of $\#\W{1}$ and should be considered a parameterized equivalent of $\mathrm{NP}$. However, their reduction is not parsimonious. Also, their proof uses Ramsey's Theorem and thus only yields an implicit conditional lower bound of $f(k)\cdot |V(G)|^{o\left(\log k\right)}$, whereas our lower bound is almost tight. Our second result establishes an almost tight lower bound for sparse properties, that is, properties $\Phi$ that admit a constant $s$ such that every graph $H$ for which $\Phi$ holds has at most $s\cdot |V(H)|$ edges. Furthermore, the bound can be made tight if the set $\mathcal{K}(\Phi)$ of positive integers $k$ for which $\Phi$ is not $k$-trivial is additionally \emph{dense}. By this we mean that there is a constant~$\ell$ such that for every positive integer~$n$, there exists $n \leq k \leq \ell n$ such that $\Phi$ is not $k$-trivial. Note that density rules out artificial properties such as $\Phi(H)=1$ if and only if $H$ is an independent set and has precisely $2\uparrow n$ vertices for some positive integer~$n$, with $2\uparrow n$ the $n$-fold exponential tower with base~$2$. In particular, for every property $\Phi$ that is $k$-trivial only for finitely many $k$, the set $\mathcal{K}(\Phi)$ is dense. \begin{mtheorem}\label{thm:sparse_intro} Let $\Phi$ denote a sparse graph property such that $\Phi$ is not $k$-trivial for infinitely many~$k$. Then, $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time $f(k)\cdot |V(G)|^{o\left(k/\log k\right)}$ for any function $f$, unless ETH fails. If $\mathcal{K}(\Phi)$ is additionally dense, then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ cannot be solved in time $f(k)\cdot |V(G)|^{o\left(k\right)}$ for any function $f$, unless ETH fails.\lipicsEnd \end{mtheorem} Our third result solves the open problem posed by Jerrum and Meeks by proving that (a strengthened version of) \cref{conj:JM} is true. \begin{mtheorem}\label{cor:number of edges_refined_intro} Let $\Phi$ denote a computable graph property that only depends on the number of edges of a graph. If $\Phi$ is not $k$-trivial for infinitely many $k$, then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time $f(k)\cdot |V(G)|^{o\left(k/\log k\right)}$ for any function $f$, unless ETH fails.
If $\mathcal{K}(\Phi)$ is additionally dense, then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ cannot be solved in time $f(k)\cdot |V(G)|^{o(k/\sqrt{\log k})}$ for any function $f$, unless ETH fails.\lipicsEnd \end{mtheorem} Note that, similar to \cref{thm:monotone_refined_intro}, the conditional lower bounds in the previous two theorems become tight if ``you cannot beat treewidth''~\cite{Marx10}; in particular, the condition of being dense can be removed in that case. Finally, we consider properties that are \emph{hereditary}, that is, closed under taking \emph{induced} subgraphs. We obtain a criterion on such graph properties that, if satisfied, yields a tight conditional lower bound for the complexity of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$. While the statement of the criterion is deferred to the technical discussion, we can see that every hereditary property that is defined by a single forbidden induced subgraph satisfies the criterion. \begin{mtheorem}\label{thm:hereditary_intro} Let $H$ be a graph with at least $2$ vertices and let $\Phi$ denote the property of being $H$-free, that is, a graph satisfies $\Phi$ if and only if it does not contain $H$ as an induced subgraph. Then, $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time $f(k)\cdot |V(G)|^{o(k)}$ for any function $f$, unless ETH fails.\lipicsEnd \end{mtheorem} Note that the case of $H$ being the graph with one vertex, which is excluded above, yields the property $\Phi$ which is false on all graphs $G$ with at least one vertex, for which $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is the constant zero-function and thus trivially solvable. Hence, \cref{thm:hereditary_intro} indeed establishes a complete classification for all properties $\Phi$=``$H$-free''. \subsection{Technical Overview} We rely on the framework of Complexity Monotonicity for computing linear combinations of homomorphism counts~\cite{CurticapeanDM17}. More precisely, it is known that for every computable graph property $\Phi$ and positive integer $k$, there exists a unique computable function $a$ from graphs to rational numbers such that for all graphs $G$ \begin{equation}\label{eq:gmp_intro} \#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{G} = \sum_H a(H) \cdot \#\ensuremath{\mathsf{Hom}}s{H}{G}\,, \end{equation} where $\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{G}$ denotes the number of induced subgraphs of size $k$ in $G$ that satisfy $\Phi$, and $\#\ensuremath{\mathsf{Hom}}s{H}{G}$ denotes the number of graph homomorphisms from $H$ to $G$. It is known that the function $a$ has finite support, that is, there is only a finite number of graphs $H$ for which $a(H)\neq 0$. Intuitively, Complexity Monotonicity states that computing a linear combination of homomorphism counts is \emph{precisely} as hard as computing its hardest term~\cite{CurticapeanDM17}. Furthermore, the complexity of computing the number of homomorphisms from a small graph $H$ to a large graph $G$ is (almost) precisely understood by the dichotomy result of Dalmau and Jonsson~\cite{DalmauJ04} and the conditional lower bound under ETH due to Marx~\cite{Marx10}: Roughly speaking, it is possible to compute $\#\ensuremath{\mathsf{Hom}}s{H}{G}$ efficiently if and only if $H$ has small treewidth. As a consequence, the complexity of computing $\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{G}$ is precisely determined by the support of the function $a$.
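As a small worked example (a routine calculation included here only for illustration), let $k=2$ and let $\Phi$ be the property of being edgeless. Writing $I_2$ for the graph consisting of two isolated vertices, we have $\#\ensuremath{\mathsf{Hom}}s{I_2}{G} = |V(G)|^2$, $\#\ensuremath{\mathsf{Hom}}s{K_1}{G} = |V(G)|$, and $\#\ensuremath{\mathsf{Hom}}s{K_2}{G} = 2|E(G)|$, and hence \[\#\ensuremath{\mathsf{IndSub}}s{\Phi,2}{G} = \binom{|V(G)|}{2} - |E(G)| = \tfrac{1}{2}\,\#\ensuremath{\mathsf{Hom}}s{I_2}{G} - \tfrac{1}{2}\,\#\ensuremath{\mathsf{Hom}}s{K_1}{G} - \tfrac{1}{2}\,\#\ensuremath{\mathsf{Hom}}s{K_2}{G}\,,\] that is, $a(I_2)=\tfrac{1}{2}$ and $a(K_1)=a(K_2)=-\tfrac{1}{2}$, and the support of $a$ consists precisely of the graphs $I_2$, $K_1$ and $K_2$.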
Unfortunately, determining the latter turned out to be an incredibly hard task: It was shown in~\cite{RothS18} and~\cite{DorflerRSW19} that the function $a$ subsumes a variety of algebraic and even topological invariants. As a concrete example, a subset of the authors showed that for edge-monotone properties $\Phi$, the coefficient $a(K_k)$ of the complete graph in \cref{eq:gmp_intro} is, up to a factor of $k!$, equal to the reduced Euler characteristic of what is called the simplicial graph complex of $\Phi$ and $k$~\cite{RothS18}. By this, a connection to Karp's Evasiveness Conjecture\footnote{Intuitively, the Evasiveness Conjecture states that every decision tree algorithm verifying a non-trivial edge-monotone graph property has to query every edge of the input graph in the worst case~\cite{Rosenberg73,Miller13}.} was established. In particular, it is known that $\Phi$ is evasive on $k$-vertex graphs if the reduced Euler characteristic is non-zero~\cite{KahnSS84}. As a consequence, the coefficient $a(K_k)$ can reveal a property to be evasive on $k$-vertex graphs if shown to be non-zero. The previous example illustrated that identifying the support of the function $a$ in \cref{eq:gmp_intro} is a hard task, but using the framework of Complexity Monotonicity requires us to solve this task. In this work, we present a solution for properties whose $f$-vectors (see below) have low Hamming weight: Given a property $\Phi$ and a positive integer $k$, we define a $\left(\binom{k}{2}+1\right)$-dimensional vector $f^{\Phi,k}$ by setting $f^{\Phi,k}_i$ to be the number of edge-subsets of size $i$ of the complete graph with $k$ vertices such that the induced graph satisfies $\Phi$, that is, \[f^{\Phi,k}_i := \#\{ A \subseteq E(K_k) ~|~\#A = i \wedge \Phi(K_k[A])=1 \} \] for all $i=0,\dots,\binom{k}{2}$. By this, we lift the notion of $f$-vectors from abstract simplicial complexes to graph properties; readers familiar with the latter will observe that the $f$-vector of an edge-monotone property~$\Phi$ equals the $f$-vector of its associated graph complex (see for instance \cite{Billera97}). Similarly, we introduce the notions of $h$-vectors $h^{\Phi,k}$ and $f$-polynomials $\ensuremath{\mathtt{f}}_{\Phi,k}$ of graph properties, defined as follows; we set $d= \binom{k}{2}$. \[h^{\Phi,k}_\ell := \sum_{i=0}^\ell (-1)^{\ell-i} \cdot \binom{d - i}{\ell -i}\cdot f^{\Phi,k}_i, \text{ where } \ell\in\{0,\dots,d\};\quad\text{and}\quad\ensuremath{\mathtt{f}}_{\Phi,k}(x) := \sum_{i=0}^d f^{\Phi,k}_i \cdot x^{d-i}\!.\] Our main combinatorial insight relates the function $a$ in \cref{eq:gmp_intro} to the $h$-vector of $\Phi$. For the formal statement, we let $\mathcal{H}(\Phi,k,i)$ denote the set of all graphs $H$ with $k$ vertices and $i$ edges that satisfy $\Phi$. We then show that for all $i =0,\dots, d$, we have \[ k! \sum_{H \in \mathcal{H}(\Phi,k,i)} a(H) = h^{\Phi,k}_i\!.\] In particular, the previous equation shows that there is a graph $H$ with $i$ edges that survives with a non-zero coefficient $a(H)$ in \cref{eq:gmp_intro} whenever the $i$-th entry of the $h$-vector $h^{\Phi,k}$ is non-zero. As graphs with many edges have high treewidth, we can thus establish hardness of computing $\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{G}$ by proving that there is a non-zero entry with a high index in $h^{\Phi,k}$. To this end, we relate $h^{\Phi,k}$ and $f^{\Phi,k}$ by observing that their entries are evaluations of the derivatives of the $f$-polynomial $\ensuremath{\mathtt{f}}_{\Phi,k}(x)$.
More concretely, our goal is to show that a large number of high-indexed zero entries of $h^{\Phi,k}$ yields that the only polynomial of degree at most $d$ that satisfies the constraints given by the evaluations of the derivatives is the zero polynomial. However, the latter can only be true if $\Phi$ is trivially false on $k$-vertex graphs. Using Hermite-Birkhoff interpolation and P\'olya's Theorem we are able to achieve this goal whenever the Hamming weight of $f^{\Phi,k}$ is small. Our meta-theorem thus classifies the complexity of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ in terms of the Hamming weight of the $f$-vectors of $\Phi$. \begin{mtheorem}\label{thm:main_general_intro} Let $\Phi$ denote a computable graph property and suppose that $\Phi$ is not $k$-trivial for infinitely many $k$. Let $\beta: \mathcal{K}(\Phi) \to \mathbb{Z}_{\ge 0}$ denote the function that maps $k$ to $\binom{k}{2}-\mathsf{hw}(f^{\Phi,k})$. If $\beta(k)\in \omega(k)$ then the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time \[g(k)\cdot n^{o((\beta(k)/k)/(\log(\beta(k)/k)))}\] for any function $g$, unless ETH fails.\lipicsEnd \end{mtheorem} For the refined conditional lower bounds in the case of monotone properties and properties for which the set $\mathcal{K}(\Phi)$ is dense (see \cref{thm:monotone_refined_intro,thm:sparse_intro,cor:number of edges_refined_intro}), we furthermore rely on a consequence of the Kostochka-Thomason-Theorem~\cite{Kostochka84,Thomason01} that establishes a lower bound on the size of the largest clique-minor of graphs with many edges. In contrast to the previous families of properties, we do not rely on the general meta-theorem (\cref{thm:main_general_intro}) for our treatment of hereditary properties. Instead, we carefully construct a reduction from counting $k$-independent sets in bipartite graphs. Given a hereditary graph property $\Phi$ defined by the (possibly infinite) set $\Gamma(\Phi)$ of forbidden induced subgraphs, we say that $\Phi$ is \emph{critical} if there is a graph $H\in \Gamma(\Phi)$ and an edge $e$ of $H$ such that the graph obtained from $H$ by deleting $e$ and then cloning the former endpoints of $e$ satisfies $\Phi$; the formal definition is provided in \cref{sec:hereditary}. The reduction from counting $k$-independent sets in bipartite graphs then yields the following result: \begin{mtheorem}\label{thm:critical_hardness_intro} Let $\Phi$ denote a computable and critical hereditary graph property. Then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time $g(k)\cdot n^{o(k)}$ for any function $g$, unless ETH fails.\lipicsEnd \end{mtheorem} We then establish that every hereditary property with precisely one non-trivial forbidden subgraph $H$ is critical, which yields \cref{thm:hereditary_intro}. \subsection{Organization of the Paper} We begin by providing all necessary technical background in \cref{sec:prelims}. In particular, we introduce the most important notions in parameterized and fine-grained complexity theory, as well as the principle of Hermite-Birkhoff interpolation. \noindent \Cref{sec:main_result} presents and proves our main combinatorial result which relates the $f$-vectors and $h$-vectors of a computable graph property $\Phi$ on $k$-vertex graphs to the coefficients in the associated linear combination of homomorphisms as given by \cref{eq:gmp_intro}.
We derive the meta-theorem for the complexity classification of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ afterwards in \cref{sec:meta_theorem} and illustrate its applicability by establishing new conditional lower bounds for properties that are monotone, that have low edge-densities, and that depend only on the number of edges of a graph. Those bounds are refined to match the statements of \cref{thm:monotone_refined_intro,thm:sparse_intro} in \cref{sec:refined_bounds} by combining our main combinatorial result with results from extremal graph theory that relate the number of edges of a graph to the size of its largest clique-minor. Finally, we present our treatment of hereditary properties in \cref{sec:hereditary}. \subsection*{Acknowledgements} We thank D{\'{a}}niel Marx for pointing out a proof of \cref{lem:hfree_critical} and Jacob Focke for helpful comments. \section{Preliminaries}\label{sec:prelims} Given a finite set $S$, we write $\# S$ and $|S|$ for the cardinality of $S$. Further, given a non-negative integer~$r$, we set $[r]:=\{1,\dots,r\}$; in particular, we have $[0]=\emptyset$. The \emph{Hamming weight} of a vector $f\in \mathbb{Q}^n$, denoted by $\mathsf{hw}(f)$, is defined to be the number of non-zero entries of $f$. \noindent Graphs in this work are simple and do not contain self-loops. Given a graph $G$, we write $V(G)$ for the vertices and $E(G)$ for the edges of $G$. Furthermore, we define $\mathcal{G}$ to be the set of all (isomorphism classes of) graphs. The \emph{complement} $\overline{G}$ of a graph $G$ has vertices $V(G)$ and edges $\{\{u,v\}~|~u,v\in V(G),\, u\neq v\}\setminus E(G)$. Given a subset $\hat{E}$ of edges of a graph $G$, we write $G[\hat{E}]$ for the graph with vertices $V(G)$ and edges~$\hat{E}$. Given a subset $\hat{V}$ of vertices of a graph $G$, we write $G[\hat{V}]$ for the graph with vertices $\hat{V}$ and edges~$E(G)\cap\hat{V}^2$. In particular, we say that $G[\hat{V}]$ is an \emph{induced subgraph} of $G$. Given graphs $H$ and $G$, we define $\ensuremath{\mathsf{IndSub}}s{H}{G}$ to be the set of all induced subgraphs of $G$ that are isomorphic to $H$. Given graphs $H$ and $G$, a \emph{homomorphism} from $H$ to $G$ is a function $\varphi: V(H) \rightarrow V(G)$ such that $\{\varphi(u),\varphi(v)\} \in E(G)$ whenever $\{u,v\} \in E(H)$. We write $\ensuremath{\mathsf{Hom}}s{H}{G}$ for the set of all homomorphisms from $H$ to $G$. In particular, we write $\#\ensuremath{\mathsf{Hom}}s{H}{\star}$ for the function that maps a graph $G$ to $\#\ensuremath{\mathsf{Hom}}s{H}{G}$. A bijective homomorphism from a graph $H$ to itself is an \emph{automorphism} and we write $\auts{H}$ to denote the set of all automorphisms of $H$. For a graph $H$, we define the {\em average degree} of $H$ as \[d(H) := \frac{1}{|V(H)|}\cdot \sum_{v \in V(H)} \mathsf{deg}(v).\] Further, we rely on the \emph{treewidth} of a graph, which is a graph parameter $\mathsf{tw}: \mathcal{G} \rightarrow \mathbb{N}$. However, we only work with the treewidth in a black-box manner, and thus we omit the definition and refer the interested reader to the literature (see for instance \cite[Chapter 7]{CyganFKLMPPS15}). In particular, we use the following well-known result from extremal graph theory, which relates the treewidth of a graph $H$ to its average degree.
\begin{lemma}[Folklore, see for instance {\cite[Corollary~1]{ChandranS05}}]\label{lem:convenient} For any graph $H$ with average degree at least~$d$, we have $\mathsf{tw}(H)\geq \frac{d}{2}$.\lipicsEnd \end{lemma} Finally, we also rely on the following celebrated result from extremal graph theory: \begin{theorem}[Tur\'an's Theorem, see for instance {\cite[Section 2.1]{Lovasz12}}] \label{thm:turan} A graph $H$ with more than $\left(1-\frac{1}{r}\right)\cdot \frac{1}{2}|V(H)|^2$ edges contains the clique $K_{r+1}$ as a subgraph.\lipicsEnd \end{theorem} \paragraph*{Graph Properties} A \emph{graph property} $\Phi$ is a function from graphs to $\{0,1\}$ such that $\Phi(H)=\Phi(G)$ whenever $H$ and $G$ are isomorphic. We say that a graph $H$ \emph{satisfies} $\Phi$ if $\Phi(H)=1$. Given a positive integer $k$ and a graph property~$\Phi$, we write~$\Phi_k$ for the set of all (isomorphism classes of) graphs with $k$ vertices that satisfy $\Phi$. Furthermore, given a graph $G$, a positive integer $k$, and a graph property $\Phi$, we write $\ensuremath{\mathsf{IndSub}}s{\Phi,k}{G}$ for the set of all induced subgraphs with $k$ vertices of $G$ that satisfy $\Phi$. In particular, we write $\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\star}$ for the function that maps a graph $G$ to $\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{G}$. Given a graph property $\Phi$, we define $\neg\Phi(H) = 1 :\Leftrightarrow \Phi(H) = 0$ as the \emph{negation} of~$\Phi$. Furthermore, we define $\overline{\Phi}(H)= 1 :\Leftrightarrow \Phi(\overline{H}) = 1$ as the \emph{inverse} of~$\Phi$.\footnote{We omit using the word ``complement'' for graph properties to avoid confusion on whether we mean $\neg\Phi$ or $\overline{\Phi}$.} We observe the following identities: \begin{fact}\label{fac:invariance} For every graph property $\Phi$, graph $G$ and positive integer $k$, we have \begin{align*} \#\ensuremath{\mathsf{IndSub}}s{\neg\Phi,k}{G} &= \binom{|V(G)|}{k} - \#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{G}\,, \text{ and}\\ \#\ensuremath{\mathsf{IndSub}}s{\overline{\Phi},k}{G} &= \#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\overline{G}} \,. \end{align*} \end{fact} \begin{proof} The first identity is immediate. For the second one, we observe that \begin{align*} \#\ensuremath{\mathsf{IndSub}}s{\overline{\Phi},k}{G} &= \sum_{H \in \overline{\Phi}_k} \#\ensuremath{\mathsf{IndSub}}s{H}{G} = \sum_{\overline{H} \in \Phi_k} \#\ensuremath{\mathsf{IndSub}}s{H}{G}\\ &= \sum_{\overline{H} \in \Phi_k} \#\ensuremath{\mathsf{IndSub}}s{\overline{H}}{\overline{G}} = \#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\overline{G}}, \end{align*} where we use the equality $\#\ensuremath{\mathsf{IndSub}}s{H}{G} = \#\ensuremath{\mathsf{IndSub}}s{\overline{H}}{\overline{G}}$ from \cite[Section~5.2.3]{Lovasz12}. \end{proof} \paragraph*{Fine-Grained and Parameterized Complexity Theory} Given a computable graph property $\Phi$, the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ asks, given a graph $G$ with $n$ vertices and a positive integer $k$, to compute $\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{G}$, that is, the number of induced subgraphs of size $k$ in $G$ that satisfy $\Phi$. Note that the problem can be solved by brute-force in time $f(k)\cdot O(n^k)$ by iterating over all subsets of $k$ vertices in $G$ and testing which of the subsets induce a graph that satisfies~$\Phi$; the latter part takes time $f(k)$ for some $f$ depending on $\Phi$. 
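To make the brute-force algorithm explicit, the following minimal Python sketch is included only for illustration; the encoding of $G$ as a vertex count plus an edge list, and of $\Phi$ as a callable \texttt{phi}, are our own choices.
\begin{verbatim}
from itertools import combinations

def count_indsub(phi, k, n, edges):
    """Brute-force #IndSub(Phi, k -> G): iterate over all k-subsets S of
    V(G) = {0, ..., n-1} and test whether the induced subgraph G[S]
    satisfies `phi`.  Runs in time f(k) * O(n^k)."""
    edge_set = {frozenset(e) for e in edges}
    total = 0
    for S in combinations(range(n), k):
        induced = {frozenset(p) for p in combinations(S, 2)} & edge_set
        if phi(S, induced):      # phi decides the property on G[S]
            total += 1
    return total

# toy usage: 3-vertex induced subgraphs of the 4-cycle with an even number of edges
even_edges = lambda S, E: len(E) % 2 == 0
print(count_indsub(even_edges, 3, 4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 4
\end{verbatim}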
As elaborated in the introduction, our goal is to understand the complexity of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ for instances with small $k$ and large $n$. More precisely, we wish to identify the best possible exponent of $n$ in the running time. To this end, we rely on the frameworks of fine-grained and parameterized complexity theory. Regarding the former, we prove conditional lower bounds based on the \emph{Exponential Time Hypothesis} due to Impagliazzo and Paturi~\cite{ImpagliazzoP01}: \begin{conjecture}[Exponential Time Hypothesis (ETH)] The problem $3$-$\textsc{SAT}$ cannot be solved in time $\mathsf{exp}(o(m))$, where $m$ is the number of clauses of the input formula.\lipicsEnd \end{conjecture} Assuming ETH, we are able to prove that the exponent ($k$) of the brute-force algorithm for $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ cannot be improved significantly for non-trivial monotone properties by establishing that no algorithm with a running time of $f(k)\cdot |V(G)|^{o(k / \sqrt{\log k})}$ for any function $f$ exists. \noindent In the language of parameterized complexity theory, our reductions also yield $\#\W{1}$-completeness results, where $\#\W{1}$ should be considered the parameterized counting equivalent of $\mathrm{NP}$; we provide a rough introduction in what follows and refer the interested reader to references like \cite{CyganFKLMPPS15} and~\cite{FlumG04} for a detailed treatment. A \emph{parameterized counting problem} is a pair of a function $P: \Sigma^\ast \rightarrow \mathbb{N}$ and a computable parameterization $\kappa:\Sigma^\ast \rightarrow \mathbb{N}$. Examples include the problems $\#\textsc{VertexCover}$ and $\#\textsc{Clique}$ which ask, given a graph $G$ and a positive integer $k$, to compute the number $P(G,k)$ of vertex covers or cliques, respectively, of size $k$. Both problems are parameterized by the solution size, that is $\kappa(G,k):=k$. Similarly, the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ can be viewed as a parameterized counting problem when parameterized by $\kappa(G,k):=k$; we implicitly assume this parameterization of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ in the remainder of this paper. A parameterized counting problem is called \emph{fixed-parameter tractable} (FPT) if there is a computable function $f$ such that the problem can be solved in time $f(\kappa(x))\cdot |x|^{O(1)}$, where $|x|$ is the input size. Given two parameterized counting problems $(P,\kappa)$ and $(\hat{P},\hat{\kappa})$, a \emph{parameterized Turing-reduction} from $(P,\kappa)$ to $(\hat{P},\hat{\kappa})$ is an algorithm $\mathbb{A}$ that is given oracle access to $\hat{P}$ and, on input $x$, computes $P(x)$ in time $f(\kappa(x))\cdot |x|^{O(1)}$ for some computable function $f$; furthermore, the parameter $\kappa(y)$ of every oracle query posed by $\mathbb{A}$ must be bounded by $g(\kappa(x))$ for some computable function $g$. While $\#\textsc{VertexCover}$ is known to be fixed-parameter tractable~\cite{FlumG04}, $\#\textsc{Clique}$ is not fixed-parameter tractable, unless ETH fails~\cite{Chenetal05,Chenetal06}. Moreover, $\#\textsc{Clique}$ is the canonical complete problem for the parameterized complexity class $\#\W{1}$, see~\cite{FlumG04}; in particular, we use the following definition of $\#\W{1}$-completeness in this work. \begin{definition} A parameterized counting problem is $\#\W{1}$-\emph{complete} if it is interreducible with $\#\textsc{Clique}$ with respect to parameterized Turing-reductions. 
\lipicsEnd \end{definition} Note that the absence of an FPT algorithm for $\#\textsc{Clique}$ under ETH and the definition of parameterized Turing-reductions yield that $\#\W{1}$-complete problems are not fixed-parameter tractable, unless ETH fails, legitimizing the notion of $\#\W{1}$-completeness as evidence for (fixed-parameter) intractability. Jerrum and Meeks~\cite{JerrumM15} have shown that $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ reduces to $\#\textsc{Clique}$ for every computable property~$\Phi$ with respect to parameterized Turing-reductions. Thus we will only treat the ``hardness part'' of the $\#\W{1}$-completeness results in this paper. The fine-grained and parameterized complexity of the homomorphism counting problem is the foundation of the lower bounds established in this work: Given a class of graphs $\mathcal{H}$, the problem $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ asks, on input a graph $H \in \mathcal{H}$ and an arbitrary graph $G$, to compute $\#\ensuremath{\mathsf{Hom}}s{H}{G}$. The parameter is given by $|V(H)|$. The following classification shows that, roughly speaking, the complexity of $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ is determined by the treewidth of the graphs in $\mathcal{H}$. \begin{theorem}[\cite{DalmauJ04,Marx10}]\label{thm:homsdicho} Let $\mathcal{H}$ denote a recursively enumerable class of graphs. If the treewidth of~$\mathcal{H}$ is bounded by a constant, then $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ is solvable in polynomial time. Otherwise, the problem is $\#\W{1}$-complete and cannot be solved in time \[f(|V(H)|)\cdot |V(G)|^{o\left(\frac{\mathsf{tw}(H)}{\log \mathsf{tw}(H)}\right)} \] for any function $f$, unless ETH fails. \end{theorem} Note that the classification of $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ into polynomial-time and $\#\W{1}$-complete cases is explicitly stated and proved in the work of Dalmau and Jonsson~\cite{DalmauJ04}. However, the conditional lower bound follows only implicitly from a result of Marx~\cite{Marx10}. We provide a proof for completeness. \pagebreak \begin{proof} As discussed, we focus on the lower bound, which follows implicitly from a result of Marx~\cite{Marx10} on the complexity of \emph{Partitioned Subgraph Isomorphism}: The problem $\textsc{PartitionedSub}(\mathcal{H})$ asks, given a graph $H \in \mathcal{H}$, an arbitrary graph $G$ and a (not necessarily proper) vertex coloring $c:V(G)\rightarrow V(H)$, to \emph{decide} whether there is an injective homomorphism $\varphi$ from $H$ to $G$ such that $c(\varphi(v)) = v$ for each vertex~$v$ of $H$. The result of Marx~\cite[Corollary~6.2]{Marx10} states that for every $\mathcal{H}$ of unbounded treewidth, the problem $\textsc{PartitionedSub}(\mathcal{H})$ cannot be solved in time \begin{equation}\label{eq:marx_bound} f(|V(H)|)\cdot |V(G)|^{o({\mathsf{tw}(H)}/{\log \mathsf{tw}(H)})} \end{equation} for any function $f$, unless ETH fails. Now suppose we are given a graph $H \in \mathcal{H}$, an arbitrary graph $G$ and a coloring $c:V(G)\rightarrow V(H)$. We wish to decide whether there is an injective homomorphism~$\varphi$ from~$H$ to~$G$ such that $c(\varphi(v)) = v$ holds for each vertex $v$ of~$H$. Note first that we can drop the requirement of $\varphi$ being injective, as every homomorphism that preserves the coloring is injective.
Note further, that without loss of generality, we can assume that~$c$ is a homomorphism from $G$ to $H$: Every edge $\{u,v\}$ of~$G$ such that $\{c(u),c(v)\} \notin E(H)$ is irrelevant for finding a homomorphism $\varphi$ from $H$ to $G$ that preserves the coloring~$c$. Hence, we can delete all of those edges from $G$. Thus, the problem $\textsc{PartitionedSub}(\mathcal{H})$ is equivalent to the problem $\textsc{cp}\text{-}\ensuremath{\textsc{Hom}}(\mathcal{H})$ which asks, given a graph $H \in \mathcal{H}$, an arbitrary graph~$G$, and a homomorphism $c \in \ensuremath{\mathsf{Hom}}s{G}{H}$, to decide whether there is a $\varphi \in \ensuremath{\mathsf{Hom}}s{H}{G}$ such that $c(\varphi(v)) = v$ for each~$v\in V(H)$. Finally, it is known that (the counting version of) $\textsc{cp}\text{-}\ensuremath{\textsc{Hom}}(\mathcal{H})$ tightly reduces to $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ via the principle of inclusion and exclusion \cite[Lemma~2.52]{Roth19} or polynomial interpolation \cite[Section~3.2]{DellRW19icalp}. Thus the conditional lower bound in~\cref{eq:marx_bound} holds for $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ as well. \end{proof} The question whether the lower bound from \cref{thm:homsdicho} can be strengthened to $f(|V(H)|)\cdot |V(G)|^{o(\mathsf{tw}(H))}$ is known as ``Can you beat treewidth?'' and constitutes a major open problem in parameterized complexity theory and an obstruction for tight conditional lower bounds on the complexity of a variety of (parameterized) problems, see for instance \cite{LokshtanovMS11,Curticapean15,CurticapeanM14,CurticapeanDM17}. As described in the introduction, the complexity of computing a finite linear combination of homomorphism counts is precisely determined by the complexity of computing the non-vanishing terms. The formal statement is provided subsequently. \begin{theorem}[Complexity Monotonicity~\cite{ChenM16,CurticapeanDM17}]\label{thm:monotonicity} Let $a:\mathcal{G} \rightarrow \mathbb{Q}$ denote a function of finite support and let $F$ denote a graph such that $a(F) \neq 0$. There are a computable function $g$ and a deterministic algorithm $\mathbb{A}$ with oracle access to the function \[G \mapsto \sum_{H \in \mathcal{G}} a(H)\cdot \#\ensuremath{\mathsf{Hom}}s{H}{G},\] and which, given a graph $G$ with $n$ vertices, computes $\#\ensuremath{\mathsf{Hom}}s{F}{G}$ in time $g(a)\cdot n^c$, where $c$ is a constant independent of $a$. Furthermore, each queried graph has at most $g(a)\cdot n$ vertices.\lipicsEnd \end{theorem} As observed by Curticapean, Dell and Marx~\cite{CurticapeanDM17}, counting induced subgraphs of size $k$ that satisfy~$\Phi$ is equivalent to computing a finite linear combination of homomorphism counts. Thus, the previous results yield an \emph{implicit} dichotomy for $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$. \begin{theorem}[\cite{CurticapeanDM17}]\label{thm:impl_dicho} Let $\Phi$ denote a computable graph property and let $k$ denote a positive integer. There is a unique and computable function $a:\mathcal{G} \rightarrow \mathbb{Q}$ of finite support such that \[ \#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\star} = \sum_{H \in \mathcal{G}} a(H) \cdot \#\ensuremath{\mathsf{Hom}}s{H}{\star}. \] Furthermore, the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is either fixed-parameter tractable or $\#\W{1}$-complete.\lipicsEnd \end{theorem} Note that the result on $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ in the previous theorem does not concern the fine-grained complexity of the problem. 
To reveal the latter, it is necessary to understand the support of the function $a$; we tackle this task in detail in \cref{sec:main_result}. \paragraph*{$\bm{f}$-Vectors and $\bm{h}$-Vectors} It was observed in~\cite{RothS18} that there is a close connection between the structure of the simplicial graph complex of edge-monotone properties $\Phi$ and the complexity of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$. In this work, we generalize two important topological invariants of simplicial complexes to arbitrary graph properties: The $f$-vector and the $h$-vector. \begin{definition}\label{def:fhvectors} Let $\Phi$ denote a graph property, let $k$ denote a positive integer and set $d=\binom{k}{2}$. The $f$\emph{-vector} $f^{\Phi,k} = (f^{\Phi,k}_i)_{i=0}^d$ of $\Phi$ and $k$ is defined by \[f^{\Phi,k}_i := \#\{ A \subseteq E(K_k) ~|~\#A = i \wedge \Phi(K_k[A])=1 \}\,, \text{ where } i\in\{0,\dots,d\}, \] that is, $f^{\Phi,k}_i$ is the number of edge-subsets of size $i$ of $K_k$ such that the induced graph satisfies~$\Phi$. The $h$\emph{-vector} $h^{\Phi,k} = (h^{\Phi,k}_\ell)_{\ell=0}^d$ is defined by \[h^{\Phi,k}_\ell := \sum_{i=0}^\ell (-1)^{\ell-i} \cdot \binom{d - i}{\ell -i}\cdot f^{\Phi,k}_i\,, \text{ where } \ell\in\{0,\dots,d\}.\lipicsEnd\] \end{definition} As mentioned before, note that those notions of $f$ and $h$-vectors correspond to the eponymous notions for simplicial (graph) complexes.\footnote{In some parts of the literature, the $f$-vector comes with an index shift of $-1$ due to the topological interpretation of simplicial complexes.} We omit the definition of the latter as we are only concerned with the generalized notions and refer the interested reader, e.g., to \cite{Billera97}. It turns out that the \emph{non-vanishing} of suitable entries $h^{\Phi,k}_\ell$ of the $h$-vector implies hardness for $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$. The result in~\cite{RothS18} can be considered a very restricted special case, as it shows that the non-vanishing of the reduced Euler characteristic of the complex (which is equal to the entry $h^{\Phi,k}_d$) implies hardness. On the other hand, for many graph properties it is easy to deduce information about the $f$-vector (for instance that $f^{\Phi,k}_\ell=0$ for sufficiently large $\ell$ with respect to $k$). We observe that the $f$ and $h$-vectors of a graph property are related by the so-called $f$-polynomial, which is again a generalization of the eponymous notion for simplicial complexes: \begin{definition} Let $\Phi$ denote a graph property, let $k$ denote a positive integer and set $d=\binom{k}{2}$. The $f$\emph{-polynomial} of~$\Phi$ and~$k$ is a univariate polynomial of degree at most $d$ defined as follows: \[ \ensuremath{\mathtt{f}}_{\Phi,k}(x) := \sum_{i=0}^d f^{\Phi,k}_i \cdot x^{d-i}\!.\lipicsEnd\] \end{definition} As we see in the proof of Lemma \ref{lem:birkhoff_interpol}, the entries of the $f$ and $h$-vectors are given, up to combinatorial factors, by derivatives of the $f$-polynomial at $0$ and $-1$. Intuitively, we apply Hermite-Birkhoff interpolation on $\ensuremath{\mathtt{f}}_{\Phi,k}$ and its derivatives to prove that specific entries of $h^{\Phi,k}$ cannot vanish in case a sufficient number of entries of~$f^{\Phi,k}$ do, unless $\Phi$ is trivially false on $k$-vertex graphs.
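To make \cref{def:fhvectors} concrete, the following brute-force sketch is added only for illustration; the encoding of $\Phi$ as a callable on the edge-subset is our own choice. For the toy property ``even number of edges'' and $k=3$ (so $d=3$) it returns $f^{\Phi,3}=(1,0,3,0)$ and $h^{\Phi,3}=(1,-3,6,-4)$.
\begin{verbatim}
from itertools import combinations
from math import comb

def f_and_h_vector(phi, k):
    """Compute the f-vector and h-vector of the property `phi` on k vertices
    by brute force: f_i counts the edge-subsets A of K_k of size i with
    phi(K_k[A]) = 1; h is obtained via the alternating-sum transformation."""
    d = comb(k, 2)
    all_edges = list(combinations(range(k), 2))
    f = [0] * (d + 1)
    for i in range(d + 1):
        for A in combinations(all_edges, i):
            if phi(k, set(A)):
                f[i] += 1
    h = [sum((-1) ** (ell - i) * comb(d - i, ell - i) * f[i] for i in range(ell + 1))
         for ell in range(d + 1)]
    return f, h

# toy usage: the property "even number of edges" for k = 3
print(f_and_h_vector(lambda k, A: len(A) % 2 == 0, 3))  # ([1, 0, 3, 0], [1, -3, 6, -4])
\end{verbatim}
Of course, this enumeration takes time $2^{\binom{k}{2}}\cdot\mathsf{poly}(k)$ and serves only to illustrate the definitions.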
\paragraph*{Hermite-Birkhoff Interpolation and P\'olya's Theorem} While a univariate polynomial of degree $d$ is uniquely determined by $d+1$ evaluations in pairwise different points, the problem of \emph{Hermite-Birkhoff interpolation} asks under which conditions we can uniquely recover the polynomial if we instead impose conditions on the derivatives of the polynomial at $m$ distinct points. Following the notation of Schoenberg~\cite{Schoenberg66}, the problem is formally expressed as follows. Given a matrix $E=(\varepsilon_{ij})\in\{0,1\}^{m \times (d+1)}$ where $i \in \{1,\dots,m\}$ and $j \in \{0,\dots,d\}$, as well as reals~$x_1 < \dots < x_m$, the goal is to find a polynomial~$\ensuremath{\mathtt{f}}$ of degree at most $d$ such that for all $i$ and $j$ with $\varepsilon_{ij} = 1$ we have \[\ensuremath{\mathtt{f}}^{(j)}(x_i) = 0\,.\] Here, $\ensuremath{\mathtt{f}}^{(j)}$ denotes the $j$-th derivative of $\ensuremath{\mathtt{f}}$. In particular, we are interested in the conditions on the matrix $E$ under which the zero polynomial is the \emph{unique} solution. In this case, $E$ is called \emph{poised}. It turns out that the case $m=2$ is sufficient for our purposes; fortunately, this case was fully solved by P\'olya: \begin{theorem}[P\'olya's Theorem~\cite{Polya31,Schoenberg66}]\label{thm:polya} Let $E$ be defined as above with $m=2$. Suppose that $\sum_{i,j}\varepsilon_{ij} = d+1$ and for every $j \in \{0,\dots,d\}$ set \[M_j := \sum_{i=0}^j \left(\varepsilon_{1,i} + \varepsilon_{2,i}\right). \] Then, $E$ is poised if and only if $M_j \geq j+1$ holds true for all $j \in \{0,\dots,d-1\}$.\lipicsEnd \end{theorem} \section{Homomorphism Vectors of Graph Properties}\label{sec:main_result} In this section we discuss and prove our main technical result: \begin{restatable}{theorem}{mnthmcom}\label{thm:main_theorem_combinatorial} Let $\Phi$ denote a computable graph property, let $k$ denote a positive integer, and let~$w$ denote the Hamming weight of the $f$-vector $f^{\Phi,k}$. Suppose that $\Phi$ is not trivially false on $k$-vertex graphs. Then there is a unique and computable function $a:\mathcal{G} \rightarrow \mathbb{Q}$ of finite support such that \[\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\star} = \sum_{H \in \mathcal{G}} a(H) \cdot \#\ensuremath{\mathsf{Hom}}s{H}{\star},\] satisfying that there is a graph $K$ on $k$ vertices and at least $\binom{k}{2}-w+1$ edges such that $a(K) \neq 0$.\lipicsEnd \end{restatable} \noindent First, recall from \cref{thm:impl_dicho} that for any computable graph property $\Phi$ and positive integer $k$, there is a unique computable function $a:\mathcal{G} \rightarrow \mathbb{Q}$ (with finite support) satisfying \begin{equation}\label{eq:indsubgmp} \#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\star} = \sum_{H \in \mathcal{G}} a(H) \cdot \#\ensuremath{\mathsf{Hom}}s{H}{\star}. \end{equation} Now, for the remainder of the section, fix a (computable) graph property $\Phi$ and a positive integer $k$ (and thus the function $a$). This allows us to simplify the notation for the $f$ and $h$-vectors, as well as for the $f$-polynomial: We write $f := f^{\Phi, k}\!$, $h := h^{\Phi, k}\!$, and $\ensuremath{\mathtt{f}} := \ensuremath{\mathtt{f}}_{\Phi, k}$. Furthermore, we set $d := \binom{k}{2}$ and we write~$\mathcal{H}_i$ for the set of all graphs on $k$ vertices and with $i$ edges.
Next, we define the vector $\tilde{h}_i$ as \[ \tilde{h}_i := \sum_{K \in \mathcal{H}_i} a(K) \,, \text{ where } i\in\{0,\dots,d\}, \] that is, the $i$-th entry of $\tilde{h}$ is the sum of the coefficients of graphs with $k$ vertices and $i$ edges in~\cref{eq:indsubgmp}. Now we establish the aforementioned connection between the coefficients of~\cref{eq:indsubgmp} and the $h$-vector of the property~$\Phi$. \begin{lemma}\label{lem:coef_sums} We have $k! \cdot \tilde{h} = h$. \end{lemma} Note that as a consequence, the $h$-vector of a simplicial graph complex is determined by the coefficients of its associated linear combination of homomorphisms. \begin{proof} Given two graphs $H$ and $H'$ on $k$ vertices each, we write $\#\{H'\supseteq H\}$ for the number of possibilities of adding edges to $H$ such that (a graph isomorphic to) $H'$ is obtained. We start with the following claim which was implicitly shown in~\cite{RothS18}; we include a proof for completeness. \begin{claim}\label{clm:single_coef} Let $K$ denote a graph with $k$ vertices and define $a$ as in~\cref{eq:indsubgmp}. We have \[a(K) = \sum_{H \in \Phi_k}\#\auts{H}^{-1}\cdot (-1)^{\#E(K)-\#E(H)} \cdot \#\{K \supseteq H\}. \] \end{claim} \begin{claimproof} Fix a graph $K$ with $k$ vertices. Using the standard transformations from strong embeddings to embeddings and from embeddings to homomorphisms (see for instance Lov{\'{a}}sz~\cite{Lovasz12}), we obtain the following:\footnote{This step is done explicitly in~\cite{RothS18}.} \begin{align*} &\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\star}\\ &\quad=\sum_{H\in \Phi_k} \#\auts{H}^{-1} \sum_{H' \in \mathcal{G}} (-1)^{\#E(H')-\#E(H)} \cdot \#\{H'\supseteq H\} \sum_{\rho \geq \emptyset} \mu(\emptyset,\rho) \cdot \#\ensuremath{\mathsf{Hom}}s{H'/\rho}{\star}, \end{align*} where $\mu$ denotes the Möbius function and the rightmost sum ranges over the partition lattice of the set of vertices of $H'$. Furthermore, $H'/\rho$ is the quotient graph obtained by identifying vertices of $H'$ along the partition~$\rho$. In particular, $H'/\emptyset = H'$. We omit the details, which can be found in~\cite{Lovasz12,RothS18}, as we only need that $H'/\rho$ has strictly less than $k$ vertices for all $\rho > \emptyset$ and that $\mu(\emptyset,\emptyset)=1$. This allows us to rewrite the previous equation as follows: \begin{align*} &\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\star} \\ &\quad=\sum_{H\in \Phi_k} \#\auts{H}^{-1} \sum_{H' \in \mathcal{G}} (-1)^{\#E(H')-\#E(H)} \#\{H'\supseteq H\} \cdot \#\ensuremath{\mathsf{Hom}}s{H'}{\star} + R(\Phi_k), \end{align*} where the remainder $R(\Phi_k)$ does not depend on any numbers $\#\ensuremath{\mathsf{Hom}}s{F}{\star}$ for graphs $F$ with $k$ vertices. In particular, reordering and grouping the coefficients of $\#\ensuremath{\mathsf{Hom}}s{K}{\star}$ yields the claim. \end{claimproof} Next, we investigate the term $\sum_{K \in \mathcal{H}_\ell}\#\{K \supseteq H\}$. \begin{claim}\label{clm:collect} Let $\ell\in\{0,\dots,d\}$ denote an integer and let $H$ denote a graph with $k$ vertices and at most $\ell$ edges. Then, we have \[ \sum_{K \in \mathcal{H}_\ell}\!\!\! \#\{K \supseteq H\} = \binom{d - \#E(H)}{\ell - \#E(H)}. \] \end{claim} \begin{claimproof} Any extension from the graph $H$ to a graph with $\ell$ edges has to add $\ell - \#E(H)$ edges to $H$; there are exactly $d - \#E(H)$ possible choices for these $\ell - \#E(H)$ edges. Hence the claim follows from basic combinatorics. 
\end{claimproof} Now, fix an $\ell \in \{0,\dots,d\}$; we proceed to show that $k!\cdot \tilde{h}_\ell = h_\ell$, which proves the lemma. To that end, from the definition of $\tilde{h}$, we obtain \begin{align*} k! \cdot \tilde{h}_\ell &= k!\cdot \sum_{K \in \mathcal{H}_\ell} a(K)\\ ~&= k!\cdot \sum_{K \in \mathcal{H}_\ell} \sum_{H \in \Phi_k}\#\auts{H}^{-1}\cdot (-1)^{\ell-\#E(H)} \cdot \#\{K \supseteq H\}\\ ~&=\sum_{H \in \Phi_k} k!\cdot \#\auts{H}^{-1}\cdot (-1)^{\ell-\#E(H)} \sum_{K \in \mathcal{H}_\ell} \#\{K \supseteq H\}\,, \end{align*} where the second equality holds due to \cref{clm:single_coef}. Now observe that $\#\{K \supseteq H\} = 0$ if $H$ has more edges than $K$. Thus we see that \begin{align*} k! \cdot \tilde{h}_\ell &= \sum_{\substack{H \in \Phi_k\\ \#E(H)\leq \ell}} k!\cdot \#\auts{H}^{-1}\cdot (-1)^{\ell-\#E(H)} \sum_{K \in \mathcal{H}_\ell} \#\{K \supseteq H\}\\ ~&= \sum_{\substack{H \in \Phi_k\\ \#E(H)\leq \ell}} k!\cdot \#\auts{H}^{-1}\cdot (-1)^{\ell-\#E(H)} \cdot \binom{d - \#E(H)}{\ell - \#E(H)}\,, \end{align*} where the last equality holds due to \cref{clm:collect}. Next we use the fact that $k!$ is the order of the symmetric group $\mathsf{Sym}_k$: For any graph $H$ in the above sum, choose a set $A$ of edges of the labeled complete graph~$K_k$ on $k$ vertices such that the corresponding subgraph $K_k[A]$ is isomorphic to $H$. The group $\mathsf{Sym}_k$ acts on the vertices and thus on the edges of $K_k$. By the definition of a graph automorphism, the stabilizer of the set $A$ has exactly $\#\auts{H}$ elements. \noindent Now observe that the orbit of $A$ under $\mathsf{Sym}_k$ is the collection of all sets $A'$ such that $K_k[A']\cong H$. Therefore, by the Orbit Stabilizer Theorem, we have \[k! \cdot \#\auts{H}^{-1} = \#\{A' \subseteq E(K_k)~|~K_k[A'] \cong H\}.\] Hence we can conclude that \begin{align*} k! \cdot \tilde{h}_\ell &= \sum_{\substack{H \in \Phi_k\\ \#E(H)\leq \ell}} \#\{A \subseteq E(K_k)~|~K_k[A] \cong H\}\cdot (-1)^{\ell-\#E(H)} \cdot \binom{d - \#E(H)}{\ell - \#E(H)}\\ ~&=\sum_{i=0}^\ell \sum_{\substack{H \in \Phi_k\\ \#E(H)= i}} \#\{A \subseteq E(K_k)~|~K_k[A] \cong H\}\cdot (-1)^{\ell-i} \cdot \binom{d - i}{\ell - i}\\ ~&=\sum_{i=0}^\ell \#\{ A \subseteq E(K_k) ~|~\#A = i \wedge \Phi(K_k[A])=1 \}\cdot (-1)^{\ell-i} \cdot \binom{d - i}{\ell - i}\\ ~&= \sum_{i=0}^\ell f_i\cdot (-1)^{\ell-i} \cdot \binom{d - i}{\ell - i} = h_\ell, \end{align*} completing the proof. \end{proof} In the next step, we use P\'olya's Theorem to prove that the Hamming weight of the $f$-vector determines an index $\beta$ of the $h$-vector such that at least one entry of $h$ with index strictly larger than~$\beta$ is non-zero. By \cref{lem:coef_sums} the same then follows for $\tilde{h}$. \begin{lemma}\label{lem:birkhoff_interpol} Let $w$ denote the Hamming weight of $f$ and set $\beta = d-w$. If $\Phi$ is not trivially false on $k$-vertex graphs then at least one of the values $h_d,\dots,h_{\beta+1}$ is non-zero. \end{lemma} \begin{proof} Recall the definition of the $f$-polynomial $\ensuremath{\mathtt{f}}(x)=\sum_{i=0}^d f_i \cdot x^{d-i}$ and observe that \[ \ensuremath{\mathtt{f}}^{(j)}(x) = \sum_{i=0}^{d-j} f_i \cdot (d-i)_j \cdot x^{d-j-i}. \] By $(j)_j = j!$, we immediately obtain $\ensuremath{\mathtt{f}}^{(j)}(0) = f_{d-j} \cdot j!$.
Therefore, by assumption, we have $\ensuremath{\mathtt{f}}^{(j)}(0) = 0$ for $\beta+1$ many indices $j$. Furthermore, we see that \begin{alignat*}{3} \ensuremath{\mathtt{f}}^{(j)}(-1) &= \sum_{i=0}^{d-j} f_i \cdot (d-i)_j \cdot (-1)^{d-j-i} &&= j! \cdot \sum_{i=0}^{d-j} f_i \cdot \binom{d-i}{j} \cdot (-1)^{d-j-i}\\ ~&= j! \cdot \sum_{i=0}^{d-j} f_i \cdot \binom{d-i}{(d-j)-i} \cdot (-1)^{d-j-i}~ &&= j! \cdot h_{d-j}. \end{alignat*} Now assume for the sake of contradiction that each of the values $h_d,\dots,h_{\beta+1}$ is zero. Consequently, $\ensuremath{\mathtt{f}}^{(j)}(-1)=0$ for $j=0,\dots, w-1$. Interpreting those evaluations of the derivatives of the $f$-polynomial as an instance of Hermite-Birkhoff interpolation, the corresponding matrix $E$ looks as follows:\footnote{Recall that an entry~$1$ in the matrix $E$ represents an evaluation $\ensuremath{\mathtt{f}}^{(j)}(-1)=0$ in the first row and an evaluation $\ensuremath{\mathtt{f}}^{(j)}(0)=0$ in the second row.} \[ \begin{blockarray}{rcccccccc} ~ & 0 & 1 & 2 & \dots & w-1 & w& \dots & d \\[1em] \begin{block}{l(cccccccc)} ~& ~1 & 1 & 1 & \dots & 1 & 0 & \dots & 0 \\ ~ & \varepsilon_{20} & \varepsilon_{21} & \varepsilon_{22} & \dots & \varepsilon_{2(w-1)}& \varepsilon_{2w} & \dots & \varepsilon_{2d}~ \\ \end{block} \end{blockarray} \] In particular, at least $\beta+1=d+1-w$ of the values $\varepsilon_{2j}$ are $1$; As $\beta+1$ and $w$ sum up to $d+1$, we can easily verify that the conditions of P\'olya's Theorem (\cref{thm:polya}) are satisfied: Let us modify $E$ by arbitrarily choosing \emph{precisely} $\beta+1$ of the $\varepsilon_{2,j}$ that are $1$ and set the others to $0$, and call the resulting matrix $\hat{E}$. We then have both $M_j \geq j+1$ (for all $j \in \{0,\dots,d-1\}$) and the first and second row of $\hat{E}$ sum up to \emph{precisely} $d+1$. Hence the matrix $\hat{E}$ is poised, that is, the only polynomial of degree at most $d$ that satisfies the corresponding instance of Hermite-Birkhoff interpolation is the zero polynomial. As we obtained $\hat{E}$ from $E$ just by ignoring some vanishing conditions, the same conclusion is true for $E$ and thus $\ensuremath{\mathtt{f}}=0$ is the unique solution. This, however, contradicts the fact that the property $\Phi$ is not trivially false on $k$-vertex graphs, completing the proof. \end{proof} Combining \cref{lem:coef_sums,lem:birkhoff_interpol} yields our main technical result, which we restate here for convenience. \mnthmcom* \begin{proof} Set $d=\binom{k}{2}$ and $\beta=d-w$. By \cref{eq:indsubgmp} the function $a$ exists and is computable and has a finite support. Now, \cref{lem:birkhoff_interpol} implies that at least one of the values $h^{\Phi,k}_d,\dots,h^{\Phi,k}_{\beta+1}$ is non-zero and thus, by \cref{lem:coef_sums}, at least one of the values $\tilde{h}_d,\dots,\tilde{h}_{\beta+1}$ is non-zero as well. Next, observe that $\tilde{h}_i = \sum_{K \in \mathcal{H}_i} a(K)$ for all $i \in \{0,\dots,d\}$, where $\mathcal{H}_i$ is the set of all graphs on $k$ vertices and $i$ edges. In particular, $\tilde{h}_i \neq 0$ implies that $a(K)\neq 0$ for at least one $K \in \mathcal{H}_i$, yielding the claim. \end{proof} \section{A Classification of \texorpdfstring{\#IndSub$\bm{(\Phi)}$}{{\#IndSub(Phi)}} by the Hamming Weight of the \texorpdfstring{$\bm{f}$}{f}-Vectors}\label{sec:meta_theorem} In this section, we derive a general hardness result for $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ based on the Hamming weight of the $f$-vector. 
In a sense, we ``black-box'' \cref{thm:main_theorem_combinatorial}; using the resulting classification, we establish first hardness results and almost tight conditional lower bounds for a variety of families of graph properties. However, note that taking a closer look at the number of edges of the graphs with non-vanishing coefficients (as provided by \cref{thm:main_theorem_combinatorial}) often yields improved, sometimes even matching conditional lower bounds; we defer the treatment of the refined analysis to \cref{sec:refined_bounds}. In what follows, we write $\mathcal{K}(\Phi)$ for the set of all $k$ such that $\Phi_k$ is non-empty. \begin{theorem}\label{thm:main_general} Let $\Phi$ denote a computable graph property and suppose that the set $\mathcal{K}(\Phi)$ is infinite. Let $\beta: \mathcal{K}(\Phi) \to \mathbb{Z}_{\ge 0}$ denote the function that maps $k$ to $\binom{k}{2}-\mathsf{hw}(f^{\Phi,k})$. If $\beta(k)\in \omega(k)$ then the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time \[g(k)\cdot |V(G)|^{o({(\beta(k)/k)}/{\log(\beta(k)/k)})}\] for any function $g$, unless ETH fails. The same statement holds for $\#\ensuremath{\mathsf{IndSub}}sprob(\overline{\Phi})$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$. \end{theorem} Note that the condition of $\mathcal{K}(\Phi)$ being infinite is necessary for hardness: Otherwise there is a constant~$c$ such that we can output $0$ whenever $k\geq c$ and solve the problem by brute-force if $k< c$, yielding an algorithm with a polynomial running time. Note further that the $(\log(\beta(k)/k))^{-1}$-factor in the exponent is related to the question of whether it is possible to ``beat treewidth''~\cite{Marx10}.\footnote{See \cref{thm:homsdicho} and its discussion.} In particular, if the factor of $(\log \mathsf{tw}(H))^{-1}$ in \cref{thm:homsdicho} can be dropped, then all further results in this section can be strengthened to yield tight conditional lower bounds under ETH. \begin{proof} By \cref{thm:main_theorem_combinatorial}, for each $k\in \mathcal{K}(\Phi)$ we obtain a graph $H_k$ with $k$ vertices and at least $\beta(k)$ edges such that $a(H_k)\neq 0$, where $a$ is the function in~\cref{eq:indsubgmp}. The average degree of $H_k$ satisfies \[d(H_k) = \frac{1}{k}\cdot \sum_{v \in V(H_k)}\mathsf{deg}(v) = \frac{2|E(H_k)|}{k} \geq \frac{2\beta(k)}{k} \,,\] where the second equality is due to the Handshaking Lemma. By \cref{lem:convenient}, we thus obtain that $\mathsf{tw}(H_k) \geq \frac{\beta(k)}{k}$, which is unbounded as $\beta(k)\in \omega(k)$ by assumption. Now let $\mathcal{H}$ denote the set of all graphs $H_k$ for $k \in \mathcal{K}(\Phi)$. By \cref{thm:homsdicho}, we obtain that $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ is $\#\W{1}$-complete and cannot be solved in time \[g(k)\cdot |V(G)|^{o(({\beta(k)/k})/{\log(\beta(k)/k)})}\] for any function $g$, unless ETH fails. Further, by Complexity Monotonicity (\cref{thm:monotonicity}), the same is true for $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ as well. Finally, we use \cref{fac:invariance} to obtain the same result for $\#\ensuremath{\mathsf{IndSub}}sprob(\overline{\Phi})$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$; completing the proof. 
\end{proof} \subsection{Low Edge-Densities and Sparse Graph Properties}\label{sec:low_edge_dens} As a first application of \cref{thm:main_general}, we consider properties $\Phi$ that satisfy \[\mathsf{hw}(f^{\Phi,k}) \in o(k^2).\] We say that such a property $\Phi$ has \emph{low edge-densities}. Properties with low edge-densities subsume, for example, the exclusion of a fixed set of minors, such as planarity. They have been studied by Jerrum and Meeks~\cite{JerrumM15density}, who showed that $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete for these properties. However, their proof uses Ramsey's Theorem and thus only establishes an implicit conditional lower bound of $g(k) \cdot |V(G)|^{o(\log k)}$. In contrast, we achieve the following, almost tight lower bound: \begin{theorem}\label{cor:low_edge_densities} Let $\Phi$ denote a computable graph property with low edge-densities. Suppose that the set~$\mathcal{K}(\Phi)$ is infinite. Then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time \[g(k)\cdot |V(G)|^{o\left(k/\log k\right)}\] for any function $g$, unless ETH fails. The same is true for $\#\ensuremath{\mathsf{IndSub}}sprob(\overline{\Phi})$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$. \end{theorem} \begin{proof} If $\Phi$ has low edge-densities, then we have $\beta(k)=\binom{k}{2}- \mathsf{hw}(f^{\Phi,k}) \in \Theta(k^2)$. Thus \[ o\left(\frac{\beta(k)/k}{\log(\beta(k)/k)}\right) = o\left(k/\log k\right) \,.\] The claim hence follows by \cref{thm:main_general}. \end{proof} The previous result applies, in particular, to sparse properties. In \cref{sec:refined_bounds} we show that a refined analysis based on Tur\'an's Theorem as well as \cref{thm:main_theorem_combinatorial} establishes a tight conditional lower bound for sparse properties $\Phi$ that additionally satisfy a density condition on $\mathcal{K}(\Phi)$; the combination of those two results then implies \cref{thm:sparse_intro}. \subsection{Graph Properties Depending Only on the Number of Edges} Jerrum and Meeks~\cite{JerrumM15density,JerrumM17} asked whether $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete whenever $\Phi$ is non-trivial infinitely often and only depends on the number of edges of a graph, that is, \[\forall H_1, H_2: |E(H_1)|=|E(H_2)| \Rightarrow \Phi(H_1) = \Phi(H_2).\] We answer this question affirmatively, even for properties that can depend both on the number of edges and vertices of the graph, and additionally provide an almost tight conditional lower bound: \begin{theorem}\label{cor:number of edges} Let $\Phi$ denote a computable graph property that only depends on the number of edges and the number of vertices of a graph. If $\Phi_k$ is non-trivial only for finitely many $k$ then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is fixed-parameter tractable. Otherwise, $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time \[g(k)\cdot |V(G)|^{o\left(k/\log k\right)}\] for any function $g$, unless ETH fails. \end{theorem} Note that \cref{cor:number of edges} is also true for $\#\ensuremath{\mathsf{IndSub}}sprob(\overline{\Phi})$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$, as $\neg\Phi$ and $\overline{\Phi}$ depend only on the number of edges and vertices of a graph if and only if $\Phi$ does. \begin{proof} First, assume that $\Phi_k$ is non-trivial only for finitely many $k$.
Then, there is a constant $c$ such that for every $k > c$, the property $\Phi_k$ is either trivially true or trivially false. Hence, given as input a graph $G$ and an integer $k$, we check whether $k\leq c$. If this is the case, we solve the problem by brute-force. Otherwise, we check whether $\Phi_k$ is trivially false or trivially true.\footnote{This step is the reason why we only get fixed-parameter tractability and not necessarily polynomial-time tractability.} If $\Phi_k$ is false, we output $0$; otherwise we output $\binom{n}{k}$. It is immediate that this algorithm yields fixed-parameter tractability. Now assume that $\Phi_k$ is non-trivial for infinitely many $k$. Since the number of vertices is fixed to $k$, the property $\Phi_k$ depends, by assumption, only on the number of edges of a graph. Thus, we have \begin{equation}\label{eq:edges_complement} \mathsf{hw}(f^{\neg\Phi,k}) = \binom{k}{2} - \mathsf{hw}(f^{\Phi,k}). \end{equation} Hence, set \[\hat{\Phi}_k := \begin{cases} \Phi_k &\text{if } \mathsf{hw}(f^{\Phi,k}) \leq \frac{1}{2}\binom{k}{2}\\ \neg\Phi_k &\text{if } \mathsf{hw}(f^{\Phi,k}) > \frac{1}{2}\binom{k}{2} \,. \end{cases} \] We observe that, by assumption, $\mathcal{K}(\hat{\Phi})$ is infinite, and by \cref{fac:invariance} the problems $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\hat{\Phi})$ are equivalent. By definition and by~\cref{eq:edges_complement}, we see that $\mathsf{hw}(f^{\hat{\Phi},k}) \leq \frac{1}{2}\binom{k}{2}$ and therefore $\beta(k) = \binom{k}{2} - \mathsf{hw}(f^{\hat{\Phi},k}) \in \Theta(k^2)$. Thus, we have \[o\left(\frac{\beta(k)/k}{\log(\beta(k)/k)}\right) = o\left(k/\log k\right).\] The claim now follows by \cref{thm:main_general}. \end{proof} \subsection{Monotone Graph Properties} Recall that a property $\Phi$ is called monotone if it is closed under taking subgraphs. The decision version of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$, that is, deciding whether there is an induced subgraph of size $k$ that satisfies $\Phi$, is known to be $\W{1}$-complete if $\Phi$ is monotone and non-trivial and the set $\mathcal{K}(\Phi)$ is infinite; this follows implicitly from a result of Khot and Raman~\cite{KhotR02}. However, as the reduction of Khot and Raman is not parsimonious, it does not yield $\#\W{1}$-completeness of the counting version. More importantly, the proof of Khot and Raman uses Ramsey's Theorem and thus only implies a conditional lower bound of $g(k)\cdot |V(G)|^{o(\log k)}$. Using our main result, we achieve a much stronger and almost tight lower bound under ETH. \begin{theorem}\label{thm:monotone_basic} Let $\Phi$ denote a computable graph property that is monotone and non-trivial. Suppose that $\mathcal{K}(\Phi)$ is infinite. Then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time \[g(k)\cdot |V(G)|^{o\left(k/\log k\right)}\] for any function $g$, unless ETH fails. The same is true for $\#\ensuremath{\mathsf{IndSub}}sprob(\overline{\Phi})$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$. \end{theorem} \begin{proof} As $\Phi$ is monotone and non-trivial, there is a graph $F$ such that $\Phi(H)$ is false for every $H$ that contains $F$ as a (not necessarily induced) subgraph. Set $r=|V(F)|$ and fix $k\in \mathcal{K}(\Phi)$.
By Tur\'ans Theorem (\cref{thm:turan}) we have that every graph $H$ on $k$ vertices with more than $\left(1-\frac{1}{r}\right)\cdot \frac{k^2}{2}$ edges contains the clique $K_{r+1}$ and thus $F$ as a subgraph. Consequently, $\Phi$ is false on every graph with $k$ vertices and more than $\left(1-\frac{1}{r}\right)\cdot \frac{k^2}{2}$ edges. Therefore, we have \[\beta(k)= \binom{k}{2} - \mathsf{hw}(f^{\Phi,k}) \geq \binom{k}{2} - \left(1-\frac{1}{r}\right)\cdot \frac{k^2}{2} = \frac{k^2}{2r} - \frac{k}{2} \in \Omega(k^2). \] Thus $\beta(k) \in \Theta(k^2)$ and we conclude that \[o\left(\frac{\beta(k)/k}{\log(\beta(k)/k)}\right) = o\left(k/\log k\right) .\] The claim hence follows by \cref{thm:main_general}. \end{proof} \section{Refined Lower Bounds and Clique-Minors}\label{sec:refined_bounds} Recall that the lower bounds of the previous section become tight if it is impossible to ``beat treewidth'', that is, if the $(\log k)^{-1}$ factor in the exponent of \cref{thm:homsdicho} can be dropped. In this section, we show that the lower bounds of the previous section can also be refined---and in case of sparse properties even be made tight---without the latter assumption. This requires relying on two results from extremal graph theory on forbidden cliques and clique-minors. The first one is Tur\'an's Theorem, which we have seen already. The second one is a consequence of the Kostochka-Thomason-Theorem: \begin{theorem}[\cite{Kostochka84,Thomason01}]\label{thm:clique_minors} There is a constant $c > 0$ such that every graph $H$ with an average degree of at least $ct\sqrt{\log t}$ contains the clique $K_t$ as a minor. \end{theorem} Note that \cref{thm:clique_minors} is often stated in terms of the number of edges of a graph, instead of its average degree. However, due to the Handshaking-Lemma, both statements are equivalent. Roughly speaking, we combine the Kostochka-Thomason-Theorem with \cref{thm:main_theorem_combinatorial} to find graphs with large clique-minors in the linear combination of homomorphisms associated with a graph property~$\Phi$ as given by \cref{eq:indsubgmp}. This then allows us to derive hardness by a reduction from the problem of \emph{finding} cliques, instead of relying on \cref{thm:homsdicho}. In what follows, we say that a subset $\mathcal{K}$ of the natural numbers is \emph{dense} if there is a constant $\ell\geq 1$ such that for all $n\in \mathbb{N}_{>0}$ there is a $k\in \mathcal{K}$ that satisfies $n\leq k \leq \ell n$. Now recall that $\mathcal{K}(\Phi)$ is the set of all~$k$ such that $\Phi_k$ is not empty. The lower bounds for $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ in the current section require the set~$\mathcal{K}(\Phi)$ to be dense. Roughly speaking, this is to exclude artificial properties (such as for instance $\Phi(H)= 1$ if and only if $H$ is an independent set and $|V(H)|=2\uparrow m$ for some $m\in\mathbb{N}$, where $2\uparrow m$ is the $m$-fold exponential tower of $2$). While the latter property satisfies the conditions of \cref{thm:main_general}, we cannot construct a \emph{tight} reduction to $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ as the only non-trivial oracle queries satisfy $k= 2\uparrow m$ for some $m\in\mathbb{N}$. Fortunately, monotone properties exclude such artificial properties: \begin{lemma}\label{lem:easy_monotone} Let $\Phi$ denote a non-trivial monotone graph property such that $\mathcal{K}(\Phi)$ is infinite. Then~$\mathcal{K}(\Phi)$ is the set of all positive integers and thus dense. 
\end{lemma} \begin{proof} Fix an $n\in \mathbb{N}_{>0}$. As $\mathcal{K}(\Phi)$ is infinite, there is a $k\geq n$ in $\mathcal{K}(\Phi)$. Thus there is a graph $H\in \Phi_k$. Now delete $k-n$ arbitrary vertices of $H$ and call the resulting graph $H'$. As $\Phi$ is monotone, we have that $H'\in \Phi_n$ and hence $n\in \mathcal{K}(\Phi)$ as well. \end{proof} The following technical lemma is the basis for the lower bounds in this section; recall that the problem $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ asks, given a graph $H\in \mathcal{H}$ and an arbitrary graph $G$, to compute the number of homomorphisms from $H$ to $G$. \begin{lemma}\label{lem:main_refined_bounds} Let $r\geq 1$ denote a constant and let $\mathcal{H}$ denote a decidable class of graphs. Further, let~$\mathcal{K}(\mathcal{H})$ denote the set of all positive integers $k$ such that there is a graph $H\in \mathcal{H}$ with $k$ vertices and at least $k^2/2r - k/2$ edges. If $\mathcal{K}(\mathcal{H})$ is dense, then $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ cannot be solved in time \[ f(|V(H)|) \cdot |V(G)|^{o(|V(H)|/\sqrt{\log |V(H)|})} \] for any function $f$, unless ETH fails. \end{lemma} \begin{proof} We construct a tight reduction from the problem $\textsc{Clique}$, which asks, given a graph $G$ and a parameter $\hat{k}\in\mathbb{N}_{>0}$, to \emph{decide} whether there is a clique of size $\hat{k}$ in $G$. It is known that $\textsc{Clique}$ cannot be solved in time $\hat{f}(\hat{k})\cdot |V(G)|^{o(\hat{k})}$ for any function $\hat{f}$, unless ETH fails~\cite{Chenetal05,Chenetal06}. Now assume there is an algorithm $\mathbb{A}$ that solves $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ in time \[ f(|V(H)|) \cdot |V(G)|^{o(|V(H)|/\sqrt{\log |V(H)|})} \] for some function $f$. We use $\mathbb{A}$ to solve $\textsc{Clique}$ in time $\hat{f}(\hat{k})\cdot |V(G)|^{o(\hat{k})}$ for some function $\hat{f}$. As $\mathcal{K}(\mathcal{H})$ is dense, there is a constant $\ell \geq 1$ such that for all $\hat{k}\in \mathbb{N}_{>0}$ there is a $k\in \mathcal{K}(\mathcal{H})$ such that $\hat{k} \leq k \leq \ell \hat{k}$. Given a graph $G$ with $n$ vertices and $\hat{k}\in \mathbb{N}_{>0}$, we construct the graph $G'$ as follows: We first search for a $k\in \mathcal{K}(\mathcal{H})$ that satisfies $\hat{k} \leq k \leq \ell \hat{k}$. Note that such a $k$ can be found effectively, since $\mathcal{H}$ is decidable and hence $\mathcal{K}(\mathcal{H})$ is decidable as well. Then, we add $k-\hat{k}$ ``fresh'' vertices to $G$ and add edges between all pairs of new vertices and between all pairs of a new vertex and an old vertex. Next, let $c$ denote the constant from \cref{thm:clique_minors} and set \[h(k):= cr'\cdot \sqrt{\log ({k}/{cr'})}\,,\] where $r' := \max\{1/c,r+1\}$. Now, we construct a graph $\hat{G}$ from $G'$ as follows: The vertices of $\hat{G}$ are the $h(k)$-cliques of $G'$,\footnote{\label{ftnt:rounding}Formally, we have to round $h(k)$ and later $k/h(k)$. For the sake of readability, we assume that all logs and fractions yield integers, but we point out that this might require finding a $\lceil k/h(k)\rceil$-clique at the end of the proof, while the oracle can only determine the existence of a $\lfloor k/h(k)\rfloor$-clique. However, using the latter, we can easily decide whether there is a $\lceil k/h(k)\rceil$-clique by checking for each vertex whether its neighbourhood contains a $\lfloor k/h(k)\rfloor$-clique.} and two vertices $C_1$ and $C_2$ of $\hat{G}$ are made adjacent if all edges between vertices in $C_1$ and vertices in $C_2$ are present in $G'$. 
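For concreteness, the following is a minimal Python sketch of the construction of $\hat{G}$ from $G'$; the function names, the edge representation and the brute-force enumeration of $h(k)$-cliques are purely illustrative choices of ours and have no bearing on the running time analysis below.
\begin{verbatim}
from itertools import combinations

def build_G_hat(vertices, edges, h):
    # Vertices of G_hat are the h-cliques of G'; two h-cliques are
    # adjacent iff every edge between them is present in G'.
    E = {frozenset(e) for e in edges}

    def is_clique(S):
        return all(frozenset(p) in E for p in combinations(S, 2))

    # Brute-force enumeration of all h-cliques (illustrative only).
    cliques = [frozenset(S) for S in combinations(vertices, h)
               if is_clique(S)]

    def adjacent(C1, C2):
        # All cross pairs must be edges of G'; in particular C1 and C2
        # cannot share a vertex, since G' has no self-loops.
        return all(frozenset((a, b)) in E for a in C1 for b in C2)

    hat_edges = [(C1, C2) for C1, C2 in combinations(cliques, 2)
                 if adjacent(C1, C2)]
    return cliques, hat_edges
\end{verbatim}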
Note that $\hat{G}$ has $O(n^{h(k)})$ vertices and can be constructed in time $g(\hat{k})\cdot O(n^{h(k)})$ for some computable function $g$. \begin{claim}\label{clm:cliqueETH} The graph $G$ contains a clique of size $\hat{k}$ if and only if $\hat{G}$ contains a clique of size $k/h(k)$. \end{claim} \begin{claimproof} Let $C$ denote a $\hat{k}$-clique in $G$. Then we obtain a $k$-clique in $G'$ by adding the $k-\hat{k}$ fresh vertices to $C$. Next, partition this $k$-clique into blocks of size $h(k)$. Each block will be a vertex of $\hat{G}$ and all corresponding vertices are pairwise adjacent by the construction of $\hat{G}$. As there are $k/h(k)$ many blocks, we have found the desired clique in $\hat{G}$. For the other direction, let $\hat{C}$ denote a $k/h(k)$-clique in $\hat{G}$. By the definition of $\hat{G}$, each vertex of $\hat{C}$ corresponds to a clique of size $h(k)$ in $G'$. Furthermore, no two distinct such cliques can share a common vertex, as the corresponding vertices in $\hat{G}$ are adjacent---recall that we do not allow self-loops. Consequently, the union of the $k/h(k)$ many cliques constitutes a clique of size $k$ in $G'$. Finally, at most $k-\hat{k}$ of the vertices of this clique can be fresh vertices, and thus $G$ contains a clique of size (at least) $\hat{k}$. \end{claimproof} Next, we search for a graph $H\in \mathcal{H}$ with $k$ vertices and at least $k^2/2r - k/2$ edges. By assumption, this can be done in time $g'(k)$ for some computable function $g'$ as $\mathcal{H}$ is decidable. Now we see that \[d(H)= \frac{1}{k} \cdot \sum_{v \in V(H)} \mathsf{deg}(v) = \frac{2 |E(H)|}{k} \geq \frac{k}{r} -1\,. \] Set $t = \frac{k}{h(k)}$. For $k$ large enough,\footnote{If $k$ is not large enough for the inequalities to hold, then $k$ and thus $\hat{k}$ are bounded by a constant, and we can compute the number of $\hat{k}$-cliques in $G$ by brute-force.} we have \begin{align*} ct\sqrt{\log t} &= c \cdot \frac{k}{cr'\cdot \sqrt{\log ({k}/{cr'})}} \cdot \sqrt{\log\left({k}/{cr'}\right) - \log\sqrt{\log\left({k}/{cr'}\right)}}\\ &\leq c \cdot \frac{k}{cr'\cdot \sqrt{\log ({k}/{cr'})}} \cdot \sqrt{\log ({k}/{cr'})} = \frac{k}{r'} \leq \frac{k}{r} - 1 \leq d(H)\,. \end{align*} \Cref{thm:clique_minors} thus implies that $K_t$ is a minor of $H$. Furthermore, it is known that, whenever a graph $F$ is a minor of a graph $H$, there is a tight reduction from counting homomorphisms from $F$ to counting homomorphisms from $H$ --- see for instance~\cite[Chapter~2.5]{Roth19} and Section~3 in the full version\footnote{Full version available at \url{https://arxiv.org/abs/1902.04960}.} of~\cite{DellRW19icalp}. Consequently, we can use the algorithm $\mathbb{A}$ to compute the number of homomorphisms from $K_t$ to $\hat{G}$. By assumption on $\mathbb{A}$, this takes time at most \[ f(|V(H)|) \cdot |V(\hat{G})|^{o(|V(H)|/\sqrt{\log |V(H)|})} = f(k) \cdot (n^{h(k)})^{o(k/\sqrt{\log k})} = f(k) \cdot n^{o(k)} \,,\] where the latter holds as $h(k)\in \Theta(\sqrt{\log k})$---recall that $r'$ and $c$ are constants. However, it is easy to see that $\hat{G}$ contains a $t$-clique if and only if the number of homomorphisms from $K_t$ to $\hat{G}$ is at least $1$. By \cref{clm:cliqueETH}, this is equivalent to $G$ having a clique of size $\hat{k}$. As $k \in O(\hat{k})$ (recall that $\ell$ is a constant), the overall running time is bounded by \[ g(\hat{k})\cdot O(n^{h(k)}) + g'(k) + f(k) \cdot n^{o(k)} \leq \hat{f}(\hat{k}) \cdot n^{o(\hat{k})} \] for $\hat{f}(\hat{k}) := g(\hat{k}) + g'(\ell \hat{k}) + f(\ell \hat{k})$. 
This yields the desired contradiction and concludes the proof. \end{proof} We are now able to establish the refined lower bounds for $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$. \begin{theorem}\label{thm:monotone_refined} Let $\Phi$ denote a computable graph property that is monotone and non-trivial. Suppose that $\mathcal{K}(\Phi)$ is infinite. Then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ cannot be solved in time \[g(k)\cdot |V(G)|^{o(k/\sqrt{\log k})}\] for any function $g$, unless ETH fails. The same is true for $\#\ensuremath{\mathsf{IndSub}}sprob(\overline{\Phi})$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$. \end{theorem} \begin{proof} We begin as in the proof of \cref{thm:monotone_basic}: As $\Phi$ is non-trivial, there is a graph $F$ such that $\Phi(H)$ is false for every $H$ that contains $F$ as a (not necessarily induced) subgraph. Set $r=|V(F)|$ and fix $k\in \mathcal{K}(\Phi)$. By Tur\'an's Theorem (\cref{thm:turan}) we have that every graph $H$ on $k$ vertices with more than $\left(1-\frac{1}{r}\right)\cdot \frac{k^2}{2}$ edges contains the clique $K_{r+1}$ and thus $F$ as a subgraph. Consequently, $\Phi$ is false on every graph with $k$ vertices and more than $\left(1-\frac{1}{r}\right)\cdot \frac{k^2}{2}$ edges. We now use \cref{thm:main_theorem_combinatorial} and obtain a computable and unique function $a$ of finite support such that \[\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\ast} = \sum_{H \in \mathcal{G}} a(H) \cdot \#\ensuremath{\mathsf{Hom}}s{H}{\ast},\] satisfying that there is a graph $H_k$ on $k$ vertices and at least \[\binom{k}{2}-\mathsf{hw}(f^{\Phi,k})+1 \geq \binom{k}{2} - \left(1-\frac{1}{r}\right)\cdot \frac{k^2}{2} + 1 \geq \frac{k^2}{2r} - \frac{k}{2}\] edges such that $a(H_k) \neq 0$. Consequently, Complexity Monotonicity (\cref{thm:monotonicity}) yields a tight reduction from the problem $\#\ensuremath{\textsc{Hom}}(\mathcal{H})$ where $\mathcal{H}:=\{H_k~|~ k\in \mathcal{K}(\Phi)\}$. By the previous observation, the graph $H_k$ has $k$ vertices and at least $k^2/2r - k/2$ edges, and by \cref{lem:easy_monotone} the set of $k$ such that $H_k\in \mathcal{H}$ is dense. Thus we can use \cref{lem:main_refined_bounds}, which concludes the proof---note that the results for $\#\ensuremath{\mathsf{IndSub}}sprob(\overline{\Phi})$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$ follow by \cref{fac:invariance}. \end{proof} We continue with the refined lower bound for properties that only depend on the number of edges of a graph; in this case, we have to assume density. \begin{theorem}\label{cor:number of edges_refined} Let $\Phi$ denote a computable graph property that only depends on the number of edges of a graph. If the set of $k$ for which $\Phi_k$ is non-trivial is dense, then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ cannot be solved in time \[g(k)\cdot |V(G)|^{o(k/\sqrt{\log k})}\] for any function $g$, unless ETH fails. \end{theorem} \begin{proof} We use the same set-up as in the proof of \cref{cor:number of edges}. In particular, we obtain $\hat{\Phi}$ such that $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\hat{\Phi})$ are equivalent, $\mathcal{K}(\hat{\Phi})$ is dense, and $\mathsf{hw}(f^{\hat{\Phi},k}) \leq \frac{1}{2}\binom{k}{2}$ for all $k\in \mathcal{K}(\hat{\Phi})$. 
We use \cref{thm:main_theorem_combinatorial} and obtain a computable and unique function $a$ of finite support such that \[\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\ast} = \sum_{H \in \mathcal{G}} a(H) \cdot \#\ensuremath{\mathsf{Hom}}s{H}{\ast},\] satisfying that there is a graph $H_k$ on $k$ vertices and at least \[\binom{k}{2}-\mathsf{hw}(f^{\Phi,k})+1 \geq \frac{k^2}{4} - \frac{k}{2}\] edges such that $a(H_k) \neq 0$. The application of Complexity Monotonicity (\ref{thm:monotonicity}) and \cref{lem:main_refined_bounds} is now similar to the previous proof; the only difference is, that we can choose $r=2$. \end{proof} As a final result in this section, we establish a tight conditional lower bound for sparse properties; recall that a property $\Phi$ is called \emph{sparse} if there is a constant $s$ only depending on $\Phi$ such that $\Phi$ is false on every graph with $k$ vertices and more than $sk$ edges. Instead of relying in the Kostochka-Thomason-Theorem, it suffices to use Tur\'an's Theorem, however. \begin{theorem}\label{thm:sparse_tight} Let $\Phi$ denote a computable sparse graph property such that $\mathcal{K}(\Phi)$ is dense. Then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ cannot be solved in time \[g(k)\cdot |V(G)|^{o\left(k\right)}\] for any function $g$, unless ETH fails. The same is true for $\#\ensuremath{\mathsf{IndSub}}sprob(\overline{\Phi})$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$. \end{theorem} \begin{proof} Let $s$ denote the constant given by the definition of sparsity, and fix $k\in \mathcal{K}(\Phi)$. We use \cref{thm:main_theorem_combinatorial} and obtain a computable and unique function $a$ of finite support such that \[\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\ast} = \sum_{H \in \mathcal{G}} a(H) \cdot \#\ensuremath{\mathsf{Hom}}s{H}{\ast},\] satisfying that there is a graph $H_k$ on $k$ vertices and at least $\binom{k}{2}-\mathsf{hw}(f^{\Phi,k})+1$ edges such that $a(H_k) \neq 0$. Now choose $r := \lceil\frac{k}{2(s+1)+1}\rceil$, and observe that \[\binom{k}{2}-\mathsf{hw}(f^{\Phi,k})+1 > \binom{k}{2}-sk > \binom{k}{2}-(s+1)k \geq \left(1-\frac{1}{r}\right)\cdot \frac{1}{2}|V(H_k)|^2\,.\] By Tur\'ans Theorem (\cref{thm:turan}), $H_k$ hence contains $K_{r+1}$ as a subgraph and, in particular, $K_r$ as a minor. Furthermore, Complexity Monotonicity (\cref{thm:monotonicity}) shows that we can, given a graph $G$ compute $\#\ensuremath{\mathsf{Hom}}s{H_k}{G}$ in linear time if we are given oracle access to $\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\star}$. As we have seen in the proof of Lemma~\ref{lem:main_refined_bounds}, it is known that, whenever a graph $F$ is a minor of a graph $H$, there is a tight reduction from counting homomorphisms from $F$ to counting homomorphisms from $H$~\cite{DellRW19icalp,Roth19}. In particular, this implies that we can compute $\#\ensuremath{\mathsf{Hom}}s{K_{r}}{G}$ in linear time if we are given oracle access to $\#\ensuremath{\mathsf{IndSub}}s{\Phi,k}{\star}$. Note further that $\#\ensuremath{\mathsf{Hom}}s{K_{r}}{G}$ is at least~$1$ if and only if $G$ contains a clique of size $r$. We continue similarly as in the proof of \cref{lem:main_refined_bounds}: Assume that there is an algorithm $\mathbb{A}$ that solves $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ in time $g(k)\cdot |V(G)|^{o\left(k\right)}$ for some function $g$. We show that $\mathbb{A}$ can be used to solve the problem of \emph{finding} a clique of size $\hat{k}$ in a graph $G$ in time $\hat{f}(\hat{k})\cdot|V(G)|^{o(\hat{k})}$ for some function $\hat{f}$. 
Given $\hat{k}$ and $G$, search for the smallest $k\in\mathcal{K}(\Phi)$ such that \[\hat{k} \leq \lceil\frac{k}{2(s+1)+1}\rceil \,,\] and note that finding such a $k$ is computable in time only depending on $\hat{k}$ as $\Phi$ is computable. Note further that $k\in O(\hat{k})$ as $s$ is a constant and $\mathcal{K}(\Phi)$ is dense. We construct the graph $G'$ from $G$ by adding $\lceil\frac{k}{2(s+1)+1}\rceil - \hat{k}$ ``fresh'' vertices and adding edges between every pair of new vertices and every pair containing one new and one old vertex. It is easy to see that $G$ has a clique of size $\hat{k}$ if and only if $G'$ has a clique of size $\lceil\frac{k}{2(s+1)+1}\rceil$. By the analysis and assumptions above, we can decide whether the latter is true by using $\mathbb{A}$ in time $g(k)\cdot |V(G')|^{o\left(k\right)}$. As $|V(G')|\in O(|V(G)|)$ and $k\in O(\hat{k})$, the overall time to decide whether $G$ has a clique of size $\hat{k}$ is hence bounded by $\hat{f}(\hat{k})\cdot|V(G)|^{o(\hat{k})}$ for some function $\hat{f}$, which is impossible, unless ETH fails~\cite{Chenetal05,Chenetal06}. The results for $\#\ensuremath{\mathsf{IndSub}}sprob(\overline{\Phi})$ and $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$ follow by Fact~\ref{fac:invariance}. \end{proof} \section{Hereditary Graph Properties}\label{sec:hereditary} While we established hardness for a variety of graph properties with $f$-vectors of small hamming weight in the previous sections, we observe that our meta-theorem (\cref{thm:main_general}) does not apply for properties~$\Phi$ for which $f^{\Phi,k}$, $f^{\neg\Phi,k}$ and $f^{\overline{\Phi},k}$ have large hamming weight. A well-studied class of properties containing examples of such $\Phi$ is the family of hereditary graph properties: In contrast to monotone properties, which are closed under taking subgraphs, a property $\Phi$ is called \emph{hereditary} if it is closed under taking \emph{induced} subgraphs. It is a well-known fact that every hereditary property $\Phi$ is characterized by a (possibly infinite) set $\Gamma(\Phi)$ of forbidden induced subgraphs, that is \[\Phi(G) = 1 \Leftrightarrow \forall H \in \Gamma(\Phi): \#\ensuremath{\mathsf{IndSub}}s{H}{G}=0\,. \] Given any graph $H$, the property $\Phi$ of being \emph{$H$-free}, that is, not containing $H$ as an induced subgraph, is hereditary with $\Gamma(\Phi)=\{H\}$. In this section, we settle the hardness-question for many hereditary graph properties as well. Note that $\Phi$ is hereditary if and only if its inverse $\overline{\Phi}$ is. In particular, $H\in \Gamma(\Phi) \Leftrightarrow \overline{H} \in \Gamma(\overline{\Phi})$. \begin{restatable}{theorem}{hermn} Let $H$ denote graph that is not the trivial graph with a single vertex and let $\Phi$ denote the property $\Phi(G)=1:\Leftrightarrow$ $G$ is $H$-free. Then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time \[g(k)\cdot |V(G)|^{o(k)}\] for any function $g$, unless ETH fails. The same is true for the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$.\ifx1\undefined{\lipicsEnd}\fi \end{restatable} \def1{1} \noindent We start with some terminology used in this section. Given a graph $H$, a pair $(u,v) \in V(H)^2$, and two non-negative integers $x$ and $y$, we construct the \emph{exploded} graph $H(u,v,x,y)$ by adding $x-1$ clones of $u$ and $y-1$ clones of $v$, including all incident edges; if $x$ or $y$ are zero, then we delete $u$ or $v$, respectively. 
Given an edge $e=\{u,v\}$ of $H$ and two non-negative integers $x$ and $y$, we define the $e$-\emph{exploded} graph as $H_{u,v}^{x,y} := (V(H),E(H)\setminus \{u, v\})(u,v,x,y)$. Consult \cref{fig:explosion} for a visualization. An edge $e=\{u,v\}$ of a graph $H$ is called \emph{critical}, if $\#\ensuremath{\mathsf{IndSub}}s{H}{H^{x,y}_{u,v}}=0$ for every pair~$x,y\in \mathbb{N}_{\geq 0}$. Now let $\Phi$ denote a hereditary graph property and let $\Gamma(\Phi)$ denote the associated set of forbidden induced subgraphs. We say that $\Phi$ has a \emph{critical edge} if there is a graph $H\in \Gamma(\Phi)$ and an edge $\{u,v\}\in E(H)$ such that for all positive integers $x$ and $y$, the graph $H_{u,v}^{x,y}$ satisfies $\Phi$, that is, for every $\hat{H}\in\Gamma(\Phi)$, we have \[\#\ensuremath{\mathsf{IndSub}}s{\hat{H}}{H_{u,v}^{x,y}} = 0.\] Finally, we say that a hereditary property $\Phi$ is \emph{critical} if either $\Phi$ or its inverse $\overline{\Phi}$ has a critical edge. We will see later in this section that every critical property $\Phi$ will induce hardness of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$. \usetikzlibrary{calc,shapes,fit} \tikzset{vertex/.style={circle, fill, inner sep=1.5pt, outer sep=1.9pt}} \tikzset{cvertex/.style={circle, fill, inner sep=1pt, outer sep=1.3pt, lipicsGray!60}} \tikzset{ccvertex/.style={circle, fill, inner sep=1.3pt, outer sep=1.7pt, lipicsGray!80}} \tikzset{svertex/.style={circle,draw=white, line width=1.2pt,fill=black, inner sep=1.8pt, outer sep=0pt}} \defred{red} \defblue{blue} \defred!50!blue{red!50!blue} \tikzset{edge/.style={very thick}} \tikzset{ccedge/.style={lipicsGray!50}} \tikzset{cedge/.style={lipicsGray!75}} \tikzset{dedge/.style={gray,thick, double=white, double distance=9pt}} \tikzset{sdedge/.style={white,thick, double=black, double distance=1.2pt}} \begin{figure} \caption{Different explosions of the edge $\{u, v\} \label{fig:explosion} \end{figure} First, we establish that for every graph $H$ with at least two vertices, the property ``$H$-free'' is critical. To this end, we rely on neighbour-sharing vertices: Given two vertices $u$ and $v$ of a graph $H$, we say that $u$ and $v$ are \emph{false twins} if they have the same set of adjacent vertices. Note that, in particular, false twins cannot be adjacent as we consider graphs without self-loops. Furthermore, for a graph $H$, we define the partition $P(H)$ by adding two vertices to the same block if and only if they are false twins. Finally, we define the graph $H\!\!\downarrow$ by identifying all vertices in $H$ with the block of $P(H)$ they belong to, that is, the vertices of $H\!\!\downarrow$ are the blocks of $P(H)$ and two blocks $B$ and $B'$ are adjacent if there are vertices $v\in B$ and $v'\in B'$ such that $\{v,v'\}\in E(H)$. \begin{lemma}\label{lem:hfree_critical} Let $H$ denote a graph with at least $2$ vertices and let $\Phi$ denote a hereditary graph property such that~$\Gamma(\Phi)=\{H\}$. Then $\Phi$ is critical. \end{lemma} \begin{proof} We show that either $\Phi$ or $\overline{\Phi}$ has a critical edge. As $\Gamma(\Phi)=\{H\}$, we need to prove that at least one of $H$ and $\overline{H}$ has a critical edge. Let us start with the following claim. \begin{claim}\label{clm:critical_singleton} Let $F$ denote a graph with an edge $\{u,v\}$ such that $\{u\}$ and $\{v\}$ are singleton sets in the partition $P(F)$. Then $\{u,v\}$ is a critical edge of $F$. 
\end{claim} \begin{claimproof} Suppose there are integers $x,y\in \mathbb{N}_{\geq 0}$ such that there is an induced subgraph $F'$ of $F^{x,y}_{u,v}$ that is isomorphic to $F$. Then there is a natural bijection between the blocks of $F$ and the blocks of $F^{x,y}_{u,v}$ sending the class of each vertex not equal to $u,v$ to themselves, sending the class $\{u\}$ to the class of its $x$ clones and similar for $v$. But note that the graph $F^{x,y}_{u,v}\!\!\downarrow$ has one fewer edge than $F\!\!\downarrow$ (since the previous edge $\{u,v\}$ was removed). However, the induced subgraph $F'$ of $F^{x,y}_{u,v}$ satisfies that $F'\!\!\downarrow$ is a subgraph of $F^{x,y}_{u,v}\!\!\downarrow$ and thus also has at least one fewer edges than $F\!\!\downarrow$, a contradiction to $F'$ being isomorphic to $F$. \end{claimproof} Using the previous claim, it suffices to show that there are two vertices $u$ and $v$ such that $\{u,v\}$ is an edge and $\{u\}$ and $\{v\}$ are singletons in either one of $H$ and $\overline{H}$. We first show that for every vertex $z\in V(H)=V(\overline{H})$, the set $\{z\}$ is either a singleton in $P(H)$ or in $P(\overline{H})$. To this end, assume that $z$ has a false twin $z'$ in $H$ and a false twin $z''$ in $\overline{H}$. Consequently, $\{z,z'\}\notin E(H)$ and $\{z,z''\}\notin E(\overline{H})$, and thus $\{z,z'\} \in E(\overline{H})$ and $\{z,z''\}\in E(H)$. Now, as $z$ and $z'$ are false twins in $H$ and $\{z,z''\}\in E(H)$, we see that $\{z', z''\} \in E(H)$. However, as $z$ and $z''$ are false twins in $\overline{H}$ and $\{z,z'\} \in E(\overline{H})$, we see that $\{z', z''\} \in E(\overline{H})$ as well, which leads to the desired contradiction. Now assume without loss of generality that $H$ has at least one edge; otherwise we consider $\overline{H}$. If there are false twins $u$ and $v$ in $H$, then, by the previous argument, $\{u\}$ and $\{v\}$ are singletons in $P(\overline{H})$ and $u$ and $v$ are adjacent in $\overline{H}$. By \cref{clm:critical_singleton}, we obtain a critical edge of $\overline{H}$. If there are no false twins in~$H$, we can choose an arbitrary edge of~$H$ which is then again critical by \cref{clm:critical_singleton}. This concludes the proof.\footnote{For your amusement: if you color all vertices red that are singletons in $H$ and color all vertices blue that are singletons only in $\overline H$, then the fact that the Ramsey number $R(2)$ is $3$ shows that for $H$ having at least $3$ vertices, we find a critical edge in $H$ or $\overline{H}$.} \end{proof} We have shown that every hereditary property defined by a single (non-trivial) forbidden induced subgraph is critical. Let us now provide some examples of critical hereditary properties that are defined by multiple forbidden induced subgraphs: \begin{enumerate} \item $\Phi(H) = 1 :\Leftrightarrow H$ is perfect. A graph $H$ is \emph{perfect} if for every induced subgraph of $H$, the size of the largest clique equals the chromatic number. By the Strong Perfect Graph Theorem~\cite{ChudnovskyRST06}, we have that~$\Gamma(\Phi)$ is the set of all odd cycles of length at least $5$ and their complements. Now observe that every edge of the cycle of length $5$ is critical for $\Phi$ as the exploded graph is bipartite and thus perfect. \item $\Phi(H) = 1 :\Leftrightarrow H$ is chordal. A graph is \emph{chordal} if it does not contain an induced cycle of length $4$ or more. 
Consequently, we can choose an arbitrary edge of the cycle of length $4$ as a critical edge for $\Phi$, as the resulting exploded graphs do not contain any cycle. \item $\Phi(H) = 1 :\Leftrightarrow H$ is a split graph. A \emph{split graph} is a graph whose vertices can be partitioned into a clique and an independent set. It is known~\cite{FoldesH77} that $\Gamma(\Phi)$ contains the cycles of length $4$ and $5$, and the complement of the cycle of length $4$. The latter is the graph consisting of two disjoint edges, and it is easy to see that either of those two edges is critical for $\Phi$. \end{enumerate} We now establish hardness of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ for critical hereditary properties. We start with the following lemma, which constructs a (tight) parameterized Turing-reduction from counting independent sets of size~$k$ in bipartite graphs. \begin{lemma}\label{lem:reduction_hereditary} Let $\Phi$ denote a computable and critical hereditary graph property. There is an algorithm~$\mathbb{A}$ with oracle access to $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ that expects as input a bipartite graph $G$ and a positive integer $k$, and computes the number of independent sets of size $k$ in $G$ in time $O(|G|)$. Furthermore, the number of calls to the oracle is bounded by $O(1)$ and every queried pair~$(\hat{G},\hat{k})$ satisfies $|V(\hat{G})| \in O(|V(G)|)$ and $\hat{k} \in O(k)$. \end{lemma} \begin{proof} Assume without loss of generality that $\Phi$ has a critical edge; otherwise we use \cref{fac:invariance} and proceed with $\overline{\Phi}$. Hence choose $H\in \Gamma(\Phi)$ and $e=\{u,v\} \in E(H)$ such that $\#\ensuremath{\mathsf{IndSub}}s{\hat{H}}{H_{u,v}^{x,y}} = 0$ for every $\hat{H}\in \Gamma(\Phi)$ and for all non-negative integers $x,y$. Now let $G=(U\dot\cup V, E)$ and $k$ denote the given input. If $U$ or $V$ is empty, then we can trivially compute the number of independent sets of size $k$. Hence, we have $U=\{u_1,\dots,u_{n_1}\}$ and $V=\{v_1,\dots,v_{n_2}\}$ for some integers $n_1,n_2 >0$. Now, we proceed as follows: In the first step, we construct the graph $H_{u,v}^{n_1,n_2}$. In the next step, we identify $u$ and its $n_1-1$ clones with the vertices of $U$, and $v$ and its $n_2-1$ clones with the vertices of $V$. Finally, we add the edges $E$ of $G$. We call the resulting graph $\hat{G}$ and we observe that $\hat{G}$ can clearly be constructed in time $O(|G|)$; note that $|H|$ is a constant as $\Phi$ is fixed. Consult \cref{fig:ghat} for a visualization of the construction. \begin{figure} \caption{The construction of $\hat{G}$.}\label{fig:ghat} \end{figure} Note that the construction induces a partition of the vertices of $\hat{G}$ into three sets: \[V(\hat{G}) = R ~\dot\cup~ U ~\dot\cup~ V,\] where $R = V(H)\setminus \{u,v\}$; we set $r:= |R|$. Now define \[\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R] := \{ F \in \ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}} ~|~ R\subseteq V(F) \},\] that is, $\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R]$ is the set of all induced subgraphs $F$ of size $k+r$ in $\hat{G}$ that satisfy $\Phi$ and that contain all vertices in $R$. Next, we show that the cardinality of $\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R]$ reveals the number of independent sets of size $k$ in $G$. \begin{claim} Let $\mathsf{IS}_k$ denote the set of independent sets of size $k$ in $G$. We have \[\#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R] =\#\mathsf{IS}_k. 
\] \end{claim} \begin{claimproof} Let $b$ denote the function that maps a graph $F \in \ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R]$ to a $k$-vertex subset of~$G$ given by \[b(F) := V(F) \cap (U ~\dot\cup ~V).\] We show that $\mathsf{im}(b) = \mathsf{IS}_k$. ``$\ensuremath{\mathsf{Sub}}seteq$'': Fix a graph $F \in \ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R]$ and write $b(F)=U' ~\dot\cup~V'$ where $U' \ensuremath{\mathsf{Sub}}seteq U$ and $V' \ensuremath{\mathsf{Sub}}seteq V$. As $|V(F)| = k+r$, as well as $V(F) = R~\dot\cup~ U' ~\dot\cup ~V'$, and $|R|=r$, we see that $|b(F)|=k$. Now assume that $b(F)$ is not an independent set, that is, there is an edge $(u,v)\in U' \times V'$ in $F$. Observing that the induced subgraph $F[R\cup \{u,v\}]$ of $F$ is isomorphic to $H$, and that $H$ is a forbidden induced subgraph of the property $\Phi$, yields the desired contradiction. ``$\supseteq$'': Let $U' ~\dot\cup ~V'$ denote an independent set of size $k$ of $G$ with $|U'|=k_1$ and $|V'|=k_2$; note that $k_1$ or $k_2$ might be zero. Let $F$ denote the induced subgraph of $\hat{G}$ with vertices $U' ~\dot\cup ~V' ~\dot\cup~ R$. Then $F$ has $k+r$ vertices and is isomorphic to $H_{u,v}^{k_1,k_2}$. Suppose $F$ does not satisfy $\Phi$. Then $F$ has an induced subgraph isomorphic to a graph in $\Gamma(\Phi)$. However, this is impossible as $\ensuremath{\mathsf{IndSub}}s{\hat{H}}{H_{u,v}^{x,y}} = \emptyset$ for all $\hat{H}\in \Gamma(\Phi)$ and non-negative integers $x,y$. This shows that $b$ is a surjective function from $\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R]$ to $\mathsf{IS}_k$. Furthermore, injectivity is immediate by the definition of $b$ as $V(F)\setminus (U~\dot\cup ~V) =R$ for every $F \in \ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R]$, which proves the claim. \end{claimproof} It hence remains to show how our algorithm $\mathbb{A}$ can compute the cardinality of $\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R]$. \begin{claim} Write $R=\{z_1,\dots,z_r\}$. We see that \[ \#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R] = \sum_{J \ensuremath{\mathsf{Sub}}seteq[r]} (-1)^{|J|} \cdot \#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}\setminus J}, \] where $\hat{G}\setminus J$ is the graph obtained from $\hat{G}$ by deleting all vertices $z_i$ with $i\in J$. 
\end{claim} \begin{claimproof} Using the principle of inclusion and exclusion, we obtain that \begin{align*} &\#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}}[R]\\ &\quad= \#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}} - \#\{F \in \ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}} ~|~ \exists i \in [r]: z_i \notin V(F) \} \\ &\quad= \#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}} - \left|\bigcup_{i=1}^r \{F \in \ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}} ~|~ z_i \notin V(F) \} \right|\\ &\quad= \#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}} - \sum_{\emptyset \neq J \ensuremath{\mathsf{Sub}}seteq [r]} (-1)^{|J|+1} \left| \bigcap_{i \in J} \{F \in \ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}} ~|~ z_i \notin V(F) \}\right| \\ &\quad= \#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}} - \sum_{\emptyset \neq J \ensuremath{\mathsf{Sub}}seteq [r]} (-1)^{|J|+1} \# \{F \in \ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}} ~|~ \forall i \in J: z_i \notin V(F) \} \\ &\quad= \#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}} - \sum_{\emptyset \neq J \ensuremath{\mathsf{Sub}}seteq [r]} (-1)^{|J|+1} \#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}\setminus J} \\ &\quad=\sum_{J \ensuremath{\mathsf{Sub}}seteq[r]} (-1)^{|J|} \cdot \#\ensuremath{\mathsf{IndSub}}s{\Phi,k+r}{\hat{G}\setminus J}\,. \end{align*} \end{claimproof} Consequently, the algorithm $\mathbb{A}$ requires linear time in $|G|$ and $2^r\in O(1)$ oracle calls, each query of the form $(\hat{G}\setminus J, k+r)$, to compute the number of independent sets of size $k$ in $G$ as shown in the previous claims. In particular, $|V(\hat{G}\setminus J)| \in O(|V(G)|)$ for each $J \ensuremath{\mathsf{Sub}}seteq [r]$ and $k+r \in O(k)$; this completes the proof. \end{proof} \begin{theorem}\label{thm:critical_hardness} Let $\Phi$ denote a computable and critical hereditary graph property. Then $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$ is $\#\W{1}$-complete and cannot be solved in time \[g(k)\cdot |V(G)|^{o(k)}\] for any function $g$, unless ETH fails. The same is true for the problem $\#\ensuremath{\mathsf{IndSub}}sprob(\neg\Phi)$. \end{theorem} \begin{proof} It is known that counting independent sets of size $k$ in bipartite graphs is $\#\W{1}$-hard~\cite{CurticapeanDFGL19} and cannot be solved in time $g(k)\cdot |V(G)|^{o(k)}$ for any function $g$, unless ETH fails~\cite{DorflerRSW19}. The theorem thus follows by \cref{lem:reduction_hereditary}. \end{proof} As a particular consequence we establish a complete classification for the properties of being $H$-free, including for instance claw-free graphs and co-graphs~\cite{Seinsche74}. \hermn* \begin{proof} Holds by \cref{lem:hfree_critical,thm:critical_hardness}. \end{proof} Finally, note that the previous result is in sharp contrast to the result of Khot and Raman~\cite{KhotR02} concerning the decision version of $\#\ensuremath{\mathsf{IndSub}}sprob(\Phi)$: Their result implies that \emph{finding} an induced subgraph of size $k$ in a graph $G$ that satisfies a hereditary property $\Phi$ with $\Gamma(\Phi)=\{H\}$ for some graph $H$ can be done in time $f(k)\cdot|V(G)|^{O(1)}$ for some function $f$ if $H$ is neither a clique nor an independent set of size at least $2$. \end{document}
\begin{document} \begin{abstract} We prove the existence of a generalized solution for incompressible and viscous non-Newtonian two-phase fluid flow in spatial dimensions 2 and 3. The phase boundary moves along with the fluid flow plus its mean curvature while exerting surface tension force on the fluid. An approximation scheme combining the Galerkin method and the phase field method is adopted. \end{abstract} \maketitle \makeatletter \@addtoreset{equation}{section} \renewcommand{\theequation}{\thesection.\@arabic\c@equation} \makeatother \section{Introduction} \quad In this paper we prove existence results for a problem on incompressible viscous two-phase fluid flow in the torus $\Omega={\mathbb T}^d=({\mathbb R}/{\mathbb Z})^d$, $d=2,\,3$. A freely moving $(d-1)$-dimensional phase boundary $\Gamma(t)$ separates the domain $\Omega$ into two domains $\Omega^+(t)$ and $\Omega^-(t)$, $t\geq 0$. The fluid flow is described by means of the velocity field $u:\Omega\times [0,\infty)\rightarrow {\mathbb R}^d$ and the pressure $\Pi:\Omega\times [0,\infty)\rightarrow \mathbb R$. We assume the stress tensor of the fluids is of the form $T^{\pm}(u,\Pi)=\tau^{\pm}(e(u))-\Pi\, I$ on $\Omega^{\pm}(t)$, respectively. Here $e(u)$ is the symmetric part of the velocity gradient $\nabla u$, i.e. $e(u)=(\nabla u+\nabla u^T)/2$, and $I$ is the $d\times d$ identity matrix. Let $\mathbb{S}(d)$ be the set of $d\times d$ symmetric matrices. We assume that the functions $\tau^{\pm}:\mathbb{S}(d)\rightarrow\mathbb{S}(d)$ are locally Lipschitz and satisfy, for some $\nu_0>0$ and $p>\frac{d+2}{2}$ and for all $s,\,\hat{s}\in \mathbb{S}(d)$, \begin{equation} \nu_0 |s|^p \leq \tau^{\pm}(s):s\leq \nu_0^{-1}(1+|s|^p),\label{taucond1} \end{equation} \begin{equation} |\tau^{\pm}(s)|\leq \nu_0^{-1}(1+|s|^{p-1}),\label{taucond2} \end{equation} \begin{equation} (\tau^{\pm}(s)-\tau^{\pm}(\hat{s})):(s-\hat{s})\geq 0.\label{taucond3} \end{equation} Here we define $A:B={\rm tr}(AB)$ for $d\times d$ matrices $A,\, B$. A typical example is $\tau^{\pm}(s)=(a^{\pm}+b^{\pm}|s|^2)^{\frac{p-2}{2}}s$ with $a^{\pm}>0$ and $b^{\pm}>0$; a small numerical check of \eqref{taucond1} and \eqref{taucond2} for this example is included below. We assume that the velocity field $u(x,t)$ satisfies the following non-Newtonian fluid flow equations: \begin{eqnarray} \frac{\partial u}{\partial t}+u\cdot\nabla u ={\rm div}\,(T^+(u,\Pi)),\hspace{.5cm}{\rm div}\, u=0 &\quad & {\rm on} \ \Omega^+(t), \ t> 0,\label{main1}\\ \frac{\partial u}{\partial t}+u\cdot\nabla u ={\rm div}\,(T^-(u,\Pi)),\hspace{.5cm}{\rm div}\, u=0 &\quad & {\rm on} \ \Omega^-(t), \ t> 0,\label{main2}\\ u^+= u^-,\hspace{.5cm}n\cdot (T^+(u,\Pi)-T^-(u,\Pi))= \kappa_1 H &\quad & {\rm on} \ \Gamma(t), \ t> 0.\qquad \qquad \label{main3} \end{eqnarray} The superscript $\pm$ in \eqref{main3} indicates the limiting values approaching $\Gamma(t)$ from $\Omega^{\pm}(t)$, respectively, $n$ is the unit outer normal vector of $\partial\Omega^+(t)$, $H$ is the mean curvature vector of $\Gamma(t)$ and $\kappa_1>0$ is a constant. The condition \eqref{main3} represents the force balance with an isotropic surface tension effect of the phase boundary. 
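As a quick numerical illustration of the structure conditions (not used anywhere in the proofs), the following Python snippet evaluates the power-law example above and checks \eqref{taucond1} and \eqref{taucond2} on a few random symmetric matrices; the particular choices $a=b=1$, $p=3$, $d=3$ and $\nu_0=10^{-1}$ are ours.
\begin{verbatim}
import numpy as np

def tau(s, a=1.0, b=1.0, p=3.0):
    # Power-law stress tau(s) = (a + b|s|^2)^{(p-2)/2} s for a
    # symmetric d x d matrix s, with |s|^2 = s:s.
    return (a + b * np.sum(s * s)) ** ((p - 2.0) / 2.0) * s

nu0 = 0.1
rng = np.random.default_rng(0)
for _ in range(5):
    m = rng.standard_normal((3, 3))
    s = 0.5 * (m + m.T)                 # symmetrize
    norm = np.sqrt(np.sum(s * s))
    t = tau(s)
    coercive = np.sum(t * s) >= nu0 * norm**3                    # tau(s):s >= nu0 |s|^p
    growth = np.sqrt(np.sum(t * t)) <= (1 / nu0) * (1 + norm**2)  # |tau(s)| <= nu0^{-1}(1+|s|^{p-1})
    print(coercive, growth)
\end{verbatim}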
The boundary $\Gamma(t)$ is assumed to move with the velocity given by \begin{equation} V_{\Gamma}=(u\cdot n)n+\kappa_2 H \hspace{.5cm}{\rm on} \quad\Gamma(t),\quad t> 0, \label{velocity} \end{equation} where $\kappa_2>0$ is a constant. This differs from the conventional kinematic condition ($\kappa_2=0$) and is motivated by the phase boundary motion with hydrodynamic interaction. The reader is referred to \cite{Liu} and the references therein for the relevant physical background. By setting $\varphi=1$ on $\Omega^+(t)$, $\varphi=-1$ on $\Omega^-(t)$ and \begin{equation*} \tau(\varphi,e(u))=\frac{1+\varphi}{2}\tau^+(e(u))+\frac{1-\varphi}{2} \tau^-(e(u)) \end{equation*} on $\Omega^+(t)\cup\Omega^-(t)$, the equations \eqref{main1}-\eqref{main3} are expressed in the distributional sense as \begin{equation} \begin{split} \frac{\partial u}{\partial t}+u\cdot\nabla u &={\rm div}\,\tau(\varphi,e(u)) -\nabla \Pi +\kappa_1 H\mathcal{H}^{d-1}\lfloor_{\Gamma(t)} \hspace{.5cm} {\rm on} \ \Omega\times (0,\infty), \label{nsdist}\\ {\rm div}\, u&=0 \hspace{.5cm} {\rm on} \ \Omega\times (0,\infty), \end{split} \end{equation} where $\mathcal{H}^{d-1}$ is the $(d-1)$-dimensional Hausdorff measure. The expression \eqref{nsdist} makes it evident that the phase boundary exerts surface tension force on the fluid wherever $H\neq 0$ on $\Gamma(t)$. Note that if $\Gamma(t)$ is the boundary of a convex domain, the sign of $H$ is taken so that the presence of surface tension tends to accelerate the fluid flow inwards in general. We remark that sufficiently smooth solutions of \eqref{main1}-\eqref{velocity} satisfy the following energy equality, \begin{equation} \frac{d}{dt}\left\{\frac{1}{2}\int_{\Omega}|u|^2\,dx+\kappa_1{\mathcal H}^{d-1}(\Gamma(t))\right\}=-\int_{\Omega}\tau(\varphi,e(u)):e(u)\,dx -\kappa_1\kappa_2\int_{\Gamma(t)}|H|^2\,d{\mathcal H}^{d-1}. \label{energyeq} \end{equation} This follows from the first variation formula for the surface measure \begin{equation} \frac{d}{dt}{\mathcal H}^{d-1}(\Gamma(t))=-\int_{\Gamma(t)} V_{\Gamma}\cdot H\, d{\mathcal H}^{d-1} \label{firstvar} \end{equation} and from the equations \eqref{main1}-\eqref{velocity}; a formal sketch of this computation is included below. The aim of the present paper is to prove the time-global existence of a weak solution for \eqref{main1}-\eqref{velocity} (see Theorem \ref{maintheorem} for the precise statement). We construct the approximate solution via the Galerkin method and the phase field method. Note that it is not even clear for our problem whether the phase boundary may stay a codimension-one object, since an a priori irregular flow field may tear apart or crumble the phase boundary immediately, with the possibility of developing singularities and fine-scale complexities. Even if we set the initial datum to be sufficiently regular, the eventual occurrence of singularities of the phase boundary or the flow field may not be avoided in general. To accommodate the presence of singularities of the phase boundary, we use the notion of varifolds from geometric measure theory. In establishing \eqref{velocity} we adopt the formulation due to Brakke \cite{Brakke}, where he proved the existence of varifolds moving by mean curvature. 
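For the reader's convenience, the formal computation behind \eqref{energyeq} (assuming enough smoothness; this sketch is not used later) is as follows: testing \eqref{nsdist} with $u$ and using ${\rm div}\,u=0$ together with the periodicity of $\Omega$ gives the first identity below, while inserting \eqref{velocity} into \eqref{firstvar} and using that $H$ is normal to $\Gamma(t)$ gives the second; adding the two yields \eqref{energyeq}.
\begin{align*}
\frac{d}{dt}\frac{1}{2}\int_{\Omega}|u|^2\,dx &= -\int_{\Omega}\tau(\varphi,e(u)):e(u)\,dx+\kappa_1\int_{\Gamma(t)}u\cdot H\,d\mathcal{H}^{d-1},\\
\kappa_1\frac{d}{dt}\mathcal{H}^{d-1}(\Gamma(t)) &= -\kappa_1\int_{\Gamma(t)}\big((u\cdot n)n+\kappa_2 H\big)\cdot H\,d\mathcal{H}^{d-1} = -\kappa_1\int_{\Gamma(t)}u\cdot H\,d\mathcal{H}^{d-1}-\kappa_1\kappa_2\int_{\Gamma(t)}|H|^2\,d\mathcal{H}^{d-1}.
\end{align*}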
We have the extra transport effect $(u\cdot n)n$ which is not very regular in the present problem. Typically we would only have $u\in L^p_{loc}([0,\infty);W^{1,p}(\Omegamega)^d)$. This poses a serious difficulty in modifying Brakke's original construction in \cite{Brakke} which is already intricate and involved. Instead we take advantage of the recent progress on the understanding on the Allen-Cahn equation with transport term to approximate the motion law \varepsilonqref{velocity}, \[\varphirac{\partial\varphi}{\partial t}+u\cdot\nabla\varphi=\kappaappa_2\lambdaeft(\mathcal{D}elta\varphi-\varphirac{W'(\varphi)}{\varepsilon^2}\right).\hspace{1cm}{\rm (ACT)} \] Here $W$ is the equal depth double-well potential and we set $W(\varphi)=(1-\varphi^2)^2/2$. When $\varepsilon\rightarrow 0$, we have proved in \cite{LST1} that the interface moves according to the velocity \varepsilonqref{velocity} in the sense of Brakke with a suitable regularity assumptions on $u$. To be more precise, we use a regularized version of (ACT) as we present later for the result of \cite{LST1} to be applicable. The result of \cite{LST1} was built upon those of many earlier works, most relevant being \cite{Ilmanen1,Ilmanen2} which analyzed (ACT) with $u=0$, and also \cite{Hutchinson,Tonegawa,Sato,Roeger}. Since the literature of two-phase flow is immense and continues to grow rapidly, we mention results which are closely related or whose aims point to some time-global existence with general initial data. In the case without surface tension $(\kappaappa_1=\kappaappa_2=0)$, Solonnikov \cite{Solonnikov1} proved the time-local existence of classical solution. The time-local existence of weak solution was proved by Solonnikov \cite{Solonnikov2}, Beale \cite{Beale1}, Abels \cite{Abels1}, and others. For time-global existence of weak solution, Beale \cite{Beale2} proved in the case that the initial data is small. Nouri-Poupaud \cite{Nouri} considered the case of multi-phase fluid. Giga-Takahashi \cite{GigaTakahashi} considered the problem within the framework of level set method. When $\kappaappa_1>0$, $\kappaappa_2=0$, Plotnikov \cite{Plotnikov} proved the time-global existence of varifold solution for $d=2$, $p>2$, and Abels \cite{Abels2} proved the time-global existence of measure-valued solution for $d=2, 3$, $p>\varphirac{2d}{d+2}$. When $\kappaappa_1>0$, $\kappaappa_2>0$, Maekawa \cite{Maekawa} proved the time-local existence of classical solution with $p=2$ (Navier-Stokes and Stokes) and for all dimension. Abels-R\"{o}ger \cite{Abels-Roeger} considered a coupled problem of Navier-Stokes and Mullins-Sekerka (instead of motion by mean curvature in the present paper) and proved the existence of weak solutions. As for related phase field approximations of sharp interface model which we adopt in this paper, Liu and Walkington \cite{Liu} considered the case of fluids containing visco-hyperelastic particles. Perhaps the most closely related work to the present paper is that of Mugnai and R\"{o}ger \cite{Mugnai} which studied the identical problem with $p=2$ (linear viscosity case) and $d=2,3$. There they introduced the notion of $L^2$ velocity and showed that \varepsilonqref{velocity} is satisfied in a weak sense different from that of Brakke for the limiting interface. Kim-Consiglieri-Rodrigues \cite{Kim} dealt with a coupling of Cahn-Hilliard and Navier-Stokes equations to describe the flow of non-Newtonian two-phase fluid with phase transitions. 
Soner \cite{Soner} dealt with a coupling of Allen-Cahn and heat equations to approximate the Mullins-Sekerka problem with kinetic undercooling. Soner's work is closely related in that he showed the surface energy density bound which is also essential in the present problem. The organization of this paper is as follows. In Section 2, we summarize the basic notations and main results. In Section 3 we construct a sequence of approximating solutions for the two-phase flow problem. Section 4 describes the result of \cite{LST1} which establishes the upper density ratio bound for surface energy and which proves \varepsilonqref{velocity}. In the last Section 5 we combine the results from Section 3 and 4 and obtain the desired weak solution for the two-phase flow problem. \section{Preliminaries and Main results} \quad For $d\tauimes d$ matrices $A,B$ we denote $A:B={\rm tr}\,(AB)$ and $|A|:=\sqrt{A:A}$. For $a \in \mathbb R^d$, we denote by $a\omegatimes a$ the $d\tauimes d$ matrix with the $i$-th row and $j$-th column entry equal to $a_i a_j$. \subsection{Function spaces} \quad Set $\Omega={\mathbb T}^d$ throughout this paper. We set function spaces for $p>\varphirac{d+2}{2}$ as follows: \betaegin{equation*} \betaegin{split} &{\mathcal V}=\lambdaeft\{v \in C^{\infty}(\Omega)^d\,;\,{\rm div}\,v=0\right\},\\ &{\rm for} \ s\in {\mathbb Z}^+ \cup\{0\}, \ W^{s,p}(\Omega)=\{v \ : \ \nabla ^j v\in L^p(\Omega) \ {\rm for } \ 0\lambdaeq j\lambdaeq s\},\\ &V^{s,p}= {\rm closure \ of} \ {\mathcal V} \ {\rm in \ the} \ W^{s,p}(\Omega)^d{\rm \mathchar`-norm.} \varepsilonnd{split} \varepsilonnd{equation*} We denote the dual space of $V^{s,p}$ by $(V^{s,p})^*$. The $L^2$ inner product is denoted by $(\cdot,\cdot)$. Let $\chi_A$ be the characteristic function of $A$, and let $|\nabla\chi_A|$ be the total variation measure of the distributional derivative $\nabla \chi_A$. \subsection{Varifold notations} \quad We recall some notions from geometric measure theory and refer to \cite{Allard,Brakke,Simon} for more details. A {\it general $k$-varifold} in $\mathbb R^d$ is a Radon measure on $\mathbb R^d\tauimes G(d,k)$, where $G(d,k)$ is the space of $k$-dimensional subspaces in $\mathbb R^d$. We denote the set of all general $k$-varifolds by ${\betaf V}_k(\mathbb R^d)$. When $S$ is a $k$-dimensional subspace, we also use $S$ to denote the orthogonal projection matrix corresponding to $\mathbb R^d\rightarrow S$. The first variation of $V$ can be written as \betaegin{equation*} \delta V(g)=\int_{\mathbb R^d\tauimes G(d,k)}\nabla g(x):S\,dV(x,S) =-\int_{\mathbb R^d}g(x)\cdot H(x)\,d\|V\|(x) \quad {\rm if }\, \|\delta V\|\lambdal \|V\|. \varepsilonnd{equation*} Here $V \in {\betaf V}_k(\mathbb R^d)$, $\|V\|$ is the mass measure of $V$, $g \in C_c^1(\mathbb R^d)^d$, $H=H_V$ is the generalized mean curvature vector if it exists and $\|\delta V\|\lambdal \|V\|$ denotes that $\|\delta V\|$ is absolutely continuous with respect to $\|V\|$. We call a Radon measure $\mu$ {\it $k$-integral} if $\mu$ is represented as $\mu=\tauheta{\mathcal H}^k\lambdafloor_X$, where $X$ is a countably $k$-rectifiable, ${\mathcal H}^k$-measurable set, and $\tauheta \in L^1_{\rm loc}({\mathcal H}^k\lambdafloor_X)$ is positive and integer-valued ${\mathcal H}^k$ a.e on $X$. ${\mathcal H}^k\lambdafloor_X$ denotes the restriction of ${\mathcal H}^k$ to the set $X$. We denote the set of $k$-integral Radon measures by ${\mathcal{IM}}_k$. We say that a $k$-integral varifold is of {\it unit density} if $\tauheta=1$ ${\mathcal H}^k$ a.e. on $X$. 
For each such $k$-integral measure $\mu$ corresponds a unique $k$-varifold $V$ defined by \[\int_{\mathbb R^d\tauimes G(d,k)}\partialhi(x,S)\,dV(x,S)=\int_{\mathbb R^d}\partialhi(x,T_x\mu)\,d\mu(x)\quad {\rm for} \ \partialhi\in C_c(\mathbb R^d\tauimes G(d,k)),\] where $T_x\mu$ is the approximate tangent $k$-plane. Note that $\mu=\|V\|$. We make such identification in the following. For this reason we define $H_{\mu}$ as $H_V$ (or simply $H$) if the latter exists. When $X$ is a $C^2$ submanifold without boundary and $\tauheta$ is constant on $X$, $H$ corresponds to the usual mean curvature vector for $X$. In the following we suitably adopt the above notions on $\Omega={\mathbb T}^d$ such as ${\betaf V}_k(\Omega)$, which present no essential difficulties. \subsection{Weak formulation of free boundary motion} For sufficiently smooth surface $\Gamma(t)$ moving by the velocity \varepsilonqref{velocity}, the following holds for any $\partialhi\in C^2(\Omega;\mathbb R^+)$ due to the first variation formula \varepsilonqref{firstvar}: \betaegin{equation} \varphirac{d}{dt}\int_{\Gamma(t)}\partialhi\, d{\mathcal H}^{d-1}\lambdaeq \int_{\Gamma(t)}(-\partialhi H+\nabla\partialhi)\cdot\{\kappaappa_2 H+(u\cdot n)n\}\, d{\mathcal H}^{d-1}. \lambdaabel{weakvelo} \varepsilonnd{equation} One can check that having this inequality for any $\partialhi\in C^2(\Omega;\mathbb R^+)$ implies \varepsilonqref{velocity} thus \varepsilonqref{weakvelo} is equivalent to \varepsilonqref{velocity}. Such use of non-negative test functions to characterize the motion law is due to Brakke \cite{Brakke} where he developed the theory of varifolds moving by the mean curvature. Here we suitably modify Brakke's approach to incorporate the transport term $u$. To do this we recall \betaegin{thm}{\betaf (Meyers-Ziemer inequality)} For any Radon measure $\mu$ on $\mathbb R^d$with \betaegin{equation*}D=\sup_{r>0,\, x \in {\mathbb R}^d}\varphirac{\mu(B_r(x))}{\omega_{d-1}r^{d-1}}<\infty, \varepsilonnd{equation*} we have \betaegin{equation} \int_{\mathbb R^d}|\partialhi|\,d\mu\lambdaeq c D\int_{\mathbb R^d}|\nabla \partialhi|\,dx \lambdaabel{MZ1} \varepsilonnd{equation} for $\partialhi \in C_c^1(\mathbb R^d)$. Here $c$ depends only on $d$. \lambdaabel{MZ} \varepsilonnd{thm} See \cite{Meyers} and \cite[p.266]{Ziemer}. By localizing \varepsilonqref{MZ1} to $\Omega={\mathbb T}^d$ we obtain (with $r$ in the definition of $D$ above replaced by $0<r<1/2$) \betaegin{equation} \int_{\Omega}|\partialhi|^2\, d\mu\lambdaeq c D \lambdaeft(\|\partialhi\|_{L^2(\Omega)}^2+\|\nabla \partialhi\|_{L^2(\Omega)}^2\right) \lambdaabel{MZ2} \varepsilonnd{equation} where the constant $c$ may be different due to the localization but depends only on $d$. The inequality \varepsilonqref{MZ2} allows us to define $\int_{\Omega}|\partialhi|^2\, d\mu$ for $\partialhi\in W^{1,2}(\Omega)$ by the standard density argument when $D<\infty$. We define for any Radon measure $\mu$, $u\in L^2(\Omega)^d$ and $\partialhi\in C^1(\Omega:\mathbb R^+)$ \betaegin{equation} {\mathcal B}(\mu,\, u,\, \partialhi)=\int_{\Omega} (-\partialhi H+\nabla\partialhi)\cdot\{\kappaappa_2 H+(u\cdot n)n\}\, d\mu \lambdaabel{rhs} \varepsilonnd{equation} if $\mu\in {\mathcal{IM}}_{d-1}(\Omega)$ with generalized mean curvature $H\in L^2(\mu)^d$ and with \betaegin{equation}\sup_{\varphirac12>r>0,\, x \in \Omega} \varphirac{\mu(B_r(x))}{\omega_{d-1}r^{d-1}}<\infty \lambdaabel{den} \varepsilonnd{equation} and $u\in W^{1,2}(\Omega)^d$. 
Due to the definition of ${\mathcal {IM}}_{d-1}(\Omega)$, the unit normal vector $n$ is uniquely defined $\mu$ a.e. on $\Omega$ modulo $\partialm$ sign. Since we have $(u,n)n$ in \varepsilonqref{rhs}, the choice of sign does not affect the definition. The right-hand side of \varepsilonqref{rhs} gives a well-defined finite value due to the stated conditions and \varepsilonqref{MZ2}. If any one of the conditions is not satisfied, we define ${\mathcal B}(\mu,\, u,\, \partialhi)=-\infty$. Next we note \betaegin{prop} For any $0<T<\infty$ and $p>\varphirac{d+2}{2}$, $$\lambdaeft\{u\in L^{p}([0,T];V^{1,p})\,;\,\varphirac{\partial u}{\partial t}\in L^{\varphirac{p}{p-1}}([0,T]; (V^{1,p})^*)\right\}\hookrightarrow C([0,T];\, V^{0,2}).$$ \lambdaabel{embed} \varepsilonnd{prop} The Sobolev embedding gives $V^{1,p} \hookrightarrow V^{0,2}$ for such $p$ and we may apply \cite[p. 35, Lemma 2.45]{Malek} to obtain the above embedding. Indeed, we only need $p>\varphirac{2d}{d+2}$ for Proposition \ref{embed} to be and we have $\varphirac{d+2}{2}>\varphirac{2d}{d+2}$. Thus for this class of $u$ we may define $u(\cdot, t)\in V^{0,2}$ for all $t\in [0,T]$ instead of a.e. $t$ and we may tacitly assume that we redefine $u$ in this way for all $t$. For $\{\mu_t\}_{t\in [0,\infty)}$, $u\in L^p_{loc}([0,\infty);V^{1,p})$ with $\varphirac{\partial u}{\partial t}\in L^{\varphirac{p}{p-1}}_{loc} ([0,\infty); (V^{1,p})^*)$ for $p>\varphirac{d+2}{2}$ and $\partialhi\in C^1(\Omega;\mathbb R^+)$, we define ${\mathcal B}(\mu_t,\, u(\cdot,t),\, \partialhi)$ as in \varepsilonqref{rhs} for all $t\geq 0$. \subsection{The main results} Our main results are the following. \betaegin{thm} Let $d=2$ or $3$ and $p>\varphirac{d+2}{2}$. Let $\Omega={\mathbb T}^d$. Assume that locally Lipschitz functions $\tauau^{\partialm}:\mathbb{S}(d)\rightarrow\mathbb{S}(d)$ satisfy \varepsilonqref{taucond1}-\varepsilonqref{taucond3}. For any initial data $u_0\in V^{0,2}$ and $\Omega^+(0)\subset\Omega$ having $C^1$ boundary $\partialartial\Omega^+(0)$, there exist \betaegin{enumerate} \item[(a)] $u \in L^{\infty}([0,\infty);V^{0,2})\cap L^p_{loc}([0,\infty);V^{1,p})$ with $\varphirac{\partial u}{\partial t}\in L^{\varphirac{p}{p-1}}_{loc}([0,\infty);(V^{1,p})^*)$, \item[(b)] a family of Radon measures $\{\mu_t\}_{t\in [0,\infty)}$ with $\mu_t\in {\mathcal{IM}}_{d-1}$ for a.e. $t\in [0,\infty)$ and \item[(c)] $\varphi \in BV_{loc}(\Omega\tauimes [0,\infty)) \cap L^{\infty}([0,\infty);BV(\Omega)) \cap C^{\varphirac{1}{2}}_{loc}([0,\infty);L^1(\Omega))$ \varepsilonnd{enumerate} such that the following properties hold: \betaegin{enumerate} \item[(i)] The triplet $(u(\cdot,t),\, \varphi(\cdot,t),\,\mu_t)_{t\in [0,\infty)}$ is a weak solution of \varepsilonqref{nsdist}. More precisely, for any $T>0$ we have \betaegin{equation} \int_0^T \int_{\Omega}-u\cdot \varphirac{\partial v}{\partial t}+(u\cdot\nabla u)\cdot v+\tauau(\varphi,e(u)):e(v)\,dxdt =\int_{\Omega}u_0\cdot v(0)\,dx+\int_0^T\int_{\Omega}\kappaappa_1 H\cdot v \, d\mu_t dt \lambdaabel{maintheorem1} \varepsilonnd{equation} for any $v \in C^{\infty}([0,T];{\mathcal V})$ such that $v(T)=0$. Here $H\in L^2([0,\infty);L^2(\mu_t)^d)$ is the generalized mean curvature vector corresponding to $\mu_t$. 
\item[(ii)] The triplet $(u(\cdot,t),\, \varphi(\cdot,t),\,\mu_t)_{t\in [0,\infty)}$ satisfies the energy inequality \begin{equation} \begin{split} \frac12\int_{\Omega}|u(\cdot,T)|^2\,dx+\kappa_1\mu_T(\Omega)&+\int_0^T\int_{\Omega} \tau(\varphi,e(u)):e(u)\, dxdt+\kappa_1\kappa_2\int_0^T\int_{\Omega}|H|^2\, d\mu_t dt\\ &\leq \frac12\int_{\Omega}|u_0|^2\,dx+\kappa_1{\mathcal H}^{d-1}(\partial \Omega^+(0)) =: E_0\end{split} \label{eneineq} \end{equation} for all $T<\infty$. \item[(iii)] For all $0\leq t_1<t_2< \infty$ and $\phi\in C^2(\Omega;\mathbb R^+)$ we have \begin{equation} \mu_{t_2}(\phi)-\mu_{t_1}(\phi)\leq \int_{t_1}^{t_2}{\mathcal B}(\mu_t,\, u(\cdot,t),\, \phi)\, dt. \label{maintheorem3} \end{equation} Moreover, ${\mathcal B}(\mu_t,\, u(\cdot,t),\, \phi)\in L^{1}_{loc}([0,\infty))$. \item[(iv)] We set $D_0=\sup_{0<r<1/2,\, x\in \Omega}\frac{{\mathcal H}^{d-1} (\partial\Omega^+(0)\cap B_r(x))}{\omega_{d-1}r^{d-1}}$. For any $0<T<\infty$, there exists a constant $D=D(E_0,D_0,T,p,\nu_0,\kappa_1,\kappa_2)$ such that $$\sup_{0<r<1/2,\,x\in \Omega}\frac{\mu_t(B_r(x))}{\omega_{d-1}r^{d-1}} \leq D$$ for all $t\in [0,T]$. \item[(v)] The function $\varphi$ satisfies the following properties.\\ \ (1) $\varphi=\pm 1$ {\rm a.e.\ on} $\Omega$ for all $t\in [0,\infty)$.\\ \ (2) $\varphi(x,0)=\chi_{\Omega^+(0)}-\chi_{\Omega\setminus\Omega^+(0)}$ {\rm a.e.\ on} $\Omega$.\\ \ (3) ${\rm spt}|\nabla\chi_{\{\varphi(\cdot,t)=1\}}| \subset{\rm spt}\,\mu_t$ for all $t\in [0,\infty)$. \item[(vi)] There exists \[T_1=T_1(E_0,D_0,p,\nu_0,\kappa_1,\kappa_2)>0\] such that $\mu_t$ is of unit density for a.e.\ $t\in [0,T_1]$. In addition, $|\nabla\chi_{\{\varphi(\cdot,t)=1\}}|=\mu_t$ for a.e.\ $t\in [0,T_1]$. \end{enumerate} \label{maintheorem} \end{thm} \begin{rem} Somewhat differently from the $u=0$ case, we do not expect that \begin{equation} \limsup_{\Delta t\rightarrow 0}\frac{\mu_{t+\Delta t}(\phi) -\mu_t(\phi)}{\Delta t}\leq {\mathcal B}(\mu_t,\, u(\cdot,t),\phi) \label{ve1} \end{equation} holds for all $t\geq 0$ and $\phi \in C^2(\Omega; \mathbb R^+)$ in general. While we know that the right-hand side is $<\infty$ (by definition) for all $t$, we do not know in general whether the left-hand side is $<\infty$. One may even expect that at a time when $\int_{\Omega}|\nabla u(\cdot,t)|^p\,dx=\infty$, it may be $\infty$. Thus we may need to define \eqref{velocity} in the integral form \eqref{maintheorem3} for the definition of Brakke's flow. Note that in the case $u=0$, one can show that the left-hand side of \eqref{ve1} is $<\infty$ for all $t\geq 0$ (see \cite{Brakke}). \end{rem} \begin{rem} The difficulty of multiplicities has often been encountered in measure-theoretic settings like ours. The varifold solutions constructed by Brakke \cite{Brakke} have the same properties in this regard. On the other hand, (vi) says that there is no `folding', at least for some initial time interval $[0,T_1]$. \end{rem} \begin{rem} In the following we set $\kappa_1=\kappa_2=1$ for notational simplicity, while the whole argument can be carried out with any positive $\kappa_1$ and $\kappa_2$ with no essential differences.
On the other hand, the positivity of these constants plays an essential role: most of the estimates and claims deteriorate as $\kappa_1,\, \kappa_2\rightarrow 0$ and fail in the limit. How severely they fail in the limit may be of independent interest, which we do not pursue in the present paper. Note that the $\kappa_2=0$ limit should correspond precisely to the setting of Plotnikov \cite{Plotnikov} for $d=2$. \end{rem} We use the following theorem. See \cite[p.196]{Malek} and the references therein. \begin{thm}{\bf(Korn's inequality)} Let $1<p<\infty$. Then there exists a constant $c_K=c(p,d)$ such that \[\|v\|_{W^{1,p}(\Omega)}^p\leq c_K (\|e(v)\|_{L^p(\Omega)}^p+\|v\|^p_{L^1(\Omega)})\] holds for all $v \in W^{1,p}(\Omega)^d$. \label{Korn} \end{thm} \section{Existence of approximate solution} \quad In this section we construct a sequence of approximate solutions of \eqref{main1}-\eqref{velocity} by the Galerkin method and the phase field method. The proof is a suitable modification of \cite{LinLiu} to the non-Newtonian setting, although we also need to incorporate a suitable smoothing of the interaction terms. First we prepare a few definitions. We fix a sequence $\{\varepsilon_i\}_{i=1}^{\infty}$ with $\lim_{i\rightarrow\infty} \varepsilon_i=0$ and fix a radially symmetric non-negative function $\zeta\in C^{\infty}_c(\mathbb R^d)$ with ${\rm spt}\, \zeta\subset B_1(0)$ and $\int\zeta\, dx=1$. For a fixed $0<\gamma<\frac12$ we define \begin{equation} \zeta^{\varepsilon_i}(x)=\frac{1}{\varepsilon_i^{\gamma}}\zeta\left(\frac{x} {\varepsilon_i^{\gamma/d}}\right). \label{zeta} \end{equation} We defined $\zeta^{\varepsilon_i}$ so that $\int \zeta^{\varepsilon_i}\, dx=1$, $|\zeta^{\varepsilon_i}|\leq c(d)\varepsilon_i^{-\gamma}$ and $|\nabla\zeta^{\varepsilon_i}| \leq c(d)\varepsilon_i^{-\gamma-\gamma/d}$. For given initial data $\Omega^+(0)\subset\Omega$ with $C^1$ boundary $\partial \Omega^+(0)$, we can approximate $\Omega^+(0)$ in the $C^1$ topology by a sequence of domains $\Omega^{i+}(0)$ with $C^3$ boundaries. Let $d^{i}(x)$ be the signed distance function to $\partial \Omega^{i+}(0)$, so that $d^{i}(x)>0$ on $\Omega^{i+}(0)$ and $d^{i}(x)<0$ on $\Omega^{i-}(0)$. Choose $b^{i}>0$ so that $d^{i}$ is a $C^3$ function on the $b^{i}$-neighborhood of $\partial\Omega^{i+}(0)$. Now we associate $\{\varepsilon_i\}_{i=1}^{\infty}$ with $\Omega^{i+}(0)$ by re-labeling the index if necessary so that $\lim_{i\rightarrow\infty}\varepsilon_i/b^i=0$ and $\lim_{i\rightarrow\infty}\varepsilon_i^{j-1}|\nabla^j d^i|=0$ for $j=2,\, 3$ on the $b^{i}$-neighborhood of $\partial\Omega^{i+}(0)$. Let $h\in C^{\infty}(\mathbb R)$ be a monotone increasing function with $h(s)=s$ for $0\leq s\leq 1/4$ and $h(s)= 1/2$ for $1/2<s$, and define $h(-s)=-h(s)$ for $s<0$. Then define \begin{equation} \varphi_0^{\varepsilon_i}(x)=\tanh(b^i h(d^i(x)/b^i)/\varepsilon_i). \label{tanh} \end{equation} Note that $\varphi_0^{\varepsilon_i}\in C^3(\Omega)$ and that $\varepsilon_i^j|\nabla^j\varphi_0^{\varepsilon_i}|$ for $j=1,\, 2,\, 3$ are bounded uniformly in $i$.
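For instance, for the standard double-well potential $W(s)=\frac{(1-s^2)^2}{2}$ (taken here only as a concrete example; the argument requires only the standing assumptions on $W$), the one-dimensional optimal profile $q$ determined by the equipartition relation $q'=\sqrt{2W(q)}=1-q^2$, $q(\pm\infty)=\pm 1$, is $q(s)=\tanh s$, which motivates the choice \eqref{tanh}, and in this case the constant $\sigma=\int_{-1}^{+1}\sqrt{2W(s)}\, ds$ appearing below equals \[\int_{-1}^{+1}(1-s^2)\, ds=\frac{4}{3}.\]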
The well-known properties of the phase field approximation show that \begin{equation} \lim_{i\rightarrow \infty}\|\varphi_0^{\varepsilon_i}-(\chi_{\Omega^+(0)}-\chi_{\Omega^-(0)})\|_{L^1(\Omega)}=0,\hspace{.5cm} \frac{1}{\sigma}\left(\frac{\varepsilon_i|\nabla\varphi_0^{\varepsilon_i}|^2}{2} +\frac{W(\varphi_0^{\varepsilon_i})}{\varepsilon_i}\right)\, dx\rightarrow {\mathcal H}^{d-1}\lfloor_{ \partial\Omega^+(0)} \label{tanhprop} \end{equation} as Radon measures. Here $\sigma=\int_{-1}^{+1}\sqrt{2W(s)}\, ds$. For $V^{s,2}$ with $s>\frac{d}{2}+1$ let $\{\omega_i\}_{i=1}^{\infty}$ be a basis of $V^{s,2}$ which is orthonormal in $V^{0,2}$. The choice of $s$ is made so that the Sobolev embedding theorem implies $W^{s-1,2}(\Omega)\hookrightarrow L^{\infty}(\Omega)$, and thus $\nabla \omega_i \in L^{\infty}(\Omega)^{d^2}$. Let $P_i:V^{0,2}\rightarrow V^{0,2}_i={\rm span}\,\{\omega_1,\omega_2,\cdots,\omega_i\}$ be the orthogonal projection. We then project the problem \eqref{main1}-\eqref{velocity} to $V^{0,2}_i$ by utilizing the orthogonality in $V^{0,2}$. Note that, just as in \cite{LinLiu}, we approximate the mean curvature term in \eqref{nsdist} by the appropriate phase field approximation. We consider the following problem: \begin{eqnarray} \hspace{.3cm} \frac{\partial u^{\varepsilon_i}}{\partial t}=P_i\left({\rm div}\,\tau(\varphi^{\varepsilon_i}, e(u^{\varepsilon_i}))- u^{\varepsilon_i}\cdot\nabla u^{\varepsilon_i}-\frac{\varepsilon_i}{\sigma}{\rm div}\,((\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i})\right) & & {\rm on} \ \Omega\times[0,\infty),\label{appeq1}\\ u^{\varepsilon_i}(\cdot,t)\in V^{0,2}_i \qquad \qquad \qquad \qquad \qquad & & {\rm for} \ t\geq 0,\label{appeq2}\\ \frac{\partial\varphi^{\varepsilon_i}}{\partial t}+(u^{\varepsilon_i}*\zeta^{\varepsilon_i})\cdot\nabla\varphi^{\varepsilon_i}=\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2} \qquad \qquad \quad & & {\rm on} \ \Omega\times [0,\infty),\label{appeq3}\\ u^{\varepsilon_i}(x,0)=P_i u_0(x),\quad \varphi^{\varepsilon_i}(x,0)=\varphi_0^{\varepsilon_i}(x) \qquad \qquad \qquad & & {\rm on} \ \Omega.\label{appeq4} \end{eqnarray} Here $*$ is the usual convolution. We prove the following theorem. \begin{thm} For any $i\in {\mathbb N}$, $u_0\in V^{0,2}$ and $\varphi^{\varepsilon_i}_0$, there exists a weak solution $(u^{\varepsilon_i},\varphi^{\varepsilon_i})$ of \eqref{appeq1}-\eqref{appeq4} such that $u^{\varepsilon_i} \in L^{\infty}([0,\infty);V^{0,2})\cap L^p_{loc}([0,\infty);V^{1,p})$, $|\varphi^{\varepsilon_i}|\leq 1$, $\varphi^{\varepsilon_i} \in L^{\infty}([0,\infty);C^3(\Omega))$ and $\frac{\partial\varphi^{\varepsilon_i}}{\partial t}\in L^{\infty}([0,\infty);C^1(\Omega))$. \label{globalexistence} \end{thm} We first write the above system in terms of $u^{\varepsilon_i}=\sum_{k=1}^{i}c^{\varepsilon_i}_k(t)\omega_k(x)$.
Since \begin{gather*} \left(\frac{d}{dt}u^{\varepsilon_i},\,\omega_j\right)=\bigg(\frac{d}{dt}\sum_{k=1}^i c^{\varepsilon_i}_k(t)\,\omega_k,\,\omega_j\bigg)=\frac{d}{dt}c^{\varepsilon_i}_j(t),\\ (u^{\varepsilon_i}\cdot\nabla u^{\varepsilon_i},\,\omega_j)=\sum_{k,l=1}^i c_k^{\varepsilon_i}(t)c_l^{\varepsilon_i}(t)(\omega_k\cdot\nabla\omega_l,\,\omega_j),\\ \varepsilon_i({\rm div}\,((\nabla\varphi^{\varepsilon_i} \otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i}),\,\omega_j) = \,-\varepsilon_i \int_{\Omega} (\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i}: \nabla \omega_j \,dx,\\ \left({\rm div}\,\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})),\,\omega_j\right) = -\int_{\Omega}\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})):e(\omega_j)\,dx \end{gather*} for $j=1,\cdots,i$, \eqref{appeq1} is equivalent to \begin{equation} \begin{split} \frac{d}{dt}c_j^{\varepsilon_i}(t)=& \,-\int_{\Omega}\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})):e(\omega_j)\,dx -\sum_{k,l=1}^i c_k^{\varepsilon_i}(t)c_l^{\varepsilon_i}(t)(\omega_k\cdot\nabla\omega_l,\,\omega_j) \\ & +\frac{\varepsilon_i}{\sigma}\int_{\Omega}(\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i}: \nabla \omega_j \,dx= \,A^{\varepsilon_i}_j(t)+B_{klj} c^{\varepsilon_i}_k(t)c^{\varepsilon_i}_l(t)+D^{\varepsilon_i}_j(t).\label{appeq1-2} \end{split} \end{equation} Moreover, the initial condition for $c_j^{\varepsilon_i}$ is \[c^{\varepsilon_i}_j(0)=(u_0,\,\omega_j)\quad {\rm for} \ j=1,2,\dots,i.\] We also set \[E_0={\mathcal H}^{d-1}(\partial\Omega^+(0))+\frac12 \int_{\Omega}|u_0|^2\, dx\] and note that \begin{equation} \frac{1}{\sigma}\int_{\Omega}\left(\frac{\varepsilon_i|\nabla\varphi_0^{\varepsilon_i}|^2}{2} +\frac{W(\varphi^{\varepsilon_i}_0)}{\varepsilon_i}\right)\,dx+\frac12\sum_{j=1}^i(c^{\varepsilon_i}_j(0))^2\leq E_0 +o(1) \label{eqeq} \end{equation} by \eqref{tanhprop} and since $\{\omega_j\}$ is orthonormal in $V^{0,2}$ and $P_i$ is the orthogonal projection. We use the following lemma to prove Theorem \ref{globalexistence}. \begin{lemma} There exists a constant $T_0=T_0(E_0,i,\nu_0,p)>0$ such that \eqref{appeq1}-\eqref{appeq4} with \eqref{eqeq} has a weak solution $(u^{\varepsilon_i},\varphi^{\varepsilon_i})$ in $\Omega\times[0,T_0]$ such that $u^{\varepsilon_i} \in L^{\infty}([0,T_0];V^{0,2})\cap L^p([0,T_0];V^{1,p})$, $|\varphi^{\varepsilon_i}|\leq 1$, $\varphi^{\varepsilon_i} \in L^{\infty}([0,T_0];C^3(\Omega))$ and $\frac{\partial\varphi^{\varepsilon_i}}{\partial t} \in L^{\infty}([0,T_0];C^1(\Omega))$.
\label{localexistence} \end{lemma} {\it Proof.} Assume that we are given a function $u(x,t)=\sum_{j=1}^i c_j^{\varepsilon_i}(t)\omega_j(x)\in C^{1/2}([0,T];V^{s,2})$ with \begin{equation} c^{\varepsilon_i}_j(0)=(u_0,\,\omega_j),\hspace{.5cm} \max_{t\in[0,T]}\left(\frac12\sum_{j=1}^i|c^{\varepsilon_i}_j(t)|^2\right)^{1/2}+ \sup_{0\leq t_1<t_2\leq T}\sum_{j=1}^i \frac{|c_j^{\varepsilon_i}(t_1)-c_j^{\varepsilon_i}(t_2)|}{|t_1-t_2|^{1/2}}\leq \sqrt{2E_0}.\label{leraycond} \end{equation} We let $\varphi (x,t)$ be the solution of the following parabolic equation: \begin{equation} \begin{split} \frac{\partial\varphi}{\partial t}+(u*\zeta^{\varepsilon_i})\cdot\nabla\varphi=\Delta\varphi-\frac{W'(\varphi)}{\varepsilon_i^2},\\ \varphi(x,0)=\varphi^{\varepsilon_i}_0(x). \end{split}\label{acapprox} \end{equation} The existence of such $\varphi$ with $|\varphi|\leq 1$ is guaranteed by the standard theory of parabolic equations (\cite{Ladyzhenskaya}). By \eqref{acapprox} and the Cauchy-Schwarz inequality, we can estimate \begin{equation*} \frac{d}{dt}\int_{\Omega}\left(\frac{\varepsilon_i|\nabla\varphi|^2}{2}+\frac{W(\varphi)}{\varepsilon_i}\right)\,dx \leq -\frac{\varepsilon_i}{2}\int_{\Omega} \left(\Delta\varphi-\frac{W'(\varphi)}{\varepsilon_i^2}\right)^2\,dx+\frac{\varepsilon_i}{2}\int_{\Omega} \left\{(u*\zeta^{\varepsilon_i})\cdot\nabla\varphi\right\}^2\,dx. \end{equation*} Since for any $t \in [0,T]$ \begin{equation*} \|u*\zeta^{\varepsilon_i}\|^2_{L^{\infty}(\Omega)} \leq \varepsilon_i^{-2\gamma}\|u\|^2_{L^{\infty}(\Omega)} \leq i\varepsilon_i^{-2\gamma}\max_{1\leq j \leq i}\|\omega_j(x)\|^2_{L^{\infty}(\Omega)} \sum_{j=1}^i|c^{\varepsilon_i}_j(t)|^2 \leq c(i)E_0, \end{equation*} we obtain \begin{equation*} \frac{d}{dt}\int_{\Omega}\left(\frac{\varepsilon_i|\nabla\varphi|^2}{2}+\frac{W(\varphi)}{\varepsilon_i}\right)\,dx \leq c(i) E_0\int_{\Omega}\frac{\varepsilon_i|\nabla\varphi|^2}{2}\,dx. \end{equation*} This gives \begin{equation} \sup_{0\leq t \leq T} \frac{1}{\sigma}\int_{\Omega}\left(\frac{\varepsilon_i|\nabla\varphi|^2}{2}+\frac{W(\varphi)}{\varepsilon_i}\right)\,dx \leq e^{c(i) E_0 T}E_0.\label{energyest} \end{equation} Hence, as long as $T\leq 1$, \begin{equation} |D_j^{\varepsilon_i}(t)| \leq c \|\nabla\omega_j\|_{L^{\infty}(\Omega)}\frac{1}{\sigma}\int_{\Omega}\int_{\Omega}\varepsilon_i|\nabla\varphi(y)|^2\zeta^{\varepsilon_i}(x-y)\,dydx \leq c(i)e^{c(i) E_0}E_0\label{Dest} \end{equation} by $\nabla\omega_j \in L^{\infty}(\Omega)^{d^2}$ and \eqref{energyest}. Next we substitute the above solution $\varphi$ in place of $\varphi^{\varepsilon_i}$, and solve \eqref{appeq1-2} with the initial condition $c^{\varepsilon_i}_j(0)=(u_0,\,\omega_j)$. Since $\tau$ is locally Lipschitz with respect to $e(u)$, there is at least some short time $T_1$ such that \eqref{appeq1-2} has a unique solution $\tilde{c}^{\varepsilon_i}_j(t)$ on $[0,T_1]$ with the initial condition $\tilde{c}_j^{\varepsilon_i}(0)=(u_0,\,\omega_j)$ for $1\leq j\leq i$. We show that the solution exists up to a time $T_0=T_0(i,E_0,p,\nu_0)$ and satisfies \eqref{leraycond}.
Let $\tilde{c}(t)=\frac12\sum_{j=1}^i|\tilde{c}^{\varepsilon_i}_j(t)|^2$. Then \begin{equation*} \frac{d}{dt}\tilde{c}(t)= A^{\varepsilon_i}_j\tilde{c}^{\varepsilon_i}_j+B_{klj}\tilde{c}^{\varepsilon_i}_k\tilde{c}^{\varepsilon_i}_l\tilde{c}^{\varepsilon_i}_j+D_j^{\varepsilon_i}\tilde{c}^{\varepsilon_i}_j. \end{equation*} By \eqref{taucond1}, $A_j^{\varepsilon_i}\tilde{c}^{\varepsilon_i}_j\leq 0$, hence \begin{equation*} \frac{d}{dt}\tilde{c}(t) \leq c(i,E_0)(\tilde{c}^{3/2}+\tilde{c}^{1/2}). \end{equation*} Therefore \begin{equation} \arctan\sqrt{\tilde{c}(t)} \leq \arctan\sqrt{E_0}+2c(i,E_0) t.\label{arc} \end{equation} We can also estimate $|d\tilde{c}_j^{\varepsilon_i}/dt|$, using \eqref{appeq1-2}, \eqref{Dest}, \eqref{arc} and \eqref{taucond2}, by a constant depending only on $E_0,i,p,\nu_0$. Thus, by choosing $T_0$ small depending only on $E_0,i,p,\nu_0$, we obtain the existence of a solution for $t\in[0,T_0]$ satisfying \eqref{leraycond}. We then prove the existence of a weak solution on $\Omega\times [0,T_0]$ by using the Leray-Schauder fixed point theorem (see \cite{Ladyzhenskaya}). We define \[\tilde{u}(x,t)=\sum_{j=1}^i\tilde{c}^{\varepsilon_i}_j(t)\omega_j(x)\] and we define a map $\mathcal{L}:u\mapsto \tilde{u}$ as in the above procedure. Let \begin{equation*} \begin{split}V(T_0):=&\left\{u(x,t) =\sum_{j=1}^i c_j(t)\omega_j(x)\,;\,\,\max_{t\in[0,T_0]}\left(\frac12\sum_{j=1}^i|c_j(t)|^2\right)^{1/2}\right.\\ &\left.+ \sup_{0\leq t_1<t_2\leq T_0}\sum_{j=1}^i \frac{|c_j(t_1)-c_j(t_2)|}{|t_1-t_2|^{1/2}}\leq \sqrt{2E_0},\,c_j(0)=(u_0,\,\omega_j),\,c_j\in C^{1/2}([0,T_0]) \right\}. \end{split} \end{equation*} Then $V(T_0)$ is a closed, convex subset of $C^{1/2}([0,T_0];V^{0,2}_i)$ equipped with the norm \[\|u\|_{V(T_0)}=\max_{t\in[0,T_0]}\left(\frac12\sum_{j=1}^i|c_j(t)|^2\right)^{1/2}+ \sup_{0\leq t_1<t_2\leq T_0}\sum_{j=1}^i \frac{|c_j(t_1)-c_j(t_2)|}{|t_1-t_2|^{1/2}}\] and by the above argument $\mathcal{L}:V(T_0)\rightarrow V(T_0)$. Moreover, by the Ascoli-Arzel\`a compactness theorem, $\mathcal{L}$ is a compact operator. Therefore, by the Leray-Schauder fixed point theorem, $\mathcal{L}$ has a fixed point $u^{\varepsilon_i}\in V(T_0)$. We denote by $\varphi^{\varepsilon_i}$ the corresponding solution of \eqref{appeq3} and \eqref{appeq4}. Then $(u^{\varepsilon_i}, \varphi^{\varepsilon_i})$ is a weak solution of \eqref{appeq1}-\eqref{appeq4} in $\Omega\times [0,T_0]$. Note that we have the required regularity for $\varphi^{\varepsilon_i}$ due to the regularity of $u^{\varepsilon_i}*\zeta^{\varepsilon_i}$ in $x$ and the standard parabolic regularity theory. $ {\Box}$ \begin{thm} Let $(u^{\varepsilon_i},\varphi^{\varepsilon_i})$ be the weak solution of \eqref{appeq1}-\eqref{appeq4} with \eqref{eqeq} in $\Omega\times[0,T]$.
Then the following energy estimate holds: \begin{equation} \begin{split} \int_{\Omega}\frac{1}{\sigma}&\left(\frac{\varepsilon_i|\nabla\varphi^{\varepsilon_i}(\cdot,T)|^2}{2}+\frac{W(\varphi^{\varepsilon_i}(\cdot,T))}{\varepsilon_i}\right)+\frac{|u^{\varepsilon_i}(\cdot,T)|^2}{2}\,dx\\ &+\int_0^{T}\int_{\Omega}\frac{\varepsilon_i}{\sigma}\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right)^2+\nu_0|e(u^{\varepsilon_i})|^p\,dxdt \leq E_0+o(1). \label{localenergy1} \end{split} \end{equation} Moreover, for any $0\leq T_1<T_2<\infty$, \begin{equation} \int_{T_1}^{T_2}\|u^{\varepsilon_i}(\cdot,t)\|_{W^{1,p}(\Omega)}^p\, dt\leq c_K \{\nu_0^{-1}E_0+(T_2-T_1)E_0^{\frac{p}{2}}\}+o(1). \label{localenergysup} \end{equation} \label{localenergy} \end{thm} {\it Proof.} Since $(u^{\varepsilon_i},\varphi^{\varepsilon_i})$ is the weak solution of \eqref{appeq1}-\eqref{appeq4}, we derive \begin{equation} \begin{split} & \frac{d}{dt}\int_{\Omega}\frac{1}{\sigma}\left(\frac{\varepsilon_i|\nabla\varphi^{\varepsilon_i}|^2}{2}+\frac{W(\varphi^{\varepsilon_i})}{\varepsilon_i}\right)+\frac{|u^{\varepsilon_i}|^2}{2}\,dx\\ & =\int_{\Omega}-\frac{\varepsilon_i}{\sigma}\frac{\partial \varphi^{\varepsilon_i}}{\partial t}\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right)+\frac{\partial u^{\varepsilon_i}}{\partial t}\cdot u^{\varepsilon_i}\,dx\\ & =\int_{\Omega}-\frac{\varepsilon_i}{\sigma}\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}-(u^{\varepsilon_i}*\zeta^{\varepsilon_i})\cdot\nabla\varphi^{\varepsilon_i}\right) \left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right)\,dx\\ & +\int_{\Omega}\left\{{\rm div}\,\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i}))-u^{\varepsilon_i}\cdot\nabla u^{\varepsilon_i} -\frac{\varepsilon_i}{\sigma}{\rm div}\,((\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i})\right\}\cdot u^{\varepsilon_i}\,dx=I_1+I_2. \end{split}\label{localenergy1cal} \end{equation} Since ${\rm div}\, (u^{\varepsilon_i}*\zeta^{\varepsilon_i})=({\rm div}\, u^{\varepsilon_i})*\zeta^{\varepsilon_i}=0$, \begin{equation*} \sigma I_1 = -\int_{\Omega}\varepsilon_i\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right)^2\,dx+\varepsilon_i\int_{\Omega}(u^{\varepsilon_i}*\zeta^{\varepsilon_i})\cdot\nabla\varphi^{\varepsilon_i}\Delta\varphi^{\varepsilon_i}\,dx. \end{equation*} For $I_2$, with \eqref{taucond1}, \begin{equation*} \int_{\Omega}{\rm div}\,\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i}))\cdot u^{\varepsilon_i}\,dx =-\int_{\Omega}\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})):e(u^{\varepsilon_i})\,dx \leq -\nu_0\int_{\Omega}|e(u^{\varepsilon_i})|^p\,dx.
\end{equation*} Moreover, the second term of $I_2$ vanishes by ${\rm div}\,u^{\varepsilon_i}=0$, and \begin{equation*} \begin{split} & -\int_{\Omega}\varepsilon_i {\rm div}\,((\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i})\cdot u^{\varepsilon_i}\,dx = -\int_{\Omega}\varepsilon_i \left(\nabla \frac{|\nabla\varphi^{\varepsilon_i}|^2}{2}+ \nabla \varphi^{\varepsilon_i}\Delta\varphi^{\varepsilon_i}\right)*\zeta^{\varepsilon_i} \cdot u^{\varepsilon_i}\,dx\\ & = -\varepsilon_i\int_{\Omega}(u^{\varepsilon_i}*\zeta^{\varepsilon_i})\cdot\nabla\varphi^{\varepsilon_i}\Delta\varphi^{\varepsilon_i}\,dx. \end{split} \end{equation*} Hence \eqref{localenergy1cal} becomes \begin{equation*} \frac{d}{dt}\int_{\Omega}\frac{1}{\sigma}\left(\frac{\varepsilon_i|\nabla\varphi^{\varepsilon_i}|^2}{2}+\frac{W(\varphi^{\varepsilon_i})}{\varepsilon_i}\right)+\frac{|u^{\varepsilon_i}|^2}{2}\,dx \leq -\int_{\Omega}\left\{\frac{\varepsilon_i}{\sigma}\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right)^2 +\nu_0 |e(u^{\varepsilon_i})|^p\right\} dx. \end{equation*} Integrating with respect to $t$ over $t\in[0,T]$ and using \eqref{eqeq}, we obtain \eqref{localenergy1}. The proof of \eqref{localenergysup} follows from \eqref{localenergy1} and Theorem \ref{Korn}. $ {\Box}$\\ {\it Proof of Theorem \ref{globalexistence}.} For each fixed $i$ we have short time existence on $[0,T_0]$, where $T_0$ depends only on $i,E_0,p,\nu_0$ at $t=0$. By Theorem \ref{localenergy} the energy at $t=T_0$ is again bounded by $E_0+o(1)$. By repeatedly using Lemma \ref{localexistence}, Theorem \ref{globalexistence} follows. $ {\Box}$\\ \section{Proof of main theorem} \quad In this section we first prove that $\{\varphi^{\varepsilon_i}\}_{i=1}^{\infty}$ of Section 3 and the associated surface energy measures $\{\mu_t^{\varepsilon_i}\}_{ i=1}^{\infty}$ converge subsequentially to $\varphi$ and $\mu_t$ which satisfy the properties described in Theorem \ref{maintheorem}. Most of the technical and essential ingredients have been proved in \cite{LST1}, and we only need to check the conditions to apply those results. We then prove that the limit velocity field satisfies the weak non-Newtonian flow equation, concluding the proof of Theorem \ref{maintheorem}. First we recall the upper density ratio bound for the surface energy. \begin{thm} (\cite[Theorem 3.1]{LST1}) Suppose $d\geq 2$, $\Omega={\mathbb T}^d$, $p>\frac{d+2}{2}$, $\frac12>\gamma\geq 0$, $1\geq \varepsilon>0$ and $\varphi$ satisfies \begin{eqnarray} \frac{\partial \varphi}{\partial t}+u\cdot\nabla\varphi=\Delta\varphi-\frac{W'(\varphi)}{\varepsilon^2} \qquad \qquad \quad & & {\rm on} \ \Omega\times [0,T],\label{allen1}\\ \varphi(x,0)=\varphi_0(x) \qquad \qquad \qquad & & {\rm on} \ \Omega,\label{allen2} \end{eqnarray} where $\nabla^i u,\, \nabla^j \varphi, \nabla^k \varphi_t\in C(\Omega\times[0,T])$ for $0\leq i,\, k\leq 1$ and $0\leq j\leq 3$.
Let $\mu_t$ be the Radon measure on $\Omega$ defined by \begin{equation} \int_{\Omega}\phi(x)\, d\mu_t(x)=\frac{1}{\sigma}\int_{\Omega}\phi(x)\left(\frac{\varepsilon|\nabla\varphi(x,t)|^2}{2}+\frac{W(\varphi(x,t))}{\varepsilon}\right)\, dx \label{dmu} \end{equation} for $\phi\in C(\Omega)$, where $\sigma=\int_{-1}^1 \sqrt{2 W(s)}\, ds$. We assume also that \begin{gather} \sup_{\Omega}|\varphi_0|\leq 1\mbox{ and }\sup_{\Omega}\varepsilon^i|\nabla^i\varphi_0|\leq c_{1}\mbox{ for $1\leq i\leq 3$},\label{inibound}\\ \sup_{\Omega}\left(\frac{\varepsilon|\nabla\varphi_0|^2}{2}-\frac{W(\varphi_0)}{\varepsilon}\right)\leq \varepsilon^{-\gamma},\label{disbd}\\ \sup_{\Omega\times[0,T]}\left\{\varepsilon^{\gamma}|u|,\, \varepsilon^{1+\gamma}|\nabla u|\right\}\leq c_{2}, \label{uinfbound}\\ \int_0^T\|u(\cdot,t)\|^p_{W^{1,p}(\Omega)}\, dt\leq c_3.\label{ubound} \end{gather} Define for $t\in [0,T]$ \begin{equation} D(t)=\max\left\{\sup_{x\in\Omega,\, 0<r\leq \frac12}\frac{1}{\omega_{d-1}r^{d-1}} \mu_t(B_r(x)), 1\right\},\hspace{1.cm}D(0)\leq D_0. \label{dtdef} \end{equation} Then there exist $\epsilon_1>0$, which depends only on $d$, $p$, $W$, $c_1$, $c_2$, $c_3$, $D_0$, $\gamma$ and $T$, and $c_4$, which depends only on $c_3$, $d$, $p$, $D_0$ and $T$, such that for all $0<\varepsilon\leq \epsilon_1$, \begin{equation} \sup_{0\leq t\leq T}D(t)\leq c_4. \label{fin1} \end{equation} \label{mainmono} \end{thm} Using this we prove \begin{prop} For $\{\varphi^{\varepsilon_i}\}_{i=1}^{\infty}$ in Theorem \ref{globalexistence}, define $\mu_t^{\varepsilon_i}$ as in \eqref{dmu} replacing $\varphi$ by $\varphi^{\varepsilon_i}$, and define $D^{\varepsilon_i}(t)$ as in \eqref{dtdef} replacing $\mu_t$ by $\mu_t^{\varepsilon_i}$. Given $0<T<\infty$, there exists $c_5$, which depends only on $E_0,\, \nu_0, \, \gamma,\, D_0,\, T,\, d,\, p$ and $W$, such that \begin{equation} \sup_{0\leq t\leq T}D^{\varepsilon_i}(t)\leq c_5 \label{key} \end{equation} for all sufficiently large $i$. \label{du} \end{prop} {\bf Proof}. We only need to check the conditions of Theorem \ref{mainmono} for $\varphi^{\varepsilon_i}$ and $\mu_t^{\varepsilon_i}$. Note that $u$ in \eqref{allen1} is replaced by $u^{\varepsilon_i}*\zeta^{\varepsilon_i}$. We have $d\geq 2$, $\Omega={\mathbb T}^d$, $p>\frac{d+2}{2}$, $\frac12>\gamma\geq 0$, $1\geq\varepsilon>0$, and \eqref{allen1} and \eqref{allen2} hold. The regularity of the functions is guaranteed by Theorem \ref{globalexistence}. With an appropriate choice of $c_1$, \eqref{inibound} is satisfied for all sufficiently large $i$ due to the choice of $\varepsilon_i$ in \eqref{tanh}. The sup bound \eqref{disbd} is satisfied even with $0$ on the right-hand side instead of $\varepsilon_i^{-\gamma}$. The bound \eqref{uinfbound} for $u^{\varepsilon_i}* \zeta^{\varepsilon_i}$ is satisfied due to \eqref{zeta} and \eqref{localenergy1}, and \eqref{ubound} is satisfied due to \eqref{localenergysup}. Thus we have all the conditions, and Theorem \ref{mainmono} proves the claim.
$ {\Box}$ We next prove \begin{prop} For $\{u^{\varepsilon_i}*\zeta^{\varepsilon_i}\}_{i=1}^{\infty}$ in Theorem \ref{globalexistence}, there exist a subsequence (denoted by the same index) and a limit $u\in L^{\infty}([0,\infty);V^{0,2})\cap L^p_{loc}([0,\infty); V^{1,p})$ such that for any $0<T<\infty$ \begin{equation} u^{\varepsilon_i}*\zeta^{\varepsilon_i}\rightharpoonup u\mbox{ weakly in }L^p([0,T]; W^{1,p}(\Omega)^d), \hspace{1.cm}u^{\varepsilon_i}*\zeta^{\varepsilon_i}\rightarrow u\mbox{ strongly in }L^2([0,T];L^2(\Omega)^d). \label{weak} \end{equation} \end{prop} {\bf Proof}. Let $\psi \in V^{s,2}$ with $\|\psi\|_{V^{s,2}}\leq 1$. With \eqref{appeq1}, \eqref{appeq2} and integration by parts, we have \begin{equation*} \begin{split} \left(\frac{\partial u^{\varepsilon_i}}{\partial t},\psi\right)&= \left(\frac{\partial u^{\varepsilon_i}}{\partial t}, P_i\psi\right) = \left(-u^{\varepsilon_i}\cdot \nabla u^{\varepsilon_i}+{\rm div}\,\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})) -\frac{\varepsilon_i}{\sigma}{\rm div}\, ((\nabla\varphi^{\varepsilon_i}\otimes \nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i}), P_i\psi\right) \\ &=\left(u^{\varepsilon_i}\otimes u^{\varepsilon_i}-\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})) +\frac{\varepsilon_i}{\sigma} (\nabla\varphi^{\varepsilon_i}\otimes \nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i},\nabla P_i\psi\right). \end{split} \end{equation*} Here we remark that \[\|\nabla P_i\psi\|_{L^{\infty}(\Omega)}\leq c(d) \|P_i\psi\|_{W^{s,2}(\Omega)}\leq c(d)\|\psi\|_{W^{s,2}(\Omega)}=c(d)\|\psi\|_{V^{s,2}}\leq c(d)\] by $s> \frac{d+2}{2}$ and the properties of $P_i$ (see \cite{Lions} or \cite[p.290]{Malek}). Thus, by \eqref{taucond2} and \eqref{localenergy1}, we obtain \begin{equation*} \left(\frac{\partial u^{\varepsilon_i}}{\partial t},\psi\right)\leq c(d,p,\nu_0)\left(1+E_0+\|u^{\varepsilon_i} \|_{W^{1,p}(\Omega)}^{p-1}\right). \end{equation*} Again using \eqref{localenergy1} and integrating in time, we obtain \begin{equation} \int_0^T\left|\left|\frac{\partial u^{\varepsilon_i}}{\partial t}\right|\right|_{(V^{s,2})^*}^{\frac{p}{p-1}}\,dt\leq c(d,p,E_0,\nu_0,T). \label{utes} \end{equation} Now we use the Aubin-Lions compactness theorem \cite[p.57]{Lions} with $B_0=V^{s,2}$, $B=V^{0,2}\subset L^2(\Omega)^d$, $B_1=(V^{s,2})^*$, $p_0=p$ and $p_1=\frac{p}{p-1}$. Then there exists a subsequence, still denoted by $\{u^{\varepsilon_i}\}_{i=1}^{\infty}$, such that \begin{equation*} u^{\varepsilon_i} \rightarrow u \quad {\rm in} \ L^p([0,T];L^2(\Omega)^d). \label{u-converge1} \end{equation*} Since we have a uniform $L^{\infty}([0,T];L^2(\Omega)^d)$ bound for $u^{\varepsilon_i}$, the strong convergence also holds in $L^2([0,T];L^2(\Omega)^d)$. Note that we also have the proper norm bounds to extract weakly convergent subsequences due to \eqref{localenergy1}. For each $T_n$, where $T_n$ diverges to $\infty$ as $n\rightarrow\infty$, we choose a subsequence, and by choosing a diagonal subsequence we obtain a convergent subsequence satisfying \eqref{weak} with $u^{\varepsilon_i}$ in place of $u^{\varepsilon_i}*\zeta^{\varepsilon_i}$.
It is not difficult to show at this point that the same convergence results as in \eqref{weak} hold for $u^{\varepsilon_i}* \zeta^{\varepsilon_i}$. $ \Box$ {\bf Proof of main theorem}. At this point, the rest of the proof concerning the existence of the limit Radon measures $\mu_t$ and the limit $\varphi=\lim_{i\rightarrow \infty}\varphi^{\varepsilon_i}$, and their respective properties described in Theorem \ref{maintheorem}, can be carried out by an almost line-by-line identical argument to \cite[Section 4,\, 5]{LST1}. The only difference is that the energy $E_0$ in \cite{LST1} depends also on $T$, while in this paper $E_0$ depends only on the initial data due to \eqref{localenergy1}. This allows us to have time-global estimates such as $u\in L^{\infty}([0,\infty);V^{0,2})$ and $\varphi\in L^{\infty}([0,\infty);BV(\Omega))$. The argument in \cite{LST1} then completes the existence proof of Theorem \ref{maintheorem} (b), (c) along with (iii)-(vi). We still need to prove (a), (i) and (ii). Due to \eqref{utes}, \eqref{taucond2} and \eqref{localenergysup} we may extract a further subsequence so that \begin{equation} \frac{\partial u^{\varepsilon_i}}{\partial t} \rightharpoonup \frac{\partial u}{\partial t}\mbox{ weakly in } L^{\frac{p}{p-1}}([0,T];(V^{s,2})^*),\hspace{.5cm} \tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i}))\rightharpoonup \hat{\tau} \mbox{ weakly in }L^{\frac{p}{p-1}}([0,T];L^{\frac{p}{p-1}}(\Omega)^{d^2}). \label{meascon1} \end{equation} For $\omega_j\in V^{s,2}$ ($j=1,\cdots$) and $h \in C^{\infty}_c((0,T))$ we have \begin{equation*} \int_{\Omega}{\rm div}\, ((\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i})*\zeta^{\varepsilon_i})\cdot h\omega_j\, dx=\int_{\Omega}\left(\Delta\varphi^{\varepsilon_i}-\frac{W'(\varphi^{\varepsilon_i})}{\varepsilon_i^2}\right) \nabla\varphi^{\varepsilon_i}\cdot h\omega_j*\zeta^{\varepsilon_i}\, dx \end{equation*} by integration by parts and ${\rm div}\,\omega_j=0$. Thus the argument in \cite[p.212]{Lions} and a convergence argument similar to that of \cite{LST1} show \begin{equation} \int_0^T\left\{\left(\frac{\partial u}{\partial t},h\omega_j\right)+\int_{\Omega} (u\cdot\nabla u)\cdot h\omega_j+h\hat{\tau}:e(\omega_j) \, dx\right\}dt=\int_0^T\int_{\Omega}H\cdot h\omega_j\, d\mu_t dt. \label{meascon2} \end{equation} Again, by a similar argument using the density ratio bound and Theorem \ref{MZ}, one shows by a density argument and \eqref{meascon2} that $\frac{\partial u}{\partial t}\in L^{\frac{p}{p-1}}([0,T];(V^{1,p})^*)$ and \begin{equation} \int_0^T \left\{\left(\frac{\partial u}{\partial t}, v\right)+\int_{\Omega}(u\cdot\nabla u)\cdot v+\hat{\tau}:e(v) \, dx\right\}dt=\int_0^T\int_{\Omega}H\cdot v\, d\mu_t dt \label{meascon3} \end{equation} for all $v\in L^p([0,T];V^{1,p})$. We next prove \begin{equation} \int_0^T\int_{\Omega}\hat{\tau}:e(v)\, dxdt =\int_0^T\int_{\Omega}\tau(\varphi,e(u)):e(v)\, dxdt \label{last} \end{equation} for all $v\in C^{\infty}_c((0,T);{\mathcal{V}})$.
As in \cite[p.213 (5.43)]{Lions}, we may deduce that \begin{equation} \frac12 \|u(t_1)\|^2_{L^2(\Omega)}+\int_0^{t_1}\int_{\Omega}\hat{\tau} :e(u)\, dxdt\geq \int_0^{t_1}\int_{\Omega}H\cdot u\, d\mu_t dt +\frac12\|u(0)\|^2_{L^2(\Omega)} \label{last1} \end{equation} for a.e.\ $t_1\in [0,T]$. We set for any $v\in V^{1,p}$ \begin{equation} A_i^{t_1}=\int_0^{t_1}\int_{\Omega} (\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i}))-\tau(\varphi^{\varepsilon_i},e(v))):(e(u^{\varepsilon_i})-e(v))\, dxdt+ \frac12 \|u^{\varepsilon_i}(t_1)\|^2_{L^2(\Omega)}. \label{last2} \end{equation} The property \eqref{taucond3} of $\tau$ shows that the first term of \eqref{last2} is non-negative. We may further assume that $u^{\varepsilon_i}(t_1)$ converges weakly to $u(t_1)$ in $L^2(\Omega)^d$, thus we have \begin{equation} \liminf_{i\rightarrow\infty}A_i^{t_1}\geq\frac12 \|u(t_1)\|^2_{L^2(\Omega)}. \label{last3} \end{equation} By \eqref{appeq1} we have \begin{equation*} \begin{split} A_i^{t_1}=&\frac12\|u^{\varepsilon_i}(0)\|_{L^2(\Omega)}^2-\frac{\varepsilon_i}{\sigma} \int_0^{t_1}\int_{\Omega}{\rm div}((\nabla\varphi^{\varepsilon_i}\otimes\nabla\varphi^{\varepsilon_i}) *\zeta^{\varepsilon_i})\cdot u^{\varepsilon_i}\, dxdt\\ &-\int_0^{t_1}\int_{\Omega}\tau(\varphi^{\varepsilon_i},e(u^{\varepsilon_i})):e(v)+ \tau(\varphi^{\varepsilon_i},e(v)):(e(u^{\varepsilon_i})-e(v))\, dxdt, \end{split} \end{equation*} which converges to \begin{equation} A^{t_1}=\frac12\|u(0)\|_{L^2(\Omega)}^2+\int_0^{t_1}\int_{\Omega} H\cdot u\, d\mu_t dt-\int_0^{t_1}\int_{\Omega}\hat{\tau}:e(v) +\tau(\varphi,e(v)):(e(u)-e(v))\, dxdt. \label{last4} \end{equation} Here we used that $\varphi^{\varepsilon_i}$ converges to $\varphi$ a.e.\ on $\Omega\times [0,T]$. By \eqref{last1}, \eqref{last3} and \eqref{last4}, we deduce that \begin{equation*} \int_0^{t_1}\int_{\Omega}(\hat{\tau}-\tau(\varphi,e(v))): (e(u)-e(v))\, dxdt\geq 0. \end{equation*} Choosing $v=u+\epsilon\tilde{v}$, dividing by $\epsilon$ and letting $\epsilon\rightarrow 0$, we prove \eqref{last}. Finally, \eqref{eneineq} follows from \eqref{last}, the strong $L^1(\Omega\times[0,T])$ convergence of $\varphi^{\varepsilon_i}$, the lower semicontinuity of the mean curvature square term (see \cite{LST1}) and the energy estimate of Theorem \ref{localenergy}. This concludes the proof of Theorem \ref{maintheorem}. $ {\Box}$ \begin{thebibliography}{99} \bibitem{Abels1} H.~Abels, {\it The initial value problem for the Navier-Stokes equations with a free surface in $L^q$-Sobolev spaces}, Adv.\ Diff.\ Eqns.\ {\bf 10} (2005), 45--64. \bibitem{Abels2} H.~Abels, {\it On generalized solutions of two-phase flows for viscous incompressible fluids}, Interface.\ Free Bound.\ {\bf 9} (2007), 31--65. \bibitem{Abels-Roeger} H.~Abels, M.~R\"{o}ger, {\it Existence of weak solutions for a non-classical sharp interface model for a two-phase flow of viscous, incompressible fluids}, preprint. \bibitem{Allard} W.~Allard, \textit{On the first variation of a varifold}, Ann.\ of Math.\ \textbf{95} (1972), 417--491.
\bibitem{Beale1} J.~T.~Beale, {\it The initial value problem for the Navier-Stokes equations with a free surface}, Comm.\ Pure.\ Appl.\ Math.\ \textbf{34} (1981), 359--392. \bibitem{Beale2} J.~T.~Beale, {\it Large-time regularity of viscous surface waves}, Arch.\ Ration.\ Mech.\ Anal.\ \textbf{84} (1984), 307--352. \bibitem{Brakke} K.~Brakke, \textit{The motion of a surface by its mean curvature}, Princeton University Press, Princeton, NJ, (1978). \bibitem{Chen-Struwe} Y.~Chen, M.~Struwe, {\it Existence and partial regularity for the solutions to evolution problems for harmonic maps}, Math.\ Z.\ \textbf{201} (1989), 83--103. \bibitem{Evans} L.~C.~Evans, \textit{Partial differential equations}, Graduate Studies in Math., AMS, (1998). \bibitem{EvansGariepy} L.~C.~Evans, R.~F.~Gariepy, \textit{Measure theory and fine properties of functions}, Studies in Advanced Math., CRC Press, (1992). \bibitem{GigaTakahashi} Y.~Giga, S.~Takahashi, {\it On global weak solutions of the nonstationary two phase Stokes flow}, SIAM J.\ Math.\ Anal.\ \textbf{25} (1994), 876--893. \bibitem{Huisken} G.~Huisken, {\it Asymptotic behavior for singularities of the mean curvature flow}, J.\ Diff.\ Geom.\ \textbf{31} (1990), 285--299. \bibitem{Hutchinson} J.~E.~Hutchinson, Y.~Tonegawa, {\it Convergence of phase interfaces in the van der Waals-Cahn-Hilliard theory}, Calc.\ Var.\ PDE\ \textbf{10} (2000), 49--84. \bibitem{Ilmanen1} T.~Ilmanen, \textit{Convergence of the Allen-Cahn equation to Brakke's motion by mean curvature}, J.\ Diff.\ Geom.\ \textbf{38} (1993), 417--461. \bibitem{Ilmanen2} T.~Ilmanen, \textit{Elliptic regularization and partial regularity for motion by mean curvature}, Mem.\ Amer.\ Math.\ Soc.\ \textbf{108} (1994), no.~520. \bibitem{Kim} N.~Kim, L.~Consiglieri, J.~F.~Rodrigues, {\it On non-Newtonian incompressible fluids with phase transitions}, Math.\ Meth.\ Appl.\ Sci.\ {\bf 29} (2006), 1523--1541. \bibitem{Ladyzhenskaya} O.~A.~Ladyzhenskaya, V.~A.~Solonnikov, N.~N.~Uraltseva, {\it Linear and Quasilinear Equations of Parabolic Type}, Transl.\ Math.\ Monographs, Vol.~23, Amer.\ Math.\ Soc.\ (1968). \bibitem{LinLiu} F.~H.~Lin, C.~Liu, {\it Nonparabolic dissipative systems modeling the flow of liquid crystals}, Comm.\ Pure.\ Appl.\ Math.\ \textbf{48} (1995), 501--537. \bibitem{Lions} J.~L.~Lions, {\it Quelques M\'{e}thodes de R\'{e}solution des Probl\`{e}mes aux Limites Non Lin\'{e}aires}, Dunod, Paris. \bibitem{LST1} C.~Liu, N.~Sato, Y.~Tonegawa, {\it On the existence of mean curvature flow with transport term}, Interface.\ Free Bound.\ {\bf 12} (2010), 251--277. \bibitem{Liu} C.~Liu, N.~J.~Walkington, {\it An Eulerian description of fluids containing visco-hyperelastic particles}, Arch.\ Ration.\ Mech.\ Anal.\ \textbf{159} (2001), 229--252. \bibitem{Maekawa} Y.~Maekawa, {\it On a free boundary problem for viscous incompressible flows}, Interface.\ Free Bound.\ {\bf 9} (2007), 549--589. \bibitem{Malek} J.~M\'alek, J.~Ne\v{c}as, M.~Rokyta, M.~R\r{u}\v{z}i\v{c}ka, {\it Weak and measure-valued solutions to evolutionary PDEs}, Appl.\ Math.\ Math.\ Comput.\ 13, Chapman \& Hall, London (1996). \bibitem{Meyers} N.~G.~Meyers, W.~P.~Ziemer, {\it Integral inequalities of Poincar\'e and Wirtinger type for BV functions}, Amer.\ J.\ Math.\ {\bf 99} (1977), 1345--1360. \bibitem{Mugnai} L.~Mugnai, M.~R\"{o}ger, {\it Convergence of perturbed Allen-Cahn equations to forced mean curvature flow}, preprint.
\bibitem{Nouri} A.~Nouri, F.~Poupaud, {\it An existence theorem for the multifluid Navier-Stokes problem}, J.\ Diff.\ Eqns.\ \textbf{122} (1995), 71--88. \bibitem{Plotnikov} P.~I.~Plotnikov, \textit{Generalized solutions to a free boundary problem of motion of a non-Newtonian fluid}, Siberian Math.\ J.\ {\bf 34} (1993), 704--716. \bibitem{Roeger} M.~R\"oger, R.~Sch\"atzle, \textit{On a modified conjecture of De Giorgi}, Math.\ Z.\ \textbf{254} (2006), 675--714. \bibitem{Sato} N.~Sato, \textit{A simple proof of convergence of the Allen-Cahn equation to Brakke's motion by mean curvature}, Indiana Univ.\ Math.\ J.\ \textbf{57} (2008), 1743--1751. \bibitem{Simon} L.~Simon, \textit{Lectures on geometric measure theory}, Proc.\ Centre Math.\ Anal.\ Austral.\ Nat.\ Univ.\ \textbf{3} (1983). \bibitem{Solonnikov1} V.~A.~Solonnikov, \textit{Estimates of the solution of a certain initial-boundary value problem for a linear nonstationary system of Navier-Stokes equations}, Zap.\ Nauchn.\ Sem.\ Leningrad.\ Otdel.\ Mat.\ Inst.\ Steklov.\ (LOMI) {\bf 59} (1976), 178--254, 257 (in Russian). \bibitem{Solonnikov2} V.~A.~Solonnikov, \textit{On the transient motion of an isolated volume of viscous incompressible fluid}, Math.\ USSR-Izv.\ {\bf 31} (1988), 381--405. \bibitem{Soner} H.~M.~Soner, \textit{Convergence of the phase-field equations to the Mullins-Sekerka problem with kinetic undercooling}, Arch.\ Ration.\ Mech.\ Anal.\ {\bf 131} (1995), 139--197. \bibitem{Tonegawa} Y.~Tonegawa, {\it Integrality of varifolds in the singular limit of reaction-diffusion equations}, Hiroshima Math.\ J.\ \textbf{33} (2003), 323--341. \bibitem{Ziemer} W.~P.~Ziemer, \textit{Weakly differentiable functions}, Springer-Verlag (1989). \end{thebibliography} \end{document}
math
65,380
\begin{document} \title{Radon numbers grow linearly} \author{D\"om\"ot\"or P\'alv\"olgyi\footnote{MTA-ELTE Lend\"ulet Combinatorial Geometry Research Group, Institute of Mathematics, E\"otv\"os Lor\'and University (ELTE), Budapest, Hungary. Research supported by the Lend\"ulet program of the Hungarian Academy of Sciences (MTA), under grant number LP2017-19/2017.}} \maketitle \begin{abstract} Define the $k$-th Radon number $r_k$ of a convexity space as the smallest number (if it exists) for which any set of $r_k$ points can be partitioned into $k$ parts whose convex hulls intersect. Combining the recent abstract fractional Helly theorem of Holmsen and Lee with earlier methods of Bukh, we prove that $r_k$ grows linearly, i.e., $r_k\le c(r_2)\cdot k$. \end{abstract} \section{Introduction} Define a \emph{convexity space} as a pair $(X,\mbox{\ensuremath{\mathcal C}}\xspace)$, where $X$ is any set of points and $\mbox{\ensuremath{\mathcal C}}\xspace$, the collection of convex sets, is any family over $X$ that contains $\emptyset, X$, and is closed under intersection. The convex hull, $conv(S)$, of some point set $S\subset X$ is defined as the intersection of all convex sets containing $S$, i.e., $conv(S)=\cap\{C\in\mbox{\ensuremath{\mathcal C}}\xspace\mid S\subset C\}$; since \mbox{\ensuremath{\mathcal C}}\xspace is closed under intersection, $conv(S)$ is the minimal convex set containing $S$. This generalization of convex sets includes several examples; for an overview, see the book by van de Vel \cite{Vel} or, for a more recent work, \cite{MoranY}. It is a natural question what properties of convex sets of $\mbox{\ensuremath{\mathbb R}}\xspace^d$ are preserved or what the relationships are among them for general convexity spaces. A much-investigated function is the \emph{Radon number} $r_k$ (sometimes also called \emph{partition number} or \emph{Tverberg number}), which is defined as the smallest number (if it exists) for which any set of $r_k$ points can be partitioned into $k$ parts whose convex hulls intersect. For $k=2$, we simply write $r=r_2$. In the case of the convex sets of $\mbox{\ensuremath{\mathbb R}}\xspace^d$, it was shown by Radon \cite{Radon} that $r=d+2$ (his short argument is recalled below) and by Tverberg \cite{Tverberg} that $r_k=(d+1)(k-1)+1$. Calder \cite{Calder} and Eckhoff \cite{Eckhoff79} raised the question whether $r_k\le (r-1)(k-1)+1$ also holds for general convexity spaces (when $r$ exists), and this became known as Eckhoff's conjecture. It was shown by Jamison \cite{Jamison} that the conjecture is true if $r=3$, and that the existence of $r$ always implies that $r_k$ exists and $r_k\le r^{\lceil \log_2 k\rceil}\le (2k)^{\log_2 r}$. His proof used the recursion $r_{kl}\le r_kr_l$, which was later improved by Eckhoff \cite{Eckhoff00} to $r_{2k+1}\le (r-1)(r_{k+1}-1)+r_k+1$, but this did not significantly change the growth rate of the upper bound. Then Bukh \cite{Bukh} disproved the conjectured bound $r_k\le (r-1)(k-1)+1$ by exhibiting an example with $r=4$ but $r_k\ge 3k-1$ (just one more than the conjectured value), and also improved the upper bound to $r_k= O(k^2\log^2 k)$, where the hidden constant depends on $r$. We improve this to $r_k=O(k)$, which is optimal up to a constant factor and might lead to interesting applications. \begin{thm}\label{main} If a convexity space $(X,\mbox{\ensuremath{\mathcal C}}\xspace)$ has Radon number $r$, then $r_k\le c(r)\cdot k$. \end{thm} Our proof combines the methods of Bukh with recent results of Holmsen and Lee \cite{HolmsenLee}.
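For orientation, we recall the classical argument giving $r\le d+2$ for the convex sets of $\mbox{\ensuremath{\mathbb R}}\xspace^d$ (it is not used in what follows). Given $x_1,\ldots,x_{d+2}\in\mbox{\ensuremath{\mathbb R}}\xspace^d$, the $d+2$ vectors $(x_i,1)\in\mbox{\ensuremath{\mathbb R}}\xspace^{d+1}$ are linearly dependent, so there are coefficients $\alpha_i$, not all zero, with $\sum_i\alpha_ix_i=0$ and $\sum_i\alpha_i=0$. Then both $I=\{i\mid \alpha_i>0\}$ and $J=\{i\mid \alpha_i<0\}$ are non-empty, and with $s=\sum_{i\in I}\alpha_i=-\sum_{j\in J}\alpha_j>0$ the point $$p=\sum_{i\in I}\frac{\alpha_i}{s}\, x_i=\sum_{j\in J}\frac{-\alpha_j}{s}\, x_j$$ is a convex combination of $\{x_i\mid i\in I\}$ and also of $\{x_j\mid j\in J\}$, so these two parts have intersecting convex hulls.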
In particular, we will use the following version of the classical fractional Helly theorem \cite{KL}. \begin{thm}[Holmsen-Lee \cite{HolmsenLee}]\label{fh} If a convexity space $(X,\mbox{\ensuremath{\mathcal C}}\xspace)$ has Radon number $r$, then there is an $f$ such that for any $\alpha>0$ there is a $\beta>0$ such that for any finite family \mbox{\ensuremath{\mathcal F}}\xspace of convex sets, if at least an $\alpha$ fraction of the $f$-tuples of \mbox{\ensuremath{\mathcal F}}\xspace are intersecting, then a $\beta$ fraction of \mbox{\ensuremath{\mathcal F}}\xspace intersects. \end{thm} There are several other connections between the parameters of a convexity space \cite{Vel}; for example, it was shown earlier \cite{Levi} that in convexity spaces the Helly number is always strictly less than $r$, while in \cite{HolmsenLee} it was also shown that the colorful Helly number \cite{Barany} can also be bounded by some function of $r$ (and this, combined with a combinatorial result from \cite{Holmsen}, implied Theorem \ref{fh}).\footnote{We would like to point out that a difficulty in proving these results is that the existence of a Carath\'eodory-type theorem is not implied by the existence of $r$.} It was also shown in \cite{HolmsenLee} that it follows from the work of Alon et al.~\cite{AKMM} that weak \mbox{\ensuremath{\varepsilon}}\xspace-nets \cite{ABFK} of size $c(\mbox{\ensuremath{\varepsilon}}\xspace,r)$ also exist and a $(p,q)$-theorem \cite{AK} also holds, so understanding these parameters better might lead to improved \mbox{\ensuremath{\varepsilon}}\xspace-net bounds. It remains an interesting challenge and a popular topic to find new connections among such theorems; for some recent papers studying the Radon numbers or Tverberg theorems of various convexity spaces, see \cite{dLLHRS17a,dLLHRS17b,dLHMM,FGKVW,Letzter,Patak,Patakova,Soberon}, while for a comprehensive survey, see B\'ar\'any and Sober\'on \cite{BS}. \noindent \textbf{\large Restricted vs.\ multiset} In the case of general convexity spaces, there are two slightly different definitions of Radon numbers (\cite{Vel}: 5.19). When in the point set $P$ to be partitioned we do not allow repetitions, i.e., $P$ consists of \emph{different} points, the parameter is called the \emph{restricted} Radon number, which we will denote by $r_k^{(1)}$. If repetitions are also allowed, i.e., we want to partition a multiset, the parameter is called the \emph{unrestricted} or \emph{multiset} Radon number, which we will denote by $r_k^{(m)}$. The obvious connection between these parameters is $r_k^{(1)}\le r_k^{(m)}\le (k-1)(r_k^{(1)}-1)+1$. In earlier papers multiset Radon numbers were preferred, while later papers usually focused on restricted Radon numbers; we followed the spirit of the age, so the results in the Introduction were written using the definition of $r_k^{(1)}$, although some of the bounds (like Jamison's or Eckhoff's) are valid for both definitions. The proof of Theorem \ref{main}, however, also works for multisets, so we will in fact prove the stronger $r_k^{(m)}=O(k)$, and in the following simply use $r_k$ for the multiset Radon number $r_k^{(m)}$. A similar issue arises in Theorem \ref{fh}; is \mbox{\ensuremath{\mathcal F}}\xspace allowed to be a multifamily? Though not emphasized in \cite{HolmsenLee}, their proof also works in this case and we will use it for a multifamily.
Note that this could be avoided with some cumbersome tricks, like adding more points to the convexity space without increasing the Radon number $r$ to make all sets of a family different, but we do not go into details, as Theorem \ref{fh} anyhow holds for multifamilies. \section{Proof} Fix $r$, and a collection of points $P$ with cardinality $tk$, where we allow repetitions and the cardinality is understood as the sum of the multiplicities. We will treat all points of $P$ as if they were different even if they coincide in $X$, e.g., when taking subsets. We need to show that if $t\ge c(r)$, then we can partition $P$ into $k$ sets whose convex hulls intersect. For a fixed constant $s$, define \mbox{\ensuremath{\mathcal F}}\xspace to be the family of convex sets that are the convex hull of some $s$-element subset of $P$, i.e., $\mbox{\ensuremath{\mathcal F}}\xspace=\{conv(S)\mid S\subset P, |S|=s\}$. Since we treat all points of $P$ as different, \mbox{\ensuremath{\mathcal F}}\xspace will be a multifamily with $|\mbox{\ensuremath{\mathcal F}}\xspace|=\binom{tk}s$. We will refer to the point set $S$ whose convex hull gave some $F=conv(S)\in\mbox{\ensuremath{\mathcal F}}\xspace$ as the \emph{vertices} of $F$ (even though some of these points might lie in the interior of $F$). Note that for some $S\ne S'$, we might have $conv(S)=conv(S')$, but the vertices of $conv(S)$ and $conv(S')$ will still be $S$ and $S'$; since $P$ is a multiset, it is even possible that $S\cap S'=\emptyset$. The constants $t$ and $s$ will be set to be large enough compared to some parameters that we get from Theorem \ref{fh} when we apply it to a fixed $\alpha$. (Our arguments work for any $0<\alpha<1$.) First we set $s$ to be large enough depending on $\alpha$ and $r_f$ (recall that $r_f\le r^{\log f}$ is a constant \cite{Jamison}), then $t$ to be large enough depending on $s$ and $\beta$ (the value that belongs to our chosen $\alpha$). In particular, we can set $s=\log(\frac 1{1-\alpha_s})r_ff^{fr_f}$ and $t=\max(\frac{s^2}{\beta};\frac{(fs)^2}{k(1-\alpha_t)})$, where $0<\alpha_s,\alpha_t<1$ are any two numbers such that $\alpha_s\cdot\alpha_t=\alpha$. Also, we note that the proof from \cite{Holmsen,HolmsenLee} gives $f\le r^{r^{\log r}}$ and $\beta=\Omega(\alpha^{r^f})$ for Theorem \ref{fh}. Combining all these would give an upper bound of around ${r^{r^{r^{\log r}}}}$ for $t$. Theorem \ref{main} will be implied by the following lemma and Theorem \ref{fh}. \begin{lem}\label{fhholds} An $\alpha$ fraction of the $f$-tuples of \mbox{\ensuremath{\mathcal F}}\xspace are intersecting. \end{lem} \begin{proof} Since $t$ is large enough, almost all $f$-tuples will be vertex-disjoint; thus it will be enough to deal with such $f$-tuples. More precisely, the probability of an $f$-tuple being vertex-disjoint is at least $(1-\frac{fs}{tk})^{fs}\ge 1-\frac{(fs)^2}{tk}\ge \alpha_t$ by the choice of $t$. We need to prove that at least an $\alpha_s$ fraction of these vertex-disjoint $f$-tuples will be intersecting. Partition the vertex-disjoint $f$-tuples into groups depending on which $(fs)$-element subset of $P$ is the union of their vertices. We will show that for each group an $\alpha_s$ fraction of them are intersecting. We do this by generating the $f$-tuples of a group uniformly at random and showing that such a random $f$-tuple will be intersecting with probability at least $\alpha_s$.
For technical reasons, suppose that $m=\frac s{r_f}$ is an integer and partition the $fs$ supporting points of the group randomly into $m$ subsets of size $fr_f$, denoted by $V_1,\ldots,V_m$. Call an $f$-tuple \emph{type} $(V_1,\ldots,V_m)$ if each set of the $f$-tuple intersects each $V_i$ in $r_f$ points. Since these $V_i$ were picked randomly, it is enough to show that the probability that a $(V_1,\ldots,V_m)$-type $f$-tuple is intersecting is at least $\alpha_s$. The $(V_1,\ldots,V_m)$-type $f$-tuples can be uniformly generated by partitioning each $V_i$ into $f$ equal parts of size $r_f$. Therefore, it is enough to show that such a random $f$-tuple will be intersecting with probability at least $\alpha_s$. Since $|V_i|\ge r_f$, there is at least one partition of the first $r_f$ points of $V_i$ into $f$ parts whose convex hulls intersect. Since we can distribute the remaining $(f-1)r_f$ points of $V_i$ to make all $f$ parts equal, we get that when we partition $V_i$ into $f$ equal parts of size $r_f$, the convex hulls of these parts will intersect with probability at least $\binom{fr_f}{r_f,r_f,\ldots,r_f}^{-1}\ge f^{-fr_f}$. Since these events are independent for each $i$, we get that the final $f$-tuple will be intersecting with probability at least $1-(1-f^{-fr_f})^m\ge 1-e^{-mf^{-fr_f}}\ge \alpha_s$ by the choice of $s$. \end{proof} Therefore, if $s$ is large enough, the conditions of Theorem \ref{fh} are met, so at least $\beta\binom{tk}s$ members of \mbox{\ensuremath{\mathcal F}}\xspace intersect. In other words, these intersecting sets form an $s$-uniform hypergraph \mbox{\ensuremath{\mathcal H}}\xspace on $tk$ vertices that is $\beta$-dense. We need to show that \mbox{\ensuremath{\mathcal H}}\xspace has $k$ disjoint edges to obtain the desired partition of $P$ into $k$ parts with intersecting convex hulls. For a contradiction, suppose that \mbox{\ensuremath{\mathcal H}}\xspace has only $k-1$ disjoint edges. Then every other edge meets one of their $(k-1)s$ vertices. There are at most $(k-1)s\binom{tk}{s-1}$ such edges, which is less than $\beta\binom{tk}s$ if $(k-1)s<\beta\frac{tk-s+1}{s}$, but this holds by the choice of $t$. This finishes the proof of Theorem \ref{main}.\qed \noindent \textbf{\large Concluding remarks} It is an interesting question to study how big $f$ can be compared to $r$ and the Helly number $h$ of $(X,\mbox{\ensuremath{\mathcal C}}\xspace)$. The current bound \cite{HolmsenLee} gives $f\le h^{r_h}\le r^{r^{\log r}}$. We would like to point out that the first inequality, $f\le h^{r_h}$, can be (almost) strict, as shown by the following example, similar to Example 3 (cylinders) of \cite{MoranY}. Let $X=\{1,\ldots, q\}^d$ be the points of a $d$-dimensional grid, and let \mbox{\ensuremath{\mathcal C}}\xspace consist of all axis-parallel affine subspaces. (Note that for $q=2$, $X$ will be the vertices of a $d$-dimensional cube, and \mbox{\ensuremath{\mathcal C}}\xspace its faces.) It is easy to check that $h=2$, $r=\lfloor \log(d+1)+2\rfloor$ and $f=d+1$; the last equality follows from the fact that for $\alpha=\frac{d!}{d^d}$ we need $\beta=\frac 1q$ when \mbox{\ensuremath{\mathcal F}}\xspace consists of all $qd$ axis-parallel affine hyperplanes (if $q$ is large enough). It is tempting to assume that Theorem \ref{main} would improve the second inequality, $h^{r_h}\le r^{r^{\log r}}$, as instead of $r_h\le r^{\log h}$ we can use $r_h=O(h)$. Unfortunately, recall that the hidden constant depended on $r$; in particular, it is around ${r^{r^{r^{\log r}}}}$.
We suspect that this might not be entirely sharp, so a natural question is whether this dependence could be removed to improve $r_k\le {r^{r^{r^{\log r}}}}\cdot k$ to $r_k\le c \cdot r\cdot k$. This would truly lead to an improvement of the upper bound on $f$ in Theorem \ref{fh} and would be enough for several applications \cite{BS}. \noindent \textbf{\large Acknowledgment} I would like to thank Boris Bukh and Narmada Varadarajan for discussions on \cite{Bukh}, Andreas Holmsen for calling my attention to the difference between restricted and multiset Radon numbers, especially for confirming that Theorem \ref{fh} also holds for multisets, and G\'abor Dam\'asdi, Bal\'azs Keszegh, Padmini Mukkamala and G\'eza T\'oth for feedback on earlier versions of this manuscript, especially for fixing the computations in the proof of Lemma \ref{fhholds}. \end{document}
\begin{document} \title{Ultra-fast two-qubit ion gate using sequences of resonant pulses} \author{E. Torrontegui$^1$, D. Heinrich$^{2,3}$, M. I. Hussain$^{2,3}$, R. Blatt$^{2,3}$, and J. J. Garc{\'\i}a-Ripoll$^1$} \address{$^1$ Instituto de F\'{\i}sica Fundamental IFF-CSIC, Calle Serrano 113b, 28006 Madrid, Spain} \address{$^2$ Institut f\"ur Quantenoptik und Quanteninformation, \"Osterreichische Akademie der Wissenschaften, Technikerstr. 21a, 6020 Innsbruck, Austria} \address{$^3$ Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, 6020 Innsbruck, Austria} \ead{[email protected]} \begin{abstract} We propose a new protocol to implement ultra-fast two-qubit phase gates with trapped ions using spin-dependent kicks induced by resonant transitions. By optimizing only the allocation of the arrival times in a pulse-train sequence, the gate is implemented in times faster than the trapping oscillation period $T<2\pi/\omega$. Such gates allow us to increase the number of gate operations that can be completed within the coherence time of the ion qubits, favoring the development of scalable quantum computers. \end{abstract} \maketitle \section{Introduction} Trapped ions are one of the most accurate platforms for scalable quantum computation. Many ions can be loaded in Paul traps\ \cite{Porras2004, Zhang2017}, Penning traps\ \cite{Jordan2019} or possibly in other scalable architectures\ \cite{Lekitsch2017,Jain2018}. Within these traps, qubits can be stored in long-lived atomic states, which are individually manipulated using lasers or microwaves to implement high-fidelity single-qubit operations and measurements. Finally, using the vibrational states of the ion crystal as mediators, it is possible to implement universal multiqubit operations, such as the CNOT gate\ \cite{Cirac1995, Schmidt-Kaler2003}, the M{\o}lmer-S{\o}rensen gate \cite{Sorensen1999, Sackett2000}, geometric phase gates\ \cite{Milburn2000, Leibfried2003} or Toffoli operations\ \cite{Toffoli1980, Monz2009}. The actual realization of many of these gates depends on Raman transitions\ \cite{Wolf2016, Kaufmann2017, Ballance2016, Gaebler2016}, with high fidelity \cite{Gaebler2016, Ballance2016} and excellent coherence properties \cite{Wang2017}. In practice, the fidelity and speed of two-qubit gates still limit the depth of actual computations and prevent the development of scalable fault-tolerant computation\ \cite{Bermudez2017}. Those limitations in fidelity and speed are due to the use of highly detuned lasers, with lengthy control procedures and slow dynamics of the vibrational states. There exist faster gates based on stronger acceleration of the ions\ \cite{Garcia-Ripoll2003,Garcia-Ripoll2005,Duan2004,Steane2014}. Already a strong time-dependent optical lattice may result in high-fidelity gates that are shorter than a trap period\ \cite{Schafer2018}, but are still constrained by available detuning, power and the Lamb-Dicke limit\ \cite{Wineland1998}. Another method is to excite an optical transition using picosecond laser pulses. A properly designed pulse train can create an arbitrarily fast two- or multi-qubit gate\ \cite{Garcia-Ripoll2003,Garcia-Ripoll2005}. However, as demonstrated in Ref.\ \cite{Mizrahi2013}, it remains a technical challenge to have a strong momentum kick per pulse---a Raman transition might not provide enough momentum---and to switch the direction of the pulsed laser---which may induce additional sources of error and decoherence. 
In this work we study the realization of fast high-fidelity quantum gates using a train of laser pulses that excite a resonant transition\ \cite{Heinrich2019}. We focus on a simple scenario that only requires pulse-picking from a train of laser pulses with fixed strength and repetition rate. As example, we study a realistic pulsed scheme driving the $4\mathrm{S}_{1/2}\to 4\mathrm{P}_{3/2}$ transition in ${}^{40}\mathrm{Ca}^+$\ \cite{Heinrich2019}. We design the gate protocols with a two-stage global optimization that combines a continuous approximation with a discrete genetic algorithm for fine-tuning the pulse picking. We find many choices of pulses that implement highly entangling gates in a time comparable to the trap frequency, with very weak sensitivity to the pulse arrival time or the temperature of the motional states. The manuscript is structured as follows: In Sec.\ \ref{gate-theory}, we revisit the theory for implementing phase gates using spin-dependent kicks\ \cite{Cirac1995, Garcia-Ripoll2003, Garcia-Ripoll2005}. Section\ \ref{setup} presents a possible experimental setup and an optimized control protocol based on state-of-the-art kicking and control of trapped ions\ \cite{Heinrich2019}. The results leading to the implementation of ultra-fast two-qubit gates are discussed in Sec.\ \ref{results}. In Sect.\ \ref{errors} we analyze and quantify the main source of errors in the design of such gates. Finally, we present prospective research lines related to this work in Sec.\ \ref{outlook}. \section{Methods} \subsection{Geometric phase by state-dependent kicks} \langlebel{gate-theory} Consider two ions in a 1D-harmonic potential of frequency $\omega,$ at positions $x_1$ and $x_2$. Using the center-of-mass (c) and stretch-mode (s) coordinates, $x_c=(x_1+x_2)/2$ and $x_s=x_2-x_1$ the free Hamiltonian for this system reads $H_0=\hbar\omega_ca_c^{\dag}a_c+\hbar\omega_sa_s^{\dag}a_s.$ Here $\omega_c=\omega$ and $\omega_s=\omega\sqrt{3}$ and $a_{c,s}^{\dag}$ $(a_{c,s})$ are the creation (annihilation) phonon operators for each mode. The ions interact with a laser beam that is resonant with an atomic transition. This interaction is modeled by the effective Hamiltonian \begin{equation} \langlebel{Hi} H_1=\frac{\Omega(t)}{2}[\sigma_1^{\dag}e^{i\hbar k x_1}+\sigma_2^{\dag}e^{i\hbar k x_2}+\mbox{H.c.}]. \end{equation} The pseudospin ladder operator $\sigma_i^{\dag}$ connects the ground and excited states of the $i$-th ion---in this setup, the $4\mathrm{S}_{1/2}$ and $4\mathrm{P}_{3/2}$ states of ${}^{40}\mathrm{Ca}^+.$ The interaction accounts for processes where the ion absorbs or emits a photon, changing its internal state and also modifying its momentum by $\pm\hbar k.$ The sign of $k$ depends on the direction of the laser and whether the photon is emitted or absorbed. Without loss of generality, we will forego individual addressing and assume that the Rabi frequency $\Omega(t)$ is the same for both ions. \begin{figure} \caption{{\itshape a)} \end{figure} Our gate protocols\ \cite{Garcia-Ripoll2003} alternate free evolution $H=H_0$, where the laser is switched off $(\Omega=0),$ with a very fast, pulsed interaction kicking the ion. As shown in Fig.\ \ref{fig:levels}b, we assume pairs of pulses coming from counter-propagating directions. The Rabi frequency $\Omega(t)$ and the duration of each pulse $\delta t$ satisfy $\int_0^{\delta t}\Omega(\tau)d\tau=\pi$ and $\delta t\ll 2\pi/\omega$. The pulses kick the ions, accelerating them along the same direction. 
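As a brief aside, the normal-mode frequencies $\omega_c=\omega$ and $\omega_s=\sqrt{3}\omega$ quoted above can be recovered numerically from the Hessian of the two-ion potential (harmonic confinement plus Coulomb repulsion). The following sketch uses dimensionless units with $m=\omega=e^2/4\pi\epsilon_0=1$, a choice made purely for illustration.
\begin{verbatim}
import numpy as np

# Dimensionless units: m = omega = e^2/(4*pi*eps0) = 1 (illustration only).
# Potential of two ions on a line: V = 0.5*(x1^2 + x2^2) + 1/(x2 - x1).
# Force balance at equilibrium gives a separation d with d^3 = 2.
d = 2.0 ** (1.0 / 3.0)

# Hessian of V at the equilibrium: diagonal 1 + 2/d^3, off-diagonal -2/d^3.
c = 2.0 / d ** 3
H = np.array([[1.0 + c, -c],
              [-c, 1.0 + c]])

# Normal-mode frequencies are the square roots of the Hessian eigenvalues.
print(np.sqrt(np.linalg.eigvalsh(H)))   # [1.0, 1.732...] = [omega_c, omega_s]
\end{verbatim}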
In between each pair of kicks, the ions oscillate freely in the trap. The combination of both effects can be modeled analytically. The evolution operator for $N$ pulses is $\mathcal{U}=\mathcal{U}_c\mathcal{U}_s$ with $\mathcal{U}_{c,s}=\prod_{n=1}^NU_{c,s}(t_n,z_n)$ and \begin{equation}al U_c(t_n,z_n)&=&e^{i\alpha_c^{(n)}(\sigma_1^z+\sigma_2^z)(a_c+a_c^{\dag})}e^{i\omega_ct_na_c^{\dag}a_c} \\ U_s(t_n,z_n)&=&e^{i\alpha_s^{(n)}(\sigma_1^z-\sigma_2^z)(a_s+a_s^{\dag})}e^{i\omega_st_na_s^{\dag}a_s}. \end{equation}al The amplitudes $\alpha_c=\eta z/2^{3/2}$ and $\alpha_s=\alpha_c/3^{1/4}$ depend on the Lamb-Dicke parameter $\eta=\sqrt{\frac{\hbar}{2m\omega}}k.$ The sign $z=\pm 1$ indicates the net orientation of the combined kick. It depends on the relative order of pulses within each pair: $z=+1$ if the first pulse comes from the left and the second from the right, $z=-1$ in the opposite case. In the setup from Fig.\ \ref{fig:levels}b, the sign $z$ is fixed throughout the experiment. \begin{figure} \caption{Phase space trajectories for the center-of-mass (solid) and stretch mode (dashed) for a pulse sequence with $N=6$ sets of pulses, with $M=1$ pulses (black) and $M=3$ pulses (green) per set, respectively. The trajectories are drawn in the frame of reference that rotates with the frequency of the mode, that is $\braket{a e^{i\omega_{c,s} \end{figure} A kicking sequence with $N$ pulses displaces the Fock operators $a_{c,s}$ by a complex number $A_{c,s}$ that depends on the collective state of the ions \begin{equation}al a_c&\rightarrow&a_c+A_c=a_c+i(\sigma_1^z+\sigma_2^z)\alpha_c\sum_{n=1}^N e^{-i\omega_ct_n} \\ a_s&\rightarrow&a_s+A_s=a_s+i(\sigma_1^z-\sigma_2^z)\alpha_s\sum_{n=1}^N e^{-i\omega_st_n}. \end{equation}al In phase space $(\langlengle x_{c,s}\ranglengle, \langlengle p_{c,s}\ranglengle),$ the normal modes follow polygonal orbits [cf. Fig.\ \ref{fig:trajectories}]. The edges of the polygon all have uniform length $\sim\alpha_{c,s}$ and the angles between edges are determined by the arrival times of the kicks $\omega_c t_n.$ A perfect gate restores the motional state of the ion $A_c=A_s=0,$ bringing them back to their original oscillator trajectories \begin{equation} \langlebel{cond} \sum_{n=1}^N e^{i\omega t_n}=\sum_{n=1}^N e^{i\sqrt{3}\omega t_n}=0, \end{equation} and closing the orbits. Under these conditions, after a time $T$ the evolution operator becomes \cite{Garcia-Ripoll2003} \begin{equation} \mathcal{U}(\phi,T)=e^{-i\phi\sigma_1^z\sigma_2^z}e^{i\omega_cTa_c^{\dag}a_c}e^{i\omega_sTa_s^{\dag}a_s}. \end{equation} This is equivalent to free evolution in the trap, combined with a global phase $\phi$ that does not depend on the motional state, \begin{align} \langlebel{phi} \phi&=\alpha_c^2\sum_{j=2}^N\sum_{k=1}^{j-1}\left[\frac{\sin(\sqrt{3}\omega (t_{j}-t_{k}))}{\sqrt{3}}-\sin(\omega(t_j-t_k))\right]\noindenttag\\ &=:\alpha_c^2 \varphi. \end{align} When Eq.\ \eqref{cond} holds and the total phase satisfies \begin{equation} \langlebel{phase} \phi=\pi/4+ 2n\pi \quad n\in\mathbb{Z}, \end{equation} the combined evolution implements a controlled-phase gate on the internal state of the ions. The set of equations that determines the operation of the gate are solved in two steps. First, calculating the allocation positions $x_n=\omega t_n$, note that this allows one to re-scale the pulse arrival times $t_n$ and determines the value $\varphi$. 
Second, we adjust the trapping frequency to make it compatible with (\ref{phi}), it fulfills \begin{equation} \langlebel{ws} \omega=\frac{\hbar k^2\varphi}{16m(\pi/4+2n\pi)}. \end{equation} Note that we are allowed to overshoot the accumulated phase, exceeding the minimum value $\pi/4$ by an integer multiple $n$ of $2\pi.$ As we will see later, this allows us to fine tune the frequency, increasing $\varphi$ (i.e. more pulses) while searching for a larger overshooting factor $n.$ \subsection{Experimental setup and parameters} \langlebel{setup} We propose to implement the ultra-fast two-qubit gate using $^{40}\mbox{Ca}^+$ ions confined in a Paul trap with center-of-mass frequency $\omega\in [\omega_{min},\omega_{max}]$. The relevant internal levels of the ion are depicted in Fig.~\ref{fig:levels}a. The qubit is stored in the $4\mbox{S}_{1/2}$ and $3\mbox{D}_{5/2}$ states and we use the $4\mbox{S}_{1/2}\leftrightarrow 4\mbox{P}_{3/2}$ transition to kick the ion. As shown in Fig.\ \ref{fig:levels}b, a single source generator produces a continuous train of pulses. A pulse picker selects pulses with discrete arrival times $t_n$ compatible with a gate protocol. The discreteness of the arrival times transforms our gate design into a combinatorial optimization problem, described in Sect.\ \ref{optimization}. Each pulse is split into two identical components by a $50/50$ beam splitter. The two pulses arrive at the ion with a relative delay $\tau,$ controlled by the relative length of the two optical paths. The ion is excited by the first pulse, which in Fig.~\ref{fig:levels}b comes from the left. By absorbing a photon, the ion acquires a momentum $+\hbar k.$ Shortly after this, a second pulse coming from the opposite direction (right in Fig.~\ref{fig:levels}b) deexcites the atom. The act of emitting a photon in the opposite direction, with momentum $-\hbar k,$ increases the momentum of the ion by $+\hbar k.$ The combined action of both pulses amounts to a very fast kick with momentum $+2\hbar k.$ To implement our phase gate, we assume a pulsed laser with these characteristics: {\itshape (i)} The laser is resonant with the ion transition, operating at a central frequency of $393.4$ nm. {\itshape (ii)} The repetition rate of the laser $R\sim 5$ GHz is much faster than the allowed trap frequencies $\omega\in 2\pi\times [78 \mbox{ kHz},2\mbox{ MHz} ]$, allowing a fine-grained control of the pulse sequences. {\itshape (iii)} The length of the pulses $\delta t$ and the delay between kicks $\tau$ are both shorter than the lifetime of the $4\mbox{P}_{3/2}$ state, $\delta{t},\tau \ll t_\gamma=6.9$ ns. This allows us to neglect spontaneous emission during the pulsed excitation and during the dark times. {\itshape (iv)} The area of the pulses is calibrated to fully transfer all probability between the $4\mbox{S}_{1/2}$ and $4\mbox{P}_{3/2}$ states, i.e. $\int_0^{\delta t}\Omega(\tau)d\tau=\pi$. Almost all requirements, except for the splitting and delay of pulses, have been demonstrated by frequency-quadrupling the light generated by a commercial laser\ \cite{Heinrich2019}. \begin{figure} \caption{Genetic algorithm workflow. {\itshape a)} \end{figure} \subsection{Design and optimization of a discrete control} \langlebel{optimization} Section\ \ref{gate-theory} established that a control-Z gate can be implemented by a sequence of pulse pairs that satisfies Eqs.\ \eqref{cond} and\ \eqref{phase}. 
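Before turning to the optimization itself, it may help to see these conditions evaluated numerically. The sketch below takes an arbitrary illustrative set of scaled arrival times $x_n=\omega t_n$ (not an optimized sequence), evaluates the closure sums of Eq.\ \eqref{cond} and the phase factor $\varphi$ of Eq.\ \eqref{phi}, and then applies Eq.\ \eqref{ws} for $^{40}$Ca$^+$ at $393.4$ nm; the physical constants are standard values and the overshooting factors $n$ are placeholders.
\begin{verbatim}
import numpy as np

# Illustrative evaluation of the gate conditions for a candidate sequence.
# The scaled arrival times x_n = omega*t_n below are arbitrary placeholders,
# not an optimized sequence.
x = np.array([0.0, 0.9, 2.1, 3.3, 4.6, 5.8])

# Closure sums of Eq. (cond): both should vanish for a perfect gate.
A_c = np.exp(1j * x).sum()
A_s = np.exp(1j * np.sqrt(3) * x).sum()
print("closure residual |A_c|^2 + |A_s|^2 =", abs(A_c) ** 2 + abs(A_s) ** 2)

# Phase factor varphi of Eq. (phi): phi = alpha_c^2 * varphi.
varphi = sum(np.sin(np.sqrt(3) * (x[j] - x[m])) / np.sqrt(3)
             - np.sin(x[j] - x[m])
             for j in range(1, len(x)) for m in range(j))

# Trap frequency required by Eq. (ws) for 40Ca+ kicked at 393.4 nm.
hbar = 1.054571817e-34                 # J s
mass = 39.9626 * 1.66053907e-27        # kg, mass of 40Ca
k_L  = 2 * np.pi / 393.4e-9            # 1/m, resonant wave number
for n in range(3):
    w = hbar * k_L**2 * abs(varphi) / (16 * mass * (np.pi / 4 + 2 * np.pi * n))
    print(f"n = {n}: omega/2pi = {w / (2 * np.pi) / 1e6:.3f} MHz")
\end{verbatim}
Scanning the overshooting factor $n$ in this way is the freedom used below to bring $\omega$ into the experimentally allowed window.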
In this work we address the design of the pulse sequence as two consecutive tasks: (i) find a set of pulse arrival times $\{t_n\}_{n=1}^N$ that meet conditions (\ref{cond}), (ii) fine tune the trapping frequency $\omega$ so that the total acquired phase is compatible with the implementation of a CZ gate\ \eqref{phase}. The first task decides our pulse picking strategy. This implies solving a combinatorial optimization problem, where the times $t_n = k_n \times t_R + t_1$ are spaced by integer multiples of the laser pulse period $t_R.$ In phase space, Eq.\ \eqref{cond} ensures closed polygonal trajectories [cf. Fig.\ \ref{fig:trajectories}], with angles between edges proportional to $\omega_{c,s}(t_{n+1}-t_n)$ and edge lengths proportional to $\alpha_{c,s}.$ The area enclosed by the polygons determines the geometric phase $\phi$. By adjusting the trap frequency $\omega_c,$ we tune the kick strengths $\alpha_c=\alpha_c(\omega),$ scaling the whole trajectory in phase space. This allows us to fine tune the accumulated phase\ \eqref{phase} to the desired value, modulo an irrelevant integer $n.$ The design of the pulse sequence is a hard combinatorial optimization problem, where we pick $N$ pulses out of a much longer train. To avoid the exponential complexity in this search, we find good approximate solutions using a two-stage method. The first stage is a regular minimization of the gate error\ \eqref{error} over a set of $N$ continuous arrival times $t_n\in\mathbb{R}.$ We apply a standard algorithm to minimize the gate error\ \eqref{error} over a set of $N$ variables, using $K_\text{seed}$ random initial seeds $\vec t\equiv \{t_1,t_2,\cdots,t_N\}$ of ordered times $t_{n+1}>t_n$ and $t_N\leq 2\pi/\omega$. We select a subset of $K_{\text{opt}}$ controls maximizing the phase $\phi,$ rejecting slow solutions $T>2\times 2\pi/\omega.$ In the second stage of this process, we introduce the finite repetition of the laser. We round the $K_{\text{opt}}$ continuous solutions to the nearest laser pulses, which are spaced by a multiple of $t_R=1/R.$ These discrete protocols introduce a possible timing error $\xi=|t_n-nt_R|.$ The gate fidelity depends on the error \begin{equation} \langlebel{error} \epsilon=|A_c|^2+|A_s|^2, \end{equation} that we make in restoring the motional state of the ions. Instead of just minimizing each $\xi$, we minimize this global error $\epsilon$ with a genetic algorithm that fine tunes the pulse allocation. A genetic algorithm\ \cite{Holland1973, Holland1975} is a discrete optimizer that builds on the concept of natural selection, where solutions are iteratively improved using biologically inspired operations such as selection, crossover and mutation. In each iteration, a {\itshape population} of candidate solutions (called {\itshape individuals}) is evolved towards better solutions or {\itshape generation} based on a {\itshape fitness} function---the cost function to be optimized. On each generation, the algorithm selects a subset of individuals that maximize the fitness. These so called {\itshape parents} merge and mutate, giving rise to new solutions, the {\itshape offspring} that form the next generation. This process of selection and reproduction is repeated until the fitness reaches the desired optimal value, selected by a user-defined tolerance, or until the maximum number of generations is reached. To bring our problem into this form, we take the $N$ continous times $t_n$ and find out the $M_{\max}$ closest pulses within the sequence created by the laser [cf. 
Fig.\ \ref{fig:optimization}a]. We then encode a solution as a \textit{chromosome} with $N\times M_{\max}$ \textit{genes}. Each gene is a bit that becomes $1$ when the corresponding pulse is selected [cf. Fig.\ \ref{fig:optimization}b]. Our initial population is formed by $K_\text{ind}$ individuals, each with $N\times M$ active genes, indicating that we have $N$ groups of $M$ pulses around the times $t_n.$ From this pool, we select the $K_p$ individuals exhibiting the best value of the fitness function (\ref{error}). Parents mate in pairs and each child receives part of its chromosome from the first parent and the rest from the second. In our algorithm this proportion is $50/50$ made at the middle of each parent chromosome, see Fig.\ \ref{fig:optimization}b. If a child improves the fitness function it joins the parents to constitute the new population for the next generation. If not, a mutation is produced creating random variations in the chromosome. To preserve the total number of $N\times M$ pulses, we randomly swap the values of two genes from a $M_{max}$ sequence placed around one of the times $t_i,$ see Fig. \ref{fig:optimization}b. These mutants join the new population, irrespective of their value of the fitness function, and the whole process is repeated. This workflow, sketched in Fig.\ \ref{fig:optimization}, is repeated over $K_{\text{ite}}$ generations. At the end, we select the state that produces the best value of the fitness function, thereby minimizing the error Eq.\ \eqref{error}. \section{Results} \langlebel{results} \begin{figure} \caption{Partial gate optimization assuming full control over pulse arrival times. {\itshape a)} \end{figure} As mentioned above, our simulations consider a scenario where the direction of the kicks is fixed. This happens when a single pulse picker is connected to an interferometric setup, creating pairs of pulses all arriving with the same relative delay [cf. Fig.\ \ref{fig:levels}]---e.g. the left pulse always excites the ion and the right pulse immediately de-excites it, setting $z=+1.$ Scenarios where both the relative direction and the Lamb-Dicke parameter are tuned have been considered before\ \cite{Garcia-Ripoll2003, Duan2004, Bentley2013, Gale2020} leading to different degrees of controllability and thus to different gate times. Here we will show that, despite our experimentally-motivated constraints\ \cite{Heinrich2019}, it is possible to implement CZ gates in a time shorter than the trap period $T<2\pi/\omega.$ \begin{figure} \caption{Optimal gates for discrete pulse arrival times. {\itshape a)} \end{figure} Before illustrating the final protocols, Fig.\ \ref{assym} shows the intermediate results obtained when solving the commensurability equations\ \eqref{cond} with continuous variables $\{t_n\}_{n=1}^N.$ Note how for a fixed number of pulses $N$ there exist multiple schemes that restore the motional state of the ions and implement a control phase gate. Out of those combinations we select those that maximize the ratio $\varphi=|\phi/\alpha_c^2|,$ and feed them to the genetic algorithm to create discrete pulse sequences. Note that the two-qubit phase depends on $\alpha_c$ and therefore on the trap frequency $\omega_c.$ The preselection of continuous protocols with large $\varphi$ provides a broader choice of pulse sequences and frequencies (\ref{ws}) that satisfy both the experimental restriction $\omega\in[\omega_{min},\omega_{max}]$ and the phase relation Eq.\ \eqref{phase}, with either $n=0$ or $n\neq 0$ (\textit{overshooting}). 
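For concreteness, the listing below is a deliberately stripped-down sketch of the genetic-algorithm stage of Sec.\ \ref{optimization}. It is not the optimizer used to produce the results reported here: the continuous-stage times, repetition period, trap frequency and population parameters are illustrative placeholders, and selection, crossover and mutation follow the description above only schematically. Up to the constant kick amplitudes $\alpha_{c,s}$, its fitness function is the restoration error of Eq.\ \eqref{error}, evaluated for the pulses selected by each chromosome.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative problem data ----------------------------------------------
omega  = 1.0                     # trap frequency (dimensionless placeholder)
t_R    = 0.01                    # laser repetition period (placeholder)
N, M   = 6, 1                    # kick groups and pulses per group
M_max  = 5                       # candidate pulses kept around each t_n
t_cont = np.array([0.0, 0.9, 2.1, 3.3, 4.6, 5.8])   # continuous-stage times

# Candidate arrival times: the M_max laser pulses nearest to each t_n.
slots = np.array([np.round(t / t_R) * t_R
                  + t_R * np.arange(-(M_max // 2), M_max // 2 + 1)
                  for t in t_cont]).ravel()

def fitness(chrom):
    """Restoration error |A_c|^2 + |A_s|^2 (up to alpha_c, alpha_s)."""
    t = slots[chrom.astype(bool)]
    return (abs(np.exp(1j * omega * t).sum()) ** 2
            + abs(np.exp(1j * np.sqrt(3) * omega * t).sum()) ** 2)

def random_individual():
    chrom = np.zeros(N * M_max, dtype=int)
    for g in range(N):                           # M active genes per group
        chrom[g * M_max + rng.choice(M_max, size=M, replace=False)] = 1
    return chrom

def mutate(chrom):
    child = chrom.copy()
    g = rng.integers(N) * M_max                  # swap two genes in one group
    i, j = rng.choice(M_max, size=2, replace=False)
    child[g + i], child[g + j] = child[g + j], child[g + i]
    return child

# --- Evolution loop ----------------------------------------------------------
population = [random_individual() for _ in range(40)]
for generation in range(200):
    population.sort(key=fitness)
    parents = population[:10]
    children = []
    for a, b in zip(parents[0::2], parents[1::2]):
        cut = len(a) // 2                        # 50/50 crossover at the middle
        child = np.concatenate([a[:cut], b[cut:]])
        if fitness(child) >= fitness(a):         # no improvement -> mutate
            child = mutate(child)
        children.append(child)
    population = (parents + children
                  + [random_individual() for _ in range(40 - 15)])

best = min(population, key=fitness)
print("best restoration error:", fitness(best))
\end{verbatim}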
The accumulated phase grows with the number of pulses in the discrete protocol as $|\phi|\propto N^{0.6},$ [cf. Fig.\ \ref{discrete}a], while the duration of the gate remains below $T\lesssim 1.055\times 2\pi/\omega$ and is close to the sequences minimizing the gate time $T$ [cf. Figs.\ \ref{assym}a and \ref{discrete}b]. The error introduced by the finite repetition rate is also negligible, Fig.\ \ref{discrete}c shows the theoretical error for one protocol consisting of $N=6$ pulses. A laser with a repetition rate $R\gtrsim 1$ GHz already produces an ultra-fast two-qubit gate with fidelity above $99.999\%.$ As shown in Fig.\ \ref{discrete}b, a short sequence with $N=4$ pulses produces very fast gates $T<2\pi/\omega,$ but with a small acquired phase. We may increase the accumulated $\varphi$, concentrating $M$ pulses around each of the $N$ kicking times [cf. Fig.\ \ref{fig:optimization}a]. This maintains the shape of the orbits, scaling the edges by a factor of $M$ [cf. Fig.\ \ref{discrete}a]. As shown in Figs.\ \ref{discrete}a-b, the duration of the gate is preserved and the accumulated phase grows with the area as $\varphi\propto M^2.$ Note that, since the phase increases in discrete steps, we still need to fine tune the trap frequency to match the desired CZ. Figures\ \ref{fig:phase}a and \ref{fig:phase}b show that this is possible for realistic trapping frequencies\ \cite{Heinrich2019}, using different multiplication factors $M.$ Figure\ \ref{fig:phase}a shows the frequencies (\ref{ws}) that implement a CZ gate and which are closest to the desired value $\omega\sim 2\pi\times 0.82$ MHz. As $\varphi$ grows with both $N$ and $M$, Fig.\ \ref{fig:phase}b shows that the specific frequency is achievable compensating the phase with a large overshooting factor $n$. \begin{figure} \caption{{\itshape a)} \end{figure} \section{Estimation of errors} \langlebel{errors} We have presented a route for the implementation of ultra-fast $T<2\pi/\omega$ quantum gates using a train of laser pulses that are resonant with the transition frequency of a trapped ion. In these protocols, the motional state of the ion is almost perfectly restored with a high-fidelity $\epsilon \sim 10^{-9}-10^{-7}$ using source generators with a constant repetition rate $R\sim 5$ GHz. When implementing these protocols, actual experiments will suffer from imperfections in the control of the ion, due to spontaneous emission during the time that the ion remains in the excited state $4P_{3/2}$ (i.e., during pulses and waiting time), and due to intensity fluctuations in the pulses. A trivial model to quantify the spontaneous emission errors, giving an upper bound on them, is to write a density matrix \begin{equation} \rho = (1-P_{err}) |\psi\ranglengle\langlengle\psi| + P_{err} |g\ranglengle\langlengle g| \end{equation} where $|g\ranglengle$ is a fictitious state accumulating the probability that an error took place. The fidelity is given by $P_{err}F_0,$ where $F_0$ is the fidelity of the gate implemented by ideal kicks. In this model, $P_{err}$ feeds from spontaneous emission effects: we assume that whenever the emission takes place, the experiment must be repeated. The probability that the ion is in an excited state $|e\ranglengle$ is \begin{equation} \frac{dP_{ok}}{dt} =- \gamma |\langlengle e|\psi(t)\ranglengle|^2 (1-P_{err}), \end{equation} with $P_{ok}+P_{err}=1.$ The decay rate $\gamma=1/t_\gamma$ is inversely proportional to the lifetime $t_\gamma$. 
The solution to this problem is \begin{equation}a \epsilon_{\gamma}& =& 1 - P_{ok}(T), \\ P_{ok}(T) &=& \exp\bigg(-\gamma \int_0^T |\langlengle e|\psi(t)\ranglengle|^2 dt\bigg) P_{ok}(0). \noindentnumber \end{equation}a In a very crude scenario, we upper bound the error probability, assuming that the ion is in the excited state from the beginning of the exciting pulse, to the end of the following, that is $T_e\simeq \delta t + \tau,$ \begin{equation} \epsilon_{\gamma} = 1 - \exp(-\gamma T_e) \simeq \gamma T_e. \end{equation} In the experimental setup from Fig. \ref{fig:levels}b, the waiting time $\tau$ between counter-propagating pulses is controlled by the relative length of the optical paths. The minimum separation is given by the pulse duration, $\tau\gtrsim\delta t$ to avoid interference. In our system, the excited state $4P_{3/2}$ has a lifetime $t_\gamma=6.9$ ns and $T_e\simeq 1$ ps\ \cite{Heinrich2019, Hussain2020} leading to errors $\epsilon_{\gamma}\sim\mathcal{O}(\delta t/t_\gamma)\simeq 1.4\cdot 10^{-4}$. For a sequence containing $N$ kicks, the infidelity of the gate is approximately $\epsilon_{\gamma}^{gate} = 1-(1-\epsilon_{\gamma})^N\sim\mathcal{O}(N\delta t/t_\gamma)$. We can also quantify the errors $\epsilon_A$ due to fluctuations in the $\pi-$pulses. For a general pulse shape $\theta=\int_0^{\delta t}\Omega(\tau)d\tau$ the unitary generated by the interaction Hamiltonian (\ref{Hi}) is \begin{equation} \hat U_k=\bigg(c-is\hat\sigma_1^xe^{ikx_1\hat\sigma_1^z}\bigg) \bigg(c-is\hat\sigma_2^xe^{ikx_2\hat\sigma_2^z}\bigg) \end{equation} with $c=\cos(\theta/2)$ and $s=\sin(\theta/2)$. A perfect $\pi-$pulse, i.e. $\theta=\pi$, generates the unitary $\hat U_{kick}=-\hat\sigma_1^x\hat\sigma_2^xe^{ik(x_1\hat\sigma_1^z+x_2\hat\sigma_2^2)}$. In order to quantify the errors due to area fluctuations when combining two counter-propagating pulses $\hat U_{pair}=\hat U_k\hat U_{-k}$ we consider small fluctuations $\pi+\Delta\theta=\int_0^{\delta t}\Omega(\tau)d\tau$ (with $\Delta\theta\rightarrow 0$) in the pulse area. Retaining the first order terms in $\Delta \theta$ an imperfect pair of counter-propagating pulses generates the transformation \begin{equation} \hat U_{pair}=(1-\Delta\theta^2/2)\hat U_0-\Delta\theta\hat U_e^1-\Delta\theta^2\hat U_e^2+\mathcal{O}(\Delta\theta^3) \end{equation} with $\hat U_0=e^{-2ik(x_1\hat\sigma_1^z+x_2\hat\sigma_2^z)}$ the optimal unitary generated by two perfect counter-propagating $\hat U_{kick}$ pulses, and $\hat U_e^1=i(\sigma_1^x\cos(kx_1)e^{-2ik\hat\sigma_2^zx_2}+\sigma_2^x\cos(kx_2)e^{-2ik\hat\sigma_2^zx_1})$ and $\hat U_e^2=\cos(kx_1)\cos(kx_2)\hat\sigma_1^x\hat\sigma_2^x+(e^{ikx_1\hat\sigma_1^z}+e^{ikx_z\hat\sigma_2^z})/4$ accounting for unrestored and incorrect motional dynamics. The total unitary of a gate can be approximated by the product of $N$ pairs \begin{equation} \langlebel{area} \hat U_{gate}\approx (1-\Delta\theta^2N/2)\hat U_{N}-N\Delta\theta\hat U_{err} \end{equation} with $\hat U_N=\hat U_0^N$ and collecting all the errant dynamics in $\hat U_{err}$ that it is assumed orthogonal to the ideal unitary $\hat U_0$. This is a conservative approximation that neglects terms that result in an incorrect motional state, but includes those that correctly restore the internal state \cite{Gale2020}. For any initial state $|\psi\ranglengle$ of the computational basis we can compare the dynamics of the optimal gate $\hat U_{opt}$ with the one generated by $\hat U_{gate}$. 
To this end, we estimate the fidelity \begin{equation} F=|\langle\psi|\hat U_{opt}^{\dag}\hat U_{gate}|\psi\rangle|^2 = (1-N\epsilon_A+N^2\epsilon_A^2/4)F_0, \end{equation} with $\epsilon_A=\Delta\theta^2.$ The magnitude of the fluctuations $\epsilon_A$ depends on the specific characteristics of the laser pulses. In real setups with picosecond pulses\ \cite{Heinrich2019, Campbell2010} these fluctuations induce errors of order $\epsilon_A\propto \Delta I/I\sim 10^{-3}$. However, these intensity fluctuations can be reduced experimentally, using methods such as adiabatic rapid passage with chirped laser pulses\ \cite{Malinovsky2001, Wunderlich2005, Heinrich2019b}. \section{Outlook} \label{outlook} Our analysis shows that it is possible to engineer ultra-fast gates $T<2\pi/\omega$, using pulse-picking strategies for an experimentally relevant setup\ \cite{Heinrich2019, Hussain2020}. Current two-qubit M{\o}lmer-S{\o}rensen gate operations require a duration of $\bar T\sim 40$ $\mu$s for entangling two qubits at a trapping frequency $\omega\simeq 2\pi\times 1.4$ MHz\ \cite{Bermudez2017}. Compared to these numbers, our scheme can provide a speedup factor $\bar T/T> 50$, for a conservative gate duration $T\sim 2\pi/\omega.$ Our investigation leaves some open questions, to be addressed in later works. The first one concerns the robustness of the protocol with respect to intensity fluctuations and spontaneous emission. Both problems may be overcome if we use STIRAP techniques \cite{Bergmann1998, Vitanov2017, Shapiro2007} to induce excitation between the $4\text{S}_{1/2}$ and a metastable state, such as $3\text{D}_{5/2}$ or $3\text{D}_{3/2}$. Experimentally, ${}^{40}$Ca$^+$ ions have been robustly manipulated using such techniques \cite{Sorensen2006, Moller2007, Timoney2011}. For our proposal, we could detune the pulsed laser exciting the $4\text{S}_{1/2}\to 4\text{P}_{3/2}$ transition and combine it with another pulse connecting the $4\text{P}_{3/2}\leftrightarrow 3\text{D}_{5/2}$ states. These improvements can be supplemented with pulse shaping techniques\ \cite{Palao2002, Romero-Isart2007, Doria2011}, to minimize the AC Stark-shifts and dephasing associated with high-intensity pulses. A second, more pressing question concerns the parallelizability and scalability of our pulsed schemes. Recent works have addressed theoretically\ \cite{Garcia-Ripoll2005, mehdi2020a, mehdi2020b} and demonstrated experimentally\ \cite{figgatt2019,lu2019} the simultaneous implementation of arbitrary two-qubit gates among a subset or all pairs of $K$ ions in a trap. We can use our two-step protocol to perform this task with significant speed-ups. As in this work, the first step is a continuous optimization of the desired gate operation, subject to the now $2K$ dynamical constraints\ \cite{Garcia-Ripoll2005}. The resulting pulsed protocol is fine tuned with our genetic algorithm, to match the repetition rate of the laser. The process has an increased optimization cost, but the multi-qubit gates do not seem to take longer than the two-qubit ones\ \cite{Garcia-Ripoll2005}. Current ion trap quantum computers are able to run programs with up to several hundred one- and two-qubit operations\ \cite{Martinez2016}. We expect that, with these methods and subsequent improvements, ion trap quantum computers will be able to improve this number by at least one, if not two, orders of magnitude, leading to an increased quantum volume in NISQ devices. 
Moreover, the estimated ideal gate fidelities are compatible with existing error thresholds\ \cite{Knill2005}, which makes these methods a promising alternative for implementing fault-tolerant computation schemes\ \cite{Bermudez2017}. \ack We acknowledge support from Project PGC2018-094792-B-I00 (MCIU/AEI/FEDER,UE), CSIC Research Platform PTI-001, and CAM/FEDER Project No. S2018/TCS-4342 (QUITEMAD-CM). The authors also acknowledge support by the Institut f\"ur Quanteninformation GmbH. \\ \end{document}
\begin{document} \title{Multi-Time Formulation of Pair Creation} \begin{abstract} In a recent work \cite{pt:2013c}, we have described a formulation of a model quantum field theory in terms of a multi-time wave function and proposed a suitable system of multi-time Schr\"odinger equations governing the evolution of that wave function. Here, we provide further evidence that multi-time wave functions provide a viable formulation of relevant quantum field theories by describing a multi-time formulation, analogous to the one in \cite{pt:2013c}, of another model quantum field theory. This model involves three species of particles, say $x$-particles, anti-$x$-particles, and $y$-particles, and postulates that a $y$-particle can decay into a pair consisting of an $x$ and an anti-$x$ particle, and that an $x$--anti-$x$ pair, when they meet, annihilate each other creating a $y$-particle. (Alternatively, the model can also be interpreted as representing beta decay.) The wave function is a multi-time version of a time-dependent state vector in Fock space (or rather, the appropriate product of Fock spaces) in the particle-position representation. We write down multi-time Schr\"odinger equations and verify that they are consistent, provided that an even number of the three particle species involved are fermionic. Key words: Multi-time Schr\"odinger equation; multi-time wave function; relativistic quantum theory; pair creation and annihilation; Dirac equation; model quantum field theory. \end{abstract} \section{Introduction and Overview} This note extends our discussion in \cite{pt:2013c} of multi-time wave functions in quantum field theory. There, we described a multi-time version of a model quantum field theory (QFT) involving two particle species, $x$-particles and $y$-particles, such that $x$-particles can emit and absorb $y$-particles, $x\rightleftarrows x+y$. Here, we set up a multi-time version of another model QFT involving three species, $x$-particles, $\overline{x}$-particles (which may be thought of as the anti-particles of the $x$-particles), and $y$-particles, with possible reactions $x + \overline{x} \rightleftarrows y$. The model is inspired by electrons ($x$), positrons ($\overline{x}$), and photons ($y$), but we do not try here to be as realistic as possible; rather, we aim at simplicity. In particular, we take all three species to be Dirac particles and do not exclude wave functions with contributions of negative energy; also, we ignore any connection between $x$-states of negative energy and $\overline{x}$-states. We also ignore the fact that our model Hamiltonian is ultraviolet divergent (already in the 1-time formulation) because that problem is orthogonal to the issue of setting up a multi-time formulation. The set of multi-time equations that we propose and discuss in this paper, Equations \eqref{multi123} below, can be applied also to other physical situations besides pair creation. Since the equations are independent of whether or not $\overline{x}$ is the anti-particle of $x$, we may write the reaction $y\rightleftarrows x+\overline{x}$ more abstractly as $a\rightleftarrows b+c$. Each of the three species $a$, $b$, and $c$ can be chosen to be either fermions or bosons, and assigned an arbitrary mass. 
Thus, besides pair creation, another scenario that is included is, as a variation of the reaction $x\rightleftarrows x+y$ of \cite{pt:2013c}, the reaction $x\rightleftarrows z+y$ in which an $x$-particle emits a $y$-particle and mutates into a $z$-particle; conversely, a $z$ absorbing a $y$ becomes an $x$. For example, beta decay is of this type; in fact, beta decay (roughly speaking, the decay of a neutron into a proton, an electron, and an anti-neutrino) actually consists of two decay events, of which the first is $d \to u+W^-$ (a down quark mutates into an up quark while emitting a negatively charged $W$ boson), and the second is $W^- \to e^- + \overline{\nu}_e$ (the $W^-$ decays into an electron and an anti-$\nu_e$, where $\nu_e$ is an electron-neutrino); that is, each of the two decay events is of the form $x\to z+y$. We remark that all fundamental processes of particle creation or annihilation in nature seem to be of one of the three forms $x\rightleftarrows x+y$, $x\rightleftarrows z+y$, or $y\rightleftarrows x+\overline{x}$. Our work in \cite{pt:2013c} and in the present paper shows that all of these processes can be formulated in terms of multi-time equations. Thereby, it contributes evidence in favor of the hypothesis that multi-time wave functions provide a viable formulation of all relevant QFTs. Multi-time wave functions were considered early on in the history of quantum theory, also for quantum field theory \cite{dirac:1932,dfp:1932,bloch:1934}. However, in these early papers, only electrons were regarded as particles, and photons were replaced by a field configuration. Bloch~\cite{bloch:1934} noted that multi-time equations require consistency conditions (see also \cite{pt:2013a}). Multi-time wave functions with a variable number of time variables (see below for explanation) were considered before in \cite{schweber:1961,DV82b,DV85,Nik10,pt:2013c}. Droz-Vincent \cite{DV82b,DV85} gave an example of a consistent multi-time evolution with interaction which, however, does not correspond to any ordinary QFT model. For further references about multi-time wave functions, a deeper discussion of consistency conditions, and no-go results about interaction potentials in multi-time equations, see \cite{pt:2013a}. For a comparison of the status and significance of multi-time formulations in classical and quantum physics, see \cite{pt:2013e}. The plan of this paper is as follows. After introducing notation in Section~\ref{sec:notation}, we first describe the model QFT in Section~\ref{sec:1time} in the usual one-time formulation and then give a multi-time formulation in Section~\ref{sec:multitime}; in particular, we specify a system of multi-time Schr\"odinger equations\footnote{The expression \emph{Schr\"o\-din\-ger equation} is not meant to imply that the Hamiltonian involves the Laplace operator, but is understood as including the Dirac equation.} for the multi-time wave function $\phi$, see \eqref{multi123}. The function $\phi$ is defined on the set $\mathscr{S}_{x\overline{x} y}$ of \emph{spacelike configurations}, which means any finite number of space-time points that are mutually spacelike or equal,\footnote{Note our usage of the word \emph{spacelike}: Two space-time points that are mutually spacelike are unequal, but a spacelike configuration may contain two particles at the same space-time point.} with each point marked as either an $x$-, an $\overline{x}$-, or a $y$-particle. 
The set $\mathscr{S}_{x\overline{x} y}$ can be regarded as a subset of $\Gamma(\mathbb{R}^4)\times \Gamma(\mathbb{R}^4)\times \Gamma(\mathbb{R}^4)$, with one factor for each particle species, $\mathbb{R}^4$ representing space-time, and the notation \begin{equation}\label{Gammadef} \Gamma(S) = \bigcup_{N=0}^\infty S^N \end{equation} for any set $S$. Since a function $f$ on $\Gamma(S)$ can be thought of as consisting of one function $f^{(N)}$ on each sector $S^N$, and since $f^{(N)}$ is a function of $N$ variables with values in $S$, we say that $f$ is a function with a variable number of variables. Correspondingly, $\phi$ on $\mathscr{S}_{x\overline{x} y}$ involves a variable number of time variables. If all time variables in $\phi$ are set equal (relative to some Lorentz frame $L$), we recover the wave function $\psi$ of the 1-time formulation. We then verify the consistency of our proposed system of equations in Section~\ref{sec:consistency}; that is, we verify (on a non-rigorous level) that the system possesses a unique solution $\phi$ for every initial datum $\phi_0$ (at all times set equal to zero). Our proposed multi-time QFT model is not fully covariant because it involves, as a coefficient of the pair creation term, an arbitrary but fixed complex-linear mapping $\tilde{g}:S\to S\otimes S$, with $S=\mathbb{C}^4$ the 4-dimensional complex spin space used in the Dirac equation, and nonzero mappings $\tilde{g}$ of this type are never Lorentz-invariant. Apart from this point, the model is covariant, as can be seen from the Lorentz transformation of the multi-time Schr\"odinger equations that we carry out in Section~\ref{sec:lorentz}. The natural generalization to curved space-time is described in Section~\ref{sec:curved}. As a last remark in this section, we find that the multi-time equations of the reaction $a\rightleftarrows b+c$ are consistent if and only if an \emph{even} number of the three species $a,b,c$ are \emph{fermions}, and an \emph{odd} number of them are \emph{bosons} (also in case some of the species coincide, such as $a=b$). That is, either all three are bosons, or two of them are fermions and one is a boson. This rule for which kinds of reactions are allowed agrees with which kinds of reactions occur in nature.\footnote{Indeed, a fermion cannot decay into two fermions, nor into two bosons, whereas a boson can decay into two fermions (e.g., photon $\to$ electron + positron) or two bosons (e.g., Higgs $\to W^+ + W^-$); furthermore, a fermion can decay into a fermion and a boson (e.g., $d\to u+W^-$) but a boson cannot.} This rule can also be derived from the conservation of spin and the spin--statistics relation (i.e., that bosons have integer and fermions half-odd spin), but here we obtain it differently, from the consistency of the multi-time equations. That is, a single-time formulation (i.e., a Hilbert space and Hamiltonian) can be (consistently) specified for each of the 8 combinations of $a,b,c$ being bosons or fermions, but in the multi-time formulation the rule above restricts the possibilities, thereby providing a possible explanation of why reactions violating the rule do not occur in nature. \section{Notation} \label{sec:notation} Our notation follows that of \cite{pt:2013c}. Space-time points are denoted by $x=(x^0,\boldsymbol{x})$ etc.; vectors in 3-dimensional physical space $\mathbb{R}^3$ are denoted by bold symbols such as $\boldsymbol{x}$. 
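As a small, purely illustrative check of the notion of spacelike configuration used above, the following sketch takes a toy marked configuration (arbitrary points, units with $c=1$, metric signature $(+,-,-,-)$) and verifies that all pairs of points are mutually spacelike or equal.
\begin{verbatim}
import numpy as np

# Check whether a finite marked configuration of space-time points is
# "spacelike" in the sense used here: any two points are mutually spacelike
# or equal.  The points below are arbitrary illustrative values.
def spacelike_or_equal(x, y):
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    interval = d[0] ** 2 - d[1:] @ d[1:]     # signature (+,-,-,-), c = 1
    return bool(np.allclose(d, 0) or interval < 0)

config = [("x",    (0.0, 0.0, 0.0, 0.0)),
          ("xbar", (0.1, 2.0, 0.0, 0.0)),
          ("y",    (0.0, 0.0, 3.0, 0.0))]

ok = all(spacelike_or_equal(p, q)
         for i, (_, p) in enumerate(config)
         for (_, q) in config[i + 1:])
print("spacelike configuration:", ok)
\end{verbatim}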
When writing down the time evolution equations, we make very many details explicit, at the risk of making the equations appear more complex than they are, in order to ensure full clarity. A spacelike configuration $q^4\in\mathscr{S}_{x\overline{x} y}$ will mostly be written as $q^4=\bigl( x^{4M},\overline{x}^{4\overline{M}}, y^{4N}\bigr)$ with the $x$-configuration $x^{4M} = (x_1,\ldots,x_j, \ldots,x_M)$, the $\overline{x}$-configuration $\overline{x}^{4\overline{M}} = (\overline{x}_1,\ldots,\overline{x}_{\overline\jmath}, \ldots,\overline{x}_{\overline{M}})$, and the $y$-configuration $y^{4N} = (y_1,\ldots,y_k, \ldots,y_N)$. All spin indices refer to the spin space $S=\mathbb{C}^4$ of the Dirac equation and thus run from 1 to 4; $r_j$ is the spin index of particle $x_j$, $\overline{r}_{\overline\jmath}$ that of $\overline{x}_{\overline\jmath}$, and $s_k$ that of $y_k$. We will often write only those spin indices that are of particular interest in the expression at hand; e.g., we may write $\psi_{s_{N+1}}$ for a wave function that still has the usual indices $r_1,\ldots, s_N$ but also one additional index $s_{N+1}$ (that is understood as listed after $s_N$). A hat (\,$\widehat{\ }$\,) denotes omission; e.g., $\psi_{\widehat{s_k}}$ is the wave function with the usual indices except $s_k$. Configurations on the hyperplane $t=0$ (relative to some Lorentz frame $L$) or just in 3-dimensional space, $q^3\in \Gamma(\mathbb{R}^3)^3$, are denoted by $q^3=\bigl( x^{3M},\overline{x}^{3\overline{M}}, y^{3N}\bigr)$ with the $x$-configuration $x^{3M} = (\boldsymbol{x}_1,\ldots,\boldsymbol{x}_j, \ldots,\boldsymbol{x}_M)$, the $\overline{x}$-configuration $\overline{x}^{3\overline{M}} = (\boldsymbol{x}bar_1,\ldots,\boldsymbol{x}bar_{\overline\jmath}, \ldots,\boldsymbol{x}bar_{\overline{M}})$, and the $y$-configuration $y^{3N} = (\boldsymbol{y}_1,\ldots,\boldsymbol{y}_k, \ldots,\boldsymbol{y}_N)$. The notation $x^{3M}\setminus \boldsymbol{x}_j$ means that $\boldsymbol{x}_j$ is omitted, $x^{3M}\setminus \boldsymbol{x}_j = (\boldsymbol{x}_1,\ldots,\boldsymbol{x}_{j-1},\boldsymbol{x}_{j+1},\ldots,\boldsymbol{x}_M)$; likewise with $x^{4M}\setminus x_j$. \section{Single-Time Formulation} \label{sec:1time} The model that this paper is about is inspired by the models in chapter 12 of \cite{schweber:1961}. Let $\varepsilon_x$ be $+1$ if the $x$-particles are bosons, and $-1$ if they are fermions; likewise with $\varepsilon_{\overline{x}}$ and $\varepsilon_y$. 
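Purely as bookkeeping, and not as part of the model, the following toy container illustrates what a ``function with a variable number of variables'' on $\Gamma(\mathbb{R}^3)^3$ looks like in practice: one array per sector $(M,\overline{M},N)$, here on a small finite lattice with four spin components per particle (all sizes are hypothetical placeholders).
\begin{verbatim}
import numpy as np

# One complex array per sector (M, Mbar, N); L lattice sites, 4 spin
# components per particle.  Hypothetical illustration only.
L, max_particles = 4, 1

sectors = {}
for M in range(max_particles + 1):
    for Mbar in range(max_particles + 1):
        for N in range(max_particles + 1):
            shape = (L, 4) * M + (L, 4) * Mbar + (L, 4) * N
            sectors[(M, Mbar, N)] = np.zeros(shape, dtype=complex)

sectors[(0, 0, 0)][()] = 1.0          # vacuum amplitude (0-dimensional array)
sectors[(1, 1, 0)][2, 0, 3, 1] = 0.5  # one x (site 2, spin 0), one xbar (site 3, spin 1)
print({key: val.shape for key, val in sectors.items()})
\end{verbatim}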
The 1-time wave function $\psi(t,q^3)$ is, at any time $t\in\mathbb{R}$, a spinor-valued function on $\Gamma(\mathbb{R}^3)^3$, \begin{equation} \psi=\psi\bigl(x^{3M},\overline{x}^{3\overline{M}},y^{3N}\bigr)=\psi_{r_1\ldots r_M\overline{r}_1\ldots \overline{r}_{\overline{M}}s_1\ldots s_N}\bigl(x^{3M},\overline{x}^{3\overline{M}},y^{3N}\bigr)\,, \end{equation} with the appropriate symmetry, \begin{equation}gin{subequations}\label{sym} \begin{equation}gin{align} \psi_{r_ir_j}(\ldots, x_i, \ldots ,x_j, \ldots) &= \varepsilon_x \, \psi_{r_jr_i}(\ldots, x_j, \ldots, x_i,\ldots)\,,\label{xsym}\\ \psi_{\overline{r}_{\overline\imath}\overline{r}_{\overline\jmath}}(\ldots, \overline{x}_{\overline\imath}, \ldots ,\overline{x}_{\overline\jmath}, \ldots) &= \varepsilon_{\overline{x}} \, \psi_{\overline{r}_{\overline\jmath}\overline{r}_{\overline\imath}}(\ldots, \overline{x}_{\overline\jmath}, \ldots, \overline{x}_{\overline\imath},\ldots)\,,\label{xbarsym}\\ \psi_{s_ks_\ell}(\ldots, y_k, \ldots ,y_\ell, \ldots) &= \varepsilon_y \, \psi_{s_\ell s_k}(\ldots, y_\ell, \ldots, y_k,\ldots)\,,\label{ysym} \end{align} \end{subequations} where the dots signify that all other variables are unchanged, and unchanged spin indices were not written at all (so $\psi_{s_\ell s_k}$ means that indices $s_k$ and $s_\ell$ have been interchanged). Correspondingly, $\psi$ lies in the Hilbert space \begin{equation}\label{Hilbertdef} \mathscr{H}=\mathscr{F}_x\otimes \mathscr{F}_{\overline{x}}\otimes \mathscr{F}_y \end{equation} with \begin{equation}\label{Fockdef} \mathscr{F}_u = \bigoplus_{N=0}^\infty S_{\varepsilon_u} L^2\Bigl(\mathbb{R}^{3N},(\mathbb{C}^4)^{\otimes N}\Bigr) \end{equation} the Fock space for species $u\in\{x,\overline{x},y\}$ and $S_{+1},S_{-1}$ the symmetrization and anti-symmetrization operators, respectively. The inner product in $\mathscr{H}$ is \begin{equation}gin{multline} \scp{\psi}{\chi} = \sum_{M,\overline{M},N=0}^\infty \:\int\limits_{\mathbb{R}^{3M}} dx^{3M} \!\! \int\limits_{\mathbb{R}^{3\overline{M}}}d\overline{x}^{3\overline{M}} \!\! \int\limits_{\mathbb{R}^{3N}}dy^{3N} \!\! \sum_{r_1\ldots r_M=1}^4 \sum_{\overline{r}_1\ldots \overline{r}_{\overline{M}}=1}^4\sum_{s_1\ldots s_N=1}^4 \times\\ \times\: \psi^*_{r_1\ldots s_N}\bigl(x^{3M},\overline{x}^{3\overline{M}},y^{3N}\bigr) \, \chi_{r_1\ldots s_N}\bigl(x^{3M},\overline{x}^{3\overline{M}},y^{3N}\bigr)\,. 
\end{multline} The 1-time wave function $\psi$ evolves according to the 1-time Schr\"odinger equation ($\hbar=1$) \begin{equation}gin{subequations}\label{Schr1Hdef} \begin{equation}\label{Schr1} i\frac{\partial \psi}{\partial t}=H\psi \end{equation} with the 1-time Hamiltonian $H$ on $\mathscr{H}$ given by \begin{equation}gin{align}\label{Hdef} &H\psi\bigl(x^{3M},\overline{x}^{3\overline{M}},y^{3N}\bigr) = \sum_{j=1}^MH_{x_j}^\mathrm{free} \psi + \sum_{\overline\jmath=1}^{\overline{M}}H_{\overline{x}_{\overline\jmath}}^\mathrm{free} \psi + \sum_{k=1}^N H_{y_k}^\mathrm{free}\psi\:+\nonumber\\[3mm] &\quad+ \sqrt{\frac{N+1}{M\overline{M}}} \sum_{j=1}^M \sum_{\overline\jmath=1}^{\overline{M}} \varepsilon_x^{j+1} \, \varepsilon_{\overline{x}}^{\overline\jmath+1}\, \varepsilon_y^N \sum_{s_{N+1}=1}^4 g_{r_j\overline{r}_{\overline\jmath}\, s_{N+1}}\delta^3(\boldsymbol{x}_j-\boldsymbol{x}bar_{\overline\jmath}) \, \times\nonumber\\&\qquad\qquad \times\: \psi_{\widehat{r_j}\widehat{\overline{r}_{\overline\jmath}}\, s_{N+1}}\Bigl(x^{3M}\setminus \boldsymbol{x}_j, \overline{x}^{3\overline{M}} \setminus \boldsymbol{x}bar_{\overline\jmath}, (y^{3N}, \boldsymbol{x}_j)\Bigr)\nonumber\\[4mm] &\quad+ \sqrt{\frac{(M+1)(\overline{M}+1)}{N}} \sum_{k=1}^N \varepsilon_x^M\,\varepsilon_{\overline{x}}^{\overline{M}}\,\varepsilon_y^{k+1} \sum_{r_{M+1}\overline{r}_{\overline{M}+1}=1}^4 g^*_{r_{M+1}\overline{r}_{\overline{M}+1} s_k}\, \times\nonumber\\&\qquad\qquad \times\: \psi_{r_{M+1} \overline{r}_{\overline{M}+1}\widehat{s_k}}\Bigl((x^{3M}, \boldsymbol{y}_k),(\overline{x}^{3\overline{M}},\boldsymbol{y}_k),y^{3N}\setminus \boldsymbol{y}_k\Bigr)\,, \end{align} \end{subequations} where the term in the second and third line is meant to vanish whenever $M=0$ or $\overline{M}=0$, the term in the fourth and fifth line is meant to vanish whenever $N=0$, $\varepsilon=\pm 1$ (as readers may recall) depending on whether the particles are bosons or fermions, and the free Hamiltonians are free Dirac operators (with speed of light set to $c=1$), \begin{equation}gin{subequations}\label{Hfreedef} \begin{equation}gin{align} H^\mathrm{free}_{x_j} \psi_{r_j}(x^{3M},\overline{x}^{3\overline{M}},y^{3N}) &= \sum_{r_j'=1}^4\biggl( -i \sum_{a=1}^3 (\alpha_a)_{r_jr_j'} \, \frac{\partial}{\partial x_j^a} + m_x \begin{equation}ta_{r_jr_j'} \biggr) \psi_{r_j'}(x^{3M},\overline{x}^{3\overline{M}},y^{3N}) \\ H^\mathrm{free}_{\overline{x}_{\overline\jmath}} \psi_{\overline{r}_{\overline\jmath}}(x^{3M},\overline{x}^{3\overline{M}},y^{3N}) &= \sum_{\overline{r}_{\overline\jmath}'=1}^4\biggl( -i \sum_{a=1}^3 (\alpha_a)_{\overline{r}_{\overline\jmath}\overline{r}_{\overline\jmath}'} \, \frac{\partial}{\partial \overline{x}_{\overline\jmath}^a} + m_{\overline{x}} \begin{equation}ta_{\overline{r}_{\overline\jmath}\overline{r}_{\overline\jmath}'} \biggr) \psi_{\overline{r}_{\overline\jmath}'}(x^{3M},\overline{x}^{3\overline{M}},y^{3N}) \\ H^\mathrm{free}_{y_k} \psi_{s_k}(x^{3M},\overline{x}^{3\overline{M}},y^{3N}) &= \sum_{s_k'=1}^4\biggl(\! -i \sum_{a=1}^3 (\alpha_a)_{s_ks_k'} \, \frac{\partial}{\partial y_k^a} + m_y \begin{equation}ta_{s_ks_k'} \!\biggr) \psi_{s_k'}(x^{3M},\overline{x}^{3\overline{M}},y^{3N}) \end{align} \end{subequations} with mass parameters $m_x,m_{\overline{x}},m_y\geq 0$. 
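As a sanity check on the free Dirac operators just defined, the momentum-space Hamiltonian $\boldsymbol{\alpha}\cdot\boldsymbol{p}+m\beta$ should have spectrum $\pm\sqrt{\boldsymbol{p}^2+m^2}$. The sketch below verifies this numerically, using the standard Dirac representation of $\alpha_a$ and $\beta$ (a conventional choice, not fixed by the text) and arbitrary illustrative values of $m$ and $\boldsymbol{p}$; units with $\hbar=c=1$.
\begin{verbatim}
import numpy as np

# Pauli matrices and the Dirac-representation alpha, beta.
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
zero, one = np.zeros((2, 2)), np.eye(2)
alpha = [np.block([[zero, s], [s, zero]]) for s in sigma]
beta  = np.block([[one, zero], [zero, -one]])

# Momentum-space free Dirac Hamiltonian H(p) = alpha . p + m * beta.
m = 1.0
p = np.array([0.3, -0.4, 1.2])          # illustrative momentum
H = sum(pi * ai for pi, ai in zip(p, alpha)) + m * beta

# Eigenvalues come in two doubly degenerate pairs +/- sqrt(p^2 + m^2).
print(np.sort(np.linalg.eigvalsh(H)))
print(np.sqrt(p @ p + m ** 2))
\end{verbatim}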
Equivalently, the Hamiltonian can be written as \begin{equation} H=H_x + H_{\overline{x}} + H_y + H_{\mathrm{int}} \end{equation} with \begin{equation}gin{subequations} \begin{equation}gin{align} H_{x} &= \int d^3\boldsymbol{x} \sum_{r,r'=1}^4 a_r^\dagger(\boldsymbol{x}) \bigl(-i\boldsymbol{\alpha}_{rr'}\cdot \nabla+m_x\begin{equation}ta_{rr'}\bigr) a_{r'}(\boldsymbol{x})\label{Hxa}\\ H_\mathrm{int} &= \int d^3\boldsymbol{x}\sum_{r,\overline{r},s=1}^4 \Big( g_{r\overline{r} s} \, a_r^\dagger(\boldsymbol{x})\, \overline{a}_{\overline{r}}^\dagger(\boldsymbol{x})\, b_s(\boldsymbol{x}) + g_{r\overline{r} s}^* \, a_r(\boldsymbol{x})\, \overline{a}_{\overline{r}}(\boldsymbol{x})\, b_s^\dagger(\boldsymbol{x}) \Big) \end{align} \end{subequations} and expressions analogous to \eqref{Hxa} for $H_{\overline{x}}$ and $H_y$. Here, $^\dagger$ denotes the adjoint operator, and $a_s(\boldsymbol{x}),\overline{a}_s(\boldsymbol{x}),b_s(\boldsymbol{x})$ the annihilation operators for an $x,\overline{x},y$-particle with spin component $s$ at location $\boldsymbol{x}$ in position space, explicitly defined by \begin{equation}gin{subequations} \begin{equation}gin{align} \bigl(a_r(\boldsymbol{x})\,\psi\bigr)(x^{3M},\overline{x}^{3\overline{M}},y^{3N}) &= \sqrt{M+1}\; \varepsilon_x^M\, \psi_{r_{M+1}=r}\bigl((x^{3M},\boldsymbol{x}),\overline{x}^{3\overline{M}},y^{3N}\bigr)\,,\label{adef}\\ \bigl(a_r^\dagger(\boldsymbol{x})\,\psi\bigr)(x^{3M},\overline{x}^{3\overline{M}},y^{3N}) &= \frac{1}{\sqrt{M}} \sum_{j=1}^M \varepsilon_x^{j+1} \, \delta_{rr_j}\,\delta^3(\boldsymbol{x}_j-\boldsymbol{x})\, \psi_{\widehat{r_j}}\bigl(x^{3M}\setminus \boldsymbol{x}_j,\overline{x}^{3\overline{M}},y^{3N}\bigr) \end{align} \end{subequations} and correspondingly for $\overline{a}_s(\boldsymbol{x})$ and $b_s(\boldsymbol{x})$. \section{Multi-Time Formulation} \label{sec:multitime} The constant $\kappa\in \mathbb{R}$ that we use below can be chosen to be $1/2$; we will show in Remark~\ref{rem:kappa} below that its value actually does not matter. 
The multi-time wave function $\phi$ is supposed to be defined on $\mathscr{S}_{x\overline{x} y}$ and have values $\phi(x^{4M},\overline{x}^{4\overline{M}},y^{4N})\in S^{\otimes (M+ \overline{M}+N)}$ with $S=\mathbb{C}^4$; it is governed by the multi-time Schr\"odinger equations \begin{equation}gin{subequations}\label{multi123} \begin{equation}gin{align} &i\frac{\partial}{\partial x_j^0}\phi\bigl(x^{4M},\overline{x}^{4\overline{M}},y^{4N}\bigr) = H_{x_j}^\mathrm{free}\phi\bigl(x^{4M},\overline{x}^{4\overline{M}},y^{4N}\bigr) + \kappa \sqrt{\frac{N+1}{M\overline{M}}} \sum_{\overline\jmath=1}^{\overline{M}} \varepsilon_x^{j+1}\,\varepsilon_{\overline{x}}^{\overline\jmath+1}\,\varepsilon_y^N \,\times\nonumber\\ &\qquad \times \sum_{s_{N+1}=1}^4 \overline{G}_{r_j \,\overline{r}_{\overline\jmath}\, s_{N+1}}(\overline{x}_{\overline\jmath}-x_j)\, \phi_{\widehat{r_j}\,\widehat{\overline{r}_{\overline\jmath}}\,s_{N+1}}\Bigl(x^{4M}\setminus x_j,\overline{x}^{4\overline{M}}\setminus \overline{x}_{\overline\jmath}, (y^{4N}, x_j)\Bigr)\label{multi1}\\[3mm] &i\frac{\partial}{\partial \overline{x}_{\overline\jmath}^0}\phi\bigl(x^{4M},\overline{x}^{4\overline{M}},y^{4N}\bigr) = H_{\overline{x}_{\overline\jmath}}^\mathrm{free} \phi\bigl(x^{4M},\overline{x}^{4\overline{M}},y^{4N}\bigr) + (1-\kappa) \sqrt{\frac{N+1}{M\overline{M}}}\sum_{j=1}^M \varepsilon_x^{j+1}\,\varepsilon_{\overline{x}}^{\overline\jmath+1}\,\varepsilon_y^N\, \times\nonumber\\ &\qquad\times \: \sum_{s_{N+1}=1}^4 G_{r_j \, \overline{r}_{\overline\jmath}\, s_{N+1}}(x_j-\overline{x}_{\overline\jmath})\, \phi_{\widehat{r_j}\,\widehat{\overline{r}_{\overline\jmath}}\,s_{N+1}}\Bigl(x^{4M}\setminus x_j, \overline{x}^{4\overline{M}}\setminus \overline{x}_{\overline\jmath}, (y^{4N}, \overline{x}_{\overline\jmath})\Bigr)\label{multi2}\\[3mm] &i\frac{\partial}{\partial y_k^0}\phi\bigl(x^{4M},\overline{x}^{4\overline{M}},y^{4N}\bigr) = H_{y_k}^\mathrm{free}\phi\bigl(x^{4M},\overline{x}^{4\overline{M}},y^{4N}\bigr) + \sqrt{\frac{(M+1)(\overline{M}+1)}{N}} \varepsilon_x^M\,\varepsilon_{\overline{x}}^{\overline{M}}\,\varepsilon_y^{k+1} \,\times\nonumber\\ &\qquad\times \: \sum_{r_{M+1},\overline{r}_{\overline{M}+1}=1}^4 g^*_{r_{M+1} \overline{r}_{\overline{M}+1} s_k} \, \phi_{r_{M+1} \,\overline{r}_{\overline{M}+1} \widehat{s_k}}\Bigl( (x^{4M}, y_k),(\overline{x}^{4\overline{M}}, y_k),y^{4N}\setminus y_k \Bigr)\,,\label{multi3} \end{align} \end{subequations} where $G_{r\overline{r} s}(x)$ and $\overline{G}_{r\overline{r} s}(\overline{x})$ are appropriate Green functions, defined to be the solutions of the free Dirac equations \begin{equation}gin{subequations} \begin{equation}gin{align} i\frac{\partial G}{\partial x^0} &= H_x^\mathrm{free} G\label{Gevol}\\[2mm] i\frac{\partial \overline{G}}{\partial \overline{x}^0} &= H_{\overline{x}}^\mathrm{free} \overline{G}\label{Gbarevol} \end{align} with initial conditions \begin{equation}gin{align} G_{r\overline{r} s}(0,\boldsymbol{x}) &= g_{r\overline{r} s} \, \delta^3(\boldsymbol{x})\\ \overline{G}_{r\overline{r} s}(0,\boldsymbol{x}bar) &= g_{r\overline{r} s} \, \delta^3(\boldsymbol{x}bar)\,. \end{align} \end{subequations} We will show in Section~\ref{sec:consistency} that the system \eqref{multi123} is consistent on $\mathscr{S}_{x\overline{x} y}$ if and only if \begin{equation}\label{productepsilons} \varepsilon_x \, \varepsilon_{\overline{x}} \, \varepsilon_y =1\,, \end{equation} i.e., if and only if the number of fermionic species is even and that of bosonic ones is odd. 
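The condition \eqref{productepsilons} can also be read off by brute force: of the $2^3$ boson/fermion assignments for the three species, exactly those with an even number of fermionic species satisfy $\varepsilon_x\varepsilon_{\overline{x}}\varepsilon_y=1$. The following enumeration is merely a restatement of this counting.
\begin{verbatim}
from itertools import product

# Which of the eight boson/fermion assignments give eps_x*eps_xbar*eps_y = 1?
for eps in product([+1, -1], repeat=3):
    allowed = (eps[0] * eps[1] * eps[2] == 1)
    labels = ["boson" if e == +1 else "fermion" for e in eps]
    print(labels, "-> consistent" if allowed else "-> inconsistent")
\end{verbatim}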
\noindent{\bf Remarks.} \begin{equation}gin{enumerate} \item The single-time evolution \eqref{Schr1Hdef} is contained in the multi-time equations \eqref{multi123} (for any $\kappa$) in the sense that if we set all time variables equal in $\phi$ then the 1-time wave function thus obtained, $\psi(t,x^{3M},\overline{x}^{3\overline{M}},y^N)=\phi(t,\boldsymbol{x}_1,\ldots,t,\boldsymbol{y}_N)$, obeys \eqref{Schr1Hdef}. \item\label{rem:collision} As discussed in Remark 5 in Section 2.2 of \cite{pt:2013c}, something needs to be said about how \eqref{multi123} should be understood at the tips of $\mathscr{S}_{x\overline{x} y}$ (i.e., at configurations with two particles at the same location, henceforth called \emph{collision configurations}). That is because, for example at a configuration with $x_j=y_k$, $\partial \phi/\partial x_j^0$ does not make sense for a function $\phi$ on $\mathscr{S}_{x\overline{x} y}$ as varying $x^0_j$ while keeping $y_k$ fixed will lead away from $\mathscr{S}_{x\overline{x} y}$. The rule formulated in \cite{pt:2013c} for these cases should be adopted here as well: At such a configuration, \eqref{multi123} should be understood as prescribing the directional derivative normally written as $(\partial/\partial x_j^0 + \partial/\partial y_k^0)\phi$ (and correspondingly for other collisions). \item\label{rem:kappa} The system \eqref{multi123} for any value of $\kappa\in\mathbb{R}$ is actually equivalent to the system \eqref{multi123} for any other value of $\kappa\in\mathbb{R}$. Indeed, it is known \cite[Thm.~1.2 on p.~15]{thaller:1992} that the Green function of the Dirac equation (such as $G$ and $\overline{G}$) vanishes on spacelike vectors; thus, the terms involving $\kappa$ vanish at \emph{collision-free} spacelike configurations. They do not vanish at a collision between (say) $x_j$ and $\overline{x}_{\overline\jmath}$; there, according to Remark~\ref{rem:collision} above, the equations \eqref{multi123} are understood as specifying not $\partial \phi/\partial x_j^0$ and $\partial \phi/\partial \overline{x}_{\overline\jmath}^0$ individually but only their sum, from which the factor $\kappa$ cancels out. Thus, the solution $\phi$ of \eqref{multi123} is independent of $\kappa$; put differently, the one multi-time evolution law possesses several representations by systems of equations (involving different values of $\kappa$), roughly analogous to the way a gauge field possesses several representations corresponding to different gauges. That is, the choice of $\kappa$ is a matter of mere aesthetics; natural choices are $\kappa=0$, $\kappa=1/2$, or $\kappa=1$. \item In the same way as for Assertion~3 in \cite{pt:2013c}, one can show that the multi-time wave function $\phi$ of \eqref{multi123} can be expressed as follows in terms of the particle annihilation operators at any $(x^{4M},\overline{x}^{4\overline{M}},y^{4N})\in \mathscr{S}_{x\overline{x} y}$: \begin{equation}gin{multline} \phi\bigl(x^{4M},\overline{x}^{4\overline{M}},y^{4N}\bigr) = \frac{\varepsilon_x^{M(M-1)/2} \varepsilon_{\overline{x}}^{\overline{M}(\overline{M}-1)/2} \varepsilon_y^{N(N-1)/2}}{\sqrt{M!\overline{M}!N!}} \:\times\\ \times \: \Bigl\langle \emptyset \Big| a_{r_1}(x_1)\cdots a_{r_M}(x_M)\, \overline{a}_{\overline{r}_1}(\overline{x}_1) \cdots \overline{a}_{\overline{r}_{\overline{M}}}(\overline{x}_{\overline{M}})\, b_{s_1}(y_1)\cdots b_{s_N}(y_N) \Big| \psi_0 \Bigr\rangle\,. 
\end{multline} Here, $\psi_0$ is obtained from $\phi$ by setting all time variables equal to zero, and $a_r(t,\boldsymbol{x})=e^{iHt}a_r(\boldsymbol{x})e^{-iHt}$ is the Heisenberg-evolved annihilation operator with the single-time Hamiltonian $H$ as in \eqref{Hdef}. \item For any spacelike hypersurface $\Sigma$, let $\psi_\Sigma$ be the restriction of $\phi$ to $\Gamma(\Sigma)^3$ (i.e., to configurations on $\Sigma$), and let $\tilde\psi_\Sigma=F_{\Sigma\to\Sigma_0}\psi_\Sigma$ be the interaction picture wave function, where $\Sigma_0$ is the hypersurface of $t=0$ (in the Lorentz frame $L$) and $F_{\Sigma_1\to\Sigma_2}$ is the free time evolution from $\Sigma_1$ to $\Sigma_2$. Then $\tilde\psi$ obeys the Tomonaga--Schwinger equation \begin{equation}\label{TS} i\bigl(\tilde\psi_{\Sigma'}-\tilde\psi_\Sigma\bigr) = \biggl( \int_{\Sigma}^{\Sigma'} \!\!\!\! d^4 x \, \mathcal{H}_I(x)\biggr)\, \tilde\psi_\Sigma \end{equation} for infinitesimally neighboring spacelike hypersurfaces $\Sigma,\Sigma'$, with the interaction Hamiltonian density in the interaction picture given by \begin{equation} \mathcal{H}_I(t,\boldsymbol{x})= e^{iH^\mathrm{free} t} \sum_{r,\overline{r},s=1}^4 \Bigl( g_{r\overline{r} s} \, a_r^\dagger(\boldsymbol{x})\, \overline{a}_{\overline{r}}^\dagger(\boldsymbol{x})\, b_s(\boldsymbol{x}) + g_{r\overline{r} s}^* \, a_r(\boldsymbol{x})\, \overline{a}_{\overline{r}}(\boldsymbol{x})\, b_s^\dagger(\boldsymbol{x}) \Bigr) e^{-iH^\mathrm{free} t}\,. \end{equation} This can be derived in much the same way as Assertion~5 in \cite{pt:2013c}, using Assertion~10 of \cite{pt:2013c}. \end{enumerate} \section{Consistency} \label{sec:consistency} As discussed in detail in \cite{pt:2013a}, there is a non-trivial condition for a system of multi-time equations to be consistent, i.e., to possess a solution $\phi$ for arbitrary initial conditions at time 0 (i.e., all times equal to 0). In Sections~5.1--5.5 of \cite{pt:2013c}, we have shown for multi-time equations of the type considered there that (leaving aside the ultraviolet divergence) they are consistent on the set of spacelike configurations if, at every spacelike configuration $q$, the \emph{consistency condition} \begin{equation}\label{consistency} \biggl[ i\frac{\partial}{\partial t_j}-H_{j}, i\frac{\partial}{\partial t_k}-H_{k}\biggr]=0 \end{equation} holds for any two \emph{non-colliding} particles $j,k$ belonging to $q$ (see in particular Footnote 13 in Section 5.5 of \cite{pt:2013c}). The arguments described there apply to \eqref{multi123} as well. Therefore, the consistency of \eqref{multi123} follows if \eqref{consistency} holds for any two non-colliding particles, where $H_j\phi$ is the right-hand side of \eqref{multi1}, \eqref{multi2}, or \eqref{multi3}, whichever is appropriate depending on the species of particle $j$. This is indeed the case if \eqref{productepsilons} holds. We report here the commutators. Any pair of particles is of one of these six forms: $xx$, $\overline{x}\overline{x}$, $yy$, $x\overline{x}$, $xy$, or $\overline{x} y$. Thus, six commutators need to be computed. 
Let us begin with the $yy$ commutator (and note that $N\geq 2$ whenever there are two $y$ variables for which we can consider the commutator):
\begin{align}
\label{comm3_neu}&\Big[i\partial_{y_k^0} - H_{y_k}, i\partial_{y_{\ell}^0} - H_{y_{\ell}}\Big] \phi \nonumber \\
&= \sqrt{\frac{(M+1)(M+2)(\overline{M}+1)(\overline{M}+2)}{N(N-1)}} \sum_{r_{M+1},r_{M+2},\overline{r}_{\overline{M}+1},\overline{r}_{\overline{M}+2}=1}^4 g^*_{r_{M+1}\overline{r}_{\overline{M}+1}s_k} g^*_{r_{M+2}\overline{r}_{\overline{M}+2}s_{\ell}} \times \nonumber \\
&\quad \times \varepsilon_y^{k+\ell} (\varepsilon_x\varepsilon_{\overline{x}}\varepsilon_y-1) \phi_{r_{M+1}r_{M+2}\overline{r}_{\overline{M}+1}\overline{r}_{\overline{M}+2}\widehat{s_k}\widehat{s_{\ell}}}\Bigl((x^{4M},y_k,y_{\ell}), (\overline{x}^{4\overline{M}},y_k,y_{\ell}), \bigl(y^{4N} \setminus \{y_k, y_{\ell}\}\bigr)\Bigr),
\end{align}
assuming $k<\ell$. This is a rather unwieldy expression, but the only aspect that matters at this point is the factor $\varepsilon_x\varepsilon_{\overline{x}}\varepsilon_y-1$, which makes the whole expression vanish if \eqref{productepsilons} holds. Here are all six commutators, with those terms containing a factor of $\varepsilon_x\varepsilon_{\overline{x}}\varepsilon_y-1$ left out:
\begin{subequations}
\begin{align}
&\Big[i\partial_{x_i^0} - H_{x_i}, i\partial_{x_j^0} - H_{x_j}\Big] = 0\,,\\
&\Big[i\partial_{\overline{x}_{\overline\imath}^0} - H_{\overline{x}_{\overline\imath}}, i\partial_{\overline{x}_{\overline\jmath}^0} - H_{\overline{x}_{\overline\jmath}}\Big] = 0\,,\\
&\Big[i\partial_{y_k^0} - H_{y_k}, i\partial_{y_{\ell}^0} - H_{y_{\ell}}\Big] = 0\,,\\
&\Big[i\partial_{x_i^0} - H_{x_i}, i\partial_{\overline{x}_{\overline\jmath}^0} - H_{\overline{x}_{\overline\jmath}}\Big] \phi \nonumber \\
&\quad= -(1-\kappa) \,\sqrt{\frac{N+1}{M\overline{M}}}\, \varepsilon_x^{i+1} \, \varepsilon_{\overline{x}}^{\overline\jmath+1} \, \varepsilon_y^N \sum_{s_{N+1}=1}^4 \bigg\{ \big( i\partial_{x_i^0}-H_{x_i}^\mathrm{free} \big) G_{r_i\,\overline{r}_{\overline\jmath}\,s_{N+1}}(x_i-\overline{x}_{\overline\jmath}) \bigg\}~\times \nonumber\\
&\quad\quad\quad\times~ \phi_{\widehat{r_i}\, \widehat{\overline{r}_{\overline\jmath}}\, s_{N+1}}\Bigl(x^{4M} \setminus x_i,\overline{x}^{4\overline{M}} \setminus \overline{x}_{\overline\jmath},(y^{4N}, \overline{x}_{\overline\jmath})\Bigr) \nonumber \\
&\quad\quad +\kappa \,\sqrt{\frac{N+1}{M\overline{M}}} \, \varepsilon_x^{i+1} \, \varepsilon_{\overline{x}}^{\overline\jmath+1}\, \varepsilon_y^N \sum_{s_{N+1}=1}^4 \bigg\{ \big( i\partial_{\overline{x}_{\overline\jmath}^0}-H_{\overline{x}_{\overline\jmath}}^\mathrm{free} \big) \overline{G}_{r_i\,\overline{r}_{\overline\jmath}\,s_{N+1}}(\overline{x}_{\overline\jmath}-x_i) \bigg\} ~\times\nonumber\\
&\quad\quad\quad \times~ \phi_{\widehat{r_i}\, \widehat{\overline{r}_{\overline\jmath}}\, s_{N+1}}\Bigl(x^{4M} \setminus x_i,\overline{x}^{4\overline{M}} \setminus \overline{x}_{\overline\jmath},(y^{4N}, x_i)\Bigr)\,,\\
\intertext{which vanishes because each curly bracket vanishes due to \eqref{Gevol} and \eqref{Gbarevol}, and}
&\Big[i\partial_{x_j^0} - H_{x_j}, i\partial_{y_k^0} - H_{y_k}\Big] \phi \nonumber \\
&\quad= -\kappa \, \varepsilon_x^{M+j+1}\, \varepsilon_y^{N+k} \!\!\!\! \sum_{r_{M+1},\overline{r}_{\overline{M}+1},s_{N+1}=1}^4 \!\!\!\! g_{r_{M+1}\overline{r}_{\overline{M}+1}s_k}^* \overline{G}_{r_j\overline{r}_{\overline{M}+1}s_{N+1}}(y_k-x_j) ~\times \nonumber \\
&\quad\quad \quad \times ~ \phi_{r_{M+1}\widehat{r_j}\widehat{s_k}s_{N+1}}\Bigl((x^{4M} \setminus x_j,y_k), \overline{x}^{4\overline{M}}, (y^{4N} \setminus y_k, x_j)\Bigr)\,,\\
\intertext{which vanishes on spacelike configurations for which $x_j$ does not collide with $y_k$ because then $y_k$ is spacelike separated from $x_j$, and it is known \cite[Thm.~1.2 on p.~15]{thaller:1992} that the Green function $\overline{G}$ for the Dirac equation vanishes on spacelike vectors. Finally,}
&\Big[i\partial_{\overline{x}_{\overline\jmath}^0} - H_{\overline{x}_{\overline\jmath}}, i\partial_{y_k^0} - H_{y_k}\Big] \phi \nonumber \\
&\quad= -(1-\kappa)\, \varepsilon_{\overline{x}}^{\overline{M}+\overline\jmath+1} \, \varepsilon_y^{N+k} \!\!\!\! \sum_{r_{M+1},\overline{r}_{\overline{M}+1},s_{N+1}=1}^4 \!\!\!\! g_{r_{M+1}\overline{r}_{\overline{M}+1}s_k}^* \, G_{r_{M+1}\overline{r}_{\overline\jmath}\,s_{N+1}}(y_k-\overline{x}_{\overline\jmath})~\times \nonumber\\
&\quad\quad \quad \times ~ \phi_{\overline{r}_{\overline{M}+1}\widehat{\overline{r}_{\overline\jmath}}\,\widehat{s_k}s_{N+1}}\Bigl(x^{4M}, (\overline{x}^{4\overline{M}} \setminus \overline{x}_{\overline\jmath},y_k), (y^{4N} \setminus y_k, \overline{x}_{\overline\jmath})\Bigr)\,,
\end{align}
\end{subequations}
which vanishes on spacelike configurations for which $\overline{x}_{\overline\jmath}$ does not collide with $y_k$ because the Green function $G$ vanishes on spacelike vectors. This completes our consistency proof. Conversely, consistency on $\mathscr{S}_{x\overline{x} y}$ fails if \eqref{productepsilons} is violated. Indeed, in this case, i.e., if $\varepsilon_x\varepsilon_{\overline{x}}\varepsilon_y=-1$, \eqref{comm3_neu} shows that the commutator (as an operator acting on arbitrary functions $\phi$ on $\mathscr{S}_{x\overline{x} y}$) is non-zero at every configuration. By the results of Section~5.5 of \cite{pt:2013c}, this fact at any collision-free configuration implies that the Equations \eqref{multi123} are inconsistent. We further remark that if, in the reaction $a \rightleftarrows b+c$, some of the particle species coincide (as in $x\rightleftarrows x+y$, $x\rightleftarrows y+y$, or $x\rightleftarrows x+x$), then an appropriate variant of the system \eqref{multi123} can straightforwardly be formulated, and that variant is consistent if and only if $\varepsilon_a \varepsilon_b \varepsilon_c =1$, i.e., if an even number of the three particles are fermions.

\section{Behavior Under Lorentz Transformations}
\label{sec:lorentz}

Our discussion in this section follows the one in Section 2.4 of \cite{pt:2013c} for the multi-time equations considered there. To investigate the behavior of the system \eqref{multi123} under Lorentz transformations, let us first slightly reformulate \eqref{multi123}. Let us write $\partial_{j\mu}$ [respectively, $\partial_{\overline\jmath \mu},\partial_{k\mu}$] for $\partial/\partial x^\mu_j$ [respectively, $\partial/\partial \overline{x}_{\overline\jmath}^\mu,\partial/\partial y^\mu_k$], so that the name of the particle label conveys whether we are considering an $x$-, $\overline{x}$-, or $y$-particle. Similarly, let $\gamma_j^\mu$ [$\gamma_{\overline\jmath}^\mu,\gamma_k^\mu$] be the Dirac gamma matrices acting on the index $r_j$ [$\overline{r}_{\overline\jmath},s_k$].
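For orientation, here is the single-particle version of the algebraic step carried out in the next display (standard Dirac algebra; the relations $\alpha^a=\gamma^0\gamma^a$, $\beta=\gamma^0$ and $(\gamma^0)^2=1$ are our choice of conventions, on which the resulting covariant equations do not depend): starting from $i\partial_0\psi=H^\mathrm{free}\psi$ with $H^\mathrm{free}=-i\alpha^a\partial_a+\beta m$ and multiplying by $\gamma^0$ yields
\[
\gamma^0\bigl(i\partial_0-H^\mathrm{free}\bigr)\psi = \bigl(i\gamma^0\partial_0+i\gamma^a\partial_a-m\bigr)\psi = \bigl(i\gamma^\mu\partial_\mu-m\bigr)\psi\,.
\]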
Starting from \eqref{multi1} [\eqref{multi2},\eqref{multi3}], moving the free Hamiltonian to the left-hand side, and multiplying by $\gamma_j^0$ [$\gamma_{\overline\jmath}^0,\gamma_k^0$], we obtain
\begin{subequations}\label{multi456}
\begin{align}
&\Bigl(i\gamma_{j}^\mu\partial_{j\mu} - m_x\Bigr) \phi\bigl(x^{4M},\overline{x}^{4\overline{M}},y^{4N}\bigr) = \kappa \sqrt{\frac{N+1}{M\overline{M}}} \sum_{\overline\jmath=1}^{\overline{M}} \varepsilon_x^{j+1}\,\varepsilon_{\overline{x}}^{\overline\jmath+1}\,\varepsilon_y^N \,\times\nonumber\\
&\qquad \times \sum_{s_{N+1}=1}^4 \tilde{\overline{G}}_{r_j\,\overline{r}_{\overline\jmath}}^{~~~~\:s_{N+1}}(\overline{x}_{\overline\jmath}-x_j)\:\: \phi_{\widehat{r_j}\,\widehat{\overline{r}_{\overline\jmath}}\,s_{N+1}}\Bigl(x^{4M}\setminus x_j,\overline{x}^{4\overline{M}}\setminus \overline{x}_{\overline\jmath}, (y^{4N}, x_j)\Bigr)\label{multi4}\\[3mm]
&\Bigl(i\gamma_{\overline\jmath}^\mu\partial_{\overline\jmath\mu} - m_{\overline{x}}\Bigr)\phi\bigl(x^{4M},\overline{x}^{4\overline{M}},y^{4N}\bigr) = (1-\kappa) \sqrt{\frac{N+1}{M\overline{M}}}\sum_{j=1}^M \varepsilon_x^{j+1}\,\varepsilon_{\overline{x}}^{\overline\jmath+1}\,\varepsilon_y^N\, \times\nonumber\\
&\qquad\times \: \sum_{s_{N+1}=1}^4 \tilde{G}_{r_j \, \overline{r}_{\overline\jmath}}^{~~~~\: s_{N+1}}(x_j-\overline{x}_{\overline\jmath})\:\: \phi_{\widehat{r_j}\,\widehat{\overline{r}_{\overline\jmath}}\,s_{N+1}}\Bigl(x^{4M}\setminus x_j, \overline{x}^{4\overline{M}}\setminus \overline{x}_{\overline\jmath}, (y^{4N}, \overline{x}_{\overline\jmath})\Bigr)\label{multi5}\\[3mm]
&\Bigl( i\gamma_k^\mu\partial_{k\mu} - m_y\Bigr) \phi\bigl(x^{4M},\overline{x}^{4\overline{M}},y^{4N}\bigr) = \sqrt{\frac{(M+1)(\overline{M}+1)}{N}} \varepsilon_x^M\,\varepsilon_{\overline{x}}^{\overline{M}}\,\varepsilon_y^{k+1} \,\times\nonumber\\
&\qquad\times \: \sum_{r_{M+1},\overline{r}_{\overline{M}+1}=1}^4 (\tilde{g}^+)^{r_{M+1} \overline{r}_{\overline{M}+1}}_{~~~~~~~~~~~s_k} \:\: \phi_{r_{M+1} \,\overline{r}_{\overline{M}+1} \widehat{s_k}}\Bigl( (x^{4M}, y_k),(\overline{x}^{4\overline{M}}, y_k),y^{4N}\setminus y_k \Bigr)\label{multi6}
\end{align}
\end{subequations}
with implicit summation over $\mu$ but not over $j$ or $\overline\jmath$ or $k$, and
\begin{subequations}\label{tildeGthingsdef}
\begin{align}
\tilde{\overline{G}}_{r\overline{r}}^{~~\:s}(\overline{x})&=\sum_{r'=1}^4 (\gamma^0)_{rr'} \overline{G}_{r'\overline{r} s}(\overline{x})\label{tildeGbardef}\\
\tilde{G}_{r\overline{r}}^{~~\:s}(x)&=\sum_{\overline{r}'=1}^4 (\gamma^0)_{\overline{r}\overline{r}'} G_{r\overline{r}' s}(x)\label{tildeGdef}\\
(\tilde{g}^+)^{r\overline{r}}_{~~s}&=\sum_{s'=1}^4 (\gamma^0)_{ss'} \, g^*_{r\overline{r} s'}\,.\label{tildeg+def}
\end{align}
\end{subequations}
In \eqref{multi456} and the left-hand side of \eqref{tildeGthingsdef}, upper spin indices refer to $S^*$, the dual space of $S$, while lower ones refer to $S$. The Lorentz-invariant operation $^+$ is defined in Section 2.4 of \cite{pt:2013c}. The system of equations \eqref{multi456} is Lorentz-invariant if we regard $\tilde{\overline{G}},\tilde{G}$ as functions $\mathscr{M}\to S\otimes S \otimes S^*$, where $\mathscr{M}$ denotes Minkowski space-time, and $\tilde{g}^+$ as an element of $S^*\otimes S^* \otimes S$.
In fact, $\tilde{\overline{G}}_{r\overline{r}}^{~~\:s}(\overline{x})$ is the Green function (in the variable $\overline{x}$ and the spin index $\overline{r}$) with initial spinor covariantly characterized by $\tilde{g}$. Likewise, $\tilde{G}_{r\overline{r}}^{~~\:s}(x)$ is the Green function (in the variable $x$ and the spin index $r$) with initial spinor covariantly characterized by $\tilde{g}$. That is, the objects $\tilde{\overline{G}},\tilde{G}$ and $\tilde{g}^+$ can all be obtained in a unique and covariant way (and with the right transformation behavior) once an element $\tilde{g}\in S\otimes S\otimes S^*$ has been chosen.

\section{Generalization to Curved Space-Time}
\label{sec:curved}

In Section 2.5 of \cite{pt:2013c}, we outlined the rather straightforward way in which the multi-time equations presented there can be generalized to a curved space-time $(\mathscr{M},g)$. The multi-time system \eqref{multi123} or, equivalently, \eqref{multi456} can be adapted to curved space-time in the same way. Here is a brief overview. In this setting, $\phi$ is defined on the spacelike configurations in $\Gamma(\mathscr{M})^3$ with values
\begin{multline}
\phi(x_1,\ldots,x_M,\overline{x}_1,\ldots,\overline{x}_{\overline{M}},y_1,\ldots,y_N)\in\\
S_{x_1}\otimes \cdots \otimes S_{x_M}\otimes S_{\overline{x}_1} \otimes \cdots \otimes S_{\overline{x}_{\overline{M}}}\otimes S_{y_1}\otimes \cdots \otimes S_{y_N}\,,
\end{multline}
where $S_x$ is a fiber of the bundle $S$ of spin spaces, a vector bundle over the base manifold $\mathscr{M}$. This can be expressed using the notation $A\boxtimes B$ for the vector bundle over the base manifold $\mathscr{A}\times \mathscr{B}$ (obtained from vector bundles $A$ over $\mathscr{A}$ and $B$ over $\mathscr{B}$) whose fiber at $(a,b)\in\mathscr{A}\times\mathscr{B}$ is
\begin{equation}
(A\boxtimes B)_{(a,b)} = A_a\otimes B_b\,,
\end{equation}
and correspondingly $A^{\boxtimes n}$ for $A\boxtimes A \boxtimes \cdots \boxtimes A$ with $n$ factors. Then the $(M,\overline{M},N)$-particle sector of $\phi$ is a cross-section of the vector bundle $S^{(M,\overline{M},N)}=S^{\boxtimes M}\boxtimes S^{\boxtimes \overline{M}}\boxtimes S^{\boxtimes N}$ over $\mathscr{M}^{M+\overline{M}+N}$. The equations \eqref{multi456} need only minor changes and a re-interpretation of symbols; for example, $\partial_{j\mu}$ is now the covariant derivative on $S$ corresponding to the connection naturally associated with the metric of $\mathscr{M}$, and correspondingly on $S^{\boxtimes (M+\overline{M}+N)}$. The coupling matrix $\tilde{g}_{r\overline{r}}^{~~s}$ gets replaced in \eqref{multi6} by a cross-section $\tilde{g}_{r\overline{r}}^{~~s}(y_k)$ of the bundle $S\otimes S \otimes S^*$. Similarly, $\tilde{\overline{G}}(\overline{x}-x)$ in \eqref{multi4} gets replaced by $\tilde{\overline{G}}(\overline{x},x)$, which is the appropriate Green function, namely the solution of the free Dirac equation in $\overline{x}$ with initial spinor at $x$ covariantly characterized by $\tilde{g}(x)$; likewise, $\tilde{G}(x-\overline{x})$ in \eqref{multi5} gets replaced by the appropriate Green function $\tilde{G}(x,\overline{x})$. Our consistency proof still applies.

\noindent{\it Acknowledgments.} S.P.\ acknowledges support from Cusanuswerk, from the German--American Fulbright Commission, and from the European Cooperation in Science and Technology (COST action MP1006).
R.T.\ acknowledges support from the John Templeton Foundation (grant no.\ 37433) and from the Trustees Research Fellowship Program at Rutgers.

\begin{thebibliography}{19}

\bibitem{bloch:1934} F.~Bloch:
\newblock {Die physikalische Bedeutung mehrerer Zeiten in der Quantenelektrodynamik}.
\newblock {\em Physikalische Zeitschrift der Sowjetunion}, 5:301--305 (1934)

\bibitem{dirac:1932} P.~A.~M. Dirac:
\newblock {Relativistic Quantum Mechanics}.
\newblock {\em Proceedings of the Royal Society London A}, 136:453--464 (1932)

\bibitem{dfp:1932} P.~A.~M. Dirac, V.~A. Fock, and B.~Podolsky:
\newblock {On Quantum Electrodynamics}.
\newblock {\em Physikalische Zeitschrift der Sowjetunion}, 2(6):468--479 (1932).
\newblock Reprinted in J. Schwinger: {\em Selected Papers on Quantum Electrodynamics}, New York: Dover (1958)

\bibitem{DV82b} Ph.~Droz-Vincent: Second quantization of directly interacting particles. Pages 81--101 in J.~Llosa (ed.): \textit{Relativistic Action at a Distance: Classical and Quantum Aspects}, Berlin: Springer-Verlag (1982)

\bibitem{DV85} Ph.~Droz-Vincent: Relativistic quantum mechanics with non conserved number of particles. \textit{Journal of Geometry and Physics}, 2(1):101--119 (1985)

\bibitem{Nik10} H.~Nikoli\'c:
\newblock QFT as pilot-wave theory of particle creation and destruction.
\newblock \textit{International Journal of Modern Physics A}, 25:1477--1505 (2010)
\newblock \url{http://arxiv.org/abs/0904.2287}

\bibitem{pt:2013a} S.~Petrat and R.~Tumulka:
\newblock Multi-Time Schr\"odinger Equations Cannot Contain Interaction Potentials.
\newblock Preprint (2013)
\newblock \url{http://arxiv.org/abs/1308.1065}

\bibitem{pt:2013c} S.~Petrat and R.~Tumulka:
\newblock Multi-Time Wave Functions for Quantum Field Theory.
\newblock Preprint (2013)
\newblock \url{http://arxiv.org/abs/1309.0802}

\bibitem{pt:2013e} S.~Petrat and R.~Tumulka:
\newblock Multi-Time Equations, Classical and Quantum.
\newblock To appear in \textit{Proceedings of the Royal Society A} (2014)
\newblock \url{http://arxiv.org/abs/1309.1103}

\bibitem{schweber:1961} S.~Schweber:
\newblock {\em {An Introduction To Relativistic Quantum Field Theory}}.
\newblock Row, Peterson and Company (1961)

\bibitem{thaller:1992} B.~Thaller:
\newblock \textit{The Dirac Equation}.
\newblock Berlin: Springer-Verlag (1992)

\end{thebibliography}

\end{document}
\begin{document}

\author{Yu Jiang}
\address[Y. Jiang]{Division of Mathematical Sciences, Nanyang Technological University, SPMS-MAS-05-34, 21 Nanyang Link, Singapore 637371.}
\email[Y. Jiang]{[email protected]}

\begin{abstract}
This paper focuses on the classification of trivial source Specht modules. We completely classify the trivial source Specht modules labelled by hook partitions. We also classify the trivial source Specht modules labelled by two-part partitions in the odd characteristic case. Moreover, in the even characteristic case, we prove a result on the classification of the trivial source Specht modules labelled by partitions with $2$-weight $2$, which confirms a conjecture of \cite{Tara}.
\end{abstract}

\maketitle

\noindent \textbf{Keywords.} Trivial source module; Symmetric group; Specht module\ \ \textbf{MSC.} 20C30

\section{Introduction}
Let $G$ be a finite group and $\mathbb{F}$ be an algebraically closed field of positive characteristic $p$. The trivial source $\mathbb{F} G$-modules, defined as the indecomposable $\mathbb{F} G$-modules with trivial sources, are important objects in the modular representation theory of $\mathbb{F} G$. Though trivial source $\mathbb{F} G$-modules are known to enjoy many properties, it is usually very difficult to classify all of them explicitly. Even in the context of group algebras of symmetric groups, the question of classifying all the trivial source modules is still poorly understood. For modules of symmetric groups, it is well-known that the signed Young modules are trivial source modules (see \cite[Theorem 3.8 (b)]{DanzLim}). However, little is known about the classification of trivial source Specht modules.

Let $\mathfrak{S}_n$ be the symmetric group on $n$ letters. To our knowledge, the project of classifying all trivial source Specht modules of $\mathbb{F}\mathfrak{S}_n$ began in the thesis of Hudson (see \cite{Tara}). In her thesis, Hudson classified all the trivial source Specht modules labelled by partitions with $p$-weight $1$ when $p>3$ and identified the trivial source Specht modules of $\mathbb{F}\mathfrak{S}_{2p}$ and $\mathbb{F} \mathfrak{S}_{2p+1}$ when $p$ is odd. Moreover, for the case $p=2$, she proposed the following conjecture.

\begin{conj*}\cite[Conjecture 5.0.1]{Tara}
Let $p=2$. Then the trivial source Specht modules labelled by partitions with $2$-weight $2$ are either simple or isomorphic to Young modules.
\end{conj*}

The present paper pursues this project. The main approaches used in the paper rely on the fixed-point functors of symmetric groups defined in \cite{Hemmer2} and the Brou\'{e} correspondence of trivial source $\mathbb{F}\mathfrak{S}_n$-modules. We completely classify the trivial source Specht modules labelled by hook partitions. When $p>2$, we also classify the trivial source Specht modules labelled by two-part partitions. Moreover, for $p=2$ we prove the conjecture of Hudson. To state our results more precisely, let $\lambda$ be a partition of $n$ and denote by $S^\lambda$ and $Y^\lambda$ the Specht module and the Young module labelled by $\lambda$, respectively. Take $JM(n)_p$ to be the set of partitions labelling simple Specht modules of $\mathbb{F} \mathfrak{S}_n$.

We will show the following:

\begin{thmx}\label{T;A}
Let $n$ and $r$ be integers such that $0\leq r<n$. Then $S^{(n-r,1^r)}$ is a trivial source module if and only if $(n-r, 1^r)\in JM(n)_p$ or one of the following holds:
\begin{enumerate}
\item [\em(i)] $p>3$, $n=p$, $0<r<p-1$ and $2\mid r$;
\item [\em(ii)] $p=2$, $2\nmid n$, $n\geq 2r>2$, $r\neq n-2,\ n-1$ and $n\equiv 2r+1 \pmod {2^L}$, where the integer $L$ satisfies $2^{L-1}\leq r<2^L$;
\item [\em (iii)] $p=2$, $2\nmid n$, $2r>n$, $r\neq n-2,\ n-1$ and $n\equiv 2r+1 \pmod {2^{L'}}$, where the integer $L'$ satisfies $2^{L'-1}\leq n-r-1<2^{L'}$.
\end{enumerate}
\end{thmx}

\begin{thmx}\label{T;C}
Let $p>2$ and let $n$, $r$ be integers such that $n>0$ and $0\leq 2r\leq n$. Then $S^{(n-r,r)}$ is a trivial source module if and only if $(n-r,r)\in JM(n)_p$.
\end{thmx}

\begin{thmx}\label{T;B}
Let $p=2$ and let $n$ be an integer such that $n\geq 4$. Let $\lambda$ be a partition of $n$ with $2$-weight $2$ and let $\kappa_2(\lambda)$ be its $2$-core. Then $S^{\lambda}$ is a trivial source module if and only if $\lambda$ falls into one of the following cases:
\begin{enumerate}
\item [\em(i)] $\lambda\in JM(n)_2$;
\item [\em(ii)] $\lambda\notin JM(n)_2$, $S^\lambda\cong Y^\mu$ and $\mu=\kappa_2(\lambda)+(2,2)$.
\end{enumerate}
\end{thmx}

Notice that case (ii) of Theorem \ref{T;B} does occur. For example, let $p=2$. It is well-known that $S^{(3,1,1)}$ is an indecomposable, non-simple, trivial source module. Theorem \ref{T;B} says that $S^{(3,1,1)}\cong Y^{(3,2)}$.

The paper is organized as follows. In Section $2$, we set up the notation used in the paper and give a brief summary of the required background. Section $3$ introduces fixed-point functors of symmetric groups and establishes some of their properties. Using these properties, in Sections $4$ and $5$ we prove Theorems \ref{T;A} and \ref{T;C}, respectively. In the final section, we use the Brou\'{e} correspondence to study some trivial source $\mathbb{F} \mathfrak{S}_n$-modules and prove Theorem \ref{T;B}.

\section{Notation and Preliminaries}
Throughout the whole paper, let $\mathbb{F}$ be an algebraically closed field of positive characteristic $p$. For a given finite group $G$, write $H\leq G$ (resp.\ $H<G$) to indicate that $H$ is a subgroup (resp.\ a proper subgroup) of $G$. All $\mathbb{F} G$-modules considered in the paper are finitely generated left $\mathbb{F} G$-modules. Write $\uparrow$ and $\downarrow$ to denote induction and restriction of modules, respectively. Use $\otimes$ and $\boxtimes$ to denote the inner tensor product and the outer tensor product of modules, respectively. Abusing notation, we also use $\mathbb{F}_G$ to denote the trivial $\mathbb{F} G$-module and omit the subscript if there is no risk of confusion.

\subsection{Vertices, sources, Green correspondences and Scott modules}
The reader is assumed to be familiar with the modular representation theory of finite groups. For a general background on the topic, one may refer to \cite{JAlperin1} or \cite{HNYT}. Let $G$ be a finite group and let $M$, $N$ be $\mathbb{F} G$-modules. Write $N\mid M$ if $N$ is isomorphic to a direct summand of $M$, i.e., $M\cong L\oplus N$ for some $\mathbb{F} G$-module $L$. If $N$ is indecomposable, then for a decomposition of $M$ into a direct sum of indecomposable modules, the number of indecomposable direct summands that are isomorphic to $N$ is well-defined by the Krull-Schmidt Theorem and is denoted by $[M:N]$. Let $m$ be a positive integer and let $N_i$ be an $\mathbb{F} G$-module for all $1\leq i\leq m$. Write $M\sim \sum_{i=1}^m N_i$ to indicate that there exists a filtration of $M$ whose quotient factors are exactly the modules $N_i$. The dual of $M$ is denoted by $M^*$.

Assume further that $M$ is an indecomposable $\mathbb{F} G$-module and $P\leq G$. Following \cite{JGreen}, we say that $P$ is a vertex of $M$ if $P$ is a minimal (with respect to inclusion of subgroups) subgroup of $G$ subject to the condition that $M\mid ({M\downarrow_{P}}){\uparrow^G}$. The vertices of $M$ are known to form a $G$-conjugacy class of $p$-subgroups of $G$. They are $G$-conjugate to a subgroup of a defect group of the $p$-block containing $M$. It is clear that $M$ and the indecomposable $\mathbb{F} G$-modules isomorphic to $M$ have the same vertices. Moreover, note that $M$ and $M^*$ have the same vertices by the definition. Let $P$ be a vertex of $M$. There exists some indecomposable $\mathbb{F} P$-module $S$ such that $M\mid S{\uparrow^G}$; it is called a $P$-source of $M$. All the $P$-sources of $M$ are $N_{G}(P)$-conjugate to each other, where $N_{G}(P)$ is the normalizer of $P$ in $G$. We call $M$ a trivial source module if all the $P$-sources of $M$ are trivial modules. Note that $M$ is a trivial source module if and only if $M^*$ is a trivial source module.

Let $N_G(P)\leq H\leq G$. The Green correspondent of $M$ with respect to $H$, denoted by $\mathcal{G}_{H}(M)$, is an indecomposable $\mathbb{F} H$-module with vertex $P$ such that $\mathcal{G}_H(M)\mid M{\downarrow_H}$. It is well-known that $M{\downarrow_H}\cong \mathcal{G}_H(M)\oplus U$ and $\mathcal{G}_H(M){\uparrow^G}\cong M\oplus V$, where $U$ and $V$ are an $\mathbb{F} H$-module and an $\mathbb{F} G$-module, respectively. Furthermore, for any indecomposable direct summand $W$ of $U$ or $V$, recall that $P$ is not a vertex of $W$. For more properties of $\mathcal{G}_H(M)$, one can refer to \cite[\S 4.4]{HNYT}.

Let $K\leq G$. It is well-known that the transitive permutation module $\mathbb{F}_K{\uparrow^G}$ has a unique trivial submodule and a unique trivial quotient module. Moreover, there exists an indecomposable direct summand of $\mathbb{F}_K{\uparrow^G}$ with a trivial submodule and a trivial quotient module. This module, unique up to isomorphism, is called the Scott module of $\mathbb{F}_K{\uparrow^G}$ and is denoted by $Sc_G(K)$. The vertices of $Sc_G(K)$ are $G$-conjugate to the Sylow $p$-subgroups of $K$, and $Sc_G(K)$ is a trivial source module. For more properties of $Sc_G(K)$, one may refer to \cite{Burry}.

\subsection{Generic Jordan types and Brauer constructions of modules}
We shall introduce some tools used in the paper. Let $C_p=\langle g\rangle$ be a cyclic group of order $p$ and let $M$ be an $\mathbb{F} C_p$-module. By the representation theory of cyclic groups, the matrix representations of $g$ on the indecomposable $\mathbb{F} C_p$-modules are exactly the Jordan blocks with all eigenvalues $1$ and sizes between $1$ and $p$. So $M$ can be viewed as a direct sum of such Jordan blocks. We call the direct sum of the Jordan blocks the Jordan type of $M$ and denote it by $[1]^{n_1}\ldots [p]^{n_p}$, where $[i]$ denotes a Jordan block of size $i$ and $n_i$ is the number of Jordan blocks of size $i$ in the sum.

Let $E$ be an elementary abelian $p$-group of order $p^n$ with generating set $\{g_1,\ldots,g_n\}$ and let $M$ be an $\mathbb{F} E$-module. Let $\mathbb{K}$ denote the extension field $\mathbb{F}(\alpha_1,\ldots,\alpha_n)$ of $\mathbb{F}$, where $\{\alpha_1,\ldots,\alpha_n\}$ is a set of indeterminates. Write $u_{\alpha}$ for the element $1+\sum_{i=1}^n\alpha_i(g_i-1)$ of $\mathbb{K} E$. Note that $\langle u_\alpha\rangle$ is a cyclic group of order $p$ and $M$ can naturally be viewed as a $\mathbb{K} \langle u_\alpha\rangle$-module. So the Jordan type of $M{\downarrow_{\langle u_\alpha\rangle}}$ is meaningful; it is called the generic Jordan type of $M$. Wheeler \cite{WW} showed that the generic Jordan type of $M$ is independent of the choice of generators of $E$. The stable generic Jordan type of $M$ is the generic Jordan type of $M$ modulo free direct summands. Notice that any module isomorphic to $M$ as an $\mathbb{F} E$-module has the same generic Jordan type as $M$. For more properties of generic Jordan types of modules, one may refer to \cite{EFJPAS} and \cite{WW}.

Let $G$ be a finite group and let $M$ be an $\mathbb{F} G$-module. We say that $M$ is a $p$-permutation module if, for every $p$-subgroup $P$ of $G$, there exists a basis $\mathcal{B}_P$ of $M$, depending on $P$, that is permuted by $P$. Note that the class of $p$-permutation modules is closed under taking direct sums and non-zero direct summands. It is well-known that the indecomposable $p$-permutation $\mathbb{F} G$-modules are exactly the trivial source $\mathbb{F} G$-modules. Therefore, a $p$-permutation $\mathbb{F} G$-module can equivalently be defined as a non-zero direct summand of a permutation $\mathbb{F} G$-module. To detect $p$-permutation $\mathbb{F} G$-modules, the following direct corollary of \cite[Lemma 3.1]{JLW} is helpful.

\begin{lem}\label{T;permutaiton}
Let $E$ be an elementary abelian $p$-subgroup of a finite group $G$. Let $M$ be a $p$-permutation $\mathbb{F} G$-module. Then $M{\downarrow_E}$ has stable generic Jordan type $[1]^m$ for some non-negative integer $m$.
\end{lem}

We now describe the Brauer constructions of $p$-permutation modules developed by Brou\'{e} in \cite{Broue}. Let $P\leq G$ and let $M^P$ be the fixed-point subspace of an $\mathbb{F} G$-module $M$ under the action of $P$. Note that $M^P$ is an $\mathbb{F} [N_G(P)/P]$-module. For any $p$-subgroup $Q$ of $P$, the relative trace map from $M^Q$ to $M^P$, denoted by $\mathrm{Tr}_Q^P$, is defined by $$ \mathrm{Tr}_Q^P(m)=\sum_{g\in \{P/Q\}}gm,$$ where $\{P/Q\}$ is a complete set of representatives of the left cosets of $Q$ in $P$. Observe that the linear map $\mathrm{Tr}_Q^P$ is well-defined, as it is independent of the choice of $\{P/Q\}$. Furthermore, set $$\mathrm{Tr}^P(M)=\sum_{Q<P}\mathrm{Tr}_Q^P(M^Q).$$ Clearly, $\mathrm{Tr}^P(M)$ is an $\mathbb{F}[N_G(P)/P]$-submodule of $M^P$. The Brauer construction of $M$ with respect to $P$, written $M(P)$, is defined to be the $\mathbb{F}[N_G(P)/P]$-module $$M^P/\mathrm{Tr}^P(M).$$ Notice that $M(P)=0$ if $P$ is not a $p$-subgroup of $G$.
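Before proceeding, we record a minimal and entirely standard example of the Brauer construction; it is only meant to illustrate the definitions and is not used later. Let $G=P=C_p$, so that the only proper subgroup of $P$ is the trivial group. For the trivial module $M=\mathbb{F}$ we have $M^P=\mathbb{F}$ and $\mathrm{Tr}_1^P(m)=\sum_{g\in P}gm=pm=0$, so $$\mathbb{F}(P)=\mathbb{F}/0=\mathbb{F}\neq 0.$$ For the regular module $M=\mathbb{F} C_p=\mathbb{F}_1{\uparrow^{C_p}}$, the fixed points are spanned by $N=\sum_{g\in P}g$ and $\mathrm{Tr}_1^P(1)=N$, so $\mathrm{Tr}^P(M)=M^P$ and $M(P)=0$. The first computation agrees with Theorem \ref{T;Broue}(i) below: the trivial module is a trivial source module with vertex $P$, and its Brauer construction is the unique indecomposable projective module over $\mathbb{F}[N_G(P)/P]=\mathbb{F}$; the projective module $\mathbb{F} C_p$, by contrast, has vertex $1\neq P$ and its Brauer construction at $P$ vanishes.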
For our purpose, we collect some well-known properties of Brauer constructions of modules as follows.

\begin{lem}\label{L;Brauer}
The following assertions hold.
\begin{enumerate}
\item [\em(i)] Let $G$ be a finite group and $P$ be a $p$-subgroup of $G$. Let $M$ be a $p$-permutation $\mathbb{F} G$-module. If there exist some $\mathbb{F} G$-modules $U$ and $V$ such that $M\cong U\oplus V$, then $M(P)\cong U(P)\oplus V(P)$.
\item [\em(ii)] \cite[Lemma 2.7]{GLDM} Let $G_1$, $G_2$ be finite groups and $P_1$, $P_2$ be $p$-groups such that $P_1\leq G_1$ and $P_2\leq G_2$. Let $M_i$ be a $p$-permutation $\mathbb{F} G_i$-module for any $1\leq i\leq 2$. Then $(M_1\boxtimes M_2)(P_1\times P_2)\cong M_1(P_1)\boxtimes M_2(P_2)$ as modules under the action of $N_{G_1\times G_2}(P_1\times P_2)/(P_1\times P_2)\cong (N_{G_1}(P_1)/P_1)\times (N_{G_2}(P_2)/P_2)$.
\end{enumerate}
\end{lem}

The main result of this subsection, known as the Brou\'{e} correspondence, is summarized as follows.

\begin{thm}\cite[Theorems 3.2 and 3.4]{Broue}\label{T;Broue}
Let $P$ be a $p$-subgroup of a finite group $G$ and let $M$ be a trivial source $\mathbb{F} G$-module with a vertex $P$. Let $L$ be a $p$-permutation $\mathbb{F} G$-module.
\begin{enumerate}
\item [\em(i)] The correspondence $M\mapsto M(P)$ is a bijection between the isomorphism classes of trivial source $\mathbb{F} G$-modules with a vertex $P$ and the isomorphism classes of indecomposable projective $\mathbb{F}[\mathrm{N}_G(P)/P]$-modules.
\item [\em(ii)] The inflation of $M(P)$ from $N_G(P)/P$ to $N_G(P)$ is isomorphic to $\mathcal{G}_{N_G(P)}(M)$.
\item [\em(iii)] Let $N$ be a trivial source $\mathbb{F} G$-module with a vertex $Q$. Then $N\mid L$ if and only if $N(Q)\mid L(Q)$. We also have $[L:N]=[L(Q):N(Q)]$.
\end{enumerate}
\end{thm}

\subsection{Combinatorics}
Let $\mathbb{N}$ be the set of natural numbers. Let $n\in \mathbb{N}$ and let $\mathbf{n}$ be the set $\{1,\ldots,n\}$. Let $\mathfrak{S}_n$ be the symmetric group acting on $\mathbf{n}$. Set $\mathfrak{S}_0$ to be the trivial group. A partition of a positive integer $m$ is a non-increasing sequence of positive integers $(\lambda_1,\ldots,\lambda_{\ell})$ such that $\sum_{i=1}^\ell\lambda_i=m$. Abusing notation, the unique partition of $0$, the empty partition, is denoted by $\varnothing$. Let $\lambda=(\lambda_1,\ldots,\lambda_\ell)$ be a partition. As usual, define $|\lambda|$ to be $\sum_{i=1}^\ell\lambda_i$ and write $\lambda\vdash |\lambda|$ to indicate that $\lambda$ is a partition of $|\lambda|$. Assume further that $\lambda\vdash n$. We say that $\lambda$ is a two-part partition if $\ell\leq 2$. In the exponential notation for partitions, $\lambda$ is called a hook partition if $\lambda=(n-r,1^r)$ for some integer $r$ such that $0\leq r<n$. We will use the exponential notation for partitions throughout the whole paper. The Young diagram of $\lambda$, denoted by $[\lambda]$, is the set $$ \{(i,j)\in \mathbb{N}^2:\ 1\leq i\leq \ell,\ 1\leq j\leq \lambda_i\}.$$ Each element of $[\lambda]$ is called a node of $[\lambda]$. For any node $(i,j)$ of $[\lambda]$, the hook of the node $(i,j)$ is the set $\{(i,k)\in [\lambda]:\ k\geq j\}\cup\{(k,j)\in [\lambda]:\ k>i\}$. Denote this set by $H_{i,j}^\lambda$ and put $h_{i,j}^\lambda=|H_{i,j}^\lambda|$. We will not distinguish between $\lambda$ and $[\lambda]$ in the paper.

The $p$-core $\kappa_p(\lambda)$ of $\lambda$ is the partition whose Young diagram is obtained by successively removing all rim $p$-hooks from $[\lambda]$. By the Nakayama Conjecture, the $p$-cores of partitions of $n$ label the $p$-blocks of $\mathbb{F} \mathfrak{S}_n$. We thus write $B_{\kappa_p(\lambda)}$ for the $p$-block of $\mathbb{F}\mathfrak{S}_n$ labelled by $\kappa_p(\lambda)$. The number of rim $p$-hooks removed from $\lambda$ to obtain $\kappa_p(\lambda)$ is called the $p$-weight of $\lambda$ and is denoted by $w_p(\lambda)$. The $p$-weight of $B_{\kappa_p(\lambda)}$ is defined to be $w_p(\lambda)$. One can choose a defect group of $B_{\kappa_p(\lambda)}$ to be a Sylow $p$-subgroup of $\mathfrak{S}_{pw_p(\lambda)}$. We say that $\lambda$ is $p$-restricted if $\lambda_\ell<p$ and $\lambda_i-\lambda_{i+1}<p$ for all $1\leq i<\ell$. Let $\lambda'$ be the conjugate of $\lambda$; we say that $\lambda$ is $p$-regular if $\lambda'$ is $p$-restricted. The $p$-adic expansion of $\lambda$ is the unique sum $\sum_{i=0}^mp^i\lambda(i)$ for some $m\in\mathbb{N}\cup\{0\}$, where $\lambda(i)$ is $p$-restricted or $\varnothing$ for all $0\leq i\leq m$, $\lambda(m)\neq \varnothing$, and $\lambda=\sum_{i=0}^mp^i\lambda(i)$ via component-wise addition and scalar multiplication of partitions.

\subsection{Modules of symmetric groups}
We now briefly present some material from the representation theory of symmetric groups needed in the paper. One can refer to \cite{GJ1} or \cite{GJ3} for a background on the topic. Given non-negative integers $m_1,\ldots, m_\ell$ such that $\sum_{i=1}^{\ell}m_i\leq n$, define $\mathfrak{S}_{(m_1,\ldots, m_\ell)}=\mathfrak{S}_{m_1}\times\cdots\times\mathfrak{S}_{m_\ell}\leq\mathfrak{S}_n$, where $\mathfrak{S}_{m_1}$ acts on the set $\{1,\ldots, m_1\}$ (if $m_1\neq 0$), $\mathfrak{S}_{m_2}$ acts on the set $\{m_1+1,\ldots, m_1+m_2\}$ (if $m_2\neq 0$) and so on. Let $\lambda\vdash n$. The Young subgroup of $\mathfrak{S}_n$ associated with $\lambda$ is defined to be $\mathfrak{S}_{\lambda}$. Throughout the paper, for a given integer $m$ with $1\leq m\leq n$, view $\mathfrak{S}_m$ and $\mathfrak{S}_{n-m}$ as the corresponding components of $\mathfrak{S}_{(m,n-m)}$, so that they commute with each other. Use $sgn(n)$ to denote the sign module of $\mathfrak{S}_n$ and omit the parameter if the symmetric group is clear. If $p=2$, $sgn$ denotes the corresponding trivial module. To describe modules of symmetric groups, we assume that the reader is familiar with the definitions of tableaux, tabloids and polytabloids (see \cite{GJ1} or \cite{GJ3}). The Young permutation module with respect to $\lambda$, denoted by $M^{\lambda}$, is the $\mathbb{F}$-space generated by all $\lambda$-tabloids, on which $\mathfrak{S}_n$ acts by permuting the tabloids. Notice that $M^\lambda$ is a $p$-permutation module since it is a permutation module.
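To fix a concrete example (a standard identification, spelled out here only for orientation; it reappears in Notation~\ref{N;Notation1} below): an $(n-1,1)$-tabloid is determined by the single entry in its second row, so the $(n-1,1)$-tabloids may be identified with $1,\ldots,n$ and $$M^{(n-1,1)}\cong\mathbb{F} e_1\oplus\cdots\oplus\mathbb{F} e_n,\qquad \sigma\cdot e_i=e_{\sigma(i)}\quad(\sigma\in\mathfrak{S}_n),$$ the natural $n$-dimensional permutation module of $\mathfrak{S}_n$.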
The Specht module $S^\lambda$ is an $\mathbb{F}\mathfrak{S}_n$-submodule of $M^{\lambda}$ generated by all $\lambda$-polytabloids. The dual of $S^\lambda$ is denoted by $S_\lambda$. It is well-known that $S^{\lambda}$ is indecomposable if $p>2$, or if $p=2$ and $\lambda$ is $2$-regular (or $2$-restricted). The relation between $S^{\lambda}$ and $S_{\lambda'}$ is given by $S^\lambda\otimes sgn\cong S_{\lambda'}$. In particular, if $p=2$, then $S^\lambda\cong S^{\lambda'}$ whenever $S^\lambda$ is self-dual. Unlike the characteristic zero case, $\{S^\lambda:\ \lambda\vdash n\}$ is usually not a complete set of simple $\mathbb{F}\mathfrak{S}_n$-modules up to isomorphism. However, when $\lambda$ is $p$-regular, James \cite[Theorem 11.5]{GJ1} showed that $S^\lambda$ has a simple head $D^\lambda$. Moreover, the set $D_{n,p}=\{D^{\mu}:\ \mu\ \text{is a $p$-regular partition of}\ n\}$ forms a complete set of representatives of the isomorphism classes of simple $\mathbb{F}\mathfrak{S}_n$-modules. It is well-known that every module in $D_{n,p}$ is self-dual. Following the notation used in \cite[Section 4]{DanzLim}, we denote the set $\{\lambda\vdash n:\ S^\lambda\ \text{is simple}\}$ by $JM(n)_p$. A Specht filtration of an $\mathbb{F}\mathfrak{S}_n$-module is a filtration of the module all of whose successive quotient factors are isomorphic to Specht modules.

For a given decomposition of $M^\lambda$ into a direct sum of indecomposable modules, there exists an indecomposable direct summand of $M^\lambda$ that contains $S^\lambda$. This module is unique up to isomorphism and is denoted by $Y^\lambda$. Let $\mu\vdash n$ and let $\unlhd$ denote the dominance order on the set of all partitions of $n$. James \cite[Theorem 3.1]{GJ2} showed that $[M^\lambda:Y^\lambda]=1$ and that $[M^{\lambda}:Y^{\mu}]\neq 0$ only if $\lambda\unlhd \mu$. One thus gets
\begin{equation*}
M^\lambda\cong Y^\lambda\oplus\bigoplus_{\mu\rhd\lambda}k_{\lambda,\mu}Y^\mu,
\end{equation*}
where $k_{\lambda,\mu}=[M^\lambda:Y^\mu]$; the $k_{\lambda,\mu}$'s are known as $p$-Kostka numbers. Note that Young modules are self-dual and are $p$-permutation modules with trivial sources. The Brauer constructions of Young modules with respect to their vertices were studied in \cite{Erdmann} and \cite{GJ}. To describe them, let $\lambda$ have $p$-adic expansion $\sum_{i=0}^mp^i\lambda(i)$ for some non-negative integer $m$, where $|\lambda(i)|=n_i$ for all $0\leq i\leq m$. Denote by $\mathcal{O}_\lambda$ the partition $$(\underbrace{p^m,\ldots,p^m}_{n_m\ \text{times}},\ \underbrace{p^{m-1},\ldots,p^{m-1}}_{n_{m-1}\ \text{times}}, \ldots,\ \underbrace{1,\ldots,1}_{n_0\ \text{times}}).$$

\begin{thm}\label{T; Youngvertices}\cite{Erdmann, GJ}
Let $\lambda\vdash n$. Then the vertices of $Y^{\lambda}$ are $\mathfrak{S}_n$-conjugate to the Sylow $p$-subgroups of $\mathfrak{S}_{\mathcal{O}_\lambda}$. Moreover, $Y^\lambda$ is projective if and only if $\lambda$ is $p$-restricted.
\end{thm}

\begin{thm}\cite{Erdmann}\label{Erdmann}
Let $\lambda\vdash n$ and let $\lambda$ have $p$-adic expansion $\sum_{i=0}^mp^i\lambda(i)$, where $|\lambda(i)|=n_i$ for all $0\leq i\leq m$. Let $\alpha=(n_0,\ldots,n_m)$ and let $P_{\lambda}$ be a Sylow $p$-subgroup of $\mathfrak{S}_{\mathcal{O}_{\lambda}}$.
\begin{enumerate}
\item [\em(i)] We have $\mathfrak{S}_{\alpha}\cong \mathrm{N}_{\mathfrak{S}_n}(P_\lambda)/\mathrm{N}_{\mathfrak{S}_{\mathcal{O}_\lambda}}(P_\lambda)\cong (\mathrm{N}_{\mathfrak{S}_n}(P_\lambda)/P_\lambda)/(\mathrm{N}_{\mathfrak{S}_{\mathcal{O}_\lambda}}(P_\lambda)/P_\lambda).$
\item [\em(ii)] $\mathrm{N}_{\mathfrak{S}_{\mathcal{O}_\lambda}}(P_\lambda)/P_\lambda$ acts trivially on $Y^\lambda(P_\lambda)$. Therefore, $Y^\lambda(P_\lambda)$ can be viewed as an $\mathbb{F}\mathfrak{S}_{\alpha}$-module.
\item [\em(iii)] We have $Y^{\lambda}(P_\lambda)\cong Y^{\lambda(0)}\boxtimes Y^{\lambda(1)}\boxtimes\cdots\boxtimes Y^{\lambda(m)}$ as $\mathbb{F} \mathfrak{S}_{\alpha}$-modules.
\end{enumerate}
\end{thm}

We conclude the section by collecting some results used in the following discussion.

\begin{thm}\label{T;Hemmer} \cite[Proposition 4.6]{DKZ}
The trivial source simple $\mathbb{F}\mathfrak{S}_n$-modules are exactly the simple $\mathbb{F}\mathfrak{S}_n$-Specht modules.
\end{thm}

\begin{lem}\label{L;p-permutation modules}
Let $\lambda\vdash n$ and let $M$ be an $\mathbb{F}\mathfrak{S}_n$-module. Then
\begin{enumerate}
\item [\em(i)] $M$ is a $p$-permutation module if and only if $M^*$ is a $p$-permutation module.
\item [\em(ii)] $M$ is a $p$-permutation module if and only if $M\otimes sgn$ is a $p$-permutation module, where $sgn$ means the trivial $\mathbb{F}\mathfrak{S}_n$-module if $p=2$.
\item [\em(iii)] $S^\lambda$ is a $p$-permutation module if and only if $S^{{\lambda}'}$ is a $p$-permutation module.
\end{enumerate}
\end{lem}

\begin{proof}
Note that $*$ preserves direct sums of $\mathbb{F}\mathfrak{S}_n$-modules and takes a permutation module to a permutation module. (i) thus follows by noticing that $M\cong (M^{*})^*$. Let $v$ be a generator of $sgn$. If $M$ is a $p$-permutation module, then for any $p$-subgroup $P$ of $\mathfrak{S}_n$, let $\mathcal{B}_P$ be a basis of $M$ which is permuted by $P$. Note that $\mathcal{B}_{P,v}=\{u\otimes v:\ u\in \mathcal{B}_P\}$ is a basis of $M\otimes sgn$ which is permuted by $P$. So $M\otimes sgn$ is a $p$-permutation module by the definition. (ii) follows by noting that $M\cong(M\otimes sgn)\otimes sgn$. As $S^{\lambda'}\cong (S^\lambda\otimes sgn)^*$, (iii) follows by combining (i) and (ii). The lemma follows.
\end{proof}

\section{Fixed-point functors of symmetric groups}
This section studies the fixed-point functors of symmetric groups. After a short summary of these objects, we present some of their properties. Some of the results will be used in the following two sections, while others may be of independent interest.

Let $m\in \mathbb{N}$, $m\leq n$ and let $M$ be an $\mathbb{F}\mathfrak{S}_n$-module. Recall that $\mathfrak{S}_{n-m}$ centralizes $\mathfrak{S}_{m}$. So the space $M^{\mathfrak{S}_{m}}$ is naturally viewed as an $\mathbb{F}\mathfrak{S}_{n-m}$-module.
The fixed-point functor with parameter $m$, introduced by Hemmer in \cite{Hemmer2}, is defined to be
\begin{align*}
\mathcal{F}_m: \mathbb{F}\mathfrak{S}_n\text{-mod}\to\mathbb{F}\mathfrak{S}_{n-m}\text{-mod}
\end{align*}
with $\mathcal{F}_m(M)=M^{\mathfrak{S}_{m}}\cong \mathrm{Hom}_{\mathbb{F}\mathfrak{S}_{m}}(\mathbb{F},M{\downarrow_{\mathfrak{S}_{m}}})\cong\mathrm{Hom}_{\mathbb{F}\mathfrak{S}_{n}}(M^{(m,1^{n-m})},M).$ It is also convenient to regard $\mathcal{F}_m(M)$ as the largest subspace of $M{\downarrow_{\mathfrak{S}_{(m,n-m)}}}$ on which $\mathfrak{S}_{m}$ acts trivially. Note that $\mathcal{F}_1(M)=M{\downarrow_{\mathfrak{S}_{n-1}}}$ by the definition. Moreover, the functor $\mathcal{F}_m$ is exact if $m<p$ and is not exact in general if $m\geq p$. In \cite{Hemmer1}, Hemmer obtained some results on extensions of simple modules of symmetric groups by using fixed-point functors. For any $\lambda\vdash n$, he also conjectured that $\mathcal{F}_m(S^\lambda)$ has a Specht filtration (see \cite[Conjecture 7.3]{Hemmer2} or \cite[Conjecture 7.2]{Hemmer1}). However, the conjecture was shown to be false in \cite{DG}.

We now begin to prove some properties of fixed-point functors. Throughout the whole section, fix an integer $m$ satisfying $1\leq m\leq n$.

\begin{lem}\label{L;Directsum}
Let $M$ and $N$ be $\mathbb{F}\mathfrak{S}_n$-modules. Then $\mathcal{F}_m(M\oplus N)=\mathcal{F}_m(M)\oplus \mathcal{F}_m(N)$.
\end{lem}

\begin{proof}
Note that $(M\oplus N)^{\mathfrak{S}_{m}}=M^{\mathfrak{S}_{m}}\oplus N^{\mathfrak{S}_{m}}$. The lemma follows by restricting both sides of this equality to $\mathfrak{S}_{n-m}$ and using the definition of the fixed-point functors.
\end{proof}

\begin{prop}\label{P;Projective}
Let $P$ be a projective $\mathbb{F}\mathfrak{S}_n$-module. Then $\mathcal{F}_m(P)$ is a projective $\mathbb{F}\mathfrak{S}_{n-m}$-module.
\end{prop}

\begin{proof}
We may assume that $P$ is non-zero. As $P$ is projective, for some positive integer $\ell$, we have
\begin{align}
(P{\downarrow_{\mathfrak{S}_{(m,n-m)}}})^{\mathfrak{S}_{m}}\cong\bigoplus_{i=1}^\ell(P_i\boxtimes Q_i)^{\mathfrak{S}_{m}}=\bigoplus_{i=1}^\ell(P_i^{\mathfrak{S}_{m}}\boxtimes Q_i),
\end{align}
where $P_i$ and $Q_i$ are an indecomposable projective $\mathbb{F}\mathfrak{S}_{m}$-module and an indecomposable projective $\mathbb{F}\mathfrak{S}_{n-m}$-module, respectively, for all $1\leq i\leq \ell$. Regarding both sides of $(3.1)$ as $\mathbb{F}\mathfrak{S}_{n-m}$-modules, we get that $\mathcal{F}_m(P)\cong\bigoplus_{i=1}^\ell \bigl(\dim_\mathbb{F} {P_i}^{\mathfrak{S}_{m}}\bigr) Q_i$, which implies the desired result.
\end{proof}

Let $\ell$ be an integer and let $\Omega^\ell$ be the $\ell$th Heller translate of modules of a given symmetric group. We have

\begin{cor}\label{C;Heler}
Let $M$ be an $\mathbb{F}\mathfrak{S}_n$-module. If $m<p$, then for any integer $\ell$ we have $\mathcal{F}_m(\Omega^\ell(M))\cong \Omega^\ell(\mathcal{F}_m(M))$ in the stable category of $\mathbb{F}\mathfrak{S}_{n-m}$-modules.
\end{cor}

\begin{proof}
The case $\ell=0$ is clear by Lemma \ref{L;Directsum} and Proposition \ref{P;Projective}. When $\ell\neq 0$, note that $\mathcal{F}_m$ is exact. By Proposition \ref{P;Projective} again, $\mathcal{F}_m$ takes a minimal projective resolution (resp.\ a minimal injective resolution) of $M$ to a projective resolution (resp.\ an injective resolution) of $\mathcal{F}_m(M)$. Therefore, the desired result follows.
\end{proof}

Corollary \ref{C;Heler} may fail if the fixed-point functor is not exact. For example, let $p=2$ and $n=4$. Note that $\mathcal{F}_2$ is not exact. Let $P$ be the projective cover of the trivial $\mathbb{F}\mathfrak{S}_4$-module $\mathbb{F}$. For the short exact sequence $0\rightarrow\Omega^1(\mathbb{F})\rightarrow P\rightarrow\mathbb{F}\rightarrow 0$, by Proposition \ref{P;Projective} and the long exact sequence, we have $$0\rightarrow\mathcal{F}_2(\Omega^1(\mathbb{F}))\rightarrow\mathcal{F}_2(P)\rightarrow\mathbb{F}\rightarrow \mathrm{Ext}_{\mathbb{F}\mathfrak{S}_4}^1(M^{(2,1^2)}, \Omega^1(\mathbb{F}))\rightarrow 0.$$ As $\Omega^1(\mathbb{F}){\downarrow_{\mathfrak{S}_2}}$ is not free, $\mathrm{Ext}_{\mathbb{F}\mathfrak{S}_4}^1(M^{(2,1^2)}, \Omega^1(\mathbb{F}))\cong \mathrm{Ext}_{\mathbb{F}\mathfrak{S}_2}^1(\mathbb{F}, \Omega^1(\mathbb{F}){\downarrow_{\mathfrak{S}_2}})\neq 0$. These facts and the long exact sequence imply that $\mathcal{F}_2(\Omega^1(\mathbb{F}))\cong \mathcal{F}_2(P)$. So $\mathcal{F}_2(\Omega^1(\mathbb{F}))$ is projective by Proposition \ref{P;Projective}. However, the $\mathbb{F}\mathfrak{S}_2$-module $\Omega^1(\mathbb{F})$ is not projective. We thus have $\mathcal{F}_2(\Omega^1(\mathbb{F}))\not\cong \Omega^1(\mathbb{F})$ in the stable category of $\mathbb{F}\mathfrak{S}_2$-modules.

\begin{lem}\label{L;permutation1}
Let $H\leq \mathfrak{S}_n$. Then $\mathcal{F}_m(\mathbb{F}_H{\uparrow^{\mathfrak{S}_n}})$ is a permutation $\mathbb{F}\mathfrak{S}_{n-m}$-module.
\end{lem}

\begin{proof}
It is sufficient to check that $\mathcal{F}_m(\mathbb{F}_H{\uparrow^{\mathfrak{S}_n}})$ has a permutation basis under the action of $\mathfrak{S}_{n-m}$. Let $s=|\mathfrak{S}_n:H|$ and let $\mathcal{B}=\bigcup_{i=1}^s\{g_iH\}$, where $g_1,\ldots,g_s$ is a complete set of representatives of the left cosets of $H$ in $\mathfrak{S}_n$. For any $\sigma\in \mathfrak{S}_n$ and $g_iH\in \mathcal{B}$, there exists a unique $g_jH\in \mathcal{B}$ such that $\sigma g_iH=g_jH$. We denote $g_j$ by $g_{\sigma,i}$ and view $\mathcal{B}$ as a basis of $\mathbb{F}_H{\uparrow^{\mathfrak{S}_n}}$. For all $1\leq i\leq s$, let $\mathcal{O}(g_i)$ be the orbit of $\mathcal{B}$ containing $g_iH$ under the action of $\mathfrak{S}_{m}$ and write $\mathcal{O}(g_i)=\{x_{i,1}g_iH,\ldots, x_{i,t_i}g_iH\}$.
So $\mathcal{B}=\boldsymboliguplus_{j=1}^{t}\mathcal{O}(g_{i_j})$, where, for all $1\leq j\leq t\leq s$, $t_{i_j}=|\mathcal{O}(g_{i_j})|$ and $x_{i_j,1},\ldots, x_{i_j,t_{i_j}}\in\mathbf{m}athfrak{s}ym{m}$. For any $\mathbf{m}athfrak{s}igma\in \mathbf{m}athfrak{s}ym{n}$, let $\mathbf{m}athfrak{s}igma\mathcal{O}(g_i)=\{\mathbf{m}athfrak{s}igma xH:\ xH\in\mathcal{O}(g_i)\}$ for all $1\leq i\leq s$. Moreover, write $\overline{\mathcal{O}(g_i)}$ to be the sum $\mathbf{m}athfrak{s}um_{xH\in \mathcal{O}(g_i)}xH$ and notice that $\overline{\mathcal{B}}=\boldsymboligcup_{j=1}^t\{\overline{\mathcal{O}(g_{i_j})}\}$ is a basis of $(\mathbb{F}_H{\uparrow^{\mathbf{m}athfrak{s}ym{n}}})^{\mathbf{m}athfrak{s}ym{m}}$. For any $\mathfrak{t}au\in\mathbf{m}athfrak{s}ym{n-m}$, $\mathcal{O}(g_{i_j})\mathbf{m}athfrak{s}ubseteq \mathcal{B}$, if $|\mathcal{O}(g_{i_j})|>1$, for any integers $u,v$ such that $1\leq u<v\leq t_{i_j}$, note that $x_{i_j,u}g_{\mathfrak{t}au,i_j}H\neq x_{i_j,v}g_{\mathfrak{t}au,i_j}H$ as $x_{i_j,u}g_{i_j}H\neq x_{i_j,v}g_{i_j}H$. Therefore, $\mathfrak{t}au\mathcal{O}(g_{i_j})\mathbf{m}athfrak{s}ubseteq\mathcal{O}(g_{\mathfrak{t}au,i_j})$. Similarly, we have $\mathfrak{t}au^{-1}\mathcal{O}(g_{\mathfrak{t}au,i_j})\mathbf{m}athfrak{s}ubseteq\mathcal{O}(g_{i_j})$. The two relations imply that $\mathfrak{t}au\mathcal{O}(g_{i_j})=\mathcal{O}(g_{\mathfrak{t}au,i_j})=\mathcal{O}(g_{i_a})$ for some $a\in \mathbf{m}athbb{N}$ and $a\leq t$. We thus have $$\mathfrak{t}au\overline{\mathcal{O}(g_{i_j})}=\mathfrak{t}au\mathbf{m}athfrak{s}um_{k=1}^{t_{i_j}}x_{i_j,k}g_{i_j}H=\mathbf{m}athfrak{s}um_{k=1}^{t_{i_j}}x_{i_j,k}\mathfrak{t}au g_{i_j}H=\mathbf{m}athfrak{s}um_{k=1}^{t_{i_j}}x_{i_j,k}g_{\mathfrak{t}au,i_j}H=\overline{\mathcal{O}(g_{\mathfrak{t}au,i_j})}=\overline{\mathcal{O}(g_{i_a})}.$$ If $|\mathcal{O}(g_{i_j})|=1$, by a similar deduction, $\mathfrak{t}au\overline{\mathcal{O}(g_{i_j})}=\overline{\mathcal{O}(g_{i_b})}$ where $b\in \mathbf{m}athbb{N}$, $b\leq t$ and $|\mathcal{O}(g_{i_b})|=1$. So $\overline{\mathcal{B}}$ is a permutation basis under the action of $\mathbf{m}athfrak{s}ym{n-m}$. \varepsilonnd{proof} We close the section by a corollary. It will be used in the following discussion. \boldsymbolegin{cor}\label{L;permutation} Let $M$ be a $p$-permutation $\mathbb{F}\mathbf{m}athfrak{s}ym{n}$-module. If $\mathbf{m}athcal{F}_m(M)\neq 0$, then $\mathbf{m}athcal{F}_m(M)$ is a $p$-permutation $\mathbb{F}\mathbf{m}athfrak{s}ym{n-m}$-module. \varepsilonnd{cor} \boldsymbolegin{proof} View $M$ as a direct summand of a direct sum of some transitive permutation $\mathbb{F}\mathbf{m}athfrak{s}ym{n}$-modules. By Lemmas \ref{L;Directsum} and \ref{L;permutation1}, observe that $\mathbf{m}athcal{F}_m(M)$ is a non-zero direct summand of a direct sum of some permutation $\mathbb{F}\mathbf{m}athfrak{s}ym{n-m}$-modules. So $\mathbf{m}athcal{F}_m(M)$ is indeed a $p$-permutation $\mathbb{F}\mathbf{m}athfrak{s}ym{n-m}$-module by the definition. \varepsilonnd{proof} \mathbf{m}athfrak{s}ection{The Specht modules labelled by hook partitions} The section focuses on proving Theorem \ref{T;A}. Throughout the whole section, fix $n>1$ and integers $m,r$ satisfying $1\leq m, r<n$. We first present some explicit calculation results of $\mathbf{m}athcal{F}_m(S^{(n-r,1^r)})$ and then finish the proof of Theorem \ref{T;A}. \boldsymbolegin{nota}\label{N;Notation1} For convenience, we introduce the required notation. 
\begin{enumerate}[(i)]
\item Let $S$ be a set and write $\langle S\rangle$ to denote an $\mathbb{F}$-linear space spanned by elements of $S$. Set $\langle\varnothing\rangle=0$ and $\langle s\rangle=\langle S\rangle$ if $S=\{s\}$. Let $A_n$ be an $n$-dimensional linear space over $\mathbb{F}$ with a basis $\{e_1,\ldots,e_n\}$ and $A_b=\langle\{e_1,\ldots,e_b\}\rangle$ for all $1\leq b\leq n$. Write $f_i=e_i-e_n$ for all $1\leq i\leq n$. Note that $A_b$ is an $\mathbb{F}\sym{b}$-module as $\sym{b}$ permutes the subscripts of its basis $\{e_1,\ldots,e_b\}$. So $A_n\cong M^{(n-1,1)}$ and $S^{(n-1,1)}\cong\langle\{f_1,\ldots, f_{n-1}\}\rangle$. Identify a basis of $S^{(n-1,1)}$ with $\{f_1,\ldots,f_{n-1}\}$. If $n-m\geq 2$, also identify a basis of $S^{(n-m-1,1)}$ with $\{f_{m+1},\ldots,f_{n-1}\}$.
\item Let $a,b,c\in \mathbb{N}$. Set $J(a,b,c)=\{\underline{\mathbf{i}}=(i_1,\ldots,i_b)\in\mathbb{N}^b:\ a\leq i_1<\cdots<i_b\leq c\}$ and $J^+(a,b,c)=\{\underline{\mathbf{i}}=(i_1,\ldots,i_b)\in\mathbb{N}^b:\ a\leq i_1<\cdots<i_b<c\}$. Write $J(a,b)$, $J^+(a,b)$ to denote $J(a,b,n)$, $J^+(a,b,n)$ respectively. Let $b\leq n$ and let $\bigwedge^cA_b$, $\bigwedge^c S^{(n-1,1)}$ be the $c$th exterior powers of $A_b$, $S^{(n-1,1)}$ respectively. By convention, put $\bigwedge^0S^{(n-1,1)}=\mathbb{F}$ and $\bigwedge^1S^{(n-1,1)}=S^{(n-1,1)}$. If $c<n$ and $c\leq b$, then $\bigwedge^c A_b$ has a basis $\{e_{\underline{\mathbf{i}}}=e_{i_1}\wedge\cdots\wedge e_{i_c}:\ \underline{\mathbf{i}}=(i_1,\ldots,i_c)\in J(1,c,b)\}$ and $\bigwedge^cS^{(n-1,1)}$ has a basis $\{f_{\underline{\mathbf{i}}}=f_{i_1}\wedge\cdots\wedge f_{i_c}:\ \underline{\mathbf{i}}=(i_1,\ldots,i_c)\in J^+(1,c)\}$. Use $\mathcal{B}_{A_b}^c$, $\mathcal{B}_S^c$ to denote the two bases respectively. If $r+m-n<c<r$ and $n-m\geq 2$, then the $(r-c)$th exterior power $\bigwedge^{r-c}S^{(n-m-1,1)}$ has a basis $\{f_{\underline{\mathbf{i}}}:\ \underline{\mathbf{i}}\in J^+(m+1,r-c)\}$. For any $v\in\bigwedge^rS^{(n-1,1)}$ and $f_{\underline{\mathbf{i}}}\in \mathcal{B}_S^r$, let $d_{\underline{\mathbf{i}}}^{v}$ be the coefficient of $f_{\underline{\mathbf{i}}}$ in the linear combination of elements of $\mathcal{B}_S^r$ representing $v$.
\item Let $\ell$ be an integer such that $0\leq \ell\leq r$. Recall that $\mathbf{m}=\{1,\ldots,m\}$ and set $$B_\ell=\{f_{\underline{\mathbf{i}}}=f_{i_1}\wedge\cdots\wedge f_{i_r}\in\mathcal{B}_S^r:\ |\{i_1,\ldots,i_r\}\cap\mathbf{m}|=\ell\}.$$ Note that $B_{u}\cap B_{v}=\varnothing$ if $u\neq v$ and observe that $B_{\ell}\neq\varnothing$ if and only if \begin{align} r+m-n+1\leq \ell\leq m. \end{align} For any $v\in (\bigwedge^rS^{(n-1,1)})^{\sym{m}}$ and $B_{\ell}\neq\varnothing$, put $\overline{B_{\ell}^v}=\sum_{w\in B_{\ell}} d_{w}^{v}w,$ where $d_{w}^v=d_{\underline{\mathbf{i}}}^v$ if $w=f_{\underline{\mathbf{i}}}$. For completeness, let $\overline{B_{\ell}^v}=0$ if $B_{\ell}=\varnothing$. Then $v=\sum_{i=0}^r\overline{B_{i}^v}.$ Moreover, for any $g\in\sym{m}$, $g\overline{B_{i}^v}=\overline{B_{i}^v}$ for all $0\leq i\leq r$ as $gv=v$.
\item Let $b,c\in \mathbb{N}$ such that $b,c<n$. Set $v_{b,c}=\sum_{\underline{\mathbf{i}}\in J(1,c,b)}f_{\underline{\mathbf{i}}}$ if $c\leq b$ and put $v_{b,c}=0$ if $b<c$. So $v_{b,c}\in\bigwedge^c S^{(n-1,1)}$. Note that $\langle \{gv_{b,c}:\ g\in\mathbb{F}\sym{b}\}\rangle$ is isomorphic to an $\mathbb{F}\sym{b}$-submodule of $\bigwedge^c A_b$. In particular, $\langle v_{b,c}\rangle\subseteq \bigwedge^c A_b$ up to $\mathbb{F}\sym{b}$-isomorphism.
\end{enumerate} \end{nota}
We now present our calculation results.
\begin{lem}\label{L;S(n-1,1)} We have \[\mathcal{F}_m(S^{(n-1,1)})\cong\begin{cases} S^{(n-m-1,1)}\oplus\mathbb{F}, & \text{if}\ p\mid m,\ 1<n-m,\\ M^{(n-m-1,1)}, &\text{if}\ p\nmid m,\ 1<n-m,\\ \mathbb{F}, &\text{if}\ n-m=1.\end{cases} \] \end{lem}
\begin{proof} When $n-m\geq 2$, note that $$(S^{(n-1,1)})^{\sym{m}}=\langle\{f_{m+1},\ldots,f_{n-1}\}\rangle\oplus\langle f_1+\cdots+f_m\rangle.$$ If $p\mid m$, we get that $\sum_{i=1}^mf_i=\sum_{i=1}^me_i$ and \begin{align*} \mathcal{F}_m(S^{(n-1,1)})&=\langle\{f_{m+1},\ldots,f_{n-1}\}\rangle\oplus\langle e_1+\cdots+e_m\rangle\\ &\cong S^{(n-m-1,1)}\oplus\mathbb{F}. \end{align*} If $p\nmid m$, write $sum$ for $\frac{1}{m}\sum_{i=1}^me_i$; we then have \begin{align*} \mathcal{F}_m(S^{(n-1,1)})=\langle\{sum-e_{m+1},\ldots, sum-e_n\}\rangle \cong M^{(n-m-1,1)}. \end{align*} When $n-m=1$, we get $\mathcal{F}_m(S^{(n-1,1)})=\langle\sum_{i=1}^{n-1}f_i\rangle$, which is a trivial module. The lemma now follows. \end{proof}
For further calculation, recall that $S^{(n-s,1^s)}\cong\bigwedge^s S^{(n-1,1)}$ for all $0\leq s<n$ (see \cite[Proposition 2.3]{Muller}).
\begin{lem}\label{L;Afixedpoints} Let $p>2$, $b,c\in \mathbb{N}$ and $c\leq b\leq n$. We have \[(\bigwedge^cA_b)^{\sym{b}}=\begin{cases} 0, & \text{if}\ c>1,\\ \langle \{e_1+\cdots+e_b\}\rangle, & \text{if}\ c=1. \end{cases}\] \end{lem}
\begin{proof} For any $u\in (\bigwedge^cA_b)^{\sym{b}}$ and $u=\sum_{\underline{\mathbf{i}}\in J(1,c,b)}k_{\underline{\mathbf{i}}} e_{\underline{\mathbf{i}}}$, we have $k_{\underline{\mathbf{i}}}=k_{\underline{\mathbf{j}}}$ for any $\underline{\mathbf{i}}$, $\underline{\mathbf{j}}\in J(1,c,b)$ by using suitable permutations to act on $u$ and comparing the coefficients. So $(\bigwedge^cA_b)^{\sym{b}}\subseteq\langle\sum_{\underline{\mathbf{i}}\in J(1,c,b)}e_{\underline{\mathbf{i}}}\rangle$. When $c>1$, set $v=\sum_{\underline{\mathbf{i}}\in J(1,c,b)}e_{\underline{\mathbf{i}}}$ and get that $v=w+\sum_{(1,2,\ldots,i_c)\in J(1,c,b)}e_1\wedge e_2 \wedge\cdots\wedge e_{i_c}$. Note that no basis element $e_1\wedge e_2\wedge\cdots\wedge e_{j_c}$ of $\mathcal{B}_{A_b}^c$ is involved in $w$.
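For instance (a small case given only for orientation), when $b=3$ and $c=2$ we have $v=e_1\wedge e_2+e_1\wedge e_3+e_2\wedge e_3$ and $w=e_1\wedge e_3+e_2\wedge e_3$, so that $$(1,2)v=e_1\wedge e_3+e_2\wedge e_3-e_1\wedge e_2\neq v$$ as $p>2$.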
Now use the transposition $(1,2)$ to act on both sides of the equality of $v$ and obtain that $(1,2)v=(1,2)w-\sum_{(1,2,\ldots,i_c)\in J(1,c,b)}e_1\wedge e_2\wedge\cdots\wedge e_{i_c}.$ So $(1,2)v\neq v$ and $(\bigwedge^cA_b)^{\sym{b}}=0$. When $c=1$, it is clear that $A_b^{\sym{b}}=\langle e_1+\cdots+e_b\rangle$. The lemma follows. \end{proof}
\begin{lem}\label{L;p>2,calculation} Let $p>2$. If $r>1$, for any $v\in(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$, we have \[v=\begin{cases} \overline{B_{0}^v}+\overline{B_{1}^v}, & \text{if}\ r<n-m,\\ \overline{B_{1}^v}, &\text{if}\ r=n-m,\\ 0, & \text{if}\ r>n-m,\end{cases}\] where $\overline{B_{1}^v}=\sum_{\underline{\mathbf{i}}\in J^+(m+1,r-1)}k_{\underline{\mathbf{i}}}v_{m,1}\wedge f_{\underline{\mathbf{i}}}$ and $k_{\underline{\mathbf{i}}}\in\mathbb{F}$ for all $\underline{\mathbf{i}}\in J^+(m+1,r-1)$. \end{lem}
\begin{proof} Recall that $v=\sum_{i=0}^r\overline{B_{i}^v}$. Moreover, $\overline{B_{i}^v}\in (\bigwedge^rS^{(n-1,1)})^{\sym{m}}$ for all $0\leq i\leq r$ as $v\in(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$. We claim that $\overline{B_{i}^v}=0$ if $1<i\leq r$. If $B_i\neq \varnothing$, by the definition and an easy calculation, we get that \begin{equation} \overline{B_{i}^v}\in\begin{cases} \langle v_{m,i}\wedge w_i\rangle, & \text{if}\ 0<i<r,\\ \langle v_{m,r}\rangle, & \text{if}\ i=r,\end{cases} \end{equation} where $w_i\in\langle\{f_{\underline{\mathbf{i}}}:\ \underline{\mathbf{i}}\in J^+(m+1,r-i)\}\rangle$. Suppose that $\overline{B_{u}^v}\neq 0$ for some $1<u\leq r$. We have $u\leq m$, $v_{m,u}\neq 0$ and $w_u\neq 0$ if $u<r$. Notice that $\langle v_{m,u}\rangle\subseteq \bigwedge^u A_m$ up to $\mathbb{F}\sym{m}$-isomorphism. By Lemma \ref{L;Afixedpoints}, we get that $v_{m,u}$ is not fixed by $\sym{m}$ as $(\bigwedge^u A_m)^{\sym{m}}=0$. In particular, $v_{m,u}\wedge w_u\notin(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$ if $1<u<r$ and $v_{m,r}\notin(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$ if $u=r$. By $(4.2)$, these facts imply that $\overline{B_{u}^v}=0$, which is a contradiction. The claim is shown. When $r<n-m$, by $(4.1)$, note that $B_{0}\neq \varnothing$ and $B_{1}\neq \varnothing$. We therefore get $v=\overline{B_{0}^v}+\overline{B_{1}^v}$ by the claim. For the remaining cases, we can determine whether $B_{0}$ or $B_{1}$ is empty by the given conditions and $(4.1)$. Therefore, we obtain the corresponding results by the claim. The proof is now complete. \end{proof}
\begin{lem}\label{L;F(S),p>2} Let $p>2$. If $r>1$, then we have \[\mathcal{F}_m(S^{(n-r,1^r)})\cong\begin{cases} M, & \text{if}\ p\mid m,\ r<n-m,\\ N, & \text{if}\ p\nmid m,\ r<n-m,\\ sgn, & \text{if}\ r=n-m,\\ 0, & \text{if}\ r>n-m, \end{cases}\] where $M\cong S^{(n-m-r,1^r)}\oplus S^{(n-m-r+1,1^{r-1})}$ and $N\sim S^{(n-m-r,1^r)}+S^{(n-m-r+1,1^{r-1})}$.
\end{lem}
\begin{proof} It is sufficient to determine $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$. When $r<n-m$, define \begin{align*} \mathcal{C}_{0}=\{f_{\underline{\mathbf{i}}}:\ \underline{\mathbf{i}}\in J^+(m+1,r)\},\ \mathcal{C}_{1}=\{v_{m,1}\wedge f_{\underline{\mathbf{j}}}:\ \underline{\mathbf{j}}\in J^+(m+1,r-1)\}. \end{align*} Observe that the vectors of $\mathcal{C}_{0}\cup\mathcal{C}_{1}$ are all fixed by $\sym{m}$. Moreover, $\mathcal{C}_{0}\cup\mathcal{C}_{1}$ is linearly independent over $\mathbb{F}$. By Lemma \ref{L;p>2,calculation}, for any $v\in(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$, $v=\overline{B_{0}^v}+\overline{B_{1}^v}$, $\overline{B_{0}^v}\in\langle \mathcal{C}_0\rangle$ and $\overline{B_{1}^v}=\sum_{\underline{\mathbf{j}}\in J^+(m+1,r-1)}k_{\underline{\mathbf{j}}}v_{m,1}\wedge f_{\underline{\mathbf{j}}},$ where $k_{\underline{\mathbf{j}}}\in \mathbb{F}$ for all $\underline{\mathbf{j}}\in J^+(m+1,r-1)$. We thus conclude from these facts that $\mathcal{C}_0\cup\mathcal{C}_1$ is a basis of $(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$. If $p\mid m$, $\mathcal{C}_{1}$ degenerates to $\{(\sum_{i=1}^me_i)\wedge f_{\underline{\mathbf{j}}}:\ \underline{\mathbf{j}}\in J^+(m+1,r-1)\}$. Therefore, we get $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})=\langle \mathcal{C}_{0}\rangle\oplus\langle\mathcal{C}_{1}\rangle$, where $\langle \mathcal{C}_{0}\rangle\cong S^{(n-m-r,1^r)}$ and $\langle \mathcal{C}_{1}\rangle\cong S^{(n-m-r+1, 1^{r-1})}$ as $\mathbb{F}\sym{n-m}$-modules. If $p\nmid m$, let $S=\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$ and $P=\langle\mathcal{C}_{0}\rangle$. Notice that $P\cong S^{(n-m-r,1^r)}$ as $\mathbb{F}\sym{n-m}$-modules. We need to show that $S/P\cong S^{(n-m-r+1, 1^{r-1})}$ as $\mathbb{F}\sym{n-m}$-modules. For any $v\in S$, let $\overline{v}$ be the image of $v$ under the natural map from $S$ to $S/P$. Define a linear map $\phi$ from $S/P$ to $\bigwedge^{r-1}S^{(n-m-1,1)}$ by sending each $\overline{v_{m,1} \wedge f_{\underline{\mathbf{j}}}}$ to $f_{\underline{\mathbf{j}}}$. It is easy to see that $\phi$ is a linear isomorphism. To show that it is an $\mathbb{F}\sym{n-m}$-isomorphism, it is enough to check that the action of the transposition $(n-1,n)$ is preserved by $\phi$. Set $\sigma=(n-1,n)$ and $s=\sum_{i=1}^me_i$. Note that $v_{m,1}=s-me_n$ by the definition. For any $v_{m,1} \wedge f_{\underline{\mathbf{j}}}\in\mathcal{C}_1$, where $\underline{\mathbf{j}}=(j_1,\ldots,j_{r-1})$, we distinguish two cases.
\begin{enumerate}[\text{Case} 1:] \item $j_{r-1}<n-1.$ \end{enumerate} We have \begin{align*} &\phi(\sigma(\overline{v_{m,1}\wedge f_{j_1}\wedge\cdots\wedge f_{j_{r-1}}}))\\ &=\phi(\overline{(s-me_{n-1})\wedge (e_{j_1}-e_{n-1})\wedge\cdots\wedge(e_{j_{r-1}}-e_{n-1})})\\ &=\phi(\overline{(s-me_n+me_n-me_{n-1})\wedge (e_{j_1}-e_{n-1})\wedge\cdots\wedge(e_{j_{r-1}}-e_{n-1})})\\ &=\phi(\overline{(s-me_n)\wedge (e_{j_1}-e_{n-1})\wedge\cdots\wedge(e_{j_{r-1}}-e_{n-1})})\ (\text{modulo}\ P)\\ &=(e_{j_1}-e_{n-1})\wedge\cdots\wedge (e_{j_{r-1}}-e_{n-1})\\ &=\sigma(f_{j_1}\wedge\cdots\wedge f_{j_{r-1}})\\ &=\sigma\phi(\overline{v_{m,1}\wedge f_{j_1}\wedge\cdots\wedge f_{j_{r-1}}}). \end{align*} \begin{enumerate}[\text{Case} 2:] \item $j_{r-1}=n-1.$ \end{enumerate} We have \begin{align*} &\phi(\sigma(\overline{v_{m,1}\wedge f_{j_1}\wedge\cdots\wedge f_{n-1}}))\\ &=\phi(\overline{(s-me_{n-1})\wedge (e_{j_1}-e_{n-1})\wedge\cdots\wedge(e_{n}-e_{n-1})})\\ &=\phi(\overline{(s-me_n+me_n-me_{n-1})\wedge(e_{j_1}-e_{n-1})\wedge\cdots\wedge(e_n-e_{n-1})})\\ &=\phi(\overline{(s-me_n)\wedge(e_{j_1}-e_{n-1})\wedge\cdots\wedge(e_{n}-e_{n-1})})\\ &=\phi(-\overline{(s-me_n)\wedge(e_{j_1}-e_{n})\wedge\cdots\wedge(e_{n-1}-e_{n})})\\ &=-\phi(\overline{(s-me_n)\wedge f_{j_1}\wedge\cdots\wedge f_{n-1}})\\ &=-f_{j_1}\wedge\cdots\wedge f_{n-1}\\ &=\sigma(f_{j_1}\wedge\cdots\wedge f_{n-1})\\ &=\sigma\phi(\overline{v_{m,1}\wedge f_{j_1}\wedge\cdots\wedge f_{n-1}}). \end{align*} So $\phi$ is an $\mathbb{F}\sym{n-m}$-isomorphism. We thus get that $N\sim S^{(n-m-r,1^r)}+S^{(n-m-r+1,1^{r-1})}$ as $\bigwedge^{r-1}S^{(n-m-1,1)}\cong S^{(n-m-r+1,1^{r-1})}$. When $r=n-m$, by Lemma \ref{L;p>2,calculation} and a discussion similar to the one above, we obtain $(\bigwedge^rS^{(n-1,1)})^{\sym{m}}=\langle v_{m,1}\wedge f_{m+1}\wedge\cdots\wedge f_{n-1}\rangle$ and $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})\cong sgn$. When $r>n-m$, by Lemma \ref{L;p>2,calculation}, $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})=0$ as $(\bigwedge^rS^{(n-1,1)})^{\sym{m}}=0$. The proof is now complete. \end{proof}
\begin{lem}\label{L;p=2,calculation} Let $p=2$, $a,b\in \mathbb{N}$ and $a\leq b<n$. For any $s\in\{b+1,\ldots,n\}$, we have \begin{align*} \sum_{(i_1,\ldots,i_a)\in J(1,a,b)}(e_{i_1}+e_s)\wedge\cdots\wedge(e_{i_a}+e_s)=\begin{cases} v_{b,a}+bf_s, &\text{if}\ a=1,\\ v_{b,a}+(b-a+1)f_s\wedge v_{b,a-1}, &\text{if}\ a>1. \end{cases} \end{align*} \end{lem}
\begin{proof} The case $a=1$ is trivial by a direct calculation. We now consider the case $a>1$. For any $(i_1,\ldots,i_\ell,\ldots,i_a)\in J(1,a,b)$, we shall write $f_{i_1}\wedge\cdots\wedge\widehat{f_{i_\ell}}\wedge\cdots\wedge f_{i_a}$ to denote the corresponding vector of $\bigwedge^{a-1}S^{(n-1,1)}$ without the component $f_{i_\ell}$.
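As a quick illustration of the identity (it is not used in the proof), take $a=b=2$ and $s=3$, so that $J(1,2,2)=\{(1,2)\}$ and, since $p=2$, $$(e_1+e_3)\wedge(e_2+e_3)=(f_1+f_3)\wedge(f_2+f_3)=f_1\wedge f_2+f_3\wedge(f_1+f_2)=v_{2,2}+f_3\wedge v_{2,1},$$ in agreement with the case $a>1$, where $b-a+1=1$.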
Since $p=2$, we have \begin{align*} &\sum_{(i_1,\ldots,i_a)\in J(1,a,b)}(e_{i_1}+e_s)\wedge\cdots\wedge(e_{i_a}+e_s)\\ &=\sum_{(i_1,\ldots,i_a)\in J(1,a,b)}(f_{i_1}+f_s)\wedge\cdots\wedge(f_{i_a}+f_s)\\ &=v_{b,a}+f_s\wedge\Big(\sum_{(i_1,\ldots,i_a)\in J(1,a,b)}\sum_{c=1}^a f_{i_1}\wedge\cdots\wedge\widehat{f_{i_c}}\wedge\cdots\wedge f_{i_a}\Big)\\ &=v_{b,a}+(b-a+1)f_s\wedge\Big(\sum_{\underline{\mathbf{j}}\in J(1,a-1,b)} f_{\underline{\mathbf{j}}}\Big)\\ &=v_{b,a}+(b-a+1)f_s\wedge v_{b,a-1}, \end{align*} as desired. \end{proof}
\begin{lem}\label{L;p=2,calculation1} Let $p=2$ and $s=r+m-n+1$. If $r>1$, for any $v\in(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$, we have \[v=\begin{cases} \sum_{i=0}^r\overline{B_{i}^v}, & \text{if}\ r<n-m,\ r\leq m,\\ \sum_{i=0}^m\overline{B_{i}^v}, & \text{if}\ m<r<n-m,\\ \sum_{i=s}^r\overline{B_{i}^v}, & \text{if}\ n-m\leq r\leq m,\\ \sum_{i=s}^m\overline{B_{i}^v}, & \text{if}\ n-m\leq r,\ r>m,\end{cases}\] where, for all $0<i<r$, $\overline{B_{i}^v}= \sum_{\underline{\mathbf{i}}\in J^+(m+1,r-i)}k_{\underline{\mathbf{i}}}v_{m,i}\wedge f_{\underline{\mathbf{i}}}$ and $k_{\underline{\mathbf{i}}}\in\mathbb{F}$ for all $\underline{\mathbf{i}}\in J^+(m+1,r-i)$. Furthermore, $\overline{B_{r}^v}\in\langle v_{m,r}\rangle$. \end{lem}
\begin{proof} Recall that $v=\sum_{i=0}^r\overline{B_{i}^v}$. Moreover, we have $\overline{B_{i}^v}\in(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$ for all $0\leq i\leq r$ as $v\in(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$. If $B_i\neq \varnothing$, an easy calculation gives that \begin{equation*} \overline{B_{i}^v}\in\begin{cases} \langle v_{m,i}\wedge w_{i}\rangle, & \text{if}\ 0<i<r,\\ \langle v_{m,r}\rangle, & \text{if}\ i=r,\end{cases} \end{equation*} where $w_i\in\langle\{f_{\underline{\mathbf{i}}}:\ \underline{\mathbf{i}}\in J^+(m+1,r-i)\}\rangle$. When $m<r$, by $(4.1)$, note that $B_{i}=\varnothing$ for all $m<i\leq r$. When $n-m\leq r$, notice that $0<s$. Therefore, by $(4.1)$ again, $B_{i}=\varnothing$ for all $0\leq i<s$. We now get the corresponding results according to these facts and the given conditions. The lemma follows. \end{proof}
\begin{lem}\label{L;F(S),p=2} Let $p=2$ and $s=r+m-n+1$. If $r>1$, then we have \[\mathcal{F}_m(S^{(n-r,1^r)})\sim\begin{cases} \sum_{i=0}^rS^{(n-m-r+i,1^{r-i})}, & \text{if}\ r<n-m,\ r\leq m,\\ \sum_{i=0}^mS^{(n-m-r+i,1^{r-i})}, & \text{if}\ m<r<n-m,\\ \sum_{i=s}^rS^{(n-m-r+i,1^{r-i})}, & \text{if}\ n-m\leq r\leq m,\\ \sum_{i=s}^mS^{(n-m-r+i,1^{r-i})}, & \text{if}\ n-m\leq r,\ r>m.\end{cases}\] \end{lem}
\begin{proof} It is sufficient to work on $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$. Let $\ell\in \mathbb{N}$ and $\ell<r$.
When $J^+(m+1,r)$, $J^+(m+1,r-\ell)\neq\varnothing$ and $\ell\leq m$, define \begin{align*} \mathcal{C}_{0}=\{f_{\underline{\mathbf{i}}}:\ \underline{\mathbf{i}}\in J^+(m+1,r)\},\ \mathcal{C}_{\ell}=\{v_{m,\ell}\wedge f_{\underline{\mathbf{j}}}:\ \underline{\mathbf{j}}\in J^+(m+1,r-\ell)\}. \end{align*} Set $\mathcal{C}_{\ell}=\varnothing$ if $J^+(m+1,r-\ell)=\varnothing$ or $m<\ell$ and put $\mathcal{C}_{i,j}=\biguplus_{k=i}^j\mathcal{C}_{k}$ for all $1\leq i\leq j<r$. When $r<n-m$ and $r\leq m$, define $\mathcal{C}=\{v_{m,r}\}\cup\mathcal{C}_{0}\cup\mathcal{C}_{1,r-1}$. Note that each vector of $\mathcal{C}$ is fixed by $\sym{m}$. Moreover, $\mathcal{C}$ is linearly independent over $\mathbb{F}$. By Lemma \ref{L;p=2,calculation1}, for any $v\in(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$, $v=\sum_{a=0}^r\overline{B_{a}^v}$, $\overline{B_{0}^v}\in\langle \mathcal{C}_0\rangle$, $\overline{B_{r}^v}\in\langle v_{m,r}\rangle$, $\overline{B_{a}^v}=\sum_{\underline{\mathbf{j}}\in J^+(m+1,r-a)}k_{\underline{\mathbf{j}}}v_{m,a}\wedge f_{\underline{\mathbf{j}}},$ where $1\leq a<r$ and $k_{\underline{\mathbf{j}}}\in\mathbb{F}$ for all $\underline{\mathbf{j}}\in J^+(m+1,r-a)$. By these facts, we conclude that $\mathcal{C}$ is a basis of $(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$. We shall construct the desired Specht filtration for $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$. Let $U_0=\langle \mathcal{C}_{0}\rangle$ and $U_i=\langle \mathcal{C}_{0}\cup\mathcal{C}_{1,i}\rangle$ for all $1\leq i<r$. Set $U_r=\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$. We have a chain of subspaces of $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$ given by \begin{equation*} 0\subset U_0\subset U_1\subset\cdots\subset U_r. \end{equation*} By the definition of $U_0$, note that $U_0\cong S^{(n-m-r,1^r)}$ as $\mathbb{F}\sym{n-m}$-modules. We claim that $U_i$ is an $\mathbb{F}\sym{n-m}$-module for all $1\leq i<r$. According to the definition of $U_i$, it is enough to check that every element of $\mathcal{C}_{1,i}$ is still in $U_i$ under the action of the transposition $(n-1,n)$. Write $\sigma=(n-1,n)$. For any $v_{m,\ell}\wedge f_{\underline{\mathbf{j}}}\in\mathcal{C}_{1,i}$, note that $1\leq\ell\leq i<r\leq m$. If $\ell>1$, we have the following calculation: \begin{align*} \sigma(v_{m,\ell}\wedge f_{\underline{\mathbf{j}}})=\sigma v_{m,\ell}\wedge \sigma f_{\underline{\mathbf{j}}} =(v_{m,\ell}+(m-\ell+1)f_{n-1}\wedge v_{m,\ell-1})\wedge \sigma f_{\underline{\mathbf{j}}}\in U_i, \end{align*} where the last equality follows from Lemma \ref{L;p=2,calculation}. Similarly, one can complete the check when $\ell=1$. The claim is shown.
Finally, we will prove that $U_i/U_{i-1}\cong \bigwedge^{r-i}S^{(n-m-1,1)}$ as $\mathbb{F}\sym{n-m}$-modules for all $1\leq i\leq r$. For any $v\in U_i$, write $\overline{v}$ to denote the image of $v$ under the natural map from $U_i$ to $U_i/U_{i-1}$. When $1\leq i<r$, define a linear map $\phi_i$ from $U_i/U_{i-1}$ to $\bigwedge^{r-i}S^{(n-m-1,1)}$ by sending each $\overline{v_{m,i} \wedge f_{\underline{\mathbf{j}}}}$ to $f_{\underline{\mathbf{j}}}$. It is easy to observe that $\phi_i$ is a linear isomorphism. To show that $\phi_i$ is an $\mathbb{F}\sym{n-m}$-isomorphism, we only need to check that the action of $\sigma$ is preserved by $\phi_i$. If $i=1$, one may carry out a check similar to the one given in the proof of Lemma \ref{L;F(S),p>2}. If $1<i<r$, one can use the calculation from the check that $U_i$ is an $\mathbb{F}\sym{n-m}$-submodule to finish the verification. Therefore, $\phi_i$ is an $\mathbb{F}\sym{n-m}$-isomorphism for all $1\leq i<r$. Define a linear map $\phi_r$ from $U_r/U_{r-1}$ to $\mathbb{F}$ by sending $\overline{v_{m,r}}$ to a generator of $\mathbb{F}$. By Lemma \ref{L;p=2,calculation}, it is clear that $\phi_r$ is an $\mathbb{F}\sym{n-m}$-isomorphism. We have now shown that $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$ has the desired Specht filtration as $\bigwedge^{r-i}S^{(n-m-1,1)}\cong S^{(n-m-r+i,1^{r-i})}$ for all $1\leq i\leq r$. The proofs of all the other cases are similar to that of the case $r<n-m$ and $r\leq m$. We only list the corresponding assertions for each case. One may follow the proof presented above to justify them. When $m<r<n-m$, $(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$ has a basis $\mathcal{C}_{0}\cup\mathcal{C}_{1,m}$. Let $U_0=\langle \mathcal{C}_{0}\rangle$, $U_i=\langle\mathcal{C}_{0}\cup\mathcal{C}_{1,i}\rangle$ for all $1\leq i<m$ and $U_m=\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$. Then the chain of $\mathbb{F}\sym{n-m}$-modules $$ 0\subset U_0\subset U_1\subset\cdots\subset U_m$$ gives the desired Specht filtration of $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$. When $n-m\leq r\leq m<n-1$, $(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$ has a basis $\{v_{m,r}\}\cup\mathcal{C}_{s,r-1}$. Let $U_i=\langle\mathcal{C}_{s,i}\rangle$ for all $s\leq i<r$ and $U_{r}=\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$. Then the chain of $\mathbb{F}\sym{n-m}$-modules $$ 0\subset U_s\subset U_{s+1}\subset\cdots\subset U_r$$ provides the desired Specht filtration for $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$. When $m=n-1$, $(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$ has a basis $\{v_{m,r}\}$. So $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})\cong S^{(1)}$. When $n-m\leq r$ and $r>m$, $(\bigwedge^rS^{(n-1,1)})^{\sym{m}}$ has a basis $\mathcal{C}_{s,m}$. Set $U_i=\langle\mathcal{C}_{s,i}\rangle$ for all $s\leq i<m$ and $U_m=\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$.
Then the chain of $\mathbb{F}\sym{n-m}$-modules $$ 0\subset U_s\subset U_{s+1}\subset\cdots\subset U_m$$ induces the desired Specht filtration of $\mathcal{F}_m(\bigwedge^rS^{(n-1,1)})$. The proof is now complete. \end{proof}
A direct corollary of Lemmas \ref{L;S(n-1,1)}, \ref{L;F(S),p>2} and \ref{L;F(S),p=2} is given as follows.
\begin{cor}\label{C;Hook filtration} Let $a$ and $b$ be integers such that $0\leq a<n$ and $1\leq b\leq n$. Then the following assertions hold. \begin{enumerate} \item [\em(i)] If $\mathcal{F}_b(S^{(n-a,1^a)})\neq 0$, then there exists a Specht filtration of $\mathcal{F}_b(S^{(n-a,1^a)})$ such that all the successive quotient factors of the filtration are labelled by the hook partitions of $n-b$. \item [\em(ii)] Let $M$ be an $\mathbb{F}\sym{n}$-module with a Specht filtration and $\mathcal{F}_b(M)\neq 0$. If $b<p$ and all the successive quotient factors of the filtration are labelled by hook partitions of $n$, then $\mathcal{F}_b(M)$ has a Specht filtration with all successive quotient factors labelled by hook partitions of $n-b$. \end{enumerate} \end{cor}
\begin{proof} Note that $\mathcal{F}_n$ takes an $\mathbb{F}\sym{n}$-module to zero or a direct sum of trivial modules. Also notice that $\mathcal{F}_b(S^{(n)})$ is a trivial module. (i) is clear by Lemmas \ref{L;S(n-1,1)}, \ref{L;F(S),p>2} and \ref{L;F(S),p=2}. For (ii), as $\mathcal{F}_b$ is now exact, we get the desired result by (i) and induction on the number of quotient factors of the filtration. \end{proof}
One more result is needed to prove Theorem \ref{T;A}. It is a part of \cite[Theorem 4.5]{Lim2}. To state it and finish the proof of Theorem \ref{T;A}, we need one more piece of notation. Let $a\in \mathbb{N}$ and $ap\leq n$. Let $E_a$ be the elementary abelian $p$-subgroup $\langle \bigcup_{i=a}^b\{s_i\}\rangle$ of $\sym{n}$, where $b=\lfloor\frac{n}{p}\rfloor$ and $s_i=((i-1)p+1,(i-1)p+2,\ldots, ip).$
\begin{lem}\label{L;Hook sjt} Let $p\mid n$ and let $a$ be the remainder when $r$ is divided by $p$. If $2\nmid a$, then the stable generic Jordan type of $S^{(n-r,1^r)}{\downarrow_{E_1}}$ is $[p-1]^b$, where $b\in \mathbb{N}$. \end{lem}
We now begin to finish the proof of Theorem \ref{T;A}.
\begin{prop}\label{P;Hook n>p} Let $n>p>2$. Then $S^{(n-r,1^r)}$ is a trivial source module if and only if $(n-r,1^r)\in JM(n)_p$. \end{prop}
\begin{proof} One direction is clear by Theorem \ref{T;Hemmer}. For the other direction, we may assume further that $p\mid n$, since it is well-known that $S^{(n-r,1^r)}$ is simple if $p\nmid n$ (see \cite[Theorem 24.1]{GJ1}). Let $a$ be the remainder when $r$ is divided by $p$. By Lemmas \ref{L;Hook sjt} and \ref{T;permutaiton}, we may assume further that $r>1$ and $2\mid a$. We consider the following cases. \begin{enumerate}[\text{Case} 1:] \item $a=0.$ \end{enumerate} When $n-r\geq 2p$, by Lemma \ref{L;F(S),p>2}, we have \begin{align*} &\mathcal{F}_p(S^{(n-r,1^r)})\cong S^{(n-p-r,1^r)}\oplus S^{(n-p-r+1,1^{r-1})},\\ &\mathcal{F}_p(S^{(n-p-r+1,1^{r-1})})\cong S^{(n-2p-r+1,1^{r-1})}\oplus S^{(n-2p-r+2,1^{r-2})}.
\end{align*} Using Lemma \ref{L;Hook sjt}, we get that the stable generic Jordan type of $S^{(n-2p-r+2,1^{r-2})}{\downarrow_{E_3}}$ is $[p-1]^b$, where $b\in \mathbb{N}$. Therefore, $S^{(n-2p-r+2,1^{r-2})}$ is not a trivial source module by Lemma \ref{T;permutaiton}. By Corollary \ref{L;permutation}, this fact implies that $S^{(n-r,1^r)}$ is not a trivial source module. When $n-r<2p$, note that $n-r=p$ since $p\mid n-r$ and $n>r$. By Lemma \ref{L;p-permutation modules} (iii), we know that $S^{(r+1,1^{p-1})}$ is a trivial source module if and only if $S^{(p,1^r)}$ is a trivial source module. Notice that $r>p-1$ as $r>1$ and $a=0$. We now apply $\mathcal{F}_p$ to $S^{(r+1,1^{p-1})}$ and obtain that $$\mathcal{F}_p(S^{(r+1,1^{p-1})})\cong S^{(r-p+1,1^{p-1})}\oplus S^{(r-p+2,1^{p-2})}$$ by Lemma \ref{L;F(S),p>2}. By Lemma \ref{L;Hook sjt}, the stable generic Jordan type of $S^{(r-p+2,1^{p-2})}{\downarrow_{E_2}}$ is $[p-1]^c$, where $c\in \mathbb{N}$. Therefore, by the same reasoning as in the case $n-r\geq 2p$ and Lemma \ref{L;p-permutation modules} (iii), we deduce that $S^{(p,1^r)}$ is not a trivial source module. \begin{enumerate}[\text{Case} 2:] \item $0<a<p.$ \end{enumerate} When $n-r>p$, by Lemma \ref{L;F(S),p>2}, we have $$\mathcal{F}_p(S^{(n-r,1^r)})\cong S^{(n-p-r,1^r)}\oplus S^{(n-p-r+1,1^{r-1})}.$$ By Lemma \ref{L;Hook sjt} again, the stable generic Jordan type of $S^{(n-p-r+1,1^{r-1})}{\downarrow_{E_2}}$ is $[p-1]^d$, where $d\in \mathbb{N}$. So $S^{(n-p-r+1, 1^{r-1})}$ is not a trivial source module by Lemma \ref{T;permutaiton}. We thus deduce that $S^{(n-r,1^r)}$ is not a trivial source module by Corollary \ref{L;permutation}. For the remaining case, note that $n-r\neq p$ as $p\mid n$ and $0<a<p$. When $n-r<p$, if $a=p-1$, we have $n-r=1$ as $p\mid n$ and $n-r<p$. So $S^{(n-r,1^r)}$ is in fact the simple module $S^{(1^{n})}$. We thus assume that $a<p-1$ in the following discussion. By Lemma \ref{L;p-permutation modules} (iii), we know that $S^{(r+1,1^{n-r-1})}$ is a trivial source module if and only if $S^{(n-r,1^r)}$ is a trivial source module. Note that $r>p$. Otherwise, we get $2p\leq n-r+r<p+p=2p$, which is absurd. Moreover, observe that $n-r-1\equiv p-1-a\pmod p$, $2\mid p-1-a$ and $p-1-a>0$. One may write a proof similar to the one given in the case $n-r>p$ to deduce that $S^{(r+1,1^{n-r-1})}$ is not a trivial source module. Using Lemma \ref{L;p-permutation modules} (iii) again, we get that $S^{(n-r,1^r)}$ is not a trivial source module. By the analysis of the above two cases, we have shown that $S^{(n-r,1^r)}$ is not a trivial source module if $n>p>2$, $p\mid n$ and $n-r>1$. Therefore, if $n>p>2$, $S^{(n-r,1^r)}$ is a trivial source module only if it is simple. The proof is now complete. \end{proof}
\begin{rem}\label{R;Othercases} Let $p>2$. If $n<p$, all the Specht modules are trivial source modules. Moreover, they are all simple. If $n=p$, let $a$ be an integer such that $0\leq a<p$. It is known that $S^{(p-a,1^a)}$ is a trivial source module if and only if $2\mid a$ (see \cite{Erdmann}). These trivial source Specht modules are non-simple except for $S^{(p)}$ and $S^{(1^p)}$. \end{rem}
We now deal with the case $p=2$. In this case, all the indecomposable Specht modules labelled by hook partitions of $n$ were classified by Murphy in \cite{Murphy}. Let us state her result as follows.
If $2\nmid n$ and $n\geq 2r$, $S^{(n-r,1^r)}$ is indecomposable if and only if $n\equiv 2r+1\pmod {2^L}$, where $L\in \mathbb{N}$ and $2^{L-1}\leq r<2^L$. If $2\nmid n$ and $2r>n$, $S^{(n-r,1^r)}$ is indecomposable if and only if $n\equiv 2r+1\pmod {2^{L'}}$, where $L'\in \mathbb{N}$ and $2^{{L'}-1}\leq n-r-1<2^{L'}$. If $2\mid n$, all Specht modules labelled by hook partitions of $n$ are indecomposable. We need the following lemma to conclude this case.
\begin{lem}\label{L;p=2Hook} Let $p=2$ and $2\nmid n$. If $S^{(n-r,1^r)}$ is indecomposable, then \begin{enumerate} \item [\em(i)] $S^{(n-r,1^r)}$ is a trivial source module. \item [\em(ii)] $S^{(n-r,1^r)}$ is non-simple if and only if $r>1$ and $r\neq n-2,\ n-1$. \end{enumerate} \end{lem}
\begin{proof} Recall that $r\geq 1$. We now work on $\bigwedge^rS^{(n-1,1)}$. As $2\nmid n$, it is well-known that $\bigwedge^rS^{(n-1,1)}\mid \bigwedge^r M^{(n-1,1)}$. (i) thus follows as $\bigwedge^r M^{(n-1,1)}$ is a permutation module. (ii) is a part of \cite[Theorem 23.7]{GJ1}. \end{proof}
\begin{rem}\label{R;p=2Hook} Let $p=2$. If $2\nmid n$, all the non-simple trivial source Specht modules labelled by hook partitions of $n$ are determined by Lemma \ref{L;p=2Hook} and Murphy's result. If $2\mid n$, let $P$ be a Sylow $2$-subgroup of $\sym{n}$. By \cite[Corollary 4.4, Theorem 4.5]{MP}, note that $P$ is a vertex of $S^{(n-r,1^r)}$ and $S^{(n-r,1^r)}{\downarrow_P}$ is a $P$-source of $S^{(n-r,1^r)}$. In particular, only the trivial module is a trivial source Specht module in this case. \end{rem}
It is trivial to see that $S^{(1)}$ is a trivial source module. Theorem \ref{T;A} is now proved by combining Theorem \ref{T;Hemmer}, Proposition \ref{P;Hook n>p} and Remarks \ref{R;Othercases}, \ref{R;p=2Hook}.

\section{The Specht modules labelled by two-part partitions}
The proof of Theorem \ref{T;C} will be presented in this section. To finish the proof, we first work on Specht modules labelled by partitions with at most two columns and then translate the obtained results to the Specht modules labelled by two-part partitions by Lemma \ref{L;p-permutation modules} (iii). Let $\lambda=(\lambda_1,\ldots,\lambda_\ell)\vdash n$, $s\in \mathbb{N}$ and $s\leq \ell$. For our purpose, define $\overline{\lambda}^s$ to be the partition obtained from $\lambda$ by deleting the first $s$ rows of $\lambda$. Namely, we have $\overline{\lambda}^s=(\lambda_{s+1},\ldots, \lambda_\ell)\vdash n-\sum_{i=1}^s\lambda_i$ if $s<\ell$ and get $\varnothing$ if $s=\ell$. For any non-negative integer $m$, write $\ell_p(m)$ to denote the smallest non-negative integer $a$ such that $m<p^a$. We need some preliminary results.
\begin{lem}\label{L;James} Let $\lambda=(\lambda_1,\ldots,\lambda_\ell)\vdash n$. We have \begin{enumerate} \item [\em(i)] \cite[Theorem 24.4]{GJ1} $\textnormal{Hom}_{\mathbb{F}\sym{n}}(\mathbb{F},S^{\lambda})\neq 0$ if and only if $\lambda_i\equiv -1 \pmod {p^{\ell_p(\lambda_{i+1})}}$ for all $1\leq i<\ell$.
\item [\em(ii)] \cite[Corollary 13.17]{GJ1} $\textnormal{Hom}_{\mathbb{F}\sym{n}} (\mathbb{F},S_{\lambda})\neq 0$ implies that $\lambda=(n)$ if $p>2$. \end{enumerate} \end{lem}
Let $\lambda\vdash n$. We will call $\lambda$ a James partition if $\textnormal{Hom}_{\mathbb{F}\sym{n}}(\mathbb{F},S^{\lambda})\neq 0$.
\begin{lem}\label{L;Jamestrivial} Let $p>2$ and $\lambda$ be a James partition of $n$. Then $S^\lambda$ is a trivial source module if and only if $\lambda=(n)$. \end{lem}
\begin{proof} One direction is clear. For the other direction, let $P$ be a vertex of $S^{\lambda}$ and suppose that $\lambda\neq (n)$. As $S^{\lambda}$ is a trivial source module, we have $S^{\lambda}\mid \mathbb{F}_P{\uparrow^{\sym{n}}}$. Note that Lemma \ref{L;James} (ii) implies that $S^{\lambda}\not\cong \textnormal{Sc}_{\sym{n}}(P)$. However, since $\lambda$ is a James partition, we come to a contradiction as we get two trivial submodules for the transitive permutation module $\mathbb{F}_P{\uparrow^{\sym{n}}}$. So $\lambda$ has to be $(n)$ if $S^{\lambda}$ is a trivial source module. The lemma is shown. \end{proof}
Let $a$ and $b$ be non-negative integers such that $b<p-1$ and $n=a(p-1)+b$. If $p>2$, it is well-known that $S^{((a+1)^b, a^{p-1-b})}$ has the simple head $sgn$ (see \cite[Example 24.5 (iii)]{GJ1}). We now prove a result which may be of independent interest.
\begin{prop} Let $p>2$ and $n=a(p-1)+b$, where $a$ and $b$ are non-negative integers such that $b<p-1$. Then $S^{((a+1)^b, a^{p-1-b})}$ is a trivial source module if and only if $n<p$. \end{prop}
\begin{proof} One direction is trivial. To prove the other direction, write $\epsilon$ to denote $((a+1)^b, a^{p-1-b})$. If $S^{\epsilon}$ is a trivial source module, note that $S_{{\epsilon^{'}}}$ is also a trivial source module by Lemma \ref{L;p-permutation modules} (ii). Moreover, $S_{{\epsilon}^{'}}$ has a trivial quotient module as $S^{\epsilon}$ has the simple head $sgn$. Let $P$ be a vertex of $S_{\epsilon^{'}}$. The two facts imply that $S_{\epsilon^{'}}\mid {\mathbb{F}_P}{\uparrow^{\sym{n}}}$ and $S_{\epsilon^{'}}\cong Sc_{\sym{n}}(P)$. Therefore, $S_{\epsilon^{'}}$ has a trivial submodule by the definition of $Sc_{\sym{n}}(P)$. Using Lemma \ref{L;James} (ii), we deduce that $\epsilon=(1^n)$ and $n<p$. \end{proof}
For any $m\in \mathbb{N}$, define $\nu_p(m)$ to be the largest non-negative integer $a$ such that $p^a\mid m$. In particular, $\nu_p(m)\geq 0$. By convention, put $\nu_p(0)=\infty$. The following theorem is known as the Carter criterion. One may refer to \cite[Theorem 7.3.23]{GJ3} for a proof.
\begin{thm}\cite[Carter criterion]{GJ3} \label{T;Carter} Let $\lambda$ be a $p$-restricted partition of $n$. Then $S^{\lambda}$ is simple if and only if $\nu_p(h_{a,b}^\lambda)=\nu_p(h_{a,c}^\lambda)$ for any two nodes $(a,b)$ and $(a,c)$ of $\lambda$. \end{thm}
To finish the preparation, we need a result found by Hemmer in \cite{Hemmer1}.
\begin{thm}\cite[Theorem 4.5]{Hemmer1}\label{T;Hemmer1} Let $\lambda=(\lambda_1,\ldots,\lambda_{\ell})\vdash n$ and $\lambda_1<p$.
Then $\mathcal{F}_{\lambda_1}(S^{\lambda})\cong S^{\overline{\lambda}^1}.$ \end{thm}
We now begin to finish the proof of Theorem \ref{T;C}. Until the end of the section, fix $p>2$ and an integer $r$ satisfying $0\leq 2r\leq n$. Note that the partition $(2^r, 1^{n-2r})$ is a $p$-restricted partition of $n$. The proof of Theorem \ref{T;C} will be based on a sequence of lemmas.
\begin{lem}\label{L;nu_p} Let $\lambda=(2^r, 1^{n-2r})\vdash n$ and $\lambda\notin JM(n)_p$. Let $a$ be the largest positive integer such that the nodes $(a,1)$, $(a,2)$ are in $\lambda$ and $\nu_p(h_{a,1}^\lambda)\neq \nu_p(h_{a,2}^\lambda)$. Then there exist non-negative integers $u$, $v$ and $w$ such that $0\leq u,v<p$ and $h_{a,2}^{\lambda}=u+vp^w$. \end{lem}
\begin{proof} Note that $h_{a,2}^\lambda>0$ and let $h_{a,2}^\lambda$ have $p$-adic expansion $\sum_{i\geq 0}b_ip^i$, where $0\leq b_i<p$ for all $i\geq 0$. If $h_{a,2}^\lambda=b_0$, we may take $u=b_0$ and $v=w=0$. If $b_0<h_{a,2}^\lambda$, let $s$ be the smallest subscript such that $s>0$ and $b_s>0$. We claim that $\sum_{i>s}b_ip^i=0$. Suppose that $\sum_{i>s}b_ip^i>0$. Set $t=a+\sum_{i>s}b_ip^i$ and note that the nodes $(t,1)$, $(t,2)$ are in $\lambda$. Moreover, we have $h_{t,1}^\lambda=h_{a,1}^\lambda- \sum_{i>s}b_ip^i$ and $h_{t,2}^\lambda=b_0+b_sp^s$. If $b_0\neq 0$, $\nu_p(h_{a,2}^\lambda)=\nu_p(h_{t,2}^\lambda)=0$. However, as $\nu_p(h_{a,1}^\lambda)\neq \nu_{p}(h_{a,2}^\lambda)$, observe that both $\nu_p(h_{a,1}^\lambda)$ and $\nu_p(h_{t,1}^\lambda)$ are non-zero. So $\nu_p(h_{t,1}^\lambda)\neq\nu_p(h_{t,2}^\lambda)$. This contradicts the choice of $a$ as $t>a$. If $b_0=0$, $\nu_p(h_{a,2}^\lambda)=\nu_p(h_{t,2}^\lambda)=s$. By the choice of $a$, we conclude that $\nu_p(h_{t,1}^\lambda)=s$. It implies that $\nu_p(h_{a,1}^\lambda)=s$. We thus get $\nu_p(h_{a,1}^\lambda)=\nu_p(h_{a,2}^\lambda)=s$, which also contradicts the choice of $a$. Therefore, the claim is shown. We can set $u=b_0$, $v=b_s$ and $w=s$ by the claim. The lemma follows. \end{proof}
\begin{lem}\label{L;Case1} Let $\lambda=(2^r, 1^{n-2r})\vdash n$ and $\lambda\notin JM(n)_p$. If $h_{1,2}^\lambda\geq p$, $\nu_p(h_{1,1}^\lambda)\neq\nu_p(h_{1,2}^\lambda)$ and $\nu_p(h_{a,1}^\lambda)=\nu_p(h_{a,2}^\lambda)$ for all nodes $(a,1)$, $(a,2)$ of $\lambda$ such that $a>1$, then $\nu_p(h_{1,1}^\lambda)$, $\nu_p(h_{1,2}^\lambda)>0$. \end{lem}
\begin{proof} By the hypotheses and Lemma \ref{L;nu_p}, let $h_{1,1}^\lambda$ and $h_{1,2}^\lambda$ have $p$-adic expansions $\sum_{i\geq 0}a_ip^i$ and $b_0+b_sp^s$ respectively, where $0\leq a_i, b_0,b_s<p$ for all $i\geq 0$ and $b_s,s>0$. Suppose that either $\nu_p(h_{1,1}^\lambda)$ or $\nu_p(h_{1,2}^\lambda)$ is zero. Without loss of generality, we may assume that $\nu_p(h_{1,1}^\lambda)=0$. So we have $a_0\neq 0$ while $b_0=0$. Note that $h_{1,1}^\lambda>a_0$; otherwise $\lambda$ would be a $p$-core and hence $\lambda\in JM(n)_p$. Let $u=a_0+1$ and observe that the nodes $(u,1)$ and $(u,2)$ are in $\lambda$. Moreover, $h_{u,1}^\lambda=\sum_{i>0}a_ip^i$ and $h_{u,2}^\lambda=b_sp^s-a_0$. In particular, from the hypotheses, we have $\nu_p(h_{u,1}^\lambda)=\nu_p(h_{u,2}^\lambda)$. But it is clear that $\nu_p(h_{u,1}^\lambda)\geq 1$ and $\nu_p(h_{u,2}^\lambda)=0$. This shows that $\nu_p(h_{u,1}^\lambda)\neq\nu_p(h_{u,2}^\lambda)$, which is a contradiction.
One may write a similar proof to get a contradiction for the case $\nu_p(h_{1,2}^\lambda)=0$. Therefore, we conclude that $\nu_p(h_{1,1}^\lambda)$, $\nu_p(h_{1,2}^\lambda)>0$. The proof is now finished. \end{proof}
\begin{lem}\label{L;Case2} Let $\lambda=(2^r, 1^{n-2r})\vdash n$ and $\lambda\notin JM(n)_p$. If $\nu_p(h_{1,1}^\lambda)\neq\nu_p(h_{1,2}^\lambda)$ and $\nu_p(h_{a,1}^\lambda)=\nu_p(h_{a,2}^\lambda)$ for all nodes $(a,1)$, $(a,2)$ of $\lambda$ such that $a>1$, then $S^\lambda$ is not a trivial source module. \end{lem}
\begin{proof} By the hypotheses and Lemmas \ref{L;nu_p}, \ref{L;Case1}, if $h_{1,2}^\lambda\geq p$, let $h_{1,1}^\lambda$ and $h_{1,2}^\lambda$ have $p$-adic expansions $\sum_{i>0}a_ip^i$ and $b_sp^s$ respectively, where $0\leq a_i,b_s<p$ for all $i>0$ and $b_s,s>0$. Let $t$ be the smallest subscript such that $t>0$ and $a_t>0$. Note that $\nu_p(h_{1,1}^\lambda)=t\neq s=\nu_p(h_{1,2}^\lambda)$ by computation and the hypotheses. We claim that $s<t$. Suppose that $s>t$. Let $u=1+a_tp^t$ and note that the nodes $(u,1)$, $(u,2)$ are in $\lambda$. Moreover, $h_{u,1}^\lambda=\sum_{i\geq t+1}a_ip^i$ and $h_{u,2}^\lambda=b_sp^s-a_tp^t$. In particular, from the hypotheses, $\nu_p(h_{u,1}^\lambda)=\nu_p(h_{u,2}^\lambda)$. However, we also get that $\nu_p(h_{u,1}^\lambda)=\nu_p(\sum_{i\geq t+1}a_ip^i)\geq t+1$ but $\nu_p(h_{u,2}^\lambda)=\nu_p(p^t(b_sp^{s-t}-a_t))=t$. The calculation implies that $\nu_p(h_{u,1}^\lambda)\neq\nu_p(h_{u,2}^\lambda)$, which is a contradiction. The claim is now shown. Suppose that $S^\lambda$ is a trivial source module. Notice that $\lambda'=(h_{1,1}^\lambda-1, h_{1,2}^\lambda)$. By Lemma \ref{L;p-permutation modules} (iii), we know that $S^{\lambda'}$ is a trivial source module. If $h_{1,2}^\lambda<p$, then $\nu_p(h_{1,1}^\lambda)>0$ as $\nu_p(h_{1,1}^\lambda)\neq \nu_p(h_{1,2}^\lambda)=0$. Note that $\lambda'$ is a James partition by Lemma \ref{L;James} (i). By Lemma \ref{L;Jamestrivial}, we get $h_{1,2}^\lambda=0$. This is a contradiction since $\lambda\notin JM(n)_p$. If $h_{1,2}^\lambda\geq p$, by the claim and Lemmas \ref{L;Case1}, \ref{L;James} (i), we observe that $\lambda'$ is also a James partition as $s<t$. Therefore, by Lemma \ref{L;Jamestrivial}, we get $\lambda'=(n)$. This also contradicts the condition that $\lambda\notin JM(n)_p$. Hence $S^{\lambda}$ is not a trivial source module. The proof is now complete. \end{proof}
\begin{lem}\label{L;Case3} Let $\lambda=(2^r, 1^{n-2r})\vdash n$ and $\lambda\notin JM(n)_p$. Then $S^\lambda$ is not a trivial source module. \end{lem}
\begin{proof} According to the assumption that $\lambda\notin JM(n)_p$ and Theorem \ref{T;Carter}, there exists a pair of nodes $(a,1)$, $(a,2)$ of $\lambda$ such that $\nu_p(h_{a,1}^\lambda)\neq\nu_p(h_{a,2}^\lambda)$ and $\nu_p(h_{b,1}^\lambda)=\nu_p(h_{b,2}^\lambda)$ for all nodes $(b,1)$, $(b,2)$ of $\lambda$ such that $b>a$. When $a=1$, the desired result follows by Lemma \ref{L;Case2}. When $a>1$, put $u=a-1$ and $v=2a-2$. By Theorem \ref{T;Hemmer1} and Corollary \ref{L;permutation}, $S^\lambda$ is a trivial source $\mathbb{F}\sym{n}$-module only if $S^{\overline{\lambda}^u}$ is a trivial source $\mathbb{F}\sym{n-v}$-module. Note that $\overline{\lambda}^u$ satisfies all the hypotheses of Lemma \ref{L;Case2}.
Therefore, we deduce that $S^{\overline{\lambda}^u}$ is not a trivial source module, which shows that $S^\lambda$ is not a trivial source module. The proof is now complete. \end{proof}
\begin{cor}\label{C;two-column} Let $\lambda=(2^r, 1^{n-2r})\vdash n$. Then $S^\lambda$ is a trivial source module if and only if $\lambda\in JM(n)_p$. \end{cor}
\begin{proof} One direction is clear by Theorem \ref{T;Hemmer}, while Lemma \ref{L;Case3} implies the other direction. The corollary follows. \end{proof}
We close the section by presenting the proof of Theorem \ref{T;C}.
\begin{proof}[Proof of Theorem \ref{T;C}] One direction follows from Theorem \ref{T;Hemmer}. Note that $S^\lambda$ is simple if and only if $S^{\lambda'}$ is simple. Therefore, the other direction can be justified by using Lemma \ref{L;p-permutation modules} (iii) and Corollary \ref{C;two-column}. We are done. \end{proof}

\section{Proof of Theorem \ref{T;B}}
This section provides the proof of Theorem \ref{T;B}. Theorem \ref{T;B} will be deduced from Theorem \ref{Con;Hudson} after some necessary preparation. Unlike the former sections, our main tool here is the Brou\'{e} correspondence of trivial source $\mathbb{F}\sym{n}$-modules. We state a result stronger than the conjecture of Hudson as follows and prove it after some preparation. The result obviously implies her conjecture.
\begin{thm}\label{Con;Hudson} Let $p=2$, $n\geq 4$ and $\lambda\vdash n$ with $2$-weight $2$. If $S^\lambda$ is an indecomposable, non-simple, trivial source module, then $S^\lambda\cong Y^\mu$, where $\mu=\kappa_2(\lambda)+(2^2)$. \end{thm}
We list some lemmas to finish the preparation.
\begin{lem}\label{L;outertensor} Let $G$, $H$ be finite groups and $M$, $N$ be an $\mathbb{F} G$-module and an $\mathbb{F} H$-module respectively. If both $M$ and $N$ are self-dual, then $M\boxtimes N$ is also self-dual as an $\mathbb{F}[G\times H]$-module. \end{lem}
\begin{proof} Let $\mathcal{B}_M=\{m_1,\ldots, m_s\}$ be a basis of $M$ and $\mathcal{B}_N=\{n_1,\ldots,n_t\}$ be a basis of $N$. For any $\phi\in M^*$, $\psi\in N^*$ and $v=\sum_{i=1}^s\sum_{j=1}^tk_{i,j}m_i\boxtimes n_j\in M\boxtimes N$, view $\phi\boxtimes \psi\in (M\boxtimes N)^*$ by setting $(\phi\boxtimes\psi) (v)=\sum_{i=1}^s\sum_{j=1}^tk_{i,j}\phi(m_i)\psi(n_j)$. Note that $M^*\boxtimes N^*=(M\boxtimes N)^*$ under this identification. So we only need to show that $M\boxtimes N\cong M^*\boxtimes N^*$ as $\mathbb{F}[G\times H]$-modules. This is clear since $M\cong M^*$ and $N\cong N^*$. \end{proof}
\begin{lem}\label{L;Greenself-dual} Let $G$ be a finite group and $M$ be an indecomposable $\mathbb{F} G$-module with a vertex $P$. If $N_G(P)\leq H\leq G$, then $M$ is self-dual if and only if $\mathcal{G}_H(M)$ is self-dual. \end{lem}
\begin{proof} By properties of $\mathcal{G}_H(M)$, we have \begin{align} M{\downarrow_H}\cong \mathcal{G}_H(M)\oplus U,\ \mathcal{G}_H(M){\uparrow^G}\cong M\oplus V, \end{align} where, for any indecomposable direct summand $W$ of the $\mathbb{F} H$-module $U$ or the $\mathbb{F} G$-module $V$, $P$ is not a vertex of $W$.
When $M\cong M^*$, by (6.1), we get \begin{align} \mathcal{G}_H(M)^*\oplus U^*\cong (M{\downarrow_H})^*=(M^*){\downarrow_H}\cong M{\downarrow_H}\cong \mathcal{G}_H(M)\oplus U. \end{align} Note that $*$ preserves vertices of indecomposable modules. We thus deduce that $\mathcal{G}_H(M)\cong \mathcal{G}_H(M)^*$ by (6.2) and the Krull-Schmidt Theorem. If $\mathcal{G}_H(M)\cong \mathcal{G}_H(M)^*$, by (6.1) again, we obtain \begin{align} M^*\oplus V^*\cong (\mathcal{G}_H(M){\uparrow^G})^*\cong (\mathcal{G}_H(M)^*){\uparrow^G}\cong \mathcal{G}_H(M){\uparrow^G}\cong M\oplus V. \end{align} By (6.3) and the Krull-Schmidt Theorem, we have $M\cong M^*$. The lemma follows. \end{proof}
\begin{lem}\label{L;p=2simple} Let $p=2$ and $\lambda$ be a $2$-regular or $2$-restricted partition of $n$. If $S^\lambda$ is self-dual, then $S^\lambda$ is simple. \end{lem}
\begin{proof} Suppose that $S^\lambda$ is not simple. When $\lambda$ is $2$-regular, $S^\lambda$ has a simple head $D^\lambda$. As $S^\lambda$ is self-dual, this forces $S^\lambda$ to have at least two composition factors isomorphic to $D^\lambda$, which is an obvious contradiction. When $\lambda$ is $2$-restricted, note that $S^\lambda\cong S^{\lambda'}$ as $S^\lambda$ is self-dual. We also get a contradiction similarly. The lemma follows. \end{proof}
The following lemma is a straightforward fact on symmetric groups.
\begin{lem}\label{L;Normalizer1} Let $H\leq \sym{n}$ and $r$ be an integer such that $0<r\leq n$. Let $H$ act on $\mathbf{n}$. If $H$ fixes $n-r+1,\ldots, n$, then $$N_{\sym{n}}(H)=N_{\sym{n-r}}(H)\times \sym{r}\leq \sym{(n-r,r)}.$$ Moreover, $N_{\sym{n}}(H)/H\cong(N_{\sym{n-r}}(H)/H)\times \sym{r}$. \end{lem}
\begin{lem}\cite[Corollary 2]{Wei}\label{L;Normalizer2} Let $P$ be a Sylow $2$-subgroup of $\sym{n}$. Then we have $N_{\sym{n}}(P)=P$. \end{lem}
We now deal with Theorem \ref{Con;Hudson}. From now on, write $C_4$, $C_2\times C_2$ and $K_4$ to denote the subgroups $\langle (1,2,3,4)\rangle$, $\langle (1,2),(3,4)\rangle$ and $\langle (1,2)(3,4), (1,3)(2,4)\rangle$ of $\sym{4}$ respectively. Note that they are exactly all the $2$-subgroups of $\sym{4}$ of order $4$, up to $\sym{4}$-conjugation.
\begin{lem}\label{P;defectgroup Block} Let $p=2$ and $M$ be an indecomposable $\mathbb{F}\sym{n}$-module. Let $\lambda$ be a $2$-core and $w$ be the $2$-weight of $B_\lambda$. If $B_\lambda$ contains $M$ and $M$ is a trivial source module whose vertices are defect groups of $B_\lambda$, then $M\cong Y^{\alpha}\cong S^{\alpha}$, where $\alpha=\lambda+(2w)\in JM(n)_2$. \end{lem}
\begin{proof} Let $P$ be a Sylow $2$-subgroup of $\sym{2w}$. Note that $P$ is also a vertex of $M$. By Lemmas \ref{L;Normalizer1} and \ref{L;Normalizer2}, we know that $N_{\sym{n}}(P)=N_{\sym{2w}}(P)\times \sym{n-2w}=P\times \sym{n-2w}$. Moreover, $N_{\sym{n}}(P)/P\cong\sym{n-2w}$.
Since $M$ is an indecomposable module with a vertex $P$ and a trivial $P$-source, by Theorem \ref{T;Broue} (i), we get that $M(P)$ is an indecomposable projective $\mathbb{F}\sym{n-2w}$-module. In particular, $M(P)\cong Y^\mu$ as $\mathbb{F}\sym{n-2w}$-modules, where $\mu$ is a $2$-restricted partition of $n-2w$. Let $\nu=\mu+(2w)$. By Theorem \ref{T; Youngvertices}, note that $Y^{\nu}$ is an indecomposable $\mathbb{F}\sym{n}$-module with a vertex $P$ and a trivial $P$-source. By Sylow's Theorem, we may require that $P$ is a Sylow $2$-subgroup of $\sym{\mathcal{O}_\nu}$. According to Theorem \ref{Erdmann} and Lemma \ref{L;Normalizer2}, $\sym{n-2w}\cong N_{\sym{n}}(P)/N_{\sym{\mathcal{O}_{\nu}}}(P)=N_{\sym{n}}(P)/P$ and $Y^{\nu}(P)\cong Y^{\mu}$. We thus obtain that $M\cong Y^{\nu}$ by Theorem \ref{T;Broue} (i). This forces $\alpha=\nu$ as $B_\lambda$ contains $M$. Note that there does not exist $\beta\vdash n$ such that $\kappa_2(\beta)=\lambda$ and $\beta\rhd \alpha$. So $Y^{\alpha}\cong S^{\alpha}$ by \cite[2.6]{Donkin}. Moreover, $S^{\alpha}$ is simple by Lemma \ref{L;p=2simple}. \end{proof}
\begin{lem}\label{L;C2timeC_2} Let $p=2$, $n\geq 4$ and $M$ be an indecomposable $\mathbb{F}\sym{n}$-module. Let $\lambda$ be a $2$-core and $B_\lambda$ have $2$-weight $2$. If $M$ is a trivial source module with a vertex $C_2\times C_2$ and $B_\lambda$ contains $M$, then $M\cong Y^{\alpha}$, where $\alpha=\lambda+(2^2)$. \end{lem}
\begin{proof} Notice that $M$ is isomorphic to a Young module since $M\mid M^{(2^2,1^{n-4})}$. As $C_2\times C_2$ is a vertex of $M$, by Theorem \ref{T; Youngvertices}, there exists a $2$-restricted partition of $n-4$, say $\mu$, such that $M\cong Y^\nu$, where $\nu=\mu+(2^2)$. Since $B_\lambda$ contains $M$ and $B_\lambda$ has $2$-weight $2$, we get that $\alpha=\nu$. The lemma is shown. \end{proof}
\begin{lem}\label{L;decompositon} Let $p=2$. Then $\mathbb{F}_{K_4}{\uparrow^{\sym{4}}}\cong 2D^{(3,1)}\oplus Sc_{\sym{4}}(K_4)$. In particular, all the indecomposable direct summands of the decomposition of $\mathbb{F}_{K_4}{\uparrow^{\sym{4}}}$ are self-dual. \end{lem}
\begin{proof} Note that $N_{\sym{4}}(K_4)/K_4\cong \sym{3}$. By an easy calculation, $\mathbb{F}_{K_4}{\uparrow^{\sym{4}}}(K_4)\cong\mathbb{F}\sym{3}$ as $\mathbb{F}\sym{3}$-modules. It is well-known that $\mathbb{F}\sym{3}\cong Y^{(1^3)}\oplus 2Y^{(2,1)}$ and $D^{(3,1)}\mid \mathbb{F}_{K_4}{\uparrow^{\sym{4}}}$. As both $D^{(3,1)}$ and $Sc_{\sym{4}}(K_4)$ have the vertex $K_4$, by Lemma \ref{L;Brauer} (i), Theorem \ref{T;Broue} (i), (iii) and counting dimensions, $D^{(3,1)}(K_4)\cong Y^{(2,1)}$ and $(Sc_{\sym{4}}(K_4))(K_4)\cong Y^{(1^3)}$. The desired isomorphism follows by Theorem \ref{T;Broue} (iii). The second assertion is clear by this isomorphism.
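As a quick dimension check of this decomposition (not needed for the argument), note that $$\dim_{\mathbb{F}}Sc_{\sym{4}}(K_4)=|\sym{4}:K_4|-2\dim_{\mathbb{F}}D^{(3,1)}=6-2\cdot 2=2.$$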
\varepsilonnd{proof} \boldsymbolegin{prop}\label{L;K4} Let $p=2$, $n\geq 4$ and $H=N_{\mathbf{m}athfrak{s}ym{n}}(K_4)$. Let $M$ be an indecomposable $\mathbb{F}\mathbf{m}athfrak{s}ym{n}$-module with a vertex $K_4$. If $M$ is a trivial source module, then, for some $2$-restricted partition $\lambda$ of $n-4$, we have $\mathbf{m}athcal{G}_H(M)\cong D^{(3,1)}\boldsymboloxtimes Y^\lambda$ or $Sc_{\mathbf{m}athfrak{s}ym{4}}(K_4)\boldsymboloxtimes Y^{\lambda}$. In particular, $M$ is self-dual. \varepsilonnd{prop} \boldsymbolegin{proof} By Lemma \ref{L;Normalizer1}, we have $H=N_{\mathbf{m}athfrak{s}ym{4}}(K_4)\mathfrak{t}imes \mathbf{m}athfrak{s}ym{n-4}=\mathbf{m}athfrak{s}ym{4}\mathfrak{t}imes \mathbf{m}athfrak{s}ym{n-4}$. Moreover, $H/K_4\cong \mathbf{m}athfrak{s}ym{(3,n-4)}$. By Theorem \ref{T;Broue} (i), note that $M(K_4)\cong Y^{(2,1)}\boldsymboloxtimes Y^{\lambda}$ or $Y^{(1^3)}\boldsymboloxtimes Y^{\lambda}$ as $\mathbb{F}\mathbf{m}athfrak{s}ym{(3,n-4)}$-modules, where $\lambda$ is a $2$-restricted partition of $n-4$. Furthermore, from \cite[Propositions 1.1, 1.2]{KL}, we know that $D^{(3,1)}\boldsymboloxtimes Y^\lambda$ and $Sc_{\mathbf{m}athfrak{s}ym{4}}(K_4)\boldsymboloxtimes Y^\lambda$ are indecomposable $\mathbb{F}\mathbf{m}athfrak{s}ym{(4,n-4)}$-modules with a vertex $K_4$ and a trivial $K_4$-source. By Lemma \ref{L;Brauer} (ii), we also have \boldsymbolegin{align} &(D^{(3,1)}\boldsymboloxtimes Y^\lambda)(K_4)\cong D^{(3,1)}(K_4)\boldsymboloxtimes Y^\lambda\cong Y^{(2,1)}\boldsymboloxtimes Y^\lambda,\\ & (Sc_{\mathbf{m}athfrak{s}ym{4}}(K_4)\boldsymboloxtimes Y^\lambda)(K_4)\cong (Sc_{\mathbf{m}athfrak{s}ym{4}}(K_4))(K_4)\boldsymboloxtimes Y^\lambda\cong Y^{(1^3)}\boldsymboloxtimes Y^\lambda \varepsilonnd{align} as $\mathbb{F}\mathbf{m}athfrak{s}ym{(3,n-4)}$-modules. Note that $N_{\mathbf{m}athfrak{s}ym{(4,n-4)}}(K_4)=\mathbf{m}athfrak{s}ym{4}\mathfrak{t}imes\mathbf{m}athfrak{s}ym{n-4}=H$. Therefore, $(6.4)$ and $(6.5)$ are isomorphic formulae for $\mathbb{F}[H/K_4]$-modules. From Theorem \ref{T;Broue} (ii), we get $\mathbf{m}athcal{G}_H(M)\cong \mathbf{m}athcal{G}_H(D^{(3,1)}\boldsymboloxtimes Y^\lambda)$ or $\mathbf{m}athcal{G}_H(Sc_{\mathbf{m}athfrak{s}ym{4}}(K_4)\boldsymboloxtimes Y^\lambda)$. However, observe that $\mathbf{m}athcal{G}_H(D^{(3,1)}\boldsymboloxtimes Y^\lambda)=D^{(3,1)}\boldsymboloxtimes Y^\lambda$ and $\mathbf{m}athcal{G}_H(Sc_{\mathbf{m}athfrak{s}ym{4}}(K_4)\boldsymboloxtimes Y^\lambda)=Sc_{\mathbf{m}athfrak{s}ym{4}}(K_4)\boldsymboloxtimes Y^\lambda$. The first assertion thus follows. The second one is clear by Lemmas \ref{L;decompositon}, \ref{L;outertensor} and \ref{L;Greenself-dual}. \varepsilonnd{proof} \boldsymbolegin{proof}[Proof of Theorem \ref{Con;Hudson}] Let $P$ be a vertex of $S^\lambda$. We may choose $P$ to be a $2$-subgroup of $\mathbf{m}athfrak{s}ym{4}$. According to the hypotheses and \cite[Theorem 1]{Wildon}, the order of $P$ is at least $4$ and $P$ is not cyclic. The cases where $P$ is a Sylow $2$-subgroup of $\mathbf{m}athfrak{s}ym{4}$ or a $2$-subgroup conjugated to $C_2\mathfrak{t}imes C_2$ are done by Lemmas \ref{P;defectgroup Block} and \ref{L;C2timeC_2}. To prove Theorem \ref{Con;Hudson}, it suffices to show that $P\neq K_4$. Suppose that $P=K_4$. By Proposition \ref{L;K4}, notice that $S^\lambda$ is self-dual. It implies that $\lambda$ is neither $2$-regular nor $2$-restricted by Lemma \ref{L;p=2simple}. As $\lambda$ has $2$-weight $2$, when $n>4$, it forces that $\lambda=(s+2,s-1,s-2,\ldots,2,1^3)$ for some positive integer $s$. 
From the fact and \cite[Theorem A]{GE}, we get that $P$ contains $C_2\mathfrak{t}imes C_2$ up to $\mathbf{m}athfrak{s}ym{n}$-conjugation, which is a contradiction. When $n=4$, we have $\lambda=(2^2)$. It is well-known that $S^{(2^2)}\cong D^{(3,1)}$, which contradicts the assumption that $S^\lambda$ is non-simple. So $P\neq K_4$. The proof is now complete. \varepsilonnd{proof} Theorem \ref{T;B} is now shown by Theorems \ref{T;Hemmer} and \ref{Con;Hudson}. We end the paper with a corollary which may have its own interest. \boldsymbolegin{cor} Let $p=2$, $n\geq 4$ and $\lambda\vdash n$ with $2$-weight $2$. Then $S^\lambda$ is an indecomposable $\mathbb{F}\mathbf{m}athfrak{s}ym{n}$-module with a vertex $K_4$ and a trivial $K_4$-source if and only if $n=4$ and $\lambda=(2^2)$. \varepsilonnd{cor} \boldsymbolegin{proof} One direction is clear. For the other direction, notice that $K_4$ can not be $\mathbf{m}athfrak{s}ym{n}$-conjugate to the Sylow $2$-subgroups of the Young subgroups of $\mathbf{m}athfrak{s}ym{n}$. By Theorems \ref{T; Youngvertices} and \ref{T;B}, we thus deduce that $\lambda\in JM(n)_2$. Due to \cite[Main Theorem]{GMathas}, it is sufficient to show that $\lambda$ is neither $2$-regular nor $2$-restricted. It is well-known that $S^\lambda\cong Y^\lambda$ if $\lambda\in JM(n)_2$ and $\lambda$ is $2$-regular. Similarly, $S^\lambda\cong Y^{\lambda'}$ if $\lambda\in JM(n)_2$ and $\lambda$ is $2$-restricted. Therefore, as $S^\lambda$ has a vertex $K_4$, $\lambda$ is neither $2$-regular nor $2$-restricted. The desired result thus follows. \varepsilonnd{proof} \mathbf{m}athfrak{s}ubsection*{Acknowledgments} The author thanks his supervisor Dr. Kay Jin Lim for some suggestions of refining this paper. He also gratefully thanks Professor David Hemmer for letting him know the question of classification of trivial source Specht modules via a private communication in the conference `Representation Theory of Symmetric Groups and Related Algebras' held in Singapore. Finally, he gratefully thanks an anonymous referee for his or her careful reading and helpful suggestions. \boldsymbolegin{thebibliography}{99} \boldsymbolibitem{JAlperin1}J. L. Alperin, Local Representation Theory, Cambridge Studies in Advanced Mathematics, $\mathbf{m}athbf{11}$, Cambridge University Press, 1986. \boldsymbolibitem{Broue} M. Brou\'{e}, On Scott modules and p-permutation modules: an approach through the Brauer morphism, Proc. Amer. Math. Soc. $\mathbf{m}athbf{93}$ (1985), 401-408. \boldsymbolibitem{Burry} D. W. Burry, Scott modules and lower defect groups, Comm. Algebra $\mathbf{m}athbf{10}$ (1982), 1855-1872. \boldsymbolibitem{DKZ} S. Danz, B. K\"{u}lshammer, R. Zimmermann, On vertices of simple modules for symmetric groups of small degrees, J. Algebra $\mathbf{m}athbf{320}$ (2008), 680-707. \boldsymbolibitem{DanzLim} S. Danz, K. J. Lim, Signed Young modules and simple Specht modules, Adv. Math. $\mathbf{m}athbf{307}$ (2017), 369-416. \boldsymbolibitem{Donkin} S. Donkin, On Schur algebras and related algebras, II. J. Algebra $\mathbf{m}athbf{111}$ (1987), 354-364. \boldsymbolibitem{DG} S. Donkin, H. Geranios, Invariants of Specht modules, J. Algebra $\mathbf{m}athbf{439}$ (2015), 188-224. \boldsymbolibitem{Erdmann} K. Erdmann, Young modules for symmetric groups, J. Aust. Math. Soc. $\mathbf{m}athbf{71}$ (2001), 201-210. \boldsymbolibitem{EFJPAS} E. M. Friedlander, J. Pevtsova, A. Suslin, Generic and maximal Jordan types, Invent. Math. $\mathbf{m}athbf{168}$ (2007), 485-522. \boldsymbolibitem{GE} E. 
Giannelli, A lower bound on the vertices of Specht modules for symmetric groups, Arch. Math. (Basel) $\mathbf{m}athbf{103}$ (2014), 1-9. \boldsymbolibitem{GLDM} E. Giannelli, K. J. Lim, W. O'Donovan, M. Wildon, On signed Young permutation modules and signed $p$-Kostka numbers, J. Group Theory $\mathbf{m}athbf{20}$ (2017), 637-679. \boldsymbolibitem{GJ} J. Grabmeier, Unzerlegbare Moduln mit trivialer Youngquelle und Darstellungstheorie der Schuralgebra, Bayreuth. Math. Schr. $\mathbf{m}athbf{20}$ (1985), 9-152. \boldsymbolibitem{JGreen} J. A. Green, On the indecomposable representations of a finite group, Math. Z. $\mathbf{m}athbf{70}$ (1959), 430-445. \boldsymbolibitem{Hemmer2} D. J. Hemmer, Fixed-point functors for symmetric groups and Schur algebras, J. Algebra $\mathbf{m}athbf{280}$ (2004), 295-312. \boldsymbolibitem{Hemmer1} D. J. Hemmer, A row removal theorem for the $\mathfrak{t}ext{Ext}^1$ quiver of symmetric groups and Schur algebras, Proc. Amer. Math. Soc. $\mathbf{m}athbf{133}$ (2005), 403-414. \boldsymbolibitem{Tara} T. A. Hudson, Specht Modules of Trivial Source and the Endomorphism Ring of the Lie Module, Thesis (Ph.D.)-State University of New York at Buffalo, 2018. \boldsymbolibitem{GJ1} G. D. James, The Representation Theory of the Symmetric Groups, Lecture Notes in Mathematics, $\mathbf{m}athbf{682}$, Springer, Berlin, 1978. \boldsymbolibitem{GJ2} G. D. James, Trivial source modules for symmetric groups, Arch. Math. (Basel) $\mathbf{m}athbf{41}$ (1983), 294-300. \boldsymbolibitem{GMathas} G. D. James, A. Mathas, The irreducible Specht modules in characteristic $2$, Bull. London Math. Soc. $\mathbf{m}athbf{31}$ (1999), 457-462. \boldsymbolibitem{GJ3} G. D. James, A. Kerber, The Representation Theory of the Symmetric Group, Encyclopedia of Mathematics and its Applications, $\mathbf{m}athbf{16}$, Addison-Wesley Publishing Co., 1981. \boldsymbolibitem{JLW} Y. Jiang, K. J. Lim, J. L. Wang, On the Brauer constructions and generic Jordan types of Young modules, arXiv: 1707.04075 [math. RT], 2017. \boldsymbolibitem{KL} B. K\"{u}lshammer, Some indecomposable modules and their vertices, J. Pure Appl. Algebra $\mathbf{m}athbf{86}$ (1993), 65-73. \boldsymbolibitem{Lim2}K. J. Lim, The complexity of the Specht modules corresponding to hook partitions, Arch. Math. (Basel) $\mathbf{m}athbf{93}$ (2009), 11-22. \boldsymbolibitem{Muller} J. M\"{u}ller, R. Zimmermann, Green vertices and sources of simple modules of the symmetric group labelled by hook partitions, Arch. Math. (Basel) $\mathbf{m}athbf{89}$ (2007), 97-108. \boldsymbolibitem{Murphy} G. M. Murphy, On decomposability of some Specht modules for symmetric groups, J. Algebra $\mathbf{m}athbf{66}$ (1980), 156-168. \boldsymbolibitem{MP} G. M. Murphy, M. H. Peel, Vertices of Specht modules, J. Algebra $\mathbf{m}athbf{86}$ (1984), 85-97. \boldsymbolibitem{HNYT} H. Nagao, Y. Tsushima, Representations of Finite Groups, Academic Press, San Diego, 1989. \boldsymbolibitem{Wei} L. Weisner, On the Sylow Subgroups of the Symmetric and Alternating Groups, Amer. J. Math. $\mathbf{m}athbf{47}$ (1925), 121-124. \boldsymbolibitem{WW} W. W. Wheeler, Generic module theory, J. Algebra $\mathbf{m}athbf{185}$ (1996), 205-228. \boldsymbolibitem{Wildon} M. Wildon, Two theorems on the vertices of Specht modules, Arch. Math. (Basel) $\mathbf{m}athbf{81}$ (2003), 505-511. \varepsilonnd{thebibliography} \varepsilonnd{document}
\begin{document} \title{A sequential design for extreme quantiles estimation under binary sampling} \author{Michel Broniatowski and Emilie Miranda \\ LPSM, CNRS UMR 8001, Sorbonne Universite, Paris} \maketitle \begin{abstract} We propose a sequential design method aiming at the estimation of an extreme quantile based on a sample of dichotomic data corresponding to peaks over a given threshold. This study is motivated by an industrial challenge in material reliability and consists in estimating a failure quantile from trials whose outcomes are reduced to indicators of whether the specimens have failed at the tested stress levels. The solution proposed is a sequential design making use of a splitting approach, decomposing the target probability level into a product of probabilities of conditional events of higher order. The method consists in gradually targeting the tail of the distribution and sampling under truncated distributions. The model is GEV or Weibull, and sequential estimation of its parameters involves an improved maximum likelihood procedure for binary data, made necessary by the large uncertainty associated with such restricted information. \end{abstract} Consider a non-negative random variable $X$ with distribution function $G$. Let $X_{1},\dots,X_{n}$ be $n$ independent copies of $X$. The aim of this paper is to estimate $q_{1-\alpha }$, the $\left( 1-\alpha \right)$-quantile of $G$, when $\alpha $ is much smaller than $1/n$. We therefore aim at the estimation of so-called extreme quantiles. This question has been handled by various authors, and we review their results later on. The approach which we develop is quite different, since we do not assume that the $X_{i}$'s can be observed. For any threshold $x$, we define the r.v. \[ Y=\left\{ \begin{array}{l} 1\text{ if }X\leq x \\ 0\text{ if }X>x \end{array} \right. \] which therefore has a Bernoulli distribution with parameter $G(x)$. We may choose $x$; however, we do not observe $X$ but merely $Y$. Therefore any inference on $G$ suffers from a severe loss of information. This kind of setting is common in industrial statistics: when exploring the strength of a material, or of a bundle, we may set a constraint $x$ and observe whether the bundle breaks or not when subjected to this level of constraint. In the following we denote by $R$ the resistance of this material; we only observe $Y$. Inference on $G$ can be performed for large $n$ making use of many thresholds $x$. Unfortunately such a procedure will not be of any help for extreme quantiles. To address this issue, we consider a design of experiment which progressively characterizes the tail of the distribution by sampling, at each step, in a more extreme region of the density. It will thus be assumed in the following that we are able to observe $Y$ not only when $R$ follows $G$ but also when $R$ follows the conditional distribution of $R$ given $\{ R>x\}$. In such a case we will be able to estimate $q_{1-\alpha }$ even when $\alpha <1/n$, where $n$ designates the total number of trials. In material sciences, this amounts to considering trials based on artificially modified materials; in the case when we aim at the estimation of extreme upper quantiles, this amounts to strengthening the material.
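To make the binary sampling scheme concrete, the following minimal numerical sketch (in Python; it is not part of the experimental protocol of this paper, and the exponential strength law together with all numerical values are arbitrary placeholders) simulates trials observed only through $Y=\mathds{1}\{R\leq x\}$, and samples under the conditional law of $R$ given $\{R>x\}$ by plain rejection:
\begin{verbatim}
# Illustrative sketch (not from the paper): binary sampling of a latent
# strength variable R at a chosen stress level x, and sampling of R under
# the conditional law of R given {R > x}.  The exponential choice for R is
# an arbitrary stand-in for the unknown distribution G.
import numpy as np

rng = np.random.default_rng(0)

def sample_strength(size):
    """Latent strengths R ~ Exp(rate=0.2); never observed directly."""
    return rng.exponential(scale=1 / 0.2, size=size)

def binary_trials(x, size):
    """Observed outcomes Y = 1{R <= x} at stress level x."""
    return (sample_strength(size) <= x).astype(int)

def conditional_strength(x, size):
    """Strengths drawn from the conditional law of R given R > x
    (rejection sampling; in the laboratory this corresponds to trials on
    'strengthened' specimens)."""
    out = []
    while len(out) < size:
        r = sample_strength(size)
        out.extend(r[r > x])
    return np.array(out[:size])

x = 2.0
y = binary_trials(x, size=100)
print("empirical P(R <= x):", y.mean())            # estimates G(x)
r_cond = conditional_strength(x, size=100)
y_cond = (r_cond <= 4.0).astype(int)               # trials on the conditioned population
print("empirical P(R <= 4 | R > 2):", y_cond.mean())
\end{verbatim}
Only the Bernoulli outcomes are retained and the latent strengths are discarded, which is precisely the loss of information discussed above.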
We thus consider a family of increasing thresholds $x_{1},\dots,x_{m}$ and, for each of them, realize $K_{1},\dots,K_{m}$ trials, each block of iid realizations of $Y$ being therefore a function of the corresponding unobserved $R$'s with distribution $G$ conditioned upon $\{R>x_{l}\}$, $1\leq l\leq m$. This defines a design which allows for the estimation of extreme quantiles. The present setting is therefore quite different from that usually considered for similar problems under complete information. As sketched above, it is specifically suited for industrial statistics and reliability studies in the science of materials. From a strictly statistical standpoint, the above description may also be considered when the distribution $G$ is of some special form, namely when the conditional distribution of $R$ given $\{R>x\}$ has a functional form which differs from that of $G$ only through some change of the parameters. In this case, simulation under these conditional distributions can be performed for an adaptive choice of the thresholds $x_{l}$, substituting the above sequence of trials. This sequential procedure allows us to estimate iteratively the initial parameters of $G$ and to obtain $q_{1-\alpha }$ by combining corresponding quantiles of the conditional distributions above the thresholds, a method named splitting. In this method, we choose the $x_{l}$'s sequentially, in such a way that $q_{1-\alpha }$ is easily obtained from the last distribution, that of $R$ conditioned upon $\{ R>x_{m}\}$. In safety issues or in pharmaceutical control, the focus is usually set on the behavior of a variable of interest (strength, maximum tolerated dose) for small (or even very small) levels. In these settings the above considerations turn out to be equivalently stated through a simple change of variable, considering the inverse of the variable of interest. As an example, which is indeed at the core of the motivation for this paper, and in order to make this approach more intuitive, we first sketch briefly in Section \ref{IndContext} the industrial situation which motivated this work. We look at a safety property, namely thresholds $x$ which specify very rare events, typically failures under very small solicitation. As stated above, the problem at hand is the estimation of very small quantiles. Classical techniques in risk theory pertain to large quantile estimation; for example, the Generalized Pareto Distribution, to be referred to later on, is a basic tool in modeling extreme risks and exceedances over thresholds. Denoting $R$ the variable of interest and $\widetilde{R}:=1/R$, then obviously, for $x>0$, $\left\{ R<x\right\} $ is equivalent to $\left\{ \widetilde{R}>u\right\} $ with $u=1/x$. In this paper we will therefore make use of this simple duality, stating formulas for $R$ and starting from classical results pertaining to $\widetilde{R}$ when necessary. Note that when $q_{\alpha }$ designates the $\alpha$-quantile of $R$ and $\widetilde{q}_{1-\alpha }$ the $\left( 1-\alpha \right)$-quantile of $\widetilde{R}$, it holds that $q_{\alpha }=1/\widetilde{q}_{1-\alpha }$. The resulting notation may seem a bit cumbersome; however, the reader accustomed to industrial statistics will find it familiar. This article is organized as follows. Section \ref{IndContext} formalizes the problem in the framework of an industrial application to the aircraft industry.
In Section \ref{revLit}, a short survey of extreme quantiles estimation and of existing designs of experiment are studied as well as their applicability to extreme quantiles estimation. Then, a new procedure is proposed in Section \ref{Splitting} and elaborated for a Generalized Pareto model. An estimation procedure is detailed and evaluated in Section \ref{EstimationProc}. Then an alternative Weibull model for the design proposed is presented in Section \ref{WeibullModel}. Lastly, Sections \ref{model_selection_missp} and \ref{Perspectives} provide a few ideas discussing model selection and behavior under misspecification as well as hints about extensions of the models studied beforehand. \section{Industrial challenge}\label{IndContext} \subsection{Estimation of minimal allowable stress in material fatigue} In aircraft industry, one major challenge is the characterization of extreme behaviors of materials used to design engine pieces. Especially, we will consider extreme risks associated with fatigue wear, which is a very classical type of damage suffered by engines during flights. It consists in the progressive weakening of a material due to the application of cyclic loadings a large number of times that can lead to its failure. As shown in Figure \ref {cyclefatigue}, a loading cycle is defined by several quantities: the minimal and maximal stresses $\Greekmath 011B _{\min }$ et $\Greekmath 011B _{\max }$, the stress amplitude $\Greekmath 011B _{a}=\frac{\Greekmath 011B _{\max }-\Greekmath 011B _{\min }}{2}$, and other indicators such as the stress ratio $\frac{\Greekmath 011B _{\min }}{\Greekmath 011B _{\max }}$. \begin{center} \includegraphics[scale=0.05]{fatigue2.png} \captionof{figure}{Loading cycle on a material}\label{cyclefatigue} \end{center} The fatigue strength of a given material is studied through experimental campaigns designed at fixed environmental covariates to reproduce flight conditions. The trials consist in loading at a given stress level a dimensioned sample of material up to its failure or the date of end of trial. The lifetime of a specimen is measured in terms of number of cycles to failure, usually subject to right censoring. \begin{figure} \caption{S-N curve} \label{wohler} \end{figure} The campaign results are then used to study fatigue resistance and are represented graphically in an S-N scale (see figure \ref{wohler}). S-N curves highlight the existence of three fatigue regimes. Firstly, low cycle fatigue corresponds to short lives associated with high levels of stress. Secondly, during high cycle fatigue, the number of cycles to failure decreases log-linearily with respect to the loading. The last regime is the endurance limit, in which failure occurs at a very high number of cycles or doesn't occur at all. We will focus in the following on the endurance limit, which is also the hardest regime to characterize since there is usually only few and scattered observations. In this framework, we are focusing on minimal risk. The critical quantities that are used to characterize minimal risk linked to fatigue damage are failure quantiles, called in this framework allowable stresses at a given number of cycles and for a fixed level of probability. Those quantiles are of great importance since they intervene in decisions pertaining engine parts dimensioning, pricing decisions as well as maintenance policies. 
\subsection{Formalization of the industrial problem} The aim of this study is to propose a new design method for the characterization of allowable stress in very high cycle fatigue, for a very low risk $\Greekmath 010B $ of order $10^{-3}$. We are willing to obtain a precise estimation method of the $\Greekmath 010B -$failure quantile based on a minimal number of trials. Denote $N$ the lifetime of a material in terms of number of cycles to failure and $S$ the stress amplitude of the loading, in MPa. Let $n_{0}$ be the targeted time span of order $10^{6}-10^{7}$ cycles. Define the allowable stress $s_{\Greekmath 010B }$ at $n_{0}$ cycles and level of probability $\Greekmath 010B $ $=10^{-3}$ the level of stress that guarantee that the risk of failure before $n_{0}$ does not exceed $\Greekmath 010B $: \begin{equation} s_{\Greekmath 010B }=\sup \left\{ s:\mathbb{P}(N\leq n_{0}|S=s)\leq \Greekmath 010B \right\} \label{pb1} \end{equation} We will now introduce a positive r.v. $R=R_{n_{0}}$ modeling the resistance of the material at $n_{0}$ cycles and homogeneous to the stress. $R$ is the variable of interest in this study and its distribution $\mathbb{P}$ is defined as: \begin{equation} \mathbb{P}(R\leq s) = \mathbb{P}(N\leq n_{0}|S=s). \label{loiR} \end{equation} Thus, the allowable stress can be rewritten as the $\Greekmath 010B -$quantile of the distribution of $R$, \QTP{Body Math} \begin{equation} s_{\Greekmath 010B }=q_{\Greekmath 010B }=\sup \left\{ s:\mathbb{P}(R\leq s)\leq \Greekmath 010B \right\}. \end{equation} However, $R$ is not directly observed. Indeed, the usable data collected at the end of a test campaign consists in couples of censored fatigue life - stress levels $\left(\min (N,n_{0}),s\right)$ where $s$ is part of the design of the experiment. The relevant information that can be drawn from those observations to characterize $R$ is restricted to indicators of whether or not the specimen tested has failed at $s$ before $n_{0}$. Therefore, the relevant observations corresponding to a campaign of $n$ trials are formed by a sample of variables $Y_{1},...,Y_{n}$ with for $1\leq i\leq n,$ \[ Y_{i}=\left\{ \begin{array}{l} 1\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ if }R_{i}\leq s_{i} \\ 0\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ if }R_{i}>s_{i} \end{array} \right. \] where $s_{i}$ is the stress applied on specimen $i.$ Note that the number of observations is constrained by industrial and financial considerations; Thus $\Greekmath 010B $ is way lower than $1/n$ and we are considering a quantile lying outside the sample range. While we motivate this paper with the above industrial application, note that this kind of problem is of interest in other domains, such as broader reliability issues or medical trials through the estimation of the maximum tolerated dose of a given drug. \section{Extreme quantile estimation, a short survey}\label{revLit} As seen above estimating the minimal admissible constraint raises two issues; on one hand the estimation of an extreme quantile, and on the other hand the need to proceed to inference based on exceedances under thresholds.\ \ We present a short exposition of these two areas, keeping in mind that the literature on extreme quantile estimation deals with complete data, or data under right censoring. \subsection{Extreme quantiles estimation methods} Extreme quantile estimation in the univariate setting is widely covered in the literature when the variable of interest $X$ is either completely or partially observed. 
The usual framework is the study of the $(1-\Greekmath 010B )-$quantile of a r.v $X$, with very small $\Greekmath 010B $. The most classical case corresponds to the setting where ${x}_{1-\Greekmath 010B }$ is drawn from a $n$ sample of observations $X_1,\dots X_n$. We can distinguish estimation of high quantile, where $x_{1-\Greekmath 010B }$ lies inside the sample range, see Weissman 1978 \cite{Weissman} and Dekkers and al. 1989 \cite{dekkers1989}, and the estimation of an extreme quantile outside the boundary of the sample, see for instance De Haan and Rootz\'en 1993 \cite{deHann1993}. It is assumed that $X$ belongs to the domain of attraction of an extreme value distribution. The tail index of the latter is then estimated through maximum likelihood (Weissman 1978 \cite{Weissman}) or through an extension of Hill's estimator (see the moment estimator by Dekkers and al. 1989 \cite{dekkers1989}). Lastly, the estimator of the quantile is deduced from the inverse function of the distribution of the $k$ largest observations. Note that all the above references assume that the distribution has a Pareto tail. An alternative modeling has been proposed by De Valk 2016 \cite{valk} and De Valk and Cai 2018 \cite{valk2}, and consists in assuming a Weibull type tail, which enables to release some second order hypotheses on the tail. This last work deals with the estimation of extreme quantile lying way outside the sample range and will be used as a benchmark method in the following sections. Recent studies have also tackled the issue of censoring. For instance, Beirlant and al. 2007 \cite{beirlant2007} and Einmahl and al. 2008 \cite{Einmahl2008} proposed a generalization of the peak-over-threshold method when the data are subjected to random right censoring and an estimator for extreme quantiles. The idea is to consider a consistent estimator of the tail index on the censored data and divide it by the proportion of censored observations in the tail. Worms and Worms 2014 \cite{worms2014} studied estimators of the extremal index based on Kaplan Meier integration and censored regression. However the literature does not cover the case of complete truncation, i.e when only exceedances over given thresholds are observed. Indeed, all of the above are based on estimations of the tail index over weighed sums of the higher order statistics of the sample, which are not available in the problem of interest in this study. Classical estimation methods of extreme quantiles are thus not suited to the present issue. In the following, we study designs of experiment at use in industrial contexts and their possible application to extreme quantiles estimation. \subsection{Sequential design based on dichotomous data} In this section we review two standard methods in the industry and in biostatistics, which are the closest to our purpose.\ Up to our knowledge, no technique specifically addresses inference for extreme quantiles. 
We address the estimation of small quantiles, hence the events of interest are of the form $\left( R<s\right) $ and the quantile is $q_{\Greekmath 010B }$ for small $\Greekmath 010B .$ The first method is the \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{staircase}, which is the present tool used to characterize a material fatigue strength\RIfM@\expandafter\text@\else\expandafter\mbox\fiit{.} The second one is the \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{Continual Reassessment Method (CRM)} which is adapted for assessing the admissible toxicity level of a drug in Phase 1 clinical trials. Both methods rely on a parametric model for the distribution of the strength variable $R.$ We have considered two specifications, which allow for simple comparisons of performance, and do not aim at an accurate modelling in safety. \subsubsection{The Staircase method} Denote $\mathbb{P}(R\leq s)=\Greekmath 011E (s,\Greekmath 0112 _{0})$. Invented by Dixon and Mood (1948 \cite{dixon}), this technique aims at the estimation of the parameter $\Greekmath 0112 _{0}$ through sequential search based on data of exceedances under thresholds. The procedure is as follows. \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Procedure} Fix \begin{itemize} \item[$\bullet $] The initial value for the constraint, $S_{ini}$, \item[$\bullet $] The step $\Greekmath 010E >0$, \item[$\bullet $] The number of cycles $n_0$ to perform before concluding a trial, \item[$\bullet $] The total number of items to be tested, $K$. \end{itemize} The first item is tested at level $s_{(1)}=S_{ini}$. The next item is tested at level $s_{(2)}=S_{ini}-\Greekmath 010E $ in case of failure and $s_{(2)}=S_{ini}+\Greekmath 010E $ otherwise. Proceed sequentially on the $K-2$ remaining specimen at a level increased (respectively decreased) by $\Greekmath 010E $ in case of survival (resp. failure). The process is illustrated in figure \ref{staircase}. Note that the proper conduct of the Staircase method relies on strong assumptions on the choice of the design parameters. Firstly, $S_{ini}$ has to be sufficiently close to the expectation of $R$ and secondly, $\Greekmath 010E $ has to lay between $0.5\Greekmath 011B $ and $2\Greekmath 011B $, where $\Greekmath 011B $ designates the standard deviation of the distribution of $R$. Denote $\mathbb{P}(R\leq s)=\Greekmath 011E (s,\Greekmath 0112 _{0})$ and $Y_{i}$ the variable associated to the issue of the trial $i$, $1\leq i\leq K$, where $Y_{i}$ takes value $1$ under failure and $0$ under no failure, $Y_{i}=\mathds{1}_{N_{a}\leq n_{0}}\sim \mathcal{B}(\Greekmath 011E (s_{i},\Greekmath 0112 _{0}))$. \begin{center} \includegraphics[scale=0.7]{staircaseim.png} \captionof{figure}{Staircase procedure}\label{staircase} \end{center} \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Estimation} After the $K$ trials, the parameter $\Greekmath 0112 _{0}$ is estimated through maximization of the likelihood, namely \begin{equation} \widehat \Greekmath 0112 = \underset{\Greekmath 0112 }{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{argmax}}{\prod_{i=1}^K \Greekmath 011E (s_i,\Greekmath 0112 )^{y_i } (1-\Greekmath 011E (s_i,\Greekmath 0112 ))^{ (1-y_i) }}. \end{equation} \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Numerical results} The accuracy of the procedure has been evaluated on the two models presented below on a batch of 1000 replications, each with $K=100.$ \emph{Exponential case} Let $R\sim \mathcal{E}(\Greekmath 0115 )$ with $\Greekmath 0115 =0.2$. 
The input parameters are $S_{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ini}}=5 $ and $\Greekmath 010E =15\in \left[ 0.5\times \frac{1}{\Greekmath 0115 ^{2}},2\times \frac{1 }{\Greekmath 0115 ^{2}}\right] $. As shown in Table \ref{stairexp}, the relative error pertaining to the parameter $\Greekmath 0115 $ is roughly $25\%$, although the input parameters are somehow optimal for the method.\ The resulting relative error on the $10^{-3}$ quantile is $40\%.$ Indeed the parameter $\Greekmath 0115 $ is underestimated, which results in an overestimation of the variance $1/\Greekmath 0115 ^{2}$ , which induces an overestimation of the $10^{-3}$ quantile. \begin{table}[] \centering \renewcommand{1.2}{1.2} \begin{tabular}{|c|c|c|c|} \hline \Greekmath 0116 lticolumn{4}{|c|}{\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Relative error}} \\ \Greekmath 0116 lticolumn{2}{|c|}{\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{On the parameter}} & \Greekmath 0116 lticolumn{2}{c|}{ \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{On $s_{\Greekmath 010B }$}} \\ \hline\hline Mean & Std & Mean & Std \\ \hline -0.252 & 0.178 & 0.4064874 & 0.304 \\ \hline \end{tabular} \caption{Results obtained using the \emph{Staircase} method through simulations under the exponential model.} \label{stairexp} \end{table} \emph{Gaussian case } We now choose $R\sim \mathcal{N}(\Greekmath 0116 ,\Greekmath 011B )$ with $\Greekmath 0116 =60$ and $\Greekmath 011B =10$. The value of $S_{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ini}}$ is set to the expectation and $\Greekmath 010E =7$ belongs to the interval $\left[ \frac{ \Greekmath 011B }{2},2\Greekmath 011B \right] .$ The same procedure as above is performed and yields the results in Table \ref{stairgaus}. \begin{table}[tbp] \centering \renewcommand{1.2}{1.2} \begin{tabular}{|c|c|c|c|c|c|} \hline \Greekmath 0116 lticolumn{6}{|c|}{\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Relative error}} \\ \Greekmath 0116 lticolumn{2}{|c|}{\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{On $\Greekmath 0116 $}} & \Greekmath 0116 lticolumn{2}{c|}{\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{On $ \Greekmath 011B $}} & \Greekmath 0116 lticolumn{2}{c|}{\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{On $s_{\Greekmath 010B }$}} \\ \hline\hline Mean & Std & Mean & Std & Mean & Std \\ \hline -0.059 & 0.034 & 1.544 & 0.903 & -1.753 & 0.983 \\ \hline \end{tabular} \caption{Results obtained using the \emph{Staircase} method through simulations under the Gaussian model.} \label{stairgaus} \end{table} The expectation of $R$ is recovered rather accurately, whereas the estimation of the standard deviation suffers a loss in accuracy, which in turn yields a relative error of 180 \% \ on the $10^{-3}$ quantile. \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Drawback of the Staircase method} A major advantage of the Staircase lies in the fact that the number of trials to be performed in order to get a reasonable estimator of the mean is small. However, as shown by the simulations, this method is not adequate for the estimation of extreme quantiles. Indeed, the latter follows from an extrapolation based on estimated parameters, which furthermore may suffer of bias. Also, reparametrization of the distribution making use of the theoretical extreme quantile would not help, since the estimator would inherit of a large lack of accuracy. 
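The following short sketch reproduces, under stated assumptions, a Gaussian experiment of the kind reported above: it simulates one Staircase run and maximizes the binary likelihood with a generic optimizer (scipy's Nelder--Mead, not the authors' implementation); the starting point and the random seed are arbitrary choices made for illustration only.
\begin{verbatim}
# Minimal simulation of the Staircase procedure under R ~ N(mu, sigma).
# Illustrative sketch only; the numbers echo the Gaussian experiment in the
# text but this is not the authors' code.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
mu_true, sigma_true = 60.0, 10.0

def run_staircase(s_ini=60.0, step=7.0, K=100):
    levels, outcomes = [], []
    s = s_ini
    for _ in range(K):
        r = rng.normal(mu_true, sigma_true)   # latent strength, unobserved
        y = int(r <= s)                       # 1 = failure before n0 cycles
        levels.append(s)
        outcomes.append(y)
        s = s - step if y else s + step       # down after failure, up after survival
    return np.array(levels), np.array(outcomes)

def neg_log_lik(par, levels, outcomes):
    mu, sigma = par
    if sigma <= 0:
        return np.inf
    p = norm.cdf(levels, loc=mu, scale=sigma)        # phi(s, theta)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(outcomes * np.log(p) + (1 - outcomes) * np.log(1 - p))

levels, outcomes = run_staircase()
fit = minimize(neg_log_lik, x0=np.array([50.0, 5.0]),
               args=(levels, outcomes), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
q_alpha_hat = norm.ppf(1e-3, loc=mu_hat, scale=sigma_hat)  # extrapolated 1e-3 quantile
print(mu_hat, sigma_hat, q_alpha_hat)
\end{verbatim}
Repeating such runs shows the behaviour discussed above: the mean is recovered reasonably well, while the extrapolated $10^{-3}$ quantile inherits the poor estimation of the dispersion.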
\subsubsection{The Continuous Reassesment Method (CRM)} \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{General principle} The CRM (O'Quigley, Pepe and Fisher, 1990\cite{quigley}) has been designed for clinical trials and aims at the estimation of $q_{\Greekmath 010B }$ among $J$ stress levels $s_{1},...,s_{J}$, when $\Greekmath 010B $ is of order $20\%$. Denote $\mathbb{P}(R\leq s)=\Greekmath 0120 (s,\Greekmath 010C _{0})$. The estimator of $q_{\Greekmath 010B }$ is \[ s^{\ast }:=\underset{s_j \in \{s_{1},...,s_{J}\}}{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{arginf}}{|\Greekmath 0120 (s_{j},\Greekmath 010C _{0})-\Greekmath 010B |}. \] This optimization is performed iteratively and $K$ trials are performed at each iteration. Start with an initial estimator $\widehat{\Greekmath 010C _{1}}$ of $\Greekmath 010C _{0}$, for example through a Bayesian choice as proposed in \cite{quigley}. Define \[ s_{1}^{\ast }:=\underset{s_j \in \{s_{1},...,s_{J}\}}{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{arginf}}{|\Greekmath 0120 (s_{j},\widehat{\Greekmath 010C _{1}})-\Greekmath 010B |}. \] Every iteration follows a two-step procedure: \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Step 1.} Perform $J$ trials under $\Greekmath 0120 (.,\Greekmath 010C _{0})$, say $ R_{1,1},..,R_{1,J}$ and observe only their value under threshold, say $ Y_{1,j}:={\Large 1}_{R_{1,j}<s_{1}^{\ast }},1\leq j\leq J.$ \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Step i. }Iteration $i$ consists in two steps : \begin{itemize} \item[--] Firstly an estimate $\widehat{\Greekmath 010C _{i}}$ of $\Greekmath 010C _{0}$ is produced on the basis of the information beared by the trials performed in all the preceding iterations through maximum likelihood under $\Greekmath 0120 (.,\Greekmath 010C _{0})$ (or by maximizing the posterior distribution of the parameter). \item[--]\[ s_{i}^{\ast }:=\underset{s_j\in \{s_{1},...,s_{J}\}}{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{arginf}}{|\Greekmath 0120 (s,}\widehat{{\Greekmath 010C _{i}}}{)-\Greekmath 010B |}; \] This stress level $s_{i}^{\ast }$ is the one under which the next $K$ trials $Y_{i,1},\dots,Y_{i,K}$ will be performed in the Bernoulli scheme $\mathcal{B}\left(\Greekmath 0120 (s_{i}^{\ast },\Greekmath 010C _{0})\right)$. \end{itemize} The stopping rule depends on the context (maximum number of trials or stabilization of the results). Note that the bayesian inference is useful in the cases where there is no diversity in the observations at some iterations of the procedure, i.e when, at a given level of test $s_i^*$, only failures or survivals are observed. \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Application to fatigue data} The application to the estimation of the minimal allowable stress is treated in a bayesian setting. We do not directly put a prior on the parameter $\Greekmath 010C _0$, but rather on the probability of failure. We consider a prior information of the form: \emph{at a given stress level $s$, we can expect $k$ failures out of $n$ trials.} Denote $\Greekmath 0119 _s$ the prior indexed on the stress level $s$. $\Greekmath 0119 _s$ models the failure probability at level $s$ and has a Beta distribution given by \begin{equation}\label{priorP} \Greekmath 0119 _{s} \sim \Greekmath 010C (k,n-k+1). \end{equation} Let $R$ follow an exponential distribution: $\forall s \ge 0, \Greekmath 0120 (s,\Greekmath 010C _0) = p_s = 1 - \exp(- \Greekmath 010C _0 s)$. It follows $ \forall s,~ \Greekmath 010C _0 = - \frac{1}{s}\log(1- p_s)$. 
Define the random variable $\Lambda_s = - \frac{1}{s}\log(1-\Greekmath 0119 _{s})$ which, by definition of $\Greekmath 0119 _s$, is distributed as an k-order statistic of a uniform distribution $U_{k,n}$. The estimation procedure of the CRM is obtained as follows: \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Step 1. }Compute an initial estimator of the parameter \[ \Lambda_{s} = \frac{1}{L} \sum_{l=1}^L - \frac{1}{s}\log(1-\Greekmath 0119 _{s}^{l} ) \] with $\Greekmath 0119 _{s}^l \sim \Greekmath 010C (k,n-k+1), 1\le l\le L$. Define \[ s_{1}^{\ast }:=\underset{s_j\in \{s_{1},...,s_{J}\}}{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{arginf}}{|( 1 - \exp(- \Lambda_s s_j))-\Greekmath 010B |}. \] and perform $J$ trials at level $s_1^\ast$. Denote the observations $Y_{1,j}:={\Large 1}_{R_{1,j}<s_{1}^{\ast }},1\leq j\leq J.$ \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Step i. }At iteration $i$, compute the posterior distribution of the parameter: \begin{equation} \Greekmath 0119 ^*_{s_i} \sim \Greekmath 010C \left(k + \sum_{l=1}^{i}\sum_{j=1}^{J}Y_{l,j}~,~ n + (J\times i) -(k + \sum_{l=1}^{i}\sum_{j=1}^{J}Y_{l,j}) +1 \right) \end{equation} The above distribution also corresponds an order statistic of the uniform distribution $U_{k + \sum_{l=1}^{i}\sum_{j=1}^{J}Y_{l,j}~,~n + (J\times i) }$. We then obtain an estimate $\Lambda_{s_1^\ast}$. The next stress level $s_{i+1}^{\ast }$ to be tested in the procedure is then given by \[ s_{i+1}^{\ast }:=\underset{s_j\in \{s_{1},...,s_{J}\}}{\RIfM@\expandafter\text@\else\expandafter\mbox\fi{arginf}}{|( 1 - \exp(- \Lambda_{s_1^\ast} s_j))-\Greekmath 010B |}. \] \RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Numerical simulation for the CRM} Under the exponential model with parameter $\Greekmath 0115 =0.2$ and through $N=10$ iterations of the procedure, and $J=10$, with equally distributed thresholds $s_{1},..,s_{J}$ , and performing $K=50$ trials at each iteration, the results in Table \ref{CRMexp} are obtained. \begin{table}[] \centering \renewcommand{1.2}{1.2} \begin{tabular}{|c|c|c|c|} \hline \Greekmath 0116 lticolumn{4}{|c|}{\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{Relative error}} \\ \Greekmath 0116 lticolumn{2}{|c|}{\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{On the $0.1-$quantile}} & \Greekmath 0116 lticolumn{2}{c|}{\RIfM@\expandafter\text@\else\expandafter\mbox\fibf{On the $10^{-3}-$quantile}} \\ \hline\hline Mean & Std & Mean & Std \\ \hline 0.129 & 0.48 & -0.799 & 0.606 \\ \hline \end{tabular} \caption{Results obtained through \emph{CRM} on simulations for the exponential model} \label{CRMexp} \end{table} The $10^{-3}-$quantile is poorly estimated on a fairly simple model. Indeed for thresholds close to the expected quantile, nearly no failure is observed. So, for acceptable $K$, the method is not valid; figure \ref{crm} shows the increase of accuracy with respect to $K.$ Both the Staircase and the CRM have the same drawback in the context of extreme quantile estimation, since the former targets the central tendency of the variable of interest and the latter aims at the estimation of quantiles of order 0.2 or so, far from the target $\Greekmath 010B =10^{-3}$. Therefore, we propose an original procedure designed for the estimation of extreme quantiles under binary information. 
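As an illustration only, the following loose sketch mimics the Bayesian CRM variant described above for an exponential strength model, with the simplification that the Beta prior is indexed by a single fixed stress level (called \texttt{s\_prior} in the code); the prior counts, the grid of candidate levels and the iteration numbers are placeholders and do not reproduce the simulation protocol behind Table \ref{CRMexp}.
\begin{verbatim}
# Loose sketch of the Bayesian CRM variant for P(R <= s) = 1 - exp(-beta0*s).
# Prior knowledge is encoded as "about k failures out of n trials at s_prior".
# All numerical values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)
beta0 = 0.2                          # true (unknown) parameter used to simulate trials
alpha = 1e-3                         # target probability level
grid = np.linspace(1.0, 60.0, 10)    # candidate stress levels s_1 < ... < s_J
J = 10                               # trials per iteration
k, n, s_prior = 2, 10, 20.0          # prior: ~2 failures out of 10 at s_prior

failures, total = 0, 0
for it in range(10):
    # Posterior of the failure probability at s_prior: Beta(k + F, n - k + T - F + 1)
    post = rng.beta(k + failures, n - k + (total - failures) + 1, size=2000)
    lam = np.mean(-np.log(1.0 - post) / s_prior)     # plug-in estimate of beta0
    s_star = grid[np.argmin(np.abs((1.0 - np.exp(-lam * grid)) - alpha))]
    y = (rng.exponential(scale=1.0 / beta0, size=J) <= s_star).astype(int)
    failures += y.sum()
    total += J

# The CRM itself returns the selected grid level s_star; the closed-form value
# below is only the alpha-quantile extrapolated under the fitted exponential model.
q_hat = -np.log(1.0 - alpha) / lam
print(s_star, q_hat)
\end{verbatim}
Running this sketch illustrates the drawback stressed above: at levels close to the target quantile almost no failure is observed, so the posterior barely moves unless the number of trials per iteration is made very large.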
\begin{figure} \caption{Relative error on the $10^{-3}$-quantile as a function of the number $K$ of trials per level} \label{crm} \end{figure} \section{A new design for the estimation of extreme quantiles}\label{Splitting} \subsection{Splitting} The design we propose is directly inspired by the general principle of splitting methods used in the domain of rare event simulation and introduced by Kahn and Harris (1951 \cite{Kahn1951}). The idea is to overcome the difficulty of targeting an extreme event by decomposing the initial problem into a sequence of less complex estimation problems. This is enabled by the splitting methodology, which decomposes a small probability into the product of higher order probabilities. Denote $\mathbb{P}$ the distribution of the r.v. $R$. The event $\{ R\le s_\alpha \}$ can be expressed as the intersection of nested events: for $s_{\alpha }=s_{m}<s_{m-1}<\dots<s_{1}$ it holds: \[ \{R\leq s_{\alpha }\}=\{R\leq s_{m}\}\subset \dots \subset \{R\leq s_{1}\}. \] It follows that \begin{equation} \mathbb{P}(R\leq s_{\alpha })=\mathbb{P}(R\leq s_{1})\prod_{j=1}^{m-1}\mathbb{P}(R\leq s_{j+1}\mid R\leq s_{j}) \label{Prod} \end{equation} \label{split} The thresholds $(s_{j})_{j=1,\dots ,m}$ should be chosen such that all the $\mathbb{P}(R\leq s_{j+1}\mid R\leq s_{j})_{j=1,\dots ,m}$ are of order $p=0.2$ or $0.3$, in such a way that $\left\{ R\leq s_{j+1}\right\} $ is observed in experiments performed under the conditional distribution of $R$ given $\left\{ R\leq s_{j}\right\} $, and in a way which makes $\alpha $ recoverable from a rather small number of such probabilities $\mathbb{P}(R\leq s_{j+1}\mid R\leq s_{j})$ making use of \eqref{Prod}. From the formal decomposition in \eqref{Prod}, a practical experimental scheme can be deduced. Its form is given in Procedure \ref{Split}. \begin{algorithm}[H] \floatname{algorithm}{Procedure} \caption{Splitting procedure} \begin{algorithmic}\label{Split} \STATE \textbf{Initialization} \STATE Fix \begin{itemize} $\left. \text{\parbox{0.6\linewidth}{ \item[$\bullet$] the number $m$ of iterations to be performed (and of levels to be tested); \item[$\bullet$] the level of conditional probabilities $p$ (lying between 20 and 30\%); }} \right\}$ ~~~ such that $p^m \approx \alpha $ \item[$\bullet$] the first tested level $s_1$ (ideally the $p$-quantile of the distribution of $R$); \item[$\bullet$] the number $K$ of trials to be performed at each iteration. \end{itemize} \STATE \textbf{First step} \begin{itemize} \item[$\bullet$] $K$ trials are performed at level $s_1$. The observations are the indicators of failure $Y_{1,1},\dots,Y_{1,K}$, where $Y_{1,i} = \mathds{1}(R_{1,i}<s_1)$, of distribution $\mathcal{B}\left( \mathbb{P}(R \le s_1)\right)$. \item[$\bullet$] Determination of $s_2$, the $p$-quantile of the truncated distribution $R \mid R \le s_1$. \end{itemize} \STATE \textbf{Iteration $j=2$ to $m$} \begin{itemize} \item[$\bullet$] $K$ trials are performed at level $s_j$ under the truncated distribution of $R \mid R \le s_{j-1}$, resulting in observations $Y_{j,1},\dots,Y_{j,K} \sim \mathcal{B}\left( \mathbb{P}(R \le s_j\mid R \le s_{j-1})\right)$. \item[$\bullet$] Determination of $s_{j+1}$, the $p$-quantile of $R \mid R \le s_{j}$. \end{itemize} \STATE The last estimated quantile $s_m$ provides the estimate of $s_\alpha $.
\end{algorithmic} \end{algorithm} \subsection{Sampling under the conditional probability}\label{operational_procedure} In practice, batches of specimens are put under trial, each of them with a decreasing strength; this allows targeting the tail of the distribution $\mathbb{P}$ iteratively. \begin{center} \includegraphics[scale=0.4]{densR2.png} \captionof{figure}{Sampling under the strength density at $n_0$ cycles}\label{fatigue} \end{center} In other words, in the first step, points are sampled in zone (I). Then, in the following step, only specimens with strength in zone (II) are considered, and so on. In the final step, the specimens are sampled in zone (IV). At level $s_{m}$, they have a very small probability to fail before $n_{0}$ cycles under $\mathbb{P}$; however, under their own law of failure, which is $\mathbb{P}(\,\cdot\mid R\leq s_{m-1})$, they have a probability of failure of order 0.2. In practice, sampling in the tail of the distribution is achieved by introducing flaws in the batches of specimens. The idea is that the strength of the material varies inversely with respect to the size of the incorporated flaws. The flaws are spherical and located inside the specimen (not on its surface). Thus, as the procedure moves on, the trials are performed on samples of material incorporating flaws of greater diameter. This procedure is based on the hypothesis that there is a correspondence between the strength of the material with a flaw of diameter $\theta $ and the truncated strength of this same material without flaw under level of stress $s^*$; i.e., denoting $R_{\theta }$ the strength of the specimen with flaw of size $\theta $, we assume that there exists $s^*$ such that \[ \mathcal{L}(R_{\theta })\approx \mathcal{L}(R\mid R\leq s^*). \] Before launching a validation campaign for this procedure, a batch of 27 specimens has been machined, including spherical defects whose sizes vary between 0 and 1.8 mm (see Figure \ref{defautseprouvettes}). These first trials aim at estimating the decreasing relation between mean allowable stress and defect diameter $\theta $. This preliminary study made it possible to draw the abatement fatigue curve as a function of $\theta $, as shown in Figure \ref{abattement}. \begin{center} \includegraphics[scale=0.4]{RXeprouvettes_defauts_sign.png} \captionof{figure}{Coupons incorporating spherical defects of size varying from 0 mm (on the left) to 1.8 mm (on the right)}\label{defautseprouvettes} \end{center} \begin{center} \includegraphics[scale=0.3]{abattement_edit.png} \captionof{figure}{Mean allowable stress with respect to the defect size}\label{abattement} \end{center} Results in Figure \ref{abattement} are used during the splitting procedure to select the diameter $\theta $ to be incorporated in the batch of specimens tested at the current iteration, as reflecting the sub-population of material of smaller resistance. \subsection{Modeling the distribution of the strength: Pareto model}\label{GPD_model} The events under consideration have small probability under $\mathbb{P}$. By (\ref{Prod}) we are led to consider the limit behavior of conditional distributions under smaller and smaller thresholds, for which we make use of classical approximations due to Balkema and de Haan (1974 \cite{bal}), stated as follows, first in the commonly known setting of exceedances over increasing thresholds. Denote $\widetilde{R}:=1/R$.
\begin{theorem} \label{Thm de Haan} Let $\widetilde{R}$ have distribution function $F$ belonging to the maximum domain of attraction of an extreme value distribution with tail index $c$, i.e. $F\in MDA(c)$. Then there exists $a=a(s)>0$ such that \[ \lim\limits_{s \rightarrow \infty}\sup_{0\le x< \infty} \left\lvert \frac{1 - F\left(x + s\right)}{1 - F\left( s \right)} - \left(1 - G_{(c,a)}(x)\right) \right\rvert = 0, \] where $G_{(c,a)}$ is defined through \begin{equation*} G_{(c,a)}(x)=1-\exp \left\{ -\int_{0}^{\frac{x}{a}}\left[ (1+ct)_{+}\right] ^{-1}dt\right\} \end{equation*} with $a>0$ and $c\in \mathbb{R}$. \end{theorem} The distribution $G_{(c,a)}$ is the Generalized Pareto distribution $GPD(c,a)$, defined explicitly through \[ 1-G(x)=\left\{ \begin{array}{l} (1+\frac{c}{a}x)^{-1/c}\text{ when }c\neq 0 \\ \exp (-\frac{x}{a})\text{ when }c=0 \end{array} \right. \] where $x\geq 0$ for $c\geq 0$ and $0\leq x\leq -\frac{a}{c}$ if $c<0$. Generalized Pareto distributions enjoy invariance through threshold conditioning, an important property for our sake. Indeed it holds, for $\widetilde{R}\sim GPD(c,a)$ and $x>s$, \begin{equation} \mathbb{P}\left( \widetilde{R}>x\mid \widetilde{R}>s\right) =\left( 1+\frac{ c(x-s)}{a+cs}\right) ^{-1/c}. \label{r} \end{equation} We therefore state: \begin{proposition} \label{Prop stability GPD} When $\widetilde{R}\sim GPD(c,a)$, then, given $\left( \widetilde{R}>s\right) $, the r.v. $\widetilde{R}-s$ follows a $GPD(c,a+cs)$. \end{proposition} The GPD's are on the one hand stable under thresholding and on the other hand appear as the limit distributions for thresholding operations. This chain of arguments is quite usual in statistics, motivating the recourse to the ubiquitous normal or stable laws for additive models. This pleads in favor of GPD's for modelling the distribution of $\widetilde{R}$ in excess probability inference. Due to its lack-of-memory property, the exponential distribution, which appears as a possible limit distribution for excess probabilities in Theorem \ref{Thm de Haan}, does not qualify for modelling. Moreover, since we handle variables $R$ which can approach $0$ arbitrarily (i.e. unbounded $\widetilde{R}$), the parameter $c$ is assumed positive. Turning to the context of the minimal admissible constraint, we make use of the r.v. $R=1/\widetilde{R}$ and proceed to the corresponding change of variable. When $c>0$, the distribution function of the r.v. $R$ writes, for nonnegative $x$: \begin{equation}\label{GPD 1} F_{c,a}(x)=\left(1+\frac{c}{ax}\right)^{-1/c}. \end{equation} For $0<x<u$, the conditional distribution of $R$ given $\left\{ R<u\right\} $ is \[ \mathbb{P}(R<x\mid R<u)=\left( 1+\frac{c(\frac{1}{x}-\frac{1}{u})}{a+\frac{c }{u}}\right) ^{-1/c}, \] which shows that the distribution of $R$ is stable under threshold conditioning, with parameters $\left( a_{u},c\right) $ where \begin{equation} a_{u}=a+\frac{c}{u}. \label{transition param a} \end{equation} In practice, at each step $j$ of the procedure, the stress level $s_{j}$ equals $1/\widetilde{s}_{j}$, where $\widetilde{s}_{j}$ is a right quantile of the conditional distribution of $\widetilde{R}$ given $\left\{ \widetilde{R}>\widetilde{s}_{j-1}\right\} $. Therefore the observations take the form $Y_{i}=\mathds{1}_{R_{i}<s_{j-1}}=\mathds{1}_{\widetilde{R}_{i}> \widetilde{s}_{j-1}},~~i=1,\dots ,K_{j}$.
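The threshold-stability property (\ref{r}) and the change of variable leading to (\ref{GPD 1}) can be checked numerically. The short Python sketch below does so by Monte Carlo with scipy's \texttt{genpareto}, whose shape/scale parametrisation matches $GPD(c,a)$ with zero location; the parameter values are arbitrary and chosen for illustration only.
\begin{verbatim}
# Quick Monte Carlo check (illustration only) of the threshold stability of
# the GPD: if R~ ~ GPD(c, a), the excess R~ - s given {R~ > s} is GPD(c, a+c*s).
import numpy as np
from scipy.stats import genpareto

c, a, s = 1.5, 1.5, 4.0
rng = np.random.default_rng(3)

r = genpareto.rvs(c, scale=a, size=500_000, random_state=rng)
excess = r[r > s] - s

# Compare empirical and theoretical survival functions of the excess
for x in (1.0, 5.0, 20.0):
    emp = (excess > x).mean()
    theo = genpareto.sf(x, c, scale=a + c * s)      # GPD(c, a + c*s)
    print(f"x={x:5.1f}  empirical={emp:.4f}  theoretical={theo:.4f}")

# The strength variable R = 1/R~ then has d.f. F(x) = (1 + c/(a*x))^(-1/c)
x0 = 0.2
print(((1 / r) <= x0).mean(), (1 + c / (a * x0)) ** (-1 / c))
\end{verbatim}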
A convenient feature of model (\ref{GPD 1}) lies in the fact that the conditional distributions are completely determined by the initial distribution of $R$, therefore by $a$ and $c$. The parameters $a_{j}$ of the conditional distributions are determined from these initial parameters and by the corresponding stress level $s_{j}$; see (\ref{transition param a}). \subsection{Notation} The r.v. $\widetilde{R}$ follows a $GPD(c_{T},a_{T})$ with distribution function $G_{(c_{T},a_{T})}$. Write $\overline{G}_{(c_{T},a_{T})}=1-G_{(c_{T},a_{T})}$. Our proposal relies on iterations. We make use of a set of thresholds $(\widetilde{s}_{1},\dots,\widetilde{s}_{m})$ and define, for any $j\in \{1,\dots,m\}$, \[ \overline{G}_{(c_{j},a_{j})}(x - \widetilde{s}_{j})=\mathbb{P}\left(\left.\widetilde{R}>x\right\vert \widetilde{R}>\widetilde{s}_{j}\right) \] with $c_{j}=c_{T}$ and $a_{j}=a_{T}+c_{T}\widetilde{s}_{j}$, where we used (\ref{r}). At iteration $j$, denote $(\widehat{c},\widehat{a})_{j}$ the estimators of $(c_{j},a_{j})$. Therefore $1-G_{(\widehat{c},\widehat{a})_{j}}(x - \widetilde{s}_{j})$ estimates $\mathbb{P}\left(\left.\widetilde{R}>x\right\vert \widetilde{R}>\widetilde{s}_{j}\right)$. Clearly, estimators of $(c_{T},a_{T})$ can be recovered from $(\widehat{c},\widehat{a})_{j}$ through $\widehat{c}_{T}=\widehat{c}$ and $\widehat{a}_{T}=\widehat{a}-\widehat{c}~\widetilde{s}_{j}$. \subsection{Sequential design for the extreme quantile estimation}\label{proc} Fix $m$ and $p$, where $m$ denotes the number of stress levels under which the trials will be performed, and $p$ is such that $p^m=\alpha $. Set a first level of stress, say $s_{1}$, large enough (i.e. $\widetilde{s}_{1}=1/s_{1}$ small enough) so that $p_1 = \mathbb{P}(R<s_{1})$ is large enough, and perform trials at this level. The optimal value of $s_{1}$ should satisfy $p_1=p$, which cannot be secured; this choice is based on expert advice. Turn to $\widetilde{R}:=1/R$. Estimate $c_{T}$ and $a_{T}$ for the $GPD\left( c_{T},a_{T}\right) $ model describing $\widetilde{R}$, say $(\widehat{c},\widehat{a})_1$, based on the observations above $\widetilde{s}_{1}$ (note that under $s_{1}$ the outcomes of $R$ are easy to obtain, since the specimen is tested under medium stress). Define \[ \widetilde{s}_{2}:=\sup \left\{ s:\overline{G}_{(\widehat{c},\widehat{a})_{1}}\left( s - \widetilde s_1\right) \geq p\right\}, \] the $(1-p)$-quantile of $G_{(\widehat{c},\widehat{a})_{1}}$; $\widetilde s_2$ is the level of stress to be tested at the following iteration. Iterating from step $j=2$ to $m-1$, perform $K$ trials under $G_{(c_{j-1},a_{j-1})}$, say $\widetilde{R}_{j,1},\dots,\widetilde{R}_{j,K}$, and consider the observable variables $Y_{j,i}:=\mathds{1}_{\widetilde{R}_{j,i}>\widetilde{s}_{j}}$. Therefore the $K$ iid replications $Y_{j,1},\dots,Y_{j,K}$ follow a Bernoulli $\mathcal{B}(\overline{G}_{({c}_{j-1},{a}_{j-1})}\left( \widetilde s_{j} - \widetilde s_{j-1}\right) )$ distribution, where $\widetilde s_j$ has been determined at the previous step of the procedure. Estimate $(c_{j},a_{j})$ in the resulting Bernoulli scheme, say $(\widehat{c},\widehat{a})_{j}$. Then define \[ \begin{split} \widetilde s_{j+1}&:=\sup \left\{ s:\overline{G}_{(\widehat{c},\widehat{a})_{j}}\left( s - \widetilde s_j\right) \geq p\right\} \\ &=G_{\left( \widehat{c},\widehat{a}\right) _{j}}^{-1}(1-p)+\widetilde{s}_{j}, \end{split} \] which is the $(1-p)$-quantile of the estimated conditional distribution of $\widetilde{R}$ given $\{ \widetilde{R}>\widetilde{s_{j}}\}$, i.e.
$G_{( \widehat{c},\widehat{a})_{j}}$, and it is the next level to be tested. In practice, a conservative choice for $m$ is given by $m=\left\lceil \frac{\log \alpha }{\log p}\right\rceil $, where $\lceil \cdot \rceil $ denotes the ceiling function. This implies that the attained probability $\widetilde{\alpha }$ is less than or equal to $\alpha $. The $m$ stress levels $\widetilde{s}_{1}<\widetilde{s}_{2}<\dots < \widetilde{s}_{m}=\widetilde{q}_{1-\alpha }$ satisfy \[ \begin{split} \widetilde{\alpha }& =\overline{G}(\widetilde{s}_{1})\prod_{j=1}^{m-1}\overline{G}_{\left( \widehat{c},\widehat{a}\right) _{j}}(\widetilde{s}_{j+1} - \widetilde s_j) \\ & ={p}_{1}p^{m-1}. \end{split} \] Finally, by its very definition, $\widetilde{s}_{m}$ is a proxy of $\widetilde{q}_{1-\alpha }$. Although quite simple in its definition, this method bears a number of drawbacks, mainly in the definition of $\left( \widehat{c},\widehat{a}\right) _{j}$. The next section addresses this question. \section{Sequential enhanced design in the Pareto model}\label{EstimationProc} In this section we focus on the estimation of the parameters $\left( c_{T},a_{T}\right) $ of the $GPD(c_{T},a_{T})$ distribution of $\widetilde{R}$. One of the main difficulties lies in the fact that the available information does not consist of replications of the r.v. $\widetilde{R}$ under the current conditional distribution $G_{(c_{j},a_{j})}$ of $\widetilde{R}$ given $\left( \widetilde{R}>\widetilde{s_{j}}\right) $, but merely of very downgraded functions of those. At step $j$ we are given $G_{(\widehat{c},\widehat{a})_{j}}$ and define $\widetilde{s}_{j+1}$ as its $\left( 1-p\right)$-quantile. Simulating $K$ r.v.'s $\widetilde{R}_{j,i}$ with distribution $G_{(c_{j},a_{j})}$, the observable outcomes are the Bernoulli($p$) r.v.'s $Y_{j,i}:=\mathds{1}_{\widetilde{R}_{j,i}>\widetilde{s}_{j+1}}$. This loss of information with respect to the $\widetilde{R}_{j,i}$'s makes the estimation step for the coefficients $(\widehat{c},\widehat{a})_{j+1}$ quite complex; indeed $(\widehat{c},\widehat{a})_{j+1}$ is obtained through the $Y_{j,i}$'s, $1\leq i\leq K$, only. It is of interest to analyze the results obtained through standard maximum likelihood estimation of $(\widehat{c},\widehat{a})_{j+1}$. The quantile $\widetilde{q}_{1-\alpha }$ is loosely estimated for small $\alpha $; as measured on 1000 simulation runs, the large standard deviation of $\widehat{\widetilde{q}}_{1-\alpha }$ is due to poor estimation of the iterative parameters $(\widehat{c},\widehat{a})_{j+1}$. We have simulated $n=200$ realizations of r.v.'s $Y_{i}$ with common Bernoulli distribution with parameter $\overline{G}_{\left( c_{T},a_{T}\right) }(\widetilde{s}_{1})$. Figure \ref{loglik} shows the log-likelihood function of this sample as the parameter of the Bernoulli, $\overline{G}_{\left( c^{\prime },a^{\prime }\right) }(\widetilde{s}_{1})$, varies according to $\left( c^{\prime },a^{\prime }\right) $. As expected, this function is nearly flat over a very large range of $\left( c^{\prime },a^{\prime }\right) $. This explains the poor results in Table \ref{procQuantm} obtained through the splitting procedure when the parameters at each step are estimated by maximum likelihood, especially in terms of dispersion of the estimates. Moreover, the accuracy of the estimator of $\widetilde{q}_{1-\alpha }$ quickly deteriorates as the number $K$ of replications $Y_{j,i}$, $1\leq i\leq K$, decreases.
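The flatness of the binary likelihood can be reproduced with a few lines of code. The Python sketch below (illustrative only, with an arbitrary parameter grid and threshold) evaluates the Bernoulli log-likelihood of $n=200$ simulated indicators over a grid of $(c,a)$ values and exhibits near-constant values along whole regions of the parameter space, since the data identify only the scalar $\overline{G}_{(c,a)}(\widetilde{s}_{1})$.
\begin{verbatim}
# Numerical illustration (sketch) of the identifiability issue: with only
# Bernoulli observations Y_i = 1{R~_i > s1}, the log-likelihood in (c, a)
# is almost flat along whole curves of parameter values.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
c_true, a_true, s1 = 1.5, 1.5, 10.0
n = 200

p_true = genpareto.sf(s1, c_true, scale=a_true)     # P(R~ > s1)
y = rng.binomial(1, p_true, size=n)

def log_lik(c, a):
    p = genpareto.sf(s1, c, scale=a)
    p = min(max(p, 1e-12), 1 - 1e-12)
    return y.sum() * np.log(p) + (n - y.sum()) * np.log(1 - p)

for c in np.linspace(0.5, 3.0, 6):
    print(["%8.2f" % log_lik(c, a) for a in np.linspace(0.5, 5.0, 6)])
# Many (c, a) pairs give essentially the same likelihood value: the binary
# data constrain only one function of the two parameters.
\end{verbatim}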
Changing the estimation criterion to some alternative method does not improve the results significantly; Figure \ref{disp_quant} shows the distribution of the resulting estimators of $\widetilde{q}_{1-\alpha }$ for various estimation methods (minimum Kullback-Leibler, minimum Hellinger and minimum L1 distances -- see their definitions in Appendix \ref{div}) of $\left( c_{T},a_{T}\right)$. This motivates the need for an enhanced estimation procedure.

\begin{table}[]
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
Minimum & Q25 & Q50 & Mean & Q75 & Maximum \\ \hline
67.07 & 226.50 & 327.40 & 441.60 & 498.90 & 10 320.00 \\ \hline
\end{tabular}
\caption{Estimation of the $(1-\alpha)$-quantile, $\widetilde s_{\alpha }=469.103$, through procedure \protect\ref{proc} with $K=50$}
\label{procQuant}
\end{table}

\begin{table}[]
\centering
\begin{tabular}{|c|c|c||c|c|}
\hline
 & \multicolumn{2}{c||}{$\widetilde s_m$ for $K=30$} & \multicolumn{2}{c|}{$\widetilde s_m$ for $K=50$} \\
\textbf{$\widetilde s_{\alpha }$} & Mean & Std & Mean & Std \\ \hline
469.103 & 1 276.00 & 12 576.98 & 441.643 & 562.757 \\ \hline
\end{tabular}
\caption{Estimation of the $(1-\alpha)$-quantile, $\widetilde s_{\alpha }=469.103$, through procedure \protect\ref{proc} for different values of $K$}
\label{procQuantm}
\end{table}

\begin{figure}
\caption{Log-likelihood of the Pareto model with binary data}
\label{loglik}
\end{figure}

\begin{figure}
\caption{Estimations of the $\alpha$-quantile based on the Kullback-Leibler, L1 distance and Hellinger distance criteria}
\label{disp_quant}
\end{figure}

\subsection{An enhanced sequential criterion for estimation}\label{procImpML}

We consider an additional criterion which makes a peculiar use of the iterative nature of the procedure. We impose some control on the stability of the estimators of the conditional quantiles through the sequential procedure.

At iteration $j-1$, the sample $Y_{j-1,i}$, $1\leq i\leq K$, has been generated under $G_{(\widehat{c},\widehat{a})_{j-2}}$ and provides an estimate of $p$ through
\begin{equation}
\widehat{p}_{j-1}:=\frac{1}{K}\sum_{i=1}^{K}Y_{j-1,i}.  \label{p_ji}
\end{equation}
The above $\widehat{p}_{j-1}$ estimates $\mathbb{P}\left( \widetilde{R}>\widetilde{s}_{j-1}\mid \widetilde{R}>\widetilde{s}_{j-2}\right)$ conditionally on $\widetilde{s}_{j-1}$ and $\widetilde{s}_{j-2}$. We write this latter expression $\mathbb{P}\left( \widetilde{R}>\widetilde{s}_{j-1}\mid \widetilde{R}>\widetilde{s}_{j-2}\right)$ as a function of the parameters obtained at iteration $j$, namely $(\widehat{c},\widehat{a})_{j}$. The above r.v.'s $Y_{j-1,i}$ stem from variables $\widetilde{R}_{j-1,i}$ greater than $\widetilde{s}_{j-2}$. At step $j$, estimate then $\mathbb{P}\left( \widetilde{R}>\widetilde{s}_{j-1}\mid \widetilde{R}>\widetilde{s}_{j-2}\right)$ making use of $G_{(\widehat{c},\widehat{a})_{j}}$. This backward estimator writes
\[
\frac{\overline{G}_{(\widehat{c},\widehat{a})_{j}}(\widetilde{s}_{j-1})}{\overline{G}_{(\widehat{c},\widehat{a})_{j}}(\widetilde{s}_{j-2})}=1-G_{(\widehat{c},\widehat{a})_{j}}(\widetilde{s}_{j-1}-\widetilde{s}_{j-2}).
\]
The distance
\begin{equation}
\left\vert \left( \overline{G}_{(\widehat{c},\widehat{a})_{j}}(\widetilde{s}_{j-1}-\widetilde{s}_{j-2})\right) -\widehat{p}_{j-1}\right\vert  \label{A}
\end{equation}
should be small, since both $\overline{G}_{(\widehat{c},\widehat{a})_{j}}(\widetilde{s}_{j-1}-\widetilde{s}_{j-2})$ and $\widehat{p}_{j-1}$ should approximate $p$. Consider the distance between quantiles
\begin{equation}
\left\vert (\widetilde{s}_{j-1}-\widetilde{s}_{j-2})-G_{(\widehat{c},\widehat{a})_{j}}^{-1}(1-\widehat{p}_{j-1})\right\vert .  \label{B}
\end{equation}
An estimate $(\widehat{c},\widehat{a})_{j}$ can be proposed as the minimizer in $(c,a)$ of the above expression, for all $j$.

This backward estimation provides coherence with respect to the unknown initial distribution $G_{\left( c_{T},a_{T}\right) }$. Had we started with a good guess $(\widehat{c},\widehat{a})=\left( c_{T},a_{T}\right)$, then the successive $(\widehat{c},\widehat{a})_{j}$, $\widetilde{s}_{j-1}$, etc.\ would make (\ref{B}) small, since $\widetilde{s}_{j-1}$ (resp. $\widetilde{s}_{j-2}$) would estimate the $p$-conditional quantile of $\mathbb{P}\left( \,\cdot\mid \widetilde{R}>\widetilde{s}_{j-2}\right)$ (resp. $\mathbb{P}\left( \,\cdot\mid \widetilde{R}>\widetilde{s}_{j-3}\right)$).

It remains to specify the set of plausible values over which the quantity in (\ref{B}) should be minimized. We suggest considering a confidence region for the parameter $\left( c_{T},a_{T}\right)$. With $\widehat{p}_{j}$ defined in (\ref{p_ji}) and $\gamma \in \left( 0,1\right)$, define the $\gamma$-confidence region for $p$ by
\[
I_{\gamma }=\left[ \widehat{p}_{j}-z_{1-\gamma /2}\sqrt{\frac{\widehat{p}_{j}(1-\widehat{p}_{j})}{K-1}}\,;\,\widehat{p}_{j}+z_{1-\gamma /2}\sqrt{\frac{\widehat{p}_{j}(1-\widehat{p}_{j})}{K-1}}\right]
\]
where $z_{\tau }$ is the $\tau$-quantile of the standard normal distribution. Define
\[
\mathcal{S}_{j}=\left\{ (c,a):\left( 1-G_{(c,a)}(\widetilde{s}_{j}-\widetilde{s}_{j-1})\right) \in I_{\gamma }\right\} .
\]
Therefore $\mathcal{S}_{j}$ is a plausible set for $(\widehat{c}_{T},\widehat{a}_{T})$.

We summarize this discussion: at iteration $j$, the estimator of $\left( c_{T},a_{T}\right)$ is a solution of the minimization problem
\[
\min_{(c,a)\in \mathcal{S}_{j}}\left\vert (\widetilde{s}_{j-1}-\widetilde{s}_{j-2})-G_{(c,a+c\widetilde{s}_{j-2})}^{-1}(1-\widehat{p}_{j-1})\right\vert .
\]
The optimization method used is the Safip algorithm (Biret and Broniatowski, 2016 \cite{Biret}). As seen hereunder, this heuristic provides good performance.

\subsection{Simulation based numerical results\label{Subsection numGPD}}

This procedure has been applied in three cases. A case considered as reference is $(c_{T},a_{T})=(1.5,1.5)$; secondly, the case $(c_{T},a_{T})=(0.8,1.5)$ describes a light tail with respect to the reference; thirdly, the case $(c_{T},a_{T})=(1.5,3)$ defines a distribution with the same tail index as the reference, but with a larger dispersion index.
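For concreteness, here is a minimal sketch of the constrained estimation step of Section \ref{procImpML} (Python). All helper names are assumptions, and the penalized Nelder--Mead search is only an illustrative stand-in for the Safip algorithm, not the authors' implementation; in a full implementation this step would replace the plain maximum likelihood update at each iteration.
\begin{verbatim}
import numpy as np
from scipy.stats import genpareto, norm
from scipy.optimize import minimize

def enhanced_step(p_hat_prev, p_hat_cur, s_jm2, s_jm1, s_j, K, gamma=0.05):
    # Estimate (c_T, a_T) at iteration j by minimizing the quantile criterion (B)
    # over (an approximation of) the confidence region S_j.
    z = norm.ppf(1 - gamma / 2)
    half = z * np.sqrt(p_hat_cur * (1 - p_hat_cur) / (K - 1))
    lo, hi = p_hat_cur - half, p_hat_cur + half        # interval I_gamma for p

    def objective(theta):
        c, a = theta
        if c <= 0 or a <= 0:
            return np.inf
        # criterion (B): backward (1 - p_hat_{j-1})-quantile at scale a + c*s_{j-2}
        q_back = genpareto.ppf(1 - p_hat_prev, c, scale=a + c * s_jm2)
        crit = abs((s_jm1 - s_jm2) - q_back)
        # soft constraint (c, a) in S_j: model exceedance of s_j given R~ > s_{j-1}
        q_model = genpareto.sf(s_j - s_jm1, c, scale=a + c * s_jm1)
        crit += 1e3 * abs(q_model - np.clip(q_model, lo, hi))   # penalty outside I_gamma
        return crit

    return minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead").x
\end{verbatim}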
Table \ref{parIC} shows that the estimation of $\widetilde{q}_{1-\alpha }$ deteriorates as the tail of the distribution gets heavier; the procedure also underestimates $\widetilde{q}_{1-\alpha }$.

\begin{table}[tbp]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c||c|c|}
\hline
\textbf{Parameters} & \multicolumn{2}{c|}{\textbf{Relative error on $\widetilde s_{\alpha }$}} \\
 & Mean & Std \\ \hline\hline
$c=0.8$, $a_0=1.5$ and $\widetilde s_{\alpha }= 469.103$ & -0.222 & 0.554 \\ \hline
$c=1.5$, $a_0=1.5$ and $\widetilde s_{\alpha }=31621.777 $ & -0.504 & 0.720 \\ \hline
$c=1.5$, $a_0=3$ and $\widetilde s_{\alpha }=63243.550 $ & 0.310 & 0.590 \\ \hline
\end{tabular}
\caption{Mean and std of relative errors on the $(1-\alpha)$-quantile of the GPD calculated through 400 replicas of procedure \ref{procImpML}.}
\label{parIC}
\end{table}

Despite these drawbacks, we observe an improvement with respect to the simple Maximum Likelihood estimation; this is even clearer when the tail of the distribution is heavy. Also, in contrast with the ML estimation, the sensitivity with respect to the number $K$ of replications at each of the iterations plays in favor of this new method: as $K$ decreases, the gain with respect to Maximum Likelihood estimation increases notably, see Figure \ref{compMVQm}.

\begin{figure}
\caption{Estimations of the $(1-\alpha)$-quantile of two GPD obtained by Maximum Likelihood and by the improved Maximum Likelihood method}
\label{compMVQ}
\end{figure}

\begin{figure}
\caption{Estimations of the $(1-\alpha)$-quantile of a $GPD(0.8,1.5)$ obtained by Maximum Likelihood and by the improved Maximum Likelihood method for different values of $K$.}
\label{compMVQm}
\end{figure}

\subsection{Performance of the sequential estimation\label{subsection comparaison de Valk GPD}}

As stated in Section \ref{revLit}, there is to our knowledge no method dealing with a similar question available in the literature. Therefore we compare the results of our method, based on observed exceedances over thresholds, with the results that could be obtained by classical extreme quantile estimation methods assuming complete data are at our disposal; those may be seen as benchmarks for an upper bound of the performance of our method.

\subsubsection{Estimation of an extreme quantile based on complete data, de Valk's estimator}

In order to provide an upper bound for the performance of the estimator, we make use of the estimator proposed by de Valk and Cai (2016). This work aims at the estimation of a quantile of order $p_{n}\in \lbrack n^{-\tau _{1}};n^{-\tau _{2}}]$, with $\tau _{2}>\tau _{1}>1$, where $n$ is the sample size. This question is in accordance with the industrial context which motivated the present paper. De Valk's proposal is a modified Hill estimator adapted to log-Weibull tailed models. De Valk's estimator is consistent and asymptotically normally distributed, but is biased for finite sample sizes. We briefly recall some of the hypotheses which set the context of de Valk's approach.

Let $X_{1},\dots ,X_{n}$ be $n$ iid r.v.'s with distribution $F$, and denote by $X_{k:n}$ the $k$-th order statistic.
A tail regularity assumption is needed in order to estimate a quantile of order greater than $1-1/n$. Denote $U(t)=F^{-1}\left( 1-1/t\right)$, and let the function $q$ be defined by
\[
q(y)=U(e^{y})=F^{-1}\left( 1-e^{-y}\right)
\]
for $y>0$. Assume that
\begin{equation}
\lim\limits_{y\rightarrow \infty }~\frac{\log q(y\lambda )-\log q(y)}{g(y)}=h_{\theta }(\lambda )~~~\lambda >0  \label{logweibulltail}
\end{equation}
where $g$ is a regularly varying function and
\[
h_{\theta }(\lambda )=\left\{
\begin{array}{l}
\frac{\lambda ^{\theta }-1}{\theta }\text{ if }\theta \neq 0 \\
\log \lambda \text{ if }\theta =0
\end{array}
\right.
\]
de Valk writes condition \eqref{logweibulltail} as $\log q\in ERV_{\theta }(g)$.

\textit{Remark:} Although stated for log-generalized Weibull type tails, this condition also holds for Pareto tailed distributions, as can be checked, with $\theta =1$.

We now introduce de Valk's extreme quantile estimator. Let
\[
\vartheta _{k,n}:=\sum_{j=k}^{n}\frac{1}{j}.
\]
Let $q(z)$ be the quantile of order $e^{-z}=p_{n}$ of the distribution $F$. The estimator makes use of $X_{n-l_{n}:n}$, an intermediate order statistic of $X_{1},..,X_{n}$, where $l_{n}$ tends to infinity as $n\rightarrow \infty$ and $l_{n}/n\rightarrow 0$. de Valk's estimator writes
\begin{equation}
\widehat{q}(z)=X_{n-l_{n}:n}\exp \left\{ g(\vartheta _{l_{n},n})\,h_{\theta }\left( \frac{z}{\vartheta _{l_{n}+1,n}}\right) \right\} .
\end{equation}
When the support of $F$ overlaps $\mathbb{R}^{-}$, the sample size $n$ should be large; see de Valk (\cite{valk2}) for details. Note that, in the case of a $GPD(c,a)$, the parameter $\theta$ is known and equal to $1$, and the normalizing function $g$ is defined by $g(x)=cx$ for $x>0$.

\subsubsection{Loss in accuracy due to binary sampling}

In Table \ref{ValkIt} we compare the performance of de Valk's method with ours on the same GPD models, making use of complete data in de Valk's estimation, and of dichotomous data in our approach. Clearly de Valk's results cannot be attained by the present sequential method, due to the loss of information induced by thresholding and dichotomy. Despite this, the results can be compared: even if the bias of our estimator clearly exceeds the corresponding bias of de Valk's, its dispersion is of the same order of magnitude when handling heavy tailed GPD models. Note also that, given the binary nature of the data considered, the average relative error is quite honorable. A large part of the volatility of the estimator produced by our sequential methodology can be attributed to the nature of the GPD model as well as to the sample size.
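For reference, here is a minimal sketch of de Valk's estimator in the GPD case ($\theta =1$, $g(x)=cx$), which serves as the complete-data benchmark below (Python). Plugging in the standard Hill estimator for $c$ is a simplifying assumption of this sketch; de Valk's modified estimator is more involved.
\begin{verbatim}
import numpy as np

def devalk_quantile(x, p_n, l, c_hat=None):
    # Sketch of de Valk's estimator for a GPD-type tail (theta = 1, g(x) = c*x).
    # c_hat defaults to the Hill estimator, used here as a simple stand-in.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if c_hat is None:
        c_hat = np.mean(np.log(x[n - l:]) - np.log(x[n - l - 1]))   # Hill estimator
    theta_sum = lambda k: np.sum(1.0 / np.arange(k, n + 1))         # harmonic sums
    z = -np.log(p_n)                                # quantile of order e^{-z} = p_n
    h = z / theta_sum(l + 1) - 1.0                  # h_1(lambda) = lambda - 1
    return x[n - l - 1] * np.exp(c_hat * theta_sum(l) * h)

# example use (illustrative): extreme quantile from a simulated GPD(0.8, 1.5) sample
# from scipy.stats import genpareto
# sample = genpareto.rvs(0.8, scale=1.5, size=250, random_state=1)
# devalk_quantile(sample, p_n=1e-3, l=50)
\end{verbatim}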
\begin{table}[tbp]
\renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{tabular}{|c||c|c||c|c|}
\hline
 & \multicolumn{4}{c|}{\textbf{Relative error on the $(1-\alpha)$-quantile}} \\
\textbf{Parameters} & \multicolumn{2}{c||}{\textbf{On complete data}} & \multicolumn{2}{c|}{\textbf{On binary data}} \\
 & Mean & Std & Mean & Std \\ \hline\hline
$c=0.8$, $a_0=1.5$ and $s_\alpha = 469.103$ & \hspace{0.2cm} 0.052 \hspace{0.3cm}& 0.257 & \hspace{0.2cm} -0.222 \hspace{0.2cm} & 0.554 \\ \hline
$c=1.5$, $a_0=1.5$ and $s_\alpha =31621.777 $ & 0.086 & 0.530 & -0.504 & 0.720 \\ \hline
$c=1.5$, $a_0=3$ and $s_\alpha =63243.550 $ & 0.116 & 0.625 & 0.310 & 0.590 \\ \hline
\end{tabular}
\end{center}
\caption{Mean and std of the relative errors on the $(1-\alpha)$-quantile of GPD on complete and binary data for samples of size $n=250$, computed through $400$ replicas of both estimation procedures.\newline
Estimations on complete data are obtained with de Valk's method; estimations on binary data are provided by the sequential design.}
\label{ValkIt}
\end{table}

\section{Sequential design for the Weibull model}\label{WeibullModel}

The main property which led to the GPD model is its stability under threshold conditioning. However, in the Weibull case the conditional distribution of $\widetilde{R}$ given $\left\{ \widetilde{R}>s\right\}$ also takes a rather simple form, which allows for a variation of the sequential design method.

\subsection{The Weibull model}

Denote by $\widetilde{R}\sim W(\alpha ,\beta )$, with $\alpha ,\beta >0$, a Weibull r.v. with scale parameter $\alpha$ and shape parameter $\beta$. Let $G$ denote the distribution function of $\widetilde{R}$, $g$ its density function and $G^{-1}$ its quantile function. We thus write for non negative $x$
\[
\begin{split}
G(x)& =1-\exp \left( -\left( \frac{x}{\alpha }\right) ^{\beta }\right) \\
\text{ for }0<u<1,~~G^{-1}(u)& =\alpha (-\log (1-u))^{1/\beta }
\end{split}
\]
The conditional distribution of $\widetilde{R}$ is a truncated Weibull distribution:
\[
\begin{split}
\text{ for }\widetilde{s}_{2}>\widetilde{s}_{1},~~\mathbb{P}(\widetilde{R}>\widetilde{s}_{2}\mid \widetilde{R}>\widetilde{s}_{1})& =\frac{\mathbb{P}(\widetilde{R}>\widetilde{s}_{2})}{\mathbb{P}(\widetilde{R}>\widetilde{s}_{1})} \\
& =\exp \left\{ -\left( \frac{\widetilde{s}_{2}}{\alpha }\right) ^{\beta }+\left( \frac{\widetilde{s}_{1}}{\alpha }\right) ^{\beta } \right\}
\end{split}
\]
Denote by $G_{\widetilde{s}_{2}}$ the distribution function of $\widetilde{R}$ given $\left( \widetilde{R}>\widetilde{s}_{2}\right)$. The following result is useful.
For $\widetilde{s}_{2}>\widetilde{s}_{1}$,
\begin{equation}
\log \mathbb{P}(\widetilde{R}>\widetilde{s}_{2}\mid \widetilde{R}>\widetilde{s}_{1})=\left[ \left( \frac{\widetilde{s}_{2}}{\widetilde{s}_{1}}\right) ^{\beta }-1\right] \log \mathbb{P}(\widetilde{R}>\widetilde{s}_{1})
\end{equation}
Assuming $\mathbb{P}(\widetilde{R}>\widetilde{s}_{1})=p$, and given $\widetilde{s}_{1}$, we may find $\widetilde{s}_{2}$, the conditional quantile of order $1-p$ of the distribution of $\widetilde{R}$ given $\left\{ \widetilde{R}>\widetilde{s}_{1}\right\}$. This solves the first iteration of the sequential estimation procedure through
\[
\log p=\left[ \left( \frac{\widetilde{s}_{2}}{\widetilde{s}_{1}}\right) ^{\beta }-1\right] \log p
\]
where the parameter $\beta$ has to be estimated on the first run of trials.

The same type of transition holds for the iterative procedure; indeed, for $\widetilde{s}_{j+1}>\widetilde{s}_{j}>\widetilde{s}_{j-1}$,
\begin{equation}
\begin{split}
\log \mathbb{P}(\widetilde{R}>\widetilde{s}_{j+1}\mid \widetilde{R}>\widetilde{s}_{j})& =\left[ \frac{\log \mathbb{P}(\widetilde{R}>\widetilde{s}_{j+1}\mid \widetilde{R}>\widetilde{s}_{j-1})}{\log \mathbb{P}(\widetilde{R}>\widetilde{s}_{j}\mid \widetilde{R}>\widetilde{s}_{j-1})}-1\right] \log \mathbb{P}(\widetilde{R}>\widetilde{s}_{j}\mid \widetilde{R}>\widetilde{s}_{j-1}) \\
& =\left[ \frac{\widetilde{s}_{j-1}^{\beta }-\widetilde{s}_{j+1}^{\beta }}{\widetilde{s}_{j-1}^{\beta }-\widetilde{s}_{j}^{\beta }}-1\right] \log \mathbb{P}(\widetilde{R}>\widetilde{s}_{j}\mid \widetilde{R}>\widetilde{s}_{j-1})
\end{split}
\end{equation}
At iteration $j$ the thresholds $\widetilde{s}_{j}$ and $\widetilde{s}_{j-1}$ are known; the threshold $\widetilde{s}_{j+1}$ is the $(1-p)$-quantile of the conditional distribution, $\mathbb{P}(\widetilde{R}>\widetilde{s}_{j+1}\mid \widetilde{R}>\widetilde{s}_{j})=p$, hence it solves
\[
\log p=\left[ \frac{\widetilde{s}_{j-1}^{\beta }-\widetilde{s}_{j+1}^{\beta }}{\widetilde{s}_{j-1}^{\beta }-\widetilde{s}_{j}^{\beta }}-1\right] \log p
\]
where the estimate of $\beta$ is updated from the data collected at iteration $j$.

\subsection{Numerical results}

As in Sections \ref{Subsection numGPD} and \ref{subsection comparaison de Valk GPD}, we explore the performance of the sequential design estimation on the Weibull model. We estimate the $(1-\alpha)$-quantile of the Weibull distribution in three cases. In the first one, the scale parameter $a$ and the shape parameter $b$ satisfy $\left( a,b\right) =\left( 3,0.9\right)$; this corresponds to a strictly decreasing density function, with a heavy tail. In the second case, the distribution is skewed, since $\left( a,b\right) =\left( 3,1.5\right)$, and the third case, $\left( a,b\right) =\left( 2,1.5\right)$, describes a less dispersed distribution with a lighter tail.

Table \ref{errWeibulldeValk} shows that the performance of our procedure here again depends on the shape of the distribution. The estimators are less accurate in case 1, corresponding to a heavier tail. Those results are compared to the estimation errors on complete data obtained through de Valk's methodology. As expected, the loss of accuracy linked to data deterioration is similar to what was observed under the Pareto model, although a little more pronounced.
This greater loss of accuracy can be explained by the fact that the Weibull distribution is less adapted to the splitting structure than the GPD.

\begin{table}[tbp]
\begin{center}
\renewcommand{\arraystretch}{1.2}
\hspace{-1cm}
\begin{tabular}{|c||c|c||c|c|}
\hline
 & \multicolumn{4}{c|}{\textbf{Relative error on the $(1-\alpha)$-quantile}} \\
\textbf{Parameters} & \multicolumn{2}{c||}{\textbf{On binary data}} & \multicolumn{2}{c|}{\textbf{On complete data}} \\
 & Mean & Std & Mean & Std \\ \hline\hline
$a_0=3$, $b_0=0.9$ and $s_\alpha = 25.69 $ & \hspace{0.2cm} 0.282 &\hspace{0.2cm} 0.520 & \hspace{0.2cm} 0.127 & \hspace{0.2cm} 0.197 \\ \hline
$a_0=3$, $b_0=1.5$ and $s_\alpha =10.88 $ & -0.260 & 0.490 & 0.084 & 0.122 \\ \hline
$a_0=2$, $b_0=1.5$ and $s_\alpha =7.25$ & -0.241 & 0.450 & 0.088 & 0.140 \\ \hline
\end{tabular}
\end{center}
\caption{Mean and std of relative errors on the $(1-\alpha)$-quantile of Weibull distributions on complete and binary data for samples of size $n=250$, computed through $400$ replicas.\newline
Estimations on complete data are obtained with de Valk's method; estimations on binary data are provided by the sequential design.}
\label{errWeibulldeValk}
\end{table}

\section{Model selection and misspecification}\label{model_selection_missp}

In the above sections, we considered two models whose presentation was mainly motivated by theoretical properties. As already stated in paragraph \ref{GPD_model}, the modeling of $\widetilde R$ by a GPD with $c$ strictly positive is justified by the assumption that the support of the original variable $R$ may be bounded below only by $0$. However, note that the GPD model can easily be extended to the case where $c=0$; it then reduces to the estimation of an exponential distribution. Though we excluded the exponential case while modeling the excess probabilities of $\widetilde R$ by a GPD, we still considered the Weibull model in Section \ref{WeibullModel}, which belongs to the max domain of attraction associated with $c=0$. On top of being exploitable in the splitting structure, the Weibull distribution is a classical tool when modeling reliability issues; it thus seemed natural to propose an adaptation of the sequential method for it. In this section, we discuss the modeling decisions and give some hints on how to deal with misspecification.

\subsection{Model selection}

The decision between the Pareto model with strictly positive tail index and the Weibull model has been covered in the literature. There exist a variety of tests on the domain of attraction of a distribution. Dietrich et al. (2002 \cite{Dietrich2002}) and Drees et al. (2006 \cite{Drees2006}) both propose tests for extreme value conditions related to Cram\'er-von Mises tests. Let $X$ be a r.v. with distribution function $G$. The null hypothesis is
\[
H_{0}: G \in \mathrm{MDA}(c_0).
\]
In our case, the theoretical value for the tail index is $c_0=0$. The former test provides a testing procedure based on the tail empirical quantile function, while the latter uses a weighted approximation of the tail empirical distribution. Choulakian and Stephens (2001 \cite{Choulakian2001}) propose a goodness-of-fit test in the fashion of Cram\'er-von Mises tests, in which the unknown parameters are replaced by maximum likelihood estimators.
The test consists of two steps: firstly the estimation of the unknown parameters, and secondly the computation of the Cram\'er-von Mises $W^2$ or Anderson-Darling $A^2$ statistics. Let $X_1,\dots,X_n$ be a random sample of distribution $G$. The hypothesis to be tested is $H_0$: the sample comes from a $GPD(c_0, \widehat{a})$. The associated test statistics are given by:
\[
\begin{split}
&W^2 = \sum_{i=1}^n \left(\widehat{G}(x_{(i)}) - \frac{2i-1}{2n}\right)^2 + \frac{1}{12n};\\
&A^2 = -n-\frac{1}{n} \sum_{i=1}^n (2i-1)\left\{ \log\left(\widehat{G}(x_{(i)})\right) + \log\left(1-\widehat{G}(x_{(n+1-i)})\right) \right\},
\end{split}
\]
where $x_{(i)}$ denotes the $i$-th order statistic of the sample. The authors provide the corresponding tables of critical points.

Jurečková and Picek (2001 \cite{Jureckova2001}) designed a non-parametric test for determining whether a distribution $G$ is light or heavy tailed. The null hypothesis is defined by
\[
H_{c_0}: x^{1/c_0} (1 - G(x)) \le 1 ~~ \forall x>x_0 \text{ for some } x_0>0
\]
with fixed hypothetical $c_0$. The test procedure consists in splitting the data set into $N$ samples and computing the empirical distribution of the extrema of each sample.

The evaluation of the suitability of each model for fatigue data is delicate. The main difficulty here is that it is not possible to perform goodness-of-fit type tests since, firstly, we collect the data sequentially during the procedure and do not have a sample of observations available beforehand, and secondly, we do not observe the variable of interest $R$ but only peaks over chosen thresholds. The existing test procedures are not compatible with the reliability problem we are dealing with. On the one hand, they assume that the variable of interest is fully observed, and they are mainly semi-parametric or non-parametric tests based on order statistics. On the other hand, their performance relies on the availability of a large volume of data. This is not possible in the design we consider, since fatigue trials are both time consuming and extremely expensive.

Another option consists of validating the model \textit{a posteriori}, once the procedure is completed, using expert advice to confirm the results or not. For that matter, a procedure following the design presented in Section \ref{operational_procedure} is currently being carried out. Its results should be available in a few months and will give hints on the most relevant model.

\subsection{Misspecification}

In paragraph \ref{GPD_model}, we assumed that $\widetilde{R}$ follows a GPD. In practice, its distribution may differ from a GPD, with excess probabilities that only converge towards those of a GPD as the threshold increases. In the following, let us assume that $\widetilde{R}$ does not follow a GPD (of distribution function $F$) but another distribution $G$ whose tail gets closer and closer to that of a GPD. In this case, the issue is to control the distance between $G$ and the theoretical GPD and to determine from which thresholding level it becomes negligible.
One way to deal with this problem is to restrict the model to a class of distributions that are not too distant from $F$. Assume that the distribution function $G$ of the variable of interest $\widetilde{R}$ belongs to a neighborhood of the $GPD(c,a)$ with distribution function $F$, defined by
\begin{equation}\label{GPD_neighborhood}
V_\epsilon (F) = \left\{G: \sup_x |\bar F(x) - \bar G(x)|\,w(x) \le \epsilon \right\},
\end{equation}
where $\epsilon \ge 0$ and $w$ is an increasing weight function such that $\lim_{x\rightarrow\infty} w(x) = \infty$. $V_\epsilon (F)$ defines a neighborhood which does not tolerate large departures from $F$ in the right tail of the distribution.

For $x \ge s$, it follows from \eqref{GPD_neighborhood} that the conditional exceedance probability given $\widetilde{R}>s$ satisfies
\begin{equation}\label{cond_ineq}
\frac{\bar F(x) - \epsilon /w(x)}{\bar F(s) + \epsilon /w(s)} \le \frac{\bar G(x)}{\bar G(s)} \le \frac{\bar F(x) + \epsilon /w(x)}{\bar F(s) - \epsilon /w(s)}.
\end{equation}
When $\epsilon =0$, the bounds of \eqref{cond_ineq} match the conditional probabilities of the theoretical Pareto distribution. In order to control the distance between $F$ and $G$, the bounds above may be rewritten in terms of relative error with respect to the Pareto distribution. Using a Taylor expansion of the right and left bounds when $\epsilon$ is close to $0$, this becomes
\begin{equation}
1 - u(s,x)\,\epsilon \le \frac{\;\frac{\bar G(x)}{\bar G(s)}\;}{\frac{\bar F(x)}{\bar F(s)}} \le 1 + u(s,x)\,\epsilon ,
\end{equation}
where
\[
u(s,x) = \frac{\left(1+\frac{cs}{a}\right)^{1/c}}{w(s)} + \frac{\left(1+\frac{cx}{a}\right)^{1/c}}{w(x)}.
\]
For a given $\epsilon$ close to $0$, the relative error on the conditional probabilities can be controlled through the choice of $s$. Indeed, the relative error is then bounded by a fixed level $\delta >0$ whenever $u(s,x)\le \delta /\epsilon$, i.e. whenever
\[
\frac{\left(1+\frac{cs}{a}\right)^{1/c}}{w(s)} \le \frac{\delta }{\epsilon } - \frac{\left(1+\frac{cx}{a}\right)^{1/c}}{w(x)}.
\]

\section{Perspectives, generalization of the two models}\label{Perspectives}

In this work, we have considered two models for $\widetilde R$ that exploit the thresholding operations used in the splitting method. This is a limitation of the procedure, as the scarce information provided by the trials does not enable a flexible modeling of the distribution of the resistance. In the following, we present ideas of extensions and generalizations of those models, based on common properties of the GPD and Weibull models.

\subsection{Variations around mixture forms}

When the tail index is positive, the GPD is completely monotone, and thus can be written as the Laplace transform of a probability distribution. Thyrion (1964 \cite{thyrion}) and Thorin (1977 \cite{thorin}) established that a $GPD(c_T,a_T)$, with $c_T>0$, can be written as the Laplace transform of a Gamma r.v. $V$ whose parameters are functions of $a_T$ and $c_T$: $V \sim \Gamma\left(\frac{1}{c_T},\frac{a_T}{c_T}\right)$. Denote by $v$ the density of $V$; then
\begin{equation}\label{lapla}
\begin{split}
\forall x\ge 0, ~~ \bar{G}(x) = &\int_{0}^{\infty}\exp(-xy)\, v(y)\,dy \\
& \text{ where } ~~ v(y) = \frac{(a_T/c_T)^{1/c_T}}{\Gamma(1/c_T)}\,y^{1/c_T -1}\exp\left(-\frac{a_T y}{c_T}\right).
\end{split}
\end{equation}
It follows that the conditional survival function of $\widetilde R$, $\bar{G}_{\widetilde s_j}$, is given by:
\begin{alignat*}{2}
\mathbb{P}(\widetilde{R}>\widetilde s_{j+1} \mid \widetilde{R} > \widetilde s_j) & = \bar{G}_{\widetilde s_j}(\widetilde s_{j+1}-\widetilde s_j) \\
& = \int_{0}^{\infty}\exp \left\{-(\widetilde s_{j+1} - \widetilde s_j)y \right\} v_j(y)\,dy, &&\\
&\text{ ~~where } V_j \text{ is a r.v. of distribution } \Gamma\left(\frac{1}{c_j},\frac{a_j}{c_j}\right), &&
\end{alignat*}
with $c_j=c_T$ and $a_j=a_{j-1}+c_T (\widetilde s_j - \widetilde s_{j-1})$.

Expression \eqref{lapla} leaves room for an extension of the Pareto model. Indeed, we could consider distributions of $\widetilde R$ that share the same mixture form, with a mixing variable $W$ that possesses some common characteristics with the Gamma distributed r.v. $V$.

Similarly, the Weibull distribution $W(\alpha , \beta )$ can also be written as the Laplace transform of a stable law of density $g$ whenever $\beta \le 1$. Indeed, it follows from Feller (1971 \cite{feller}, p.~450, Theorem 1) that
\begin{equation}\label{fellerTh1}
\forall x\ge 0, ~~ \exp\left\{-x^{\beta } \right\}= \int_{0}^{\infty}\exp(-xy)\, g(y)\,dy
\end{equation}
where $g$ is the density of an infinitely divisible probability distribution. It follows, for $\widetilde s_j< \widetilde s_{j+1}$,
\begin{equation}\label{condWeibullLaplace}
\begin{split}
\mathbb{P}(\widetilde{R}>\widetilde s_{j+1} \mid \widetilde{R} > \widetilde s_j) &= \frac{\exp\left\{-(\widetilde s_{j+1}/\alpha )^{\beta } \right\}}{\exp\left\{-(\widetilde s_{j}/\alpha )^{\beta } \right\}} \\
& = \frac{ \int_{0}^{\infty}\exp\left\{-(\widetilde s_{j+1}/\alpha )y\right\} g(y)\,dy }{ \int_{0}^{\infty}\exp\left\{-(\widetilde s_{j}/\alpha )y\right\} g(y)\,dy } = \frac{ \int_{0}^{\infty}\exp\left\{-(\widetilde s_{j+1}/\alpha )y\right\} g(y)\,dy }{K(\widetilde s_j)} \\
&= \frac{1}{K(\widetilde s_j)} \int_{0}^{\infty}\exp\left\{-\widetilde s_{j+1}u \right\} g_{\alpha }(u)\,du \\
& \quad \quad\text{ with } u=y/\alpha \text{ and } g_{\alpha }(u)=\alpha\, g(\alpha u)
\end{split}
\end{equation}
Thus an alternative modeling of $\widetilde R$ could consist in any distribution that can be written as the Laplace transform of a stable law of density $w_{\alpha ,\beta }$ defined on $\mathbb{R}_+$ and parametrized by $(\alpha ,\beta )$, complying with the following condition: for any $s>0$, the distribution function of the conditional distribution of $\widetilde R$ given $\widetilde R>s$ can be written as the Laplace transform of $w_{\alpha ,\beta }^{(\alpha ,s)}( \cdot )$, where
\[
\text{for } x>s, \quad w_{\alpha ,\beta }^{(\alpha ,s)}(x) = \frac{\alpha\, w_{\alpha ,\beta }(\alpha x)}{K(s)},
\]
and $K( \cdot )$ is defined in \eqref{condWeibullLaplace}.

\subsection{Variation around the GPD}

Another approach, inspired by Naveau et al. (2016 \cite{naveau2016}), consists in modifying the model so that the distribution of $\widetilde R$ tends to a GPD as $x$ tends to infinity while taking a more flexible form near $0$. $\widetilde R$ is generated through $G_{(c_T,a_T)}^{-1}(U)$ with $U\sim\mathcal{U}[0,1]$.
Let us now consider a deformation of the uniform variable, $V=L^{-1}(U)$, defined on $[0,1]$, and the corresponding transform $W$ of the GPD: $W^{-1}(U)=G_{(c_T,a_T)}^{-1}(L^{-1} (U))$. The survival function of the GPD being completely monotone, we can choose $W$ so that the distribution of $\widetilde R$ keeps this property.
\begin{proposition}
If $\phi : [0,\infty) \rightarrow \mathbb{R}$ is completely monotone and $\psi$ is a positive function whose derivative is completely monotone, then $\phi \circ \psi$ is completely monotone.
\end{proposition}
The transformation of the GPD has cumulative distribution function $W=L(G_{(c_T,a_T)})$ and survival function $\bar W= \bar L(G_{(c_T,a_T)})$. $G_{(c_T,a_T)}$ is a Bernstein function; thus $\bar W$ is completely monotone whenever $\bar L$ is.

\subsubsection{Examples of admissible functions:}

\emph{(1) Exponential form: }
\[
\begin{split}
&L(0) = 0 \\
& L(x) = \frac{1-\exp(-\lambda x^\alpha )}{1-\exp(-\lambda )} ~~~ \text{with } 0\le \alpha \le 1 \text{ and } \lambda >0 \\
& L(1) = 1
\end{split}
\]
The obtained transformation is: $\forall x>0$,
\[
\bar W_{(\lambda ,c_T,a_T)}(x) = \bar L ( G(x)) = \frac{\exp\left(-\lambda \left[1-(1+\tfrac{c_T x}{a_T})^{-1/c_T}\right]^\alpha \right) - \exp(-\lambda ) }{1-\exp(-\lambda )}
\]
with $\bar W_{(\lambda ,c_T,a_T)}(x)$ completely monotone.

\hspace{0.5cm} \emph{(2) Logarithmic form: }
\[
\begin{split}
&L(0) = 0 \\
& L(x) = \frac{\log(x+1)}{\log 2 } ~~~~ \text{ \big( or more generally } \frac{\log(\alpha x+1)}{\log (\alpha +1) }, ~\alpha >0 \text{\big )}\\
& L(1) = 1
\end{split}
\]
and $\forall x>0$,
\[
\bar W_{(c_T,a_T)}(x) = 1-\frac{\log\left(2-(1+\tfrac{c_T x}{a_T})^{-1/c_T}\right)}{\log 2}
\]

\hspace{0.5cm} \emph{ (3) Root form: }
\[
\begin{split}
&L(0) = 0 \\
& L(x) = \frac{\sqrt{x+1}-1 }{\sqrt{2} -1}\\
& L(1) = 1
\end{split}
\]
and
\[
\bar W_{(c_T,a_T)}(x) = 1-\frac{\sqrt{2-(1+\tfrac{c_T x}{a_T})^{-1/c_T}} -1}{\sqrt{2}-1}
\]

\hspace{0.5cm} \emph{(4) Fraction form: }
\[
\begin{split}
&L(0) = 0 \\
& L(x) = \frac{(\alpha + 1)x}{x+ \alpha }, ~~ \alpha >0\\
& L(1) = 1
\end{split}
\]
and
\[
\bar W_{(\alpha ,c_T,a_T)}(x) = 1-\frac{ (\alpha +1)\left(1-(1+\tfrac{c_T x}{a_T})^{-1/c_T}\right) }{1-(1+\tfrac{c_T x}{a_T})^{-1/c_T} + \alpha }
\]
The shapes of the above transformations of the GPD are shown in Figure \ref{transf_gpd}.
\begin{figure}
\caption{Survival functions associated with transformations of the GPD$(0.8,1.5)$}
\label{transf_gpd}
\end{figure}
However, those transformations do not preserve the stability under thresholding of the Pareto distribution, and their implementation therefore does not give stable results. Still, they give some insight into simple generalizations of the proposed models, usable when additional information on the variable of interest is available.

\section{Conclusion}

The splitting induced procedure presented in this article proposes an innovative experimental plan to estimate an extreme quantile. Its development has been motivated on the one hand by major industrial stakes, and on the other hand by the lack of relevance of existing methodologies.
The main difficulty in this setting is the nature of the information at hand: the variable of interest is latent, and therefore only peaks over thresholds may be observed. Indeed, this study is directly driven by an application in material fatigue strength: when performing a fatigue trial, the strength of the specimen obviously cannot be observed; only the indicator of whether or not the strength was greater than the tested level is available.

Among the methodologies dealing with such a framework, none is adapted to the estimation of extreme quantiles. We therefore proposed a plan based on splitting methods in order to decompose the initial problem into less complex ones. The splitting formula introduces a formal decomposition which has been adapted into a practical sampling strategy targeting progressively the tail of the distribution of interest.

The structure of the splitting equation has motivated the parametric hypotheses on the distribution of the variable of interest. Two models exploiting a stability property have been presented: one assuming a Generalized Pareto Distribution and the other a Weibull distribution. The associated estimation procedure has been designed to use the iterative and stable structure of the model, by combining a classical maximum likelihood criterion with a consistency criterion on the sequentially estimated quantiles.

The quality of the estimates obtained through this procedure has been evaluated numerically. Though constrained by the quantity and quality of the information, those results can still be compared to what would be obtained ideally if the variable of interest were observed. On a practical note, while the GPD is the best adapted to the splitting structure, the Weibull distribution has the benefit of being particularly suitable for reliability issues. The experimental campaign launched to validate the method will contribute to selecting a model.
\end{document}
\begin{document}
\title[Classification of Nahm pole solutions]{Classification of Nahm pole solutions of the Kapustin-Witten equations on $S^1\times\Sigma\times\mathbb R^+$}
\author{Siqi He}
\address{Simons Center for Geometry and Physics, Stony Brook University\\Stony Brook, NY 11794}
\email{[email protected]}
\author{Rafe Mazzeo}
\address{Department of Mathematics, Stanford University\\Stanford, CA 94305}
\email{[email protected]}
\begin{abstract}
In this note, we classify all solutions to the $\mathrm{SU(n)}$ Kapustin-Witten equations on $S^1\times\Sigma \times \mathbb R^+$, where $\Sigma$ is a compact Riemann surface, with Nahm pole singularity at $S^1\times\Sigma \times \{0\}$. We provide a similar classification of solutions with generalized Nahm pole singularities along a simple divisor (a ``knot'') in $S^1\times\Sigma \times \{0\}$.
\end{abstract}
\maketitle

\section{Introduction}

An important conjecture by Witten \cite{witten2011fivebranes} posits a relationship between the Jones polynomial of a knot and a count of solutions to the Kapustin-Witten equations. More specifically, let $K$ be a knot in $X = \mathbb R^3$ or $S^3$, and fix an $\mathrm{SU(n)}$ bundle $P$ over $X \times \mathbb R^+$ with associated adjoint bundle $\mathfrak{g}_P$. The Kapustin-Witten (KW) equations \cite{KapustinWitten2006} are equations for a pair $(A, \Phi)$, where $A$ is a connection on $P$ and $\Phi$ is a $\mathfrak{g}_P$-valued $1$-form. We augment these with the singular Nahm pole boundary conditions at $y=0$ (where $y$ is a linear variable on the $\mathbb R^+$ factor), and with an additional singularity imposed along $K \times \{0\}$. The conjecture states that an appropriate count of solutions to the KW equations with these boundary conditions computes the Jones polynomial. One can define these equations when $X$ is a more general Riemannian $3$-manifold, and in that case this gauge-theoretic enumeration may lead to new $3$-manifold invariants when $K = \emptyset$, or to a generalization of the Jones polynomial for $K$ lying in a general $3$-manifold, see \cite{Witten2014LecturesJonesPolynomial, gaiotto2012knot}.

The core of all of this is to investigate the properties of the moduli space of solutions. Significant partial progress has been made, see \cite{MazzeoWitten2013,MazzeoWitten2017,He2017,RyosukeEnergy,taubes1982self}, as well as Taubes' recent advance \cite{Taubescompactness} regarding compactness properties.

As usual in gauge theory, it is reasonable to seek to understand a dimensionally reduced version of this problem. Thus suppose that $X = S^1\times \Sigma$, where $\Sigma$ is a compact Riemann surface of genus $g$. Solutions which are invariant in the $S^1$ direction are solutions of the so-called extended Bogomolny equations. General existence theorems for solutions of these dimensionally reduced equations were proved in \cite{HeMazzeo2017,MazzeoHe18}. In the present paper, we adapt arguments from \cite{MazzeoWitten2017} and prove that every solution to the KW equations on $S^1\times \Sigma \times \mathbb R^+$ satisfying Nahm pole boundary conditions is necessarily invariant in the $S^1$ direction. This leads to a complete classification of solutions in this special case.

\begin{theorem}
Consider the Kapustin-Witten equations on $S^1\times \Sigma \times \mathbb R^+_y$ for fields satisfying the Nahm pole boundary condition at $y=0$ (with no knot singularity) and which converge to a flat $\mathrm{SL}(n,\mathbb C)$ connection as $y\to\infty$.
\begin{itemize}
\item [i)] There are no solutions if $g=0$;
\item [ii)] There is a unique solution (up to unitary gauge equivalence) if $g=1$;
\item[iii)] If $g>1$, there exists a solution if and only if the limiting flat connection as $y \to \infty$ lies in the Hitchin section in the $\mathrm{SL}(n,\mathbb C)$ Hitchin moduli space, and in that case, this solution is unique up to unitary gauge.
\end{itemize}
\end{theorem}

Part ii) here largely comes from the uniqueness theorem in \cite{MazzeoWitten2013} for solutions on $\mathbb R^3 \times \mathbb R^+$. The Hitchin section in part iii) is also known as the Hitchin component of the $\mathrm{SL}(n,\mathbb R)$ representation variety, cf.\ \cite{hitchin1992lie}. We recall that there are in fact $n^{2g}$ equivalent Hitchin components, depending on the different choices of spin structure.

Next suppose that the knot $K \subset S^1 \times \Sigma$ is a union of `parallel' copies of $S^1$, $K = \sqcup_i(S^1 \times \{p_i\})$. The Nahm boundary conditions at a knot require that we specify a weight $\mathbf{k}^i$, i.e., an $(n-1)$-tuple of positive integers $(k_1^i, \ldots, k_{n-1}^i) \in \mathbb N^{n-1}$, for each component $K_i$.

\begin{theorem}
Consider the Kapustin-Witten equations on $S^1 \times \Sigma \times \mathbb R^+_y$ for fields which satisfy the Nahm pole boundary condition with knot singularities with weights $\mathbf{k}^i$, as described above, along $K\times\{0\}$, where $K = \sqcup_i (S^1 \times \{p_i\})$, and which converge to a flat $\mathrm{SL}(n,\mathbb C)$ connection, corresponding to a stable Higgs pair $(\mathcal{E},\varphi)$, as $y \to \infty$.
\begin{itemize}
\item [i)] There are no solutions when $g=0$;
\item [ii)] If $g>1$ and the corresponding representation $\rho$ is irreducible, there exists a solution with these boundary conditions at $K$ if and only if there exists a holomorphic line subbundle $L$ of $\mathcal{E}$ such that the data set $\mathfrak{d} (\mathcal{E},\varphi,L)=\{(p_i,\mathbf{k}_i)\}$.
\end{itemize}
\end{theorem}
\noindent The definition of the data set $\mathfrak{d} (\mathcal{E},\varphi,L)$ is recalled in Section \ref{datasetdefinition}.

\begin{remark}
We do not discuss the case $g=1$ here. Indeed, it is not clear what the correct existence theory for solutions with knot singularities should be in this case.
\end{remark}

\begin{corollary}
There exist, up to unitary gauge, at most $n^{2g}$ solutions to the KW equations which converge to the given flat connection associated to $(\mathcal{E}, \varphi)$ and with Nahm singularity along $K = \sqcup_i (S^1 \times \{p_i\})$.
\end{corollary}

The knot points $p_i$ and the weights $\mathbf{k}^i$ determine the divisor $D=\sum_i p_i\sum_j k_j^i$.

\begin{theorem}
If $\deg D$ is not divisible by $n$, there exist no Nahm pole solutions to the KW equations with knot singularity along $K$. In particular, there are no solutions to the $\mathrm{SU}(2)$ extended Bogomolny equations with only a single knot singularity of weight $1$.
\end{theorem}

\textbf{Acknowledgements.} The first author would like to thank Simon Donaldson for numerous helpful discussions. The second author was supported by the NSF grant DMS-1608223.
\section{The Kapustin-Witten Equations and the Nahm Pole Boundary Conditions}

We begin with some background material on the Kapustin-Witten equations \cite{KapustinWitten2006} and the Nahm pole boundary conditions.

\subsection{The Kapustin-Witten Equations}

Let $(M,g)$ be a Riemannian $4$-manifold, and $P$ an $\mathrm{SU(n)}$ bundle over $M$ with adjoint bundle $\mathfrak{g}_P$. The Kapustin-Witten equations for a connection $A$ and a $\mathfrak{g}_P$-valued $1$-form $\Phi$ are
\begin{equation}
\begin{split}
F_A-\Phi\wedge\Phi+\star d_A\Phi=0,\ \ d_A^{\star}\Phi=0.
\label{KW}
\end{split}
\end{equation}
When $M$ is closed, all solutions to the KW equations are flat $\mathrm{SL}(n,\mathbb C)$ connections \cite{KapustinWitten2006}. Indeed, in this setting, a Weitzenb\"ock formula shows that solutions must satisfy the decoupled equations
\begin{equation}
F_A-\Phi\wedge\Phi=0,\;d_A\Phi=0,\;d_A\star\Phi=0,
\label{flatconnection}
\end{equation}
or equivalently, $F_{\mathcal A} = 0$, where $\mathcal A := A + i\Phi$, together with $d_A \star \Phi = 0$.

Following \cite{witten2011fivebranes,Witten2014LecturesJonesPolynomial}, the main case of interest here is when $M=X\times\mathbb R^+$, where $X$ is a closed 3-manifold and $\mathbb R^+:=(0,\infty)$ with linear coordinate $y$. From now on, we fix a Riemannian metric on $X$ with volume $1$, and endow $X\times\mathbb R^+$ with the product metric.

\subsubsection{The Nahm Pole Boundary Condition}

Let $G:=\mathrm{SU(n)}$, with Lie algebra $\mathfrak{g}$, and choose a principal embedding $\varrho:\mathfrak{su}(2)\to\mathfrak{g}$ as well as a global orthonormal coframe $\{\mathfrak{e}^*_a,\ a=1,2,3\}$ of $T^*X$, which is possible since $X$ is parallelizable. Next, choose a section $e$ of $T^*X \otimes \mathfrak{g}_P$, $e = \sum_a \mathfrak t_a\, \mathfrak{e}_a^*$, for some everywhere nonvanishing sections $\mathfrak{t}_a$, $a=1,2,3$, of the adjoint bundle $\mathfrak{g}_P$ which satisfy the commutation relations $[\mathfrak{t}_a,\mathfrak{t}_b]=\epsilon_{abc}\mathfrak{t}_c$, and which lie in the conjugacy class of the image of $\varrho$. This choice of $e$ is called a {\it dreibein form}.

\begin{definition}
With all notation as above, the pair $(A,\Phi)$ satisfies the \textbf{Nahm pole boundary condition} at $y=0$ if, in some gauge, $A=A_0 + \mathcal{O}(y^{\epsilon})$ and $\Phi=\frac{e}{y}+\mathcal{O}(y^{-1+\epsilon})$ for some $\epsilon>0$.
\end{definition}

The rationale for this name is that the dimensional reduction of the KW equations to $\mathbb R^+$ consists of the Nahm equations, and in this case $(0,\frac{e}{y})$ is a `standard' solution of the Nahm equations with a so-called pole at $y=0$. We remark also that, as proved in \cite{MazzeoWitten2013}, it is sufficient to assume that $A = \mathcal{O}(y^{-1+\epsilon})$, since the regularity theory for solutions shows that there is automatically a leading coefficient $A_0$.

\subsubsection{The Nahm Pole Boundary Condition with Knot Singularities}

A generalization of this boundary condition incorporates certain `knot' singularities at $y=0$. Before describing this, recall from \cite{witten2011fivebranes} the model solution when $G = \mathrm{SU(2)}$ and $X=\mathbb R^3 = \mathbb{R}\times\mathbb{C}$ with coordinates $(x_1,z = x_2 + ix_3)$. Introduce spherical coordinates $(R,s,\theta)$ in the $(z,y)$ half-space: $z=re^{i\theta}$, $R=\sqrt{r^2+y^2}$, $y=R\sin s$, $r= |z| = R\cos s$. The model knot is the line $(x_1,0,0)\subset \mathbb{R}^3 \times \{0\}$.
Writing $\Phi=\phi_zdz+\phi_{\bar{z}} d\bar{z}+\phi_1 dx_1+\phi_ydy$, the model solution of weight $k$ takes the form
\begin{equation}
\begin{split}
A&=-(k+1)\cos^2 s\,\frac{(1+\sin s)^k-(1-\sin s)^k}{(1+\sin s)^{k+1}-(1-\sin s)^{k+1}}\,d\theta \left(\begin{array}{cc}
\frac i2& 0\\
0& -\frac i2
\end{array}\right),\\
\phi_z&=\frac{2(k+1)e^{ik\theta}\cos^k s}{R(1+\sin s)^{k+1}-R(1-\sin s)^{k+1}}\left(\begin{array}{cc}
0& 1\\
0& 0
\end{array}\right),\\
\phi_1&=\frac{k+1}{R}\,\frac{(1+\sin s)^{k+1}+(1-\sin s)^{k+1}}{(1+\sin s)^{k+1}-(1-\sin s)^{k+1}}\left(\begin{array}{cc}
\frac i2& 0\\
0& -\frac i2
\end{array}\right),\;\phi_y=0.
\end{split}
\end{equation}
There is a less explicit model solution when $G= \mathrm{SU(n)}$, due to Mikhaylov \cite{Mikhaylov2012solutions}. The weight in that case is an $(n-1)$-tuple $\mathbf{k}=(k_1,\cdots,k_{n-1})$, and the corresponding solution is denoted $(A^{\mathrm{mod}}_{\mathbf{k}},\Phi^{\mathrm{mod}}_{\mathbf{k}})$. As in the case $n=2$, $|A^{\mathrm{mod}}_{\mathbf{k}}|\sim R^{-1}s^0$ and $|\Phi^{\mathrm{mod}}_\mathbf{k}|\sim R^{-1}s^{-1}$ near $z=0, y=0$.

In general, given a knot $K\subset X\times\{0\}$, introduce local coordinates $(x_1,z = x_2 + i x_3 ,y)$ near $K$, where $K = \{z = y = 0\}$ and $x_1$ is a coordinate along $K$. We can use cylindrical coordinates $(R,s,\theta, x_1)$ near $K$, where $y=R\sin s$, $z=R\cos s\, e^{i\theta}$. Then, as in \cite{witten2011fivebranes,MazzeoWitten2017}, we make the

\begin{definition}
With $P$ and $G$ as above, and $K \subset X$ a knot, $(A,\Phi)$ satisfies the \textbf{Nahm pole boundary condition with knot $K$ and weight $\mathbf{k}$} if in some gauge
\begin{itemize}
\item [i)] $(A,\Phi)$ satisfies the Nahm pole boundary condition away from the knot $K$;
\item [ii)] near $K$, $A=A^{\mathrm{mod}}_{\mathbf{k}}+\mathcal{O}(R^{-1+\epsilon}s^{-1+\epsilon})$, $\Phi=\Phi^{\mathrm{mod}}_{\mathbf{k}}+\mathcal{O}(R^{-1+\epsilon}s^{-1+\epsilon}).$
\end{itemize}
\end{definition}

\subsubsection{The Boundary Condition at $y=\infty$}

We must also impose an asymptotic boundary condition at the cylindrical end, as $y \to \infty$. We change to a temporal gauge, i.e., so that $A_y \equiv 0$. Then, writing $\Phi=\phi+\phi_ydy$ (so $\phi$ includes the $\phi_1$ part), the KW equations become the flow equations
\begin{equation}
\begin{split}
&\partial_y A=\star d_A\phi+[\phi_y,\phi],\\
&\partial_y\phi=d_A\phi_y+\star(F_A-\phi\wedge\phi),\\
&\partial_y\phi_y=d_A^{\star}\phi.
\label{flowequation}
\end{split}
\end{equation}
We shall assume that $(A,\Phi)$ converges to a ``steady-state'' ($y$-independent) solution as $y \to \infty$, which is then necessarily a flat $\mathrm{SL}(n,\mathbb C)$ connection. The $y$-independence, together with the equations \eqref{flatconnection}, yields that $[\phi,\phi_y]=d_A\phi_y=0$; this shows that if $\phi_y \neq 0$, then $\mathcal A$ is reducible.

\begin{proposition}
If $(A,\Phi)$ satisfies the KW equations together with Nahm pole boundary conditions (possibly with knots), and converges to an irreducible flat $\mathrm{SL}(n,\mathbb C)$ connection as $y \to \infty$, then $\phi_y \equiv 0$.
\label{vanphiy}
\end{proposition}

Indeed, the hypothesis and the remark above show that $\lim_{y\rightarrow +\infty}\phi_y=0$. A well-known vanishing theorem then implies that $\phi_y \equiv 0$; see \cite[Page 36]{taubes2013compactness} or \cite[Corollary 4.7]{He2017} for a proof. We assume henceforth, as in \cite{Taubescompactness}, that $\phi_y \equiv 0$.
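For the reader's convenience, we record the short computation behind the identity $[\phi,\phi_y]=d_A\phi_y=0$ invoked above (a sketch only; constants and signs depend on the conventions used for $\Phi\wedge\Phi$ and on the orientation). For a $y$-independent configuration in temporal gauge, $F_A$ has no $dy$-component, while
\begin{equation*}
\Phi\wedge\Phi \;=\; \phi\wedge\phi + [\phi,\phi_y]\wedge dy, \qquad d_A\Phi \;=\; d_A\phi + (d_A\phi_y)\wedge dy,
\end{equation*}
where $[\phi,\phi_y]:=\sum_a[\phi_a,\phi_y]\,dx^a$. The $dx^a\wedge dy$ components of the equations $F_A-\Phi\wedge\Phi=0$ and $d_A\Phi=0$ in \eqref{flatconnection} therefore force $[\phi,\phi_y]=0$ and $d_A\phi_y=0$, respectively.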
We now define the moduli spaces
\begin{equation}
\begin{split}
\mathcal{M}_{\mathrm{NP}}^{\mathrm{KW}}:=\{(A,\Phi): \ \mathrm{KW}(A,\Phi)=0, \ (A,\Phi) \mbox{ converges to a flat } \mathrm{SL}(n,\mathbb C) \mbox{ connection}\\
\mbox{as} \ y\to\infty\ \mbox{and satisfies the Nahm pole boundary condition at} \ y=0\}/\mathcal{G}_0,
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\mathcal{M}^{\mathrm{KW}}_{\mathrm{NPK}} :=\left\{ (A,\Phi): \mathrm{KW}(A,\Phi)=0,\ (A,\Phi)\ \mbox{ satisfies the Nahm pole} \right. \\
\mbox{ boundary condition with knot $K$ and converges to a flat } \\
\left. \mathrm{SL}(n,\mathbb C) \ \textrm{connection as} \ y\to\infty \right\}/\mathcal{G}_0,
\label{complexgeometrymodulispace}
\end{split}
\end{equation}
where $\mathcal{G}_0$ is the space of gauge transformations preserving the boundary conditions.

\subsection{Regularity theorems for Nahm pole solutions}

We next recall the regularity theory for this singular boundary condition at $y=0$, as developed in \cite{MazzeoWitten2013,MazzeoWitten2017}. Still working on $X\times\mathbb R^+$, fix a smooth background connection $\nabla$, and write $\nabla_x, \nabla_y$ for the covariant derivatives in the $x \in X$ and $y$ directions.

\begin{theorem}{\cite{MazzeoWitten2013, MazzeoWitten2017}}
\label{expansions}
Let $(A,\Phi)$ satisfy the KW equations with the Nahm pole boundary condition, and write $A = A_0 + a$, $\Phi=\frac{e}{y}+b$ near $y=0$, where $a = \mathcal{O}(y^\epsilon)$, $b = \mathcal{O}(y^{-1 + \epsilon})$. Then $a$ and $b$ are polyhomogeneous. Furthermore, the leading term $A_0$ of $A$ must correspond, under the intertwining provided by the dreibein $e$, with the Levi-Civita connection on $X$.

If $(A,\Phi)$ satisfies the Nahm pole boundary condition with a knot singularity along $K$ of weight $\mathbf{k}$ at $y=0$, then writing $A=A^{\mathrm{mod}}_\mathbf{k}+a,\;\Phi=\Phi^{\mathrm{mod}}_{\mathbf{k}}+b$, where $(A^{\mathrm{mod}}_\mathbf{k},\Phi^{\mathrm{mod}}_\mathbf{k})$ is the model solution and $a, b = \mathcal{O}(R^{-1 + \epsilon} s^{-1+\epsilon})$, the fields $a, b$ are polyhomogeneous, i.e., have expansions in positive powers of $R$ and $s$, and nonnegative integer powers of $\log R$ and $\log s$, with coefficients smooth in the tangential variables. These expansions are of product type at the corner $R = s= 0$.
\end{theorem}

\begin{remark}
We recall that a function (or section of some bundle) $u$ is polyhomogeneous on $X \times \mathbb R^+$ at $X \times \{0\}$ if, near any boundary point,
\[
u \sim \sum_j \sum_{\ell = 0}^{N_j} u_{j\ell}(x)\, y^{\gamma_j} (\log y)^\ell\ \ \mbox{as}\ y \to 0.
\]
Here $x$ is a local coordinate on $X$ and each coefficient $u_{j\ell}(x)$ is $\mathcal C^\infty$, while $\gamma_j$ is a sequence of complex numbers with real parts tending to infinity. In our setting, the $\gamma_j$ are explicit real numbers calculated in \cite{MazzeoWitten2013}.

The second polyhomogeneity statement, near $K$, may be phrased similarly once we introduce the blowup $[X \times \mathbb R^+; K \times \{0\}]$. This is a new manifold with corners of codimension two, obtained by replacing the knot $K$ at $y=0$ with its inward-pointing spherical normal bundle. The cylindrical coordinates $(x_1, R, s, \theta)$ are nonsingular on this space, and the two boundaries are defined by $\{R=0\}$ and $\{s=0\}$.
A function or section $u$ is polyhomogeneous on this space if it admits a classical expansion as described above near each point in the interior of the codimension one boundaries, while near the corner $\{R = s = 0\}$ it admits a product type expansion
\[
u \sim \sum_{j, k} \sum_{\ell=0}^{N_j} \sum_{m = 0}^{M_j} u_{j k \ell m}(x_1, \theta)\, s^{\gamma_j} R^{\mu_k} (\log s)^\ell (\log R)^m,
\]
where, as before, each coefficient function is smooth in the variables $x_1,\theta$ along the corner. In our setting the $\gamma_j$ and $N_j$ are the same numbers as in the previous expansion, while the $\mu_k$ are real numbers calculated (somewhat less explicitly, i.e., only in terms of spectral data of some auxiliary operator) in \cite{MazzeoWitten2017}. The paper \cite{Heexpansion18} considers various refined aspects of the higher terms in the expansion in $y$.

We have described this precise regularity for the sake of completeness, but in fact we do not use the full power of these expansions here, only the estimates
\begin{equation*}
\begin{split}
& |\nabla_x^{\ell}\nabla_y^m a|_{\mathcal{C}^0}\leq C_{\ell,m}y^{2-m+\epsilon},\ \ |\nabla_x^{\ell}\nabla_y^m b|_{\mathcal{C}^0}\leq C_{\ell,m}y^{1-m+\epsilon}, \\
& |\nabla_{x_1}^{\ell}\nabla_R^{m}\nabla_s^n a|_{\mathcal{C}^0}\leq C_{\ell,m,n}R^{-\epsilon-m}s^{2-\epsilon-n},\ \ |\nabla_{x_1}^{\ell}\nabla_R^{m}\nabla_s^n b|_{\mathcal{C}^0} \leq C_{\ell,m,n}R^{-\epsilon-m}s^{1-\epsilon-n}
\end{split}
\end{equation*}
for any $\epsilon>0$ and any $\ell,m,n \in \mathbb N$.
\end{remark}

\section{The Extended Bogomolny Equations}

We next recall the dimensional reduction of the Kapustin-Witten equations from $S^1\times\Sigma\times \mathbb R^+$ to $\Sigma \times \mathbb R^+$, obtained by considering fields invariant in the $S^1$ direction. This was previously studied in \cite{HeMazzeo2017,MazzeoHe18}, and is closely related to the Atiyah-Floer approach to counting Kapustin-Witten solutions \cite{gaiotto2012knot}.

Assume on the one hand that the bundle $P$ on $S^1 \times \Sigma \times \mathbb R^+$ is pulled back from $\Sigma \times \mathbb R^+$. Changing notation slightly, given a solution $(\widehat{A},\widehat{\Phi})$ of the KW equations on $S^1 \times \Sigma \times \mathbb R^+$, choose a gauge for which the $S^1$ component of $\widehat{A}$ vanishes and $\widehat{A}_y \equiv 0$ as well. By virtue of the Nahm pole boundary conditions at $y=0$ and the asymptotic condition as $y \to \infty$, Proposition \ref{vanphiy} gives that $\phi_y = 0$, but we cannot gauge away the $S^1$ component $\phi_1$. Thus the remaining fields are $(\widehat{A}_\Sigma, \widehat{\Phi}_1, \widehat{\Phi}_\Sigma)$. We regard $\widehat{A}_\Sigma$ as a connection $A$ on $\Sigma$, and write $\widehat{\Phi}_1 = \phi_1$, $\widehat{\Phi}_\Sigma = \phi$. These remaining fields satisfy the {\bf extended Bogomolny equations}
\begin{equation} \label{Eq_EBE}
\begin{split}
&F_A-\phi\wedge\phi-\star d_A\phi_1=0,\\
&d_A\phi +\star [\phi,\phi_1]= 0,\\
&d_A^{\star}\phi= 0.
\end{split}
\end{equation}
On the other hand, given a solution $(A, \phi, \phi_1)$ of the extended Bogomolny equations on $\Sigma \times \mathbb R^+$, denoting by $\pi:S^1\times \Sigma\times \mathbb R^+\to \Sigma\times \mathbb R^+$ the natural projection, we define the connection $\widehat{A} = \pi^* A$ and the Higgs field $\widehat{\Phi} = \pi^{\star}\phi+\pi^{\star}\phi_1dx_1$. It is straightforward to check that $(\widehat{A},\widehat{\Phi})$ satisfies the KW equations.
Let $D=\{(p_i,\mathbf{k}_i=(k_1^i,\cdots,k_{n-1}^i))\}$, where for each $i$ the $k_j^i$ are non-negative integers, at least one of which is nonzero. \begin{definition} Let $(A,\phi,\phi_1)$ be a solution to the extended Bogomolny equations on $\Sigma \times \mathbb R^+$. \begin{itemize} \item [i)] The fields $(A,\phi,\phi_1)$ satisfy the \textbf{Nahm pole boundary condition} if the corresponding fields $(\widehat{A},\widehat{\Phi})$ satisfy the Nahm pole boundary condition on $S^1 \times \Sigma \times \mathbb R^+$. \item [ii)] Similarly, $(A,\phi,\phi_1)$ satisfies the \textbf{Nahm pole boundary condition with knot data $D$} if the corresponding pulled-back fields $(\widehat{A},\widehat{\Phi})$ satisfy the Nahm pole boundary condition with knots at $K_i:=S^1\times \{p_i\}$ with weight $\mathbf{k}_i$. \end{itemize} \end{definition} The moduli spaces we shall consider are \begin{equation} \begin{split} \mathcal{M}_{\mathrm{NP}}^{\mathrm{EBE}}:=\{(A,\phi,\phi_1): \ \mathrm{EBE}(A,\phi,\phi_1)=0, \ (A,\phi,\phi_1) \mbox{ converges to a flat } \mathrm{SL}(n,\mathbb{C}) \\ \mbox{connection as} \ y\to\infty\ \mbox{and satisfies the Nahm pole boundary condition at} \ y=0\}/\mathcal{G}_0, \end{split} \end{equation} and \begin{equation} \begin{split} \mathcal{M}^{\mathrm{EBE}}_{\mathrm{NPK}} & :=\left\{ (A,\phi,\phi_1): \mathrm{EBE}(A,\phi,\phi_1)=0,\ (A,\phi,\phi_1)\ \textrm{ satisfies the Nahm pole } \right. \\ & \textrm{boundary condition with knots and converges to a flat} \\ & \left. \mathrm{SL}(n,\mathbb{C}) \ \textrm{connection as} \ y\to\infty \right\}/\mathcal{G}_0, \label{complexgeometrymodulispace} \end{split} \end{equation} where $\mathcal{G}_0$ is the group of gauge transformations preserving the boundary conditions. \subsection{Hermitian-Yang-Mills Structure} In \cite{gaiotto2012knot,witten2011fivebranes}, it is observed that the extended Bogomolny equations have a Hermitian-Yang-Mills structure. By this we mean the following. Let $E$ be a complex vector bundle of rank $n$ over $\Sigma\times\mathbb R^+$ with $\det E\cong\mathcal{O}$. A choice of Hermitian metric $H$ on $E$ induces an $\mathrm{SU}(n)$ structure on this bundle, and we denote by $\mathfrak{g}_E$ the associated adjoint bundle. Writing \[ d_A=\nabla_2dx_2+\nabla_3dx_3+\nabla_ydy,\ \mbox{and}\ \ \phi=\phi_2dx_2+\phi_3dx_3 =\frac{1}{2}(\varphi_z dz+\varphi_{\bar{z}} d\bar{z}), \] we define the operators \begin{equation} \begin{split} &\mathcal{D}_1=(\nabla_2+i\nabla_3)d\bar{z}=(2\partial_{\bar{z}}+A_1+iA_2)d\bar{z},\\ &\mathcal{D}_2=\operatorname{ad}\varphi=[\varphi,\cdot ]=[(\phi_2 - i \phi_3) \, dz ,\cdot ], \\ &\mathcal{D}_3=\nabla_y-i\phi_1=\partial_y+A_y-i\phi_1. \end{split} \end{equation} Their adjoints with respect to $H$ are denoted $\mathcal{D}_i^{\dagger_H}$. The extended Bogomolny equations can then be written in the elegant form \begin{equation} \begin{split} &[\mathcal{D}_i,\mathcal{D}_j]=0, \ \ i,j=1,2,3,\\ & \frac{i}{2}\Lambda \left([\mathcal{D}_1, \mathcal{D}_1^{\dagger_H}]+[\mathcal{D}_2,\mathcal{D}_2^{\dagger_H}]\right)+ [\mathcal{D}_3,\mathcal{D}_3^{\dagger_H}]=0, \label{Eq_EBE_Hermitian} \end{split} \end{equation} where $\Lambda:\Omega^{1,1}\to\Omega^0$ is the inner product with the K\"ahler form (normalized as $(i/2)dz\wedge d\bar{z}$ when the metric on $\Sigma$ is flat). The action $\mathcal{D}_i\to g^{-1}\mathcal{D}_ig$ of the gauge group $\mathcal{G}$ preserves the Hermitian metric; the complex gauge group is denoted $\mathcal{G}_{\mathbb{C}}$.
The smaller system $[\mathcal{D}_i,\mathcal{D}_j]=0$ is invariant under $\mathcal{G}_{\mathbb{C}}$, while the full set of equations \eqref{Eq_EBE_Hermitian} is invariant only under $\mathcal{G}$. The final equation is a real moment map condition. Following Donaldson \cite{donaldson1985anti} and Uhlenbeck-Yau \cite{uhlenbeck1986existence}, geometric data from the $\mathcal{G}_{\mathbb{C}}$-invariant equations play an important role in understanding the moment map equation. \subsection{Higgs Bundles and Flat Connections} The appearance of Higgs bundles over $\Sigma$ in this story is motivated by the fact that the $y$-independent versions of the equations of \eqref{Eq_EBE}, when in addition $\phi_1 = 0$, are simply the Hitchin equations. Recall that a Higgs bundle over $\Sigma$ is a pair $(\mathcal{E},\varphi)$ where $\mathcal{E}$ is a holomorphic bundle of rank $n$ with $\det \mathcal{E}\cong\mathcal{O}$ and $\varphi\in H^0(\mathrm{End}(\mathcal{E})\otimes K)$. A Higgs pair (an alternative term for a Higgs bundle) $(\mathcal{E},\varphi)$ is called stable if for any proper holomorphic subbundle $V$ with $\varphi(V)\subset V\otimes K$ we have $\deg (V)<0$, and polystable if it is a direct sum of stable Higgs pairs. Setting $\mathcal{D}_3 = 0$ in the extended Bogomolny equations (or, alternatively, considering only the equations for $\mathcal{D}_1$ and $\mathcal{D}_2$ on each slice $\Sigma_y := \Sigma \times \{y\}$), we obtain the Hitchin equations: \begin{equation} F_H+[\varphi,\varphi^{\star_H}]=0,\;\bar{\partial}\varphi=0. \label{Hitchinequation} \end{equation} Here $F_H$ is the curvature of the Chern connection $\nabla_H$ associated to $H$ and the holomorphic structure, and $\varphi^{\star_H}$ is the adjoint with respect to $H$. Irreducibility of the fields $(A,\varphi+\varphi^{\star_H})$ is defined in the obvious way. One may regard \eqref{Hitchinequation} as an equation for the fields $(A, \varphi)$ or else for the Hermitian metric $H$; we consider $H$ as the variable here. \begin{theorem}{\cite{Hitchin1987Selfdual}} \label{nonabelianhodge} For any Higgs pair $(\mathcal{E},\varphi)$ on $\Sigma$, there exists an irreducible solution $H$ to the Hitchin equations if and only if this pair is stable, and a reducible solution if and only if it is polystable. \end{theorem} To any solution $H$ of \eqref{Hitchinequation} we associate the flat $\mathrm{SL}(n,\mathbb C)$ connection $D = \nabla_H + \varphi + \varphi^{\star_H}$. This determines, in turn, a representation $\rho: \pi_1(\Sigma) \to \mathrm{SL}(n,\mathbb C)$ which is well-defined up to conjugation. Irreducibility of the solution is the same as irreducibility of the representation, while complete reducibility corresponds to the fact that $\rho$ is reductive. The map from flat connections back to solutions of the Hitchin system is defined as follows: first find a harmonic metric, cf.\ \cite{corlette1988}, which determines a decomposition $D = D^{\mathrm{skew}} + D^{\mathrm{Herm}}$ into skew-Hermitian and Hermitian parts. After that, the further decomposition $D^{\mathrm{Herm}} = \varphi + \varphi^{\star_H}$ determines $\varphi$, and hence the Higgs bundle $( (D^{\mathrm{skew}})^{0,1}, \varphi)$.
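As a concrete rank-two illustration of the stability condition (a standard example, which reappears at the end of the paper in the discussion of the non-Hitchin components), take
\[
\mathcal{E}=\ell^{-1}\oplus \ell,\qquad \varphi=\begin{pmatrix} 0& \alpha\\ \beta& 0 \end{pmatrix},\qquad \deg\ell>0,\quad \alpha\in H^{0}(\ell^{-2}\otimes K),\ \ \beta\in H^{0}(\ell^{2}\otimes K).
\]
The only line subbundle of $\mathcal{E}$ of non-negative degree is $\ell$ itself, and $\varphi(\ell)\subset \ell\otimes K$ holds precisely when $\alpha=0$; hence $(\mathcal{E},\varphi)$ is stable exactly when $\alpha\neq 0$.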
Denoting by $\mathcal{M}_{\mathrm{Higgs}}:=\{(\mathcal{E},\varphi)\}^{\mathrm{stable}}/\mathcal{G}_{\mathbb{C}}$ the moduli space of stable $\mathrm{SL}(n,\mathbb{C})$ Higgs bundles, we are then led to define \begin{equation} \label{Pinf} P_{\infty}^{\mathrm{NP}}:\mathcal{M}^{\mathrm{EBE}}_{\mathrm{NP}}\to\mathcal{M}_{\mathrm{Higgs}},\qquad P_{\infty}^{\mathrm{NPK}}:\mathcal{M}^{\mathrm{EBE}}_{\mathrm{NPK}}\to\mathcal{M}_{\mathrm{Higgs}}; \end{equation} this is the map which assigns to a solution $(A,\phi,\phi_1)$ of the extended Bogomolny equations its limiting flat connection, and then, via Theorem \ref{nonabelianhodge}, the corresponding Higgs bundle. The Hitchin fibration is the map \begin{equation} \begin{split} &\pi:\mathcal{M}_{\mathrm{Higgs}}\to \oplus_{i=2}^n H^0(\Sigma,K^{i}),\\ &\pi(\varphi)=(p_2(\varphi),\cdots,p_{n}(\varphi)), \label{HitchinFiberationMap} \end{split} \end{equation} where $\det (\lambda -\varphi) = \sum_j \lambda^{n-j} (-1)^j p_{j}(\varphi)$. By \cite{hitchin1987stable}, this is a proper map. We next introduce the Hitchin component (also called the Hitchin section). Choose a spin structure $K^{\frac{1}{2}}$ and set $B_{i}=i(n-i)$. Now define the Higgs bundle $(\mathcal{E}, \varphi)$, where \begin{equation} \begin{aligned} \mathcal{E} & :=S^{n-1}(K^{-\frac{1}{2}}\oplus K^{\frac{1}{2}})=K^{-\frac{n-1}{2}}\oplus K^{-\frac{n-1}{2}+1}\oplus\cdots\oplus K^{\frac{n-1}{2}}, \\[5mm] \varphi & =\begin{pmatrix} 0 & \sqrt{B_1} & 0 &\cdots& 0\\ 0 & 0 & \sqrt{B_2} & \cdots& 0\\ \vdots & \vdots &\ddots &\ddots &\vdots\\ 0 & 0 & \cdots &0 &\sqrt{B_{n-1}}\\ q_{n}& q_{n-1} & \cdots & q_2 &0 \end{pmatrix}. \end{aligned} \label{HitchincomponentHiggsfield} \end{equation} The constant $\sqrt{B_i}$ in the $(i,i+1)$ entry represents this multiple of the natural isomorphism $K^{-\frac{n-1}{2} + i } \to K^{-\frac{n-1}{2} + i -1 }\otimes K$, and similarly, $H^0(\Sigma,K^{n-i}) \ni q_{n-i} :K^{- \frac{n-1}{2} + i }\to K^{ \frac{n-1}{2}} \otimes K$. The Hitchin component $\mathcal{M}_{\mathrm{Hit}}$ is the complex gauge orbit of this family of Higgs bundles, \begin{equation} \mathcal{M}_{\mathrm{Hit}}:=\{(\mathcal{E}:=S^{n-1}(K^{-\frac{1}{2}}\oplus K^{\frac{1}{2}}),\ \varphi\ \mbox{as in}\ \eqref{HitchincomponentHiggsfield})\}/\mathcal{G}_{\mathbb{C}}. \end{equation} The following theorem explains its importance. \begin{theorem}{\cite{hitchin1992lie}} Every element in $\mathcal{M}_{\mathrm{Hit}}$ is a stable Higgs pair. Furthermore, the map assigning to each element of $\oplus_{i=2}^{n}H^0(\Sigma,K^{i})$ the unique solution of the Hitchin equations corresponding to the associated Higgs pair is a diffeomorphism onto one of the $n^{2g}$ choices for the Hitchin component; thus its inverse, the restriction of the Hitchin fibration $\pi|_{\mathcal{M}_{\mathrm{Hit}}}$, is also a diffeomorphism. \end{theorem} Note that the image of this map is only one component of the space of all irreducible flat $\mathrm{SL}(n,\mathbb R)$ connections, which explains the name `Hitchin component.' \subsection{The Kobayashi-Hitchin Correspondence} \label{datasetdefinition} We now recall the Kobayashi-Hitchin correspondence for the moduli spaces of the extended Bogomolny equations \cite{gaiotto2012knot,HeMazzeo2017,MazzeoHe18}. As noted earlier, from the Hermitian structure in \eqref{Eq_EBE_Hermitian} and the commutation relation $[\mathcal{D}_1,\mathcal{D}_2]=0$, we obtain a Higgs bundle $(\mathcal{E}_y,\varphi_y)$ on each slice $\Sigma\times\{y\}$: the operator $\mathcal{D}_1$ defines a holomorphic structure $\mathcal{E}_y$ on $E|_{\Sigma\times\{y\}}$, and $[\mathcal{D}_1,\mathcal{D}_2]=0$ says precisely that the Higgs field $\varphi_y$ is holomorphic with respect to it.
The commutation relations $[\mathcal{D}_3,\mathcal{D}_1]=[\mathcal{D}_3,\mathcal{D}_2]=0$ mean that parallel transport by $\mathcal{D}_3$ identifies these Higgs bundles for different values of $y$. Suppose first that the solution of the extended Bogomolny equations satisfies the Nahm pole boundary condition without knots. As explained in more detail in \cite[Section 4]{MazzeoHe18}, there is a holomorphic line subbundle $L \subset E$ determined by the property that its sections, extended by $\mathcal{D}_3$-parallel transport, vanish at the fastest possible rate as $y \to 0$, measured with respect to the Hermitian metric $H$. In other words, a solution of the extended Bogomolny equations satisfying these boundary conditions determines a triple $(\mathcal{E},\varphi,L)$, consisting of a Higgs bundle and a line subbundle. More generally, consider any triple $(\mathcal{E}, \varphi, L)$ where $L$ is any holomorphic line subbundle of $\mathcal{E}$. Define the holomorphic maps $$ f_i:=1\wedge\varphi\wedge\cdots \wedge\varphi^{i-1} \in H^0(\Sigma; L^{- i}\otimes \wedge^{i}E\otimes K^{\frac{i(i-1)}{2}}),\ \ 1 \leq i \leq n. $$ Note that $Z(f_j)-Z(f_{j-1})=\sum_i k^{i}_j p_i$ for some $k^{i}_j \in \mathbb N$. Setting $\mathbf{k}_i:=(k^i_1,\cdots, k^i_{n-1}) \in \mathbb N^{n-1}$, we define the knot data set to be $\mathfrak d(\mathcal{E},\varphi,L):=\{(p_i,\mathbf{k}_i)\}$. Note the important special case (which holds by noting that $f_n\neq 0$ everywhere): \begin{proposition}{\cite[Section 4]{MazzeoHe18}} If $\mathfrak d(\mathcal{E},\varphi,L)=\emptyset$, then $(\mathcal{E},\varphi)\in\mathcal{M}_{\mathrm{Hit}}$ and $L\cong K^{\frac{n-1}{2}}$. \end{proposition} We then state the main equivalences between the moduli spaces of the extended Bogomolny equations and the spaces of triples $(\mathcal{E}, \varphi, L)$, first for data in the Hitchin component and then for general data. \begin{theorem}{\cite{HeMazzeo2017,MazzeoHe18}} \label{Thm_KobayashiHitchinNP} There is a diffeomorphism of moduli spaces \[ \mathcal{M}^{\mathrm{EBE}}_{\mathrm{NP}}\cong \mathcal{M}_{\mathrm{Hit}}. \] More specifically, recall the map $P_{\infty}^{\mathrm{NP}}$ from \eqref{Pinf}. \begin{itemize} \item [i)] For any $(\mathcal{E},\varphi)\in\mathcal{M}_{\mathrm{Hit}}$, there exists a unique Nahm pole solution $(A,\phi,\phi_1)\in \mathcal{M}^{\mathrm{EBE}}_{\mathrm{NP}}$ such that $P_{\infty}^{\mathrm{NP}}(A,\phi,\phi_1)=(\mathcal{E},\varphi)$; \item [ii)] Given any Higgs bundle $(\mathcal{E},\varphi)\notin\mathcal{M}_{\mathrm{Hit}}$, there is no solution to the extended Bogomolny equations which converges to the flat connection determined by $(\mathcal{E},\varphi)$. In other words, $(P_{\infty}^{\mathrm{NP}})^{-1}(\mathcal{E},\varphi)=\emptyset$. \end{itemize} \end{theorem} \begin{theorem}{\cite{HeMazzeo2017,MazzeoHe18}} \label{Thm_KobayashiHitchinKnot} Fix a data set $D=\{(p_i,\mathbf{k}_i=(k_1^i,\cdots,k_{n-1}^i))\}$. If $(\mathcal{E}, \varphi)$ is any stable Higgs bundle over $\Sigma$ with genus $g(\Sigma)>1$, there exists a solution to the extended Bogomolny equations satisfying the general Nahm pole boundary condition with knot singularities at $p_i$ with weights $\mathbf{k}_i$ if and only if there exists a line bundle $L\subset \mathcal{E}$ such that $\mathfrak d(\mathcal{E},\varphi,L)=D$.
In other words, there is a bijection \[ \mathcal{M}^{\mathrm{EBE}}_{\mathrm{NPK}}\cong \{(\mathcal{E},\varphi,L)\}/\mathcal{G}_{\mathbb{C}}, \] where the pairs $(\mathcal{E},\varphi)$ on the right are stable Higgs bundles and $L \subset \mathcal{E}$ is a line subbundle. \end{theorem} \begin{remark} Notice that in the second result, when knot singularities are allowed, we do not claim that this bijection of moduli spaces is a diffeomorphism. Indeed, while the space of triples $(\mathcal{E}, \varphi, L)$ maps onto the space of all stable Higgs pairs, i.e., onto the entire Hitchin moduli space, it is not clear that this space of triples is even a manifold. As a second remark, if $(\mathcal{E},\varphi)$ is polystable, it seems likely that there are no solutions to the extended Bogomolny equations which satisfy Nahm pole boundary conditions with knot singularities and converge to $(\mathcal{E},\varphi)$. However, we do not prove this. \end{remark} \section{A Weitzenb\"ock Identity for the Kapustin-Witten Equations} In this section, we establish a Weitzenb\"ock identity analogous to the one in \cite{MazzeoWitten2017}, and use this to show that all solutions to the KW equations over $M:=S^1\times \Sigma \times \mathbb R^+$ are invariant in the $S^1$ direction, hence determine solutions to the extended Bogomolny equations. In all the following, we use coordinates $x_1 \in S^1$, $z \in \Sigma$ and $y \in \mathbb R^+$. \subsection{Weitzenb\"ock Identity} As before, let $P$ be an $\mathrm{SU}(n)$ bundle over $M:=S^1\times \Sigma\times \mathbb R^+$, and fix a connection $\widehat{A}$ and a $\mathfrak{g}_P$-valued $1$-form $\widehat{\Phi}$ on $M$; assume that $\widehat{A}_1 = \widehat{A}_y = \widehat{\Phi}_y \equiv 0$. Write $d_{\widehat{A}}=d_A+dx_1\wedge \nabla_1$ and $\widehat{\Phi}=\phi+\phi_1 dx_1$. We also fix a product metric on $S^1\times\Sigma\times\mathbb R^+$ with orientation $dx_1\wedge dA_\Sigma\wedge dy$. Now write $F_{\widehat{A}}=F_A+B_A\wedge dx_1$; the Bianchi identity $d_{\widehat{A}}F_{\widehat{A}}=0$ is equivalent to \begin{equation} \label{Bianchi} \nabla_1 F_A+d_AB_A=0. \end{equation} In the following, we write $\star_4$ and $\star$ for the Hodge star operators on $M$ and $S^1\times\Sigma$, respectively. We first compute \begin{equation} \begin{split} &F_{\widehat{A}}- \widehat{\Phi}\wedge \widehat{\Phi}+\star_4 d_{\widehat{A}}\widehat{\Phi} \\ & \qquad \quad = (F_A-\phi\wedge\phi+\star(\nabla_1\phi-d_A\phi_1))+(B_A-[\phi,\phi_1]-\star d_A\phi)\wedge dx_1,\\ &d_{\widehat{A}}^{\star_4}\widehat{\Phi}=d_A^{\star}\phi-\nabla_1\phi_1. \end{split} \end{equation} Next, for any $\epsilon\in(0,1)$, write $M_{\epsilon}:=S^1\times\Sigma\times [\epsilon, \epsilon^{-1}]$. Then \begin{equation} \begin{split} \int_{M_{\epsilon}}|KW|^2 &= \int_{M_{\epsilon}}|F_A-\phi\wedge\phi+\star(\nabla_1\phi-d_A\phi_1)|^2 \\ & \qquad \ \ + |B_A-[\phi,\phi_1]-\star d_A\phi|^2+|d_A^{\star}\phi-\nabla_1\phi_1|^2\\ & =\int_{M_{\epsilon}}|F_A-\phi\wedge\phi-\star d_A\phi_1|^2+|\nabla_1\phi|^2+ \\ & \qquad \ \ |B_A|^2+|[\phi,\phi_1]+\star d_A\phi|^2+|d_A^{\star}\phi|^2+|\nabla_1\phi_1|^2+\int_{ M_\epsilon} \chi, \end{split} \end{equation} where \begin{equation} \chi:=2\langle F_A-\phi\wedge\phi-\star d_A\phi_1,\star \nabla_1\phi\rangle-2\langle B_A,\star d_A\phi+[\phi,\phi_1]\rangle-2\langle d_A^{\star}\phi,\nabla_1\phi_1 \rangle. \end{equation} The inner product here is $\langle A,B\rangle:=- \mathrm{Tr}(A\wedge \star_4 B)$.
\begin{lemma} We have the following identities: \begin{itemize} \item [i)] $\quad \langle F_A,\star \nabla_1\phi \rangle-\langle B_A,\star d_A\phi\rangle=\nabla_1\mathrm{Tr}(F_A\wedge \phi)\wedge dx_1+d\,\mathrm{Tr}(B_A\wedge\phi)\wedge dx_1$, \item [ii)] \begin{equation*} \begin{split} &\langle \star d_A\phi_1,\star \nabla_1\phi\rangle+\langle B_A,[\phi,\phi_1]\rangle+\langle d_A^{\star}\phi,\nabla_1\phi_1\rangle\\ =&\nabla_1\mathrm{Tr}(\phi\wedge \star d_A\phi_1)\wedge dx_1-\nabla_1\mathrm{Tr}(\phi_1\wedge d_A\star \phi\wedge dx_1), \end{split} \end{equation*} \item [iii)] $\quad \langle \phi\wedge\phi, \star \nabla_1\phi\rangle=\frac{1}{3}\nabla_1\mathrm{Tr}(\phi\wedge\phi\wedge\phi\wedge dx_1)$. \end{itemize} \end{lemma} \begin{proof} For (i), we compute \begin{equation*} \begin{split} \langle F_A, & \star \nabla_1\phi \rangle -\langle B_A,\star d_A\phi\rangle\\ &= \mathrm{Tr}(F_A\wedge \nabla_1\phi)\wedge dx_1-\mathrm{Tr}(B_A\wedge d_A\phi)\wedge dx_1\\ &= \nabla_1\mathrm{Tr}(F_A\wedge \phi)\wedge dx_1-\mathrm{Tr}(\nabla_1F_A\wedge \phi)\wedge dx_1 \\ & \qquad \qquad +d\,\mathrm{Tr}(B_A\wedge\phi)\wedge dx_1-\mathrm{Tr}(d_AB_A\wedge\phi)\wedge dx_1\\ &= \nabla_1\mathrm{Tr}(F_A\wedge \phi)\wedge dx_1+d\,\mathrm{Tr}(B_A\wedge\phi)\wedge dx_1, \end{split} \end{equation*} where the last step uses \eqref{Bianchi}. Next, for (ii), \begin{equation*} \begin{split} \ \ \langle \star d_A\phi_1, & \star \nabla_1\phi \rangle \\ & = \nabla_1\mathrm{Tr}(\phi\wedge \star d_A\phi_1)\wedge dx_1-\mathrm{Tr}(\phi\wedge \nabla_1(\star d_A\phi_1))\wedge dx_1\\ & = \nabla_1\mathrm{Tr}(\phi\wedge \star d_A\phi_1)\wedge dx_1-\mathrm{Tr}(\phi\wedge \star B_A\wedge \phi_1)\wedge dx_1, \end{split} \end{equation*} \begin{equation*} \begin{split} \ \langle d_A^{\star}\phi, & \nabla_1\phi_1\rangle \\ & = -\mathrm{Tr}(\nabla_1\phi_1\wedge d_A\star \phi)\wedge dx_1\\ & = -\nabla_1\mathrm{Tr}(\phi_1\wedge d_A\star \phi\wedge dx_1)+\mathrm{Tr}(\phi_1\wedge B_A\wedge \star \phi)\wedge dx_1, \end{split} \end{equation*} and \begin{equation*} \langle B_A, [\phi,\phi_1]\rangle=-\mathrm{Tr}(B_A\wedge \star \phi\,\phi_1-B_A\wedge\phi_1\wedge \star \phi)\wedge dx_1. \end{equation*} Adding these three equalities yields (ii). The proof of (iii) is straightforward. \end{proof} \begin{corollary} \label{boundaryterms} We have \begin{equation*} \begin{split} \int_{M_{\epsilon}} \chi=&\int_{M_{\epsilon}} \nabla_1\mathrm{Tr}\Big(2F_A\wedge\phi-\frac{2}{3}\phi^3-2\phi\wedge \star d_A\phi_1+\phi_1\wedge d_A\star \phi\Big)\wedge dx_1\\ &+2\int_{M_{\epsilon}} d \, \mathrm{Tr}(B_A\wedge \phi)\wedge dx_1. \end{split} \end{equation*} \end{corollary} \begin{lemma} \label{limitflatSLCconnection} Let $A^\rho+\phi^\rho$ be a flat $\mathrm{SL}(n,\mathbb{C})$ connection over $S^1\times\Sigma$, and write $A^\rho=A^\rho_1+A^\rho_\Sigma$, $\phi^{\rho}=\phi^{\rho}_1+\phi^{\rho}_\Sigma$. \begin{itemize} \item [i)] If we write $F_{A^{\rho}}=B_{A^{\rho}}\wedge dx_1+E_{A^{\rho}}$, then $B_{A^{\rho}}=0$; \item [ii)] Up to a unitary gauge transformation, we can assume that $A^{\rho}_1$ and $\phi^\rho_1$ are invariant in the $\Sigma$ directions and that $A^{\rho}_\Sigma$, $\phi^{\rho}_\Sigma$ are invariant in the $S^1$ direction. \item [iii)] Up to a unitary gauge transformation, $\phi_1^{\rho}=0$. \end{itemize} \end{lemma} \begin{proof} Items i) and ii) follow from the fact that $\pi_1(\Sigma\times S^1)=\pi_1(\Sigma)\times\pi_1(S^1)$.
For iii), observe that $A^{\rho}_1$ and $\phi^{\rho}_1$ come from the contribution of $\pi_1(S^1)\to \mathrm{SL}(n,\mathbb{C})$. Since $\pi_1(S^1)$ is abelian and $A^{\rho}_1+i\phi^{\rho}_1$ is a unitary connection, we obtain that $\phi^{\rho}_1=0$. \end{proof} We now prove vanishing of the second part of the boundary contribution: \begin{lemma} Suppose that $(A, \phi, \phi_1)$ is a solution to the extended Bogomolny equations. \begin{itemize} \item [i)] If $(A,\phi)$ satisfies the Nahm pole boundary conditions at $y=0$, with or without knot singularities, then $$ \lim_{\epsilon\to0}\int_{S^1\times\Sigma\times\{\epsilon\}}\mathrm{Tr}(B_A\wedge\phi)=0; $$ \item [ii)] If $(A,\phi)$ converges to a flat $\mathrm{SL}(n,\mathbb C)$ connection as $y \to \infty$, then $$ \lim_{\epsilon\to0}\int_{S^1\times\Sigma\times \{1/\epsilon\}} \mathrm{Tr}(B_A\wedge\phi)=0. $$ \end{itemize} \end{lemma} \begin{proof} First consider i). Away from knots, Theorem \ref{expansions} gives that $A\sim A_{LC}+\mathcal{O}(y^{2-\epsilon})$ for any $\epsilon>0$, which implies that $B_A=B_{A_{LC}}+ \mathcal{O}(y^{2-\epsilon})+dy \wedge (B_{A})_y$. (The `LC' subscript denotes Levi-Civita.) The $dy$ component vanishes in the integration, so we may disregard it. In addition, since we are using the product metric, $B_{A_{LC}}=0$. Finally, since $\phi\sim\frac{e}{y}$, we conclude that $B_A\wedge\phi\sim \mathcal{O}(y^{1-\epsilon})$, so there are no boundary contributions in this region. Near a knot $K$, we use spherical coordinates $(R,s,x_1)$ as before, and consider the boundary term as $R\to 0$. By Theorem \ref{expansions}, $B_A\sim B_{A^{{\mathrm{mod}}}}+\mathcal{O}(1)\sim \mathcal{O}(1)$, because $B_{A^{{\mathrm{mod}}}}=0$. In addition, $\phi\sim R^{-1}$, so $B_A\wedge\phi\sim R^{-1}$. Since the volume form is $R^2dR\, ds\, dx_1$, this boundary contribution vanishes too. Part ii) follows directly from the previous lemma. \end{proof} The other terms in $\chi$ are derivatives with respect to $x_1$, and hence vanish once we integrate over $S^1$. \begin{corollary} \label{boundarytermvanishes} Under the previous assumptions, $\int_M\chi=0$. \end{corollary} \subsection{$S^1$-invariance} In summary, we may now conclude the following. \begin{theorem} \label{S1invariant} Any solution to the KW equations over $S^1\times \Sigma\times \mathbb R^+$ satisfying the Nahm pole boundary condition at $y=0$ (possibly with knot singularities at $K=S^1\times D\times \{0\}$), and which converges to a flat $\mathrm{SL}(n,\mathbb C)$ connection as $y \to \infty$, is $S^1$-invariant and reduces to a solution of the extended Bogomolny equations. In addition, $A_1\equiv 0$. \end{theorem} \begin{proof} By Corollary \ref{boundarytermvanishes}, any solution to the KW equations with these boundary and asymptotic conditions must satisfy $$ \mathrm{EBE}(A,\phi,\phi_1)=0,\ \nabla_1\phi=0,\ B_A=0,\ \nabla_1\phi_1=0, $$ where $\mathrm{EBE}$ is the extended Bogomolny equation operator. By Lemma \ref{limitflatSLCconnection}, up to gauge we can assume that $(A,\phi,\phi_1)$ converges to $(A^{\rho},\phi^{\rho},0)$ as $y\to\infty$, where $(A^{\rho},\phi^{\rho})$ is $S^1$-invariant. Since $(A,\phi,\phi_1)$ is a solution to the extended Bogomolny equations, Theorem \ref{Thm_KobayashiHitchinNP} and Theorem \ref{Thm_KobayashiHitchinKnot} imply that $(A,\phi,\phi_1)$ is $S^1$-invariant. From $B_A= \nabla_1\phi=0$, we obtain $d_AA_1=0$ and $[\phi,A_1]=0$.
Irreducibility of solutions to the extended Bogomolny equations with Nahm pole boundary conditions finally gives that $A_1=0$. \end{proof} The projection map $\pi:S^1\times\Sigma\times\mathbb R^+\to\Sigma\times\mathbb R^+$ naturally induces morphisms \begin{equation*} \begin{split} &\pi^{\star}:\mathcal{M}^{\mathrm{EBE}}_{\mathrm{NP}}\to\mathcal{M}^{\mathrm{KW}}_{\mathrm{NP}}, \\ & \pi^{\star}:\mathcal{M}^{\mathrm{EBE}}_{\mathrm{NPK}}\to\mathcal{M}^{\mathrm{KW}}_{\mathrm{NPK}}. \end{split} \end{equation*} We obtain from this the \begin{corollary} $\pi^{\star}:\mathcal{M}^{\mathrm{EBE}}_{\mathrm{NP}}\to\mathcal{M}^{\mathrm{KW}}_{\mathrm{NP}}$ and $\pi^{\star}:\mathcal{M}^{\mathrm{EBE}}_{\mathrm{NPK}}\to\mathcal{M}^{\mathrm{KW}}_{\mathrm{NPK}}$ are bijections. \end{corollary} \begin{comment} \subsection{Conclusion} Let $(\widehat{A},\widehat{\Phi})$ be a solution to the Kapustin-Witten equations over $S^1\times\Sigma\times\mathbb R^+_y$ with product metric, and let $y$ be the coordinate on $\mathbb R^+$. We work with the following boundary conditions: (A) at $y=0$, $(\widehat{A},\widehat{\Phi})$ satisfies the Nahm pole boundary condition; (B) at $y=\infty$, it converges to a flat $\mathrm{SL}(n,\mathbb{C})$ connection $\rho:\pi_1(S^1\times \Sigma)\to \mathrm{SL}(n,\mathbb{C})$. \begin{proposition} For any solution to the Kapustin-Witten equations over $S^1\times\Sigma\times\mathbb R^+$, \begin{itemize} \item [i)] if it satisfies $(A)(B2)$ then it is $S^1$-invariant, \item [ii)] if it satisfies $(A)(B1)$ then there is no Nahm pole solution. \end{itemize} \end{proposition} Based on this observation, we conclude the following classification of the solutions to the Kapustin-Witten equations with boundary conditions (A)(B): \begin{theorem} Let $(\widehat{A},\widehat{\Phi})$ be a solution to the Kapustin-Witten equations satisfying conditions (A) and (B) with limiting $\mathrm{SL}(n,\mathbb{C})$ flat connection $\rho$. Then \begin{itemize} \item [i)] if $g(\Sigma)=0$, then there is no Nahm pole solution \rm{(if $g(\Sigma)=0$, then condition $(B)$ is automatically satisfied)}; \item [ii)] if $g(\Sigma)=1$ and $\rho$ is the trivial flat connection, then we have a unique solution \rm{(this can also be obtained by the method used in \cite{MazzeoWitten2013})}; \item[iii)] if $g(\Sigma)>1$ and \begin{itemize} \item [a)] $\rho|_{\pi_1(\Sigma)}\in \mathrm{SU}(n)$, then there is no solution; \item [b)] $\rho|_{\pi_1(S^1)}=0$, then there exists a unique solution if and only if $\rho$ lies in the Hitchin component \rm{(when $n=2$ it is the Teichm\"uller component for $\mathrm{SL}(2,\mathbb{R})$)}. \end{itemize} \end{itemize} \end{theorem} There is also something to say if there exist knot singularities along the $S^1$-direction: there are only finitely many solutions if we fix the knot type and the asymptotic limit. \end{comment} \section{Classification} We are now able to complete the proof of our main theorem. \subsection{Case 1: $\Sigma=S^2$} \begin{proposition} There is no Nahm pole solution to the KW equations on $S^1\times S^2\times\mathbb R^+$. \end{proposition} \begin{proof} By Theorem \ref{S1invariant}, all such solutions must be $S^1$-invariant and reduce to solutions of the extended Bogomolny equations. Hence any such solution would lead to a stable Higgs bundle over $S^2$ with nonvanishing Higgs field. However, these do not exist \cite{Hitchin1987Selfdual}. \end{proof} \subsection{Case 2: $\Sigma=T^2$} We next classify Nahm pole solutions over $T^3\times\mathbb R^+$.
Let $M=T^3\times\mathbb R^+$ with flat metric $g$. If $A$ is a connection, we write $d_A=\nabla_A^{\perp}+\nabla_y$, where $\nabla^{\perp}_A$ is the covariant derivative on $T^3$. We quote the following identity for solutions of the KW equations from \cite{RyosukeEnergy,MazzeoWitten2013}: \begin{equation} \begin{split} \int_{M_{\epsilon}}& |KW(A,\Phi)|^2 \\ & =\int_{M_{\epsilon}}\left(|F_A|^2+|\nabla_A^{\perp}\Phi|^2+| \nabla_y\phi+\star \phi\wedge\phi|^2+ \langle \mathrm{Ric}(\phi), \phi \rangle \right) +2\int_{\partial M_{\epsilon}}\phi\wedge F_A, \label{KWindentities2} \end{split} \end{equation} where $M_\epsilon:=T^3\times (\epsilon,\frac{1}{\epsilon})$ and $\star$ is the Hodge star operator on $T^3$. \begin{proposition} If $(A,\Phi)$ is a solution to the KW equations over $T^3\times\mathbb R^+$ satisfying the Nahm pole boundary conditions, then \begin{equation} \begin{split} F_A=0,\ \nabla_A^{\perp}\Phi=0,\ \nabla_y\phi+\star \phi\wedge\phi=0. \label{reducedNahmpoletorus} \end{split} \end{equation} \end{proposition} \begin{proof} From \cite{Heexpansion18}, $F_A\sim F_{A_{LC}}+\mathcal{O}(y^2) = \mathcal{O}(y^2)$, where $A_{LC}$ is the Levi-Civita connection on $T^3$, but since the metric on $T^3$ is flat, $F_{A_{LC}}=0$. This shows that $\lim_{\epsilon\to0}\int_{T^3\times\{\epsilon\}}\phi\wedge F_A=0$. Furthermore, since $(A,\Phi)$ converges to a flat connection on $T^3$, Lemma \ref{limitflatSLCconnection} implies that $\lim_{\epsilon\to0}\int_{T^3\times\{\frac{1}{\epsilon}\}}\phi\wedge F_A=0$. Since the flat metric has $\mathrm{Ric}=0$, the identity \eqref{KWindentities2} then forces each of the remaining terms to vanish, which gives \eqref{reducedNahmpoletorus}. \end{proof} \begin{proposition} Let $e$ be a dreibein which is parallel along $T^3$. Then $(0,\frac{e}{y})$ is the only solution to \eqref{reducedNahmpoletorus}. \end{proposition} \begin{proof} Use the temporal gauge in the $y$-direction, so that $\nabla_y=\partial_y$. Then $\nabla_y\phi+\star \phi\wedge\phi=0$ reduces to the Nahm equations. Uniqueness of solutions to the Nahm equations with these boundary conditions implies that $\phi \equiv \frac{e}{y}$ for some dreibein $e$. Up to a unitary gauge transformation, we can write $e=\sum_{i=1}^3dx_i\, \mathfrak{t}_i$, where $de=0$, the $dx_i$ form an orthogonal basis of $T^{\star}T^3$ and the triplet $\mathfrak{t}_i\in\mathfrak{g}_P$ satisfies $[\mathfrak{t}_i,\mathfrak{t}_j]=\epsilon_{ijk}\mathfrak{t}_k$. Finally, $\nabla_A^{\perp}\Phi=0$ together with $de=0$ implies that $A^{\perp}=0$. \end{proof} \subsection{Case 3: $g(\Sigma)>1$} \begin{proposition} Suppose $g(\Sigma)>1$ and let $(A^\rho, \phi^\rho)$ be a flat $\mathrm{SL}(n,\mathbb C)$ connection on $S^1\times\Sigma$ corresponding to a representation $\rho$. Then there exists a solution $(A,\Phi)$ to the KW equations on $S^1\times\Sigma\times\mathbb R^+_y$ satisfying the Nahm pole boundary conditions and converging to $(A^\rho, \phi^\rho)$ as $y\to\infty$ if and only if $\rho$ is $S^1$-independent and lies in the Hitchin component; in this case the solution is unique. \end{proposition} \begin{proof} By Theorem \ref{S1invariant}, all Nahm pole solutions are $S^1$-invariant and thus satisfy the extended Bogomolny equations, and the statement then follows from Theorem \ref{Thm_KobayashiHitchinNP}. \end{proof} \subsection{Case 4: Knots} Suppose now that the Nahm pole boundary condition has an additional singularity along the knot $K=\cup_iK_i$, where $K_i=S^1\times \{p_i\}\subset S^1\times\Sigma$ with weight $\mathbf{k}_i=(k_1^i,\cdots, k_{n-1}^i)$.
\begin{theorem} \label{InvariantKnotstatement} There is no solution $(A,\Phi)$ to the KW equations over $S^1\times S^2 \times\mathbb R^+_y$ satisfying the Nahm pole boundary conditions with knots $K_i$ of weight $\mathbf{k}_i$ and converging to a flat $\mathrm{SL}(n,\mathbb{C})$ connection as $y \to \infty$. On the other hand, solutions to these equations with these boundary and asymptotic conditions on $S^1 \times \Sigma \times \mathbb R^+$ exist when $g(\Sigma) > 1$ if and only if there exists a line subbundle $L \subset \mathcal{E}$, where $(\mathcal{E}, \varphi)$ is the Higgs data corresponding to the flat bundle at infinity, such that $\mathfrak{d} (\mathcal{E},\varphi,L)=\{(p_i,\mathbf{k}_i)\}$. \end{theorem} \begin{proof} By Theorem \ref{S1invariant}, solutions in either case are necessarily $S^1$-invariant. As there are no stable Higgs bundles with non-vanishing Higgs field over $S^2$, there is no solution over $S^1\times S^2\times \mathbb R^+$. The rest of the statement is just Theorem \ref{Thm_KobayashiHitchinKnot}. \end{proof} \begin{corollary} Let $\rho$ be an irreducible flat $\mathrm{SL}(n,\mathbb C)$ connection. Then there exist at most $n^{2g}$ solutions to the KW equations satisfying the Nahm pole boundary condition with a knot singularity along $K$ at $y=0$ and converging to $\rho$ on the cylindrical end. \end{corollary} \begin{proof} Denote by $(\mathcal{E},\varphi)$ the Higgs bundle corresponding to $\rho$. By Theorem \ref{InvariantKnotstatement}, existence of a solution is equivalent to the existence of a line bundle $L\subset \mathcal{E}$ for which $\mathfrak{d}(\mathcal{E},\varphi,L)=\{(p_i,\mathbf{k}_i=(k^i_1,\cdots,k^i_{n-1}))\}$. The knot data determines the divisor $D=\sum_{i}\big(\sum_{j=1}^{n-1}k_j^i\big)\,p_i$, and we have $Z(f_n) = D$, where $f_n:=1\wedge\varphi\wedge\cdots \wedge \varphi^{n-1}$. If $L_D$ is the line bundle associated to $D$, then $L^n=L_D^{-1}\otimes K^{\frac{n(n-1)}{2}}$. However, this determines $L$ only up to tensoring by an $n$-torsion line bundle: if $N$ is any line bundle with $N^n=\mathcal{O}$, then $(L\otimes N)^n=L^n$. There are $n^{2g}$ choices of $N$, hence at most $n^{2g}$ possible solutions. However, it is not necessarily the case that each such $L \otimes N$ embeds in $\mathcal{E}$ as a line subbundle with the given knot data, so there may not be $n^{2g}$ actual solutions. \end{proof} \begin{theorem} Let $D=\sum_{i}\big(\sum_{j=1}^{n-1}k_j^i\big)\,p_i$ be the divisor determined by the given knot data. If $\deg D$ is not divisible by $n$, then there is no solution. \end{theorem} \begin{proof} Let $L_D$ be the line bundle associated to $D$ and $(\mathcal{E},\varphi)$ the Higgs bundle determined by $\rho$. Suppose there exists a solution; then there exists a subbundle $L \subset \mathcal{E}$ such that $L^n=L_D^{-1}\otimes K^{\frac{n(n-1)}{2}}$. Therefore, $\deg D=-n\deg(L)+n(n-1)(g-1)$, so $n$ divides $\deg D$. \end{proof} \begin{corollary} Let $K=S^1\times \{p\}\subset S^1\times\Sigma$ with weight $1$ and suppose $g(\Sigma)>1$. Then there is no $\mathrm{SU}(2)$ solution to the KW equations with Nahm pole singularity and knot $K$. \end{corollary} We now focus on the special case where $\rho$ lies in one of the ``non-Hitchin'' components of $\mathrm{SL}(2,\mathbb{R})$ Higgs bundles. These components are described as follows.
Let $\ell$ be a line bundle with $0<\deg \ell <g-1$ and consider the stable Higgs bundle $$ \mathcal{E}=\ell^{-1}\oplus \ell,\ \ \varphi=\begin{pmatrix} 0& \alpha\\ \beta& 0 \end{pmatrix}, $$ where $\alpha\in H^{0}(\ell^{-2}\otimes K)$ and $\beta\in H^{0}(\ell^2\otimes K)$ are nontrivial sections. Then the zeroes of $f_2:=1\wedge\varphi: \ell^2\to K$ coincide with those of $\alpha$, and the number of zeroes counted with multiplicity equals $2g-2-2\deg \ell$. \begin{proposition} With all notation as above, fix the knot data $D=\sum_i k_i\, p_i$. \begin{itemize} \item [(i)] If $\deg D=2g-2-2\deg \ell$, then there exists a (unique) Nahm pole solution if and only if $D=Z(\alpha)$; \item [(ii)] if $2g-2>\deg D>2g-2-2\deg \ell$, there is no solution. \end{itemize} \end{proposition} \begin{proof} With $L_D$ the line bundle associated to $D$, by Theorem \ref{InvariantKnotstatement} a necessary condition for the existence of a Nahm pole solution is that there exists $L\subset \mathcal{E}$ such that $L^2=L_D^{-1}\otimes K$. For (i), if $\deg (D)=2g-2-2\deg \ell$, then $\deg L=\deg \ell$. However, since $\mathcal{E}$ has rank $2$ and $\deg\mathcal{E}=0$, there is a unique line subbundle of positive degree, namely $\ell$, so $L= \ell$. By the form of the Higgs field, we conclude that $D=Z(\alpha)$. For (ii), if $2g-2>\deg D>2g-2-2\deg \ell$ and there were a solution with line bundle $L$, then $0<\deg L<\deg \ell$, which is impossible. \end{proof} \end{document}
math
57,073
\begin{document} \title{Practical continuous-variable quantum key distribution with composable security } \author{Nitin Jain} \email{[email protected]} \affiliation{\mbox{Center for Macroscopic Quantum States (bigQ), Department of Physics,} Technical University of Denmark, 2800 Kongens Lyngby, Denmark} \author{Hou-Man Chin} \affiliation{\mbox{Department of Photonics, Technical University of Denmark}, 2800 Kongens Lyngby, Denmark} \affiliation{\mbox{Center for Macroscopic Quantum States (bigQ), Department of Physics,} Technical University of Denmark, 2800 Kongens Lyngby, Denmark} \author{Hossein Mani} \affiliation{\mbox{Center for Macroscopic Quantum States (bigQ), Department of Physics,} Technical University of Denmark, 2800 Kongens Lyngby, Denmark} \author{\mbox{Cosmo Lupo}} \affiliation{\mbox{Department of Physics and Astronomy,} University of Sheffield, S3 7RH Sheffield, UK} \affiliation{\mbox{Department of Computer Science,} University of York, York YO10 5GH, UK} \author{\mbox{Dino Solar Nikolic}} \affiliation{\mbox{Center for Macroscopic Quantum States (bigQ), Department of Physics,} Technical University of Denmark, 2800 Kongens Lyngby, Denmark} \author{Arne Kordts} \affiliation{\mbox{Center for Macroscopic Quantum States (bigQ), Department of Physics,} Technical University of Denmark, 2800 Kongens Lyngby, Denmark} \author{Stefano Pirandola} \affiliation{\mbox{Department of Computer Science,} University of York, York YO10 5GH, UK} \author{Thomas Brochmann Pedersen} \affiliation{Cryptomathic A/S, Aaboulevarden 22, 8000 Aarhus, Denmark} \author{\mbox{Matthias Kolb}} \author{\mbox{Bernhard {\"O}mer}} \author{Christoph Pacher} \affiliation{Center for Digital Safety \& Security, AIT Austrian Institute of Technology GmbH, 1210 Vienna, Austria.} \author{Tobias Gehring} \email{[email protected]} \author{Ulrik L. Andersen} \email{[email protected]} \affiliation{\mbox{Center for Macroscopic Quantum States (bigQ), Department of Physics,} Technical University of Denmark, 2800 Kongens Lyngby, Denmark} \date{\today} \begin{abstract} A quantum key distribution (QKD) system must fulfill the requirement of universal composability to ensure that any cryptographic application (using the QKD system) is also secure. Furthermore, the theoretical proof responsible for security analysis and key generation should cater to the number $N$ of the distributed quantum states being finite in practice. Continuous-variable (CV) QKD based on coherent states, despite being a suitable candidate for integration in the telecom infrastructure, has so far been unable to demonstrate composability as existing proofs require a rather large $N$ for successful key generation. Here we report the first Gaussian-modulated coherent state CVQKD system that is able to overcome these challenges and can generate composable keys secure against collective attacks with $N \lesssim 3.5\times10^8$ coherent states. With this advance, possible due to novel improvements to the security proof and a fast, yet low-noise and highly stable system operation, CVQKD implementations take a significant step towards their discrete-variable counterparts in practicality, performance, and security. \begin{description} \item[PACS numbers] May be entered using the \verb+\pacs{#1}+ command. 
\end{description} \end{abstract} \pacs{Valid PACS appear here} \maketitle \section{Introduction} Quantum key distribution (QKD) is the only known cryptographic solution for distributing secret keys to users across a public communication channel while being able to detect the presence of an eavesdropper~\cite{Scarani2009, Pirandola2019}. Legitimate QKD users (Alice and Bob) encrypt their messages with the secret keys and exchange them with the assurance that the eavesdropper (Eve) cannot break the confidentiality of the encrypted messages. In particular, if the obtained secret key is (at least) as long as the message, information-theoretic security guarantees that Eve cannot break the security even if equipped with unlimited computing resources. Alice and Bob perform a sequence of steps, shown in Fig.~\ref{fig:scheme}, to obtain a key of a certain length. Such a `QKD protocol' begins with preparation, transmission (on a quantum channel), measurement of quantum states, and concludes with classical data processing and security analysis, performed in accordance with a mathematical proof. \begin{figure*} \caption{Composability in continuous-variable quantum key distribution (CVQKD). Alice and Bob obtain bitstreams $s_A$ and $s_B$, respectively, after going through the different steps of the QKD protocol that involve both the quantum and authenticated channels, assumed to be in full control of Eve. Various dashed lines (with arrows) indicate local operations and classical communication. An application using a CVQKD system (transmitter and receiver) to provide composable security must satisfy certain criteria associated with correctness, robustness, and secrecy of the protocol~\cite{Muller-Quade2009, Leverrier2015}.\label{fig:scheme}} \end{figure*} Amongst the many physical considerations included in the security proof, one is that the number of quantum states available to Alice and Bob is not infinite. Such \textit{finite-size corrections} adversely affect the key length but are essential for the security assurance. Another related property of a cryptographic key is \textit{composability}~\cite{Canetti2001}, which allows specifying the security requirements for combining different cryptographic applications in a unified and systematic way. In the context of practical QKD, composability is of utmost importance because the secret keys obtained from a QKD protocol are almost always used in other applications, e.g.\ data encryption~\cite{Muller-Quade2009}. A QKD implementation that outputs a key not proven to be composable is thus practically useless. In one of the most well-known flavours of QKD, the quantum information is coded in continuous variables of the optical field, such as the amplitude and phase quadratures~\cite{Ralph99, Diamanti2015, Laudenbach2018, Pirandola2019}. Typical continuous-variable (CV)QKD protocols have been Gaussian-modulated coherent state (GMCS) implementations~\cite{Jouguet2013, Huang2016, Wang2018, Wang2020}, and finite-size effects were also considered, though the proof~\cite{Leverrier2010} was non-composable. Composable security in CVQKD was first proven and experimentally demonstrated using two-mode squeezed states; however, since the employed entropic uncertainty relation is not tight, the achievable communication distance was rather limited~\cite{Furrer2012, Gehring2015}.
Composable security proofs for CVQKD systems using coherent states and dual quadrature detection, first proposed in 2015~\cite{Leverrier2015}, have been progressively improved~\cite{Lupo2018, Papanastasiou2021, Pirandola2021}. These proofs promise keys at distances much longer than in Ref.~\cite{Furrer2012} apart from the advantage of dealing with coherent states, which are much easier to generate than squeezed states. Nonetheless, an experimental demonstration of composability has remained elusive, due to a combination of the strict security bounds (because of a complex parameter estimation routine), the large number of required quantum state transmissions (to keep the finite-size terms sufficiently low), and the stringent requirements on the tolerable excess noise. In this article, we demonstrate a practical GMCS-CVQKD system that is capable of generating composable keys secure against collective attacks. We achieve this by deriving a new method for establishing confidence intervals that is compatible with collective attacks, which allows us to work on smaller (and thus more practical) block sizes than originally required~\cite{Leverrier2015}. On the experimental front, we are able to keep the excess noise below the null key length threshold by performing a careful analysis (followed by eradication or avoidance) of the various spurious noise components, and by implementing a machine learning framework for phase compensation~\cite{Chin2020}. After taking finite-size effects as well as confidence intervals from various system calibrations into account, we achieve a positive composable key length with merely $N \lesssim 3.5\times10^8$ coherent states (also referred to as `quantum symbols'). With $N = 10^9$, we obtain $>53.4$ Mbits worth of composably secure key material in the worst case. \section{Composably secure key}\label{sec:Theory} In the security analysis, we assume collective attacks and take into account the finite number of coherent states transmitted by Alice and measured by Bob. A digital signal processing (DSP) routine yields the digital quantum symbols $\bar{Y}$ discretized with $d$ bits per quadrature and this stream is divided into $M$ frames for information reconciliation (IR), after which we perform parameter estimation (PE) and privacy amplification (PA); as visualized in Fig.~\ref{fig:scheme}. We derive the secret key bound for reverse reconciliation, i.e., Alice correcting her data according to Bob's quantum symbols. The (composable) secret key length $s_n$ for $n$ coherent state transmissions is calculated using tools from Refs.~\cite{Leverrier2015, Pirandola2021} as well as new results presented in the following. The key length is bounded per the leftover hash lemma in terms of the smooth min-entropy $H_{\min}$ of the alphabet of $\bar{Y}$, conditioned on the quantum state of the eavesdropper $E$~\cite{Tomamichel2012}. From this we subtract the information reconciliation leakage $\mathrm{leak_{IR}}(n,\epsilon_{\mathrm{IR}})$ and obtain, \begin{equation} s_{n}^{\epsilon_h+\epsilon_s+\epsilon_{\mathrm{IR}}} \geq H_{\min}^{\epsilon_s}(\bar Y | E )_{ \rho^{n} } -\mathrm{leak_{IR}}(n,\epsilon_{\mathrm{IR}}) + 2 \log_2{(\sqrt{2}\epsilon_h)} \, . \label{eq:skfSimple} \end{equation} The security parameter $\epsilon_h$ characterizes the hashing function, $\epsilon_s$ is the smoothing parameter entering the smooth conditional min-entropy, and $\epsilon_{\mathrm{IR}}$ describes the failure probability of the correctness test after IR. 
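As a back-of-the-envelope illustration of the scale of the last term (our own aside, using the security parameters listed later in Table~\ref{tab:exp_params}): for $\epsilon_h = 10^{-10}$, the hashing penalty evaluates to $2\log_2(\sqrt{2}\,\epsilon_h) \approx -65$ bits, which is negligible compared to the min-entropy and leakage contributions for raw keys of $\sim 10^9$ symbols; the finite-size behaviour of the bound is therefore dominated by the other terms discussed below.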
The probability $p^\prime$ that IR succeeds in a frame is related to the frame error rate (FER) by $p^\prime = 1 - \mathrm{FER}$. All frames in which IR failed are discarded from the raw key stream, and this step thereby projects the original tensor product state $\rho^{n} \equiv \rho^{\otimes n}$ into a non-i.i.d.\ state $\tau^{n}$. To take this into account, one replaces the smooth min-entropy term in Eq.~\eqref{eq:skfSimple} with the expression~\cite{Pirandola2021}: \begin{align}\label{new-min} H_{\min}^{\epsilon_s}(\bar Y | E)_{\tau^{n'}} \geq H_{\min}^{\frac{p^\prime}{3} \epsilon_s^2}(\bar Y |E)_{ \rho^{\otimes n'} } + \log_2{\left( p^\prime - \frac{p^\prime}{3}\epsilon_s^2\right) } \, , \end{align} where $n' = n p^\prime$ is the number of quantum symbols remaining after error correction. The asymptotic equipartition property (AEP) bounds the conditional min-entropy by the von-Neumann conditional entropy, \begin{equation*} H_{\min}^{\delta}(\bar Y | E)_{\rho^{\otimes n'}} \geq n' H(\bar Y |E)_{\rho}-\sqrt{n'}\,\Delta_{\mathrm{AEP}}(\delta,d)\ , \end{equation*} where \begin{equation} \Delta_{\mathrm{AEP}}(\delta,d)\leq 4(d+1)\sqrt{\log_2{(2/\delta^{2})}}\ , \end{equation} is an improved penalty in comparison to Refs.~\cite{Leverrier2015,Pirandola2021}, and is proven in the Supplement. The conditional von-Neumann entropy is given by \begin{equation} H(\bar Y | E )_\rho = H(\bar Y)_\rho - I(\bar Y ; E)_\rho \, . \end{equation} We estimate the first term directly from the data (up to a probability not larger than $\epsilon_\mathrm{ent}$; further details regarding the confidence intervals are in the Supplement). The second term is bounded by the Holevo information, \begin{equation*} I(\bar Y ; E)_\rho \leq I( Y ; E )_\rho \leq I( Y ; E )_{\rho_G} \ , \end{equation*} where $Y$ is the continuous version of $\bar Y$ and $I( Y ; E )_{\rho_G}$ is the Holevo information obtained after using the extremality property of Gaussian attacks. The Holevo information is estimated by evaluating the covariance matrix using worst-case estimates for its entries based on confidence intervals. We improved the confidence intervals of Ref.~\cite{Leverrier2015} by exploiting the properties of the beta distribution. Let $\hat{x}$, $\hat{y}$, $\hat{z}$ be the estimators for the variance of the transmitted ensemble of coherent states, the received variance and the covariance, respectively. The true values $y$ and $z$ are bounded by \begin{align} y &\le \left(1 + \delta_\text{Var}(n, \epsilon_\text{PE}/2)\right)\hat{y}\ , \label{eq:dvar} \\ z &\ge \left(1 - 2\delta_\text{Cov}(n, \epsilon_\text{PE}/2)\frac{\sqrt{\hat x \hat y}}{\hat z}\right)\hat{z}\ , \label{eq:dcov} \end{align} with $\epsilon_\text{PE}$ denoting the failure probability of parameter estimation, and \begin{align*} \delta_\mathrm{Var} (n,\epsilon) & = a'\left(\epsilon/6\right) \left( 1 + \frac{120}{\epsilon} e^{-\frac{n}{16}} \right) - 1 \ , \\ \delta_\mathrm{Cov}(n,\epsilon) & = \frac{1}{2}\left[\frac{a'\left(\epsilon/6\right)-b'\left(\epsilon/6\right)}{2} + a'\left(\frac{\epsilon^2}{324}\right) - b'\left(\frac{\epsilon^2}{324} \right) \right] \end{align*} being the new confidence intervals (derived in the Supplement). In the above equations, \begin{align*} a'\left(\epsilon\right) & = 2 \left[ 1 - \mathrm{invcdf}_{\mathrm{Beta}(n/2,n/2)}\left(\epsilon\right) \right] \, , \\ b'\left(\epsilon\right) & = 2 \, \mathrm{invcdf}_{\mathrm{Beta}(n/2,n/2)}\left(\epsilon\right) \, .
\end{align*} As detailed in section~\ref{sec:resNdis}, the (length of the) secret key we eventually obtain in our experiment requires an order of magnitude lower $N$ due to these confidence intervals. Finally, we remark here on a technical limitation arising due to the digitization of Alice's and Bob's data. In practice, it is impossible to implement a \emph{true} GMCS protocol because the Gaussian distribution is both unbounded and continuous, while the devices used in typical CVQKD systems have a finite range and bit resolution~\cite{Jouguet2012}. In our work, we consider a range of 7 standard deviations and use $d=6$ bits (leading to a constellation with $2^{2d} = 4096$ coherent states), which, per recent results~\cite{Lupo2020, Denys2021}, should suffice to minimise the impact of digitization on the security of the protocol. \section{Experiment} \begin{figure*} \caption{\textbf{Schematic of the experiment.}\label{fig:setup}} \end{figure*} Figure~\ref{fig:setup} shows the schematic of our setup, with the caption detailing the components and their role briefly. Below we summarize the setup's operation, calibration measurements, and our protocol implementation. In the Supplement, we describe the different functional blocks of Fig.~\ref{fig:setup} in further detail. \subsection{Transmitter (Tx)}\label{Exp:Tx} We performed optical single sideband modulation with carrier suppression (OSSB-CS) using an off-the-shelf IQ modulator and automatic bias controller (ABC). An arbitrary waveform generator (AWG) was connected to the RF ports to modulate the sidebands. The coherent states were produced in a $B=100\,$MHz wide frequency sideband, shifted away from the optical carrier~\cite{Lance2005, Jain2021}. The random numbers that formed the complex amplitudes of these coherent states were drawn from a Gaussian distribution, obtained by transforming the uniform distribution of a vacuum-fluctuation based quantum random number generator (QRNG), with a security parameter $\epsilon_\text{qrng} = 2 \times 10^{-6}$~\cite{Gehring2021}. To this wideband `quantum data' signal, centered at $f_u=200\,$MHz, we multiplexed in frequency a `pilot tone' at $f_p=25\,$MHz for sharing a phase reference with the receiver~\cite{Qi2015, Soh2015, Huang2015, Kleis2017}. The left inset of Fig.~\ref{fig:setup} shows the complex spectra of the RF modulation signal. \subsection{Receiver (Rx)} \label{Exp:Rx} After propagating through the quantum channel---a 20 km long standard single mode fiber spool---the signal field's polarization was manually tuned to match the polarization of the real local oscillator (RLO) for heterodyning~\cite{Qi2015,Soh2015,Huang2015}. The Rx laser that supplied the RLO was free-running with respect to the Tx laser and detuned in frequency by $\sim 320\,$MHz, giving rise to a beat signal, as labelled in the solid-red spectral trace in the right inset of Fig.~\ref{fig:setup}. The quantum data band and pilot tone generated by the AWG are also labelled. Due to finite OSSB~\cite{Jain2021}, a suppressed pilot tone is also visible; the corresponding suppressed quantum band was however outside the receiver bandwidth (we used a low pass filter with a cutoff frequency around 360 MHz at the output of the heterodyne detector). In separate measurements, we also measured the vacuum noise (Tx laser off, Rx laser on) and the electronic noise of the detector (both Tx and Rx lasers off), depicted by the dotted-blue and dashed-green traces, respectively, in the right inset of Fig.~\ref{fig:setup}.
The clearance of the vacuum noise over the electronic noise is $>15\,$dB over the entire quantum data band. \subsection{Noise analysis \& Calibration}\label{Exp:NoiseCal} \begin{table*}[!t] \centering \caption{Experimental parameters. PNU: photon number units. The security parameter $\epsilon_\text{qrng}$ is limited by the digitization error of the ADC used in the QRNG, but could be improved using longer measurement periods~\cite{Gehring2021}.} \begin{tabular}{l|l} \textbf{Transmitter} & \\ \hline Rate of coherent states, $B$ & 100 MSymbols/s \\ Modulation strength (channel input), $\mu$ & 1.45 PNU \\ \hline \textbf{Receiver calibration} & \\ \hline Trusted efficiency (incl. optical loss), $\tau$ & 0.69 \\ Trusted electronic noise, $t$ & 25.71$\,\times 10^{-3}$ PNU \\ \hline \textbf{Channel parameter estimation} & \\ \hline Untrusted efficiency, $\eta$ & 0.35 \\ Untrusted excess noise, $u$ & 6.30$\,\times 10^{-3}$ PNU \\ \hline \textbf{Information reconciliation} & \\ \hline Signal-to-noise ratio & 0.32 \\ Frame error rate, FER & 0.36\% \\ Reconciliation efficiency, $\beta$ & 91.6\% \\ Leaked bits & $1.60 \times 10^9$ \\ \hline \textbf{Secret key calculation} & \\ \hline Raw key length (symbols), $N_\text{PA}$ & $9.84 \times 10^8$ \\ Security parameters & $\epsilon_h = \epsilon_\text{cal} = \epsilon_s = \epsilon_\text{PE} = 10^{-10}$, $ \epsilon_\text{qrng} = 2 \times 10^{-6}$, $\epsilon_\text{IR} = 10^{-12}$ \\ Final secret key length (bits) & 53452436 \\ \end{tabular} \label{tab:exp_params} \end{table*} A careful choice of the parameters defining the pilot tone and the quantum data band, and their locations with respect to the beat signal, is crucial in minimizing the excess noise. A strong pilot tone enables a more accurate phase reference but at the expense of higher leakage into the quantum band and an increased number of spurious tones. The latter may arise as a result of frequency mixing of the (desired) pilot tone with, e.g., the beat signal or the suppressed pilot tone. As can be observed in the right inset of Fig.~\ref{fig:setup}, we prevented spurious noise peaks resulting from sum- or difference-frequency generation of the various discrete components (in the solid-red trace) from landing inside the wide quantum data band. As is well known in CVQKD implementations, Alice needs to optimize the modulation strength of the coherent state alphabet at the input of the quantum channel to maximize the secret key length. For this, we connected the transmitter and receiver directly, i.e., without the quantum channel, and performed heterodyne measurements to calibrate the \emph{mean photon number} $\mu$ of the resulting thermal state from the ensemble of generated coherent states, as explained in section~\ref{Exp:Tx}. The modulation strength can be controlled in a fine-grained manner using the electronic gain of the AWG and the optical attenuation from the VATT. The DSP that aided in this calibration is explained in detail in the Supplement. Since we conducted our experiment in the non-paranoid scenario~\cite{Scarani2009, Jouguet2012}, i.e., we trusted some parts of the overall loss and excess noise by assuming them to be beyond Eve's control, some extra measurements and calibrations for the estimation of trusted parameters became necessary. More specifically, we decomposed the total transmittance and excess noise into respective trusted and untrusted components. In the Supplement, we present the details of how we evaluated the trusted transmittance $\tau$ and trusted noise $t$ for our setup.
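To make this trusted/untrusted bookkeeping concrete, the following short Python sketch (our illustration, not the processing code used in the experiment) recovers the untrusted channel parameters from the calibrated trusted ones, assuming the multiplicative/additive decomposition quoted in section~\ref{sec:resNdis} and the values of Table~\ref{tab:exp_params}.
\begin{verbatim}
# Illustrative sketch (not the authors' code): splitting the measured totals
# into trusted (receiver) and untrusted (channel) parts, following
# T_tot = eta * tau and xi_tot = u + t as quoted in the Results section.
tau = 0.69          # trusted (receiver) efficiency, incl. optical loss
t = 25.71e-3        # trusted electronic noise [PNU]

T_tot = 0.24        # total transmittance measured between Alice and Bob
xi_tot = 32.0e-3    # total excess noise at the receiver [PNU]

eta = T_tot / tau   # untrusted (channel) efficiency  -> ~0.35
u = xi_tot - t      # untrusted excess noise [PNU]    -> ~6.3e-3
print(f"eta = {eta:.2f}, u = {u*1e3:.2f} mPNU")
\end{verbatim}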
Table~\ref{tab:exp_params} presents the values of $\mu$, $\tau$ and $t$ pertinent to the experimental measurement described in section~\ref{Exp:Protocol}. Let us remark here that in our work, we express the noise and other variance-like quantities, e.g., the modulation strength, in photon number units (PNU) as opposed to the traditional shot noise units (SNU) because the former is independent of quadratures, and in the case of $\mu$, facilitates a comparison with discrete-variable (DV) QKD systems\footnote{Assuming symmetry between the quadratures, the modulation variance $V_{\text{mod}} = 2 \mu$ in SNU.}. Finally, note that we recorded a total of $10^{10}$ ADC samples for each of the calibration measurements, and all the acquired data was stored on a hard drive for offline processing. \subsection{Protocol operation}\label{Exp:Protocol} We connected the transmitter and receiver using the 20 km channel, optimized the signal polarization, and then collected heterodyne data using the same Gaussian-distributed random numbers as mentioned in section~\ref{Exp:NoiseCal}. Offline DSP~\cite{Chin2020} was performed at the receiver workstation to obtain the symbols that formed the raw key. The preparation and measurement were performed with a total of $10^9$ complex symbols, modulated and acquired in 25 blocks, each block containing $4 \times 10^7$ symbols. After discarding some symbols due to a synchronization delay, Alice and Bob had a total of $N_\text{IR} = 9.88 \times 10^8$ correlated symbols at the beginning of the classical phase of the protocol; see Fig.~\ref{fig:scheme}. Below we provide details of the actual protocol we implemented, where we assumed that the classical channel connecting Alice and Bob was already authenticated. \begin{enumerate} \item IR was based on a multi-dimensional scheme~\cite{MD-Recon-PRA.77.042325} using multi-edge-type low-density-parity-check error correcting codes~\cite{Mani2018}. Table~\ref{tab:exp_params} lists some parameters related to the operating regime and the performance of these codes; more information is available in the Supplement. As shown in Fig.~\ref{fig:scheme}, Bob sent the mapping and the syndromes, together with the hashes computed using a randomly chosen Toeplitz function, to Alice, who performed correctness confirmation and communicated it to Bob. \item During PE, Alice estimated the entropy of the corrected symbols, and together with the symbols from the erroneous frames, i.e., frames that could not be reconciled successfully (and were publicly announced by Bob), Alice evaluated the covariance matrix. This was followed by evaluating the channel parameters as well as performing the `parameter estimation test' (see Theorem 2 in Ref.~\cite{Leverrier2015}) and getting a bound on Eve's Holevo information. Using the expression for the secret key length with the security parameters from Table~\ref{tab:exp_params}, Alice then calculated the number of bits expected in the output secret key in the worst-case scenario. This length was communicated together with a seed to Bob. \item For PA, the shared seed from the previous step was used to select a random Toeplitz hash function by Alice and Bob, who then employed the high-speed and large-scale PA scheme~\cite{Tang2019} to generate the final secret key. \end{enumerate} \section{Results \& Discussion}\label{sec:resNdis} \begin{figure*} \caption{Composable SKF results.
(a) Pseudo-temporal evolution of the composable SKF, with the time parameter calculated as the ratio of the cumulative number $N$ of complex symbols available for the classical steps of the protocol to the rate $B$ at which these symbols are modulated. (b) Variation of the untrusted noise $u$ measured in the experiment (lower points) and its worst-case estimator (upper points), together with the noise threshold to beat in order to obtain a positive composable SKF. The deviation of the traces in (a) from the experimental data between 1 and 4 seconds is due to the slight increase in $u$. (c) and (d) Comparison of the confidence intervals derived in this manuscript (Beta: solid-red trace; Gaussian: dashed-green trace) with those derived in the original composable security proof (Ref.~\cite{Leverrier2015}; dashed-blue trace).}
\label{fig:results}
\end{figure*}

Table~\ref{tab:exp_params} summarizes the relevant parameters in our experiment. Alice prepared an ensemble of $10^9$ coherent states, characterized by a modulation strength of 1.45 PNU, and transmitted them over a 20 km channel to Bob, who measured them with a total excess noise of $u+t = 6.3 + 25.7 = 32.0\,$mPNU and a total transmittance of $\eta \cdot \tau = 0.35 \cdot 0.69 = 0.24$, averaged over the amplitude and phase (I and Q) quadratures. With a total of $N_\text{IR} = 9.88 \times 10^8$ correlated symbols, Bob and Alice performed reverse reconciliation with an efficiency of $\beta = 91.6\%$, as explained in section~\ref{Exp:Protocol}. Notably, due to the low frame error rate (FER = 0.0036) during IR, Alice and Bob were left with $N_\text{PA} = 9.84 \times 10^8$ symbols for the last classical step of the protocol.

Using the equations presented in section~\ref{sec:Theory}, we can calculate the composably secure key length (in bits) for a certain number $N$ of quantum symbols. We partitioned $N_\text{IR}$ into 25 blocks and estimated the key length using the total number $N_k$ of symbols accumulated from the first $k$ blocks, for $k \in \{1, 2, \ldots, 25\}$. Dividing this length by $N_k$ yields the composable secret key fraction (SKF) in bits/symbol. If we neglect the time taken by data acquisition, DSP, and the classical steps of the protocol, i.e., only consider the time taken to modulate $N = N_k$ coherent states at the transmitter (at a rate $B = 100\,$MSymbols/s), we can construct a hypothetical time axis to show the evolution of the CVQKD system. Figure~\ref{fig:results}(a) depicts such a time evolution of the SKF after accounting for the finite-size corrections, using the average and worst-case values of the underlying parameters (red and blue data points, respectively). Similarly, Fig.~\ref{fig:results}(b) shows the experimentally measured untrusted noise $u$ (lower squares) together with the worst-case estimator (upper dashes) calculated using $N_k$ in the security analysis. To obtain a positive key length, the worst-case estimator must be below the maximum tolerable noise (the null key fraction threshold), shown by the solid line, and this occurs at $N/B \lesssim 3.5$ seconds. Note that in reality, the DSP and the classical data processing take a significant amount of time: in fact, we store the data from the state preparation and measurement stages on disk and perform these steps offline. The plots in Fig.~\ref{fig:results} may therefore be understood as depicting the time evolution of the SKF and the untrusted noise \emph{if} the entire protocol operation were carried out in real time.
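To make the construction of the traces in Fig.~\ref{fig:results}(a) concrete, the following minimal Python sketch (illustrative only, not the code used in our experiment) maps the cumulative block counts to the pseudo-time axis and the SKF; the finite-size key-length expression of section~\ref{sec:Theory} is passed in as the hypothetical callable \texttt{key\_length\_bits} and is not reproduced here.
\begin{verbatim}
# Illustrative sketch only: pseudo-temporal evolution of the composable SKF.
# 'key_length_bits(n)' stands in for the finite-size secret key length (bits)
# evaluated for n quantum symbols; its implementation is not reproduced here.

B = 100e6                      # symbol rate B (complex symbols per second)
BLOCKS = 25                    # number of acquisition blocks
SYMBOLS_PER_BLOCK = 4 * 10**7  # complex symbols per block

def skf_evolution(key_length_bits):
    """Return (pseudo-time, SKF) pairs, one per cumulative block count N_k."""
    points = []
    for k in range(1, BLOCKS + 1):
        n_k = k * SYMBOLS_PER_BLOCK       # N_k: symbols from the first k blocks
        t_k = n_k / B                     # hypothetical elapsed time N_k / B (s)
        skf = key_length_bits(n_k) / n_k  # composable SKF in bits/symbol
        points.append((t_k, skf))
    return points
\end{verbatim}
For the worst-case trace in Fig.~\ref{fig:results}(a), \texttt{key\_length\_bits} would be evaluated with the worst-case estimators of the channel parameters; for the average trace, with their measured values.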
Joining the data from both I and Q quadratures yielded $2 N_\text{PA} = 2 \times 9.84 \times 10^8$ \emph{real} symbols, from which we obtained a secret key of length $l = 53452436$ bits, implying a worst-case SKF of $0.027\,$bits/symbol. Referring to Fig.~\ref{fig:results}(a), the solid-blue and dashed-red traces simulate the SKF in the worst-case and average scenarios, respectively, while the dotted-black trace shows the asymptotic SKF value obtainable with the given channel parameters; see Table~\ref{tab:exp_params}. According to projections based on the simulation, the worst-case composable SKF should be within 1\% of the asymptotic value for $N \approx 10^{12}$ complex symbols.

From a theoretical perspective, the ability to generate a positive composable key length with a relatively small number of coherent states ($N \lesssim 3.5\times10^8$) can mainly be attributed to the improvement in the confidence intervals during PE; see equations~\ref{eq:dvar} and~\ref{eq:dcov}. Figures~\ref{fig:results}(c) and (d) quantitatively compare the scaling factors on the RHS of these equations, respectively, as a function of $N$ for three different distributions. The estimators $\hat{x}$, $\hat{y}$, $\hat{z}$ used for this purpose are the actual values obtained in our experiment, and we used $\epsilon_\text{PE} = 10^{-10}$. The difference between the confidence intervals used in Ref.~\cite{Leverrier2015} (suitably modified here for a fair comparison) and those derived here, based on the Beta distribution, is quite evident at lower values of $N$, as visualized by comparing the dashed-blue trace with the solid-red one. Since the untrusted noise has a quadratic dependence on the covariance, in contrast to the variance, on which the dependence is linear, a method that tightens the confidence intervals for the covariance can be expected to have a large impact on the final composable SKF. In fact, according to the simulation, our implementation would have required an almost order-of-magnitude larger $N_\text{PA}$ ($\gtrsim 7.5 \times 10^9$) to achieve the peak SKF depicted by the rightmost blue data point in Fig.~\ref{fig:results}(a) had we used the confidence intervals of Ref.~\cite{Leverrier2015}. The dashed-green trace shows confidence intervals that are also based on the Beta distribution but additionally assume that the underlying data, i.e., the I and Q quadrature symbols, follow a Gaussian distribution (more details are provided in the Supplement). This, however, may restrict the security analysis to Gaussian collective attacks; we therefore do not make this assumption in our calculations. The advantage of this method would be even tighter confidence intervals and, thus, even lower requirements on $N$ for obtaining a composable key of positive length.

On the practical front, a reasonably large transmission rate of the coherent states, $B = 100\,$MSymbols/s, together with the careful analysis and removal of excess noise (see section~\ref{Exp:NoiseCal} for more details), enables fast, low-noise, and highly stable system operation, which is critical for quickly distributing raw correlations of high quality and keeping the finite-size corrections minimal.

\section{Conclusion \& Outlook}
Due to its similarity to coherent telecommunication systems, continuous-variable quantum key distribution (CVQKD) based on coherent states is perhaps the most cost-effective solution for widespread deployment of quantum cryptography at access network scales ($10$--$50\,$km long quantum channels).
However, CVQKD protocols have lagged behind their discrete-variable counterparts in terms of security, particularly in demonstrating composability and robustness against finite-size effects. In this work, we have implemented a prepare-and-measure Gaussian-modulated coherent-state CVQKD protocol operating over a 20 km long quantum channel connecting Alice and Bob, who, at the end of the protocol, obtain a composable secret key that takes finite-size effects into account and is protected against collective attacks. Our achievement was enabled by several advances in the theoretical security analysis and by technical improvements on the experimental front. Furthermore, by using a real local oscillator at the receiver, we enhanced both the practicality of the QKD system and its security against hacking.

In conclusion, we believe this is a significant advance that demonstrates the practicality, performance, and security of CVQKD implementations operating in the low-to-moderate channel loss regime. With an order of magnitude larger $N$ and half the current value of $u$, we expect to obtain a composable key of non-zero length while tolerating channel losses of around 8 dB, i.e.,\ distances up to $\sim 40\,$km (assuming an attenuation factor of 0.2 dB/km). This should be easily achievable with some improvements in the hardware as well as in the digital signal processing. We therefore expect that, in the future, users across a point-to-point link could use the composable keys from our CVQKD implementation to enable real applications such as secure data encryption, thus ushering in a new era for CVQKD.
\end{document}